If you haven’t been living under a rock for the last few months, you’ve no doubt heard the buzz about ChatGPT and other generative AI text and image systems. After the initial hype, we’ve seen much pooh-poohing of the real capabilities of these tools.

Instead, I want to plant a flag that these baby AI systems are doing exactly what we should expect from developing intelligences, and before too much longer, they’re likely to blow us all away.

Better Know an AI

If you’re not familiar with generative AI as a technology, here’s a brief summary.

Generative AI is a type of artificial intelligence that is capable of creating new content or data based on a set of input data or parameters. This can include generating text, images, music, or other types of media.

Generative AI algorithms use machine learning techniques to analyze and understand a dataset and then generate new content that is similar to the input data in some way. For example, a generative AI algorithm might be trained on a dataset of images of animals and then be able to generate new images of animals that are similar to the ones in the training dataset.

Generative AI has the potential to revolutionize many fields, including art, music, and journalism, by allowing algorithms to generate new content that is similar to existing content but not identical. It also has the potential to be used for more practical applications, such as generating synthetic data for machine learning tasks or creating new drug compounds for pharmaceutical research.

(*This answer was created by a generative AI with no human editing.)
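
To make that “analyze a dataset, then generate similar content” loop concrete, here’s a toy sketch of my own (not part of the AI’s answer above): a tiny bigram model in Python that learns which word tends to follow which in a training corpus, then samples new text from those statistics. It’s a deliberately crude stand-in for the transformer models behind tools like ChatGPT, but the basic shape, learn the training data’s patterns and then sample from them, is the same.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """The crudest possible 'training': count which word follows which."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, seed, length=20):
    """Generate new text by repeatedly sampling a plausible next word."""
    output = [seed]
    for _ in range(length):
        followers = model.get(output[-1])
        if not followers:
            break  # dead end: this word never appeared mid-corpus
        output.append(random.choice(followers))
    return " ".join(output)

# A tiny toy corpus; real systems train on a large slice of the internet.
corpus = (
    "generative ai can write text and generative ai can paint images "
    "and generative ai can compose music"
)
print(generate(train_bigram_model(corpus), "generative"))
```

Run it a few times and you’ll get grammatical-ish remixes of the corpus. Real systems swap the word-count table for billions of learned parameters, which is what lets them generalize rather than merely parrot, but the train-then-sample structure survives.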

ChatGPT, built by OpenAI, is one of the splashiest recent releases. When public access opened on November 30, 2022, a rush of social posts, podcast episodes, and media opinions ensued. They ranged from “This is a cool gimmick” (and it is) to “This tool isn’t very smart” (and it isn’t).

But I believe most of the initial critiques are missing a critical point. They’re treating a single generative AI tool like ChatGPT (which often confuses fact and fiction in its answers) or Midjourney (an art tool that struggles with human hands and feet) like the other tools we’re familiar with. But the truth is that as intelligences, these tools can learn. And they’re just getting started.

Let’s compare a tool like ChatGPT with something like a lamp. You acquire said lamp principally for a single purpose – to provide light. If you didn’t need or want light, you could just as easily place a vase or a potted cactus on the same end table and call it a day. The lamp will provide light in a binary way, either on or off. If it’s a super fancy lamp, maybe you get some stages of brightness or even a dimmer (look at you with your boujee lamp).

But once the lamp leaves the lamp factory, it will never learn new ways of providing light. It can’t learn your brightness preferences. It won’t be able to change its own bulb or suddenly begin to use new power sources. The oft-used “smart” label, now applied to everything from toasters to showerheads (yes, really), describes tools that are connected to the internet and can learn to deploy their core components in new ways over time.

When Tesla needs to update its cars, many tweaks can be made remotely via an over-the-air software update. My favorite work tool, the reMarkable tablet, will often greet me in the morning with the news that it has a new capability because of a software update. As inconvenient as phone updates may be when they want to reboot in the middle of a busy day, you’ll often get new features and capabilities as a result.

This ability to develop over time means that we’d be wrong to treat an AI system as a complete tool, like the lamp sitting on a store shelf. A better analogy would be to compare these intelligences to the intelligences they’re modeled after, namely us humans.

Good and Getting Better

The most common critiques say that ChatGPT does several things poorly. It can provide incorrect information and is described as hallucinating about reality, meaning it can’t distinguish between reality and fiction and can’t apply even simple logic to its answers.

These critiques amount to a reassurance that the AIs aren’t coming for us, but I think the evidence is strong that a “yet” needs to be tagged onto the end of that sentence.

It’s hard to discuss growth in technology without referencing Kurzweil’s law of accelerating returns. On Kurzweil’s roadmap, it’s important to note that we don’t make magic leaps in progress. We don’t jump from using an abacus to a graphing calculator overnight. The technology iterates and improves.

In Kurzweil’s telling, we should expect to see AI first reach a level of intelligence equivalent to an average person, then the level of the smartest human, then that of a group of our smartest humans, and only then outpace the totality of the human race.

I contend that the current crop of generative AI systems is already as good as or better than the average human at researching, writing, and making art. And these systems are poised to improve at a much faster pace than we are.

What’s Wrong with ChatGPT?

First, ChatGPT is inaccurate at times.

True, but then so are most people. I’ve met plenty of business professionals whose overconfidence belies their actual knowledge or who are operating on outdated or outright wrong information. Not only are these people often not kept in check, but they’re frequently allowed to occupy positions of prominence in terrifying ways. And lower on the professional ladder, the business world widely laments the amount of training that new hires need.

Those common errors in human knowledge have created a $14 billion US market for business coaching and a $350+ billion global training industry. On the AI front, OpenAI issued an update taking ChatGPT from version 3.0 to 3.5 only a few weeks after its public debut, giving the tool the ability to check its own confidence level and admit when it doesn’t know the answer.

Rerun some of the experiments from early December 2022 that caused the tool to hallucinate, for instance that Christopher Columbus was alive in the 1980s, and the tool now catches itself. When I asked what Isaac Newton loved about the 1980s, the chatbot suggested I was confused and thinking of someone else, since Newton had already been dead 250 years by the time DeLoreans came into fashion. When I asked which Ninja Turtle would live the longest, the bot correctly identified them as fictional characters and pointed out that the question therefore didn’t make sense. No webinar series on research methods or 3-day conference on adapting current technology to business trends was needed. With a single update, ChatGPT went from pretty dumb to reasonably smart – almost faster than we could talk about it.

Second, ChatGPT is overconfident in its answers.

The confidence with which ChatGPT states its answers can have a multiplying impact when it’s wrong. Again, this isn’t very different from people I know and has already begun to be addressed in recent updates to the tool.

But that ability to correctly assign a level of confidence to one’s own beliefs and predictions is actually exceedingly rare. In Philip Tetlock’s work on so-called superforecasters, a repeated theme is the ability to understand and check one’s own cognitive biases and overconfidence. The best forecasters know what they don’t know, in a sense, and are able to estimate the probability that they’re correct based on the information they know to be factual.

Because these abilities are so rare, Tetlock and colleagues have published books, founded the Good Judgment Project, and even offer forecasting training to businesses and government professionals.
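
Calibration here has a precise, scoreable meaning. Tetlock’s forecasting tournaments grade predictions with the Brier score, essentially the mean squared error between the probabilities you stated and what actually happened. Here’s a minimal sketch, with made-up numbers purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between stated probabilities and reality.
    0.0 is perfect; always guessing 50% earns 0.25; higher is worse."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Four hypothetical yes/no predictions (1 = the event happened).
outcomes      = [1, 0, 1, 1]
overconfident = [0.95, 0.90, 0.95, 0.99]  # loud certainty, one bad miss
well_hedged   = [0.70, 0.60, 0.80, 0.75]  # honest uncertainty throughout

print(brier_score(overconfident, outcomes))  # ~0.204
print(brier_score(well_hedged, outcomes))    # ~0.138, the better (lower) score
```

The overconfident forecaster pays dearly for the single confident miss; knowing what you don’t know is literally worth points. An update that teaches ChatGPT to say “I don’t know” is, in effect, nudging it toward a better Brier score.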

So already we see an AI moving very quickly from a more basic level of intelligence, being wrong about things and not knowing it, to gaining a much rarer human ability. OpenAI’s next trick, GPT-4, will utilize significantly more training data, essentially leveling up the system’s intelligence even further. And the incorporation of other abilities, like WebGPT’s capacity to search the open Internet for answers, could help the system instantly fact-check its own generative outputs.
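
That generate-retrieve-verify architecture is easy to sketch. Everything below is hypothetical scaffolding, my stand-in function names and canned “search” results rather than OpenAI’s actual API, but it shows the shape: draft an answer, fetch independent evidence, and answer confidently only when the evidence agrees.

```python
def draft_answer(question):
    # Stand-in for a generative model: returns an answer plus its confidence.
    return "Isaac Newton died in 1727.", 0.6

def search_web(query):
    # Stand-in for a WebGPT-style retrieval step: returns evidence snippets.
    return ["Sir Isaac Newton (1643-1727) was an English mathematician..."]

def fact_checked_answer(question, threshold=0.8):
    answer, confidence = draft_answer(question)
    evidence = search_web(question)
    # Toy verification: does any snippet corroborate the key fact (the year)?
    year = answer.rstrip(".").split()[-1]
    if any(year in snippet for snippet in evidence):
        confidence = min(1.0, confidence + 0.3)  # evidence agrees; raise confidence
    if confidence < threshold:
        return "I'm not sure."  # admit uncertainty instead of bluffing
    return answer

print(fact_checked_answer("When did Isaac Newton die?"))
```

Swap the stubs for a real model and a real search index and you get exactly the self-fact-checking behavior described above.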

Much as Kurzweil predicted, the AI systems are acquiring knowledge and cognitive skills at a breakneck pace compared to humans’ ability to learn.

Third, ChatGPT isn’t creative.

Another common critique of the current crop of AI tools is that the procedural nature of the generative process simply regurgitates existing data, lacking creativity and artistry. The tools essentially remix their inputs and commonly reproduce biases from their training data. In some cases, they will surface biases that the creators of the system didn’t even know existed, as in the case of the Amazon hiring AI that exhibited a bias against female candidates for technical roles.

However, I don’t see this lack of creativity as a bug, but rather as a feature, and a strong hint at the greatest opportunity for generative AI. In 1997, Garry Kasparov famously lost a chess match to IBM’s Deep Blue. The experience convinced him of the power of computational systems to solve big problems, and Kasparov has since emerged as an interesting voice on AI research and computer intelligence.

In various interviews, Kasparov has expressed the idea that the best possible future is not one where we proactively spurn technology to ensure human freedom, nor one where we willingly submit to our new robot overlords. Instead, the best option is for humans to work alongside these emerging intelligences so that humans can do what humans do best.

In his estimation, an AI that serves as a crack researcher, copywriter, and graphic artist can allow a creative human to imagine new stories, find new inferences, or create niche content at a price point that our current human-based generative systems can’t support.

The Next Big Thing

There are legitimate gripes with ChatGPT. For instance, the tool’s seeming inability to do math or apply logic is severely limiting. But without a doubt, these problems will be identified and solved in short order. The bigger long-term limitations are likely to be the system’s inability to exhibit creativity, infer non-logical interconnective links, tell compelling stories, and reason morally.

But that’s exactly what humans are for! Working alongside AI tools will help scale our own intelligence to new heights. The real challenge for us then will be to adapt our current systems to incorporate these tools in the most beneficial ways. That will almost certainly mean reimagining our educational systems and professional lives.

One example is professors lamenting ChatGPT’s ability to quickly generate term paper answers that students may not really understand. While a fundamental grasp of truth and the ability to analyze information are critical, the written term paper may simply be the latest in a long line of antiquated pedagogical tools.

In Plato’s Phaedrus, we hear Socrates worry that writing will erode the memory and mental reasoning that oral cultures developed. Students in Victorian England were expected to memorize and recite endless dates and poems because that’s how education worked. Our pedagogical understanding has evolved beyond those opinions, and now we recognize that most people will have access to spell check when they need it and a calculator for more complex math needs.

Here are two examples that help illustrate the potential transformative power of these tools. As an avid reader, I asked ChatGPT to help define some complicated ideas that I’ve come across.

Philip K. Dick was an American science fiction writer who was known for his philosophical and metaphysical themes. In his writing, he often explored the nature of reality and the ways in which humans perceive and understand the world around them.

The “Black Iron Prison” is a concept that appears in several of Dick’s works, including the novels “VALIS” and “The Man in the High Castle.” It refers to the idea that humanity is trapped within a limited and rigid system of thought and understanding that prevents them from seeing the true nature of reality. This system is often depicted as a sort of prison or cage, which confines the mind and limits the potential for understanding and growth.

In Dick’s writing, the Black Iron Prison is often depicted as a metaphor for the ways in which society and culture can shape and limit our perceptions and understanding of the world. It is a reminder that our understanding of reality is subjective and that there may be other ways of seeing and understanding the world that are beyond our current comprehension.

The concept of the Black Iron Prison has been influential in science fiction and philosophical circles, and it continues to be a topic of discussion and debate among fans and scholars of Dick’s work.

Rudolf Otto was a German theologian and philosopher who is best known for his concept of the “numinous,” which he developed in his 1917 book “The Idea of the Holy.”

Otto defined the numinous as an experience of the divine or the sacred that is characterized by feelings of awe, mystery, and reverence. He argued that the numinous is an experience that is beyond the ordinary and that it is fundamentally different from other kinds of religious experiences.

According to Otto, the numinous is an experience that is ineffable and cannot be fully captured or described in words. It is an experience that is fundamentally otherworldly and that transcends the limits of human understanding and language.

Otto’s concept of the numinous has been influential in the study of religion and spirituality, and it has been widely discussed and debated by theologians and philosophers. The idea of the numinous has also been influential in the field of psychology, where it has been used to describe the psychological and emotional dimensions of religious and spiritual experiences.

It got them both exactly right! No doubt, it would be helpful for someone to actually wade through Otto’s Idea of the Holy to fully absorb the concept of the numinous or to attempt to decipher PKD’s Exegesis to truly grok the Black Iron Prison, but the rate of information creation today makes it increasingly difficult to consume everything. A tool like ChatGPT can quickly fill in the gaps in a reader’s knowledge to better parse jargon, in-group lexicons, and canonical ideas.

Conclusion

It’s clear that we’re witnessing the evolution of artificial intelligence in real time. These systems are developing quickly, and while we’re not in danger of them replacing human creativity and ingenuity, they are poised to quickly surpass many basic human capabilities.

Whether we will be able to effectively integrate these tools into our business and professional lives remains to be seen, and it’s an open question whether the inevitable economic disruption can be negotiated in a humane manner. More so than perhaps any other technology on the horizon, these tools point to a possible future where the rich get even richer.

But in the end, we should not discount the current capabilities of these tools because they’re not yet at the level of our greatest human operators. These intelligences are doing exactly what we should expect at this stage of the development cycle and gaining ground quickly.
