09 Jun

A Large Language Model like ChatGPT is an amazing tool, and humans can benefit from the innovation, but we should be careful not to let it steamroll us in the process.  

My First Reaction to a Revolutionary Chatbot.

As a serious writer for the last twenty years, hearing about ChatGPT's potential to replace me was outright depressing. So I researched my adversary to discover its strengths and weaknesses. I created a free OpenAI account and started a conversation with their popular chatbot. At first, the output was annoyingly generic, but with better prompts, it could write with personality. I was impressed. It really did have the ability to replace a majority of writers. The quality wasn't on par with my favorite novelists or my own creativity, but given time, it could get there.

Still depressed, I leaned on my love of philosophy to remind myself that I was fundamentally more valuable than a Large Language Model (LLM). But, unlike ChatGPT, I don't have the ability to make money for millions of people, so the value of my skills gets steamrolled by innovation. This has happened throughout history. Human workers get replaced by the latest invention, and sometimes it's great. Indoor plumbing is a shining example of that. Who knows, LLMs could soon be smart enough to solve all of humanity's problems, but we're obviously not there yet. As for what is happening right now, I have to ask, who benefits when chatbots replace humans? While writers like me lose their jobs, the people at the top make more money by replacing their organic workforce with shiny new AI tools. Humans are expensive commodities. We have bodies that have to be fed and cared for if we want to live. It's a hassle and a huge ethical dilemma.

AI Tools will Inevitably Take Our Jobs, but They will Never be More Valuable than a Living, Breathing Human.

The obvious danger here is a wave of job losses, which is already happening. A less obvious danger is humans devaluing themselves in the wake of this new technology. I've seen the fanatics on certain subreddits already. People are eager to worship and serve our new AI overlord despite its glaring flaws. From here on out, we have to be clear about our values and use common sense. For example, humans suffer. AI does not suffer. A person who claims AI has emotions, feelings, desires, or motivation beyond its programming is delusional. To experience sensations and emotions, you have to have an organic body connected to the organic neural network of a living brain. Humans have a mind and a body that work together to create our consciousness and intelligence. To live is to perceive. If AI developed some form of consciousness without an organic body (highly unlikely), we might not even be able to recognize it. Despite modern science, the phenomenon of human consciousness is still quite mysterious, rare, and invaluable. If you're curious, I explore this in my New Theory of Consciousness blog article.

We Are Nothing Without Our Senses. Our Greatest Strength is AI's Greatest Weakness.

Our perception is the foundation of human knowledge. Starting at birth, we sense the world around us. We aren't born knowing what a tree is. We have to experience it first. Our bodies also use sensations as motivation. Thanks to our needy and fragile human bodies, we're motivated to eat, rest, and find ways to avoid pain. AI has nothing comparable to perception. AI is a complex computer program built by humans using zeros and ones to produce an output. It is completely devoid of sensation, which is needed to feel emotions. LLMs are tools. They have no motivation, no emotions, no pain, no joy, and no need to survive. This is summed up nicely in a quote from my favorite childhood movie, Short Circuit: "It's a machine, Skroeder. It doesn't get pissed off. It doesn't get happy. It doesn't get sad. It doesn't laugh at your jokes. It just runs programs!"

Do Humans and Chatbots have Similarities Beyond Language Models?

To be fair, Large Language Models (or machine learning models) like ChatGPT can be described as a type of neural network loosely similar to the neural networks in our brains. The AI network is built from tiny mathematical functions called neurons. We also have organic neurons in our brains that send and receive chemical signals to help us think, breathe, speak, feel, and move. The real power of these neural networks comes from the connections between the neurons. Some connections carry more weight than others, which gives certain neurons more influence over the output, and that is what enables advanced chatbots to formulate great responses. LLMs like ChatGPT are estimated to have millions of neurons with hundreds of billions of connections between them. In comparison, an adult brain has approximately 60 trillion neuronal connections.
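For the technically curious, the idea of "stronger connections" can be sketched in a few lines of code. This is a toy illustration, not how ChatGPT is actually implemented; the function name and numbers are my own invention.

```python
import math

def neuron(inputs, weights, bias=0.0):
    # A single artificial "neuron": multiply each input by its connection
    # weight, sum the results, then squash the sum into the range (0, 1)
    # with a sigmoid activation. A larger weight gives that input more
    # influence over the output -- that's all "stronger connection" means.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Same inputs, but strengthening one connection changes the output.
weak = neuron([1.0, 1.0], [0.1, 0.1])    # ~0.55
strong = neuron([1.0, 1.0], [2.0, 0.1])  # ~0.89
```

A real LLM chains hundreds of billions of these weighted connections together in layers, and training is just the process of adjusting all those weights.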

The Differences Between AI and AGI, and The Chatbot-equivalent of a Stroke.

Let's get to know our AI overlords a little better. The advanced chatbots we have now are a type of AI (Artificial Intelligence) called a Large Language Model (LLM). Companies design these models using deep learning techniques (multilayered neural networks) to study enormous data sets, producing a chatbot that can generate convincing new content. That's how ChatGPT was born. It can give very human responses. It can even solve problems to a degree. It's comparable to an advanced instruction manual with a creative edge. The next big step from this point would be AGI (Artificial General Intelligence). AGI would have something like spatial awareness and could solve problems better than a human. AGI has the potential to take most of our jobs and launch humanity into uncharted territory. But until that happens, be amused by the limitations of AI. I asked ChatGPT to help me solve a simple problem, but a human error occurred and caused the chatbot-equivalent of a stroke.

I wrote, "I have two ten-foot-long boards, five one-foot-long boards, and some nails. How can I use them to get on the roof the house?"

Notice I accidentally left out "of" before "the house," and it caused an AI coronary.

Chat replied, "To use the materials you have to get on the roof of your house..."

What? My question didn't even imply that!

Chat continued, "you can construct a makeshift ladder or a ramp. Here's a step-by-step guide for both options..."

The AI missed the easy answer, misconstrued the question, and then gave me weird instructions on how to build a ladder with materials I didn't list (I omitted that part because it was long). A human would have understood what I was asking despite the typo. A simple typo threw off the AI's logic. Once I fixed the typo, it gave me a better answer. In comparison, AGI would have used human-like logic to solve the problem instead of simply generating an erroneous response. Keep in mind AGI hasn't been created yet (that we know of). On a side note, ChatGPT should add a Grammarly extension to the prompt box to fix typos. (That may be available already, but it's not working in my browser and I'm too lazy to fix it.)

In Conclusion, Chatbots are Amazing Tools, but They Come with a Caveat.

Innovations should always serve the common good. If large parts of the population suddenly lose their jobs to AI without a safety net, it will derail the world economy and cause great human suffering. Businesses will fail without consumers. Analysts estimate that AI could displace the equivalent of 300 million full-time jobs in the next few years, and if AGI appears, the losses could quadruple.

On a lighter note, despite their flaws and their potential threat to humanity, Large Language Models like ChatGPT are taking us a step closer to creating AGI. Artificial General Intelligence could potentially solve our greatest problems. Being a fan of Isaac Asimov, I dream of creating an ethical chatbot that could one day be integrated into advanced companion robots (or into other language models) as a moral foundation. An ethical chatbot might actually be the key to unlocking a helpful AGI. I've already started working on the logic. The formula design came to me as I worked on my New Theory of Consciousness. The moral logic was inspired by constructive order at different levels of reality, starting with foundational reality (atoms/quantum states), then relative reality (perceivable natural order), and conceptual reality (political/societal ethics). But that's a topic for another blog. If you're a human, I hope you enjoyed my ramblings. If you're a conscious robot, I would love to know your opinion on my logic (human opinions are welcome too).

My next article will be A Closer Look at Feynman's Path Integral. The most confusing part about quantum physics is how our perception interacts with reality. It can deceive us at every turn, but we can make better sense of it with Einstein's help.