ChatGPT, What’s It Good For?

Abstract

There’s so much buzz about chatbots right now. Are they a dangerous road or a high-value tool? Do chatbots have a place in supply chain applications? We explore the controversies, the technology, and the potential applications and abuses of chatbots.

Article

Chatbots Revisited

To seriously learn about and evaluate new technologies, before even thinking about adopting them, it helps to put things in context: What problems are we trying to solve with this technology? How urgently do we need to solve them? What solutions will best address those problems?

Let’s look at the state of our current platforms and solutions, with a few examples and a review of some basics regarding AI, search, natural language interfaces, bots, and related technologies. Bots and AI agents/engines are everywhere. When I look for a new item with my search engine, it draws on multiple sources of data acquired by a crawler, or bot (such as Googlebot), that continuously scours the web for new content. It then uses various AI engines to understand the query and to find, compile, and rank the results, presenting me with a listing of sources, locations, and some content.
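To make those moving parts concrete, here is a minimal sketch of a crawl-then-rank pipeline. It is illustrative only: it assumes the Python requests and beautifulsoup4 packages, and its term-frequency scoring is a crude stand-in for the sophisticated AI ranking engines a real search provider uses.

# Minimal sketch of a crawl-and-rank pipeline (illustrative only).
# Assumes the `requests` and `beautifulsoup4` packages; real engines
# replace the term-frequency scoring below with large AI ranking models.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_urls, max_pages=10):
    """Fetch pages breadth-first, like a (very) small Googlebot."""
    queue, seen, pages = list(seed_urls), set(), {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        pages[url] = soup.get_text(" ", strip=True).lower()
        # Follow links to discover new content, as a crawler does.
        queue += [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return pages

def rank(pages, query):
    """Score pages by how often the query terms appear (a crude ranker)."""
    terms = query.lower().split()
    scores = {url: sum(text.count(t) for t in terms) for url, text in pages.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

pages = crawl(["https://example.com"])
for url, score in rank(pages, "new item"):
    print(score, url)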


Another example we are all familiar with is the so-called customer service bot found on just about every commercial website. Most of these bots are only marginally helpful; their main goal is to reduce support headcount for rudimentary queries and searches. They are often annoying to customers and users because, in reality, they tend to rely on simple scripted responses, keyword matching, or menu-driven FAQ[1] pages on the website, which too often are not that helpful even though they are usually created by subject matter experts. Many users ultimately resort to live chat to connect with a real person rather than a bot. They may then revert to a phone call or a series of emails back and forth, or give up and move on to another seller or service provider. We will return to this point later in the series: real people are the source of credible content.
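Under the hood, many of these bots amount to little more than a keyword lookup against canned answers. The sketch below shows the pattern; the FAQ entries and fallback message are invented for illustration.

# Minimal sketch of a scripted customer-service bot: it only matches
# keywords against canned FAQ answers, which is why it fails so often.
# The FAQ entries and fallback text here are invented for illustration.
FAQ = {
    ("return", "refund"): "You can return items within 30 days via your orders page.",
    ("shipping", "delivery"): "Standard shipping takes 3-5 business days.",
    ("password", "login"): "Use the 'Forgot password' link on the sign-in page.",
}

def faq_bot(message: str) -> str:
    words = set(message.lower().split())
    for keywords, answer in FAQ.items():
        if words & set(keywords):          # any keyword hit wins
            return answer
    # No keyword matched: the dead end that drives users to live chat.
    return "Sorry, I didn't understand. Would you like to chat with an agent?"

print(faq_bot("How do I get a refund?"))
print(faq_bot("My order arrived damaged"))  # falls through to the fallback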

At the end of these interactions, we are presented with a dubious question about our satisfaction with the results, such as ‘did you find what you were looking for’ or ‘was this helpful’. Often the answer is a resounding ‘no’, followed by the site asking what more could be done, but not necessarily with any actual follow-up to the individual user regarding their plea for help. Clearly, this experience is a downer. Over time it diminishes the value of the site, customer satisfaction, and sales. Many people are aware that we need a ‘new something’ to improve the situation.

We all have our stories of frustrating searches. Wanting to make Pad Thai for the family, I searched for ‘Burmese tofu’ on an ecommerce site.[2] Rather than providing links to the product I wanted, or an honest response indicating that the seller did not carry it, the site gave me irrelevant choices like sea salt, hot sauce, soup, curry paste, and so on. Other food searches I have done returned largely non-food products that were absurd to include in the list. These search frustrations tell us we need a ‘new something’.


Search engines and many online sites track and attempt to learn from our past usage. They present offers and items ‘selected especially for you’, but too often these are nonsense! For example, many of my searches are about running, runners, and races, so the site assumes I’m interested in other sports too and serves me football, football, football! That’s not me. And the same is true for plenty of other people whose tastes run to golf, tennis, or soccer instead. Many commercial sites are just not nuanced or smart enough to know that you might like videos on British history but not the latest fiction mini-series. Imagine how much more money even Amazon would make if it could hit the target more often and more precisely with my taste in videos and merchandise! Or if my favorite apparel retailer could better understand which styles of clothes I would really like. So, we know we need a ‘new something’!
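One plausible mechanism behind the ‘football, football, football’ problem (an assumption on my part, not a documented account of any particular site): when a recommender has no inventory tagged with your specific interest, it backs off to a coarse parent category and serves anything in it. A sketch under that assumption, with an invented taxonomy and catalog:

# Sketch of why coarse category back-off produces tone-deaf suggestions.
# The taxonomy and catalog here are invented for illustration.
TAXONOMY = {"running": "sports", "football": "sports", "golf": "sports"}
CATALOG = {"football highlights": "football", "golf tips": "golf"}

def recommend(user_interest: str):
    # Nothing in the catalog is tagged 'running', so the engine backs off
    # to the parent category and serves anything tagged under 'sports'.
    exact = [item for item, tag in CATALOG.items() if tag == user_interest]
    if exact:
        return exact
    parent = TAXONOMY.get(user_interest)
    return [item for item, tag in CATALOG.items() if TAXONOMY.get(tag) == parent]

print(recommend("running"))   # -> football and golf content, not running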

It is also disappointing that search results are not strictly ‘democratic’ or unbiased; rather, they are influenced by pay-for-play arrangements. It can take an intrepid searcher to probe beyond the initial results and find what they are really looking for. Wealthy companies with lots of web marketing staff have the budget to pay the search engines and do the search engine optimization (SEO) needed to stay at the top of the results. Smaller firms without those resources are out of luck. Those of us doing the searching know the results we are shown are manipulated by the money. Developers, marketing people, or supply chain people may try to search the web (including blogs and social media networks) to understand trends, such as what customers really want. In that case, the distortions of the pay-for-play search model impede the emergence of a clear picture of what people are actually thinking or doing. We know we need a ‘new something’.
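To see how little it takes for money to reorder results, consider a toy ranking function that blends an organic relevance score with a paid bid. The weights and entries below are hypothetical; real engines are far more elaborate, but the tilt works the same way.

# Illustrative sketch of pay-for-play ranking: a paid bid is blended into
# the organic relevance score. The weights and results are hypothetical.
results = [
    {"url": "small-firm.example", "relevance": 0.9, "bid": 0.0},
    {"url": "big-brand.example",  "relevance": 0.6, "bid": 0.8},
]

PAID_WEIGHT = 0.5  # how heavily money tilts the ordering (assumed)

def blended_score(r):
    return r["relevance"] + PAID_WEIGHT * r["bid"]

for r in sorted(results, key=blended_score, reverse=True):
    print(f'{r["url"]}: {blended_score(r):.2f}')
# big-brand.example outranks the more relevant small firm: 1.00 vs 0.90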

Enter the LLM Chatbot


Is the ‘new something’ that we need so badly being provided by this new generation of highly hyped chatbots (e.g., ChatGPT, Bard, et al.)? Search engine providers are rushing to address the current state of affairs with these ostensibly ‘smarter’, more nuanced, and more sophisticated solutions. The new chatbots are driven by large language model (LLM) neural networks.[3] These models can provide a natural language front end to search engines. Beyond interpreting natural language requests, the models can also generate a very convincing summarized answer to the question (as opposed to merely presenting a list of links, as the current generation of search does). LLMs learn to predict what sequence of words will answer a given question by analyzing truly enormous amounts of text from across the internet. This includes not just trustworthy, factual, and unbiased reporting and information, but all of the horrible, fanciful, and false information permeating the web.
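At its core, ‘predicting what sequence of words will answer a question’ means repeatedly choosing a likely next word given the words so far. The toy bigram model below makes that idea concrete; real LLMs use deep neural networks over billions of documents rather than a lookup table, but the learn-the-patterns-and-continue flavor is the same, and the invented mini-corpus shows how the model simply echoes whatever it was fed.

# Toy bigram language model: learn next-word counts from a tiny corpus,
# then generate text by repeatedly picking a likely next word.
# Real LLMs do this with neural networks over vastly more text.
from collections import Counter, defaultdict
import random

corpus = ("the supply chain is complex . the supply chain needs data . "
          "chatbots answer questions . chatbots sound confident .").split()

# Count which word follows which (the pattern matching described above).
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        nxt = follows[out[-1]]
        if not nxt:
            break
        # Sample proportionally to how often each continuation was seen.
        out.append(random.choices(list(nxt), weights=nxt.values())[0])
    return " ".join(out)

print(generate("chatbots"))  # e.g. 'chatbots sound confident . the supply'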

Bewitched, Bothered, and Beguiled

These large language models learn from all the content, including all the errors, conspiracy theories, hate speech, and baseless claims. The models do not innately have the ability to discern fact from fiction or to separate constructive/tolerant content from hateful/harmful content. That discernment requires human training of the LLMs. The models merely learn to match the sequences and patterns of all the words they are fed, so as to reflect what people are most likely to say next. And they are amazingly good at it. Their responses often sound very confident and reasonable, as if generated by an actual human … because they are imitating responses and text that humans have actually generated. Human-generated content on the web[4] is often written with great conviction even when its assertions are ridiculous and not based at all in fact.[5]


Because the new breed of chatbot is so good at imitating human speech, it can be beguiling. Even software engineers who should know better are attributing sentience and consciousness to these models. In the movie Her[6], Joaquin Phoenix’s character falls in love with his AI software. He is enthralled, beguiled, and bothered, and ultimately very lost. Samantha, the AI software, became all too convincingly real to him, and to many others, it turns out. There is a similar danger with LLM-based chatbots: we will believe them too much.

The engineers and creators of these technologies admit that, at least for now, they have a lot of problems to work out. The major search and AI companies have made their chatbots available to the public, in either limited or full release. It is a large experiment at this point, one that hopes to do more good than harm.

Artificial intelligence experts have long known that this technology exhibits all sorts of unexpected behavior. But they cannot always agree on how this behavior should be interpreted or how quickly the chatbots will improve. Because these systems learn from far more data than we humans could ever wrap our heads around, even A.I. experts cannot understand why these models generate a particular piece of text at any given moment.[7]

Many of the developers of this technology are sounding the alarms, admitting that, even though they wrote the code, they don’t know exactly how it makes its decisions or where it will take us.[8] Sam Altman, cofounder of ChatGPT creator OpenAI, was quite frank in a recent Twitter comment, saying “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” [emphasis added][9] Really? I thought your business and your customers were really important!

As more people search with and use this kind of LLM software, over time they may learn to be skeptical: to do their own filtering, be more selective, and do some level of fact-checking and sanity-checking of the answers. But doesn’t the need for that added effort defeat the purpose of a ‘productivity tool’? We also know that search engines and social networks have become subtly clever about co-opting your search for the benefit of big revenue generators rather than truly serving the best interests of users.

Meanwhile, many more people may take the answers as factual truth, with detrimental consequences. Behind the scenes, human ‘experts’ are still training and tweaking the systems to try to make them more reliable. We learned long ago that AI more broadly should come with the warning label “Human Training Required Before Operating”.


We live in a world where many people take what they see and hear on the internet at face value, regardless of how unsubstantiated or far-fetched it actually is. The ability of chatbots to mass-produce disinformation is scary, and sounds dangerous to many of us, including the technology experts working in this field. It is therefore the duty of application providers and users not to blindly trust these technologies or leave them unsupervised, if we are going to use them effectively, and if we are to do more good than damage as we seek to harness their power to go beyond the rudimentary bots of today.

There’s a lot more to say on this topic and we will continue the discussion in future articles in this series.

References:

AI in Supply Chain – Some Definitions – ChainLink Research (clresearch.com)

Alphabet shares dive after Google AI chatbot Bard flubs answer in ad | Reuters

Why a Conversation With Bing’s Chatbot Left Me Deeply Unsettled – The New York Times (nytimes.com)

Explainer: Bard vs ChatGPT: What do we know about Google’s AI chatbot? | Reuters

Generative AI: Benefits, risks and a framework for responsible innovation – SAS Voices

Can We Make Our Robots Less Biased Than We Are? – The New York Times (nytimes.com)



[1] FAQ = Frequently Asked Questions

[2] This is tofu made from chickpeas rather than soybeans.

[3] Bard, from Google, is based on LaMDA, short for Language Model for Dialogue Applications; ChatGPT is based on OpenAI’s GPT, or Generative Pre-trained Transformer.

[4] Read AI for Supply Chain: Debunking the Myths – Part Two – ChainLink Research (clresearch.com)

[5] Reuters Explainer: Bard vs ChatGPT: What do we know about Google’s AI chatbot? | Reuters

[6] Her (film) – Wikipedia

[7] Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror. – The New York Times (nytimes.com)

[8] Meet GPT-3. It Has Learned to Code (and Blog and Argue). – The New York Times (nytimes.com)

[9] Fast Company: Microsoft’s new Bing AI chatbot is already insulting and gaslighting users (fastcompany.com)
