
What is ChatGPT? AI and Search Perspectives


First in a Three-Part Blog Series on Conversational Search

In this blog, we explore what ChatGPT is and how it might impact the search engine and search application space. This post is the first in a three-part series; the subsequent blogs will explore GPT-3, the deep learning model that ChatGPT is based on, and Large Language Models, of which GPT-3 is just one of many.

If you are following the AI / chatbot / natural language processing space, it’s been impossible to ignore the hype and public interest in ChatGPT over the last few months. In fact, the Google Keyword Planner tool projects a volume of over 823,000 searches per month for the next three months, with an off-the-charts growth rate (literally).

Obviously, a lot has contributed to this hype, not the least of which is that OpenAI, the creator of ChatGPT, made the tool publicly available, and over a million people registered in the first month to play with the AI-driven interactive chat tool.

What is ChatGPT?

Basically, ChatGPT is a chatbot on steroids. But for a more complete and succinct definition, I will defer to Wikipedia:

ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models, and is fine-tuned with both supervised and reinforcement learning techniques.

The technology came along just as consumers (and developers) had started to become rather jaded after the flop of the initial “chatbot revolution.” In their first iteration, chatbots were supposed to take over the world by 2020, or at the very least improve customer service by automating certain tasks and taking some of the load off human customer service representatives.

You could argue that some original chatbots achieved the latter goal, but they still fell short of the hype. The problem is that, despite a plethora of development tools, good chatbots still required significant thoughtful design and engineering.  Developers had to outline (guess?) certain “intents” from users, and then script a set of responses to help customers accomplish a certain task.  The problems came when customers went “off script” and either did not use the expected language in their queries or submitted a query for which there was no programmed response.  But hey, Murphy’s Law or Chaos Theory should have predicted that users would go off-script more often than not.

After being exposed to chatbots everywhere, customers could easily spot when they were interacting with one.  Bad interactions often led to frustration and, ultimately, negative feelings towards a company or brand.  If you have had a bad chatbot experience, you can relive it in this article on the funniest chatbot fails.

And yet, even when chatbots failed, something was going on.

People were getting mad at the chatbots and behaving as if there was intelligence on the other end (albeit terribly simple or flawed).  This expectation, or desire, for truly human-like conversational AI has roots going back more than 50 years.

  • In 1964, Joseph Weizenbaum of MIT’s Artificial Intelligence Laboratory created ELIZA, an early chatbot that emulated a Rogerian psychotherapist, asking patients opening questions and then following up with further open questions that incorporate the patient’s earlier responses. ELIZA is primitive by today’s standards, but it still fooled a lot of people. You can still chat with ELIZA yourself and see what you think.
  • Similarly, in the film 2001: A Space Odyssey, which came out in 1968, we saw the famous interactions between the human astronaut, Dr. Dave Bowman, and HAL 9000, the super-AI that could converse like a human, operate a spacecraft, and also become schizophrenic and murderous.

The AI hype has had its ups and downs over the past several decades.  But the pace of change has accelerated almost exponentially in the last five years.  So perhaps the hype around ChatGPT is not so surprising since, with respect to conversational AI and machine intelligence, we are all wondering “are we there yet?”

The “Wow Factor” Behind ChatGPT

While AI has been heading up a steep hype curve for the last several years, a key milestone occurred with the launch of the ChatGPT prototype on November 30, 2022. 

To understand the buzz around ChatGPT, you have to realize that this was all part of OpenAI’s broad marketing plan for its AI-based services. While many of the more than one million subscribers in the first month were developers, no doubt many tech journalists and media sources were alerted as well.

OpenAI did a masterful job with the simple interface and the launch, getting the attention of even less technical, mass-media journalists. ChatGPT quickly garnered attention because:

  • The interface is natural and conversational, allowing for follow-up questions and dialog
  • It provides detailed responses and articulate answers across many domains of knowledge
  • Its responses can be hard to distinguish from those written by a human
  • The responses are quick, with little lag time (subject to OpenAI’s computational resources)

We’ll include a list of related articles at the end of this blog with details, but the breadth and depth of examples of interactions with ChatGPT were, at the same time, remarkable, surprising, humorous, and maybe a little bit unnerving.

People discovered that you could ask ChatGPT to do a wide variety of things, including:

  • Write an article or blog – here’s an example of a blog on ChatGPT, written by ChatGPT
  • Create an original joke – “Why hasn’t Apple created any foldable smart phones yet?” – “Because they can’t figure out how to fold their prices” – OK, original but not funny
  • Explain a complex concept – “Explain wormholes to me like I am 5”
  • Solve tricky math problems, step-by-step – “John can mow a lawn in 30 mins. Joe mows a lawn in 45 minutes. How long does it take them to mow one lawn together?” (see the worked answer just after this list)
  • Write in almost any genre, or language – “Write me a rap song about Elon Musk, in German”
  • Write, debug, and explain code – in multiple languages like PHP or JavaScript
  • Act like a chat companion – “Hi, I am bored. Tell me a joke about cats and dogs…”
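
(For the record, the mowing problem above works out neatly with the standard combined-rates reasoning a good step-by-step answer should follow: John mows 1/30 of a lawn per minute and Joe mows 1/45, so together they mow 1/30 + 1/45 = 3/90 + 2/90 = 5/90 = 1/18 of a lawn per minute, which means one lawn takes 18 minutes.)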

Like any viral phenomenon, the buzz fed on itself and reached a point where hardly a day went by without a mention of ChatGPT in popular traditional and social media. And the buzz came in all shapes and forms:

  • “Students are going to use it to cheat on writing assignments!”
  • “It is going to put content writers out of a job!”
  • “This is a great tool to help writers improve content!”
  • “ChatGPT is going to kill Google!”
  • “AI is finally going to take over the world!”

Of course, OpenAI could not have asked for a better marketing result, as long as they could manage the positive with the negative. As they say, “there’s no such thing as bad publicity.”

Expected Blowback and Reaction to ChatGPT Hype

As Newton’s Third Law states, “For every action, there is an equal and opposite reaction.”

As more people analyzed ChatGPT output, they discovered that it was by no means perfect.

  • It can be just plain wrong – we asked it 13 questions specific to the search space and got 5 wrong answers (which we may list in a future short blog post). Even OpenAI warns of this in the interface.
  • Bias can be baked into the system – if there was bias in the content it was trained on, that bias carries over into the tool.
  • Recency is a problem – the model was trained with content from a certain time period; it may not know anything about recent events.
  • It requires tremendous computational resources – nobody except OpenAI knows the real numbers, but a million users interacting with a complex AI model is going to require a LOT of GPUs.

“It’s a mistake to be relying on it for anything important right now… It’s a preview of progress; we have lots of work to do on robustness and truthfulness. Fun creative inspiration; great! reliance for factual queries; not such a good idea. We are working hard to improve!” – Sam Altman, CEO of OpenAI

Despite everything, we still recommend you give the tool a look: ChatGPT (openai.com). You may run into capacity issues, though. The end of this blog lists resources such as instructional videos and articles that discuss the tool’s pros and cons.

You have to admit that the tool’s launch has been a tremendous success for OpenAI. Rumors are that Microsoft plans to invest another $10 billion in OpenAI, after an initial $1 billion investment, which would put OpenAI’s valuation at approximately $29 billion.

It seems like a Battle of the Titans is shaping up between Microsoft and Google, though there are still a lot of questions to consider:

  • What will Microsoft do with ChatGPT?
  • How will Google respond?
  • Will AWS jump into the fray, since this is ultimately a battle among the cloud services providers?
  • What is the commercialization model for these AI models? How affordable will these AI services be?

Annoyed at the attention OpenAI is getting, Google recently introduced Bard, a conversational AI service powered by its Language Model for Dialogue Applications (LaMDA for short). We’ll have more on Bard in future blogs.

In the meantime, Pureinsights has some pragmatic perspectives and advice on all this.

Summary: Pragmatic AI and Search Application Perspectives

So, if you are a user, developer, or provider of search applications and services, what might all this mean? Whatever the answer, despite all of ChatGPT’s shortcomings, the last thing you should do is ignore what is happening.

1.  ChatGPT will accelerate the evolution to conversational search interfaces.

We are already talking to our phones and in-home digital assistants. And ChatGPT is leaps and bounds ahead of the last generation of chatbots. It’s not likely that ChatGPT will “kill Google.” But it will force Google to accelerate its plans to evolve into a conversational, question-answering interface leveraging tools like Google MUM, improved variations of BERT, or the newly announced Bard.

2.  ChatGPT makes mistakes and only “knows what it knows.”

ChatGPT can – and does – make silly mistakes. Results can be incorrect, yet deceptively authoritative. Human curation is not a practical solution. Furthermore, recency is an issue: the model was only trained on content up to 2021, while Google is continuously crawling the Internet.

3.  ChatGPT is a great generative tool – but how good is it really at finding content?

Most of the cool examples show the tool generating content. But how good is the tool at analyzing content or finding the right answer? Good search applications still rely on solid content preparation and tagging, as well as tried-and-true search technologies. The best search solutions are likely to be hybrids of traditional and AI-driven technologies.
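
To make the “hybrid” idea concrete, here is a minimal sketch of the retrieve-then-generate pattern many such solutions follow: a conventional keyword index finds candidate passages first, and only then is a generative model asked to compose an answer grounded in them. Everything below is illustrative, written in Python with made-up documents and a placeholder function standing in for the language model call; none of the names come from any real product or API.

    from collections import Counter

    # Toy in-memory "index". In a real system this would be a search engine
    # such as Elasticsearch, Solr, or OpenSearch.
    DOCUMENTS = [
        "ChatGPT is a chatbot built on OpenAI's GPT-3 family of large language models.",
        "BERT is an open-source language model from Google, often used for search relevance.",
        "Traditional search engines rank documents with keyword matching and signals like BM25.",
    ]

    def keyword_score(query, doc):
        """Very crude relevance score: how often the query's words appear in the document."""
        query_words = set(query.lower().split())
        doc_words = Counter(doc.lower().split())
        return sum(doc_words[w] for w in query_words)

    def retrieve(query, k=2):
        """Step 1 (traditional search): return the k best-matching passages."""
        ranked = sorted(DOCUMENTS, key=lambda d: keyword_score(query, d), reverse=True)
        return ranked[:k]

    def generate_answer(query, passages):
        """Step 2 (generative AI): placeholder for a call to a large language model.
        A real implementation would send the query plus the retrieved passages as context."""
        context = " ".join(passages)
        return f"Answer to '{query}', grounded in: {context}"

    if __name__ == "__main__":
        question = "What is ChatGPT built on?"
        passages = retrieve(question)
        print(generate_answer(question, passages))

The point is not the toy scoring function; it is that the generative step never sees the whole corpus, only the passages the traditional retrieval layer has already judged relevant. That pattern also happens to be a practical way around the recency and “knows what it knows” limitations described above.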

4.  The monetization of ChatGPT is still evolving.

What will it cost you to use ChatGPT? Pricing is evolving rapidly, with the latest development being a ChatGPT Plus plan at $20 per month.  But this is really only for paid experimentation.  For true business production deployments, OpenAI’s pricing for GPT-3 may provide some clues (GPT-3 is the deep learning model that ChatGPT is based on).  Needless to say, this is an extremely complex buying decision, which we’ll address further in future blogs.

5.  Customizing and expanding ChatGPT capabilities may be tricky and costly.

If you want ChatGPT to be “smarter” about topics you care about, GPT-3 has to be trained with additional content. But this raises some issues. Who owns this new, smarter model? And what if some of this content includes proprietary IP? The fine print currently suggests that if you pay to extend OpenAI’s deep learning model for your purposes, OpenAI also reaps the benefits.
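
As a rough illustration of what “trained with additional content” means mechanically, here is a minimal sketch assuming OpenAI’s GPT-3 fine-tuning workflow as documented at the time of writing, which takes training data as JSONL prompt/completion pairs (the separator and leading space below follow the documentation’s recommendations; the questions and answers themselves are entirely made up):

    import json

    # Entirely made-up examples of the kind of proprietary Q&A content a company might supply.
    examples = [
        {"prompt": "What is our standard support SLA?\n\n###\n\n",
         "completion": " Critical issues are acknowledged within one business hour.\n"},
        {"prompt": "Which regions does our service operate in?\n\n###\n\n",
         "completion": " North America and the EU.\n"},
    ]

    # One JSON object per line (JSONL), as expected by the GPT-3 fine-tuning tooling.
    with open("training_data.jsonl", "w") as f:
        for record in examples:
            f.write(json.dumps(record) + "\n")

The point is that your proprietary content literally becomes the training file you hand over, which is why the ownership and IP questions above matter.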

With OpenAI openly acknowledging that ChatGPT “makes mistakes,” we can summarize our advice as follows:

  • “Proceed with caution” or “wait and see.” ChatGPT’s faults may eventually be fixed. Or they may not.
  • Other choices will emerge from Google and other players, including some open-source options.
  • Some uses call for the latest powerful deep learning models, but less powerful and open-source models (like Google BERT) may suffice for your application.

You can read more in future blogs, where we delve into GPT-3 and Large Language Models in general.

As always, please CONTACT US if you have any comments or questions, or to request a free consultation to discuss your ongoing search and AI projects.

…………………………….

RELATED RESOURCES

Other Blogs in This Series

YouTube Videos

Positive / General Articles and Resources:

Cautionary Articles and Resources:

Google’s Response:
