ChatGPT: Will the Future Be AI Generated?

Read Time: 6 Minutes

[Ed. Note: The introduction to this article was written by ChatGPT.]

This Q&A features Barak Turovsky, a GLG Network Member and artificial intelligence (AI) and natural language processing (NLP) expert who shares his insights on ChatGPT, a language model developed by OpenAI. Turovsky compares ChatGPT with other AI and large language model (LLM) technologies and discusses its potential use cases beyond search. He also gives his opinion on Microsoft’s investment in OpenAI and the challenges of adoption for consumers and enterprises. The article concludes with a discussion of the future of the AI wars and the potential impact on NLP technology.

How do you view the technology incorporated in ChatGPT vs. what we’ve seen in the AI and LLM markets historically (e.g., how truly differentiated is this)?

The main breakthrough of ChatGPT is its fluency: it produces natural, human-sounding output on virtually any topic. This is achieved by using the groundbreaking Transformer neural network architecture developed and open-sourced by Google in 2017. Transformer networks are a leap forward from older models that process sentences sequentially, grouping earlier words together even when the true meaning of a sentence depends on words that occur later. By contrast, a Transformer models relationships among all the words in a sentence at the same time, regardless of their position, resulting in more authentic-seeming output. What’s more, these networks are trained on a corpus of tens of billions of sample dialogues, so they learn and synthesize information much more effectively than their AI forebears.
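To make the contrast concrete, here is a minimal, illustrative sketch of the scaled dot-product attention step that lets a Transformer relate every word to every other word in a single pass. It is a toy example in plain NumPy; the tiny dimensions, random “embeddings,” and single attention head are assumptions for illustration, not details of ChatGPT or Google’s models.

```python
# Toy sketch of scaled dot-product attention (the core of the Transformer).
# Dimensions and random "embeddings" are illustrative assumptions only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every query attends to every key at once, regardless of position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every word pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all positions
    return weights @ V                              # blend information from the whole sentence

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))                         # 5 "words", 4-dimensional stand-in embeddings
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)                                    # (5, 4): each word's new vector mixes in all the others
```

Because the attention weights are computed over all positions at once, a word at the end of a sentence can shape the representation of a word at the beginning, which is exactly the advantage over sequential models described above.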

What do you see as the nearest disruption opportunities for this technology? Much focus has been placed on search thus far, but is this the most applicable use case for LLM technology?

I believe that use cases aimed at improving creative and/or workplace productivity are much more practical for generative AI in the near to medium term than information-seeking, decision-support use cases such as many search queries. As outlined in my article, I use the following framework to evaluate generative AI/LLM use cases (a rough, illustrative sketch of the rubric follows the list):

  • Need for accuracy — How important is accuracy for the use case? It might not be important when you are writing a poem but very important when providing users with recommendations for a major purchase.
  • Need for fluency — Are you striving for a natural-sounding “story”? That would be important if you were writing a science fiction book but less important when providing data for a business decision.
  • Stakes of the use case — How high are the stakes, and what is the risk if the AI gets the answer wrong? The risk of an inaccurate answer when using AI to write a poem is much lower than when relying on AI to decide where to book your next vacation or which dishwasher to buy.
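To make the framework concrete, here is a rough, illustrative sketch that scores a use case along those three axes. The axes come from the framework above; the 1-to-5 scale, the thresholds, and the example use cases are arbitrary assumptions, not part of the original framework.

```python
# Illustrative rubric for the three-axis framework above.
# The 1-5 scale, thresholds, and example scores are arbitrary assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    need_for_accuracy: int  # 1 (low) to 5 (critical)
    need_for_fluency: int   # 1 (low) to 5 (critical)
    stakes: int             # 1 (low) to 5 (high): cost of a wrong answer

def near_term_fit(u: UseCase) -> str:
    """Rough heuristic: fluency-heavy, low-stakes use cases fit generative AI soonest."""
    if u.stakes >= 4 or u.need_for_accuracy >= 4:
        return "poor near-term fit (accuracy and stakes are too demanding)"
    if u.need_for_fluency >= 3:
        return "strong near-term fit (fluency at a premium, stakes manageable)"
    return "moderate near-term fit"

for uc in [
    UseCase("Write a poem", need_for_accuracy=1, need_for_fluency=5, stakes=1),
    UseCase("Recommend a dishwasher", need_for_accuracy=5, need_for_fluency=2, stakes=4),
]:
    print(f"{uc.name}: {near_term_fit(uc)}")
```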

We’ve seen Microsoft up its investment in OpenAI as it integrates ChatGPT technology into Bing. Do you think this will move the needle for Microsoft in search? Why or why not?

I am not sure that ChatGPT will significantly move the needle in search. There are two important things to consider here. One is distribution: Google dominates distribution both on desktop (Chrome) and especially on mobile (Android and, now, iOS). The second is the user perception that Google Search is a great product as a whole. I believe that ChatGPT will offer a great but not disruptive improvement for “mid-stakes” questions (e.g., How old is Barack Obama?), and that consumers will be slow to adopt ChatGPT for “high-stakes” queries (e.g., What washing machine should I buy?).

That said, I believe that Microsoft’s incorporation of ChatGPT is a win-win-win scenario for the company. Every 1% of search share that Microsoft might take away from Google (and I believe it could take a few percentage points because of its PR and marketing blitz) translates to $1B-$2B in extra revenue.

What’s more, the data from Bing helps Microsoft train its large language models to the benefit of two other massive businesses. The first is Microsoft Office, where using ChatGPT as a “writing aid” could increase productivity (emails, docs, presentations, etc.) and rejuvenate growth in this mature but extremely profitable business. The second is generative AI as a service, which could spark a new wave of growth in its Azure cloud offering and give Microsoft the opportunity to overtake AWS as the number one cloud provider.

What are the biggest headwinds to adoption of this technology for consumers and enterprises?

First, LLMs can be confidently (i.e., fluently) inaccurate. The closest human analogy is people who can talk very confidently and convincingly about any topic: when they don’t know something, they confidently make it up. Given their charisma and “smooth talk,” we are often carried away by their confident demeanor and believe what they say (especially if we don’t fully understand the topic). Dialogue-trained LLMs exhibit similar behavior: they are trained to provide an answer even when it is incorrect or doesn’t make sense. Moreover, in addition to producing confidently incorrect answers, LLMs can at times produce offensive answers, or results that introduce or reinforce existing biases.

Second, we can’t assume linear, super-fast improvement in LLMs’ accuracy simply from adding more training data. To work well, these models need to be trained on huge amounts of data; both ChatGPT and Google’s LaMDA are reportedly trained on billions of words. Given the massive size of the training corpus, doubling it might produce only a relatively incremental improvement in accuracy. Improving the accuracy of LLMs, while possible, is very complex and more art than science.

What does the future of the AI wars look like?

I prefer to call it the “AI Revolution.” This new incarnation of disruptive technology (many compare AI to the invention of electricity or fire) will impact every aspect of our lives. I consider myself particularly lucky to have worked both on the first major AI breakthrough, applying deep neural networks at enormous scale for the first time in Google Translate, and on a new generation of AI that can produce natural, humanlike output on virtually any topic.

Currently, much of the focus on ChatGPT’s disruptive impact is on search. However, given the utmost importance of accuracy there, I believe it will take a long while for ChatGPT to disrupt how we search for information. But there are many use cases where fluency (natural-sounding communication) is at a premium. Think of customer-facing interaction in virtually any industry. Here are just a couple of representative examples:

  • Customer service: A ChatGPT-based chatbot serving customers in financial services, healthcare, telecom, transportation, and other industries can sound much more “human” and knowledgeable than a poorly trained, offshore customer service rep, because its fluency lets it rephrase any response rather than repeat cookie-cutter, awkwardly worded knowledge-base articles.
  • Education: ChatGPT has already shown that it can pass nursing, bar, and other professional exams. This will unleash a new era in how we learn the skills needed for the rapidly changing workplace of the future.

A more fluent conversational layer enabled by ChatGPT, combined with the ability to generate customized content from an established enterprise knowledge base, could be the actual “sleeper” killer app in the enterprise, and in a relatively short time.
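As a rough sketch of what such a knowledge-base-grounded conversational layer might look like, the snippet below retrieves the most relevant policy text and asks a language model to rephrase it in a natural tone. The call_llm function, the tiny in-memory knowledge base, and the keyword retrieval are hypothetical placeholders, not any particular vendor’s API.

```python
# Hedged sketch of a knowledge-base-grounded customer-service reply.
# call_llm is a hypothetical stand-in for whatever LLM endpoint an enterprise uses;
# the in-memory knowledge base and keyword retrieval are illustrative assumptions.
from typing import List

KNOWLEDGE_BASE = {
    "billing": "Refunds are issued to the original payment method within 5-7 business days.",
    "roaming": "International roaming can be enabled from the account settings page.",
}

def retrieve(question: str) -> List[str]:
    """Naive keyword retrieval; a real system would use embeddings or a search index."""
    q = question.lower()
    return [text for topic, text in KNOWLEDGE_BASE.items() if topic in q]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a hosted language model."""
    return f"[the model would fluently rephrase]: {prompt}"

def answer(question: str) -> str:
    context = retrieve(question)
    if not context:
        return "Let me connect you with a specialist."  # decline rather than guess
    prompt = (
        "Answer the customer's question in a friendly, natural tone, "
        "using ONLY the policy text below.\n"
        f"Policy: {' '.join(context)}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer("How long does a billing refund take?"))
```

Grounding the generated reply in retrieved enterprise content, and declining to answer when nothing relevant is found, is also one practical way to mitigate the “confidently inaccurate” behavior discussed above.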

Therefore, I believe that many use cases like these will be disrupted much faster than search, and many companies should be thinking about it sooner rather than later. It is either “ChatGPT your competition, or your competition will ChatGPT you” :-).

As with every new and disruptive technology (think of the early days of railroads, planes, etc.), productizing and monetizing this groundbreaking AI technology is both exciting and scary, full of the complexities and nuances required to cross the chasm. As with many such transformational tech platforms, businesses will face a stark choice between leveraging this new capability and defending themselves against faster, nimbler competitors.
