
The Surprising Dual Nature of Artificial Intelligence Abilities
The Future of Artificial Intelligence: Demystifying the Current State
Computer scientist Yejin Choi takes us on a journey to demystify the current state of massive artificial intelligence systems like ChatGPT, highlighting three key problems with cutting-edge large language models. She also shares some humorous instances of them failing at basic commonsense reasoning.
A New Era in Artificial Intelligence
We are living in an era where artificial intelligence (AI) is becoming almost like a new intellectual species. These massive AI systems have the ability to process vast amounts of data, learn from experience, and adapt to new situations. However, with this power comes a set of challenges that we need to address.
Three Key Problems with Large Language Models
Lack of Common Sense
- ChatGPT and other large language models are trained on vast amounts of text data, but they often lack basic common sense.
- For example, when asked how long it takes to boil water, ChatGPT responded with a range of times from 5 minutes to several hours, showing that it cannot apply simple physical reasoning.
- This is because language models are trained on text data that often contains inaccuracies or contradictions. As a result, they learn to reproduce these errors rather than correct them.
Lack of Human Intuition
- Large language models struggle to understand human intuition and emotions.
- For instance, when asked about the concept of "home," ChatGPT provided a dry, factual definition, failing to capture the emotional significance of home in human experience.
- This is because language models are trained on text data that often lacks context and nuance. As a result, they struggle to understand the subtleties of human communication.
Lack of Transparency
- Large language models often lack transparency in their decision-making processes.
- For example, when asked to explain why it recommended a particular answer, ChatGPT responded with a vague statement about "pattern matching" without offering any concrete explanation.
- This is because language models are complex systems that rely on proprietary algorithms and data. As a result, it’s often difficult to understand how they arrive at their conclusions.
Building Smaller AI Systems Trained on Human Norms and Values
While large language models have made significant progress in recent years, they still have limitations. To overcome these challenges, we need to focus on building smaller AI systems that are trained on human norms and values.
- This approach involves creating AI systems that are designed to understand and replicate human behavior.
- By focusing on smaller, more manageable systems, we can build AI that is more transparent, accountable, and aligned with human values.
- Smaller AI systems also have the potential to be more efficient and effective in solving complex problems.
- By avoiding the overhead of massive data processing and instead focusing on specific tasks, we can create AI that is faster, cheaper, and more reliable.
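To make the idea of a small, task-specific system trained on human judgments more concrete, here is a minimal, hypothetical sketch. It is not Choi's actual method or data; it simply trains a tiny, inspectable text classifier on a handful of made-up, human-labeled judgments about everyday actions, assuming scikit-learn is available.

```python
# A minimal, hypothetical sketch of a "small model trained on human norms":
# a tiny text classifier that learns acceptable/unacceptable judgments from
# a handful of human-labeled examples. The data and labels are illustrative
# placeholders, not a real norms dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "norm" judgments: each action is labeled by a human annotator.
actions = [
    "helping a neighbor carry groceries",
    "returning a lost wallet to its owner",
    "taking credit for a colleague's work",
    "lying to a friend for personal gain",
]
labels = ["acceptable", "acceptable", "unacceptable", "unacceptable"]

# A small, inspectable pipeline: bag-of-words features plus logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(actions, labels)

# Query the model on a new situation; with so little data this only
# demonstrates the workflow, not any claim about accuracy.
print(model.predict(["keeping a promise to a child"]))
```

The point of the sketch is the shape of the approach, not the specific model: a system this small can be examined end to end, retrained cheaply when human labels change, and audited against the values it was given, which is much harder to do with a massive general-purpose model.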
Q&A with Chris Anderson
Chris Anderson: "Yejin, you’ve highlighted some significant challenges with large language models. What do you think are the next steps for researchers and developers?"
Yejin Choi: "I believe the next step is to focus on building smaller AI systems that are trained on human norms and values. By doing so, we can create AI that is more transparent, accountable, and aligned with human values."
Chris Anderson: "That’s a great point. And what about the benefits of this approach? How do you see it impacting society?"
Yejin Choi: "Smaller AI systems have the potential to be more efficient and effective in solving complex problems. They can also help us better understand human behavior and decision-making processes."
Chris Anderson: "That’s fascinating. Finally, what advice would you give to our audience who are interested in learning more about AI and its applications?"
Yejin Choi: "I would encourage them to explore the field of human-centered AI design. By focusing on building smaller AI systems that understand and replicate human behavior, we can create a more equitable and sustainable future for all."
Join the Conversation
We hope you’ve enjoyed this talk by Yejin Choi. If you’re interested in learning more about artificial intelligence and its applications, join us at TED.com to explore our collection of talks, transcripts, translations, and personalized recommendations.
- Visit https://go.ted.com/yejinchoi to watch more videos on technology, entertainment, design, science, business, global issues, the arts, and much more.
- Become a TED Member today to support our mission of spreading ideas: https://ted.com/membership.