As AI technologies like GPT-4 and DALL-E become more sophisticated, concerns about potential AI mistakes and risks associated with misinformation are increasing. In this article, we will discuss some recent AI mistakes and the concerns raised by experts in the field, using examples of AI technologies and their shortcomings.
ChatGPT: Impressive Growth Marred by AI Mistakes
Apple co-founder Steve Wozniak has expressed mixed feelings about ChatGPT, a popular AI chatbot:
- Impressive aspects: ChatGPT can learn tasks that usually take humans a significant amount of time, such as writing movie scripts, news articles, or research papers.
- Concerns: The platform struggles with creative projects and accuracy, and lacks "humanness."
ChatGPT has experienced rapid growth, reaching 100 million users in just two months, faster than the nine months TikTok took to reach the same milestone, according to a UBS analysis reviewed by Reuters. Many internet users now know what ChatGPT is, yet despite this success, the language model sometimes makes glaring mistakes.
Examples of ChatGPT Errors
- Mixed results when asked to write a financial blog post on tax-loss harvesting.
- Struggles with solving simple math equations and logic problems.
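To see what a correct answer to the first prompt should contain, here is a minimal sketch of the basic tax-loss-harvesting arithmetic: realized losses offset realized capital gains, reducing the tax owed. The figures and the flat 20% rate are illustrative assumptions, not from the article and not tax advice.

```python
# Minimal illustration of tax-loss harvesting arithmetic.
# The flat 20% capital-gains rate and all dollar figures are
# illustrative assumptions, not tax advice.

def tax_owed(realized_gains: float, harvested_losses: float,
             cap_gains_rate: float = 0.20) -> float:
    """Tax due after offsetting gains with harvested losses."""
    taxable = max(realized_gains - harvested_losses, 0.0)
    return taxable * cap_gains_rate

before = tax_owed(realized_gains=10_000, harvested_losses=0)     # 2000.0
after = tax_owed(realized_gains=10_000, harvested_losses=4_000)  # 1200.0
print(f"Tax saved by harvesting the loss: ${before - after:,.2f}")
```

The point of an exercise like this is that the numbers are trivially checkable, which is exactly where reviewers found the chatbot's output inconsistent.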
Wozniak's skepticism also extends to AI technologies like self-driving cars, which he believes cannot currently replace human drivers. He argued that AI does not possess the human understanding needed to predict other drivers' actions on the road.
Entrepreneur and investor Mark Cuban warned that ChatGPT and its creator, OpenAI, could be corrupted by misinformation as they internalize more information from the internet. Cuban raised concerns about the challenge of understanding and controlling AI-driven decisions during an interview on Jon Stewart's podcast, "The Problem with Jon Stewart."
Competing AI Platforms: Google's Bard and Factual Errors
Google's new AI chatbot, Bard, made a notable mistake in one of its first ads:
- Incorrectly claimed that the James Webb Space Telescope captured the first-ever images of a planet outside our solar system.
- In fact, the European Southern Observatory's Very Large Telescope took the first such image in 2004, according to NASA's website.
This error contributed to Alphabet, Google's parent company, losing $100 billion in market value.
Bing AI Mistakes: Factual Errors
Microsoft's Bing AI faced criticism for making multiple factual errors during its demos:
- Misrepresenting financial data from Gap's Q3 2022 financial report.
- Inaccurately comparing Gap's data to Lululemon's.
- Citing a corded version of the Bissell Pet Hair Eraser Handheld Vacuum, which is actually a cordless device.
Independent AI researcher Dmitri Brereton pointed out these mistakes a week after Bing AI's capabilities were demonstrated with tasks such as providing pros and cons for top-selling pet vacuums, planning a trip to Mexico City, and comparing financial data.
Brereton himself made a small mistake while fact-checking Bing, highlighting the difficulty in assessing AI-generated answers' quality.
Additional Bing AI Mistakes
- Incorrectly stating the current year as 2022 instead of 2023.
- Falsely claiming that Croatia left the EU in 2022.
- PCWorld found that Bing AI was teaching users ethnic slurs, a problem Microsoft has since corrected.
- Produced inaccurate answers, such as outdated cinema listings in London's Leicester Square. Microsoft has since corrected this error.
Caitlin Roulston, director of communications at Microsoft, acknowledged that the system may make mistakes during the preview period and emphasized that user feedback is crucial for identifying and rectifying issues. Because the search engine relies on live data, Microsoft will need to make significant adjustments to keep Bing AI from confidently presenting errors drawn from that data.
Generative AI: Advancements and Potential Dangers
Generative AI, including large language models like GPT-4 and image generators such as DALL-E, Midjourney, and Stable Diffusion, is rapidly advancing. Over 1,000 AI experts, researchers, and supporters have signed an open letter calling for a six-month pause on the development of "giant" AIs to establish safety protocols to mitigate their dangers.
Current AI Capabilities
- Generating photorealistic images, as demonstrated by Midjourney.
- Replicating people's voices, even fooling voice identification systems.
- Coding in many programming languages and writing essays and books, resulting in AI-written eBooks being sold on Amazon.
- Transforming text descriptions into moving images.
- Turning 2D still images into 3D visualizations.
However, AI, particularly large language models like ChatGPT, is notorious for making factual errors that can seem convincing. For instance, Bing's AI made several inaccuracies in its demo while creating an itinerary and summarizing a financial report.
Ethical Concerns and AI-generated Creativity
- Image generators have raised ethical concerns about artistic ownership and copyright.
- Tools like Glaze have been developed to protect artistic works from being exploited by AI using a cloaking technique.
- Voice actors losing work to AI dubbing software.
AI can generate creative instructions and recipes, often with amusing results. It can also act as an assistant to perform administrative tasks. Google Assistant’s AI can make restaurant reservations via phone calls, and OpenAI has enabled plugins for GPT-4 to look up data on the web and order groceries.
AI Mistakes: Uncovering the Flaws
As AI becomes more sophisticated and more people come to understand tools like Google Bard and ChatGPT, it is essential to address the potential dangers and ethical concerns associated with their rapid development. AI mistakes are a growing concern for businesses and users alike, as demonstrated by the recent errors from ChatGPT, Bard, and Bing AI.
Experts like Steve Wozniak and Mark Cuban emphasize the importance of human understanding and the risks associated with misinformation. It is crucial for AI developers and users to work together to mitigate these risks and create AI systems that are both efficient and accurate.