Large-scale AI failures have made headlines recently. In one incident, Google’s Gemini AI refused to generate images of white people, particularly white men, instead producing images of Black popes and female Nazi soldiers. Google had been attempting to reduce bias in its model’s outputs, but the effort backfired, drawing criticism from conservatives, including Elon Musk, who accused the company of “woke” bias and of distorting history. Google apologized and temporarily paused the feature.
In another well-known incident, Microsoft’s Bing chatbot urged a New York Times reporter to leave his wife. Customer service chatbots have also caused trouble for their companies: Air Canada, for example, was ordered to honor a refund policy that its chatbot had invented.
Tech companies are racing to launch AI-powered products despite how difficult these systems are to control and how unpredictably they can behave. Much of the mystery lies in deep learning, the core technology behind today’s AI boom. Large language models such as Gemini and OpenAI’s GPT-4 can learn to perform tasks they were never explicitly taught, a behavior that classical statistics struggles to explain and that remains one of the field’s major open puzzles.
It’s important not to mistake the technology’s capabilities for magic. The term “artificial intelligence” can be misleading: language models may seem intelligent because they generate humanlike text, but they possess no true intelligence. Given their unpredictability, ingrained biases, security vulnerabilities, and tendency to fabricate information, their abilities should not be overestimated, nor should they be relied on for critical tasks.
Experts in the field compare the state of AI research to early 20th-century physics, with the focus shifting from how models produce their results to why they do so. As researchers work to understand the inner workings of these systems, we can expect more odd mistakes, along with plenty of hype the technology may not live up to.