Imagine a world where machines aren’t confined to pre-programmed tasks but operate with human-like autonomy and competence. A world where computer minds pilot self-driving cars, delve into complex scientific research, provide personalized customer service and even explore the unknown. This is the potential of artificial general intelligence (AGI), a hypothetical technology that may be poised to revolutionize nearly every aspect of human life and work. While AGI remains theoretical, organizations can take proactive steps to prepare for its arrival by building a robust data infrastructure and fostering a collaborative environment where humans and AI work together seamlessly.
AGI, sometimes referred to as strong AI, is the science-fiction version of artificial intelligence (AI), where artificial machine intelligence achieves human-level learning, perception and cognitive flexibility. But, unlike humans, AGIs don’t experience fatigue or have biological needs and can constantly learn and process information at unimaginable speeds. The prospect of developing synthetic minds that can learn and solve complex problems promises to revolutionize and disrupt many industries as machine intelligence continues to assume tasks once thought the exclusive purview of human intelligence and cognitive abilities.
Imagine a self-driving car piloted by an AGI. It can not only pick up a passenger from the airport and navigate unfamiliar roads but also adapt its conversation in real time. It might answer questions about local culture and geography, even personalizing them based on the passenger’s interests. It might suggest a restaurant based on preferences and current popularity. If a passenger has ridden with it before, the AGI can use past conversations to personalize the experience further, even recommending things they enjoyed on a previous trip.
AI systems like LaMDA and GPT-3 excel at generating human-quality text, accomplishing specific tasks, translating languages as needed and creating many kinds of creative content. While these large language model (LLM) technologies might sometimes seem like it, it’s important to understand that they are not the thinking machines promised by science fiction. These feats are accomplished through a combination of sophisticated algorithms, natural language processing (NLP) and computer science principles. LLMs like ChatGPT are trained on massive amounts of text data, allowing them to recognize patterns and statistical relationships within language. NLP techniques help them parse the nuances of human language, including grammar, syntax and context. By combining complex AI algorithms and computer science methods, these systems can then generate human-like text, translate languages with impressive accuracy and produce creative content that mimics different styles.
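The statistical pattern recognition described above can be illustrated with a toy model. The sketch below is nothing like a real LLM (which predicts tokens with a neural network trained on vast corpora), but it shows the core idea in miniature: count which words follow which, then predict the most frequent follower. The corpus and function names here are invented for illustration.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = "the car drives the car stops the driver waves"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "car" follows "the" twice, "driver" once
```

Scaled up by many orders of magnitude, with neural networks instead of counts and subword tokens instead of whole words, this next-word-prediction objective is what gives LLMs their fluency without giving them understanding.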
Today’s AI, including generative AI (gen AI), is often called narrow AI, and it excels at sifting through massive data sets to identify patterns, apply automation to workflows and generate human-quality text. However, these systems lack genuine understanding and can’t adapt to situations outside their training. This gap highlights the vast difference between current AI and the potential of AGI. While the progress is exciting, the leap from weak AI to true AGI is a significant challenge. Researchers are actively exploring artificial consciousness, general problem-solving and common-sense reasoning within machines. While the timeline for developing a true AGI remains uncertain, an organization can prepare its technological infrastructure to handle future advancements by building a solid data-first infrastructure today.
How can organizations prepare for AGI?

The theoretical nature of AGI makes it challenging to pinpoint the exact tech stack organizations need. However, if AGI development uses similar building blocks as narrow AI, some existing tools and technologies will likely be crucial for adoption. The exact nature of general intelligence in AGI remains a topic of debate among AI researchers. Some, like Goertzel and Pennachin, suggest that AGI would possess self-understanding and self-control. Microsoft and OpenAI have claimed that GPT-4’s capabilities are strikingly close to human-level performance, but most experts categorize it as a powerful, narrow AI model. Current AI advancements demonstrate impressive capabilities in specific areas: self-driving cars excel at navigating roads, and supercomputers like IBM Watson® can analyze vast amounts of data. Nonetheless, these are examples of narrow AI. These systems excel within their specific domains but lack the general problem-solving skills envisioned for AGI. Given the wide range of predictions for AGI’s arrival, anywhere from 2030 to 2050 and beyond, it’s crucial to manage expectations while capturing the value of current AI applications.
While leaders have some reservations about the benefits of current AI, organizations are actively investing in gen AI deployment, significantly increasing budgets, expanding use cases and transitioning projects from experimentation to production. According to Andreessen Horowitz (link resides outside IBM.com), in 2023, the average spend on foundation model application programming interfaces (APIs), self-hosting and fine-tuning models across surveyed companies reached USD 7 million. Nearly all respondents reported promising early results from gen AI experiments and planned to increase their spending in 2024 to support production workloads. Interestingly, 2024 is seeing a shift toward funding gen AI through standard software line items, with fewer leaders drawing on innovation budgets, hinting that gen AI is fast becoming an essential technology.
On a smaller scale, some organizations are reallocating gen AI budgets toward headcount savings, particularly in customer service. One organization reported saving approximately USD 6 per call served by its LLM-powered customer service system, translating to a 90% cost reduction, a significant justification for increased gen AI investment. Beyond cost savings, organizations seek tangible ways to measure gen AI’s return on investment (ROI), focusing on factors like revenue generation, cost savings, efficiency gains and accuracy improvements, depending on the use case. A key trend is the adoption of multiple models in production. This multi-model approach uses several AI models together to combine their strengths and improve the overall output. It also serves to tailor solutions to specific use cases, avoid vendor lock-in and capitalize on rapid advancement in the field. In 2024, 46% of survey respondents showed a preference for open source models. While cost wasn’t the primary driver, this reflects a growing belief that the value generated by gen AI outweighs the price tag, and that the executive mindset increasingly recognizes that getting an accurate answer is worth the money.
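As a back-of-the-envelope check on the customer service figures quoted above: if a USD 6 saving per call corresponds to a 90% cost reduction, the implied pre-automation cost per call is roughly USD 6.67. This is an assumption-laden sketch (it presumes both numbers describe the same per-call baseline), not data from the survey itself:

```python
# Illustrative arithmetic only; assumes the USD 6 saving and the 90%
# reduction refer to the same per-call cost baseline.
savings_per_call = 6.00      # USD saved per LLM-served call
reduction = 0.90             # reported 90% cost reduction

original_cost = savings_per_call / reduction   # implied pre-automation cost
new_cost = original_cost - savings_per_call    # implied cost with the LLM

print(round(original_cost, 2))  # 6.67
print(round(new_cost, 2))       # 0.67
```

This kind of simple unit economics (cost per call before and after) is one concrete way organizations translate gen AI deployments into the ROI measures the survey respondents describe.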
Enterprises remain interested in customizing models, but with the rise of high-quality open source models, most opt not to train LLMs from scratch. Instead, they’re using retrieval-augmented generation (RAG) or fine-tuning open source models for their specific needs. The majority (72%) of enterprises that use APIs for model access use models hosted on their cloud service providers. Also, applications that don’t just rely on an LLM for text generation but integrate it with other technologies to create a complete solution, and that significantly rethink enterprise workflows and proprietary data use, are seeing strong performance in the market.

Deloitte (link resides outside IBM.com) explored the value of output being created by gen AI among more than 2,800 business leaders. Here are some areas where organizations are seeing an ROI:

Text (83%): Gen AI assists with automating tasks like report writing, document summarization and marketing copy generation.

Code (62%): Gen AI helps developers write code more efficiently and with fewer errors.

Audio (56%): Gen AI call centers with realistic audio assist customers and employees.

Image (55%): Gen AI can simulate how a product might look in a customer’s home or reconstruct an accident scene to assess insurance claims and liability.

Other potential areas: Video generation (36%) and 3D model generation (26%) can create marketing materials, virtual renderings and product mockups.

The skills gap in gen AI development is a significant hurdle. Startups offering tools that simplify in-house gen AI development will likely see faster adoption, given the difficulty of acquiring the right talent within enterprises. While AGI promises machine autonomy far beyond gen AI, even the most advanced systems still require human expertise to function effectively. Building an in-house team with AI, deep learning, machine learning (ML) and data science skills is a strategic move.
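The retrieval-augmented generation approach mentioned above can be sketched in a few lines. In this toy version, invented for illustration, documents are ranked by simple word overlap with the query; a production RAG system would instead embed documents and queries as vectors and search by similarity, then send the augmented prompt to an LLM:

```python
import re

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query -- a crude stand-in
    for the vector-similarity search a production RAG system would use."""
    def words(text):
        return set(re.findall(r"\w+", text.lower()))
    q = words(query)
    ranked = sorted(documents, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Prepend retrieved context so the model answers from enterprise data
    instead of relying on its training corpus alone."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
prompt = build_prompt("How many days do I have for a refund?", docs)
```

The appeal for enterprises is visible even in this sketch: the proprietary knowledge lives in the document store, not in the model weights, so it can be updated without retraining an LLM from scratch.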
Most importantly, no matter the strength of AI (weak or strong), data scientists, AI engineers, computer scientists and ML specialists are essential for developing and deploying these systems. These use areas are sure to evolve as AI technology progresses. However, by focusing on these core areas, organizations can position themselves to use the power of AI advancements as they arrive.

Improving AI to reach AGI

While AI has made significant strides in recent years, achieving true AGI, machines with human-level intelligence, still requires overcoming significant hurdles. Here are 7 critical skills that current AI struggles with and AGI would need to master: Visual perception: While computer vision has…