Time and again, prominent scientists, technologists, and philosophers have made wildly inaccurate predictions about the future of innovation. Even Einstein was not immune, declaring, “There is not the slightest indication that nuclear energy will ever be obtainable,” just a decade before Enrico Fermi completed the first fission reactor in Chicago. Shortly thereafter, the consensus swung to fears of an imminent nuclear apocalypse. Today, some experts warn of an impending artificial general intelligence (AGI) doomsday, while others argue that large language models (LLMs) have already peaked. It is hard to dispute David Collingridge’s influential thesis that attempting to predict the risks posed by new technologies is a futile endeavor. Given that our leading scientists and technologists are so often mistaken about technological progress, how can our policymakers be expected to effectively regulate the emerging risks from artificial intelligence (AI)? We ought to heed Collingridge’s warning that technology evolves in unpredictable ways.
However, there is one class of AI risk that can generally be anticipated in advance: risks arising from misalignment between a company’s financial incentives to profit from its proprietary AI model in a particular way and society’s interests in how that model should be monetized and deployed. The surest way to overlook such misalignment is to focus exclusively on the technical aspects of AI models, divorced from the socio-economic environment in which these models will operate and be designed for profit. Focusing on the economic risks posed by AI is not just about preventing monopolies, self-preferencing, or Big Tech dominance. It is about ensuring that the economic environment that fosters innovation does not also incentivize unpredictable technological risks as companies race to profit or corner the market. And it is about ensuring that the value generated by AI is broadly shared, by preventing premature consolidation.
We can foster more innovation if emerging AI tools are accessible to everyone, enabling a diverse ecosystem of new firms, startups, and AI tools to emerge. OpenAI, with its $2 billion in annual revenue and millions of users, is already a major player. Its GPT store and developer tools need to return value to the people who build on them if those innovation ecosystems are to remain viable and diverse. By looking closely at the economic incentives behind innovations and at how technologies are actually monetized, we can better understand the economic and technological risks fostered by a market’s structure. Market structure is not simply the number of firms in a market; it also encompasses the cost structure and the economic incentives that follow from institutions, relevant government regulations, and available financing.
It is instructive to consider how the algorithmic technologies that underpinned the earlier aggregator platforms (such as Amazon, Google, and Facebook) were initially deployed to benefit users but were later reprogrammed to increase the platforms’ profits. The problems with social media, search, and recommendation algorithms were not primarily engineering problems; they were failures of financial incentives to align with the safe, effective, and fair deployment of those algorithms. To understand how platforms allocate value, and what can be done about it, we examined the role of algorithms, and the unique informational setup of digital markets, in extracting economic rents from users and producers on platforms. In economic theory, rents are profits above what would be possible in a competitive market, and they reflect control over some scarce resource.
For digital platforms, extracting digital rents usually entails degrading the quality of information shown to users, on the strength of the platform’s “ownership” of access to a large customer base. Over time, the misalignment between the initial promise of delivering user value and the pressure to grow profit margins has driven platforms toward harmful behavior. Amazon, for example, has drifted from its original customer-centric mission toward an increasingly extractive business model. Google, Meta, and other major online aggregators have likewise come to prioritize their economic interests over their original commitments to users and ecosystem partners.
How platforms allocate value between themselves and the members of their ecosystems is crucial in determining whether economic activity and human attention are directed toward productive ends. The risks posed by the next generation of AI systems are significant: they will shape not only the information shown to users but also the economic terms on which AI is deployed. Safeguards on algorithms, together with more transparent disclosure of how platforms monetize them, may help prevent such harmful behavior in the future. Algorithms have become key gatekeepers of markets and allocators of value, and understanding the risks posed by AI systems is essential to directing economic activity toward positive outcomes.