Since ChatGPT’s release in November 2022, almost everyone involved with technology has experimented with it: students, faculty, and professionals in almost every discipline. Almost every company has undertaken AI projects, including companies that, at least on the face of it, have “no AI” policies. Last August, OpenAI stated that 80% of Fortune 500 companies have ChatGPT accounts. Interest and usage have increased as OpenAI has released more capable versions of its language model: GPT-3.5 led to GPT-4 and multimodal GPT-4V, and OpenAI has announced an Enterprise service with better guarantees for security and privacy. Google’s Bard/Gemini, Anthropic’s Claude, and other models have made similar improvements.

AI is everywhere, and even if the initial frenzy around ChatGPT has died down, the big picture hardly changes. If it’s not ChatGPT, it will be something else, possibly something users aren’t even aware of: AI embedded in documents, spreadsheets, slide decks, and other tools in which it fades into the background. AI will become part of almost every job, ranging from manual labor to management.

With that in mind, we need to ask what companies must do to use AI responsibly. Ethical obligations and responsibilities don’t change, and we shouldn’t expect them to. The problem that AI introduces is the scale at which automated systems can cause harm: AI magnifies issues that are easily rectified when they affect a single person. For example, every company makes poor hiring decisions from time to time, but with AI all your hiring decisions can quickly become questionable, as Amazon discovered. The New York Times’ lawsuit against OpenAI isn’t about a single article; if it were, it would hardly be worth the legal fees. It’s about scale, the potential for reproducing its whole archive. O’Reilly Media has built an AI application that uses our authors’ content to answer questions, but we compensate our authors fairly for that use: we won’t ignore our obligations to our authors, either individually or at scale.
It’s essential for companies to come to grips with the scale at which AI works and the effects it creates. What are a corporation’s responsibilities in the age of AI—to its employees, its customers, and its shareholders? The answers to this question will define the next generation of our economy. Introducing new technology like AI doesn’t change a company’s basic responsibilities, but companies must be careful to continue living up to them. Workers fear losing their jobs “to AI” but also look forward to tools that can eliminate boring, repetitive tasks. Customers fear even worse interactions with customer service but look forward to new kinds of products. Stockholders anticipate higher profit margins but fear seeing their investments evaporate if companies can’t adopt AI quickly enough. Does everybody win? How do you balance the hopes against the fears?

Many people believe that a corporation’s sole responsibility is to maximize short-term shareholder value with little or no concern for the long term. In that scenario, everybody loses—including stockholders who don’t realize they’re participating in a scam. How would corporations behave if their goal were to make life better for all of their stakeholders? That question is inherently about scale. Historically, the stakeholders in any company have been the stockholders. We need to go beyond that: the employees are also stakeholders, as are the customers, the business partners, the neighbors, and, in the broadest sense, anyone participating in the economy. We need a balanced approach to the entire ecosystem.

O’Reilly tries to operate in a balanced ecosystem with equal weight going toward customers, shareholders, and employees. We’ve made a conscious decision not to manage our company for the good of one group while disregarding the needs of everyone else. From that perspective, we want to dive into how we believe companies need to think about AI adoption and how their implementation of AI needs to work for the benefit of all three constituencies.

Being a Responsible Employer

While the number of jobs lost to AI so far has been small, it’s not zero. Several copywriters have reported being replaced by ChatGPT; one of them eventually had to “accept a position training AI to do her old job.” However, a few copywriters don’t make a trend, and so far the total numbers appear to be small. One report claims that in May 2023, over 80,000 workers were laid off, but only about 4,000 of those layoffs (roughly 5%) were attributed to AI. That’s a very partial picture of an economy that added 390,000 jobs during the same period. But before dismissing the fear-mongering, we should ask whether this is the shape of things to come: 4,000 layoffs could become a much larger number very quickly.

Fear of losing jobs to AI is probably lower in the technology sector than in other business sectors. Programmers have always made tools to make their jobs easier, and GitHub Copilot, the GPT family of models, Google’s Bard, and other language models are tools that they’re already taking advantage of. For the immediate future, productivity improvements are likely to be relatively small: 20% at most. However, that doesn’t negate the fear, and there may well be more fear in other sectors of the economy.
Truckers and taxi drivers wonder about autonomous vehicles; writers (including novelists and screenwriters, in addition to marketing copywriters) worry about text generation; customer service personnel worry about chatbots; teachers worry about automated tutors; and managers worry about tools for creating strategies, automating reviews, and much more.

An easy reply to all this fear is “AI is not going to replace humans, but humans with AI are going to replace humans without AI.” We agree with that statement, as far as it goes. But it doesn’t go very far. First, this attitude blames the victim: if you lose your job, it’s your own fault for not learning how to use AI. That’s a gross oversimplification. Second, while most technological changes have created more jobs than they destroyed, that doesn’t mean there isn’t a period of dislocation, a time when the old professions are dying out but the new ones haven’t yet come into being. We believe that AI will create more jobs than it destroys—but what about that transition period? The World Economic Forum has published a short report that lists the 10 jobs most likely to see a decline and the 10 most likely to see gains. Suffice it to say that if your job title includes the word “clerk,” things might not look good—but your prospects are looking up if your job title includes the word “engineer” or “analyst.”

The best way for a company to honor its commitment to its employees and to prepare for the future is through education. Most jobs won’t disappear, but all jobs will change. Providing appropriate training to get employees through that change may be a company’s biggest responsibility. Learning how to use AI effectively isn’t as trivial as a few minutes of playing with ChatGPT makes it appear. Developing good prompts is serious work that requires training. That’s certainly true for technical employees who will be developing applications that use AI systems through an API. It’s also true for non-technical employees who may be trying to find insights from data in a spreadsheet, summarize a group of documents, or write text for a company report. AI needs to be told exactly what to do and, often, how to do it.

One aspect of this change will be verifying that the output of an AI system is correct. Everyone knows that language models make mistakes, often called “hallucinations.” While these mistakes may not be as dramatic as making up case law, AI will make mistakes—errors at the scale of AI—and users will need to know how to check its output without being deceived (or in some cases, bullied) by its overconfident voice. The frequency of errors may go down as AI technology improves, but errors won’t disappear in the foreseeable future. And even with error rates as low as 1%, we’re easily talking about thousands of errors sprinkled randomly through software, press releases, hiring decisions, catalog entries—everything AI touches (a rough sketch of that arithmetic follows below). In many cases, verifying that an AI has done its work correctly may be as difficult as it would be for a human to do the work in the first place. This process is…
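To make that scale argument concrete, here is a back-of-the-envelope sketch in Python. The 1% error rate is the figure quoted above; the monthly volumes are purely hypothetical, chosen only to show how quickly a “small” error rate turns into thousands of individual mistakes that someone has to catch.

    # Back-of-the-envelope illustration: a "small" error rate at AI scale.
    # The 1% rate echoes the figure above; the volumes are hypothetical.
    error_rate = 0.01

    hypothetical_monthly_volume = {
        "customer-service replies": 200_000,
        "catalog entries": 50_000,
        "generated code suggestions": 500_000,
    }

    total = 0
    for task, volume in hypothetical_monthly_volume.items():
        expected_errors = round(volume * error_rate)
        total += expected_errors
        print(f"{task}: ~{expected_errors:,} expected errors per month")

    print(f"total: ~{total:,} errors per month, each needing human review")

Under these assumed volumes, a 1% error rate works out to roughly 7,500 errors per month, scattered unpredictably across everything the AI touches.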