Kevlin Henney and I recently had a discussion regarding the possibility of automated code generation replacing higher-level languages in the future. We pondered whether ChatGPT N (for large N) could potentially generate executable machine code directly, similar to how compilers work today, instead of generating code in a high-level language like Python. This question is not purely academic. As coding assistants become more accurate, it is plausible to assume that they may eventually take over the task of writing code rather than just assisting programmers. This would be a significant change for professional programmers, although coding is just a small aspect of their work.
To some extent, this transition is already underway. ChatGPT 4’s “Advanced Data Analysis” feature can generate Python code, execute it in a sandbox, collect error messages, and even attempt to debug it. Google’s Bard offers similar capabilities. Python is an interpreted language, so this loop never produces machine code, but there is no reason why the process couldn’t incorporate a C or C++ compiler.
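The generate-execute-debug loop described here can be sketched in a few lines. The `generate_code` function below is a hypothetical stand-in for a call to a model; the point is the mechanics of running generated code in a subprocess and feeding error messages back:

```python
import subprocess
import sys

def run_in_sandbox(code: str):
    """Run a piece of generated code in a subprocess and capture its output."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10,
    )
    return result.returncode, result.stdout, result.stderr

def generate_code(prompt: str, error: str = "") -> str:
    """Hypothetical stand-in for a model call. A real assistant would send
    the prompt, plus any error message from the last run, to an LLM."""
    if "NameError" in error:
        return "x = 41\nprint(x + 1)"  # "fixed" second draft
    return "print(x + 1)"              # buggy first draft: x is undefined

prompt = "add one to 41 and print the result"
code = generate_code(prompt)
returncode, stdout, stderr = run_in_sandbox(code)
if returncode != 0:
    # Feed the error message back and regenerate, as Advanced Data Analysis does.
    code = generate_code(prompt, stderr)
    returncode, stdout, stderr = run_in_sandbox(code)
print(stdout.strip())  # -> 42
```

The loop terminates when the code runs cleanly; a real assistant would cap the number of retries rather than loop once.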
Historically, similar changes have occurred in the field of programming. In the early days, programmers used to “write” programs by physically plugging in wires or toggling binary numbers. Then, they progressed to writing assembly language code before finally adopting higher-level languages like COBOL and FORTRAN in the late 1950s. For those who were accustomed to circuit diagrams and switches, these early languages seemed as radical as generative AI programming appears today. COBOL was an attempt to simplify programming by making it more akin to writing in English.
Kevlin highlighted the importance of higher-level languages as a “repository of determinism” that we currently rely on. While the term “repository of determinism” may sound ominous, it signifies the need for a reliable foundation. At each stage of programming history, there has always been a repository of determinism. In assembly language programming, developers had to examine the binary representation of instructions to understand the computer’s behavior. With higher-level languages like FORTRAN and C, the source code expressed the desired outcome, and it was the responsibility of the compiler to generate the correct machine instructions. However, the reliability of early compilers was questionable, especially when it came to optimization. Portability was also a challenge as different vendors had their own compilers with unique quirks and extensions. Assembly language remained a last resort for debugging. The repository of determinism was limited to a specific vendor, computer, and operating system.
To ensure consistency across computing platforms, language standards and specifications were developed. As a result, today, very few programmers need to work with assembly language. By using higher-level languages like C or Python, programmers can read code and understand its behavior. If the program behaves unexpectedly, it is more likely that the programmer misunderstood some aspect of the language’s specification rather than the compiler or interpreter making a mistake. This predictability is crucial for successful debugging. The source code provides a clear representation of what the computer is doing at a reasonable level of abstraction. If the program doesn’t perform as intended, it can be analyzed and corrected. While this may require revisiting programming literature, it is a well-understood and manageable problem. We no longer need to delve into machine language, which is significantly more complex today due to factors like instruction reordering, speculative execution, and long pipelines. The layer of abstraction provided by higher-level languages is essential. However, it must also be deterministic and consistently produce the same outcome every time the program is compiled and executed.
Determinism is necessary because all computing, including AI, relies on computers performing reliably and consistently, often millions or billions of times. If we don’t know precisely what software does, or if it can change on every compilation, we can’t build a business around it: maintenance, extension, and debugging become extremely difficult. While automated code generation shows promise, it does not yet possess the reliability we expect from traditional programming. Simon Willison refers to this as “vibes-based development.” Humans still play a crucial role in testing and fixing errors. Additionally, generating code multiple times during the development process is common, and the results are likely to vary. Bard even offers several alternative code options to choose from. This lack of repeatability makes it hard to understand a program’s behavior or to track progress towards a solution. It might be tempting to assume that the variation can be controlled by setting a parameter like GPT-4’s “temperature,” which governs how much randomness goes into each response, to 0. However, this doesn’t solve the problem. Temperature has its limits, and one of those limits is that the prompt must remain constant: modifying the prompt to guide the AI towards generating correct or well-designed code takes us beyond those limits. Furthermore, the model itself is subject to change, and such changes are beyond the programmer’s control. Models are regularly updated, and there is no guarantee that an updated model will generate the same code. Therefore, source code produced by AI cannot serve as the repository of determinism.
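Temperature controls how sharply the model’s next-token distribution is peaked before sampling. A toy sketch (hand-picked logits, not a real model) of why a temperature of 0 degenerates into deterministic greedy decoding:

```python
import math

def token_probabilities(logits, temperature):
    """Turn raw next-token scores into a probability distribution.
    As temperature approaches 0, the distribution collapses onto the
    single highest-scoring token, so decoding becomes deterministic."""
    if temperature == 0:
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0  # greedy: always the argmax
        return probs
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
print(token_probabilities(logits, 0))    # [1.0, 0.0, 0.0] -- same choice every time
print(token_probabilities(logits, 1.0))  # probability spread across all tokens
```

Even at temperature 0, repeatability holds only while the prompt and the model weights stay fixed; change either and the “deterministic” output changes with them.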
This doesn’t mean that AI-generated code is not beneficial. It can provide a starting point for further work. However, when it comes to reproducing and understanding bugs, repeatability becomes crucial, and surprises cannot be tolerated. At that stage, programmers must refrain from regenerating high-level code from natural language prompts. The AI effectively acts as a first draft creator, which may save effort compared to starting from scratch. When transitioning from version 1.0 to 2.0 and adding new features, similar challenges arise. Even the largest context windows can’t encompass an entire software system, so working on one source file at a time remains necessary. This mirrors the current programming approach, where the source code serves as the repository of determinism. Moreover, it is difficult to instruct a language model on what it is allowed to change and what should remain unchanged. Asking it to modify only a specific loop within a file may or may not be successful.
These arguments do not apply to coding assistants like GitHub Copilot, which act as assistants to programmers rather than replacing them. With Copilot, programmers can specify precisely what they want to be done and where. In contrast, when using ChatGPT or Bard to write code, programmers assume the role of passengers rather than pilots or copilots. While you can instruct a pilot to fly you to New York, you aren’t the one flying the plane.