Notes on vibe coding 3
Will non-coders be able to "write code"?

In this third post of the series on vibe coding, I reflect on my experiment and speculate about where the practice is headed. (The previous two posts are here and here.)
In particular, will non-coders be able to "write code"?
It's obvious that for certain projects, it is already possible for a non-coder to obtain functional code via an AI. The prerequisites are the ability to articulate what one wants, and ample patience, because for now, some degree of steering is required. For truly complicated applications, I'm not sure the technology is there yet.
It is indeed possible to imagine a world requiring less steering, which implies that AI coders will have developed an even better sense of what the user is looking to do. For example, there may come a day when the AI devises an image indexing strategy on its own, obviating the need for me to prompt it.
Let's ponder what that world would look like. The user asks to archive all the blog posts at a website, informing the AI coder that the posts are filled with images, and that it's important to match images to their respective posts. The AI coder figures out the image indexing strategy, plus the directory structure, plus the anti-blocking techniques, and produces functional code that requires no further steering.
This future world looks very familiar! It's the world of software as we know it. When we execute a find and replace within a Word document, what happens? Behind the scenes, the application executes code that finds the word and replaces it, repeating these operations until the entire document is read through. The key words are "behind the scenes." When we use Word, we don't think about the code that forms Word.
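To make "behind the scenes" concrete, here is a toy sketch, in Python purely for illustration (Word's actual implementation is of course far more elaborate), of the kind of logic such a command runs:

```python
def find_and_replace(text: str, target: str, replacement: str) -> str:
    """Toy find-and-replace: scan the document once, swapping each
    occurrence of the target for the replacement."""
    result = []
    i = 0
    while i < len(text):
        if text.startswith(target, i):   # a match begins at position i
            result.append(replacement)
            i += len(target)             # skip past the matched text
        else:
            result.append(text[i])       # copy one character unchanged
            i += 1
    return "".join(result)
```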
All software is code, but most of the time, users don't see or notice any code.
I think that's the world we're heading towards. Right now, the framing of the issue is a bit off-kilter. Non-coders don't want to write code, read code, or think about code. They just want to get things done.
The ideal interface for this future is not a chatbot. It's something that accepts natural language prompts, and then delivers the results the user is seeking. This user experience is similar to running any command within an application like Word or Excel. It isn't one in which the user takes an action, expecting to receive a piece of code that the user then executes in order to obtain outputs.
***
This future world is also different from the world of Word, Excel, etc. in two fundamental ways.
First, the software is constructed in real time. In the old world, Microsoft engineers wrote the find-and-replace code once, and every time a user clicks the command, that same code runs. In the new world, when the user issues the prompt, the AI composes the code on demand and then executes it.
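As a sketch of the difference, the new-world loop might look something like the following, where `generate_code` is a hypothetical stand-in for a call to a language model, not any real API:

```python
def generate_code(prompt: str) -> str:
    """Hypothetical stand-in: send the prompt to a language model
    and return the Python source it writes."""
    raise NotImplementedError("placeholder for a real model call")

def run_on_demand(prompt: str) -> None:
    source = generate_code(prompt)  # 1. the code is composed in real time...
    # 2. ...and executed immediately, behind the scenes
    exec(compile(source, "<ai-generated>", "exec"))
```

In the old world, only the second half exists: the source was written once, long ago, by a human engineer.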
This shift to real time has major implications. Software becomes more flexible and customizable. In the old world, the find-and-replace function admits only minor variations, such as whether to match case. The user can't ask for some wrinkle that wasn't pre-conceived and offered by the software developer. In the AI world that I imagine, the user can request a find-and-replace operation for "apple" that applies only when "apple" in a sentence refers to the fruit. This is possible because the code is written in real time at the user's prompting.
My find-and-replace code will be different from yours because we issue different requirements.
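For the apple example, the AI-written code might look roughly like the sketch below. A crude keyword heuristic stands in for the semantic judgment that a real AI-generated version would more likely delegate to a language model; the cue words are invented for illustration:

```python
import re

# Invented cue words that suggest the fruit sense of "apple".
FRUIT_CUES = {"eat", "ate", "ripe", "juice", "pie", "tree", "orchard"}

def replace_fruit_apple(text: str, replacement: str) -> str:
    """Replace "apple" only where nearby words suggest the fruit sense."""
    def maybe_replace(match: re.Match) -> str:
        start, end = match.span()
        window = text[max(0, start - 60):end + 60].lower()  # nearby context
        if any(cue in window for cue in FRUIT_CUES):
            return replacement
        return match.group(0)  # leave other senses of "apple" untouched
    return re.sub(r"\bapple\b", maybe_replace, text, flags=re.IGNORECASE)
```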
This flexibility comes at a cost: the behavior of software will become more variable. Even if we both want the same find-and-replace, the AI-written code will likely differ somewhat, which means there is a good chance the outputs will vary too. In the old world, by contrast, the outputs must be identical since it's the same piece of code running. I suspect that the loss in reliability will be tolerated in many applications.
Another change in this new world is how users communicate with their software. In the old world, it's all buttons, menus, and links. To accommodate customizable software, the new interface must let users articulate what they want to get done. A natural-language interface is the answer, and large language models are perfect for this purpose.
If the point of vibe coding is to let AI do all of the coding, then it's inevitable that the AI will take control of our computers. We would effectively have to make the AI a "super-user" on our machines, with rights to edit, create, and delete files; install software; and so on. This creates obvious privacy and security risks.
In my experiment, the AI didn't directly run any code on my computer. I downloaded each script and ran it myself. Even in this mode, I assumed some risk because I didn't read the code. It would have been better to first pass the code through some kind of malware detector. Besides, the potential harm could also come from bad code rather than malice, which is even harder to prevent.
***
In conclusion, vibe coding places the attention on coding, but what is really innovative about this new AI world is that we are coming ever closer to software that is customized and written in real time, then executed behind the scenes to deliver outputs to users. The key difference users will feel is the ability to describe in natural language what they want to get done; and because of the new flexibility, the scope of what can be done is vastly expanded.
Meanwhile, expect the software to be less reliable, and even less secure.