Notes on vibe coding 2

I continue a vibe-coding experiment. Did the AI-written code run?

[Photo: Spring vibes]

This post continues the prior post (link) about my blog archive project.

In view of the impending shutdown of Typepad, I want to "scrape" my own blog so that I can keep a complete archive of several thousand image-heavy blog posts going back almost 20 years. It seems like the right project to test "vibe coding," the AI hype of the week. Vibe coding promises to let businesses replace human coders with AI coders, and to let non-coders write code.

At the end of my previous post (link), I ran the first piece of code written by GPT. I had read GPT's description of what its code does, and hadn't seen anything troubling. Notably, I did not read the code before running it. That, to me, is the essence of vibe coding.


If you are on any social media, you have come across the AI magic story: someone writes a prompt, and then, magically, the AI delivers a perfect piece of code, one that works out of the box.

Did my GPT-5 code work just like that? Funny you should ask.

The code ran without errors, but it didn't produce anything useful. What does that mean? It created the entire file structure, with one folder per blog post, as intended. But every folder turned out to be empty. Hmmm.

I relayed this discovery to the AI coder. It pinpointed the problem: it had mistakenly assumed that the Typepad export file labels each post's URL as "URL" when, in fact, the field is called "UNIQUE URL". It then fixed its own code and offered a revised file.
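I still haven't looked at GPT's code, so this is only a minimal sketch of the kind of fix involved, assuming the export is the text file Typepad produces, in which each post is a block of "FIELD: value" lines (the file name below is made up):

```python
# Minimal sketch: pull each post's permalink out of a Typepad export file,
# where fields look like "UNIQUE URL: https://...".
# The file name is a placeholder; the field spelling is the one the AI
# coder tripped over.

def post_urls(export_path="typepad_export.txt"):
    urls = []
    with open(export_path, encoding="utf-8") as f:
        for line in f:
            # The original bug, roughly: looking for lines that start with
            # "URL:" finds nothing, because the field is spelled "UNIQUE URL:".
            if line.startswith("UNIQUE URL:"):
                urls.append(line.split(":", 1)[1].strip())
    return urls

if __name__ == "__main__":
    print(len(post_urls()), "post URLs found")
```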

I ran the revised code; it finished without error, and this time, the folders were populated with data.


At some point during the above process, I concocted a different way of organizing the data. Instead of having thousands of folders in the directory, I'd set up a single folder to hold all the images. The key is to assign a unique number to each image, and to associate each image number with the pertinent blog post.

I sketched out how I'd like to set up the image indexing scheme and the new directory structure, and issued a new prompt. GPT responded with a new script that implements these ideas.
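I can only guess at what GPT's script looks like inside, but the indexing scheme itself is simple enough to sketch. All names here are illustrative: every image gets a sequential number, the file goes into one shared folder, and a CSV records which blog post each number belongs to.

```python
# Sketch of the flat layout described above, under assumed names:
# all images live in one folder ("images/"), each gets a sequential
# index, and a CSV maps that index back to its blog post.
import csv
from itertools import count

image_id = count(1)          # 1, 2, 3, ... one number per image
index_rows = []              # (image index, post URL, original image URL)

def register_image(post_url, image_url):
    n = next(image_id)
    index_rows.append((n, post_url, image_url))
    return n                 # caller saves the file as images/<n><extension>

def write_index(path="image_index.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["image_index", "post_url", "image_url"])
        writer.writerows(index_rows)
```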

This script also ran without errors. Again, the first attempt was only partially successful: when I opened the process tracker, I found that only about half of the blog images had been captured.

I learned that some of the image links grabbed from the HTML code were not what they appeared to be. For example, some links pointed to Amazon-generated pages for my books; those pages had expired, and in any case they were not images I wanted to keep. Other links ran into various HTTP error codes.
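One plausible way to weed out such impostors is to check the Content-Type of each response before saving it. I don't know whether GPT's code does exactly this; the sketch below uses the requests library, and the skip reasons are my own wording.

```python
# Fetch a candidate image URL and report why it was skipped, if it was.
import requests

def fetch_image(url, timeout=20):
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return None, f"request failed: {exc}"
    if resp.status_code != 200:
        return None, f"HTTP {resp.status_code}"
    if not resp.headers.get("Content-Type", "").startswith("image/"):
        return None, "not an image (e.g. an expired Amazon product page)"
    return resp.content, "ok"
```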

At this point, I explicitly asked GPT to contend with blocking technology, as indicated by the HTTP 403 (forbidden) errors. Even though the AI knew from the start that 403s could be an issue, the initial code did not include any counter-measures. With each new report of blocked URLs, the AI coder added another layer of code that executed a specific anti-blocking tactic.
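The tactics themselves are the usual suspects; whether GPT used exactly these, I can't say, but a sketch would layer them roughly like so (the blog URL in the Referer header is a placeholder):

```python
# Common counter-measures for 403s, layered as the post describes:
# a browser-like User-Agent, a Referer pointing back at the blog,
# a pause between attempts, and a retry or two. Generic tactics,
# not necessarily the exact ones GPT chose.
import time
import requests

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Referer": "https://example.typepad.com/",   # placeholder blog URL
}

def fetch_with_retries(url, tries=3, pause=2.0):
    for attempt in range(tries):
        resp = requests.get(url, headers=HEADERS, timeout=20)
        if resp.status_code == 200:
            return resp
        if resp.status_code == 403 and attempt < tries - 1:
            time.sleep(pause)        # back off before trying again
            continue
        resp.raise_for_status()      # give up and surface the error
```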

Other refinements were necessary. At first, the AI coder ignored my instruction to set each image's name to the image index; it sometimes retained the original name. Then, when it did switch the name to the image index, it dropped the suffix (.jpg, .png, etc.). The chatbot interface proved very convenient for steering the AI coder and fixing these minor issues.
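The suffix fix, for instance, boils down to a couple of lines. This is my sketch, not GPT's code, and the names are illustrative: replace the original file name with the image index while keeping the original suffix.

```python
# Build the new file name from the image index, keeping the suffix.
import os
from urllib.parse import urlparse

def indexed_filename(image_url, image_index, folder="images"):
    # e.g. ".../photo.jpg?size=large" -> "images/137.jpg"
    path = urlparse(image_url).path
    ext = os.path.splitext(path)[1] or ".jpg"   # fall back if no suffix found
    return os.path.join(folder, f"{image_index}{ext}")
```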


At one point, I jumped ship to another AI coder, Claude. That was when GPT got twisted around like a pretzel. I had started to encounter coding errors and, as usual, relayed them to GPT; it kept telling me it had fixed the problem when the offending code was still there. So now I had two AIs running side by side. GPT remained the main code generator, but I no longer took the GPT code and ran it directly; I passed it to Claude, which checked for the same coding error and, if present, fixed it.

It turns out that current AI coders may have a habit of falling into such traps. On a different project, for which I used Claude as the main code generator, it got stranded in a corner where it kept telling me an offending line of code had been removed, when the new file clearly still contained it. So I had to fire up GPT to get a lift out of that dark corner.


I'm still amazed by how much working code was produced. In the end, I obtained code that ran through the whole process of setting up the directory structure and populating it the way I wanted. The image index worked as expected, tying each image to the blog post it belonged to.

And I haven't read a single line of code.

That's vibe coding. The user does have to steer the AI coder in the right direction and correct course as needed, but, as demonstrated here, I didn't have to rewrite any code myself.

In the next post, I'll discuss where I think this is all heading. Is it true that non-coders will use AI to write code?