I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.
Vibe coding is essentially asking AI to do the whole coding process, and then checking the code for errors and bugs (optional).
I mean, at some point you have to realize that instructing an AI on every single thing you want to do starts to look a lot like programming.
Programming isn’t just writing code. It’s being able to reason about a method of doing things. Until AI is at the level of designer, you can expect humans to have to do the brunt of the work to bring software to life.
lol wut, asking AI to do the work and then going back and fixing bugs…?
To me, vibe coding is: pick a project to work on and get building. Very basic planning stages without much design, like building with Legos without instruction manuals. I make design decisions and refactor as I code. I certainly get some AI input when I don’t know how to implement something, but I will usually work “blindly” using my own ideas and documentation. I probably visit Stack Overflow while vibe coding more than I do ChatGPT.
Somewhat impressive, but still not quite a threat to my professional career, as it cannot produce reliable software for business use.
It does seem to open the door for novices to create ‘bespoke software’ where they previously would not have been able to, or could not justify the time commitment, which is fun. This means more software gets created which otherwise would not have existed, and I like that.
IMO it will “succeed” in the early phase. Pre-seed startups will be able to demo and get investors more easily, which I hear is already happening.
However, it’s not sustainable, and either somebody figures out a practical transition/rewrite strategy as they try to go to market, or the startup dies while trying to scale up.
We’ll see a lower success rate from these companies, in a bit of an I-told-you-so moment, which reduces over-investment in the practice. Under a new equilibrium, vibe coding remains useful for super early demos, hackathons, and throwaway explorations, and people learn to do the transition/rewrite either earlier or not at all for core systems, depending on the resources founders have available at such an early stage.
As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust - use ECDHE to establish an ephemeral symmetric key, and then use AES256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.
It got the interface right, but it got some details really wrong:
- It stored way more information than it needed in the struct tracking state, some of it very sensitive.
- It repeatedly converted back and forth between byte arrays and the proper types unnecessarily, reducing type safety and making things slower.
- Instead of using type-safe enums, it defined integer constants for no good reason.
- It logged information about failures as variable-length strings, creating a possible timing side channel.
- Despite having a 96-bit nonce to work with (minus 1 bit to distinguish client->server from server->client), it used a 32-bit integer to represent the sequence number.
- And it “helpfully” used `wrapping_add` to increment the 32-bit sequence number! For those who don’t know much Rust and/or much cryptography: the golden rule of using ciphers like GCM is that you must never, ever re-use the same nonce with the same key (otherwise you leak the XOR of the two messages). `wrapping_add` explicitly means that when you reach the maximum value (and remember, it’s only 32 bits, so there are only about 4.3 billion numbers) it silently wraps back to 0. The secure implementation would be to explicitly fail if you go past the maximum size for the integer before attempting to encrypt/decrypt, and the smart choice would be to use at least 64 bits; see the sketch after this list.
- It also rolled its own bespoke hash-based key-extension function instead of using HKDF (which was available right there in the library, and callable with far less code than it generated).
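Since I mentioned a sketch: here’s roughly what “explicitly fail” could look like. This is a minimal illustration, not what the model wrote; the struct and error names (`NonceCounter`, `NonceExhausted`) and the exact nonce layout are my own choices:

```rust
/// 96-bit AES-256-GCM nonce: 1 direction bit, 31 zero bits of padding,
/// and a 64-bit big-endian counter. Illustrative layout, not the model's.
struct NonceCounter {
    client_to_server: bool,
    counter: u64,
}

#[derive(Debug)]
struct NonceExhausted;

impl NonceCounter {
    /// Returns the next unique nonce, failing loudly instead of wrapping.
    fn next_nonce(&mut self) -> Result<[u8; 12], NonceExhausted> {
        let seq = self.counter;
        // checked_add returns None on overflow; we turn that into a hard
        // error so the same (key, nonce) pair can never be reused.
        self.counter = self.counter.checked_add(1).ok_or(NonceExhausted)?;

        let mut nonce = [0u8; 12];
        // The top bit separates client->server from server->client, so
        // the two directions never collide under the same key.
        nonce[0] = if self.client_to_server { 0x80 } else { 0x00 };
        nonce[4..12].copy_from_slice(&seq.to_be_bytes());
        Ok(nonce)
    }
}
```

And for the last bullet, the derivation it should have reached for: the RustCrypto `hkdf` crate does this in a handful of lines (the function name and info strings here are mine, not from its output):

```rust
use hkdf::Hkdf;
use sha2::Sha256;

/// Derive two directional AES-256 keys from the ECDHE shared secret
/// using HKDF (RFC 5869) instead of a hand-rolled hash construction.
fn derive_keys(shared_secret: &[u8], salt: &[u8]) -> ([u8; 32], [u8; 32]) {
    let hk = Hkdf::<Sha256>::new(Some(salt), shared_secret);
    let (mut c2s, mut s2c) = ([0u8; 32], [0u8; 32]);
    // expand() only errors if the requested output is too long for the
    // hash; 32 bytes is always valid for SHA-256.
    hk.expand(b"client->server", &mut c2s).expect("valid output length");
    hk.expand(b"server->client", &mut s2c).expect("valid output length");
    (c2s, s2c)
}
```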
To be fair, I didn’t really expect it to work well. Some kind of security auditor agent that does a pass over all the output might be able to find some of the issues, and pass it back to another agent to correct - which could make vibe coding more secure (to be proven).
But right now, I’d not put “vibe coded” output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.
We should let these twits enjoy their shit on Twitter. The AI hype is just like the crypto hype; it’ll fade.
The name vibe coding sounds like a drunk evening with friends getting an MVP off the ground, but nothing more.
For personal projects, I don’t really care what you do. If someone who doesn’t know how to write a line of code asks an LLM to generate a simple program for them to use on their own, that doesn’t really bother me. Just don’t ask me to look at the code, and definitely don’t ask me to use the tool.
So you mean debugging then?
fake
That’s a bad vibe if I’ve ever seen one.
Nearly every time I ask ChatGPT a question about a well-established tech stack, its responses are erroneous to the point of being useless. It frequently provides examples using fabricated, non-existent functionality, and the code samples are awful.
What’s the point in getting AI to write code that I’m just going to have to completely rewrite?
There’s one valid use-case for LLMs: when you have writer’s block, it can help to have something resembling an end product instead of a blank page. Sadly, this doesn’t really work for programming, because incorrect code is simply worse than no code at all. Every line of code is a potential bug and every line of incorrect code is a guaranteed bug.
I use an LLM with great success to write bad fanfiction though.
But it’s AI
This seems like a game you’d do with other programmers, lol.
I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches.
The whole coding process? No. 😭
You can save time at the cost of headaches, or you can save headaches at the cost of time. You cannot save both time and headaches, you can at most defer the time and the headaches until the next time you have to touch the code, but the time doubles and the headaches triple.
If it weren’t for the fact that even an AI trained on only factually correct data can conflate those data points into entirely novel data that may no longer be factually accurate, I wouldn’t mind the use of AI tools for this or much of anything.
But they can literally just combine everything they know to create something that appears normal and correct, while being absolutely fucked. I feel like using AI to generate code would just give you more work and waste time, because you’ll still need to fucking verify that it didn’t just output a bunch of unusable bullshit.
Relying on these things is absolutely stupid.
Completely agree. My coworkers spend more time prompting and trying to get useful text from ChatGPT and then fixing that text than the time it’d take them to actually write the thing in the first place. It’s nonsense.
Nah. I only used AI as a last resort, and in my case, it has worked out. I cannot see myself using AI for code again.
Based on my experience of AI coding, I think this will only work for simple/common tasks, like writing a Python script to download a CSV file and convert it to JSON.
As soon as you get anywhere that isn’t all over the internet it starts to bullshit.
But if you’re working in a domain it’s decent at, why not? I’ve found that in those cases, fixing the AI’s mistakes can be faster than writing the code myself. Often I actually find it useful for helping me decide how I want to write code, because the AI does something dumb and I go “no, I obviously don’t want it like that”…