I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.
Vibe coding is essentially asking AI to do the whole coding process, and then checking the code for errors and bugs (optional).
Seems like a recipe for subtle bugs and unmaintainable systems. Also reminds me of the Eloi from The Time Machine, who don’t know how anything works anymore.
Management is probably salivating at the idea of firing all those expensive engineers that tell them stuff like “you can’t draw three red lines all perpendicular in yellow ink”
I’m also reminded of that AI-for-music guy who was like “No one likes making art!”. Soulless husk.
They can vibe as much as they want, but don’t ever ask me to touch the mess they create.
Once companies recognize the full extent of their technical debt, they will likely need to hire a substantial number of highly experienced software engineers to address the issues, many of which stem from over-reliance on copying and pasting outputs from large language models.
I like writing code. Like, physically typing it. It’s fun and probably my favorite pastime.
Why would I wanna give that up?
This seems like a game you’d play with other programmers, lol.
I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches (something like the snippet below).
The whole coding process? No. 😭
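Roughly where I draw the line, with a made-up example (the log format and names are my own invention, not from any real tool): the sort of fiddly regex I’d happily let an assistant draft and then check myself, rather than handing over a whole feature.

```python
import re

# Made-up example: the kind of one-liner I'd let an assistant draft and
# then verify, instead of hand-writing the regex myself.
# Parses log lines like "2024-05-01 12:34:56 [ERROR] something broke".
LOG_LINE = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>[A-Z]+)\] "
    r"(?P<message>.*)$"
)

match = LOG_LINE.match("2024-05-01 12:34:56 [ERROR] something broke")
print(match.groupdict())
# {'date': '2024-05-01', 'time': '12:34:56', 'level': 'ERROR', 'message': 'something broke'}
```

Small, easy to verify at a glance, and boring to type. That’s the whole use case for me.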
If you don’t write a single line then you aren’t coding
That’s a bad vibe if I’ve ever seen one.
So you mean debugging then?
If it wasn’t for the fact that even an AI trained on only factually correct data can conflate those data points into entirely novel data that may no longer be factually accurate, I wouldn’t mind the use of AI tools for this or much of anything.
But they can literally just combine everything they know to create something that appears normal and correct, while being absolutely fucked. I feel like using AI to generate code would just give you more work and waste time, because you’ll still need to fucking verify that it didn’t just output a bunch of unusable bullshit.
Relying on these things is absolutely stupid.
Completely agree. My coworkers spend more time prompting ChatGPT for usable text and then fixing that text than it’d take them to actually write the thing in the first place. It’s nonsense.
Nearly every time I ask ChatGPT a question about a well-established tech stack, its responses are erroneous to the point of being useless. It frequently provides examples using fabricated, non-existent functionality, and the code samples are awful.
What’s the point in getting AI to write code that I’m just going to have to completely rewrite?
There’s one valid use-case for LLMs: when you have writer’s block, it can help to have something resembling an end product instead of a blank page. Sadly, this doesn’t really work for programming, because incorrect code is simply worse than no code at all. Every line of code is a potential bug and every line of incorrect code is a guaranteed bug.
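To make that concrete, here’s a toy example I made up (not real model output) of code that looks finished and plausible but is quietly wrong:

```python
# Toy, made-up illustration (not actual LLM output): plausible-looking code
# with a subtle bug baked in.
def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # only correct for odd-length input;
                                       # even-length input needs the mean of
                                       # the two middle elements

print(median([1, 3, 5]))       # 3 -- looks fine
print(median([1, 3, 5, 100]))  # 5 -- should be 4.0; the bug only shows up
                               #      on even-length input
```

It reads like finished code, and the first call even makes it look tested. That’s exactly the problem.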
I use an LLM with great success to write bad fanfiction though.





