Companies like Lovable, Base44, Replit, and Netlify use AI to let anyone build a web app in seconds, and in thousands of cases those apps have spilled highly sensitive data onto the public internet.
So it makes more mistakes but catches slightly fewer? Sounds effective.
It’s a different approach: you don’t abandon best practices, but this new tool gives you information that was previously more difficult or costly to access, so use it too.
But if you don’t have that information already, how can you trust that it’s correct?
There are things an LLM can show you that are undeniably correct, like: this line of code here calls free() on a pointer which might be NULL, and in fact will be NULL if you follow this path through the code: …
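To make that concrete, here is a minimal, hypothetical sketch of the kind of path-sensitive flaw being described (the code and names are invented for illustration, not drawn from any real project): if malloc fails, make_buf returns NULL, and the free(b->data) in process dereferences that NULL pointer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct buf { char *data; };

    /* Returns NULL when allocation fails -- that is the path in question. */
    static struct buf *make_buf(const char *src) {
        struct buf *b = malloc(sizeof *b);
        if (b == NULL)
            return NULL;
        b->data = strdup(src);
        return b;
    }

    static void process(const char *src) {
        struct buf *b = make_buf(src);
        /* ... other work ... */
        free(b->data); /* if make_buf returned NULL, this dereferences NULL */
        free(b);
    }

    int main(void) {
        process("hello");
        return 0;
    }

The point is the shape of the claim: once the NULL-returning path is spelled out, anyone can follow it and confirm the bug, with no trust in the LLM required.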
Think of it like problems in NP: the solution can be hard to find, but it is easy to verify once you are given it.
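As a toy illustration of that asymmetry (an invented example, in the same spirit): checking a claimed subset-sum certificate is a single pass over the data, no matter how expensive the search for it was.

    #include <stdio.h>
    #include <stdbool.h>

    /* Finding a subset that sums to the target is hard in general;
       checking a proposed subset is a single O(n) pass. */
    static bool verify_subset_sum(const int *set, const bool *chosen,
                                  int n, int target) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (chosen[i])
                sum += set[i];
        return sum == target;
    }

    int main(void) {
        int set[] = {3, 34, 4, 12, 5, 2};
        bool chosen[] = {false, false, true, false, true, false}; /* 4 + 5 */
        printf("certificate %s\n",
               verify_subset_sum(set, chosen, 6, 9) ? "checks out" : "fails");
        return 0;
    }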
When an LLM gives you those hard-to-find, easy-to-verify observations, that’s value. It doesn’t have to be perfect, and it doesn’t have to be 100% complete.
Or you can hire a team of engineers to burn their brains for months on end and maybe find the same things, maybe not.
There’s a problem with both human attention spans and LLMs’ context window capacity: neither is up to the task of reviewing a full code base for something like a browser and “finding all the flaws”. But if the LLM can give you flaws that humans haven’t been able to find, you should be taking those wins, before somebody else does and puts them to different uses.