• A Sharky Anthro@fedia.io · 27 points · 2 days ago

    Given that experts had already warned of the dangers and it happened anyway, it’s tragic that the slop apps are still a thing. Like, there is a reason programming is a profession: it’s complex and has a lot of moving parts. LLM slop is never going to approach the art, skill, or reasonable security that a sane programmer can deliver.

    • KairuByte@lemmy.dbzer0.com · 3 points · 12 hours ago

      I do think machine learning (likely not an LLM) will get to the point of being able to reliably code a lot of things. But I don’t think it’s going to happen in the near future.

    • SleeplessCityLights@programming.dev · 6 points · 19 hours ago

      I predict that the slopicalypse will hit in 2027, and all of the corporations that jumped head first into the still-filling swimming pool will hit bottom. Using an LLM to code will only make things worse. It’s a fucking entropy injector. You can’t continuously add entropy without hitting a point where it just is not cohesive anymore, and LLMs get really bad when the context is too large.

      • A Sharky Anthro@fedia.io · 4 points · 12 hours ago

        I hope the slopicalypse does hit in 2027, because there is no way to escape technical debt unless you build software with smart people carefully maintaining it as they add features. LLMs could never do that, and they will cause the worst tech disasters. I cannot wait to see the aftermath of all these corporations fucking around and finally finding out how stupid they really are!

        • SleeplessCityLights@programming.dev · 2 points · 4 hours ago

          It already happened to a project at my work. Nobody understands enough of the code base, or can make sense of it, to be able to add features. It is the buggiest fucking thing ever, making LLM debugging an endless exercise in finding more bugs. This also means that we can’t prompt an LLM effectively to make targeted changes. The only thing left is letting an agent fuck shit up worse by running it with a vague prompt. We don’t know what to do. Counting man-hours, it cost a lot to make, and traditional software development mentality hates throwing something away completely.

    • MangoCats@feddit.it · 7 points · 2 days ago

      You know what helps? After you’ve coded something that works - whether “vibe coding” or the old-fashioned way - review it for security issues. “Vibe code reviews” performed by the same LLM tools that do “vibe coding” can be even more effective at finding issues than traditional methods.

      But, just like real people, if you don’t bother to care about security, you’ll have holes.

      • KairuByte@lemmy.dbzer0.com · 1 point · 12 hours ago

        If you (or the LLM) didn’t know enough to prevent the security issue, how exactly are you (or the LLM) going to know to look for it during a review?

        • MangoCats@feddit.it · 1 point · 4 hours ago

          It’s a different approach: you don’t abandon best practices, but this new tool gives you information that was previously more difficult or costly to access, so use it too.

            • MangoCats@feddit.it · 1 point · 30 minutes ago

              There are things an LLM can show you that are undeniably correct, like: this line of code here dereferences a pointer which might be NULL, and in fact will be NULL if you follow this path through the code: …

              Think of it like NP-hard problems: there are problems where the solution is hard to find, but easy to verify once you are given it.

              When an LLM gives you those hard-to-find, easy-to-verify observations, that’s value. It doesn’t have to be perfect, and it doesn’t have to be 100% complete.

              Or, you can hire a team of engineers to burn their brains for months on end to maybe find the same things, maybe not.

              There’s a problem with both human attention spans and LLMs’ context window capacity - neither is up to reviewing a full code base the size of a browser and finding all the flaws. But if the LLM can give you flaws that humans haven’t been able to find, you should be taking those wins - before somebody else does and puts them to different uses.

  • Miller@lemmy.world · 43 points · 2 days ago

    Governments around the world are commissioning studies to look into the massive increase in air disasters that began around the time people without any flight training were allowed to be pilots.

  • Kokesh@lemmy.world · 17 up / 1 down · 2 days ago

    Vibe coded apps? Let’s stop calling them that. Call them by their correct name: AI slop nonsense.

  • FlashMobOfOne@lemmy.world · 10 points · 2 days ago

    And yet, the push continues. I keep hearing all this hype about AI agents that are going to do all my menial tasks for me, but the pitches are quite dishonest, because none of them mention that the AI agent will almost certainly fuck a lot of it up and create more unnecessary work for me.

    • Curious_Canid@piefed.ca · 4 points · 2 days ago

      The billionaires who are pushing AI technology are willing to take the risk that their AI will fuck things up for a lot of people.