• NigelFrobisher@aussie.zone · 12 hours ago

    At a beach restaurant the other night I kept hearing a loud American voice cut across every conversation, going on and on about “AI” and how it would get into all human “workflows” (new buzzword?). His confidence and loudness were matched only by his obvious lack of understanding of how LLMs actually work.

    • Chaotic Entropy@feddit.uk · 10 hours ago

      Some people can only hear “AI means I can pay people less/get rid of them entirely” and stop listening.

      • anon_8675309@lemmy.world · 8 hours ago

        AI means C level jobs should be on the block as well. The board can make decisions based on their output.

        • Knock_Knock_Lemmy_In@lemmy.world · 6 hours ago

          The whole ex-McKinsey management layer is at risk. Whole teams of people dedicated to producing pretty slides with “action titles” for managers higher up the chain to consume and regurgitate are now having their lunch eaten by AI.

    • Blackmist@feddit.uk · 10 hours ago

      I’ve noticed that the people most vocal about wanting to use AI get very coy when you ask them what it should actually do.

      • ameancow@lemmy.world · 4 hours ago

        I also notice that the ONLY people who can offer firsthand reports of how it’s actually useful in any way are in a very, very narrow niche.

        Basically, if you’re not a programmer, and even then a very select set of programmers, then your life is completely unimpacted by generative AI broadly. (Not counting the millions of students who used it to write papers for them.)

        AI is currently one of those solutions in search of a problem. In its current state, it can’t really do anything broadly useful. It can make your written work sound more professional and, at the same time, more mediocre. It can generate very convincing pictures if you invest enough time trying to decode the best sequence of prompts and literally just get lucky, but it’s far too inaccurate and inconsistent to generate, say, a fully illustrated comic book or cartoon, unless you already have a lot of talent in that field. I have tried many times to use AI in my current job to analyze PDF documents and spreadsheets, and it’s still completely unable to do work that requires mathematics as well as contextual understanding of what that math represents.

        You can have really fun or cool conversations with it, but it’s not exactly captivating. It is also wildly inaccurate for daily use. I ask it for help finding songs by describing the lyrics and other clues, and it confidently points me to non-existent albums by hallucinated artists.

        I have no doubt that in time it’s going to radically change our world, but that change is going to require a LOT more time and baking before it’s done. Despite how excited a few select people are, nothing is changing overnight. We’re going to have a century-long “singularity” and won’t realize we’ve been through it until it’s done. As history tends to go.

    • Zement@feddit.nl · 11 hours ago

      I really like the idea of an LLM being narrowly configured to filter and summarize data that comes in in an irregular/organic form.

      You would have to run it multiple times in parallel with different models and slightly different configurations to reduce hallucinations (similar to sensor redundancies in Industrial Safety Levels), but still, that alone is a game changer in “parsing the real world”. The problem is that the energy needed to do this right (>= 3x the compute) gets cut by stripping out the safety and redundancy, because the hallucinations only become apparent somewhere down the line, and only sometimes.
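      That redundancy-and-voting idea can be sketched in a few lines. This is a hypothetical toy, assuming simple stand-in extractor functions rather than real LLM calls (all names here are made up for illustration):

```python
# Toy sketch of the ">= 3x" redundancy idea: run the same extraction
# through several independently configured extractors and accept an
# answer only when a quorum of them agree; otherwise flag for review.
from collections import Counter

def redundant_extract(text, extractors, quorum=2):
    """Return the majority answer when at least `quorum` extractors
    agree, else None (i.e. send the item to human review)."""
    answers = [extract(text) for extract in extractors]
    best, votes = Counter(answers).most_common(1)[0]
    return best if votes >= quorum else None

# Three toy "models": two parse the field correctly, one hallucinates.
extract_a = lambda t: t.split(":")[-1].strip()
extract_b = lambda t: t.rsplit(":", 1)[-1].strip()
extract_c = lambda t: "made-up value"

print(redundant_extract("total: 42", [extract_a, extract_b, extract_c]))  # → 42
```

      The point of the sketch is the cost structure: every accepted answer costs N model runs instead of one, which is exactly the redundancy that gets cut first.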

      They poison their own well because they jump directly to the enshittification stage.

      So people talking about embedding it into workflow… hi… here I am! =D

      • AA5B@lemmy.world · 7 hours ago

        A buddy of mine has been doing this for months. As a manager, his first use case was summarizing his team members’ statuses into a single team status. Arguably, hallucinations aren’t critical there.

        • sudneo@lemm.ee · 2 hours ago

          I would argue that this makes the process microscopically more efficient and macroscopically way less efficient. The whole process is probably useless, and imagine wasting so much energy, water and computing power just to speed up this useless process and save a handful of minutes (I am a lead and it takes me 2-3 minutes to put together a status of my team, and I don’t usually even request a status from each member).

          I keep saying this to everyone in my company who pushes for LLMs for administrative tasks: if you feel like an LLM can do a task, we should stop doing that task at all, because it means we are just going through the motions and feeding a process without purpose. You will have people producing reports via LLM from a one-line prompt, the manager assembling them with an LLM, and at best someone reading the result and distilling it once again with an LLM. It is all a great waste of money, energy, time and cognitive effort that doesn’t benefit anybody.

          As soon as someone proposes introducing LLMs into a process, counter by proposing to cut that process altogether. Let’s produce less bullshit instead of more, while polluting less in the process.

      • ameancow@lemmy.world · 4 hours ago

        I would also add “hopeful delusional” and “unhinged cultist” to that list of labels.

        Seriously, we have people right now making plans for what they’re going to do with their lives once Artificial Super Intelligence emerges and changes the entire world into some kind of post-scarcity, Star Trek world where literally everyone is wealthy and nobody has to work. They think this is only several years away. It’s not a tiny number of people either, and they exist on a broad spectrum.

        Our species is so desperate for help from beyond, a savior that will change the current status quo. We’ve been making fantasies and stories to indulge this desire for millennia, and this is just the latest incarnation.

        No company on Earth is going to develop any kind of machine or tool that will destabilize the economic markets of our capitalist world. A LOT has to change before anyone will even dream of upending centuries of wealth-building.

      • AItoothbrush@lemmy.zip · 6 hours ago

        AI itself too, I guess. Also, I have to point this out every time, but my username was chosen way before all this shit blew up in our faces. I’ve used this one on every platform for years.