• Eager Eagle@lemmy.world · 3 months ago

      I get the spirit, but it’s capitalism that accelerates climate change. LLMs are just the new blockchain scapegoat.

      • ObtuseDoorFrame@lemmy.zip · 3 months ago

        You’re both right. Capitalism accelerates climate change, but so do the outrageous electricity requirements of LLMs.

        • Honytawk@feddit.nl · edited · 3 months ago

          If it weren’t for capitalism, those LLMs would have been designed with a lower climate impact from the get-go. But since that hurts the shareholders’ bottom line, they aren’t.

        • Sadbutdru@sopuli.xyz · 3 months ago

          Many human activities cause climate change. LLMs are a relatively new one, and a disproportionate energy user, so it’s fair enough to shout about it and try to minimise adoption. Things that are already entrenched like consumer culture or aviation will be harder and slower to undo.

          Also, whether or not it’s right that capitalism is largely to blame, if you take that to mean the only useful action against climate change is fighting capitalism, or let yourself feel like you’ve ‘done your part’ by holding anticapitalist opinions, I think that’s counterproductive.

  • scott@lemmy.org · 3 months ago

    AI does not exist. Large language models are not intelligent; they are language models.

      • ozymandias117@lemmy.world · 3 months ago

        I would argue that, prior to ChatGPT’s marketing, AI did mean that.

        When talking about specific, non-general techniques, they were called things like ML, etc.

        After OpenAI co-opted “AI” to mean an LLM, people started using “AGI” to mean what “AI” used to mean.

            • Klear@lemmy.world · 3 months ago

              So? I don’t see how that’s relevant to the point that “AI” has been used for very simple decision algorithms for a long time, and it makes no sense not to use it for LLMs too.

        • brisk@aussie.zone · 3 months ago

          That would be a deeply ahistorical argument.

          https://en.wikipedia.org/wiki/AI_effect

          AI is a very old field, and has always suffered from things being excluded from popsci as soon as they are achievable and commonplace. Path finding, OCR, chess engines and decision trees are all AI applications, as are machine learning and LLMs.
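
          Path finding is a good concrete example of how mundane textbook AI can be. As a sketch (the tiny graph is invented for illustration), here is the kind of breadth-first search that AI courses have taught for decades:

```python
from collections import deque

# Classic AI-textbook pathfinding: breadth-first search over a small
# graph. Techniques like this were called "AI" long before LLMs.
def bfs_path(graph, start, goal):
    queue = deque([[start]])  # queue of partial paths
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(maze, "A", "E"))  # ['A', 'B', 'D', 'E']
```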

          That Wikipedia article has a great line in it, too:

          “The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat.[4] They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War.[4]”

          The discipline of Artificial Intelligence was founded in the 1950s. Some of the current vibe is probably due to the “second AI winter” of the 90s, the last time calling things AI was dangerous to your funding.

        • Ignotum@lemmy.world · 3 months ago

          To common people, perhaps, but never in the field itself; much simpler and dumber systems than LLMs were still called AI.

      • Bronzebeard@lemmy.zip · 3 months ago

        A thermostat is an algorithm. Barely. It can even be done mechanically. That’s not much of a decision: “is this number bigger?”
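
        For what it’s worth, the entire “decision” fits in one comparison. A sketch (the function name and hysteresis value are made up for illustration):

```python
# Toy thermostat "decision": is the reading below the setpoint?
# The hysteresis band just keeps it from rapidly toggling.
def thermostat(current_temp, setpoint, hysteresis=0.5):
    """Return True if the heater should turn on."""
    return current_temp < setpoint - hysteresis

print(thermostat(18.0, 21.0))  # True: too cold, heater on
print(thermostat(22.0, 21.0))  # False: warm enough
```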

    • TranscendentalEmpire@lemmy.today · 3 months ago

      This can’t be true… Businesses wouldn’t reshape their entire portfolios, spending billions of dollars on a technology with limited to no utility. Ridiculous.

      Anyways, I got these tulip bulbs to sell, real cheap, just like give me your house or something.

      • marcos@lemmy.world · 3 months ago

        Remember, investment in LLM infrastructure in the US is currently larger than consumer spending.

        And they will cut interest rates soon, so expect the number to go up (the investment number, that is, not the value).

        • Captain_Faraday@programming.dev · 3 months ago

          Can confirm. I’m an electrical engineer working on a power substation supplying power to a future datacenter (not sure if it’s an AI project; there’s more than one). Let’s just say money is no issue; commissioning schedule and functionality are their priorities.

  • kibiz0r@midwest.social · 3 months ago

    People with no clue about AI are exactly why a dumb-as-a-brick LLM could very well end up destroying the world.

    • Angry_Autist (he/him)@lemmy.world · 3 months ago

      No, that scenario comes from AI’s use in eliminating opponents of fascism

      It’s pretty funny that while everyone is whining about artist rights and making a huge fucking deal about delusional people who think they’ve ‘birthed’ the first self aware AI, Palantir is using our internet histories to compile a list of dissidents to target

      Screenshotting for my eventual ban.

      • stingpie@lemmy.world · 3 months ago

        That’s crazy! That can’t be real!

        On an unrelated note, I’ve recently gotten into machine learning myself. I’ve been working on some really wacky designs. Did you know that you can get 64 GB GPU modules for super cheap? Well, relatively cheap compared to a real GPU. I recently got two NVIDIA Jetson AGX Xavier 64 GB units for $400. If you’re clever, you can even use distributed training to combine the speed and memory of several of them.
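
        For anyone curious what “distributed training” means here, the core idea can be sketched without any GPU at all. This is a toy illustration of data-parallel gradient averaging (the tiny model and numbers are invented; real setups automate this with frameworks like PyTorch DDP):

```python
# Toy data-parallel training: each "device" computes a gradient on its
# own data shard; the gradients are averaged (an all-reduce) before a
# single update to the shared weight. Fitting y = w * x, true w = 2.
def grad(w, x, y):
    # gradient of the squared error (w * x - y)^2 with respect to w
    return 2 * (w * x - y) * x

def distributed_step(w, shards, lr=0.01):
    grads = [grad(w, x, y) for x, y in shards]  # one gradient per device
    avg = sum(grads) / len(grads)               # all-reduce: average them
    return w - lr * avg

w = 0.0
for _ in range(200):
    w = distributed_step(w, [(1.0, 2.0), (2.0, 4.0)])
print(round(w, 2))  # converges to 2.0
```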

        Have you heard about OpenAI’s new open-source model? I can’t run the 120b variant, but I could probably use the 20b variant. Of course, OpenAI, being as obsessive about safety as they are, did a couple of experiments to demonstrate that their model was incapable of playing capture-the-flag, even when fine-tuned. It turns out their model simply isn’t capable of the abstract planning required for a task like that. Its ‘thought’ process is just too linear.

        I’ve recently been experimenting with topological deep learning. It’s basically training neural networks to work with graphs. I’ve been trying to get a neural network to model the multiple possibilities of getting a sandwich. You could use ingredients at home, you could go out and get ingredients, or you could even buy one at a restaurant. Anyway, since most LLMs know what ingredients go into a sandwich, the hardest problem is actually deciding the method of getting a sandwich.
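
        The “deciding the method” part can be caricatured in a few lines. This is my own toy framing of just the planning step (not the commenter’s actual graph model): each acquisition method gets a rough cost, and the planner picks the cheapest one currently available:

```python
# Toy planner for the sandwich problem: each method of getting a
# sandwich has a rough effort cost; pick the cheapest feasible one.
OPTIONS = {
    "make_from_home_ingredients": 1,
    "buy_ingredients_then_make": 3,
    "buy_at_restaurant": 5,
}

def pick_method(options, available):
    # restrict to methods that are currently possible, then minimize cost
    feasible = {k: v for k, v in options.items() if k in available}
    return min(feasible, key=feasible.get)

# No ingredients at home, so the cheapest remaining option wins:
print(pick_method(OPTIONS, {"buy_ingredients_then_make", "buy_at_restaurant"}))
```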

        TL;DR: I have a great deal of trust in the government, I enjoy saving money, I think it’s great how safety-conscious OpenAI is, and I love eating sandwiches!!

  • GreenKnight23@lemmy.world · edited · 3 months ago

    I’m not worried about AI taking over the world.

    I’m worried that corporate interests are using AI as a mechanism to circumvent our already weak control of the world and enslave us, only for AI to be used inappropriately in ways that further the divide between the wealthy and the poor.

    I’m worried that these corporate interests are doing irreversible damage to the environment by consuming energy and resources that could be better spent on solving real problems like homelessness, joblessness, healthcare, and hunger.

    I’m worried that my kids will grow up in a world where technology has become so toxic to intelligence that they’re living in a mashup of Fahrenheit 451 and 1984.

    AI couldn’t find its way out of a wet paper sack. The maliciously negligent rich assholes that are forcing us to use AI when it’s a half-baked pyramid scheme filled with crypto-bros and fascist-supporting scumbags: those guys I’m concerned about.

  • nialv7@lemmy.world · 3 months ago

    Geoffrey Hinton is worried about AI taking over the world. I wouldn’t say he knows nothing about AI…

  • njm1314@lemmy.world · edited · 3 months ago

    It’s not the “AI” I’m worried about destroying the world. It’s the tech bros and CEOs trying to force it on us all that I’m worried about, cuz I don’t trust them to think things through or think ahead.

  • Honytawk@feddit.nl · 3 months ago

    The current capabilities are only a snapshot, not a ceiling. They say nothing about how capable it will be in the far future.

  • Mesa@programming.dev · edited · 3 months ago

    The thing is, AI doesn’t need to take over the world if the BiG tHiNkErS are so eager to replace humans with it regardless of its merit.

    • silasmariner@programming.dev · 3 months ago

      The thing is, if they’re wrong, their businesses will fail, and anyone who didn’t jump on the hype train and didn’t piss revenue away should have better financials

  • bss03@infosec.pub · 3 months ago

    I’ve never really liked this meme. I quite dislike AI, but just because your NN sucks doesn’t mean NNs or AI in general are fundamentally poor.

    I often write very poorly performing programs due to mistakes, lack of knowledge, or just general incompetence. That doesn’t mean all my programs perform poorly. It certainly doesn’t mean all your programs perform poorly.

    “AI” sucks for a lot of reasons, but so does this image.