Qualification:

No, seriously, it’s just awful. I’m starting to worry that I’ll end up homeless or working in low-paying jobs like mining, if, of course, such jobs still exist and aren’t taken by other people. Maybe I should move to a more or less decent village, where at least I’ll have the opportunity to grow my own food and get water from a well?

But I saw that life in the villages is very hard, and I’m not ready to work from morning until late at night without days off.

The year 2027 scares me a lot.

  • minorkeys@lemmy.world · +3 · 17 hours ago (edited)

    There is no way to know what paths will be left available once LLM integration starts to prove effective or in what roles. History shows very clearly that the powerful do not care if the peasantry suffers and dies because of the changes they force on the world. They care primarily, and often exclusively, for their own ambitions.

    If we are unnecessary for those ambitions, our welfare is not considered at all. Our only value to them has been our labour and without that value we are all under threat of being denied access to any resources at all, including those necessary to stay alive, including the resource of physical space, which land ownership allows them to deny us.

  • gergo@lemmy.world · +5/−1 · 19 hours ago

    Okay ai, someone locked themselves out of their apartment. Solve. Okay ai, here’s this flat tyre on my bike. Solve. Okay ai, I have a leak in my kitchen. Solve.

    In short, I’m not too afraid :)

    • Randomgal@lemmy.ca · +2/−3 · 20 hours ago

      Don’t worry, I had AI TL;DR it for you:

      Summary of “The Reverse-Centaur’s Guide to Criticizing AI”

      Cory Doctorow distinguishes between centaurs (humans assisted by machines) and reverse-centaurs (humans serving as appendages to machines). His core thesis: AI tools are marketed as centaur-making devices but deployed to create reverse-centaurs—workers subjected to algorithmic control and expected to catch machine errors while being blamed for failures.

      The AI bubble exists primarily to maintain growth stock valuations. Once tech monopolies dominate their sectors, they face market reclassification from “growth” to “mature” stocks, triggering massive valuation drops. AI hype keeps investors convinced of continued expansion potential.

      AI’s actual business model: Replace high-wage workers (coders, radiologists, illustrators) with AI systems that cannot actually perform those jobs, while retaining skeleton crews as “accountability sinks”—humans blamed when AI fails. This strategy reduces payroll while maintaining superficial human oversight.

      Why expanding copyright won’t help creators: Despite 50 years of copyright expansion, creative workers earn less both absolutely and proportionally while media conglomerates profit enormously. New training-related copyrights would simply become contractual obligations to employers, not worker protections.

      The effective counter-strategy: The U.S. Copyright Office’s position that AI-generated works cannot receive copyright protection undermines corporate incentives to replace human creators entirely. Combined with sectoral bargaining rights (allowing industry-wide worker negotiations), this creates material resistance to worker displacement.

      On AI art specifically: Generative systems produce “eerie” outputs—superficially competent but communicatively hollow. They cannot transfer the “numinous, irreducible feeling” that defines art because they possess no intentionality beyond statistical word/pixel prediction.

      The bubble will collapse, leaving behind useful commodity tools (transcription, image processing) while eliminating economically unsustainable foundation models. Effective criticism should target AI’s material drivers—the growth-stock imperative and labor displacement economics—not peripheral harms like deepfakes or “AI safety” concerns about sentience.

  • Corporal_Punishment@feddit.uk · +24/−1 · 2 days ago

    In a world where people have nothing to lose, the people in power will need to be afraid.

    Seriously, if we end up in a situation where we have mass unemployment with no safety net because of AI, billionaires and politicians will be hanging from lampposts.

    • rain_lover@lemmy.ml · +3 · 17 hours ago

      They just won’t let it get that bad. Make sure people can afford a Netflix subscription and some shitty junk food each day, and there will be no revolution.

  • ☂️-@lemmy.ml · +8/−2 · 2 days ago

    being existentially afraid of new tech instead of excited for the possibilities is a terribly capitalist problem to have.

    • communism@lemmy.ml · +3 · 18 hours ago

      New tech isn’t socially neutral. Should we be excited about the possibilities of new missiles and warplanes? If you understand how that new technology can be bad, you can understand how other new technologies can also fail to be “exciting”. Capitalism produces for the sake of production. We have plenty of useless shit that exists for the ouroboros of profit and marketing rather than to fulfil some natural use case, and I think modern LLMs fall into that category, not to mention the energy cost of the current demand. I think LLMs can be cool as toys, demos, academic projects, etc., but their current prevalence is purely down to marketing and to AI companies trying to make something quite expensive turn a profit.

  • davel@lemmy.ml · +11 · 2 days ago

    Individually, I don’t know. Collectively, socialist revolution. Workers in socialist states aren’t nearly as anxious about AI as workers in capitalist ones.