I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns, but we’ll set ethics aside for this question.

In creative works like writing or art, it feels soulless and poor quality. In programming, at best it’s a shortcut that avoids deeper learning; at worst it spits out garbage code that you spend more time debugging than if you had just written it yourself.

When I see AI ads directed towards individuals the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.

So what’s the point of it all?

  • SplashJackson@lemmy.ca · 2 hours ago

    I wish I could have an AI in my head that would do all the talking for me, because socializing is so exhausting.

  • orcrist@lemm.ee · 4 hours ago

    There is no point. There are billions of points, because there are billions of people, and that’s the point.

    You know that there are hundreds or thousands of reasonable uses of generative AI, whether it’s customer support or template generation or brainstorming or the list goes on and on. Obviously you know that. So I’m not sure that you’re asking a meaningful question. People are using a tool to solve various problems, but you don’t see the point in that?

    If your position is that they should use other tools to solve their problems, that’s certainly a legitimate view and you could argue for it. But that’s not what you wrote and I don’t think that’s what you feel.

  • CaptainBlagbird@lemmy.world · 5 hours ago

    I generate D&D characters and NPCs with it, but that’s not really a strong argument.

    For programming though it’s quite handy. Basically a smarter code completion that takes the already-written code into account. From machine code through assembly up to higher-level languages, I think it’s a logical next step to be able to tell the computer, in human language, what you’re actually trying to achieve. That doesn’t mean the programmer switches off their brain while it takes over, of course, but it has already saved me quite a bit of time.

  • kronisk @lemmy.world · 5 hours ago

    There are some great use cases, for instance transcribing handwritten records and making them searchable is really exciting to me personally. They can also be a great tool if you learn to work with them (perhaps most importantly, know when not to use them - which in my line of work is most of the time).

    That being said, none of these cases, or any of the cases in this thread, is going to return the large amounts of money now being invested in AI.

    • Xavienth@lemmygrad.ml · 4 hours ago

      Generative AI is actually really bad at transcription. It imagines dialogue that never happened. There was some institution, a hospital I think, that said every transcription had at least one major error like that.

        • octochamp@lemmy.ml · 15 minutes ago

        This is an issue if it’s unsupervised, but the transcription models are good enough now that, with oversight, they’re usually useful: checking and correcting the AI-generated transcription is almost always quicker than transcribing entirely by hand.

        If we approach tasks like these assuming that they are error-prone regardless of whether they are done by human or machine, and will always need some oversight and verification, then the AI tools can be very helpful in very non-miraculous ways. I think it was Jason Koebler who said on a recent 404 Media podcast that at Vice he used to transcribe every word of every interview he did as a journalist; now he transcribes everything with AI and has saved hundreds of working hours doing so, but he still manually checks every transcript to verify it.
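
That supervise-and-correct workflow can be sketched mechanically: a diff between the machine transcript and the verified text shows why a reviewer only has to touch the points of disagreement. This is a toy illustration with made-up strings (including the "exercise"/"exertion" slip), not any real transcription system:

```python
import difflib

# Hypothetical AI transcript vs. what the audio actually said.
ai_transcript = "the patient reported mild chest pain after exercise"
verified = "the patient reported mild chest pain after exertion"

# Compare word-by-word; the reviewer only inspects non-equal spans.
a, b = ai_transcript.split(), verified.split()
matcher = difflib.SequenceMatcher(None, a, b)
corrections = [(op, a[i1:i2], b[j1:j2])
               for op, i1, i2, j1, j2 in matcher.get_opcodes()
               if op != "equal"]
print(corrections)  # only the word(s) that need fixing
```

Here a fifty-character transcript reduces to a single one-word correction, which is the whole argument: verification scales with the error count, not the audio length.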

  • Ekky@sopuli.xyz · 6 hours ago

    I think genAI would be pretty neat for bit-banging tests, i.e. throwing semi-random requests and/or signals at some device in the hope of finding obscure edge cases or security holes.
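
The non-AI baseline for this is classic random fuzzing; a minimal sketch is below. The `fragile_parser` target and its `@` bug are hypothetical stand-ins for a real device; the genAI angle would be generating smarter-than-random payloads instead of `random_payload`:

```python
import random
import string

def random_payload(max_len=64):
    """Build a semi-random printable string to throw at the target."""
    length = random.randrange(1, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, rounds=1000):
    """Feed random inputs to `target`, collecting any input that crashes it."""
    failures = []
    for _ in range(rounds):
        payload = random_payload()
        try:
            target(payload)
        except Exception as exc:
            failures.append((payload, exc))
    return failures

# Hypothetical device under test: a parser that chokes on '@'.
def fragile_parser(data):
    if "@" in data:
        raise ValueError("unexpected token")

random.seed(0)  # deterministic run, for the example only
found = fuzz(fragile_parser)
```

Every collected failure is a reproducible crashing input, which is exactly the "obscure edge case" artifact you want out of such a test.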

  • weeeeum@lemmy.world · 10 hours ago

    I think LLMs could be great if they were used for education and learning, and trained on good data. Encyclopædia Britannica is building an AI trained exclusively on its own data.

    That also leaves room for writers to keep adding to the database, providing broader knowledge for the AI, so people keep their jobs.

  • whome@discuss.tchncs.de · 13 hours ago

    I use it to sort days and create tables, which is really helpful. And here’s the other thing that really helped me, which I would never have tried to figure out on my own:

    I work with the open-source GIS software QGIS. I’m not a cartographer or a programmer but a designer. I had a world map and wanted to create GeoJSON files for each country. So I asked ChatGPT if there was a way to automate this within QGIS, and sure enough it recommended creating a Python script that could run inside the software to do just that. After a few tweaks it did work, and that saved me a lot of time and annoyance. Would it be good to know Python? Sure, but I know my brain has a really hard time with code and scripts. It never clicked and likely never will. So I’m very happy with this use case. Creative work could be supported in a drafting phase, but I’m not so sure about that.
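
For readers curious what such a script amounts to: the commenter’s version ran inside QGIS, but the core task, splitting a world-map FeatureCollection into one GeoJSON file per country, can be sketched with nothing but the standard library. The `name` property key is an assumption; real datasets use keys like `ADMIN` or `NAME`:

```python
import json
from pathlib import Path

def split_by_country(world_geojson_path, out_dir):
    """Write each feature of a world-map FeatureCollection to its own
    GeoJSON file, named after the feature's country-name property."""
    data = json.loads(Path(world_geojson_path).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for feature in data["features"]:
        # "name" is an assumed property key; adjust for your dataset.
        country = feature["properties"]["name"]
        # Wrap the single feature back into a valid FeatureCollection.
        single = {"type": "FeatureCollection", "features": [feature]}
        (out / f"{country}.geojson").write_text(json.dumps(single, indent=2))
```

Inside QGIS the same loop would go through the PyQGIS API instead, but the shape of the automation, iterate features and write one file each, is identical.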

  • mindbleach@sh.itjust.works · 11 hours ago

    What doesn’t exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what’s going on, about one drawing per second, they could probably do a minute of animation in an hour. This bullshit makes that feasible.

    We have the technology to fill in crisp motion at whatever framerate the creator wants. If they’re unhappy with the machine’s guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.

    We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.

    We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.
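
The bookkeeping behind tweening is just interpolation between keyframes. Here is a minimal linear sketch over points; a learned model would replace the straight-line guess with plausible drawn motion, but the frame-insertion workflow described above is the same:

```python
def tween(key_a, key_b, steps):
    """Linear in-betweening: interpolate each point's (x, y) position
    between two keyframes, producing `steps` intermediate frames."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # fraction of the way from key_a to key_b
        frames.append([(ax + (bx - ax) * t, ay + (by - ay) * t)
                       for (ax, ay), (bx, by) in zip(key_a, key_b)])
    return frames

# Two keyframes of a two-point figure, with three generated in-betweens.
frames = tween([(0, 0), (10, 0)], [(4, 4), (14, 4)], steps=3)
```

Inserting a corrective frame, as described above, just means re-running the interpolation on the two shorter intervals either side of it.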

    Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.

    • Mr_Blott@feddit.uk · 9 hours ago

      For the 99% of us who don’t know what tweening is and were scared to Google it in case it was something perverted: it’s short for in-betweening, and means the intermediate frames of an animation drawn between two key frames.

      • mindbleach@sh.itjust.works · 7 hours ago

        I had not. There’s a variety of demos for guessing what comes between frames, or what fills in between lines… because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.

  • Schorsch@feddit.org · 19 hours ago

    It’s kinda handy if you don’t want to take the time to write a boring email to your insurance or whatever.

    • Odelay42@lemmy.world · 18 hours ago

      I sorta disagree, though, based on my experience with LLMs.

      The email it generates will need to be read carefully and probably edited to make sure it conveys your point accurately, especially if it’s related to something as serious as insurance.

      If you already have to specifically craft the prompt, then scrutinize and edit the output, you might as well have just written the damn email yourself.

      It seems only useful for writing slop that doesn’t matter, which only gets consumed by other machines and dutifully logged away in a slop container.

      • Scrubbles@poptalk.scrubbles.tech · 18 hours ago

        For those of us who are bad at writing, though, that’s exactly why we use it. I’m bad with greetings, structure, the things people expect, and I’ve had people get offended at my emails because they come off as rude. I don’t notice those things. For that, LLMs have been a godsend. Yes, I of course have to validate the output, but it usually conveys the message I’m trying to send.

      • Random Dent@lemmy.ml · 18 hours ago

        It does sort of solve the ‘blank page problem’ though IMO. It sometimes takes me ages to start something like a boring insurance letter because I open up LibreOffice and the blank page just makes me want to give up. If I have AI just fart out a letter and then I start to edit it, I’m already mid-project so it actually does save me some time in that way.

    • Random Dent@lemmy.ml · 18 hours ago

      Yeah that’s how I use it, essentially as an office intern. I get it to write cover letters and all the other mindless piddly crap I don’t want to do so I can free up some time to do creative things or read a book or whatever. I think it has some legit utility in that regard.

    • Pechente@feddit.org · 18 hours ago

      I get the point here, but I think it’s the wrong approach. If you feel the email needs too much business fluff, just write it more casually and get to the point quicker.

  • m-p{3}@lemmy.ca · 16 hours ago

    I treat it as a new-ish employee. I don’t let it do important tasks without supervision, but it does help build something rough that I can work on.

  • solomon42069@lemmy.world · 15 hours ago

    There’s a legitimate use case in art for drawing on generative AI for concepts, and as a stopgap for smaller tasks that don’t need to be perfect. While art is art, not every designer out there is putting work in a gallery – sometimes it’s just an ad for a burger.

    However, as time has gone on and the industry has reacted, I think the business reality of generative AI currently puts it out of reach as a useful tool for artists. Profit-hungry people in charge will always look to cut corners, and they lack the nuance and context a worker would have when deciding whether or not to use AI in the work.

    But you could make this argument about any tool, given how fucked up capitalism is. So I guess that’s my 2c: generative AI is a promising tool, but capitalism prevents it from being truly useful anytime soon.

  • simple@lemm.ee · 19 hours ago

    People keep meaning different things when they say “Generative AI”. Do you mean the tech in general, or the corporate AI that companies overhype and try to sell to everyone?

    The tech itself is pretty cool. GenAI is already being used for quickly subtitling and translating any form of media. Image AI is really good at upscaling low-res images and making them clearer by filling in the gaps. Chatbots are fallible, but they’re still really good for specific things like generating test data or quickly helping you with basic tasks that might otherwise have you searching for 5 minutes. AI is huge in video games for upscaling tech like DLSS which can boost performance by running the game at a low resolution then upscaling it, the result is genuinely great. It’s also used to de-noise ray tracing and show cleaner reflections.
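
For contrast with learned upscalers like DLSS, here is the naive baseline they improve on: nearest-neighbor upscaling, which simply copies existing pixels into the new positions rather than predicting plausible detail for them. This toy grayscale grid is purely illustrative:

```python
def upscale_nearest(img, factor):
    """Naive nearest-neighbor upscaling: repeat every pixel `factor`
    times along both axes. Learned upscalers instead *predict* what the
    new pixels should contain, which is why their output looks sharper."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in img for _ in range(factor)]

# A 2x2 grayscale "image" upscaled to 4x4: each pixel becomes a 2x2 block.
small = [[0, 255],
         [255, 0]]
big = upscale_nearest(small, 2)
```

The blocky output of this approach is exactly the artifact that DLSS-style models are trained to avoid by hallucinating plausible sub-pixel detail.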

    Also, people are missing the point of why AI is being invested in so much. No, I don’t think “AGI” is coming any time soon, but the reason these companies are sucking in so much money is what the tech could be in 5 years. Saying AI is a waste of effort is like saying 3D video games were a waste of time because they looked bad in 1995. It will improve.

    • robot_dog_with_gun [they/them]@hexbear.net · 14 hours ago

      AI is huge in video games for upscaling tech like DLSS which can boost performance by running the game at a low resolution then upscaling it, the result is genuinely great

      frame gen is blurry af and eats shit on any fast motion. rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.

      • PolandIsAStateOfMind@lemmy.ml · 6 hours ago

        rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.

        Is that a reason a lot of pixel art games look like shit? I remember the era of 320x240 and 640x480, and modern pixel art looks noticeably worse.

          • Horse {they/them}@lemmygrad.ml · 2 hours ago

            A good example is Dracula’s eyes in Symphony of the Night: on a CRT the red bleeds over, giving a really good red-eyes effect; on an LCD they’re just single red pixels and look awful.

          • PolandIsAStateOfMind@lemmy.ml · 2 hours ago

            Quite possibly; old games also look worse on emulators (and don’t even get me started on those remasters: I got incredibly hyped for the upcoming Suikoden 1+2 on PC, but my eyes fucking bleed).

  • Fondots@lemmy.world · 14 hours ago

    I was asked to officiate my friend’s wedding a few months back. I’m no writer, and I wanted to do a bit better than just a generic wedding ceremony for them.

    So I fired up ChatGPT, told it I needed a script for a wedding ceremony, described some of the things I wanted to mention and some of the things they requested, and it spit out a pretty damn good wedding ceremony. I gave it a little once-over and tweaked a bit of what it gave me, but 99% of it was pretty much straight ChatGPT. I got a lot of compliments on it.

    I think that’s sort of the use case. For those of us who aren’t professional writers and public speakers, who have the general idea of what we need to say for a speech or presentation but can’t quite string the words together in a polished way.

    Here’s pretty much what it spit out (their wedding was in a cave):

    Cell Phone Reminder

    Officiant: Before we begin, I’d like to kindly remind everyone to silence your phones and put them away for the ceremony. Groom and Bride want this moment to be shared in person, free from distractions, so let’s focus on the love and beauty of this moment.

    Giving Away the Bride

    And before we move forward, we have a special moment. Tradition asks: Who gives this woman to be married to this man?

    [Response from Bride’s dad]

    Thank you.

    Greeting

    Welcome, everyone. We find ourselves here in this remarkable setting—surrounded by the quiet strength of these ancient walls, a fitting place for Groom and Bride to declare their love. The cave, much like marriage, is carved out over time—through patience, care, and sometimes a little hard work. And yet, what forms is something enduring, something that stands the test of time.

    Today, we’re here to witness Groom and Bride join their lives together in marriage. In this moment, we’re reminded that love is not about perfection, but about commitment—choosing one another, day after day, even when things get messy, or difficult, or dark. And through it all, we trust in love to guide us, just as God’s love guides us through life’s journey.

    Declaration of Intent

    [Officiant turns toward Groom and Bride]

    Groom, Bride, you are about to make promises to each other that will last a lifetime. Before we continue, I’ll ask each of you to answer a very important question.

    Officiant: Groom, do you take Bride to be your lawfully wedded wife, to have and to hold, for better or for worse, in sickness and in health, for as long as you both shall live?

    Groom: I do.

    Officiant: Bride, do you take Groom to be your lawfully wedded husband, to have and to hold, for better or for worse, in sickness and in health, for as long as you both shall live?

    Bride: I do.

    Exchange of Vows

    Officiant: Now, as a sign of this commitment, Groom and Bride will exchange their vows—promises made not just to each other, but before all of us here and in the sight of God.

    [Groom and Bride share their vows]

    Rings

    Officiant: The rings you’re about to exchange are a symbol of eternity, a reminder that your love, too, is without end. May these rings be a constant reminder of the vows you have made today, and of the love that surrounds and holds you both.

    [Groom and Bride exchange rings]

    Officiant: And now, by the power vested in me, and with the blessing of God, I pronounce you husband and wife. Groom, you may kiss your bride.

    [Groom and Bride kiss]

    Officiant: Friends and family, it is my great honor to introduce to you, for the first time, Mr. and Mrs. [Name].

    I pretty much just tweaked the formatting, worked in a couple of little friendly jabs at the groom, subbed their names in for Bride and Groom, and ad-libbed a little where appropriate.

  • mindbleach@sh.itjust.works · 11 hours ago

    Video generators are going to eat Hollywood alive. A desktop computer can render anything, just by feeding in a rough sketch and describing what it’s supposed to be. The input could be some kind of animatic, or yourself and a friend in dollar-store costumes, or literal white noise. And it’ll make that look like a Pixar movie. Or a photorealistic period piece starring a dead actor. Or, given enough examples, how you personally draw shapes using chalk. Anything. Anything you can describe to the point where the machine can say it’s more [thing] or less [thing], it can make every frame more [thing].

    Boring people will use this to churn out boring fluff. Do you remember Terragen? It’s landscape rendering software, and it was great for evocative images of imaginary mountains against alien skies. Image sites banned it, by name, because a million dorks went ‘look what I made!’ and spammed their no-effort hey-neat renders. Technically unique - altogether dull. Infinite bowls of porridge.

    Creative people will use this to film their pet projects without actors or sets or budgets or anyone else’s permission. It’ll be better with any of those - but they have become optional. You can do it from text alone, as a feral demo that people think is the whole point. The results are massively better from even clumsy effort to do things the hard way. Get the right shapes moving around the screen, and the robot will probably figure out which ones are which, and remove all the pixels that don’t look like your description.

    The idiots in LA think they’re gonna fire all the people who write stories. But this gives those weirdos all the power they need to put the wild shit inside their heads onto a screen in front of your eyeballs. They’ve got drawers full of scripts they couldn’t hassle other people into making. Now a finished movie will be as hard to pull off as a decent webcomic. It’s gonna get wild.

    And this’ll be great for actors, in ways they don’t know yet.

    Audio tools mean every voice actor can be a Billy West. You don’t need to sound like anything, for your performance to be mapped to some character. Pointedly not: “mapped to some actor.” Why would an animated character have to sound like any specific person? Do they look like any specific person? Does a particular human being play Naruto, onscreen? No. So a game might star Nolan North, exclusively, without any two characters really sounding alike. And if the devs need to add a throwaway line later, then any schmuck can half-ass the tone Nolan picked for little Suzy, and the audience won’t know the difference. At no point will it be “licensing Nolan North’s voice.” You might have no idea what he sounds like. He just does a very convincing… everybody.

    Video tools will work the same way for actors. You will not need to look like anything, to play a particular character. Stage actors already understand this - but it’ll come to movies and shows in the form of deep fakes for nonexistent faces. Again: why would a character have to look like any specific person? They might move like a particular actor, but what you’ll see is somewhere between motion-capture and rotoscoping. It’s CGI… ish. And it thinks perfect photorealism is just another artistic style.