• humanspiral@lemmy.ca · 1 month ago

    I’m skeptical of the author’s credibility and vision of the future if he hasn’t even reached blink-tag technology in his progress.

    • rational_lib@lemmy.world · 30 days ago

      I can’t help but read this while replacing “rock” with “large language model”

      Heuristics that almost always work. Hmm.

    • racemaniac@lemmy.dbzer0.com · 29 days ago

      Is it worthless to say “(the current iteration of) AI won’t be a huge revolution”? Sure, maybe: it might turn out to be one, and the next decade will determine that.

      Is it worthless to say that many companies are throwing massive amounts of money at it, and taking huge risks on it, when it clearly won’t deliver for them? I would say no, that is useful.

      And in the end, that’s what this complaint comes down to for me. The issue isn’t “AI might be the next big thing”, but “we need to do everything with AI right now” - and in a couple of years, when they see how bad the results are and how negatively it impacted them, no one will have seen it coming…

    • Excrubulent@slrpnk.net · 1 month ago

      Interesting article, but you have to be aware of the flipside: “people said flight was impossible”, “people said the earth didn’t revolve around the sun”, “people said the internet was a fad, and now people think AI is a fad”.

      It’s cherry-picking. They’re taking the relatively rare examples of transformative technology and projecting that level of impact and prestige onto their new favoured fad.

      And here’s the thing, the “information superhighway” was a fad that also happened to be an important technology.

      Also the rock argument vanishes the moment anyone arrives with actual reasoning that goes beyond the heuristic. So here’s some actual reasoning:

      GenAI is interesting, but it has zero fidelity. Information without fidelity is just noise, so a system that can’t solve the fidelity problem can’t do information work. Information work requires fidelity.

      And “fidelity” is just a fancy way of saying “truth”, or maybe “meaning”. Even as conscious beings we haven’t really cracked that issue, and I don’t think you can make a machine that understands meaning without creating AGI.

      Saying we can solve the fidelity problem is like Jules Verne in 1867 saying we could get to the moon with a cannon because of “what progress artillery science has made during the last few years”. We’re just not there yet, and until we are, the cannon might have some uses, but it’s not space technology.

      Interestingly, artillery science had its role in getting us to the moon, but that was because it gave us the rotating workpiece lathe for making smooth bore holes, which gave us efficient steam engines, which gave us the industrial revolution. Verne didn’t know it, but that critical development had already happened nearly a century prior. Cannons weren’t really a factor in space beyond that.

      Edit: actually metallurgy and solid fuel propellants were crucial for space too, and cannons had a lot to do with that as well.

      • merc@sh.itjust.works · 1 month ago

        Saying we can solve the fidelity problem is like Jules Verne in 1867 saying we could get to the moon with a cannon because of “what progress artillery science has made during the last few years”.

        Do rockets count as artillery science? The first rockets basically served the same purpose as artillery, and were operated by the same army groups. The innovation was to attach the propellant to the explosive charge and have it burn gradually rather than explode all at once. Even the shape of a rocket is a refinement of the shape of an artillery shell.

        Verne wasn’t able to imagine artillery without the cannon barrel, but I’d argue he was right. It was basically “artillery science” that got humankind to the moon. The first “rocket artillery” were the V1 and V2 bombs. You could probably argue that the V1 wasn’t really artillery, and that’s fair, but also it wasn’t what the moon missions were based on. The moon missions were a refinement of the V2, which was a warhead delivered by launching something on a ballistic path.

        As for generative AI, it doesn’t have zero fidelity, it just has relatively low fidelity. What makes that worse is that it’s trained to sound extremely confident, so people trust it when they shouldn’t.

        Personally, I think it will take a very long time, if ever, before we get to the stage where “vibe coding” actually works well. OTOH, a more reasonable goal is a GenAI tool that you basically treat as an intern. You don’t trust it, you expect it to do bone-headed things frequently, but sometimes it can do grunt work for you. As long as you carefully check over its work, it might save you some time/effort. But, I’m not sure if that can be done at a price that makes sense. So far the GenAI companies are setting fire to money in the hope that there will eventually be a workable business model.

        • Excrubulent@slrpnk.net · 29 days ago

          He proposed a moon cannon. The moon cannon was wrong, as wrong as thinking an LLM can have any fidelity whatsoever. That’s all that’s needed for my analogy to make the point I want to make. Whether rockets count as artillery or not really doesn’t change that.

          Cannons are not rockets. LLMs are not thinking machines.

          Being occasionally right like a stopped clock is not what “fidelity” means in this context. Fidelity implies some level of adherence to a model of the world, but the LLM simply has no model, so it has zero fidelity.

        • jmp242@sopuli.xyz · 30 days ago

          I feel this also misses something rather big. There’s a huge negative value in people I have to help through a task - I can usually just do it myself at least 2x, if not 5x or more, faster and move on with life. At least with a good intern I can hope they’ll learn and eventually be able to be assigned tasks that I can mostly ignore. Current AI can’t learn that way, for various reasons - some technical I think, some business-model driven, whatever. It’s like always having the first-day-on-the-job intern to “help”.

          The other problem is that, unless I have zero data security rules, there’s just so much the AI cannot know. For example, today I thought I’d have Claude 3.7 in thinking mode write me a bash script. I wanted it to query a system group and make sure the members of that group are in the current user’s .k5login - roughly the sketch further down. (Now, part of this is me not knowing how to prompt, but it’s also stuff a decent intern ought to be able to figure out.) For one, it wrote a lot of code to work out what the realm is - useful generically, but just extra code that could contain bugs when we already know the realm and there’s only one it’ll ever operate in.

          I also had to re-prompt because I realized it misunderstood me the first time, whereas I think an intern would have access to the e-mail context so would have known what I meant.

          Though I will say it’s better than most scripters in that it actually does a lot of the “safety” stuff we find tedious and usually only add after something has gone wrong, so… swings and roundabouts? It did save me time, assuming we all think its method is good enough - but this is also such a simple task that in some ways it’s barely above filling out boilerplate. It’s exactly the sort of thing I would have expected to find on Stack Overflow back in the day.
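
          For reference, what I had in mind is roughly the minimal sketch below (the specifics are my assumptions, not what Claude produced: getent for the group lookup, a hard-coded realm since there’s only one, and user@REALM principals in ~/.k5login):

              #!/usr/bin/env bash
              # Ensure every member of a given system group appears in the
              # current user's ~/.k5login (one Kerberos principal per line).
              set -euo pipefail

              GROUP="${1:?usage: $0 <group>}"
              REALM="EXAMPLE.COM"   # only one realm in play, so no auto-detection
              K5LOGIN="${HOME}/.k5login"

              touch "$K5LOGIN"

              # getent prints "name:pass:gid:member1,member2,..."; take field 4.
              members="$(getent group "$GROUP" | cut -d: -f4 | tr ',' ' ')"

              for user in $members; do
                  principal="${user}@${REALM}"
                  # Append only if the exact principal isn't already listed.
                  if ! grep -qxF "$principal" "$K5LOGIN"; then
                      echo "$principal" >> "$K5LOGIN"
                      echo "added $principal"
                  fi
              done

          That’s the whole job; the realm handling, which is the part the AI over-engineered, is just a constant here.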

          EDIT: I actually had a task that felt like something AI could have done 100%… if there were any way for it to know lots and lots of context. I basically had to fill out a long docx file with often AI-like text describing local IT security standards, processes, responsibilities and delegations. Probably over 60% of it I had to “just make up” because I didn’t have the context, for higher-ups to eventually massage into a final form. But I literally can’t even upload the confidential blank form, forget about some magic way for the AI to get a brain dump from me covering the last 10-ish years of spoken knowledge and restricted wiki pages. Anything it could have made up would mostly have “been done” by the time I’d put together a functional prompt.

          I don’t think we solve this till we can run frontier models locally at prices less than a human salary, with integrations into everything a human in that position could access.

  • superkret@feddit.org · 1 month ago

    This technology solves every development problem we have had. I can teach you how with my $5000 course.

    Yes, I would like to book the $5000 Silverlight course, please.

  • andioop@programming.dev · 1 month ago

    For all these people insisting you will be left behind for not using the tool, even if it is not magic:

    How do I learn to read code? I will never blindly trust AI output, but I also do not know how to read it for correctness - just how to create a test. I am beginner enough that writing is a lot easier than reading, and reading would honestly take me a while.

    Everything in me is straining against using AI. Not having the skills to actually check its output, while knowing it sometimes spews bullshit that looks correct, is a legitimate barrier to using it, not just personal distaste. Meanwhile, I at least understand what I wrote myself. If I ever do change my mind and unhappily jump on the train, feeling very, very dirty but not wanting to be left behind in a paradigm shift, I still need to be able to error-check its output.

    Yes, I get the point of the article, but some inventions really did change the way we do things, and they probably had people hyping them up as must-haves too, as always; we just also have tons of examples of must-haves that did not turn out that way. And while you are being bombarded by the hype, it is hard to know whether the invention will fade away, will have a place you can still get away without using, or will become something everyone uses and you’d seem crazy not to, like refrigerators and the internet. Hindsight is 20/20, but in the present we’re walking around in heavy fog, possibly with blindfolds on.

    I figure I’ll just keep not using AI and if I do get left behind, then I’ll force myself to use it. Learning to read code is useful either way ;)

  • fuzzzerd@programming.dev · 1 month ago

    I don’t remember progressive web apps having anywhere near the level of fanfare of the other things on this list, and as someone who has built several PWAs I feel their usefulness is undervalued.

    More apps in the app store should be PWAs instead.

    Otherwise this list is great and I love it.

      • Ferk@programming.dev · 1 month ago

        I mean, isn’t that what “get on or get left behind” means?

        It does not necessarily mean you’ll lose your job. Nor does “get on” mean you have to become a specialist in it.

        The post specifically picks on things that didn’t catch on (or that only caught on for a while before being superseded), but doesn’t apply the same test to successful technologies.

      • Jankatarch@lemmy.world · 1 month ago

        There is still a difference.

        Cloud was FOR the IT people. Machine learning is for predicting patterns in data.

        Maybe stock predictors will adapt or be replaced, but the average programmer didn’t have to switch to Replit just because it’s a “cloud IDE”.

    • Rusty@lemmy.ca · 1 month ago

      I don’t think it was supposed to replace everyone in IT, but every company used to have system administrators or IT administrators who worked with physical servers, and now they are gone. You can say that the new SREs are their replacement, but it’s a different set of skills, closer to SDE than to system administration.

      • MinFapper@startrek.website · 1 month ago

        And some companies (like mine) just have their SDEs do the SRE job as well. Apparently it incentivizes us to write more stable code or something.

    • Blackmist@feddit.uk · 1 month ago

      Many of our customers store their backups in our “cloud storage solution”.

      I think they’d be rather less impressed to see the cloud is in fact a jumble of PCs scattered all around our office.

    • Colonel Panic@lemm.ee · 1 month ago

      Naming it “The Cloud” and not “Someone else’s old computer running in their basement” was a smart move though.

      It just sounds better.

  • Refurbished Refurbisher@lemmy.sdf.org · 1 month ago

    I still think PWAs are a good idea compared to needing to download an app on your phone for every website. For example, PWAs could easily replace most banking apps, which are already just PWAs with added tracking.

    • Deebster@infosec.pub · 1 month ago

      They’re great for users, which is why Google and Apple are letting them die from lack of development, so that native apps can keep making them money.

  • Maxxie@lemmy.blahaj.zone · 1 month ago

    (Allow me to preach for a bit; I have to listen to my boss gushing about AI every meeting.)

    Compare AI tools: now vs 3 years ago. All those 2022 “Prompt engineer” courses are totally useless in 2025.

    Extrapolate into the future and realize that you’re not losing anything valuable by not learning AI tools today. The whole point of them is that they don’t require any proficiency - it “just works”.

    Instead, focus on what makes you a good developer: understanding how things work, knowing which solution fits which problem, centering your divs.

    • Dr. Moose@lemmy.world · 1 month ago

      The key skill is being able to communicate your problem and requirements, which turns out to be really hard.

      • Pennomi@lemmy.world · 30 days ago

        It’s also a damn useful skill whether you’re working with AI or humans. Probably worth investing some effort into that regardless of what the future holds.

        • jmp242@sopuli.xyz · 30 days ago

          Though it’s more work with current AI than with another team member, at least - the AI can’t have access to a lot of context because of data security rules.