Even if you disable the feature, I have little to no trust in OpenAI to respect that decision, given their history of using copyrighted content to enhance their LLMs.

  • vane@lemmy.world · 1 day ago

    It’s interesting to watch from the perspective of a person who used to be able to find knowledge only in books. I’m slowly starting to feel like a Neanderthal. This global (d)arpanet experiment on humans looks more and more intriguing.

    • cecilkorik@lemmy.ca · 1 day ago

      AI is just a search engine you can talk to that summarizes everything it finds into a small nugget for you to consume, and in the process sometimes lies to you and makes up answers. I have no idea how people think it is an effective research tool. None of the “knowledge” it shares is actually created by it; it’s just automated plagiarism. We still need humans writing books (and websites) or the AI won’t know what to talk about.

      Books are going to keep doing just fine.

      • Dasus@lemmy.world · 6 hours ago

        Books are going to keep doing just fine.

        Books haven’t been the go-to for several decades. When’s the last time you went to look something up in a library before Googling it? Or hell, at all? Because we used to have to do that, you know. When I was a kid and wanted to know something, I had to cycle to the library.

        Now I can ask my phone about it, then ask it for the source, then check the source, and use a search engine to find an actual book on the subject.

        It’s a tool.

        It’s a poor craftsman who blames his tools. If you’re trying to use a hammer as a screwdriver, ofc it’s gonna suck.

        • cecilkorik@lemmy.ca · 6 hours ago

          Both the tool and the craftsman are to blame if you intend to use duct tape to build a house. The appropriate and acceptable uses of AI chatbots are similarly limited.

          • Dasus@lemmy.world · 6 hours ago

            Yeah I’m not gonna build a house with duct tape, but I most definitely like keeping a roll around, because it’s very useful in certain situations.

            As of now, LLMs are little more than glorified chatbots, but I find them useful when cooking or making drinks. I’ll have an idea, query something, and ask whether it’s generally thought that spice x goes well in dish y, or how the temperature of a drink will affect its layering, or something like that.

            It’s decent enough for that. But for any data that’s not as stable as cooking (which is subjective at its core anyway, more or less), it’s not good. Movie release dates, for instance? Nah. Release dates change, and the batch of data it uses for training can have a different date than the actual one.

            That happened in December when Kraven the Hunter was coming out. It told me it had premiered like six months ago when I knew it was coming out in a week or so.

            But on the other hand, I once accidentally made this cool drink where bits of pineapple kept bobbing up and down, pretty furiously, for 10-15 minutes after serving. I couldn’t replicate it until I talked to Gemini for a minute, and the input would’ve been so niche it would’ve yielded no direct results online. I’d have had to refresh some basic chemistry for at least 10-20 minutes, probably. But now I just got the answer in one go.

            Decent enough.

            I know AI is overhyped, but it’s also overhated. I too hate the overhyping, but I don’t hate the tool itself. It’s just not anywhere near as versatile or complex as some people make it out to be, but it’s also rather more useful than some make it out to be.

      • vane@lemmy.world · 9 hours ago

        It’s all because it’s cheaper to talk to an LLM, a machine that outputs the most probable phrases based on statistics, than to talk to people these days. It’s an accessibility thing. You get the feeling that you’re speaking with a person; that’s the whole trick. It says a lot about who we are as people.

        The amount of effort needed to ask questions in real life, without being hated for it, is way bigger than just asking an LLM.

        To add more on the topic of AI as a whole: you need to realize that we have a completely new kind of computer software that is non-deterministic. It’s a completely new thing, and comparing it to traditional software is just pointless and confusing.

        I’m not saying I would entrust my life to an LLM, but the fact that we developed software that always generates readable output is a huge step in software development.

  • zenpocalypse@lemm.ee · 1 day ago

    I’m not going to defend OpenAI in general, but that difference is meaningless outside of how the LLM interacts with you.

    If data privacy is your focus, it doesn’t matter whether the LLM has access to your history during a session to modify how it reacts to you. They don’t need the LLM at all to use that history.

    This isn’t an “I’m out” type of change for privacy. If it is, you missed your stop when they started keeping a history.

  • huppakee@lemm.ee · 2 days ago

    The headline: ChatGPT Will Soon Remember Everything You’ve Ever Told It

    • Australis13@fedia.io · 2 days ago

      The irony is that, according to the article, it already does. What is changing is that the LLM will be able to use more of that data:

      OpenAI is rolling out a new update to ChatGPT’s memory that allows the bot to access the contents of all of your previous chats. The idea is that by pulling from your past conversations, ChatGPT will be able to offer more relevant results to your questions, queries, and overall discussions.

      ChatGPT’s memory feature is a little over a year old at this point, but its function has been much more limited than the update OpenAI is rolling out today… Previously, the bot stored those data points in a bank of “saved memories.” You could access this memory bank at any time and see what the bot had stored based on your conversations… However, it wasn’t perfect, and couldn’t naturally pull from past conversations, as a feature like “memory” might imply.

      • huppakee@lemm.ee · 2 days ago

        For context: it already saved your data, since you had access to your previous chats. Then came the memory feature, which meant they saved something like a summary into a new dataset (e.g. ‘the user lives in country x’ and ‘the user doesn’t like birthdays’), so you are right, it does save it already. The news is that the bot will now access more of your chat history. I think when they write “ChatGPT” they mean it as ‘your personal chatbot’ rather than ‘the company that offers the chatbot’.

      • Tim_Bisley@piefed.social · 1 day ago

        Hmm, this is interesting, because I had a lengthy chat with GPT over this. It randomly recalled a previous conversation, acting like it was part of the current one. I asked about it, and it was like, “oh, I can’t recall previous conversations.” I was like, “yet you did,” and after going back and forth it was pretty much like, “oops, my bad, I’m not supposed to do that, but I accidentally did.”

    • Buffalox@lemmy.world · 2 days ago

      If we knew it was altruistic, and only working for our benefit, it might be.
      But as it is, it is not working for you; you are not its master.
      Big corps and governments are.

      • snooggums@lemmy.world · 2 days ago

        I would be even more worried if it were altruistic and for our benefit, because we fuck that shit up all the time, even before malicious actors are able to weasel their way into power and turn it into something horrible.

  • gravitas_deficiency@sh.itjust.works · 2 days ago

    This will never ever be used in a surveillance capacity by an administration that’s turning the country into a fascist hyper capitalist oligarchical hellscape. Definitely not. No way. It can’t happen here.

    • Frjttr@lemm.ee · 2 days ago

      This will be useful to the user, but it won’t change privacy. Humans at OpenAI still have full access to your history, and this will only expand AI capabilities to tap into previous conversations. However, rogue and unlawful administrations will still seek to access that data regardless.

        • Frjttr@lemm.ee · 1 day ago

          If this were true, the attacker would need to send prompts to retrieve information, making it an easy attack for the user to spot. However, if the malicious actor has the power to delete prompts and chats, I would suspect they already have access to every other chat.

    • PattyMcB@lemmy.world · 2 days ago

      It reminds me of the kids in 1984 who turn their father in for being an enemy of the state.

    • Balder@lemmy.world · 2 days ago

      There’s a difference between OpenAI storing conversations and the LLM being able to search all your previous conversations in every clean session you start.

      • zenpocalypse@lemm.ee · 1 day ago

        That is the difference, but it’s a pretty minimal difference. OpenAI hardly needs to give the LLM access to your conversations during your session in order to access those conversations themselves.

        In fact, I don’t see any direct benefit to OpenAI with this change. All it does is (probably) improve its answers to the user during a session.

        • huppakee@lemm.ee · 1 day ago

          The benefit is that they offer a better service, which might help them sell more subscriptions. They don’t need this change for the more malicious benefits, like more data for training or more insight into their customers.

  • Björn Tantau@swg-empire.de · 2 days ago

    They literally tell you when you sign up that they can and will look at what you tell ChatGPT. This changes absolutely nothing about that.

    • Balder@lemmy.world · 2 days ago

      Maybe for training new models, which is a totally different thing. With this update, everything you type will be stored and used as context.

      I already never share anything personal with these cloud-based LLMs, but it’s getting more and more important to have a local, private LLM on your computer.

      • WhatAmLemmy@lemmy.world · 2 days ago

        Always has been. Nothing has changed. Every conversation you’ve ever had with chatGPT is stored and owned by open AI. This is why I’ve largely rejected their use.

        If it’s not local or E2EE, you are the product (even when you pay for the service).

        • Balder@lemmy.world · 17 hours ago

          Always has been. Nothing has changed.

          The fact that OpenAI stores everything you type doesn’t mean ChatGPT will use any prior information as context when you make a prompt, unless you had the memory feature turned on (which also let you explicitly make it “forget” whatever you chose from the context).

          What OpenAI stores and what the LLM uses as input when you start a session are totally separate things. This update is about the LLM being able to search your prior conversations and reference them (using them as input, in practice), so saying “Nothing has changed” is false.
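
          Roughly, the distinction looks like this. Here is a minimal sketch using the plain OpenAI API (which is stateless; the ChatGPT product wires its memory lookup up behind the scenes, and the exact mechanism isn’t public), just to illustrate that the model only “remembers” whatever gets placed into its input:

          ```python
          # Illustrative sketch only: stored history vs. what the model actually sees.
          # Assumes the `openai` package and an API key in the environment.
          from openai import OpenAI

          client = OpenAI()

          # What OpenAI stores server-side (all your past chats) is one thing;
          # what the model sees is only whatever ends up in this `messages` list.
          past_chats = [
              {"role": "user", "content": "I live in country X and I don't like birthdays."},
          ]

          # Fresh session, no history injected: the model has nothing to go on.
          fresh = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Where do I live?"}],
          )
          print(fresh.choices[0].message.content)  # it can only guess

          # "Memory" in practice: retrieve stored chats and prepend them as input.
          with_memory = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=past_chats + [{"role": "user", "content": "Where do I live?"}],
          )
          print(with_memory.choices[0].message.content)  # now it can answer
          ```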

          • zenpocalypse@lemm.ee · 1 day ago

            I think you might be confused about the difference between giving the LLM access to your stored conversations during your session and OpenAI using AI to search your stored conversations.

            What the LLM has access to during your session changes nothing but your session.

            It’s not some “I, Robot” central AI that either has access or doesn’t as a whole.

  • mattc@lemmy.world · 2 days ago

    What worries me is all the info from those conversations actually becoming public. I haven’t fed it personal info, but I bet a lot of people do. Not only stuff you might tell it yourself, but information fed in by people you know. Friends, family, acquaintances, even enemies could say some really personal or downright false things about you to it, and it could one day add that to public ChatGPT. Sounds like some sort of Black Mirror episode, but I think it could happen. I wouldn’t be surprised if intelligence agencies already have access to this data. Maybe one day cyber criminals or even potential employers will have all this data too.

  • Yerbouti@sh.itjust.works · 2 days ago

    I run DeepSeek locally on an M1, good enough for 80% of what I need. OpenAI is just another wannabe GAFAM; can’t trust it.
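
    For anyone curious, one way to run something like this is a rough sketch along these lines, assuming Ollama and one of its distilled deepseek-r1 tags (the exact tag and size are an assumption; pick whatever fits your machine, and other runners like llama.cpp work too), queried through the ollama Python package:

    ```python
    # Sketch: chatting with a locally running DeepSeek distill through Ollama.
    # Assumes you've done `ollama pull deepseek-r1:8b` and `pip install ollama`.
    import ollama

    response = ollama.chat(
        model="deepseek-r1:8b",
        messages=[{"role": "user", "content": "Summarize the trade-offs of running an LLM locally."}],
    )
    print(response["message"]["content"])
    ```

    Nothing leaves the machine, which is the whole point.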

  • ReverendIrreverence@lemmy.world · 2 days ago

    This only works if you have an account and sign in. Don’t do that, have your browser clear cookies and site data when it quits, and the problem is solved.

    • sunzu2@thebrainbin.org · 1 day ago

      Some people are still in the dark, or in outright denial, about how all of these companies operate and their dual-purpose operations.

    • Iamnotafish@lemmy.ml · 2 days ago

      I worked in cybersecurity and my global org was handing over details about who used ChatGPT during certain timeframes at the request of the feds (United States) two years ago on at least one occasion.

      • huppakee@lemm.ee · 2 days ago

        Yeah, I don’t think they encrypt it anyway, so I guess even if they denied a government’s request, the government might still find a way to get to data like this.

  • RuBisCO@slrpnk.net · 2 days ago

    Where is this being stored? What is the capacity? How many accounts would be needed to overflow storage?

  • Imgonnatrythis@sh.itjust.works · 2 days ago

    Seems like, if they weren’t completely evil, the obvious way to execute something like this would be to give people the option to keep all the personal data locally. It probably amounts to a few hundred KB of data that the complex server-side LLM could just temporarily pull as needed. In my mind this seems most useful for an LLM home assistant, but the idea of OpenAI keeping a database of learned trends, preferences, and behaviors is pretty repulsive.