Actually, I really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes:

Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.

Step 2 Agentic. Allow all parts of the OS and apps to interoperate via “function calling”, with a kernel-process LLM that can schedule and coordinate work across them given user queries.
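The “function calling” idea above can be sketched in a few lines. This is purely illustrative: the tool names, the `Tool`/`Coordinator` classes, and the hard-coded dispatch are my inventions, not Apple’s actual APIs; in a real system the LLM itself would pick the tool and arguments from the user’s query.

```python
# Hypothetical sketch of OS-level "function calling": apps register
# capabilities as tools, and an LLM coordinator dispatches work to them.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str  # what the LLM would read to decide when to call it
    handler: Callable[[dict], str]


class Coordinator:
    def __init__(self):
        self.tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def dispatch(self, tool_name: str, args: dict) -> str:
        # In a real system the LLM chooses the tool and arguments from the
        # user's query; here the choice is hard-coded for illustration.
        return self.tools[tool_name].handler(args)


coordinator = Coordinator()
coordinator.register(Tool(
    name="calendar.create_event",
    description="Create a calendar event",
    handler=lambda args: f"Created event '{args['title']}'",
))

print(coordinator.dispatch("calendar.create_event", {"title": "Lunch"}))
# Created event 'Lunch'
```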

Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, “always on”, and contextual way. No going around copy-pasting information, prompt engineering, etc. Adapt the UI accordingly.

Step 4 Initiative. Don’t just perform a task given a prompt; anticipate the prompt, suggest, initiate.

Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon is very helpful and well-suited here), but allow optional dispatch of work to the cloud.
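A delegation hierarchy like this reduces to a routing decision. The sketch below is an assumption of mine, not Apple’s logic; the complexity score, threshold, and `cloud_allowed` flag are all made-up parameters, but they capture the privacy-preserving default of preferring on-device work and never silently uploading.

```python
# Illustrative sketch of a delegation hierarchy: prefer on-device
# processing, fall back to the cloud only when the task exceeds a
# (hypothetical) local capability threshold and the user allows it.
def route(task_complexity: float, cloud_allowed: bool = True,
          local_limit: float = 0.7) -> str:
    """Return where a task should run. Thresholds are invented."""
    if task_complexity <= local_limit:
        return "on-device"
    if cloud_allowed:
        return "cloud"
    return "refused"  # never silently upload when the user opted out


print(route(0.3))                        # on-device
print(route(0.9))                        # cloud
print(route(0.9, cloud_allowed=False))   # refused
```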

Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).

Step 7 Privacy. <3

We’re quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.

https://x.com/karpathy/status/1800242310116262150?s=46

    • reattach@lemmy.world · 7 months ago

      I thought the original post was satire - list all of the privacy issues, then throw in “Privacy <3” at the end. Seriously, almost every one of those points has a potential privacy issue.

      Guess I was being too generous.

    • Z4rK@lemmy.worldOP · 7 months ago

      How so? Many people want to use AI in private, but currently it’s too hard for most people to set that up for themselves.

      Having AI tools at the OS level, so you can use them in almost any app with processing guaranteed to happen privately on device, will be very useful if done right.

      • TheFriar@lemm.ee · 7 months ago

        You think your iPhone isn’t collecting data on you? Is that what you’re saying?

        • Z4rK@lemmy.worldOP · 7 months ago

          Unless you are designing and creating your own chips for processing, networking, etc., privacy today is about trust, not technology. There’s no escaping it. I know the iPhone and Apple are collecting data about me. I currently trust them the most on how they use it.

          • MigratingtoLemmy@lemmy.world · 7 months ago

            Running FOSS and taking control of your network will get you a far better privacy-vs-convenience trade-off than most people can imagine.

          • EngineerGaming@feddit.nl · 7 months ago

            There are degrees of trust though. You can trust the developers and people who audited the code if you have no skill/desire to audit it yourself, or you can trust just the developers.

            And even closed systems’ behavior can be monitored and analyzed.

            • Z4rK@lemmy.worldOP · 7 months ago

              Yes, definitely. Apple claimed that their privacy protections can be independently audited and verified; we will have to wait and see what’s actually behind that claim.

              • Rustmilian@lemmy.world · 7 months ago

                Apple claimed that their privacy could be independently audited and verified.

                How? The only way to truly verify that to a 100% degree is if it were open source, and I highly doubt Apple would do that, especially considering its OS-level integration. At best, they’d probably only have a self-report mechanism, which would likely be proprietary and therefore not verifiable in itself.

        • ji17br@lemmy.ml · 7 months ago

          The phone is, Apple isn’t. They outline everything in the keynote if you are interested.

          • Rustmilian@lemmy.world · 7 months ago

            Their keynotes are irrelevant; their official privacy policies and legal disclosures take precedence over marketing claims or statements made in keynotes and presentations, which carry no legal weight when it comes to data practices. Apple’s privacy policy states that the company collects data necessary to provide and improve its products and services. The OS-level AI would fall under this category, allowing Apple to collect data processed by the AI for improving its functionality and models. With the AI system operating at the OS level, it likely has access to a wide range of user data, including text inputs, conversations, and potentially other sensitive information.

      • Zoot@reddthat.com · 7 months ago

        Yeah, just like Microsoft Recall, right? An AI that has access to every single thing you do (and would also be recording; otherwise, how does it know “you”?) can never be private by design. Its literal design is to know everything about you, your actions, and your habits. I wouldn’t trust anyone to be able to create an actually secure piece of software that does the above. It will always be able to be stolen/sold/abused.

        • Z4rK@lemmy.worldOP · 7 months ago

          macOS and Windows could already be doing this today behind your back regardless of any new AI technology. Don’t use an OS you don’t trust.

            • Z4rK@lemmy.worldOP · 7 months ago

              That’s fair, but you are misunderstanding the technology if you’re bashing Apple’s AI for making macOS less secure. Most likely, it will be just as secure as, for example, their password functionality, although we don’t have details yet. You either trust the OS or you don’t.

              Microsoft Recall was designed so badly that there’s no hope for it.

              • Zoot@reddthat.com · 7 months ago

                I simply don’t, and wouldn’t, trust Apple. They will tell you they are all about privacy and happily sell your data behind your back, just like any other company.

      • Rustmilian@lemmy.world · 7 months ago

        you can use it in almost any app
        if done right

        How are you going to be able to use it in “almost any app” in a way that is secure? How are you going to design it so that the apps don’t abuse the AI to get more information on the user out of it than intended? Seems pretty damn inherently insecure to me.

        • Z4rK@lemmy.worldOP · 7 months ago

          That’s why it’s at the OS level. For text, for example, it seems to work in any text app that uses the standard text-input API, which Apple controls.

          The user activates the “AI overlay” at the OS level, not in the app; the OS reads the selected text from the app and sends text suggestions back.

          The app is (possibly) unaware that AI has been used or activated, and has not received any user information.

          Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
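To make the speculation concrete, the overlay flow can be sketched roughly like this. Everything here is hypothetical: `TextApp`, `os_ai_overlay`, and the trivial “model” stand in for the app, the OS mediator, and an on-device model; the point is only that the OS, not the app, sits between the selection and the AI.

```python
# Speculative sketch of the overlay flow: the OS mediates between the
# user's text selection and the AI model, so the app only ever sees an
# ordinary text edit and never learns that AI was invoked.
class TextApp:
    """Stands in for any app using the standard text-input API."""
    def __init__(self, text: str):
        self.text = text
        self.selection = (0, len(text))  # user has selected everything

    def selected_text(self) -> str:
        start, end = self.selection
        return self.text[start:end]

    def replace_selection(self, new_text: str) -> None:
        start, end = self.selection
        self.text = self.text[:start] + new_text + self.text[end:]


def on_device_rewrite(text: str) -> str:
    # Placeholder for an on-device model call.
    return text.capitalize()


def os_ai_overlay(app: TextApp) -> None:
    # The OS reads the selection and writes the suggestion back;
    # no user data leaves the device, and the app sees a normal edit.
    app.replace_selection(on_device_rewrite(app.selected_text()))


app = TextApp("rewrite me please")
os_ai_overlay(app)
print(app.text)  # Rewrite me please
```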

          • Rustmilian@lemmy.world · 7 months ago
            • Malicious actors could potentially exploit vulnerabilities in the AI system to gain unauthorized access or control over device functions and data, potentially leading to severe privacy breaches, unauthorized data access, or even the ability to inject malicious content or commands through the AI system.
            • Privacy breaches are possible if the AI system is compromised, exposing user data, activities, and conversations processed by the AI.
            • Integrating AI functionality deeply into the operating system increases the overall attack surface, providing more potential entry points for malicious actors to exploit vulnerabilities and gain unauthorized access or control.
            • Human reviewers have access to annotate and process user conversations for improving the AI models. To effectively train and improve the AI models powering the OS-level integration, Apple would likely need to collect and process user data, such as text inputs, conversations, and interactions with the AI.
            • Apple’s privacy policy states that the company collects data necessary to provide and improve its products and services. The OS-level AI would fall under this category, allowing Apple to collect data processed by the AI for improving its functionality and models.
            • Despite privacy claims, Apple has a history of collecting various types of user data, including device usage, location, health data, and more, as outlined in their privacy policies.
            • If Apple partners with third-party AI providers, there is a possibility of user data being shared or accessed by those entities, as permitted by Apple’s privacy policy.
            • With the AI system operating at the OS level, it likely has access to a wide range of user data, including text inputs, conversations, and potentially other sensitive information. This raises privacy concerns about how this data is handled, stored, and potentially shared or accessed by the AI provider or other parties.
            • Lack of transparency for users about when and how their data is being processed by the AI system & users not being fully informed about data collection related to the AI. Additionally, if the AI integration is controlled solely at the OS level, users may have limited control over enabling or disabling this functionality.