Why are people happy about, or at least approving of, AI on Apple products, when it seems like the same thing was (rightly) received horribly when Microsoft just did it?

Is Apple doing it better in some way? Both said it would be local-only, but now Apple is doing some cloud processing too. Do people really just trust Apple more???

  • dick_stitches@lemm.ee · 6 months ago

    I’m excited for it. I don’t think hallucinations will be a huge concern. Knowing about all (or most) of the content on my devices is a MUCH easier prospect than knowing everything about everything, which is a claim OpenAI and Google certainly aren’t trying too hard to refute about their models.

    • jacksilver@lemmy.world · 6 months ago

      Are there any good videos or articles detailing real-world use cases for this stuff? I watched a couple of things on Recall, but there wasn’t much in terms of what I would actually use it for. While I do have trouble finding things from time to time, it doesn’t feel like that big of an improvement for the cost (privacy or compute).

      • dick_stitches@lemm.ee · 6 months ago

        I thought Apple’s WWDC keynote showed some good uses for it, but you’re right, it is kind of just incremental, and may or may not be worth the privacy/compute cost. I personally am mostly excited that Siri will be able to contextualize my calendars, notes, messages, etc. There are lots of bits of information I’ve lost over the years that aren’t actually lost, just buried, and current search just isn’t up to the task of finding them. Or take searching through notes: instead of having to remember when I took a note and where I saved it, I can just ask Siri a question and it’ll basically search through my notes and find the answer.

        I also think it’s going to completely change academic research. Instead of going to JSTOR and using a traditional search bar, you could just tell the AI assistant what you’re thinking about, what your theories are, etc., and it will search the catalog and find relevant sources for you. It removes a layer of friction, which I think will make a lot of people more efficient and effective.

        The main argument I see against it is “well, that is all well and good, but none of that will matter when the internet is full of AI-generated crap.” I mean yeah, that’s true, but the internet is already full of non-AI-generated crap. Sifting through the shitty ads and “sponsored posts” has already made the internet nearly unusable IMO. That’s a bigger problem we need to deal with, and it’s separate from AI.

    • garretble@lemmy.world · 6 months ago

      Yeah, I agree with this take, though I did see an article that quoted Tim Cook as saying they wouldn’t be able to totally get rid of hallucinations, so I’m still a little reserved about it all.

      • dick_stitches@lemm.ee · 6 months ago

        I think healthy skepticism is always a good thing. A lot of people seem to be looking at this tech as a panacea, which it absolutely isn’t. It’s still really important that we’re able to identify when it may be hallucinating, just like we really need the ability to think critically about literally anything on the internet.