I know it’s not even close there yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?

You might think this is a stupid and irrational question. “There is no way AI will do psychology well, ever.” But I think, in this day and age, it’s a pretty fair question to ask when you’re deciding about your future.

  • TimewornTraveler@lemm.ee · 1 year ago

    homie lemme let you in on a secret that shouldn’t be secret

    in therapy, 40% of positive client outcomes come from external factors changing

    10% come from my efforts

    10% come from their efforts

    and the last 40% comes from the therapeutic alliance itself

    people heal through the relationship they have with their counselor

    not a fucking machine

    this field ain’t going anywhere, not any time soon. not until we have fully sentient general ai with human rights and shit

  • Macaroni_ninja@lemmy.world · 1 year ago

    I don’t think the AI everyone is so buzzed about today is really a true AI. As someone summed it up: it’s more like a great autocomplete feature but it’s not great at understanding things.

    It will be great for replacing Siri and the Google Assistant, but not for giving people professional advice, not by a long shot.

    • Zeth0s@lemmy.world · 1 year ago

      I’m not saying an LLM should substitute for a professional psychological consultant, but that someone is clearly wrong and doesn’t understand current AI. Just FYI.

      • Macaroni_ninja@lemmy.world · 1 year ago

        Care to elaborate?

        It’s an oversimplified statement from someone (sorry, I don’t have the source) and I’m not exactly an AI expert, but my understanding is that the current commercial AI products are nowhere near the “think and judge like a human” definition. They can scrape the internet for information and use it to react to prompts, and they do a fantastic job of imitating humans, but the technology is simply not there.

        • Zeth0s@lemmy.world · 1 year ago

          The technology for human intelligence? Any technology will always be very different from human intelligence. What you’re probably referring to is AGI, artificial general intelligence: an “intelligent” agent that doesn’t excel at any one thing but can handle a huge variety of scenarios and tasks, much like humans do.

          LLMs are specialized models for generating fluent text, but they’re very different from autocomplete because they can work with concepts, semantics and (pretty surprisingly) rather complex logic.

          As an oversimplification, even humans are fancy autocomplete. They’re just different, as LLMs are different.
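
          To make the “fancy autocomplete” point concrete, here is a toy sketch (my own illustration, not how any actual LLM is implemented): a lookup table of which word most often follows which, extended greedily one word at a time. An LLM does conceptually the same next-token prediction, just over learned representations of meaning at a vastly larger scale, which is where the handling of concepts and some logic comes from.

          ```python
          # Toy "fancy autocomplete": always continue with the word that most
          # often followed the current one in the training text. Real LLMs do
          # the same next-token prediction, only over learned representations
          # instead of a lookup table. Illustrative sketch only.
          from collections import Counter, defaultdict

          corpus = "the cat sat on the mat and the cat slept on the mat".split()

          # Count which word tends to follow which.
          following = defaultdict(Counter)
          for current, nxt in zip(corpus, corpus[1:]):
              following[current][nxt] += 1

          def autocomplete(word: str, length: int = 5) -> str:
              """Greedily extend a prompt one most-likely word at a time."""
              out = [word]
              for _ in range(length):
                  candidates = following.get(out[-1])
                  if not candidates:
                      break
                  out.append(candidates.most_common(1)[0][0])
              return " ".join(out)

          print(autocomplete("the"))  # fluent-looking output, no understanding behind it
          ```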

  • Havald@lemmy.world · 1 year ago

    I won’t trust a tech company with my most intimate secrets. Human therapists won’t get fully replaced by AI.

  • 4am@lemm.ee · 1 year ago

    AI cannot think; it doesn’t use logic or reason. It outputs a result from an input prompt. That will not solve psychological problems.

    • baked_tea@lemmy.world · 1 year ago

      That’s what AI does at the moment. It may not necessarily be true in a few years, which is what OP is asking about.

  • ???@lemmy.world · 1 year ago

    No, it won’t. I don’t think I would have made it here today alive without my therapist. There may be companies that have AI agents doing therapy sessions but your qualifications will still be priceless and more effective in comparison.

  • NMS@startrek.website · 1 year ago

    Hey, maybe your background in psychology will help with unfucking an errant LLM or actual AI someday :P

  • Zeth0s@lemmy.world · 1 year ago

    AI won’t make psychology redundant. It might allow for easier and broader access to low-level, first-line psychological support.

    What is more likely to make psychological consultants a risky investment is the economic crisis. People are already prioritizing food over psychological therapy. Therapy is unfortunately a “luxury item” nowadays.

  • DABDA@lemmy.world · 1 year ago

    All my points have already been (better) covered by others in the time it took me to type them, but instead of deleting will post anyway :)


    If your concerns are about AI replacing therapists & psychologists, why wouldn’t that same worry apply to literally anything else you might want to pursue? Ostensibly anything physical can already be automated, which would remove the “blue-collar” trades, and now that there’s significant progress in creative/“white-collar” sectors, that would mean the end of everything else.

    Why carve wood sculptures when a CNC machine can do it faster & better? Why learn to write poetry when there’s LLMs?

    Even if there was a perfect recreation of their appearance and mannerisms, voice, smell, and all the rest – would a synthetic version of someone you love be equally as important to you? I suspect there will always be a place and need for authentic human experience/output even as technology constantly improves.

    With therapy specifically there are probably going to be elements that an AI can [semi-]uniquely deal with, just because a person might not feel comfortable being completely candid with another human; I believe that’s what using puppets or animals or whatever to act as an intermediary is for. Supposedly even a really basic thing like ELIZA was able to convince some people it was intelligent, and they opened up to it and possibly found some relief from it, and there’s nothing in it close to what is currently possible with AI. I can envision a scenario in the future where a person just needs to vent, and having a floating head just compassionately listen and offer suggestions will be enough; but I think most(?) people would prefer/need an actual human when the stakes are higher than that – otherwise the suicide hotlines would already just be pre-recorded positive affirmation messages.
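
    For anyone curious how little machinery that took, below is a rough sketch of the ELIZA idea (my own toy illustration, not Weizenbaum’s original DOCTOR script): keyword rules plus pronoun “reflection”, with no understanding whatsoever behind the seemingly empathetic replies.

    ```python
    # ELIZA-style responder: match a keyword pattern, mirror the speaker's
    # words back with pronouns swapped, and wrap them in a canned reply.
    import random
    import re

    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i am (.*)",   ["What makes you say you are {0}?"]),
        (r"(.*)",        ["Please tell me more.", "How does that make you feel?"]),
    ]

    def reflect(text: str) -> str:
        """Swap first/second person so the reply points back at the speaker."""
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def respond(message: str) -> str:
        for pattern, replies in RULES:
            match = re.match(pattern, message.lower())
            if match:
                groups = [reflect(g) for g in match.groups()]
                return random.choice(replies).format(*groups)

    print(respond("I feel like nobody listens to my ideas"))
    # e.g. "Why do you feel like nobody listens to your ideas?"
    ```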

  • theherk@lemmy.world · 1 year ago

    Many valid points here, but here is a slightly different perspective. Let’s say for the sake of discussion AI is somehow disruptive here. So?

    You cannot predict what will happen in this fast-moving space. You should not attempt to do so in a way that compromises your path toward your interests.

    If you like accounting or art or anything else that AI may disrupt… so what? Do it because you are interested. It may be hyper important to have people who did so in any given field, no matter how unexpected. And most importantly, doing what interests you is always at least part of a good plan.

  • realharo@lemm.ee · 1 year ago

    It’s definitely possible, but such an AI would probably be good enough to take over every other field too. So it’s not like you can avoid it by choosing something else anyway.

    And the disruption would be large enough that governments will have to react in some way.

  • Lvxferre@lemmy.ml · 1 year ago

    If you’re going to avoid psychology, do it because of the replication crisis. What is being called “AI” should play no role in that. Here’s why.

    Let us suppose for a moment that some AI 7 years from now is able to accurately diagnose and treat psychological issues that someone might have. Even then, the AI in question is not a moral agent that can be held responsible for its actions, and that is essential when you’re dealing with human lives. In other words, you’ll still need psychologists vetting the output of said AI and making informed decisions on what the patient should [not] do.

    Furthermore, I do not think that those “AI systems” will be remotely as proficient at human tasks in, say, a decade, as some people are claiming they will be. “AI” is a misnomer; those systems are not intelligent. Model-based text generators are a great example of that (and relevant in your case): play a bit with ChatGPT or Bard and look at their output in a somewhat consistent way (without cherry-picking the hits and ignoring the misses). Then you’ll notice that they don’t really understand anything: they’re reproducing grammatical patterns regardless of their content. (Just like they were programmed to.)

      • Lvxferre@lemmy.ml · 1 year ago

        It boils down to scientists not knowing if they’re actually reaching some conclusion or just making shit up. It’s a big concern across multiple sciences; it’s just that psychology is being hit really hard, and for clinical psychologists this means they simply can’t trust the theoretical frameworks guiding their decisions as much as they were supposed to.

  • KISSmyOS@lemmy.world · 1 year ago

    I’m sure as fuck glad my therapist is a human and not a Chatbot.

    Also, psychologists will be needed to design AI interfaces so humans have an easy time using them.
    A friend of mine studied psychology and now works for a car company, designing their infotainment system UI so that people can use it instinctively without consulting a manual. Those kinds of jobs will become more common, not less, in the future.