• EnderMB@lemmy.world · 8 months ago

    I work in AI. LLMs are cool and all, but I think it’s mostly hype at this stage. While some jobs will be lost (voice work, content creation), my true belief is that we’ll see two increases:

    1. The release of productivity tools that use LLMs to help automate or guide menial tasks.

    2. The failure of businesses that try to replicate skilled labour using AI.

    To head off point two, I would love to see people and lawmakers really crack down on AI replacing jobs, regulating the replacement of job roles with AI until it can sufficiently replace a person. If, for example, someone cracks self-driving vehicles, then it should be the responsibility of the owning companies and the government to provide training and compensation so that everyone being “replaced” can find new work. This isn’t just to stop people from suffering, but to stop the idiot companies that’ll sack their entire HR department, automate it via AI, and then get sued into oblivion because it discriminated against someone.

    • Donkter@lemmy.world · 8 months ago

      I’ve also heard that, as far as we can figure, we’ve basically reached the limit on certain aspects of LLMs already. Basically, LLMs need a FUCK ton of data to be good, and we’ve already pumped them full of the entire internet, so all we can do now is marginally improve algorithms whose inner workings we barely understand. Think about that: the entire internet isn’t enough to successfully train LLMs.

      LLMs have taken some jobs already (like audio transcription, basic copyediting, and aspects of programming), we’re just waiting for the industries to catch up. But we’ll need to wait for a paradigm shift before they start producing pictures and books or doing complex technical jobs with few enough hallucinations that we can successfully replace people.

      • EnderMB@lemmy.world · 8 months ago

        My own personal belief is very close to what you’ve said. It’s a technology that isn’t new, but it had been assumed to be not as good as compositional models because it would cost a fuck-ton to build and would produce dangerous hallucinations. It turns out that both are still true, but people don’t particularly care. I also believe that one of the reasons ChatGPT has performed so well compared to other LLM initiatives is that there is a huge amount of stolen data in it that would get OpenAI in a LOT of trouble.

        IMO, the real breakthroughs will be in academia. Now that LLMs are popular again, we’ll see more research into how they can be better utilised.

        • Donkter@lemmy.world · 8 months ago

          AFAIK, OpenAI got their training data from what was basically a free resource that they just had to request access to. They didn’t think much of it, and neither did anyone else. No one could have predicted how valuable it would be; only in retrospect does it seem obvious.

      • prime_number_314159@lemmy.world · 8 months ago

        The (really, really, really) big problem with the internet is that so much of it is garbage data. The number of false and misleading claims spread endlessly on the internet is huge. To rule those beliefs out of the data set, you need something that can grasp the nuances separating published, peer-reviewed data, deliberately misleading propaganda, fringe conspiracy nuts who believe the Earth is controlled by lizards with planes and that only a spritz bottle full of vinegar can defeat them, and everything in between.

        There is no person, book, journal, website, newspaper, university, or government that has reliably produced good, consistent help on questions of science, religion, popular lies, unpopular truths, programming, human behavior, economic models, and many, many other things that continuously have an influence on our understanding of the world.

        We can’t build an LLM that won’t consistently be wrong until we can stop being consistently wrong.

        • Donkter@lemmy.world · 8 months ago

          Yeah, I’ve heard medical LLMs are promising when they’ve been trained exclusively on medical texts. Same with the AI models that have been trained exclusively on DNA sequences, etc.

    • Ð Greıt Þu̇mpkin@lemm.ee · 8 months ago

      Nah, fuck HR. They’re the shield companies hide behind to discriminate within the margins.

      I think the proper route is a labor-replacement tax to fund retraining and replacement pensions.

    • funkless_eck@sh.itjust.works · 8 months ago

      I sincerely doubt AI voiceover will outperform human actors in the next 100 years by any metric, including cost or time savings.

      • EnderMB@lemmy.world · 8 months ago

        Not sure why you’re downvoted, but this is already happening. There was a story a few days ago about a long-time BBC voice-over artist who lost their gig. There have also been several stories of VA workers being handed contracts that allow the reuse of their voices for AI purposes.

        • funkless_eck@sh.itjust.works · edited · 8 months ago

          The artist you’re referring to is Sara Poyzer - https://m.imdb.com/name/nm1528342/ - she was replaced in one specific way:

          The BBC is making a documentary about someone (as yet unknown) who is dying and has lost the ability to speak. Poyzer was on pencil (like standby, hold-the-date, but not confirmed) to narrate the dying person’s words. Instead, they contracted an AI agency to mimic the dying person’s voice (from when they could still speak).

          It would likely have been cheaper and easier to hire an impressionist, or Ms Poyzer herself, but I assume they are doing it for the “novelty” value, and with the blessing of the terminally ill person.

          For that reason I think my point still stands: they have made the work harder and more expensive, and created a negative PR storm. All problems created by AI, and none solved by it.

          You are incorrect that AI voice contracts are commonplace. SAG negotiated that use of AI voice tools must be compensated as if the actor had recorded the lines themselves (which most actors do from home nowadays), so at best it’s the same cost for an inferior product. In fact it’s more expensive: before, you were paying just the actor, but now you’re paying the actor AND the AI techs.

          edit: and not just that, AI voice products are bad. Yes, you can maybe fudge the uncanny valley a bit by sculpting the prompts and the script to edge towards short sentences delivered in a monotone, narrating an emotionless description without caring about stress patterns, emphasis, meter, inflection, or caesura, and without any breathing sounds (sometimes a positive, sometimes a negative) - but all of that is in an actor’s wheelhouse for free.

    • Sotuanduso@lemm.ee · 8 months ago

      Are you saying that if a company adopts AI to replace a job, they should have to help the replaced workers find new work? Sounds like something one can loophole by cutting the department for totally unrelated reasons before coincidentally realizing that they can have AI do that work, which they totally didn’t think of before firing people.