• maegul (he/they)@lemmy.ml · +174/-4 · 7 months ago

    The moment word got out that Reddit (and now Stack Overflow) were tightening their APIs in order to sell our conversations to AI companies was when the game was given away. And I’m sure there were moments or clues before that.

    This was when the “you’re the product if it’s free” arrangement metastasised into “you’re a data-farming serf for a feudal digital overlord whether you pay or not”.

    Google search transitioning from good search engine for the internet -> bad search engine serving SEO crap and ads -> “just use our AI and forget about the internet” is more of the same. That their search engine is dominated by SEO and ads is part of it … the internet, i.e. other people’s content, isn’t valuable any more, not with any sovereignty or dignity, least of all the kind envisioned in the ideals of the internet.

    The goal now is to be the new internet, where you can bet your ass that there will not be any Tim Berners-Lee open sourcing this. Instead, the internet that we all made is now a feudal landscape on which we all technically “live” and in which we all technically produce content, but which is now all owned, governed and consumed by big tech for their own profits.


    I recall back around the start of YouTube, which IIRC was the first hype moment for the internet after the dotcom crash, there was talk about what structures would emerge on the internet … whether new structures would be created or whether older economic structures would impose themselves and colonise the space. I wasn’t thinking too hard at the time, but it seemed intuitive to me that older structures would at least try very hard to impose themselves.

    But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.

    • erwan@lemmy.ml · +20 · 7 months ago

      Especially coming from Google, who was one of the good guys pushing open standards and interoperability.

      • lanolinoil@lemmy.world · +7/-5 · 7 months ago

        We ruined the world by painting certain men or groups as bad. The centralization of power is the bad thing, and preventing it is the whole purpose of republics as I understand it. Something we used to know and have almost completely forgotten.

    • Hoxton@lemmy.world · +16 · 7 months ago

      Well said! I’m still wondering what happens when the inevitable ouroboros of AI content referencing AI content referencing AI content makes the whole internet a self-perpetuating mess of unreadable content, and makes anything of value these companies once gained basically useless.

      Would that eventually result in fresh, actual human-created content only coming from social media? I guess clauses about using your likeness will be popping up in TikTok at some point (if they aren’t already).

      • maegul (he/they)@lemmy.ml · +9 · 7 months ago (edited)

        I dunno, my feeling is that even if the hype dies down we’re not going back. Like a real transition has happened just like when Facebook took off.

        Humans will still be in the loop through their prompts and various other bits and pieces and platforms (Reddit is still huge) … while we may just adjust to the new standard in the same way that many reported an inability to do deep reading after becoming regular internet users.

        • Hoxton@lemmy.world · +2 · 7 months ago

          You’re absolutely right about not going back. Web 3.0 I guess. I want to be optimistic that a distinction between all the garbage and actual useful or real information will be visible to people, but like you said, general tech and media literacy isn’t encouraging, hey?

          Slightly related, but I’ve actually noticed a government awareness campaign where I live about identifying digital scams. Be nice if that could be extended to incorrect or misleading AI content too.

      • assassin_aragorn@lemmy.world · +5 · 7 months ago

        It should end up self-regulating once AI is training on AI material. That’s the downfall of companies that don’t bother to clearly identify AI-produced material. It’ll spiral into a hilarious mess.

        • Hoxton@lemmy.world · +4 · 7 months ago

          I’m legit looking forward to when Google returns completely garbled and unreadable search results because someone is running an automated ads campaign that sources another automated campaign and so on, with the only reason it rises to the top being that they put in the highest bid.

          I doubt Google will do shit about it, but at least the memes will be good!

        • afraid_of_zombies@lemmy.world · +1 · 7 months ago

          Hasn’t it already happened? All culture is derivative, yes, all of it. And look at how much of it is awful, yet we navigate fine. I keep hearing stats like “every second, YouTube gets 4 more hours of content,” and yet I use YouTube daily, despite being very, very confident that only a fraction of a percent of what it has is of any value to me.

          Same for books, magazines, news, podcasts, radio programs, music, art, comics, recipes, articles…

          We already live in the post-information-explosion world, where the same stuff gets churned over and over again. All I see AI doing is speeding this up. Now, instead of a million YouTube vids I won’t watch getting added next week, it will be ten million.

      • afraid_of_zombies@lemmy.world · +3 · 7 months ago

        TikTok was banned, so it ain’t coming from there. Can’t get universal healthcare, but we can make sure to protect kids from the latest dance craze.

    • Rolando@lemmy.world · +3 · 7 months ago

      But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.

      I didn’t think so either. The funny thing is, Blade Runner, The Matrix, and the whole cyberpunk genre was warning us…

      • maegul (he/they)@lemmy.ml · +1 · 7 months ago

        Yea, but this feels quicker than anyone expected. It’s easy to forget, but AlphaGo beating the best in the world was shocking at the time, and no one saw it coming. We hadn’t sorted out what to do with big monopoly corps yet; we weren’t ready for a whole new technology.

  • NeoNachtwaechter@lemmy.world · +150/-6 · 7 months ago (edited)

    "AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”

    So delusional.

    Do they think that their AI will actually dig the cobalt from the mines, or will the AI simply be the one who sends the children in there to do the digging?

    • lanolinoil@lemmy.world · +47/-21 · 7 months ago

      It will design the machines to build the autonomous robots that mine the cobalt… doing the jobs of several companies at one time and either freeing up several people to pursue leisure or the arts or starve to death from being abandoned by society.

      • riodoro1@lemmy.world · +52/-3 · 7 months ago

        Have you seen the real fucking world?

        It’s gonna make the rich richer and the poor poorer. At least until the gilded age passes.

      • funkless_eck@sh.itjust.works · +12/-7 · 7 months ago

        AI absolutely will not design machines.

        It may be used within strict parameters to speed up the theoretical testing of types of bearings, hinges, or alloys, to predict which ones would perform best under stress testing, prior to actual testing, to eliminate low-hanging fruit. But it will absolutely not generate a new idea for a machine, because it can’t generate new ideas.

          • funkless_eck@sh.itjust.works · +3/-1 · 7 months ago

            can

            might

            sure. But, like I said, those are subject to a lot of caveats - that humans have to set the experiments up to ask the right questions to get those answers.

            • essteeyou@lemmy.world · +4/-1 · 7 months ago

              That’s how it currently is, but I’d be astounded if it didn’t progress quickly from now.

              • funkless_eck@sh.itjust.works · +1 · 7 months ago (edited)

                I would be extremely surprised if, before 2100, we see AI that has no human operator and no data-scientist team, even at a third-party distributor, where those claims are neither a lie nor a weaselly marketing stunt (“technically the operators are contractors and not employed by the company”, etc.).

                We invented the printing press 584 years ago; it still requires a team of human operators.

                • essteeyou@lemmy.world · +1 · 7 months ago

                  A printing press is not a technology with intelligence. It’s like saying we still have to manually operate knives… of course we do.

        • lanolinoil@lemmy.world · +10/-8 · 7 months ago

          The model T will absolutely not replace horse drawn carts – Maybe some small group of people or a family for a vacation but we’ve been using carts to do war logistics for 1000s of years. You think some shaped metal put together is going to replace 1000s of men and horses? lol yeah right

          • funkless_eck@sh.itjust.works · +7/-5 · 7 months ago

            apples and oranges.

            You’re comparing two products with the same value prop: transporting people and goods more effectively than carrying/walking.

            In terms of mining, a drilling machine is more effective than a pickaxe. But we’re comparing current drilling machines to potential drilling machines, so the actual comparison would be:

            • is an AI-designed drilling machine likely to be more productive (for any given definition of productivity) than a human-designed one?

            Well, we know from experience that when (loosely defined) “AI” is used in, e.g., pharma research, it reaps some benefits, but it does not wholesale replace the drug approval process, and it’s still a tool used by, as I originally said, human beings who impose strict parameters on both input and output as part of a larger product and method.

            Back to your example: could a series of algorithmic steps - without any human intervention - provide a better car than any modern car designers? As it stands, no, nor is it on the horizon. Can it be used to spin through 4 million slight variations in hood ornaments and return the top 250 in terms of wind resistance? Maybe, and only if a human operator sets up the experiment correctly.
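The hood-ornament sweep described above is essentially a human-framed search: a person defines the design space, the scoring function, and the cutoff, and the machine only does the ranking. A minimal toy sketch of that idea (all names and the stand-in scoring function are invented for illustration; a real pipeline would call an aerodynamics simulator, and the candidate count is scaled down from 4 million):

```python
import random

random.seed(0)  # reproducible toy run

def drag_score(ornament):
    # Stand-in for a real wind-resistance simulation; lower is better.
    # A real setup would invoke CFD software here.
    height, width = ornament
    return 0.7 * height + 1.3 * width

# Generate candidate hood-ornament variants as (height, width) pairs.
variants = [(random.uniform(1.0, 10.0), random.uniform(1.0, 10.0))
            for _ in range(100_000)]

# Rank every variant by the surrogate score and keep the best 250.
top_250 = sorted(variants, key=drag_score)[:250]

# The human operator still chose the search space, the scoring
# function, and the cutoff: the "setting up the experiment" step.
```

The loop is exhaustive and dumb on purpose: everything "intelligent" about the result comes from how the experiment was framed.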

            • lanolinoil@lemmy.world · +8/-1 · 7 months ago (edited)

              No, the thing I’m comparing is our inability to discern where a new technology will lead and our history of smirking at things like books, cars, the internet and email, AI, etc.

              The first steam engines pulling coal out of the ground were so inefficient that they wouldn’t have made sense for any use case other than mining the fuel that powered them. You could definitely smirk and laugh about engines vs. 10k men and be totally right in that moment, and people were.

              The more history you learn, though, the more you realize this is not only hubristic, it’s also futile, because how we feel about the proliferation of a technology has never had an impact on that technology’s proliferation.

              And, to be clear, I’m not saying no humans will work or have anything to do – I’m saying significantly MORE humans will have nothing to do. Sure you still need all kinds of people even if the robots design and build themselves mostly, but it would be an order of magnitude less than the people needed otherwise.

            • sailingbythelee@lemmy.world · +5 · 7 months ago

              I agree that AI is just a tool, and it excels in areas where an algorithmic approach can yield good results. A human still has to give it the goal and the parameters.

              What’s fascinating about AI, though, is how far we can push the algorithmic approach in the real world. Fighter pilots will say that a machine can never replace a highly-trained human pilot, and it is true that humans do some things better right now. However, AI opens up new tactics. For example, it is virtually certain that AI-controlled drone swarms will become a favored tactic in many circumstances where we currently use human pilots. We still need a human in the loop to set the goal and the parameters. However, even much of that may become automated and abstracted as humans come to rely on AI for target search and acquisition. The pace of battle will also accelerate and the electronic warfare environment will become more saturated, meaning that we will probably also have to turn over a significant amount of decision-making to semi-autonomous AI that humans do not directly control at all times.

              In other words, I think that the line between dumb tool and autonomous machine is very blurry, but the trend is toward more autonomous AI combined with robotics. In the car design example you give, I think that eventually AI will be able to design a better car on its own using an algorithmic approach. Once it can test 4 million hood ornament variations, it can also model body aerodynamics, fuel efficiency, and any other trait that we tell it is desirable. A sufficiently powerful AI will be able to take those initial parameters and automate the process of optimizing them until it eventually spits out an objectively better design. Yes, a human is in the loop initially to design the experiment and provide parameters, but AI uses the output of each experiment to train itself and automate the design of the next experiment, and the next, ad infinitum. Right now we are in the very early stages of AI, and each AI experiment is discrete. We still have to check its output to make sure it is sensible and combine it with other output or tools to yield useable results. We are the mind guiding our discrete AI tools. But over a few more decades, a slow transition to more autonomy is inevitable.

              A few decades ago, if you had asked which tasks an AI would NOT be able to perform well in the future, the answers almost certainly would have been human creative endeavors like writing, painting, and music. And yet, those are the very areas where AI is making incredible progress. Already, AI can draw better, write better, and compose better music than the vast, vast majority of people, and we are just at the beginning of this revolution.

      • ObliviousEnlightenment@lemmy.world · +4 · 7 months ago

        either freeing up several people to pursue leisure or the arts or starve to death from being abandoned by society.

        You know EXACTLY which one it’s gonna be.

      • afraid_of_zombies@lemmy.world · +1 · 7 months ago

        It isn’t the intelligence of the machine designer that is the issue, it is the middlemen and the end user.

        Continuously having to downgrade machines. Wouldn’t want some sales rep seeing something new.

      • Grandwolf319@sh.itjust.works · +3/-3 · 7 months ago

        Hahaha, current ML is basically good guessing; that doesn’t really transfer to building machines that actually have to obey the laws of physics.

        • lanolinoil@lemmy.world · +1 · 7 months ago

          Is it good guessing that you know, when you step out of bed without looking, you won’t fall to your death?

    • Passerby6497@lemmy.world · +22 · 7 months ago

      if

      This word is like Atlas, holding up the world’s shittiest argument, which anyone with 3 working brain cells can see through.

    • TrueStoryBob@lemmy.world · +3 · 7 months ago

      Nah, they’re probably planning to do what Amazon did with their “Just Walk Out” stores… force children into mines and just claim it’s actually AI. As NFTs, cryptocurrency, and so many other hype-tech fads have taught us: marketing is cheaper than development.

    • garibaldi_biscuit@lemmy.world · +2 · 7 months ago (edited)

      Let’s not forget this is all driven by people with the right skillset, in the right place at the right time, who are hell-bent on making vast amounts of money.

      The “visionary technological change” is a secondary justification.

      Permission granted to scrape this comment too, if you like.

    • PoliticalAgitator@lemmy.world · +1 · 7 months ago

      The very first prompt this AGI is given will be “secure as much wealth as possible without breaking any laws that might see us punished”.

      • TheGrandNagus@lemmy.world · +2 · 7 months ago (edited)

        To be fair, that did improve things for the average person, and by a staggering amount.

        The vast majority of people working before the industrial revolution were lowly paid agricultural workers who had enormous instability in employment. Employment was also typically very seasonal, and very hard work.

        That’s before we even get into things like stuff being made cheaper, books being widely available, transport being opened up, medical knowledge skyrocketing, famines going from regular occurrence to rare occurrence, etc as a result of the industrial revolution.

        We had been on a constant trajectory of everyone getting wealthier up until the late 1970s, when a sharp rise in inequality began, a trend that hasn’t stopped. (Thatcher and her shithead twin Reagan?)

        In the mid 70s, the top 1% owned 19.9% of wealth. Now that figure is around 53%.

        • afraid_of_zombies@lemmy.world · +1 · 7 months ago

          Even then, it is “only” the West. China was starving only two generations ago. As a whole, humanity just keeps getting richer and richer. No part of what I am saying is meant to excuse the damage neoliberalism did to wealth equality in the developed world.

          • TheGrandNagus@lemmy.world · +1 · 7 months ago

            Well yeah, the industrial revolution only helped the areas it affected. But that kinda goes without saying.

  • Elias Griffin@lemmy.world · +59/-3 · 7 months ago

    Quote from the subtitle of the article: “and you can’t stop it.”

    Don’t ever let life-deprived, perspective-bubble-wearing, uncompassionate, power-hungry manipulators, “news” people, tell you what you can and cannot do. It doesn’t even pass the smell test.

    My advice: if a media outlet tries to groom you into thinking that nothing you do matters, don’t ever read it again.

    • fukurthumz420@lemmy.world · +14/-1 · 7 months ago

      god, i love this statement. it’s so true. people have to understand our collective power. even if the only tool we have is a hammer, we can still beat their doors down and crush them with it. all it takes is organization and willingness.

  • fukurthumz420@lemmy.world · +53/-10 · 7 months ago

    our collective time would be better spent destroying capitalism than trying to stop AI. AI is wonderful in the right social system.

    • jj4211@lemmy.world · +9/-1 · 7 months ago

      On the other hand, assuming the social system isn’t the right one, a fully realized AI could hypothetically make it harder to change and more tightly stuck the way it is.

      • TheFriar@lemm.ee · +6 · 7 months ago

        Not to mention, any other, more just social system wouldn’t be fucking decimating the environment, ultimately hurting the poorer nations first, for money. And AI is accelerating our CO2 output when we need to be drastically cutting it back. This is very much a pacifying tool as we barrel toward oblivion.

          • TheFriar@lemm.ee · +2/-1 · 7 months ago (edited)

            https://www.ft.com/content/61bd45d9-2c0f-479a-8b24-605d5e72f1ab

            https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/

            https://hai.stanford.edu/news/ais-carbon-footprint-problem

            When the world needs to be drastically altering our way of life to avert the worst of climate change, these companies are getting away with accelerating their output and generating tons of investment and revenue because “that’s what the market dictates.” Just like with crypto/blockchain a few years ago, adding “AI” into any business pitch/model is basically printing money. So companies are more inclined to incorporate this machine learning tech into their business, and this is all happening while the energy demand for increased usage and the constant “updates” and advancements in the field are gobbling up way more energy than we can honestly afford—and really even conceive of. Because they’re trying to hide this fact, given, yknow, the world fuckin ending. Basically, the market and the entire system of media is encouraging and fawning over this “leap” in tech, when we can’t realistically afford to continue our habits we had before this market even existed. So they are accelerating co2 output, everyone cheers, and we all ride merrily to the edge of our doom.

            It’s capitalism once again destroying us and the planet for profit. And everyone mindlessly jumps on board, ooh’ing and aww’ing at the stupid new shit they’re doing (while they infringe upon the work of all artists without compensation, driving human creativity out of the job market in favor of saving corporations some scratch by firing their artists and using AI instead)… I genuinely can’t conceive of how people seem so on board with this concept.

            • fukurthumz420@lemmy.world · +4 · 7 months ago

              “Cutting-edge technology doesn’t have to harm the planet, and research like this is very important in helping us get concrete numbers about emissions. It will also help people understand that the cloud we think that AI models live on is actually very tangible,” says Sasha Luccioni, an AI researcher at Hugging Face who led the work.

              “Once we have those numbers, we can start thinking about when using powerful models is actually necessary and when smaller, more nimble models might be more appropriate,” she says.

              that’s a shame and i’m not surprised at all to see that corporations are using AI for completely unimportant things.

              But one thing to consider is that AI could also lead to solutions that help save the planet, like solving problems with fusion technology. I still believe in science, and I still believe that capitalism is the root of the problem, not the technology itself.

              • TheFriar@lemm.ee · +1 · 7 months ago

                I mean, sure, I agree with you. Capitalism is the problem, no question. I would love a job-replacing tech so people could live lives of leisure and art. But…this system is being built for capitalist ends. It’s built by, funding by, and being put in the hands of the exact people causing the problem.

                I agree that in a hypothetical world, machine learning technology could very well help humanity. But the code and money is in the hands of people who aren’t interested in helping humanity.

                I’m no fan of forced labor for basic necessities. And I’m not advocating for that system by any means, but this tech, in this world, will drive the cost of labor down, drive people from the jobs they’ve been forced to rely upon, and it’s literally taking one of the few job fields where people actually got to express their humanity for their wages: art. Creative writing and design/visual art were one of the few fields people actually dreamt of doing. Because it offered us a living for creating. For being human. And that tiny outlet of humanity in the vast contrivance of capitalism is being devoured by this tech.

                That’s just one small part of my distrust of “AI.” But the underlying problem is as I stated first, which is that this tech, existing in this world at this point in time, isn’t going to free us. It’s another tool by the ownership class to cut costs, decimate the environment, and drive profit. While also killing the small little sliver of human creativity that was allowed to exist under capitalism.

                So again, hypothetically, yes, the tech could be a force for good and for human liberation from meaningless work. But it’s actually making our work even more meaningless, while sequestering another huge chunk of power for the ruling class. It would be great if it could reach its potential as a force for good. But given everything, that is not how it’s being implemented.

                • fukurthumz420@lemmy.world · +3 · 7 months ago

                  your points are completely valid, which is why we really need to start banding together to dismantle the ownership class

                  by

                  any

                  means

                  necessary

                  for the sake of humanity (and all other living things on the planet)

    • TheGrandNagus@lemmy.world · +7 · 7 months ago (edited)

      Exactly. I know it’s easy to automatically froth at the mouth with rage when seeing “AI”, and here anything mentioning it gets automatically rejected, but there are genuinely good use cases.

      Amazing speech synthesis and recognition is useful for anybody, but especially people with certain disabilities.

      Much better translation, spell checking, and help with writing. Helping people understand texts that are written in a complicated way (legalese, technical jargon, condensing EULAs, etc.).

      Infrastructure planning and traffic control.

      Grid energy usage and distribution.

      Image recognition: useful for anybody for things like searching a photo library for a specific thing, but also for people with visual issues who previously had to rely on awful screen-reader software that can’t tell you the content of images unless they were properly tagged (which, as someone with a blind sister who uses computers, I can tell you is rare!).

      Spotting fake reviews, a massive issue online. Flagging bot accounts.

      The potential for them to take over some jobs and free up people to pursue other things in life.

      This technology, if trained ethically, and not used to siphon more data from people, is amazing. It’s how megacorps are using it that’s the problem.

  • Alpha71@lemmy.world · +26 · 7 months ago

    “Yeah, let’s go up against the woman who sued Disney and won. What could go wrong!?”

  • pixxelkick@lemmy.world · +49/-25 · 7 months ago

    I mean, that’s just how it has always worked, this isn’t actually special to AI.

    Tom Hanks does the voice for Woody in the Toy Story movies, but his brother Jim Hanks has a very similar voice, and since he isn’t Tom Hanks he commands a lower salary.

    So many video games and whatnot use Jim’s voice for Woody instead to save a bunch of money, and/or because Tom is typically busy filming movies.

    This isn’t an abnormal situation; voice actors constantly have “sound-alikes” that impersonate them and get paid precisely because they sound similar.

    OpenAI clearly did this.

    It’s hilarious because normally fans are foaming at the mouth if a studio hires a new actor and they sound even a little bit different than the prior actor, and no one bats an eye at studios efforts to try really hard to find a new actor that sounds as close as possible.

    Scarlett declined the offer and now she’s malding that OpenAI went and found some other woman who sounds similar.

    Them’s the breaks; that’s an incredibly common thing that happens in voice acting across the board in video games, TV shows, movies, you name it.

    OpenAI almost certainly would have won the court case if they were able to produce who they actually hired and said person could demo that their voice sounds the same as Gippity’s.

    If they did that, Scarlett wouldn’t have a leg to stand on in court; she can’t sue someone for having a similar voice to hers, lol.

    • Xhieron@lemmy.world
      link
      fedilink
      English
      arrow-up
      36
      arrow-down
      5
      ·
      7 months ago

      She sure can’t. Sounds like all OpenAI has to do is produce the voice actor they used.

      So where is she? …

      Right.

    • BrianTheeBiscuiteer@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      7 months ago

      Well, in the “sound-alike” situation you describe, people were getting paid to voice things. Now it’s just an AI model that’s not getting paid, and the people who made the model probably got paid even less than a sound-alike voice actor would. It’s just more money going to the top.

    • athairmor@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      7 months ago

      Scarlett actually would have a good case if she can show the court that people think it’s her. Tom Waits won a case against Frito Lay for “voice misappropriation” when they had someone imitate his voice for a commercial.

    • catloaf@lemm.ee
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      2
      ·
      7 months ago

      The difference is that apparently they asked ScarJo first and she said no. When they ask Tom Hanks (or really his agent, I assume) the answer is “he’s too busy with movies, try Jim”.

          • gaylord_fartmaster@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            3
            ·
            7 months ago

            Having a talking woman in your phone is not stealing Scarlett Johansson’s likeness, even if they sound somewhat similar. US copyright law is already ridiculous, and you want to make it even more bullshit?

            By that logic, her role in Her was already stealing the likeness of Siri’s voice actor, and she should have been sued for that too.

        • afraid_of_zombies@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          7 months ago

          If you don’t own your image what do you own?

          Also, you know, scale. There is a difference between an Elvis impersonator in Vegas and a huge-ass corporation.

          • gaylord_fartmaster@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            7 months ago

            You own the pile of money you earned for the role you played in someone else’s creative project.

            This isn’t Back to the Future Part II making a Crispin Glover face mask and putting it on an extra; it’s using a woman for a voice acting role for an AI speaking from your phone. And somehow that’s stealing from a movie with the same concept, but not stealing from the actual phone AIs voiced by women that existed before the movie.

            • afraid_of_zombies@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              7 months ago

              How would you feel if I made wheelbarrows of money off your face or voice without your consent and without paying you a penny? What about your family? Got any relatives you care about who would look great in my AI-generated porno?

              The world is schizophrenic about this. On one hand, we know that data is king, and that knowing about a person and having access to what they produce is a super important, very lucrative field; the biggest companies on earth buy and sell data about people. On the other hand, we argue that your image and data have no value and anyone can do what they want with them.

              • gaylord_fartmaster@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                7 months ago

                Then I’d have grounds to sue you for stealing my likeness, just like Crispin Glover did in the example I just gave.

                Are you under the impression that’s what happened here? It isn’t. The voice is clearly not Scarlett Johansson’s, and she doesn’t have any kind of ownership over the concept of an AI in your phone using an upbeat woman’s voice to speak to you.

    • PrincessLeiasCat@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      7 months ago

      Wouldn’t the difference here wrt Tom/Woody be that Tom had already played the role, so there’s some expectation that a similar voice would be used for future versions of Woody if Tom wasn’t available?

      Serious question, I never thought about the point you made so now I’m curious.

  • Optional@lemmy.world
    link
    fedilink
    English
    arrow-up
    24
    ·
    7 months ago

    The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.

    It’s just so fitting that Microsoft is the company most fervently wallowing in it.

  • Flying Squid@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    7 months ago

    I hate that I have to keep saying this- No one seems to be talking about the fact that by giving their AI a human-like voice with simulated emotions, it inherently makes it seem more trustworthy and will get more people to believe its hallucinations are true. And then there will be the people convinced it’s really alive. This is fucking dangerous.

  • Rolando@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    2
    ·
    7 months ago

    OpenAI should have given some money to the people who own the movie “Her”. Then they could have claimed they were just mimicking the character.

  • Cringe2793@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    21
    ·
    7 months ago

    Scarlett Johansson is a troublemaker. “Sounds eerily similar”. It’s not like she has such a unique voice after all.