• BURN@lemmy.world
    1 year ago

    Good

    AI should not be given free rein to train on anything and everything we’ve ever created. Copyright holders should be able to decide whether their works may be used for model training, especially commercial model training. We’re not going to stop a hobbyist, but Google/Microsoft/OpenAI should be paying for the materials they use and compensating the creators.

    • SatanicNotMessianic@lemmy.ml
      1 year ago

      While that’s understandable, I think it’s important to recognize that this is something where we’re going to have to tread pretty carefully.

      If a human wants to become a writer, we tell them to read. If you want to write science fiction, you should study the craft of writing, ranging from plots and storylines to character development to Stephen King’s advice on avoiding adverbs. You also have to read science fiction so you know what has been done, how the genre handles storytelling, what is allowed versus shunned, and how the genre has evolved and where it’s going. The point is not to write exactly like Heinlein (god forbid), but to throw Heinlein into the mix with other classic and contemporary authors.

      Likewise, if you want to study fine art, you do so by studying other artists. You learn about composition, perspective, and color by studying works of other artists. You study art history, broken down geographically and by period. You study DaVinci’s subtle use of shading and Mondrian’s bold colors and geometry. Art students will sit in museums for hours reproducing paintings or working from photographs.

      Generative AI is similar. Being software (and at a fairly early stage at that), it’s both more naive and in some ways more powerful than human artists. Once trained, it can crank out a hundred paintings or short stories per hour, but some of the people in the paintings will have 14 fingers and the stories might be formulaic and dull. AI art is always better when glanced at on your phone than when examined in detail on a big screen.

      In both the cases of human learners and generative AI, a neural network(-like) structure is being conditioned to associate weights between concepts, whether it’s how to paint a picture or how to create one by using 1000 words.
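      As a toy illustration of that shared mechanism (the concept pairs and numbers below are made up for illustration; this is not any real training framework), “conditioning weights between concepts” amounts to nudging an association a small step toward each observation:

```python
# Toy sketch of weight conditioning (illustrative only, not a real framework).
# "Learning" nudges an association weight toward each observed pairing using
# a scaled-error update, the same basic rule at the heart of neural-net training.
weights = {("heinlein", "rockets"): 0.0, ("heinlein", "adverbs"): 0.0}

def observe(pair, strength=1.0, lr=0.1):
    # Move the weight a fraction (lr) of the way toward the observed strength.
    weights[pair] += lr * (strength - weights[pair])

# Repeated exposure strengthens an association; a single weak exposure barely registers.
for _ in range(50):
    observe(("heinlein", "rockets"))
observe(("heinlein", "adverbs"), strength=0.2)

print(weights)  # the "rockets" weight ends up far higher than "adverbs"
```

Whether the substrate is biological or silicon, repetition and salience shape the weights; the open question is only whether the law should treat the two kinds of learner the same way.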

      A friend of mine who was an attorney used to say “bad facts make bad law.” It means that misinterpretation, over-generalization, politicization, and a sense of urgency can make for both bad legislation and bad court decisions. That’s especially true when the legislators and courts aren’t well educated in the subjects they’re asked to judge.

      In a sense, it’s a new technology that we don’t fully understand - and by “we” I’m including the researchers. It’s theoretically and in some ways mechanically grounded in old technology that we also don’t understand - biological neural networks and complex adaptive systems.

      We wouldn’t object to a journalism student reading articles online to learn how to write like a reporter, and we rightfully feel anger over the situation of someone like Aaron Swartz. As a scientist, I want my papers read by as many people as possible. I’ve paid thousands of dollars per paper to make sure they’re freely available and not stuck behind a paywall. On the other hand, I was paid while writing those papers. I am not paid for the paper, but writing the paper was part of my job.

      I realize that is a case of the copyright holder (me) opening up my work to whoever wants a copy. On the other other hand, we would find it strange if an author forbade their work being read by someone who wants to learn from it, even if they want to learn how to write. We live in a time where technology makes things like DRM possible, which attempts to make it difficult or impossible to create a copy of that work. We live in societies that will send people to prison for copying literal bits of information without a license to do so. You can play a game, and you can make a similar game. You can play a thousand games, and make one that blends different elements of all of them. But if you violate IP, you can be sued.

      I think that’s what it comes down to. We need to figure out what constitutes intellectual property and what rights go with it. What constitutes cultural property, and what rights do people have to works made available for reading or viewing? It’s easy to say that a company shouldn’t be able to hack open a paywall to get at WSJ content, but does that also go for people posting open access to Medium?

      I don’t have the answers, and I do want people treated fairly. I recognize the tremendous potential for abuse of LLMs in generating viral propaganda, and I recognize that in another generation they may start making a real impact on the economy in terms of dislocating people. I’m not against legislation. I don’t expect the industry to regulate itself, because that’s not how the world works. I’d just like for it to be done deliberately and realistically and with the understanding that we’re not going to get it right and will have to keep tuning the laws as the technology and our understanding continue to evolve.

      • hypnotoad__@lemmy.world
        1 year ago

        Sorry this is a bit too level-headed for me, can you please repeat with a bullhorn, and use 4-letter words instead? I need to know who to blame here.

    • Swervish@lemmy.ml
      1 year ago

      Not trying to argue or troll, but I really don’t get this take, maybe I’m just naive though.

      Like yea, fuck Big Data, but…

      Humans do this naturally, we consume data, we copy data, sometimes for profit. When a program does it, people freak out?

      edit well fuck me for taking 10 minutes to write my comment, seems this was already said and covered as I was typing mine lol

      • QHC@lemmy.world
        1 year ago

        It’s just a natural extension of the concept that entities have some kind of ownership of their creation and thus some say over how it’s used. We already do this for humans and human-based organizations, so why would a program not need to follow the same rules?

        • Hello Hotel@lemmy.world
          1 year ago

          I like that argument as it applies to our AI, which isn’t meant to reject bad ideas or motifs but to never have a bad idea in the first place. This setup means the bot’s path of least resistance is to copy someone’s homework. Nobody wants the bot to do that.

          Someday we may have AI that this argument is harder to apply to.

          An attempt to explain (somewhat tangential):

          Text generators have a “most correct” output that looks and behaves much like pressing the first keyboard-suggested word over and over. We add noise, so that on a dice roll the bot is forced to pick a random alternative instead, as if in the keyboard example you typed a random five-letter word every so often.
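          A minimal sketch of that noise dial (the word scores here are made up to stand in for a real model’s output):

```python
import math
import random

# Toy next-word scores, standing in for a real model's output (made-up values).
scores = {"the": 5.0, "a": 3.0, "zebra": 0.5}

def sample_next(scores, temperature):
    # temperature is the "creativity"/noise dial described above:
    # 0 means always press the first suggested word; higher values make
    # the dice roll land on unlikely words more and more often.
    if temperature == 0:
        return max(scores, key=scores.get)
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

print(sample_next(scores, 0))    # deterministic: always the top suggestion
print(sample_next(scores, 5.0))  # noisy: sometimes picks a low-scoring word
```

With the dial at zero the output is the repetitive “keyboard suggestion” text; turned up too far, it degenerates into noise.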

    • lily33@lemm.ee
      1 year ago

      No.

      • A pen manufacturer should not be able to decide what people can and can’t write with their pens.
      • A computer manufacturer should not be able to limit how people use their computers (I know they do - especially on phones and consoles - and seem to want to do this to PCs too now - but they shouldn’t).
      • In that exact same vein, writers should not be able to tell people what they can use the books they purchased for.


      We 100% need to ensure that automation and AI benefits everyone, not a few select companies. But copyright is totally the wrong mechanism for that.

      • BURN@lemmy.world
        1 year ago

        A pen is not a creative work. A creative work is much different from something that’s mass-produced.

        Nobody is limiting how people can use their pc. This would be regulations targeted at commercial use and monetization.

        Writers can already do that. Commercial licensing is a thing.

        • lily33@lemm.ee
          1 year ago

          Nobody is limiting how people can use their pc. This would be regulations targeted at commercial use and monetization.

          … Google’s proposed Web Integrity API seems like a move in that direction to me.

          But that’s beside the point. I was trying to establish the principle that people who make things shouldn’t be able to impose limitations on how those things are used later on.

          A pen is not a creative work. A creative work is much different from something that’s mass-produced.

          Why should that difference matter, in particular when it comes to the principle I mentioned?

          • walrusintraining@lemmy.world
            1 year ago

            It’s not like AI is using works to create something new. ChatGPT is similar to someone buying 10 different books, binding them into one “new” book as a collection of stories, then mass-producing and selling it. It’s the same thing, just much more convoluted.

            Edit: to reply to your main point, people who make things should absolutely be able to impose limitations on how they are used. That’s what copyright is. Someone else made a song, can you freely use that song in your movie since you listened to it once? Not without their permission. You wrote a book, can I buy a copy and then use it to make more copies and sell? Not without your permission.

            • lily33@lemm.ee
              1 year ago

              Except it’s not a collection of stories, it’s an amalgamation, and at a very granular level at that. For instance, take the beginning of a sentence from the middle of the first book, then switch to a sentence in the third, then finish with another part of the original sentence. Change some words here and there, add one for good measure (based on some sentence in the seventh book). Then fix the grammar. All the while, keep track that there’s some continuity between the sentences you’re stringing together.

              That counts as “new” for me. And a lot of stuff humans do isn’t more original.

              • legion02@lemmy.world
                1 year ago

                The maybe bigger argument against free-rein training is that you’re attributing personal rights to a language model. Also, even people aren’t completely free to derive things from memory (legally), which is why clean-room design is a thing.

          • yokonzo@lemmy.world
            1 year ago

            I can see your argument; it’s just that your metaphor wasn’t very strong, and I think it made things a bit confusing.

          • BURN@lemmy.world
            1 year ago

            Google’s Web Integrity API is very different from what I’m proposing. “Nobody” was more in relation to regulating this.

            I hold the opposite opinion in that creatives (I’d almost say individuals only, no companies) own all rights to their work and can impose any limitations they’d like on (edit: commercial) use. Current copyright law doesn’t extend quite that far though.

            A creative work is not a reproducible, quantifiable product. No two are exactly alike until they’re mass-produced.

            Your analogy works better with a person than a pen: why is it okay when a person reads something and uses it as inspiration, but not a computer? This comes back around to my argument about transformative works. An AI cannot add anything new, only guess based on historical knowledge. One of the best traits of the human race is our ability to be creative and bring forth completely new ideas.

            Edit: added in a commercial use specifier after it was pointed out that the rules over individuals would be too restrictive.

            • lily33@lemm.ee
              1 year ago

              I hold the opposite opinion in that creatives (I’d almost say individuals only, no companies) own all rights to their work and can impose any limitations they’d like on use. Current copyright law doesn’t extend quite that far though.

              I think that point’s worth discussing by itself, leaving aside the AI, since you stated it quite generally.

              I came up with some examples:

              • Let’s say an author really hates when quotes are taken out of context, and has stipulated that their book must only appear in whole. Do you think I should be able to decorate the interior of my own room with quotes from it?
              • What about an author who insists readers read no more than one chapter per day, to force them to think on the chapter before moving on. Would that be a valid use restriction?
              • Say an author wrote a book to critique capitalism, and insists that is its purpose; but when I read the book, I interpreted it very differently and saw in its pages a very strong argument for capitalism. Should I be able to use the book to make that argument for capitalism?

              Taking your statement at face value - the answers should be: no (I can’t decorate), yes (it’s a valid restriction), and no (I can’t use it to illustrate my argument). But maybe you didn’t mean it quite that strict? What do you think on each example and why?

              • BURN@lemmy.world
                1 year ago

                Fair points. I think the restrictions in most part would have to be in place for commercial use primarily.

                So under your examples

                • Yes, you should. Since there’s no commercial usage, you’re not profiting off their work; you’re simply using your copy of it to decorate a personal space.

                • If we restrict the copyright protections to only apply to commercial use then this becomes a non-issue. The copyright extends to reproduction (or performance in the case of music) of the work in any kind, but does not extend to complete control over personal usage.

                • Personal interpretation is fine. If you start using that argument in some kind of publication or “performance”, then fair use gets called into question. Quoting with appropriate attribution is fine, but say you print a chapter of the book, then a chapter of critique. Where is that line drawn? Right now it’s ambiguous at best, downright invisible most of the time.

                I appreciate the well-thought-out response. As a musician and developer, I hold strong views on copyright of an individual’s creative work, and believe creators should have control over how their products are used to make money. These views are probably a little too restrictive for the general public and probably won’t ever garner a huge amount of support.

                I dropped the ball on specifying use as commercial use; I’ll put an edit at the bottom of the OP to clarify it too.

      • DarkWasp@lemmy.world
        1 year ago

        All of the examples you listed have nothing to do with how OpenAI was created and set up. It was trained on copyrighted work; how is that remotely comparable to purchasing a pen?

      • fkn@lemmy.world
        1 year ago

        You made two arguments for why they shouldn’t be able to train on the work for free and then said that they can with the third?

        Did OpenAI pay for the material? If not, then it’s illegal.

        Additionally, copyright, trademarks, and patents are about reproduction, not use.

        If you bought a pen that was patented, then made a copy of the pen and sold it as yours, that’s illegal. That’s the analogy for what OpenAI is doing with books.

        Plagiarism and reproduction of text are the parts that are illegal. If you take the “AI” part out, what OpenAI is doing is blatantly illegal.

        • lily33@lemm.ee
          1 year ago

          Just now, I tried to get Llama-2 (I’m not using OpenAI’s stuff because they’re not open) to reproduce the first few paragraphs of Harry Potter and the Philosopher’s Stone, and it didn’t work at all. It created something vaguely resembling it, but with lots of made-up stuff that doesn’t make much sense. I certainly can’t use it to read the book or pirate it.

          • ShittyBeatlesFCPres@lemmy.world
            1 year ago

            Maybe it’s trained not to repeat JK Rowling’s horseshit verbatim. I’d probably put that in my algorithm. “No matter how many times a celebrity is quoted in these articles, do not take them seriously. Especially JK Rowling. But especially especially Kanye West.”

          • fkn@lemmy.world
            1 year ago

            Openai:

            I’m sorry, but I can’t provide verbatim excerpts from copyrighted texts. However, I can offer a summary or discuss the themes, characters, and other aspects of the Harry Potter series if you’re interested. Just let me know how you’d like to proceed!

            That doesn’t mean the copyrighted material isn’t in there. It also doesn’t mean that the unrestricted model can’t.

            Edit: I did get it to tell me that it does have the verbatim text in its data.

            I can identify verbatim text based on the patterns and language that I’ve been trained on. Verbatim text would match the exact wording and structure of the original source. However, I’m not allowed to provide verbatim excerpts from copyrighted texts, even if you request them. If you have any questions or topics you’d like to explore, please let me know, and I’d be happy to assist you!

            Here we go, I can get chat gpt to give me sentence by sentence:

            “Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much.”

            • BURN@lemmy.world
              1 year ago

              Most publicly available/hosted models (self-hosted models are an exception) have an absolute laundry list of extra parameters and checks run on every query to limit the model as much as possible and tailor the outputs.

            • fkn@lemmy.world
              1 year ago

              This wasn’t even hard… I got it spitting out random verbatim bits of Harry Potter. It won’t do the whole thing, and some of it is garbage, but these are pretty clear copyright violations.

      • QHC@lemmy.world
        1 year ago

        Computer manufacturers aren’t making AI software. If someone uses an HP copier to make illegal copies of a book and then distributes those pages to other people for free, the person that used the copier is breaking the law, not the company that made the copier.

      • Vent@lemm.ee
        1 year ago

        They didn’t pay the writers, though; that’s the whole point.

        • lily33@lemm.ee
          1 year ago

          True - but I don’t think the goal here is to establish that AI companies must purchase 1 copy of each book they use. Rather, the point seems to be that they should need separate, special permission for AI training.

          • BURN@lemmy.world
            1 year ago

            I believe this is where it’ll inevitably go. However, I’m not sure it’ll be just AI; rather, hopefully, more protections around individual creative work and how it can be used by corporations for internal or external data collection.

            This really does depend on privacy laws as well and probably data collection, retention and usage too.

    • coheedcollapse@lemmy.world
      1 year ago

      With that mindset, only the powerful will have access to these models.

      Places like Reddit, Google, Facebook, etc, places that can rope you into giving away rights to your data with TOS stipulations.

      Locking down everything available on the Internet by piling more bullshit onto already-draconian copyright rules isn’t the answer, and it surprises the shit out of me how quickly fellow artists, writers, and creatives piled in on the side of Disney, the RIAA, and other former enemies the second they started perceiving ML as a threat to their livelihood.

      I do believe restrictions should be looked into when it comes to large organizations and industries replacing creators with ML, but attacking open ML models directly is going to result in the common folk losing access to the tools while corporations continue working exactly as they do now, paying for access to locked-down ML built on content from companies that trade in huge amounts of data.

      Not to mention it’s going to give the giants who have been leveraging their copyright powers against just about everyone on the internet more power to do just that. That’s the last thing we need.

    • ArmokGoB@lemmy.dbzer0.com
      1 year ago

      I disagree. I think that there should be zero regulation of the datasets as long as the produced content is noticeably derivative, in the same way that humans can produce derivative works using other tools.

      • Hello Hotel@lemmy.world
        1 year ago

        Good in theory. The problem is, if your bot is given too much exposure to a specific piece of media, and the “creativity” value that adds random noise (and for some setups forces it to improvise) is too low, you get whatever impression the content made on the AI, like an imperfect photocopy (a non-expert’s explanation of “memorization”). Too high and you get random noise.

        • ArmokGoB@lemmy.dbzer0.com
          1 year ago

          if your bot is given too much exposure to a specific piece of media and the “creativity” value that adds random noise (and for some setups forces it to improvise) is too low, you get whatever impression the content made on the AI, like an imperfect photocopy

          Then it’s a cheap copy, not noticeably derivative, and whoever is hosting the trained bot should probably take it down.

          Too high and you get random noise.

          Then the bot is trash. Legal and non-infringing, but trash.

          There is a happy medium where SD, MJ, and many other text-to-image generators currently exist. You can prompt in such a way (or exploit other vulnerabilities) to create “imperfect photocopies,” but you can also create cheap, infringing works with any number of digital and physical tools.

      • adrian783@lemmy.world
        1 year ago

        LLMs are not human, the process used to train LLMs is not human-like, and LLMs don’t have human needs or desires, or rights for that matter.

        Comparing them to humans has been a flawed analogy since day 1.

        • King@lemmy.world
          1 year ago

          LLMs have no desires = no derivative works? Let an LLM handle your comments; they’ll make more sense.

    • TheDarkKnight@lemmy.world
      1 year ago

      I understand the sentiment (and agree on moral grounds), but I think this would put us at an extreme disadvantage in the development of this technology compared to competing nations. Unless you can get all countries to agree, and somehow enforce it, I think it dramatically hinders our ability to push forward in this space.

    • makyo@lemmy.world
      1 year ago

      I think any LLM should be required to be free to use. The companies can charge for extra bells and whistles like document upload, but the core model must be free. They’re free to make their billions, but it shouldn’t be on a model built by scraping all the information of humanity for free.

    • Hangglide@lemmy.world
      1 year ago

      Bullshit. If I learn engineering from a textbook, or a website, and then go on to design a cool new widget that makes millions, the copyright holder of the textbook or website should get zero dollars from me.

      It should be no different for an AI.

        • Marxine@lemmy.ml
          1 year ago

          While I agree, corporations shouldn’t make bucks on knowledge (sorta) for which they basically eavesdropped on and violated the privacy of millions of people.

          AI solutions are made from people’s ideas, and should be freely accessible by the people by definition. It not being sustainable as a business model is also a feature in this case, since there’d be no intrinsic incentive to steal data and violate privacy.

      • Treczoks@lemmy.world
        1 year ago

        Yes, but what about you going on to teach engineering, and writing a textbook for it that is awfully close to the ones you used? Current AI is at a stage where it just “remixes” content it gobbled up, and is not (yet) advanced enough to actually learn and derive from it.

      • Mouselemming@sh.itjust.works
        1 year ago

        Last time I looked, textbooks were fucking expensive. You might be able to borrow one from the library, of course. But most people who study something pay up front for the information they’re studying from.

      • Shazbot@lemmy.world
        1 year ago

        Every time I see this argument it reminds me of how little people understand how copyright works.

        • When you buy that book the monetary amount is fair compensation for the contents inside. What you do afterwards is your own business so long as it does not violate the terms within the fine print of the book (no unauthorized reproductions, etc.)
        • When someone is contracted for an ad campaign there will be usage rights in the contract detailing the time frame and scope for fair compensation (the creative fee + expenses). If the campaign does well, they can negotiate residuals (if not already included) because the scope now exceeds the initial offer of fair compensation.
        • When you watch a movie on TV, the copyright holder(s) of that movie are given fair compensation for the number of times played. From the copyright holders, every artist is paid a royalty. Jackie Chan and Chris Tucker still get royalty checks whenever Rush Hour 2 airs or is streamed, as do all the other obscure actors and contributing artists.
        • Deviant Art and ArtStation provide free hosting for artists in exchange for a license that lets them distribute images to visitors. The artists have agreed to fair compensation in the form of free hosting and potential promotion should their work start trending, reaching all front page visitors of the site. Similarly, when the artists use the printing services of these sites they provide a license to reproduce and ship their works, as fair compensation the sites receive a portion of the artists’ asking price.

        The crux is fair compensation. The rights holder has to agree to the usage, with clear terms and conditions for their creative works, in exchange for a monetary sum (single or recurring) and/or a service of similar or equal value with a designated party. That’s why AI continues to be in hot water. Just because you can suck up the data does not mean the data is public domain. Nor does it mean the license used between interested parties transfers to an AI company during collection. If AI companies want to monetize their services, they’re going to have to provide fair compensation for the non-public-domain works used.

      • VonCesaw@lemmy.world
        1 year ago

        Human learning considers context, experience, and relation to previous works

        ‘AI’ has the words verbatim in its database and will occasionally spit them out verbatim

    • Falmarri@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      4
      ·
      1 year ago

      What’s the basis for this? Why can a human read a thing and base their knowledge on it, but not a machine?

      • BURN@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        7
        ·
        1 year ago

        Because a human understands and transforms the work. The machine runs statistical analysis and regurgitates a remix of what it was given. There’s no understanding or transformation; it’s just whichever word is statistically most likely to come next. Humans add to the work, LLMs don’t.

        Machines do not learn. LLMs do not “know” anything. They make guesses based on their inputs. The reason they appear to be so right is the scale of data they’re trained on.
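
        The “statistically likely next word” idea above can be sketched with a toy bigram model — real LLMs use neural networks over subword tokens and a far larger context, but the core step is still “pick a probable next token given what came before”:

        ```python
        from collections import Counter, defaultdict

        # Toy corpus; a real model trains on billions of words.
        corpus = "the cat sat on the mat and the cat slept".split()

        # Count, for each word, which words follow it and how often.
        successors = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            successors[prev][nxt] += 1

        def next_word(word):
            # Emit the statistically most common follower of `word`.
            return successors[word].most_common(1)[0][0]

        print(next_word("the"))  # → cat ("cat" follows "the" twice, "mat" once)
        ```

        Nothing here “knows” anything; it just reflects frequencies in the training data, which is the commenter’s point.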

        This is going to become a crazy copyright battle that will likely lead to the entirety of copyright law being rewritten.

        • fkn@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          5
          ·
          1 year ago

          I don’t know if I agree with everything you wrote, but I think the argument about LLMs basically transforming the text is important.

          Converting written text into numbers doesn’t fundamentally change the text. It’s still the author’s original work, just translated into a vector format. Reproduction of that vector format is still reproduction without citation.
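
          A minimal sketch of that point: mapping text to numbers is an invertible encoding, not a transformation of the work. Here the “vector” is just a list of code points — real models use learned token embeddings, but the text that went in is recoverable in the same way:

          ```python
          def encode(text):
              # Text -> vector of integers (one code point per character).
              return [ord(ch) for ch in text]

          def decode(vector):
              # Vector of integers -> exactly the original text.
              return "".join(chr(n) for n in vector)

          sentence = "It was a pleasure to burn."
          vec = encode(sentence)
          assert decode(vec) == sentence  # round-trips with no loss
          ```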

        • Dojan@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          It’s also the scale of their context, not just the data. More (good) data and lots of (good) varied data is obviously better, but the perceived cleverness isn’t owed to data alone.

          I do hope copyright law gets rewritten. It is dated and hasn’t kept up with society or technology at all.

        • atzanteol@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

          This is going to become a crazy copyright battle that will likely lead to the entirety of copyright law being rewritten.

          I think this is very unlikely. All of law is precedent.

          Google uses copyrighted works for many things that are “algorithmic” but not AI and people aren’t shitting themselves over it.

          Why would AI be different? So long as copyright isn’t infringed at least.

      • gcheliotis@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        5
        ·
        edit-2
        1 year ago

        That machine is a commercial product. Quite unlike a human being, in essence, purpose and function. So I do not think the comparison is valid here, unless it were perhaps a sentient artificial being, free to act of its own accord. But that is not what we’re talking about here. We must not be carried away by our imaginations; these language models are (often proprietary and for-profit) products.

        • Falmarri@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          1 year ago

          I don’t see how that’s relevant. A company can pay someone to read copyrighted work, learn from it, and then perform a task for the benefit of the company related to the learning.

          • krische@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            2
            ·
            1 year ago

            But how did that person acquire the copyrighted work? Was the copyrighted material paid for?

            That’s the crux of the issue: OpenAI isn’t paying for the copyrighted work they are “reading”, are they?

            • Falmarri@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              What does paying for anything have to do with what we’re talking about here? They’re ingesting freely available content that anyone with a web browser could read.

      • BURN@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        1 year ago

        Open sourcing the models does absolutely nothing. The fact of the matter is that the people who create these models can’t quantifiably show how they work, because those layers have been abstracted so far beyond the code that there’s no way to understand them.

      • BURN@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        3
        ·
        1 year ago

        Or a creative who hates to see the entire soul of the human race boiled down to a computer doing a whole lot of math.

        AI isn’t going to put office workers out of a job just yet, but it’s sure going to end the careers of a whole lot of artists, who won’t get entry-level opportunities anymore because an AI can do 90% of the job and all that’s needed is someone to sort the outputs.