• Iunnrais@lemm.ee

    Just let anyone scrape it all for any reason. It’s science. Let it be free.

    • chicken@lemmy.dbzer0.com

      The OP tweet seems to be leaning pretty hard on the “AI bad” sentiment. If LLMs make academic knowledge more accessible to people, that’s a good thing, for the same reason that what Aaron Swartz was doing was a good thing.

      • funkless_eck@sh.itjust.works

        That would be good if they did that, but that is not the intent of the org, the purpose of the tool, or the expected or even available outcome.

        It’s important to remember this data is not being scraped to make it available or presentable, but to make a machine that echoes human authorship more convincingly.

        On an extremely simplified level, it doesn’t want to answer 1+1=? with “2”, it wants to appear like a human confidently answering an arithmetic question, even if the exchange is “1+1=?” “yes, 2+3 does equal 9”

        Obviously it can handle simple sums; this is an illustrative example.

        • chicken@lemmy.dbzer0.com

          “that is not the … available outcome.”

          It demonstrably is already though. Paste a document in, then ask questions about its contents; the answer will typically take what’s written there into account. Ask about something you know is in a Wikipedia article that would have been part of its training data, same deal. If you think it can’t do this sort of thing, you can just try it yourself.

          “Obviously it can handle simple sums, this is an illustrative example”

          I am well aware that LLMs can struggle especially with reasoning tasks, and have a bad habit of making up answers in some situations. That’s not the same as being unable to correlate and recall information, which is the relevant task here. Search engines also use machine learning technology and have been able to do that to some extent for years. But with a search engine, even if it’s smart enough to figure out what you wanted and give you the correct link, that’s useless if the content behind the link is only available to institutions that pay thousands a year for the privilege.

          Think about these three things in terms of what information they contain and their capacity to convey it:

          • A search engine

          • A dataset of pirated content from behind academic paywalls

          • An LLM model file that has been trained on said pirated data

          The latter two each have their pros and cons and would likely work better in combination with each other, but they both have an advantage over the search engine: they can tell you about the locked up data, and they can be used to combine the locked up data in novel ways.

          • funkless_eck@sh.itjust.works

            The problem is you can’t take those weaknesses and call it “academic”; it’s a contradiction in terms.

            When a real academic makes up answers, it’s a problem; when ChatGPT does it, it’s part of the expectation.

  • CosmicTurtle0@lemmy.dbzer0.com

    To paraphrase Nixon:

    “When you’re a company, it’s not illegal.”

    To paraphrase Trump:

    “When you’re a company, they just let you do it.”

    • TheOakTree@lemm.ee

      I did some digging. It’s a parody finance website that makes it seem like you can invest in falcons and make a blockchain (flockchain) with them. Dig a little further, go to the linked forum, and you’ll see it’s just a community of people shitposting (mostly).

  • EmbarrassedDrum@lemmy.dbzer0.com

    and in due time, we’ll hack OpenAI and get the sources from the chat module…

    I’ve seen a few glitches before that made ChatGPT just drop entire articles in varying languages.

  • electricprism@lemmy.ml

    Remember what you learned in school: Working as a team to solve a test or problem is unacceptable!!! Unless you are a company town.

  • doctortran@lemm.ee

    Can we be honest about this, please?

    Aaron Swartz went into a secure networking closet and left a computer there to covertly pull data from the server over many days without permission from anyone, which is absolutely not the same thing as scraping public data from the internet.

    He was a hero who didn’t deserve what happened, but it’s patently dishonest to ignore that he was effectively breaking and entering, plus installing a data-harvesting device in the server room, which any organization in the world would rightfully identify as hostile behavior. Even your local library would call the cops if you tried to do that.

    • youmaynotknow@lemmy.ml

      Wow, it’s not often we get to see someone post a comment so full of shit while obscuring so many facts to see if it sticks.

      “Can we be honest”? Apparently you cannot.

    • Venia Silente@lemm.ee

      Why don’t you speak what you truly believe instead of copy-pasting the same gaslighting everywhere? We already made you, anyway.