  • And it allows users to create their own one-off objects that they actually need, rather than a corporation creating an immense surplus of parts, most of which will never reach consumer hands and will end up in a landfill.

    This is key. You can 3D print things to fit your exact needs. Mass-produced injection-molded plastic is only cheap because of the mass production: molds are expensive, so manufacturers necessarily have to produce a lot more than people need and market the item to people who don’t actually need it in order to make up for the upfront cost (rough numbers sketched below).
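
    To put rough numbers on that amortization argument, here is a minimal Python sketch. Every dollar figure below is a made-up assumption for illustration, not a real quote:

    ```python
    # Toy break-even comparison between injection molding and 3D printing.
    # All costs are hypothetical assumptions chosen only to illustrate amortization.
    MOLD_COST = 20_000.00     # one-time cost to machine the injection mold (assumed)
    MOLDED_UNIT_COST = 0.50   # per-part cost once the mold exists (assumed)
    PRINTED_UNIT_COST = 3.00  # per-part cost to 3D print the same object (assumed)

    def cost_per_part_molded(units: int) -> float:
        """Average cost per part when the mold cost is spread over a production run."""
        return MOLD_COST / units + MOLDED_UNIT_COST

    for units in (100, 1_000, 10_000, 100_000):
        print(f"{units:>7} units: molded ${cost_per_part_molded(units):8.2f}/part "
              f"vs printed ${PRINTED_UNIT_COST:.2f}/part")

    # Molding only wins past MOLD_COST / (PRINTED_UNIT_COST - MOLDED_UNIT_COST) units.
    break_even = MOLD_COST / (PRINTED_UNIT_COST - MOLDED_UNIT_COST)
    print(f"Break-even at roughly {break_even:,.0f} units")
    ```

    With these assumed numbers the mold only pays for itself after about 8,000 parts, which is why a manufacturer has to count on selling far more units than any one buyer needs.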

  • It’s not a perfect analogy, because no one is paid to eat ice cream. People do get paid to produce art, and that allows a lot of people to pursue their passion while still being able to house and feed themselves. Automating art and making it cheaper than human labor means they would no longer be able to do that. We’ve automated away jobs that people actually enjoy doing. It’s not a ban per se, but it greatly reduces how much time people can spend on it.


  • It has nothing to do with the meaning. If your training set consists of one subset of strings made of A’s and B’s together and another subset made of C’s and D’s together (i.e. [AB]+ and [CD]+ in regex), and the LLM outputs “ABBABBBDA”, then that output is statistically unlikely because D’s don’t appear alongside A’s and B’s. I have no idea what these sequences mean, nor do I need to know, to see that the output is statistically unlikely.

    In the context of language and LLMs, “statistically likely” roughly means that some human somewhere is more likely to have written this than the alternatives, because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written it is low. A toy sketch of that idea follows below.
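
    And here is the toy sketch: a character-level bigram model fit on a handful of made-up [AB]+ and [CD]+ strings. This is just my own illustration of the statistics, not how any real LLM is implemented:

    ```python
    # Toy character-level bigram model showing what "statistically unlikely" means here.
    # The training strings and smoothing constant are made-up assumptions for illustration.
    from collections import defaultdict
    import math

    training = ["ABAB", "ABBA", "BABB", "AABB",   # strings drawn from [AB]+
                "CDCD", "CDDC", "DCCD", "CCDD"]   # strings drawn from [CD]+

    # Count character-to-character transitions, with "^" as a start-of-string marker.
    counts = defaultdict(lambda: defaultdict(int))
    for s in training:
        prev = "^"
        for ch in s:
            counts[prev][ch] += 1
            prev = ch

    VOCAB = "ABCD"
    ALPHA = 0.01  # tiny smoothing so unseen transitions get a very small, nonzero probability

    def log_prob(s: str) -> float:
        """Log-probability of a string under the bigram counts above."""
        total, prev = 0.0, "^"
        for ch in s:
            denom = sum(counts[prev].values()) + ALPHA * len(VOCAB)
            total += math.log((counts[prev][ch] + ALPHA) / denom)
            prev = ch
        return total

    print(log_prob("ABBABBBA"))   # relatively high: every transition appears in the training data
    print(log_prob("ABBABBBDA"))  # far lower: the B->D and D->A transitions never appear in training
    ```

    The model assigns the second string a much lower probability purely from transition counts, with no notion of what A, B, C, or D “mean”, which is the whole point.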