• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • Yes, you’re anthropomorphizing far too much. An LLM can’t understand, or recall (in the common sense of the word, i.e. have a memory), and is not aware.

    Those are all things that intelligent, thinking beings do. LLMs are none of that. They are a giant black box of math that predicts text. The model doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is what text is statistically most likely to come next, with a little randomization added for “creativity”.
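
    Mechanically, the “most likely text plus a little randomization” described above is usually implemented as temperature sampling over next-token scores. A minimal sketch, with invented toy tokens and scores (not from any real model):

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=1.0, rng=random):
        """Draw the next token from model scores (logits).

        Higher temperature flattens the distribution (more randomness);
        temperature near 0 almost always picks the single most likely token.
        """
        scaled = [score / temperature for score in logits.values()]
        # softmax: turn scores into probabilities (subtract max for stability)
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # weighted random draw -- this is the "creativity" knob
        return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

    # toy scores for tokens that might follow "The cat sat on the"
    logits = {"mat": 5.0, "sofa": 3.5, "moon": 0.5}
    ```

    At low temperature the draw is effectively deterministic; at high temperature even unlikely tokens get picked sometimes. None of this involves understanding, memory, or awareness, just arithmetic over scores.
    
    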




  • Eranziel@lemmy.world to Technology@lemmy.world · The GPT Era Is Already Ending
    4 months ago

    This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that would require multiple dedicated, grid-scale nuclear reactors.

    I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.







  • I worked on an industrial robot once, and we parked it so that the middle section of the arm was up above the robot, where it was supposed to be level. I could tell from 50 feet away, at a glance, that it wasn’t, so we checked. It was off by literally 1 degree.

    Degrees are bigger than we think, but also our eyes are incredible instruments.
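
    The arithmetic backs this up. For a tilted segment, the tip sits sideways by length × sin(angle); the 1 m segment length below is an assumption for illustration, not from the original story:

    ```python
    import math

    def tip_offset_mm(length_mm, degrees):
        """Sideways displacement of the tip of a segment tilted by `degrees`."""
        return length_mm * math.sin(math.radians(degrees))

    # A hypothetical 1 m arm segment tilted just 1 degree:
    # the tip ends up roughly 17.5 mm off level -- about the width of
    # a finger, which is easily visible from across a room.
    offset = tip_offset_mm(1000, 1)
    ```
    
    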