Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 830 Comments
Joined 1 year ago
Cake day: March 3rd, 2024


  • My main recordings folder is 175 GB for a little over ten years’ worth of recordings. That’s not really all that much - considering how little a terabyte hard drive costs these days, it’s a trivial expense, even when you include the various backups I keep (I definitely don’t want a crash to take all that out).

    My GPU’s reasonably hefty, an RTX 4090 with 24GB of VRAM. But AI is a rapidly changing technology right now, so who knows what the next six months will bring. Someone might come out with an awesome lightweight model, or someone might announce they’re going to be selling a cheap AI-specific card. My view has always been “save the data now, because you can’t save it later if you didn’t save it now. You can process it any time.”



  • Not that often, but my search tools aren’t very refined yet so it’s probably a bit of a chicken-and-egg problem. The technology is advancing rapidly right now so I’m not putting a whole lot of effort into polish yet, since in six months some dramatic new tool might come out that invalidates everything I did so far. Like that potential WhisperX switch.

    Most recently, I remember a situation where I didn’t remember the name of some NPCs from a roleplaying game that were only in one or two adventures. I did a little searching and found them in a transcript from 2017. That was fun.
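    A search over that kind of index can be quite simple. Here’s a minimal sketch, assuming the index is a JSON list of entries holding the AI-generated tags and summaries alongside the raw transcripts - the file layout and field names are hypothetical, not the author’s actual format:

```python
# Rough sketch of a keyword search over a JSON transcript index.
# The index format (file/tags/summary/transcript fields) is an assumption.
import json
from pathlib import Path


def search_index(index_path: Path, query: str) -> list[str]:
    """Return filenames whose tags, summary, or transcript mention the query."""
    q = query.lower()
    hits = []
    for entry in json.loads(index_path.read_text()):
        haystack = " ".join([
            " ".join(entry.get("tags", [])),
            entry.get("summary", ""),
            entry.get("transcript", ""),
        ]).lower()
        if q in haystack:
            hits.append(entry["file"])
    return hits
```

    Searching the tags and summaries first and only falling back to the full transcripts would cut down on noise as the archive grows.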





  • You asked:

    Everyone likes to believe they’re thinking independently. That they’ve arrived at their beliefs through logic, self-honesty, and some kind of epistemic discipline. But here’s the problem - that belief itself is suspiciously comforting. So how can you tell it’s true? […] I’m asking: what’s your actual evidence that you think the way you think you do? Not in terms of the content of your beliefs, but the process behind them. What makes you confident you’re reasoning - not just rationalizing?

    And I’m answering that. You literally asked for “actual evidence,” and I gave links to the specific research I’m referencing.

    I’m not here to argue with you over the meaning of the word “consciousness” when you didn’t even ask about that in your question in the first place. If you think I’m talking about something other than consciousness, go ahead and tell me what other word for it suits you.



  • You might be referring to the split-brain experiments, where researchers studied patients who had their brain hemispheres separated by cutting the corpus callosum – the “bridge” between the two sides.

    Nope, I would have described the split-brain experiments if that’s what I was referring to. I dug around a bit to find a direct reference and I think it was Movement Intention After Parietal Cortex Stimulation in Humans by Desmurget et al. In particular:

    the fact that patients experienced a conscious desire to move indicates that stimulation did not merely evoke a mental image of a movement but also the intention to produce a movement, an internal state that resembles what Searle called “intention in action”

    I did misremember one detail: the patients only felt the intention to move; they didn’t actually move their limbs when those brain regions were stimulated.

    A related bit of research I dug up on this reference hunt, which I’d forgotten about but is also neat: Libet’s experiments in the 1980s, which used the timing of brain activity to measure when a person formed an intention to do something versus when they became consciously aware of having formed it. There was a significant delay between those two events, with the intention coming first and the conscious mind only later “catching up” and deciding that it was going to do the thing the brain was already in the process of doing.

    As for consciousness, I think you might be using the term a bit differently from how it’s typically used in philosophical discussions.

    Probably, I’m less interested in philosophy than I am in actual measurable neurology. The whole point of all this is that human introspection appears to be flawed, and a lot of philosophy relies heavily on introspection. So I’d rather read about people measuring brain activity than about people merely thinking about brain activity.

    This, I (and many others) would argue, is the only thing in the entire universe that cannot be an illusion.

    You can argue it all you like, but in the end science requires evidence to back it up.


  • It’s funny. I’ve seen research on LLMs “reasoning” and “introspecting” showing that when you ask them why they answered a question in a certain way, they make up stories that don’t match how their neurons actually fired. A common response in the comments is to triumphantly crow about how this shows they’re not “self-aware” or “actually thinking” or whatever.

    But it may be the same with humans. There have been fun experiments where people had neurons in their brains artificially stimulated to make them take some action, such as reaching out with a hand. When asked why they did it, they’d say - and believe - that they did it for some made-up reason, like they were just stretching or wanted to pick something up. That held even when they knew full well they were in an experiment that was going to use artificial stimulation to make them do it.

    I suspect that much of what we call “consciousness” is just made up after-the-fact to explain to ourselves why we do the things that we do. Maybe even all of it, for all we currently know. It’s a fun shower thought to ponder, if nothing else. And perhaps now that we’ve got AI to experiment with in addition to just our messy organic brains we’ll be able to figure it all out with more rigor. Interesting times ahead.

    I’m not terribly concerned about it, though. If it turns out that this is how we’ve been operating all along, well, it’s how we’ve been operating all along. I’ve liked being me so far, why should that change when the curtain’s pulled back and I can see the hamster in the wheel that’s been making me work like that all along? It doesn’t really change anything, and I’d like to know.



  • I watched a very comprehensive and professional video by Captain Steeeve on this subject earlier today. He didn’t outright say that one of the pilots deliberately downed the plane, but it was very clear that he thought that was the only explanation that really made sense here. Why do you say it sounds like they “did not mean to do so”? The switches are designed not to be movable without considerable deliberation and intent; you can’t just bump them with your knee and switch them off. And both pilots were plenty experienced enough to know that you don’t turn those switches off at that point in the flight.


  • FaceDeer@fedia.io to Ask Lemmy@lemmy.world · On Journaling
    3 days ago

    Yes, though not a “traditional” one. I’ve got a voice recorder, and I use it while I’m walking my dog to ramble on about whatever’s on my mind: the day’s events, my personal thoughts, to-do lists and notes, whatever. When I get home I dump the recording into a folder where some scripts I’ve written process the audio to produce a transcript (using OpenAI’s Whisper model) and then use an LLM (currently Qwen3) to create summaries, subject tags, and so forth, entirely locally on my computer. I’ve got an index for searching through the recordings based on those AI-generated tags and summaries, so I can more easily find old stuff when I need it or am curious for whatever reason.
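    A pipeline like the one described could be sketched roughly like this. This is a guess at the structure, not the author’s actual scripts: the folder layout, index format, Whisper model size, and local endpoint (an OpenAI-compatible server such as Ollama hosting Qwen3) are all assumptions.

```python
# Sketch of a local journaling pipeline: transcribe each recording with
# openai-whisper, ask a local LLM for a summary plus subject tags, and
# write everything into a JSON index for later searching.
# All names, paths, and models here are illustrative assumptions.
import json
from pathlib import Path


def transcribe(audio_path: str) -> str:
    """Transcribe one recording with openai-whisper (imported lazily so
    the pure indexing code below works without the model installed)."""
    import whisper  # pip install openai-whisper
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]


def summarize(transcript: str) -> str:
    """Ask a local OpenAI-compatible endpoint (e.g. Ollama running Qwen3)
    for a short summary and subject tags."""
    from openai import OpenAI
    llm = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    reply = llm.chat.completions.create(
        model="qwen3",
        messages=[{"role": "user",
                   "content": "Summarize this journal entry in two sentences, "
                              "then list five subject tags:\n\n" + transcript}],
    )
    return reply.choices[0].message.content


def index_entry(filename: str, transcript: str, summary: str) -> dict:
    """Build one searchable index record."""
    return {"file": filename, "transcript": transcript, "summary": summary}


def process_folder(folder: Path, index_file: Path) -> None:
    """Transcribe, summarize, and index every recording in the folder."""
    entries = []
    for audio in sorted(folder.glob("*.mp3")):
        transcript = transcribe(str(audio))
        entries.append(index_entry(audio.name, transcript, summarize(transcript)))
    index_file.write_text(json.dumps(entries, indent=2))
```

    Keeping transcription, summarization, and indexing as separate steps makes it easy to swap Whisper for WhisperX later without touching the rest of the pipeline.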

    I use entirely local AI because I am completely open and honest in there. Probably a bunch of blackmail material to be found if you dug deeply enough. I’m very careful with data security, none of this ever leaves my local systems.

    I’ve been doing this for over ten years now, almost daily. I’d always had a vague plan that someday I’d feed it all into an AI; it’s only in the past two years that that’s actually started to become a reality. This weekend I’m going to experiment with upgrading my transcription AI to WhisperX; if it does a significantly better job I may have to rerun the whole dang archive through it. Could take weeks, maybe months. I’m almost hoping it doesn’t work. :)






  • PJM has lost more than 5.6 net gigawatts in the last decade as power plants shut faster than new ones enter service, according to a PJM presentation filed with regulators this year. PJM added about 5 gigawatts of power-generating capacity in 2024, fewer than smaller grids in California and Texas. Meanwhile, data center demand is surging. By 2030, PJM expects 32 gigawatts of increased demand on its system, with all but two of those gigawatts coming from data centers.

    So this is a combination of utter mismanagement by the power companies and growth in data center demand. Data centers are not purely AI, and I would expect that if PJM continues to be a basket case with exceptionally high prices, those data centers will move elsewhere, or at least new ones won’t get set up in those locations. Data centers generally don’t have to be located in specific places, by their nature - AI-specific ones in particular, since their bandwidth requirements are small relative to their processing power.