nickwitha_k (he/him)

  • 4 Posts
  • 443 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • It’s been years since I’ve been in the lab but it really will depend a lot on the subject matter and the type of experiment.

    If it’s a subject matter that is fairly well explored and defined, the alternative hypotheses might be fairly straightforward. Take, for example, an experiment from a while ago where entomologists suspected that desert ants navigate by using dead reckoning, effectively counting their steps, remembering their changes in direction measured by a biological compass, and integrating them together, in a process similar to “fusion” in electronic position sensors.

    To validate part of this hypothesis, they needed to get more granular and isolate one part of it. So, they formulated a “sub-hypothesis” stating that the ants have some sort of innate awareness of the distance that they cover with each step, knowing the length of their legs and thus their stride length, similar to how cats know their healthy body width. The experimental hypothesis would be something like:

    “Altering the length of desert ant legs will result in navigation failure with longer legs causing them to overshoot and shorter legs causing them to undershoot. The navigational trajectories should otherwise be identical.”
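    The path-integration process described above (counting steps, tracking heading changes, and summing them into a position estimate) can be sketched as a toy dead-reckoning model. This is purely a hypothetical illustration, not the researchers’ actual code; the function and variable names are my own:

    ```python
    import math

    def integrate_path(headings, stride=1.0):
        """Dead reckoning: sum one step vector per heading (in radians),
        with each step scaled by the ant's stride length."""
        x = y = 0.0
        for heading in headings:
            x += stride * math.cos(heading)
            y += stride * math.sin(heading)
        return x, y

    # Outbound trip: two steps east, then two steps north.
    outbound = [0.0, 0.0, math.pi / 2, math.pi / 2]

    home_estimate = integrate_path(outbound, stride=1.0)    # ≈ (2.0, 2.0)
    # Artificially lengthened legs ("stilts") increase the real stride, so
    # the same step count and headings now overshoot the true position,
    # as in the experimental hypothesis above.
    stilts_estimate = integrate_path(outbound, stride=1.5)  # ≈ (3.0, 3.0)
    ```

    The overshoot falls out directly: the ant’s internal step count is unchanged, but each step covers more ground than its innate stride-length model assumes.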

    Building alternative hypotheses for this relatively simple experiment, prior to conducting it, would be straightforward, as you appear to suspect. They could be as simple as:

    “The length of the desert ant’s legs will have no impact on their navigation because they are not directly related. This will be apparent through the ants showing no discernible difference in the paths that they take when navigating, regardless of leg length.”

    “The length of the desert ant’s legs will have some impact on their navigation, but they are able to compensate for discrepancies in stride length through some as-yet-unknown mechanism. This will likely be apparent in statistically significant distance-related navigation errors in their paths.”

    After the experiment, the data would be analyzed and checked for a match against the established hypotheses. If there is not a good match or there is an unexpected shape to the data, further experiments may be required to see if it is an anomaly or if something else might be going on.

    (In this case, it was found that, yes, desert ants have some sort of innate awareness of what their stride length should be and changes in their leg lengths throw off their navigation, as expected.)

    Now, when it gets to subjects that are less clear and established, alternative hypotheses can get a lot more challenging, because often the difference in data fit between proving and disproving a hypothesis can be minuscule. Or, the data points might form a completely unexpected shape that doesn’t match currently known phenomena.


  • Your take is illogical, unless you are arguing for some sort of pre industrial communism which is never going to happen because I think any sane person can agree that technology has vastly improved our lives. It has introduced pains sure, but everything is a process.

    That’s quite a leap. Not all technology is worthwhile or improves the overall human experience. Are you getting there by assuming that the world is black and white: that one must either embrace all technology or reject all of it? If so, I would recommend re-evaluating that assumption, because it does not hold up to reality.

    Oh and speaking of computers did computers and automated production lines destroy the ability for people to make a living?

    Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane but it is literally what a number of tech billionaires have stated.

    Maybe temporarily and then new jobs popped up.

    Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.

    Now, we’re at record levels of homelessness too. This isn’t going to result in people magically gaining class consciousness. People are just going to die miserable, preventable deaths.

    But I want to understand exactly where you are coming from, like do you think that we should stop all technological progress and simply maintain our civilization in stasis or roll it back to some other time or what?

    Ok. Yes. It does appear that you hold a black-and-white worldview where all technology is “progress” and all implements of technology are “tools”, with no other classification or differentiation of their value to the species, and no consideration of how they are implemented. Again, I would recommend reflection, as this view does not mesh well with observable reality.

    Someone else already made the apt comparison between this wave of AI tech and nuclear weapons. Another good comparison would be phosgene gas. When it was first mass-produced, it was used only for mass murder (as the current LLMs’ financial supporters desire them to be used). Only the greater part of a century later did the gas get used for something beneficial to humanity, namely doping semiconductors. However, its production and use are still very dangerous to people and the environment.

    In addition to all of this, it really appears that you fail to acknowledge the danger posed by accelerating the loss of the planet’s ability to sustain human life. Again, for emphasis, I’ll state: AI is not going to save us from this. The actions required are already known - it won’t help us to find them. The technology is being used nearly exclusively to worsen human life, make genocide more efficient, and increase the rate of poverty, all while accelerating global climate change. It provides no net value to humanity in the implementations that are funded. The only emancipation that it is doing is emancipating people from living.


  • That’s…a take. And clearly not sounding like a cultist at all. /S

    Giving corpos free rein to exploit whatever they want has never resulted in positive things, generally; just bloodshed and suffering. Pretending that flagrant violation of IP is ok when done to train models doesn’t do much for big companies, but it does obliterate individuals’ ability to support themselves. This is the only reason that this environmentally disastrous and unprofitable tech has been so heavily embraced: to be used as a tool of exploitation.

    AI is not going to save anyone. It is not going to emancipate anyone. Absolutely none of the financial benefits are being shared with the working class. And, if they were, it would have little impact on LLMs’ big picture value as they are vastly accelerating the destruction of the planet’s biosphere. When that’s gone, humanity is finished.

    Embracing the current forms of commercialized AI is only to the detriment of humanity and the likelihood of the creation of any artificial sentience.


  • So what an Ai does is the same thing as every human ever who has read/saw/listened a work and then wrote more words being influenced by that book/artwork/piece.

    Nope. This has been thoroughly debunked by both neuroscientists and AI researchers. It’s nothing but hand-waving to claim that corporate exploitation is ok because…reasons.

    LLMs and similar models are literally statistical models of the data that they have been fed. They have no thought, consciousness, or creativity. They are fundamentally incapable of synthesizing anything not already existing in their dataset.

    These same bunk pro-corpo-AI talking points are getting pretty old and should be allowed to retire at this point.