

That’s probably debatable, if they have permission. They probably shouldn’t have been given permission, but that’s a separate issue
It’s only true of badly designed bridges these days. Modern engineering tools can calculate the resonant frequencies, and they make certain that those are far away from the frequencies which humans or wind can create
It’s certainly not as bad as the problems generative AI tends to have, but it’s still difficult to avoid strange and/or subtle biases.
Very promising technology, but likely to be good at diagnosing problems in Californian students and very hit-and-miss with demographics which don’t tend to sign up for studies in Silicon Valley
I think the trick, when setting up something which you expect the general public to interact with, is to make an effort to cover as many possibilities as can be dealt with by a reasonable effort (the definition of “reasonable” varies significantly by context). It’s not so much assuming that any given person has some disability you can’t see, but that any large group of people will have at least a few.
Interactions with a specific person are another matter entirely, as you point out. There, I think the best you can do is roll with it if someone tells you that they’re unable to do something without subjecting them to interrogation or scepticism
Sure, but there are far more things which will kill the entire person at the same dose they’ll kill the cancer than things which can be carefully controlled by choosing the right dose.
These studies which claim to kill cancer in a petri dish usually turn out to be the former, because not killing the host is the difficult part
Israel and Trump appear to be claiming to have defeated the Iranian air defences and achieved air supremacy over the Iranian capital.
If that’s true then Iran is in deep trouble, and inviting them to surrender wouldn’t be unreasonable. I very much doubt that it is true, but that’s what they seem to believe
It’s far harder to achieve mass manipulation of the ballot when it’s all being handled by a lot of human hands. If it’s managed by computers, then by finding a bug or other vulnerability in the software or database you could alter the whole election.
Meanwhile, to manipulate a paper-ballot, hand-counted election in the same way, you’d need the cooperation of a huge number of people, and you’d need them all to keep their mouths shut. That’s far more difficult than defeating a computerised system
That’s an implementation detail, not really relevant to my point.
I don’t think you appreciate how powerful those magnets are. Any ferromagnetic object would be doing well to avoid binding up completely when held right up to the device
Realistically, the mechanism would jam. I doubt the hammer would fall, being squeezed hard against whatever structure supports it
Honestly I think it’s misleading to describe it as being “defined” as 1, precisely because it makes it sound like someone was trying to squeeze the definition into a convenient shape.
I say, rather, that it naturally turns out to be that way because of the nature of the sequence. You can’t really choose anything else
X^0 and 0! aren’t actually special cases, though; you can reach them logically from things which are obvious.
For X^0: you can get from X^n to X^(n-1) by dividing by X. That works for all n, so we can say, for example, that 2³ is 2⁴/2, which is 16/2, which is 8. Similarly, 2¹/2 is 2⁰, but it’s also obviously 1.
The argument for 0! is basically the same. 3! is 1×2×3, and to go to 2! you divide by 3. You can go from 1! to 0! by dividing 1 by 1.
In both cases the only thing which is special about 1 is that any number divided by itself is 1, just like any number subtracted from itself is 0
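The “divide to step down” chain above can be checked mechanically; here’s a quick Python sketch of the same argument (just an illustration, nothing beyond the arithmetic already described):

```python
# Powers: 2**n / 2 == 2**(n-1), so walking down from 2**4 lands on 2**0 = 1
# with no special case needed at the boundary.
value = 2 ** 4                      # start at 2**4 = 16
for n in range(4, 0, -1):
    value = value / 2               # step from 2**n down to 2**(n-1)
    print(f"2**{n - 1} = {value}")  # ends with 2**0 = 1.0

# Factorials: n! / n == (n-1)!, so 0! = 1!/1 = 1 by the same pattern.
fact = 24                           # start at 4! = 24
for n in range(4, 0, -1):
    fact = fact / n                 # step from n! down to (n-1)!
    print(f"{n - 1}! = {fact}")     # ends with 0! = 1.0
```

Both loops bottom out at 1 without anyone having to decree it.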
You’re still putting words in my mouth.
I never said they weren’t stealing the data
I didn’t comment on that at all, because it’s not relevant to the point I was actually making: that treating the output of an LLM as if it were derived from any factual source at all is really problematic, because it isn’t.
You’re putting words in my mouth, and inventing arguments I never made.
I didn’t say anything about whether the training data is stolen or not. I also didn’t say a single word about intelligence, or originality.
I haven’t been tricked into using one piece of language over another; I’m a software engineer, and I know enough about how these systems actually work to reach my own conclusions.
There is not a database tucked away in the LLM anywhere which you could search through to find the phrases it was trained on; it simply doesn’t exist.
That isn’t to say it’s completely impossible for an LLM to spit out something which formed part of the training data, but it’s pretty rare. 99% of what it generates doesn’t come from anywhere in particular, and you wouldn’t find it in any of the sources which were fed to the model in training.
That simply isn’t true. There’s nothing in common between an LLM and a search engine, except insofar as the people developing the LLM had access to search engines, and may have used them during their data gathering efforts for training data
Except these AI systems aren’t search engines, and people treating them like they are is really dangerous
I couldn’t find the actual pinout for the 8 pin package, but the block diagrams make me think they’re power, ground, and 6 general purpose pins which can all be GPIO. Other functions, like ADC, SPI and I2C (all of which it has) will be secondary or tertiary functions on those same pins, selected in software.
So the actual answer you’re looking for is basically that all of the pins are everything, and the pinout is almost entirely software defined
BGA, like in the photo, isn’t the only option. There are hand-solderable packages only slightly larger (if you’re good at soldering)
How did you calculate that? The question didn’t even mention a specific speed, just “near the speed of light”.
The kinetic energy for a grain of sand near the speed of light is somewhere between “quite a lot” and “literally infinity” (which is, in a sense, the reason you can’t actually reach light speed without a way to supply infinite energy).
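To put rough numbers on “quite a lot”, here’s a sketch using the relativistic kinetic energy formula KE = (γ − 1)mc². The 1 mg mass for the grain of sand is an assumption; the exact figure doesn’t matter much, since the energy diverges as v approaches c regardless:

```python
import math

C = 299_792_458.0  # speed of light, m/s
M = 1e-6           # assumed mass of a grain of sand: 1 mg, in kg

def kinetic_energy(beta: float) -> float:
    """Relativistic KE in joules for speed v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor, blows up as beta -> 1
    return (gamma - 1.0) * M * C ** 2

for beta in (0.9, 0.99, 0.999999):
    print(f"v = {beta}c  ->  KE ≈ {kinetic_energy(beta):.3e} J")
```

Even at 0.9c the answer is already on the order of a hundred gigajoules, and each extra nine in the speed multiplies it again; there is no finite ceiling, which is the “literally infinity” end of the range.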
It was pointed out to me a while back that the paradox of tolerance is only a paradox if you consider tolerance to be a philosophical position.
In fact, we don’t treat it like that. We treat it as a social contract, in which context it is no paradox at all to say that if you aren’t tolerant then other people aren’t obliged to tolerate you in turn
Oh, absolutely. It’s not something which should be encouraged, and against a well designed modern system it probably isn’t possible (there must be some challenge-response type NFC systems on the market).
I’m just saying it isn’t unambiguously “illegitimate”
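As an illustration of why a challenge-response design resists simple cloning or replay, here’s a minimal sketch. The key handling and message shapes are invented for the example, not taken from any real NFC standard: the reader issues a fresh random challenge, and the tag must prove it holds the shared key, so a recorded exchange is useless the next time around.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, provisioned into both tag and reader (assumed).
SHARED_KEY = b"tag-and-reader-shared-secret"

def tag_respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """The tag proves knowledge of the key by MACing the reader's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    """The reader recomputes the expected MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A legitimate exchange: fresh challenge per transaction.
challenge = secrets.token_bytes(16)
response = tag_respond(challenge)
assert reader_verify(challenge, response)

# A replayed response fails against a different challenge.
assert not reader_verify(secrets.token_bytes(16), response)
```

Since the challenge never repeats, passively sniffing one transaction gives an attacker nothing they can reuse, which is what static-ID NFC systems lack.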