I think the trick, when setting up something you expect the general public to interact with, is to make an effort to cover as many possibilities as can be handled with reasonable effort (the definition of “reasonable” varies significantly by context). It’s not so much assuming that any given person has some disability you can’t see, but that any large group of people will include at least a few who do.
Interactions with a specific person are another matter entirely, as you point out. There, I think the best you can do is roll with it if someone tells you they’re unable to do something, without subjecting them to interrogation or scepticism.
It’s certainly not as bad as the problems generative AI tends to have, but it’s still difficult to avoid strange and/or subtle biases.
Very promising technology, but likely to be good at diagnosing problems in Californian students and very hit-and-miss with demographics which don’t tend to sign up for studies in Silicon Valley.