

I’m not saying that we can’t ever build a machine that can think. You can do some remarkable things with math. I personally don’t think gradient descent is baked into our brains, and I don’t think neural networks are much like brains at all.
The stochastic parrot is a useful vehicle for criticism, and I think there is some truth to it. But I also think LLMs display some super impressive emergent features, even if they are still really far from AGI.
I’m not sure I can give a satisfying answer. There are a lot of moving parts, and a big issue is definitions, which you also touch on with your reference to Searle.
I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules; it’s also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or when they lack sufficient context.
AI models are, in a sense, fragile with respect to their input. Organic intelligence, on the other hand, is resilient and heuristic. I don’t have a specific test in mind, but it should probe the ability to solve a very ill-posed problem.