

This is what I have been hoping LLMs would provoke since the beginning.
Testing understanding by asking students to parrot textbooks with slightly changed wording was always a shitty method, and one that disincentivizes deep learning:
It allows teachers who don't understand their field beyond a superficial level to teach, and to evaluate. What happens when a student, given the test question “Give an intuitive description of an orbit in your own words,” answers by describing orbital mechanics in a relative frame instead of a global frame, when the textbook only mentions the global frame? They demonstrate understanding beyond the material, which is excellent, but all they've done is risk being marked down by a teacher who can't see the connection.
Meanwhile, a student who only memorized the words and can rearrange them a bit gets full marks with no risk.
Guess the “Free Speech Absolutist” in charge of Twitter was too busy heiling Hitler to step in, huh.