• Bamboodpanda@lemmy.world · 8 hours ago

    "I recall Ethan Mollick discussing a professor who required students to use LLMs for their assignments. However, the trade-off was that accuracy and grammar had to be flawless, or their grades would suffer. This approach makes me think—we need to reshape our academic standards to align with the capabilities of LLMs, ensuring that we’re assessing skills that truly matter in an AI-enhanced world.

    • taiyang@lemmy.world · 5 hours ago

      That’s actually something that was discussed, like, two years ago within the institutions I’m connected to. I don’t think it was ever fully resolved, but I get the sense that the inaccurate results made it too troublesome to pursue.

      My mentality, coming out of an education degree, is that if your assessment can be done by AI, you’re relying too much on memorization and not enough on critical thinking. I complain in my reply, but the honest truth is these students mostly lost points because they didn’t apply theory to the example (though that’s because they didn’t fully understand the example, since it wasn’t their own). K-12 generally fails at this, which is why freshmen have the hardest time with these things, GPT or otherwise.