• MotoAsh@lemmy.world · 1 year ago

    The underlying truth of this joke: programming syntax is less confusing than mathematical syntax. There are genuinely ambiguous layouts of syntax in math (to a human reader who hasn’t internalized PEMDAS, anyway), whereas you get a compilation error if ANYTHING is ambiguous in programming. (Yes, I am WELL aware of the frustrations of runtime errors.)
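
    A minimal C sketch of that contrast, using the viral 8 ÷ 2(2+2) expression as an assumed example (it isn’t from the thread): the ambiguous form is a compile error, so the writer is forced to pick one reading explicitly.

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* A human reader can parse 8 ÷ 2(2+2) two ways:
         *   strict left-to-right PEMDAS:           (8 / 2) * (2 + 2) = 16
         *   implicit multiplication binds tighter:  8 / (2 * (2 + 2)) = 1
         * The literal transcription below does not compile in C
         * ("called object is not a function"), so the ambiguity
         * cannot survive into a build:
         */
        /* int oops = 8 / 2(2 + 2); */

        int left_to_right = 8 / 2 * (2 + 2);   /* forced reading: 16 */
        int grouped       = 8 / (2 * (2 + 2)); /* forced reading: 1  */

        printf("%d %d\n", left_to_right, grouped); /* prints "16 1" */
        return 0;
    }
    ```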

    • dejected_warp_core@lemmy.world · 1 year ago

      Also: sometimes a mathematician just has to invent some concept or syntax to convey something unconventional. The specific use of subscript/superscript, whatever ‘phi’ stands for, etc. in whatever paper you’re reading doesn’t have to correlate with how other work uses the same concepts. It’s bad form, but sometimes it’s needed, and if it proves useful enough it gets added to the general canon of what we call “math”. Meanwhile, you can encapsulate and obfuscate things in software, sure, but you can always dig down to the bedrock of what the language supports; there’s no inventing anything new.

      • MotoAsh@lemmy.world · 1 year ago

        Yea, that’s it. Math syntax was created for humans, while programming syntax has to remain deterministic for computers. It’s not an insult to either; it’s just interesting how often ambiguities show up when humans are involved. I say ‘often’ for the general case: math should be just as deterministic as programming, but in some situations it isn’t.