I want to launch the Oobabooga Textgen WebUI from the command line with its console output visible. I also want to run a while loop that retrieves the available Nvidia GPU memory and temperature for display on the header bar, with a 5-second sleep delay. How do I run both of those at the same time?

  • experbia@lemmy.world

    as already mentioned, ampersand allows you to “background” a task. but if you’d like the output from your program alongside the loop monitoring system info, consider using a terminal multiplexer like tmux.

    on the terminal, this will let you open a “split screen” pane with another shell. you can use hotkeys to create, destroy, or move between views.
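
    A minimal sketch of that approach, assuming the WebUI lives in ~/text-generation-webui and is started with its start_linux.sh launcher (adjust the path and command to your install):

    ```bash
    # Start the WebUI in a detached tmux session; its console output fills pane 0
    tmux new-session -d -s textgen 'cd ~/text-generation-webui && ./start_linux.sh'

    # Add a second pane that prints free GPU memory and temperature every 5 seconds
    tmux split-window -t textgen -v \
      'while true; do nvidia-smi --query-gpu=memory.free,temperature.gpu --format=csv,noheader; sleep 5; done'

    # Attach to watch both panes; Ctrl-b plus an arrow key moves between them
    tmux attach -t textgen
    ```

    If you literally want the readings in a header bar rather than a pane, tmux can also run a command in its status line: something like set -g status-interval 5 together with set -g status-right '#(nvidia-smi --query-gpu=memory.free,temperature.gpu --format=csv,noheader)' refreshes the readout every 5 seconds.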

  • ndupont@lemmy.world

    I’d just use tmux. Check NetworkChuck’s recent video on the topic.

  • future_turtle@sh.itjust.works

    You probably want to make a launcher script. An easy start would be to background your main process and route its output wherever you want, then run your monitor loop and send its output wherever you want. On script exit you can examine and kill the main background PID. The simplest way in bash might be something like kill $(jobs -p); there’s a rough sketch at the end of this comment.

    This can get a bit more complicated if you want everything to exit when any one part fails, or something like that. Read up on pkill, disown, kill, $$, trap… tons of possibilities.

    Some of these things aren’t very portable though, so do check if you decide to switch shells… or do what the rest of us do: scratch your head for an hour, cuss yourself for not being POSIX compliant, swear you will be next time, then don’t.
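
    A rough version of that launcher, as a sketch only (the start_linux.sh path and the nvidia-smi query are assumptions; swap in your actual launch command):

    ```bash
    #!/usr/bin/env bash
    # Hypothetical launcher script; adjust the WebUI directory and command to your install.
    cd ~/text-generation-webui || exit 1

    # Background the main process; its output still goes to this terminal
    ./start_linux.sh &
    webui_pid=$!

    # On script exit (Ctrl-C, error, normal return) kill the backgrounded WebUI
    trap 'kill "$webui_pid" 2>/dev/null' EXIT

    # Monitor loop: print free GPU memory and temperature every 5 seconds
    # for as long as the WebUI is still running
    while kill -0 "$webui_pid" 2>/dev/null; do
        nvidia-smi --query-gpu=memory.free,temperature.gpu --format=csv,noheader
        sleep 5
    done
    ```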

  • just_another_person@lemmy.world

    Shouldn’t your LLM be solving this problem for you? 😂

    Seriously though, if not two terminals, then just background one or both of these commands.
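
    For instance, a sketch assuming the WebUI’s start_linux.sh launcher: background the monitor loop and keep the WebUI in the foreground so its output stays on the terminal.

    ```bash
    # GPU monitor in a background subshell, WebUI output in the foreground
    ( while true; do nvidia-smi --query-gpu=memory.free,temperature.gpu --format=csv,noheader; sleep 5; done ) &
    monitor_pid=$!

    ./start_linux.sh        # runs in the foreground with its console output
    kill "$monitor_pid"     # stop the monitor once the WebUI exits
    ```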

    • j4k3@lemmy.world (OP)

      I didn’t ask AI about this one yet. It could likely give me most of the answers here to varying degrees. My best models are similar in scope to Stack Exchange combined with the randomness of whatever happens to pop up first in DDG results. They aren’t good at explaining the various ways a task can be done in practice. A model likely ‘knows’ all the various ways, and will generate each of them if prompted slightly differently from scratch each time. But if you try to have a longer dialog where it has previously generated a solution, it will likely struggle to accurately describe other methods. LLMs are also pretty good with bash, but they suck at sh or BusyBox’s ash. There just isn’t enough training data in those niches for the general models.

      However, I asked here primarily to increase my post contributions to Lemmy, in hopes of keeping people engaged and around long term. Who doesn’t like helping random people with things in this format? I could easily find the answer to this on my own. Asking something that’s been on the back of my mind, and that I’ve been putting off, was just engagement and an attempt to improve my fundamental scripting skills. Sorry if that somehow offends.

  • ik5pvx@lemmy.world

    Is there a particular reason you can’t just open 2 xterms and run each command in its own?
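
    For example (assuming an X session with xterm installed; the WebUI path and launcher are placeholders):

    ```bash
    # One window for the WebUI, one for a GPU readout refreshed every 5 seconds
    xterm -hold -e bash -c 'cd ~/text-generation-webui && ./start_linux.sh' &
    xterm -hold -e watch -n 5 nvidia-smi --query-gpu=memory.free,temperature.gpu --format=csv,noheader &
    ```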