GPU Task Spooler - A SLURM alternative/job scheduler for a single simulation machine

Overview

GPU Task Spooler

Originally, Task Spooler by Lluís Batlle i Rossell.

Introduction

A CPU-only version that is more faithful to the original Task Spooler is being actively developed here.

As in freshmeat.net:

task spooler is a Unix batch system where the tasks spooled run one after the other. The amount of jobs to run at once can be set at any time. Each user in each system has his own job queue. The tasks are run in the correct context (that of enqueue) from any shell/process, and its output/results can be easily watched. It is very useful when you know that your commands depend on a lot of RAM, a lot of disk use, give a lot of output, or for whatever reason it's better not to run them all at the same time, while you want to keep your resources busy for maximum benefit. Its interface allows using it easily in scripts.

For your first contact, you can read an article at linux.com, which I like as an overview, guide, and set of examples (original url). For more advanced usage, don't neglect the TRICKS file in the package.

Changelog

See CHANGELOG.

Tutorial

A tutorial with Colab is available here.

Features

I wrote Task Spooler because I didn't have any comfortable way of running batch jobs on my Linux computer. I wanted to:

  • Queue jobs from different terminals.
  • Use it locally in my machine (not as in network queues).
  • Have a good way of seeing the output of the processes (tail, errorlevels, ...).
  • Easy use: almost no configuration.
  • Easy to use in scripts.

In the end, after some time using and developing ts, it can do a bit more:

  • It works in most systems I use and some others, like GNU/Linux, Darwin, Cygwin, and FreeBSD.
  • No configuration at all for a simple queue.
  • Good integration with renice, kill, etc. (through ts -p and process groups).
  • Have any amount of queues identified by name, writing a simple wrapper script for each (I use ts2, tsio, tsprint, etc.); see the sketch after this list.
  • Control how many jobs may run at once in any queue (taking advantage of multicore CPUs).
  • It never removes the result files, so they can be reached even after we've lost the ts task list.
  • Transparent if used as a subprogram with -nf.
  • Optional separation of stdout and stderr.
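
For the named queues mentioned above, a wrapper script per queue is enough. A minimal sketch, assuming a second queue called ts2 with its own socket (the socket path is arbitrary; TS_SOCKET is documented in the Manual section below):

#!/bin/bash
# ts2: an independent job queue, backed by its own unix socket.
TS_SOCKET=/tmp/socket-ts2 exec ts "$@"

Save it as ts2 somewhere in your PATH and use it exactly like ts.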

Setup

Install Task Spooler

First, clone the repository

git clone https://github.com/justanhduc/task-spooler

To set up Task Spooler with GPU support, you need to set the CUDA_HOME environment variable. If you only need the CPU version, check out the cpu-only branch first

git checkout cpu-only

Then, simply run the provided script

./install

to use CMake, or

./reinstall

to use Makefile.
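
For example, a GPU build from a fresh clone might look like this (the CUDA path below is an assumption; adjust it to your installation):

export CUDA_HOME=/usr/local/cuda
git clone https://github.com/justanhduc/task-spooler
cd task-spooler
./install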

Common problems

  • Cannot find CUDA: Did you set the CUDA_HOME environment variable?
  • /usr/bin/ld: cannot find -lnvidia-ml: This lib lies in $CUDA_HOME/lib64/stubs. Append this path to LD_LIBRARY_PATH, as shown below. Sometimes the problem persists even after adding the lib path; in that case, add -L$(CUDA_HOME)/lib64/stubs to this line in the Makefile.
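
For example, assuming CUDA_HOME is already set and a bash-like shell:

export LD_LIBRARY_PATH=$CUDA_HOME/lib64/stubs:$LD_LIBRARY_PATH
./install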

Uninstall Task Spooler

./uninstall

Why would you want to do that anyway?

Known issues

  • This is not an issue as such: when multiple consecutive GPU jobs are queued, there is a small delay after the first job runs before the next GPU job starts, to ensure that the same GPUs are not claimed by different jobs. An issue that made this delay significantly longer was reported in #2 and has been fixed in 176d0b76. To avoid the delay, you can use -g to indicate the exact GPU IDs for the job, as in the example below.
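
For instance, to pin a job to specific GPUs (the comma-separated ID format and the command are illustrative; see -g in the Manual section below):

ts -g 0,1 ./train.sh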

Mailing list

I created a GoogleGroup for the program. You can find the archive and how to join on the taskspooler Google Group page.

Alessandro Öhler once maintained a mailing list for discussing newer functionalities and exchanging user experiences. I think it no longer works, but you can look at the old archive or even try to subscribe.

How it works

The queue is maintained by a server process, which is started if it isn't running already. Communication goes through a unix socket, usually in /tmp/.

When the user requests a job (using a ts client), the client waits for a message from the server to know when it can start. When the server allows it to start, the client usually forks and runs the command with the proper environment, because the client, not the server, runs the job (unlike 'at' or 'cron'). So the ulimits, environment, working directory, etc. of the enqueuing shell apply.

When the job finishes, the client notifies the server. At this time, the server may notify any waiting client, and stores the output and the errorlevel of the finished job.

Moreover, the client can take advantage of a lot of information from the server: when a job finishes, where the job output goes, etc.
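
The whole cycle is easy to see from the shell; every flag used here is documented in the Manual section below:

ts sleep 10   # enqueue: prints the job ID and returns immediately
ts -s         # show the state of the last added job (queued or running)
ts -w         # block until that job finishes
ts -o         # print the path of the stored output file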

History

Андрей Пантюхин (Andrew Pantyukhin) maintains the BSD port.

Alessandro Öhler provided a Gentoo ebuild for 0.4, which I updated with simple changes to an ebuild for 0.6.4. Moreover, the Gentoo Project Sunrise also has an ebuild (maybe old) for ts.

Alexander V. Inyukhin maintains unofficial Debian packages for several platforms. The official packages can be found in the Debian package system.

Pascal Bleser packaged the program for SuSE and openSuSE as RPMs for various platforms.

Gnomeye maintains the AUR package.

Eric Keller wrote a nodejs web server showing the status of the task spooler queue (github project).

Duc Nguyen took over the project and develops this GPU-supported version.

Manual

See below or man ts for more details.

usage: ts [action] [-ngfmdE] [-L <lab>] [-D <id>] [cmd...]
Env vars:
  TS_SOCKET  the path to the unix socket used by the ts command.
  TS_MAILTO  where to mail the result (on -m). Local user by default.
  TS_MAXFINISHED  maximum finished jobs in the queue.
  TS_MAXCONN  maximum number of ts connections at once.
  TS_ONFINISH  binary called on job end (passes jobid, error, outfile, command).
  TS_ENV  command called on enqueue. Its output determines the job information.
  TS_SAVELIST  filename which will store the list, if the server dies.
  TS_SLOTS   amount of jobs which can run at once, read on server start.
  TMPDIR     directory where to place the output files and the default socket.
Long option actions:
  --set_gpu_wait             set time to wait before running the next GPU job (30 seconds by default).
  --get_gpu_wait                  get time to wait before running the next GPU job.
  --get_label      || -a [id]     show the job label. Of the last added, if not specified.
  --full_cmd       || -F [id]     show full command. Of the last added, if not specified.
  --count_running  || -R          return the number of running jobs
  --last_queue_id  || -q          show the job ID of the last added.
Long option adding jobs:
  --gpus           || -G [num]    number of GPUs required by the job (1 default).
  --gpu_indices    || -g <id,...>  the job will run on these GPU indices without checking whether they are free.
Actions:
  -K          kill the task spooler server
  -C          clear the list of finished jobs
  -l          show the job list (default action)
  -S [num]    get/set the number of max simultaneous jobs of the server.
  -t [id]     "tail -n 10 -f" the output of the job. Last run if not specified.
  -c [id]     like -t, but shows all the lines. Last run if not specified.
  -p [id]     show the pid of the job. Last run if not specified.
  -o [id]     show the output file. Of last job run, if not specified.
  -i [id]     show job information. Of last job run, if not specified.
  -s [id]     show the job state. Of the last added, if not specified.
  -r [id]     remove a job. The last added, if not specified.
  -w [id]     wait for a job. The last added, if not specified.
  -k [id]     send SIGTERM to the job process group. The last run, if not specified.
  -T          send SIGTERM to all running job groups.
  -u [id]     put that job first. The last added, if not specified.
  -U <id-id>  swap two jobs in the queue.
  -B          in case of full queue on the server, quit (2) instead of waiting.
  -h          show this help
  -V          show the program version
Options adding jobs:
  -n           don't store the output of the command.
  -E           Keep stderr apart, in a name like the output file, but adding '.e'.
  -z           gzip the stored output (if not -n).
  -f           don't fork into background.
  -m           send the output by e-mail (uses sendmail).
  -d           the job will be run after the last job ends.
  -D <id,...>  the job will be run after the jobs of the given IDs end.
  -W <id,...>  the job will be run after the jobs of the given IDs end well (exit code 0).
  -L <lab>     name this task with a label, to be distinguished on listing.
  -N <num>     number of slots required by the job (1 default).
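
A few common combinations, using only the flags listed above (train.sh and eval.sh are placeholder commands):

ts -S 4                       # let 4 jobs run at once
ts -L train -G 2 ./train.sh   # queue a job labeled "train" that needs 2 GPUs
ts -d ./eval.sh               # run after the last queued job ends
ts -t                         # follow the output of the last run job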

Acknowledgement

  • To Raúl Salinas, for his inspiring ideas
  • To Alessandro Öhler, the first non-acquaintance user, who proposed and created the mailing list.
  • To Андрей Пантюхин (Andrew Pantyukhin), who created the BSD port.
  • To the useful, although sometimes uncomfortable, UNIX interface.
  • To Alexander V. Inyukhin, for the debian packages.
  • To Pascal Bleser, for the SuSE packages.
  • To Sergio Ballestrero, who sent code and motivated the development of a multislot version of ts.
  • To GNU, an ugly but working and helpful ol' UNIX implementation.

Related projects

Messenger

Comments
  • Bug: cannot add a very long command to queue

    When trying to add a command with many parameters, task-spooler will crash at: https://github.com/justanhduc/task-spooler/blob/e5a99117ebf879bad8d845cb3246bd7f77864b4a/execute.c#L45

    I'm currently using a workaround: a script that stores the command in a variable, and then passes task-spooler a script that reads the command from the variable and executes it. But that's just a hack, of course.

    Here are my workaround scripts:

    ts_helper with usage ts_helper OPTIONS @ LONG_COMMAND:

    #!/bin/bash
    IFS='@' read -ra ARGS <<< "$@";
    if [ ${#ARGS[@]} -eq 2 ];
    then
      SCRIPTRAW="${ARGS[1]}"
      SCRIPT="$SCRIPTRAW" ts "${ARGS[0]}" ts_helper_runner;
    else
      echo "Wrong number of arguments";
    fi;
    

    which calls the script ts_helper_runner:

    #!/bin/bash
    exec $SCRIPT
    
    opened by orsharir 14
  • Getting error of `Wrong server version.`

    Hi, I followed the installation instructions on Ubuntu 18.04 and simply ran ts. But I got the error message: Wrong server version. Received 1048576, expecting 730. Could you help me resolve this issue?

    bug 
    opened by iseong83 10
  • Can I limit the distributed GPUs?

    I'm using your script on a machine with 16 GPUs. For my tasks, I want specific GPUs not to be used, or rather to select which GPUs are used.

    For example, I want GPUs 0-8 to be available to ts but 9-15 be left alone. Is this something that can be done?
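
    A possible approach, based on the TS_VISIBLE_DEVICES variable described in the v1.3.0 release notes further down this page (the comma-separated format is an assumption; the variable must be set before the server first starts, or via --setenv):

    ts -K                                     # stop any running server
    TS_VISIBLE_DEVICES=0,1,2,3,4,5,6,7,8 ts   # restart it seeing only GPUs 0-8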

    opened by bermeitinger-b 9
  • " option">

    "munmap_chunk(): invalid pointer" error using "-L

    Hi! First of all, thank you for a great project: it helps us a lot in running jobs in a batch manner. As we don't need a GPU, we currently use the CPU-only version 1.2.1 of Task Spooler on a 64-bit Linux platform (SLES 15 SP3).

    However, we encountered an error that looks like a bug in memory allocation/freeing: when we try to use "ts" with an "-L <label>" option, a "munmap_chunk(): invalid pointer" error occurs.

    opened by COshmyan 8
  • Add -M option for machine-readable JSON output

    This PR adds the -M short option which converts the output of ts or ts -l to JSON. Also, I enabled CUDA build for an older CMake version (3.16) which is the latest by default in Ubuntu's package manager.

    I implemented it with cJSON, which I added directly into the project. It is compatible with ANSI C so should not restrict any platforms. I added a test to testbench.sh, not sure if any other test suite exists.

    Please let me know if there are any other requirements for implementing this feature.

    Example:

    ~/code/task-spooler-structured-output/build-gpu master*
    ❯ ./ts -K
    
    ~/code/task-spooler-structured-output/build-gpu master*
    ❯ ./ts echo foo
    0
    
    ~/code/task-spooler-structured-output/build-gpu master*
    ❯ ./ts sleep 10s
    1
    
    ~/code/task-spooler-structured-output/build-gpu master*
    ❯ ./ts
    ID   State      Output               E-Level  Time   GPUs  Command [run=1/1]
    1    running    /tmp/ts-out.vj9wHU                   0     sleep 10s
    0    finished   /tmp/ts-out.tFVNpx   0         0.00s 0     echo foo
    
    ~/code/task-spooler-structured-output/build-gpu master*
    ❯ ./ts -M json
    [{"ID":1,"State":"running","Output":"/tmp/ts-out.vj9wHU","E-Level":null,"Time_ms":null,"GPUs":0,"Command":"sleep 10s"},{"ID":0,"State":"finished","Output":"/tmp/ts-out.tFVNpx","E-Level":0,"Time_ms":0.0018889999482780695,"GPUs":0,"Command":"echo foo"}]
    
    hacktoberfest-accepted 
    opened by bstee615 7
  • Real-time log with Task-spooler

    Thank you a lot for such a great tool. I have been looking for this tool for a while.

    In your tutorial, it said that:

    To see the output, use the -c or -t flag. You should see the training in real-time. You can use ctrl+c to stop getting stdout anytime without actually canceling the experiment.

    However, when I ran this command, ts -c <id>, it hung for a while and then returned the entire log at once when the task finished.

    When I checked the manual via man ts, I just see:

    -c [id] ... It will block until all the output can be sent to standard output, and will exit with the job errorlevel as in -c.

    Is there any way to work around this problem to view the stdout in real-time?
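
    One workaround using only documented flags: -o prints the job's output file, which can be followed directly (the job ID here is a placeholder):

    tail -f "$(ts -o 3)"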

    opened by davidnvq 7
  • Enhancement requests

    Hi! First of all, I'd like to thank you for this great project - it is really useful!

    At the same time, I'd like to share my troubles and make some enhancement requests. At the moment we use the CPU-only version 1.2.1 of Task Spooler.

    1. I'd like to monitor the current size of the queue. It's possible to use the "-R" option to get the number of jobs currently running; however, there is no similarly simple option to obtain the number of jobs currently waiting in the queue (or the total of running and queued jobs, i.e. the current queue size).

    2. I'd like the possibility to add a job to the queue only if it has not been added already.

    Use case 1: Some jobs are scheduled to be placed into a queue via cron (for example, every 10 minutes). However, sometimes a job runs longer than usual (for example, 15-20 minutes). In this situation the cron daemon will place the job into the queue twice or thrice, which is not needed. In this case I need to place the job into the queue only if it's absent from the queue in either the "running" or "queued" state.

    Use case 2: A job is placed into a queue by some external signal. The job should process some new data, and the signal means: "some new data has arrived and is ready for processing". However, sometimes new data arrives while the job is running. In this case the job should be restarted when it completes, so it should be placed into the queue again. At the same time, if the job is already queued, it's not necessary to place it into the queue more than once. In this case I need to place the job into the queue only if it's absent from the queue in the "queued" state (even if it's present in the "running" state).

    3. Probably, the usage of labels could be extended. At least, it could be useful to check the state of specific jobs identified by label instead of job ID.

    4. All these tasks could be solved with scripts that obtain the list of jobs with all their attributes (state, label, command line, etc.). However, the current version shortens some long fields (at least labels and command lines) according to the screen size, even when "ts -l" is used in a script and STDOUT is not a terminal but redirected to a file or pipe. This causes a problem: the script must first obtain the list of jobs, then loop over that list using each job ID with options like "-i", "-s", "-p" or "-a" to obtain the details of each job. Besides the additional work, it's impossible to obtain consistent information, since states change during the loop. Would it be possible to obtain the full queue list (regardless of label and command sizes) for processing in scripts? Some well-known format (for example, CSV or TAB-separated fields) would be acceptable.

    enhancement 
    opened by COshmyan 6
  • does not work in 3 gpus

    It works well with 2 GPUs. However, when I use 3 GPUs, the 3rd job in the queue is not executed: it shows as running, but the text under Output is (...).

    BTW, should I use the -S option before any other command?

    opened by yqtianust 6
  • Advice on how to cancel (kill or remove) task

    First off, thank you so much for forking/maintaining this project!

    I want to know the best way to NOT run a task when I do not know whether it is running or queued.

    I see that -r throws an error if it is running and -k throws an error if it is not (e.g. queued).

    Based on this, I came up with the command: ts -r ${taskid} || ts -k ${taskid}. Does that seem like the best approach?
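
    An alternative sketch that branches on the job state first; -s prints the state of a job, though the exact state strings assumed here ("running") should be verified:

    state=$(ts -s "$taskid")
    if [ "$state" = "running" ]; then
      ts -k "$taskid"   # SIGTERM the running process group
    else
      ts -r "$taskid"   # remove it from the queue
    fi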

    opened by powelleric 5
  • Make cpu_only a make/cmake toggle and drop the cpu_only branch?

    Hello!

    To simplify packaging (I want to write an ebuild for Gentoo's GURU) so that the code in the release tarballs can be used directly, it would be awesome if you could merge cpu_only into master and make it a CMake toggle, for example, and/or maybe make the GPU queueing a toggle. Is it very difficult for you? Another option is to add the cpu_only source code to the releases.

    Thanks!

    Adel

    opened by AdelKS 5
  • can I query the GPU ids allocated for a job?

    Thanks for developing the tool! Right now ts -i only returns the number of GPUs allocated, and ts -p returns the pid of the main process. To know which GPUs the process and its child processes actually use, I am using pgrep and cross-referencing with nvidia-smi. Is there an easier way to do so?
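
    As of v1.3.1 (see the release notes below), ts -g with no indices lists the currently running GPU jobs and their corresponding GPU IDs:

    ts -g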

    opened by ShuyangCao 5
  • Evaluate $(...) in commands at run not at enqueue

    ts is such a great tool. Thanks for maintaining it.

    I'm passing a port number to many jobs, and the port has to be free for the job to run correctly. I have a script that finds a free port, and I use it when running the command with ts, i.e., ts -G 1 run --port=$(find_free_port). The problem is that doing it like this evaluates the script when the command is added to the queue! Is there a way to evaluate it when ts runs the command?
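
    One workaround is to defer the expansion by single-quoting it inside a shell invocation, so $(find_free_port) runs when ts starts the job rather than at enqueue (run and find_free_port are the commands from this issue):

    ts -G 1 bash -c 'run --port=$(find_free_port)'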

    enhancement 
    opened by jasam-sheja 1
  • Cpu only for multi-user version

    For multi-user operation, each user has the same opportunity to invoke a new job, provided the user's slots and the total slots are large enough.

    usage:

    1. the task-spooler server can only be run by the root
    2. the socket file is created at ./${tmpdir}/socket-ts.root, or at a path specified by the TS_SOCKET environment variable (server_start.c)
    3. the default user file is specified in user.c and can be modified via the environment variable TS_USER_PATH. Moreover, a log file is also controlled by user.c
    4. format of user file (Max 100 users):
    # 1231 # comments
    TS_SLOTS = 4 # Set the total TS_SLOTS in task-spooler
    # TS_FIRST_JOBID = 2000 # Set the index of the first job in task-spooler
    # uid    name    slots
    1000     Kylin   10
    3021     test1   10
    1001     test0   100
    34       user2   30
    

    New features/Commands and the potential problem

    1. --daemon: run the server as a daemon (root only).
    2. --hold and --restart [jobid]: hold and restart a task.
    3. --lock and --unlock: lock and unlock the task-spooler server to avoid potential conflicts.
    4. --stop and --cont [user]: pause and continue all tasks, or lock/unlock all users (root only).
    5. -A: show all user information and all tasks.
    6. -X: refresh the user configuration on the fly.
    7. -K: kill the task spooler server.
    8. -r: remove a job, even if it is running.

    The main problem with my work is that the root server cannot control tasks run by other normal users. I found that in my service I cannot stop/pause tasks owned by other normal users. Could you have a look at the c_remove_job() function in client.c?

    opened by kylincaster 4
  • Please edit the README

    First of all, thank you so much for your work and commitment to this old, very useful and very underrated tool.

    I ended up here accidentally, by googling "task-spooler". I already know about the original repo by Lluís and everything written there. I was looking for more up-to-date information on what people are doing, and plan to do, with this tool nowadays, in 2022.

    I then stumbled on your GitHub repo. Very nice to know somebody has interest and is committed to working on this tool.

    But it took me a lot of unnecessary wasted time and a lot of trouble to understand what your repo is about.

    There is an explanation for that.

    You have a horrible README greeting everyone who lands on your page for the first time.

    You seem to have copied the original README and started doing "little edits", crossing out some things, adding little notes here and there.

    You just put the "homepage" of the project as a link to a blog post from 2021. That's not very professional.

    Please don't take this personally. But the end result is a horrible Frankenstein. It makes it so difficult to quickly understand:

    • what is this repo? Is it a fork? Is it a mirror of the original one?
    • is it some guy taking over the old one and maintaining it?
    • besides being a fork, what are the main points? Is it bug fixing? Or are new features planned?
    • what are your future intentions? Is this for personal use? Do you plan to keep on with it in a year or so?

    All of the above would be solved if you just had a smaller, simpler, to-the-point README, not copied or adapted from the original repo.

    My personal opinion is that you should actually have chosen another name, for example "task-spooler-ng". The original repo is dead and is not going to pick up your work here. You have every right to fork, and you deserve every credit for your work here. Since you are independent and are not pulling any more updates from the original repo, you are effectively the "new" task-spooler.

    And eventually, what if Lluís's old repo wakes up from the dead and starts updating again? A big mess. Again, my personal opinion: change the name of your repo. The current situation makes the work of people eventually packaging it for a Linux distribution like Debian or Arch Linux unnecessarily complex.

    Start with a simple sentence: "This is a fork of XYZ, link". Enough.

    Then add a couple more sentences about what "you" are doing here, not what the old "task-spooler" repo did. That is history. Freshmeat and Lluís are history. Put it in a HISTORY file.

    Keep the README small. Put other stuff in the right place: use the good old CHANGELOG, TODO, or NEWS files. Simple plain text files.

    Get rid of the old files written by Lluís ("OBJECTIVE", "PLANS", etc.). They don't make sense here. Put them in a "HISTORY" or "ARCHIVE" folder. You don't have to stay chained, as in a prison, to the structure of the old repo.

    Don't put advanced or niche stuff like cpu-only builds or planned features before the important stuff. The important stuff, for first-time users, is "what is the point of task-spooler" and "how to quick-start".

    What this (justanhduc) repo does differently from Lluís's repo is advanced or historical stuff.

    Again, I hope you take this as constructive criticism. The reason, and my interest, is that I actually use Arch Linux. We have a package for task-spooler using the old repo, with some patches. We saw your repo and are considering a possible change, if the future looks good and stable. Just like the Debian package.

    Thanks in advance.

    opened by m040601 1
  • Way to remove task from queue that is in "allocating" state

    Sorry if bringing this up as an issue here is incorrect, but I was wondering whether there is a way to remove a job from the queue that has not yet started but is in the "allocating" state. I can't seem to -k it, because it says the job is not finished or not running, and I can't -r it either, as it says the job cannot be removed. Thanks!

    bug 
    opened by aka-Ani 3
  • Less intelligent mode for GPU allocation

    Let's say I have 2 GPUs that are shared with others; I would like to allocate a single job to a single GPU.

    Using the --gpus option requires that a GPU is considered free, but setting the right free percentage can be tricky. The -g flag ignores the free requirement, but consecutive jobs assigned to the same GPU will start as long as there are available slots. The high-level view is that there would be a single slot for each GPU, and jobs would run on a GPU as long as the current user does not have a process running on it.

    Essentially, I want to be able to just specify the number of GPUs needed by a job, and task-spooler would allocate the GPUs based on whether any jobs are running on the GPU, regardless of memory usage. It is a hybrid mode between automatic allocation and manual allocation.

    What I am currently doing is creating two different ts servers that use different TMPDIRs, and using the -g flag to force a single GPU for jobs submitted to a given server, which isn't ideal and kind of defeats the purpose of ts.

    BTW, could there be a configuration file that permanently sets the env vars? It would be great if things like the GPU wait could be set permanently as well.

    enhancement 
    opened by kouyk 6
Releases(v1.3.1)
  • v1.3.1(Jun 10, 2022)

    Release Notes

    This release mainly concentrates on stability, with only a handful of new features.

    Stability enhancements

    • Fixed many memory leaks
    • Fixed a redefinition bug (#15)
    • Fixed an error that allowed jobs to depend on future jobs

    New features

    • ts can now allocate GPUs released by other processes that were not previously managed by ts
    • ts -g lists all currently running GPU jobs and the corresponding GPU ID(s)
    • ts can now be installed without sudo privilege (#18)

    Full Changelog: https://github.com/justanhduc/task-spooler/compare/v1.3.0...v1.3.1

  • v1.3.0(Nov 20, 2021)

    Release Notes

    Internal changes

    • GPUs are assigned to clients instead of being selected as before. Thus, queued jobs are executed immediately if there are enough GPUs and slots. As a result, the two flags --set_gpu_wait and --get_gpu_wait are redundant and will be removed in the next major release.

    New features

    • Log name and log folder can be changed via command lines (#7).
    • Environment variables in server side can be seen and modified via --getenv, --setenv and --unsetenv.
    • Let only specific GPUs be visible to ts server via TS_VISIBLE_DEVICES (must be set before starting ts for the first time or via --setenv).
    • Let users decide the availability of GPUs by setting the free memory percentage via --get_gpu_free_perc and --set_gpu_free_perc.
    • More human-readable representation of time in -i.

    Stability enhancements

    • Freed malloc'ed memory (#13).
    • Fixed a bug that failed to pass the error message from the server when using -i.
  • v1.2.1(Oct 7, 2021)

  • v1.2(May 28, 2021)

    Release Notes

    Overview

    This release strides towards stability. Various bugs and cleanups are addressed in this release. For more details, please see the Changelog.

    Bug Fixes

    This release fixes the bug described in #2. The issue is that a task's status changes to running but its output shows (...), because the client sleeps multiple times; it happens when consecutive GPU jobs are scheduled too soon one after another. The bug is fixed here by making sure the client sleeps only once.

    Cleanups

    Minor cleanups and bug fixes, including various refactorings and memory freeing.

    Updates

    The man page is finally updated.
