

This page gathers information about the different job types (which might be renamed job tags someday…), as passed to oarsub using the -t option.

Some job types affect the job execution mechanisms, some affect the job scheduling, and some (user defined) can be abstract/virtual, or only have effects outside of OAR (e.g. in the prologue/epilogue scripts set by the administrator). Job type control is managed by the admission rules. The 15th admission rule handles the syntax checking of the accepted job types (the administrator can enable/disable types in that admission rule).

Types in OAR 2.5.4 and upward

besteffort

Jobs of type besteffort are killed whenever any non-besteffort job wants one of their resources.
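
For instance (hypothetical resource request and script name), a besteffort job could be submitted as:

oarsub -t besteffort -l nodes=1 ./my_script.sh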

idempotent

Makes a job be resubmitted if it previously was in error (TBC).

token

TBC

deploy

  • uses the deploy frontend (see oar.conf) as the job connection node (first node)
  • no ping checker on nodes
  • runs prologue/epilogue on the deploy frontend
  • job is killed if the deploy frontend is rebooted.

There are no hardcoded links to kadeploy. It is the administrator's responsibility to use the kadeploy (or other tools) commands in the prologue/epilogue to set rights, deploy or reboot machines.
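
As an illustration (hypothetical resource request), an interactive deploy job could be submitted as:

oarsub -t deploy -l nodes=2,walltime=2 -I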

cosystem

Almost the same as the deploy type, except the name (uses the cosystem frontend, set in oar.conf).

Conceptually, this type was thought to be used to delegate the management of the resources of a job to another RJMS.

noop

Reserve the nodes but nothing else.

  • job always ends in error (walltime or oardel)
  • only server_prologue/server_epilogue are executed (on the server)
  • the oarsub frontend can be rebooted.

timesharing

Note: below, “user” and “name” are bare words. They are NOT to be replaced by the actual job user or job name.

  • timesharing=*,*
  • timesharing=user,*
  • timesharing=*,name
  • timesharing=user,name
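
As an illustration (hypothetical resource request and script name), a job accepting to share its resources with other jobs of the same user, whatever their name, might be submitted as:

oarsub -t "timesharing=user,*" -l nodes=1 ./my_script.sh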

container/inner

A container is a job which creates a new gantt to enclose some other subsequent jobs.

Subsequent jobs are submitted with “-t inner=<job id of the container job>”

E.g:

  • oarsub -t container → job id X
  • oarsub -t inner=X

Note: both container and inner jobs can be advance reservations

set_placeholder/use_placeholder

Placeholder jobs. Not enabled in the default admission rules. See OAR 2.5.5 where they are renamed placeholder/allowed.

desktop_computing

Deprecated. OAR desktop_computing functions are unmaintained.

New types in OAR 2.5.5 and upward

In addition to the previous ones, the following types come with OAR 2.5.5 and later versions.

placeholder/allowed

Allows one to reserve some resources while letting other users use them if they are granted access. A typical use case is the reservation of resources for a group of users.

Usage:

  • One submits a job with -t placeholder=<a name>
  • As usual, other users cannot use the job resources, unless they use -t allowed=<same name> (and admission rules let them do so)
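
For example (hypothetical names, dates and resource requests), a resource manager of group X could reserve resources with:

oarsub -t placeholder=groupX -r "2018-07-02 09:00:00" -l nodes=10,walltime=8:00:00

and any user allowed by the admission rules could then use them with:

oarsub -t allowed=groupX -l nodes=2 ./my_script.sh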

This somewhat functions like -t timesharing=*,name, except that the user who submits the placeholder job and the users who submit the allowed jobs do not need to use the same job name (job names are left free for other needs/uses, possibly combined with placeholder/allowed). More importantly, 2 allowed jobs do not share resources with each other, while this would be the case with -t timesharing=*,name.

Also, unlike with timesharing=user,*, the user who submits the placeholder job and the users who submit the allowed jobs are not related. If no admission rule prevents it, any user using allowed=<same name> could use the resources allocated to the placeholder=<a name> job.

In a typical use case, the allowed type usage is to be controlled by admission rules, e.g. only users of group X can submit jobs with type placeholder=X or allowed=X.

Finally, unlike container/inner jobs, allowed jobs are not constrained to the boundaries of the placeholder job.

NB:

  • placeholder jobs and timesharing jobs are orthogonal. Both can be used together.
  • placeholder advance reservations are handled by OAR metascheduler → all queues.
  • placeholder batch jobs require that the queue uses one of the *_and_placeholder schedulers
  • placeholder and allowed jobs do not have to be in a same queue
  • oarsub -t allowed=blue -t allowed=red, oarsub -t allowed=blue+red, or whatever other syntax intended to mix multiple placeholders, is not supported, see WARNING

expire

Gives an expiration date to a job: “-t expire=yyyymmdd[ hh:mm[:ss]]”.

Once this date is passed, the job is deleted if it is not running yet (current date > expire date).
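
For instance (hypothetical date and script name), the following job is deleted if it has not started by July 15th, 2018 at noon:

oarsub -t "expire=20180715 12:00" ./my_script.sh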

Note: Does not apply to an advance reservation job

postpone

Postpone the earliest possible start date of a job: “-t postpone=yyyymmdd[ hh:mm[:ss]]”.
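
For instance (hypothetical date and script name), the following job cannot start before July 15th, 2018 at 8pm:

oarsub -t "postpone=20180715 20:00" ./my_script.sh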

Note: Does not apply to an advance reservation job

deadline

Gives a deadline date to a job: “-t deadline=yyyymmdd[ hh:mm[:ss]]”.

Invalidates any scheduling decision which would make the job end (walltime) after the deadline.

If the job is still not running when the current date passes the deadline date, the job is deleted.
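
For instance (hypothetical date and script name), the following job must end before July 16th, 2018 at 8am, and is deleted if it cannot start early enough:

oarsub -t "deadline=20180716 08:00" ./my_script.sh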

Note: Does not apply to an advance reservation job

Types in OAR 2.6.0

Almost all types present in OAR 2.5.5 will be in OAR 2.6.0. However, a few changes should be noted.

container/inner

Containers can now be named, i.e. referenced by any string instead of a job id.

  • Job id container:
oarsub -t container
oarsub -t inner=<job_id>
  • Named container:
oarsub -t container=<name>
oarsub -t inner=<name>

With named containers, the gantt of the container can have several “holes” if more than one job with “-t container=<same name>” is created.
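
For instance (hypothetical dates and resource requests), two reservations sharing the same container name create two “holes” that inner jobs can use:

oarsub -t container=nights -r "2018-06-28 20:00:00" -l nodes=4,walltime=12:00:00
oarsub -t container=nights -r "2018-06-29 20:00:00" -l nodes=4,walltime=12:00:00
oarsub -t inner=nights ./my_script.sh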

Note: both container and inner jobs can be advance reservations

constraints

Allows setting time periods for job scheduling, e.g. all nights and week-ends.

The syntax is: “-t constraints=jjjjj/hh:mm[/hh:mm][/iterations[/start date]][,*]”

If iterations/start date are not set, weekly rolling iterations are used (by default: 4 rolling weeks).

Ex:

  • All nights and week-ends: -t constraints=1234/20:00/12,5/20:00/60
    • Monday, Tuesday, Wednesday, Thursday, starting 20:00 for 12 hours
    • Friday, starting 20:00 for 60 hours (2 days + 12 hours)
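
On the command line, this example would read (hypothetical resource request and script name):

oarsub -t constraints=1234/20:00/12,5/20:00/60 -l nodes=1 ./my_script.sh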

Note: A job cannot be of both types constraints and container.

extensible

Allows all processes of a job to migrate from its cgroup/cpuset to that of a next job, on any host common to the 2 jobs. This assumes that the next job shares some execution time with the previous job.

It is thought to eventually be used in conjunction with the depends, clone, and timesharing types.

Syntax: “-t extensible”.

clone

Allows a next job to have the same resources as a previous job.

Syntax: “-t clone=<job id>”
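
For instance (hypothetical job id and script name), a job requesting the same resources as job 42 might be submitted as:

oarsub -t clone=42 ./my_script.sh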

depends

With OAR 2.5.x, and as of now in the OAR 2.6.x development branch, dependencies between jobs can be set with the -a switch (or --after), but this could become a job type instead.

Dependencies are set given the job id of the previous job, and optionally a relative start time window.

Syntax is: “-a <job id>[,start time window]”.

Syntax for the start time window is: “min start time[,max start time]”.

Syntax for the start and stop time is:

  • [s → time relative to the start time of the previous job, shifted for s seconds (s can be a negative number)
  • ]s → time relative to the stop time of the previous job, shifted for s seconds (s can be a negative number)

E.g.:

oarsub -a 42,]-300,]-30

Here the next job must start within the time window starting 5 minutes before and ending 30 seconds before the end of job 42.

Note: Does not apply to an advance reservation job

Types defined by the administrators

New types can be defined by the administrator of the cluster. They have no internal functionality in OAR by themselves.

allow_classic_ssh

On the Grid'5000 platform, this type enables connections to the nodes using the standard SSH protocol.

Prologue/epilogue scripts trigger some PAM access setup to allow the job user to access the nodes via the ssh command, which is disallowed otherwise (one must use oarsh/oarcp).
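
For instance (hypothetical resource request), an interactive job with standard SSH access to its nodes could be submitted as:

oarsub -t allow_classic_ssh -l nodes=2 -I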

redeploy

On the Digitalis platform.

Mixes (via an admission rule) the deploy type with the timesharing=user,name type.

With some additional code in the deploy frontend prologue/epilogue, a user can extend a deploy job.

E.g: If a user has a new redeploy job overlapping an existing redeploy job, machines are not rebooted at the end of the first job.

ht

On the Froggy/CIMENT platform.

Activates hyperthreading on the nodes (not activated by default). It is only effective for jobs using full nodes. There is no change to the OAR_FILE_NODES file, so the user must be aware of the threads (e.g. launching 2 processes per line of OAR_FILE_NODES).
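
For instance (hypothetical resource request and script name), a full-node job with hyperthreading enabled might be submitted as:

oarsub -t ht -l nodes=1 ./my_script.sh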

It is a simple modification of the job_resource_manager.pl script.

See also using hyperthreading on nodes.

WARNING

For job types of the form key=value handled by the scheduler (e.g. inner, timesharing, placeholder, …), passing the same key multiple times with different values results in only one being taken into account by the scheduler. E.g. when running oarsub -t inner=123 -t inner=456, inner=456 overrides inner=123, so the container used will be job 456 only, not job 123.
