wiki:disk_reservation, revision 2016/12/02 18:33 (current) by neyron
E.g. each node has "disk" resources which users can reserve along with, or independently from, the compute resources.
This would give users the capability of keeping data on disk for longer than the compute time, i.e. a more persistent storage for data.
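As an illustration, such a workflow could look as follows with OAR's ''oarsub'' command. This is a sketch only: the resource hierarchy (''host'', ''disk'') and walltimes are assumptions, not the actual setup described below.

<code bash>
# Sketch only: resource names and hierarchy are assumptions.
# 1. Reserve 2 disks on one host for a week (the disk job outlives computations):
oarsub -t disk -l "{type='disk'}/host=1/disk=2,walltime=168:00:00" "sleep 7d"
# 2. Submit shorter compute jobs on the host where the disks are reserved:
oarsub -l "host=1,walltime=2:00:00" ./my_computation.sh
</code>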
===== Setup =====
We create //disk// resources, then set up some coupling so that a //compute// resource is **tagged** when a user has disks reserved on it. Tagging is done in the //disk// property of the //compute// resource (type //default//).
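A minimal sketch of the resource creation, assuming the //disk// property and the //disk// resource type are created with OAR's ''oarproperty'' and ''oarnodesetting'' commands (names, options and values are assumptions to be adapted to the actual cluster):

<code bash>
# Sketch only: property names, host names and disk names are assumptions.
oarproperty -a disk -c        # add a "disk" property (character type)
# create two "disk" typed resources attached to node1:
oarnodesetting -a -h node1 -p type='disk' -p disk='sdb'
oarnodesetting -a -h node1 -p type='disk' -p disk='sdc'
</code>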
In order to:
- simplify the user interface
- allow one to submit a compute job before the //disk// job actually starts
a dedicated job type is provided.
First, modify the job type checking admission rule:
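For instance, the type check could be extended along these lines. This is an assumed sketch only: ''withdisk'' is a hypothetical name for the new job type, and the ''$type_list'' variable and rule body follow OAR's usual admission rule conventions rather than the exact rule of this setup.

<code perl>
# Sketch only: "withdisk" is an assumed name for the new job type.
foreach my $t (@{$type_list}) {
    unless ($t =~ /^(deploy|desktop_computing|besteffort|cosystem|idempotent|timesharing(=\S+)?|withdisk)$/) {
        die("[ADMISSION RULE] Error: unknown job type: $t\n");
    }
}
</code>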
Mind setting a relevant priority for the new admission rule.
Finally, just like for deploy jobs, we might want to allow only whole nodes for //compute with disk// jobs. For that, we edit the corresponding admission rule:
<code perl>
# Restrict allowed properties for deploy jobs to force requesting entire nodes
</code>
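The modification could, for example, extend the deploy-job restriction to the new type, along these lines (sketch under the same assumptions as above, with ''withdisk'' as the hypothetical type name):

<code perl>
# Sketch only: also force entire nodes for "withdisk" (assumed name) jobs,
# by extending the condition of the existing deploy restriction.
if (grep(/^(deploy|withdisk)$/, @{$type_list})) {
    # ... same whole-node / restricted-properties check as for deploy jobs ...
}
</code>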
* This workaround does not allow placing and showing (scheduling) the //compute// jobs in the future (i.e. before the //disk// job is running). While not scheduled, //compute// jobs can however be submitted ahead of time. Also, the scheduling decision could place //compute// jobs after the end of the running //disk// job, so that in the end they would not actually launch.
* Users may want to select specific disks/hosts when using only a part of the reserved disks for a given compute job. This mechanism does not allow it.
* The ''
* A //
* While batch jobs which are not yet started will be moved with regard to previous scheduling decisions, some may have started before the disk property of the resources is changed, making resources whose disks are reserved unavailable for the duration of those jobs.
* Advance reservations could also be accepted on the resources: resources are //booked// upon submission acceptance for an advance reservation,