Admission rules are managed with the ''oaradmissionrule'' command. See the usage or man page of the command.
The code of the admission rules is executed within oarsub at submission time.
Many internal variables are available, making the mechanism very powerful... and dangerous.
Hence the administrator must be very careful when playing with admission rules.
==== Example 1: avoid the ugly SQL error warning which appears in case of a typo in the oarsub -l syntax ====
The following admission rule can be added to check the request for valid resource properties:
# Check the validity of the resource request (-l option of oarsub) to avoid ugly SQL messages
# The following variable lists the valid resource properties that the user can use.
# See the output of the `oarproperty -l' command for the list of available properties.
# Only properties which define the implicit resource hierarchy need to be listed.
my @properties = ("host", "cpu", "core", "thread");
foreach my $group (@$ref_resource_list) {
    foreach my $subgroup (@$group) {
        foreach my $moldable (@$subgroup) {
            foreach my $resource (@{$moldable->{resources}}) {
                while (my ($k, $v) = each(%$resource)) {
                    # string comparison (eq) rather than a regexp match, so that
                    # metacharacters in the user's input cannot bypass the check
                    if ($k eq "resource" and not grep { $_ eq $v } ("nodes", "resource_id", @properties)) {
                        warn "Admission Rule ERROR: unknown resource property \"$v\"\n";
                        exit 1;
                    }
                }
            }
        }
    }
}
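In isolation, the validity test of that rule boils down to a set-membership check. The following standalone sketch (the ''is_valid_resource'' helper name is ours, not part of OAR) shows why a string comparison is preferable to the regexp-based ''grep (/^$v$/, ...)'': with ''eq'', metacharacters in the user's input cannot make the check match accidentally.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Same property list as in the rule above.
my @properties = ("host", "cpu", "core", "thread");

# Hypothetical helper: true if the requested resource name is "nodes",
# "resource_id", or one of the hierarchy properties.
sub is_valid_resource {
    my ($name) = @_;
    return scalar grep { $_ eq $name } ("nodes", "resource_id", @properties);
}

print is_valid_resource("core") ? "valid\n" : "invalid\n";    # valid
print is_valid_resource("cores") ? "valid\n" : "invalid\n";   # invalid
```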
==== Example 2: enforce a minimum number of hosts in a job ====
==Question==
I'm looking for an admission rule that would reject job requests if the user asks for less than N hosts.
E.g.:
$ oarsub -l host=2 -I
would fail if I configure N=3.
I have a very limited set of properties configured:
$ oarproperty -l
last_available_upto
host
==Answer==
You could inspect what resources are requested by looking at the
oarsub command line in an admission rule, using the
$initial_request_string variable.
However, handling every possible request that way can be very tricky.
You could also use the $ref_resource_list variable, which is a reference
to the structure containing the result of the parsing of the resource
request.
That structure is complex, since it must store information such as the
resources hierarchy, moldable jobs, properties, etc. See the code for the admission rule below:
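Before the rule itself, here is a hypothetical mock of what that nested structure might look like for a request such as ''oarsub -l host=2/core=4''. The shape (groups > subgroups > moldable requests > resources) is inferred from the loop nesting used by the admission rules on this page, not taken from the OAR source code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed shape of $ref_resource_list for `oarsub -l host=2/core=4`.
my $ref_resource_list = [        # one entry per group
    [                            # one entry per subgroup
        [                        # one entry per moldable request
            { resources => [     # the requested hierarchy, top to bottom
                  { resource => "host", value => 2 },
                  { resource => "core", value => 4 },
              ],
            },
        ],
    ],
];

# Walk it with the same loop nesting as the admission rules.
my @seen;
foreach my $group (@$ref_resource_list) {
    foreach my $subgroup (@$group) {
        foreach my $moldable (@$subgroup) {
            foreach my $resource (@{$moldable->{resources}}) {
                push @seen, "$resource->{resource}=$resource->{value}";
            }
        }
    }
}
print join("/", @seen), "\n";    # host=2/core=4
```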
my $min_resources = 5; # N=5
my $error = 0;
foreach my $group (@$ref_resource_list) {
    foreach my $subgroup (@$group) {
        foreach my $moldable (@$subgroup) {
            foreach my $resource (@{$moldable->{resources}}) {
                while (my ($k, $v) = each(%$resource)) {
                    # note the "and": the resource must be neither network_address nor host
                    if ($k eq "resource" and ($v ne "network_address" and $v ne "host")) {
                        warn "requesting resources of type $v is not allowed, use nodes or host\n";
                        $error = 1;
                    } elsif ($k eq "value" and $v < $min_resources) {
                        warn "requesting less than $min_resources nodes is not allowed, you requested $v\n";
                        $error = 1;
                    }
                }
                exit 1 if $error;
            }
        }
    }
}
==== Example 3: managing two compute node generations in the same cluster ====
After upgrading the cluster with new nodes, we need a rule to:
- send the interactive sessions on the old nodes as a default
- send the passive sessions on the new nodes as a default
- avoid mixing old and new nodes for parallel jobs as a default
- allow users to override this behavior by setting a property in their request.
==Answer==
First we need to add a property. We called it "**cluster**" and it is an integer (the year of the node installation): **2011** or **2016**
$ oarproperty -a cluster
Nodes kareline-0-x (the old ones) are inserted in the OAR database with:
oarnodesetting -a -h 'kareline-0-0' -p host='kareline-0-0' .... -p "cluster=2011"
Nodes kareline-1-x (the new ones) are inserted in the OAR database with:
oarnodesetting -a -h 'kareline-1-0' -p host='kareline-1-0' .... -p "cluster=2016"
Then we add a rule to match our requirements, based on this property:
my $cluster;
if ($jobType eq "INTERACTIVE") {
    print "[DEBUG] Interactive job";
    $cluster = '2011';
} else {
    print "[DEBUG] Passive job";
    $cluster = '2016';
}
if (index($jobproperties, 'cluster') == -1) {
    print " without cluster (".$jobproperties.")";
    if ($jobproperties ne "") {
        $jobproperties = "($jobproperties) AND cluster = '".$cluster."'";
    } else {
        $jobproperties = "cluster = '".$cluster."'";
    }
    print "\n $jobproperties \n";
}
(You can remove the "print" lines used to check what the rule is doing)
This rule is written in a text file //regle.oar// and added to OAR with:
oaradmissionrules -n -r ./regle.oar
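To see what the rule produces, its logic can be rehearsed outside of OAR. In the sketch below the ''$jobType'' and ''$jobproperties'' variables, normally provided by oarsub, are mocked as plain arguments of a hypothetical ''add_cluster_constraint'' helper:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Standalone rehearsal of the rule's logic; the helper name is ours.
sub add_cluster_constraint {
    my ($jobType, $jobproperties) = @_;
    my $cluster = ($jobType eq "INTERACTIVE") ? '2011' : '2016';
    # only add the constraint if the user did not set one explicitly
    if (index($jobproperties, 'cluster') == -1) {
        if ($jobproperties ne "") {
            $jobproperties = "($jobproperties) AND cluster = '$cluster'";
        } else {
            $jobproperties = "cluster = '$cluster'";
        }
    }
    return $jobproperties;
}

print add_cluster_constraint("INTERACTIVE", ""), "\n";             # cluster = '2011'
print add_cluster_constraint("PASSIVE", ""), "\n";                 # cluster = '2016'
print add_cluster_constraint("PASSIVE", "cluster = '2011'"), "\n"; # left as-is
```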
==== Example 4: limit access to a queue, based on usernames set in a file ====
==Answer==
# Title : limit access to the challenge queue to authorized users
if ($queue_name eq "challenge") {
    open(FILE, "< $ENV{HOME}/challenge.users") || die("[ADMISSION RULE] Cannot open the challenge user list");
    my $authorized = 0;
    while (<FILE>) {
        if (m/^\s*$user\s*$/m) {
            $authorized = 1;
        }
    }
    close(FILE);
    if ($authorized != 1) {
        die("[ADMISSION RULE] $user is not authorized to submit in the challenge queue\n");
    }
}
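The //challenge.users// file is assumed to contain one username per line, with optional surrounding whitespace. The sketch below rehearses the matching logic on an in-memory list instead of a real file (the ''authorized'' helper name and the sample lines are ours); ''\Q...\E'' is added so that unusual characters in the username cannot act as regexp metacharacters.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical file content: one username per line, whitespace tolerated.
my @file_lines = ("alice\n", "  bob  \n", "carol\n");

# True if $user matches one whole line of the list.
sub authorized {
    my ($user, @lines) = @_;
    foreach (@lines) {
        return 1 if m/^\s*\Q$user\E\s*$/;
    }
    return 0;
}

print authorized("bob", @file_lines) ? "authorized\n" : "denied\n";     # authorized
print authorized("mallory", @file_lines) ? "authorized\n" : "denied\n"; # denied
```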
==== Example 5: give more privileges to the owners of nodes ====
Give more privileges to the owners of nodes (e.g. people who paid for the nodes) to submit, by restricting others to besteffort jobs:
* non-owners compete for the resources according to the scheduling policy of the besteffort queue;
* but owners can get jobs running as if those jobs did not exist (i.e. besteffort jobs are killed whenever non-besteffort jobs require the same resources).
==Answer==
* First add a property to resources: //dedicated=NO// or //dedicated=team_name// for nodes owned by each team.
* Then add an admission rule which describes the team list with regard to unix users and groups.
* Non-besteffort jobs will automatically be bound to resources set with //dedicated=NO//, except for members of one or more of the defined teams, whose jobs are automatically directed to the union of their dedicated resources (//dedicated=team_of_the_user//) and the //dedicated=NO// resources.
* Besteffort jobs do not have constraints: they are scheduled on any resources with the property //besteffort=YES// (usually all nodes).
# Description : set dedicated property for exceptional usages
# original : IGRIDA admission rule
# modified : nef @ Inria Sophia
my @projects;
my %serpico_project = ( 'dedicated' => 'serpico', 'groups' => [ 'serpico', 'e-serpico' ], 'users' => [] );
my %neurinfo_project = ( 'dedicated' => 'neurinfo', 'groups' => ['visages'], 'users' => [] );
my %fluminance_project = ( 'dedicated' => 'user1', 'groups' => [], 'users' => ['user1'] );
push(@projects, \%serpico_project);
push(@projects, \%neurinfo_project);
push(@projects, \%fluminance_project);
my $jobaddedprop = "\( dedicated='NO'";
# Any user can run besteffort jobs on any resource: in that case there is
# no additional verification and no modification.
if (!(grep(/^besteffort/, @{$type_list}))) {
    # Search for the projects to which the user belongs
    foreach my $project (@projects) {
        my @members;
        foreach (@{$project->{'groups'}}) {
            # the 4th field of getgrnam is the space-separated member list
            my (undef, undef, undef, $gr) = getgrnam($_);
            my @group_members = split(/\s+/, $gr);
            #print(@group_members);
            push(@members, @group_members);
        }
        push(@members, @{$project->{'users'}});
        my %h = map { $_ => 1 } @members;
        if ($h{$user}) {
            #print($user);
            $jobaddedprop .= " OR dedicated=\'$project->{'dedicated'}\'";
        } else {
            if (grep(/dedicated(\s+|)=(\s+|)'$project->{'dedicated'}\'/, $jobproperties)) {
                die("[ADMISSION RULE] $project->{'dedicated'} dedicated nodes are only available for best-effort jobs.\n");
            }
        }
    }
    $jobaddedprop .= " \)";
    # Always limit the request to the nodes permitted for the user
    if ($queue_name ne "admin") {
        if (grep(/\S/, $jobproperties)) {
            $jobproperties = "($jobproperties) AND $jobaddedprop";
        } else {
            $jobproperties = $jobaddedprop;
        }
        print("[ADMISSION RULE] Automatically add constraint to go on nodes permitted for the user.\n");
    }
}
That sets up:
* For a user of team 'serpico' : ''$ oarsub -I''
* => property set: ''dedicated='NO' OR dedicated='serpico'''
* That user can also force use of dedicated nodes only: ''$ oarsub -I -p "dedicated='serpico'"''
* => property set: ''dedicated='serpico'''
* For a user of teams 'serpico' and 'neurinfo': ''$ oarsub -I''
* => property set: ''dedicated='NO' OR dedicated='serpico' OR dedicated='neurinfo'''
* For a user that does not belong to a team which owns nodes: ''$ oarsub -I''
* => property set: ''dedicated='NO'''
* And for that user: ''$ oarsub -I -p "dedicated='serpico'"''
* => ''ERROR : [ADMISSION RULE] serpico dedicated nodes are only available for best-effort jobs.''
* But with ''$ oarsub -I -t besteffort'', they can get jobs running on the dedicated nodes, which however will be stopped as soon as a job of a member of the owners' team needs the nodes.
(There may be some limitations in that property filtering, which could allow malicious users to overcome the usage policy)
==== Example 6: verify correct resource definitions ====
Verify that ''oarsub -l [resource request]'' gives a correct resource definition.
OAR resource request hierarchies are implicit in the OAR database, but they can be enforced by an admission rule.
Let's assume that the valid resource hierarchies are:
* ''switch > cluster > host > cpu > gpu > core''
* ''cluster > switch > host > cpu > gpu > core''
* ''cluster > switch > host > disk''
* ''switch > cluster > host > disk''
* ''license''
Here both switch > cluster and cluster > switch can be valid (some clusters spread their nodes over many switches, some clusters share a same switch). A GPU never spans several CPUs. The disk property defines special resources to reserve disks on hosts, independently from the cpu, gpu and core properties. The license property is a completely independent type of resource.
Any of those resource properties can define a valid hierarchy of resources, for instance:
* ''oarsub -l switch=2/core=1'' → get 2 cores on different switches
* ''oarsub -l cluster=1/host=2/disk=1'' → get 1 disk on each of 2 different hosts, all on a same cluster
* ''oarsub -l license=1'' → get 1 license
But an incorrect hierarchy should raise an error:
* ''oarsub -l gpu=1/host=2'' → cannot get 2 hosts for a same gpu
* ''oarsub -l host=1/disk=1/core=1'' → cannot mix disk and core
* ''oarsub -l switch=1/cluster=1/switch=1'' → obviously wrong
==Answer==
# definition of the valid implicit hierarchies of resources.
my %valid_children = (
    # both cluster > switch and switch > cluster are relevant
    "cluster" => ["switch", "cpu", "host", "core", "gpu", "disk"],
    "switch" => ["cluster", "cpu", "host", "core", "gpu", "disk"],
    "host" => ["cpu", "core", "gpu", "disk"],
    "cpu" => ["core", "gpu"],
    "gpu" => ["core"],
    "disk" => [],
    "license" => [],
);
# nodes is a synonym of host
my %aliases = (
    "nodes" => "host",
);
# valid resources are those appearing in one of the possible hierarchies
my %valid_resources;
foreach my $key (keys %valid_children) {
    $valid_resources{$key} = undef;
    foreach my $child (@{$valid_children{$key}}) {
        $valid_resources{$child} = undef;
    }
}
my @valid_resources = keys %valid_resources;
# test the user's request for possible errors
foreach my $mold (@{$ref_resource_list}) { # loop on all moldable resource requests (oarsub -l ... -l ...)
    foreach my $r (@{$mold->[0]}) { # loop on all joint resource requests (oarsub -l ...+...)
        my @resources_hierarchy;
        my $parent_resource;
        foreach my $h (@{$r->{resources}}) { # loop on every resource, looking at the resource name only ($h->{resource}), not the value ($h->{value})
            # resolve aliases first, so that e.g. "nodes" is checked as "host"
            my $resource = exists $aliases{$h->{resource}} ? $aliases{$h->{resource}} : $h->{resource};
            if (!grep { $_ eq $resource } @valid_resources) {
                die("[ADMISSION RULE] Error: the requested resource '$h->{resource}' is not valid, please check your syntax.\n");
            }
            if (!grep { $_ eq $resource } @resources_hierarchy) {
                push @resources_hierarchy, $resource;
            } else {
                # catching the case `oarsub -l /cluster=a/switch=b/cluster=c` or `oarsub -l /switch=a/cluster=b/switch=c`
                push @resources_hierarchy, $resource;
                die("[ADMISSION RULE] Error: duplicated resource '$h->{resource}' in the requested resources hierarchy '".join(" > ", @resources_hierarchy)."'.\n");
            }
            if (defined($parent_resource)) {
                if (!grep { $_ eq $resource } @{$valid_children{$parent_resource}}) {
                    die("[ADMISSION RULE] Error: the requested resources hierarchy '".join(" > ", @resources_hierarchy)."' is not relevant.\n");
                }
            }
            $parent_resource = $resource;
        }
    }
}