

Admission rules are managed with the oaradmissionrules command. See its usage help or man page.

The code of the admission rules is executed within oarsub at submission time. Many internal variables are available, making the mechanism very powerful… and dangerous. The administrator must therefore be very careful when writing admission rules.
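As a minimal sketch of what a rule can do with those variables (a hypothetical policy for illustration; in a real rule, variables such as $user and $queue_name are injected by oarsub, so the stub assignments below would not appear):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stub values: in a real admission rule, oarsub injects variables such
# as $user and $queue_name, so these assignments would not appear.
my $user       = "alice";
my $queue_name = "default";

# Hypothetical policy: only root may submit to the admin queue.
if ($queue_name eq "admin" and $user ne "root") {
    die("[ADMISSION RULE] Only root may submit to the admin queue.\n");
}
```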

Example 1: avoid the ugly SQL error message which appears in case of a typo in the oarsub -l syntax

The following admission rule can be added to check that the requested resource properties are valid:

# Check the validity of the resource request (-l option of oarsub) to avoid an ugly SQL message
# The following variable lists the valid resource properties that the user can use.
# See the output of the `oarproperty -l' command for the list of available properties.
# Only properties that define the implicit resource hierarchy need to be listed.
my @properties = ("host", "cpu", "core", "thread");
 
foreach my $group (@$ref_resource_list) {
  foreach my $subgroup (@$group) {
    foreach my $moldable (@$subgroup) {
      foreach my $resource (@{$moldable->{resources}}) {
        while (my ($k,$v) = each(%$resource)) {
          if ($k eq "resource" and not grep(/^\Q$v\E$/, ("nodes", "resource_id", @properties))) {
            warn "Admission Rule ERROR : Unknown resource property \"$v\"\n";
            exit 1;
          }
        }
      }
    }
  }
}

Example 2: enforce a minimum number of hosts per job

Question

I'm looking for an admission rule that would reject job requests if the user asks for fewer than N hosts. E.g.:

$ oarsub -l host=2 -I

would fail if I configure N=3.

I have a very limited set of properties configured:

$ oarproperty -l
last_available_upto
host
Answer

You could inspect what resources are requested by looking at the oarsub command line in an admission rule, using the $initial_request_string variable.

However, handling every possible request that way could be very tricky.

You could also use the $ref_resource_list variable, which is a reference to the structure containing the result of the parsing of the resource request. That structure is complex, since it must store information such as the resources hierarchy, moldable jobs, properties, etc. See the code for the admission rule below:
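To make that concrete, here is a hand-built stand-in for the structure, shaped the way the loops in the rule below expect it (mimicking a request like `oarsub -l host=2`; the exact internal layout may differ between OAR versions):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hand-built stand-in for $ref_resource_list, shaped as the admission
# rule's loops expect: groups -> subgroups -> moldable instances, each
# with a 'resources' array of { resource => ..., value => ... } hashes.
my $ref_resource_list = [            # list of resource groups
    [                                # one group: list of subgroups
        [                            # one subgroup: list of moldable instances
            { resources => [ { resource => "host", value => 2 } ] },
        ],
    ],
];

# Walk it the same way the admission rule does.
foreach my $group (@$ref_resource_list) {
  foreach my $subgroup (@$group) {
    foreach my $moldable (@$subgroup) {
      foreach my $resource (@{$moldable->{resources}}) {
        print "$resource->{resource} => $resource->{value}\n";   # prints "host => 2"
      }
    }
  }
}
```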

my $min_resources = 5; # N=5
my $error = 0;
foreach my $group (@$ref_resource_list) {
  foreach my $subgroup (@$group) {
    foreach my $moldable (@$subgroup) {
      foreach my $resource (@{$moldable->{resources}}) {
        while (my ($k,$v) = each(%$resource)) {
          if ($k eq "resource" and $v ne "network_address" and $v ne "host") {
            warn "requesting resource of type $v is not allowed, use nodes or host\n";
            $error = 1;
          } elsif ($k eq "value" and $v < $min_resources) {
            warn "requesting less than $min_resources nodes is not allowed, you requested $v\n";
            $error = 1;
          }
        }
        exit 1 if $error;
      }
    }
  }
}

Example 3: managing two generations of compute nodes in the same cluster

After upgrading the cluster with new nodes, we need a rule to:

  1. send interactive jobs to the old nodes by default
  2. send passive (batch) jobs to the new nodes by default
  3. avoid mixing old and new nodes for parallel jobs by default
  4. allow users to override this behavior by setting a property in their request.
Answer

First we need to add a property. We call it “cluster”; it is an integer (the year of the node installation): 2011 or 2016.

$ oarproperty -a cluster

Nodes kareline-0-x (the old ones) are inserted in the OAR database with:

oarnodesetting -a -h 'kareline-0-0' -p host='kareline-0-0'  .... -p "cluster=2011"

Nodes kareline-1-x (the new ones) are inserted in the OAR database with:

oarnodesetting -a -h 'kareline-1-0' -p host='kareline-1-0'  .... -p "cluster=2016"

Then we add a rule to match our requirements, based on this property:

my $cluster;
if ($jobType eq "INTERACTIVE") {
     print "[DEBUG] Interactive job";
     $cluster = 2011;
} else {
     print "[DEBUG] Passive job";
     $cluster = 2016;
}
if (index($jobproperties, 'cluster') == -1) {
     print " without cluster (".$jobproperties.")";
     if ($jobproperties ne "") {
          $jobproperties = "($jobproperties) AND cluster = '".$cluster."'";
     } else {
          $jobproperties = "cluster = '".$cluster."'";
     }
     print "\n $jobproperties \n";
}

(You can remove the "print" lines used to check what the rule is doing.)
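Traced by hand: for an interactive job submitted with -p "host='kareline-0-0'", the rule builds the following property string (the stub assignments stand in for the variables that oarsub provides):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-ins for the variables oarsub provides to the rule, mimicking
# an interactive job submitted with -p "host='kareline-0-0'".
my $jobType       = "INTERACTIVE";
my $jobproperties = "host='kareline-0-0'";

my $cluster = ($jobType eq "INTERACTIVE") ? 2011 : 2016;
if (index($jobproperties, 'cluster') == -1) {
    $jobproperties = $jobproperties ne ""
        ? "($jobproperties) AND cluster = '$cluster'"
        : "cluster = '$cluster'";
}
print "$jobproperties\n";   # prints (host='kareline-0-0') AND cluster = '2011'
```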

This rule is written in a text file, regle.oar, and added to OAR with:

oaradmissionrules -n -r ./regle.oar 

Example 4: limit access to a queue, based on usernames set in a file

Answer
# Title : limit access to the challenge queue to authorized users
if ($queue_name eq "challenge") {
  open(FILE, "< $ENV{HOME}/challenge.users") || die("[ADMISSION RULE] Cannot open the challenge user list");
  my $authorized = 0;
  while (<FILE>) {
    if (m/^\s*\Q$user\E\s*$/) {
      $authorized = 1;
    }
  }
  close(FILE);
  if ($authorized != 1) {
    die("[ADMISSION RULE] $user is not authorized to submit in the challenge queue\n");
  }
}
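To put the rule in place, the user list is just one username per line (alice and bob below are placeholder names); per the rule's $ENV{HOME} lookup, the file must live in the home directory of the account under which the admission rules run:

```shell
# Create the list of authorized users, one username per line
# (alice and bob are placeholders). The rule reads the file from
# $HOME of the account running the admission rules.
cat > "$HOME/challenge.users" <<'EOF'
alice
bob
EOF
```

The rule itself can then be added from a text file with oaradmissionrules -n -r, as shown in Example 3.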

Example 5: give more privileges to the owners of nodes

Give more privileges to the owners of nodes (e.g. people who paid for the nodes), by restricting other users to besteffort jobs:

  • non-owners compete for the resources according to the scheduling policy of the besteffort queue;
  • owners can get jobs running as if those besteffort jobs did not exist (i.e. besteffort jobs are killed whenever non-besteffort jobs require the same resources).
Answer
  • First add a property to resources: dedicated=none or dedicated=team_name for nodes owned by each team.
  • Then add an admission rule which describes the team list with regard to unix users and groups.
  • Non-besteffort jobs will automatically be bound to resources set with dedicated=none, except for members of one of the defined teams, whose jobs are automatically directed to their dedicated resources (dedicated=team_of_the_user).
  • Besteffort jobs do not have constraints: they are scheduled on any resources with the property besteffort=YES (usually all nodes).
# Description : set dedicated property
 
my @projects;
my %serpico_project = ( 'dedicated' => 'serpico', 'groups' => [ 'serpico', 'e-serpico' ], 'users' => [] );
my %neurinfo_project = ( 'dedicated' => 'neurinfo', 'groups' => ['visages'], 'users' => [] );
my %fluminance_project = ( 'dedicated' => 'user1', 'groups' => [], 'users' => ['user1'] );
 
push(@projects, \%serpico_project);
push(@projects, \%neurinfo_project);
push(@projects, \%fluminance_project);
 
my $dedicated_none = 0;
my $user_in_one_group = 0;
 
foreach my $project (@projects) {
  my @members;
  foreach (@{$project->{'groups'}}) {
    my $gr;
    (undef,undef,undef, $gr) = getgrnam($_);
    my @group_members = split(/\s+/,$gr);
    push(@members, @group_members);
  }
  push(@members, @{$project->{'users'}});
  my %h = map { $_ => 1 } @members;
  if ($h{$user}) {
    if (!(grep(/dedicated(\s+|)=/, $jobproperties))){
      $jobproperties = "($jobproperties) AND dedicated = \'$project->{'dedicated'}\'";
      print("[ADMISSION RULE] Automatically add the constraint to go on the $project->{'dedicated'} dedicated nodes.\n");
      $user_in_one_group = 1;
    }
  } else {
    if (!(grep(/dedicated(\s+|)=/, $jobproperties))) {
      $dedicated_none = 1;
    }
    if ((grep(/dedicated(\s+|)=(\s+|)'$project->{'dedicated'}\'/, $jobproperties)) and !(grep(/^besteffort/, @{$type_list}))) {
      die("[ADMISSION RULE] $project->{'dedicated'} dedicated nodes are only available for best-effort jobs.\n");
    }
  }
}
 
if (($dedicated_none == 1) and ($user_in_one_group == 0) and ($queue_name ne "admin")) {
  if (!(grep(/^besteffort/, @{$type_list}))) {
    $jobproperties = "($jobproperties) AND dedicated = \'none\'";
  }
}

That set up:

  • For a user of team 'serpico': $ oarsub -I
    • ⇒ property set: dedicated = 'serpico'
  • That user can also use non-dedicated nodes: $ oarsub -I -p "dedicated='none'"
    • ⇒ property set: dedicated = 'none'
  • For a user that does not belong to a team which owns nodes: $ oarsub -I
    • ⇒ property set: dedicated = 'none'
  • And for that user: $ oarsub -I -p "dedicated='serpico'"
    • ERROR : [ADMISSION RULE] serpico dedicated nodes are only available for best-effort jobs.
  • But with: $ oarsub -I -t besteffort, that user can get jobs running on the dedicated nodes; those jobs will however be stopped as soon as a job of a member of the owners' team needs the nodes.

(Note that there are some known limitations in this property filtering, which could allow malicious users to circumvent the usage policy.)

wiki/some_examples_of_admission_rules.1494406145.txt.gz · Last modified: 2017/05/10 10:49 by neyron