Green computing

On this page, you'll find tips for optimizing the energy (power and cooling) consumption of your clusters.

Warning: this tip is now partly obsoleted by the new Hulot module that ships with the latest OAR release. This energy-saving module has a keepalive feature of its own. Take a look at the comments above the ENERGY_* variables in the oar.conf file.

Activating the dynamic on/off of nodes while keeping a few nodes always ready

First of all, you have to set up the ecological feature as described in the FAQ: How to configure a more ecological cluster.

Note: if you have an ordinary cluster whose nodes are always available, you may set the cm_availability property to 2147483646 (infinite minus 1).

Note: once this feature has been activated, the Absent status may not always really mean absent, but rather standby, as OAR may want to automatically power the node back on. To put a node into a truly absent status, set its cm_availability property to 0.
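
For example, both cases can be handled with oarnodesetting; node1 is a placeholder and the exact options may vary with your OAR version, so check oarnodesetting --help on your installation:

 # Declare node1 as always available ("infinite minus 1")
 oarnodesetting -h node1 -p cm_availability=2147483646
 # Put node1 into a truly absent status (no automatic power on)
 oarnodesetting -h node1 -p cm_availability=0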

This tip assumes that your nodes are set up to automatically switch to the Alive state when they boot and to the Absent state when they shut down. You may refer to the FAQ for this: How to manage start/stop of the nodes? or to this section of the Customization tips.
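
If your nodes are not set up that way yet, here is a minimal init-script sketch. The server name oar-server and the passwordless ssh access from the nodes are assumptions; adapt it to your installation and see the FAQ entries above for the supported way to do this:

#!/bin/bash
# Sketch: declare this node's state to OAR at boot/shutdown.
# "oar-server" and the ssh access from the node are assumptions.
case "$1" in
  start)
    ssh oar-server "oarnodesetting -s Alive -h $(hostname)"
    ;;
  stop)
    ssh oar-server "oarnodesetting -s Absent -h $(hostname)"
    ;;
esac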

Here, we provide 3 scripts that you may customize. They make your ecological configuration a bit smarter than the default, as it will keep a few nodes (4 in this example) powered on and ready for incoming jobs:

wake_up_nodes.sh
#!/bin/bash
 
# Admin host from which the power-on command is run (SGI Altix ICE example)
IPMI_HOST="admin"
POWER_ON_CMD="cpower --up --quiet"
 
# OAR gives the list of nodes to wake up on stdin
NODES=`cat`
 
for NODE in $NODES
do
  ssh $IPMI_HOST $POWER_ON_CMD $NODE
done

This is a very simple script containing the command that powers on your nodes. In this example, suitable for an SGI Altix ICE, we run cpower from an admin host. You'll probably have to customize this. This script is to be set as the SCHEDULER_NODE_MANAGER_WAKE_UP_CMD option of the oar.conf file, like this:

 SCHEDULER_NODE_MANAGER_WAKE_UP_CMD="/usr/lib/oar/oardodo/oardodo /usr/local/sbin/wake_up_nodes.sh"
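
You can test the script by hand before wiring it into oar.conf, assuming you can ssh to the admin host; node1 and node2 are placeholder node names:

 echo "node1 node2" | /usr/local/sbin/wake_up_nodes.sh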
set_standby_nodes.sh
#!/bin/bash
set -e
 
# This script is intended to be used from the SCHEDULER_NODE_MANAGER_SLEEP_CMD
# variable of the oar.conf file.
# It halts the nodes given on stdin, but refuses to stop nodes if that
# would leave fewer than $NODES_KEEP_ALIVE alive nodes, because we
# generally want some nodes ready to start jobs immediately.
 
NODES_KEEP_ALIVE=4
 
NODES=`cat`
 
# Count the Alive nodes that have at least 1 free resource
ALIVE_NODES=`oarnodes  --sql "state = 'Alive' and network_address NOT IN (SELECT distinct(network_address) FROM resources where resource_id IN (SELECT resource_id  FROM assigned_resources WHERE assigned_resource_index = 'CURRENT'))" | grep '^network_address' | sort -u | wc -l`
 
NODES_TO_SHUTDOWN=""
 
for NODE in $NODES
do
  if [ $ALIVE_NODES -gt $NODES_KEEP_ALIVE ]
  then
    NODES_TO_SHUTDOWN="$NODE\n$NODES_TO_SHUTDOWN"
    let ALIVE_NODES=ALIVE_NODES-1
  else
    echo "Not halting $NODE because I need to keep $NODES_KEEP_ALIVE alive nodes"
  fi
done
 
if [ "$NODES_TO_SHUTDOWN" != "" ]
then
  echo -e "$NODES_TO_SHUTDOWN" |/usr/lib/oar/sentinelle.pl -f - -t 3 -p '/sbin/halt -p'
fi

This is the script for shutting down nodes. It uses sentinelle.pl to send the halt command to the nodes, as suggested by the default configuration, but it refuses to shut down nodes if that would leave fewer than 4 ready nodes. This script is to be set as the SCHEDULER_NODE_MANAGER_SLEEP_CMD like this:

 SCHEDULER_NODE_MANAGER_SLEEP_CMD="/usr/lib/oar/oardodo/oardodo /usr/local/sbin/set_standby_nodes.sh"
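
To see which nodes the script considers free and Alive, you can run its embedded query by hand; append | wc -l to get the count the script actually uses:

 oarnodes --sql "state = 'Alive' and network_address NOT IN (SELECT distinct(network_address) FROM resources where resource_id IN (SELECT resource_id FROM assigned_resources WHERE assigned_resource_index = 'CURRENT'))" | grep '^network_address' | sort -u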
nodes_keepalive.sh
#!/bin/bash
set -e
 
# This script is intended to be run every 5 minutes from the crontab.
# It ensures that $NODES_KEEP_ALIVE nodes with at least 1 free resource
# are always alive and not shut down. It wakes up nodes by submitting
# a dummy job. It does not submit jobs if all the resources are used or
# not available (cm_availability set to a low value).
 
NODES_KEEP_ALIVE=4
ADMIN_USER=bzeznik
 
# Locking
LOCK=/var/lock/`basename $0`
### Locking for Debian (using lockfile-progs):
#lockfile-create $LOCK || exit 1
#lockfile-touch $LOCK &
#BADGER="$!"
### Locking for others (using sendmail lockfile)
lockfile -r3 -l 43200 $LOCK
 
# Do nothing if a wake_up dummy job is already queued or running
if [ "`oarstat |grep \"wake_up_.*node\"`" = "" ]
then
 
 # Get the number of Alive nodes with at least 1 free resource
 ALIVE_NODES=`oarnodes  --sql "state = 'Alive' and network_address NOT IN (SELECT distinct(network_address) FROM resources where resource_id IN (SELECT resource_id  FROM assigned_resources WHERE assigned_resource_index = 'CURRENT'))" | grep '^network_address' | sort -u | wc -l`
 
 # Get the number of standby nodes that can be woken up
 # (Absent, with cm_availability more than 1 hour in the future)
 let AVAIL_DATE=`date +%s`+3600
 WAKEABLE_NODES=`oarnodes  --sql "state = 'Absent' and cm_availability > $AVAIL_DATE" |grep "^network_address" |sort -u|wc -l`
 
 if [ $ALIVE_NODES -lt $NODES_KEEP_ALIVE ]
 then
   if [ $WAKEABLE_NODES -gt 0 ]
   then
     if [ $NODES_KEEP_ALIVE -gt $WAKEABLE_NODES ]
     then
       NODES_KEEP_ALIVE=$WAKEABLE_NODES
     fi
     su - $ADMIN_USER -c "oarsub -n wake_up_${NODES_KEEP_ALIVE}nodes -l /nodes=${NODES_KEEP_ALIVE}/core=1,walltime=00:00:10 'sleep 1'"
   fi
 fi
fi
 
### Unlocking for Debian:
#kill "${BADGER}"
#lockfile-remove $LOCK
### Unlocking for others:
rm -f $LOCK

This script is responsible for waking up (powering on) nodes when there are not enough free Alive nodes. The trick it uses is to submit a dummy job, which forces OAR to wake up some nodes. It's intended to be run periodically from the crontab, for example with an /etc/cron.d/nodes_keepalive file like this:

 */5 * * * *     root    /usr/local/sbin/nodes_keepalive.sh
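
To check that the mechanism actually fires, look for the dummy jobs in the queue; they are named wake_up_<N>nodes by the oarsub call in the script:

 oarstat | grep wake_up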