*/10 * * * * root /
</code>

====== Useful commands and administration tasks ======
//Here you'll find useful commands, sometimes a bit tricky, to use in your scripts or administration tasks.//

===== List suspected nodes without running jobs =====
You may need this list of nodes if you want to reboot them automatically, because you don't know why they have been suspected and you think a reboot is a simple way to clean things up.
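One way to build this list is to query the OAR database with ''oarnodes --sql''. The query below is only a sketch, assuming the standard ''resources'' and ''assigned_resources'' tables; check it against your OAR version before scripting a reboot around it:
<code bash>
# Suspected nodes that have no resource currently assigned to a running job
# (assumed query, adapt to your schema and OAR version)
oarnodes --sql "state = 'Suspected' AND network_address NOT IN \
  (SELECT DISTINCT(network_address) FROM resources WHERE resource_id IN \
    (SELECT resource_id FROM assigned_resources WHERE assigned_resource_index = 'CURRENT'))"
</code>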

===== List alive nodes without running jobs =====
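The same kind of query can be used, simply targeting the //Alive// state instead; again only a sketch to adapt:
<code bash>
# Alive nodes with no resource currently assigned to a running job
# (assumed query, adapt to your schema and OAR version)
oarnodes --sql "state = 'Alive' AND network_address NOT IN \
  (SELECT DISTINCT(network_address) FROM resources WHERE resource_id IN \
    (SELECT resource_id FROM assigned_resources WHERE assigned_resource_index = 'CURRENT'))"
</code>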

===== Oarstat display without best-effort jobs =====
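''oarstat'' accepts an SQL restriction on the ''jobs'' table, so one possible way is shown below (a sketch, assuming the ''queue_name'' column and a queue named ''besteffort''; you may also want to filter on job states):
<code bash>
# Show jobs, hiding those submitted to the besteffort queue
# (assumed filter, adapt the queue name to your configuration)
oarstat --sql "queue_name != 'besteffort'"
</code>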

===== Setting some nodes in maintenance mode only when they are free =====

You may need to plan some maintenance operations on some particular nodes (for example to add some memory or upgrade the BIOS), but you don't want to interrupt currently running or already scheduled user jobs. To do so, you can simply submit a job requesting those nodes: it will only start once they are free, and can then put them into maintenance mode.
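A minimal sketch of such a submission follows; the job type, the resource request and the ''oarnodesetting'' call are assumptions, so adapt them to your OAR version and to who is allowed to change resource states on your cluster:
<code bash>
# Hypothetical example: reserve 2 free nodes with a job whose command runs on
# the server side (cosystem type assumed here, so that oarnodesetting is
# available), then flag those nodes for maintenance.
oarsub -t cosystem -l nodes=2 \
  "for n in \$(uniq \$OAR_NODEFILE); do oarnodesetting --maintenance on -h \$n; done"
</code>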

The example above will disable 2 free nodes, but you may want to add a //-p// option to specify the nodes you want to disable, for example ''-p "network_address in ('node1','node2')"'' (adapt the node names).

**Note:** you can't simply do this with a direct ''oarnodesetting'' call, because that would take the nodes out of service immediately, without waiting for the jobs currently running on them to finish.

===== Optimizing and re-initializing the database with Postgres =====
Sometimes the database contains so many jobs that you need to optimize it. Normally, you should already have a **vacuumdb** running daily from cron. You can run **vacuumdb -a -f -z ; reindexdb oar** manually, but don't forget to stop OAR before, and be aware that it may take some time. Even then, the database may still be very big, which can be a problem for backups, or the nightly vacuum may take too long. A more radical solution is to start again with a fresh database, but keep the old one so that you can still connect to it to browse the jobs history. You can do this once a year for example, and then you only have to back up the current database. Here is a way to do this (a command sketch follows the list):

  * First of all, make a backup of your database! With postgres, it is as easy as a single command (see the sketch after this list). It will create an exact copy of the "oar" database that you can keep around for consulting the jobs history.
  * You should plan a maintenance window and be sure there are no running or waiting jobs before going on.
  * Make a dump of the configuration data you want to carry over into the new database (for example your resources and admission rules).
  * Stop the oar server, drop the oar database and re-create it.
  * Finally, restore the dump made above into the freshly created database.
  * And restart the server.
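Here is what the whole procedure can look like with postgres. All names below (archive database, dump file, table list, service commands) are assumptions to adapt to your installation, and the way the empty database is initialized depends on your OAR version:
<code bash>
# Assumed names, paths and table list throughout; adapt before use.

# 1. Backup: an exact copy of the "oar" database, kept for the jobs history
#    (a template copy only works while nothing is connected to "oar")
createdb -T oar oar_archive

# 2. Dump the configuration you want to carry over (assumed table list)
pg_dump -t resources -t queues -t admission_rules oar > oar_config.sql

# 3. Stop OAR, drop the database and re-create it empty
/etc/init.d/oar-server stop
dropdb oar
createdb -O oar oar   # then initialize the OAR schema as documented for your version

# 4. Restore the configuration dump and restart the server
psql oar < oar_config.sql
/etc/init.d/oar-server start
</code>
You can later open the archived copy with ''psql oar_archive'' whenever you need to look at old jobs.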

====== Green computing ======
//In this section, you'll find tips for optimizing the energy consumption of your clusters.//
===== Activating the dynamic on/off of nodes but keeping a few nodes always ready =====
**Warning:** the scripts below come from one particular cluster setup; review and adapt them before using them on your own cluster.

First of all, you have to set up the ecological feature (automatic shutdown and wake-up of nodes) as explained in the OAR FAQ.

**Note:** if you have an ordinary cluster with nodes that are always available, you may set the cm_availability property to 2147483646 (i.e. "infinite" minus 1).

**Note:** once this feature has been activated, the **absent** status may not always really mean absent, but rather **standby**, as OAR may automatically power the node back on. To put a node into a real absent status, you have to set its cm_availability property to **0**.
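Both cases come down to changing the cm_availability property with ''oarnodesetting''. A small sketch, assuming the ''-h'', ''-p'' and ''-s'' options of your OAR version behave as shown (the node name is just an example):
<code bash>
# Declare node42 as always available to the scheduler ("infinite" availability)
oarnodesetting -h node42 -p cm_availability=2147483646

# Put node42 into a real Absent state, with no automatic wake-up
oarnodesetting -h node42 -p cm_availability=0
oarnodesetting -h node42 -s Absent
</code>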

This tip supposes that you have set up your nodes to automatically switch to the Alive state when they boot and to the Absent state when they shut down; you may refer to the FAQ for this as well.
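A common way to do that is a small boot/shutdown hook on each node that asks the OAR server to change the node's state. The snippet below is only an illustration of the idea; the server name, the ssh setup and the right to run ''oarnodesetting'' are assumptions:
<code bash>
#!/bin/bash
# Hypothetical init hook running on a compute node at boot and shutdown.
# Assumes passwordless ssh from the node to the "oar-server" host.
case "$1" in
  start) ssh oar-server "oarnodesetting -s Alive  -h $(hostname -f)" ;;
  stop)  ssh oar-server "oarnodesetting -s Absent -h $(hostname -f)" ;;
esac
</code>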

Here we provide 3 scripts that you may customize; they make your ecological configuration a bit smarter than the default, as they take care of keeping a few nodes (4 in this example) powered on and ready for incoming jobs:

== wake_up_nodes.sh ==
- | <code bash> | ||
- | #!/bin/bash | ||
- | |||
- | IPMI_HOST=" | ||
- | POWER_ON_CMD=" | ||
- | |||
- | NODES=`cat` | ||
- | |||
- | for NODE in $NODES | ||
- | do | ||
- | ssh $IPMI_HOST $POWER_ON_CMD $NODE | ||
- | done | ||
- | </ | ||

Very simple script containing the command that powers on your nodes. In this example, suitable for an SGI Altix Ice, we do a **cpower** from an **admin** host; you'll probably have to customize this. The script is then referenced by the SCHEDULER_NODE_MANAGER_WAKE_UP_CMD option of the oar.conf file, like this (adapt the path to wherever you installed the script):
<code>
SCHEDULER_NODE_MANAGER_WAKE_UP_CMD="/usr/local/sbin/wake_up_nodes.sh"
</code>

== set_standby_nodes.sh ==
<code bash>
#!/bin/bash
set -e

# This script is intended to be used from the SCHEDULER_NODE_MANAGER_SLEEP_CMD
# variable of the oar.conf file.
# It halts the nodes given on stdin, but refuses to stop nodes if this
# would result in less than $NODES_KEEP_ALIVE alive nodes, because you may
# want to have some nodes ready for treating some jobs immediately.

NODES_KEEP_ALIVE=4

NODES=`cat`

# Number of nodes currently in the Alive state
# (assumed query, adapt it to your OAR version).
ALIVE_NODES=`oarnodes --sql "state = 'Alive'" | grep "network_address" | awk '{print $NF}' | sort -u | wc -l`

NODES_TO_SHUTDOWN=""

for NODE in $NODES
do
  if [ $ALIVE_NODES -gt $NODES_KEEP_ALIVE ]
  then
    NODES_TO_SHUTDOWN="$NODE\n$NODES_TO_SHUTDOWN"
    let ALIVE_NODES=ALIVE_NODES-1
  else
    echo "Not halting $NODE because I need to keep $NODES_KEEP_ALIVE alive nodes"
  fi
done

if [ "$NODES_TO_SHUTDOWN" != "" ]
then
  # Send the halt command to the selected nodes. The original setup used
  # sentinelle, as in OAR's default configuration; the exact invocation
  # below is an assumption, adapt the path and options.
  echo -e "$NODES_TO_SHUTDOWN" | /usr/lib/oar/sentinelle.pl -f - -p 'halt'
fi
</code>

This is the script for shutting down nodes. It uses **sentinelle** to send the **halt** command to the nodes, as suggested by the default configuration, but you can replace that with whatever fits your environment. The script is referenced by the SCHEDULER_NODE_MANAGER_SLEEP_CMD option of the oar.conf file, like this (again, adapt the path):

<code>
SCHEDULER_NODE_MANAGER_SLEEP_CMD="/usr/local/sbin/set_standby_nodes.sh"
</code>

== nodes_keepalive.sh ==
<code bash>
#!/bin/bash
set -e

# This script is intended to be run every 5 minutes from the crontab.
# It ensures that $NODES_KEEP_ALIVE nodes with at least one free resource
# are always alive and not shut down. It wakes up the nodes by submitting
# a dummy job. It does not submit jobs if all the resources are used or
# not available (cm_availability set to a low value).

NODES_KEEP_ALIVE=4
ADMIN_USER=bzeznik

# Locking
LOCK=/var/lock/nodes_keepalive.lock   # example path, adapt it
### Locking for Debian (using lockfile-progs):
#lockfile-create $LOCK
#lockfile-touch $LOCK &
#LOCKTOUCHPID="$!"
### Locking for others (using the lockfile utility):
lockfile -r3 -l 43200 $LOCK

if [ "$?" = "0" ]   # proceed only if the lock was acquired (assumed guard)
then

  # Get the number of Alive nodes with at least 1 free resource
  # (assumed query, adapt to your OAR version)
  ALIVE_NODES=`oarnodes --sql "state = 'Alive' AND resource_id NOT IN (SELECT resource_id FROM assigned_resources WHERE assigned_resource_index = 'CURRENT')" | grep "network_address" | awk '{print $NF}' | sort -u | wc -l`

  # Get the number of nodes in standby, i.e. Absent but with a cm_availability
  # far enough in the future to allow an automatic wake-up (assumed query)
  let AVAIL_DATE=`date +%s`+3600
  WAKEABLE_NODES=`oarnodes --sql "state = 'Absent' AND cm_availability > $AVAIL_DATE" | grep "network_address" | awk '{print $NF}' | sort -u | wc -l`

  if [ $ALIVE_NODES -lt $NODES_KEEP_ALIVE ]
  then
    if [ $WAKEABLE_NODES -gt 0 ]
    then
      if [ $NODES_KEEP_ALIVE -gt $WAKEABLE_NODES ]
      then
        # Not enough standby nodes: wake up as many as we can
        NODES_KEEP_ALIVE=$WAKEABLE_NODES
      fi
      # Dummy job whose only purpose is to trigger the wake-up of the nodes
      # (resource request and walltime are examples, adapt them)
      su - $ADMIN_USER -c "oarsub -n wake_up_nodes -l /nodes=$NODES_KEEP_ALIVE/core=1,walltime=00:05:00 'sleep 60'"
    fi
  fi
fi

### Unlocking for Debian:
#kill "$LOCKTOUCHPID"
#lockfile-remove $LOCK
### Unlocking for others:
rm -f $LOCK
</code>

This script is responsible for waking up (powering on) some nodes when there are not enough alive ones, by submitting a dummy job as described above. It is meant to be run periodically from the crontab of the OAR server, for example (adapt the path):

<code>
*/5 * * * *  root  /usr/local/sbin/nodes_keepalive.sh
</code>

====== Use cases ======