====== Configuration ======
//In this section, you'll find advanced configuration tips//
===== Using oaradmin to initiate the resources =====
You can install oaradmin either by installing the **oar-admin** package or by running **make tools-install && make tools-setup** from the sources.
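For example, on a Debian-like system (assuming the **oar-admin** package is available in your configured repositories):
<code>
 # install the packaged admin tools
 apt-get install oar-admin

 # or build and set them up from the OAR source tree
 make tools-install && make tools-setup
</code>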

Example for a cluster with 10 nodes, each with 2 hexa-core processors; the nodes are named james1, james2, ..., james10:
<code>
 oaradmin resources -a "/node=james{10}/cpu={2}/core={6}"
</code>

Example for a hybrid cluster (72 Itanium SMP cores and 28 Xeon cores):
<code>
 oaradmin resources -a "/node=healthphy/pnode={18}/cpu={2}/core={2}" -p cputype=itanium2
 oaradmin resources -a "/node=healthphy-xeon{7}/cpu={2}/core={2}" -p cputype=xeon
</code>

oaradmin only prints a set of "oarnodesetting" commands, which you can then pipe into bash once you have checked that they look right:
<code>
 oaradmin resources -a "/node=james{10}/cpu={2}/core={6}" | bash
</code>
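If you prefer to review the generated commands before executing them, you can also save them to a file first (the file name here is just an example):
<code>
 oaradmin resources -a "/node=james{10}/cpu={2}/core={6}" > /tmp/add_resources.sh
 less /tmp/add_resources.sh    # check the generated oarnodesetting commands
 bash /tmp/add_resources.sh    # then run them
</code>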

**Note:** oaradmin checks in the OAR database that you have the necessary properties (for example "cpu" or "core", which are not defined by default). If it fails, make sure you created the properties before running oaradmin. For instance:
<code>
  oarproperty -a cpu
  oarproperty -a core
</code>

===== Priority to the nodes with the lowest workload =====
This tip is useful for clusters made of big nodes, such as NUMA hosts with many CPUs but only a few nodes. When the cluster has a lot of free resources, users often wonder why their jobs are always sent to the first node while the others are completely idle. With this simple trick, new jobs are preferably sent to the nodes with the lowest 15-minute load average.
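The general idea is to publish each node's load as a resource property and to make the scheduler sort resources by it. Below is a minimal sketch, not the original recipe: the property name "wload", the use of a periodic cron job, and the exact SCHEDULER_RESOURCE_ORDER value are assumptions to adapt to your setup.
<code>
 # one-time setup: create an integer property holding 100 x the 15-minute load average
 oarproperty -a wload

 # periodic update (e.g. from a cron job on the OAR server), shown here for node james1;
 # repeat for every node of the cluster
 LOAD=$(ssh james1 "awk '{printf \"%d\", \$3 * 100}' /proc/loadavg")
 oarnodesetting -h james1 -p "wload=$LOAD"
</code>
Then make the scheduler prefer the least loaded nodes by putting the property first in the resource ordering in /etc/oar/oar.conf (merge this with the ordering fields already present in your configuration):
<code>
 SCHEDULER_RESOURCE_ORDER="wload ASC, network_address ASC, resource_id ASC"
</code>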
  
Examples (differences from the original job_resource_manager script are set in bold):
  * [[wiki:old:job_resource_manager_2_memory_banks.pl]]
  * [[wiki:old:job_resource_manager_altix_350.pl]]
  
===== Use fake-numa to add memory management into cpusets =====