OAR Installation

In this section we describe the installation and administration tasks for the OAR job scheduler on the frontend node (server) and on the compute nodes (clients).

OAR Server Installation

1) Configure the Repository

curl http://oar-ftp.imag.fr/oar/oarmaster.asc | apt-key add -
echo "deb http://oar-ftp.imag.fr/oar/2.5/debian squeeze main" > /etc/apt/sources.list.d/oar.list

2) Install required software

apt-get install mysql-server mysql-client libdbd-mysql-perl libdbi-perl libsort-versions-perl
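
Before continuing, you can check that the MySQL server is up (the init script name may differ on your release):

service mysql status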


3) Install OAR packages

apt-get install oar-server-mysql oar-user oar-server oar-user-mysql oar-web-status oar-admin oar-node


4) OAR Configuration. Edit the file /etc/oar/oar.conf and set all the required values. Example:

File: /etc/oar/oar.conf
DB_TYPE="mysql"
DB_HOSTNAME="localhost"
DB_PORT="3306"
DB_BASE_NAME="oardb"
DB_BASE_LOGIN="oar"
DB_BASE_PASSWD="xxxx"
SERVER_HOSTNAME="localhost"
SERVER_PORT="6666"
OARSUB_DEFAULT_RESOURCES="/nodes=1"
OARSUB_NODES_RESOURCES="network_address"
OARSUB_FORCE_JOB_KEY="no"
LOG_LEVEL="3"
LOG_CATEGORIES="all"
OAREXEC_DEBUG_MODE="1"
OAR_RUNTIME_DIRECTORY="/var/lib/oar"
LOG_FILE="/var/log/oar.log"
DEPLOY_HOSTNAME="127.0.0.1"
COSYSTEM_HOSTNAME="127.0.0.1"
DETACH_JOB_FROM_SERVER="1"
OPENSSH_CMD="/usr/bin/ssh -p 6667"
FINAUD_FREQUENCY="300"
PINGCHECKER_SENTINELLE_SCRIPT_COMMAND="/usr/lib/oar/sentinelle.pl -t 5 -w 20"
PROLOGUE_EXEC_FILE="/etc/oar/prologue"
EPILOGUE_EXEC_FILE="/etc/oar/epilogue"
SCHEDULER_TIMEOUT="10"
SCHEDULER_NB_PROCESSES=4
SCHEDULER_JOB_SECURITY_TIME="60"
SCHEDULER_GANTT_HOLE_MINIMUM_TIME="300"
SCHEDULER_RESOURCE_ORDER="scheduler_priority ASC, suspended_jobs ASC, network_address DESC, resource_id ASC"
SCHEDULER_PRIORITY_HIERARCHY_ORDER="network_address/resource_id"
SCHEDULER_AVAILABLE_SUSPENDED_RESOURCE_TYPE="default"
SCHEDULER_FAIRSHARING_MAX_JOB_PER_USER=30
ENERGY_SAVING_INTERNAL="no"
JOB_RESOURCE_MANAGER_PROPERTY_DB_FIELD="cpuset"
CPUSET_PATH="/oar"
OARSH_OPENSSH_DEFAULT_OPTIONS="-oProxyCommand=none -oPermitLocalCommand=no"

5) Initialize the OAR database.

oar-database --create --db-admin-user root --db-admin-pass 'XXXXXX'
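
To verify that the database was created, you can list the tables OAR generated (the database name and login come from the configuration above), and then restart the OAR server so it reconnects with the new settings:

mysql -u oar -p oardb -e 'SHOW TABLES;'

/etc/init.d/oar-server restart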


6) Add the resources (compute nodes) to the database.


Edit the file /tmp/nodes and add the names of the compute nodes, one per line:

File: /tmp/nodes
guane01
guane02
.
.
guane09
guane10
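
If your nodes follow a regular naming scheme, the file can also be generated with a short shell loop (a sketch assuming hostnames guane01 through guane10):

# Emit zero-padded hostnames guane01..guane10, one per line
for i in $(seq -w 1 10); do echo "guane$i"; done > /tmp/nodes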

Then, execute the following command:

oar_resources_init /tmp/nodes


This will generate a file (/tmp/oar_resources_init.cmd) with the description of the resources:

File: /tmp/oar_resources_init.cmd
oarproperty -a cpu
oarproperty -a core
oarproperty -c -a host
oarproperty -c -a cpuset
oarproperty -a mem
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=1 -p cpuset=0 -p mem=103
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=2 -p cpuset=10 -p mem=103
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=3 -p cpuset=12 -p mem=103

To add the resources, execute the following command:

source /tmp/oar_resources_init.cmd


If you have GPU resources, create a file (/tmp/oar_gpu_resources_init.cmd) with the following content, changing the parameters to fit the type of GPU resources you have:

File: /tmp/oar_gpu_resources_init.cmd
oarproperty -c -a gpu
oarproperty -c -a gputype
oarproperty -a gpunum
oarnodesetting --sql "core=1" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=2" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=3" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=4" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=5" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=6" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=7" -p gpu=YES -p gpunum=3 -p gputype=M2075
oarnodesetting --sql "core=8" -p gpu=YES -p gpunum=3 -p gputype=M2075
oarnodesetting --sql "core=9" -p gpu=YES -p gpunum=3 -p gputype=M2075
NOTE: In our guane cluster every node has 8 GPUs and 24 cores, so we use 3 CPU cores to manage each GPU. Modify these lines to match your own hardware, for example with the loop sketched below.
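
Rather than writing the 24 mapping lines by hand, a short loop can generate them (a sketch for our 24-core, 8-GPU nodes; adjust the cores-per-GPU ratio and the gputype for your hardware):

# Map 3 consecutive cores to each GPU: cores 1-3 -> gpunum 1, cores 4-6 -> 2, ...
for core in $(seq 1 24); do
  gpunum=$(( (core - 1) / 3 + 1 ))
  echo "oarnodesetting --sql \"core=$core\" -p gpu=YES -p gpunum=$gpunum -p gputype=M2075"
done > /tmp/oar_gpu_resources_init.cmd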

Then, execute the following command:

source /tmp/oar_gpu_resources_init.cmd


Finally, you can check the list of assigned resources with the following command:

oarnodes | less
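
If you only want to see the GPU resources, oarnodes also accepts an SQL filter on the properties defined above (assuming the --sql option of OAR 2.5):

oarnodes --sql "gpu='YES'"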


7) To see the resources in a graphical way, configure the application called Monika. On the frontend, edit the file /etc/oar/monika.conf and set the proper variables and parameters. Example:

File: /etc/oar/monika.conf
css_path = /monika.css
clustername = GridUIS-2
hostname = localhost
dbport = 3306
dbtype = mysql
dbname = oardb
username = oar
password = xxxxxxx
nodes_synonym = network_address
summary_display = default:nodes_synonym,resource_id
nodes_per_line = 1
max_cores_per_line = 8
nodename_regex = (\d+)
nodename_regex_display = (.*)
set_color Down = "red"
set_color Free = "#ffffff"
set_color Absent = "#c22200"
set_color StandBy = "cyan"
set_color Suspected = "#ff7b7b"
color_pool = "#9999ff"
color_pool = "#00cccc"
color_pool = "pink"
color_pool = "yellow"
color_pool = "orange"
color_pool = "#ff22ff"
color_pool = "#33cc00"
color_pool = "#cc66cc"
color_pool = "#99ff99"
color_pool = "#995522"
color_pool = "orange"
color_pool = "#999999"
hidden_property = network_address
hidden_property = expiry_date
hidden_property = desktop_computing
hidden_property = cpu
hidden_property = cpuset
hidden_property = available_upto
hidden_property = last_available_upto
hidden_property = core
hidden_property = finaud_decision
hidden_property = last_job_date
hidden_property = resource_id
hidden_property = state_num
hidden_property = suspended_jobs
hidden_property = next_state
hidden_property = next_finaud_decision
hidden_property = deploy
hidden_property = host
hidden_property = ip
hidden_property = hostname
hidden_property = scheduler_priority
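
Monika is the CGI installed by the oar-web-status package and is served by the frontend's web server. A quick local check might look like this (the URL path depends on your Apache configuration and is only an assumption here):

curl -s http://localhost/monika | head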



OAR Client Installation

1) Configure the repository

wget -q http://oar-ftp.imag.fr/oar/oarmaster.asc -O- | apt-key add -


echo "deb http://oar-ftp.imag.fr/oar/2.5/debian wheezy main" > /etc/apt/sources.list.d/oar.list


2) Install the required software

apt-get install perl perl-base


3) Install the OAR packages

apt-get install oar-node
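
The oar-node package runs a dedicated SSH daemon for OAR communications, listening on port 6667 by default, which matches the OPENSSH_CMD setting in the server configuration above. You can restart it with the usual init script:

/etc/init.d/oar-node restart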


4) Copy the directory /var/lib/oar/.ssh from the frontend to all the nodes. On each compute node, run:

scp -rp guane:/var/lib/oar/.ssh /var/lib/oar/
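
When the copy is done as root, the files end up owned by root; make sure they belong to the oar user again, since OAR's passwordless access between nodes relies on these keys:

chown -R oar:oar /var/lib/oar/.ssh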