OAR Installation

In this section we describe all the administration tasks for the OAR job scheduler on the frontend node (server) and on the compute nodes (clients).

OAR Server Installation

1) Configure the Repository

curl http://oar-ftp.imag.fr/oar/oarmaster.asc | sudo apt-key add -
echo "deb http://oar-ftp.imag.fr/oar/2.5/debian squeeze main" > /etc/apt/sources.list.d/oar.list

2) Install required software

apt-get install mysql-server mysql-client libdbd-mysql-perl libdbi-perl libsort-versions-perl


3) Install OAR packages

apt-get install oar-server-mysql oar-user oar-server oar-user-mysql oar-web-status oar-admin oar-node


4) Configure OAR. Edit the file /etc/oar/oar.conf and set all the required values. Example:

File: /etc/oar/oar.conf
DB_TYPE="mysql"
DB_HOSTNAME="localhost"
DB_PORT="3306"
DB_BASE_NAME="oardb"
DB_BASE_LOGIN="oar"
DB_BASE_PASSWD="xxxx"
SERVER_HOSTNAME="localhost"
SERVER_PORT="6666"
OARSUB_DEFAULT_RESOURCES="/nodes=1"
OARSUB_NODES_RESOURCES="network_address"
OARSUB_FORCE_JOB_KEY="no"
LOG_LEVEL="3"
LOG_CATEGORIES="all"
OAREXEC_DEBUG_MODE="1"
OAR_RUNTIME_DIRECTORY="/var/lib/oar"
LOG_FILE="/var/log/oar.log"
DEPLOY_HOSTNAME="127.0.0.1"
COSYSTEM_HOSTNAME="127.0.0.1"
DETACH_JOB_FROM_SERVER="1"
OPENSSH_CMD="/usr/bin/ssh -p 6667"
FINAUD_FREQUENCY="300"
PINGCHECKER_SENTINELLE_SCRIPT_COMMAND="/usr/lib/oar/sentinelle.pl -t 5 -w 20"
PROLOGUE_EXEC_FILE="/etc/oar/prologue"
EPILOGUE_EXEC_FILE="/etc/oar/epilogue"
SCHEDULER_TIMEOUT="10"
SCHEDULER_NB_PROCESSES=4
SCHEDULER_JOB_SECURITY_TIME="60"
SCHEDULER_GANTT_HOLE_MINIMUM_TIME="300"
SCHEDULER_RESOURCE_ORDER="scheduler_priority ASC, suspended_jobs ASC, network_address DESC, resource_id ASC"
SCHEDULER_PRIORITY_HIERARCHY_ORDER="network_address/resource_id"
SCHEDULER_AVAILABLE_SUSPENDED_RESOURCE_TYPE="default"
SCHEDULER_FAIRSHARING_MAX_JOB_PER_USER=30
ENERGY_SAVING_INTERNAL="no"
JOB_RESOURCE_MANAGER_PROPERTY_DB_FIELD="cpuset"
CPUSET_PATH="/oar"
OARSH_OPENSSH_DEFAULT_OPTIONS="-oProxyCommand=none -oPermitLocalCommand=no"
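
After editing the configuration, restart the OAR server so the new values take effect (a minimal sketch, assuming the oar-server init script provided by the Debian package):

service oar-server restart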

5) Initialize the OAR database.

oar-database --create --db-admin-user root --db-admin-pass 'XXXXXX'
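
Optionally, you can verify that the database was created by connecting with the credentials defined in /etc/oar/oar.conf (a quick check, assuming the example values shown above):

mysql -u oar -p oardb -e "SHOW TABLES;"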


6) Add the resources (compute nodes) to the database.


Edit the file /tmp/nodes and add the names of the compute nodes, one per line.

File: /tmp/nodes
guane01
guane02
.
.
guane09
guane10
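
If the nodes follow a regular naming scheme, this file can also be generated with a small shell loop (a sketch, assuming ten nodes named guane01 to guane10 as in the example above):

for i in $(seq -w 1 10); do echo "guane$i"; done > /tmp/nodes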

Then, execute the following command:

oar_resources_init /tmp/nodes


This will generate a file (/tmp/oar_resources_init.cmd) with the description of the resources.

File: /tmp/oar_resources_init.cmd
oarproperty -a cpu
oarproperty -a core
oarproperty -c -a host
oarproperty -c -a cpuset
oarproperty -a mem
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=1 -p cpuset=0 -p mem=103
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=2 -p cpuset=10 -p mem=103
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=3 -p cpuset=12 -p mem=103

To add the resources, execute the following command:

source /tmp/oar_resources_init.cmd


If the nodes have GPU resources, create a file (/tmp/oar_gpu_resources_init.cmd) with the following content. You must change some parameters to match the type and number of GPU resources you have.

File: /tmp/oar_gpu_resources_init.cmd
oarproperty -c -a gpu
oarproperty -c -a gputype
oarproperty -a gpunum
oarnodesetting --sql "core=1" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=2" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=3" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=4" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=5" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=6" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=7" -p gpu=YES -p gpunum=3 -p gputype=M2075
oarnodesetting --sql "core=8" -p gpu=YES -p gpunum=3 -p gputype=M2075
oarnodesetting --sql "core=9" -p gpu=YES -p gpunum=3 -p gputype=M2075
NOTE: In our cluster Guane we have 8 GPUs per node and each node has 24 cores, so we use 3 CPU cores to manage each GPU. Modify these lines to match the number of GPUs and cores in your own nodes, for example with the loop sketched below.
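
The mapping can be generated with a small shell loop instead of writing every line by hand (a sketch, assuming 24 cores and 8 GPUs per node, that is 3 cores per GPU, and Tesla M2075 cards as in our example; adjust the variables to your hardware):

CORES_PER_GPU=3
TOTAL_CORES=24
{
  # Property definitions, as in the example file above
  echo 'oarproperty -c -a gpu'
  echo 'oarproperty -c -a gputype'
  echo 'oarproperty -a gpunum'
  # One oarnodesetting line per core, grouping CORES_PER_GPU consecutive cores on each GPU
  for core in $(seq 1 $TOTAL_CORES); do
    gpunum=$(( (core - 1) / CORES_PER_GPU + 1 ))
    echo "oarnodesetting --sql \"core=$core\" -p gpu=YES -p gpunum=$gpunum -p gputype=M2075"
  done
} > /tmp/oar_gpu_resources_init.cmd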

Then, execute the following command:

source /tmp/oar_gpu_resources_init.cmd


Finally, you can check the list of assigned resources with the following command:

oarnodes | less


7) To see the resources in a graphical way, configure the application called Monika. On the frontend, edit the file /etc/oar/monika.conf and set the proper variables and parameters. Example:

File: /etc/oar/monika.conf
css_path = /monika.css
clustername = GridUIS-2
hostname = localhost
dbport = 3306
dbtype = mysql
dbname = oardb
username = oar
password = xxxxxxx
nodes_synonym = network_address
summary_display = default:nodes_synonym,resource_id
nodes_per_line = 1
max_cores_per_line = 8
nodename_regex = (\d+)
nodename_regex_display = (.*)
set_color Down = "red"
set_color Free = "#ffffff"
set_color Absent = "#c22200"
set_color StandBy = "cyan"
set_color Suspected = "#ff7b7b"
color_pool = "#9999ff"
color_pool = "#00cccc"
color_pool = "pink"
color_pool = "yellow"
color_pool = "orange"
color_pool = "#ff22ff"
color_pool = "#33cc00"
color_pool = "#cc66cc"
color_pool = "#99ff99"
color_pool = "#995522"
color_pool = "orange"
color_pool = "#999999"
hidden_property = network_address
hidden_property = expiry_date
hidden_property = desktop_computing
hidden_property = cpu
hidden_property = cpuset
hidden_property = available_upto
hidden_property = last_available_upto
hidden_property = core
hidden_property = finaud_decision
hidden_property = last_job_date
hidden_property = resource_id
hidden_property = state_num
hidden_property = suspended_jobs
hidden_property = next_state
hidden_property = next_finaud_decision
hidden_property = deploy
hidden_property = host
hidden_property = ip
hidden_property = hostname
hidden_property = scheduler_priority
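
Monika is served through the web interface installed by the oar-web-status package. After editing the configuration, reload the web server and open the Monika page in a browser (a sketch, assuming Apache and the default /monika location; the exact URL may differ on your installation):

service apache2 reload

Then browse to http://frontend/monika, replacing frontend with the name of your frontend node.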



OAR Client Installation

1) Configure the repository

wget -q http://oar-ftp.imag.fr/oar/oarmaster.asc -O- | apt-key add -


echo "deb http://oar-ftp.imag.fr/oar/2.5/debian wheezy main" > /etc/apt/sources.list.d/oar.list


2) Install the required software

apt-get install perl perl-base


3) Install the OAR packages

apt-get install oar-node


4) Copy the directory /var/lib/oar/.ssh from the frontend to all the nodes.

scp -rp guane:/var/lib/oar/.ssh /var/lib/oar/
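
Once the SSH keys are in place, restart the oar-node service on the compute node and check from the frontend that the node is listed among the resources (a sketch, assuming the init script installed by the oar-node Debian package):

service oar-node restart

oarnodes -s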