__NOTOC__
<div class="thumbnail img-thumbnail">http://wiki.sc3.uis.edu.co/images/a/a8/Logo_sc33.png</div>

<p><div class="btn btn-primary"><i class="fa fa-long-arrow-left"></i> [[Job Scheduler OAR]]</div></p>
 
<div class="column clearfix">
 
<div class="column clearfix">
      <div class="col-md-14">
+
    <div class="col-md-14">
         <div class="well well-success">
+
         <div class="well well-midnight">
          <h4>'''Super Computación y Cálculo Científico'''</h4>
+
            <h4>OAR Installation</h4>
          <p class="bs">In this section we describe all the administration tasks for the job scheduler OAR</p>
+
            <p class="bs">In this section we describe all the administration tasks for the job scheduler OAR in the frontend node (Server) and in the compute nodes (Client)</p>
 
         </div>
 
         </div>
      </div>
 
 
     </div>
 
     </div>
 +
</div>
 +
 +
<div class="col-md-14">
 +
    <div class="panel panel-darker-white-border">
 +
        <div class="panel-heading">
 +
            <h3 class="panel-title">OAR Server Installation</h3>
 +
        </div>
 +
        <div class="panel-body">
 +
 +
1) Configure the repository

{{Command|<nowiki>curl http://oar-ftp.imag.fr/oar/oarmaster.asc | sudo apt-key add -</nowiki>}}

{{Command|<nowiki>echo "deb http://oar-ftp.imag.fr/oar/2.5/debian squeeze main" > /etc/apt/sources.list.d/oar.list</nowiki>}}
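
After adding the repository, refresh the package lists so apt can see the OAR packages:

{{Command|<nowiki>apt-get update</nowiki>}}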

2) Install the required software

{{Command|<nowiki>apt-get install mysql-server mysql-client libdbd-mysql-perl libdbi-perl libsort-versions-perl</nowiki>}}

3) Install the OAR packages

{{Command|<nowiki>apt-get install oar-server-mysql oar-user oar-server oar-user-mysql oar-web-status oar-admin oar-node</nowiki>}}

4) Configure OAR

Edit the file /etc/oar/oar.conf and set all the required values. Example:
{{File|/etc/oar/oar.conf|<pre><nowiki>
DB_TYPE="mysql"
DB_HOSTNAME="localhost"
DB_PORT="3306"
DB_BASE_NAME="oardb"
DB_BASE_LOGIN="oar"
DB_BASE_PASSWD="xxxx"
SERVER_HOSTNAME="localhost"
SERVER_PORT="6666"
OARSUB_DEFAULT_RESOURCES="/nodes=1"
OARSUB_NODES_RESOURCES="network_address"
OARSUB_FORCE_JOB_KEY="no"
LOG_LEVEL="3"
LOG_CATEGORIES="all"
OAREXEC_DEBUG_MODE="1"
OAR_RUNTIME_DIRECTORY="/var/lib/oar"
LOG_FILE="/var/log/oar.log"
DEPLOY_HOSTNAME="127.0.0.1"
COSYSTEM_HOSTNAME="127.0.0.1"
DETACH_JOB_FROM_SERVER="1"
OPENSSH_CMD="/usr/bin/ssh -p 6667"
FINAUD_FREQUENCY="300"
PINGCHECKER_SENTINELLE_SCRIPT_COMMAND="/usr/lib/oar/sentinelle.pl -t 5 -w 20"
PROLOGUE_EXEC_FILE="/etc/oar/prologue"
EPILOGUE_EXEC_FILE="/etc/oar/epilogue"
SCHEDULER_TIMEOUT="10"
SCHEDULER_NB_PROCESSES=4
SCHEDULER_JOB_SECURITY_TIME="60"
SCHEDULER_GANTT_HOLE_MINIMUM_TIME="300"
SCHEDULER_RESOURCE_ORDER="scheduler_priority ASC, suspended_jobs ASC, network_address DESC, resource_id ASC"
SCHEDULER_PRIORITY_HIERARCHY_ORDER="network_address/resource_id"
SCHEDULER_AVAILABLE_SUSPENDED_RESOURCE_TYPE="default"
SCHEDULER_FAIRSHARING_MAX_JOB_PER_USER=30
ENERGY_SAVING_INTERNAL="no"
JOB_RESOURCE_MANAGER_PROPERTY_DB_FIELD="cpuset"
CPUSET_PATH="/oar"
OARSH_OPENSSH_DEFAULT_OPTIONS="-oProxyCommand=none -oPermitLocalCommand=no"
</nowiki></pre>}}
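
A quick way to review only the settings that are actually in effect (skipping comments and blank lines):

{{Command|<nowiki>grep -Ev '^(#|$)' /etc/oar/oar.conf</nowiki>}}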

5) Initialize the OAR database

{{Command|<nowiki>oar-database --create --db-admin-user root --db-admin-pass 'XXXXXX'</nowiki>}}
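
Once the database is created, (re)start the server daemon and watch the log file configured in oar.conf for errors. This assumes the standard init script shipped with the Debian oar-server package:

{{Command|<nowiki>service oar-server restart</nowiki>}}

{{Command|<nowiki>tail -f /var/log/oar.log</nowiki>}}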

6) Add the resources (compute nodes) to the database

Edit the file /tmp/nodes and add the names of the compute nodes, one per line.

{{File|/tmp/nodes|<pre><nowiki>
guane01
guane02
.
.
guane09
guane10
</nowiki></pre>}}

Then, execute the following command

{{Command|<nowiki>oar_resources_init /tmp/nodes</nowiki>}}

This will generate a file (/tmp/oar_resources_init.cmd) with the description of the resources.

{{File|/tmp/oar_resources_init.cmd|<pre><nowiki>
oarproperty -a cpu
oarproperty -a core
oarproperty -c -a host
oarproperty -c -a cpuset
oarproperty -a mem
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=1 -p cpuset=0 -p mem=103
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=2 -p cpuset=10 -p mem=103
oarnodesetting -a -h guane09 -p host=guane09 -p cpu=1 -p core=3 -p cpuset=12 -p mem=103
</nowiki></pre>}}

To add the resources, execute the following command

{{Command|<nowiki>source /tmp/oar_resources_init.cmd</nowiki>}}
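
You can verify that the new properties were registered by listing all resource properties currently defined:

{{Command|<nowiki>oarproperty -l</nowiki>}}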

In case of GPU resources, create a file (/tmp/oar_gpu_resources_init.cmd) with the following content. You must change some parameters to fit the type of GPU resources you have.

{{File|/tmp/oar_gpu_resources_init.cmd|<pre><nowiki>
oarproperty -c -a gpu
oarproperty -c -a gputype
oarproperty -a gpunum
oarnodesetting --sql "core=1" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=2" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=3" -p gpu=YES -p gpunum=1 -p gputype=M2075
oarnodesetting --sql "core=4" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=5" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=6" -p gpu=YES -p gpunum=2 -p gputype=M2075
oarnodesetting --sql "core=7" -p gpu=YES -p gpunum=3 -p gputype=M2075
oarnodesetting --sql "core=8" -p gpu=YES -p gpunum=3 -p gputype=M2075
oarnodesetting --sql "core=9" -p gpu=YES -p gpunum=3 -p gputype=M2075
</nowiki></pre>}}

{{Note|In our guane cluster each node has 8 GPUs and 24 cores, so 3 CPU cores are used to manage each GPU, as in the lines above. You have to adapt these lines to your own hardware, for example with the generation script shown below.}}
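
For instance, instead of writing all 24 oarnodesetting lines by hand, a small shell loop can generate them, mapping every block of 3 consecutive cores to the same GPU. This is only a sketch following the pattern above; the script name /tmp/make_gpu_resources.sh is arbitrary, and the core range and gputype must match your own nodes:

{{File|/tmp/make_gpu_resources.sh|<pre><nowiki>
#!/bin/bash
# Emit one oarnodesetting line per core, assigning every block of
# 3 consecutive cores to the same GPU (24 cores / 8 GPUs per node).
for core in $(seq 1 24); do
    gpunum=$(( (core - 1) / 3 + 1 ))
    echo "oarnodesetting --sql \"core=$core\" -p gpu=YES -p gpunum=$gpunum -p gputype=M2075"
done > /tmp/oar_gpu_resources_init.cmd
</nowiki></pre>}}

{{Command|<nowiki>bash /tmp/make_gpu_resources.sh</nowiki>}}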

Then, execute the following command

{{Command|<nowiki>source /tmp/oar_gpu_resources_init.cmd</nowiki>}}

Finally, you can check the list of the assigned resources using the following command

{{Command|<nowiki>oarnodes | less</nowiki>}}

7) To see the resources in a graphical way, configure the application called Monika. On the frontend, edit the file /etc/oar/monika.conf and set the proper variables and parameters. Example:

{{File|/etc/oar/monika.conf|<pre><nowiki>
css_path = /monika.css
clustername = GridUIS-2
hostname = localhost
dbport = 3306
dbtype = mysql
dbname = oardb
username = oar
password = xxxxxxx
nodes_synonym = network_address
summary_display = default:nodes_synonym,resource_id
nodes_per_line = 1
max_cores_per_line = 8
nodename_regex = (\d+)
nodename_regex_display = (.*)
set_color Down = "red"
set_color Free = "#ffffff"
set_color Absent = "#c22200"
set_color StandBy = "cyan"
set_color Suspected = "#ff7b7b"
color_pool = "#9999ff"
color_pool = "#00cccc"
color_pool = "pink"
color_pool = "yellow"
color_pool = "orange"
color_pool = "#ff22ff"
color_pool = "#33cc00"
color_pool = "#cc66cc"
color_pool = "#99ff99"
color_pool = "#995522"
color_pool = "orange"
color_pool = "#999999"
hidden_property = network_address
hidden_property = expiry_date
hidden_property = desktop_computing
hidden_property = cpu
hidden_property = cpuset
hidden_property = available_upto
hidden_property = last_available_upto
hidden_property = core
hidden_property = finaud_decision
hidden_property = last_job_date
hidden_property = resource_id
hidden_property = state_num
hidden_property = suspended_jobs
hidden_property = next_state
hidden_property = next_finaud_decision
hidden_property = deploy
hidden_property = host
hidden_property = ip
hidden_property = hostname
hidden_property = scheduler_priority
</nowiki></pre>}}
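
Monika is served as a CGI script by the frontend's web server (it is installed by the oar-web-status package). As a quick sanity check, assuming it is served at the usual CGI path (the exact URL depends on your web server configuration):

{{Command|<nowiki>curl http://localhost/cgi-bin/monika.cgi | head</nowiki>}}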

        </div>
    </div>
</div>

<div class="col-md-14">
    <div class="panel panel-dark-white-border">
        <div class="panel-heading">
            <h3 class="panel-title">OAR Client Installation</h3>
        </div>
        <div class="panel-body">

1) Configure the repository

{{Command|<nowiki>wget -q http://oar-ftp.imag.fr/oar/oarmaster.asc -O- | apt-key add -</nowiki>}}

{{Command|<nowiki>echo "deb http://oar-ftp.imag.fr/oar/2.5/debian wheezy main" > /etc/apt/sources.list.d/oar.list</nowiki>}}

{{Note|The distribution codename (wheezy here, squeeze in the server example above) must match the Debian release installed on the machine.}}
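
As on the server, refresh the package lists after adding the repository:

{{Command|<nowiki>apt-get update</nowiki>}}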

2) Install the required software

{{Command|<nowiki>apt-get install perl perl-base</nowiki>}}

3) Install the OAR packages

{{Command|<nowiki>apt-get install oar-node</nowiki>}}

4) Copy the directory /var/lib/oar/.ssh from the frontend to all the nodes. This directory holds the SSH keys that OAR uses for communication between the server and the nodes.
    <div class="column clearfix">
+
{{Command|<nowiki>scp -rp guane:/var/lib/oar/.ssh /var/lib/oar/</nowiki>}}
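
Once the keys are in place, start the node service and, from the frontend, check that the node's resources appear in the Alive state. This assumes the standard init script from the oar-node package:

{{Command|<nowiki>service oar-node start</nowiki>}}

{{Command|<nowiki>oarnodes -s</nowiki>}}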
        <div class="col-md-14">
 
              <div class="panel well well-neutra">
 
  
                              <div class="col-md-14">
 
                                    <div class="panel well well-neutra">
 
                                          <h5>'''Installation'''</h5>
 
                                            <ul>
 
                                                  <li></li>
 
                                          </ul>
 
                                      </div>
 
                              </div>
 
                           
 
                    </div>
 
 
         </div>
 
         </div>
 
     </div>
 
     </div>
 +
</div>
