Login and sending jobs to CBLab

The Computational Biology Lab cluster is a set of computers and programs that allows running many calculations simultaneously. We can divide this set into two big groups: the nodes (which perform all the calculations) and the support system. All the applications of the cluster run on Bio·Linux 8 (Ubuntu 14.04 LTS).

User data are stored in the /home/usuaris/<user_name> directory, and most of the programs that run on the cluster can be found in the /home/soft/<program> directory. There is also a /home/db directory where we store the databases needed for users' work, as well as others that may be useful for all users.

By default, a maximum of 4 GB of RAM and one processor (out of 284 installed) are assigned to each submitted job, but we will see how to modify this.

You can download a presentation here: CBLab tutorial (Spanish)

How to log in

We connect to the cluster over SSH. On Linux and Mac we simply type in the terminal:
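A minimal sketch of the login command; the hostname cblab.example.org is a hypothetical placeholder (the real cluster address is provided by the system administrators):

```shell
# Connect to the cluster over SSH; replace <user_name> with your
# cluster user name and the hostname with the real cluster address
ssh <user_name>@cblab.example.org
```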


Windows users can log in too, using PuTTY or another SSH client.

If you want to upload or download files, we recommend FileZilla, or scp from the terminal.
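A sketch of both directions with scp; the hostname and file names are hypothetical placeholders:

```shell
# Upload a local file into your home directory on the cluster
scp my_data.csv <user_name>@cblab.example.org:/home/usuaris/<user_name>/

# Download a results file from the cluster to the current local directory
scp <user_name>@cblab.example.org:/home/usuaris/<user_name>/results.out .
```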

How to send jobs to the cluster

The first thing we have to do is create a plain-text file (e.g. a .txt file, NOT .html, .docx, etc.) where we write the commands or instructions (a script) with all the options, as well as the path of the file or files that the cluster will need for the job.

### R code
cd  ~/research/gTiger/tiger_risk_estimation/scripts
/home/soft/R-3.2.1/bin/R CMD BATCH --no-save --no-restore t002.0_sampling_effort_overlay_rsample.r t002.0_sampling_effort_overlay_rsample.out

We can complete our script by adding other options that give us useful information:

< <input_file_name>

Standard input: the program will read its input from this file.

> <output_file_name>

Standard output: this file will contain all the output generated by the program.

2> <error_file_name>

Standard error: if the job fails, this file will contain information about the errors.

Written in our script it would look like this:

### R code
/home/soft/<program_name> < <input_file_name> > <output_file_name> 2> <error_file_name>

Once the script is written, we save it as a .sh file. Make sure all the files the cluster will need are in the paths you indicated in the script.

From the console, we submit our job using the qsub command:
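The basic submission is just qsub followed by the script file:

```shell
# Submit the script to the queue; qsub prints the assigned job ID
qsub <script_name>.sh
```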


By default (with no options), a maximum of 4 GB of RAM and one processor (out of 284 installed) are assigned to each submitted job; now we will see how to modify this. Also, when each submitted job finishes, it generates two files: <script_name>.sh.o<job_id> (output file) and <script_name>.sh.e<job_id> (error file, empty if there are no errors).

qsub options

These are some useful options:

qsub -l h_vmem=<x>G

This option lets us request a different amount of RAM. Attention: RAM is assigned per core (processor), not per job.

qsub -l h_vmem=10G ... <script_name>.sh

qsub -pe make <n_processors>

With this option we activate the 'make' parallel environment. It is an intra-node parallelization, so the maximum number of processors we can use is limited by the hardware (64 cores max). Example:

qsub -pe make 10 <script_name>.sh

For example, if we want to send a job with 20 cores and assign 100 GB of RAM in total, then (since RAM is assigned per core, 100 GB / 20 cores = 5 GB per core):

qsub -pe make 20 -l h_vmem=5G <script_name>.sh

Please consider that assigning more RAM or processors than necessary will “block” resources that other users could use.

qsub -m bea -M <user_mail>

With this option we will receive an email at the beginning (b) and end (e) of the job, or when the submitted job is aborted (a):

qsub -m bea -M <user_mail> <script_name>.sh

qsub -q ceab@nodexxx

Sends the job to a chosen node, where xxx is the number of the node (100 to 112).

qsub -e error -o output

These files are created by default; with this option we can choose their names. Just write this:

qsub -e <error_file_name.txt> -o <output_file_name.txt> <script_name>.sh

You can combine all these options in one command:

qsub -pe make 10 -l h_vmem=10G -M <user_mail> -m bea -q ceab@nodexxx <script_name>.sh

Logging in to a node

Sometimes it can be useful to log in to a node, for example one where a job of ours is running. We use the qlogin command for this. With no options the node is chosen at random, and at least one processor on the node must be free. You can use the same options as with qsub. In this case you don't need to indicate a script:

qlogin <options>

Check job status

If we want to know which of our jobs are running in the cluster, we use the qstat command. It shows us the job ID and the node where each job is running.
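With no arguments, qstat lists only your own jobs:

```shell
# List your running and queued jobs (job ID, state, node/queue)
qstat
```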


To see another user's jobs:

qstat -u <login_user_name>

Or all users' jobs at the same time:

qstat -u "*"

If we want more information about one particular job:

qstat -j <job_ID>

"Kill" jobs

If we want to end a job before it is finished:

qdel <job_ID>
users/introduction/login_and_sending_jobs_to_the_cluster.txt · Last modified: 2018/06/13 15:12 by xavi