


Technically speaking, the Digitalis platform is composed of the hardware machines described below:

  • The Grenoble site of Grid'5000
  • The pôle Informatique Distribuée (ID) of CIMENT
  • The cluster of the Kinovis platform
  • Several research teams' experimentation machines

Operation of the machines is shared among different engineering teams, working in tight cooperation.

The technical manager of the platform is Pierre Neyron (LIG/CNRS).

Hardware description

Grid'5000 Grenoble clusters

The Grenoble Grid'5000 site is composed of 3 clusters (as of 2012-03): genepi, edel and adonis. More information can be found on the Grid'5000 Grenoble site pages. Those machines are handled by the Grid'5000 global (national) system and managed by the Grid'5000 team. One must therefore refer to the Grid'5000 documentation to know how to use them. The remainder of this page is mostly not relevant to those clusters.

Grid'5000 resources can be accessed from any Grid'5000 site (i.e. Grenoble users are not restricted to Grenoble hardware).

One just needs a Grid'5000 account to access Grid'5000 resources.


As of 2014, the CIMENT pôle ID has no specific hardware within CIMENT (managed with the CIMENT stack). Grid'5000 Grenoble's site hardware is however used by CIMENT for some purposes such as training (GPUs), etc. Also, the CIMENT storage (iRODS) is replicated on data storage in the Grid'5000 network.

Other CIMENT resources (e.g. the Froggy cluster, 3000 cores) can nevertheless be used. One must request a CIMENT access.

Kinovis cluster

Currently the cluster of the Kinovis platform only supports the required functions for the Kinovis platform: dynamic multi-camera 3D reconstruction.

However, we envision allowing other research experiments to use that cluster in the future, when the Kinovis platform is not in operation.

The cluster is composed of 17 nodes: Dell R720 servers with 2 Intel Xeon Ivy Bridge CPUs, 64GB of RAM, and both a 10Gb Ethernet and a QDR/FDR InfiniBand network.

Also, every node connects 4 high-resolution IP cameras (hence nodes feature 8x 1Gb Ethernet interfaces per node, for a total of 68 cameras) and 8 HDDs.

Machine specs

Acquisition cluster
17 nodes (rack servers)
  • Dell PowerEdge R720
  • 2x Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz/8 cores (16 cores/node)
  • 1x Quadro K4000 (768 cores/node)
  • 1x Intel 10Gb Ethernet
  • 1x Mellanox ConnectX-3 IB FDR (set up with QDR only)
Production nodes
2 nodes (high-end desktops)
  • Dell Precision T5610
  • 2x Intel(R) Xeon(R) CPU E5-2687W v2 @ 3.40GHz/8 cores (16 cores/node)
  • 1x GeForce Titan Black -> 2880 cores
  • 1x Intel 10Gb Ethernet
Computation nodes
2 nodes (rack servers)
  • Dell PowerEdge T630
  • 2x Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz/14 cores (28 cores/node)
  • Xx GPU (TBC)
  • 1x Intel 10Gb Ethernet
  • 1x Mellanox ConnectX-3 IB FDR (set up with QDR only)

Research teams' machines

Digitalis includes machines which are not managed by Grid'5000, but benefit from many services provided by Grid'5000 (tight cooperation). First of all, access to those machines uses the Grid'5000 account credentials (more details below).

Those machines are (click on the link to access the dedicated page for each machine):

Those machines are resources co-funded by several teams in order to provide experimental platforms such as:

  • large and complex SMP configurations
  • complex processor/cache architecture analysis
  • multi-GPU configurations
  • etc

Hardware summary table

Platform: Grid'5000 -> access via
Machine | CPU | RAM | GPU | Network | Other
genepi-[1-34].grenoble.grid5000.fr | 2x Intel E5420 (8C) | 8GB DDR2 | - | IB DDR | -
edel-[1-72].grenoble.grid5000.fr | 2x Intel E5520 (8C) | 24GB DDR3 | - | IB QDR | -
adonis-[1-10].grenoble.grid5000.fr | 2x Intel E5520 (8C) | 24GB DDR3 | 1/2x S1070 (2GPU) | IB QDR | -

Platform: Digitalis -> access via
Machine | CPU | RAM | GPU | Network | Other
grimage-1.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | 1x GTX-680 (1GPU) | IB DDR | Keyboard/Mouse/Screen attached (4/3 screen, on the left, same as grimage-7)
grimage-2.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | - | IB DDR + 1x 10GE (DualPort) | 2x Camera (FireWire)
grimage-3.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | 1x GTX-680 (1GPU) | IB DDR | Keyboard/Mouse/Screen attached (16/9 screen, on the right) + 2x Camera (FireWire)
grimage-4.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | - | IB DDR + 1x 10GE (DualPort) | 2x Camera (FireWire)
grimage-5.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | - | IB DDR + 2x 10GE (DualPort) | 2x Camera (FireWire)
grimage-6.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | - | IB DDR + 1x 10GE (DualPort) | -
grimage-7.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | 1x GTX-580 (1GPU) | IB DDR + 2x 10GE (DualPort) | Keyboard/Mouse/Screen attached (4/3 screen, on the left, same as grimage-1)
grimage-8.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | - | IB DDR + 1x 10GE (DualPort) | -
grimage-9.grenoble.grid5000.fr | 2x Intel E5620 (8C) | 24GB DDR3 | 1x K40c | IB DDR | -
grimage-10.grenoble.grid5000.fr | 2x Intel E5620 (8C) | 24GB DDR3 | 2x GTX-295 (4GPU) | IB DDR | -
idgraf.grenoble.grid5000.fr | 2x Intel X5650 (12C) | 72GB DDR3 | 8x Tesla C2050 (8GPU) | - | -
idfreeze.grenoble.grid5000.fr | 4x AMD 6174 (48C) | 256GB DDR3 | - | - | -
idphix.grenoble.grid5000.fr | 1x Intel E5-2650 (8C) | 64GB DDR3 | 1x Xeon Phi 5110P | - | -
ppol-1.grenoble.grid5000.fr | 1x Intel E5620 (4C) | 12GB DDR3 | - | Ethernet 10Gbps | 1x HDD 500GB SATA + 2x SSD 50GB SATA
ppol-2.grenoble.grid5000.fr | 1x Intel E5620 (4C) | 12GB DDR3 | - | Ethernet 10Gbps | 1x HDD 500GB SATA + 2x SSD 50GB SATA
ppol-3.grenoble.grid5000.fr | 1x Intel E5620 (4C) | 12GB DDR3 | - | Ethernet 10Gbps | 1x HDD 500GB SATA + 2x SSD 50GB SATA
idbool.grenoble.grid5000.fr | 12x AMD Opteron 6376 (192C) | - | - | - | -

Machine | CPU | RAM | GPU | Network | Other
idkoiff.imag.fr | 8x AMD 875 (16C) | 32GB DDR2 | 1x GTX-280 (1GPU) | - | -


The Grid'5000 Network

The Digitalis platform benefits from the Grid'5000 infrastructure, and first of all uses the Grid'5000 network. This allows a unified access to many resources France-wide, within a single network space. Machines from any Grid'5000 site can communicate without administrative restriction (access control), and with a very high throughput (10GE backbone).

However, since Grid'5000 is a very powerful scientific instrument, the outside world must be protected from buggy experiments or uncontrolled behaviors. Please read the following pages for information about this:

As a consequence, one cannot just use machines on Grid'5000 the way one uses a workstation on a laboratory intranet.

Dedicated services

Dedicated services are provided for the management of our machines. Indeed, our machines could not fit the Grid'5000 model, due to their special characteristics and usage: the Grimage cluster is special in that it is used to operate the Grimage platform, with cameras and other equipment attached, making its hardware configuration different. The other local machines are special in that they are unique resources, which makes their usage model very different from that of a cluster of many identical machines, as found with the Grid'5000 clusters.

As a result, a dedicated resource management system (OAR) is provided to manage access to the machines, with special mechanisms (different from the ones provided in Grid'5000). A dedicated deployment system (kadeploy) is also provided to handle users' customized operating systems that can be deployed on the machines. Even though they differ from the main Grid'5000 tools, much of the documentation for the Grid'5000 tools also applies to our dedicated services. This document actually only explains their specificities.

The OAR and Kadeploy frontend for the Digitalis machines (i.e. not Grid'5000) is the machine named

Mutualised services (services provided by Grid'5000)

Many services we use on our local machines are provided by the Grid'5000 infrastructure, from a national perspective. For instance, the following services are provided for Grid'5000 but also serve our local purposes (by courtesy):

  • access machines
  • accounts (LDAP)
  • network home directory storage (NFS)
  • web proxy
  • and more.

Please bear in mind that those services are not dedicated to our local needs.

Terms of service

Grid'5000 services are handled nationally for the global platform (11 sites, France-wide). As a result, some aspects may seem more complex than they should from a local perspective. Please bear in mind that some services are not for our local convenience only. Furthermore, the local platform is to be seen as an extension to the main Grid'5000 platform, which is not supported by the Grid'5000 staff, even if we can freely benefit from some services they provide.

As a result, we are subject to the rules set by the Grid'5000 platform:

  • Security policies: restricted access to the network, output traffic filtering.
  • Maintenance schedules: Thursday is the maintenance day, do not be surprised if service interruptions happen on that day!
  • Rules of good behavior within the large Grid'5000 user community (reading the mailing lists is a must)

If one is using the "official" Grid'5000 nodes, one must comply with the Grid'5000 charter (as approved by every user when requesting a Grid'5000 account).

Data integrity

There is no guarantee against data loss on the Grid'5000 NFS (home directories), nor on the machines' local hard drives. No backup is performed, so in case of an incident, the Grid'5000 staff will not be able to provide you with any way to get your data back.

As a result, if you have data you really care about, which you cannot reproduce at an acceptable cost (computation time) with regard to the risk of data loss (which rarely happens), it is strongly suggested that you back it up elsewhere.

(The NFS storage uses RAID to survive a disk failure, but RAID is not a backup.)

Platform usage

Charter of good usage

The charter of usage for the machines of Digitalis (except for the official Grid'5000 machines, which follow the Grid'5000 Charter) is the following:


Users of the platform are split into 2 communities:

  • the owners of the machines (e.g. local users, buyers)
  • the others

The others are welcome to use the machines, but the owners keep priority and privileged rights (e.g. they can ask to drop jobs from others). In any case, everybody is encouraged to plan their experiments, and possibly to book resources in advance, while asking for a reasonable (fair) share of the resources (walltime).

Also, time is split into two phases: daytime and night.

During daytime
  • jobs should use the shared access as much as possible
  • if machines are obviously unused, one may consider running exclusive (or deploy) jobs, but please try to limit them to 2 hours max (possibly renewable, see the redeploy job type for instance)
  • during high-pressure periods, such as before conference submission deadlines, any usage by local users might preempt other usage
During the night
  • night is every day from 18:00 to 9:00, plus week-ends (from 18:00 on Friday to 9:00 on Monday) and public holidays (like Christmas, but not school holidays)
  • night is the time for long, exclusive jobs, for experiments requiring exclusive access to the resources (for performance reasons for instance)
  • however, if one just needs a long job, it is of course always preferred to run in the shared access mode

For now, the charter policy is not enforced by any technical means, so everyone's kindness is appreciated.

Also, if one requires a special usage, outside the charter for the resources, one is encouraged to inform all other users using the mailing list.

Again, while trying to foster mutualisation as much as possible, owners of the machines keep higher priority and privileges.

Access to Digitalis

Access to Grid5000

As a prerequisite to accessing Digitalis, you need to be able to access Grid'5000's network.

For that purpose, you need a Grid'5000 account. Please see:

Also, make sure your account belongs to the digitalis group (among others possibly).

Once you have a Grid'5000 account, you can log in to the Grid'5000 network using ssh:

$ ssh

In case of any issue at that point, please refer to the Grid'5000 documentation.

From there you can access the frontend of the Grid'5000 Grenoble site, by running:

$ ssh grenoble

Or to any other Grid'5000 site, e.g.

$ ssh nancy

BUT the research teams' machines of Digitalis are not managed from Grid'5000 Grenoble's frontend, see the next paragraph.

Please see also the tips and tricks section below which provides a lot of useful information to ease the access.

Access to the non-Grid'5000 machines

The frontend machine to use Digitalis' resources is: From the Grid'5000 access machine you can just do:

$ ssh

Like for Grid'5000 machines (but with a slightly different charter), access to the teams' machines is controlled by a resource manager.

This means that users cannot just ssh to a machine and leave processes running on it indefinitely (e.g. vi or emacs processes).

Any user must book the machine for a period of time (a job), during which access will be granted to them.

Once the period of time is over, all rights are revoked and all processes of the user are killed.

By default users are not root on the machines. Some privileged commands may however be permitted (e.g. schedtool). Default access to a machine is not exclusive, which means that many users can have processes on the machine at the same time, unless a user requested an exclusive access.

Just like on Grid'5000, it is possible on some machines to kadeploy. Some use cases indeed require full access to the machine: the need to be root, to reboot the machine, or to install software or a different operating system, without breaking it for others.

As a result, you need to use the OAR commands to get access to the experimentation machines.

Use cases

I want to access a machine

To access a specific machine, just provide the machine name in the oarsub command:

pneyron@digitalis:~$ oarsub -I -p "machine = 'idgraf'"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Interactive mode : waiting...

Connect to OAR job 1122 via the node

(Mind looking at the dedicated page of each machine for its details, e.g. for idbool, one must use the `-l machine=1' option to run a job on the whole machine).

You then get access to the machine for 1 hour by default (add -l walltime=4 for 4 hours).
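For instance, a job on idgraf with a 4-hour walltime combines the options above (a sketch, the machine name being just an example):

pneyron@digitalis:~$ oarsub -I -p "machine = 'idgraf'" -l walltime=4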

Note that if the machine is not available (e.g. an exclusive job is already running), you will have to wait until it is free (see the resource usage visualization tools).

If no machine is specified, you get access to one of the grimage nodes.

You can use the oarsh command to open other shells to the machine, as long as the job is still running.

Please read the man pages of the OAR commands for more details.

I want to gain exclusive access to a machine for N hours

To get access to a machine as the only user (e.g. in order to avoid noise from other users), use the exclusive job type:

pneyron@digitalis:~$ oarsub -I -p "machine = 'idgraf'" -t exclusive -l walltime=N
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Interactive mode : waiting...

Connect to OAR job 1122 via the node

This way you get access to the machine for N hours, and nobody else can access the machine during your job.

Note that if the machine is not available, you will have to wait until it is free (see the resource usage visualization tools).

Also, some privileged commands can be run via sudo in exclusive jobs (see the machines' dedicated pages).

I want to open a new shell in an existing job

There are several ways to open a shell in an OAR job.

Assuming you created a job as follows:

[pneyron@digitalis ~]$ oarsub "sleep 1h"
[ADMISSION RULE] Modify resource description with type constraints
Generate a job key...

You can:

Use oarsub -C <job id>
[pneyron@digitalis ~]$ oarsub -C 6028
Connect to OAR job 6028 via the node
[OAR] Your nodes are:*8

[pneyron@grimage-8 ~](6028-->58mn)$ 

NB: With this method, you do not need to know the nodes used by your job, only the job id. Also, the environment is the same as in the shell opened upon oarsub.

Use oarsh with the OAR_JOB_ID=<job id> environment variable
[pneyron@digitalis ~]$ OAR_JOB_ID=6028 oarsh
Linux 2.6.32-grimage #1 SMP Fri Jan 6 14:10:41 UTC 2012 x86_64
This is a Grid'5000 compute node.
You must have a reservation with OAR before using this host.
Last login: Fri Feb 21 16:54:42 2014 from
[pneyron@grimage-8 ~]$ 

NB: later on, you can also use oarsh on the node to connect from node to node (useful in multi node jobs)
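For instance, in a job spanning grimage-8 and grimage-9 (node names purely illustrative here), one can hop between the job's nodes:

[pneyron@grimage-8 ~]$ oarsh grimage-9
[pneyron@grimage-9 ~]$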

Use oarsh with a job key

For that, create a public/private key pair on digitalis with no passphrase (for the sake of ease of use, and because this key should be for Grid'5000 internal usage only):

pneyron@digitalis:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pneyron/.ssh/id_rsa):

Again: do not use your existing sensitive SSH keys here, for instance the ones located on your workstation and protected by a passphrase!

Then export the OAR_JOB_KEY_FILE environment variable:

[pneyron@digitalis ~]$ export OAR_JOB_KEY_FILE=~/.ssh/id_rsa

You can also add the export line to your .bashrc if meaningful to you (make sure your .bashrc is sourced upon login, or look at your .profile or .bash_profile...).
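For instance, a one-liner to append it (assuming your .bashrc is indeed sourced upon login):

[pneyron@digitalis ~]$ echo 'export OAR_JOB_KEY_FILE=~/.ssh/id_rsa' >> ~/.bashrc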

You will now see that the oarsub command uses this key for your jobs.

[pneyron@digitalis ~]$ oarsub  "sleep 1h"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: /home/pneyron/.ssh/id_rsa
[pneyron@digitalis ~]$

And you can connect to the job afterward:

[pneyron@digitalis ~]$ export OAR_JOB_KEY_FILE=~/.ssh/id_rsa # useless if export done in .bashrc 
[pneyron@digitalis ~]$ oarsh
Linux 2.6.32-grimage #1 SMP Fri Jan 6 14:10:41 UTC 2012 x86_64
This is a Grid'5000 compute node.
You must have a reservation with OAR before using this host.
Last login: Thu Feb 20 14:29:09 2014 from
[pneyron@grimage-9 ~]$


I want to run batch jobs, like on a regular HPC cluster

If you don't want your jobs to overlap, as happens with the shared and exclusive job types, you can use the batch job type. This job type activates OAR's original behavior, where one job waits for the termination of the previous job before starting.


  • First job:
[pneyron@digitalis ~]$ oarsub  -p "host like 'grimage-10.%'" -t batch 'sleep 2h'
Properties: host like 'grimage-10.%'
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: /home/pneyron/.ssh/id_rsa
  • Second job (-I is used here for the purpose of the demonstration only):
[pneyron@digitalis ~]$ oarsub  -p "host like 'grimage-10.%'" -t batch 'sleep 2h' -I
Properties: host like 'grimage-10.%'
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: /home/pneyron/.ssh/id_rsa
Interactive mode : waiting...
[2014-01-31 22:10:47] Start prediction: 2014-01-31 23:11:43 (FIFO scheduling OK)

NB: batch jobs are exclusive (but not timesharing=*,user)

I want to execute privileged commands on my node

Within an exclusive job, some privileged commands can be run via sudo. Those authorized privileged commands typically have an impact on other users, hence they require an exclusive access (job) to the machine.

See the page dedicated to each machine for information about the available commands (grimage, idfreeze, idgraf, idphix).

If the privileged command you need is not available (available commands run without any sudo password prompt), you can ask your administrator whether it is possible to enable it. However, not all commands are safe, and if one is considered harmful to the system, it will not be made available. Please consider deploying your own operating system on the machine to get full privileges.
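As an illustration only (the actual list of allowed commands depends on the machine, see its dedicated page, and my_experiment is a hypothetical program), running an allowed command within an exclusive job could look like:

pneyron@idgraf:~$ sudo schedtool -a 0x1 -e ./my_experiment   # pin the run to CPU 0, assuming schedtool is among the allowed commands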

I want to be able to reboot a node without losing my reservation

Rebooting a node kills jobs, therefore a special job type is provided to overcome this and allow rebooting nodes while keeping them booked. Unsurprisingly, this job type is named reboot (-t reboot). This type of job does not provide a shell on a node but on the frontend instead (just like deploy jobs). To get access to the nodes, the user must then run an exclusive job concurrently, and possibly several of them if they get interrupted by reboots.

Example of use:

pneyron@digitalis:~$ oarsub -I -t reboot -p "host like 'grimage-4.%'"
[ADMISSION RULE] Modify resource description with type constraints
Interactive mode : waiting...

Connect to OAR job 1129 via the node

Note that you get a shell on digitalis instead of on grimage-4, unlike with an exclusive job.

While such a job is running, a reboot can be performed either from the node (from the shell of an exclusive job) or from the frontend (digitalis).

Reboot from the node, as follows
pneyron@digitalis:~$ oarsub -I -t exclusive -p "host like 'grimage-4.%'"
[ADMISSION RULE] Modify resource description with type constraints
Interactive mode : waiting...

Connect to OAR job 1130 via the node
pneyron@grimage-4:~$ sudo reboot
The system is going down for reboot NOW! (pts/0) (Fri Jul 2
pneyron@grimage-4:~$ Connection to closed by remote host.
Connection to closed.
[ERROR] An unknown error occured : 65280
Disconnected from OAR job 1130

(The interruption of the job due to the reboot causes an error, which can of course be ignored.)

Reboot from the frontend as follows
pneyron@digitalis:~$ sudo node-reboot
[sudo] password for pneyron: 
*** Checking if pneyron is allowed to reboot
OK, you have a job of type "reboot" on the node, firing a reboot command !
--- switch_pxe (grimage cluster)
--- reboot (grimage cluster)
  *** A soft reboot will be performed on the nodes
CMD: ssh -q -o BatchMode=yes -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o ConnectTimeout=2 -o UserKnownHostsFile=/dev/null -i /etc/kadeploy3/keys/id_deploy "nohup /sbin/reboot -f &>/dev/null &" -- EXIT STATUS: 0
--- set_vlan (grimage cluster)
  *** Bypass the VLAN setting

NB: Please note that reboot jobs are exclusive.

Once rebooted, the user can get a new shell on the node by resubmitting an exclusive job, thanks to the reboot job which guarantees that no other user reserved the nodes in the meantime.

I want to change the system (OS, software) on the machine

Use the deploy type. See Grid'5000 documentation about kadeploy. The kadeploy installation on digitalis works the same way.
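A minimal sketch, assuming idgraf and an environment named squeeze-x64-base (both purely illustrative; check the kadeploy documentation for the environments and exact options available here):

pneyron@digitalis:~$ oarsub -I -t deploy -p "machine = 'idgraf'" -l walltime=4
# within the deploy job (shell on the frontend), deploy the environment and copy your SSH key to root
pneyron@digitalis:~$ kadeploy3 -e squeeze-x64-base -f $OAR_NODE_FILE -k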

I want to book the machine for next night

OAR allows advance reservations:

pneyron@digitalis:~$ oarsub -r "2012-04-01 20:00:00" -l walltime=4 -p "machine='idgraf'"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Reservation mode : waiting validation...
Reservation valid --> OK

Once your job starts (on April 1st, 8pm), you will be able to oarsh to the node.
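To check on or cancel such a reservation later, the usual OAR commands apply (the job id below is illustrative):

pneyron@digitalis:~$ oarstat -u pneyron   # list your current and waiting jobs
pneyron@digitalis:~$ oardel 1234          # cancel the reservation if you no longer need it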

See OAR's documentation for more information.

Q&A / Tips and tricks

Access seems to be broken, what can I do ?

You normally access the Grid'5000 network by ssh'ing to

However, if that access machine is not reachable:

  1. Check for known issues on the Grid'5000 incident page:
  2. Check your Grid'5000 emails about possible outages or maintenance (planned or exceptional)
  3. Try other access paths to the Grid'5000 network, cascading ssh as follows:
    1. from Internet > >
    2. from Internet > >
    3. from the intranet of Inria Grenoble or LIG > >
    4. from Internet: LIG bastion (e.g. > >
    5. from Internet: Inria Grenoble bastion (e.g. > >
  • You can hide those ssh cascades by playing with the ssh config file and proxycommands (see other tips and man ssh_config).
  • You might also want to benefit from a better bandwidth or latency by using the local access (

Access to the Grid'5000 network is OK, but I can't reach digitalis or grenoble

  1. Check for known issues on the Grid'5000 incident page:
  2. Check your Grid'5000 emails about possible outages or maintenance (planned or exceptional)
  3. Try to access Grenoble's site directly with one of the following paths of cascaded ssh:
    1. from Inria Grenoble or LIG > >
    2. from Internet: LIG bastion (e.g. > >
    3. from Internet: Inria Grenoble bastion (e.g. > >

(connections are restricted to local academic networks)

NB: If you can reach Grid'5000 Grenoble's site but not digitalis, that probably means that digitalis is broken. Please use the digitalis mailing list to report the problem.

I want to access digitalis directly without having to go through the access machine first

Add to your ssh configuration on your workstation (~/.ssh/config):

cat <<'EOF' >> .ssh/config
Host *.g5k
ProxyCommand ssh -W "$(basename %h .g5k):%p"
User pneyron
ForwardAgent no
EOF

(replace pneyron with your Grid'5000 login)

Make sure you pushed your SSH public key to Grid'5000. See:

Then you should be able to ssh to digitalis directly:

neyron@workstation:~$ ssh digitalis.grenoble.g5k
Linux 2.6.26-2-xen-amd64 #1 SMP Tue Jan 25 06:13:50 UTC 2011 x86_64
Last login: Thu Mar 22 14:36:05 2012 from

Or to copy files without needing a 2 hops operation:

neyron@workstation:~$ scp file digitalis.grenoble.g5k:/tmp/
file                                         100% 2783     2.7KB/s   00:00 

Same with rsync:

neyron@workstation:~$ rsync -av file digitalis.grenoble.g5k:/tmp/
sending incremental file list
sent 77 bytes  received 18 bytes  63.33 bytes/sec
total size is 15  speedup is 0.16

This can be used to connect to any machine within Grid'5000 from the outside, assuming you can already ssh to it from the inside. (Watch out: if you want to connect to a machine within a job, you need oarsh; see below.)

I want to ssh directly from my workstation to a job on an experimentation machine

(Note: This does not apply to the case of deploy jobs)

Make sure that the job you create uses a job key. See #I_want_to_open_a_new_shell_in_an_existing_job

You should have an SSH/job key in ~/.ssh/id_rsa.

Copy your keys to your workstation:

scp digitalis.grenoble.g5k:.ssh/id_rsa ~/.ssh/id_rsa_g5k
scp digitalis.grenoble.g5k:.ssh/id_rsa.pub ~/.ssh/id_rsa_g5k.pub

Add to your ssh configuration on your workstation (~/.ssh/config):

neyron@workstation:~$ cat <<'EOF' >> .ssh/config
Host *.g5koar
ProxyCommand ssh -W "$(basename %h .g5koar):6667"
User oar
IdentityFile ~/.ssh/id_rsa_g5k
ForwardAgent no
EOF

(replace pneyron by your Grid'5000 login)

Assuming you exported the OAR_JOB_KEY_FILE variable before doing the oarsub:

[pneyron@digitalis ~]$ export OAR_JOB_KEY_FILE=~/.ssh/id_rsa # useless if export done in .bashrc 
[pneyron@digitalis ~]$ oarsub -p "machine='idgraf'" "sleep 1h"
Properties: machine='idgraf'
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: /home/pneyron/.ssh/id_rsa

Then you should be able to ssh directly to the machine from your workstation:

neyron@workstation:~$ ssh idgraf.grenoble.g5koar
Linux 3.2.0-2-amd64 #1 SMP Sun Mar 4 22:48:17 UTC 2012 x86_64

I want to push/pull data from/to the outside to/from a machine

There are several ways of pushing/pulling files from/to the outside.

Using NFS

Assuming your Grid'5000 user's NFS home directory is mounted on the destination machine, you can access files from there after copying them with one of the following commands:

  • Using the global access machine:
neyron@workstation$ rsync -av file
  • Using Grenoble's local access machine (access restricted):
neyron@workstation$ rsync -av file

The file is then available in the home directory on all Grenoble machines:

neyron@machine$ ls /home/pneyron/

(replace pneyron by your Grid'5000 login)

Using the SSH proxy command setup

See above for the setup of the .g5k and .g5koar SSH ProxyCommand. You can then run commands like:

neyron@workstation$ rsync -av file digitalis.grenoble.g5k:/tmp/
neyron@workstation$ rsync -av file idgraf.grenoble.g5koar:/tmp/

I want my code to be pushed automatically to the machine

One can use inotifywait for instance.

To push files edited by vi for instance:

# watch the current directory and push each modified file (vi swap and backup files are ignored)
while f=$(inotifywait . --excludei '(\.swp)|(~)$' -e modify --format %f); do rsync -av "$f" remote_machine:remote_dir/; done


See man inotifywait for more details.
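A variant watching a whole source tree continuously is sketched below (remote_machine and remote_dir are placeholders, adjust to your setup):

# monitor src/ recursively and push each written file, preserving its relative path
inotifywait -m -r src/ -e close_write --format '%w%f' | while read -r f; do rsync -avR "$f" remote_machine:remote_dir/; done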

A node is marked Absent or Suspected, how can I fix it?

Nodes sometimes stay Absent after deploy jobs. While a short Absent time is normal during the reboot phase that follows the termination of a deploy job, a long Absent time (more than 15 minutes) usually reveals a failed reboot. If you detect such a problem, please feel free to reboot the node again from the frontend, as follows:

pneyron@digitalis:~$ sudo node-reboot
[sudo] password for pneyron: 
*** Checking if pneyron is allowed to reboot
OK, node is absent or suspected, firing a reboot command !
--- switch_pxe (grimage cluster)
--- reboot (grimage cluster)
  *** A soft reboot will be performed on the nodes
CMD: ssh -q -o BatchMode=yes -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o ConnectTimeout=2 -o UserKnownHostsFile=/dev/null -i /etc/kadeploy3/keys/id_deploy "nohup /sbin/reboot -f &>/dev/null &" -- EXIT STATUS: 0
--- set_vlan (grimage cluster)
  *** Bypass the VLAN setting

Rarely, nodes can also be marked as Suspected for an unknown reason (typically, the OOM killer woke up...). If a node stays Suspected for a long time, you can also try to reboot it, using the same command.

kaconsole3 is not working on idgraf

The IPMI stack of the BMC of idgraf is buggy. If you want to use the console but see that it is broken (no prompt), you can try to fix the BMC.

This is possible if you are in an exclusive job, by running:

sudo ipmi-reset

This is also possible if you are root (i.e. in a deploy job), by running

ipmitool mc reset cold

(Please do not play with other IPMI commands, since this will break the system).

NB: this reset takes a few minutes to complete.

How do I exit kaconsole3?

type "&."

I'd like to access resources stored outside of Grid'5000 (Internet)

Depending on what one wants to do, several options exist:

For the specific case of access to source control repositories (SVN, Git, ...), at least 2 options are possible:

  • Configure the HTTP proxy settings in your SVN or Git configuration, and use the webdav access method (i.e. not ssh). This requires that your repository server be white-listed (this is the case for common servers). See the sketch after this list.
  • Checkout sources on your workstation and synchronize them to the experimentation machine (using any combination of the following tools: rsync, ssh with proxy command, inotify,...)
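For the first option, a minimal sketch for Git over HTTPS (the proxy host, port and repository URL are placeholders; use the actual web proxy of the site):

git config --global http.proxy http://proxy:3128     # placeholder proxy, adjust to the local web proxy
git clone https://example.org/my/repository.git      # works only if the server is white-listed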

NB: Soon, a NAT service should be provided to allow establishing any kind of IP connection from inside Grid'5000 to the outside (Internet) for white-listed Internet destinations. This should make life easier.

The default OS on the machine I use does not provide what I need

If you think deploying is overkill for your needs, because you just need one more package, or a version upgrade that seems straightforward, you are entitled to ask for it to be applied to the default OS of the machine.

The right way to go is the following:

  1. test yourself that you are indeed right: use kadeploy to install a copy of the default OS of the machine by yourself in a job, hence getting full super user privileges.
  2. do what you think is good
  3. take note of every modification you did
  4. finally, ask your administrator (me) whether those modifications could be applied by default.

E.g.: the OS of idgraf is currently pretty old. If requested, it could be upgraded, to provide CUDA 5 by default for instance, instead of CUDA 4 currently (as of 2013-06).

I just deployed the default OS of a machine, and I cannot ssh to the machine with my user login

Default environments of machines have restrictions regarding user logins: only root and the oar user can connect via ssh (as required by the oarsh mechanism). If you deploy a default environment, you must therefore comment out the last line in /etc/security/access.conf:

-:ALL EXCEPT root oar:ALL

In case of doubt, you can actually comment out all the lines in the file.
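For instance, a one-liner to comment out that line (to be run as root on the deployed node; a sketch, do check the file afterwards):

sed -i 's/^-:ALL EXCEPT root oar:ALL/# &/' /etc/security/access.conf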

Then you should be able to ssh to the machine using any valid user credential.

Any other question ?

Please visit the Grid'5000 website:

Or see below the technical contact section.

Resource usage visualization tools

2 tools are available to see how resources are or will be used:


Chandler

Chandler is a command-line tool, to be run on digitalis. It gives a view of the current usage of the machines.

pneyron@digitalis:~$ chandler

4 jobs, 92 resources, 60 used
         grimage-1 	TTTTTTTT grimage-2 	TTTTTTTT grimage-3 	
TTTTTTTT grimage-4 	TTTTTTTT grimage-5 	         grimage-6 	
         grimage-7 	JJJJJJJJ grimage-8 	JJJJJJJJ grimage-9 	
         grimage-10 	TTTTTTTTTTTT idgraf 	

 =Free  =Standby J=Exclusive job T=Timesharing job S=Suspected A=Absent D=Dead
  [1101] eamat (shared)
  [1101] eamat (shared)
  [1101] eamat (shared)
  [1101] eamat (shared)
  [1115] pneyron (reboot)
  [1115] pneyron (reboot)
  [1113] jvlima (shared)
  [1114] pneyron (shared)

Gantt diagram of usage

OAR Drawgantt diagram gives a view of the past, current and future usage of the machines.

(to see only one of the machines, you can set the filter parameter to one of the values shown in the select box)

Other OAR tools

All OAR commands are available, see OAR's documentation.

  • oarstat: list current jobs
  • oarnodes: list the resources with their properties
  • etc.

Platform information and technical contact

Mailing lists

Dedicated list

A mailing list is dedicated to communication about the locally managed machines: you'll get information about the platform via emails sent to this list, and you can also write to this list if you have something to communicate to the other users or the administrators of the local machines.

This list is the preferred medium for any question regarding the platform.

You must be a member of the digitalis group (see/edit your affiliation in Grid'5000 users management system) to receive/send e-mails from/to this mailing list.

Grid'5000 lists

Grid'5000 provides many mailing lists which any Grid'5000 user automatically receives. Since the local machines benefit from global Grid'5000 services, you should keep an eye on information sent to those mailing lists, to be aware of potential exceptional maintenance schedules for instance.

Be aware that Thursday is the maintenance day. Regular maintenance operations are scheduled, which may for instance impact the NFS service.

Please do not use those lists for issues related to the local machines, since the Grid'5000 staff is not in charge of those machines.

Grid'5000 Platform Events

Please also consider bookmarking the Grid'5000 platform events page. It lists the future events which will impact the platform. You can also subscribe to the RSS feed.
