Digitalis





Technically speaking, the Digitalis platform is composed of the hardware machines described below. Some are managed by the Grid'5000 team (as a national service), others are managed locally.

This page describes how to use the locally managed machines.

Hardware description

Grid'5000 Grenoble clusters

The Grenoble Grid'5000 site is composed of 3 clusters (as of 2012-03): genepi, edel and adonis. More information can be found on the Grid'5000 Grenoble site pages. Those machines are handled by the Grid'5000 global (national) system, so one must refer to the Grid'5000 documentation to learn how to use them. The remainder of this page is mostly not relevant to those clusters.

Grimage cluster

The Grimage cluster was originally dedicated to connecting the Grimage platform hardware (cameras, etc.) and processing its data (video captures, etc.). More recently, 10GE Ethernet cards were added to some nodes for a project, making the cluster a mutualized platform. Currently, at least 4 projects use the cluster, which requires a resource management system and a deployment system adapted to an experimental platform, just like Grid'5000.

Special machines

These machines are resources co-funded by several teams in order to provide experimental platforms for problems such as:

  • large and complex SMP configurations
  • complex processor/cache architecture analysis
  • multi-GPU configurations
  • etc

Currently, the following machines are available:


idgraf:
  • 2x Intel Xeon X5650 (Westmere, 6 cores each)
  • 72 GB DDR3 RAM
  • 8x Nvidia Tesla C2050


idfreeze may be integrated into the platform later.

Platform | Access | Machine | CPU | RAM | GPU card(s) | #GPU | Network | Other
Digitalis | digitalis.grenoble.grid5000.fr | grimage-1.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | 1x GTX-580 | 1 | IB QDR | Keyboard/Mouse/Screen attached
 | | grimage-2.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | | | IB QDR + 1x 10GE (DualPort) | 2x Camera (firewire)
 | | grimage-3.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | 1x GTX-580 | 1 | IB QDR | Keyboard/Mouse/Screen attached + 2x cameras (firewire)
 | | grimage-4.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | | | IB QDR + 1x 10GE (DualPort) | 2x Camera (firewire)
 | | grimage-5.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | | | IB QDR + 2x 10GE (DualPort) | 2x Camera (firewire)
 | | grimage-6.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | | | IB QDR + 1x 10GE (DualPort) |
 | | grimage-7.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | | | IB QDR + 2x 10GE (DualPort) |
 | | grimage-8.grenoble.grid5000.fr | 2x Intel E5530 (8C) | 12GB DDR3 | | | IB QDR + 1x 10GE (DualPort) |
 | | grimage-9.grenoble.grid5000.fr | 2x Intel E5620 (8C) | 24GB DDR3 | 1x GTX-295 | 2 | IB QDR |
 | | grimage-10.grenoble.grid5000.fr | 2x Intel E5620 (8C) | 24GB DDR3 | 1x GTX-295 | 2 | IB QDR |
 | | idgraf.grenoble.grid5000.fr | 2x Intel X5650 (12C) | 72GB DDR3 | 8x Tesla C2050 | | |
DMZ@ID | incas.imag.fr | idfreeze.imag.fr | 4x AMD 6174 (48C) | 256GB DDR2 | | | 20x 1GE (→ GoFree) |
 | | idkoiff.imag.fr | 8x AMD 875 (16C) | 32GB DDR2 | 1x GTX-280 | 1 | |
Grid5000 | frontend.grenoble.grid5000.fr | adonis-[1-10].grenoble.grid5000.fr | 2x Intel E5520 (8C) | 24GB DDR3 | 2x C1060 | | IB QDR |

(Empty Platform/Access cells: same as the row above.)


Dedicated services

Dedicated services are provided for the management of our machines. Indeed, our machines could not fit into the Grid'5000 model, due to their special characteristics and usage: the Grimage cluster is special in that it operates the Grimage platform, with cameras and other equipment attached, which makes its hardware configuration different; the other local machines are special in that they are unique resources, which makes their usage model very different from that of a cluster of many identical machines, as found in the Grid'5000 clusters.

As a result, a dedicated resource management system (OAR) is provided to manage access to the machines, with special mechanisms (different from the ones provided in Grid'5000). A dedicated deployment system (kadeploy) is also provided to handle users' customized operating systems that can be deployed on the machines. Even though they are separate from the main Grid'5000 tools, most of the documentation for the Grid'5000 tools also applies to our dedicated services. This document only explains their specificities.

The OAR and Kadeploy frontend for our machines is the machine named digitalis (digitalis.grenoble.grid5000.fr).

Mutualised services (services provided by Grid'5000)

Many services we use on our local machines are provided by the Grid'5000 infrastructure, from a national perspective. For instance, the following services are provided for Grid'5000 but also serve our local purposes (by courtesy):

  • access machines
  • NFS storage
  • proxying
  • and more.

Please keep in mind that those services are not dedicated to our local needs.

Terms of service

Grid'5000 services are handled nationally for the global platform (11 sites, France-wide). As a result, some aspects may seem more complex than they should from a local perspective. Please keep in mind that some services are not for our local convenience only. Furthermore, the local platform is to be seen as an extension of the main Grid'5000 platform, which is not supported by the Grid'5000 staff, even if we can freely benefit from some services they provide.

As a result, we are subject to the rules set by the Grid'5000 platform:

  • Security policies (restricted access to the network, outbound traffic filtering).
  • Maintenance schedules: Thursday is the maintenance day, so do not be surprised if service interruptions happen on that day!
  • Rules of good behavior within the large Grid'5000 user community (reading the mailing lists is a must).

If one is using the "official" Grid'5000 nodes, one must comply with the UserCharter (as approved by every user when requesting a Grid'5000 account).

Data integrity

There is no guarantee against data loss on the Grid'5000 NFS (home directories), nor on the machines' local hard drives. No backup is performed, so in case of an incident, the Grid'5000 staff will not be able to help you recover any data.

As a result, if you have data you really care about, and cannot reproduce it at an acceptable cost (computation time) compared to the risk of data loss (which rarely happens), it is strongly suggested that you back it up elsewhere.

(The NFS storage uses RAID to survive a disk failure, but RAID is not a backup.)

Platform usage

Machine access

Access to the machines is controlled by the resource manager. This means that users cannot simply ssh to a machine and leave processes running on it indefinitely (e.g. a vi process).

Instead, any user must book a machine for a period of time (a job), during which access is granted to him, possibly with some additional privileges (depending on the requested type of job). Once that period of time ends, all rights are revoked and all of the user's processes are killed.

By default, users are not root on the machines. Some privileged commands may however be permitted (e.g. schedtool). Default access to a machine is not exclusive, which means that many users can have processes on the machine at the same time, unless a user requests exclusive access.
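For example, a minimal sketch of using schedtool without root privileges to pin one of your own processes to specific cores (the program name is only an illustration; 0x3 is the affinity mask for cores 0 and 1):

pneyron@idgraf:~$ schedtool -a 0x3 -e ./my_program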

Special use cases also require full access to the machine: one may want to be root, to be able to reboot the machine, or even to install software or a different operating system. Just like on Grid'5000, this is possible, at the cost of using kadeploy.

Access to a machine is granted through commands run on the frontend (digitalis), as illustrated in the use cases below.

Use cases

I want to access a machine

To access a specific machine, just provide the machine name in the oarsub command:

pneyron@digitalis:~$ oarsub -I -p "machine = 'idgraf'"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Interactive mode : waiting...

Connect to OAR job 1122 via the node

You then get access to the machine for 1 hour by default (add -l walltime=4 for 4 hours).
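For instance, a 4-hour interactive job on the same machine:

pneyron@digitalis:~$ oarsub -I -p "machine = 'idgraf'" -l walltime=4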

Note that if the machine is not available (e.g. an exclusive job is already running), you will have to wait until it is freed up (see the resource usage visualization tools).

If no machine is specified, you get access to one of the grimage nodes.

You can use the oarsh command to open other shells to the machine, as long as the job is still running.
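For example, to open a second shell on the machine from the frontend (assuming the job number from the transcript above, 1122, and the idgraf node; adapt the job id and node name to your own job):

pneyron@digitalis:~$ OAR_JOB_ID=1122 oarsh idgraf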

Please see OAR's documentation for more details.

I want to gain exclusive access to a machine for N hours

To access a machine and be alone on it (to avoid noise from other users), give your job the exclusive type:

pneyron@digitalis:~$ oarsub -I -p "machine = 'idgraf'" -t exclusive -l walltime=N
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Interactive mode : waiting...

Connect to OAR job 1122 via the node

You then get access to the machine for N hours; nobody else can access the machine during your job.

Note that if the machine is not available, you will have to wait until it is free (see the resource usage visualization tools).

Also, some privileged commands can be run via sudo in exclusive jobs (see below).

I want to execute privileged commands on my node

Within an exclusive job, some privileged commands can be run via sudo. Those authorized privileged commands typically have an impact on other users, hence they require exclusive access (an exclusive job) to the machine.

Currently, the following commands can be run via sudo in exclusive jobs:

on idgraf
  • sudo /usr/bin/whoami (provided for testing the mechanism, should return "root")
  • sudo /sbin/reboot
  • sudo /usr/bin/schedtool
  • sudo /usr/bin/nvidia-smi (please notify other users via the mailing list if you change parameters on GPUs that will not be reset to default after a reboot, e.g. ECC)
on grimage
  • n/a
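For instance, once inside an exclusive job on idgraf, the test command from the list above should behave as follows:

pneyron@idgraf:~$ sudo /usr/bin/whoami
root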

If the privileged command you need is not available (available commands run without any sudo password prompt), you can ask your administrator whether it is possible to enable it, but commands considered harmful to the system will not be made available. Consider deploying your own operating system on the machine to get full privileges.

I want to be able to reboot the machine during my job

A reboot of a machine kills normal jobs, so a special job type has been created to survive machine reboots. Unsurprisingly, it is called the "reboot" type.

During a job of type reboot, the machine can be rebooted: the job is not terminated, and the machine remains reserved for the job's lifetime. Furthermore, the owner of the reboot job is the only user who can create (exclusive) jobs during the period the reboot job is running, which allows him to make sure he gets access to the machine after the reboot.

Please note that reboot jobs are exclusive.

Example of use:

pneyron@digitalis:~$ oarsub -I -t reboot -p "machine='idgraf'"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Interactive mode : waiting...

Connect to OAR job 1129 via the node

Note that with this interactive job, instead of getting a shell on the target machine, you get a shell on digitalis.

A command will soon be provided to reboot the node from within such a job.

While the resource is booked in this reboot job by the user, he can still submit exclusive jobs to get access to the machine (possibly before and after the reboot):

pneyron@digitalis:~$ oarsub -I -t exclusive -p "machine='idgraf'"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Interactive mode : waiting...

Connect to OAR job 1130 via the node

Note that this also works the other way around: one can first begin an exclusive job, then, within the time period of that job, begin a reboot job, then reboot (which ends the exclusive job), and finally run a new exclusive job to get access to the rebooted machine afterward.
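As a sketch of that sequence (the machine and job types are those used above; the reboot itself relies on the sudo /sbin/reboot command listed for idgraf, and each oarsub is run from a shell on digitalis):

pneyron@digitalis:~$ oarsub -I -t exclusive -p "machine='idgraf'"   # work on the node
pneyron@digitalis:~$ oarsub -I -t reboot -p "machine='idgraf'"      # from another shell: book the node across the reboot
pneyron@idgraf:~$ sudo /sbin/reboot                                 # reboot the node (ends the exclusive job)
pneyron@digitalis:~$ oarsub -I -t exclusive -p "machine='idgraf'"   # new exclusive job on the rebooted node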

I want to change the system (OS, software) on the machine

Use the deploy type. See Grid'5000 documentation about kadeploy. The kadeploy instance on digitalis works the same way.
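A minimal sketch, assuming an environment named squeeze-x64-base is registered in the local kadeploy instance (the environment name is only an example; the available ones can be listed with kaenv3 -l):

pneyron@digitalis:~$ oarsub -I -t deploy -p "machine='idgraf'"
pneyron@digitalis:~$ kadeploy3 -e squeeze-x64-base -m idgraf.grenoble.grid5000.fr -k

The -k option copies your SSH key to the root account of the deployed system.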

I want to book the machine for next night

OAR allows advance reservations:

pneyron@digitalis:~$ oarsub -r "2012-04-01 20:00:00" -l walltime=4 -p "machine='idgraf'"
[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa
Reservation mode : waiting validation...
Reservation valid --> OK

Once your job starts (on April 1st, 8pm), you will be able to oarsh to the node.
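In the meantime, you can check your reservation with oarstat (the -u option restricts the listing to the given user; pneyron is the example login used above):

pneyron@digitalis:~$ oarstat -u pneyron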

See OAR's documentation for more information.

Tips and tricks

I want to access digitalis directly, without having to go through the access machine first

Add the following to your ssh configuration on your workstation (~/.ssh/config):

cat <<'EOF' >> ~/.ssh/config
Host *.g5k
ProxyCommand ssh pneyron@access.grid5000.fr "nc -q 0 `basename %h .g5k` %p"
User pneyron
ForwardAgent no
EOF

(replace pneyron with your Grid'5000 login; access.grid5000.fr is Grid'5000's national access machine)

Make sure you have pushed your SSH public key to Grid'5000 (see the Grid'5000 documentation).

Then you should be able to ssh to digitalis directly:

neyron@workstation:~$ ssh digitalis.grenoble.g5k
Linux 2.6.26-2-xen-amd64 #1 SMP Tue Jan 25 06:13:50 UTC 2011 x86_64
Last login: Thu Mar 22 14:36:05 2012 from

I want to ssh directly from my workstation to my experimentation machine

(Note: This does not apply to the case of deploy jobs)

Make sure that jobs you create use a job key. For that, create a public/private key pair on digitalis (with no passphrase):

pneyron@digitalis:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pneyron/.ssh/id_rsa):

(For security reasons, don't use your existing SSH keys, located on your workstation and protected by a passphrase.)

Then add to your bashrc on digitalis:

cat <<'EOF' >> ~/.bashrc
export OAR_JOB_KEY_FILE=.ssh/id_rsa
EOF

The oarsub command will now use this key for your jobs:

pneyron@digitalis:~$ oarsub -I

[ADMISSION RULE] Modify resource description with type constraints
Import job key from file: .ssh/id_rsa

Copy your keys to your workstation:

scp digitalis.grenoble.g5k:.ssh/id_rsa ~/.ssh/id_rsa_g5k
scp digitalis.grenoble.g5k:.ssh/id_rsa.pub ~/.ssh/id_rsa_g5k.pub

Add to your ssh configuration on your workstation (~/.ssh/config):

cat <<'EOF' >> ~/.ssh/config
Host *.g5koar
ProxyCommand ssh pneyron@access.grid5000.fr "nc -q 0 `basename %h .g5koar` %p"
User oar
Port 6667
IdentityFile ~/.ssh/id_rsa_g5k
ForwardAgent no
EOF

(replace pneyron with your Grid'5000 login; User oar and Port 6667 target OAR's dedicated SSH daemon on the node, so do not change them)

Then you should be able to ssh directly to a machine reserved in an OAR job:

neyron@workstation:~$ ssh idgraf.grenoble.g5koar
Linux 3.2.0-2-amd64 #1 SMP Sun Mar 4 22:48:17 UTC 2012 x86_64

I want my code to be pushed automatically to the machine

man inotifywait

FIXME: to be completed
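In the meantime, a possible sketch, assuming inotify-tools and rsync are installed on your workstation, ~/src is the directory to push, and idgraf.grenoble.g5koar is a node reserved in your OAR job (reachable thanks to the SSH configuration described above):

# re-sync the source tree to the node each time a file changes
while inotifywait -r -e modify,create,delete,move ~/src; do
  rsync -az --delete ~/src/ idgraf.grenoble.g5koar:src/
done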

Resource usage visualization tools

Two tools are available to see how resources are or will be used:


Chandler is a command line tool, to be run on digitalis. It gives a view of the current usage of the machines.

pneyron@digitalis:~$ chandler

4 jobs, 92 resources, 60 used
         grimage-1 	TTTTTTTT grimage-2 	TTTTTTTT grimage-3 	
TTTTTTTT grimage-4 	TTTTTTTT grimage-5 	         grimage-6 	
         grimage-7 	JJJJJJJJ grimage-8 	JJJJJJJJ grimage-9 	
         grimage-10 	TTTTTTTTTTTT idgraf 	

 =Free  =Standby J=Exclusive job T=Timesharing job S=Suspected A=Absent D=Dead
  [1101] eamat (shared)
  [1101] eamat (shared)
  [1101] eamat (shared)
  [1101] eamat (shared)
  [1115] pneyron (reboot)
  [1115] pneyron (reboot)
  [1113] jvlima (shared)
  [1114] pneyron (shared)


Drawgantt gives a view of the past, current and future usage of the machines.

Other OAR tools

All OAR commands are available; see OAR's documentation.

  • oarstat: list current jobs
  • oarnodes: list the resources with their properties
  • etc.
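For instance, run on digitalis:

pneyron@digitalis:~$ oarstat
pneyron@digitalis:~$ oarnodes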

Platform information and technical contact

Mailing lists

Dedicated list

A mailing list is dedicated to communication about the locally managed machines: you will get information through emails sent to this list, and you can also write to it if you need to communicate something to the other users of the local machines.

Grid'5000 lists

Grid'5000 provides several mailing lists which any Grid'5000 user automatically receives. Since the local machines benefit from global Grid'5000 services, you should keep an eye on information sent to those mailing lists, for instance to be aware of exceptional maintenance schedules.

Be aware that Thursday is the maintenance day: regular maintenance operations are scheduled, which may for instance impact the NFS service.

Please do not use the Grid'5000 lists for issues related to the local machines, since the Grid'5000 staff is not in charge of those machines.

Grid'5000 Platform Events

Please also bookmark the Grid'5000 platform events page, which lists future events scheduled for the platform. You can also subscribe to its RSS feed.


For any issue with the platform, you can contact me using Grid'5000 jabber. Feel free to add me to your buddy list:
