Wiki sandbox

From Digitalis

{{Tabs}}
 
===== Overview =====

[[File:Grimage10GE.png|200px|thumb|right|Grimage 10GE network]]

; '''Update 2015-08'''
* grimage-9 is now equipped with an Nvidia K40c GPU (thanks to the Nvidia/Inria-GRA partnership).
* grimage-10 is now equipped with 2 Nvidia GeForce 295 cards (each a dual-GPU card), which makes it a 4-GPU machine.

; '''Update 2015-06'''
The grimage machine room is shut down. As a result, grimage-1 to grimage-8 are shut down as well, with no plan to bring them up again soon.

Grimage-9 and grimage-10 (hosted in the cluster machine room along with the other machines of Digitalis/Grid'5000) are still available.
 
; '''Update 2014-05'''
The Intel 10GE cards were removed from the nodes, to be used in the pool nodes.
 
The Grimage cluster was originally dedicated to supporting the Grimage VR platform: handling hardware (cameras, etc.) and processing data (video captures, etc.).

More recently, 10GE Ethernet cards were added to some nodes for a new project, making the cluster a mutualized, multi-project platform. Currently, at least 4 projects use the cluster, which requires a resource management system and a deployment system adapted to an experimental platform, just like Grid'5000.

Grimage nodes have big computer cases (4U), so that they can host various kinds of hardware.
 
; By design, the hardware configuration of the Grimage nodes is subject to change:
* new generations of video (GPU) cards may be installed over time
* 10GE network connections may change
* ...
 
; The current 10GE network setup is as follows:
* One Myricom dual-port card is installed on each of grimage-{4,5,7,8}
* One Intel dual-port card is installed on each of grimage-{2,5,6,7}

Connections are point-to-point (NIC to NIC, no switch), as follows:
* Myricom: grimage-7 <-> grimage-8 <-> grimage-4 <-> grimage-5
* Intel: grimage-2 <=> grimage-5 and grimage-6 <=> grimage-7 (double links)
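A quick way to check that one of these point-to-point links is alive is to ping the peer over the corresponding interface. This is only a sketch: the interface name and the peer's address are assumptions that depend on the actual node configuration.

```shell
# On grimage-7, assuming the Myricom port facing grimage-8 is eth2
# and the peer's address on that link is 192.168.10.8 (both hypothetical):
ip addr show eth2
ping -c 3 -I eth2 192.168.10.8
```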
 
== How to experiment ==

The default system of the grimage nodes is designed to operate the Grimage VR room.

Using kadeploy is required to adapt the system to other needs (if the default system is not sufficient).
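A typical deployment on reserved nodes looks like the following sketch; the environment name is an assumption (the list of available environments can be checked with kaenv3).

```shell
# Deploy an alternative environment on the nodes of the current OAR job,
# and copy your SSH key to the deployed system (-k):
kadeploy3 -e wheezy-x64-base -f $OAR_NODE_FILE -k
```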
 
=== Privileged commands ===

Currently, the following commands can be run via sudo in exclusive jobs:
* sudo /usr/bin/whoami (provided for testing the mechanism, should return "root")
* sudo /sbin/reboot
* sudo /usr/bin/schedtool
* sudo /usr/bin/nvidia-smi
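For instance, from within an exclusive job on a node, the mechanism can be exercised as follows (a sketch; the schedtool arguments and the PID are hypothetical examples):

```shell
# Should print "root" if the sudo mechanism works:
sudo /usr/bin/whoami

# Query the state of the node's GPUs:
sudo /usr/bin/nvidia-smi

# Example: give process 1234 the FIFO real-time policy at priority 50 (hypothetical PID):
sudo /usr/bin/schedtool -F -p 50 1234
```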
 
=== What is x2x and how to use it ===

This tip is useful for people who have to work in the Grimage room, with a screen attached to a Grimage machine.

x2x lets you control the mouse pointer and keyboard input of a remote machine over the network (X11 protocol).
In the case of the Grimage nodes which have a screen attached, it is very practical because it avoids using the USB mouse and keyboard, which are sometimes buggy (because of the out-of-spec USB cable extensions).

To use x2x:
# log in locally on the machine (gdm)
# run <code class="command">xhost +</code> to allow remote X connections.
# from your workstation, run:
 ssh pneyron@grimage-1.grenoble.g5k -X x2x -to grimage-1:0 -west

; NB
* replace pneyron by your username
* replace the 2 occurrences of grimage-1 by the name of the Grimage node you actually use.
* make sure your ssh configuration is set up so that the *.g5k trick works (see the tip above)
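For reference, the *.g5k ssh trick is usually implemented with a ProxyCommand stanza in ~/.ssh/config on your workstation, similar to the sketch below; the access host and login are assumptions, so check the Grid'5000 ssh documentation for the exact recipe.

```
# Hypothetical sketch: jump through the Grid'5000 access host,
# stripping the .g5k suffix to get the real hostname.
Host *.g5k
  User mylogin
  ProxyCommand ssh mylogin@access.grid5000.fr -W "$(basename %h .g5k):%p"
```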
 
== System changelog ==

To be filled in.
 

Current revision as of 11:48, 2 September 2015
