Connecting to a Cluster

To connect to a cluster, SSH to one of the following login nodes:

Bellatrix: gaspar_username@bellatrix.epfl.ch

Castor: gaspar_username@castor.epfl.ch

Deneb: gaspar_username@deneb1.epfl.ch or gaspar_username@deneb2.epfl.ch

You need to connect using your GASPAR username, which may not be the same as the username on your local machine:

ssh gaspar_username@castor.epfl.ch
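
If you connect often, you can store your GASPAR username in your SSH configuration so that a plain ssh castor.epfl.ch works. A minimal ~/.ssh/config sketch, where jdoe is a placeholder for your own GASPAR username:

# ~/.ssh/config -- jdoe stands in for your GASPAR username
Host bellatrix.epfl.ch castor.epfl.ch deneb1.epfl.ch deneb2.epfl.ch
    User jdoe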

If you have problems configuring SSH, please contact your local computer support team.

What is installed on the clusters?

To see the list of installed software (modules), do

module spider
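
module spider also accepts a name to search for a specific package and show what is needed to load it. For example (gcc is illustrative here; the exact module names and versions depend on the cluster):

module spider gcc          # list the available GCC versions
module spider gcc/7.4.0    # hypothetical version: shows how to load it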

Getting the examples

Once you have logged in to the machine, we suggest you download the examples with the command:

git clone https://c4science.ch/diffusion/SCEXAMPLES/scitas-examples.git

Open Source or proprietary?

On our systems, software modules are compiled and installed either with the Intel Compiler and Intel MPI, or with GCC and MVAPICH2; these are the only supported compiler/MPI combinations.
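
In practice this means loading one of the two toolchains before compiling your own code. A sketch, assuming the modules are named gcc and mvapich2 (use module spider to check the exact names and versions on your cluster):

module load gcc mvapich2    # load the GCC + MVAPICH2 toolchain
mpicc -o hello hello.c      # compile an MPI program against the loaded stack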

Running the examples

Enter the directory scitas-examples and choose the example to run by navigating the folders. We have three categories of examples: Basic (examples to get you started), Advanced (including hybrid jobs and job arrays) and Modules (specific examples of installed software).
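
For example, to reach the hybrid HPL example used below (the folder layout shown is a guess; use ls to see the actual structure):

cd scitas-examples
ls                # the category folders, e.g. Basic, Advanced, Modules
cd Advanced/HPL   # hypothetical path to the hybrid HPL example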

To run an example (here: hybrid HPL), do

sbatch --partition=debug hpl-hybrid.run

or, if you do not wish to run on the debug partition,

sbatch hpl-hybrid.run
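
Once submitted, the job can be monitored with standard Slurm commands; by default Slurm writes the job's output to slurm-<jobid>.out in the submission directory:

squeue -u $USER          # list your pending and running jobs
cat slurm-<jobid>.out    # inspect the output; sbatch prints the job ID on submission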

Running interactive jobs

An interactive job allows you to connect directly to a compute node and type commands that run on that node. Simply type the command Sinteract on the login node to start an interactive session with 1 core and 4 GB of memory for 30 minutes.

You can pass parameters to Sinteract (for help, type Sinteract -h) to request more resources or more time.

usage: Sinteract [-c cores] [-t time] [-m memory] [-p partition]

options:
  -c cores    (default: 1)
  -t time    as hh:mm:ss (default: 00:30:00)
  -m memory    as #[K|M|G] (default: 4G)
  -p partition    (default: serial)

For example, to allocate 16 cores for one hour using 32 GB of memory on a debug node:

Sinteract -c 16 -t 01:00:00 -m 32G -p debug
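
Once the allocation is granted you get a shell prompt on the compute node; anything you type there runs on that node. A short sketch of a session:

Sinteract -p debug
hostname    # prints the compute node's name, not the login node's
exit        # end the session and release the allocation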

Contact

Technical
1234@epfl.ch
Tel: +41 (0) 21 693 12 34

HPC Coordination
hpc@epfl.ch
Tel: +41 (0) 21 693 14 05