Connecting to a Cluster

To connect to a cluster, please SSH to:

Deneb: or

You need to connect using your GASPAR username, which may not be the same as the username on your local machine:


If you have problems configuring SSH, please contact your local computer support team.
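If your local username differs from your GASPAR one, you can either pass the GASPAR username on the command line or record it in your SSH configuration. A minimal sketch follows; the hostname `deneb1.epfl.ch` and the username `jdoe` are placeholders, not values from this page:

```shell
# Connect explicitly as your GASPAR user (hostname is a placeholder):
ssh jdoe@deneb1.epfl.ch

# Or add an entry to ~/.ssh/config so that plain "ssh deneb" works:
#
#   Host deneb
#       HostName deneb1.epfl.ch
#       User jdoe
```

With the `~/.ssh/config` entry in place, the GASPAR username is supplied automatically for every connection to that host.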


What is installed on the clusters

To see the list of installed software (modules), do

module spider
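`module spider` can also be given a package name to search for it and show how to load it. A typical session might look like the following; the package names are illustrative, not a statement of what is installed:

```shell
module spider            # list all available software modules
module spider python     # search for a specific package (name is illustrative)
module load gcc python   # load a package together with its compiler toolchain
module list              # show the modules currently loaded in your session
```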

Getting the examples

Once you have logged in to the machine, we suggest you download the examples with the command:

git clone

Open Source or proprietary?

On our systems, software modules are compiled and installed either with the Intel Compiler and Intel MPI or with GCC and MVAPICH2; these are the only supported compiler/MPI combinations.
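In practice this means loading one of the two toolchains before compiling or running MPI software. A sketch of the two combinations follows; the exact module names are assumptions and may differ on the clusters (check with `module spider`):

```shell
# Supported combination 1: Intel Compiler + Intel MPI
# (module names are assumptions -- verify with "module spider")
module load intel intel-mpi

# Supported combination 2: GCC + MVAPICH2
module load gcc mvapich2
```

Load exactly one combination per job; mixing compilers and MPI implementations across toolchains is not supported.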

Running the examples

Enter the directory scitas-examples and choose the example to run by navigating the folders. We have three categories of examples: Basic (examples to get you started), Advanced (including hybrid jobs and job arrays) and Modules (specific examples of installed software).

To run an example (here: hybrid HPL), do

sbatch --partition=debug

or, if you do not wish to run on the debug partition,


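Whichever partition you choose, the examples are submitted as Slurm batch scripts. A minimal script of the kind `sbatch` expects looks roughly like this; the resource values, module names, and the executable name `my_app` are illustrative placeholders, not taken from the examples repository:

```shell
#!/bin/bash
#SBATCH --nodes=1            # number of nodes (illustrative)
#SBATCH --ntasks=16          # number of MPI ranks
#SBATCH --time=00:30:00      # wall-clock limit as hh:mm:ss
#SBATCH --mem=32G            # memory per node

# Load one supported toolchain (module names are assumptions):
module purge
module load intel intel-mpi

# srun launches the MPI ranks; my_app is a placeholder executable.
srun ./my_app
```

Submitting the script with `sbatch script.sh` queues the job; `squeue` shows its state while it waits and runs.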
Running interactive jobs

An interactive job allows you to connect directly to a compute node and type commands that run on it. Simply type the command Sinteract from the login node to start an interactive session with 1 core and 4 GB of memory for 30 minutes.

You can use the parameters of Sinteract (for help, type Sinteract -h) to request more resources or more time.

usage: Sinteract [-c cores] [-t time] [-m memory] [-p partition]

  -c cores       (default: 1)
  -t time        as hh:mm:ss (default: 00:30:00)
  -m memory      as #[K|M|G] (default: 4G)
  -p partition   (default: serial)

 e.g. to allocate 16 cores for one hour using 32 GB of memory on a debug node:

 Sinteract -c 16 -t 01:00:00 -m 32G -p debug

