
General FAQ

Why can't I connect to the clusters from home?

You can, but to do so you need to go through the EPFL VPN service. See http://network.epfl.ch/vpn for how to use this service.

Users preferring a command line tool might also wish to consider the tremplin SSH proxy tunnel service. You can find the Linux and Windows procedure here.
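
As a rough sketch, assuming the gateway is tremplin.epfl.ch and taking bellatrix.epfl.ch as an example frontend (follow the linked procedure for the exact settings), a recent OpenSSH client can jump through the proxy in a single command:

ssh -J <gaspar username>@tremplin.epfl.ch <gaspar username>@bellatrix.epfl.ch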

What's the maximum run time of a job?

The maximum wall time allowed depends on the cluster - please see the more cluster specific FAQs below.

In the case of scheduled maintenance operations, any job you have submitted will not start before the end of the maintenance period.

Where is my scratch space?

/scratch/<user name> - e.g. /scratch/jmenu

Can you recover an important file that was on my scratch area?

NO. /scratch is not backed up, so the file is gone forever. Please note that we automatically delete files on scratch to prevent it from filling up!

I've deleted a file on /home or /work - How can I recover it?

If it was deleted in the last seven days then you can use the daily snapshots to get it back. These can be found at:

  • /home/.snapshots/<date>/<username>/

  • /work/.snapshots/<date>/<laboratory or group>/

e.g. /home/.snapshots/2015-11-11/bob/

The home filesystem is backed up onto tape, so if the file was deleted more than a week ago we may still be able to help. The /work filesystem is not backed up by default.
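
For example, to restore a file for the hypothetical user bob from the snapshot of 11 November 2015 (results.dat is a placeholder file name):

ls /home/.snapshots/
cp /home/.snapshots/2015-11-11/bob/results.dat /home/bob/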

How do I submit a job that requires a run time of more than three days? 

Users who have not paid are limited to 24 hours. 

If your group has purchased Premium then please contact your local computing co-ordinator who will in turn contact SCITAS if necessary.

Can I submit array jobs and, if so, how?

Yes, with the --array directive to sbatch. See http://slurm.schedmd.com/job_array.html for the official documentation and our scitas-examples git repository for several examples.
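
As a minimal sketch (the program name and input files are placeholders), a ten-task array job script could look like this:

#!/bin/bash
#SBATCH --array=0-9
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# each task receives its own index in SLURM_ARRAY_TASK_ID
srun ./my_program input_${SLURM_ARRAY_TASK_ID}.dat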

Is it safe to share nodes with other users?

Yes!  We use cgroups to limit the amount of CPU and memory assigned to users. There is no way for users to adversely affect each other.

Is there a debug queue?

Not as such. SLURM has no concept of queues; instead, there is a debug partition which gives priority access for debugging:

sbatch --partition debug myjobscript

The limits on the debug partition vary by cluster but in general the maximum run time is 30 minutes to one hour and users are only allowed one job at a time. Interactive jobs are allowed.
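
Interactive use goes through the standard SLURM commands; for example, a 30-minute interactive session on the debug partition can be requested with:

salloc --partition debug --time 00:30:00 --ntasks 1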

I have premium and I have run on the debug partition. Do I have to pay for debug time?

No. Debug time is free of charge.

What is a <job id>?

It's the unique numerical identifier of a job and is given when you submit the job:

[eroche@castor jobs]$ sbatch s1.job
Submitted batch job 400

It can also be seen using squeue:

[eroche@castor jobs]$ squeue
 JOBID PARTITION   NAME   USER ST    TIME NODES NODELIST(REASON)
   400    serial s1.job eroche  R INVALID     1 c03

How do I display scratch quota and usage information?

There are no quotas on scratch; instead, files older than 2 weeks may be deleted without notice when the file systems fill up. You can, however, see the scratch usage for Aries, Bellatrix and Deneb using the fsu command:

fsu /scratch

The scratch usage information for Castor can be found here, or by executing the following command on Castor:

df -h /scratch

How do I display quota and usage information for the /home and /work file systems?

A. /home: to get user quota and file system usage for your group members, use the following command:

fsu -q /home

You can also see an overview of /home usage and quota here.

B. /work: to get group quota and file system usage for your group members, use the following command:

fsu -q /work

You can also see an overview of /work file system usage and quota here.

Why do I get the error "module: command not found"?

This is because you have tcsh as your login shell and the environment isn't propagated to the compute nodes.

In order to fix the issue please change the first line of your job script as follows:


#!/bin/tcsh -l

The -l option tells tcsh to launch a login shell, which correctly sources the files in /etc/profile.d/.

Which options should I use to link with the Intel MKL?

Ask the Intel Math Kernel Library Link Line Advisor.

If you use the Intel compilers then you can pass the -mkl flag which will do the hard work for you.
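
For example, assuming the Intel compiler module is loaded (solver.c is a placeholder source file), the flag can be passed directly:

icc -O2 solver.c -o solver -mkl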

Why am I asked for a password while sshing from the frontend to a node?

Once logged in to a frontend of a cluster, you can ssh directly to the node(s) running your job(s). You can avoid being asked for your Gaspar password again by creating a passwordless SSH key.

Run the following commands only once on any of the clusters:

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub localhost

Which MPI flavours are supported on the clusters?

SCITAS supports Intel MPI and MVAPICH2.

What compilers/MPI combination do you support?

SCITAS supports the Intel compilers with Intel MPI (fully proprietary), the GCC compilers with MVAPICH2 (fully free), or the GCC compilers with OpenMPI. Other combinations are not supported.
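
As an illustration, a build with the fully free combination could look like this (the module names are indicative; check module avail on your cluster):

module load gcc mvapich2
mpicc -O2 hello.c -o hello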

Why do some system tools stop working after the intel module is loaded?

The Intel compiler includes its own versions of many libraries, and those take precedence over the system ones. Sometimes these libraries do not include symbols needed by certain system tools, which then stop working. Examples include git, rsync, etc.

If a module exists providing the same tool one can just load that module.

If no module exists you will have to module unload intel before using the command and module load intel afterwards (any modules which depend on intel will simply become inactive and will be restored automatically).
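
For example, to use git while the intel module is loaded:

module unload intel
git pull
module load intel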

Cluster / Partition specific FAQ

Bellatrix

How many nodes are there in Bellatrix?

There are 424 compute nodes and one login node.

What are the characteristics of a node in Bellatrix?

Bellatrix nodes have two 8-core Intel Xeon E5-2660 processors running at 2.2 GHz. The nodes have 32 GB of memory and are interconnected with QDR InfiniBand.

How do I submit a job that requires a run time of more than three days? 

If your group has purchased Premium then please contact your local computing co-ordinator who will in turn contact SCITAS if necessary.



Castor

How many nodes are there in Castor?

There are 52 compute nodes and one login node.

What are the characteristics of a node in Castor?

Castor nodes have two 8-core Intel E5-2650 processors running at 2.6 GHz. 50 of the nodes have 64 GB of memory and two have 256 GB.

Can I run my MPI job on Castor?

As long as it stays within a node then you are free to use MPI. Inter-node MPI is not the goal of this cluster. Jobs that request more than one node will be refused by the scheduler.

Why doesn't Castor have Infiniband?

Castor was, from the very beginning, intended to run serial codes. As such, a low latency interconnect serves no purpose and would add cost and maintenance problems. The storage runs over 10 Gigabit Ethernet, which is more than sufficient.

What's the maximum run time of a job on Castor?

If you have a free account it's 24 hours. For premium accounts it's 3 days but you can ask to run for longer by contacting us and explaining why you need to run for more than 3 days.

How do I submit a job that requires a run time of more than three days? 

Premium accounts that have been granted permission to do so can add the "--qos=week" flag to ask for up to 7 days.
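
For example, a job script asking for the full seven days could contain:

#SBATCH --qos=week
#SBATCH --time=7-00:00:00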

How do I use one of the nodes with 256GB of memory?

Specify the amount of memory required with "--mem <quantity in MB>" either on the command line or in your job script.
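
For example, requesting 200000 MB (roughly 200 GB) ensures the job is placed on one of the two 256 GB nodes:

#SBATCH --mem=200000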

Deneb

Why do I need to ask for access to the GPU nodes?

In order to use the GPU nodes we request that you submit a description of the code you wish to use and the expected performance benefits. You will then be invited to meet our application and GPU experts to discuss your proposal. This is to ensure that your code makes the best possible use of the resources and that you understand the features and limitations of the nodes. Non-paying access is limited to a maximum run time of 12 hours and one task at a time.

How do I submit jobs to the GPU nodes?

If you have been granted "free" access then you need to pass the options "--partition=gpu --qos=gpu_free --gres=gpu:X" to sbatch, where X is the number of GPUs per node required.
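
For example, to request two GPUs per node (mygpujob.sh is a placeholder script name):

sbatch --partition=gpu --qos=gpu_free --gres=gpu:2 mygpujob.sh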

How do I use one of the nodes with more than 64GB of memory?

Specify the amount of memory required with "--mem <quantity in MB>" either on the command line or in your job script.

How do I ask to use a specific processor type (Ivy Bridge, 16 cores or Haswell, 24 cores)?

For Ivy Bridge please give the option "--constraint=E5v2" and for Haswell "--constraint=E5v3". If you do not specify a constraint the job may run on either, but a multi-node job will never span both architectures.
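
For example, to restrict a job to the Haswell nodes, either on the command line:

sbatch --constraint=E5v3 myjobscript

or inside the job script:

#SBATCH --constraint=E5v3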

Fidis

How many nodes are there in Fidis and what are their characteristics ?

There are 336 nodes with 128 GB of memory and 72 nodes with 256 GB of memory.

Each node has two Intel Broadwell processors running at 2.6 GHz. Each processor has 14 cores, which makes 28 cores per node. An 800 GB local SSD disk (/tmp) makes local checkpoints very fast.

All 408 nodes are interconnected with an FDR InfiniBand network.
