The ARC cluster
The ARC is equipped with a dedicated computer server connected to the outside world through a high-speed optical-fiber network, allowing fast data transfer (1 Gbit/sec in 2009; 10 Gbit/sec by 2012). Presently we have 40 TB of disk space and one 12-unit blade cluster (96 cores) dedicated to the ARC.
ALMA and CASA users can request to access the server and the disk space by sending an e-mail to help-desk at alma.inaf.it indicating the reason for the request.
User policy
ARC users can access the Italian ARC node computing facilities by requesting a face-to-face visit (ALMA users only, through the ALMA Helpdesk) or by visiting the ARC node in Bologna (for any data-reduction-related issue to be solved in collaboration with the ARC staff). In both cases, users are requested to send an e-mail to help-desk@alma.inaf.it indicating the reason for the request.
Please note that the request for a new account implies that the requesting user (and/or his/her collaborators) visits the ARC for an induction on the use of the ARC facilities and on issues related to data reduction with CASA, both for ALMA and for any other telescope. If the request is approved, the visit details will be arranged via e-mail.
The account guarantees use of the facilities and support for 6 months. Every user can use 1 TB of disk space. Once the account expires, ALL DATA WILL BE REMOVED. Extensions of the account duration can be considered on request (via e-mail sent in advance of the account expiration date). No visit is needed for an account renewal.
Support from the ARC members is guaranteed for any ALMA-related issue. For data-reduction issues that do not involve ALMA, support (other than technical support in the use of the ARC computing facilities) is limited by the knowledge, experience and availability of the ARC members.
Upon explicit request, IRA staff members can have an account with unlimited duration. IRA collaborators with temporary positions can have an account for the entire duration of their position.
To ensure a well-balanced load on the cluster nodes, please follow the instructions below on accessing the computer cluster.
Accessing the computer cluster
Once you have obtained an ARC account at IRA, you can access the computer cluster nodes from anywhere through the host scheduler.ira.inaf.it.
Graphical applications can be used on the cluster through remote X access. The working nodes are arcbl01 ... arcbl13. Never submit workloads to arcserv (the control node) or arcnas1 (the storage node), as this can slow down the entire cluster.
Access is handled by a Torque/Maui scheduler that redirects your job to the least-loaded node.
Jobs on the cluster are limited to a duration of 168 hours.
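Assuming the standard Torque client commands are available on scheduler.ira.inaf.it (a reasonable assumption given the Torque/Maui setup described above, though not stated explicitly here), you can inspect the queue and the node status from the command line:

# list all jobs currently known to the scheduler
qstat -a
# show the state of the compute nodes
pbsnodes -a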
Here you can monitor the status of the Torque scheduler and the number of active jobs.
Here you can find some statistics about resource consumption on the arcblXX nodes.
Executing programs
You can execute programs in two ways:
in interactive mode - your command is executed immediately on the least-loaded node, and standard input, output and error are linked to your terminal. You can enter a node for interactive work by typing:
ssh -tX scheduler.ira.inaf.it
Useful tip: by typing 'hostname' you can see which node you are on.
or by scheduling a PBS job - submit a job file (here is a guide); a minimal sketch of a job file is shown below.
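As a minimal sketch of such a job file (the job name, resource request and script name below are hypothetical placeholders; adapt them to your workload and keep the walltime within the 168-hour limit):

#!/bin/bash
#PBS -N example_job           # job name (placeholder)
#PBS -l nodes=1:ppn=4         # request one node and four cores
#PBS -l walltime=24:00:00     # wall-clock limit, within the 168-hour cap
#PBS -j oe                    # merge standard output and standard error

cd $PBS_O_WORKDIR             # start from the directory the job was submitted from
./my_analysis.sh              # placeholder for your actual command

Submit it from scheduler.ira.inaf.it with:

qsub myjob.pbs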
Copying files to and from the ARC storage
You can copy files to and from the cluster from outside with scp via storage.alma.inaf.it:
# from storage...
scp user@storage.alma.inaf.it:/remote/path /local/path
# ... and to storage
scp /local/path user@storage.alma.inaf.it:/remote/path
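For large or interrupted transfers, rsync over ssh can also be used instead of scp (a sketch, assuming rsync is installed on both ends; the paths are placeholders):

# resumable, verbose copy from storage to a local directory
rsync -avP user@storage.alma.inaf.it:/remote/path/ /local/path/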
Mounting the ARC storage on your workstation
On IRA workstations the ARC home filesystem can be accessed at /iranet/homesarc.
On your laptop the ARC filesystems can be accessed seamlessly with fuse-sshfs:
as root, install the package sshfs
# on RedHat/CentOS/Scientific Linux
yum install fuse-sshfs
# on Debian/Ubuntu
apt-get install sshfs
then, as user
sshfs storage.alma.inaf.it:/remote/path /your/local/mount/point/
By omitting /remote/path you mount your home directory, e.g.:
sshfs storage.alma.inaf.it: /your/local/mount/point/
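To detach the filesystem when you are done, use the standard fuse unmount command (not specific to the ARC setup):

fusermount -u /your/local/mount/point/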
Be aware that this method is suboptimal for heavy input/output loads. Running disk-intensive applications directly on the ARC cluster gives file access speeds 10-50 times faster.
Software packages available
The software available on the ARC cluster can be listed by typing the command setup-help.
Software package | setup command | launch command | notes
---|---|---|---
CASA | casapy-setup | casapy | data reduction package
Miriad | miriad-setup | miriad | data reduction package
AIPS | aips-setup | |
analysis utils | analysisUtils-setup | |
analytic infall | analytic_infall-setup | |
astron | astron-setup | |
Coyote library | coyote-setup | |
fits Viewer | fv-setup | |
GCC Compiler | gcc-setup | |
Gildas | gildas-setup | |
Healpix | healpix-setup | |
IDL | idl-setup | |
JRE | jre-setup | |
QA2 | qa2-setup | |
Ratran | ratran-setup | |
Starlink | starlink-setup | |
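For example, following the table above, a typical session first runs the setup command and then the launch command (shown here for CASA; packages without a listed launch command only need the setup step):

# load the CASA environment
casapy-setup
# start CASA
casapy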