The EVN Software Correlator at JIVE (SFXC)
The new EVN/JIVE software correlator was installed in February/March 2010 and took over all correlation in December 2012. The original cluster has been extended several times and currently handles approximately 20 stations at 2 Gbit/s in real-time e-VLBI mode and many more for traditional disk-based VLBI. Here's some documentation on how this amazing machine works and how it was put together.
Usage
A very basic User's Manual for SFXC can be found here.
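As a rough sketch of what running the correlator looks like (the User's Manual is the authoritative reference), a correlation job is started through MPI with a correlator control file and the experiment's VEX file as arguments; the file names and process count below are just placeholders:
mpirun -np 16 sfxc experiment.ctrl experiment.vex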
We kindly request users of SFXC to reference our paper that describes its algorithms and implementation: The SFXC software correlator for very long baseline interferometry: algorithms and implementation, A. Keimpema et al., Experimental Astronomy, Volume 39, Issue 2, pp. 259-279.
SFXC software installation
The SFXC software correlator is distributed under the terms of the GNU General Public License (GPL). The SFXC Git repository can be found at https://code.jive.eu/JIVE/sfxc.
The repository can be downloaded using:
git clone https://code.jive.eu/JIVE/sfxc.git
The current production release is taken from the stable-5.1 branch, which can be checked out using:
git checkout stable-5.1
In principle this branch will only receive bug fixes. Development of new features happens on the master branch, which can be checked out using:
git checkout master
Using that version is not recommended unless you feel like being a guinea pig for testing the bugs we introduce while adding all the new cool stuff.
Building SFXC should be as easy as:
cd sfxc
./compile.sh
./configure CXX=mpicxx
make
make install
This will install SFXC in /usr/local, which typically requires super-user privileges and may not be what you want. You can specify a different location by using the --prefix option when running configure.
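For example, to install SFXC under your own home directory instead (the path below is just an illustration):
./configure CXX=mpicxx --prefix=$HOME/sfxc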
SFXC depends on a number of software packages that are often not installed on a typical Linux system. The more exotic pieces of software are:
- OpenMPI
- GNU Fortran (GFortran)
- GNU Scientific Library (GSL)
- FFTW
These packages can often be installed by using the package management tools that come with your Linux distribution. The names of the packages differ between distributions. Below you will find detailed information for some popular Linux distributions.
As there are some doubts about whether the CALC10 software used by SFXC is 64-bit safe, the model generation tools are compiled in 32-bit mode. This means that if you are using a 64-bit (x86_64) Linux system, you will have to install 32-bit versions of the C++ and Fortran compilers and their support libraries. The packages that provide the 32-bit compilation environment on 64-bit systems are also included below.
SFXC can optionally use the Intel Integrated Performance Primitives (IPP) library. See the --enable-ipp and --with-ipp-path configure options.
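For example, assuming IPP is installed under /opt/intel/ipp (an illustrative path; adjust it to your installation):
./configure CXX=mpicxx --enable-ipp --with-ipp-path=/opt/intel/ipp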
Ubuntu 12.04 LTS
Possibly also applies to Debian itself and to other Debian derivatives such as Mint.
On these systems you need the following packages:
- autoconf
- libtool
- bison
- flex
- libgsl0-dev
- libfftw3-dev
- libopenmpi-dev
- g++
- gfortran
If you are building on a 64-bit (x86_64) system, you will also need:
- g++-multilib
- gfortran-multilib
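On a 64-bit Ubuntu 12.04 system all of the above can typically be installed in one go (a sketch, assuming sudo access; package names as listed above):
sudo apt-get install autoconf libtool bison flex libgsl0-dev libfftw3-dev libopenmpi-dev g++ gfortran g++-multilib gfortran-multilib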
Scientific Linux 6.4
Probably also applies to Red Hat Enterprise Linux and other Red Hat derivatives such as Fedora and CentOS.
On these systems you need the following packages:
- autoconf
- automake
- bison
- fftw-devel
- flex
- gsl-devel
- gcc-c++
- libtool
- make
- openmpi-devel
If you are building on a 64-bit (x86_64) system, you will also need:
- glibc-devel.i686
- libgfortran.i686
- libgcc.i686
- libstdc++-devel.i686
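On a 64-bit Scientific Linux 6.4 system all of the above can typically be installed in one go (a sketch, assuming root access; package names as listed above):
yum install autoconf automake bison fftw-devel flex gsl-devel gcc-c++ libtool make openmpi-devel glibc-devel.i686 libgfortran.i686 libgcc.i686 libstdc++-devel.i686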
You will also need to tell the system that you want to use the OpenMPI MPI implementation:
module load openmpi-x86_64
GUI tools
SFXC comes with a couple of GUI tools to visualize the correlation results. These tools need the Python VEX parser module that can be found in the vex/ top-level subdirectory. This module uses a standard Python distutils setup.py, which means something like:
cd vex
python setup.py build
python setup.py install
should be sufficient. The last command will probably require root privileges; setup.py offers a couple of alternative installation methods that avoid this. More information on the VEX parser is provided in vex/README.
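For example, a per-user installation that avoids the need for root privileges (using the standard distutils --user option):
cd vex
python setup.py build
python setup.py install --user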
The GUI itself needs the Python Lex-Yacc (PLY) and PyQwt packages:
Ubuntu 12.04 LTS
- python-ply
- python-qwt5-qt4
Scientific Linux
No Python Lex-Yacc and PyQwt packages are provided by this Linux distribution. Sorry, you're on your own!
Post-processing software
To convert the SFXC correlator output into FITS-IDI, additional tools are needed. Information on how to obtain and build these tools is available at https://code.jive.eu/verkout/jive-casa.
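The repository can be cloned in the same way as SFXC itself:
git clone https://code.jive.eu/verkout/jive-casa.git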
Cluster Description
The cluster currently consists of eleven Transtec Calleo 642 servers, each containing four nodes with a mix of dual quad-core and dual octa-core CPUs, for a grand total of 512 cores. The nodes are interconnected by QDR Infiniband (40 Gb/s) and are also connected to a dedicated ethernet switch with dual 1 Gb/s ethernet links or a single 10 Gb/s ethernet link per node. The 23 Mark5s at JIVE are connected to the same network at 10 Gb/s in order to play back diskpacks for correlation. The 5 FlexBuffs are integrated in the cluster as well and use the same QDR Infiniband network as the nodes. There is a 36-port Infiniband switch, another 24-port Infiniband switch, and a head-node for central administration and NFS-exported home directories.
Cluster Nodes
Each Calleo 642 server (a rebranded SuperMicro SC-827) contains four independent computers (nodes). The nodes come in four hardware configurations; the main features of each are:
- Dual Intel E5520 quad-core Xeon CPUs (2.26 GHz, 8 MB cache)
- 24 GB DDR-3 memory
- Two 1 TB disks (Seagate Barracuda ES.2 SATA-2 7200rpm)
- Dual 1 Gb/s Ethernet (Intel 82576)
- Mellanox ConnectX QDR Infiniband (40 Gb/s)
- IPMI 2.0 management
- Dual Intel E5620 quad-core Xeon CPUs (2.40 GHz, 8 MB cache)
- 24 GB DDR-3 memory
- Two 2 TB disks (Seagate Constellation ES SATA-2 7200rpm)
- Dual 1 Gb/s Ethernet (Intel 82574L)
- Mellanox ConnectX QDR Infiniband (40 Gb/s)
- IPMI 2.0 management
- Dual Intel E5-2670 octa-core Xeon CPUs (2.60 GHz, 20 MB cache)
- 64 GB DDR-3 memory
- One 60 GB SSD (Intel 520 Series)
- Dual 1 Gb/s Ethernet (Intel I350)
- Mellanox ConnectX QDR Infiniband (40 Gb/s)
- IPMI 2.0 management
- Dual Intel E5-2630 v3 octa-core Xeon CPUs (2.40 GHz, 20 MB cache)
- 64 GB DDR-3 memory
- One 60 GB SSD (Intel 520 Series)
- Dual 10 Gb/s Ethernet (Intel X540-AT2)
- QLogic IBA7322 QDR Infiniband (40 Gb/s)
- IPMI 2.0 management
Output Node
- Dual Intel E5-2630 v2 hexa-core Xeon CPUs (2.60 GHz, 15 MB cache)
- 32 GB DDR-3 memory
- Four 3 TB disks (Seagate Constellation ES.3 SATA 6Gb/s 7200rpm)
- Dual 10 Gb/s Ethernet (Intel X540-AT2)
- QLogic IBA7322 QDR Infiniband (40 Gb/s)
- IPMI 2.0 management
Head Node
The head-node is a Calleo 141 1U server, featuring:
- Intel Xeon X3430 (2.4 GHz, 8 MB cache)
- 6 GB DDR-3 memory
- Two 1 TB disks (Seagate Barracuda ES.2 SATA-2 7200rpm)
- Dual 1 Gb/s ethernet
- IPMI 2.0 management
Infiniband
All cluster nodes are connected via QDR Infiniband to a QLogic 12200 36-port unmanaged QDR switch and a QLogic 12300 18/36-port managed QDR switch. The two switches are connected together using a trunk of two links.
Network
The central switch is an HP5412zl modular switch, offering 72 ports at 1Gb/s.
- 1x J8798A HP5412zl 12-slot chassis
- 3x J8702A 24 port module, 10/100/1000 Mb/s Ethernet (RJ-45)
- 2x J8712A 875W power supply
There is an additional 48-port Allied Telesis switch for connecting the IPMI ports.