The new EVN/JIVE software correlator was installed in February/March 2010 and took over all correlation in December 2012. The original cluster has been extended several times and currently handles approximately 20 stations at 2 Gbit/s in real-time e-VLBI mode and many more for traditional disk-based VLBI. Here is some documentation on how this amazing machine works and was put together.
A very basic User's Manual for SFXC can be found here.
We kindly request users of SFXC to reference the paper that describes its algorithms and implementation: "The SFXC software correlator for very long baseline interferometry: algorithms and implementation", A. Keimpema et al., Experimental Astronomy, Volume 39, Issue 2, pp. 259-279.
The SFXC software correlator is distributed under the terms of the GNU General Public License (GPL). We provide read-only access to the SFXC SVN code repository at https://svn.astron.nl/sfxc. The current production release is taken from the stable-5.1 branch, which can be checked out using:
svn checkout https://svn.astron.nl/sfxc/branches/stable-5.1
In principle this branch will only receive bug fixes. Development of new features happens on the trunk, which can be checked out using:
svn checkout https://svn.astron.nl/sfxc/trunk
Using that version is not recommended unless you feel like being a guinea pig for testing the bugs we introduce while adding all the cool new stuff.
Building SFXC should be as easy as:
cd sfxc
./compile.sh
./configure CXX=mpicxx
make
make install
This will install SFXC in /usr/local, which typically requires super-user privileges and may not be what you want. You can specify a different location by using the --prefix option when running configure.
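For example, installing into a directory under your own home directory avoids the need for root privileges (the prefix path below is only an illustration; pick any writable location):

```shell
# Configure SFXC to install under $HOME/opt/sfxc instead of /usr/local.
# The path is an example -- adjust it to taste.
./configure CXX=mpicxx --prefix=$HOME/opt/sfxc
make
make install
```

Remember to add the corresponding bin/ directory to your PATH if you install to a non-standard prefix.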
SFXC depends on a number of software packages that are often not installed on a typical Linux system. The more exotic pieces of software are:
These packages can often be installed using the package management tools that come with your Linux distribution. The names of the packages differ between distributions; below you will find detailed information for some popular ones.
As there are some doubts whether the CALC10 software used by SFXC is 64-bit safe, the model generation tools are compiled in 32-bit mode. This means that if you are using a 64-bit (x86_64) Linux system, you will have to install 32-bit versions of the C++ and Fortran compilers and their support libraries. The packages that provide the 32-bit compilation environment on 64-bit systems are also included below.
SFXC can optionally use the Intel Performance Primitives (IPP) library. See the --enable-ipp and --with-ipp-path configure options.
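As a sketch, enabling IPP support might look like the following; the IPP installation path shown here is an assumption and will differ per system:

```shell
# Enable IPP and point configure at the IPP installation directory.
# /opt/intel/ipp is only an example location -- use wherever IPP is
# actually installed on your system.
./configure CXX=mpicxx --enable-ipp --with-ipp-path=/opt/intel/ipp
```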
Probably also applies to Debian itself and to other Debian derivatives such as Mint.
On these systems you need the following packages:
If you are building on a 64-bit (x86_64) system, you will also need:
Probably also applies to other RedHat-derivatives such as RedHat Enterprise Linux, Fedora, CentOS.
On these systems you need the following packages:
If you are building on a 64-bit (x86_64) system, you will also need:
You will also need to tell the system that you want to use the OpenMPI MPI implementation:
module load openmpi-x86_64
SFXC comes with a couple of GUI tools to visualize the correlation results. These tools need the Python VEX parser module that can be found in the vex/ top-level subdirectory. This module uses a standard Python distutils setup.py, which means something like:
cd vex
python setup.py build
python setup.py install
should be sufficient. The last command will probably require root privileges; setup.py offers a couple of alternative installation methods that avoid this. More information on the VEX parser is provided in vex/README.
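One such alternative is distutils' standard --user option, which installs the module into your per-user site-packages directory and therefore needs no root privileges:

```shell
cd vex
python setup.py build
# Install into the per-user site-packages (e.g. under ~/.local);
# no root privileges required.
python setup.py install --user
```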
The GUI itself needs the Python Lex-Yacc (PLY) and PyQwt packages.
No Python Lex-Yacc and PyQwt packages are provided by this Linux distribution. Sorry, you're on your own!
To convert the SFXC correlator output into FITS-IDI, additional tools are needed. Information on how to obtain and build these tools is available at https://code.jive.eu/verkout/jive-casa.
The cluster currently consists of eleven Transtec Calleo 642 servers, each containing four nodes with a mix of dual quad-core and dual octa-core CPUs, for a grand total of 512 cores. The nodes are interconnected by QDR Infiniband (40 Gb/s) and are also connected to a dedicated ethernet switch with dual 1 Gb/s ethernet links or a single 10 Gb/s ethernet link per node. The 23 Mark5s at JIVE are connected to the same network at 10 Gb/s in order to play back diskpacks for correlation. The 5 FlexBuffs are integrated in the cluster as well and use the same QDR Infiniband network as the nodes. There is a 36-port Infiniband switch, another 24-port Infiniband switch, and a head-node for central administration and NFS-exported home directories.
Each Calleo 642 server (rebranded SuperMicro SC-827) contains four independent computers (nodes). The main features of each node:
The head-node is a Calleo 141 1U server, featuring:
All cluster nodes are connected via QDR Infiniband to a QLogic-12200 36-port unmanaged switch and a QLogic-12300 18/36-port managed switch. The two switches are connected together using a trunk of two links.
The central switch is an HP5412zl modular switch, offering 72 ports at 1Gb/s.
There is an additional 48-port Allied Telesis switch for connecting the IPMI ports.