====== The EVN Software Correlator at JIVE (SFXC) ======
The new EVN/JIVE software correlator was installed in February/… and can correlate stations at 2 Gbit/s in real-time e-VLBI mode, and many more for traditional disk-based VLBI. Here's some documentation on how this amazing machine works and was put together.
=== Usage ===
A very basic User's Manual for SFXC can be found [[sfxc-guide|here]].

We kindly request users of SFXC to reference our paper that describes its algorithm and implementation: [[http://adsabs.harvard.edu/abs/2015ExA....39..259K|The SFXC software correlator for very long baseline interferometry: algorithms and implementation]].
=== SFXC software installation ===
The SFXC software correlator is distributed under the terms of the GNU General Public License (GPL).
We provide read-only access to the SFXC SVN code repository at [[https://…]]:

  svn checkout https://…

In principle this branch will only receive bug fixes.
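
As a rough illustration, a checkout and build could look like the sketch below. The repository URL is truncated on this page, so a placeholder is used, and a conventional autotools-style build is assumed; the README that comes with the checkout is the authoritative reference.

<code bash>
# Sketch only: placeholder repository URL and an assumed autotools-style build.
svn checkout https://<repository-url>/sfxc/branches/<branch> sfxc
cd sfxc
autoreconf -i                          # regenerate configure when building from a fresh checkout
./configure --prefix=$HOME/opt/sfxc    # the install prefix is just an example
make -j4
make install
</code>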
This Linux distribution does not provide Python Lex-Yacc (PLY) or PyQwt packages.
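
Where these packages are missing they can usually be installed by hand; the commands below are only a sketch (PLY is available from PyPI, while PyQwt typically has to be built from source against PyQt and the Qwt library).

<code bash>
# Sketch only: install Python Lex-Yacc (PLY) from PyPI for the current user.
pip install --user ply
# PyQwt is generally not available as a ready-made PyPI package; it is usually
# built from source against PyQt and the Qwt library, or taken from a
# third-party package repository.
</code>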
=== Post-processing software ===

To convert the SFXC correlator output into FITS-IDI, additional tools are needed.
=== Cluster Description ===
The cluster currently consists of eleven Transtec Calleo 642 servers, each containing four dual-CPU nodes with a mix of quad-core and octa-core Xeon CPUs. The nodes are interconnected by QDR Infiniband (40 Gb/s) and are also connected to a dedicated Ethernet switch with dual 1 Gb/s Ethernet links per node. The 23 Mark5s at JIVE are connected to the same network at 1 Gb/s in order to play back diskpacks for correlation. There is a 36-port Infiniband switch, another …
== Cluster Nodes ==
  * 24 GB DDR-3 memory
  * Two 1 TB disks (Seagate Barracuda ES.2 SATA-2 7200rpm)
  * Dual 1 Gb/s Ethernet (Intel 82576)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * 24 GB DDR-3 memory
  * Two 2 TB disks (Seagate Constellation ES SATA-2 7200rpm)
  * Dual 1 Gb/s Ethernet (Intel 82574L)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * Dual Intel E5-2670 octa-core Xeon CPUs (2.60 GHz, 20 MB cache)
  * 64 GB DDR-3 memory
  * One 60 GB SSD (Intel 520 Series)
  * Dual 1 Gb/s Ethernet (Intel I350)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * Dual Intel E5-2630 v3 octa-core Xeon CPUs (2.40 GHz, 20 MB cache)
  * 64 GB DDR-3 memory
  * One 60 GB SSD (Intel 520 Series)
  * Dual 10 Gb/s Ethernet (Intel X540-AT2)
  * QLogic IBA7322 QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management
== Output Node ==

  * Dual Intel E5-2630 v2 hexa-core Xeon CPUs (2.60 GHz, 15 MB cache)
  * 32 GB DDR-3 memory
  * Four 3 TB disks (Seagate Constellation ES.3 SATA 6 Gb/s 7200rpm)
  * Dual 10 Gb/s Ethernet (Intel X540-AT2)
  * QLogic IBA7322 QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management
== Head Node ==
  * 2x J8712A 875W power supply
There is an additional 48-port Allied Telesis switch for connecting the IPMI ports.