Laboratory Numerical Modeling
Hydraulics Laboratory Advanced Computing Center
The Federal Highway Administration’s (FHWA’s) J. Sterling Jones Hydraulics Research Laboratory performs its numerical modeling on the Transportation Research and Analysis Computing Center (TRACC) cluster at Argonne National Laboratory (ANL). The Laboratory accesses the cluster remotely over the high-speed Internet2 (I2) network under a collaborative agreement with ANL. Since the collaboration began in 2008, computational fluid dynamics (CFD) modeling for research studies has exceeded physical testing in the Laboratory, and hundreds of numerical experiments run around the clock on the ANL/TRACC clusters to address highway hydraulics issues. From 2017 to 2024, CFD modeling is expected to grow to about 70 percent of the Laboratory’s workload, while experimental work declines to about 30 percent.
Figure 1. Hydraulics Lab Advanced Computing Center connected to ANL/TRACC.
Argonne National Laboratory (ANL) Transportation Research and Analysis Computing Center
The Transportation Research and Analysis Computing Center (TRACC) was established in October 2006 through a grant under the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). The technical objectives of this original grant included the establishment of a high-performance computing center for use by U.S. Department of Transportation (USDOT) research teams, including those from ANL and university partners. These advanced computing and visualization facilities are used to conduct focused computational research and development programs in areas specified by USDOT. As part of the original project, a supercomputing user facility was established, with full operations beginning in February 2008.
The set of research and development activities identified by USDOT included:
- Computational structural mechanics and methods for analysis and optimum design of safety structures and analysis of transportation-related infrastructure.
- Computational fluid dynamics for hydraulics and aerodynamics research.
- Traffic modeling and simulation and emergency transportation planning.
- Multidimensional data visualization.
These transportation research and demonstration projects focused on (1) the exchange of research results with USDOT and the private sector and (2) collaboration with universities to foster and encourage technology transfer.
The essential resources for transportation infrastructure research and analysis at TRACC include the high-performance computing (HPC) clusters and expert staff in the areas of computational fluid dynamics (CFD) and computational structural mechanics (CSM). At present, TRACC has two HPC clusters. The five-year-old Phoenix cluster is a customized system consisting of 1024 cores on 128 compute nodes, each with two quad-core AMD Opteron CPUs and 8 GB of RAM; a DataDirect Networks storage system consisting of 180 TB of shared RAID storage; a high-bandwidth, low-latency InfiniBand network for computations; and a high-bandwidth gigabit Ethernet management network. The system also includes high-performance compilers and message passing interface (MPI) parallel processing libraries. Peak performance is about 4 teraflops.
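The MPI libraries noted above provide the distributed-memory parallelism that lets a single simulation be spread across many compute nodes. The snippet below is a minimal sketch of that programming model, not TRACC-specific code: it assumes a Python environment with the mpi4py bindings and NumPy (an assumption; production CFD runs on the clusters use the compiled MPI libraries directly through codes such as STAR-CCM+). Each rank computes a partial sum of a simple integral and the root rank combines the results.

```python
# Minimal illustration of MPI-style distributed-memory parallelism.
# Sketch only; assumes mpi4py and NumPy are installed. Run with the
# cluster's MPI launcher, e.g.:  mpiexec -n 8 python mpi_sum.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index
size = comm.Get_size()   # total number of processes

# Split a trapezoidal integration of sin(x) on [0, pi] across the ranks,
# mirroring how a CFD solver decomposes its domain among processes.
n = 1_000_000
x = np.linspace(0.0, np.pi, n + 1)
dx = x[1] - x[0]
my_intervals = np.array_split(np.arange(n), size)[rank]
local = np.sum(0.5 * (np.sin(x[my_intervals]) + np.sin(x[my_intervals + 1])) * dx)

# Reduce the partial sums onto rank 0, analogous to the inter-node
# data exchange a parallel solver performs each iteration.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"integral of sin(x) on [0, pi] with {size} ranks: {total:.6f}")
```

Regardless of how many ranks are used, the reduced result should approach 2, which is the point of the sketch: the work is divided, but the combined answer is unchanged.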
The new HPC Zephyr cluster became available for use in October 2012. Zephyr is a cluster with 92 compute nodes, each with two 2.3-GHz AMD Opteron 6273 (Interlagos) CPUs providing 16 integer cores and 8 floating-point cores per CPU. Most of the nodes (88) have 32 GB of RAM; two nodes have 64 GB of RAM, and two nodes have 128 GB of RAM. Zephyr has a 40-Gb/s InfiniBand interconnect between nodes, twice the speed of the InfiniBand interconnect on the older Phoenix cluster. Zephyr also has a high-performance Lustre-based file system with 120 TB of formatted capacity. All nodes run an up-to-date CentOS Linux 6. TRACC researchers have developed methodologies and software applications to more easily run CFD and CSM software (STAR-CCM+ and LS-DYNA) on TRACC’s systems. The TRACC team has also held workshops and classes to train TRACC users in the use of these software packages for analysis of transportation infrastructure problems.
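To give a flavor of the kind of helper tooling described above, the sketch below generates and submits a batch job for a STAR-CCM+ run. It is not the actual TRACC tooling: the scheduler (PBS/Torque and its qsub command), the resource syntax, the walltime, and the file names are assumptions made for illustration, and the STAR-CCM+ flags shown (-batch, -np, -machinefile) are commonly used batch options that should be checked against the installed version.

```python
# Illustrative sketch only: writes and submits a hypothetical PBS/Torque
# job script for a STAR-CCM+ batch run. Queue, walltime, and file names
# are placeholders, not actual TRACC configuration.
import subprocess
import textwrap


def write_job_script(sim_file, nodes=4, cores_per_node=32,
                     walltime="24:00:00", path="run_starccm.pbs"):
    """Write a PBS job script that launches STAR-CCM+ across `nodes` nodes."""
    total_cores = nodes * cores_per_node
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #PBS -N starccm_run
        #PBS -l nodes={nodes}:ppn={cores_per_node}
        #PBS -l walltime={walltime}
        cd $PBS_O_WORKDIR
        # -batch runs without the GUI; -np sets the process count;
        # $PBS_NODEFILE lists the nodes assigned by the scheduler.
        starccm+ -batch -np {total_cores} -machinefile $PBS_NODEFILE {sim_file}
        """)
    with open(path, "w") as f:
        f.write(script)
    return path


if __name__ == "__main__":
    # "culvert_scour.sim" is a placeholder simulation file name.
    job = write_job_script("culvert_scour.sim", nodes=4, cores_per_node=32)
    subprocess.run(["qsub", job], check=True)  # hand the job to the scheduler
```

Wrapping the script generation in a small function like this is one way such site tooling typically hides scheduler details from users, who then only supply the simulation file and a node count.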
Figure 2. The photo shows the Zephyr (left) and Phoenix (right) computer clusters at the Argonne National Laboratory. Together, the two clusters provide 3,968 cores on 220 compute nodes, along with DataDirect Networks storage systems and high-bandwidth networks.