Laboratory Numerical Modeling

Hydraulics Laboratory Advanced Computing Center

Numerical modeling for the Federal Highway Administration's (FHWA's) J. Sterling Jones Hydraulics Research Laboratory is performed on the Transportation Research and Analysis Computing Center (TRACC) clusters at the Argonne National Laboratory (ANL). The Laboratory accesses the clusters remotely over the high-speed Internet2 (I2) network and works in close collaboration with TRACC staff. Since the collaboration began in 2008, computational fluid dynamics (CFD) modeling has exceeded physical testing in the Laboratory's research studies, and hundreds of numerical experiments run around the clock on the ANL/TRACC clusters to solve highway hydraulics problems. From 2017 to 2024, the Laboratory expects CFD modeling to grow to 70 percent of its work while experimental work decreases to 30 percent.

 "The top image shows the Hydraulics Lab’s advanced computing center—researchers are working at computers on a shared table. The bottom image is an exterior shot of the Transportation Research and Analysis Computing Center at the Argonne National Laboratory. An arrow and text between the images represents the high-speed I2 connection between the two computing centers."
Figure 1. Hydraulics Lab Advanced Computing Center connected to ANL/TRACC.
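
In practice, this remote workflow amounts to connecting from the Laboratory to a TRACC login node over I2 and submitting batch jobs there. Below is a minimal sketch of that pattern in Python using the paramiko SSH library; the hostname, username, key path, and submission command are hypothetical placeholders, not actual TRACC endpoints or site policy.

```python
"""Sketch: submit a batch job to a remote HPC cluster over SSH.

The hostname, username, key path, and qsub command are illustrative
placeholders -- not actual TRACC endpoints or site policy.
"""
import paramiko


def submit_remote_job(host: str, user: str, key_file: str, script: str) -> str:
    """Connect to the cluster's login node and submit a job script."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, key_filename=key_file)
    try:
        # Assume a PBS-style scheduler; 'qsub' prints the new job's ID.
        stdin, stdout, stderr = client.exec_command(f"qsub {script}")
        job_id = stdout.read().decode().strip()
        err = stderr.read().decode().strip()
        if err:
            raise RuntimeError(f"submission failed: {err}")
        return job_id
    finally:
        client.close()


if __name__ == "__main__":
    # Hypothetical values, for illustration only.
    print(submit_remote_job("login.cluster.example.gov", "researcher",
                            "~/.ssh/id_rsa", "run_cfd.pbs"))
```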

Argonne National Laboratory (ANL) Transportation Research and Analysis Computing Center

The Transportation Research and Analysis Computing Center (TRACC) was established in October 2006 through a grant under the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). The technical objectives of this original grant included establishing a high-performance computing center for use by U.S. Department of Transportation (USDOT) research teams, including those from ANL and university partners. These advanced computing and visualization facilities are used to conduct focused computational research and development programs in areas specified by the USDOT. As part of the original project, a supercomputing user facility was established, with full operations beginning in February 2008.

The set of research and development activities identified by USDOT included:

  1. Computational structural mechanics and methods for analysis and optimum design of safety structures and analysis of transportation-related infrastructure.
  2. Computational fluid dynamics for hydraulics and aerodynamics research.
  3. Traffic modeling and simulation and emergency transportation planning.
  4. Multidimensional data visualization.

These transportation research and demonstration projects focused on (1) the exchange of research results with USDOT and the private sector and (2) collaboration with universities to foster and encourage technology transfer.

The essential resources for transportation infrastructure research and analysis at TRACC are its high-performance computing (HPC) clusters and expert staff in computational fluid dynamics (CFD) and computational structural mechanics (CSM). At present, TRACC has two HPC clusters. The five-year-old Phoenix cluster is a customized system consisting of 1,024 cores on 128 compute nodes, each with two quad-core AMD Opteron CPUs and 8 GB of RAM; a DataDirect Networks storage system with 180 TB of shared RAID storage; a high-bandwidth, low-latency InfiniBand network for computations; and a high-bandwidth gigabit Ethernet management network. The system also includes high-performance compilers and message passing interface (MPI) parallel processing libraries. Peak performance is about 4 teraflops.
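
Simulation codes exploit those cores by running one MPI process per core and exchanging data over the InfiniBand fabric. The sketch below illustrates the basic pattern, partitioning work by rank and then combining partial results with a reduction, using the mpi4py bindings; mpi4py is an illustrative choice here, since production CFD and CSM codes link against compiled MPI libraries directly.

```python
"""Sketch: the basic MPI work-partitioning pattern, shown with mpi4py.

Run with, e.g.:  mpiexec -n 8 python mpi_sketch.py
(mpi4py is illustrative; production CFD/CSM codes use compiled MPI.)
"""
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID, 0 .. size-1
size = comm.Get_size()   # total number of processes

# Each rank integrates its own strided slice of [0, 1) -- a stand-in
# for owning one partition of a CFD mesh.
n = 1_000_000
local = sum((i / n) ** 2 / n for i in range(rank, n, size))

# Combine the partial results on rank 0 over the interconnect.
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"integral of x^2 on [0,1) ~ {total:.6f} using {size} ranks")
```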

The new HPC Zephyr cluster became available for use in October 2012. Zephyr has 92 compute nodes, each with two 2.3-GHz AMD Opteron 6273 (Interlagos) CPUs; each CPU provides 16 integer cores and 8 floating-point cores. Most of the nodes (88) have 32 GB of RAM; two nodes have 64 GB, and two nodes have 128 GB. Zephyr has a 40-Gb/s InfiniBand interconnect between nodes, twice the speed of the InfiniBand interconnect on the older Phoenix cluster. Zephyr also has a high-performance Lustre-based file system with 120 TB of formatted capacity. All nodes run an up-to-date CentOS Linux 6. TRACC researchers have developed methodologies and software applications to more easily run CFD and CSM software (STAR-CCM+ and LS-DYNA) on TRACC's systems, and the TRACC team has held workshops and classes to train users in applying these packages to transportation infrastructure problems.
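
Day to day, running STAR-CCM+ or LS-DYNA on such a cluster revolves around scheduler job scripts. The sketch below generates a PBS-style submission script for a STAR-CCM+ batch run; the module name and resource defaults are assumptions for illustration rather than TRACC's actual configuration, although STAR-CCM+'s -batch and -np flags are genuine.

```python
"""Sketch: generate a PBS-style job script for a STAR-CCM+ batch run.

Module name and resource defaults are hypothetical placeholders;
consult the cluster's own documentation for real settings.
"""
TEMPLATE = """#!/bin/bash
#PBS -N {name}
#PBS -l nodes={nodes}:ppn={ppn}
#PBS -l walltime={walltime}

cd $PBS_O_WORKDIR
module load starccm+        # hypothetical module name

# STAR-CCM+ batch mode: -np sets the total MPI process count.
starccm+ -batch {sim_file} -np {np}
"""


def make_job_script(name: str, sim_file: str, nodes: int = 4,
                    ppn: int = 32, walltime: str = "24:00:00") -> str:
    """Fill in the template for a run on nodes * ppn cores."""
    return TEMPLATE.format(name=name, nodes=nodes, ppn=ppn,
                           walltime=walltime, np=nodes * ppn)


if __name__ == "__main__":
    # ppn=32 matches Zephyr's 32 integer cores per node.
    with open("run_cfd.pbs", "w") as f:
        f.write(make_job_script("culvert_cfd", "culvert.sim"))
```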

"Zephyr (left) and Phoenix (right) computer clusters are shown in seven black cabinets. The clusters are located at the Argonne National Laboratory (ANL) in Lemont, Illinois."
Figure 2. The photo shows the Zephyr (left) and Phoenix (right) computer clusters at the Argonne National Laboratory. Both clusters consist of 3,968 total cores on 220 total compute nodes with 32 cores each, a DataDirect Networks storage system, and a high-bandwidth network.