Technical infrastructure:

The repository functions on well-supported operating systems and other core infrastructural software and is using hardware and software technologies appropriate to the services it provides to its Designated Community.


The ESPRI infrastructure relies on standard, tried-and-tested equipment and facilities. ESPRI manages its own computing and storage infrastructure: the supply and operation of hardware and the installation and maintenance of software are handled exclusively by ESPRI staff.

ESPRI currently hosts 7 PB of storage for both data holdings and user spaces, spread across some 15 storage arrays whose logical volumes are protected by Disk Pool systems (an evolution of RAID). Each disk array is controlled by two file servers to ensure redundant access.

ESPRI storage uses a Lustre file system, which allows:

  • Disk space to be aggregated into unified logical volumes while ensuring very high access performance (see the sketch below)
  • Storage to be extended without loss of efficiency
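
As a purely illustrative sketch (the file path is hypothetical, not an actual ESPRI mount point), the standard Lustre client tool lfs can report how a file is striped across object storage targets (OSTs), which is the mechanism behind both the aggregation into unified volumes and the parallel access performance:

    # Hypothetical example: inspect the Lustre striping of a file with the
    # standard `lfs` client tool. The path is illustrative only.
    import subprocess

    path = "/lustre/example/large_dataset.nc"  # hypothetical file on a Lustre mount
    result = subprocess.run(
        ["lfs", "getstripe", path],  # prints the file's stripe layout
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # stripe count and size show how the file spans OSTs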

Data processing is provided by:

  • 2 CPU clusters with 2,500 computing cores and 10 TB of RAM
  • 1 GPU cluster of 8 cards, representing 35,000 NVIDIA CUDA cores

ESPRI operates a virtualization platform based on VMware and a Kubernetes containerization platform for highly resilient deployment of data services across its sites.
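
As a minimal, hypothetical sketch (the service name, container image, namespace and labels are illustrative assumptions, not ESPRI's actual manifests), the official Kubernetes Python client can declare a deployment whose replicas are spread across sites via a topology constraint, so that the loss of one site leaves the service available elsewhere:

    # Hypothetical sketch: spread a data-service deployment across sites.
    # All names and the image tag are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when run in-cluster

    container = client.V1Container(
        name="thredds",
        image="unidata/thredds-docker:5.4",  # illustrative image
        ports=[client.V1ContainerPort(container_port=8080)],
    )

    # Keep replicas evenly distributed over the nodes of each site (here
    # modelled as Kubernetes topology zones), so one site can fail safely.
    spread = client.V1TopologySpreadConstraint(
        max_skew=1,
        topology_key="topology.kubernetes.io/zone",
        when_unsatisfiable="DoNotSchedule",
        label_selector=client.V1LabelSelector(match_labels={"app": "thredds"}),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="thredds-catalog"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "thredds"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "thredds"}),
                spec=client.V1PodSpec(
                    containers=[container],
                    topology_spread_constraints=[spread],
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="data-services", body=deployment,
    )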

ESPRI develops its infrastructure within the framework of the national RIs described in R0 and the structuring projects in which they are involved. In this context, ESPRI is one of the 8 “backbone” computing and storage centers of the GAIA-DATA national platform which aims to develop a grid of Earth System services and data at the national level. In addition, ESPRI is the main node providing regional and global climate projections to the Copernicus Climate Change Service (C3S). Through these projects, ESPRI’s storage and computing infrastructure is continuously being strengthened and improved, as explained in the IPSL Digital Strategy Plan.

In order to fulfill this role as a thematic computing and data center, ESPRI provides users with community-based suites and environments (open source where possible) adapted to code development and to the exploitation of scientific climate data. These include compilers (GNU, Intel, PGI), libraries (netCDF, HDF5, OpenMPI, etc.), analysis and visualization software (MATLAB, IDL, Ferret, R, etc.) and preconfigured environments (Python, CliMAF, etc.). These tools, software packages and environments are managed in collaboration with expert users from the different fields of the IPSL communities (modeling, observation, ocean, atmosphere, astronomy, etc.).
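
As a brief illustration of the kind of analysis these preconfigured environments support (the file name and variable are hypothetical), a Python environment with NumPy and xarray can compute an area-weighted global-mean temperature series in a few lines:

    # Hypothetical example: global annual-mean near-surface air temperature.
    # The file name and variable name are illustrative, CMIP-style choices.
    import numpy as np
    import xarray as xr

    ds = xr.open_dataset("tas_Amon_example.nc")  # hypothetical netCDF file
    tas = ds["tas"]                              # near-surface air temperature

    # Weight each grid cell by cos(latitude) before averaging over the globe.
    weights = np.cos(np.deg2rad(tas["lat"]))
    global_mean = tas.weighted(weights).mean(dim=("lat", "lon"))

    # Reduce the monthly series to annual means.
    annual = global_mean.groupby("time.year").mean()
    print(annual.to_series().head())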

The main software packages are listed in the ESPRI computing center documentation (please refer to the link below). The list is not exhaustive, but gives a fairly comprehensive overview of the available software.

All servers use Ubuntu 20.04 (Extended Security Maintenance until 2030) as their operating system. This uniformity makes analyses easy to replicate within a complete, reproducible environment (Jupyter, GitLab).

Data services are based on international standard protocols (OGC, HTTPS, FTP, GridFTP, SSH), software widely used in the community (THREDDS, GeoNetwork, GeoServer, PyWPS) and standard supporting software (PostgreSQL, MariaDB, Apache Tomcat/Solr, Elasticsearch).
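
For example (the catalogue URL below is illustrative, not an actual ESPRI endpoint), a THREDDS catalogue can be browsed with the Siphon library and a dataset opened remotely over OPeNDAP, so that only the requested subset of data crosses the network:

    # Hypothetical example: browse a THREDDS catalogue and open a dataset
    # over OPeNDAP. The catalogue URL is illustrative only.
    from siphon.catalog import TDSCatalog
    import xarray as xr

    cat = TDSCatalog("https://thredds.example.org/thredds/catalog/climate/catalog.xml")
    name = sorted(cat.datasets)[0]                           # pick one dataset by name
    opendap_url = cat.datasets[name].access_urls["OPENDAP"]

    ds = xr.open_dataset(opendap_url)  # lazy: values are fetched on demand
    print(ds)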

Software codes are managed throughout their life cycle, from development to distribution and debugging, on the GitLab platform hosted by the Computing Center of the National Institute of Nuclear and Particle Physics (CC-IN2P3).

To combine massive, high-performance storage and computing resources distributed over three geographical sites, ESPRI uses high-speed network connections provided by the French National Research and Education Network (RENATER) through several layers of equipment and services:

  • Each site accesses the Internet via a shared 10 Gbps network, allowing both user logins and high-speed data distribution and download.
  • Each cluster has a dedicated internal high-speed InfiniBand network, enabling low-latency exchanges with the file servers.
  • Secure 10 Gbps RENATER interconnections have been set up between the ESPRI sites via a Layer 3 Virtual Private Network (L3VPN) for redundant file system and service sharing.

By default, the ESPRI data centre offers services on a “best effort” basis: ESPRI guarantees to do its utmost to keep systems available, but without quantified targets for system uptime, availability or durability. It is in ESPRI’s interest, and a matter of its reputation, to restore systems as quickly as possible and to communicate transparently on how any failure will be managed and anticipated in the future. However, ESPRI is capable of deploying operational services with strict key performance indicators (KPIs), as is the case for access to the climate projections provided to the Climate Data Store of the Copernicus Climate Change Service. The Operational Level Agreement (OLA) signed jointly with ECMWF is available as an example (see relevant links).

All equipment is covered by a 5 to 7-year warranty and is renewed before the warranty expires. Thanks to the geographical distribution of human and material resources, the multi-site implementation of the ESPRI infrastructure supports a business continuity plan for the most critical services and a disaster recovery plan in the event of a major breakdown.