The project technologies will be co-designed, demonstrated, and evaluated using six use case applications, developed and used by the project participants, together with third-party applications. The use case applications have been carefully selected to showcase the features that ADMIRE will provide. All of them are in use at PRACE centres or Centres of Excellence (CoEs). Below we provide a short description of each use case application.

  • Application 1: Monitoring and Modelling Marine, weather and Air quality is an on-line, real-time environmental monitoring and forecast service focused on marine, weather and air quality simulations in, but not limited to, the Campania Region. 
    The Meteo computational workflow is supported by dedicated HPC resources orchestrated by a workflow engine performing the following steps: 1) initial data acquisition; 2) data preparation and pre-processing; 3) execution of the WRF model; 4) results processing and preparation for other models: wind-driven wave forecast (WaveWatch III, WW3), wind-driven coastal sea currents (Regional Ocean Modeling System, ROMS), and air quality prediction through Chimere; 5) feeding of the output of these general-purpose models into models for specific applications: a model for extreme-weather coastal flood forecast (SOB, using WW3 output), a Lagrangian model for pollutant transport and diffusion in nearshore and offshore areas (WaComM, using ROMS output), a model for high-resolution wind field forecast (CALMET/CALPUFF, using WRF output), and a model for predicting the impact of emissions from arson (SMOKE, using CALMET/CALPUFF output). The Meteo workflow routinely produces high-resolution forecasts (WRF/Chimere: 1 km, ROMS/WW3: 200 m, CALMET/CALPUFF: 100 m) with a heterogeneous data footprint of about 0.6 TB per workflow run and 6 TB of near-permanent monthly storage. ADMIRE techniques for ad-hoc in-memory storage and data-aware scheduling will help to reduce storage access time, data movements, and the pressure on the back-end file systems, both for workflow-related computation and for operational data retrieval, access and analysis. This application will be ported by project partner CINI/University of Naples “Parthenope”, which has extensive experience in environmental research and supports a Weather Research and Forecasting (WRF)-centred workflow in production at the Center for Monitoring and Modelling Marine and Atmosphere applications.
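The dependency structure of the steps above can be sketched as a small dependency graph; this is an illustrative sketch only (the task names mirror the model names in the text, the graph keys are assumptions, and the real workflow engine is of course a dedicated HPC orchestrator, not this):

```python
# Illustrative sketch, not ADMIRE/Meteo code: the workflow steps expressed as a
# dependency graph and linearised with a topological sort. Each key lists the
# tasks that must complete before it may run.
from graphlib import TopologicalSorter

METEO_WORKFLOW = {
    "acquire": set(),                # 1) initial data acquisition
    "preprocess": {"acquire"},       # 2) data preparation and pre-processing
    "wrf": {"preprocess"},           # 3) run the WRF model
    "ww3": {"wrf"},                  # 4) wind-driven wave forecast
    "roms": {"wrf"},                 #    coastal sea currents
    "chimere": {"wrf"},              #    air quality prediction
    "sob": {"ww3"},                  # 5) coastal flood forecast
    "wacomm": {"roms"},              #    pollutant transport/diffusion
    "calmet_calpuff": {"wrf"},       #    high-resolution wind field
    "smoke": {"calmet_calpuff"},     #    arson emission impact
}

def run_order(workflow):
    """Return one valid execution order respecting all data dependencies."""
    return list(TopologicalSorter(workflow).static_order())
```

Any schedule returned by `run_order` respects the data flow described above, which is exactly the information a data-aware scheduler can exploit to place intermediate products in ad-hoc storage.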

  • Application 2: Car-Parrinello molecular dynamics simulation of large molecules and small proteins makes it possible to accurately describe complex electronic interactions (e.g. in proteins with metals), improving drug design and the understanding of metabolomic processes. Car-Parrinello simulations are ab-initio density functional theory simulations that consume huge amounts of system resources, usually scaling to hundreds or thousands of nodes, and run for months (usually through a long sequence of checkpoints and restarts, to meet the workload management policies of queuing systems and to tolerate node/system failures). The code used to run this kind of simulation at CINECA is Quantum ESPRESSO, which is already being ported to future exascale systems within the MaX Centre of Excellence. This application suffers from I/O bottlenecks, essentially due to parallel file system performance during checkpoint cycles. The goal is to use the ADMIRE technology to decouple application checkpointing from the actual writing of the data to the file system (if any), so that there is no need to save every checkpoint to the back-end file system.
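The decoupling idea can be illustrated with a minimal sketch (assumed API, not Quantum ESPRESSO or ADMIRE code): the solver hands each checkpoint to an in-memory staging area and continues computing immediately, while a background worker decides asynchronously whether to flush it to the parallel file system at all:

```python
# Minimal sketch of checkpoint/write decoupling. The solver's checkpoint() call
# returns as soon as the state is staged in memory; flushing to the back-end
# file system (or dropping the checkpoint) happens off the critical path.
import queue
import threading

class AsyncCheckpointer:
    def __init__(self, sink):
        self._q = queue.Queue()
        self._sink = sink            # e.g. flush-to-PFS callable, or a no-op
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def checkpoint(self, state):
        """Called by the solver: copies state into memory, no file-system wait."""
        self._q.put(dict(state))

    def _drain(self):
        while True:
            state = self._q.get()
            if state is None:        # shutdown sentinel
                break
            self._sink(state)        # write (or discard) in the background

    def close(self):
        """Drain remaining checkpoints and stop the worker."""
        self._q.put(None)
        self._worker.join()
```

With a no-op `sink`, checkpoints never touch the back-end file system at all, which is the limit case described in the text.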

  • Application 3: Simulation of large-scale turbulent flow. Execution of this simulator on the Beskow supercomputer at KTH produces an enormous amount of data: up to 1 TB per time step, with each simulation running for more than 100,000 time steps. The sheer size and complexity of the simulation data overwhelm current I/O and storage systems, as well as post-processing data analysis tools, which leads to under-utilised data and slows the scientific discovery process. Saving only selected time steps is not a solution, since the time evolution of turbulent structures is of crucial importance. In-situ analysis and visualisation methods are required that are able to cope with all the data produced by the turbulent flow simulator. ADMIRE techniques for in-situ processing and data-aware scheduling will enhance application performance by avoiding data movement and saving energy. This use case is collaboratively driven with others by ADMIRE partner KTH.
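A toy sketch of the in-situ idea (illustrative only; the statistic, array shape, and function names are assumptions, not the project's actual analysis): instead of writing the full ~1 TB field at every step, a small per-step reduction is computed where the data lives, so the time evolution is retained at a tiny fraction of the I/O cost:

```python
# Illustrative in-situ reduction: consume velocity snapshots one by one and
# keep only a scalar statistic (mean kinetic energy) per time step, rather
# than persisting every raw field to storage.
import numpy as np

def insitu_kinetic_energy(velocity_fields):
    """Yield the mean kinetic energy of each (nx, ny, nz, 3) velocity field."""
    for u in velocity_fields:
        # 0.5 * |u|^2 averaged over the whole domain, computed in place
        yield 0.5 * float(np.mean(np.sum(u * u, axis=-1)))
```

Real in-situ pipelines would compute richer quantities (spectra, structure identification, visualisation frames), but the access pattern is the same: analyse each step as it is produced, never move the raw field.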

  • Application 4: Continental-scale land cover mapping with scalable and automatic deep learning frameworks. Modern Earth observation programmes have an open data policy and provide massive volumes of free multi-sensor remote sensing data every day. Their availability enables continuous monitoring of the surface of the Earth at high temporal resolution. Deep learning approaches represent the best solution for updating land-cover maps (important products for commercial and environmental monitoring and planning), thanks to their ability to generate accurate classification results. Their hierarchical architecture, composed of stacked repetitive operations, enables the extraction of informative features from raw pixel data and the modelling of the high-level semantic content of remote sensing images. FZJ will establish automatic and scalable deep learning frameworks to keep continental-scale land cover maps regularly updated. The frameworks will rely on up-to-date, multi-sensor time series of Earth observation multispectral data available at global level and at high temporal resolution. In particular, the dense time series of multispectral Sentinel-2 and Landsat 8 images will be widely exploited. The outcomes of the ADMIRE project will give the proposed deep learning frameworks fast access across all storage levels. This will permit fast I/O for deep learning model checkpoints and efficient re-use of data (e.g. dense time series residing in back-end file systems) for training, testing and validation tasks, with either newly acquired Earth observation data or newly designed deep learning models (with new parameter setups). Furthermore, the novel capabilities offered by the \project{} software stack will be tested on different tasks, such as neural architecture search, a process that requires large amounts of HPC capacity to search for suitable parameter settings. This use case is collaboratively driven with others by ADMIRE partner FZJ.
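The data re-use pattern behind parameter/architecture search can be sketched as follows (hypothetical names, no real framework assumed): the dense time series is loaded once and then shared across every training configuration, which is exactly the repeated-access workload that fast multi-level storage accelerates:

```python
# Hypothetical sketch of a hyper-parameter search re-using one loaded dataset.
# load_time_series and train are placeholders for the expensive data ingest
# and a single training run, respectively.
import itertools

def parameter_search(load_time_series, train, grid):
    """Load the time series once, then train on every parameter combination."""
    data = load_time_series()                 # expensive read: done exactly once
    results = {}
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        results[tuple(sorted(params.items()))] = train(data, **params)
    return results
```

The point of the sketch is the access pattern: one ingest, many re-reads, so keeping the time series close to the compute (rather than re-fetching it from the back-end file system per configuration) dominates end-to-end performance.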

  • Application 5: Super-resolution imaging using Opera microscopy and SRRF/ImageJ software. The use case intensively pursued at the Institute of Bioorganic Chemistry is the imaging of disease models, such as organoids and brains, to reveal the populations of cells involved in the pathogenesis of polyQ neurodegenerative diseases. This challenge is part of the LifeTime initiative in the FET Flagship programme, which aims to develop technologies that enable the study of disease mechanisms at the resolution of individual cells, with specific tracking of cell populations and subpopulations in various organs. The imaging needs to reveal the data at high resolution, which can be achieved using Super-Resolution Radial Fluctuations (SRRF) techniques applied to conventional fluorescence images, taking advantage of AI and machine learning. The SRRF technique requires the acquisition of a large amount of data and its subsequent processing to obtain the super-resolution image. On average, the acquisition of 100 images is required to generate a single super-resolution image. The amount of data needed to produce one image can reach several GB, while a full analysis of the samples from one experiment can reach hundreds of TB. Execution of the super-resolution imaging will use the \project{} infrastructure to allow the machine learning algorithms to run efficiently on large-scale HPC resources, and its technologies to manage data in large heterogeneous systems. This use case is collaboratively driven with others by ADMIRE partner PSNC.
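The many-frames-to-one-image data reduction can be illustrated with a greatly simplified sketch. This is emphatically not the real SRRF algorithm (which analyses radial symmetry gradients before the temporal step); like SRRF, though, it collapses a stack of ~100 conventional frames into a single output image by analysing per-pixel temporal fluctuations, here reduced to the variance across the stack:

```python
# Greatly simplified stand-in for SRRF's temporal analysis: pixels whose
# intensity fluctuates across the acquired frames (blinking emitters) come out
# bright; static background comes out dark.
import numpy as np

def fluctuation_image(frames):
    """frames: (n_frames, h, w) acquisition stack -> one (h, w) fluctuation map."""
    stack = np.asarray(frames, dtype=float)
    return stack.var(axis=0)
```

The sketch also shows why the I/O footprint is dominated by input: ~100 frames are read for every single image written, so fast staging of the raw stacks matters more than output bandwidth.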

  • Application 6: Software Heritage Management & Indexing. Software Heritage (SH) is a non-profit, multi-stakeholder initiative launched by Inria in partnership with UNESCO, with the ambition to collect, preserve, and share all software that is publicly available in source code form. On this foundation, a wealth of applications can be built, ranging from cultural heritage to industry and research. Given that any software component may turn out to be essential in the future, all software that is publicly available in source code form is collected. The origin of the software is archived along with its full development history: this precious meta-information is harvested and structured for future use. Beyond its primary mission, which is preservation, SH represents a novel, huge and dynamic data set carrying information on open source software, which is nowadays the cornerstone of the software industry. SH today hosts over six billion files related to about 100 million different projects, exceeding 400 TB (and constantly growing), with over 2 TB of metadata in the form of a (giant) hash tree representing the releases and revisions of all files, and thus the relations between all versions of all projects. These data can be turned into valuable knowledge: copyright and license violations, error propagation, successful programming patterns, the evolution of coding paradigms and languages, and, crucially, the meaning of algorithms, which is a prerequisite for trust in analytics and machine learning. Since the SH data set is composed of very small files (median size 3 KB), its management is genuinely challenging for any high-performance storage subsystem, which is typically optimised to manage large blocks. Moreover, since the data are organised along a hash tree, accesses exhibit hardly any spatial/temporal locality, impairing cache performance.
ADMIRE ad-hoc storage systems, malleability, and data-aware scheduling will be applied to address the two main challenges in this application: 1) analysing metadata (the hash tree) and data (files) using HPDA software; 2) keeping multiple copies of SH coherent (e.g. OLTP and OLAP, local and remote) across dataset updates, while ensuring the efficiency and robustness of the reconciliation process. This use case is collaboratively driven with others by ADMIRE partners CINI and INRIA.
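A minimal sketch of content-addressed storage in the spirit of the SH archive (illustrative only; the class and method names are assumptions): objects are keyed by a hash of their content, so identical files across projects are stored once, but the keys are effectively random, which is why accesses show the poor spatial/temporal locality noted above:

```python
# Content-addressed object store: put() returns a hash-based identifier,
# and identical contents deduplicate by construction.
import hashlib

class ContentStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha1(data).hexdigest()   # identifier derived from content
        self._objects.setdefault(key, data)    # same content -> same key, stored once
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]
```

Because neighbouring source files hash to unrelated keys, walking a project's tree touches objects scattered across the whole key space, a workload that large-block-optimised HPC storage handles badly without the kind of ad-hoc storage layer ADMIRE proposes.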