Minisymposia

2nd High Performance Computing meets Quantum Computing (HPC+QC’25)

Quantum Computing (QC) exploits quantum physical phenomena, like superposition and entanglement. A mature quantum computer has the potential to solve some exceedingly difficult problems with moderate input sizes efficiently. Still, much work lies ahead before quantum computing can compete with traditional computer technologies or successfully integrate and complement them. From a software-only point of view, several promising algorithms for quantum systems have been developed over the past decades. In general, a paradigm is emerging in which quantum computers will not replace traditional supercomputers. Instead, they will become an integral part of supercomputing solutions, acting as an “accelerator”, i.e. specialized hardware that speeds up some parts of the application execution. In this respect, this hybrid HPC+QC approach is where real-world applications will find their quantum advantage. The goal of our proposal is to establish a regular minisymposium at PASC as a yearly venue where researchers and developers can discuss their experiences with application development using QC algorithms, specifically the integration of applications currently running on “traditional” HPC systems that aim to use QC devices as accelerators.
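
To make the “accelerator” pattern concrete, the sketch below (in Python, using the open-source Qiskit library; the circuit and the classical driver are illustrative assumptions, not drawn from any particular talk) shows a classical application offloading a small entangling kernel to a quantum backend and folding the result back into classical post-processing:

    # Minimal sketch of the "QC as accelerator" pattern: a classical driver
    # offloads a small kernel to a quantum backend (here an exact simulator
    # stands in for a real QPU) and consumes the result classically.
    # Requires: pip install qiskit
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    def quantum_kernel() -> dict:
        """The offloaded 'accelerated' kernel: prepare a Bell pair."""
        qc = QuantumCircuit(2)
        qc.h(0)      # put qubit 0 into superposition
        qc.cx(0, 1)  # entangle qubits 0 and 1
        return Statevector(qc).probabilities_dict()

    def classical_application() -> None:
        probs = quantum_kernel()  # the "accelerator" call
        correlated = probs.get("00", 0.0) + probs.get("11", 0.0)
        print(f"P(correlated outcomes) = {correlated:.3f}")  # -> 1.000

    classical_application()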

Organizer(s): Alfio Lazzaro (HPE), and Tiziano Mueller (HPE)

Domain: Computational Methods and Applied Mathematics


Accelerating Sustainable Development through Coupled HPC Simulations and AI

High-performance computing (HPC) has a long history of driving scientific discovery through advances in hardware and numerical algorithms, but the adoption of artificial intelligence (AI) and machine learning (ML) is transforming this landscape. By integrating traditional simulations with AI/ML training and inference tasks into complex workflows, computational scientists are unlocking new HPC applications, from AI-driven design and optimization to online model fine-tuning and learning of dynamical systems, and revolutionizing how we tackle many of the UN’s sustainable development goals. However, building and efficiently deploying large-scale coupled workflows on HPC systems still poses significant software and hardware challenges, including managing massive datasets on distributed systems, making efficient use of the interconnect and local memory to avoid I/O bottlenecks, and ensuring reproducibility and provenance. In this minisymposium, speakers from leading hardware vendors, HPC centers, and universities will share the latest software innovations, newly developed learning methodologies, and successful practices adopted to address the issues faced by coupled simulation and AI workflows on modern HPC systems. Through applications in fields such as drug discovery and climate modeling, the talks will discuss lessons learned and the remaining challenges in adopting large-scale coupled workflows for scientific discovery in the exascale era of supercomputing.
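
As a minimal illustration of the coupling pattern described above, the following PyTorch sketch (the toy diffusion “solver” and the tiny surrogate network are assumptions for illustration) streams each solver step directly into an online training loop, keeping data in memory and off the file system:

    # Minimal sketch of online coupling between a solver and ML training:
    # each solver step feeds the surrogate's training loop in memory,
    # avoiding file-system I/O entirely. Requires: pip install torch
    import torch

    def simulation_step(u: torch.Tensor) -> torch.Tensor:
        """Toy explicit diffusion step standing in for a real solver."""
        return u + 0.1 * (torch.roll(u, 1) - 2 * u + torch.roll(u, -1))

    surrogate = torch.nn.Sequential(
        torch.nn.Linear(64, 64), torch.nn.Tanh(), torch.nn.Linear(64, 64))
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

    u = torch.randn(64)
    for step in range(200):
        u_next = simulation_step(u)                 # the "simulation" advances
        loss = torch.nn.functional.mse_loss(surrogate(u), u_next)
        optimizer.zero_grad()                       # online fine-tuning step
        loss.backward()
        optimizer.step()
        u = u_next.detach()
    print(f"final surrogate loss: {loss.item():.2e}")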

Organizer(s): Riccardo Balin (Argonne National Laboratory), Alessandro Rigazzi (HPE), and Ramesh Balakrishnan (Argonne National Laboratory)

Domain: Computational Methods and Applied Mathematics


Addressing Scientific and Engineering Challenges in Complex Multi-Physics Coupling Scenarios

The complexity of real-world physical phenomena, combined with the need to address intricate engineering challenges, has prompted the scientific community to develop numerical methods and models for simulating large-scale multiphysics phenomena using high-performance computing resources. Multiphysics simulations are increasingly utilized in complex systems and emerging technologies, such as nanoscale materials and biophysics. These simulations present significant challenges, mainly due to the intricacies of modeling multiple physical processes that interact in a coupled manner or occur across varying spatial and temporal scales. Establishing robust coupling between diverse solvers and algorithms is particularly demanding, requiring innovative computational strategies to achieve stability, accuracy, and convergence. In multiphysics coupling scenarios, the boundaries between disciplines often merge, emphasizing the need for interdisciplinary collaboration. This minisymposium brings together experts in computational engineering and physics to explore various domains and applications. Four speakers will present specific coupled science cases, highlighting computational challenges, advancements toward exascale readiness, and the future role of machine learning as full surrogate models or as complementary tools to established numerical methods.
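
A minimal sketch of the partitioned-coupling idea mentioned above (the two toy single-variable “solvers” and the relaxation factor are illustrative assumptions): two codes exchange an interface value inside an under-relaxed fixed-point loop, a standard device for achieving stability and convergence:

    # Minimal sketch of partitioned multiphysics coupling: two toy "solvers"
    # exchange an interface value inside an under-relaxed fixed-point loop.
    def solver_a(t_interface: float) -> float:
        """Toy fluid solver: heat flux as a response to interface temperature."""
        return 10.0 - 2.0 * t_interface

    def solver_b(flux: float) -> float:
        """Toy solid solver: interface temperature as a response to flux."""
        return 1.0 + 0.3 * flux

    t, omega = 0.0, 0.5                        # initial guess, relaxation factor
    for it in range(50):
        t_new = solver_b(solver_a(t))          # one coupled sweep
        if abs(t_new - t) < 1e-12:
            break
        t = (1.0 - omega) * t + omega * t_new  # under-relaxation stabilizes
    print(f"converged interface temperature {t:.6f} after {it + 1} sweeps")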

Organizer(s): Nur Aiman Fadel (ETH Zurich / CSCS), Mauro Bianco (ETH Zurich / CSCS), and Filippo Spiga (NVIDIA Inc.)

Domain: Computational Methods and Applied Mathematics


Advanced Simulations of the Plasma Edge Region in Magnetic Fusion Devices

This minisymposium is dedicated to addressing computational challenges in magnetic fusion research and, more specifically, the latest progress made in the development of kinetic codes for simulating the edge region of tokamak reactors. This outer plasma region involves particularly steep pressure gradients, large-amplitude fluctuations, the presence of many types of impurities, high collisionality among the different plasma species as well as with neutral atoms, and non-trivial plasma-wall boundary conditions. These conditions are thus particularly challenging to model, and this topic is therefore recognized as a frontier domain. The presentations in this session will cover improvements made to today’s most advanced gyrokinetic edge codes at three different levels: (1) enhancements to the simulation models to include the particularly complex physical processes of this outer plasma region, (2) novel discretization schemes allowing for more compact representations and consequently more efficient computations, and (3) improved parallelization schemes for efficiently exploiting today’s massively parallel and GPU-accelerated platforms. The codes presented in this minisymposium provide excellent illustrations of the different numerical representations considered in the plasma community for carrying out kinetic simulations: Eulerian, moment-based, PIC, and semi-Lagrangian. These approaches are obviously common to many other fields of science.
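
Of the representations listed above, particle-in-cell is the simplest to sketch; the toy Python example below (normalized units, with a prescribed field in place of a self-consistent Poisson solve) shows the leapfrog particle push at the core of the method:

    # Minimal sketch of the particle-in-cell (PIC) idea in 1D: leapfrog-push
    # particles in a prescribed electrostatic field (normalized units). Real
    # gyrokinetic codes add field solves, gyro-averaging, collisions, etc.
    import numpy as np

    rng = np.random.default_rng(0)
    L, dt, steps = 2.0 * np.pi, 0.05, 200
    x = rng.uniform(0.0, L, 10_000)      # particle positions
    v = rng.normal(0.0, 1.0, 10_000)     # particle velocities

    def efield(x):
        # Toy prescribed field; a real PIC code solves Poisson's equation
        # from the deposited charge density here.
        return 0.1 * np.sin(x)

    v -= 0.5 * dt * efield(x)            # stagger velocities (leapfrog)
    for _ in range(steps):
        v += dt * efield(x)              # kick
        x = (x + dt * v) % L             # drift with periodic boundary
    print(f"mean kinetic energy: {0.5 * np.mean(v**2):.3f}")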

Organizer(s): Stephan Brunner (EPFL), Eric Sonnendruecker (Max Planck Institute for Plasma Physics), and Laurent Villard (EPFL)

Domain: Physics


Advances in Foundation Models for Weather and Climate

In this minisymposium, we will survey the state of the art of foundation models for weather and climate. These promise a second revolution for Earth system modeling, after the emergence of highly skilful machine learning-based weather forecasting models in the last two years. Foundation models aim to provide a machine learning-based, rich representation of the Earth system at many scales in space and time through training on many different datasets. With this, they can be used for a wide range of tasks, not unlike conventional, equation-based Earth system models. The first large foundation models are becoming available now, and their applicability to a range of tasks is being explored. The three talks in the minisymposium cover model development as well as applications and the physical consistency of model output, and will provide a comprehensive overview of the state of the field. In the panel discussion at the end of the minisymposium, the panelists will share their insights about the state of the field and where it is heading in the future. The panel also provides the audience with the opportunity to engage with the panelists and to share their experiences and opinions.

Organizer(s): Christian Lessig (European Centre for Medium-Range Weather Forecasts), Ilaria Luise (CERN), and Sebastian Schemm (ETH Zurich)

Domain: Climate, Weather and Earth Sciences


Advancing Medical AI: From Task-Specific Models to Reliable and Scalable Clinical AI Agents

The minisymposium “Advancing Medical AI: From Task-Specific Models to Reliable and Scalable Clinical AI Agents” brings together global experts to address key challenges and opportunities in medical AI. While task-specific models have driven advancements in pathology classification, image segmentation, and report generation, their limited adaptability and reasoning capabilities hinder broader clinical integration. This symposium explores how Large Language Models (LLMs) and Large Multimodal Models (LMMs) are redefining medical AI with multitask flexibility, multimodal data fusion, and scalable, transparent solutions. Speakers will discuss state-of-the-art systems like MEDITRON-70B, Med-Flamingo, MAIRA-2, and MMedAgent, which extend LLMs to tasks including disease detection, organ classification, and grounded diagnostics. Key themes include integrating biomedical knowledge into workflows, combining diverse data streams for holistic analysis, and optimizing models for real-world deployment in hospital infrastructure. The event emphasizes privacy-preserving frameworks, open-weight models, and inclusive AI systems that foster global accessibility. Through a multidisciplinary approach, the symposium aims to advance reliable, scalable, and equitable AI solutions for healthcare, highlighting innovations that address critical clinical needs while empowering diverse research communities worldwide.

Organizer(s): John Anderson Garcia Henao (Balgrist University Hospital, OR-X Translational Center for Surgery), and Sebastiano Caprara (Balgrist University Hospital, OR-X Translational Center for Surgery)

Domain: Life Sciences


AI and Nanotechnology: Leveraging Computational Advances for Environmental Sustainability

This symposium, titled “AI and Nanotechnology: Leveraging Computational Advances for Environmental Sustainability”, will explore how the synergy between Artificial Intelligence (AI) and nanotechnology can drive innovative solutions to pressing environmental challenges. The event will focus on the integration of data-driven tools, computational modeling, and nanomaterials in addressing critical sustainability issues. Key topics include accelerating catalyst design with data-driven tools, highlighting how AI-enabled molecular modeling can expedite the development of efficient and sustainable catalysts. Another important focus will be the role of AI and molecular modeling in phosphorus sustainability, ensuring more effective use of this vital resource in environmental systems. The symposium will also delve into data-driven discovery of Fe(III)-based spin-crossover systems, demonstrating how AI-driven approaches can enhance the design of advanced materials with tunable properties. Additionally, participants will discuss cutting-edge strategies for combating contamination by harnessing nanoparticles and leveraging AI to optimize remediation techniques. By bringing together leading experts from diverse disciplines, this symposium offers a platform for collaboration, knowledge exchange, and the development of actionable solutions. Through these discussions, participants will contribute to advancing AI and nanotechnology innovations aimed at achieving a sustainable future and addressing key United Nations Sustainable Development Goals (SDGs).

Organizer(s): Miroslava Nedyalkova (University of Fribourg), Yaroslava Yingling (NC State University), and Marco Lattuada (University of Fribourg)

Domain: Chemistry and Materials


Application Perspective on SYCL, a Modern Programming Model for Performance and Portability

HPC and data-intensive computing now stand as the fourth pillar of science. Heterogeneous architectures, primarily GPUs, are a staple of HPC. However, the fast pace of development and increasing diversity of hardware bring forth not only new opportunities, but also challenges. Proprietary heterogeneous programming models often pose risks due to vendor lock-in, hurting collaboration and limiting portability, hence undermining reproducibility in scientific research. SYCL, a vendor-agnostic, C++-based standard for heterogeneous computing with several mature implementations for a wide range of hardware accelerators, offers a promising path towards portable, high-performance computing. This minisymposium focuses on the experiences of scientific application developers using SYCL as a performance-portability solution across different hardware accelerators, fostering a collaborative open-software ecosystem. The talks will focus on best practices for developing performance-portable and maintainable code using SYCL, as well as on co-development and interaction between the SYCL Working Group and application developers. The aim of this minisymposium is to contribute to the wider adoption of open standards in scientific computing.

Organizer(s): Andrey Alekseenko (KTH Royal Institute of Technology, PDC Center for High Performance Computing), and Szilárd Páll (KTH Royal Institute of Technology, PDC Center for High Performance Computing)

Domain: Engineering


Art and Science

The Symposium on Art and Science brings together professionals from two seemingly distinct worlds—art and science—to explore the powerful synergies that emerge when their approaches intersect. This interdisciplinary event will delve into the ways in which artistic practices and scientific research inform and inspire one another, sparking new ideas, methods, and possibilities for addressing global challenges. Throughout the symposium, leading artists, scientists, researchers, and creative thinkers will present their work, share insights, and engage in thought-provoking discussions. Topics will range from the role of aesthetics in scientific visualization to the use of technology in contemporary art, and how both disciplines approach complex questions about humanity, nature, and the universe. Attendees will have the opportunity to explore cutting-edge collaborations in fields such as digital art, life sciences, engineering and artificial intelligence, witnessing firsthand how artists and scientists push the boundaries of their respective fields through shared knowledge and innovative approaches.

Organizer(s): Guillaume Houzeaux (Barcelona Supercomputing Center), Charles Moulinec (STFC), and Beatriz Eguzkitza (Barcelona Supercomputing Center)

Domain: Computational Methods and Applied Mathematics


Biopreparedness at Scale via Context-Aware Agent-Based Models

A rapid response to the initial phase of the COVID-19 pandemic was hampered by decentralized data collection and analysis, and by the novelty of the virus itself; vital metrics for virus characteristics, such as its transmissibility and virulence, were unknown. Moreover, the disease progression was spatially heterogeneous; different regions experienced waves at varying times and with differing intensities. To mitigate these challenges and better inform public health officials for the next pandemic, we are developing methods to assimilate real-world data into biologically informed agent-based models, facilitating biopreparedness at scale in near real time. These models will allow for population stratification along multiple comorbidities or sociodemographic factors across diverse geospatial regions by incorporating decentralized data in a mathematically private way. By incorporating this data from varied populations across a region, these models will assist public health agencies in mitigating an emerging outbreak and effectively managing hospital capacity. In this minisymposium, we will highlight different computational methods designed to address these key biopreparedness challenges.
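
As a minimal sketch of the modeling substrate (all parameters are illustrative; the models above additionally stratify agents and assimilate real data), a tiny agent-based SIR step in Python looks like this:

    # Minimal agent-based SIR sketch (illustrative parameters only); the
    # models described above additionally stratify agents by comorbidities,
    # demographics, and geography, and assimilate real-world data.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p_infect, p_recover = 5_000, 0.04, 0.1
    state = np.zeros(n, dtype=int)                  # 0 = S, 1 = I, 2 = R
    state[rng.choice(n, 10, replace=False)] = 1     # seed infections

    for day in range(120):
        contacts = rng.integers(0, n, size=n)       # one random contact each
        exposed = (state == 0) & (state[contacts] == 1)
        new_infections = exposed & (rng.random(n) < p_infect)
        recoveries = (state == 1) & (rng.random(n) < p_recover)
        state[new_infections] = 1
        state[recoveries] = 2
    print(f"S={np.sum(state == 0)}, I={np.sum(state == 1)}, R={np.sum(state == 2)}")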

Organizer(s): Adam Spannaus (Oak Ridge National Laboratory), John Gounley (Oak Ridge National Laboratory), and Heidi A. Hanson (Oak Ridge National Laboratory)

Domain: Life Sciences


Breaking the HPC Silos for Sustainable Development

IDEAS4HPC proposes a panel of four presentations under the topic of “Breaking the HPC silos for sustainable development”. Each presentation will describe the breadth and depth of progress obtained by working at the intersection of different disciplines, or with team members of diverse origins and backgrounds. The early-stage researcher is an Iranian computer scientist studying in Spain, who will share her experience as a summer student in the ROOT team at CERN. The second scientist presently works as a scientific officer for the World Climate Programme at the WMO in Geneva and is an expert in sea-level research. This topic has become of pivotal importance for climate change adaptation, as a large proportion of the world population lives in coastal areas and is seeing its habitat and sources of revenue dramatically changed.

Organizer(s): Marie-Christine Sawley (ICES Foundation), Maria Grazia Giuffreda (ETH Zurich / CSCS), Florina M. Ciorba (University of Basel), and Maria Girone (CERN)

Domain: Computational Methods and Applied Mathematics


Building Expertise for Sustainable Scientific Software in High-Performance Computing: Bridging Training Gaps in the Research Community

The aim of this minisymposium is to explore how training initiatives in sustainable open-source scientific software development foster collaboration among researchers and developers. Scientific software development requires motivated, multi-talented individuals capable of balancing technical excellence with scientific rigor. However, the absence of standard curricula for scientists developing software, undefined career paths, and scarce resources for onboarding and maintaining sustainable software make recruiting and nurturing new talent particularly challenging. The scope and complexity of software skills required are growing rapidly, with software increasingly interdependent on evolving hardware. Traditional academic education often fails to adequately equip researchers for developing sustainable, efficient scientific software. In response, multiple initiatives have emerged over recent years to bridge this gap and provide advanced training to researchers from varied career stages and disciplines. We have invited representatives from some of these initiatives to talk about the topic from their perspective and discuss how we can foster collaborations between the training, researcher and developer communities. We welcome researchers of all scientific domains, and representatives of all training and scientific software developer communities to join us.

Organizer(s): Alessandra Villa (KTH Royal Institute of Technology), and Samantha Wittke (CSC – IT Center for Science, Finnish Geospatial Research Institute in the National Land Survey of Finland)

Domain: Computational Methods and Applied Mathematics


Challenges and Opportunities for Next-Generation Research Applications and Workflows

We are increasingly engaged in transdisciplinary research to address the complex challenges facing our world. These challenges include transitioning to renewable energy systems, advancing personalized medicine, utilizing digital twins, and accurately predicting climate change and its impacts on local and regional ecosystems. As we look toward a future shaped by computing, data, and AI, we aim to leverage various digital services and methodologies. In this context, application- and workflow-focused approaches can play a crucial role in advancing scientific frontiers by harnessing the potential of integration. This approach could also serve as a long-term strategy for upholding the principles of sustainability, openness, and transparency, particularly within federated ecosystems. Thus, engaging in discussions about next-generation application workflows is just as crucial for advancing research as conversations concerning the development of digital infrastructure. This minisymposium will convene experts from various domains, each focusing on different aspects of the scientific research lifecycle. Speakers will critically examine the role of AI and explore performance and productivity beyond Moore’s Law. Representing early-career researchers, Dr. Filippo Gatti from France will discuss how generative strategies can empower physics-based simulations.

Organizer(s): Ewa Deelman (Information Sciences Institute, University of Southern California), Sagar Dolas (SURF), and Irene Bonati (SURF)

Domain: Computational Methods and Applied Mathematics


Challenges in Systems Design for Omics

This minisymposium aims to address the critical challenges faced in the design and implementation of systems for omics research. As the field of omics, encompassing various disciplines such as genomics, proteomics, and metabolomics, continues to expand rapidly, there is an increasing demand for hardware-software co-design and robust computational systems that can handle large datasets, provide accurate analyses, and facilitate meaningful biological insights. The enormous data growth continuously shifts the life sciences from model-driven towards data-driven science, driving the adoption of deep neural network models, massively parallel accelerators such as GPUs, and vendor-independent portability frameworks. This session will bring together experts from both the computational and life sciences to discuss innovative approaches to systems design that meet the unique needs of omics workloads. Topics will include advanced algorithms for data processing in genomics and proteomics, novel data representations that achieve superior memory efficiency, and hardware-software co-design to improve performance and energy efficiency. Mechanisms that enable real-time analysis of genomic data by analyzing electrical signals as raw sequencing data, lessons learned from GPU acceleration of computations in widely used bioinformatics tools, and an outlook on future software and hardware trends that will likely impact computational biology will also be shared.

Organizer(s): Sriranjani (Gina) Sitaraman (AMD), Hari Sadasivan (AMD, University of Washington), Bertil Schmidt (Johannes Gutenberg University Mainz), and Gagandeep Singh (AMD)

Domain: Life Sciences


Data Without Borders: Fostering Equity and Access in Scientific Research

This minisymposium aims to bridge geographical divides, such as those between Europe and the United States, and domain-specific barriers across fields like material sciences, astrophysics, and high-energy physics. Our primary objective is to foster collaboration and knowledge exchange among communities that often operate in isolation, ensuring that diverse perspectives are represented. We are committed to engaging a diverse range of speakers, including both male and female experts, to enrich the dialogue with varied insights. In addition, this symposium focuses on supporting underserved communities that face challenges in accessing high-end resources and data storage capabilities. This includes regions with limited access to advanced technology and data infrastructure, posing barriers to conducting leading-edge research. These communities often struggle to obtain and share critical data for their scientific work. This minisymposium provides a platform for these communities to facilitate the exchange of knowledge, best practices, and resources. Through this collaborative approach, we strive to empower researchers from diverse backgrounds to make substantial contributions to their fields and the broader scientific community.

Organizer(s): Valerio Pascucci (University of Utah), Maria Girone (CERN), and Michela Taufer (University of Tennessee)

Domain: Climate, Weather and Earth Sciences


Dynamic Adaptive Scalable Methods in Earth System Modeling

To enhance the reliability and accuracy of weather and climate simulations, smaller scales and additional atmospheric processes have to be taken into account. This increases the memory and computational demand, often exceeding the capabilities of modern supercomputers. Adaptive methods present a solution by dynamically focusing computational power on specific areas in time and space, thereby significantly improving detail while maintaining low runtime and resource consumption. In spite of their great potential, there are many challenges to face, such as the sophisticated selection of adaptation strategies and the careful technical consideration of memory layouts and communication patterns required to maintain good performance and scalability, even on the hundreds of GPUs present in upcoming supercomputers. In this minisymposium, we will showcase advanced and highly efficient numerical schemes covering mesh management, computational fluid dynamics with discontinuous Galerkin methods, and multiresolution-based objective error estimation. Applications in chemistry-climate modeling, atmospheric transport processes, and flooding simulation will be presented, and the ongoing efforts and challenges in adopting such schemes will be discussed. The overarching goal is to reach exascale capability with efficient adaptive algorithms.
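
The adaptivity principle is easy to illustrate: the toy Python sketch below (1D, with a simple jump-based error indicator; the threshold is an illustrative assumption) refines a grid only where the indicator exceeds a tolerance, concentrating points in a sharp layer:

    # Minimal sketch of adaptivity: refine a 1D grid only where an error
    # indicator (the local solution jump) exceeds a tolerance, so points
    # concentrate in the sharp layer.
    import numpy as np

    def f(x):
        return np.tanh(50.0 * (x - 0.5))         # solution with a sharp layer

    x = np.linspace(0.0, 1.0, 11)                # coarse initial grid
    for _ in range(6):                           # refinement sweeps
        indicator = np.abs(np.diff(f(x)))        # cheap per-cell estimate
        marked = np.nonzero(indicator > 0.1)[0]  # cells above tolerance
        if marked.size == 0:
            break
        midpoints = 0.5 * (x[marked] + x[marked + 1])
        x = np.sort(np.concatenate([x, midpoints]))
    print(f"{x.size} points; smallest cell {np.min(np.diff(x)):.2e}")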

Organizer(s): Farahnaz Khosrawi (Forschungszentrum Jülich), Benedict Geihe (University of Cologne), and Michael Schlottke-Lakemper (University of Augsburg)

Domain: Climate, Weather and Earth Sciences


Emerging Computing Technologies for Next-Generation High-Performance Computing

High-performance computing (HPC) is evolving rapidly to meet the growing demands of complex scientific, engineering, and industrial applications. As data volumes expand exponentially and simulations become increasingly intricate, the future of HPC will rely on cutting-edge hardware, innovative architectures, and software-driven optimizations to boost processing power, efficiency, and scalability. At the same time, Artificial Intelligence (AI) is transforming HPC by enhancing computational efficiency through predictive models and adaptive processing techniques. However, this integration also intensifies the need for computational resources, creating challenges such as rising energy consumption and the efficient management of resources. Addressing these issues requires exploring novel computing platforms that balance high performance with sustainability. This minisymposium will examine emerging technologies, including Quantum Computing, Electronic Neuromorphic Computing, and Photonic Computing, which are at various stages of development. By analyzing these advancements and their potential applications, we aim to outline a roadmap for the next generation of HPC technologies. The discussion will emphasize their transformative impact on diverse fields, highlighting how these innovative solutions could shape the future of computing over the next decade.

Organizer(s): Nur Aiman Fadel (ETH Zurich / CSCS), Cornelius Hempel (Paul Scherrer Institute), and Mauro Bianco (ETH Zurich / CSCS)

Domain: Engineering


Empowering Interdisciplinary Collaboration through Reproducible Benchmarking

Porting and tuning the performance of scientific applications on heterogeneous and increasingly complex supercomputer architectures is currently a manual and arduous task. Benchmarks are therefore crucial for evaluating and improving the performance of applications on these systems. Benchmarks can be used as proxies for the execution behaviors of scientific applications in a controlled and simpler environment, which provides an opportunity to extrapolate performance gains across new hardware, algorithm changes, and software updates. This minisymposium brings together interdisciplinary communities who use supercomputers to discuss the challenges and opportunities in tracking, analyzing, and tuning application performance through the use of reproducible benchmarks. Specifically, we will address these critical topics: defining benchmarks that accurately capture scientific application behaviors; broadening performance metrics beyond time to solution to reflect the impact of optimizations; modernizing the process of running benchmarks and analyzing their performance on different hardware architectures; proposing a standard for the definition of new benchmarks to improve reproducibility; identifying opportunities for software/hardware co-design across scientific applications and domains; and encouraging contributions from the community through open-source benchmark implementations.
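
As a minimal sketch of what a reproducible benchmark record might capture (the field names are illustrative, not a proposed standard), the following Python harness stores timing variability alongside environment provenance:

    # Minimal sketch of a reproducible benchmark record: capture timing
    # variability plus environment provenance, serialized for later
    # cross-hardware comparison.
    import json, platform, sys, time

    def run_benchmark(kernel, *args, reps: int = 5) -> dict:
        times = []
        for _ in range(reps):
            t0 = time.perf_counter()
            kernel(*args)
            times.append(time.perf_counter() - t0)
        return {
            "kernel": kernel.__name__,
            "time_to_solution_s": min(times),
            "all_reps_s": times,          # report variability, not one number
            "python": sys.version.split()[0],
            "machine": platform.machine(),
            "platform": platform.platform(),
        }

    record = run_benchmark(sorted, list(range(1_000_000))[::-1])
    print(json.dumps(record, indent=2))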

Organizer(s): Olga Pearce (Lawrence Livermore National Laboratory, TAMU), and Jens Domke (RIKEN)

Domain: Computational Methods and Applied Mathematics


Enhancing the Efficacy and Outcomes of Scientific Software Developers

To maximally benefit the research community, computational research and research software should be reproducible, use standard workflows, lead to public datasets, and facilitate users’ work. Meeting such expectations can be difficult for software authors, however, as these developers may be experts in a scientific domain rather than software development or user experience best practices. In this minisymposium, our panelists will present on several resources that can ease this burden. Panel members will address common challenges that authors of research software face and offer interventions for these issues, illustrating solutions with examples. This minisymposium will discuss a design system for facilitating web-app usability, science gateways for reproducibility, workflows for scalability and reproducibility, and development tools for scientific software development. Drawing on their research and experiences engaging with the scientific community, panelists will share expertise that can enhance the efficacy and outcomes of scientific software development.

Organizer(s): Johanna Cohoon (Lawrence Berkeley National Laboratory)

Domain: Applied Social Sciences and Humanities


Ethical and Societal Considerations for Scientific Computing

While significant investments have been made in the exploration of ethics in computation, recent advances in high performance computing (HPC) and artificial intelligence (AI) have reignited the discussion of more responsible and ethical computing with respect to the design and development of pervasive sociotechnical systems within the context of existing and evolving societal norms and cultures. The ubiquity of HPC in everyday life presents complex sociotechnical challenges for all who seek to practice responsible computing and ethical technological innovation. We wish to discuss how we can incorporate ethics into all phases of scientific computing development and deployment to ensure that the desired scientific outcome is achieved in a context that ethically considers humans and society rather than just the technical requirements. We will share experiences from those who have incorporated ethics into what they do, to demonstrate that ethics and technical achievement are not at odds. We will also include perspectives on ethics to promote a lively discussion seeking balance in how we pursue scientific progress. The panel discussion in this session will address lessons learned and facilitate audience interaction aimed at enabling informed decision-making regarding ethics and responsible computing under the lens of the UN Sustainable Development Goals.

Organizer(s): Jakob Luettgau (INRIA), Jay Lofstead (Sandia National Laboratories), and Elaine Raybourn (Sandia National Laboratories)

Domain: Applied Social Sciences and Humanities


FAIR-By-Design HPC-Driven Research

In this minisymposium, we discuss state-of-the-art solutions and ongoing developments to embed FAIR data principles into HPC research workflows. Indeed, despite the growing awareness of the importance of FAIR data principles, implementing them remains a challenge. This is especially true for HPC-powered research, due to the sheer number of simulations, the lack of tools to simplify their management, and the rapidly evolving landscape of computational frameworks. The minisymposium focuses on best practices for reproducible workflows via modern state-of-the-art tools and approaches, concerning both workflow engines and the underlying simulation software running on current HPC architectures. We discuss applications from climate research and materials science, two domains with similar requirements in terms of workflows, access to HPC resources, and related data management needs. Nevertheless, these two domains have until now had limited interaction and cross-fertilization. Invited speakers are both developers and advanced users of widely adopted open-source workflow engines with a clear focus on reproducibility. They will showcase current efforts and future directions for addressing the challenge of expanding workflow engines to facilitate adherence to FAIR requirements, while still keeping the user interface (for workflow developers and users) as simple as possible.
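
To illustrate the provenance idea in miniature (the wrapper and field names are illustrative assumptions; the workflow engines discussed in this session automate this far more thoroughly), the Python sketch below hashes every input and output of a step into a ledger:

    # Minimal sketch of FAIR-style provenance capture: hash every input and
    # output of a step into an append-only ledger so that any result can be
    # traced back to the exact inputs that produced it.
    import hashlib, json

    def digest(obj) -> str:
        blob = json.dumps(obj, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    ledger = []

    def tracked(step):
        def wrapper(params: dict):
            result = step(params)
            ledger.append({"step": step.__name__,
                           "inputs": digest(params),
                           "outputs": digest(result)})
            return result
        return wrapper

    @tracked
    def relax_structure(params: dict) -> dict:
        return {"energy": -1.0 * params["natoms"]}   # stand-in for a real code

    relax_structure({"natoms": 8, "functional": "PBE"})
    print(json.dumps(ledger, indent=2))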

Organizer(s): Giovanni Pizzi (Paul Scherrer Institute), Carlo A. Pignedoli (Empa), and Nataliya Paulish (Paul Scherrer Institute)

Domain: Chemistry and Materials


Fast and Accurate Numerical Linear Algebra on Low-Precision Hardware: Algorithms and Error Analysis

This minisymposium will address the state of the art in computer arithmetic techniques that make it possible to simulate accurate floating-point computations using low-precision floating-point or integer operations. Progress in this research area is important to hardware manufacturers because it allows high-performance computers to reduce the number of complex high-precision floating-point units on the chip and increase the number of low-precision floating-point units, which are especially useful for machine learning; since efficient algorithms are available to simulate high-precision computations, traditional applications not tolerant of the errors associated with low precision do not suffer. These techniques are increasingly researched internationally. The minisymposium will include four speakers, among them an early-career researcher less than five years past his PhD award; the speaker line-up is still being finalized.
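
The canonical example of this technique is mixed-precision iterative refinement: factorize and solve in low precision, then recover high-precision accuracy through cheap high-precision residual corrections. A minimal NumPy sketch (the test matrix is an illustrative, well-conditioned assumption):

    # Minimal sketch of mixed-precision iterative refinement: solve Ax = b
    # with a float32 solver, then recover double-precision accuracy through
    # cheap float64 residual corrections. A production code would factorize
    # A once in low precision and reuse the factors at every iteration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned
    b = rng.standard_normal(n)

    A32 = A.astype(np.float32)
    x = np.zeros(n)
    for it in range(5):
        r = b - A @ x                                 # residual in float64
        x = x + np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        print(f"iteration {it}: residual norm {np.linalg.norm(r):.2e}")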

Organizer(s): Mantas Mikaitis (University of Leeds), and Theo Mary (CNRS, Sorbonne University)

Domain: Computational Methods and Applied Mathematics


Fostering Sustainable Workflows in High Energy Physics: Developing Common Interfaces at Leadership Facilities to Enable Cross-Site Portability

Next-generation High Energy Physics (HEP) experiments such as those at the High-Luminosity Large Hadron Collider (HL-LHC) and Deep Underground Neutrino Experiments (DUNE) will require significantly more computational resources to analyze orders-of-magnitude higher volumes of data in the next decade. This means that the experiments may need to tap into the large-scale computational resources offered at diverse supercomputing sites traditionally designed for high performance computing (HPC) workloads, instead of the high-throughput computing (HTC) sites HEP experiments are accustomed to. In addition to the challenges of adapting HEP workflows to run on the HPC systems, there are also issues related to authentication/authorization, access policies and reproducibility that need to be addressed. This minisymposium will focus on current status and challenges in developing common interfaces at large-scale computing facilities to enable cross-site workflow portability. Issues such as establishing standardized protocols and tools for data management, workflow execution, and resource allocation will be discussed. We intend to use this minisymposium as a forum to foster conversations and collaborations between the high energy physics and computer science communities towards developing portable workflow execution across computing sites.

Organizer(s): Charles Leggett (Lawrence Berkeley National Laboratory), Meifeng Lin (Brookhaven National Laboratory), and Kyle Chard (University of Chicago, Argonne National Laboratory)

Domain: Physics


Geometries and Topology of Learning for Computational Discovery in High Dimensional Biological Systems with Applications to Human Health

In recent years there has been substantial interest in using machine learning and AI algorithms for data-driven scientific discovery. This interest has to a large degree been fueled by significant increases in the power of high-performance computing coupled with the growing availability of massive data sets, ranging from weather and climate simulations to biological studies of the host response to infectious disease. This computational and data-driven research has led to a number of significant discoveries related to, e.g., protein-protein interactions, biomarkers of infectious disease, molecular neuroscience, immunology, cancer, and structural biology. This minisymposium will highlight recent work being done at the interface of high-performance computing, algorithms, and mathematics for understanding complex biological systems. It will feature simulations and AI analyses using high-performance computing resources at Oak Ridge National Laboratory and the National Center for Atmospheric Research (NCAR). The focus will be on health-related applications, including modeling pathogen emergence in relation to climate change, graph learning and statistical shape analysis for understanding complex biological systems, and learning neural activity patterns with deep geometric and topological networks.

Organizer(s): Michael Kirby (Colorado State University), and Corey O’Hern (Yale University)

Domain: Computational Methods and Applied Mathematics


Improving Energy Efficiency of HPC Systems through Software

Energy and power challenges increase as High-Performance Computing and AI scale to meet rapid industry and research demands. These challenges include higher CO2 emissions, increased energy costs, and strain on the power infrastructure. HPC centers are looking to reduce energy consumption and enhance energy efficiency by optimizing resource utilization and managing their workloads more efficiently. Efforts to improve energy efficiency often focus on hardware advancements, such as microarchitectures, intra-core parallelism, vectorization, and accelerators for critical workloads. These innovations have reduced idle power and improved execution, but have also introduced challenges such as rapid power variations. Data center infrastructure, rack design, and cooling techniques have also progressed. Liquid cooling, especially direct hot-water cooling, has gained traction for its cost-saving potential. Although such hardware improvements are impressive, they cannot fully address energy challenges due to their limited adaptability to workloads. Complementary software solutions provide a global view of system status and energy usage, support dynamic adaptation across the stack, enable long-term predictions of resource use, and deliver actionable insights on workload optimizations to users. Research on power-steering runtimes and monitoring tools has contributed to user-facing analytics tools. The rapid progress of AI techniques opens additional opportunities for energy efficiency and optimization in HPC systems.

Organizer(s): Lubomir Riha (IT4Innovations National Supercomputing Center, VSB-TU Ostrava), Hans-Christian Hoppe (Forschungszentrum Jülich), and Estela Suarez (SiPearl)

Domain: Computational Methods and Applied Mathematics


Julia for HPC: Reproducible High-Performance Computing

The fourth instalment of the “Julia for HPC” PASC minisymposium focuses on reproducibility, a cornerstone of scientific research and a key component of High-Performance Computing (HPC). As hardware and software evolve rapidly, reproducibility of both scientific results and application performance becomes increasingly complex. This challenge is particularly pronounced in HPC, where software packages are often tailored to specific hardware architectures for optimal performance. Addressing reproducibility in HPC therefore necessitates the careful development of portable libraries and applications, portable packaging, and a package management system that controls versions of all kinds of software dependencies. This minisymposium highlights the Julia programming language and its environment, which tackle these interconnected challenges in a holistic manner. Tightly integrated tools, such as the package manager and artifact builder, coordinate packaging and environment management, offering a consistent approach. Expert speakers will provide insights into the current state of reproducibility in Julia, highlighting strengths and remaining hurdles. The talks will be presented from the perspectives of both tooling developers and domain scientists. This minisymposium is aimed at Julia users eager to deepen their understanding of the existing reproducibility toolchain, as well as non-Julia users curious about how Julia’s reproducibility solutions might be adapted to other software ecosystems.

Organizer(s): Samuel Omlin (ETH Zurich / CSCS), Ludovic Räss (University of Lausanne, ETH Zurich), and Michael Schlottke-Lakemper (University of Augsburg)

Domain: Computational Methods and Applied Mathematics


Machine Learning Methods for the Simulation of Magnetic Fusion Plasmas

The quest for fusion as an environmentally benign, virtually inexhaustible energy source has recently taken center stage thanks to a number of breakthroughs, such as a new world record for fusion power or the first demonstration of energetic breakeven. This minisymposium is dedicated to addressing challenges in the simulation of magnetic fusion plasmas and, more specifically, the latest data-driven approaches, which complement the more traditional ones. These include the use of deep learning methods to control the operation of tokamaks, the application of physics-informed neural networks to accelerate the solution of the plasma kinetic equations, the development of innovative techniques to accelerate the gathering of training sets for neural surrogate models, as well as the development of neural networks that preserve the symplectic nature of the underlying equations used for performing reduced-order modelling.

Organizer(s): Laurent Villard (EPFL), Eric Sonnendrücker (Max Planck Institute for Plasma Physics), and Stephan Brunner (EPFL)

Domain: Physics


Machine Learning Support for the Lifetime of Software (ML4SW)

Software plays a critical role in scientific discovery across computational science domains, including chemistry, climate science, physics, and applied mathematics. As software development advances, Machine Learning (ML) is becoming an essential tool for enhancing developer productivity, optimizing application execution, and replacing computationally expensive simulations with surrogate Neural Network models. However, several challenges hinder the broad adoption of ML in software, particularly in the context of sustainable development. With increasingly complex software stacks, workflows, and heterogeneous systems, novel techniques are needed to support development, execution orchestration, and performance optimization. A promising approach for reducing software development overhead in High Performance Computing (HPC) is program synthesis, where software is automatically generated from high-level specifications. Large Language Models (LLMs) such as GPT-4, Code Llama, and StarCoder provide intelligent code generation capabilities, yet challenges related to correctness, verification, and reliability remain. Understanding these limitations is crucial for improving ML-driven software development. The ML4SW minisymposium serves as a platform for researchers, developers, and industry professionals to explore ML-driven software synthesis, correctness verification, and application optimization. Key discussions will address ML’s role in enhancing software development, ensuring trustworthiness, and integrating ML into real-world applications for sustainable and efficient computing.
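
A minimal sketch of test-based verification for generated code (the candidate source and tests are illustrative assumptions; production systems add sandboxing, timeouts, and formal methods):

    # Minimal sketch of test-based verification: run a candidate
    # implementation (e.g. returned by an LLM) in a scratch namespace and
    # accept it only if the unit tests pass.
    candidate_source = '''
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a
    '''

    tests = [((12, 18), 6), ((7, 13), 1), ((0, 5), 5)]

    def verify(source: str) -> bool:
        namespace: dict = {}
        try:
            exec(source, namespace)   # never run untrusted code unsandboxed
            return all(namespace["gcd"](*args) == want for args, want in tests)
        except Exception:
            return False

    print("accepted" if verify(candidate_source) else "rejected")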

Organizer(s): Konstantinos Parasyris (Lawrence Livermore National Laboratory), Harshitha Menon (Lawrence Livermore National Laboratory), and Florina M. Ciorba (University of Basel)

Domain: Computational Methods and Applied Mathematics


Materials for Energy from First Principles

The development of sustainable energy technologies relies heavily on advanced materials. From catalysts for clean energy production to batteries for renewable energy storage, innovations in materials science are critical to addressing global energy challenges. First-principles quantum-mechanical simulations, particularly those based on density functional theory, play a key role in this field by providing deep insights into material behavior, predicting performance, and optimizing materials before experimental trials. This minisymposium highlights cutting-edge advancements in materials for energy applications, with an emphasis on the transformative role of first-principles methods and high-performance computing. It convenes experts from academia and industry to explore topics like solar energy conversion, advanced batteries, and CO2 capture, crucial for transitioning to a carbon-neutral society and addressing global warming and climate change – pressing challenges of our time.

Organizer(s): Iurii Timrov (Paul Scherrer Institut), Nicola Colonna (Paul Scherrer Institut), and Livia Giordano (Università di Milano-Bicocca)

Domain: Chemistry and Materials


Mathematical Developments for Magnetic Fusion HPC Applications

The quest for fusion as an environmentally benign, virtually inexhaustible energy source has recently taken center stage thanks to a number of breakthroughs, such as a new world record for fusion power or the first demonstration of energetic breakeven. This minisymposium is dedicated to the development and application of new mathematical methods and frameworks to improve the HPC simulation of magnetic fusion plasmas. The talks will cover communication-avoiding DG schemes for the Vlasov equation, the use of hierarchical matrices to accelerate the simulation of neutral particles, variable-precision computing for multi-model applications, and the GPU acceleration of a Python-based framework for HPC simulation of plasmas.

Organizer(s): Eric Sonnendrücker (Max Planck Institute for Plasma Physics, Technical University of Munich), Stephan Brunner (EPFL), and Laurent Villard (EPFL)

Domain: Computational Methods and Applied Mathematics


Motif-Based Automated Performance Engineering for HPC

In this minisymposium we will discuss some of the efforts being made to improve performance portability and programming productivity for motif-based high-performance algorithms that are used in a wide range of scientific applications. We will describe domain-specific libraries (DSLs) that express mathematical/programming motifs (data objects and operations on those data objects), along with software back-ends that translate the library calls into high-performance code. With a motif-aware software stack, the scientific application code that must be written is much smaller than fully optimized code, and the application-level code remains unchanged when moving between platforms, leading to a less expensive development process. There will be four talks in this minisymposium covering multiple motifs, such as structured grids, Fast Fourier Transforms (FFTs), particle methods, and dense/sparse linear algebra, and different approaches to supporting motif-based DSLs. This minisymposium aims to bring together different groups focused on developing various motif-based high-performance codes using different software tools, and on automating the process as much as possible.
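
The motif idea in miniature: below, a periodic 1D Poisson solve is written as a single high-level FFT call, with NumPy’s FFT standing in for a platform-tuned back-end (FFTW, cuFFT, ...), so the application-level code stays the same across machines (the example problem is an illustrative assumption):

    # The motif idea in miniature: a periodic 1D Poisson solve u'' = f
    # expressed as one high-level FFT call; NumPy's FFT stands in for a
    # platform-tuned back-end, so this application code never changes.
    import numpy as np

    n = 256
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    f = -np.sin(3 * x)                         # exact solution: sin(3x)/9

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)  # wavenumbers
    f_hat = np.fft.fft(f)
    u_hat = np.zeros_like(f_hat)
    nz = k != 0
    u_hat[nz] = f_hat[nz] / (-k[nz] ** 2)      # spectral inverse Laplacian
    u = np.fft.ifft(u_hat).real
    print(f"max error: {np.max(np.abs(u - np.sin(3 * x) / 9.0)):.2e}")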

Organizer(s): Het Yagnesh Mankad (Oak Ridge National Laboratory), Franz Franchetti (Carnegie Mellon University), and Brian Van Straalen (Lawrence Berkeley National Laboratory, Carnegie Mellon University)

Domain: Computational Methods and Applied Mathematics


Moving Towards a More Sustainable HPC Ecosystem: A Full Stack Approach

Global challenges around climate change necessitate a move to a greener society. However, mathematical simulations on supercomputers and the recent rise in popularity of AI consume vast amounts of energy, with an associated environmental impact. Indeed, there are estimates that computing could consume up to 20% of the world’s electricity by the end of the decade. This is not only bad for the environment, because energy is a significant contributing factor to the CO2 emissions of computing, but it also limits the size of a supercomputer because very few locations can deliver the necessary amount of energy. Moreover, recent studies predict that data centres may account for up to 8% of global carbon emissions by 2030. Despite these alarming numbers, however, moving HPC towards net zero has not seen the same urgency as in other sectors. Given the scale of the challenge, this requires a full-stack approach, identifying and exploring opportunities to reduce our carbon footprint across the entire ecosystem. The purpose of this minisymposium is to highlight opportunities for improved sustainability in key areas of the HPC ecosystem, and to explore how we can address these as a community and how these areas can interoperate to deliver a step change in sustainability.

Organizer(s): Nick Brown (EPCC), Michele Weiland (EPCC), and Adrian Jackson (EPCC)

Domain: Computational Methods and Applied Mathematics


Optimizing Molecular Dynamics Dataflows: Integrating Workflows for Real-Time Analysis in the Era of Heterogeneous Computing

Molecular dynamics (MD) simulations are pivotal in computational science, offering atomistic insights into complex systems across biophysics, materials science, and chemistry. However, the exponential growth in data generation and the increasing heterogeneity of supercomputing systems pose significant challenges for traditional MD workflows. This minisymposium presents cutting-edge strategies for optimizing MD dataflows to enable real-time analysis, leveraging in situ and in transit techniques alongside emerging technologies like machine learning, GPUs, and quantum computing. The session features diverse speakers from leading global institutions. Florence Tama (RIKEN, Japan) discusses scalable workflows for Cryo-EM and MD integration to study biomolecular conformational variability. Michel A. Cuendet (SIB, Switzerland) introduces runtime strategies to improve ensemble MD efficiency by terminating unproductive trajectories. Early-career researcher Loïc Pottier (LLNL, USA) highlights the MuMMI framework for machine learning-driven multiscale MD workflows. Ivona Brandić (TU Wien, Austria) explores hybrid classical/quantum systems and adaptive cloud-based MD workflows. Aligned with PASC25’s theme, “Supercomputing for Sustainable Development,” the session emphasizes sustainable computing practices and supports multiple UN SDGs, including SDG 9 (Industry, Innovation, and Infrastructure). Attendees will gain valuable insights into advancing MD workflows for the future of HPC-driven scientific discovery.
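
The in transit pattern can be sketched in a few lines (illustrative only: a thread-local queue stands in for network staging between nodes): a producer streams frames through a bounded in-memory queue to a concurrent analysis consumer, so no data touches the file system:

    # The in transit pattern in miniature: the "MD engine" streams frames
    # through a bounded in-memory queue to a concurrent analysis consumer;
    # nothing touches the file system.
    import queue, threading
    import numpy as np

    frames: queue.Queue = queue.Queue(maxsize=8)    # bounded: backpressure

    def md_engine(n_frames=100, n_atoms=1_000):
        rng = np.random.default_rng(0)
        pos = rng.standard_normal((n_atoms, 3))
        for _ in range(n_frames):
            pos += 0.01 * rng.standard_normal((n_atoms, 3))  # toy dynamics
            frames.put(pos.copy())
        frames.put(None)                            # end-of-stream sentinel

    def analyzer():
        while (frame := frames.get()) is not None:
            centered = frame - frame.mean(axis=0)
            rg = np.sqrt(np.mean(np.sum(centered**2, axis=1)))
        print(f"final radius of gyration: {rg:.3f}")

    producer = threading.Thread(target=md_engine)
    producer.start()
    analyzer()
    producer.join()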

Organizer(s): Ewa Deelman (University of Southern California), and Michela Taufer (University of Tennessee)

Domain: Computational Methods and Applied Mathematics


Ptychographic X-Ray Tomography for Nondestructive Imaging of Complex Nanoscale Structure

Ptychographic x-ray tomography is the highest-resolution x-ray imaging technique available for generic samples, achieving resolutions down to 4 nm. Since its development at the Paul Scherrer Institute in 2010, it has been applied to a wide variety of systems, with applications in catalysis, materials science, engineering, the life sciences, and physics. The field as a whole is currently facing a major opportunity combined with a major challenge, because synchrotrons around the world, including the Swiss Light Source, are upgrading to fourth-generation storage rings. These new synchrotrons provide far higher brightness and massively increased data rates, making it possible to study larger systems but also bringing computational challenges related to the ever-growing data rates. This minisymposium includes talks from experts on ptychographic tomography reconstructions who work in different fields, and who champion different approaches to making use of these new, brighter sources. By including perspectives from a variety of application areas, with a shared focus on the challenge of managing data rates, we hope to drive forward the conversation and identify the most promising approaches for the next generation of beamlines and reconstruction tools.

Organizer(s): Abraham Levitan (Paul Scherrer Institute), and Ana Diaz (Paul Scherrer Institute)

Domain: Computational Methods and Applied Mathematics


Ray Tracing for Scientific Applications

Ray tracing is a highly accurate but computationally expensive technique for simulating the propagation of waves or particles through computational geometries. While ray tracing is widely employed in graphics rendering, this symposium explores its adaptations for scientific applications. Presenters will discuss domain-specific challenges, optimizations, and HPC considerations within astrophysics, geodetics, ocean acoustics, and nuclear engineering applications. Prof. Monika Mościbrodzka will discuss ray tracing for astrophysical applications, with a focus on black hole imaging. Dr. Thalia Nikolaidou will discuss how nonlinear refraction, multi-scale variability, and real-time requirements are handled for ray tracing within Global Navigation Satellite System (GNSS) tomography. Dr. Ben Thomas will discuss GPU-accelerated ray tracing applied to seafloor imaging simulations. Dr. Elliott Biondo will discuss GPU-accelerated ray tracing within Monte Carlo neutron transport simulations of nuclear reactors, including the use of geometry-specific tracking algorithms and tree-based acceleration structures. The diverse range of scientific fields represented in this symposium will foster valuable cross-disciplinary collaborations and knowledge sharing among typically disparate research communities.
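
The geometric kernel shared by all of these applications is the ray-primitive intersection test; a minimal Python sketch for a sphere (solving |o + t d − c|² = r² for t) is shown below:

    # The shared geometric kernel: intersect a ray o + t*d with a sphere of
    # center c and radius r by solving a quadratic in t.
    import numpy as np

    def ray_sphere(o, d, c, r):
        """Return the smallest positive hit distance t, or None on a miss.
        Assumes the direction d is normalized."""
        oc = o - c
        b = 2.0 * np.dot(d, oc)
        disc = b * b - 4.0 * (np.dot(oc, oc) - r * r)
        if disc < 0.0:
            return None                       # ray misses the sphere
        t = (-b - np.sqrt(disc)) / 2.0        # nearer of the two roots
        return t if t > 0.0 else None

    origin = np.array([0.0, 0.0, 0.0])
    direction = np.array([0.0, 0.0, 1.0])     # already unit length
    hit = ray_sphere(origin, direction, c=np.array([0.0, 0.0, 5.0]), r=1.0)
    print(f"hit distance: {hit}")             # -> 4.0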

Organizer(s): Elliott Biondo (Oak Ridge National Laboratory), and Madicken Munk (Oregon State University)

Domain: Engineering


Scaling AI Surrogate Modelling Methods Towards Industrial Application for Computational Fluid Dynamics

The ScaleAI4CFD minisymposium explores the latest advancements in machine learning (ML) models for computational fluid dynamics (CFD), focusing on scaling these models for industrial applications. This event bridges the gap between academic research and industry needs, addressing challenges in energy, automotive, aerospace, and healthcare sectors. The symposium’s emphasis on CFD is strategic, building on the success of the AI4DifferentialEquations workshop at ICLR2024. While ML applications in weather forecasting and climate modeling have progressed, other fluid dynamics areas lag behind. The event features experts from industry, academia, and startups to discuss specific challenges and opportunities in applying AI surrogates to real-world research and design. The focus on CFD also addresses crucial environmental concerns. Aerospace and automotive industries significantly contribute to climate change, and CFD offers a way to reduce their environmental impact by minimizing physical prototyping and testing. AI/ML surrogate models have the potential to further reduce costs, time, and energy consumption in simulations. Attendees will gain insights into the current state of the art and the challenges and opportunities in applying cutting-edge AI methods to CFD problems.

Organizer(s): Neil Ashton (NVIDIA Inc.), Andrew Wilson (NYU Courant), and Danielle Maddix (Amazon Web Services)

Domain: Engineering


Supercomputing for Sustainable Development

This minisymposium will showcase how supercomputing is advancing efforts to address global challenges aligned with the 17 United Nations Sustainable Development Goals (SDGs). Talks will explore a range of topics, including the use of digital twins for sustainability, GeoAI and earth modeling for understanding and visualizing uncertainty at scale, and leveraging high-performance computing to advance health and energy solutions. The session will also address the energy demands of supercomputing itself, with a focus on strategies to minimize energy consumption in computationally intensive simulations. Through these discussions, the session highlights the critical role of supercomputing in driving innovation and supporting global sustainability efforts.

Organizer(s): Peter Vincent (Imperial College London)

Domain: Computational Methods and Applied Mathematics


Sustainable Computing for Big Data Infrastructures

The rapid growth of data in fields like astronomy, particle physics, and genomics, alongside the rising need for large AI model training, poses significant sustainability challenges for big data infrastructures. Carbon-conscious innovation is crucial for future computing infrastructures of the Square Kilometre Array Observatory (SKAO), expected to generate over 700 petabytes annually for the next 50 years to enable groundbreaking discoveries in physics and astronomy. Similarly, effective decarbonization strategies are necessary for legacy systems, such as CERN’s Worldwide LHC Computing Grid (WLCG), which offers global access to over 1.5 exabytes of data. This minisymposium will explore innovative strategies for designing energy-efficient hardware and optimizing software pipelines while minimizing environmental impact. It will feature sustainable computing research and solutions for infrastructures like SKAO and CERN, emphasizing real-world examples and insights into improving sustainability metrics for big data infrastructures. Key topics will include reducing carbon footprints, ensuring performance portability, and co-designing energy-efficient accelerators for high-performance computing, all crucial for addressing the growing demand for extreme-scale scientific computing. By fostering collaboration and sharing cutting-edge research, this event aims to enhance energy efficiency in scientific computing, develop more sustainable high-performance computing infrastructures, and advance carbon-aware practices to meet the demands of a data-driven future.

Organizer(s): Denisa-Andreea Constantinescu (EPFL), Emma Tolley (EPFL), and Rosie Bolton (SKAO)

Domain: Computational Methods and Applied Mathematics


Sustainable Scientific Computing

In scientific computing and beyond, we are all used to optimizing time-to-solution in our computations and to using computers to their fullest power. Yet, the world is facing a climate crisis and we must all strive to minimize our energy footprint. The power consumption constraint on large-scale computing (e.g., exascale computing limited to 20 MW) encourages scientists to revise the architectural design of hardware, but also of applications, their underlying algorithms, and the working/storage precision. By improving energy efficiency, we contribute to sustainable scientific computing. Developing energy-efficient scientific computing applications is nontrivial and requires expertise in several different areas: algorithms and applications, programming languages and compilers, numerical verification, and computer architecture. In this minisymposium, we aim to discuss challenges in designing and implementing energy-efficient mixed-precision algorithms with the assistance of computer arithmetic tools, energy-efficiency modeling, hardware-software co-design in light of the end of Moore’s law, and the potential for automating energy-saving techniques. Key topics include establishing energy-to-solution as an HPC metric following our initial attempt (https://zenodo.org/records/13306639), exploring energy optimization opportunities from algorithmic derivation and across computing stacks, and enhancing tools to aid energy-efficient software development.
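
In its simplest form, energy-to-solution is the integral of a power trace over the run time; the NumPy sketch below (with illustrative numbers; in practice the trace comes from RAPL, IPMI, or vendor counters) applies the trapezoidal rule:

    # Energy-to-solution in its simplest form: integrate a sampled power
    # trace over the run (trapezoidal rule). Illustrative numbers only.
    import numpy as np

    t = np.linspace(0.0, 120.0, 241)                    # seconds, 0.5 s samples
    power = 300.0 + 150.0 * ((t > 10.0) & (t < 110.0))  # watts: idle + compute
    energy_j = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))
    print(f"energy to solution: {energy_j / 1e3:.1f} kJ "
          f"({energy_j / 3.6e6:.4f} kWh)")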

Organizer(s): Roman Iakymchuk (Uppsala University, Umeå University), and Ana Lucia Varbanescu (University of Twente)

Domain: Computational Methods and Applied Mathematics


Trusted Research Environments for Supercomputing in Health and Social Sciences

This minisymposium will bring together those working on enabling HPC facilities to be used effectively and efficiently when working with sensitive, confidential and personally identifiable data. Outside national security institutions, particularly in academic settings, HPC environments have traditionally been built with security standards appropriate to the moderately sensitive data of the physical sciences and engineering. Researchers are increasingly using large-scale computational methods to address questions in health and social science which require the use of personally identifiable data obtained from governmental authorities. In order to maintain public consent for the use of such approaches (“data saves lives”), it is essential that an appropriate security environment is maintained. In these fields, secure environments are typically isolated from the internet, have no in-out copy-paste facilities, impose a significant information governance overhead for actions as basic to the modern data scientist as pip install, and offer limited access to compute at the scale needed for large digital twins or AI training and inference. Streaming data, continuously arriving from devices such as smart or personal equipment, is particularly challenging. Worldwide efforts are now focused on creating productive, secure environments at scale, called Trusted Research Environments (TREs), which support the programming-language-based approaches of modern data scientists and mathematical modellers.

Organizer(s): James Hetherington (University College London), Christian Cole (Dundee University), and Sadaf Alam (University of Bristol)

Domain: Computational Methods and Applied Mathematics


What the Actual FORTRAN? – Radiation Edition

Fortran, the primary programming language underpinning many operational weather and climate codes, was built around the fundamental principle that performance optimisation is left to the compiler. However, to fully utilise modern HPC architectures and GPUs, additional programming paradigms as well as invasive code changes are often needed to efficiently target modern HPC accelerators. While high-level DSLs or C++-based abstraction layers like Kokkos and RAJA provide a disruptive path to performance portability, such abstractions are lacking for Fortran. This leads to the fundamental question: “What the actual FORTRAN can we do to achieve performance portability for complex physical models?” In this minisymposium we continue this ongoing discussion by looking at cross-architecture portability of ecRad, ECMWF’s state-of-the-art operational radiation model. ecRad is actively used in research and operational contexts at various European weather and climate institutions and is written in a modular fashion in modern, object-oriented Fortran. We highlight the complexities and challenges that its numerous compute kernels and several alternative solvers pose, and, together with industrial and academic partners, discuss different strategies towards a maintainable cross-platform adaptation that runs efficiently across different HPC architectures.

Organizer(s): Michael Lange (ECMWF), and Balthasar Reuter (ECMWF)

Domain: Climate, Weather and Earth Sciences