BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML PASC25//EN
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250822T115809Z
LOCATION:Room 6.0D13
DTSTART;TZID=Europe/Stockholm:20250618T113000
DTEND;TZID=Europe/Stockholm:20250618T130000
UID:submissions.pasc-conference.org_PASC25_sess173@linklings.com
SUMMARY:AP2D - ACM Papers Session 2D
DESCRIPTION:Leveraging Large Language Models for Code Translation and Soft
 ware Development in Scientific Computing\n\nThe emergence of foundational 
 models and generative artificial intelligence (GenAI) is poised to transfo
 rm productivity in scientific computing, especially in code development, r
 efactoring, and translating from one programming language to another. Howe
 ver, because the output of GenAI cannot be guara...\n\n\nAkash Dhruv and A
 nshu Dubey (Argonne National Laboratory)\n---------------------\nCAFE AU L
 AIT: Compute-Aware Federated Augmented Low-Rank AI Training\n\nFederated f
 inetuning is essential for unlocking the knowledge embedded in pretrained 
 Large Language Models (LLMs) when data is distributed across clients. Unli
 ke single-institution finetuning, federated finetuning enables collaborati
 on across decentralized datasets while preserving data privacy. To ...\n\n
 \nJiayi Wang, John Gounley, and Heidi Hanson (Oak Ridge National Laborator
 y)\n---------------------\nHiPerRAG: High-Performance Retrieval Augmented 
 Generation for Scientific Insights\n\nThe volume of scientific literature 
 is growing exponentially, leading to underutilized discoveries, duplicated
  efforts, and limited cross-disciplinary collaboration. Retrieval-Augmente
 d Generation (RAG) offers a way to assist scientists by improving the fact
 uality of Large Language Models (LLMs) in ...\n\n\nOzan Gokdemir, Carlo Si
 ebenschuh, and Alexander Brace (University of Chicago, Argonne National La
 boratory); Azton Wells (Argonne National Laboratory); Brian Hsu (Argonne N
 ational Laboratory, University of Chicago); Kyle Hippe and Priyanka Setty 
 (University of Chicago, Argonne National Laboratory); Aswathy Ajith and J.
  Gregory Pauloski (University of Chicago); Varuni Sastry, Sam Foreman, Hui
 huo Zheng, Heng Ma, Bharat Kale, and Nicholas Chia (Argonne National Labor
 atory); Thomas Gibbs (NVIDIA Inc.); Michael Papka (Argonne National Labora
 tory, University of Illinois Chicago); Thomas Brettin and Francis Alexande
 r (Argonne National Laboratory); Anima Anandkumar (California Institute of
  Technology); Ian Foster (Argonne National Laboratory, University of Chica
 go); Rick Stevens and Venkatram Vishwanath (Argonne National Laboratory); 
 Arvind Ramanathan (Argonne National Laboratory, University of Chicago); an
 d Thomas Uram (Argonne National Laboratory)\n\nDomain: Engineering, Life S
 ciences, Computational Methods and Applied Mathematics\n\nSession Chair: Z
 haohui Song (Politecnico di Milano, Italy)
END:VEVENT
END:VCALENDAR
