BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250822T115807Z
LOCATION:Room 6.0D13
DTSTART;TZID=Europe/Stockholm:20250618T123000
DTEND;TZID=Europe/Stockholm:20250618T130000
UID:submissions.pasc-conference.org_PASC25_sess173_pap119@linklings.com
SUMMARY:CAFE AU LAIT: Compute-Aware Federated Augmented Low-Rank AI Traini
 ng
DESCRIPTION:Jiayi Wang, John Gounley, and Heidi Hanson (Oak Ridge National
  Laboratory)\n\nFederated finetuning is essential for unlocking the knowle
 dge embedded in pretrained Large Language Models (LLMs) when data is distr
 ibuted across clients. Unlike single-institution finetuning, federated fin
 etuning enables collaboration across decentralized datasets while preservi
 ng data privacy. To address the high computing costs of LLM training and i
 mprove energy efficiency in Federated Learning (FL), Low-Rank Adaptation (
 LoRA) has gained popularity due to its reduced number of trainable paramet
 ers. However, this approach assumes all clients have sufficient computing 
 resources, which is often unrealistic due to the heterogeneity of resource
 s across clients. While some clients may access powerful GPUs, others have
  limited or no such resources. Federated finetuning using synthetic data a
 llows participation without local LLM training but introduces a performanc
 e gap compared to local updates. To address this, we propose a novel two-s
 tage algorithm leveraging the storage and computing power of a strong serv
 er. In the first stage, resource-constrained clients generate synthetic da
 ta under the coordination of the strong server, which stores the data. I
 n the second stage, the strong server uses this synthetic data on behalf o
 f constrained clients to perform federated LoRA finetuning alongside clien
 ts with sufficient resources. This ensures participation from all client
 s. Experimental results demonstrate that incorporating local updates fro
 m even a small fraction of clients improves performance compared to usin
 g synthetic data for all clients. Additionally, we integrate the Gaussia
 n mechanism in both stages to ensure client-level differential privacy.\n
 \nDomain: Engineering, Life Sciences, Computational Methods and Applied Ma
 thematics\n\nSession Chair: Zhaohui Song (Politecnico di Milano, Italy)\n
 \n
END:VEVENT
END:VCALENDAR
