BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250822T115807Z
LOCATION:Room 5.2D02
DTSTART;TZID=Europe/Stockholm:20250618T090000
DTEND;TZID=Europe/Stockholm:20250618T110000
UID:submissions.pasc-conference.org_PASC25_sess106@linklings.com
SUMMARY:MS5F - Fast and Accurate Numerical Linear Algebra on Low-Precision
  Hardware: Algorithms and Error Analysis
DESCRIPTION:This minisymposium will address the state of the art in compu
 ter arithmetic techniques that\nallow accurate floating-point computation
 s to be simulated using low-precision floating-point or\ninteger operatio
 ns. Progress in this research area is important to hardware manufacturer
 s\nbecause it allows high-performance computers to reduce the number of c
 omplex high-precision\nfloating-point units on the chip and to increase t
 he number of low-precision floating-point units,\nwhich are especially us
 eful for machine learning; since efficient algorithms are available to s
 imulate\nhigh-precision computations, traditional applications that canno
 t tolerate errors associated with low\nprecision do not suffer. These tec
 hniques are increasingly researched internationally, and this\nminisympos
 ium includes four speakers from the UK, Japan, and the US.\n\nFast and Ac
 curate Algorithm Effici
 ently Using FMA for Matrix Multiplication\n\nWe introduce a new algorithm 
 for high-precision computations of matrix multiplication. While hardware-s
 upported floating-point operations are fast, they suffer from rounding err
 ors due to their finite precision. When the accuracy of computed results i
 s not satisfactory, high-precision computation ma...\n\n\nKATSUHISA OZAKI 
 (Shibaura Institute of Technology) and Toru Koizumi (Nagoya Institute of T
 echnology)\n---------------------\nDGEMM Emulation Using INT8 Matrix Engin
 es and its Rounding Error Analysis\n\nModern architectures are equipped wi
 th high-performance matrix engines optimized for low-precision matrix mult
 iplications used in machine learning models. Fully leveraging these archit
 ectures is the key to achieving superior performance in numerical algorith
 ms. This study aims to design methods for ...\n\n\nYuki Uchino (RIKEN Cent
 er for Computational Science), Katsuhisa Ozaki (Shibaura Institute of Tech
 nology), and Toshiyuki Imamura (RIKEN Center for Computational Science)\n-
 --------------------\nPrecision Redefined: Unlocking and Delivering the Fu
 ll Power of Modern GPUs for Scientific Computing\n\nOver the last decade G
 PU architectures have dramatically improved in both performance and energy
  efficiency. Due largely to the rising importance of artificial intellige
 nce (AI), especially in the areas of large language models (LLMs) and gene
 rative AI, this growth has been most pronounced in reduc...\n\n\nHarun Bay
 raktar (NVIDIA)\n---------------------\nError Analysis of Matrix Multiplic
 ation with Narrow Range Floating-Point Arithmetic\n\nHigh-performance comp
 uting hardware now supports many different floating-point formats, from 64
  bits to only 4 bits. While the effects of reducing precision in numerical
  linear algebra computations have been extensively studied, some of these 
 low precision formats also possess a very narrow range of...\n\n\nTheo Mar
 y (CNRS) and Mantas Mikaitis (University of Leeds)\n\nDomain: Computationa
 l Methods and Applied Mathematics\n\nSession Chair: Mantas Mikaitis (Unive
 rsity of Leeds)
END:VEVENT
END:VCALENDAR
