GPU Computing: Past, Present and Future (47 mins, ~21 MB)

Mar 02, 2012 · 47 min

Episode description

The past five years have seen the use of graphical processing units for computation grow from being the interest of a handful of early adopters to a mainstream technology used in the world’s largest supercomputers. The CUDA GPU programming ecosystem today provides all that a developer needs to accelerate scientific applications with GPUs. The architecture of a GPU has much to offer to the future of large-scale computing, where energy efficiency is paramount. NVIDIA is the lead contractor for the DARPA-funded Echelon project investigating efficient parallel computer architectures for the exascale era.

Timothy Lanfear is a Solution Architect in NVIDIA’s Professional Solutions Group, promoting the use of the NVIDIA Tesla™ computing solution for high-performance computing. He has twenty years’ experience in HPC, starting as a computational scientist in British Aerospace’s corporate research centre and then moving to technical pre-sales roles with Hitachi, ClearSpeed, and most recently NVIDIA. He has a degree in Electrical Engineering and a PhD for research in graph theory, both from Imperial College London.