Please note that the Zoom recordings are dual mono rather than stereo. Some pre-recorded sessions have been uploaded to YouTube to preserve the independent dual-channel audio, but some sessions may still be affected.

Modern console, mobile and VR games typically juggle tens of thousands of compressed samples at once, playing a select hundred or more through a cascade of codecs, filters and DSP effects updated tens of times per second. Ambisonics, granular synthesis, multiple reverbs, psychoacoustic analysis, and live and interleaved streams are all thrown into the mix.
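To make that cascade concrete, here is a minimal per-voice sketch in C++: decode a block of compressed audio, filter it, and mix it onto a bus. Everything in it (the block size, the sine stand-in for a codec, the one-pole filter, and all names such as `processVoice`) is an illustrative assumption, not any particular engine's pipeline.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr std::size_t kBlockSize = 256;  // assumed DSP block length

// Stand-in for a codec stage: a real engine decodes ADPCM/Vorbis/Opus here.
void decodeBlock(float* out, std::size_t n, float& phase) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = std::sin(phase);  // placeholder "decoded" audio
        phase += 0.05f;
    }
}

// One-pole low-pass: about the cheapest per-voice filter stage there is.
void lowPass(float* buf, std::size_t n, float coeff, float& state) {
    for (std::size_t i = 0; i < n; ++i) {
        state += coeff * (buf[i] - state);
        buf[i] = state;
    }
}

// Accumulate this voice into the mix bus at its current gain.
// The bus must hold at least n samples.
void mixInto(std::vector<float>& bus, const float* buf, std::size_t n, float gain) {
    for (std::size_t i = 0; i < n; ++i) bus[i] += gain * buf[i];
}

// Render one voice for one block: decode, filter, mix. A real engine runs
// this chain for every audible voice, every block, on a hard deadline.
void processVoice(std::vector<float>& bus, float gain,
                  float& phase, float& filterState) {
    float block[kBlockSize];
    decodeBlock(block, kBlockSize, phase);
    lowPass(block, kBlockSize, 0.2f, filterState);
    mixInto(bus, block, kBlockSize, gain);
}
```

Multiply this chain by a hundred or more simultaneous voices, then update its parameters from game state tens of times per second, and the engineering problem the talk describes starts to take shape.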
Game audio programmers design, create and embed an automated mix engineer into each game. Days or years later, players call the shots, place the camera and trigger a custom mix the sound designers have never heard, one that must nevertheless be consistent, informative and immersive.
This talk explains the audio tech stack used on consoles, VR and mobile devices. It draws on decades of professional game programming, explaining (amongst other things) why you can never have enough voices, how to cope when 96 tyres bite the asphalt on the first corner of a Formula 1 simulation, the merits of multiple listeners, why most game sounds play at pitches other than those recorded, which standards are customarily ignored and why, how audio is the most real-time part of game software, and why only the worst case matters.
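As a rough illustration of that voice-limit problem, consider 96 tyre-contact sounds requested at once against a far smaller mixing budget. The sketch below keeps only the most audible requests and culls the rest; the `VoiceRequest` type, the `applyVoiceBudget` function and the gain-based priority are assumptions for illustration, not the prioritisation scheme discussed in the talk.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical request for a sound this frame, e.g. one tyre biting asphalt.
struct VoiceRequest {
    int   soundId;
    float priority;  // e.g. audibility: gain after distance attenuation
};

// Keep only the loudest maxVoices requests; drop the quiet tail. Real
// engines refine priority with category weights, voice age and masking,
// and budget for the worst case rather than the average one.
std::vector<VoiceRequest> applyVoiceBudget(std::vector<VoiceRequest> requests,
                                           std::size_t maxVoices) {
    if (requests.size() > maxVoices) {
        // Partial sort puts the highest-priority requests first; cheaper
        // than fully sorting all 96 when only the budgeted few survive.
        std::partial_sort(requests.begin(), requests.begin() + maxVoices,
                          requests.end(),
                          [](const VoiceRequest& a, const VoiceRequest& b) {
                              return a.priority > b.priority;
                          });
        requests.resize(maxVoices);  // the rest are culled or "stolen"
    }
    return requests;
}
```

Since this culling decision is remade tens of times per second, a voice dropped on one corner can win its slot back on the next, which is one reason the abstract argues that only the worst case matters.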
ALL REBROADCAST SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY:
https://conference.audio.dev