
Prospects of In- and Near-Memory Computing for Future AI Systems
October 16 @ 1:00 pm - 2:00 pm
Technical seminar with the following abstract:
Future data-intensive workloads, particularly from artificial intelligence, have pushed conventional computing architectures to their limits of energy efficiency and throughput, due to the scale of both the computations and the data they involve. In- and near-memory computing are breakthrough paradigms that provide approaches for overcoming these limits. But, in doing so, they introduce new fundamental tradeoffs that span the device, circuit, and architectural levels. This presentation starts by describing how in-/near-memory computing derives its gains, and then examines the critical tradeoffs, looking concretely at recent designs across memory technologies (SRAM, RRAM, MRAM). It then turns to key architectural considerations and how these are likely to drive future technological needs and application alignments. Finally, the presentation analyzes the potential for leveraging application-level relaxations (e.g., noise sensitivity) through algorithmic approaches.
Speaker(s): Naveen Verma
Room: MCLD 3038, Bldg: MacLeod Building, 2356 Main Mall, Vancouver, British Columbia, Canada, V6T 1Z4