English: This one tile can do 9 quadrillion floating-point calculations per second, optimized for AI training. I saw it up close today. Each 15 kW tile holds a 5x5 array of blocks, each with 12 D1 multi-chip-module stacks, with RF shielding between them. These are custom AI chips by Tesla, with 362 teraFLOPS per block. Connectors around the perimeter provide 36 Tb/s of inter-tile bandwidth. The Dojo ExaPod supercomputer has 120 of these tiles, for 1.1 exaFLOPS of AI-optimized compute.
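The figures quoted in the caption can be cross-checked against each other. A minimal sketch, assuming "block" means one unit of the 5x5 array per tile at 362 teraFLOPS each, and 120 tiles per ExaPod as stated:

```python
# Consistency check of the compute figures in the caption above.
# All constants are taken directly from the caption; the grouping
# (blocks per tile, tiles per ExaPod) is an assumption from its wording.

TFLOPS_PER_BLOCK = 362      # teraFLOPS per block (caption)
BLOCKS_PER_TILE = 5 * 5     # 5x5 array of blocks per tile (caption)
TILES_PER_EXAPOD = 120      # tiles per Dojo ExaPod (caption)

tile_pflops = TFLOPS_PER_BLOCK * BLOCKS_PER_TILE / 1000
exapod_eflops = tile_pflops * TILES_PER_EXAPOD / 1000

print(f"per tile:   {tile_pflops:.2f} petaFLOPS")   # ~9 PFLOPS ("9 quadrillion")
print(f"per ExaPod: {exapod_eflops:.2f} exaFLOPS")  # ~1.1 exaFLOPS
```

The numbers line up: 25 blocks at 362 TFLOPS give about 9.05 petaFLOPS per tile, and 120 tiles give about 1.09 exaFLOPS, matching the caption's "9 quadrillion" and "1.1 exaFLOPS" figures.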
to share – to copy, distribute and transmit the work
to remix – to adapt the work
Under the following conditions:
attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
CC BY 2.0 (Creative Commons Attribution 2.0): https://creativecommons.org/licenses/by/2.0