Lisa Su demonstrates the AMD Instinct MI300 accelerator, with 146 billion transistors, for the first time.
AMD first discussed the Instinct MI300 compute accelerator in general terms last summer, but it was only during its presentation at CES 2023 in January that the company clarified the layout and key specifications of this long-awaited product, which is set to arrive in the server segment this year. Its chiplet design combines several dissimilar dies with a total of 146 billion transistors.
As Lisa Su explained during the presentation, the Instinct MI300's complex layout places chiplets not only side by side but also in multiple tiers. For the first time, the accelerator combines CPU and GPU cores in a single package; the system sees them as one device with equal access to the HBM3 memory located alongside them on a shared substrate. AMD's CEO justifiably called the Instinct MI300 the most complex chip the company has ever created.
AMD stated that the Instinct MI300 combines GPU cores based on the CDNA 3 architecture with 24 Zen 4 CPU cores, and that its HBM3 memory capacity reaches 128 GB. Lisa Su demonstrated a sample of the accelerator on stage, marking its first public appearance. As she explained, the chip's design places nine 5 nm dies on top of four 6 nm dies, with stacks of HBM3 memory chips positioned along the sides.
Compared with the Instinct MI250X, the new accelerator delivers eight times the compute performance and five times the power efficiency in AI workloads. Using the Instinct MI300 can cut the training time for such systems from months to weeks, Lisa Su explained, while significantly reducing the associated energy costs. Samples of the Instinct MI300 are already running successfully in AMD's labs, and accelerators of this model are expected to reach the market in the second half of the year.