
INTELLIGENT DATACENTRES

G42, Cerebras partner to deliver Condor Galaxy with nine interconnected supercomputers for AI training

(Left to right) Andrew Feldman, CEO of Cerebras Systems; and Talal Alkaissi, CEO of G42 Cloud

Cerebras Systems, a provider of solutions for accelerating generative AI, and G42, the UAE-based technology holding group, have announced Condor Galaxy, a network of nine interconnected supercomputers offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), delivers 4 exaFLOPs of compute and 54 million cores.

Cerebras and G42 plan to deploy two more such supercomputers, CG-2 and CG-3, in the US in early 2024. With a planned total capacity of 36 exaFLOPs, the supercomputing network is intended to transform the advancement of AI globally. CG-1 is the first of three 4 exaFLOP AI supercomputers to be deployed across the US.
Located at Colovore, a high-performance colocation facility in Santa Clara, California, CG-1 is operated by Cerebras under US law, ensuring that state-of-the-art AI systems are not used by adversary states. Each Cerebras CS-2 system is designed, packaged, manufactured, tested, and integrated in the US. Cerebras is the only AI hardware company to package processors and manufacture AI systems in the US.
CG-1 is the first of three 4 exaFLOP AI supercomputers, CG-1, CG-2 and CG-3, built and located in the US by Cerebras and G42 in partnership. These three AI supercomputers will be interconnected into a 12 exaFLOP, 162 million core distributed AI supercomputer consisting of 192 Cerebras CS-2s and fed by more than 218,000 high-performance AMD EPYC CPU cores. G42 and Cerebras plan to bring online six additional Condor Galaxy supercomputers in 2024, bringing the total compute power to 36 exaFLOPs. Driven by more than 70,000 AMD EPYC processor cores, Cerebras' Condor Galaxy 1 will make vast computational resources accessible to researchers and enterprises as they push AI forward. Access to CG-1 is available now.
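For readers tallying the headline figures, the sketch below is a quick back-of-the-envelope check, in plain Python, using only the numbers quoted in this article; the per-system CS-2 count is derived from those figures rather than quoted directly.

```python
# Consistency check of the figures quoted in the article (illustrative only).
EXAFLOPS_PER_SYSTEM = 4          # CG-1, CG-2 and CG-3 each deliver 4 exaFLOPs
CORES_PER_SYSTEM = 54_000_000    # CG-1 is quoted at 54 million cores
CS2_PER_SYSTEM = 192 // 3        # 192 CS-2s across three systems -> 64 each (derived)

three_system_flops = 3 * EXAFLOPS_PER_SYSTEM   # 12 exaFLOPs, as stated
three_system_cores = 3 * CORES_PER_SYSTEM      # 162 million cores, as stated
nine_system_flops = 9 * EXAFLOPS_PER_SYSTEM    # 36 exaFLOPs planned for the full constellation

print(three_system_flops, three_system_cores, CS2_PER_SYSTEM, nine_system_flops)
# -> 12 162000000 64 36
```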
CG-1 offers native support for training with long sequence lengths, up to 50,000 tokens out of the box, without any special software libraries. Programming CG-1 is done entirely without complex distributed programming languages, meaning even the largest models can be run without weeks or months spent distributing work over thousands of GPUs.
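To illustrate what "without complex distributed programming" means in practice, here is a minimal sketch of a single-device-style training loop in generic PyTorch. It is purely illustrative and is not the Cerebras API; the model, optimiser and data are stand-ins. The point is that the user's code contains none of the process-group setup, sharding logic or device-mesh bookkeeping that manual multi-GPU distribution normally requires.

```python
# Illustrative only: a generic single-device-style training loop in PyTorch.
# This is NOT Cerebras code; it shows the kind of loop the article contrasts
# with manually distributed multi-GPU training.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(8, 512)          # stand-in input batch
    target = torch.randn(8, 512)     # stand-in labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()                  # no all-reduce, pipeline or tensor-parallel code here
    optimizer.step()
```

Spreading the same model over thousands of GPUs would typically also require initialising process groups, choosing a parallelism strategy and restructuring the model itself, which is the work the article says CG-1 removes.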
“Collaborating with Cerebras to rapidly deliver the world's fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting,” said Talal Alkaissi, CEO of G42 Cloud, a subsidiary of G42.
“Delivering 4 exaFLOPs of AI compute at FP16, CG-1 dramatically reduces AI training timelines while eliminating the pain of distributed compute,” said Andrew Feldman, CEO of Cerebras Systems.
Many cloud companies have announced massive GPU clusters that cost billions of dollars to build but are extremely difficult to use. Distributing a single model over thousands of small GPUs takes months of work by dozens of people with rare expertise. CG-1 eliminates this challenge: setting up a generative AI model takes minutes, not months, and can be done by a single person. •