Meta Is Developing Its Own Chip and Data Center Design for AI Workloads


Facebook's parent company announced that it is developing a new AI-optimized data center design, as well as the second phase of its 16,000-GPU supercomputer for AI research.

Meta, the parent company of Facebook, has disclosed plans to create an entirely new data center architecture along with its own custom processor for artificial intelligence workloads.

“We are working on an ambitious plan to build the next generation of Meta’s AI infrastructure, and today we’re sharing some details on our progress,” Santosh Janardhan, head of infrastructure at Meta, stated in a blog post on Thursday. “This includes our first custom silicon chip for running AI models, a new AI-optimized data center design, and the second phase of our 16,000-GPU supercomputer for AI research.”

According to Janardhan, Meta’s custom processor for running AI models, the Meta Training and Inference Accelerator (MTIA), is designed to deliver greater compute power and efficiency than current CPUs.

The company stated that MTIA is tailored for internal workloads such as content understanding, feeds, generative AI, and ad ranking, and that the first version of the chip was developed in 2020.

Meta’s announcement that it is developing its own custom chips to run AI algorithms comes amid the rapid growth of large language models and generative AI, at a time when other major technology firms are either developing or have already released their own chips for AI workloads.

According to press reports earlier in the month, Microsoft has been collaborating with AMD on its own processor for AI workloads. AWS has also made available a processor designed specifically for AI applications.

In Thursday’s announcement, Meta added that its new data center design would be geared toward training AI models, a process that improves their performance as they analyze more data.

“This new data center will be an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network linking hundreds of thousands of AI chips together for data center-scale AI training clusters,” Janardhan wrote. He also noted that the new data center buildings would be faster and cheaper to construct than the company’s older facilities.

Alongside the new data center design, the company also announced that it has begun building AI supercomputers that will power its augmented reality features, enable real-time translation technology, and help train next-generation AI models.

About The Author:

Yogesh Naager is a content marketer who specializes in the cybersecurity and B2B space. Besides writing for the News4Hackers blog, he has also written for brands including CollegeDunia, Utsav Fashion, and NASSCOM. Naager entered the content field in an unusual way: he began his career as an insurance sales executive, where he developed an interest in simplifying difficult concepts. He combines this interest with a love of storytelling, which makes him an effective writer in the cybersecurity field. He also writes frequently for Craw Security.


Read More Articles Here:

PyPI Admins held user signup & package uploads: Know Why?

Doctor in Delhi lost 4.5 cr in the Worst Cyber Fraud in the City
