Microsoft has spent hundreds of millions of dollars to build the supercomputer that ChatGPT runs on.


Microsoft announced powerful new, highly scalable virtual machines that integrate the latest NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking.

Microsoft has spent hundreds of millions of dollars on the construction of a massive supercomputer to help power OpenAI's ChatGPT chatbot. In a report, Microsoft explains how it built the powerful Azure AI infrastructure used by OpenAI and how its systems are becoming even more robust.

To build the supercomputer that powers OpenAI's projects, Microsoft says it connected thousands of NVIDIA graphics processing units (GPUs) on its Azure cloud computing platform. This, in turn, allowed OpenAI to train increasingly powerful models and "unlock the AI capabilities" of tools like ChatGPT and Bing.

Scott Guthrie, Microsoft's executive vice president of cloud and AI, said the company spent several hundred million dollars on the project, according to a statement. And while that may seem like a drop in the bucket for Microsoft, which recently extended its multi-year, multi-billion-dollar investment in OpenAI, it certainly shows that the company is ready to pour even more money into the AI space.

When Microsoft invested $1 billion in OpenAI in 2019, it agreed to build a massive, state-of-the-art supercomputer for the artificial intelligence research startup. The only problem: Microsoft had nothing like what OpenAI needed and wasn't entirely sure it could build something that big on its Azure cloud service without breaking it.

OpenAI was trying to train an ever-growing family of artificial intelligence programs called models, which ingested larger volumes of data and learned more and more parameters, the variables the AI system discovers through training and retraining. This meant OpenAI needed access to powerful cloud computing services for long periods of time.
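To make that concrete, here is a minimal, illustrative sketch in PyTorch (the toy model and data are assumptions for illustration, not anything OpenAI or Microsoft has published) of what "learning parameters" means: the model adjusts its internal variables on its own as it processes training data, and the count of those variables is how model size is usually measured.

```python
# Minimal sketch: "parameters" are the variables a model adjusts by itself
# during training. Toy example only; real LLMs have billions of parameters.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # tiny model: 10 weights + 1 bias = 11 parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 10), torch.randn(64, 1)  # stand-in training data

for step in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()   # compute how each parameter should change
    optimizer.step()  # update the learned parameters

# Model "size" is usually reported as this count (GPT-3: ~175 billion).
print(sum(p.numel() for p in model.parameters()))  # -> 11
```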

To meet this challenge, Microsoft had to find ways to link tens of thousands of NVIDIA A100 graphics chips together and to change how it racked servers to avoid power outages.

“We built a system architecture that could operate and be reliable at a very large scale. That is what made ChatGPT possible,” said Nidhi Chappell, Microsoft general manager for Azure AI infrastructure. “It's one model that came out of that. There will be many, many others.”

The technology enabled OpenAI to launch ChatGPT, the viral chatbot that attracted more than a million users within days of its public launch in November and is now being folded into the business models of other companies, from firms run by billionaire hedge fund founder Ken Griffin to delivery services.

As generative AI tools like ChatGPT gain interest from businesses and consumers, there will be increased pressure on cloud service providers like Microsoft, Amazon, and Google to ensure their data centers can provide the enormous computing power required.

Now Microsoft is using the same set of resources it built for OpenAI to train and run its own large AI models, including the new Bing search bot introduced last month. It also sells the system to other customers. And the software giant is already working on the next generation of the AI supercomputer, as part of an expanded deal with OpenAI under which Microsoft added $10 billion to its investment.

“We don't build these as something custom; it started as something custom, but we always built it in a way that generalized it, so that anyone who wants to train a large language model can take advantage of the same improvements,” Guthrie said in an interview. “That really helped us become a better cloud for AI overall.”

Training a massive AI model requires a large number of graphics processing units connected in one place, like the AI supercomputer Microsoft assembled. Once a model is in use, answering all the questions users pose (a phase called inference) requires a slightly different setup. Microsoft also deploys graphics chips for inference, but those processors (hundreds of thousands of them) are geographically dispersed across the company's 60-plus data center regions. Now the company is adding the latest NVIDIA graphics chip for AI workloads (the H100) and the latest version of NVIDIA's InfiniBand networking technology for even faster data sharing.
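As a rough illustration of that training-versus-inference distinction, here is a hedged PyTorch sketch (the toy model is an assumption, not Microsoft's or OpenAI's actual stack): training couples many co-located GPUs that must exchange gradients on every step, which is why fast interconnects like InfiniBand matter, while serving a single inference request involves no such cross-GPU traffic and can run on dispersed hardware.

```python
# Sketch of why training wants many co-located GPUs while inference can be
# dispersed. Toy model only; real systems span thousands of GPUs linked by
# InfiniBand, not a single machine.
import torch
import torch.nn as nn

model = nn.Linear(512, 512)  # stand-in for a large language model

# Training: every optimizer step averages gradients across all GPUs, so the
# devices must constantly exchange data (hence the high-speed interconnect).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model.cuda())  # single-node data parallelism

# Inference: answering one request is independent of every other request,
# so serving hardware can be spread across 60+ data center regions.
with torch.no_grad():
    device = next(model.parameters()).device
    output = model(torch.randn(1, 512, device=device))
print(output.shape)  # torch.Size([1, 512])
```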

Microsoft's decision to partner with OpenAI was founded on the belief that this unprecedented scale of infrastructure would produce results (new AI capabilities, a new type of programming platform) that Microsoft could turn into products and services offering real benefits to customers, Microsoft's Phil Waymouth said. That belief fueled the two companies' ambition to overcome every technical challenge required to build it, and to keep pushing the boundaries of AI supercomputing.

Source: https://news.microsoft.com/


