Lambda Secures $320 Million to Expand GPU-Based AI Services

Lambda, a leading provider of AI cloud services, has raised $320 million to further develop its GPU-based offerings. These services center on AI training clusters built from thousands of Nvidia accelerators.

In an effort to establish itself as the premier AI compute platform globally, Lambda aims to deploy an extensive fleet of Nvidia GPUs, including the latest H100 Hopper accelerators and the upcoming H200 accelerators, which promise a significant performance boost. Additionally, Lambda plans to incorporate Nvidia's hybrid GH200 CPU/GPU superchips into its infrastructure.

Lambda's vision of becoming the go-to AI compute platform requires a robust infrastructure comprising ultra-fast networking, substantial data center capacity, and cutting-edge software development. To build it, the company intends to leverage the $320 million Series C round, which was led by prominent venture funds including B Capital, SK Telecom, and T. Rowe Price Associates, Inc., alongside existing investors such as Crescent Cove, Mercato Partners, and Gradient Ventures, among others.

With the new funding, Lambda aims to accelerate the expansion of its GPU cloud, providing AI engineering teams with unparalleled access to thousands of high-performance Nvidia GPUs. The inclusion of Nvidia Quantum-2 InfiniBand networking further bolsters Lambda’s commitment to delivering advanced GPU-based services to its customers.

Lambda’s substantial cash infusion not only signals confidence in the company’s potential but also reflects the growing demand for GPU-based AI services. As Lambda continues to innovate and expand its offerings, the prospects for the future of AI computing appear increasingly promising.

Key terms:
AI compute platform: A platform that provides the necessary infrastructure and resources for running artificial intelligence workloads.
GPU: Graphics Processing Unit, a type of processor designed for rendering graphics and accelerating AI and machine learning tasks.
Nvidia accelerators: Hardware devices developed by Nvidia that enhance the performance of AI workloads by offloading certain computations to specialized processing units.

For more information, visit Lambda.
