Lambda Secures $320 Million in Funding to Expand GPU Cloud

Lambda, a leading player in the AI industry, recently announced that it has secured $320 million in funding to further expand its GPU cloud infrastructure. The funding will allow Lambda to deploy tens of thousands of Nvidia GPUs, including the highly anticipated GH200 heavyweight GPU accelerators, to support training clusters for AI applications.

Beyond the GPU accelerators themselves, the funding will go towards the deployment of Quantum-2 InfiniBand networking, which provides up to 400 Gb/sec of bandwidth to each port, ensuring high-speed connectivity and efficient data transfer within the GPU cloud.
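
To put that figure in perspective, the short sketch below converts 400 Gb/sec into nominal throughput in bytes and estimates how long a single port would take to move a large model checkpoint between nodes. The 500 GB checkpoint size is a hypothetical illustration, not a figure from the article, and the calculation ignores protocol overhead.

```python
# Back-of-envelope view of 400 Gb/sec per port (Quantum-2 InfiniBand).
# The 500 GB checkpoint size is a hypothetical example, not from the article.

PORT_BANDWIDTH_GBIT = 400   # gigabits per second, nominal line rate per port
CHECKPOINT_SIZE_GB = 500    # hypothetical model checkpoint size, gigabytes

throughput_gb_per_s = PORT_BANDWIDTH_GBIT / 8        # ~50 GB/s nominal
transfer_seconds = CHECKPOINT_SIZE_GB / throughput_gb_per_s

print(f"Nominal port throughput: {throughput_gb_per_s:.0f} GB/s")
print(f"Time to move a {CHECKPOINT_SIZE_GB} GB checkpoint: {transfer_seconds:.0f} s")
```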

With this significant cash injection, Lambda is expected to be able to acquire around 10,000 accelerators while still leaving room for investment in networking and other supporting infrastructure. The company’s partnership with Nvidia has also enabled it to secure aggressive discounting on GPUs, freeing up further budget for networking and storage systems.
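
As a rough sanity check of that estimate, the sketch below divides the funding round by an assumed per-accelerator price. The $25,000 unit price and the 20 percent of the round reserved for networking, storage, and other infrastructure are illustrative assumptions, not figures disclosed by Lambda or Nvidia.

```python
# Rough sanity check of the "around 10,000 accelerators" estimate.
# The unit price and infrastructure share are illustrative assumptions,
# not figures disclosed by Lambda or Nvidia.

FUNDING_USD = 320_000_000       # size of the funding round
ASSUMED_GPU_PRICE_USD = 25_000  # assumed average price per accelerator
INFRA_SHARE = 0.20              # assumed share reserved for networking/storage

gpu_budget = FUNDING_USD * (1 - INFRA_SHARE)
estimated_accelerators = gpu_budget / ASSUMED_GPU_PRICE_USD

print(f"GPU budget:             ${gpu_budget:,.0f}")
print(f"Estimated accelerators: {estimated_accelerators:,.0f}")  # roughly 10,000
```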

Lambda, founded in 2012, has been at the forefront of GPU system innovation. Over the years, the company has expanded its offerings to include co-location services and the reselling of Nvidia’s DGX SuperPODs. Its move into cloud computing in 2018 proved to be a turning point, bringing in millions of dollars in funding to fuel its expansion.

This latest funding round, led by the US Innovative Technology Fund (USIT), has brought Lambda’s total raised capital to an impressive $432 million. New investors such as B Capital, SK Telecom, and T. Rowe Price Associates have joined existing investors in supporting Lambda’s vision.

Lambda’s success reflects a growing trend in the industry, with increasing demand for GPUs capable of training large language models. Other companies, such as CoreWeave and Voltage Park, have also entered this competitive arena, capitalizing on the need for affordable GPU resources.

The cost-effectiveness of GPU clouds is a significant advantage for organizations working with large language models. Renting thousands of GPUs on an hourly basis is often more affordable than purchasing and maintaining the hardware in-house. Lambda and its competitors have recognized this market opportunity and are offering GPU resources at attractive prices, giving businesses access to the computational power they need without significant upfront investment.
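
The rent-versus-buy trade-off behind that claim can be illustrated with a simple comparison. The hourly rate, purchase price, and utilization figures below are hypothetical placeholders, not prices quoted by Lambda or any other provider.

```python
# Simple rent-vs-buy illustration for a single GPU.
# All numbers are hypothetical placeholders, not vendor pricing.

HOURLY_RATE_USD = 2.50       # assumed rental price per GPU-hour
PURCHASE_PRICE_USD = 30_000  # assumed up-front hardware cost per GPU
HOURS_NEEDED = 2_000         # assumed GPU-hours of training required

rental_cost = HOURLY_RATE_USD * HOURS_NEEDED

print(f"Renting for {HOURS_NEEDED:,} GPU-hours: ${rental_cost:,.0f}")
print(f"Buying one GPU outright:      ${PURCHASE_PRICE_USD:,.0f}")
# Renting wins when utilization is modest; owning pays off only when the
# hardware runs near-continuously, and ownership adds power, cooling, and
# maintenance costs not modeled here.
```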

Lambda’s latest funding round sets the company up for further growth and reinforces its standing as a key player in the AI industry. With expanded GPU cloud capacity, Lambda is well placed to meet the increasing demand for high-performance computing resources in the AI space.

An FAQ section based on the main topics and information presented in the article:

Q: What is Lambda?
A: Lambda is a leading player in the AI industry that offers GPU cloud infrastructure and services.

Q: What is the recent funding announcement by Lambda?
A: Lambda recently secured $320 million in funding to expand its GPU cloud infrastructure.

Q: What will the funding be used for?
A: The funding will be used to deploy tens of thousands of Nvidia GPUs, including the highly anticipated GH200 heavyweight GPU accelerators, to support training clusters for AI applications. It will also fund the deployment of Quantum-2 InfiniBand networking.

Q: What is Quantum-2 InfiniBand networking?
A: Quantum-2 InfiniBand is a networking technology that provides up to 400 Gb/sec of bandwidth to each port, ensuring high-speed connectivity and efficient data transfer within the GPU cloud.

Q: How many accelerators will Lambda acquire with the funding?
A: Lambda is expected to acquire around 10,000 accelerators within its budget.

Q: Who is Lambda’s partner in this venture?
A: Lambda has a partnership with Nvidia, which enables it to secure aggressive discounting on GPUs.

Q: How much total capital has Lambda raised?
A: Lambda has raised a total of $432 million in capital.

Q: Which organizations led the recent funding round?
A: The recent funding round was led by the US Innovative Technology Fund (USIT), with participation from investors such as B Capital, SK Telecom, and T. Rowe Price Associates.

Definitions for key terms or jargon used within the article:

– AI: Stands for Artificial Intelligence, referring to the simulation of human intelligence in machines that are programmed to think and learn like humans.

– GPU: Stands for Graphics Processing Unit, a specialized electronic circuit that accelerates the creation and rendering of images, videos, and animations.

– GPU accelerators: Specialized GPUs designed to process large amounts of data quickly, often used in AI applications for training and computation.

– InfiniBand networking: A high-speed networking technology that provides low latency and high throughput for data centers and high-performance computing systems.

– Co-location services: Services that provide physical space, power, and cooling for servers and other hardware equipment.

– DGX SuperPODs: Nvidia’s high-performance computing solutions that combine multiple DGX systems to create a powerful AI infrastructure.

Suggested related links:

Lambda (official website of Lambda, the company mentioned in the article)

Nvidia (official website of Nvidia, a major player in GPUs and AI technology)

CoreWeave (example of another company in the GPU cloud industry)

Voltage Park (example of another company in the GPU cloud industry)
