Ambarella Showcases Low Power Multi-Modal LLMs on N1 SoC Series

Ambarella, Inc., an edge AI semiconductor company, is demonstrating multi-modal large language models (LLMs) running on its new N1 SoC series. The N1 SoCs offer significantly better power-per-inference efficiency than leading GPU solutions, enabling generative AI to be deployed in edge endpoint devices and on-premise hardware. Applications for this technology span industries including video security analysis, robotics, and industrial automation.

Ambarella’s optimized generative AI processing will initially be available on its mid- to high-end SoCs, covering a range of power requirements. The company states that its SoCs are up to three times more power-efficient per generated token than GPUs and other AI accelerators, enabling cost-effective deployment and rapid integration into products.
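"Power efficiency per generated token" is simply energy consumed divided by tokens produced. The sketch below illustrates the metric behind the claimed 3x figure; the wattage and throughput numbers are hypothetical placeholders for illustration, not published specifications for the N1 or any GPU.

```python
def joules_per_token(power_watts: float, tokens_per_second: float) -> float:
    """Energy cost of one generated token: sustained power divided by
    generation throughput (W / (tokens/s) = J/token)."""
    return power_watts / tokens_per_second

# Hypothetical figures chosen only to illustrate the metric.
gpu_jpt = joules_per_token(power_watts=300.0, tokens_per_second=30.0)
n1_jpt = joules_per_token(power_watts=50.0, tokens_per_second=15.0)

efficiency_gain = gpu_jpt / n1_jpt
print(f"GPU (hypothetical): {gpu_jpt:.2f} J/token")
print(f"Edge SoC (hypothetical): {n1_jpt:.2f} J/token")
print(f"Relative efficiency: {efficiency_gain:.1f}x")
```

Note that the metric rewards low power draw even at modest throughput, which is why it favors edge silicon over data-center GPUs for always-on endpoint workloads.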

According to Les Kohn, CTO and co-founder of Ambarella, generative AI networks are revolutionizing the capabilities of edge devices. The N1 series of SoCs offers world-class multi-modal LLM processing and high performance at an attractive power/price ratio.

Alexander Harrowell, Principal Analyst for Advanced Computing, predicts that generative AI will enhance nearly every edge application within the next 18 months, with the focus shifting towards performance per watt and integration within the edge ecosystem rather than raw throughput.

Ambarella’s AI SoCs are supported by the Cooper Developer Platform, and pre-ported, optimized LLM models such as Llama-2 and LLaVA are available for download from the Cooper Model Garden. The SoCs combine efficient video and AI processing at low power consumption, making them suitable for applications that require both visual input and natural language understanding.

With the deployment of generative AI processing, devices such as security systems, robots, and industrial equipment can benefit from improved context understanding and scene analysis. These systems rely on camera input and natural language processing, and on-device processing ensures speed, privacy, and cost savings. Ambarella’s SoCs offer a local processing solution that is tailored to specific application scenarios, promoting customization and efficiency.

Source: mgz.com.tw