Zoho partners with NVIDIA for AI-driven SaaS applications

Zoho Corporation has announced its plan to leverage the NVIDIA AI accelerated computing platform, which includes NVIDIA NeMo as part of NVIDIA AI Enterprise software, to develop and deploy large language models (LLMs) within its software-as-a-service (SaaS) applications.

These models will be accessible to over 700,000 customers worldwide through ManageEngine and Zoho.com. Over the past year, Zoho has invested more than $10 million in NVIDIA’s AI technology and GPUs, with an additional $10 million planned for the next year. This announcement was made during the NVIDIA AI Summit in Mumbai.

Focus on User Privacy and Comprehensive AI Approach

Zoho emphasizes user privacy by designing its models to comply with privacy regulations from the outset, rather than retrofitting compliance later. The company aims to help businesses achieve a quick return on investment (ROI) by utilizing the full capabilities of NVIDIA AI software and accelerated computing, which increases throughput and reduces latency.

With over a decade of experience in AI technology, Zoho has integrated AI features into more than 100 products across its ManageEngine and Zoho divisions. The company’s approach is multimodal, focusing on contextual intelligence to assist users in making informed business decisions.

In addition to standard LLMs, Zoho is developing narrow, small, and medium language models, allowing it to tailor solutions to varying data sizes and use cases. Notably, these models will not be trained on customer data, keeping privacy a fundamental part of Zoho’s AI strategy.

Collaboration with NVIDIA

Through this partnership, Zoho will accelerate its LLMs on NVIDIA’s computing platform using NVIDIA Hopper GPUs. The company will leverage the NVIDIA NeMo platform to develop custom generative AI, encompassing LLMs, multimodal AI, and vision and speech capabilities.
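
To make the NeMo piece concrete, below is a minimal sketch of one workload the platform covers, speech-to-text, using a pretrained model from NVIDIA’s public catalog. The model name and audio file are illustrative placeholders and are not drawn from Zoho’s announced stack.

```python
# Minimal NeMo speech-to-text sketch.
# Model name and audio path are illustrative placeholders, not Zoho's actual setup.
import nemo.collections.asr as nemo_asr

# Download a small pretrained English Conformer-CTC model from NVIDIA's catalog.
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_conformer_ctc_small")

# Transcribe a local audio file (16 kHz mono WAV is the typical expected format).
transcripts = asr_model.transcribe(["meeting_recording.wav"])
print(transcripts[0])
```

The same framework also exposes training and fine-tuning workflows, which is what "developing custom generative AI" with NeMo typically involves beyond simply running pretrained checkpoints.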

Additionally, Zoho is testing NVIDIA TensorRT-LLM to enhance its LLMs for deployment, achieving a 60% increase in throughput and a 35% reduction in latency compared to a previously used open-source framework. The company is also optimizing other workloads, such as speech-to-text, on NVIDIA’s accelerated computing infrastructure.
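
For a sense of what serving an LLM through TensorRT-LLM looks like, the sketch below uses the library’s high-level Python LLM API with a small open checkpoint. The model, prompts, and sampling settings are placeholders and are unrelated to Zoho’s internal benchmarks.

```python
# Minimal TensorRT-LLM sketch using the high-level Python LLM API.
# Checkpoint and prompts are illustrative; this is not Zoho's deployment code.
from tensorrt_llm import LLM, SamplingParams

# Build (or reuse a cached) TensorRT engine for a small open checkpoint.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Batch several prompts so the runtime's in-flight batching can raise throughput.
prompts = [
    "Summarize this support ticket in one sentence:",
    "Draft a polite follow-up email to a customer:",
]
sampling = SamplingParams(temperature=0.2, max_tokens=64)

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```

Gains such as the throughput and latency figures Zoho cites come from engine-level optimizations (fused kernels, quantization, in-flight batching) applied when the model is compiled and served this way, rather than from changes to the model itself.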

Speaking about the current landscape of large language models, Ramprakash Ramamoorthy, Director of AI at Zoho Corporation, stated:

Many LLMs available today are primarily designed for consumer applications, providing limited benefits for businesses. At Zoho, our mission is to create LLMs specifically tailored for a diverse array of business use cases. By owning our entire tech stack and offering products across various business functions, we can integrate the key element that makes AI truly effective: context.

Commenting on the collaboration, Vishal Dhupar, Managing Director of Asia South at NVIDIA, stated:

The ability to select from a variety of AI model sizes empowers businesses to customize their AI solutions to meet specific needs, striking a balance between performance and cost-effectiveness. With NVIDIA’s AI software and accelerated computing platform, Zoho is developing a comprehensive range of models to address the diverse requirements of its business customers.