REGIONAL CHANNELS
GLOBAL
ZOHO TO INTEGRATE NVIDIA AI TO BUILD LLMS FOR RANGE OF BUSINESS CASES
Zoho Corporation, a global technology company headquartered in Chennai, announced that it will be leveraging the NVIDIA AI accelerated computing platform, which includes NVIDIA NeMo, part of NVIDIA AI Enterprise software, to build and deploy its large language models (LLMs) in its SaaS applications. Once the LLMs are built and deployed, they will be available to Zoho Corporation’s 700,000+ customers across ManageEngine and Zoho.com globally.
Over the past year, the company has invested more than US$10 million in NVIDIA’s AI technology and GPUs, and plans to invest an additional US$10 million in the coming year. The announcement was made during the NVIDIA AI Summit in Mumbai.
Ramprakash Ramamoorthy, Director of AI at Zoho Corporation, commented, “Many LLMs on the market are designed for consumer use, offering limited value for businesses. At Zoho, our mission is to develop LLMs tailored specifically for a wide range of business use cases. Owning our entire technology stack, with products spanning various business functions, allows us to integrate the essential element that makes AI truly effective: context.”
Zoho prioritises user privacy from the outset, creating models that are compliant with privacy regulations from the ground up rather than retrofitting compliance later. Its goal is to help businesses realise ROI swiftly and effectively by leveraging the full stack of NVIDIA AI software and accelerated computing to increase throughput and reduce latency.
Zoho has been building its own AI technology for over a decade and adding it contextually to its wide portfolio of over 100 products across its ManageEngine and Zoho divisions. Its approach to AI is multimodal, geared towards deriving contextual intelligence that can help users make business decisions.
The company is building narrow, small and medium language models, which are distinct from LLMs. Offering models of different sizes delivers better results across a variety of use cases, and relying on multiple models means that businesses without large amounts of data can still benefit from AI.
Privacy is also a core tenet of Zoho’s AI strategy, and its LLMs will not be trained on customer data.
“The ability to choose from a range of AI model sizes empowers businesses to tailor their AI solutions precisely to their needs, balancing performance with cost-effectiveness,” said Vishal Dhupar, Managing Director, Asia South at NVIDIA.
Through this collaboration, Zoho will be accelerating its LLMs on the NVIDIA accelerated computing platform with NVIDIA Hopper GPUs, using the NVIDIA NeMo end-to-end platform for developing custom generative AI, including LLMs, multimodal, vision and speech AI.
Additionally, Zoho is testing NVIDIA TensorRT-LLM to optimise its LLMs for deployment, and has already seen a 60% increase in throughput and a 35% reduction in latency compared with a previously used open-source framework. The company is also accelerating other workloads like speech-to-text on NVIDIA accelerated computing infrastructure. •
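For readers curious what deploying a model through TensorRT-LLM can look like, here is a minimal sketch using the library’s high-level Python LLM API. It is illustrative only and not Zoho’s pipeline: the checkpoint name and prompt are placeholder assumptions, and the throughput and latency figures quoted above are Zoho’s reported results, not something this snippet measures.

```python
# Minimal sketch of serving a model with the TensorRT-LLM high-level LLM API.
# Illustrative only: the checkpoint and prompt are placeholders, not Zoho's
# models or data. Requires a CUDA-capable GPU and the tensorrt_llm package.
from tensorrt_llm import LLM, SamplingParams


def main():
    # Loading the checkpoint builds an optimised TensorRT engine for the
    # local GPU; this compilation step is where deployment-time speedups
    # of the kind described above typically come from.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder checkpoint

    sampling = SamplingParams(max_tokens=64, temperature=0.2)

    prompts = [
        "Summarise this support ticket in one sentence: customer cannot "
        "log in after resetting their password.",
    ]

    # generate() batches the prompts and runs them on the compiled engine.
    for output in llm.generate(prompts, sampling):
        print(output.outputs[0].text)


if __name__ == "__main__":
    main()
```

Under the hood, TensorRT-LLM compiles the model into a GPU-specific engine with optimisations such as kernel fusion and in-flight batching, which is how gains like those Zoho reports are generally achieved.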
Ramprakash Ramamoorthy, Director of AI, Zoho Corporation