NVIDIA Omniverse Avatar Enables Real-Time Conversational AI Assistants
NVIDIA today announced NVIDIA Omniverse Avatar, a technology platform for generating interactive AI avatars.
Omniverse Avatar connects the company’s technologies in speech AI, computer vision, natural language understanding, recommendation engines and simulation. Avatars created in the platform are interactive characters with ray-traced 3D graphics that can see, speak, converse on a wide range of subjects, and understand naturally spoken intent.
Omniverse Avatar opens the door to the creation of AI assistants that are easily customisable for virtually any industry. These could help with the billions of daily customer service interactions — restaurant orders, banking transactions, making personal appointments and reservations, and more — leading to greater business opportunities and improved customer satisfaction.
“The dawn of intelligent virtual assistants has arrived,” said Jensen Huang, founder and CEO of NVIDIA. “Omniverse Avatar combines NVIDIA’s foundational graphics, simulation and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far-reaching.”
Omniverse Avatar is part of NVIDIA Omniverse™, a virtual world simulation and collaboration platform for 3D workflows currently in open beta with over 70,000 users.
In his keynote address at NVIDIA GTC, Huang shared various examples of Omniverse Avatar: Project Tokkio for customer support, NVIDIA DRIVE Concierge for always-on, intelligent services in vehicles, and Project Maxine for video conferencing.
In the first demonstration of Project Tokkio, Huang showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself — conversing on such topics as biology and climate science.
In a second Project Tokkio demo, he highlighted a customer-service avatar in a restaurant kiosk, able to see, converse with and understand two customers as they ordered veggie burgers, fries and drinks. The demonstrations were powered by NVIDIA AI software and Megatron 530B, which is currently the world’s largest customisable language model.
In a demo of the DRIVE Concierge AI platform, a digital assistant on the centre dashboard screen helps a driver select the best driving mode to reach his destination on time, and then follows his request to set a reminder once the car’s range drops below 100 miles.
Separately, Huang showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and content creation applications. An English-language speaker is shown on a video call in a noisy cafe, but can be heard clearly without background noise. As she speaks, her words are both transcribed and translated in real time into German, French and Spanish with her same voice and intonation.
Omniverse Avatar Key Elements
Omniverse Avatar uses elements from speech AI, computer vision, natural language understanding, recommendation engines, facial animation, and graphics, delivered through NVIDIA technologies.
These technologies are composed into an application and processed in real-time using the NVIDIA Unified Compute Framework. Packaged as scalable, customisable microservices, the skills can be securely deployed, managed and orchestrated across multiple locations by NVIDIA Fleet Command™.
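The composition described above can be pictured as a chain of independently deployable "skills", each consuming the previous stage's output. The sketch below is purely illustrative: the `Skill` and `AvatarPipeline` classes and the toy stand-ins for speech recognition, understanding and speech synthesis are hypothetical and are not NVIDIA APIs.

```python
# Hypothetical sketch of composing avatar "skills" into one pipeline.
# None of these names correspond to NVIDIA's Unified Compute Framework;
# they only illustrate chaining microservice-style stages.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Skill:
    """One independently deployable processing stage."""
    name: str
    run: Callable[[str], str]


class AvatarPipeline:
    """Chains skills so each stage's output feeds the next."""

    def __init__(self, skills: List[Skill]):
        self.skills = skills

    def process(self, user_input: str) -> str:
        data = user_input
        for skill in self.skills:
            data = skill.run(data)
        return data


# Toy stand-ins for speech recognition, language understanding,
# and speech synthesis.
asr = Skill("asr", lambda audio: audio.lower())
nlu = Skill("nlu", lambda text: f"intent:order({text})")
tts = Skill("tts", lambda intent: f"spoken<{intent}>")

pipeline = AvatarPipeline([asr, nlu, tts])
print(pipeline.process("One Veggie Burger"))
```

In a real deployment each stage would run as its own service so that, as the text notes, skills can be scaled, updated and orchestrated independently rather than shipped as one monolith.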
Learn more about Omniverse Avatar.