Nvidia’s AI chatbot now supports Google’s Gemma model, voice queries, and more

From The Verge: Nvidia is updating its experimental ChatRTX chatbot with more AI models for RTX GPU owners. The chatbot, which runs locally on a Windows PC, can already use Mistral or Llama 2 to query personal documents that you feed into it, but now the list of supported AI models is growing to include Google’s Gemma, ChatGLM3, and even OpenAI’s CLIP model to make it easier to search your photos.

Nvidia first introduced ChatRTX as “Chat with RTX” in February as a demo app, and you’ll need an RTX 30- or 40-series GPU with 8GB of VRAM or more to run it. The app essentially creates a local chatbot server that you can access from a browser; feed it your local documents and even YouTube videos, and you get a powerful search tool complete with summaries and answers to questions about your own data.

Google’s Gemma model was designed to run directly on powerful laptops or desktop PCs, so its inclusion in ChatRTX is fitting. Nvidia’s app takes away some of the complexity of running these models locally, and the resulting chatbot interface lets you switch between models so you can find the one that works best with the data you want to analyze or search through.

ChatRTX, available as a 36GB download from Nvidia’s website, also now supports ChatGLM3, an open bilingual (English and Chinese) large language model based on the General Language Model (GLM) framework. OpenAI’s Contrastive Language–Image Pre-training (CLIP) model has also been added, which lets you search and interact with local photos by matching images against text queries, without needing to label the photos yourself.

View: Full Article