Nvidia’s ChatRTX, the chatbot that runs locally on your own computer, now supports additional AI models.
The range of compatible AI models is expanding to include Google’s Gemma, ChatGLM3, and OpenAI’s CLIP model, the last of which enables photo search.
Nvidia initially launched ChatRTX, previously called “Chat with RTX,” in February as a demonstration app. To use it, you need an RTX 30- or 40-series GPU with at least 8GB of VRAM.
The app sets up a local chatbot server that can be accessed through a browser. You can input your own documents and YouTube videos into it to create a robust search tool that provides summaries and answers based on your data.
Google’s Gemma model is built to operate on high-powered laptops or desktop PCs, making it a good fit for ChatRTX. Nvidia’s app simplifies the process of running these models locally.
The chatbot interface provided allows you to choose among different models to best match the data you wish to analyze or search.
ChatRTX is a 36GB download from Nvidia’s website and now supports ChatGLM3, a large language model that works in both English and Chinese. It also adds OpenAI’s CLIP model, which is trained to recognize images, letting you search and interact with your local photos.
Nvidia is also adding voice support to ChatRTX. The app now integrates Whisper, an AI speech recognition system, so you can search your data by voice.
What we think
I think Nvidia’s ChatRTX could become very popular with people who have the right hardware. Being able to swap between different AI models and search your own files locally is genuinely useful, and voice support makes it even more convenient since you can simply talk to it. The catch is that it requires a powerful Nvidia RTX GPU, so not everyone can run it, and at 36GB it’s a hefty download that demands a fast internet connection.