Nvidia’s Chat with RTX: New AI Chatbot Runs Offline and Allows You to Create Custom Chatbots

Feb 13, 2024

Nvidia Chat with RTX

AI technology is advancing rapidly, and many tech companies have released AI products to keep pace. Now tech giant Nvidia has released a new AI chatbot, Chat with RTX, that runs locally on your computer and lets users create custom AI chatbots that answer questions based on the local data they provide.

Nvidia’s CEO, Jensen Huang, recently spoke at the World Government Summit about the importance and value of data. The announcement of a chatbot that runs locally and keeps users’ data private underscores that commitment.

What is Nvidia’s “Chat with RTX”?

Chat with RTX is software that runs open-source large language models locally on your PC, without an internet connection. By default, it ships with two models: Meta’s Llama 2 and Mistral. It lets users build their own custom AI chatbots by feeding their local data to these LLMs, and because everything runs on the local GPU rather than in the cloud, responses come back quickly.

Nvidia's Chat with RTX Interface

Chat with RTX uses retrieval-augmented generation (RAG), combined with Nvidia’s open-source TensorRT-LLM software and RTX acceleration. RAG lets users customize the chatbot with their own data: relevant passages are retrieved from that data and passed to the model, which then answers questions about it, while RTX acceleration delivers fast responses with no internet connection required.

Nvidia’s open-source TensorRT-LLM RAG reference project is available on GitHub, and developers can use it to build their own RAG-based applications.
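To make the retrieve-then-generate flow concrete, here is a minimal, generic RAG sketch in Python. The toy document list, the bag-of-words embed() function, and the local_llm() placeholder are assumptions for illustration only; they are not the TensorRT-LLM reference project’s actual API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# embed() and local_llm() are hypothetical stand-ins for a real embedding
# model and a locally hosted LLM (e.g. one served with TensorRT-LLM).

from collections import Counter
import math

documents = [
    "Chat with RTX runs large language models locally on RTX GPUs.",
    "The default models are Meta's Llama 2 and Mistral.",
    "Users can point the app at a folder of .txt, .pdf, and .docx files.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def local_llm(prompt: str) -> str:
    """Placeholder for a local LLM call (assumption, not a real API)."""
    return f"[model answer based on prompt: {prompt[:60]}...]"

def answer(query: str) -> str:
    # Retrieve the most relevant local passages, then let the model answer
    # using only that retrieved context.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return local_llm(prompt)

print(answer("Which models ship with Chat with RTX by default?"))
```

A real application would swap the toy pieces for a proper embedding model, a vector index, and a TensorRT-LLM-served model, but the retrieve-then-generate shape stays the same.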

Users can build a dataset for the LLMs by adding local files and folders or YouTube videos. Chat with RTX supports .txt, .pdf, .xml, and .doc/.docx documents, and users can also enter the URL of a YouTube video or playlist as a dataset.
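As a rough illustration of how a local folder becomes a dataset, the short Python sketch below collects files with the supported extensions from a directory. The folder name and the filtering logic are illustrative assumptions, not Chat with RTX’s actual ingestion code.

```python
# Gather supported document types from a local folder (illustrative only).
from pathlib import Path

SUPPORTED = {".txt", ".pdf", ".xml", ".doc", ".docx"}

def collect_dataset(folder: str) -> list[Path]:
    """Return all files under `folder` whose extension Chat with RTX accepts."""
    return [p for p in Path(folder).rglob("*") if p.suffix.lower() in SUPPORTED]

for doc in collect_dataset("./my_notes"):  # hypothetical folder name
    print(doc)
```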

How to Use Nvidia’s Chat with RTX?

First, users select an AI model from the two defaults, Mistral and Llama 2. Next, they add a dataset: they can simply point to a folder containing .txt, .pdf, .doc/.docx, or .xml documents, or add the URL of a YouTube video or playlist. They can then ask questions about the content of those documents and videos.

Nvidia's Chat with RTX Youtube URL Uploading Interface

Users can pull information from these documents and videos with simple queries, so they no longer need to open documents on their PC and search through them manually.

This is especially useful when you have large documents and need to find specific information in them. It also saves time when you don’t want to watch an entire YouTube video just to answer a simple question.

System Requirements for Using Nvidia’s Chat with RTX

Although it is impressive software, the main downside is that the system requirements for Chat with RTX are steep, which currently puts it out of reach for many users.

To use Nvidia’s Chat with RTX, you need:

  • NVIDIA GeForce™ RTX 30 or 40 Series GPU, or NVIDIA RTX™ Ampere or Ada Generation GPU, with at least 8GB of VRAM
  • 16GB of RAM or more
  • Windows 11 with Nvidia driver version 535.11 or later
  • 100GB of available disk space

How to Access Nvidia’s Chat with RTX?

To use the software, you can download it for free from Nvidia’s website and install it on your PC. It takes about 30 minutes to install. It requires at least 40GB of free space to download, and after installation, it takes up to 100GB of disk space.

Nvidia on Security While Using Chat with RTX

As the software works locally, none of your data is stored on servers. All your data remains secure on your local machine.

Nvidia says, “Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection.”

Security with Nvidia's Chat with RTX

Conclusion

Nvidia is dominating the AI hardware market, and its highly popular AI chips already give it an edge over competitors. Now, with its entry into AI software, it is changing the way we interact with AI. Chat with RTX lets users create custom chatbots without worrying about the privacy of their data, which should make everyday tasks easier and change how we work with chatbots.
