NVIDIA Showcases RTX-Powered AI Assistant Project G-Assist


Project G-Assist, an RTX-powered AI assistant technology showcase from NVIDIA, delivers context-aware assistance for PC games and apps. The Project G-Assist tech demo premiered with ARK: Survival Ascended from Studio Wildcard. NVIDIA also unveiled the first PC-based NVIDIA NIM inference microservices for the NVIDIA ACE digital human platform.

These technologies are enabled by the NVIDIA RTX AI Toolkit, a new suite of tools and software development kits that help developers optimise and deploy large generative AI models on Windows PCs. They join NVIDIA's full-stack RTX AI innovations, which accelerate more than 500 PC games and apps, as well as 200 OEM laptop designs.

In addition, ASUS and MSI have just unveiled RTX AI PC laptops featuring up to GeForce RTX 4070 GPUs and power-efficient systems-on-a-chip with Windows 11 AI PC capabilities. These Windows 11 AI PCs will receive a free update to Copilot+ PC experiences when it becomes available.

“In 2018, NVIDIA ushered in the era of AI PCs with the introduction of NVIDIA DLSS and RTX Tensor Core GPUs,” stated Jason Paul, NVIDIA’s vice president of consumer AI. “Now, NVIDIA is opening up the next generation of AI-powered experiences for over 100 million RTX AI PC users with Project G-Assist and NVIDIA ACE.”

GeForce AI Assistant Project G-Assist

AI assistants are poised to revolutionise gaming and in-app experiences, from offering gameplay strategies and analysing multiplayer replays to assisting with intricate creative workflows. Project G-Assist offers a glimpse of this future.

PC games feature expansive universes and intricate mechanics that can be challenging and time-consuming to master, even for the most devoted players. With generative AI, Project G-Assist aims to put game knowledge at players' fingertips.

Project G-Assist takes voice or text input from the player, along with contextual information from the game screen, and runs the data through AI vision models. These models enhance the contextual awareness and app-specific understanding of a large language model (LLM) connected to a game knowledge database, which then generates a tailored response delivered as text or speech.
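The pipeline just described can be sketched in miniature: fuse the player's question with on-screen context and retrieved game knowledge before handing the assembled prompt to an LLM. Everything below is a hypothetical stand-in; the actual Project G-Assist models and interfaces are not public, and the keyword-overlap retrieval here merely approximates what a real system would do with embeddings.

```python
# Illustrative sketch only: hypothetical data and naive keyword retrieval,
# standing in for Project G-Assist's (non-public) vision and LLM pipeline.

def tokens(text):
    """Lowercase, punctuation-stripped word set for crude keyword matching."""
    return {word.strip("?.,!").lower() for word in text.split()}

def retrieve_knowledge(query, knowledge_base, top_k=1):
    """Rank knowledge-base entries by keyword overlap with the query."""
    query_tokens = tokens(query)
    ranked = sorted(
        knowledge_base,
        key=lambda entry: len(query_tokens & tokens(entry)),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(player_query, screen_context, knowledge_base):
    """Assemble the contextualised prompt an LLM would then answer."""
    facts = retrieve_knowledge(player_query, knowledge_base)
    return (
        f"Screen context: {screen_context}\n"
        f"Relevant knowledge: {' '.join(facts)}\n"
        f"Player asks: {player_query}"
    )

knowledge = [
    "Raptors are fast predators that can be tamed with narcotics and raw meat.",
    "The swamp cave requires a gas mask to survive its toxic air.",
]
prompt = build_prompt(
    "How do I tame raptors?",
    "Player is near a wild raptor on the beach, health 80%",
    knowledge,
)
print(prompt)
```

The key idea is that the LLM never sees the raw question alone: screen context and retrieved knowledge are injected into the prompt, which is what makes the answer session-specific.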

NVIDIA and Studio Wildcard collaborated to showcase the technology with ARK: Survival Ascended. Players can ask about creatures, items, lore, objectives, difficult bosses and more. Because Project G-Assist is context-aware, it personalises its responses to the player's game session.

Project G-Assist can also configure the player's gaming system for optimal performance and efficiency. It can optimise graphics settings based on the user's hardware, apply a safe overclock, provide insights into performance metrics and even intelligently reduce power consumption while maintaining a performance target.
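The last capability, reducing power draw while holding a performance target, boils down to a simple search: step the power limit down until the frame rate would fall below the target. The sketch below illustrates that idea with invented numbers; the actual G-Assist tuning logic is not public.

```python
# Toy illustration of "lower power while meeting a performance target".
# The fps measurements below are hypothetical, not real benchmark data.

def tune_power(power_limits, fps_at, target_fps):
    """Return the lowest power limit whose frame rate still meets the target."""
    for limit in sorted(power_limits):
        if fps_at(limit) >= target_fps:
            return limit
    return max(power_limits)  # target unreachable: fall back to full power

# Hypothetical measurements: fps observed at each GPU power limit (watts).
measured = {150: 52, 175: 60, 200: 65, 225: 67}
limit = tune_power(list(measured), measured.get, target_fps=60)
print(limit)  # 175 W is the lowest limit that still sustains 60 fps here
```

A real implementation would measure frame rates live rather than look them up, but the trade-off being searched is the same.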

Initial ACE PC NIM Releases

NVIDIA ACE technology, which powers digital humans, is coming to RTX AI PCs and workstations. With NVIDIA NIM inference microservices, developers can reduce deployment times from weeks to minutes. ACE NIM microservices deliver high-quality inference running locally on devices for natural language understanding, speech synthesis, facial animation and more.

At COMPUTEX, the Covert Protocol tech demo, created in collaboration with Inworld AI, will showcase the PC gaming debut of NVIDIA ACE NIM. It now features NVIDIA Riva automatic speech recognition and Audio2Face running locally on devices.

Windows Copilot Runtime Adds GPU Acceleration for Local PC SLMs

Microsoft and NVIDIA are collaborating to help developers bring new generative AI capabilities to their Windows native and web apps. Through this partnership, application developers will get easy application programming interface (API) access to GPU-accelerated small language models (SLMs) that run on-device through Windows Copilot Runtime and support retrieval-augmented generation (RAG).

SLMs open up a wealth of possibilities for Windows developers, including task automation, content generation and content summarisation. RAG capabilities augment SLMs by giving the AI models access to domain-specific information that is underrepresented in base models. With RAG APIs, developers can harness application-specific data sources and tune SLM behaviour and capabilities to individual application requirements.
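The RAG pattern itself is straightforward to sketch: score documents against the query, pull in the best matches and prepend them to the model's prompt. Since the Windows Copilot Runtime RAG APIs are not yet public, the example below uses plain bag-of-words cosine similarity as an illustrative stand-in for a real embedding-based retriever.

```python
# Minimal RAG sketch: bag-of-words cosine similarity stands in for the
# embedding model a production retriever would use. Documents are invented.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector with crude punctuation stripping."""
    return Counter(word.strip("?.,!").lower() for word in text.split())

def cosine(a, b):
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, top_k=1):
    """Return the top_k documents most similar to the query."""
    query_vec = vectorize(query)
    ranked = sorted(
        documents, key=lambda d: cosine(query_vec, vectorize(d)), reverse=True
    )
    return ranked[:top_k]

docs = [
    "Invoices are archived under the finance share after 90 days.",
    "Reports can be exported as PDF from the File menu.",
    "Dark mode is toggled in the settings panel.",
]
question = "How do I export a report as a PDF?"
context = retrieve(question, docs)
prompt = f"Context: {context[0]}\nQuestion: {question}"
print(prompt)
```

This is exactly the mechanism that lets a small on-device model answer questions about app-specific data it was never trained on: the relevant snippet travels inside the prompt.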

These AI capabilities will be accelerated by NVIDIA RTX GPUs, as well as AI accelerators from other hardware vendors, giving end users fast, responsive AI experiences across the Windows ecosystem.

Later this year, the developer preview of the API will be made available.

RTX AI Toolkit Delivers 4x Faster, 3x Smaller Models

The AI ecosystem has built hundreds of thousands of open-source models for app developers to leverage, but most of these models are pretrained for general purposes and built to run in a data centre.

To help developers build application-specific AI models that run on PCs, NVIDIA is announcing RTX AI Toolkit, a suite of tools and SDKs for model customisation, optimisation and deployment on RTX AI PCs. RTX AI Toolkit will be available later this month for broader developer access.

Developers can customise a pretrained model with open-source QLoRA tools. They can then use the NVIDIA TensorRT Model Optimizer to quantize models, reducing RAM consumption by up to 3x. NVIDIA TensorRT Cloud then optimises the model for peak performance across the RTX GPU lineup. The result is up to 4x faster performance compared with the pretrained model.
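The memory saving from the quantization step is simple arithmetic: weight memory scales with bits per weight. The sketch below works through that arithmetic for a hypothetical 7-billion-parameter model; it is not the TensorRT Model Optimizer itself, just the back-of-the-envelope reasoning behind the figures above.

```python
# Back-of-the-envelope quantization arithmetic with a hypothetical model size.
# Weights at 16 bits vs 4 bits gives a 4x reduction on weights alone; real
# deployments see closer to the ~3x cited above, since activations, KV cache
# and runtime overhead do not shrink with the weights.

def weight_memory_gb(num_params, bits_per_weight):
    """Approximate weight memory, ignoring activations and overhead."""
    return num_params * bits_per_weight / 8 / 1e9

params = 7e9  # a hypothetical 7-billion-parameter model
fp16 = weight_memory_gb(params, 16)  # 14.0 GB as 16-bit floats
int4 = weight_memory_gb(params, 4)   # 3.5 GB after 4-bit quantization
print(f"FP16: {fp16:.1f} GB, INT4: {int4:.1f} GB, {fp16 / int4:.0f}x smaller")
```

The same arithmetic explains why quantization is what makes data-centre-scale models viable on a laptop GPU with 8 to 16 GB of VRAM.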

The recently announced NVIDIA AI Inference Manager SDK, now available in early access, simplifies the deployment of ACE to PCs. It preconfigures the PC with the necessary AI models, engines and dependencies, and orchestrates AI inference seamlessly across PC and cloud.
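The core decision in such hybrid orchestration can be sketched in a few lines: run inference locally when the model fits in available VRAM, otherwise fall back to the cloud. The SDK's real interfaces are not public, so the function and thresholds below are purely illustrative.

```python
# Toy sketch of hybrid PC/cloud inference routing; names and numbers are
# hypothetical, not the AI Inference Manager SDK's actual API.

def choose_backend(model_vram_gb, free_vram_gb, cloud_available=True):
    """Pick an inference backend based on local VRAM headroom."""
    if model_vram_gb <= free_vram_gb:
        return "local"   # model fits on the PC's GPU: lowest latency
    if cloud_available:
        return "cloud"   # too big for local VRAM: offload to the cloud
    raise RuntimeError("Model does not fit locally and no cloud is available")

print(choose_backend(3.5, 8.0))   # a quantized model fits: "local"
print(choose_backend(14.0, 8.0))  # an FP16 model does not: "cloud"
```

Real orchestration would also weigh latency, privacy and cost, but VRAM fit is the gating constraint this kind of SDK has to resolve first.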

To improve AI performance on RTX PCs, software partners including Adobe, Blackmagic Design, and Topaz are incorporating RTX AI Toolkit components into their well-known creative applications.

According to Deepa Subramaniam, vice president of product marketing for Adobe Creative Cloud, “Adobe and NVIDIA continue to collaborate to deliver breakthrough customer experiences across all creative workflows, from video to imaging, design, 3D, and beyond.” “TensorRT 10.0 on RTX PCs unlocks new creative possibilities for content creation in industry-leading creative tools like Photoshop, delivering unparalleled performance and AI-powered capabilities for creators, designers, and developers.”

Components of the RTX AI Toolkit, such as TensorRT-LLM, are integrated into popular generative AI development frameworks and apps, including Automatic1111, ComfyUI, Jan.AI, LangChain, LlamaIndex, Oobabooga and Sanctum.AI.

Using AI in Content Creation

NVIDIA is also adding RTX AI acceleration to apps designed for creators, modders and video enthusiasts.

Last year, NVIDIA introduced TensorRT-based RTX acceleration for Automatic1111, one of the most popular Stable Diffusion user interfaces. Starting this week, RTX will also accelerate the highly popular ComfyUI, delivering up to a 60% improvement in performance over the currently shipping version, and 7x faster performance compared with the MacBook Pro M3 Max.

NVIDIA RTX Remix is a modding platform for remastering classic DirectX 8 and DirectX 9 games with full ray tracing, NVIDIA DLSS 3.5 and physically accurate materials. RTX Remix includes a runtime renderer and the RTX Remix Toolkit app, which simplifies modifying game assets and materials.

Since NVIDIA released RTX Remix Runtime as open source last year, modders have been able to expand game compatibility and advance rendering capabilities.

Since the RTX Remix Toolkit launched earlier this year, 20,000 modders have used it to mod classic games, with over 100 RTX remasters currently in development on the RTX Remix Showcase Discord.

This month, NVIDIA will release the RTX Remix Toolkit as open source, allowing modders to streamline how assets are replaced and scenes are relit, expand the file formats supported by RTX Remix's asset ingestor and add new models to the AI Texture Tools.

In addition, NVIDIA is providing access to the capabilities of the RTX Remix Toolkit through a REST API, allowing modders to livelink RTX Remix to digital content creation tools such as Blender, modding tools such as Hammer and generative AI apps such as ComfyUI. NVIDIA is also offering an SDK for RTX Remix Runtime, enabling modders to deploy RTX Remix's renderer into games and applications beyond the DirectX 8 and 9 classics.

More of the RTX Remix platform is becoming open source, enabling modders anywhere to create even more amazing RTX remasters.

NVIDIA RTX Video, the popular AI-powered super-resolution feature supported in the Google Chrome, Microsoft Edge and Mozilla Firefox browsers, is now available as an SDK to all developers, helping them natively integrate AI for upscaling, sharpening, compression artefact reduction and high-dynamic range (HDR) conversion.

Coming soon to Blackmagic Design's DaVinci Resolve and Wondershare Filmora video editing software, RTX Video will enable video editors to upscale lower-quality video files to 4K resolution and convert standard dynamic range source files into HDR. In addition, the free media player VLC will soon add RTX Video HDR to its existing super-resolution capabilities.
