Google is reportedly developing ‘Jarvis’, an AI that could take over your web browser

Google is close to introducing an AI agent that can operate a web browser to help users automate everyday tasks. The Information reports that the company is working on a “computer-using agent” under the codename Project Jarvis, and it could be ready for preview by December.

According to sources who spoke to The Information, Jarvis “responds to a person’s commands by continuously capturing screenshots of what’s on their computer screen and interpreting the shots before taking an action like clicking a button or typing into a text field.”
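That description amounts to a simple perception-action loop. The sketch below is purely illustrative, assuming nothing about Google's actual implementation; every function name here is a hypothetical stand-in for a step The Information describes (capture a screenshot, interpret it, pick an action such as a click or a keystroke).

```python
# Illustrative sketch of a screenshot-driven "computer-using agent" loop.
# All names are hypothetical stand-ins, not Google's API.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click" or "type"
    target: str        # e.g. a button label or text-field name
    text: str = ""     # text to enter, for "type" actions

def capture_screenshot() -> bytes:
    """Stand-in for grabbing the current screen contents."""
    return b"<pixels>"

def interpret(screenshot: bytes, command: str) -> Action:
    """Stand-in for the model deciding the next UI action
    from the latest screenshot and the user's command."""
    if "search" in command:
        return Action("type", "search box", command)
    return Action("click", "submit button")

def run_agent(command: str, max_steps: int = 3) -> list[Action]:
    """Repeat the capture -> interpret -> act cycle."""
    actions = []
    for _ in range(max_steps):
        shot = capture_screenshot()
        action = interpret(shot, command)
        actions.append(action)   # a real agent would execute it here
    return actions

steps = run_agent("search for flights to Tokyo")
```

The key design point the report implies is that the agent never sees the page's underlying code, only pixels, so each cycle starts from a fresh screenshot rather than from browser state.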

Jarvis is reportedly built to work only with web browsers — specifically Chrome — to assist with common tasks like researching, shopping, and booking flights. It comes as Google continues to expand the capabilities of its Gemini AI, the next-generation model of which is expected to be revealed in December, as reported by The Verge.

Google’s AI chatbot Gemini Live got support for dozens of new languages this month, and Gemini has recently been integrated into Google Meet, Photos, and other apps.

The news of Jarvis comes just days after Anthropic introduced a similar but more elaborate feature for its Claude AI, which it says equips the model with computer skills so it can “use a wide range of standard tools and software programs designed for people.” That capability is now available in public beta.

While the use of generative AI in games seems almost inevitable, as the medium has always toyed with new ways to make enemies and NPCs smarter and more realistic, watching several NVIDIA ACE demos one after another really made me feel sick to my stomach.

This wasn’t just slightly smarter enemy AI — ACE could create entire conversations out of thin air, simulate voices, and try to give NPCs a sense of personality. It does all of this locally on your PC, powered by NVIDIA’s RTX GPUs. But while it might sound good on paper, I disliked almost every second of watching an AI NPC in action.

TiGames’ ZooPunk is a great example of this: it relies on NVIDIA ACE to generate dialogue, a virtual voice, and lip syncing for an NPC named Buck. But as you can see in the video above, Buck sounds like a robot with a slightly rustic accent. If he’s supposed to have some kind of relationship with the main character, you can’t tell from his performance.

I think my deep dislike of NVIDIA’s ACE-powered AI comes down to this: there’s simply nothing charming about it. No joy, no warmth, no humanity. Every ACE AI character sounds like a developer cutting corners in the worst way possible, as if you can see their contempt for the audience in the form of a boring NPC. I’d much rather scroll through some on-screen text; at least then I wouldn’t have to interact with weird robot voices.
