
Google’s Antigravity puts coding productivity before AI hype – and the result is astonishing



Gary Hutchison/500Px Plus via Getty Images


ZDNET’s key takeaways

  • Google unveils Antigravity, a productivity-focused AI coding IDE.
  • Built on VS Code, it enables instant familiarity and plugin support.
  • Screenshots, recordings, and browser testing power agent workflows.

Google today announced a new(ish) programmer’s development environment called Antigravity. The company calls it “a new era in AI-assisted software development.” And, from my first look at its functionality via this video, it well might be. At least for some things.

Also: Google’s Gemini 3 is finally here and it’s smarter, faster, and free to access

Some aspects of Antigravity are astonishingly good. It has features that I think can truly move your agent-assisted programming forward in very productive ways.

But let’s bring this announcement back to earth for a minute. Although the company never mentioned it in its blog announcement or online demos, Antigravity is a fork of Microsoft’s open-source VS Code. You can tell from the screenshots I pulled from the demo.

Screenshot by David Gewirtz/ZDNET

This is not a bad thing. In fact, I think it’s fairly fantastic, because it means that while Google is adding some powerful new agentic features, it’s all wrapped up in an environment most coders are very familiar with.

A step beyond previous agent integration

When I first used VS Code with OpenAI’s Codex, it was powerful, indeed. I completed a tremendous amount of coding in a very short time. Codex occupied the right pane of the three-pane interface (the other two panes were a file browser and a code editor).

I was able to use CleanShot X to grab screenshots and paste them right into VS Code for Codex to see. In fact, I found the ability to supply the AI with screenshots to be among the most powerful tools for agentic coding.

But that was then; this is now.

Antigravity can take its own screenshots. It can also capture screen recordings. Not only that, you can use Antigravity to make comments on the screenshots and screen recordings to guide the Gemini 3 LLM on what you want changed.

But wait, there’s more.

Antigravity includes a Google Chrome extension that enables the AI to run your code within a real Chrome instance, test it, observe its behavior, and then take action.

Screenshot by David Gewirtz/ZDNET

That’s some next-level stuff right there.

To be clear, the browser integration features are limited to browser-only applications. While the AI might be able to test the WordPress plugins I worked on earlier (because WordPress also runs in the browser), it wouldn’t be able to test, for example, an iOS app for the iPhone.

Also: Google Vids premium features now available to everyone – here’s everything you can do

However, the ability of the IDE’s agents to interact deeply with the look and feel of a web app in real-time could lead to a tremendous boost in productivity.
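Google hasn't published the internals of Antigravity's browser tooling, but the loop it appears to automate (load the app in a real Chrome instance, interact with it, and capture evidence of what happened) is the same loop developers already script by hand with open-source tools like Playwright. Here's a minimal sketch of that kind of check; the local URL and the button label are hypothetical stand-ins:

```typescript
// Hypothetical sketch of the kind of browser-driven check an agent
// might run. Antigravity's actual tooling is proprietary; this uses
// Playwright, a common open-source browser-automation library.
import { chromium } from "playwright";

async function main() {
  // Launch a real Chromium instance (headless: false shows the window)
  const browser = await chromium.launch({ headless: false });
  const page = await browser.newPage();

  // Load the web app under test (hypothetical local dev server)
  await page.goto("http://localhost:3000");

  // Exercise the feature: click a (hypothetical) Submit button
  await page.getByRole("button", { name: "Submit" }).click();

  // Capture evidence of what the app actually did on screen
  await page.screenshot({ path: "after-submit.png" });

  await browser.close();
}

main();
```

An agent with that capability can do what a human tester does: try the feature, look at the result, and decide what to fix next.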

Agent home

When you set up Antigravity, you can determine the level of autonomy to grant the agent.

Screenshot by David Gewirtz/ZDNET

Google has raised the prominence of the AI chatbot pane in VS Code, er, Antigravity. The Home screen of Antigravity isn’t the file browser or the editor; it’s the chatbot interaction screen.

This section of the interface, known as the Manager surface, actually becomes an agent dashboard where you can invoke and track numerous agent processes from a single location.

The company describes it as the interface for spawning, orchestrating, and observing multiple agents across multiple workspaces in parallel.

Also: Google Brain founder Andrew Ng thinks you should still learn to code – here’s why

That last sentence is worth a moment of deconstruction. A workspace in VS Code is merely a grouping of files, usually for one project. For example, I routinely have one workspace dedicated to whatever WordPress plugin I’m working on, and another to a completely different Python project I’m coding.
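For anyone who hasn't dug into them, a VS Code workspace is typically defined by a small .code-workspace file, which is just JSON listing the folders being grouped together. The folder paths below are hypothetical, matching the two-project setup described above:

```json
{
  "folders": [
    { "path": "wordpress-plugin" },
    { "path": "../python-project" }
  ],
  "settings": {}
}
```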

With Antigravity’s ability to manage multiple agents across multiple workspaces, you can have multiple projects going at once, all with different agents carrying out different tasks.

On the one hand, this could be very powerful. On the other hand, the context switching on the part of our very organic human brains could prove challenging. I find it challenging to work on two programming projects simultaneously. I often switch between projects from one day to the next. But switching back and forth dynamically between projects throughout the workday could lead to errors or brain melt.

Still, it’s available and lets you optimize for your working style.

An a-peeling integration

This is a small thing, but it’s cool. When Google demoed the functionality of Antigravity, the presenter wanted a logo for his app. Right from within Antigravity, he asked Gemini to create an image using Nano Banana.

Nano Banana is an impressive image generator inside Gemini 2.5 Flash (and now, presumably, Gemini 3). So it’s no big surprise to see it crank out some attractive logos.

Also: Google’s Private AI Compute promises good-as-local privacy in the Gemini cloud

But up until now, we’ve seen some logical walls between AI implementations. The coding chatbots are more coding-focused and less conversational. The chatty chatbots have fewer coding agent capabilities.

However, Antigravity was able to invoke the Nano Banana capabilities to create the logo directly within the Antigravity IDE interface.

Screenshot by David Gewirtz/ZDNET

UX design often requires creating a multitude of small graphic elements. Normally, we’d switch out of the IDE, drop into a graphics program, generate the files, and upload the files. Lather, rinse, repeat.

However, since Antigravity can do it all from within the agent management context inside the IDE, the process can save a bunch of steps. Saving steps is something every professional programmer needs to do.

The walkthrough

Most coding agents provide some kind of pre-execution plan and post-completion summary of actions taken. That’s not new. Antigravity does the same thing.

Screenshot by David Gewirtz/ZDNET

But what’s impressive about Antigravity is that because it has browser interactivity and screen recording built in, it can demonstrate its actions in a screen recording.

Let’s say you ask it to implement a new feature. At the end of its processing run, the agent can show the steps it took to build the feature. However, it can also provide a screen recording that demonstrates how it tested the new feature and what it saw on the screen.
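Antigravity's recording pipeline is its own, but capturing video of a browser test run is a commodity capability these days. Playwright, for instance, can record every session in a browser context to disk (the directory name here is hypothetical):

```typescript
// Sketch: recording a browser test session to video with Playwright.
// Antigravity's own recorder is proprietary; this only illustrates
// that the underlying capability is widely available.
import { chromium } from "playwright";

async function recordedRun() {
  const browser = await chromium.launch();

  // Every page opened in this context gets recorded to ./walkthroughs/
  const context = await browser.newContext({
    recordVideo: { dir: "walkthroughs/" },
  });

  const page = await context.newPage();
  await page.goto("http://localhost:3000"); // hypothetical dev server

  // ...interact with the feature under test here...

  // Videos are finalized when the context closes
  await context.close();
  await browser.close();
}

recordedRun();
```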

Also: The best free AI for coding – only 3 make the cut now

That lets you take a quick look at what got produced. But because Antigravity provides an easy mechanism to add Google Docs-like comments to code snippets, screenshots, and screen recordings, you can actually mark up the walkthrough and show the AI what you want to change.

I don’t think it’s possible to overstate how beneficial that can be to productivity.

Ideas that rise above

I haven’t had a chance to use Antigravity yet, but I will. The fact that it’s VS Code-based makes it an easy consideration. That provides the new IDE with an enormous library of plugins and extensions right out of the gate.

I think most professional programmers would be far more inclined to try yet another VS Code fork (which they know can be integrated with their projects and workflows) than some brand-new, barely out of beta, AI-focused IDE. Productivity matters to pro coders.

What impresses me about Antigravity is that many of its features are purely productivity-focused. Yes, they work with the AI agents and make describing work to AI agents easier, but it’s productivity that drives these features, not AI hype. That’s a strong approach.

Also: Microsoft’s new AI agents won’t just help us code, now they’ll decide what to code

I’m just a little disappointed that Google didn’t fess up right out of the gate that this is really a VS Code mod. I think more programmers would have been immediately interested in seeing what was done. But that’s water under the bridge.

I’ve reached out to Google, because one thing I would like to get clarification on is where Jules (Google’s agent-first coding AI) fits into this puzzle. Jules works mostly on your GitHub repo, whereas Antigravity clearly works on local code, although it’s able to send commits back to GitHub. Does Jules work with Antigravity, or is Antigravity supplanting Jules, or are they still just two different paradigms?

At any rate, this looks like it could be a winner.  

Have you tried any AI-assisted development environments yet? If so, how do they compare to your normal workflow? Would the ability to capture and annotate screenshots or recordings right inside the IDE change how you develop or debug? How important is deep browser-level testing to your own projects, and would you trust an agent to operate across multiple workspaces at once? Finally, does Google’s decision not to emphasize its VS Code origins matter to you, or is capability all that counts? Let us know in the comments below.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
