Google launches Gemini 2.0 AI models, and showcases their powers in new agents

Google on Wednesday gave the public and developers a taste of the second generation of its Gemini frontier models, and a preview of some of the agents it will power. 

The new Gemini 2.0 family of models is designed to power AI agents that understand more than just text, and that reason and complete tasks with more autonomy. Google described how the new models will improve an experimental agent called Project Astra, which lets AI process information seen through a camera. It previewed another experimental agent, now called Project Mariner, that's designed to perform web tasks on behalf of the user.

"[O]ur next era of models [is] built for this new agentic era," said Google CEO Sundar Pichai in a blog post Wednesday. "With new advances in multimodality—like native image and audio output—and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant." The term "universal assistant" implies an AI agent with artificial general intelligence (AGI), or the ability to do most tasks as well as or better than humans. Experts say the industry is anywhere from two to 10 years away from realizing that aspiration.

Google isn't yet unveiling the largest and most capable of its Gemini 2.0 models. That may come in another announcement in January. For now it's releasing to developers an experimental version of a smaller and faster variant called Gemini 2.0 Flash. "It's our workhorse model with low latency and enhanced performance at the cutting edge of our technology, at scale," Google DeepMind CEO Demis Hassabis said in a blog post.

Gemini 2.0 Flash, Hassabis says, is twice as fast as its predecessor model, 1.5 Flash, and significantly smarter. He says the new model is multimodal, meaning it can process and output text, images, and audio. (The "experimental" version supports multimodal input but only text output.) Gemini 2.0 Flash can also call on external tools like Google Search, or tools made by other companies, and can execute computer code.

Consumer users can get in on the fun, too. Gemini chatbot users can now choose to have the chatbot powered by the Gemini 2.0 Flash (experimental) model. Google says it'll put Gemini 2.0 models under the hood of more of its apps and services next year.

Gemini’s second generation is focused on powering AI agents capable of taking steps on their own and calling on resources they need. The models can take a very large set of instructions and (multimodal) file inputs from the user, then use planning, reasoning, and function-calling (such as conducting a web search) to produce an answer. 

The wider skill set is showcased in a couple of experimental agents, one for a mobile device and one for a web browser.

At the company’s developer event earlier this year Google demonstrated a multimodal agent called Project Astra that can react and reason on real-time video seen through a phone camera, as well as audio (including language) it hears through the device’s microphones. Gemini 2.0, Google says, will give the agent better conversational skills and the ability to call on Google Search and Maps. Astra is nowhere near being released to the public, however.

Gemini 2.0 will enable another experiment called Project Mariner, an agent that understands the images, text, code, and other elements within a browser window, then performs tasks based on that input via a Chrome browser extension. Google says the agent, which is available only to a group of “trusted testers,” is often slow and inaccurate today, but will improve rapidly. 

“If Gemini 1.0 was about organizing and understanding information,” Pichai said in his blog post, “Gemini 2.0 is about making it much more useful.”

https://www.fastcompany.com/91245005/google-launches-gemini-2-0-ai-models-and-showcases-their-powers-in-new-agents?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published December 11, 2024, 16:30:05

