Jack Silva | SOPA Images | LightRocket | Getty Images
Google on Wednesday launched Gemini 2.0, its "most capable" artificial intelligence model suite to date, to everyone.
In December, the company gave developers and trusted testers access and wrapped some features into Google products, but this is a "general release," according to Google.
The suite of models includes 2.0 Flash, which is billed as a "workhorse model, optimal for high-volume, high-frequency tasks at scale"; 2.0 Pro Experimental, which is largely focused on coding performance; and 2.0 Flash-Lite, which Google bills as its "most cost-efficient model yet."
Gemini Flash costs developers 10 cents per 1 million tokens for text, image and video inputs, while Flash-Lite, its more cost-efficient version, costs 0.75 of a cent for the same.
The continued releases are part of a broader strategy for Google of investing heavily in "AI agents" as the AI arms race heats up among tech giants and startups alike.
Meta, Amazon, Microsoft, OpenAI and Anthropic are also moving toward agentic AI, or models that can complete complex multi-step tasks on a user's behalf, rather than having to be walked through each individual step.
"Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead and take action on your behalf, with your supervision," Google wrote in a December blog post, adding that Gemini 2.0 has "new advances in multimodality, like native image and audio output, and native tool use," and that the family of models "will enable us to build new AI agents that bring us closer to our vision of a universal assistant."
Anthropic, the Amazon-backed startup founded by former OpenAI research executives, is a key competitor in the race to develop AI agents. In October, the startup said its AI agents could use computers like humans do to complete complex tasks. Anthropic's computer use capability allows its technology to interpret what is on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing, the startup said.
The tool can "use computers in basically the same way that we do," Jared Kaplan, Anthropic's chief science officer, told CNBC in an interview at the time. He said it can carry out tasks with "tens or even hundreds of steps."
OpenAI recently launched a similar tool, introducing a feature called Operator that will automate tasks such as planning vacations, filling out forms, making restaurant reservations and ordering groceries. The Microsoft-backed startup described Operator as "an agent that can go to the web to perform tasks for you."
Earlier this week, OpenAI announced another tool called deep research, which lets an agent compile complex research reports and analyze questions and topics of the user's choosing. Google launched a similar tool with the same name in December: Deep Research, which acts as a "research assistant, exploring complex topics and compiling reports on your behalf."
CNBC first reported in December that Google planned to introduce several AI features in early 2025.
"In history, you don't always need to be first, but you have to execute well and really be the best-in-class as a product," CEO Sundar Pichai said at a strategy meeting at the time. "I think that's what 2025 is about."