Useful information
Prime News delivers timely, accurate news and insights on global events, politics, business, and technology
After nearly two weeks of announcements, OpenAI capped off its 12 Days of OpenAI livestream series with a preview of its next-generation frontier model. “Out of respect for friends at Telefónica (owner of the O2 cellular network in Europe), and in the great tradition of OpenAI being really bad with names, it’s called o3,” OpenAI CEO Sam Altman said during the livestream.
The new model is not yet ready for public use. Instead, OpenAI will first make o3 available to researchers who want to help with safety testing. OpenAI also announced the existence of o3-mini. Altman said the company plans to launch that model “around the end of January,” followed by o3 “shortly thereafter.”
As expected, o3 offers improved performance over its predecessor, and the gap between it and o1 is substantial. For example, on this year’s American Invitational Mathematics Examination (AIME), o3 achieved an accuracy score of 96.7 percent. In contrast, o1 scored a more modest 83.3 percent. “What this means is that o3 often misses just one question,” said Mark Chen, senior vice president of research at OpenAI. In fact, o3 did so well on the usual set of benchmarks OpenAI puts its models through that the company had to find more challenging tests to measure it against.
One of those is ARC-AGI, a benchmark that tests an AI algorithm’s ability to intuit and learn on the spot. According to the test’s creator, the nonprofit ARC Prize Foundation, an AI system that could successfully beat ARC-AGI would represent “a major milestone toward artificial general intelligence.” Since its debut in 2019, no AI model has beaten ARC-AGI. The test consists of input-and-output grid puzzles that most people can solve intuitively. In one sample puzzle, for example, the correct answer is to use dark blue blocks to complete squares from four polyominoes.
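To make the puzzle format concrete, here is a toy sketch in Python of what an ARC-style task looks like. The grids, the "fill background with dark blue" rule, and the `solve` function are invented for illustration; they are not taken from the actual ARC-AGI dataset, which stores tasks as small integer grids where each number encodes a color.

```python
# Toy illustration of the ARC-AGI task shape (not a real ARC task): a solver
# sees a few input/output grid pairs, infers the transformation, and applies
# it to a fresh test grid. Here the hypothetical rule is "paint every
# background cell (0) dark blue (1)".

def solve(grid):
    """Hypothetical rule inferred from the demonstrations: replace 0s with 1s."""
    return [[1 if cell == 0 else cell for cell in row] for row in grid]

# Two demonstration pairs, mimicking ARC's few-shot format.
train_pairs = [
    ([[0, 2], [2, 0]], [[1, 2], [2, 1]]),
    ([[3, 0, 3]],      [[3, 1, 3]]),
]

# A rule is only trusted if it explains every demonstration pair.
assert all(solve(inp) == out for inp, out in train_pairs)

test_input = [[0, 5], [5, 0]]
print(solve(test_input))  # [[1, 5], [5, 1]]
```

The point of the benchmark is that the rule differs for every task, so a solver cannot memorize its way through; it has to infer each transformation from two or three examples, which is exactly what humans find easy and models have found hard.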
On its low-compute setting, o3 scored 75.7 percent on the test. With additional processing power, the model achieved a score of 87.5 percent. “Human performance is comparable to the 85 percent threshold, so being above this is an important milestone,” said Greg Kamradt, president of the ARC Prize Foundation.
OpenAI also showed off o3-mini. The new model uses OpenAI’s recently announced Adaptive Thinking Time API to offer three different reasoning modes: Low, Medium, and High. In practice, this lets users adjust how long the software “thinks” about a problem before giving an answer. According to the benchmarks OpenAI presented, o3-mini can achieve results comparable to the company’s current o1 reasoning model at a fraction of the computational cost. As mentioned, o3-mini will arrive for public use before o3.
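A rough sketch of how those three modes might surface to developers: in OpenAI's public API for its o-series reasoning models, the corresponding knob is a `reasoning_effort` parameter accepting `"low"`, `"medium"`, or `"high"`. The exact field name and the `"o3-mini"` model identifier are assumptions based on OpenAI's o-series API documentation, not details confirmed in this article; the sketch only builds request payloads and makes no network call.

```python
# Sketch of mapping the Low/Medium/High reasoning modes onto a chat-completion
# request payload. The `reasoning_effort` field and the "o3-mini" model name
# are assumptions drawn from OpenAI's o-series API docs.

def build_request(prompt: str, effort: str) -> dict:
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "o3-mini",          # assumed model identifier
        "reasoning_effort": effort,  # low = faster/cheaper, high = more "thinking"
        "messages": [{"role": "user", "content": prompt}],
    }

# One payload per mode; only the effort knob changes between them.
payloads = [build_request("How many primes are below 100?", e)
            for e in ("low", "medium", "high")]
print([p["reasoning_effort"] for p in payloads])  # ['low', 'medium', 'high']
```

The design mirrors the trade-off the article describes: a cheap, fast answer on easy questions, or more deliberation (and more compute spend) on hard ones, chosen per request rather than per model.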