Cohere’s smallest, fastest R-series model excels at RAG and reasons in 23 languages


Demonstrating its intention to support a wide range of enterprise use cases, including those that do not require expensive, resource-intensive large language models (LLMs), AI startup Cohere has launched Command R7B, the smallest and fastest model in its R series.

Command R7B is designed to support rapid prototyping and iteration, and it uses retrieval-augmented generation (RAG) to improve its accuracy. The model features a 128K context length and supports 23 languages. Cohere says it outperforms other open-weight models in its class (Google’s Gemma, Meta’s Llama, Mistral’s Ministral) on tasks including math and coding.
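As a rough sketch of how a developer might ground the model’s answers in external data, the snippet below uses Cohere’s Python SDK and the chat endpoint’s documents parameter. The model ID, document fields and placeholder API key are assumptions for illustration, not details from this article; check Cohere’s current docs before relying on them.

```python
# A minimal RAG sketch using Cohere's Python SDK (pip install cohere).
# Assumptions: the v1 chat endpoint accepts a `documents` list for grounding,
# and "command-r7b-12-2024" is the hosted model ID -- verify both against
# Cohere's documentation.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical placeholder key

# Snippets the model should ground its answer in (e.g. from a vector DB).
docs = [
    {"title": "Q3 report", "snippet": "Revenue grew 12% quarter over quarter."},
    {"title": "Q3 report", "snippet": "Operating costs fell 3% on cloud savings."},
]

response = co.chat(
    model="command-r7b-12-2024",
    message="Summarize how revenue and costs changed in Q3.",
    documents=docs,
)

print(response.text)       # grounded answer
print(response.citations)  # spans linking the answer back to the snippets
```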

“The model is designed for developers and enterprises that need to optimize the speed, cost-performance and computing resources of their use cases,” Cohere co-founder and CEO Aidan Gomez wrote in a blog post announcing the new model.

Outperforming the competition in math, coding and RAG

Cohere has strategically focused on enterprises and their unique use cases. The company introduced Command R in March and the powerful Command R+ in April, and has made updates throughout the year to improve speed and efficiency. It has teased Command R7B as the “final” model in its R series and says it will release the model weights to the AI research community.

Cohere noted that a critical area of focus when developing Command R7B was improving performance in math, reasoning, coding and translation. The company appears to have succeeded in those areas, with the new, smaller model topping the HuggingFace Open LLM Leaderboard against open-weight models of similar size, including Gemma 2 9B, Ministral 8B and Llama 3.1 8B.

Additionally, the smallest R-series model outperforms competing models in areas including AI agents, tool use and RAG, which improves accuracy by grounding model outputs in external data. Cohere says Command R7B excels at conversational tasks including tech workplace and enterprise risk management (ERM) support; technical facts; media workplace and customer service support; HR FAQs; and summarization. Cohere also notes that the model is “exceptionally good” at retrieving and manipulating numerical information in financial settings.

In all, Command R7B ranked first, on average, across major benchmarks including instruction-following evaluation (IFEval), BigBench-Hard (BBH), graduate-level Google-proof Q&A (GPQA), multistep soft reasoning (MuSR) and massive multitask language understanding (MMLU).

Eliminating unnecessary function calls

Command R7B can use tools including search engines, APIs and vector databases to extend its functionality. Cohere reports that the model’s tool use performs strongly against competitors on the Berkeley Function-Calling Leaderboard, which evaluates a model’s accuracy at function calling (connecting to external data and systems).

Gomez notes that this makes the model effective in “dynamic, diverse, real-world environments” and eliminates the need for unnecessary function calls. That can make it a good option for building “fast and capable” AI agents. For example, Cohere notes, when acting as an internet-augmented search agent, Command R7B can break complex questions down into subgoals while also performing well at advanced reasoning and information retrieval.
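For readers unfamiliar with what the Berkeley leaderboard measures, the sketch below shows the bare function-calling pattern in plain, provider-agnostic Python: the model emits a structured tool call, and the application dispatches it. Every name here (web_search, TOOLS, run_tool_call) is hypothetical and not part of Cohere’s API.

```python
# A provider-agnostic sketch of the function-calling loop a tool-using
# model drives. In a real deployment the model, not a hard-coded string,
# would emit the tool name and arguments.
import json

def web_search(query: str) -> str:
    """Stand-in for a real search tool; returns a canned result."""
    return f"Top result for {query!r}: ..."

# Schema advertised to the model so it knows which tools it may call.
TOOLS = {
    "web_search": {
        "description": "Search the web and return the top result.",
        "parameters": {"query": "string"},
        "fn": web_search,
    },
}

def run_tool_call(model_output: str) -> str:
    """Dispatch a model-emitted call such as
    {"tool": "web_search", "arguments": {"query": "..."}}."""
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]
    return tool["fn"](**call["arguments"])

# Simulated model output requesting a tool call:
print(run_tool_call('{"tool": "web_search", "arguments": {"query": "Command R7B"}}'))
```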

Thanks to its small size, Command R7B can be deployed on low-end and consumer GPUs, CPUs and MacBooks, enabling on-device inference. The model is available now on the Cohere platform and HuggingFace. Pricing is $0.0375 per 1 million input tokens and $0.15 per 1 million output tokens.
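Taken at those rates, estimating the cost of a workload is simple arithmetic; the token counts in the sketch below are made-up example values purely for illustration.

```python
# Estimate a request's cost from the listed Command R7B rates:
# $0.0375 per 1M input tokens, $0.15 per 1M output tokens.
INPUT_RATE = 0.0375 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.15 / 1_000_000    # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical workload: 120K prompt tokens in, 8K tokens generated.
print(f"${estimate_cost(120_000, 8_000):.4f}")  # -> $0.0057
```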

“It is an ideal choice for enterprises looking for a cost-efficient model grounded in their internal documents and data,” Gomez writes.


