IBM sees enterprise customers are using ‘everything’ when it comes to AI — the challenge is matching the LLM to the right use case




Over its more than 100-year history, IBM has seen many technology trends rise and fall. What tends to win are technologies that give customers options.

At VB Transform 2025 today, Armand Ruiz, VP of AI Platform at IBM, explained how Big Blue is thinking about generative AI and how its enterprise customers are actually deploying the technology. A key theme Ruiz emphasized is that, at this point, it is not about choosing a single large language model (LLM) provider or technology. Increasingly, enterprise customers are systematically rejecting single-vendor strategies in favor of multi-model approaches that match specific LLMs to specific use cases.

IBM has its own open-source AI models in the Granite family, but it does not position that technology as the only option, or even the right option, for all workloads. This enterprise behavior is prompting IBM to position itself not as a foundation-model competitor, but as what Ruiz described as a control tower for AI workloads.

“When I sit down in front of a customer, they are using everything they have access to, everything,” Ruiz explained. “For coding, they love Anthropic, and for some other use cases like reasoning, they like o3, and then for LLM customization with their own data and fine-tuning, they like either our Granite series or Mistral with their small models, or even Llama… it’s matching the LLM to the right use case. And then we help them make recommendations.”

Multi-LLM gateway strategy

IBM’s response to this market reality is a recently launched model gateway that provides enterprises with a single API to switch between different LLMs while maintaining observability and governance across all deployments.

The technical architecture allows customers to run open-source models on their own inference stack for sensitive use cases, while accessing public APIs such as AWS Bedrock or Google Cloud’s Gemini for less critical applications.

“That gateway is providing our customers with a single API to switch from one LLM to another LLM, and adds observability and governance throughout,” Ruiz said.

The approach stands in direct contrast to the common vendor strategy of locking customers into proprietary ecosystems. IBM is not alone in taking a multi-vendor approach to model selection. Multiple tools for model routing have emerged in recent months, all aiming to direct workloads to the appropriate model.
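The use-case-to-model matching Ruiz describes can be sketched as a rule-based router behind a single entry point. This is an illustrative sketch only — the routes, model names, and function signature are hypothetical, not IBM's actual gateway API:

```python
# Hypothetical sketch of a multi-LLM gateway: one API, rule-based
# routing by use case, plus basic logging for observability.
# All model identifiers below are illustrative.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("gateway")

# Map each use case to a preferred model, per the pattern Ruiz describes.
ROUTES = {
    "coding": "anthropic/claude",      # illustrative
    "reasoning": "openai/o3",          # illustrative
    "customization": "ibm/granite",    # illustrative
}
DEFAULT_MODEL = "ibm/granite"

@dataclass
class Completion:
    model: str
    text: str

def complete(use_case: str, prompt: str) -> Completion:
    """Single API: pick a model by use case, log the route for observability."""
    model = ROUTES.get(use_case, DEFAULT_MODEL)
    log.info("routing use_case=%s to model=%s", use_case, model)
    # A real gateway would call the selected provider's API here;
    # this sketch returns a stub response instead.
    return Completion(model=model, text=f"[{model}] response to: {prompt}")
```

Because callers only see `complete()`, swapping one backing model for another requires a routing-table change, not application changes — which is what preserves governance and observability in one place.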

Agent orchestration protocols emerge as critical infrastructure

Beyond managing multiple models, IBM is addressing the emerging challenge of agent-to-agent communication through open protocols.

The company has developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competing effort to Google’s Agent2Agent (A2A) protocol, which Google contributed to the Linux Foundation just this week.

Ruiz said both protocols aim to facilitate communication between agents and reduce custom development work. He expects the different approaches to converge eventually; for now, the differences between A2A and ACP are mainly technical.

Agent orchestration protocols provide standardized ways for AI systems to interact across different platforms and vendors.

The technical significance becomes clear at enterprise scale: some IBM customers already have more than 100 agents in pilot programs. Without standardized communication protocols, each agent-to-agent interaction requires custom development, creating an unsustainable integration burden.
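The value of a shared protocol is that every agent pair speaks one envelope format instead of needing bespoke glue code. The schema below is a generic illustration of that idea — it is not the actual ACP or A2A wire format:

```python
# Generic agent-to-agent message envelope (illustrative only --
# not the real ACP or A2A schema). With a shared envelope, adding
# an Nth agent does not require N-1 new custom integrations.
import json
import uuid

def make_message(sender: str, recipient: str, intent: str, payload: dict) -> str:
    """Serialize a request from one agent to another as a standard envelope."""
    return json.dumps({
        "id": str(uuid.uuid4()),
        "sender": sender,
        "recipient": recipient,
        "intent": intent,       # e.g. "compensation.query"
        "payload": payload,
    })

def handle_message(raw: str) -> dict:
    """Any compliant agent can parse the same envelope and dispatch on intent."""
    msg = json.loads(raw)
    return {"in_reply_to": msg["id"], "status": "accepted", "intent": msg["intent"]}

reply = handle_message(
    make_message("hr-router", "comp-agent", "compensation.query", {"employee": "123"})
)
```

Dispatching on a declared `intent` field, rather than on knowledge of the peer's internals, is what lets agents from different vendors interoperate.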

AI is about transforming workflows and the way work is done

In terms of how Ruiz sees AI impacting enterprises today, he suggests it really needs to be about more than just chatbots.

“If you’re just doing chatbots, or you’re just trying to make cost savings with AI, you’re not doing AI,” Ruiz said. “I think it really is about completely transforming the workflow and the way the work is done.”

The distinction between AI deployment and AI transformation comes down to how deeply the technology is integrated into existing business processes. IBM’s internal HR example illustrates this shift: instead of employees asking chatbots for HR information, specialized agents now handle routine queries about compensation, hiring and promotions, automatically routing to the appropriate systems and escalating to humans only when necessary.

“I used to spend a lot of time talking to my HR partners about many things. Now I handle most of it with an HR agent,” Ruiz explained. “Depending on the question, whether it’s compensation, or something about separation management, or hiring someone, or making a promotion, all these things connect with different internal HR systems, and those will be like separate agents.”
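The HR pattern Ruiz describes — specialized agents per topic, with a human fallback — can be sketched as a small dispatch table. The topic names and handler behavior here are hypothetical, used only to illustrate the routing-plus-escalation structure:

```python
# Hypothetical sketch of the HR-agent pattern: route each query to a
# specialized agent by topic; escalate to a human HR partner only when
# no agent claims the topic. All names are illustrative.
AGENTS = {
    "compensation": lambda q: f"comp-agent handled: {q}",
    "hiring": lambda q: f"hiring-agent handled: {q}",
    "promotion": lambda q: f"promo-agent handled: {q}",
}

def handle_hr_query(topic: str, question: str) -> str:
    agent = AGENTS.get(topic)
    if agent is None:
        # No specialized agent for this topic: escalate to a human.
        return f"escalated to human HR partner: {question}"
    return agent(question)
```

The point of the structure is that the human stays in the loop as the exception path, not the default one.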

This represents a fundamental architectural shift from human-computer interaction patterns to computer-mediated workflow automation. Instead of employees learning to interact with AI tools, AI learns to execute end-to-end business processes.

The technical implication: enterprises need to move beyond API integrations and prompt engineering toward deep process instrumentation that allows AI agents to execute multi-step workflows autonomously.

Strategic implications for business investment

IBM’s real-world deployment data suggests several critical shifts for enterprise AI strategy:

Move beyond chatbot-first thinking: Organizations should identify complete workflows for transformation rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not improve human-computer interaction.

Architect for multi-model flexibility: Rather than committing to single AI vendors, enterprises need integration platforms that allow switching between models based on use-case requirements while maintaining governance standards.

Invest in communication standards: Organizations should prioritize AI tools that support emerging protocols such as MCP, ACP and A2A over proprietary integration approaches that create vendor lock-in.

“There is a lot to build, and I keep saying that everyone needs to learn, and especially business leaders need to be AI-first leaders and understand the concepts,” Ruiz said.
