Useful information
Prime News delivers timely, accurate news and insights on global events, politics, business, and technology
Two years after the public launch of ChatGPT, conversations about AI are inescapable as companies across industries look to leverage large language models (LLMs) to transform their business processes. However, as powerful and promising as LLMs are, many business and IT leaders have come to rely too much on them and overlook their limitations. That’s why I anticipate a future in which specialized language models, or SLMs, will play a more important and complementary role in enterprise IT.
SLMs are often called “small language models” because they require less data and training time and are “more optimized versions of LLMs.” But I prefer the word “specialized” because it better conveys the ability of these purpose-built solutions to perform highly specialized work with greater precision, consistency, and transparency than LLMs. By complementing LLMs with SLMs, organizations can create solutions that leverage the strengths of each model.
LLMs are incredibly powerful, but they are also known for sometimes “losing the plot” or delivering results that go off course due to their generalist training and massive data sets. That tendency is made more problematic by the fact that OpenAI’s ChatGPT and other LLMs are essentially “black boxes” that don’t reveal how they arrive at an answer.
This black box issue will become a bigger problem in the future, especially for enterprises and business-critical applications where accuracy, consistency, and compliance are paramount. Think of healthcare, financial services, and legal as prime examples of professions where inaccurate answers can have huge financial consequences and even life-or-death repercussions. Regulators are already taking notice and will likely begin to demand explainable AI solutions, especially in industries that depend on data privacy and accuracy.
While companies often implement a human-in-the-loop approach to mitigate these issues, over-reliance on LLMs can create a false sense of security. Over time, complacency sets in and mistakes go unnoticed.
Fortunately, SLMs are better suited to address many of the limitations of LLMs. Rather than being designed for general-purpose tasks, SLMs are developed with a narrower focus and trained on data from specific domains. This specificity allows them to handle nuanced linguistic requirements in areas where precision is paramount. Instead of relying on vast, heterogeneous data sets, SLMs are trained on specific information, giving them the contextual intelligence to deliver more consistent, predictable and relevant responses.
This specialization offers several advantages. First, SLMs are more explainable, making it easier to understand the source and logic behind their results, which is critical in regulated industries where decisions must be traced back to a source.
Second, their smaller size means they can often run faster than LLMs, a crucial factor for real-time applications. Third, SLMs give businesses more control over data privacy and security, especially when they are hosted internally or built specifically for the business.
Additionally, while SLMs may initially require specialized training, they reduce the risks associated with using third-party LLMs controlled by external vendors. This control is invaluable in applications that require strict compliance and data handling.
I want to make it clear that LLMs and SLMs are not mutually exclusive. In practice, SLMs can complement LLMs, creating hybrid solutions where LLMs provide broader context and SLMs ensure precise execution. It's also still early days, even for LLMs, so I always advise technology leaders to continue exploring their many possibilities and benefits.
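One common shape for such a hybrid is a simple router that directs domain-specific queries to a specialized model and everything else to a general-purpose one. The sketch below illustrates the idea only; `general_llm`, `contracts_slm`, and the keyword list are hypothetical stand-ins for real model endpoints and a real routing policy.

```python
# Minimal sketch of a hybrid LLM/SLM router. The two model functions are
# placeholders for real API calls; the keyword match stands in for a real
# intent classifier.

DOMAIN_KEYWORDS = {"contract", "clause", "indemnification", "liability"}

def general_llm(prompt: str) -> str:
    # Placeholder for a general-purpose LLM call (e.g. a hosted API).
    return f"[LLM] broad answer to: {prompt}"

def contracts_slm(prompt: str) -> str:
    # Placeholder for a domain-tuned SLM serving legal-contract queries.
    return f"[SLM] precise answer to: {prompt}"

def route(prompt: str) -> str:
    """Send domain-specific prompts to the SLM, everything else to the LLM."""
    words = {w.strip(".,?!").lower() for w in prompt.split()}
    if words & DOMAIN_KEYWORDS:
        return contracts_slm(prompt)
    return general_llm(prompt)
```

In production, the keyword check would typically be replaced by a lightweight classifier or the LLM itself deciding when to delegate, but the division of labor stays the same.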
Additionally, while LLMs may adapt well to a variety of problems, SLMs may not transfer well to certain use cases. Therefore, it is important to have a clear understanding from the beginning of which use cases to address.
It is also important that business and IT leaders devote more time and attention to developing the skills needed to train, refine, and test SLMs. Fortunately, there is a wealth of free information and training available through common sources like Coursera, YouTube, and Huggingface.co. Leaders must ensure their developers have enough time to learn and experiment with SLMs as the battle for AI expertise intensifies.
I also advise leaders to carefully vet their partners. I recently spoke with a company that asked me for my opinion on the claims of a certain technology vendor. My opinion was that they were exaggerating their claims or were simply out of their depth in terms of understanding the capabilities of the technology.
The company wisely took a step back and ran a controlled proof of concept to test the vendor's claims. As I suspected, the solution simply wasn't ready for prime time, and the company walked away having invested relatively little time and money.
Whether a company starts with a proof of concept or a live implementation, I advise them to start small, test frequently, and build on early successes. I have personally experienced working with a small set of instructions and information, only to find that the results drift when I give the model more information. That’s why moving slowly and steadily is a prudent approach.
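That "start small, test frequently" advice can be made concrete with a tiny regression harness: a small golden set of prompts with expected answers, rerun every time the model or its instructions change, so drift is caught early. The sketch below is illustrative; `ask_model` and the golden set are hypothetical stand-ins for a real model endpoint and a real evaluation suite.

```python
# Minimal sketch of a "start small, test frequently" regression harness.
# `ask_model` is a placeholder for whatever SLM or LLM is under test.

GOLDEN_SET = [
    ("What is the payment term in a net-30 invoice?", "30 days"),
    ("Expand the acronym SLA.", "service level agreement"),
]

def ask_model(prompt: str) -> str:
    # Placeholder; in practice this calls your model endpoint.
    canned = {
        "What is the payment term in a net-30 invoice?": "30 days",
        "Expand the acronym SLA.": "Service Level Agreement",
    }
    return canned.get(prompt, "")

def run_regression(golden) -> float:
    """Return the fraction of golden answers the model reproduces."""
    hits = sum(
        expected.lower() in ask_model(prompt).lower()
        for prompt, expected in golden
    )
    return hits / len(golden)
```

A falling score after adding new instructions or data is exactly the drift described above, caught before it reaches users.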
In short, while LLMs will continue to provide increasingly valuable capabilities, their limitations are becoming more apparent as companies deepen their reliance on AI. Complementing LLMs with SLMs offers a way forward, especially in high-risk fields that demand precision and explainability. By investing in SLMs, businesses can future-proof their AI strategies, ensuring their tools not only drive innovation but also meet demands for trust, reliability, and control.
AJ Sunder is co-founder, CIO and CPO of Responsive.