The AI paradox: how tomorrow's cutting-edge tools can become dangerous cyber threats (and what to do to prepare)


AI is changing the way companies operate. While much of this change is positive, it introduces some unique cybersecurity concerns. Next-generation AI applications such as agentic AI pose a particularly notable risk to organizations' security posture.

What is agentic AI?

Agentic AI refers to AI models that can act autonomously, often automating entire roles with little or no human input. Advanced chatbots are among the most prominent examples, but AI agents also appear in applications such as business intelligence, medical diagnosis and insurance adjustment.

Across all use cases, this technology combines generative models, natural language processing (NLP) and other machine learning (ML) functions to perform multistep tasks independently. It is easy to see the value in such a solution. Understandably, Gartner predicts that one-third of all generative AI interactions will use these agents by 2028.
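
To make that pattern concrete, here is a minimal, illustrative sketch of an agent loop in Python: a planner decomposes a goal into steps and executes tools with no human in the loop. Every name in it (plan_steps, TOOLS, the canned step list) is a hypothetical stand-in; a real agent would delegate the planning to a generative model.

```python
# Illustrative sketch of the agent pattern: a model-driven loop that plans a
# multistep task and calls tools autonomously. All names are hypothetical.

def fetch_sales_data(quarter: str) -> dict:
    """Hypothetical tool: pull raw figures from a data source."""
    return {"quarter": quarter, "revenue": 1_250_000}

def summarize(data: dict) -> str:
    """Hypothetical tool: turn raw figures into a readable summary."""
    return f"Revenue for {data['quarter']}: ${data['revenue']:,}"

TOOLS = {"fetch_sales_data": fetch_sales_data, "summarize": summarize}

def plan_steps(goal: str) -> list[tuple[str, tuple]]:
    """Stand-in planner. In practice an LLM would produce this step list."""
    return [("fetch_sales_data", ("Q3",)), ("summarize", ())]

def run_agent(goal: str) -> str:
    result = None
    for tool_name, args in plan_steps(goal):
        tool = TOOLS[tool_name]
        # Each step feeds the previous result forward -- no human in the loop.
        result = tool(*args) if args else tool(result)
    return result

print(run_agent("Report Q3 revenue"))
```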

The unique security risks of agentic AI

Agentic AI adoption will surge as companies seek to complete a broader range of tasks without a larger workforce. However, as promising as that is, giving an AI model so much power carries serious cybersecurity implications.

AI agents typically require access to vast amounts of data. Consequently, they are prime targets for cybercriminals, as attackers could focus their efforts on a single application to expose a considerable amount of information. It would have an effect similar to whaling, which led to $12.5 billion in losses in 2021 alone, but may be easier, as AI models could be more susceptible than experienced professionals.

Agentic AI's autonomy is another concern. While all ML algorithms introduce some risk, conventional use cases require human authorization before the model does anything with its data. Agents, on the other hand, can act without clearance. As a result, any accidental privacy exposure or errors such as AI hallucinations can slip through without anyone noticing.

This lack of oversight makes existing threats such as data poisoning even more dangerous. Attackers can corrupt a model by altering just 0.01% of its training dataset, and doing so is possible with minimal investment. That is damaging in any context, but a poisoned agent's faulty conclusions would reach much further than those of a model whose outputs humans review first.

How to improve agentic AI cybersecurity

In light of these threats, cybersecurity strategies must adapt before businesses implement agentic applications. Here are four critical steps toward that goal.

1. Maximize visibility

The first step is to ensure security and operations teams have full visibility into an AI agent's workflow. Every task the model completes, every device or application it connects to and all the data it can access should be evident. Revealing these factors will make it easier to spot potential vulnerabilities.

Automated network mapping tools may be necessary here. Only 23% of IT leaders say they have full visibility into their cloud environments, and 61% use multiple detection tools, leading to duplicate records. Administrators must address these issues first to gain the necessary insight into what their AI agents can access.
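
As one illustration of that visibility goal, the sketch below wraps each tool an agent can call so every invocation is logged before it executes. The tool name (query_crm) and the log format are assumptions for the example, not any particular product's API.

```python
# Minimal visibility sketch: log every tool call an agent makes before it runs.
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent.audit")

def audited(tool):
    """Decorator: record which tool the agent invoked and with what inputs."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        audit_log.info("tool=%s args=%r kwargs=%r", tool.__name__, args, kwargs)
        return tool(*args, **kwargs)
    return wrapper

@audited
def query_crm(customer_id: str) -> dict:
    """Hypothetical data source the agent is allowed to reach."""
    return {"customer_id": customer_id, "status": "active"}

query_crm("c-1042")  # emits an audit entry before touching any data
```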

2. Employ the principle of least privilege

Once it is clear what the agent can interact with, businesses must restrict those privileges. The principle of least privilege, which holds that any entity can only see and use what it absolutely needs, is essential.

Any database or application an AI agent can interact with is a potential risk. Consequently, organizations can minimize relevant attack surfaces and prevent lateral movement by limiting these permissions as much as possible. Anything that does not directly contribute to an AI's value-driving purpose should be off-limits.
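
A minimal sketch of what deny-by-default, least-privilege access could look like in code, assuming a simple allowlist model (the resource names are hypothetical):

```python
# Least-privilege sketch: the agent holds an explicit allowlist of resources,
# and every access is checked against it. Deny by default.

class ScopedAgent:
    def __init__(self, allowed_resources: frozenset[str]):
        # Grant only what the agent's task strictly requires -- nothing else.
        self.allowed = allowed_resources

    def read(self, resource: str) -> str:
        if resource not in self.allowed:
            # Anything outside the task's scope is blocked.
            raise PermissionError(f"agent has no grant for {resource!r}")
        return f"contents of {resource}"

agent = ScopedAgent(frozenset({"support_tickets"}))
print(agent.read("support_tickets"))  # permitted: needed for the task
try:
    agent.read("payroll_db")          # outside the grant -- blocked
except PermissionError as err:
    print(err)
```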

3. Limit sensitive information

Similarly, network administrators can prevent privacy violations by removing sensitive details from the datasets their agentic AI can access. Many AI agents' work naturally involves private data. More than 50% of generative AI spending will go toward chatbots, which may gather information about customers. However, not all of these details are necessary.

While an agent should learn from past customer interactions, it does not need to store names, addresses or payment details. Programming the system to scrub unnecessary personally identifiable information from AI-accessible data will minimize the damage in the event of a breach.
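
One way such scrubbing might look in practice is a simple redaction pass over records before the agent can store them. This sketch uses naive regexes that catch only well-formed emails, US-style phone numbers and card-like digit runs; production redaction would need far more robust detection.

```python
# PII-scrubbing sketch: replace recognizable sensitive fields with typed
# placeholders before the data ever reaches agent-accessible storage.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Jane emailed jane.doe@example.com from 555-867-5309 re: card 4111 1111 1111 1111."
print(scrub(record))
# -> "Jane emailed [EMAIL] from [PHONE] re: card [CARD]."
```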

4. Watch for suspicious behavior

Businesses must also be careful when programming agentic AI. Apply it to a single, small use case first, and use a diverse team to review the model for signs of bias or hallucinations during training. When it comes time to deploy the agent, roll it out slowly and monitor it for suspicious behavior.

Real-time responsiveness is crucial in this monitoring, as agentic AI's risks mean any breach could have dramatic consequences. Thankfully, automated detection and response solutions are highly effective, saving an average of $2.22 million in data breach costs. Organizations can slowly expand their AI agents after a successful trial, but they must continue to monitor all applications.
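
As a rough illustration of real-time behavioral monitoring, the sketch below alerts when an agent performs an action outside an expected set or exceeds a simple rate threshold. The action names, the threshold and the alert hook are all assumptions for the example.

```python
# Behavioral-monitoring sketch: flag unexpected actions and abnormal call rates.
import time
from collections import deque

EXPECTED_ACTIONS = {"fetch_ticket", "draft_reply"}
MAX_CALLS_PER_MINUTE = 30

class AgentMonitor:
    def __init__(self):
        self.recent_calls: deque[float] = deque()

    def alert(self, message: str) -> None:
        # Stand-in for paging an on-call team or triggering automated response.
        print(f"ALERT: {message}")

    def observe(self, action: str) -> None:
        now = time.time()
        self.recent_calls.append(now)
        # Drop events older than the 60-second window.
        while self.recent_calls and now - self.recent_calls[0] > 60:
            self.recent_calls.popleft()
        if action not in EXPECTED_ACTIONS:
            self.alert(f"unexpected action {action!r}")
        if len(self.recent_calls) > MAX_CALLS_PER_MINUTE:
            self.alert("call rate exceeds baseline; possible compromise")

monitor = AgentMonitor()
monitor.observe("fetch_ticket")     # normal
monitor.observe("export_database")  # fires an alert
```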

As AI advances, so must cybersecurity strategies

AI's rapid advancement holds significant promise for modern businesses, but its cybersecurity risks are growing just as quickly. Enterprises' cyber defenses must scale and advance alongside generative AI use cases. Failing to keep up with these changes could cause harm that outweighs the technology's benefits.

Agentic AI will take ML to new heights, but the same applies to the related vulnerabilities. While that does not make this technology too unsafe to invest in, it warrants extra caution. Businesses must follow these essential security steps as they roll out new AI applications.

Zac Amos is a features editor at ReHack.

