Panel Discussion: API Security in DevSecOps. Watch the Recording

Panel Discussion: API Security in DevSecOps. Watch the Recording

Panel Discussion: API Security in DevSecOps. Watch the Recording

Introducing Akto’s GenAI Security Testing Solution

Today, We launched Akto's GenAI Security Testing solution, an unparalleled automated approach that directly addresses LLM Security challenges. The solution is currently in closed beta.

Author image

Ankita

3 Mins

Akto GenAI Security Platform
Akto GenAI Security Platform
Akto GenAI Security Platform

In the past year, approximately 77% of organizations have embraced or are exploring GenAI, driving the demand for streamlined and automated processes. As reliance on GenAI models and Language Learning Models (LLMs) such as ChatGPT continues to grow, the importance of security measures for these models becomes a priority.

Today, I am delighted to present Akto's GenAI Security Testing solution, an unparalleled automated approach that directly addresses LLM Security challenges. The solution is currently in closed beta. Signup for beta access here.

The LLM Security problem at hand

On average, an organization uses 10 LLM models. Often most LLMs in production will receive data indirectly via APIs. That means tons and tons of sensitive data is being processed by the LLM APIs. Ensuring the security of these APIs will be very crucial to protect user privacy and prevent data leaks. There are several ways in which LLMs can be abused today, leading to sensitive data leaks.

  1. Prompt Injection Vulnerabilities - The risk of unauthorized prompt injections, where malicious inputs can manipulate the LLM’s output, has become a major concern.

  2. Denial of Service (DoS) Threats - LLMs are also susceptible to DoS attacks, where the system is overloaded with requests, leading to service disruptions. There's been a rise in reported DoS incidents targeting LLM APIs in the last year.

  3. Overreliance on LLM Outputs - Overreliance on LLMs without adequate verification mechanisms has led to cases of data inaccuracies and leaks. Organizations are encouraged to implement robust validation processes, as the industry sees an increase in data leak incidents due to overreliance on LLMs.

“Securing AI systems requires a multifaceted approach with the need to protect not only the AI from external inputs but also external systems that depend on their outputs. “ - OWASP Top 10 for LLM AI Applications Core team member.

On March 20, 2023, there was an outage with OpenAI's AI tool, ChatGPT. The outage was caused by a vulnerability in an open-source library, which may have exposed payment-related information of some customers. There are many such examples of security incidents related to using LLM models.

Monthly Google Search results for LLM Security for last 12 months

LLM Security google trend

Akto’s LLM Security Solution

By leveraging advanced testing methodologies and state-of-the-art algorithms, Akto’s LLM Security solution provides comprehensive security assessments for GenAI models and LLMs. The solution incorporates a wide range of innovative features, including over 60 meticulously designed test cases that cover various aspects of AI vulnerabilities such as prompt injection, overreliance on specific data sources, and more.

Currently, security teams manually test all the LLM APIs for flaws before release. Due to the time sensitivity of product releases, teams can only test for a few vulnerabilities. As hackers continue to find more creative ways to exploit LLMs, security teams need to find an automated way to secure LLMs at scale.

Often input to an LLM comes from an end-user or the output is shown to the end-user or both. The tests try to exploit LLM vulnerabilities through different encoding methods, separators and markers. This specially detects for weak security practices where developers encode the input or put special markers around the input.

GenAI security testing also detects weak security measures against sanitizing output of LLMs. It aims to detect attempts to inject malicious code for remote execution, cross-site scripting (XSS), and other attacks that could allow attackers to extract session tokens and system information. In addition, Akto also tests whether the LLMs are susceptible to generating false or irrelevant reports.

“From Prompt Injection ( LLM:01) to Overreliance (LLM09) and new vulnerabilities & breaches everyday and build systems that are secure by default; It is critical to test systems early for these ever evolving threats. I’m excited to see what Akto has in store for my llm projects” - OWASP Top 10 for LLM AI Applications Core team member.

To further emphasize the importance of AI security, a recent survey in September, 2023 by Gartner revealed that 34% of organizations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of generative AI (GenAI), according to a new survey from Gartner, Inc. Over half (56%) of respondents said they are also exploring such solutions, highlighting the critical need for robust security testing solutions like Akto's.

To showcase the capabilities and significance of Akto's AI Security Testing solution, I will be presenting at the prestigious Austin API Summit 2024. The session, titled "Security of LLM APIs," will delve into the problem statement, highlight real-world examples, and demonstrate how Akto's solution provides a robust defense against AI-related vulnerabilities.

As organizations strive to harness the power of GenAI, Akto stands at the forefront of ensuring the security and integrity of these GenAI technologies. The launch of Akto’s GenAI Security Testing solution reinforces our commitment to enabling organizations to secure data processed via all types of APIs including LLM APIs.

The solution is currently in closed beta. Signup for beta access here.

If you want to try Akto, here is a guide to deploy Akto in 60 seconds.

Want the best proactive API Security product?

Our customers love us for our proactive approach and world class API Security test templates. Try Akto's test library yourself in your testing playground. Play with the default test or add your own.

Want the best proactive API Security product?

Our customers love us for our proactive approach and world class API Security test templates. Try Akto's test library yourself in your testing playground. Play with the default test or add your own.

Want the best proactive API Security product?

Our customers love us for our proactive approach and world class API Security test templates. Try Akto's test library yourself in your testing playground. Play with the default test or add your own.

Want to ask something?

Our community offers a network of support and resources. You can ask any question there and will get a reply in 24 hours.

Want to ask something?

Our community offers a network of support and resources. You can ask any question there and will get a reply in 24 hours.

Want to ask something?

Our community offers a network of support and resources. You can ask any question there and will get a reply in 24 hours.

Follow us for more updates

Experience enterprise-grade API Security solution