Introducing Akto’s GenAI Security Testing Solution
Today, We launched Akto's GenAI Security Testing solution, an unparalleled automated approach that directly addresses LLM Security challenges. The solution is currently in closed beta.
In the past year, approximately 77% of organizations have embraced or are exploring GenAI, driving the demand for streamlined and automated processes. As reliance on GenAI models and Language Learning Models (LLMs) such as ChatGPT continues to grow, the importance of security measures for these models becomes a priority.
Today, I am delighted to present Akto's GenAI Security Testing solution, an unparalleled automated approach that directly addresses LLM Security challenges. The solution is currently in closed beta. Signup for beta access here.
The LLM Security problem at hand
On average, an organization uses 10 LLM models. Often most LLMs in production will receive data indirectly via APIs. That means tons and tons of sensitive data is being processed by the LLM APIs. Ensuring the security of these APIs will be very crucial to protect user privacy and prevent data leaks. There are several ways in which LLMs can be abused today, leading to sensitive data leaks.
Prompt Injection Vulnerabilities - The risk of unauthorized prompt injections, where malicious inputs can manipulate the LLM’s output, has become a major concern.
Denial of Service (DoS) Threats - LLMs are also susceptible to DoS attacks, where the system is overloaded with requests, leading to service disruptions. There's been a rise in reported DoS incidents targeting LLM APIs in the last year.
Overreliance on LLM Outputs - Overreliance on LLMs without adequate verification mechanisms has led to cases of data inaccuracies and leaks. Organizations are encouraged to implement robust validation processes, as the industry sees an increase in data leak incidents due to overreliance on LLMs.
“Securing AI systems requires a multifaceted approach with the need to protect not only the AI from external inputs but also external systems that depend on their outputs. “ - OWASP Top 10 for LLM AI Applications Core team member.
On March 20, 2023, there was an outage with OpenAI's AI tool, ChatGPT. The outage was caused by a vulnerability in an open-source library, which may have exposed payment-related information of some customers. There are many such examples of security incidents related to using LLM models.
Monthly Google Search results for LLM Security for last 12 months
Akto’s LLM Security Solution
By leveraging advanced testing methodologies and state-of-the-art algorithms, Akto’s LLM Security solution provides comprehensive security assessments for GenAI models and LLMs. The solution incorporates a wide range of innovative features, including over 60 meticulously designed test cases that cover various aspects of AI vulnerabilities such as prompt injection, overreliance on specific data sources, and more.
Currently, security teams manually test all the LLM APIs for flaws before release. Due to the time sensitivity of product releases, teams can only test for a few vulnerabilities. As hackers continue to find more creative ways to exploit LLMs, security teams need to find an automated way to secure LLMs at scale.
Often input to an LLM comes from an end-user or the output is shown to the end-user or both. The tests try to exploit LLM vulnerabilities through different encoding methods, separators and markers. This specially detects for weak security practices where developers encode the input or put special markers around the input.
GenAI security testing also detects weak security measures against sanitizing output of LLMs. It aims to detect attempts to inject malicious code for remote execution, cross-site scripting (XSS), and other attacks that could allow attackers to extract session tokens and system information. In addition, Akto also tests whether the LLMs are susceptible to generating false or irrelevant reports.
“From Prompt Injection ( LLM:01) to Overreliance (LLM09) and new vulnerabilities & breaches everyday and build systems that are secure by default; It is critical to test systems early for these ever evolving threats. I’m excited to see what Akto has in store for my llm projects” - OWASP Top 10 for LLM AI Applications Core team member.
To further emphasize the importance of AI security, a recent survey in September, 2023 by Gartner revealed that 34% of organizations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of
generative AI (GenAI), according to a new survey from Gartner, Inc. Over half (56%) of respondents said they are also exploring such solutions, highlighting the critical need for robust security testing solutions like Akto's.
To showcase the capabilities and significance of Akto's AI Security Testing solution, I will be presenting at the prestigious Austin API Summit 2024. The session, titled "Security of LLM APIs," will delve into the problem statement, highlight real-world examples, and demonstrate how Akto's solution provides a robust defense against AI-related vulnerabilities.
As organizations strive to harness the power of GenAI, Akto stands at the forefront of ensuring the security and integrity of these GenAI technologies. The launch of Akto’s GenAI Security Testing solution reinforces our commitment to enabling organizations to secure data processed via all types of APIs including LLM APIs.
The solution is currently in closed beta. Signup for beta access here.
If you want to try Akto, here is a guide to deploy Akto in 60 seconds.
Open Redirect in Outdated FCKeditor: SEO Poisoning in Action
The attackers exploited open redirect requests associated with FCKeditor, a web text editor that used to be popular.
NIST Releases Version 2.0 : 6 Key Features of NIST CyberSecurity Framework 2.0
Explore the key features and effective implementation of the NIST Cybersecurity Framework 2.0. This comprehensive guide provides insights on managing cybersecurity risks in organizations of all sizes and sectors.
Protecting Your APIs: An In-Depth Analysis of the Most Noteworthy CVEs
Uncover vulnerabilities and safeguard your APIs with insights into noteworthy CVEs. - CVE-2023-35078: Authentication Flaw in Ivanti EPMM API - CVE-2023-23752: Improper Access Control in Joomla - CVE-2023-49103: Serious Information Exposure in ownCloud's Graph API