start portlet menu bar

HCLSoftware: Fueling the Digital+ Economy

Display portlet menu
end portlet menu bar
Close
Select Page

Imagine you’re a well-known car brand. Your shiny new AI chatbot on your dealer’s website is live—ready to assist customers and drive sales. But then it offers a $50,000 car for $1.

This isn't a hypothetical. It’s exactly what happened to a Chevrolet dealership. A simple prompt injection attack tricked their chatbot, turning a promising sales tool into a brand-damaging liability.

If you’re integrating LLMs into your applications, be aware of the fast-emerging risk of LLM manipulation that can bypass traditional security scans. It’s not just a data breach—it’s a reality breach, with direct business, legal, and financial consequences. Before we dive any further, let’s first understand LLMs.

What Are LLMs

Large Language Models (LLMs), such as those powering chat, summarization, and code generation, are sophisticated neural networks trained on massive amounts of text data. From automating customer support to orchestrating complex workflows, they are rapidly becoming essential components of modern applications, taking on tasks that once required human oversight.

However, this powerful technology brings its own set of risks. 

When LLMs Get Tricked

When malicious users interact with LLMs, particularly those integrated with external tools and data sources (like Retrieval-Augmented Generation (RAG) or plugins), they can exploit these connections to introduce significant risks. These can include sensitive data exposure, content policy violations, and triggering unintended actions. Ultimately, such issues can lead to severe consequences like data leaks, financial loss, or major privacy breaches, such as GDPR violations.

Real-world examples, such as the Chevrolet incident, aren’t isolated. Air Canada, for instance, saw its LLM chatbot provide hallucinated information about bereavement fare refunds—an error that resulted in legal action and financial penalties for the airline.

For organizations embedding these models more deeply into their software, these risks grow both in scale and complexity, demanding a specialized security approach that traditional measures were never designed to handle.

Introducing DAST for LLM-augmented Applications 

HCL AppScan DAST for LLM-augmented web applications is a powerful new capability that addresses these risks by enabling dynamic testing of vulnerabilities unique to LLM-based workflows, in addition to identifying traditional vulnerabilities in websites and web applications. Security teams can now configure testing for chat endpoints and other prompts, providing access to LLM components within their web apps, then analyze interactions with full transcripts and remediation guidance. 

Our holistic, full-stack approach to evaluating LLM-powered applications, from chatbots to RAG, leverages proprietary, native technologies, providing in-depth coverage of LLM-specific risks, making it a one-of-a-kind solution. 

How It Works

When a security expert or developer starts a test in HCL AppScan, the platform takes on the role of an attacker. It automatically sends various types of malicious prompts and attack patterns—just like a hacker would attempt to do. As the test runs, HCL AppScan gives you a live view of what’s happening. You can see exactly how your LLM reacts to each injection or attack attempt. 

If an attack succeeds, the platform automatically creates an issue and provides a detailed report showing the exact malicious prompt that broke the model, the model’s response, and clear fix recommendations. The findings can also be imported into HCL AppScan Enterprise, giving enterprises a unified view of LLM security risks.

LLM security risks

By exercising LLM workflows end-to-end, HCL AppScan DAST inspects behavior and responses to identify a wide range of risks, including prompt injection, sensitive data disclosure, function or tool abuse, unauthorized actions, RAG-specific threats, and more. This comprehensive approach enables organizations to assess their security posture, ensuring that vulnerabilities across the entire LLM workflow are identified, prioritized, and resolved to maintain enterprise-wide security.

What Makes It Stand Out:

  • Aligned with industry standards: Built to comply with the new OWASP Top 10 for LLM Applications, it enables security teams to identify and mitigate the most critical AI-specific risks.
  • Model-agnostic integration: Seamlessly integrates with existing pipelines and frameworks, providing a developer-friendly experience without adding complexity.
  • Full-stack security: Delivers end-to-end coverage by unifying discovery, scanning,  assessment, and remediation for LLM-based components.
  • Security posture assessments: Generate full transcripts, actionable remediation guidance, and insights to help teams strengthen the security posture of their LLM applications.

Future-ready Security

For over two decades, HCL AppScan has been at the forefront of innovation in application security testing—consistently advancing the boundaries of what modern AppSec can achieve. From AI-driven analytics to automated vulnerability detection and triage, HCL AppScan has continually evolved to deliver intelligent, adaptive security solutions. 

DAST for LLM-augmented web applications represents yet another significant milestone in this journey. By combining discovery, scanning, and protection into a single, integrated approach, HCL AppScan continues to lead in holistic AI security—securing not just applications, but also the AI models that power them.

Discover the full potential of this capability and elevate your AI security strategy. Learn more.

Start a Conversation with Us

We’re here to help you find the right solutions and support you in achieving your business goals.

  |  February 2, 2023
AppScan Will Be at the CyberTech Global Tel Aviv Conference
CyberTech Global Tel Aviv takes place on January 30th - February 1st at Expo Tel Aviv. AppScan will join BigFix at the conference.
  |  February 23, 2023
Key Findings from Recent Application Security Testing Trends Report
The recently published 2022 Application Security Testing Trends Report has generated a lot of interest in the application security community.
  |  October 8, 2019
AppSec: Protect from the Inside Out
In cybersecurity today, it is no longer good enough to just protect and defend the perimeter of our applications - we have to protect from the inside too.
Hi, I am HCLSoftware Virtual Assistant.