If you are familiar with application security, then you have heard the term Static Analysis Security Testing, or SAST. But how well is it serving you? Is it adding value to you and your teams, helping to deliver higher quality software faster? Or is it seen as a necessary evil on the way to production? Does it give you the vulnerability visibility you need, or are you spending too much time sifting through noise to find real problems? At HCL, we think it’s time to get the most out of your static analysis and find the SAST-isfaction you’re looking for.
Why Is It Hard to Find SAST-isfaction?
For as long as there has been information to share, there has been a need to secure it. History offers plenty of examples of methods for hiding or obfuscating data: there are records of cryptograms from the Middle Ages, and long before that the Egyptians had hieroglyphs and cuneiform was used throughout Mesopotamia.
Starting around World War II, information began to be shared in digital form, giving birth to information security. The most notable examples from that era are the Enigma and Bomba machines. Since then, computers have made their way into every industry, billions of lines of code have been written, and the need for code security has grown with them. The first real Static Analysis Security Testing (SAST) tool, Lint, appeared in 1978. It was built to flag common coding errors and questionable constructs that compilers of the day let pass.
Fast-forward to today’s fast-paced development world, and the need to find errors and vulnerabilities has never been greater. Yet modern security teams feel real pressure when it comes to configuring scans well. When you consider the dizzying array of potential sources, sinks, lexical analyses, taint propagation, and call (or control-flow) graphs, it’s no wonder. All of these options mean two big hurdles to deal with.
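To make that jargon concrete, here is a minimal sketch of the kind of path a SAST engine traces. The servlet, class name, and query below are hypothetical, written only to illustrate a source-to-sink trace in a plain Java web application, not code from any particular product:

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet used only to illustrate what taint analysis traces.
public class AccountServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // SOURCE: untrusted data enters the application from the request.
        String accountId = req.getParameter("accountId");

        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement stmt = conn.createStatement()) {
            // SINK: the tainted value reaches a SQL query unchanged, so the
            // source-to-sink path is reported as a SQL injection finding.
            ResultSet rs = stmt.executeQuery(
                "SELECT balance FROM accounts WHERE id = '" + accountId + "'");
            if (rs.next()) {
                resp.getWriter().println(rs.getLong("balance"));
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}
```

In general terms, a scanner models `req.getParameter()` as a source and `executeQuery()` as a sink, then walks the data flow between them; much of the configuration effort goes into telling it which methods play which role.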
Those Darn False Positives
The underlying idea behind Lint was great, but because not everything it flagged could be easily verified as an actual error, the potential for false positives was high. A false positive happens when a test declares something a vulnerability when, in reality, it is not: the tool decides that some kind of rule violation has occurred and reports it as a problem. This can happen for a variety of reasons, but a common one is that the tool is not able to fully follow data as it flows through the application code. Most SAST tools err on the side of caution, and when there is uncertainty, they flag it as an error.
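Here is a hedged illustration of how that uncertainty plays out. The class and its map-based sanitizer lookup are invented for this example; they simply show a pattern of indirection that some engines cannot trace:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical example of how a false positive can arise: the input IS
// sanitized, but the sanitizer is reached through a lookup table, which
// some engines cannot resolve statically.
public class CommentFormatter {

    // Sanitizers are selected at runtime by key, an indirection that breaks
    // the data-flow trace for tools that only follow direct calls.
    private static final Map<String, UnaryOperator<String>> SANITIZERS = new HashMap<>();
    static {
        SANITIZERS.put("html", CommentFormatter::escapeHtml);
    }

    public static String render(String userComment) {
        // The value really is cleaned here, but an engine that cannot resolve
        // the map lookup still sees "tainted input reaches the output" and
        // errs on the side of caution by reporting cross-site scripting.
        String safe = SANITIZERS.get("html").apply(userComment);
        return "<p>" + safe + "</p>";
    }

    private static String escapeHtml(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        System.out.println(render("<script>alert(1)</script>"));
    }
}
```

The output really is escaped, but an engine that loses the trail at the map lookup reports the finding anyway, and a human has to spend time proving it safe.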
The main problem with false positives is that there is a lot of noise to deal with. Teams have to spend more time investigating issues and trying to pinpoint whether something is a real problem. It also becomes easier to miss real issues, simply because of the volume of findings to wade through.
And False Negatives
And missing something is potentially a much bigger problem. What if there are real vulnerabilities in our code that could be easily exploited, but the tool we used didn’t assess them or didn’t have the rules to catch them? If our teams are spending too much time trying to figure out the best ways to test the system, these kinds of problems can go undetected for a long time.
What if we aren’t sure we have tested everything we should have? This is particularly true for application programming interfaces (APIs). For instance, suppose your development team decided to start using the latest version of Spring. The current version (5.2.5 at the time of this writing) contains over 400 packages. On average, a security professional can assess the security impact of an API in anywhere from one to ten minutes. Now imagine we want that assurance for every API developed with these packages. It could take a single dedicated person up to two months to assess them all manually. Even a small dedicated team doing nothing else would likely need a week or more. The problem is compounded further if there are other APIs and frameworks to account for.
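As a rough sanity check on that two-month figure, here is a back-of-envelope calculation. The assumption of about 2,000 public APIs across those packages is ours, for illustration only, and uses the high end of the one-to-ten-minute estimate:

$$2{,}000\ \text{APIs} \times 10\ \tfrac{\text{min}}{\text{API}} = 20{,}000\ \text{min} \approx 333\ \text{hours} \approx 42\ \text{eight-hour days} \approx 2\ \text{months}$$

Even at a more optimistic five minutes per API, the same workload is still roughly a month of one person’s time.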
Good News: Help Is Here
Today, 88% of InfoSec teams average more than 25 hours per week investigating and detecting vulnerabilities. Time spent chasing things that are not real problems is costly, and missing real problems could be catastrophic. You may already know that HCL AppScan has unique capabilities specifically designed to address both of these problems. Now, in Version 10, HCL is using a common SAST engine to bring the analytics power of our cloud offering on premises. This makes AppScan a perfect choice for static testing in hybrid environments.
Limiting The Noise
HCL has enhanced capabilities designed to combat the noise from false positives. We call it Intelligent Finding Analytics, or IFA. Simply put, IFA uses machine learning to filter out false positives so that you don’t have to. IFA can triage results in seconds, turning a list of tens of thousands of findings into something much more manageable – and meaningful. In practice, HCL has seen IFA reduce false positives by up to 98%. That time savings alone makes IFA worthwhile. However, IFA does not stop there.
In addition to filtering out noise, IFA goes a step further, prioritizing and organizing the remaining results into related groups. This makes the results immediately actionable by providing targeted remediation recommendations, and it takes the guesswork out of where to start fixing issues. With IFA, developers have a much better sense of where their efforts will have the most impact.
Avoiding a False Sense of Security
If you are writing microservices and/or leveraging APIs, then accounting for change is critical. To help, HCL has Intelligent Code Analytics, or ICA. ICA automatically discovers new APIs and assesses them properly. So instead of dedicating your security team for weeks to writing their own markups, you can confidently leverage the machine learning capabilities of ICA. And you can do it in seconds.
ICA ensures your third-party APIs and frameworks are reviewed and assigns the right security impact to each. The result is more complete scan coverage, more accurate findings, and greater confidence.
And Now It’s Even Better for DevOps
Because static testing involves deep knowledge of the code base, it finds vulnerabilities that other types of security testing cannot. That depth, however, also means more complexity, which can make it challenging to integrate into development pipelines. HCL has introduced some great new capabilities to make doing great SAST easier and better.
- Project Pizza: Split scans into small slices to take advantage of multi-threading and run them in parallel. To improve speed further, we also cache the slices on the first scan and only rescan the ones that contain new code on subsequent scans.
- Implicit Security: Leave no developer behind by leveraging a new utility that analyzes code in real time. CodeSweep is a VS Code extension that automatically analyzes the latest code changes and informs developers of findings as they work.
- Bring Your Own Language (BYOL): A new framework for building custom scanners that perform fast vulnerability analysis directly on source code. These scanners can be easily incorporated into any AppScan deployment. No more waiting months for your vendor to build out support for a new language.
So if the static testing you are running today leaves you a little dis-SAST-isfied, we invite you to take a fresh look at HCL AppScan Version 10, or sign up for a free, 30-day trial of HCL AppScan on Cloud, and find the SAST-isfaction you’ve been looking for.
Start a Conversation with Us
We’re here to help you find the right solutions and support you in achieving your business goals.