
Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST)

Static Application Security Testing (SAST) tests the code passively; the code is not running. This includes walkthroughs, syntax checking, and code reviews. Static analysis tools review the raw source code itself, looking for evidence of vulnerabilities as well as known insecure practices, functions, libraries, or other risky constructs used in the source code. The UNIX program ‘lint’ performed static testing for C programs.
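To illustrate, the hypothetical C snippet below (not drawn from the text) contains the kind of constructs a static analysis tool flags purely by inspecting the source, without ever running the program:

    /* vuln.c: hypothetical snippet containing constructs a static
       analyzer flags without executing the program. */
    #include <stdio.h>

    int main(void) {
        char name[16];
        scanf("%s", name);   /* flagged: unbounded %s read into a 16-byte buffer */
        printf(name);        /* flagged: user input used as the format string */
        return 0;
    }

Both findings come from the source alone: an unbounded read into a fixed-size buffer and passing untrusted input as a format string are well-known insecure practices.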

Code compiler warnings can also be considered a “lite” form of static analysis. The C compiler GCC (GNU Compiler Collection, see: https://gcc.gnu.org) contains static code analysis features: “The gcc compiler includes many of the features of lint, the classic C program verifier, and then some ... The gcc compiler can identify many C program constructs that pose potential problems, even for programs that conform to the syntax rules of the language. For instance, you can request that the compiler report whether a variable is declared but not used, a comment is not properly terminated, or a function returns a type not permitted in older versions of C.” Please note that GCC itself is not testable; it is given as an example of a compiler with static testing capabilities [7].
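As a rough sketch of those compiler checks (assuming a current GCC invoked with the common -Wall and -Wextra warning flags; the file name and contents are made up for illustration):

    /* warn.c: constructs GCC's built-in static checks report when
       warnings are enabled, e.g.:  gcc -Wall -Wextra -c warn.c */
    #include <stdio.h>

    static int add(int a, int b) {
        int unused;            /* declared but never used: -Wall reports it */
        return a + b;
    }

    int main(void) {
        printf("%d\n", add(2, 3));
        return 0;
    }

The program is syntactically valid and compiles, yet the compiler’s static checks still report the unused variable, exactly the class of problem described in the quotation above.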

Dynamic Application Security Testing (DAST) tests the code while it is executing: security checks are performed while the code or application under review is running.
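As a minimal sketch of the difference (assuming GCC and its run-time AddressSanitizer instrumentation, enabled with -fsanitize=address; the program and input are hypothetical), dynamic testing builds the program and then exercises it while it runs:

    /* dyn.c: a buffer overflow observed only when the program is
       actually executed with a sufficiently long argument. */
    #include <string.h>

    static void copy_input(const char *input) {
        char buf[8];
        strcpy(buf, input);    /* overflows buf when input exceeds 7 characters */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            copy_input(argv[1]);
        return 0;
    }

    $ gcc -g -fsanitize=address dyn.c -o dyn
    $ ./dyn AAAAAAAAAAAAAAAA     (the instrumentation reports a stack-buffer-overflow at run time)

Here the defect only manifests when the running program receives an overlong input, which is the kind of behavior dynamic testing is designed to observe.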

Both approaches are appropriate, and they complement each other. Static analysis might uncover flaws in code paths that have not yet been fully implemented and so would not be exposed to dynamic testing. Conversely, dynamic analysis might uncover flaws in the implementation and interaction of code that static analysis missed.

The term “push left” describes discovering (or avoiding) flaws as early as possible in the software development lifecycle. The best way to mitigate a vulnerability is to never have that vulnerability, so training developers to write secure code (and avoid mistakes) is paramount. Assuming a vulnerability exists, SAST discovers flaws earlier in the software development process, when they are easier (and cheaper) to mitigate. DAST occurs later in the development process, increasing the time (and cost) to mitigate discovered flaws.

White box software testing gives the tester access to program source code, data structures, variables, etc. Black box testing gives the tester no internal details: the software is treated as a black box that receives inputs.

Disclosure

Performing application security testing may result in the discovery of vulnerabilities in third-party software. Disclosure describes the actions taken after discovering a software vulnerability. This topic has proven controversial: what actions should you take if you discover a flaw in well-known software such as the Apache Web server or Microsoft’s IIS (Internet Information Services) Web server?

Assuming you are an ethical researcher, the risk is not that you understand the vulnerability: the risk is that others may independently discover the vulnerability or may have already done so. If the others are unethical, anyone running the vulnerable software is at risk.

The ethical researcher could privately inform the vendor responsible for the software and share the research that indicated the software was vulnerable. This process works well if the vendor quickly releases a fix or a patch for the vulnerability, but what if the vendor does nothing?

Full disclosure is the controversial practice of releasing vulnerability details publicly. The rationale is this: if the bad guys may already have the information, then everyone should have it. This ensures that the white hats also receive the information, and it pressures the vendor to patch the vulnerability. Advocates argue that vulnerable software should be fixed as quickly as possible; relying on a (perceived) lack of knowledge of the vulnerability amounts to “security through obscurity,” which many argue is ineffective.

The practice of full disclosure is controversial (and considered unethical by many) because many unethical hackers (including script kiddies) may benefit from this practice; zero-day exploits (exploits for vulnerabilities with no patch) are more likely to be developed, and additional innocent organizations may be harmed.

Ethical disclosure (also called responsible disclosure) is the practice of privately sharing vulnerability information with a vendor and withholding public release until a patch is available. This is considered the best disclosure option. Other options exist between full and responsible disclosure, for example privately sharing vulnerability information with a vendor but setting a deadline, such as “I will post the vulnerability details publicly in three months, or after you release a patch, whichever comes first.”