Fuzz testing, or fuzzing, is the practice of providing unexpected, malformed, or random data to an application or service and measuring its response and stability. This is accomplished by monitoring for unexpected behavior or application/service crashes. Fuzz testing finds issues that traditional testing and QA methods typically do not.
The goal of fuzz testing is to discover software defects that need to be addressed as these defects may lead to (exploitable) vulnerabilities. Fuzz testing can find not only security issues, but also flaws in the business logic of an application or service.
Fuzz testing can be performed using one of two algorithmic strategies: coverage-guided or behavioral. GitLab envisions both types of fuzz testing becoming part of our offering.
Coverage-guided fuzz testing uses contextual information from source code to better inform fuzz tests and to correlate a crash directly to the region of vulnerable code. This dramatically shortens the cycle from an initial fuzz test, to a crash, to a fix for the vulnerable area.
A large benefit of coverage-guided fuzz testing is that it can be done without requiring all development of an app to be completed or for a Review App to be created for live testing. Instead, coverage-guided fuzz testing can be done iteratively on small parts of the app. This means that you can integrate fuzz testing earlier into your SDLC and shift security further left. Getting these results sooner means that developers can act on them sooner, reducing cycle times.
Behavioral fuzz testing is a technique that uses a definition of the inputs a target application is expecting to better understand the Implementation Under Test (IUT). This allows the fuzzer to intelligently make mutations that are very close to valid, in contrast to fuzzers that don't understand what inputs the IUT expects.
Additionally, behavioral fuzz testing is able to observe how the behavior of the IUT changes to make different decisions for subsequent fuzz tests. For example, if a behavioral fuzz test notices that a specific type of input causes HTTP 500 errors, it could make similar sorts of mutations to find errors in the same parts of the app. This approach provides better quality of results by inducing more faults and helping to pinpoint where they are in the app.
GitLab will provide more valuable fuzz testing results by combining both behavioral and coverage-guided fuzz testing techniques. By using both approaches together, we can find more faults and vulnerabilities than either approach would find on its own.
Behavioral fuzz testing tests what an app should be doing, based on its published interfaces. Because we also have the source code of the app available at the time of test, we can use coverage-guided fuzz testing techniques to identify parts of the app that aren't mentioned in the publicly facing interfaces. We can also use the source code of the app to provide context and better understand why the app is crashing. This allows us to intelligently generate new test cases to reach other parts of the app's code.
A final, and very important, benefit of combining the techniques is that because we have context around where in the code a fault has occurred, we can point users to specific parts of their app where the fault occurred so it can be fixed. Generally, the process of determining where in source code a fault occurred is difficult, so allowing users to immediately see this location is incredibly valuable and will directly help them to mitigate identified issues.
In a survey GitLab conducted in May 2020, 81% of our respondents said that they think fuzz testing is important. However, over 60% said that the most difficult part of fuzz testing is setting it up and integrating it into their CI systems. Because of this difficulty, only 36% of respondents were actually using fuzz testing.
Despite many people thinking fuzz testing is important, the large number of users not using it shows there is an opportunity for GitLab to deliver new value if we can help users overcome the initial pain points of adopting fuzz testing.
There are two primary target audiences for fuzz testing:
There is an additional role that may be included in the target audience: product verification (PV) engineers, who may leverage fuzz testing as part of their unit or regression tests. If a software development team does not handle unit test creation directly, a PV engineer may be partnered with that team to create and maintain unit tests on its behalf.
Sasha is the primary persona we will focus on building fuzz testing for initially. Ensuring they are able to successfully set up and consume the results of a fuzz test is critical to drive adoption of fuzz testing.
We prefer to get 80% of fuzz testing quality while only requiring users like Sasha to do 20% of the work. Even though better results are possible with more work by the end-user, we do not want to go in this direction initially since it means users will get frustrated and not use fuzz testing at all.
In the future, we will focus on building out deeper fuzz testing workflows to support Sam and the security research they are doing. We will also deepen the fuzz testing experience so that users like Sasha can easily learn more about fuzz testing and become more advanced users without requiring a large amount of time to be invested. At that point, we will focus on getting the 20% of better-quality results that require 80% of the work by end users.
Approaching and prioritizing personas in this way will help us to drive adoption of fuzz testing and address the pain points identified in our research.
Web API providers - Web APIs are a natural fit for us to address with fuzz testing. GitLab has many users with this use case who use our product already. It will be a short jump for them to add fuzz testing.
This is beneficial since we can lean on our core strengths to bring fuzz testing to this use case and we will have a large number of existing users we can start to get feedback from.
Another benefit of this is that GitLab will be able to begin dogfooding this very quickly, as we publish many web APIs ourselves.
Security research and penetration testing teams - Security labs and penetration testing firms generally make use of fuzz testing in some form or another as part of their test engagements. GitLab has a large opportunity in this market with our fuzz testing solution to begin working with these organizations.
Specifically, GitLab can provide value with our existing product to help security firms manage their repositories and pipelines, as part of a single DevOps application. What will really differentiate us here, though, is that our fuzz testing technology will live inside GitLab. This will help these organizations work more effectively, as opposed to having to use a standalone fuzz testing product and then being required to manually interface with their SCM tools to store and update results.
Medical device and regulated industries - GitLab has an opportunity to introduce fuzz testing as a new piece of technology to the medical device and regulated hardware industry. Because these devices can cause serious injury if they malfunction, the ability to find bugs with fuzz testing before they are found in the field will resonate quite well.
This will be a tougher set of use cases to enter for a number of reasons. Because these industries are heavily regulated, they likely have very mature workflows that GitLab would need to find a way to fit into. We have an opportunity to introduce GitLab as a single application for their whole DevOps lifecycle, which will help these companies move more quickly in addition to providing them fuzz testing, but this will take time.
Because this is a relatively new space for GitLab, we need to be cognizant of the time needed to enter the space, but it is quite a large opportunity. To make sure we can move forward iteratively, we will attempt to find an initial user to work with and make sure we understand the medical and regulated device use cases and workflows. Once they are successful, we can then use them as a case study for additional companies.
Automotive systems - Similar to medical devices, introducing fuzz testing as part of automotive systems is a big opportunity for GitLab. Automotive systems undergo long testing cycles before deployment into production, which means they can take advantage of long, "soak-test" fuzz testing sessions to find bugs.
A potential challenge for this use case will be ensuring our fuzz testing and GitLab platform fit the existing technology stacks that automotive manufacturers use. To ensure that we are successful in this space, we will canvass several different automotive customers, learn more about their specific technology stacks, and work closely with one we are compatible with as part of a future iteration. Based on the results and feedback, we can then decide to either expand into additional technology stacks or to focus more on the ones we already have.
Embedded device manufacturers - A broad and challenging domain we can bring fuzz testing to is the embedded device space. These hardware devices typically connect to sensors and control units over specific interfaces, some of which are not always secured before being deployed. Fuzz testing can provide the ability to locate these bugs and vulnerabilities before deploying the devices.
Embedded devices generally will have toolchains that are less commonly used than web or desktop software, since those toolchains are intended solely for developing hardware-based applications. This means that fuzz testing will provide a large amount of value since the platforms have not been as rigorously tested by as many people, but also will present GitLab a challenge in that there will be far more choices than we can realistically support.
To ensure we can be successful in this space, we will start with hardware devices that communicate over channels and protocols that are close to our existing fuzz testing tools and do not require the use of custom hardware to communicate. Devices that speak HTTP over a wired or wireless connection are a good example of what we could quickly target. Devices that use a proprietary hardware interface with a proprietary protocol GitLab does not support are a good example of the type of devices we would not attempt to start with.
GitLab's aspiration with fuzz testing is that it becomes a security technique that all of our users can take advantage of to find and fix issues in their apps before attackers can exploit them.
Since fuzz testing is traditionally difficult to use, our emphasis will be on making fuzz testing an accessible technology that our users can take advantage of without becoming technical experts in the space or forming a dedicated fuzz testing team.
GitLab will provide its users with application robustness and security visibility testing solutions which validate the entire application stack. This will be provided by verifying the user’s application or service both while at rest as well as while running. This will include historical trending and recommendations for next steps to provide peace of mind to our customers.
Furthermore, GitLab will provide real-time remediation for user solutions running in production. This will be possible by analyzing the issues found in GitLab Secure and applying “virtual patches” to the user’s production application or service, leveraging GitLab Defend. This will allow organizations to remain secure and continue functioning until the underlying vulnerability can be remediated in their code.
GitLab ultimately envisions our fuzz testing solutions becoming a critical part of every organization's DevSecOps workflow, whether that is for web APIs, traditional desktop software, or hardware devices. Our long-term vision is to shift fuzz testing solutions left for these use cases so that fuzz testing becomes a common part of every developer's workflow. We will start with web API and web application use cases, which are already well supported in GitLab, as part of an iterative approach to delivering fuzz testing.
The next step is to provide minimal support for fuzz testing as part of GitLab. We will do this by integrating the new technology from Peach Tech and Fuzzit as well as by building out the core pieces of the fuzz testing experience inside GitLab.
We will focus on two MVCs, one for behavioral fuzz testing of APIs with OpenAPI specifications and one for coverage-guided fuzz testing of applications written in Go. Based on what we learn and customer feedback, we will then improve both use cases with future iterations.
This MVC will focus on providing coverage-guided fuzz testing of applications written in Go. Results will be provided to users in context with the pipeline so they can act on them. The full, deep integration into various GitLab workflows will be handled as follow on epics.
This MVC will focus primarily on API fuzz testing with OpenAPI definitions, since that is a common use case that fuzz testing can help with. Results will be surfaced in context with pipelines. In future epics, we will more deeply integrate these results in other parts of GitLab.
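To keep setup friction low for both MVCs, we envision enabling fuzz testing through a small addition to a project's `.gitlab-ci.yml`. The sketch below is illustrative only; the template, job, and variable names are not final and may differ in the released configuration.

```yaml
# Illustrative sketch only: names are placeholders, not a released template.
include:
  - template: Coverage-Fuzzing.gitlab-ci.yml

my_fuzz_target:
  extends: .fuzz_base
  variables:
    COVFUZZ_TARGET: ./parser_fuzz_target   # hypothetical compiled Go fuzz target
```

The intent is that adding a few lines like these is all a user needs to start seeing fuzz testing results in their pipeline, consistent with our 80/20 approach for personas like Sasha.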
Our initial focus is on web applications and REST APIs, so we are not focusing on fuzz testing local desktop or mobile applications at this time.
Wireless, Bluetooth, and automotive-based fuzz testing require special hardware to accomplish, which makes these types of fuzz testing out of scope for now.
The fuzz testing competitive landscape is composed of both commercial and open source testing solutions. The following outlines the competitive landscape for fuzz testing solutions:
GitLab already uses OWASP ZAP today to power the DAST capability within the GitLab Secure offerings. ZAP is a web application security scanner, leveraging a proxy to perform testing, and does not perform protocol-level fuzz testing. All fuzz testing by ZAP is performed within the context of the web application being tested.
Analysts consider fuzz testing to be part of Application Security Testing, and it is generally discussed as part of those reports rather than as a standalone capability. There are challenges to making it part of the DevSecOps paradigm, and analysts are interested to hear more about how we address them.
Gartner wrote "Outsourcing or leveraging managed services to perform fuzzing is the recommendation in the absence of internal subject matter expertise and staffing." as part of their "Structuring Application Security Practices and Tools to Support DevOps and DevSecOps" research. This recommendation is a result of the high-level of manual configuration many fuzzers need. This is a pain point GitLab should be mindful of as we work on bringing our own fuzz testing solutions out so that users can be successful without needing to be a fuzz testing expert nor need to do a large amount of configuration.
Gartner also published their How to Deploy Application and Perform Application Security Testing where they say "You should reserve fuzzing for nonweb applications." We disagree with this conclusion and think that it comes primarily from the difficulty normally associated with doing fuzz testing. This underscores the importance of our emphasis on making fuzz testing straightforward and easy to use.
Additionally, as APIs become more and more of a staple of modern application development, they should be prioritized for fuzz testing. As one of the primary interfaces exposed to end-users, fuzz testing can reveal critical bugs and security issues in APIs before they are exposed to the world, despite them not being traditional web applications.
By combining our fuzz testing solution with our DAST, IAST, and SAST solutions into a GitLab Secure suite, GitLab can be placed into the Gartner Magic Quadrant for Application Security Testing. This will give GitLab additional exposure as well as show security thought leadership.
Other relevant analyst firms include 451 Research and IDC, as they have focused security practices in which GitLab Secure can be highlighted and can show leadership.
Fuzz testing has not yet been released so we will continue to update this section with customer success and sales issues as we begin getting feedback.
Fuzz testing has not yet been released so we will continue to update this section with specific GitLab issues as we begin getting feedback.
Fuzz testing has not yet been released so we will continue to update this section with internal customer issues as we begin getting feedback.