Sunday, 19 June 2011

The Real World (™) against the OWASP ASVS

At Astyran we have been using the Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS) in quite a few projects.
Unfortunately, the ASVS is not widely known, even among application security specialists. I personally like it a lot, and think that Dave Wichers, the project lead, did a great job, but there are also some highly frustrating things wrong with it. This makes it very difficult to use the ASVS as is, without rework or a lot of expensive and time-consuming explanations to development teams, security managers or auditors.
This blog post represents my views and insights on the ASVS and where I hope this project will be going in the future. At first I thought of giving this post the title ‘The Good, the Bad and the Ugly of the OWASP ASVS’, but it would have become way too long if I also needed to include the good things about the ASVS. So sorry, the tone of this post is quite negative.

What is this Thing Called ASVS?

The OWASP Application Security Verification standard (ASVS) defines four levels of application-level security verification for Web applications.
Each level described in ASVS includes a set of requirements for verifying the effectiveness of security controls that protect Web applications. The requirements were developed with the following objectives in mind:
  • Use as a metric – Provide application developers and application owners with a yardstick with which to assess the degree of trust that can be placed in their Web applications,
  • Use as guidance – Provide guidance to security control developers as to what to build into security controls in order to satisfy application security requirements, and
  • Use during procurement – Provide a basis for specifying application security verification requirements in contracts.
The requirements were designed to meet the above objectives by ensuring validation of how security controls are designed, implemented, and used by an application. The requirements ensure that the security controls used by an application operate using a deny-by-default strategy, are centralized, are located on the server side, and are all used where necessary.
The OWASP ASVS has the following validation levels (from the ASVS itself, which has a Creative Commons Attribution ShareAlike 3.0 license):
(Figure: OWASP ASVS Validation Levels, reproduced from the standard.)

The Audience is not Listening

The above was literally taken from the standard itself. I think there is already a problem with the objectives:
  • Use during procurement: the ASVS is a very technical document, and people in procurement are not technical at all. Technical language should never be used in contracts. I do agree that contracts might include the requirement to have testing done up to a certain level of the ASVS, but that is something entirely different from “providing a basis”.
  • Use as guidance for security control developers: no, no and no! This is not supposed to be a guidance document for developers; it is called a verification standard. Do not try to write one good document for both security testers and developers: you will fail both audiences.
  • Use as a metric for application developers and owners: I agree with the metric, but not for developers! Developers have nothing to do with the degree of trust in an application; developers create applications based on requirements. There should be a minimal set of security requirements (maybe based on the ASVS), but let developers focus on creating applications and functionality.
Recommendation
  • Content: The ASVS should focus on providing a list of what needs to be verified for each level.
  • Reader: The audience for this document will be the security consultant who performs the testing.
  • Scoring: The only possible score (for application owners, security managers or auditors) will be fail or succeed. The consultant can add his own opinion on whether a failure is a low or high risk in the context of that application and the business functions it supports.

 

The Mythical Developer

Ok, sorry, rant-mode on for a minute.
Everyone who has been forced to read security-related books knows that security is a function of people, processes and technology. I disagree completely with that statement; IMHO processes and technology are only there to support people (forcing them will fail) in making the right choices.
People are the most important thing (I refuse to use the denigrating term asset) for security. I have witnessed small development shops without any real processes write highly secure code, and I have seen large software divisions with a secure SDLC and automated scanners fail to create a secure application.
In the end, it is about one person’s knowledge and his/her willingness and ability to implement what is correct. For that reason, one should always respect people, their capabilities and their responsibilities.
That’s why I always cringe when I read an OWASP document that regards ‘developers’ as only one type of people (usually the coders) and that believes one document fits all development team members.
A typical development team consists of business analysts, requirement writers, specification writers, architects, designers, coders, Quality Assurance (QA) staff and, later on, operations people.
OWASP, respect the way development has been done for ages, and consider rewriting all your very interesting and worthwhile documents for the intended development team member. In return you will gain the respect of the development community.
Recommendation
  • Although (see the earlier section) I do not believe that the audience of the ASVS should be members of a development team, structure the future ASVS standard around how things are done in application development: the different phases of the SDLC (Software Development Lifecycle). This will make life easier for the consultant.
  • A lot of the content of the current ASVS can be reused in dedicated, new standards for QA testers, business analysts, requirements writers, coders, etc. Tune your content to your audience, and choose that audience wisely.
  • When documents are 80% ready, reach out and go to the organisations of developers, analysts, architects, … Involve them, listen to them.

 

Puppets on a String in the Security Theatre

Although a lot of the recommendations superficially seem to make sense, many don’t. Really, OWASP, you should be the leader in application security, and not spread ideas that are not based on solid research. Yes, it can be hard not to play along (I absolutely hate needing to mention APT in my proposals), but this must be the task of an independent organisation such as OWASP.
Do not include requirements that have no real consequences for security but are merely nice-to-haves, or that are simply required by (copied from) other security standards. Do not include requirements that are not proven, even if they have other positive effects.
Some examples, with my remarks, below:
  • V1.6 – This requires, for the highest levels, that threat modelling information has been provided. If one does not do threat modelling, does that make the application insecure by default? Having done threat modelling might improve the ‘trust’ in an application, but is that trust based on solid research? I have not read a study (please prove me wrong) showing that threat modelling improves the security of an application. Nice to have: yes. Required: no. Giving more trust in an application: debatable.
  • V2.5, V4.12 – These require that authentication/authorization controls have a centralised implementation (a minimal sketch of such a centralised check follows after this list). Again, this in itself does not make your application more secure or trustworthy. It is usually considered good design and makes the application more maintainable, but on the other hand: any error in the central component completely opens up your whole application.
  • V2.10 – Re-authentication is required before any sensitive operations are permitted? This is a design trade-off. Why not digital signatures? And why ask for re-authentication if it teaches users to fill in their credentials in every pop-up that appears to come from the application?
  • V2.12, V4.14, V5.7, V7.5, V8.5, V8.6, V10.4 – These require certain details to be present in the log files. There is no real reason why; it will not stop attackers. This is a nice-to-have (enabling forensics). It should be part of the verification of the security requirements and probably be rewritten as: “Verify that requirements for logging are defined according to internal policy, contracts or compliance demands”. There is no need to perform a security check of whether this is really implemented: if it is part of the requirements specification, it will be part of the acceptance documentation and will be QA tested.
  • V7.7, V7.8 – These require approved cryptographic modules/modes of operation according to the NIST standards. Please do not refer directly to US standards. Again, the use of cryptography is usually documented in a security policy or requirement and should be part of the requirements documentation. If such documentation does not exist, refer to the more neutral http://www.keylength.com
  • V7.9 – Asks for verification of the existence/enforcement of the key management policy. Again, if this has any influence on the application at all, it should be in the requirements specification.
  • V8.9 – Is there any security reason for a centralised log component? Yes, in general it might be good design, but does it really have direct consequences for the security of the application? (OK, let’s better start with a definition of application security, maybe in a new blog post.)
  • V8.11 – Requires the availability of a log analysis tool. Really? Again, if the application needs to co-operate with a specific tool, that should have been part of the requirements. Some applications even deliberately do not log anything, since logging might make their owners liable, so this can never be a generic requirement.
  • V9.2 – Requires a list of sensitive data, the access policy, the requirements for encryption and the enforcement of that policy. Again, this can be moved to the requirements specification, and it does not make the application more secure in itself.
  • V12.2 – Requires that access to the application is denied if the application cannot access its security configuration information. Makes sense, but I have seen many times that business people demand that the application keeps running under any condition – e.g. disaster recovery mode – and that they accept that risk. So this is again more a requirements/design trade-off.
  • V12.4 – The configuration store must be able to be output in a human-readable format for audit. Nothing security-related here; move it to the functional security requirements. A criminal will not care. Does it improve trustworthiness? Maybe, if you believe that your auditor knows what the configuration must look like.
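To illustrate the V2.5/V4.12 remark above, here is a minimal sketch of what a centralised, deny-by-default authorization component could look like (my own illustration in Python, with purely hypothetical names, not something taken from the ASVS). It shows both sides of the argument: every sensitive operation funnels through one check, which is tidy and maintainable, but a single bug in that one function affects every operation that relies on it.

```python
from functools import wraps

# Hypothetical central authorization component; all names are illustrative.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "user": {"read"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Deny by default: unknown roles or permissions are rejected.
    return permission in ROLE_PERMISSIONS.get(role, set())

def requires(permission: str):
    # Decorator that routes every decorated operation through the central check.
    def decorator(func):
        @wraps(func)
        def wrapper(role: str, *args, **kwargs):
            if not is_allowed(role, permission):
                raise PermissionError(f"role {role!r} may not {permission}")
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("delete")
def delete_record(role: str, record_id: int) -> None:
    print(f"record {record_id} deleted")

delete_record("admin", 42)    # allowed
# delete_record("user", 42)   # raises PermissionError
```

Note that a one-line mistake in is_allowed (say, returning True for an unknown role) would silently weaken every decorated operation at once, which is exactly the trade-off mentioned in the list above.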
Recommendation
For each verification requirement ask the following questions:
  • If the requirement is not in place, does it necessarily make the application insecure? If the answer is no, ditch the requirement (it’s at most a nice-to-have instead of a real requirement).
  • If not having the requirement makes the application insecure, please provide proof of that. Provide a link to a study showing that this is indeed the case, and not something that has merely been copied all over the internet for years.
The result should also abide by the KISS (Keep It Simple, Stupid) principle.

Dazed and Confused

The original goal of the ASVS was also to reach out to ‘developers’. Does OWASP really believe that development team members know what to do with (or how to validate) a statement such as:
  • V3.11 – Verify that authenticated session tokens are sufficiently long and random to withstand attacks that are typical of the threats in the deployed environment.

 

Even I, as a self-proclaimed application security expert, have problems interpreting it (or explaining it to my customers):
  • What are the typical threats (against session tokens) in the environment?
  • What is long enough?
In general, I tend to translate this to: “Use only session tokens generated by the application framework in use”. At least we know (OK, some references needed) that these frameworks are often tested and validated.
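If the framework option is not available, here is a minimal sketch (my own illustration, not ASVS guidance) of what ‘sufficiently long and random’ could mean in practice, assuming Python’s standard secrets module: 32 bytes from a cryptographically secure random source gives 256 bits of entropy, which is far beyond anything brute-forceable.

```python
import secrets

# Illustration only: a session token drawn from a CSPRNG.
# 32 random bytes = 256 bits of entropy, URL-safe encoded (~43 characters).
def generate_session_token() -> str:
    return secrets.token_urlsafe(32)

print(generate_session_token())
```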
There are lots and lots of examples of this kind of unclear language. Even if the document is rewritten for application security consultants/testers, we will still need guidance. Not everyone knows the latest application security research or typical attacks, or has the time to research and explain where a specific requirement comes from.
And yes, I know what people will tell me: it’s your job to know all that. Yes, I’m quite able to do that, for a price, but sorry, it’s not my job to research all ‘commonly accepted’ security requirements that are not based on solid research. (Example: try researching/explaining why a typical ‘good enough’ password should have a minimum length of 8 characters. I usually answer that using anything shorter will mean having to discuss and explain that choice to every auditor, which is not a wise thing to do.)
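For what it is worth, the usual back-of-the-envelope argument behind such length rules is entropy. A quick sketch (my own calculation, assuming truly random characters drawn from the 94 printable ASCII symbols, which real users never do) shows what each extra character buys:

```python
import math

# Upper-bound entropy of a randomly generated password.
# Real, human-chosen passwords contain far less entropy than this.
def password_entropy_bits(length: int, alphabet_size: int = 94) -> float:
    return length * math.log2(alphabet_size)

for length in (6, 8, 10, 12):
    print(f"{length} characters: ~{password_entropy_bits(length):.0f} bits")
```

An 8-character random password comes out at roughly 52 bits, which still says nothing about the passwords users actually choose – exactly why I find the bare ‘8 characters’ rule so hard to defend.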
Recommendation
Make certain that every word counts. Be very clear. Maybe hire a technical writer. By all means, have the document proofread by the intended audience.
Explain the why behind each requirement. Then, if an application fails, consultants will have no problem putting that failure into the context of the application and the business involved.

Validation levels based on tools or methodology used

I think the standard should limit itself to validation requirements, and not prescribe the use of tools or manual reviews. We all know that even reviewers who focus on manual work (such as my company Astyran) use tools.
And yes, some requirements (such as the absence of malicious code or backdoors) can only be checked by reviewing the source code. But that should not be part of the standard. The trust one has in an application should stem from the review of the relevant validation requirements, not from the methodology or tools used. The methodology can be part of another document.
Maybe we can have different validation requirements based on the type of application, as in the Internet Banking and Technology Risk Management Guidelines (IBTRM) of the Monetary Authority of Singapore (MAS). These guidelines describe three types of financial applications: information service, interactive information exchange service and transactional service. Maybe we need to add some categories for non-financial applications, but at least it is a start.
Maybe we need a distinction between web applications that are on the internet, intranet or partner network?
Recommendation
  • Rethink the validation levels; do not base them on the tools or manual approaches used for verification or trust.

 

Scoring

Keep it simple and stupid. It is FAIL or SUCCEED. If the application fails, the consultant should provide his opinion (e.g. whether the failure of a particular validation requirement is, in the context of this application, really a serious risk or not).
Do not become security zealots. Businesses have to run on applications that sometimes might not be secure (as defined by one of the ASVS levels). Provide guidance and advice to business managers that enable them to make an informed decision.
Be aware that sometimes a security risk in an application might be counteracted by a security control outside that application (for instance, limiting access or increasing surveillance).
Do not make the mistake of integrating compliance elements from other standards. Being compliant is something different from being secure.
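To make the FAIL or SUCCEED idea concrete, here is a minimal sketch (again my own illustration, not part of the ASVS) of how a per-requirement result could be recorded: just the requirement, the verdict, the consultant’s opinion in the context of that application, and any compensating controls outside the application.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustration only: the simplest possible per-requirement scoring record.
class Verdict(Enum):
    SUCCEED = "succeed"
    FAIL = "fail"

@dataclass
class VerificationResult:
    requirement_id: str              # e.g. "V3.11"
    verdict: Verdict
    consultant_opinion: str = ""     # risk in the context of this application
    compensating_controls: list = field(default_factory=list)

results = [
    VerificationResult(
        "V3.11", Verdict.FAIL,
        "Low risk here: intranet-only application with short-lived sessions",
        ["access restricted to the internal network"],
    ),
]
for r in results:
    print(r.requirement_id, r.verdict.value, "-", r.consultant_opinion)
```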

Reporting requirements

The current ASVS reporting requirements are IMHO too strict or too heavy, especially the required documentation of the application architecture. I see no need to include detailed architecture or design documentation: for some applications this would be hundreds of pages. This documentation should be provided to me as part of the review; if it is not there, then that is a finding …
As far as I understand, this documentation requirement is there to prove that the consultant/tester really understood the application. Sorry, but a report will only be accepted by a customer if he deems it valid. If I misunderstood the workings of the application, I will have to go back. Having detailed diagrams in the report proves nothing.
The reporting requirements also mandate the use of the OWASP risk rating methodology. I do not agree; the report should take into account the risk appetite and risk methodology of the company. The OWASP methodology is largely unproven (I do not mean that it is worse than any other) and is based on subjective data such as the skills and motives of attackers.
Recommendation
  • Rethink the reporting requirements and keep it simple: FAIL or SUCCEED, plus the opinion of the consultant/tester.

Conclusion

I really like the OWASP ASVS. But it is time for the ASVS and for OWASP to step it up and reach for the next level. I hope to have given some ways for improvement or at least some grounds for discussion. Now shoot …

History of this post

26/08/2011 – fixed some typos
