
The Power of V&V

A secure system – especially a system that is used to provide security – has to be trusted. But what underpins that trust? What proof do we have that the main components of our trusted system are implemented properly and won’t fail at a critical moment? We mentioned this point in our last article about Secure OS and, as promised, we return to it here.

Verification and validation (V&V) are used to ensure that software (or an entire system or appliance) truly possesses its stated properties. Although the two terms sound quite similar and are used in conjunction with each other, they have quite different meanings. Let’s recap.

Verification is the process used to determine whether the outcome of a given stage of product development (i.e. software development) conforms exactly to the requirements set at the beginning of the stage.

Validation is the process used to determine whether the product (computer program, operating system, appliance etc.) satisfies its intended use and user needs. Requirements, lifecycle processes and other supporting artifacts can also be validated for their conformance to the expected results.

Put simply, verification asks, “Have we implemented the system properly?” while validation tries to find out “Did we implement the proper system?” And while the second question requires the involvement of an expert (whose opinion forms the basis for the whole scope of validation issues, from requirement validation to the final integration test), the first question has to be addressed mainly using formal methods.

Indeed, one cannot exist without the other. A system can only be verified with regard to a concrete property – for example, evidence that the program does not suffer from deadlocks. That this property makes sense should itself be validated. Furthermore, such verification could be satisfied in a trivial way by restricting the ability to lock resources at all – but this may disrupt the integrity of those resources, so an additional condition has to be validated. In some cases the property definition can also be the subject of verification, provided the expert requirements can be appropriately formalized into the verification goal.


To envision the process of verification you can try imagining a sort of “magic evaluator” ready, at the push of a button, to assess any given source code: Is the code valid or not? Is it safe or not? And so on. But even this sort of ideal raises a number of questions, the first being: what will we actually prove? What statements is verification capable of making evident? Correctness, completeness, data consistency, accuracy, safety of execution… It has been shown that all such properties can be represented as a composition of two basic properties – the ‘safety’ property and the ‘liveness’ property. The safety property stipulates that the system cannot reach a specific (unsafe) state; put another way, “something bad will never happen”. The liveness property, on the other hand, guarantees that after a finite run the system will reach some defined state – in other words, “something good will definitely happen”.
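In temporal-logic notation (our own illustration, with “bad” and “good” standing for arbitrary state predicates rather than anything defined in this article), the two properties are commonly written as:

\Box\,\lnot\mathit{bad} \qquad \text{(safety: at no point of any run is the bad state reached)}

\Diamond\,\mathit{good} \qquad \text{(liveness: every run eventually reaches the good state)}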

However, knowing that a verification goal can be decomposed in this way is one thing; representing the decomposed goal correctly, without loss of meaning, is clearly another. Sometimes, attempting to decompose a rule for a system model backfires: the model gets stuck in the safety part of the rule without ever being able to establish the liveness part. This imposes additional conditions on the decomposition process. In some other realistic scenarios, you have to address the “fairness” property in addition to safety and liveness (just like in real life).


To formalize criteria defined in this manner, classical or temporal logic is often used, together with the appropriate languages for verifying system properties against those criteria. In particular, Prolog is quite popular for classical logic clauses, while temporal-logic properties are commonly expressed in the Promela language and checked with the SPIN model checker. However, this is not the only way to define verification goals. The formal definition of correct program behavior and the verification of that behavior are so specific and of such significance that in 1969 computer scientist and logician C. A. R. Hoare proposed a formal theory intended to establish the correctness of computer programs deductively. The basis of this theory is a set of logical rules defined in a way that imitates the semantics of imperative language constructions. Later, an approach to criteria specification was developed that even more closely resembles programming abstractions and supports further software design – design-by-contract programming.
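As a brief illustration of Hoare’s idea (our example, not one from the original article), a Hoare triple \{P\}\, C \,\{Q\} asserts that if precondition P holds before command C is executed, then postcondition Q holds after it terminates, for instance:

\{x = 5\}\;\; x := x + 1 \;\;\{x = 6\}

A proof for a whole program is assembled by chaining such triples together using the logical rules defined for each language construction.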

Another major issue is the choice of object for verification. Although a verification procedure implies a precise evaluation, the object of verification must be chosen correctly for the result to inspire confidence.

For example, one may choose a static system configuration – i.e. system parameters, applications and security policy restrictions – to verify. The evaluator accepts this data, performs the verification procedures according to the logical rules (based on expert knowledge) and generates ‘Pass’ or ‘Fail’ output. The evaluator may, for example, ascertain whether a certain kind of attack on the system is possible or if an unprivileged user can obtain unauthorized access to specific resources in the system, etc.
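To make the idea concrete, here is a toy sketch of such an evaluator in Python; the configuration, the single rule and every name in it are invented purely for illustration and do not come from the original article.

# Toy configuration evaluator: given a static configuration and a set of
# expert rules, report 'Pass' or 'Fail'. All names are illustrative only.

config = {
    "users": {"alice": {"privileged": True}, "bob": {"privileged": False}},
    "resources": {
        "/etc/shadow": {"readable_by": ["alice"]},
        "/var/log/app.log": {"readable_by": ["alice", "bob"]},
    },
    "restricted": ["/etc/shadow"],
}

def rule_no_unprivileged_access(cfg):
    """Rule: no unprivileged user may read a restricted resource."""
    for path in cfg["restricted"]:
        for user in cfg["resources"][path]["readable_by"]:
            if not cfg["users"][user]["privileged"]:
                return False, f"{user} can read restricted resource {path}"
    return True, ""

def evaluate(cfg, rules):
    # Apply every rule; the first violated rule produces a 'Fail' verdict.
    for rule in rules:
        ok, reason = rule(cfg)
        if not ok:
            return "Fail: " + reason
    return "Pass"

print(evaluate(config, [rule_no_unprivileged_access]))  # -> Pass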

The verification of system configuration can ensure system behavior is trusted if system components are configured properly. This means that all system services and applications should run as specified and contain no bugs or vulnerabilities that could be exploited to affect the functioning of the whole system [1].

However, the situation is often different in reality, so the software internals also need to be verified [2]. One thing is clear at this point: because software internals have many layers of representation, the right choice has to be made once again. What exactly is to be verified – the high-level source code or the sequences of machine instructions? Is it necessary to consider the program environment, and how should that environment be modeled? Is it worth examining how low-level execution depends on the specific hardware platform…? Once again, the choice depends on the verification goal and on the level of assurance the verification is expected to provide. Suppose you need to ensure the absence of a certain type of vulnerability in a piece of software (in most cases this can be interpreted as the safety problem mentioned above). Testing and static code analysis aimed at finding typical dangers are not usually considered formal verification methods [3], because they tend not to cover all possible situations (although exceptions do exist). To solve the problem with a verification method, you need to perform logical computations over the code constructions in order to show that no continuous fragment of the program control flow graph (including all non-linear transitions) is vulnerable to the given exploitation method. All that is required is to formalize the appropriate validity conditions in a general way and to implement efficient evaluation algorithms for the entire program code.
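A drastically simplified sketch of this kind of reasoning over a control flow graph is given below; the four-node graph, its labels and the “bounds check before buffer use” rule are all our own assumptions, chosen only to show the shape of the check.

# Toy safety check over a (tiny, acyclic) control flow graph: verify that on
# every path from the entry node to a node that uses a buffer, a bounds check
# has already been performed. All node names and labels are illustrative.

cfg_edges = {
    "entry": ["check", "skip"],   # two branches: one checks bounds, one doesn't
    "check": ["use"],
    "skip":  ["use"],
    "use":   [],
}
labels = {"entry": None, "check": "bounds_check", "skip": None, "use": "buffer_use"}

def unsafe_paths(edges, labels, start="entry"):
    """Depth-first enumeration of all paths that reach a buffer use
    without a preceding bounds check (assumes an acyclic graph)."""
    bad = []
    def dfs(node, path, checked):
        checked = checked or labels[node] == "bounds_check"
        if labels[node] == "buffer_use" and not checked:
            bad.append(path + [node])
        for nxt in edges[node]:
            dfs(nxt, path + [node], checked)
    dfs(start, [], False)
    return bad

violations = unsafe_paths(cfg_edges, labels)
print("Pass" if not violations else f"Fail: {violations}")
# -> Fail: [['entry', 'skip', 'use']]  (the unchecked branch is reported)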

The issue may be further complicated by the lack of any guarantee that a compiler will preserve the proven properties in the resulting machine code, and by the need to guarantee properties that were originally defined for the low-level code. It is because verifying program code is so complex that verification methods are applied to code that is as simple and concise as possible. Priority is given to the code of the operating system kernel and to the low-level services that underpin the security of the whole system.


One promising approach to verification is to guarantee certain properties of the code (or to lay the basis for such a guarantee) at the moment the code is created. By demonstrating that a notation or programming language is capable of imparting the necessary characteristics to the program code, one can avoid tedious checking procedures, at least for those characteristics. Code generation also minimizes human error (i.e. bugs) when creating software code. This is quite an effective approach, but it is currently only used for a limited number of algorithms in specific contexts – at least until another, more complicated task is solved. That task arises because we do not eliminate the code verification issue, but instead push it up to a higher level – the level of language (or compiler) verification. We therefore have to verify that the language is safe, meaning that all the constructions produced with this language are safe in the sense described above. This is a non-trivial task, but once solved it settles the verification issue for any code created using the previously evaluated means.

Another approach to making program code verifiable as it is created is design by contract (contract-based programming). In this case, implementation starts by determining precise formal specifications of the programming interfaces, which prescribe preconditions (obligations accepted by the clients of an interface), postconditions (obligations accepted by the interface supplier) and invariants (obligations to preserve certain properties related to the interface). Many programming languages support design by contract either natively (Eiffel being the classic example) or through third-party extensions (as is the case for C and Java).
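A minimal sketch of the idea in Python is given below; the Account example and the plain assert-based contracts are our own illustration, and real design-by-contract tooling is considerably richer.

# A minimal design-by-contract sketch using plain assertions.
# The Account example and its contracts are illustrative assumptions.

class Account:
    def __init__(self, balance: int):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: the balance never goes negative.
        assert self.balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount: int) -> int:
        # Preconditions (the client's obligations): a valid, covered amount.
        assert amount > 0, "precondition violated: amount must be positive"
        assert amount <= self.balance, "precondition violated: insufficient funds"
        old_balance = self.balance

        self.balance -= amount

        # Postcondition (the supplier's obligation): balance decreased by exactly amount.
        assert self.balance == old_balance - amount, "postcondition violated"
        self._check_invariant()
        return self.balance

acc = Account(100)
print(acc.withdraw(30))   # -> 70
# acc.withdraw(1000)      # would fail the precondition check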

“Laboratory verification” of the program code may be open to objections if the code’s behavior is affected to a large extent by its environment. Of course, it would be good if a system made from loosely coupled trusted components with properly defined interfaces could give a 100% guarantee of executing properly, but in real systems it is quite difficult to predict what influence the environment will have on individual components. In order to assess the correctness of such a system it is necessary to analyze the behavior of its parallel components. Formal verification of whether a given logical formula is satisfied by a model of a system – including systems with a parallel execution architecture – is referred to as model checking. This method brings together existing knowledge and expertise in the software verification field, and is widely used throughout the world to evaluate existing hardware and software systems. The Turing Award has been given twice for work in the field of model checking: the first time in 1996 to Amir Pnueli “for seminal work introducing temporal logic into computing science and for outstanding contributions to program and systems verification”, and the second time in 2007 to the three scientists Clarke, Emerson and Sifakis “for their role in developing Model-Checking into a highly effective verification technology that is widely adopted in the hardware and software industries”.
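As a toy illustration of explicit-state model checking (all modelling choices below are ours, not from the article), the following sketch exhaustively explores every interleaving of two processes sharing a simple turn-based lock and checks the safety property “both processes are never in the critical section at the same time”.

# Toy explicit-state model checker: breadth-first exploration of all
# reachable states of two interleaved processes, checking a safety property.

from collections import deque

def successors(state):
    """Enumerate the states reachable in one step of either process."""
    pcs, turn = list(state[:2]), state[2]
    for i in (0, 1):
        pc = pcs[i]
        if pc == "idle":                  # request the critical section
            yield _step(pcs, i, "wait", turn)
        elif pc == "wait" and turn == i:  # enter only when it is our turn
            yield _step(pcs, i, "crit", turn)
        elif pc == "crit":                # leave and hand the turn over
            yield _step(pcs, i, "idle", 1 - i)

def _step(pcs, i, new_pc, turn):
    new = list(pcs)
    new[i] = new_pc
    return (new[0], new[1], turn)

def check_safety(initial, is_bad):
    """Breadth-first search over all reachable states."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            return False, state           # counterexample state found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

initial = ("idle", "idle", 0)             # (pc of process 0, pc of process 1, turn)
ok, witness = check_safety(initial, lambda s: s[0] == "crit" and s[1] == "crit")
print("Safety holds" if ok else f"Violation: {witness}")
# -> Safety holds: no reachable interleaving puts both processes in the critical section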

During the Turing Award ceremony in 2007, ACM President Stuart Feldman said about the model checking method: “This is a great example of an industry-transforming technology arising from highly theoretical research.” We can say with some certainty that if the future of all aspects of our life lies with technologies that are safe, secure and smart in all senses of the word, validation and verification methods provide the route to that future.

It is impossible to cover all aspects of V&V in one article. For those who are particularly interested in the subject, we can recommend a paper by one of the pioneers of the model-checking approach, Edmund M. Clarke, ‘The Birth of Model Checking’, and his book ‘Model Checking’, co-authored with Orna Grumberg and Doron A. Peled, for a more in-depth exploration of the method. The best way to learn about aspects of safety, liveness and the other main properties is to refer to the original works listed in the paper by Ekkart Kindler, ‘Safety and Liveness Properties: A Survey’. The excellent monograph by G. Tel, ‘Introduction to Distributed Algorithms’, gives a detailed explanation of the formal representation and development of correct and dependable algorithms in complex systems.


[1] This is the case when validation of the lifecycle processes (based on an awareness of possible vulnerabilities) may help either to reject configuration verification as inappropriate or to introduce compensating measures (e.g. code analysis) that provide some guarantees for the software implementation.

[2] It should be noted that configuration verification and software verification are not interchangeable measures. While a check of the program code guarantees that it will be executed as expected, configuration checks ensure conformance to the required policy.

[3] They are usually considered as validation methods.
