Research Seminars
Here are some of my recent research seminars and conference presentations.
If you are looking for a quick introduction to some of my recent work, these
sets of slides will give you an idea. After browsing these, you might like
to read the fuller treatment in one of my published research papers. Links
to these are found in a classified bibliography on my
Publications page.
You may also like to find out about the history of the research project that
led to the work, either by browsing my
Research Projects page, or
by clicking on the project icon in the margin.
Recent Seminars
The Verification and Testing Research Group
Download:
PPTX version.
Last presented in May, 2015 (at the Computer Science Industrial Advisory Board).
Abstract
The Verification and Testing Research Group, founded in 1997, has flourished over the last two decades. Its mission is to advance the state of the art in theoretical computer science and to apply theoretical results in innovative and practical solutions for industry. The two sides of theory and practice have grown to become broad research themes covering, on the one hand, process algebras, theorem proving, formal refinement and genetic algorithms and, on the other hand, software engineering, model-based and search-based testing, reverse engineering by model inference, distributed XML databases and massively parallel multi-agent simulation.
The two themes come together in the recent establishment of the Advanced Computing Research Centre, a collaborative research forum for industry engagement and technology transfer. This centre now offers consultancy and industry-led research in the three areas of Big Data analytics, simulation and modelling, and software testing and quality assurance. Participating companies and organisations include Costain, KPMG, ZooDigital, the NHS and the MoD.
Broker@Cloud Verification & Testing: SAP HANA Case Study
Download:
PPT version.
Last presented in April, 2015 (at the Advanced Computing Research Centre, Testing Showcase for KPMG).
Abstract
We present the Broker@Cloud service verification and testing toolset, developed for the EU FP7 Broker@Cloud project, in conjunction with a case study developed by SAP deployed on the SAP HANA Cloud platform. The V&T tools were developed by the University of Sheffield and include an XML language for specifying software services, tools to validate and then verify the specifications, and tools to generate technology-neutral test-suites and then ground these to specific Cloud platform technologies. These tools are offered via the web, as Testing-as-a-Service.
The SAP HANA case study is a simple app for booking periods of vacation, developed as a rich-client app using the UI5 toolkit, deployed on SAP HANA and driven through a remote browser. A specification was developed for this, which was checked using the tools. After fixing detected faults in overlapping conditions and blocking operations, a test-suite was generated. This was grounded to Selenium code for driving the service through a remote client. The testing revealed security issues, such as operations executing where they should be ignored, and responses returning data when they should be silent.
X-Machine based Verification and Testing for Cloud Services
Download:
PPTX version.
Last presented in November, 2014 (at the SEERC 10th Anniversary Celebration).
Abstract
On the 10th Anniversary of the South-East European Research Centre, what better topic to celebrate than the theme of Stream X-Machines, which first brought Greek and UK researchers together over a decade ago? Stream X-Machines have been used in the past as a model for specifying and testing realistic computer systems. The recent collaboration between SEERC and the University of Sheffield on the EU FP7 Broker@Cloud project gave us the opportunity to revisit this topic and so develop a theory and a method for verifying and testing software services in the Cloud.
The seminar describes the formal foundations of Stream X-Machine based testing; then goes on to present the verification and testing toolset that has been developed for Broker@Cloud. The tools support the validation and verification of specifications developed in a bespoke XML language that captures the EFSM- and IOPE-style descriptions of software services in the Cloud. The test generation stage produces technology-neutral test suites that will fully exercise a software service. These test suites, exported as XML, may later be grounded in whatever implementation technology the Cloud service provider uses.
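The generate-then-ground pipeline can be illustrated in miniature. The following Python sketch is my own illustration, not the Broker@Cloud toolset: the booking service, its states and its operation names are all hypothetical, and the specification is reduced to a plain finite-state machine. It derives a technology-neutral test suite by covering every reachable transition up to a bounded depth:

```python
# Illustrative sketch (not the Broker@Cloud tools): derive an abstract
# test suite from a simple finite-state description of a service.
from collections import deque

# Hypothetical service specification: state -> {operation: next_state}
SPEC = {
    "LoggedOut": {"login": "Idle"},
    "Idle":      {"book": "Booked", "logout": "LoggedOut"},
    "Booked":    {"cancel": "Idle", "logout": "LoggedOut"},
}

def transition_cover(spec, start, depth):
    """Breadth-first search for a shortest operation sequence that
    exercises each (state, operation) transition, up to `depth` steps."""
    tests, seen = [], set()
    queue = deque([(start, ())])
    while queue:
        state, path = queue.popleft()
        if len(path) >= depth:
            continue
        for op, nxt in spec[state].items():
            if (state, op) not in seen:
                seen.add((state, op))
                tests.append(path + (op,))   # one abstract test case
            queue.append((nxt, path + (op,)))
    return tests

suite = transition_cover(SPEC, "LoggedOut", depth=3)
for test in suite:
    print(" -> ".join(test))
```

In the real toolset the specifications are richer EFSM and IOPE descriptions written in XML, and the technology-neutral suites are afterwards grounded to a concrete platform, for example as Selenium code driving a remote client.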
What is Cloud Computing?
Download:
PPT version.
Last presented in September, 2014 (at the inaugural meeting of CloudCAST).
Abstract
Cloud computing is growing in importance as a means of providing computing and software resources, yet many people still do not understand what it is. The Cloud is not just datacentres, or web-based software access, or online cooperative working, or grid computing, or service-oriented computing; although these are all enabling technologies that gave rise to the Cloud. The term "the Cloud" was first used by Compaq in a 1996 memo and came from a symbol used in earlier network diagrams to represent the rest of cyberspace, "out there".
The definition of Cloud computing is now a NIST standard, which says that Cloud computing must offer: on-demand services, via networked access, using shared resource pooling with multi-tenancy, offering an elastic capability to scale out with demand, and measured as a utility service for metered charging. Cloud computing is offered on different levels, including Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service. In the future, we look forward to Cloud service brokers and the establishment of the Cloud ecosystem.
Lazy systematic unit testing with JWalk
Download:
PPT version;
PDF version.
Last presented in June, 2010 (at SEERC, Thessaloniki).
Abstract
Testing from state-based specifications is a proven technique for exercising software components, to check for all expected and unwanted behaviours. However, in the Agile Methods community, there is a resistance even to simple state-based specifications and an over-reliance on manually created regression tests, which fail to find all faults. The "lazy systematic unit testing" method was created in response. It is based on the two notions of "lazy specification", the ability to defer finalising the software's apparent specification until a late stage in development; and "systematic testing", the ability to explore the state-space of a software component exhaustively, to bounded depth.
The approach is exemplified in a Java unit testing tool, JWalk. This infers the apparent specification of compiled Java classes, by a combination of static and dynamic analysis, and limited interaction with the programmer. The tool is able to exercise all method protocols, all algebraic constructions and all high-level state transitions, to bounded depth. Once the oracle for the specification has been acquired, testing is fully automatic. The test sets adapt gracefully when changes are made to the code; and re-generated tests perform significantly better than regression tests. In a testing challenge, a JWalk tester was able to test two orders of magnitude more paths than an expert tester writing manual tests for JUnit.
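As a rough illustration of systematic testing to bounded depth, the sketch below is a simplification in Python, not the Java tool itself; the Stack class is a made-up component under test. Every sequence of public methods, up to a fixed depth, is exercised on a fresh instance and the outcome of each path is recorded:

```python
# Toy sketch of bounded-depth systematic exploration in the spirit of
# JWalk (not the tool itself): run every method sequence up to `depth`.
import itertools

class Stack:
    """Hypothetical component under test."""
    def __init__(self):
        self._items = []
    def push(self, x=1):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

def explore(cls, methods, depth):
    """Exercise every method sequence up to `depth` on a fresh instance,
    returning (path, outcome) pairs for the tester to inspect."""
    results = []
    for n in range(1, depth + 1):
        for path in itertools.product(methods, repeat=n):
            obj = cls()                      # fresh instance per test case
            try:
                for name in path:
                    getattr(obj, name)()
                results.append((path, "ok"))
            except Exception as exc:
                results.append((path, type(exc).__name__))
    return results

for path, outcome in explore(Stack, ["push", "pop"], depth=2):
    print("/".join(path), "->", outcome)
```

JWalk itself works on compiled Java classes, infers the apparent specification by static and dynamic analysis, and asks the programmer to confirm or reject outcomes; this toy omits the interactive oracle entirely.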
Dynamic analysis of algebraic structure to optimise test generation and test case selection
Download:
PPT version;
PDF version.
Last presented in September, 2009 (at TAIC PART, Windsor UK).
Abstract
Where no independent specification is available, object-oriented unit testing is limited to exercising all interleaved method paths, seeking unexpected failures. A recent trend in unit testing, which interleaves dynamic analysis between each test cycle, has brought useful reductions in test-set sizes by pruning redundant prefix paths. We describe a dynamic approach to analyzing the algebraic structure of test objects, such that prefix paths ending in observer or transformer operations, which yield unchanged or derived states, may be detected and pruned on the fly during testing. The few retained test cases are so close to the ideal algebraic specification cases that a tester can afford to confirm or reject them interactively; the confirmed cases are then used as a test oracle to predict many further test outcomes during automated testing.
The algebra-inspired algorithms are incorporated in the latest version of the JWalk lazy systematic unit testing tool suite, which discovers key test cases while pruning many thousands of redundant test cases.
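The pruning idea can be sketched as follows. This is an illustrative Python toy under my own assumptions, not the JWalk algorithms: state is compared via a crude snapshot of the object's fields, and any operation that leaves the state unchanged is treated as an observer whose path need not be extended further:

```python
# Toy sketch of on-the-fly prefix pruning (not the JWalk implementation):
# a prefix ending in an operation that left the state unchanged cannot
# reach new states, so no longer paths are grown from it.
class Counter:
    """Hypothetical test object: inc is a transformer, value an observer."""
    def __init__(self):
        self.n = 0
    def inc(self):
        self.n += 1
    def value(self):
        return self.n

def state_of(obj):
    return repr(vars(obj))   # crude state snapshot for comparison

def grow_pruned(cls, ops, depth):
    """Extend only those paths whose last operation changed the state."""
    frontier, retained, pruned = [()], [], []
    for _ in range(depth):
        next_frontier = []
        for path in frontier:
            for op in ops:
                obj = cls()
                for name in path:          # replay the prefix on a fresh object
                    getattr(obj, name)()
                before = state_of(obj)
                getattr(obj, op)()
                new_path = path + (op,)
                if state_of(obj) == before:
                    pruned.append(new_path)      # observer: dead-end prefix
                else:
                    retained.append(new_path)
                    next_frontier.append(new_path)
        frontier = next_frontier
    return retained, pruned

retained, pruned = grow_pruned(Counter, ["inc", "value"], depth=2)
print("retained:", retained)
print("pruned:", pruned)
```

Even at depth two, half of the candidate paths are discarded here; the abstract reports that the same principle prunes many thousands of redundant cases in practice.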
Z2SAL - building a model-checker for Z
Download:
PPT version;
PDF version.
Last presented in September, 2008 (at the ABZ conference, London).
Abstract
In this paper we discuss our progress towards building a model-checker for Z. The approach we take in our Z2SAL project involves implementing a translation from Z into the SAL input language, upon which the SAL toolset can be applied. The toolset includes a number of model-checkers together with a simulator. We discuss our progress towards implementing as complete a translation as possible, the limitations we have reached and the optimizations we have made. We illustrate with a small example.
Feedback-based specification, coding and testing with JWalk
Download:
PPT version;
PDF version.
Last presented in August, 2008 (at TAIC PART, Windsor UK).
Abstract
JWalk is a lazy systematic unit-testing tool for Java, which supports dynamic inference of specifications from code and systematic testing from the acquired specification. This paper describes the feedback-based development methodology that is possible using the JWalkEditor, an original Java-sensitive editor and compiler coupled to JWalk, which helps programmers to prototype Java class designs, generating novel test cases as they code. Systematic exploratory testing alerts the programmer to unusual consequences in the design; and confirmed test results become part of the evolving specification, which adapts continuously to modified classes and extends to subclasses. The cycle of coding, inferring and testing systematically exposes test cases that are often missed in other test-driven development approaches, which rely on programmer intuition to create test cases.
Benchmarking effectiveness for object-oriented unit testing
Download:
PPT version;
PDF version.
Last presented in April, 2008 (at ICST Testbench workshop, Norway).
Abstract
We propose a benchmark for object-oriented unit testing, called the behavioural response. This is a normative set of state- and equivalence partition-based test cases. Metrics are then defined to measure the adequacy and effectiveness of a test set (with respect to the benchmark) and the efficiency of the testing method (with respect to the time invested). The metrics are applied to expert manual testing using JUnit, and semi-automated testing using JWalk, testing a standard suite of classes that mimic component evolution.
Conformity testing for modified and extended objects
Download:
PPT version;
PDF version.
Last presented in March, 2006 (at Kings College, London).
Abstract
A behavioural theory of object compatibility is presented, which has implications for object-oriented unit testing. The theory predicts that only certain models of state refinement yield compatible types, dictating the legitimate design styles to be adopted in object statecharts. The theory also predicts that standard practices in regression testing are inadequate. Functionally complete test-sets that are applied as regression tests to subtype objects are usually expected to cover the state-space of the original type, even if they do not cover transitions and states introduced in the subtype. However, such regression testing is proven to cover strictly less than the original state-space in the new context and so provides much weaker confidence than expected. A different retesting model is proposed, based on full automatic test regeneration from the subtype's specification. This method can guarantee equivalent levels of confidence after re-testing. The behavioural conformity desired by regression testing can then be proved by verification in the theory.
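The coverage shortfall that the theory predicts can be seen in a toy example. The Python below is entirely my own construction, not the paper's formal model: the Account classes and their state names are hypothetical. A regression suite derived from the supertype's specification never reaches the state that the subtype introduces:

```python
# Toy illustration (not the paper's formal model) of why supertype
# regression tests under-cover a subtype that adds a new state.
class Account:
    def __init__(self):
        self.state = "Open"
    def deposit(self):
        if self.state == "Open":
            self.state = "Open"      # self-transition in the Open state
    def close(self):
        self.state = "Closed"

class FrozenAccount(Account):
    """Subtype adds a 'Frozen' state with its own transitions."""
    def freeze(self):
        if self.state == "Open":
            self.state = "Frozen"
    def thaw(self):
        if self.state == "Frozen":
            self.state = "Open"

# Regression suite generated from the supertype's specification only:
SUPER_TESTS = [("deposit",), ("close",), ("deposit", "close")]

def states_covered(cls, suite):
    """Run each test path on a fresh object, collecting visited states."""
    visited = set()
    for path in suite:
        obj = cls()
        visited.add(obj.state)
        for op in path:
            getattr(obj, op)()
            visited.add(obj.state)
    return visited

print(states_covered(FrozenAccount, SUPER_TESTS))   # 'Frozen' never visited
```

Regenerating the suite from the subtype's own specification, as the abstract proposes, would add sequences such as freeze and freeze followed by thaw, which do visit the new state.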