Cloud Service Test Generation
Introduction
This page provides access to a web service for generating high-level
tests from a cloud software service specification. The test generation
tool was developed for the EU FP7 Broker@Cloud project, to support its
continuous quality assurance objective.
Model-based testing is a formal process in which all paths through a
specification are systematically explored to generate all possible test
sequences that the service might encounter. Test sequences are typically
generated to some finite bounded depth, to avoid an explosion of cases.
However, it is important to cover all of the states and transitions of
the service, and also to attempt all significant input partitions that
might trigger different behaviour. The formal specification captures
both of these aspects, such that the test generation tool can provide
an ideal test suite that covers all the paths in the service specification.
The high-level test suite is expressed in a technology-agnostic way
and must then be grounded in a particular implementation technology.
Platform-Neutral Test Suite Generator
This web service generates a high-level test suite from a software service
specification. It returns machine-readable data in XML format.
Please supply the public URL of the XML file containing the service
specification. (You may copy an example URL from the
specification page.)
Please also supply the depth to which tests will be generated
(the maximum path length to be explored from each state), and whether
multi-objective tests are to be generated (fewer paths, possibly verifying
multiple properties per path).
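As an illustration, the service might be invoked programmatically along
the following lines. This is a minimal sketch only: the endpoint URL and
the parameter names (specification, depth, multiObjective) are assumptions
for illustration, not the actual form fields of this page.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class GenerateTests {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and parameter names, for illustration only.
            String spec = URLEncoder.encode(
                    "http://example.org/specs/holiday-booking.xml",
                    StandardCharsets.UTF_8);
            URI uri = URI.create("http://testgen.example.org/generate?specification="
                    + spec + "&depth=2&multiObjective=true");
            HttpClient client = HttpClient.newHttpClient();
            HttpResponse<String> response = client.send(
                    HttpRequest.newBuilder(uri).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // the XML test suite, or an error
        }
    }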
If the input file can be read, the output will be an XML file containing
the high-level tests (otherwise an error message will be displayed). The root
node TestSuite will contain a Notice node, whose child
nodes consist of Advice nodes describing the stages in generating
and filtering the resulting tests, and Warning nodes describing any
transitions and states that were not covered in the specification, for the chosen
depth of exploration. The remaining TestSequence nodes are the paths to
test, presented as an ordered set.
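As an illustration, the skeleton of a returned test suite might look like
the following. Only the element names are taken from the description above;
the attributes shown (text, name, verify) are assumptions about the actual
format.

    <TestSuite>
      <Notice text="how the tests were generated and filtered">
        <Advice text="a stage in generating or filtering the tests"/>
        <Warning text="a transition or state not covered at this depth"/>
      </Notice>
      <TestSequence>
        <TestStep name="createService"/>
        <TestStep name="login"/>
        <TestStep name="selectRoom" verify="true"/>
      </TestSequence>
      <!-- further TestSequence nodes, in order -->
    </TestSuite>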
Each TestSequence describes a unique scenario to test, consisting
of a sequence of TestSteps. Typically, the early TestSteps in
a sequence denote set-up actions and the final TestStep is the particular
step under observation, to be verified. However, if multi-objective tests were
selected, some TestSequences may also have intermediate verified
TestSteps. This occurs where a shorter test has been merged into a longer
one of which it is a prefix; for example, a two-step sequence ending in a
verified step may be absorbed into a five-step sequence that begins with the
same two steps and is then verified at both points.
Grounding to Platform-Specific Tests
The returned high-level test suite is intended for service providers to
develop their own bespoke grounding to concrete tests. For demonstration
purposes, we have supplied some standard groundings that use the JUnit
framework, on the test grounding page.
A grounding is a transformation from high-level platform-independent
tests to low-level executable tests. This can be done by any program that
understands the XML format of the generated tests and the expected
implementation technology of the cloud service to be tested.
It is fairly simple to build a grounding. A Visitor Pattern
style of transformer can visit every node in the generated XML test suite
and output suitable code in the concrete programming language, as sketched
below.
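The following Java sketch of this approach walks the DOM tree of a
generated test suite and prints one JUnit test method per TestSequence.
Only the element names TestSuite, TestSequence and TestStep come from the
description above; the name and verify attributes are assumptions about
the format, for illustration only.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class JUnitGrounding {
        public static void main(String[] args) throws Exception {
            // Parse the downloaded high-level test suite (path on the command line).
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File(args[0]));
            NodeList sequences = doc.getElementsByTagName("TestSequence");
            for (int i = 0; i < sequences.getLength(); i++) {
                visitSequence((Element) sequences.item(i), i + 1);
            }
        }

        // Visit one TestSequence node: emit one JUnit test method.
        static void visitSequence(Element sequence, int index) {
            System.out.println("@Test");
            System.out.println("public void testSequence" + index + "() {");
            NodeList steps = sequence.getElementsByTagName("TestStep");
            for (int j = 0; j < steps.getLength(); j++) {
                visitStep((Element) steps.item(j));
            }
            System.out.println("}");
            System.out.println();
        }

        // Visit one TestStep node: emit a service call, plus a reminder to
        // add assertions for a verified step.
        static void visitStep(Element step) {
            System.out.println("    service." + step.getAttribute("name") + "(...);");
            if ("true".equals(step.getAttribute("verify"))) {
                System.out.println("    // assert expected outputs, scenario and state");
            }
        }
    }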
The testing philosophy is that each TestSequence starts afresh
(the initial TestStep creates, or resets, the service) and then drives
the service through a unique series of TestSteps, corresponding
to the different branching scenarios described in the service protocol, and
ends with a final TestStep that must be verified using assertions.
To leverage the full power of the Stream X-Machine testing method, the
verified step must assert that:
- any outputs returned in the response are the expected outputs
- the request triggered the expected named scenario of the operation
- the service subsequently entered the state named by the step
The above assumes that software services are designed according to certain
important design-for-test criteria, namely that they are able (when run in
test mode) to report which branching scenario was last executed,
and which UI state was reached. Our grounding examples presume that the
implemented services provide this capability as extra read-only operations.
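Putting these pieces together, a grounded positive test might look like the
JUnit sketch below. The HotelService and Booking classes and their
operations are hypothetical; in particular, getLastScenario() and getState()
stand in for the extra read-only, test-mode operations just described.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class GroundedPositiveTest {
        @Test
        public void testConfirmBooking() {
            // Each TestSequence starts afresh: create (or reset) the service.
            HotelService service = new HotelService();
            // Set-up steps drive the service along one path of its protocol.
            service.login("guest", "secret");
            service.selectRoom(101);
            // Final, verified step: check outputs, scenario and state.
            Booking booking = service.confirmBooking();
            assertEquals(101, booking.getRoomNumber());           // expected outputs
            assertEquals("ok", service.getLastScenario());        // expected scenario
            assertEquals("BookingConfirmed", service.getState()); // expected state
        }
    }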
Apart from this, the generated high-level tests cover both positive tests
for expected behaviour, and negative tests for unwanted behaviour that should
be blocked. Positive tests are indicated by the scenario's response-name, for
example, "ok" for a normal response, or "error" for a
planned error handler. Negative tests are always indicated by the response
"ignore", which is generated automatically. In a positive test,
some outputs may be bound, but in a negative test, the outputs are not bound
(and assertions should check that the service returns no information in this
case).
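A grounded negative test would then assert that the service ignored the
request and returned no information, along these lines (again using the
hypothetical HotelService):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNull;
    import org.junit.Test;

    public class GroundedNegativeTest {
        @Test
        public void testConfirmBeforeSelect() {
            HotelService service = new HotelService();
            service.login("guest", "secret");
            // Out-of-sequence request: confirming before any room is selected
            // should be blocked by the service.
            Booking booking = service.confirmBooking();
            assertNull(booking);                               // no outputs bound
            assertEquals("ignore", service.getLastScenario()); // request was ignored
            assertEquals("LoggedIn", service.getState());      // state is unchanged
        }
    }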