A J H Simons (1994),
Department of Computer Science,
University of Sheffield
Object-oriented analysis (OOA) and design (OOD) are techniques which have evolved since 1988 to support the creation of object-oriented software targeted at languages like Smalltalk, C++ and Eiffel.
A contrast is drawn between the older structured systems analysis and design (SSA&D) and the newer OOA&D: the aims and goals of the two approaches are different.
A large number of OO methods for analysis and design have been put forward. These can roughly be categorised as:
relationship and attribute centred approaches (Shlaer/Mellor, Coad/Yourdon)
behaviour centred approaches (OBA, RDD)
high industrial profile approaches (OMT, Booch)
synthesized approaches (Objectory, Fusion)
No one approach has all the best techniques
Very few approaches cover all aspects in equal depth
What are the aims of analysis and design? In principle:
Analysis looks at a system in terms of problem-domain concepts and seeks to elicit natural interactions and discover natural constraints. The aim is for the Software Engineer to raise his understanding of the problem domain, communicate with the Client (the domain expert) and sometimes reveal inconsistencies and incompleteness in the Client's awareness of the problem domain.
Design has the task of converting the analysis model into concepts and abstractions present in the programming style of the target language. Such concepts can include procedures, modules, objects, or processes and vary from language to language. The design model may have to integrate with existing subsystems or aim to use components from an existing software library.
Analysis and design are often assumed to be completely independent from the chosen programming language.
However, this is not the case!
To see this, we need to step back and consider analysis, design and programming as human conceptual activities.
A look at the psychology behind human conceptual activity.
Software is among the most complex of human artefacts.
few material constraints (contrast bridge designs);
arbitrary reconstructions (contrast building designs);
Abstraction is the tool we use to encapsulate complexity:
higher abstractions introduced to reduce complexity;
certain aspects of problem highlighted;
certain aspects of problem suppressed.
Research in Gestalt psychology says that your initial conceptualisation of a problem affects your entire perception, cf optical illusions:
choice of structuring concept affects which aspects you see and which aspects you ignore;
it is difficult to undo a particular interpretation, once you have formed it.
Choice of structuring concept influences whether you can see how to solve the problem, or get the best solution!
Applied to the design of programming languages, the choice of structuring concept gives rise to programming paradigms, which emphasise/suppress different aspects of computation.
Paradigm: from the Greek word meaning model, or canonical example.
The five main approaches are:
Statement-oriented: programs are conceived as linear sequences of commands altering global system states (state-based); eg imperative parts of Pascal, C, Fortran;
Expression-oriented: programs are conceived as mathematical function expressions requiring evaluation (value-based); eg Standard ML, Miranda, Lisp (in part);
Constraint-oriented: programs are conceived as a set of related logic equations and the program output is the solution of all the constraints; eg Prolog, SQL, rule-based systems;
Object-oriented: programs are conceived as a set of black boxes, encapsulating state, which cooperate via their external services to achieve a result; eg Smalltalk, C++, Eiffel;
Process-oriented: programs are conceived as a choreographed dance between independent actors, whose movements must synchronise when they communicate; eg Occam, other parallel languages.
Analysis and design methods are often based on a particular programming paradigm:
1) programming styles influence design methods;
2) design methods influence analysis methods.
Structured programming (Algol, Pascal) was invented to eliminate goto and global data.
- notion of blocks and scope;
- notion of controlled entry and exit;
Top-down design and step-wise refinement arose because blocks were the main structuring concept in the programming languages.
- primacy of functions;
- primacy of functional decomposition;
Structured systems analysis arose because of the requirement to analyse systems in terms of functions.
- data flow diagrams: process-centred;
- data stores: passive, localised to processes.
Do analysis and design achieve their aims?
Not always: beware badly fitting methods
Ideally, you want an analysis method that reflects the natural abstractions of the problem domain. This is often ignored!
Examples of mismatch:
component-based application modelled using SSA&D - it would be more natural and reveal more dependencies to model using OOA&D.
business-rules type of application modelled using OOA&D - it would be more natural to perform the analysis in a logic language, use proofs to test for consistency and completeness.
Ideally, you want a design method that reflects the natural abstractions in the target programming style. However, problems if this is different from the analysis paradigm:
either, there is no smooth transition from analysis to design;
or, you are forced to choose an ill-fitting analysis method, to suit the design paradigm.
SSA&D versus OOA&D - the two main A&D paradigms in current use.
SSA&D arose in a culture where the aim was:
to automate existing pen-and-paper solutions;
to update and rationalise older software systems;
to design bespoke software for the job at any expense.
OOA&D arose in a culture where the aim was:
to achieve faster, cheaper turnaround to delivery;
to allow reuse and extension from old solutions;
to promote a component-centred software industry.
Cf bespoke tailors versus off-the-shelf clothes at M&S
Cf custom-built rifles versus the Colt revolver
Top-down design with stepwise refinement has a well-defined process: crank the handle, get the result.
However:
products of SSA&D are based on analysis of a single system only;
major partitions of the design are based on volatile aspects of the system - top-level functions are too vulnerable to change.
Example: use Jackson Structured Programming diagrams to design:
1) a program to count all characters in a file;
2) a program to count all words in a file;
3) a program to count all lines in a file.
None of the top-down designs can be reused!
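For contrast, a minimal sketch (not from the original notes; the class and function names are illustrative) of an object-oriented partitioning of the same three programs: the volatile part - the counting rule - is isolated behind an abstract interface, so the file-scanning loop is written once and reused by all three.

    #include <iostream>
    #include <cctype>

    // Abstract counting rule: the only part that varies between
    // the three programs.
    class Counter {
    public:
        virtual ~Counter() {}
        virtual void accept(char c) = 0;  // examine one character
        virtual long total() const = 0;   // report the result
    };

    class CharCounter : public Counter {
        long n = 0;
    public:
        void accept(char) override { ++n; }
        long total() const override { return n; }
    };

    class LineCounter : public Counter {
        long n = 0;
    public:
        void accept(char c) override { if (c == '\n') ++n; }
        long total() const override { return n; }
    };

    class WordCounter : public Counter {
        long n = 0;
        bool inWord = false;
    public:
        void accept(char c) override {
            bool nonSpace = !std::isspace(static_cast<unsigned char>(c));
            if (nonSpace && !inWord) ++n;  // count a word at its first character
            inWord = nonSpace;
        }
        long total() const override { return n; }
    };

    // The scan loop is written once and reused by all three programs.
    long count(std::istream& in, Counter& counter) {
        char c;
        while (in.get(c)) counter.accept(c);
        return counter.total();
    }

Adding a fourth counting program means writing one more small subclass; nothing in the scan loop changes.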
OO approaches say 'real systems have no top' (Meyer, 1988): the method is partly top-down, partly bottom-up, with no clear uniform process. However:
products of OOA&D based on several (usually 3) systems - designs are refined as they are reused;
major partitions of the design are based on stable aspects of systems - the entities, rather than the functions.
Example: use any OOA&D method to extend a payroll system to produce a management information system (MIS) checking on performance of employees:
1) analysis entities are still employee, paycheck, timesheet;
2) design entities are still ADTs representing employee, paycheck, timesheet;
3) programming language abstractions are still classes for employee, paycheck, timesheet.
Major abstractions are reused/adapted easily.
Better to modularise a system around its component entities than according to a breakdown of its functions:
Facilitates understanding: the object concept helps to find natural metaphors on which to hang abstract ideas - consider window, menu, icon...
Smooth transition from analysis through design to implementation, due to relevance of the entities in the system - easier than converting DFDs to block structure.
Systems react gracefully to change, due to permanence of the entities in the system - easier to add to the behaviours of entities than to change the top-level function.
Many different top-level functions are possible, depending on where the system is initiated - a system is composed of multipurpose objects.
The object-oriented approach emphasises component-centred software with:
easy reuse of existing components;
easy adaptation of existing components.
Objects are black boxes with well-defined interfaces - they can be reused in full in new applications. This is more efficient than:
reuse of old program texts (edit, modify);
reuse of individual functions (too fine-grained);
reuse of whole function libraries (too coarse-grained).
The language mechanisms of inheritance and polymorphism allow for progressive adaptation of old object designs:
inheritance allows you to specify (just) how an object is different from another;
polymorphism allows you to use an adapted object (directly) in place of the original;
old and new designs can co-exist - Meyer's 'open-closed principle' (Meyer, 1988).
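A minimal sketch of these two mechanisms in C++ (not from the original notes; the Window names are illustrative), showing the open-closed principle in miniature:

    #include <iostream>

    // The original class is closed for modification...
    class Window {
    public:
        virtual ~Window() {}
        virtual void draw() const { std::cout << "plain window\n"; }
    };

    // ...but open for extension: the subclass specifies (just) how
    // it differs from the original.
    class BorderedWindow : public Window {
    public:
        void draw() const override {
            Window::draw();                // reuse the inherited behaviour
            std::cout << "with border\n";  // add only the difference
        }
    };

    // Polymorphism: client code written for Window accepts the
    // adapted object directly in place of the original.
    void refresh(const Window& w) { w.draw(); }

    int main() {
        Window plain;
        BorderedWindow bordered;
        refresh(plain);     // old design...
        refresh(bordered);  // ...and new design co-exist
    }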
In order to realise maximum benefits of software reuse and extensibility, you need to aim for frameworks.
A framework is a generalisation of a system of collaborating objects. Whereas inheritance and polymorphism support reuse at the component-level, frameworks support reuse at the entire system level.
How do you develop frameworks?
in a particular system, you identify groups of collaborating objects, eg a menu, a user-application, a window;
you generalise from these to provide new abstractions in your class library, eg an input-event handler, an application routine dispatcher, a display device;
you produce the generalised concept of an event-driven user interface, which can be specialised easily for new applications.
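A hedged sketch of the idea (the class names are invented for illustration, not taken from any particular library): the framework fixes the collaboration - here, an event-dispatch loop - and each new application supplies only the concrete handler.

    #include <iostream>
    #include <string>

    // Generalised framework classes: the collaboration (the dispatch
    // loop) is fixed once, in the library.
    class EventHandler {
    public:
        virtual ~EventHandler() {}
        virtual void handle(const std::string& event) = 0;
    };

    class Dispatcher {
        EventHandler& handler;
    public:
        explicit Dispatcher(EventHandler& h) : handler(h) {}
        void run() {                       // the framework owns control flow
            std::string event;
            while (std::getline(std::cin, event) && event != "quit")
                handler.handle(event);
        }
    };

    // Specialising the framework for a new application means writing
    // only this class; the whole dispatch collaboration is reused.
    class DrawingHandler : public EventHandler {
    public:
        void handle(const std::string& event) override {
            std::cout << "drawing application responds to: " << event << "\n";
        }
    };

    int main() {
        DrawingHandler h;
        Dispatcher d(h);
        d.run();
    }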
Conventional libraries are less flexible:
standard libraries providing add-on functions, eg the maths library in C/Unix for floating point operations;
standard libraries providing add-on datatypes, eg the string library in C/Unix for text processing;
Need to consider the purposes for which object-orientation is adopted. There are different levels of commitment.
The OO approach can be entered on three levels:
programming language features: eg it might be useful to have inheritance and polymorphism for some applications like GUIs or CAD systems; eg systems like MS-Windows or the Unix OpenWindows toolkit which simulate these OO features;
design concepts: eg you get better factorisation and encapsulation of designs if you adopt the OO approach for all software in the software house; better reuse of library code as a result of developing frameworks;
institutionalisation of OO approach: eg you establish roles for class library managers, application scavengers, reuse managers; you base your purchases/sales around software chips that retail to other houses; you invent reward mechanisms that encourage reuse and abstraction from existing solutions.
Clearly, the further you go down this list, the greater the commitment: initial costs are higher, but long-term benefits are greater.
You need to consider the knock-on effects of migrating to object-orientation. There are different time- and money-costs involved at different stages.
retraining personnel: learning, change and adaptation take personnel off-line (costly in time);
reassigning personnel: new roles, such as library manager, application scavenger require changes in mind-set and in institutional practices (resistance to change?);
setting up initial class libraries diverts programming staff from product development (costly in lost output);
developing frameworks requires periods of reorganisation as the set of reuseable abstractions changes and stabilises (costly in proportion to early mistakes);
starting from about the fourth new product - easier to develop from existing frameworks abstracted from previous systems (benefit in reduced time to delivery);
long-term benefits - eventually, massive savings in time and costs.
Can we establish criteria for an OOA&D development methodology?
A method is a set of techniques and notations.
A methodology is a method with a set of rules for applying the method and a set of heuristics for judging when the different stages are complete.
Each development stage should be represented - requirements capture, analysis, design.
Early stages should reflect the nature of the problem domain, later stages the target domain.
Each stage of a method should have clearly defined products - diagrams, charts, checklists.
Process
Desire a semi-automatic mechanism leading to discovery of all the necessary concepts.
Desire a smooth transition from stage to stage.
Desire a mechanism for checking for completeness and consistency after each stage.
1. OOA should support discovery of problem-domain concepts and interactions;
2. OOA should facilitate constant communication between SE and DE;
3. OOA should establish a natural analysis model (or models);
4. OOA should provide a clear process for discovery;
5. OOA should support cross-checking for completeness, consistency;
6. OOD should provide smooth transition to design;
7. OOD should support discovery of objects, object interactions, object behaviours;
8. OOD should support generalisation, aggregation, discovery of subsystems;
9. OOD should adapt designs to existing library abstractions;
10. OOD should support extraction of library classes, and frameworks;
11. OOD should support cross-checking for completeness, consistency.
But, existing OOA&D methods don't do all of this...
Nearly every OOA method assumes the existence of a requirements statement, which the analyst takes away.
no interaction with client to clarify requirements;
analysis models developed without client;
This is not good! However, two methods - OBA and Objectory - start properly at the beginning.
OBA trains its users to adopt non-directive interviewing skills - to avoid pressurising client into particular thought-modes:
don't impose a priori view on system;
elicit natural understanding of the whole system;
collect scripts, scenarios of the system's behaviours;
The idea of a script or scenario is called a use case in Objectory:
establish who will interact with system - the actors;
establish a use case for each distinct interaction of an actor with the system;
establish normal-course and exceptional-course use cases - write these out as scripts;
Virtually all OOA methods assume that the OO paradigm will suit the application - true in most cases, but:
doesn't allow for other paradigms - eg rule-based, or concurrent models;
no guidelines for cross-paradigm development.
Perhaps this is a gap which needs to be addressed!
All methods like to elicit the kinds of entity which interact in the system.
An entity could be a single object, or a class of similar objects - we don't yet know.
Three approaches are used to identify entities:
linguistic analysis of the requirements statement (Shlaer/Mellor, Coad/Yourdon, RDD, Booch, Objectory);
entity-relationship modelling (Shlaer/Mellor, Coad/Yourdon, OMT, Fusion, Objectory?);
object behaviour analysis (OBA, RDD) which includes CRC card modelling.
Describe the application in a few short sentences. The objects are often the nouns in the sentences, eg:
"I want a package which will draw graphics shapes on screen in a window. I want it to be able to draw rectangles, circles and lines".
In this statement, the following kinds of objects emerge immediately:
domain-level objects - the things concerned most directly in your application, such as shape, rectangle, circle, line;
system-level objects - things supporting your application, such as system, screen, window;
However, it is not obvious that this will elicit all the required objects in the application:
some nouns may not be objects you need to model directly, eg package (?), screen, graphics;
some supporting objects are not yet identified, eg shape_list, coordinate, diagram, event_handler;
Linguistic analysis is helpful, but not foolproof since:
you can nominalise verbs - eg the drawing of the line;
you can verbalise nouns - eg to window the data;
Developed for relational databases (Chen, 1976) to model inter-object dependencies and to aid the normalisation of files.
model application in terms of entities whose properties are either simple attributes or relationships with other entities;
work out the relationships between entities, whether 1:1, 1:M, M:1, M:M and whether optional or mandatory;
introduce new entities to simplify relationships, such as M:M or ternary, etc, by enclosing them - aggregation.
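For illustration (a standard example, not from the original notes; the Student/Course/Enrolment names are hypothetical), this is how an M:M relationship is simplified by enclosing it in a new entity:

    #include <string>
    #include <vector>

    // Before: Student M:M Course - awkward, and the pairing has
    // nowhere to keep its own attributes.
    struct Student { std::string name; };
    struct Course  { std::string title; };

    // After: the new enclosing entity reduces the M:M link to two
    // simple M:1 relationships and owns the pairing's attributes.
    struct Enrolment {
        const Student* student;   // M:1 back to Student
        const Course*  course;    // M:1 back to Course
        int grade;                // belongs to the pairing itself
    };

    struct Registry {
        std::vector<Enrolment> enrolments;
    };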
Extended ERM
annotate relationships with business rules, eg when created, if allowed, if disjoint;
allow entities sharing common attributes to be factored out, having disjoint 1:1 relationships with the 'difference parts' - generalisation.
eg dependency constraint (OMT);
eg totality constraint (Fusion).
good visualisation and communication tool;
aggregation maps onto a composition structure;
generalisation maps onto an inheritance structure (both mappings sketched below);
static relationships easy to visualise, dynamic relationships are more difficult.
total pattern of communication among objects not always captured;
objects classified solely according to their common attributes (cf RDD).
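A minimal sketch of the two mappings mentioned above (C++; the names are illustrative, not from the original notes):

    #include <string>

    // Generalisation maps onto an inheritance structure: common
    // attributes are factored into the superclass, and each subclass
    // adds only its 'difference part'.
    struct Party    { std::string name; std::string address; };
    struct Customer : Party { int accountNo; };
    struct Supplier : Party { int vatNo; };

    // Aggregation maps onto a composition structure: the whole
    // directly contains its parts.
    struct Engine { int serialNo; };
    struct Car {
        Engine engine;            // part embedded in the whole
        std::string registration;
    };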
Objects are characterised more by their external behaviour than by their static attributes and relationships (Wirfs-Brock, 1989; Gibson 1990).
objects are identified as things with behaviour - cf card_reader versus money in ATM system;
objects are better classified by their behaviour - cf cartesian or polar complex numbers (sketched below).
collect system responsibilities and assign to different objects;
define objects in terms of their own responsibilities and the collaborators they use to fulfil them (CRC);
classify objects according to their shared behaviour - generalisation;
data attributes are secondary, so add these later - distribute on a need-to-know basis only;
objects with no observable behaviour are deleted or assigned as simple attributes elsewhere.
captures total communication patterns;
classifies according to shared behaviour.
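The complex-number example above can be sketched as follows (a hedged illustration, not from the original notes): the two representations store quite different attributes, yet are classified together because they honour the same behaviour.

    #include <cmath>

    class Complex {
    public:
        virtual ~Complex() {}
        virtual double re() const = 0;
        virtual double im() const = 0;
        virtual double modulus() const = 0;
    };

    class Cartesian : public Complex {
        double x, y;                      // rectangular attributes
    public:
        Cartesian(double x_, double y_) : x(x_), y(y_) {}
        double re() const override { return x; }
        double im() const override { return y; }
        double modulus() const override { return std::sqrt(x*x + y*y); }
    };

    class Polar : public Complex {
        double r, theta;                  // quite different attributes
    public:
        Polar(double r_, double t_) : r(r_), theta(t_) {}
        double re() const override { return r * std::cos(theta); }
        double im() const override { return r * std::sin(theta); }
        double modulus() const override { return r; }
    };

    // Clients see only the shared behaviour: classification by common
    // attributes (cf ERM) would never have grouped these two classes.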
Great low-tech brainstorming system (Beck & Cunningham, 1989) for discovering objects, used by RDD, OBA. (Also adopted by Booch, Fusion later).
good discovery-process to find all required objects;
good technique for decentralisation of applications.
Use 6" x 4" file index cards to record each class (ie entity, type of object) and its:
responsibilities - what services it provides;
collaborators - which classes it uses to do these;
CRC Process
try to simulate the system - if you can't, find more behaviours and add them to class cards;
if a card gets too full, decompose into component classes - forced decentralisation of behaviour;
if a card has no behaviour, it isn't a class - tear it up!
best discovery/elimination approach for objects;
rather low-level - can lead to excess detail early on;
not a good visualisation tool for whole system.
At this point, we want to check that the initial analysis is complete and consistent with respect to the requirements.
Most methods offer no real help here, except to go back through the requirements document to check for missing objects and relationships.
Purpose of Cross-Check
have we got entry-points into the object-model for all required system behaviours?
are there sufficient collaborators to provide all the support needed for complex behaviours?
Use ideas from Objectory and OBA:
Bring forward the object interaction diagrams from Objectory construction (ie design) phase to link use cases with top-level objects;
Bring forward the CRC process, which OBA uses in initial analysis, but Booch and Fusion use later in detailed design, to determine objects exhaustively.
Here, used because no other mechanism ensures that all system behaviours have been intercepted by objects.
Only Objectory links object interactions to use cases.
Useful, because:
each clause in a use-case is listed on the y-axis;
each domain object is a line on the x-axis;
external interactions are arrows reaching from use case clauses to objects;
internal behaviours are arrows reaching from object to object, adjacent to the motivating use case clause.
Physical layout guarantees that every clause in a use case corresponds to a captured behaviour.
Most OOA methods build several complementary models. Aim is to capture as many viewpoints as possible:
Extended Entity-Relationship Models - showing cardinality of links, aggregation and generalisation (OMT, Fusion, Shlaer/Mellor, Coad/Yourdon, Booch);
Semantically rich object models - contrasting class and object relationships, introducing interactions, visibility and synchronisation (Booch);
Harel state charts (Booch, OMT) - describe life history of objects - creation, modification, deletion;
Other state models (Objectory, Shlaer/Mellor).
Object interaction diagrams (Objectory, Booch, Fusion) - show timelines, focus of control;
Life cycle models (Fusion) - formal BNF describing system life history, cf sum of use-cases;
Data flow diagrams (OMT) - borrowed from SSA&D, difficult to relate to object models;
But, not always easy to relate each view...
Viewpoint: static relationships between objects.
Strengths: object-centred, helps to discover complex M:M or ternary, etc relations which will need special treatment.
Weakness: no guarantee that this captures all dynamic links afforded through collaborations.
Viewpoint: life history of individual objects and whole systems.
Strengths: helps establish all control states which a system or an object can be in, helps establish local control flow.
Weaknesses: easy to confuse system states and object states - lose object boundaries; difficult to model interleaving of substates (history); easy to confuse control and data states.
Viewpoint: functionality of required system.
Strength: main link to scenarios in requirements - in Objectory, OIDs link use cases with domain objects.
Weaknesses: not object-centred - in OMT, impossible to cross-check DFD models against other models.
Many methods start to impose design structures at the analysis stage: not necessarily a good idea...
Generalisation - group objects on the basis of shared attributes and relationships (Shlaer/Mellor, Coad/Yourdon, OMT, Fusion);
Aggregation - compose wholes from parts where a strong containment relationship exists (OMT, Booch)
Dependency - group objects according to direct and indirect dependency in use, creation (Objectory);
Subjects - promote the dominant objects in generalisation/aggregation hierarchies to categories (Coad/Yourdon);
Factoring out domain objects from other kinds of object, eg interface, control (Objectory), model, system, viewer, manager, controller (RDD);
Sketches of windows and menus - visual appearance of the UI (Objectory, Smalltalk);
At this point, we want to verify that all the information captured in the three views is consistent.
In some cases, the views are orthogonal - eg the object model and functional model in OMT: no overlap!
In other cases, the views are well integrated and reinforce each other - eg object interaction diagrams in Objectory.
the data dictionary defines every element described in the models (Fusion);
provides (weak) basis for completeness checks.
name      kind              description
customer  class             person obtaining a loan
loan      class             records amount lent and state of repayments
payment   system operation  customer makes single payment
check that every required function is captured by an event or action in a state diagram;
check that every required function is invoked on and therefore owned by an object;
check that every system state is mapped to an object;
etc...
Design has the purpose of mapping problem domain concepts onto target domain concepts:
poor design methods assume that analysis models can be enriched to become design models (eg OMT, Objectory?, Shlaer/Mellor, Coad/Yourdon);
good design methods allow analysis models to be reworked (Booch) and some provide good techniques for adapting analysis models to new design models (Fusion, RDD).
defining a system boundary (NB - only Fusion seems to take this seriously, in the analysis phase);
dropping entities that have no behaviour in the target system (RDD) - eg money in the ATM example;
dropping relationships that have no role to play in the target system (Fusion) - eg wheels from the Cars example; but NB reuse considerations!
discovering new basic entities to support complex behaviours of domain objects (RDD, Booch);
discovering new conceptually abstract entities to manage collaborations between groups of objects (RDD).
But how do you do this?...
ERMs are useful tools up to a point. They:
capture static relationships between objects;
capture familiar/intuitive properties/relationships;
fail to capture dynamic/temporary collaborations;
fail to isolate relevant properties/relationships;
fail to capture functional dependencies that may require a link to another type of object.
We really want to know how one object uses another. By looking at the ways objects collaborate, we:
capture all relevant dynamic object interactions;
expose new, temporary links;
provide basis for eliminating entities (NB reuse);
provide basis for eliminating links (NB reuse);
provide basis for reorganising/simplifying collaborations;
provide basis for introducing new entities.
OIDs and event trace diagrams (Objectory, OMT, Booch);
link to scenarios, input and output made clear;
comb or stair control structures made clear;
weak appreciation of total class interactions;
main design tool for refining responsibilities in RDD;
distinguish public responsibilities (contracts) from private responsibilities;
a contract is one or more public methods fulfilling the same kind of function (good abstraction);
strengths of inter-class dependencies made very clear;
sequenced messages, showing target object and result objects (Fusion);
introduces notion of time dependency;
makes clear the effect of individual messages;
too detailed for early design.
Object interaction diagrams or timeline diagrams (Objectory, OMT, Booch) illustrate control flow:
stair: decentralised control philosophy;
good if operations have a strong connection;
good if operations do not change order.
comb (fork): centralised control philosophy;
good if operations can change order;
good if new operations can be inserted.
Contracts arise from the initial responsibilities allocated to entities during CRC modelling (RDD). Eg the ATM example:
Class: Account
Responsibilities:
"know the account balance"
"accept deposits"
"accept withdrawals"
"commit changes to this account"
"commit changes involving other accounts"
The idea is to group the responsibilities used by the same clients, and so minimise the contracts within a class:
1. Access and modify the balance;
2. Commit changes to the database.
abstracts over several individual behaviours - reduces complexity;
illustrates coarse-grained communication pattern between classes - crucial for dependency analysis.
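A minimal sketch (assuming a C++ target; the method names are illustrative) of how the two contracts might surface as grouped public methods on the Account class:

    class Account {
    public:
        // -- contract 1: access and modify the balance --
        double balance() const { return bal; }
        void deposit(double amount)  { bal += amount; }
        void withdraw(double amount) { bal -= amount; }

        // -- contract 2: commit changes to the database --
        void commit() { /* write this account's state to the database */ }

    private:
        double bal = 0.0;   // data is a private affair
    };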
Crucial to good design is the process of simplifying inter-class communications, cf Parnas' dictum:
high cohesion within modules;
loose coupling between modules.
Consider Estate Agent example, which has quite complex patterns:
Vendor uses contracts: 1. Put house up for sale; 4. Transfer ownership to purchaser.
Purchaser uses contracts: 2. Make offer on house; 3. Transfer funds to vendor.
House uses contracts: 5. Remove vendor furnishings; 6. Acquire purchaser furnishings.
RDD has by far the best approach here. First, a look at how composition structures are derived:
based on notion of contracts;
measure of strength of collaboration.
create new conceptual entities: here, we have a Sale subsystem;
encapsulate complex ternary couplings between classes Vendor, Purchaser, House;
transfer contracts outward to Sale subsystem;
simplify internal communication pattern.
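A hedged sketch of the resulting structure (C++; names invented for illustration): the complex ternary pattern becomes a private affair of the Sale subsystem, and outside clients hold a single, simple contract with Sale.

    class Vendor;       // the three coupled classes; details omitted
    class Purchaser;
    class House;

    class Sale {
        Vendor&    vendor;      // the ternary coupling is now
        Purchaser& purchaser;   // encapsulated inside the subsystem
        House&     house;
    public:
        Sale(Vendor& v, Purchaser& p, House& h)
            : vendor(v), purchaser(p), house(h) {}

        // single outward contract replacing the web of contracts
        // between Vendor, Purchaser and House
        void complete() {
            // internally: transfer funds to vendor, transfer ownership
            // to purchaser, exchange furnishings
        }
    };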
Next, a look at how inheritance structures are derived. Again, RDD has the best approach:
look for common behaviour, rather than common attributes (achieves better type-factorisation);
show intersections of responsibilities using Venn diagrams.
In the ATM example, there are many different kinds of transaction, modelled using different objects with different, but similar, responsibilities:
Deposit Transaction responsibilities:
"deposit amount in account"
"commit changes to account"
Withdrawal Transaction responsibilities:
"withdraw amount from account"
"commit changes to account"
RDD has the best approach:
factor common responsibilities into superclasses;
draw class hierarchy diagrams, mark abstract classes;
transfer common contracts to superclass;
simplify contracts with its suppliers;
simplify contracts with its clients.
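A minimal sketch (C++; names illustrative) of the resulting hierarchy: the shared 'commit changes to account' responsibility moves into an abstract Transaction superclass, and each subclass specifies only how it differs.

    class Account {
        double bal = 0.0;
    public:
        void deposit(double a)  { bal += a; }
        void withdraw(double a) { bal -= a; }
        void commit() { /* write state to the database */ }
    };

    class Transaction {                   // abstract superclass
    public:
        virtual ~Transaction() {}
        void run(Account& a) {
            apply(a);                     // the varying responsibility...
            a.commit();                   // ...and the common one, written once
        }
    protected:
        virtual void apply(Account& a) = 0;
    };

    class Deposit : public Transaction {
        double amount;
    public:
        explicit Deposit(double amt) : amount(amt) {}
    protected:
        void apply(Account& a) override { a.deposit(amount); }
    };

    class Withdrawal : public Transaction {
        double amount;
    public:
        explicit Withdrawal(double amt) : amount(amt) {}
    protected:
        void apply(Account& a) override { a.withdraw(amount); }
    };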
At this point, we want to check that all relevant information has been transferred from ERMs to class- and subsystem-collaboration graphs.
RDD provides the best mechanism, because:
it focusses on active use rather than relationships from the outset;
it has contract abstraction for reducing the complexity of inter-class collaborations;
it has a guided process for transforming the design model by discovering and eliminating entities and contracts.
For the other methods:
have we missed any system behaviours implicit in ERMs?
do we have too much complexity in our initial design?
Use object interaction diagrams to turn relationships into collaborations;
determine contracts after collecting all responsibilities for each class from all scenarios (use cases).
then proceed as for RDD.
In the detailed design process, we aim to produce:
final class structures: dependencies, encapsulated data;
method specifications: formal argument lists and pre/post conditions on correct operation.
At the same time, detailed design needs to look forward to the target environment in order to take into account:
opportunities for reuse;
constraints on implementation.
Your software class library may already have:
reuseable components (black-box)
reuseable frameworks (white-box)
and in addition you may find that you want to derive a new class from an existing similar library class; or reorganise library classes in the light of new class designs.
visibility - object containment and reference
lifetime - memory management, persistence
modularity - file organisation, class groups
No method has a clear strategy for reuse:
all stress the desirability of reuse;
some describe criteria for reuseable components and frameworks (Objectory);
some describe management aspects and techniques to encourage and institutionalise reuse (Booch).
reuse a class's method (single service);
reuse a class component in its entirety (composition);
adapt a class's behaviour (inheritance);
provide concrete components to fill in an abstract subsystem design (framework).
OO programmers seem to know intuitively how to combine elements of these kinds of reuse in an optimal way. Need:
a thorough knowledge of class library;
considerable experience in assembling systems.
How can this be learnt / made part of a systematic process?
There should be a two-way influence:
good designs produce loosely-coupled components with minimal interfaces;
good components affect the modularisation and coupling strategy in new designs.
but no method has a clear strategy for managing this two-way influence.
Use RDD to decompose domain-level classes down to a level where they start to interact with library classes.
This can be high or low, depending on the level of abstraction in your library.
Optimise current design's couplings before considering class library (to avoid undue cross-interference).
If the design's control structure is adequate, plan to use and extend library components to complete the design.
If the library provides a superior control structure (framework) for your application, consider reworking the design.
In this case, roll back to the pre-optimal design and introduce framework classes before optimising couplings.
The class library is not fixed, especially during the early stages of a software house's construction of its store of reuseable components.
add a new, generally-needed method to a class (simple);
add a new variant of a component with additional functionality (simple);
reorganise inheritance structure to accommodate new component sharing functionality with an existing class (moderate);
reorganise communication structure to accommodate new devolved responsibilities (moderate);
reorganise communication patterns to counter unbalanced distribution of responsibilities (difficult).
Once you know the ideal communication structure for your application, you can proceed to refine methods.
Object interaction graphs show a diagrammatic trace of a single method invocation, with the sub-methods and sub-objects participating (Fusion):
show ordered path of execution;
show new objects created;
distinguish single objects from collections
NB Fusion introduces OIGs at a much earlier stage - inappropriate, since at that point the communication patterns across the whole system are not yet understood.
show required formal arguments for each responsibility (RDD);
write process outlines for each method (Fusion)
write formal specifications for the pre- and post-conditions of each method's execution (Fusion, Eiffel) - sketched below.
NB - you need to know which object references a class holds before deciding formal arguments...
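As an illustration of the formal specification step (a sketch assuming a C++ target, checking the conditions with assertions; Eiffel would state them declaratively with require/ensure clauses):

    #include <cassert>

    class Account {
        double bal = 0.0;
    public:
        double balance() const { return bal; }

        // specification: pre  amount > 0 and amount <= balance()
        //                post balance() = old balance() - amount
        void withdraw(double amount) {
            assert(amount > 0 && amount <= bal);  // pre-condition
            double old = bal;
            bal -= amount;
            assert(bal == old - amount);          // post-condition
        }
    };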
Now that you know the responsibilities of each class, you can determine what data each class must store.
Budd (1991) suggests listing on the back of CRC cards the data that each class should manage (RDD):
distinguish temporary from long-term managed data;
data attributes allocated on a need-to-know basis, to a class with responsibility for managing them;
Cross-check: avoid classes that make remote modifications to another class's data (sketched below):
indicates that the data is being managed by the wrong class;
promotes unnecessary links between classes.
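A small sketch of the cross-check (C++; names illustrative): the remote modification is rejected once the data is private to its managing class, and the correct version goes through the owner's own responsibility.

    class Account {
        double bal = 0.0;        // managed here, on a need-to-know basis
    public:
        void withdraw(double amount) { bal -= amount; }
    };

    class ATM {
        Account& account;
    public:
        explicit ATM(Account& a) : account(a) {}
        void dispense(double amount) {
            // wrong: account.bal -= amount;  // remote modification -
            //                                // will not compile, bal is private
            account.withdraw(amount);         // right: ask the data's owner
        }
    };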
identify and classify on basis of behaviour;
add data attributes at later stage.
Clearly, RDD leads to a better design for ADTs than Shlaer/Mellor, Coad/Yourdon - data is really a private affair.
In S/M, C/Y you classify on the basis of data! This is too early and leads to incorrect encapsulation boundaries.
Now that you know the communication patterns of each class, you can determine what links a class has to another.
All collaborations require a handle on the collaborator class. No OOAD method has any guidelines, so we suggest:
permanent links - for collaborators that are used several times, or with which a strong logical connection exists;
temporary links - eg targetting a formal argument of one method with another message - in other cases.
How do we determine temporary and permanent links? A reasonable guide is:
keep permanent links with sub-objects at the next level of encapsulation;
rely on argument passing for sub-objects at the same level of encapsulation.
Why? To avoid mutual links between objects at the same level of encapsulation (reason why we reworked the design in the first place!)
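A sketch of the guideline in C++ (names illustrative): the sub-object at the next level down is held through a permanent link, while the peer at the same level is reached only through a formal argument.

    #include <iostream>
    #include <string>

    class Printer {                      // a peer, at the same level
    public:
        void print(const std::string& s) { std::cout << s << "\n"; }
    };

    class Engine { /* ... */ };          // a sub-object, one level down

    class Car {
        Engine engine;                   // permanent link: sub-object at
                                         // the next level of encapsulation
    public:
        // temporary link: the peer arrives as a formal argument, so
        // Car and Printer never hold mutual references
        void report(Printer& to) { to.print("car status"); }
    };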
Some OOAD methods offer various diagrammatic techniques to illustrate the intended modular structure of the system. None serve any guiding purpose in the method.
In Objectory, the analysis model is converted into an isomorphic block design.
a block is essentially a coarse-grained domain object, together with the other supporting domain objects and system objects used to implement it;
main use of blocks is to partition private and public parts of the domain object and suggest library classes.
Not especially good at deciding file structure - just reflects the analysis model.
Booch module diagrams:
physical design (files) - header, implementation, main;
subsystem notation (encapsulate modules).
Booch process diagrams
allocation of processes to processors;
linkage of hardware devices, peripherals.
Visibility graphs (Booch, Fusion) provide dimensions for classifying inter-object links:
permanent vs temporary visibility
exclusive vs shared linkage
bound vs free lifetime dependency
constant vs variable linkage
Permanent links to objects can be implemented through
direct containment (embedding)
indirect reference (pointer)
Temporary links can be implemented through reference or value parameters in methods.
A bound object can be deleted when its client is deleted.
An exclusive object cannot be passed as a parameter or referenced by more than one client.
A bound, exclusive object can be embedded.
A constant object can only be initialised, not updated.
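A minimal sketch (C++; names illustrative) showing how these classifications map onto implementation choices:

    class Document { /* ... */ };

    class Editor {
        Document buffer;          // permanent, bound, exclusive link:
                                  // direct containment (embedding) -
                                  // deleted when the Editor is deleted

        const Document* original; // permanent, shared, constant link:
                                  // indirect reference (pointer),
                                  // initialised but never updated through it
    public:
        explicit Editor(const Document* orig) : original(orig) {}

        // temporary link: a reference parameter, visible only for the
        // duration of the method
        void compare(const Document& other) { /* ... */ }
    };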
Most methods seem to say no more than "make sure the models look OK". We can formalise this better...
To round off the design, we need to check that:
the system has been properly reorganised to profit from reuse, especially control frameworks;
closely-coupled classes have been encapsulated and contracts minimised;
every responsibility has a method protocol and object interaction graph;
every method has access to the data required and that the data is allocated correctly in its class;
every collaboration is represented either as a permanent or temporary object link;
every permanent object link is either embedded or a reference;
every temporary object link is available in the form of a method argument and the method is in scope when the link is needed.
etc...