
6.6 COMPLEXITY: STORAGE AND PROCESSING REQUIREMENTS

6.6.1 METRICS OF COMPLEXITY

Various aspects make a model complex. To explore these aspects, the characteristics of model development have been divided into two categories: authoring and implementation. The requirements for the design and authoring of an artifact, however, are not necessarily distinct from the requirements for its implementation and use. Essentially, the complexity of the tooling affects the complexity of the artifact, and the complexity of the artifact affects the complexity of the tool. Any measure of the complexity of a model must therefore take into account the complexity of the tools required to build that model.

The co-development of tools and models is illustrated by the traditional engineering trade-offs made between cost, in terms of time and resources (the tools of production), and the implementation and functioning of the final artifact. Adding to these complexity issues are considerations pertaining to the entire product life cycle which, although remote from the traditional concerns, are increasingly necessary for the development of a quality product.

A software program can be examined in terms of the program as an artifact and the tools necessary to create that program. From a slightly different perspective, the software program can be viewed as both a tool, to employ algorithms and manifest concepts, and as an artifact in and of itself. This is analogous to an examination of a mold with regard to its creation of an artifact, and an examination of the mold as an artifact itself.

The shift here is from the concurrent examination of an artifact and its tool to the examination of the artifact as a tool. While there are metrics which explore the breadth and depth of the program artifact, the self-referential nature of the design cycle as a whole has even led to fractal theory as one measure of complexity [Chen 92].

Complexity metrics in software attempt to measure, among other things, the time it takes to complete a program, the susceptibility of a program to errors, the ease of testing a program, and the maintainability of a program. Additional considerations of resources include computational speed, memory requirements, and a characterization of computation time. While there may exist orthogonal dimensions of concern within the entire software design cycle, many of the popular measures correlate with each other to some degree. The determination of the metrics may involve one or more of the following (a brief illustrative sketch follows the list):

1) Structural Complexity: The expression of the topological relationships of a system's components.
2) Computational Complexity: The algorithms' computational difficulty, often given as a characterization of the time to process given elements.
3) Logical Complexity: Systematic difficulty of decision-making flows.
4) Conceptual Complexity: An organizational view which takes human cognitive and resource factors into account.
5) Textual Complexity: Analysis of program source texts.
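
As one concrete illustration of the first category, structural complexity is often approximated from the topology of a program's call graph. The following sketch, in Python, counts the fan-in and fan-out of each component; the call graph, the function names, and the combined score are invented for demonstration and are not a definition taken from the literature.

# Illustrative sketch: structural complexity approximated as the fan-in and
# fan-out of components in a call graph. The call graph below is invented
# purely for demonstration.
call_graph = {
    "main":      ["parse", "report"],
    "parse":     ["read_line"],
    "report":    ["read_line", "format"],
    "read_line": [],
    "format":    [],
}

def fan_out(component: str) -> int:
    """Number of components this component calls directly."""
    return len(call_graph[component])

def fan_in(component: str) -> int:
    """Number of components that call this component directly."""
    return sum(component in callees for callees in call_graph.values())

# One possible structural score per component, in the spirit of
# information-flow metrics: (fan-in x fan-out) squared.
for name in call_graph:
    print(name, (fan_in(name) * fan_out(name)) ** 2)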

Software complexity metrics work effectively as predictive measures and are largely used to estimate the cost of a programming project. Complexity metrics are often given on an ordinal, interval, or ratio scale. The most well-known complexity measures are Lines of Code, McCabe's cyclomatic complexity metric [McCabe 82], and the Halstead software science metrics.

Lines of Code is a strict measure of how many lines of code a program takes up. McCabe's cyclomatic complexity, by contrast, counts the branches contained in a program; a level of 10 is considered ideal, although many programs in active use have complexity levels of 100 or even 1000 [Perry 90]. The Halstead metrics, which can be computed from counts of the operators and operands in a program, determine the size of a program.
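
To make these definitions concrete, the following Python sketch approximates the three measures for a C-like source string. The decision-token list, the token splitting, and the function names are simplifying assumptions for illustration, not the exact procedures defined by McCabe or Halstead.

import math
import re

DECISION_TOKENS = {"if", "for", "while", "case", "&&", "||"}

def lines_of_code(source: str) -> int:
    """Count non-blank source lines (the simplest LOC variant)."""
    return sum(1 for line in source.splitlines() if line.strip())

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's V(G) = E - N + 2 for a single routine as the
    number of decision points plus one."""
    tokens = re.findall(r"\w+|&&|\|\|", source)
    return sum(1 for t in tokens if t in DECISION_TOKENS) + 1

def halstead_length_estimate(operators, operands) -> float:
    """Halstead's estimated length N = n1*log2(n1) + n2*log2(n2), where n1
    and n2 are the numbers of distinct operators and distinct operands."""
    n1, n2 = len(set(operators)), len(set(operands))
    if n1 == 0 or n2 == 0:
        return 0.0
    return n1 * math.log2(n1) + n2 * math.log2(n2)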

Empirical studies deem lines of code to have the best correlation with programming time. The [Harrison & Cook 87] study of a project with 30,000 lines of C code confirms that Lines of Code is a reliable measure of program complexity, apart from being the most straightforward and easily calculated. In another empirical survey, of 3442 Pascal and 1123 Fortran routines totaling about 400,000 lines of code, [Lind & Vairavan 89] determined that developmental effort was best measured, in decreasing order, by the total number of lines, McCabe's cyclomatic complexity, and Halstead's program length. Halstead's program length estimator was found to be more accurate than Jensen's. In predictive models for programming errors, however, [Khoshgoftaar & Munson 90] found the lines of code metric to be not as effective as measures of software control complexity.

The factor analysis of software complexity measures by [Mata-Toledo & Gustafson 92] breaks Pascal programs into a number of basic features for measurement. These include the number of decisions, the number of procedures, the maximum level of nesting, the number of I/O statements, the number of lines of code, and the Halstead count. The conclusions were that normalization by lines of code was required and that small programs were intrinsically different from large programs.
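
The kind of feature vector used in such a study might resemble the following Python sketch, which collects simple counts for a Pascal-like source string and normalizes each count by lines of code. The keyword sets and field names are assumptions for illustration, not the study's actual instrumentation, and maximum nesting depth is omitted for brevity.

import re
from dataclasses import dataclass

DECISION_WORDS = {"if", "while", "repeat", "case", "for"}
IO_WORDS = {"read", "readln", "write", "writeln"}

@dataclass
class Features:
    loc: int
    decisions_per_line: float
    procedures_per_line: float
    io_per_line: float

def extract_features(source: str) -> Features:
    lines = [line for line in source.lower().splitlines() if line.strip()]
    loc = len(lines) or 1
    words = re.findall(r"[a-z_]\w*", "\n".join(lines))
    decisions = sum(w in DECISION_WORDS for w in words)
    procedures = sum(w in ("procedure", "function") for w in words)
    io_count = sum(w in IO_WORDS for w in words)
    # Normalizing each raw count by LOC makes programs of different sizes
    # comparable, as the study concluded was necessary.
    return Features(loc, decisions / loc, procedures / loc, io_count / loc)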

[Munson & Khoshgoftaar 90] indicate that software complexity metrics have largely had little success, especially in predictive models, because of a lack of understanding of the exact nature of what is being measured. It is acknowledged that many of the metrics are related.
