WO1992009034A1 - Method for analysis and prediction of a software program development process - Google Patents

Method for analysis and prediction of a software program development process

Info

Publication number
WO1992009034A1
WO1992009034A1 (PCT/NL1991/000224)
Authority
WO
WIPO (PCT)
Prior art keywords
development
parameters
information
analysis
program
Application number
PCT/NL1991/000224
Other languages
French (fr)
Inventor
Frederikus Petrus Hirdes
Original Assignee
Techforce B.V.
Application filed by Techforce B.V.
Publication of WO1992009034A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3604 Software analysis for verifying properties of programs
    • G06F11/3616 Software analysis for verifying properties of programs using software metrics

Definitions

  • The analysis program can be realised in such a way that when certain parameter values appear (values which can be defined in advance) a warning is given immediately, by which the execution of certain development tools is directed.
  • An example is a norm for the number of nesting levels allowed in a program (exceeding it means a level of complexity that is too high, leading to very expensive maintenance). If the analysis program discovers that this norm is exceeded, the compilation process can for example be interrupted automatically, or, if the analysis precedes the compilation, the compilation may never get a go-ahead. The system developer may then only continue after reconstructing the system component according to the norm. Such warnings can also simply be given to the system developer, who will take action within his own responsibility.
  • Application 5: Structure metrics make it possible to present information concerning the structure of system components in isolation from details that do not influence the structure. By showing such parameters immediately after the system developer has executed a development tool, a warning might be given if a system developer carries out changes that exceed his authorization. Such warnings can be transmitted, for example, to a specific file of the quality department, which will be informed about such situations from time to time.
  • In addition, the load of the computer system can be optimised. The analysis program might, for instance, first trace whether a system component has gone through relevant changes. A source code may, for example, have been extended with comments without a single statement being changed; a new compilation is then redundant, as the compiler would not produce a different result from the preceding version of that system component.
  • Figure 1 shows the successive programming and compilation steps which, according to the state of the art, are executed during the programming process.
  • Figure 2 shows the steps which are executed during a production process according to the invention.
  • Figure 3A illustrates, for a typically chosen parameter, how the set of measured values for this parameter can be used for analyzing the program development process.
  • Figure 3B illustrates a diagram similar to figure 3A, but for another selected parameter.
  • Figures 4A and 4B illustrate the advantage of a fine-grained parameter set collected in accordance with the invention.
  • FIG. 1 shows, as common practice, the steps which need to be executed in sequence in a program development process.
  • In step 1 a number of lines of code are programmed, in step 2 the constructed program part is compiled, in step 3 more lines of code are produced, in step 4 a new compilation is executed, etc.
  • This production process finally ends in a final compilation, during which it can be determined that the program functions well and no longer contains compilation (syntax) errors.
  • During this known production process certain parameters may be measured and possibly filed; in that case the measurements might relate to the number of lines of code which the program contains up to that point.
  • Figure 2 illustrates a program production process in accordance with the invention. During this production process the same alternation of programming and compile actions appears, of which the following steps are indicated in figure 2: program step 1, compile step 2, program step 3 and compile step 4.
  • A connection has now been made between the compiler, used for the execution of the various compile steps, and the parameter measuring program, used to quantify the parameters of the program under development.
  • As outlined in figure 2, this connection results in the quantification of the parameters of a program part every time that program part is compiled. More specifically, figure 2 shows that, synchronised with compilation step 2, the parameter measurement step 5a is also executed; synchronised with compilation step 4, the parameter measurement step 5b is also executed, etc.
  • In figure 3A the number of lines of code is given as a function of time. At the beginning of the production process no lines have yet been produced and the number of lines of code is 0. Gradually this number grows over time. In general the number of lines of code will initially rise relatively fast, while towards the end of the production process its growth will decrease to 0. In the final stage the number of lines of code will be nearly constant and only the last error messages will be eliminated.
  • In figure 3B the number of error messages is shown, as generated by the compiler after every compile step. This number of error messages can vary, and will in general fluctuate heavily in the part of figure 3A where the steepest gradient occurs. Towards the end of the production process the number of errors will decrease to 0, in order to realise a program without compile errors.
  • Figure 4A shows the curve which results if only three values of the defined parameter are available, in this case again the number of lines of code.
  • The three values, indicated in figure 4A with a, b and c, at first sight show a normal curve. If however the production process concerned had been inspected in more detail, by measuring the value of the parameter concerned at a larger number of points in time, a completely different curve might have appeared, as shown in figure 4B.
  • The analysis is executed at a point in time when the system developer has made a significant change in the system component (otherwise he or she would not invoke an action of the development program or tool).
  • The system developer does not need to carry out extra actions.
  • The system developer can receive feedback from the analysis, by which his or her attention is drawn to various aspects of the work, so that he or she can react in response. This will influence the work in a positive way. (For instance, the developer can be warned after compilation that the number of test paths of the new version has increased sharply. Perhaps it then becomes clear that further development in this direction will result in a module that is difficult to test, and an adapted strategy can be considered at an early stage.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

A method for analyzing a computer-supported software development and/or maintenance process, wherein, for obtaining insight into the process and the resulting products, at least once during the process one or more parameters, keywords or other meaningful information relevant for describing a certain criterion of the product being developed or maintained are measured or collected and compared with similar data collected earlier in the development and/or maintenance process. These data are measured or collected at time intervals which are relatively short compared with the total development or maintenance period, in order to facilitate time-series analysis of the said parameters, keywords or other meaningful information; to link the execution of an analysis program, which collects or measures such parameters, with one or more actions which are essential for the development process; or to describe the structure of the software components in such a way that similar software structures from different notations or development techniques, which may come from different development stages in the development life cycle, are handled in a uniform way.

Description

Title: method for analysis and prediction of a software program development process
The invention relates to a method for the analysis of a computer-supported software development process. According to this method, at least once during the process, one or more parameters of the product under development are measured and compared with similar parameters defined earlier in the process, in order to gain an understanding of the development process and the resulting product.
The collection, measuring and analysis of such parameters can be done using a computer program. This program may consist of several functional parts, such as a data collection part, a transformation part and a reporting part. In this invention we will use 'analysis program' for a program which can consist of any combination of such functional parts and hence can combine any combination of the functionalities 'data collection', 'transformation' and 'reporting'. The last functionality may also include graphical functions and import and export facilities.
Software Engineering Process
The development of software has various phases, like problem analysis, specification of requirements, system design, design of data structures, coding, testing and documenting. These and other phases can be executed by various system developers in their own selection and sequence. In general, however, a company or a project uses a more or less fixed set of defined process steps.
Nowadays these phases are more and more supported by automated tools, like Computer Aided Software Engineering (CASE) tools. Examples of these tools (programs) are: graphical tools for the construction of schematic diagrams, syntax checkers, entity relationship diagramming tools, consistency checking programs, configuration (version) control tools, wordprocessors, programs for the construction of cross reference lists, programs for the normalization of data structures, file and database management systems, compilers, test generators, "make" utilities etc.
In general a complete software system will consist of various subsystems, each realizing a specified functionality of the complete system. The decomposition of the total system into the so-called system components is called the architectural design of the system. This architectural design defines the main parts of the system.
At the design phase these parts are not yet fully developed, because only their designs exist. Later in the development cycle they will be given to programmers, who will further develop the system by transforming the different design components into coded system components. These coded components will be documented and tested.
So during the development process each system component will go through various phases before it is ready for use. A system under development will consist of many components in different stages. It is noted that different system developers can work simultaneously in different phases on the development of a large system.
At each point in time a system under development will therefore consist of a collection of system components, each in its own phase of development and with its own history.
Process and Product Information
In order to improve the complex development process, different kinds of information about the system under development may be used to better control the different development tasks. Programmers can be helped most effectively with detailed information about individual system components. Technical group managers and quality staff might be interested in groups of system components and how they obey predefined norms. Project managers are mostly interested in information at a higher level of aggregation.
We can distinguish between information related to the process, like the programmer's name, his level of experience, the time spent on the testing of a component etc.; and on the other side information related to the product, like the size, the complexity, the number of variables used etc.
Product-related information may be straightforward, like the identifier (name) of a component or its number of lines of code, but more complex information may also be obtained, like the number of non-commented statements with a higher than average complexity. Computed information taking into account both process and product information can also be very useful, for instance the average number of tested lines of code per person-month, or the estimated test coverage for a certain set of modules with a computed total software mass, taking into account size and complexity, per average person-month.
All these kinds of information may be defined for different objectives. Some straightforward information may only be wanted to control the development costs or the proper filing of system components. Keywords may be searched in order to sort system components, the likelihood being that components with correlating functionality, as described by their keywords, are grouped together in this way.
Other information may be used to judge system components on more objective criteria such as quality, maintainability, complexity, size, testability, stability etc.
Metrics as a special kind of Information
For the last purpose it is common to speak about 'metrics' when information is meant that can be measured in a predefined way. For effective measurement, automated or non-automated techniques are used.
In this context special attention is drawn to the measuring of various parameters of system components, like the number of lines of code, the number of different functions, the number of entity relations, the number of called functions, the number of hierarchic levels of nested routines etc. We can speak about parameters or 'metrics'. Metrics can also be defined as complex compositions of various measured parameters. Examples of well-known complex metrics are the McCabe metric (a measure of complexity), the Rechenberg metric, the Halstead metric, and the Henry and Kafura fan-in and fan-out metrics. Such metrics must be interpreted precisely, based on a thorough understanding of them; this has shown the need for long periods of experience and calibration before metrics can be used effectively. The invention supports a more advanced usage of what metrics are capable of. By combining them with management techniques from other disciplines, we make metrics a powerful means for supporting decisions about system development, where inexperienced persons can start using metrics without a long period of training and become more advanced users through further training on the job. In general practice metrics are nowadays mostly collected from already developed system components, and even mostly from code components. This makes the usefulness of metrics low, because at the code phase of the development cycle the possibilities to change the system under development are smaller than in the earlier design phase. The invention has therefore concentrated on earlier metrics. Another disadvantage of metrics today is that they are often compared against a value which is seen as a more or less generally accepted norm; an example is the norm for the McCabe metric, which it is often advised to keep below 10.
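By way of illustration, the following minimal sketch (Python, with an invented flowgraph; not part of the patent text) shows how the McCabe number V(G) = E - N + 2P can be computed from the control flowgraph of a component:

```python
# Hypothetical sketch: McCabe cyclomatic complexity from a flowgraph.
# The graph encoding and the names are illustrative assumptions.

def cyclomatic_complexity(nodes, edges, components=1):
    """V(G) = E - N + 2P for E edges, N nodes and P connected
    components (usually one per subroutine)."""
    return len(edges) - len(nodes) + 2 * components

# Flowgraph of a routine containing a single if/else decision:
nodes = ["entry", "test", "then", "else", "exit"]
edges = [("entry", "test"), ("test", "then"), ("test", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(nodes, edges))  # -> 2 (two independent paths)
```

A norm such as "keep V(G) below 10" can then be checked mechanically against this value.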
Parameters
Because the invention is not restricted to metrics, we prefer to speak about parameters in this document. So the word 'parameters' is used here to include metrics, keywords and other meaningful information. These parameters may be collected (automatically or not) from system components, but also from manual procedures and from sources outside the computer system.
Usage of parameters
Based on the usage of these parameters, complete or partial redevelopment of system components as a result of the analysis is not abnormal. After such rework a new analysis of the system component can be executed in order to measure whether the component now meets the acceptance criteria.
Important in the use of such parameters is the approach of using them at development phases earlier than the code phase, to be able to react better to their analysis results, because the earlier an action is taken, the less expensive it is to change the system. This means that parameters measured from a design notation of a system component can indicate that the design should be changed before the designed system component is translated into coded components.
This makes clear that relations exist between components from different development stages: not only because they are designed to transfer information between them, or to allow one component to control the execution of another component, but also because components can be developed from each other. We therefore distinguish between system-dependent relations (two components which are designed to interact as part of the system) and process-dependent relations. In the latter kind of relation time is an important element.
When for a software system a functional requirement (such as the printing of a sales report) is developed, we can see that somewhere in the development process a component for that requirement will appear. This might start as a primitive design described in text, as a design using a formal notation, as a body in a programming language etc. From this conception onwards this component will have a life and hence also a history. The invention is aimed at using information, and especially parameters, which can be generated along the lifetime of such components (giving historic information), to enable software engineers, quality control staff and managers to gain better insight into the development process and to be able to judge the quality and other criteria of the individual components, of sub-systems and of the system as a whole.
Historic information
To illustrate the use of historic information we can think of a code component which enters the test phase. As a result of the tests the component might be changed, and this may result in a higher complexity of the component. This cycle may be repeated several times until no errors are left. In general the changed component may have become more complex at the end. If we then take a snapshot of the complexity of the component and compare it with the complexity when the component entered the test phase, we would see a growth of complexity, which normally indicates a rise in error-proneness.
This illustrates that measuring complexity once is not enough to indicate the quality of a component: historic insight is necessary. The same kind of example can be given for many other parameters. The invention defines methods to collect and use such historic information.
The usage of historic information becomes more and more necessary as software production gets more attention from managers. This means that techniques used in other disciplines may also be used in software engineering. We think especially of techniques such as time-series analysis, pattern recognition, mathematical operations, transformations like Fourier and Laplace, exception detection etc., and also of more statistically oriented techniques like averaging, smoothing, factor analysis, principal component analysis, outlier detection, and the computation of means, medians, quantiles etc.
A special class of techniques becomes feasible when using historic information. Especially interesting are the quality control techniques used as standard monitoring techniques in industry, like CUSUM charts, Shewhart charts etc. These techniques may now be applied to software components and the software engineering process, advancing the software process towards ISO 9000 principles and implementing the idea behind Total Quality Management in software engineering. Moreover, the storage of measurements or parameter information in files or databases is feasible. Parameters of one system component can also be saved at different points in time, in order to be able to make a historical overview of the development. Generally speaking, parameters may be saved in files not necessarily belonging to the system environment of the system developer, but for instance belonging to the environment of the quality control officer, the manager etc. We regard it as a great disadvantage that an optimal integration is lacking between parameters and the original components that are their source. In order to collect (measure) the parameters of the system under development, the system has to be made available for the analysis program by the software engineer at an appropriate moment.
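As a hedged sketch of how such a control chart could be applied to the filed history of one parameter, the fragment below computes a one-sided CUSUM; the target, the slack k and the decision threshold h are invented here and would in practice be calibrated per organisation:

```python
# Sketch: upper CUSUM over successive measurements of one parameter
# (e.g. the complexity of a component over its compile history).

def cusum_high(values, target, k=0.5, h=4.0):
    """Return the index where S_i = max(0, S_{i-1} + x_i - target - k)
    first exceeds h, or None if the series stays in control."""
    s = 0.0
    for i, x in enumerate(values):
        s = max(0.0, s + x - target - k)
        if s > h:
            return i  # first out-of-control measurement
    return None

history = [10, 11, 10, 12, 15, 16, 17, 18]   # complexity drifting upward
print(cusum_high(history, target=10))         # -> 4: alarm at the drift
```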
Automatic invocation of the analysis
The inconvenience experienced by the software engineer in being interrupted during his normal activities, and the fact that steps have to be taken to execute the analysis by others or possibly by the developer himself, result in practice in the number of times that the parameters are measured during the production process being relatively small. The analysis executed on the basis of the parameters so measured is therefore very rough and usually does not offer enough insight into the development of the process at various moments.
The invention therefore also has the goal of pointing out how and when the indicated parameters can be measured in a simple and effective way, without special actions from the developer or others.
This aim is realised by a method as indicated in the first paragraph, arranged to automatically invoke the execution of an analysis program, which measures these parameters, from a computer-executed action which is essential for the development process, so that every time the component under development is processed by this action the indicated parameters are also automatically measured and their values are consequently stored in a memory.
By continuously measuring the parameters of interest, required for the analysis process, in combination with an action that is essential for the development process and that cannot be skipped, it is achieved that the analysis program is automatically started without human intervention. As a relatively great number of such actions are executed by the computer at relatively short intervals, it is also achieved that the parameters are measured at such short time intervals and their values stored in a memory. During the development process a set of parameters will therefore be collected which provides a detailed and specific insight into the process and into the resulting product.
Structural information
As mentioned before, the different components have system-dependent relations. These relations are important aspects which define the quality and functionality of the system, and together they make up the 'structure' of the system. However, a software system has several structures.
The data structure defines the way data elements are interlinked and hence defines the possibilities (flexibility and efficiency) for the system in using the data.
The control structure defines the way processes of the system are interlinked and hence defines the functional possibilities of the system. Again there are several control structures, like calling structures, task scheduling structures, interrupt structures, interfacing structures by data stores, etc.
A third kind of structure defines the way system components re-use other existing components. These components can be stored in libraries relating to a system, a project, a department, a company, an industry etc. Such structures are used, for instance, to manage the consequences in dependent components of a change in one of those components.
To be able to use information related to those structures economically, the use of a generic technique for describing structural information is important. We know of such techniques based on flowgraphs, Petri nets, hierarchical decomposition techniques, techniques using descriptor technology and techniques based on the theory of Fenton/Whitty. These techniques may be extended with classification schemes, to be economically capable of including, in a standardised way, other non-structural information of software notations.
With such a technique we are able to split the problem between two system parts: one part for translating structural information into a common representation (given by the mentioned techniques) and another part which translates this common representation into meaningful information, adapted for a certain purpose.
The technical and economical advantage of this method is that, with the common representation covering all kinds of software structures, software components are translated in such a way that similar software structures from different software notations or development techniques, and from different development stages in the development lifecycle, are handled in a uniform way, so that the structures of related system parts can be compared along the lifecycle or across different development methods. This opens the way to economically study the life cycle of system components and to far-reaching automation of the analysis of related parameters across the development cycle and/or across different development environments or tools.
Security mechanism for reconstruction
An interesting feature combined with our generic structural representation technique is that we can define our translation method from software notation into our generic structural representation in such a way that all of the structural information is still available, but re-translation into the original notation is not possible, because some information is deliberately thrown away during the forward translation. This can for instance be done by throwing away the names of variables and/or the exact statements. Instead of taking over all information exactly, for our purpose it is normally enough to count such information. In this way re-engineering (backwards translation) into the original code is not possible. The advantage is that the data collecting part of the analysis system can be distributed, and offers the security that the collected data can be given to someone without fear that the original code can be reconstructed.
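A minimal sketch of this one-way translation, assuming a deliberately naive tokenisation (the counting rules and names are illustrative, not the patent's own):

```python
# Sketch: identifiers and statements are only counted, never stored,
# so the original source cannot be reconstructed from the record.
import re

def max_nesting(source: str) -> int:
    """Peak brace depth as a crude stand-in for structural nesting."""
    depth = peak = 0
    for ch in source:
        if ch == "{":
            depth += 1
            peak = max(peak, depth)
        elif ch == "}":
            depth -= 1
    return peak

def structural_summary(source: str) -> dict:
    names = re.findall(r"[A-Za-z_]\w*", source)   # keywords included: naive
    return {
        "statements":   source.count(";"),
        "identifiers":  len(names),               # a count, not the names
        "unique_names": len(set(names)),          # only the set size
        "branches":     len(re.findall(r"\b(if|while|for|case)\b", source)),
        "max_nesting":  max_nesting(source),
    }

print(structural_summary("if (x > 0) { y = f(x); } else { y = 0; }"))
```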
Interfacing with CASE tools
Often it will be necessary to collect parameters from software components built with different CASE tools: sometimes because the organization has changed to another tool and the original component is now further developed or maintained in the new tool, or because a component is designed in an UpperCASE tool and further implemented in a fourth generation (LowerCASE) tool. To be able to measure the corresponding parameters over the life cycle, it is necessary to translate the notations from both tools into our structural representation, as mentioned above, in such a way that comparable semantic meanings of all translated notations correspond with each other.
This can be done in the following way. Think of two CASE tools, each of which has its own internal database (repository). Each of the CASE tools will have its own semantic model (often referred to as a Meta Model) which describes, in the form of a data model, the CASE techniques it supports.
We may define the Meta Models of each tool in a common notation (known as a Meta Meta Model) and store them in an external file. We define transformation rules between the different CASE tool Meta Models, which are also stored in a file. In some cases more than one transformation will be possible between two sets of objects; it may then be necessary for a single rule to be defined as the default, with other translations as allowed options. We also define transformation rules between these Meta Models and standardised exchange formats. These transformation rules may be used as input parameters to a program that automatically reads one CASE tool's data and transfers it into a format which can be read by another CASE tool, or into our structural representation.
A common formalism may be used to express Meta Models, such as SQL2 from ISO IRDS and Extended Entity Relationship from ECMA PCTE and EIA CDIF. To avoid the need to access each CASE tool's proprietary data transfer structures, a transfer format may be used in which CASE data can be expressed, like SYNTAX.1 from EIA CDIF, Semantic Text Language from IEEE P1175 and IBM's proprietary External Source Format.
The described method may use a transformation language for expressing the mapping between CASE Meta Models, declarative and powerful enough to express complex two-way mappings such as "turn all Relationships with Attributes into Associative Entities". For this it may use transformation formalisms such as TAMPR, CIP and REFINE, along with standardised exchange syntaxes.
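The toy fragment below shows what applying one such declarative rule could look like. The object model is a hypothetical stand-in for a real CASE repository; formalisms such as TAMPR, CIP and REFINE express such mappings far more generally:

```python
# Sketch of the rule "turn all Relationships with Attributes into
# Associative Entities" applied to an invented meta-model instance.

def relationships_to_associative_entities(objects):
    transformed = []
    for obj in objects:
        if obj["kind"] == "Relationship" and obj.get("attributes"):
            transformed.append({"kind": "AssociativeEntity",
                                "name": obj["name"],
                                "attributes": obj["attributes"],
                                "links": obj["ends"]})
        else:
            transformed.append(obj)
    return transformed

model = [{"kind": "Entity", "name": "Order"},
         {"kind": "Entity", "name": "Product"},
         {"kind": "Relationship", "name": "OrderLine",
          "ends": ["Order", "Product"], "attributes": ["quantity"]}]
print(relationships_to_associative_entities(model))
```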
These techniques may be used to overcome the problem of having to develop and maintain individual interfaces between our analysis program and many CASE tools, which would be an uneconomical situation and would leave the analysis method without enough flexibility to give it a long future. With this technique the interfacing with CASE tools is secured.
Time stamps and identification of the origin of data
Preferably, together with the indicated parameter values, the actual time and date when the values were measured are also stored in a memory. To be able to also establish the duration of certain activities performed on a component, it is necessary to measure the start and end time of that activity. These measurements can easily be made fully automatic. They may be combined with an indication of the development activity which is the origin of the measured parameter data. When using computer-assisted activities like CASE tools, utilities etc., it is possible to measure this indication automatically as well. Together this offers the possibility to relate the stages which a system component has been through and the time (effort) required for each stage.
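A sketch of what such a stored record could contain; the field names, the line-per-record file layout and the helper function are assumptions made for illustration:

```python
# Sketch: each measured value is filed with the measurement time, the
# originating development activity and, where relevant, start/end times.
import json, time

def file_measurement(component, parameter, value, activity,
                     started=None, ended=None, path="abcdefg.met"):
    record = {"component": component, "parameter": parameter,
              "value": value, "activity": activity,
              "measured_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
              "started": started, "ended": ended}
    with open(path, "a") as f:         # appending preserves the history
        f.write(json.dumps(record) + "\n")

file_measurement("abcdefg.c", "lines_of_code", 412, activity="compile")
```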
Linkage between the analysis program and other computer actions
As indicated, the invention ensures that during the development process the analysis coincides with the essential development actions. According to a preferred implementation of the method according to the invention, the action by which the analysis program is invoked, or with which it is integrated, consists of the execution of one or more of the following kinds of computer programs or utilities (smaller actions, in general part of the system environment):
- a drawing program for schematic diagrams,
- a syntax control program,
- an entity relationship diagramming tool,
- a consistency checking and control program,
- a configuration (version) management tool,
- a wordprocessor,
- a program for the building of cross reference lists,
- a program for the normalization of data structures,
- a file and database management program,
- a compiler,
- a test generator,
- a 'make' utility.
In other words, the analysis program will be executed automatically every time the programmer, for instance, checks the syntax of a system component with a syntax control program, compiles source code using a compiler, or stores program documentation (text) produced by a wordprocessor in a directory, etc. This implies the possibility to define such linkage mechanisms adapted to the actual development process. The analysis program may also be invoked by a mechanism belonging to the system environment (such as a pipe or script facility) in such a way that it is integrated with actions which are essential for the development process. This integration, however, is not always necessary. For instance, the measurements may be taken at certain time intervals, at day-end, or initiated from an action from another part of the computer system etc. In such situations there is no fixed integration with actions belonging to the development cycle.
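A minimal sketch of such a linkage mechanism: a wrapper installed in place of the compiler command, so that every compilation first runs the analysis. The 'analyse' program and the command names are hypothetical:

```python
# Sketch: wrapper comparable to a pipe or script facility that couples
# the analysis program to an action essential for development.
import subprocess, sys

def compile_with_analysis(source: str) -> int:
    subprocess.run(["analyse", source])      # 1. measure the parameters
    return subprocess.run(["cc", "-c", source]).returncode  # 2. compile

if __name__ == "__main__":
    sys.exit(compile_with_analysis(sys.argv[1]))
```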
Storage of the parameter values
According to a further implementation of the method, preferably for every system component of the software under development a separate parameter file is made, in which the measured values of the parameters are stored. Preferably the separate parameter files carry an indication such that the relation between the parameter file and the system component is immediately clear to everybody authorized to access the file.
The invention, as indicated in the above-mentioned and preferred implementations, also influences the structure of the parameter files, namely by the creation of parallel parameter files next to the original files of each system component (for instance the source of a subprogram, or the file in which the design of a subsystem is stored). In this way, for instance, the sources of all programs have file names like 'abcdefg.c' (for C code) or 'abcdefg.pas' (for Pascal code). The corresponding parameter file, according to the method of this invention, then gets the extension '.met', so 'abcdefg.met'. In this way it becomes easy to put the parameter files in the directories of each programmer and/or project or system. By this decentralized storage, access to the parameter files becomes simpler, and for instance each system developer can inspect his own parameter files. In this way greater use of the information is gained and the 'Big Brother is Watching You' syndrome is decreased. In order to inspect all parameter files, the central quality department only needs access to all directories of the system developers or projects. In this way a central parameter database is not necessary, and the corresponding work and costs are prevented. Another advantage is that the structure of the parameter files now automatically follows the structure of the project organization, as implemented in the directory structure.
To be able to store the information combined with the development phases and corresponding development tools, the separate parameter files may be given a predefined format such that the relation between the filed parameter values and the corresponding development tool, or the corresponding development phase from which the values were measured, can be reconstructed. This might be implemented with a predefined code system by which each existing development tool or known development phase has a unique code, or with a solution describing the file format in a unique way from which the meant relation can be obtained. Special attention must be given to maximising the storage efficiency.
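A sketch of the parallel-file convention together with a compact coded line format; the tool codes and the field layout are invented for illustration:

```python
# Sketch: 'abcdefg.c' gets a parallel 'abcdefg.met' file, and every
# stored value carries a short predefined code naming its origin.
from pathlib import Path

TOOL_CODES = {"CC": "compiler", "SX": "syntax checker",
              "ER": "entity-relationship tool", "WP": "wordprocessor"}

def parameter_file(source: str) -> Path:
    return Path(source).with_suffix(".met")   # abcdefg.c -> abcdefg.met

def append_value(source, tool_code, parameter, value):
    assert tool_code in TOOL_CODES             # unique code per tool/phase
    with parameter_file(source).open("a") as f:
        f.write(f"{tool_code};{parameter};{value}\n")   # compact line

append_value("abcdefg.c", "CC", "lines_of_code", 412)
```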
Input and output of the analysis program
To make full use of the time series of information from the parameter values, it is effective to produce graphical presentations of the data. Their value is further increased with interactive graphical presentations, for instance on a computer screen. A mechanism to select any two parameters to be presented in one graphical plot gives an effective possibility to survey correlations between these parameters. These plots can be given in different forms, like bar graphs, scatter diagrams, line plots, pie charts etc. These graphs may be extended with statistical (graphical) information like boxplots, outlier detection, regression lines etc.
The user is supported by the analysis program in an interactive way. For instance, the graphical information of a scatter plot displaying components against size can be used to select components of large size by pointing to those components, for instance with a mouse. When the mouse is clicked on a component, a window pops up on the screen and relevant information about the component is displayed. The invention also has the feature that when the user then clicks on the button 'edit', the editor is immediately activated and the selected component is loaded, ready for inspection and correction.
The analysis program will have an effective facility to detect exceptions in the sets of parameter values or in the graphical information. This can be supported by statistical or mathematical techniques, especially techniques like time series analysis, boxplots, smoothing, frequency transformations, Pareto analysis, principal component analysis, regression techniques, logarithmic transformations etc.
To reduce the amount of data to be filed, in order to keep the historic information within acceptable limits, data compression techniques may be used. Especially because the character of the data is well known, coding techniques based on this specific character may be used. Another technique may, however, be more effective: for instance, when only exceptions have to be detected, not all historic values have to be filed; in those situations the last two values are sufficient. Also, for instance, the moving average may be filed instead of all the historic values. This means that it is important to define in advance the goals for which the parameters are measured and stored; based on those decisions one can optimise how the data are stored.
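A sketch of two of the reduction strategies just mentioned; the window size and the jump threshold are invented:

```python
# (a) keep only the last two values when the goal is exception
#     detection; (b) file a moving average instead of the full history.
from collections import deque

last_two = deque(maxlen=2)                 # (a) enough to detect a jump

def record(value, threshold=10):
    last_two.append(value)
    if len(last_two) == 2 and abs(last_two[1] - last_two[0]) > threshold:
        print("exception: sudden jump", tuple(last_two))

def moving_average(values, window=4):      # (b) one figure per window
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

for v in [50, 52, 51, 80, 81]:
    record(v)                              # flags the 51 -> 80 jump
print(moving_average([50, 52, 51, 80, 81]))
```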
The techniques used for this may also be used to enhance the value of the original information, and will not always be executed to reduce the amount of data.
The analysis program will have features with which the values and graphical information can be output on paper or on other computer media, in selected reports with adaptable lay-outs.
An important feature to enhance the value of the analysis program is the possibility to import data from other tools. If provisions are made for making the information from the analysis tool itself and the data from external tools uniform, these two kinds of information may be related to each other. For instance, the analysis tool may have measured the complexity of a set of system components, while from an external error reporting tool the number of errors detected in the field is available for a certain part of that set of system components. By making sure that the identification of the system components is identical for both data sets, the analysis tool can import the external data and operate on it like data of its own. This means that the user may interactively plot these data sets against each other to study the correlation between complexity and field-detected errors.
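A sketch of relating the two data sets once the component identification matches; the Pearson correlation coefficient is computed directly, and the figures are invented:

```python
# Sketch: measured complexity versus imported field-error counts.
complexity   = {"a.c": 12, "b.c": 25, "c.c": 7,  "d.c": 31}
field_errors = {"a.c": 2,  "b.c": 9,  "c.c": 1,  "d.c": 11}   # imported

common = sorted(set(complexity) & set(field_errors))   # same identifiers
xs = [complexity[k] for k in common]
ys = [field_errors[k] for k in common]

n = len(common)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = sum((x - mx) ** 2 for x in xs) ** 0.5
sy = sum((y - my) ** 2 for y in ys) ** 0.5
print("r =", cov / (sx * sy))   # close to 1: strong correlation
```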
As an added feature, the same functionality as above is possible with manually imported data. The analysis tool therefore offers a mechanism to support manual input of data. When this data is keyed in, with state-of-the-art input support, the data may be immediately displayed in a predefined graphical presentation to show the correlation with already existing data.
Support for managers
It has been illustrated that the analysis program can deliver effective information to support management decision making. It is therefor obvious that it will be highly effective to offer further management support integrated with the analysis program. This further support may comprise from functionalities like project planning and control, cost estimation, resource scheduling, quality control, spreadsheets, electronic mail, reporting functions etc. Most of these functions may be delivered by other tools, so the intercommunication with these tools may be effectively facilitated by the analysis tool. In this way the analysis program will become a management workbench.
It is now quite easy to enhance the analysis program, and especially its data collection part, with the collection of plain product and process information. Examples of such information are: the names of system components, the names of the originating programmers, error report information, time sheet information of development and maintenance tasks, completion dates, 'where used' information, etc.
An example of what is possible with such functionality and data is the plotting of a selected set of system components against the estimated test effort, in increasing order of test effort. (This estimated test effort may be derived in a defined way from complexity and structure metrics.) By letting the manager define the total test effort available, a next plot is displayed on the screen, showing the system components with an advised test effort per component. The manager then has the possibility to adapt the advised effort distribution and can initiate a report to be printed for the test team. The test team can also look via the network at the same advised test plot and may select some components and related historic information to start a more in-depth analysis of those components. They might find out that the estimated test effort for some components is quite high because of complex mathematical routines in those components and a high goal for correctness (according to Victor Basili's Goal-Question paradigm). These components call for a test resource with special expertise in this kind of routines. During the test phase the manager can follow the progress from the automated analysis program, which automatically collects test acceptance data.
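The advised effort distribution mentioned above could, as a simple illustrative assumption, divide the available total in proportion to the per-component estimates; the sketch below shows this with invented numbers.

```python
def advise_test_effort(estimates, total_available):
    # Divide the available effort over components in proportion to their estimates.
    total_estimated = sum(estimates.values())
    return {c: total_available * e / total_estimated for c, e in estimates.items()}

# Invented per-component estimates (hours), derived e.g. from complexity metrics.
estimates = {"parser": 40, "scheduler": 10, "io_layer": 50}
print(advise_test_effort(estimates, total_available=60))
# {'parser': 24.0, 'scheduler': 6.0, 'io_layer': 30.0}
```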
With the 'management workbench' described in this invention, the software engineering process will become increasingly visible and highly manageable, with a major impact on improving the quality, costs and lead time of software projects.
Applications of the method of the invention
Application 1
By filing the information of the development process of system components in the indicated way, statistical material becomes available to build a model of the development process. With such a model one can calibrate the actual development. For instance, after measuring the design phase, a prediction can be made of the number of man-hours that will be involved in the further phases of the development. As a result, based on the initial budget for the development of a system component, the remaining budget can be derived at any moment, per development phase and for the development cycle as a whole.
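A minimal sketch of this calibration idea, under the assumption of a simple model in which history yields the typical share of total effort per phase; the ratios below are invented, not taken from the patent.

```python
# Hypothetical model derived from filed historic data: share of total effort per phase.
PHASE_SHARE = {"design": 0.25, "coding": 0.40, "testing": 0.35}

def predict_remaining(spent_so_far, completed_phases):
    # Scale the effort spent in the completed phases up to a predicted total,
    # then distribute the remainder over the phases still to come.
    done = sum(PHASE_SHARE[p] for p in completed_phases)
    total_predicted = spent_so_far / done
    return {p: total_predicted * PHASE_SHARE[p]
            for p in PHASE_SHARE if p not in completed_phases}

print(predict_remaining(spent_so_far=200, completed_phases=["design"]))
# {'coding': 320.0, 'testing': 280.0}
```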
Application 2
By collecting the information about the development cycle of system components in the indicated way, large changes in the parameter values are detected instantly. In this way discontinuities in the development can be quickly traced and corrective actions can be taken.
An example of such a discontinuity is when quite suddenly the number of functions in a certain module becomes markedly smaller. Normally the number of functions will increase from the beginning of the development of a module. This increase will taper off until finally the number no longer grows. The development of the module can then still continue, but will consist of detailed developments per function: the number of functions in the module does not change anymore.
If however during the development process the number of functions decreases strongly, only to increase afterwards, it can be assumed that the original conception of the module has changed. This often occurs after a review of various system parts leads to the conclusion that the structure was not acceptable; often a radical functional change is the cause. These so-called 'sudden jumps' can now be detected immediately and corrective actions can be taken.
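The following sketch shows one straightforward way such 'sudden jumps' could be detected from a stored series of function counts; the 30% drop threshold is an illustrative assumption, not a value prescribed by the invention.

```python
def detect_jumps(series, drop_threshold=0.30):
    # Flag any measurement that drops more than drop_threshold relative to
    # its predecessor: a candidate 'sudden jump'.
    jumps = []
    for i in range(1, len(series)):
        prev, cur = series[i - 1], series[i]
        if prev > 0 and (prev - cur) / prev > drop_threshold:
            jumps.append((i, prev, cur))
    return jumps

function_counts = [3, 8, 14, 18, 19, 7, 11, 16]  # invented measurements over time
print(detect_jumps(function_counts))             # [(5, 19, 7)] -> conception changed
```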
Application 3
Through this invention the appearance of such 'sudden jumps' in all system components can be detected, and further investigation of the dependencies between system components can take place using the 'time stamps' stored together with the measured parameters. If system components have no direct reference to each other, this method offers a manager a unique possibility to detect such a dependency.
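A minimal sketch of this use of time stamps, with invented data: jumps in different components whose stored time stamps lie within a chosen window are reported as a possible hidden dependency. The two-hour window is an illustrative assumption.

```python
from datetime import datetime, timedelta

# Per component: time stamps at which a 'sudden jump' was detected (invented).
jump_times = {
    "parser":    [datetime(1991, 3, 4, 10, 15)],
    "scheduler": [datetime(1991, 3, 4, 11, 0)],
    "io_layer":  [datetime(1991, 5, 20, 9, 30)],
}

def coincident_jumps(jump_times, window=timedelta(hours=2)):
    # Report component pairs whose jumps occurred at (almost) the same time.
    pairs = []
    names = sorted(jump_times)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for ta in jump_times[a]:
                for tb in jump_times[b]:
                    if abs(ta - tb) <= window:
                        pairs.append((a, b, ta, tb))
    return pairs

print(coincident_jumps(jump_times))  # parser and scheduler jumped together
```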
Application 4
The analysis program can be realised in such a way that at the appearance of certain parameter values - which can be defined in advance - a warning is immediately given, by which the execution of certain development tools can be directed. For example one can define a norm with regard to the number of nesting levels allowed in a program (exceeding it means a too high complexity of the program, leading to very expensive maintenance). If the analysis program discovers such an exceeding of the norm, the compilation process can for example be interrupted automatically, or, if the analysis precedes the compilation, the compilation may never get a go-ahead. The system developer may then only continue after reconstructing the system component according to the norm. Such warnings can also simply be given to the system developer, who will take action within his own responsibility.
If there are circumstances under which such standards cannot be maintained by the system developer, the procedure might be that he or she has to document an explanation. Only after providing such an explanation might the development be released to exceed the standard.
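As an illustration, the following sketch gates the compile step on a nesting-depth norm. Counting '{' and '}' as nesting is a simplification for C-like code, and both the compiler command and the norm of 5 levels are illustrative assumptions.

```python
import subprocess, sys

MAX_NESTING = 5  # illustrative norm for allowed nesting levels

def nesting_depth(source_text):
    # Simplified: '{'/'}' counting approximates nesting depth in C-like code.
    depth = max_depth = 0
    for ch in source_text:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def guarded_compile(path):
    with open(path) as f:
        depth = nesting_depth(f.read())
    if depth > MAX_NESTING:
        print(f"{path}: nesting depth {depth} exceeds norm {MAX_NESTING}; no go-ahead for compilation")
        return 1  # developer must restructure, or document an explanation
    return subprocess.call(["cc", "-c", path])  # compiler command is illustrative

if __name__ == "__main__":
    sys.exit(guarded_compile(sys.argv[1]))
```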
Application 5
Structure metrics give the possibility to present information concerning the structure of system components in isolation from details which do not influence the structure. By showing such parameters immediately after the system developer has executed a development tool, a warning can be given if a system developer carries out changes that exceed his authorization. Such warnings can be transmitted for example to a specific file of the quality department, which reviews such situations from time to time.
Application 6
By integrating the analysis with development tools, the load of the computer system can be optimised. The analysis program might for instance first trace whether a system component has undergone relevant changes. A source file may for instance have been extended with comments without a single statement having changed. In that case a new compilation is redundant, as the compiler will not produce a result different from that of the preceding version of that system component.
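A minimal sketch of this redundancy check, under the simplifying assumption that comments are '//' line comments: the source is normalized by stripping comments, and a recompilation is considered redundant when the normalized text has not changed.

```python
import hashlib

def normalized_hash(source_text):
    # Strip '//' line comments and trailing whitespace, then hash the remainder,
    # so that comment-only edits produce an identical hash.
    code_only = "\n".join(
        line.split("//")[0].rstrip() for line in source_text.splitlines()
    )
    return hashlib.sha256(code_only.encode()).hexdigest()

old = "int f(int x) {\n  return x + 1; // add one\n}"
new = "int f(int x) {\n  return x + 1; // add one (clarified comment)\n}"
print(normalized_hash(old) == normalized_hash(new))  # True -> recompilation redundant
```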
Conversely, development tools themselves are sometimes already provided with such controls. In that case the integration can be arranged so that the analysis is not executed unnecessarily.

The invention will be explained in more detail below with reference to the enclosed figures.
Figure 1 shows the successive program and compiler steps which, according to the state of the art, are executed during the programming process.
Figure 2 shows the steps which are executed during a production process in accordance with the invention.
Figure 3A illustrates, for a typically chosen parameter, the way in which the set of measured values for this parameter can be used for analyzing the program development process.
Figure 3B illustrates a diagram similar to figure 3A, but for another selected parameter.
Figures 4A and 4B illustrate the advantage of a fine-grained parameter set collected in accordance with the invention.
Figure 1 shows, as common practice, the steps which are executed in sequence in a program development process. In step 1 a number of lines of code are programmed, in step 2 the constructed program part is compiled, in step 3 more lines of code are produced, in step 4 a new compilation is executed, etc. This production process finally ends in a final compilation, at which point it can be determined that the program functions and no longer contains compilation (syntax) errors. Sometimes during this program production process certain parameters of the program under development are measured and possibly filed. In that case the measurements might relate to the number of lines of code the program contains at that point, the number of modules of the program, the number of functions built into the program, the number of memory accesses required by the program, etc. This parameter measurement step, indicated with number 5 in figure 1, is however completely uncoupled from the actual program development process (steps 1, 2, 3, 4, ...). In fact the execution of this parameter measurement procedure interrupts the actual production process and is experienced as such by the programmer. In practice this results in the parameter measuring step 5 being executed only a limited number of times, or possibly not at all, so that little or no insight into the program production process is obtained. The consequences of this are explained in the following figures, especially figures 4A and 4B.
Figure 2 illustrates a program production process in accordance with the invention. During this production process the same sequence of program and compile actions appears; in figure 2 the following steps are indicated: program step 1, compile step 2, program step 3 and compile step 4. In accordance with the invention a connection has now been established between the compiler, used for the execution of the various compile steps, and the parameter measuring program, used to quantify the parameters of the program under development. As outlined in figure 2, this connection results in the parameters of the program part being quantified every time that program part is compiled. More specifically, figure 2 shows that synchronised with compilation step 2 the parameter measurement step 5a is executed, synchronised with compilation step 4 the parameter measurement step 5b is executed, etc. By realising a connection between the compilation process and the parameter measurement process it is ensured that the parameters of the program under development are quantified whenever possible, while on the other hand the programmer is not forced to take any exceptional action and is not disturbed during his normal activities.
In the example of figure 2 it is indicated that the compilation process and the parameter measurement process take place simultaneously. In practice the programs concerned will be executed in immediate sequence. What is essential, however, is that by invoking a compilation the programmer simultaneously instructs the system to quantify the parameters of the program concerned. The method as indicated in figure 2 can be applied with existing compilers. Alternatively, with many existing compilers it will be possible to integrate the parameter measurement program with the compiler program.
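As an illustration of this linkage, the following sketch wraps the compiler invocation so that the parameter measurement is executed in immediate sequence with every compile, as in figure 2; the compiler command and the measured parameters are illustrative assumptions.

```python
import subprocess, sys, time

def measure_parameters(path):
    # Illustrative measurement: here only the number of lines of code.
    with open(path) as f:
        return {"timestamp": time.time(), "lines_of_code": len(f.read().splitlines())}

def compile_and_measure(path):
    result = subprocess.call(["cc", "-c", path])  # the normal compile step
    metrics = measure_parameters(path)            # measurement in immediate sequence
    print(metrics)                                # a real tool would file this per component
    return result

if __name__ == "__main__":
    sys.exit(compile_and_measure(sys.argv[1]))
```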
With the following figures 3A and 3B we try to explain in which way the structural parameters can be used in an analysis process; we also indicate why it is important to store the ever-changing values of the parameters at relatively short time intervals, and especially with an automated procedure, in order to get a correct overview of the progress of the production process.
To simplify the example we assume an analysis program which only looks at the number of lines of code of a program at a certain instant and at the number of error messages generated at the end of each compile step.
In figure 3A the number of lines of code is given as a function of time. At the beginning of the production process no lines have been produced yet and the number of lines of code is 0. Gradually this number grows over time. In general the number of lines of code will initially rise relatively fast, while towards the end of the production process the growth decreases to 0. In the final stage the number of lines of code will be nearly constant and only the last error messages are being eliminated. In figure 3B the number of error messages is shown, as generated by the compiler after every compile step. This number of error messages can vary, and will in general fluctuate heavily in the part of figure 3A where the steepest gradient occurs. Towards the end of the production process the number of errors decreases to 0, resulting in a program without compile errors.
In the example of figures 3A and 3B we assume a relatively high number of measuring points during the program production process. This large number of measuring points results, in both figure 3A and figure 3B, in a relatively accurately measured curve. If however, as is common in the state of the art, a number of parameters are measured only a few times during the complete production process, a completely wrong impression of the production process may arise. An example of this is illustrated by figures 4A and 4B.
Figure 4A shows the curve which results if only three values of the defined parameter are available, in this case again the number of lines of code. The three values, indicated in figure 4A with a, b and c, suggest at first sight a normal curve. If however the production process concerned had been inspected in more detail, by measuring the value of the parameter concerned at a larger number of points in time, a completely different curve might have appeared, as shown in figure 4B.
In figure 4B the same three values a, b and c are indicated, with in between a large number of other values measured for the parameter concerned (the number of lines of code). The resulting graph is much more detailed. It now appears clearly that there was no continuous growth in the number of lines of code; in fact lines of code were deleted twice, so that at those moments the total number of lines decreased. The programmer apparently had written a number of lines of code which were deleted afterwards. For a manager supervising this programmer, curve 4A does not deliver any information, whereas the curve in figure 4B clearly indicates that at defined moments during the program development process irregularities occurred to which attention ought to be drawn.
Figures 4A and 4B show that the method as used in the state of the art has disadvantages. These disadvantages can be eliminated by means of the invention.
Summarizing, it is pointed out that by linking the analysis to the invocation of a development program (tool) belonging to a certain phase of the development cycle, as taught by this invention, the following advantages are gained:
The analysis is executed at a point in time when the system developer has made a significant change in the system component (otherwise he or she would not require an action of the development program or tool).
The system developer does not need to carry out extra actions. Immediately after the required action of the development tool the system developer can receive feedback from the analysis, by which his or her attention is drawn to the various aspects of the work, and he can react in response. This will influence his work in a positive way. (For instance he or she can be warned after the compilation that the number of test paths in the new version has increased sharply. Perhaps he or she now sees that further development in this direction will result in a module that is difficult to test; an adapted strategy can then be considered at an early stage.)
As the development programs belong to a defined development phase, information concerning that process phase can be stored automatically in the '.met' files, such as the time spent in that process phase, the number of keystrokes etc. In this way accurate management overviews can be obtained automatically per process phase. This is a major advantage, as nowadays the information for such overviews is collected completely by hand, with little accuracy. Such overviews are fundamental for productivity analysis.
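A minimal sketch of such automatic per-phase recording; the record layout shown is an illustrative assumption, since the description above only fixes that the file carries the component's name with the extension '.met'.

```python
import json, time

def record_phase(component_name, phase, parameters):
    # Append a time-stamped record for the given process phase to the
    # component's '.met' file (JSON-lines layout is an invented choice).
    record = {"time": time.strftime("%Y-%m-%d %H:%M:%S"),
              "phase": phase, **parameters}
    with open(component_name + ".met", "a") as f:
        f.write(json.dumps(record) + "\n")

record_phase("scheduler", "coding", {"lines_of_code": 412, "time_spent_s": 1800})
```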

Claims
1. Method for analyzing a computer supported software development and/or maintenance process, according to which method at least once during the process one or more parameters, keywords or other meaningful information relevant to describe a certain criterion of the product being developed or maintained are measured or collected and compared with similar parameters, keywords or other meaningful information collected earlier in the development and/or maintenance process in order to obtain insight about the process and the resulting products, characterized in that at time intervals which are relatively short compared with the total development or maintenance period, said parameters, keywords or other meaningful information are measured or collected to be able to facilitate time-series analysis of the said parameters, keywords or other meaningful information.
2. Method according to claim 1, characterized in that the method supports Quality Control techniques as used in other engineering or industrial disciplines to be applied in the process of 'software engineering' (the process of creating and maintaining software or computer programs), with special attention to Quality Assurance standards like ISO 900x and Total Quality Management, CUSUM charts, SHEWHART charts, outlier detection, factor analysis etc.
3. Method for analyzing a computer supported software development and/or maintenance process, according to which method at least once during the process one or more parameters of the product being developed or maintained are measured and compared with similar parameters collected earlier in the production process in order to obtain insight about the development process and the resulting product, characterized in that measures have been taken to link the execution of an analysis program, which collects or measures such parameters, with one or more actions which are essential for the development process and which can be executed by the computer, so that every time the product being developed is processed by such an action, at almost the same time the mentioned parameters are measured and their values are stored in a memory.
4. Method for analyzing a computer supported software development and/or maintenance process, according to which method at least once during the process one or more parameters of the product being developed or maintained are measured in order to obtain insight about the development and/or maintenance process and the resulting product, characterized in that measures have been taken to describe the structure of the software components in such a way that similar software structures from different notations or development techniques, which may come from different development stages in the development lifecycle, are handled in a uniform way, to be able to compare the structure-based criteria of related system parts along the lifecycle or across different development methods.
5. Method according to claim 4, characterized in that the definition of the structures is done with a decomposition technique based on the Fenton/Whitty theory.
6. Method according to claim 5, characterized in that the definition of the structures is done based on or with a generic technique for describing structural information, such as techniques based on flowgraphs, Petri Nets, hierarchical decomposition techniques, techniques using a descriptor method and techniques based on the theory of Fenton/Whitty and the so-called 'Primes'.
7. Method according to one of the claims 4-6, characterized in that the definition of the structures is accompanied with a classification scheme to allow further levels of notation details to be handled in a uniform way, which may result in a near complete transformed description of the original system component.
8. Method according to one of the claims 4-7, characterized in that the original system component cannot be reconstructed from the resulting description of the transformation of the original system component.
9. Method according to claim 7, characterized in that the definition of the structures is done using a 'Meta model' of the original notation, taking into account the semantic meaning of the information from the mentioned notation in a more or less generic way.
10. Method according to one of the preceding claims, characterized in that with the mentioned parameter values also the time and date on which the values of the respective parameters are measured are stored in a memory.
11. Method according to one of the preceding claims 1-9, characterized in that with the mentioned parameter values also the time at which the values of the respective parameters are measured and the actual duration of the recent activity are stored in a memory.
12. Method according to one of the preceding claims, characterized in that the analysis program with which the mentioned parameters are measured is connected to or integrated with the aforementioned actions, which are essential for the development process and are executed by the computer.
13. Method according to one of the preceding claims, characterized in that the analysis program with which the mentioned parameters are measured is invoked by a mechanism belonging to the system environment, such as a batch, pipe or script facility, an encapsulation facility, a message passing service, a real time clock program to let the analysis program take action at certain time intervals or at day-end etc., or initiated from an action from another part of the computer system etc.
14. Method according to claim 13, characterized in that said linkage mechanism is adapted to the actual development process, whereby the linkage mechanism may have learning capabilities.
15. A method according to one of the preceding claims, characterized in that said action, with which the analysis program is connected or integrated, consists of the execution of one or more of the following kinds of computer programs (tools) or utilities (small actions which are often part of the system environment):
- a drawing program for schematic diagrams
- a syntax control program
- an entity relationship diagramming tool
- a consistency checking and control program
- an integrity control program
- a configuration (version) management tool
- a wordprocessor
- a program for the building of cross reference lists
- a program for the normalization of data structures
- a file and database management program
- a compiler
- a test generator
- a 'make'-utility
16. Method according to one of the preceding claims, characterized in that the analysis program, with which the parameters are measured, is a program to define metrics.
17. Method according to one of the preceding claims, characterized in that immediately after the parameters are measured with the support of the analysis program, a signal perceptible to the human senses is generated, so that the person(s) who is (are) occupied with the development of the product concerned get(s) an immediate feedback signal.
18. Method according to one of the preceding claims, characterized in that immediately after the measuring of the parameters with the support of the analysis program a message to another part of the computer system is transferred in response on which further actions in the development process can be directed.
19. Method according to one of the preceding claims, characterized in that immediately after the measuring of the parameters with the support of the analysis program a message to a separate memory is transferred in response on which further actions in the development process can be directed.
20. Method according to one of the preceding claims, characterized in that, related to the system components of the program (system) under development, a separate parameter file is created in which the values of the measured parameters are stored.
21. Method according to claim 20, characterized in that the separate parameter files have been given such a name that the relation between this parameter file and the concerning system component is immediately clear for everyone who has access to the concerned file.
22. Method according to claim 20 or 21, characterized in that the separate parameter files have been given the names of the system components, only with the extension '.met'.
23. Method according to claim 20, 21 or 22, characterized in that the separate parameter files have been given a predefined format such that the relation between the filed parameter values and the corresponding development tool from which the values were measured can be reconstructed.
24. Method according to claim 23, characterized in that use is made of a predefined code for each existing development tool or a description of the file format in a unique defined way from which the meant relation can be obtained.
25. Method according to one of the preceding claims, characterized in that the analysis program on the basis of previously determined criteria determines whether an unexpected sharp change has appeared in one or more of the parameter values and, on the appearance of such an unexpected sharp change, also checks the values of the remaining parameters in order to determine whether an unexpected sharp change has appeared in other parameters at approximately the same time.
26. Method according to one of the preceding claims, characterized in that the analysis program on the basis of previously determined criteria determines whether an unexpected sharp change has appeared in one or more of the parameter values and, on the appearance of such an unexpected sharp change, a message is transferred to another part of the computer system in response to which further actions in the development process can be directed, especially to attach warnings to system components which have interfaces with or are dependent on the system component showing the sudden change in one or more parameters, or to check the values of parameters of other system components in order to determine whether an unexpected sharp change has occurred at (almost) the same time.
27. Method according to one of the preceding claims, characterized in that from the mentioned parameter values graphical representations are made.
28. Method according to one of the preceding claims, characterized in that from the mentioned parameter values interactive graphical representations are made, for instance on a computer screen, with the possibility to choose any two parameters to be represented in one plot.
29. Method according to one of the preceding claims, characterized in that from the mentioned parameter values and/or graphical information exceptions are detected.
30. Method according to one of the preceding claims, characterized in that on the mentioned parameter values and graphical representations statistical and/or mathematical techniques are supported by the analysis tool, especially techniques like time series analysis, averages, means, medians, boxplots, smoothing, frequency transformations, Pareto analysis, principal component analysis, regression techniques, logarithmic transformations etc.
31. Method according to one of the preceding claims, characterized in that from the mentioned parameter values and graphical representations statistical and/or mathematical techniques are supported by the analysis tool to reduce the amount of data to be filed in order to keep historic information as effective as possible in such a way as to support predefined interpretation applications of the stored historic data.
32. Method according to claim 31, characterized in that the techniques may also be used to enhance the value of the original information and hence are not applied only to reduce the data.
33. Method according to one of the preceding claims, characterized in that the mentioned parameter values and/or graphical information are output on paper or on other computer media.
34. Method according to one of the preceding claims, characterized in that from the mentioned parameter values and/or graphical information selectable reports can be output on paper or on other computer media.
35. Method according to one of the preceding claims, characterized in that the analysis program can accept manual input of information and parameter values and/or the import of parameter values and/or graphical information from other external programs or tools and can combine this information with parameter values and/or graphical information from itself.
36. Method according to one of the preceding claims, characterized in that the analysis program can accept manual input of information and parameter values and/or the import of parameter values and/or graphical information from other external programs or tools with the interactive (graphical) support to assist the import process with immediate combining the interactively selected information with parameter values and/or graphical information from itself.
37. Method according to one of the preceding claims, characterized in that the analysis program supports aggregation of information from fine grained system details, as low as for instance the statement level of coded components, upward to information concerning modules, sub-systems and eventually the system as a whole.
38. Method according to one of the preceding claims, characterized in that the analysis program supports aggregation of information from fine grained process details, as low as for instance the programmer level, upward to information concerning groups, projects, departments etc.
39. Method according to one of the preceding claims, characterized in that the analysis program supports aggregation of information from fine grained details about activities, as low as for instance the programmer level, upward to information concerning groups of activities etc.
40. Method according to one of the preceding claims, characterized in that the aggregation of information for the different information views can be logically combined to support those views.
41. Method according to one of the preceding claims, characterized in that the information model allows for the system components to be handled as objects with a life time and history, if wanted including birth and death.
42. Method according to one of the preceding claims, characterized in that the information is collected at design phase or even earlier phases of the development process.
43. Method according to one of the preceding claims, characterized in that the integration with CASE tools is realized with making use of Meta Models or Meta Meta Models, to allow for semantic interfacing.
44. Method according to one of the preceding claims, characterized in that the integration with CASE tools is realized with making use of transformation rules.
45. Method according to one of the preceding claims, characterized in that the integration with CASE tools is realized by making use of CASE environments and their techniques for synchronizing tools.
46. Method according to one of the preceding claims, characterized in that the Goal-Question paradigm from Basili supported with automated tools is used and that a learning environment is created to support the evolutionary decomposition from Goals into measurable information.
PCT/NL1991/000224 1990-11-09 1991-11-08 Method for analysis and prediction of a software program development process WO1992009034A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL9002460 1990-11-09
NL9002460A NL9002460A (en) 1990-11-09 1990-11-09 METHOD FOR ANALYZING AND FORECASTING A PROGRAM DEVELOPMENT PROCESS

Publications (1)

Publication Number Publication Date
WO1992009034A1 true WO1992009034A1 (en) 1992-05-29

Family

ID=19857959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL1991/000224 WO1992009034A1 (en) 1990-11-09 1991-11-08 Method for analysis and prediction of a software program development process

Country Status (2)

Country Link
NL (1) NL9002460A (en)
WO (1) WO1992009034A1 (en)

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BRITISH TELECOM TECHNOLOGY JOURNAL vol. 9, no. 2, April 1991, GB pages 39 - 46; A.D. PENGELLY ET AL.: 'Software structure and cost management - the Esprit II COSMOS project' see page 41, right column, line 17 - page 42, right column, line 18; figures 2,3 *
DIGEST OF PAPERS FROM COMPCON SPRING 1987 February 1987, SAN FRANCISCO US pages 236 - 241; GAIL E. KAISER ET AL.: 'Intelligent Assistance without Artificial Intelligence' see page 236, right column, line 25 - line 35 *
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING. vol. 14, no. 6, June 1988, NEW YORK US pages 758 - 772; VICTOR R. BASILI ET AL.: 'The TAME Project: Towards Improvement-Oriented Software Environments' see page 764, right column, line 12 - page 768, left column, line 39; figures 1,2 *
PROCEEDINGS OF THE EIGHTH ANNUAL NATIONAL CONFERENCE ON ADA TECHNOLOGY March 1990, FORT MONMOUTH US pages 525 - 532; BRIAN L. CHAPPELL ET AL.: 'Measurement of Ada throughout the software development life cycle' see the whole document *
RESEARCH DISCLOSURE RD 303065 10 July 1989, 'Development metrics facility for software engineering' see the whole document *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL1010123C2 (en) * 1997-12-16 1999-06-17 Freddy De Ryck Method for a time estimate for software development.
BE1011617A3 (en) * 1997-12-16 1999-11-09 Rijck Freddy De Method for estimating time for software development
EP1540471A1 (en) * 2002-06-28 2005-06-15 Abb As Revalidation of a compiler for safety control
CN101473301A (en) * 2006-06-13 2009-07-01 微软公司 Iterative static and dynamic software analysis
CN101473301B (en) * 2006-06-13 2018-12-11 微软技术许可有限责任公司 Iterative static and dynamic software analysis
US10806709B2 (en) 2012-06-27 2020-10-20 G2B Pharma, Inc. Intranasal formulation of epinephrine for the treatment of anaphylaxis
US10614093B2 (en) 2016-12-07 2020-04-07 Tata Consultancy Services Limited Method and system for creating an instance model
CN110737985A (en) * 2019-10-15 2020-01-31 上海联影医疗科技有限公司 Running data verification method and device, computer equipment and readable storage medium

Also Published As

Publication number Publication date
NL9002460A (en) 1992-06-01

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA FI JP NO US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase