CA2267476A1 - Time management & task completion & prediction apparatus - Google Patents

Time management & task completion & prediction apparatus

Info

Publication number
CA2267476A1
CA2267476A1
Authority
CA
Canada
Prior art keywords
project
task
data
completion
alerts
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002267476A
Other languages
French (fr)
Inventor
Stephen Kaufer
Marco A. Emrich
Arunachallam S. Sivakumar
Thomas Palka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Compuware Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CA 2233359 external-priority patent/CA2233359A1/en
Application filed by Individual filed Critical Individual
Priority to CA002267476A priority Critical patent/CA2267476A1/en
Publication of CA2267476A1 publication Critical patent/CA2267476A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A computer software application in the form of a software management and task completion and prediction apparatus by which project completion can be ascertained and management of a project can be maintained with a high efficiency and accuracy.
The software application is a web-based application which enables users to proactively manage and accurately predict strategic software development and deliverables. This application and delivery management system comprises distributed data collectors, an application server and a browser interface. The data collectors automatically gather data already being generated by various tools within the organization, such as scheduling, defect tracking, requirements management and software quality tools. This data is constantly being collected and fed into the application server, thereby providing objective and updated information. New tools can be easily added without disrupting operations. The data collected is fed into the application server, which is the brain of the apparatus. The application server analyzes the data collected by the data collectors to generate a statistically significant probability curve. This curve is then compared to the original planned schedule of product delivery to determine if the project is meeting its targets. Based upon this comparison, the software application predicts a probable delivery date based upon the various inputs and variables from the development process. In addition, the application server also generates early warning alerts, as needed and indicated. At such time as the software apparatus identifies a potential problem, the alerts automatically inform the user, so as to mitigate a crisis and assist with resolution of the problem. The alerts are communicated to the designated user by e-mail.

Description

TIME MANAGEMENT & TASK COMPLETION & PREDICTION SOFTWARE
This is a non-provisional utility patent application claiming benefit of the filing date of U.S. provisional application serial no. 60/079,819, filed March 30, 1999, and titled COMPUTER SOFTWARE DEVELOPMENT TIME MANAGEMENT AND TASK COMPLETION & PREDICTION APPARATUS.
Appendix:
Attached herein is a copy of the computer code of the invention disclosed herein.
Parts I and II disclose the source code, including the analyses for the scheduling aspect and Monte Carlo simulation. Part III is the source code pertaining to the resource history and alert calculation using the collected data.
Field of the Invention:
The present invention generally relates to the automated collection and processing of project completion data relating to computer software development, and more particularly to the manipulation of the accumulated data through a Monte Carlo simulation in order to analyze and determine projected project completion based upon current information made available through accumulated data extracted through project monitoring and user input, if desired. Both projected project completion and projected potential development difficulties are ascertained in order to alert and/or apprise system users of current project status.

Summary of the Invention:
In accordance with the present invention, the foregoing objectives are met by a data extraction and manipulation process which employs a mathematical algorithm and knowledge base of rules to provide a prediction of anticipated project completion with enhanced accuracy. Data collection is accomplished through the use of data collectors specifically designed to extract data from tools utilized to accumulate data. Data is accumulated by the data collectors and then transferred to a computer system functioning as the application server. The data collectors are automated to gather data generated by the tools within the system organization. Accordingly, the collected data is stored on an application server for evaluation of the schedule for projected completion and project status, including estimated cost, available functionality, and quality levels.
The data extraction and manipulation process further meets the foregoing objectives by employing the aforementioned mathematical algorithm and applying the aforementioned knowledge base to provide the ability to alert system users to potential difficulties which may affect project completion prior to the occurrence of such difficulties, in order for system users to be able to make necessary adjustments to minimize or prevent such projected difficulties from occurring. In a preferred embodiment of the invention, data collectors are used to obtain and accept project information stored on various computers on a corporate network, wherein the data collected is then subject to an algorithm, including the use of a Monte Carlo simulation, for prediction of project completion and scheduled delivery.
As a further embodiment of the invention, the present invention establishes a probability curve for the projected schedule of completion for each task defined as a concrete deliverable unit of work. The Monte Carlo simulation is conducted over a plurality of iterations.
All simulations are run on the same set of data, which has been collected from all of the tools.
The data obtained from the simulations is fitted on a probability curve for assigning a confidence level for project completion in view of the given tasks required to complete the project. The system may be utilized to generate simulations for additional projects which may be concurrently under production. Accordingly, the simulation algorithm may be applied to one specific project or multiple projects, or the user may specify for an analysis to be conducted on one or more specific projects.
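For illustration only, the Monte Carlo prediction described above can be sketched in Python. The triangular task estimates, function names, and iteration count are assumptions, not details taken from the disclosure; the actual apparatus draws its inputs from the collected tool data.

```python
import random

def simulate_completion(task_estimates, iterations=10_000):
    """Monte Carlo sketch: each task is an (optimistic, likely, pessimistic)
    duration in days. Returns the sorted total-duration samples across all
    iterations, i.e. the empirical probability curve for project completion."""
    totals = []
    for _ in range(iterations):
        # Sample each task's duration from a triangular distribution and
        # sum them to obtain one possible project outcome.
        total = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in task_estimates)
        totals.append(total)
    return sorted(totals)

def confidence_date(samples, confidence):
    """Duration by which `confidence` fraction of the simulated runs finish."""
    index = min(int(confidence * len(samples)), len(samples) - 1)
    return samples[index]
```

For example, `confidence_date(samples, 0.9)` would give the duration within which 90% of simulated runs complete, which corresponds to assigning a confidence level from the fitted probability curve.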
Further features of the present invention, and the advantages offered thereby, are explained hereinafter with reference to specific examples illustrated in the accompanying drawings.
Brief Description of the Drawings:
Fig. 1 is a schematic illustration of the system;
Fig. 2 is an illustration of a sample directory structure of the software;
Fig. 3 is a sample dialogue box for gathering Task Information; and
Fig. 4 is a sample dialogue box for gathering Milestone Information.
Detailed Description of the Preferred Embodiments and Best Mode of the Invention:
To facilitate an understanding of its underlying concepts, the present invention is explained with reference to a schematic diagram and a sample directory layout structure. It will be appreciated, however, that the principles of the present invention are not limited to the examples presented in the illustrations. Rather, they are applicable to any situation in which it is desirable to provide time, cost and quality management of a project and predict completion of the project for delivery.
As noted previously, the project prediction is primarily based upon extracting data from multiple independent tools and applying a knowledge base of accepted project management metrics as well as a mathematical algorithm to the data. As will become apparent from the following description, a significant aspect of the use of the technique is that it allows users to concentrate their energy on a timely delivery of a product within the parameters of ongoing development while minimizing interference with the development process. The resulting delivery prediction is defined by the input of development conditions in the form of accumulated data.
An explanation of the manner in which mathematical algorithms are applied to the system to predict project completion will first be described in a general manner with reference to a schematic illustration of a system layout, as is illustrated in Fig. 1. This figure shows a schematic illustration 10 of the system architecture. The system is comprised of a system server computer 20, further comprising an application server 22, a database 24, a web browser (not shown), and data collectors (not shown). The system further comprises Remote Managers 30, which are programs that run on workstations from which data is collected, and communication tools (commonly referred to as Web Browsers) which are used to access alerts and reports generated by the system.
The application server 22 is the central component of the system. The primary responsibilities of the application server 22 are: storing project information in the database 24, accepting project information from the data collectors, and generating periodic reports and alerts regarding the status of the project. At such time as reports and alerts are generated by the system, they are presented to the user by means of a web browser 42.
Accordingly, the application server 22 is the central component of the system, with its primary tasks and functions being to spawn data collectors, manage the database, analyze database information and generate reports, and to generate alerts based upon deviation of the projected time for product delivery from system users' desired time for product delivery.
Data collectors are stand-alone applications whose function is to extract data from third party tools, such as defect tracking tools. A list of available data collectors is stored in an index file in a COLLECT directory created for each project under scrutiny.
Data collectors collect the data by various means and write the output to a temporary text file on a client workstation 40. After data collection is completed, the text file is transferred from the client workstation 40 to the application server 22, which then parses the text file of collected information and updates the project database 24 with the data from the individual tool. Data collectors are capable of running on computers distributed throughout an organization, therefore enabling geographically distributed projects to be centrally monitored from a single web site and a single application server.
An explanation of the system installation and layout will be described in a general manner to illustrate and best explain the organization and functions of the system. The layout will provide further information as to how the data is collected, managed, evaluated and presented to the user. A sample directory tree structure which contains product files is illustrated in Fig. 2. For purposes of clarification, where terms appear in full capital lettering they are intended to represent the name given to a specific directory within the sample directory tree.
However, the scope of the disclosure should not be limited to the titles of the directories presented. A person skilled in the art may appreciate that alternative directory designations may be utilized.
The directory layout comprises a root directory containing five directories (hereinafter "first level directories") within one level of the root directory. The first level directories are, in order of illustration in Fig. 2: 1) an operating system directory which shall be named according to the operating system of the computer system being utilized, 2) a CONFIGURATION directory where configuration related hypertext markup language (hereinafter "html") files are located, 3) a PROJECTS directory, 4) a COMMON directory, and 5) a TEMPLATE directory.
For purposes of further explanation and clarification of the system layout and installation in the preferred embodiment, the PROJECTS directory, the COMMON directory, and the TEMPLATE directory shall be explored in detail. These three directories function to manage and organize the collection and presentation of data pertaining to the given projects being monitored on the system. Note that it is equally reasonable to store this information in a database instead of within various files on the file system. The significant aspect of how this information is stored is that it enables different projects to present different views on the data being collected and analyzed, and that it allows organizations to share how the data is collected, analyzed and presented.
The TEMPLATE directory contains the virgin files that are utilized to form each new project's system data collection and observation function as new projects are added to the system. The TEMPLATE directory contains the original file index for each new project and a tool for adding new projects to the system for placement into and creation of new sub-PROJECTS directories for each new project. As a new project is added to the system through the use of the files contained in the TEMPLATE directory, the directory structure and files are utilized as a template within a new project's directory located within the PROJECTS directory, as shall be further described below, and thereby provide the basis for the initial formation of a new project's discrete directory and file structure. Accordingly, the TEMPLATE directory functions as the template for the discrete project directories and files required for each new project as a project is added to the system.
The PROJECTS directory contains the sub-directories and specific files for each project, and for computation and reporting for all projects combined by way of the ENTERPRISE sub-directory within the PROJECTS directory, as shall be more fully explained below. The addition of each new project, as it is configured within the system through the files located within the TEMPLATE directory, results in the creation by the system of a new sub-directory and project files and indexes within the PROJECTS directory for utilization by the discrete project.
In this way the original files located within the TEMPLATE directory and the TEMPLATE directory's sub-directory structure are reproduced, with the addition of the newly entered project configuration information discussed above, to provide for the segregation of each project's specific files within that project's individual directory. As illustrated in Fig. 2 by way of example through the use of the "Skinner" project, a project's sub-directory structure will mimic that of the TEMPLATE directory with the addition of an ARCHIVES sub-directory, which is utilized to house the reports generated by the system for the specific project, further utilizing a sub-directory structure aimed at maintaining report information by time/date of report generation through the further utilization of the TEMPLATE/REPORT sub-directory and file structure for status, quality, schedule and functionality reports for the specific project.
Accordingly, the PROJECTS directory houses the discrete directory and sub-directory structure, files, and indexes required for each discrete project, and as new projects are added to the system, new project specific sub-directory structures are created to house the necessary project sub-directories, files, and indexes. This TEMPLATE directory structure allows end user organizations to customize how the information about their organization is collected, maintained and presented.
Customization occurs as end users manipulate the files in this TEMPLATE directory to present additional content, as well as perform additional data collection or analysis. Since this customization is done to the TEMPLATE directory, all projects being managed at this site will share these common changes.
Within the PROJECTS directory, the system maintains the ENTERPRISE directory, where computation and reporting occurs for all projects as a group.
As with other project sub-directories, there exist COMMON, ANALYZE and REPORTS sub-directories, each containing their index file(s). Accordingly, the actions (analysis and reporting) represented by this directory follow the execution (data collection, analysis and reporting) for all projects in the PROJECTS directory, and the directory houses the necessary cumulative files, indexes, and information for reporting on all system projects.
The system further comprises a COMMON directory which has a list of common files that are integrated throughout the product. In this way the COMMON directory contains files common to all of the projects on the system. This directory is where product-wide and company-wide alerts and/or reports are stored, and where multiple projects can share information, programs and reports. In this way, the system provides one template file for each data collector per third party tool, and one location for commonly used report and analysis programs. Accordingly, the COMMON directory functions as the location for files and templates common to all projects.
While configuring the project, the user is prompted to select the set of tools from each of the different categories. For each of the tools selected, the information from the collector template file is accessed and the system user is prompted for values for the various fields. Once all values are obtained, the data collector file is written into a specific project's COLLECT directory. The field values which are specified at the time of configuration are then available as environment variables when the data is collected for that specific project.
The names of these environment variables can be specified in the template files.
Furthermore, as can be gleaned from the above discussion of the TEMPLATE directory, the system is not limited to the initial projects provided with and programmed into the system; rather, new projects may be added on an as-needed basis by means of a web browser. In order to add a new project to the system, the user will be prompted for information so that the system is configured to look for data from the newly added project. First, the system prompts the user to identify which data sources are available. For configuration of each data source, the user will initially have to specify a host name, a path to the data file or database, available options where appropriate, and other tool specific information that may be necessary. New data collector files are then generated by the system based upon the inputted information, wherein such files may be manually modified at a later time, if necessary. Second, the system prompts the user to indicate message delivery locations (i.e. e-mail addresses) of system users (i.e. team members) who should receive alerts from the system as to project completion and delivery schedule matters generated for the new project. Each team member whose e-mail address is listed in the system for the indicated project will receive all of the alerts generated. Alerts are thereby utilized by the system as a mechanism for notifying team members when there are potential problems with the project, such as when deadlines have not been met or task completion is projected to delay the given project completion schedule.
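As a rough sketch of the alert delivery described above, the following Python fragment composes one alert message and addresses it to every listed team member. The sender address, subject format, and SMTP host are illustrative assumptions; the disclosure states only that alerts are delivered by e-mail.

```python
import smtplib
from email.message import EmailMessage

def build_alert(recipients, project, message):
    """Compose the alert sent to every team member listed for the project."""
    msg = EmailMessage()
    msg["Subject"] = f"[{project}] project alert"   # assumed subject format
    msg["From"] = "alerts@example.com"              # assumed sender address
    msg["To"] = ", ".join(recipients)
    msg.set_content(message)
    return msg

def send_alert(msg, smtp_host="localhost"):
    """Deliver the composed alert; the SMTP host is a deployment detail."""
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```

For instance, an alert for a missed deadline on the "Skinner" example project would be built with `build_alert(team_addresses, "Skinner", "Deadline missed: ...")` and handed to `send_alert`.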
Finally, the system prompts the user to indicate a schedule for collection of data from the databases so that the system may determine progress of the project and generate any necessary alerts on the system user's desired schedule. All data is collected on the same schedule.
Accordingly, the system provides the above discussed outline for adding additional projects or data sources to the system on an as-needed basis.
The basic usage model for the application is to have a project administrator identify the key data sources for the applications being monitored. Additional key constraints and variables are entered to help the system model application development for this organization.
Once set up, the system will automatically perform a three-step process on a regular (e.g. nightly) basis. Step 1 is to automatically collect data from various tools used during the software development process. Step 2 is to analyze this data using the knowledge base of common software practices and mathematical algorithms to predict cost, timeliness and quality of the software being developed. Step 3 is to present this information to end users.
As illustrated by Fig. 2 and indicated above, three sub-directories are present within the COMMON and TEMPLATE directories. These directories are the REPORT, ANALYZE and COLLECT directories, which correspond with the three-step process outlined above. The REPORT directory contains the index file which lists the paths of the html files used for report generation. Each html file is designed and utilized to produce a page of a report. The html files can be located in the TEMPLATE/REPORT directory, the COMMON/REPORT directory, or each project specific REPORT directory, depending on whether they are shared between projects. The report index files have a simple structure:
<report-type><when-to-generate-key>: <html filename to process>
Preferably, the values for entry into the <when-to-generate-key> field are anytime, daily, weekly, or monthly. When no value is entered in the <when-to-generate-key> field, the default value of anytime is utilized for report generation. Each html file may contain pre-processing directives that describe which html files to include or what commands to execute.
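A consumer of the report index entries above might be sketched as follows. The sketch assumes the fields are colon-separated and applies the stated default of "anytime" when the key is missing or unrecognized; the exact delimiter handling in the actual system is not specified in the disclosure.

```python
VALID_KEYS = {"anytime", "daily", "weekly", "monthly"}

def parse_report_index(lines):
    """Parse report index entries of the assumed form
    <report-type>:<when-to-generate-key>:<html filename>,
    defaulting a missing when-to-generate-key to "anytime"."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the index file
        parts = [p.strip() for p in line.split(":")]
        if len(parts) == 3:
            rtype, when, html = parts
        else:
            # Key omitted entirely: report type and filename only.
            rtype, html = parts[0], parts[-1]
            when = "anytime"
        if when not in VALID_KEYS:
            when = "anytime"  # stated default
        entries.append((rtype, when, html))
    return entries
```

Each resulting `(report-type, when, html)` tuple would then drive generation of one report page from the named html file.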
The ANALYZE sub-directory of the COMMON directory contains one file per analyzer program. Analyzers are programs that interface with the application server and database, analyze the data, and write their results to a set of tables. These results can then be used and manipulated to generate the desired reports and create projections of project completion or issue relevant alert messages to indicated system users. Similar to the REPORTS directory, the ANALYZE sub-directory of each project directory contains an index file which lists the analyzer programs which will be utilized for the given project. The format is namely:
<type-of-analysis>: <frequency-to-compute>: <name-of-analyzer-file>
The analyzer programs are initiated in the sequence in which they appear in the index list.
Options available for the <type-of-analysis> field are "alert" and "analysis," which provides the ability to partition the analysis processing function between analysis which may be utilized for report generation and analysis which may be utilized for alert e-mail initiation. The <frequency-to-compute> field indicates the frequency with which to perform the analysis calculation selected. As with the index in the REPORTS directory, options for this field are: anytime, daily, weekly and monthly. For example, the index may be configured so that computation of a project's projected schedule of completion may be calculated anytime, and the index may be configured so that alerts are provided daily for alerts that can be generated on a daily basis and weekly for alerts that require analysis based upon weekly data. In a preferred embodiment, the default system configuration provides for the execution of the analyze files at 1:00 a.m.; however, the scope of the invention should not be limited to this time for execution of the analyze files, as this default can be altered as needed. As such, the Application Server 22 reviews the analyze index file(s), which directs the program to the analyze files, and runs any analyze programs marked as daily that have not been run within the last twenty-four hours. A similar schedule is followed for weekly and monthly reports. However, when anytime is the selected interval for the analyze files, the analyze files will be executed at the time any data is collected.
Accordingly, whether a system user initiates an analysis for report generation or an analysis for alert generation, the system user has the option of selecting the time frame for the analysis cycle.
Furthermore, the mathematical algorithms developed for and contained within the system are embedded in one or more analyze files. However, for purposes of utilizing the mathematical algorithms, the time frame for initiating the algorithms is set at "anytime."
Therefore, the prediction component of the system remains constantly updated as data flows into the system. Accordingly, alerts and analysis may be efficiently provided in any of the available time frames selected by the user, as the prediction component is constantly updated.
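The frequency handling described above — "anytime" entries running whenever data arrives, and daily, weekly and monthly entries running once their interval has elapsed — can be sketched as a small scheduling check. The interval table and function signature are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Assumed mapping from index frequency keywords to minimum re-run intervals.
INTERVALS = {
    "daily": timedelta(hours=24),
    "weekly": timedelta(weeks=1),
    "monthly": timedelta(days=30),
}

def should_run(frequency, last_run, now=None, data_just_collected=False):
    """Decide whether an analyze program is due. "anytime" entries run
    whenever new data is collected; periodic entries run when their
    interval has elapsed since the last run (or have never run)."""
    now = now or datetime.now()
    if frequency == "anytime":
        return data_just_collected
    if last_run is None:
        return True
    return now - last_run >= INTERVALS[frequency]
```

At the default 1:00 a.m. pass, the server would evaluate `should_run` for each index entry and launch the analyzers that return true, in index order.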
Finally, each project's COLLECT directory contains one project specific data collector file per third party tool subject to data collection for the respective project. The project specific data collector files are created during the configuration of a project. It is important to note that a project may have multiple schedules, in which case there potentially could be multiple data collector files for the project. In such a case, the application server assembles a list of data collectors available based upon the data collector files in the COLLECT directory for the specific project, and thereby retrieves the necessary information on how to obtain the necessary data through the data collector developed for collection of data from a specific third party tool.
The system's ability to access a myriad of data sources through the use of various data collectors specifically designed to access a data source, and the ability and methods of acquiring and analyzing data, are some of the novel areas within the system embodied in the present invention. More specifically, the system comprises data collectors containing data collector programs for collecting the data from the data sources available on the user's system by various means and writing the output to a temporary file. For example, the system may access the following data sources if available: project schedules maintained in Microsoft Project; defect reports stored in PVCS Tracker; requirements stored in Microsoft Word; and testing information stored in Acqua/SQM. However, the system is in no way limited to the aforementioned software tools. The system is designed to accommodate and utilize alternative sources for gathering information on the state of the project and progress of design development and testing.
After the collection of data is completed by the data collector, the file is sent to the Application Server 22 and parsed to extract the data collected by the data collector. The data file generated by the data collector is a simple text file utilizing keywords to specify what data is being transferred to the application server. The keywords are illustrated in the following table:
Keyword                  Description
tablename=NAME           Start data for table NAME
/table                   End data for current table
column name=NAME         Declares a column NAME
row                      Start a new data row
/row                     End current row
options mtime=<string>   Sets some options, i.e. last modification time
options mode=replace     Tells application server to delete the records in the existing tables, i.e. a full overwrite. History is still maintained.
options mode=update      Tells application server to consider the tables as incremental updates.
For each data table that is written to the database file, the file contains the table command followed by a set of column commands. Each column command declares the set of columns that will appear in the given file. Each row provides values for exactly as many columns as were declared. The values for the rows are then provided, each one starting with a line containing the row command, with one field per line, followed by the end row command. The row and end row block may be repeated as many times as there are rows. The table ends with the end table command. Fields having embedded newlines may be specified by using a backslash at the end of the line to continue the field. The "options mode=replace" command instructs the system to delete the existing table contents. This replace command is used when parsing requirement documents.
With requirement documents the only way to know that a requirement has been deleted is when it is no longer in the set being transferred. By utilizing the "options mode=replace" command, the system ensures that new requirements completely replace old ones in the given table.
However, prior to initiation of the "options mode=replace" command, past records must be written to the application server's database to ensure retention of the data in the system for analysis, report and alert generating purposes. Accordingly, the use of the "options mode=replace" command is intended to facilitate data collection through data collectors designed to extract data from third party tools that do not maintain a notion of a transaction record.
The "options mode=update" command is utilized to point to a location in the database intended to indicate the division between data already processed and update data which has not yet been processed. The "options mode=update" command is used by data collectors where there exist transaction records that serve as a marker, so that the transfer of the entire set of records need not be repeated. Accordingly, in this mode rows will not be deleted as in the "options mode=replace" command, as deletion is not required since the system has a means for determining which data requires processing.
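A minimal writer for the keyword-based transfer file described above might look like the following. Only the keywords from the table are taken from the disclosure; the function name, argument shapes, and escaping details are assumptions.

```python
def write_transfer_file(path, tablename, columns, rows, mode="update", mtime=None):
    """Emit the keyword-based text file a data collector hands to the
    application server: table, column, row and options keywords."""
    with open(path, "w") as f:
        f.write(f"tablename={tablename}\n")
        for col in columns:
            f.write(f"column name={col}\n")
        if mtime is not None:
            f.write(f"options mtime={mtime}\n")
        f.write(f"options mode={mode}\n")  # "replace" or "update"
        for values in rows:
            # Each row must supply exactly as many fields as declared columns.
            assert len(values) == len(columns)
            f.write("row\n")
            for value in values:
                # One field per line; embedded newlines are continued with
                # a trailing backslash, per the format description.
                f.write(str(value).replace("\n", "\\\n") + "\n")
            f.write("/row\n")
        f.write("/table\n")
```

A defect-tracking collector, for example, might call `write_transfer_file(tmp_path, "defects", ["id", "status"], rows, mode="update")` before shipping the file to the application server.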
Status fields in tables can refer to values by using a lookup table command.
The lookup command will allow the data collector to store lookup keys instead of the actual values, thereby normalizing the database and preventing the data collector from hard coding the lookup keys. For example lookup keys may take the following form:
lookup <colname><tablename><reference_col_name><reference_col_value>
If the table does not contain a row where the <reference_col_name> field has the specified value, then such a row is added. In the new row, the <colname> field has the value of max(<colname>)+1. Additionally, null fields may be specified by the NULL or NA keyword.
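The lookup resolution just described (find the matching row, or add one keyed max(<colname>)+1) can be sketched as follows. The table and column names are illustrative, and SQLite stands in for whatever ODBC-accessible database is in use:

```python
import sqlite3

def resolve_lookup(conn, colname, tablename, ref_col, ref_value):
    """Return the lookup key for ref_value in the named lookup table,
    inserting a new row with key max(colname)+1 if none exists.
    A sketch of the lookup command's behavior, not the product code."""
    cur = conn.execute(
        f"SELECT {colname} FROM {tablename} WHERE {ref_col} = ?",
        (ref_value,))
    row = cur.fetchone()
    if row is not None:
        return row[0]
    # No matching row: allocate the next key and add the row.
    new_key = conn.execute(
        f"SELECT COALESCE(MAX({colname}), 0) + 1 FROM {tablename}"
    ).fetchone()[0]
    conn.execute(
        f"INSERT INTO {tablename} ({colname}, {ref_col}) VALUES (?, ?)",
        (new_key, ref_value))
    return new_key
```

Repeated lookups of the same value return the same key, which is what keeps the lookup keys out of the data collector itself.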
The text file provides merely a mapping to the actual tables in the database, as opposed to actual values. Accordingly, it is the responsibility of the users who wish to write their own data collectors to ensure that the named tables contain the columns and that they are of the right type.
The data collectors may further be programmed with the following flags:

Flag            Description
/out:FILENAME   Specifies file where output is to be sent
/mtime:STRING   Specifies a string encoding the last time when the data was collected
FILES           Options understood by the data collector, specified in the data collector file

The mtime option is a string encoding the last time when the data was collected, and is used by the data collector to recognize that the data has not changed and does not need to be collected again. In addition, the following environment variables can be set for the data collector:
Variable    Description
ACQ_PRJ_ID  Integer, project id
ACQ_SET_ID  Integer, set by the dcol_var setting for requirements
ACQ_SCH_ID  Integer, set by the dcol_var setting for Microsoft Project schedules

Accordingly, the mtime option will be set and read by the data collector and stored in the database by the Application Server 22. Using this method to transfer the data from the data collector to the application server allows data collectors to be written independently of the application server and specific schemas. Data collectors are therefore easier to write and maintain, and can be updated independently of the entire application server for increased reliability and customizability.
To further illustrate the functions of the data collectors, the following description will be provided in relation to Microsoft Project. However, as mentioned above, the scope of the disclosure should not be limited to the third party project management tools discussed herein.
Alternative project management software tools may be utilized, thereby providing comparable functions. In the example provided, information gathered by the data collectors is extracted from a Microsoft Project .mpp file through OLE (object linking and embedding) automation. All extracted data is stored in a relational database using ODBC (open database connectivity) thereby allowing the user to use any supported underlying database.
For Microsoft Project, in particular, it is helpful to have additional information about the project schedules. For this function, a utility called the Microsoft Project data gatherer is provided. It is written in Visual Basic and uses OLE Automation to access information from the .mpp files created by Microsoft Project. The Microsoft Project data gatherer is a utility designed to enhance information contained within Microsoft Project schedules.
This data gatherer is available as a standalone application for both reading and writing a .mpp file. The Microsoft Project data gatherer proceeds through all tasks and milestones and prompts the user for information. For example, Fig. 3 is illustrative of a sample screen for accumulating task information. There are eight sections for task breakdown, including: design/coding, unit testing, bug fixing, test creation, test execution, contingency and other. The activity breakdown is used to calculate alerts and otherwise analyze project development. Unit testing assists in conveying how many tests were created per unit of time spent testing, bug fixing assists in understanding how long it takes to resolve outstanding defects, and contingency is the information a user enters to describe padding in a task, or an entire task devoted to padding, which helps maintain the accuracy of the prediction algorithm. Furthermore, the confidence levels inputted by the user are implemented by the system to assist with the prediction algorithm.
When this program first reads a .mpp file, it will open a screen and ask what percentages to query from the file. In the example illustrated in Fig. 3, the user customized the percentage query at 80% and 90%. Accordingly, the information provided on this screen is used as the seeded information on the Milestone Information Screen, see Fig. 4. This information is generally entered once, and then may be kept current as the project progresses.
The data collector for Microsoft Project works like any other data collector in that it runs at its scheduled collection time and sends back a text file with tables of data in the manner described above. OLE automation is used to automatically extract the required information from Microsoft Project.
The embodied system is further configured to manage multiple files configured from multiple third party project management software through the use of multiple data collectors. Each part of a system user's organization may have separate project files requiring different data collectors to properly extract data. In such cases, the system is configured to start one data collector for each of the related project files. The schedule table stores information corresponding to known information about a particular schedule. A sample schedule table is as follows:
Column         Type     Description
sch_id         Integer  Primary key
sch_name       String
sch_comment    String
sch_filename   String
sch_author_id  Integer  foreign key: names(sched_author)
sch_start      Date     Filled in by data collector
sch_end        Date     Filled in by data collector
sch_ask_prob1  Integer  Filled in by data collector, with user requested input
sch_ask_prob2  Integer  Filled in by data collector, with user requested input

The schedule identification (sch_id) is then generated by the database.
A schedule baselines history table stores baseline information about a given schedule. This data is not stored in the schedules table. For example, the baselines history table is completed by the data collector if the baseline information is present in the Microsoft Project schedule. In the event a user is utilizing Microsoft Project software and the information regarding the baseline is present in the schedule, the information for the baseline is completed by the data collector file. The schedule baseline history tables are updated when a baseline changes, i.e., new rows are added to this table reflecting new baseline dates. This allows a user to track changes to the baseline dates over time and, if necessary, to calculate a deviation from the original date. In addition, the system comprises a schedule baseline table which is similar in appearance and configuration to the schedule baseline history table, except that the schedule baseline table omits the date. This specific schedule baseline table allows for comparison of current start and end dates with the current baseline start and end dates for the project.
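The deviation calculation enabled by a baselines history table can be sketched as follows. The row layout (baseline creation date, baseline start, baseline end) is assumed from the sample tables, and the function name is invented for illustration:

```python
from datetime import date

def baseline_slip_days(history):
    """Compute slip from the original baseline, given rows from a
    baselines history table as (baseline_date, bl_start, bl_end)
    tuples.  Slip is the difference between the most recent and the
    earliest baseline end dates, in days."""
    ordered = sorted(history, key=lambda r: r[0])  # oldest baseline first
    original_end = ordered[0][2]
    current_end = ordered[-1][2]
    return (current_end - original_end).days
```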
Accordingly, in addition to tasks, the overall prediction date for each schedule is also stored in a schedule predictions table.

The data collector further obtains information from the project identified in the layout of the directory of the system. An example of a project table is as follows:
Column              Type     Description
prj_id              Integer  Primary key
prj_name            String   Primary key
prj_comment         String
prj_warning_days    Integer  Number of days late that causes a yellow light
prj_emergency_days  Integer  Number of days before a red light
prj_probability_1   Integer  First probability for prediction
prj_probability_2   Integer  Second probability for prediction
prj_manager_id      Integer  foreign key: names(prj_manager)
prj_status_id       Integer  pending/active/retired

The term "prj" in the above displayed table refers to the project.
Furthermore, the system comprises a projects schedules table for storing information necessary for referencing a system project to all schedules used in that project. The data in the projects schedules table is separated in order to allow different projects to share schedules. An example of when sharing of schedules becomes necessary is when different projects within a company are dependent on each other. Accordingly, the projects schedules table allows the system to collect information for that project only once, and is illustrated in the following table:
Column  Type     Description
prj_id  Integer  foreign key: projects (prj_id)
sch_id  Integer  foreign key: schedules (sch_id)

The prj_id field stores the project identification. This value is different for every project in the system.
The system further comprises a table labeled "collectors table". The collectors table is used for storing data collector information so that all mtimes are stored in the same table for easy look-up. A sample of the collectors table is as follows:
Column       Type    Description
prj_id       Integer foreign key: projects (prj_id)
dcol_tool    String  tool name
dcol_name    String  unique name for this data collector, supplied by the user
dcol_host    String  hostname to run on
dcol_source  String  source from which to collect data
dcol_mtime   String  last modification time

The most significant fields in the collectors table are the name and the mtime. The Application Server 22 queries this table to determine if it needs to run the tool again.
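The mtime check that the application server performs against the collectors table might look like the following sketch. The schema follows the sample table above, with SQLite standing in for the actual database, and the function names are illustrative:

```python
import sqlite3

def should_collect(conn, dcol_name, source_mtime):
    """Decide whether a data collector needs to run, by comparing the
    source's modification-time string against the dcol_mtime stored
    in the collectors table."""
    row = conn.execute(
        "SELECT dcol_mtime FROM collectors WHERE dcol_name = ?",
        (dcol_name,)).fetchone()
    # Collect when the collector is unknown or the source has changed.
    return row is None or row[0] != source_mtime

def record_collection(conn, dcol_name, source_mtime):
    """Store the new mtime after a successful collection run."""
    conn.execute(
        "UPDATE collectors SET dcol_mtime = ? WHERE dcol_name = ?",
        (source_mtime, dcol_name))
```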
In a further embodiment, there is a project predictions table, which summarizes the high level predictions every time they are calculated. In addition to the above discussed projects tables, the system further comprises task tables. The task tables store information about each task. A sample of the tasks table is as follows:

Column                  Type          Description
sch_id                  Integer       foreign key: schedules (sch_id)
task_id                 Integer       Comes from Task.UniqueID; unique per prj_id and sch_id
task_row                Integer       Row in Microsoft Project schedule (task.id)
task_level              Integer       Level in the hierarchy
task_wbs                String        In the form of 1.2.3.4 and shows the location of the task in the outline
task_name               String
task_start              Date          When should it start?
task_end                Date          When should it end?
task_duration           Long          Minutes
task_status_id          Integer       foreign key: task_status (status_id)
task_act_start          Date          Actual start date
task_act_end            Date          Actual end date
task_act_duration       Long          Minutes
task_progress           Integer       From 0 to 100
task_milestone_type_id  Integer       foreign key: task_type (type_id); identifies whether a milestone
task_is_summary         Boolean       Same as summary property (identifies a summary task)
task_is_critical        Boolean       Critical path task
task                    User Defined  User defined columns set during configuration of project

The prj_id field matches tasks with projects, and the prj_id and task_id fields act as a unique index. The task_id field is a unique identifier for each task within a given schedule. A task_confidence field and value is set by the user to specify the confidence level that the user has for the task being completed on time. Within the task table is a task_progress field which stores a percentage indicating how much of the task has been completed so far. For example, the percentage completion data will be provided by Microsoft Project as the illustrated third party project management software. The task table further comprises a task_uniqueId field which is an identifier for the project, so that if the user changes a task the table will not be affected.
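As a simple illustration of how the task_duration and task_progress fields could be combined, a remaining-effort estimate might be computed as follows. This calculation is an assumption for illustration, not one specified in the text:

```python
def remaining_minutes(task_duration, task_progress):
    """Estimate remaining effort from the tasks table fields:
    task_duration is total minutes, task_progress is percent
    complete (0-100).  An illustrative calculation only."""
    return task_duration * (100 - task_progress) // 100
```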
Another table in the system is a task_type table which stores information that identifies whether a task is a milestone, and if affirmative, what level milestone has been achieved. A sample of a task_type table is as follows:
task_type_id (Integer)  task_type (String)
1                       Not_A_Milestone
2                       Minor_Milestone
3                       Major_Milestone

There is also a task_baselines_history table for storing the baseline information for all tasks performed, i.e. the history of baseline dates per task. It is stored separately because it can be populated from Microsoft Project in our illustration or from the system itself.
This table is updated for every task when a baseline changes. For example, when a baseline changes, new rows are added to the table reflecting a new baseline date. This permits a user to monitor changes to the baseline over time and to calculate a slip from the original date, if so desired. A sample of the task_baselines_history table is as follows:

Column            Type     Description
sch_id            integer  foreign key: schedules (sch_id)
task_id           integer  foreign key: tasks (task_id)
task_bl_start     date     baseline start date
task_bl_end       date     baseline end date
task_date         date     date when this baseline was created
task_bl_duration  long     minutes

For fast access to the status of the task and most recent developments, the system comprises a task_baselines table. In addition, for summary tasks and milestones, the system comprises a task_predictions table for storing both 80% and 90% confidence dates as computed by Acqua. A sample of the task_predictions table is as follows:
Column             Type     Description
sch_id             integer  foreign key: schedules (sch_id)
task_id            integer  foreign key: tasks (task_id)
taskpr_start_1     date
taskpr_end_1       date
taskpr_duration_1  long
taskpr_start_2     date
taskpr_end_2       date
taskpr_duration_2  long

In the above referenced table, the sch_id and task_id fields together uniquely identify a task from the tasks table. The taskpr_*_1 fields correspond to predictions given the user's first percentage confidence estimate, and the taskpr_*_2 fields store the second percentage confidence estimate, where * represents either "start", "end" or "duration." Accordingly, the tasks table stores information about each individual task.
In addition, the system further comprises task dependencies tables defined as task_deps. These tables store information about the relationships between tasks. For example, if task 2 is required to be completed before task 7 can begin, such information would be stored in this table. This information is required to determine what tasks need to be completed before a milestone can be achieved. In the system, there are various types of dependencies that can be established between different tasks. An example of the task_deps table and how the dependencies are stored is illustrated in the following example:
Column        Type     Description
prj_id        integer  foreign key: projects (prj_id)
sch_id        integer  foreign key: schedules (sch_id)
task_id       integer  foreign key: tasks (task_id)
pred_sch_id   integer  foreign key: schedules (sch_id)
pred_task_id  integer  foreign key: tasks (task_id)
dep_type      string   foreign key: dep_types (dep_id)

It is important to note that, since a task may have more than one dependency, the table may contain multiple rows for the same prj_id, sch_id and task_id fields.
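Determining which tasks must be completed before a milestone, as described above, amounts to walking the predecessor links in the task_deps table. A sketch, with the project and schedule keys omitted for brevity:

```python
from collections import defaultdict

def tasks_before_milestone(deps, milestone_id):
    """Collect every task that must complete before the given
    milestone, by transitively following the predecessor links.
    `deps` is a list of (task_id, pred_task_id) pairs, an
    illustrative simplification of the task_deps rows."""
    preds = defaultdict(set)
    for task_id, pred_id in deps:
        preds[task_id].add(pred_id)
    required, stack = set(), [milestone_id]
    while stack:
        for pred in preds[stack.pop()]:
            if pred not in required:
                required.add(pred)
                stack.append(pred)
    return required
```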
Additionally, there are several other tables utilized in the multiple scheduling.
For example, there is a resource table for storing a list of all resources available in each project.

A person can be listed as a resource on two different projects, but appears with a different resource identifier each time. An assignments table stores a list of all assignments for each task in the project. The assignments are stored in a separate table to allow a particular task to be assigned to more than one resource. Finally, the system comprises a task status table which is a lookup table for the status of different tasks.
Through the use of the tables, schedule changes are tracked over time to assist with predicting schedule dates and various trends in product development. Accordingly, the following information is versioned: resource schedule history, task history and completion dates. For each resource, a resource history table stores the frequency with which a resource does not meet a schedule. In addition, the resource history table calculates the percentage of completion for the resource. A task history table stores information regarding the frequency with which task dates are changed and by how much. The date changes commonly reflect a time slip or functionality change, and this information enables the user to identify how the project deadlines changed over time. This table detects individual task and summary task changes. If the user desires to ascertain historical information on baseline date changes for tasks, or historical information on phase changes, the user would access the task baseline history table and the phase table/task history table, respectively. Finally, the completion table is completed by the analysis program that executes the Monte Carlo simulation. This table contains all of the data resulting from the Monte Carlo simulation. Accordingly, each time the program is executed, the old information for this project is deleted.
Furthermore, the software quality management capability of the Application Server 22 functions as a critical component of the application delivery management system. The software quality management capability ensures that all of the manual and automated testing activities are coordinated and archived. The software quality management (SQM) data collector allows collecting of all information pertaining to the testing effort and provides statistics of how many tests were added, removed, how many are passing, and how many are failing. This information is then used in the schedule prediction and trend graphs. The SQM
data collector generates a suite identifier field for its internal database storage.
Furthermore, the SQM data collector comprises a project suites table for storing information between suites and projects, a suite table for storing information about each suite, and a tests table for storing information about all test cases in the current suite. The tests table stores classes and cases. A test class result is pass if all of its results pass; otherwise its result is fail. At such time as a test is deleted from a suite, it is removed from the tests table. The SQM data collector further comprises a lookup table for the state of test codes, a lookup table for the result codes, and a requirements table for storing the associations between requirements and tests, thereby allowing the determination of which requirements do not have tests, how many pass and how many fail. A suite can have one set of requirements associated with it. Accordingly, a single test suite can only verify one set of requirements as defined by a set of requirements documents.
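The pass/fail roll-up and requirements coverage described above can be sketched as follows; the data shapes and function names are illustrative, not the product's schema:

```python
def class_result(case_results):
    """A test class result is pass only if every case result passes."""
    return "pass" if all(r == "pass" for r in case_results) else "fail"

def requirement_status(req_tests, results):
    """Classify each requirement as untested, verified, or failed,
    using the requirements-to-tests associations described above.
    req_tests maps requirement -> list of test ids; results maps
    test id -> "pass" or "fail"."""
    status = {}
    for req, tests in req_tests.items():
        if not tests:
            status[req] = "untested"
        elif all(results.get(t) == "pass" for t in tests):
            status[req] = "verified"
        else:
            status[req] = "failed"
    return status
```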
Furthermore, the SQM data collector comprises the following tables: a defects table for storing the association between defects and tests; a jobs table for storing information about the test jobs; an outcomes table for storing information about the outcomes of tests in those jobs; and a trends table for storing computed trend information about tests.
Based upon all of the data collected from the SQM data collectors and the above-defined generated tables, software quality management reports are generated. The following reports are available based upon these data collectors: percentage of requirements which have been verified, failed or untested; current test results for a given project of pass, fail or unexecutable; an outcome history for a given test case; a trend of new tests added versus the number of verified requirements; and a trend of time spent testing versus the number of verified requirements. Accordingly, the SQM data collectors ensure that all of the manual and automated testing activities are coordinated.
After data is collected from the identified tools in the system, and prior to having reports generated, analyzer applications are invoked to analyze the data collected and write their reports back to the database. As discussed earlier, alerts are one form of analyzer. They are mechanisms for notifying team members when there are potential problems with the project.
Team members are notified in two ways: through monitoring reports accessible through a web browser and by e-mail alerts.
Alerts are stored in the database of the server machine 20. An alert_defn table stores the alert definitions, as follows:

Column        Type     Description
prj_id        integer  foreign key: projects (prj_id)
alert_id      integer  unique per project
alert_name    string   name of the alert
sample_msg    string   sample message to help the configuration screen
alert_date    date     date when this alert was last calculated (i.e. the application server ran the last analyze program)
email_users   string   space separated list of users to email
category      string   user-defined name of the category in which this alert is placed
filter        string   future string that we can selectively filter alerts on
alert_active  integer  true means this alert is active

In addition, the database further comprises an alert threshold table which holds threshold and filter values for specific alerts. It is important to note that there may be more than one threshold per alert. The alert threshold table is configured as follows:
Column             Type     Description
prj_id             integer  foreign key: projects (prj_id)
alert_id           integer  foreign key: alert_defn (alert_id)
threshold_name     string   name to use to look up the value in the tuple (prj_id, alert_id, threshold_id)
threshold_type_id  long     foreign key: threshold_types (threshold_type_id)
threshold_prompt   string   message to use to prompt for a new threshold value
threshold_value    string   value, stored as a string, for this threshold for this alert

Furthermore, the threshold_type table is a look-up table for the threshold_type_id field and is configured as follows:

Column             Type    Description
threshold_type_id  long    primary key
threshold_type     string

Alerts that want to find the threshold variables in order to compute whether to generate the alert must open the alert definitions table in the database and find the alert_id field that matches their alert name. Then they can scan the alert threshold table looking for a match of the alert_id field and threshold_name field, at which point they will extract the threshold value field.
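The two-step threshold lookup just described might be implemented as follows. Table and column names follow the samples above, with SQLite standing in for the actual database:

```python
import sqlite3

def get_threshold(conn, prj_id, alert_name, threshold_name):
    """Look up a threshold value for an alert: first resolve the
    alert_id by name in alert_defn, then match alert_id and
    threshold_name in the alert_threshold table."""
    row = conn.execute(
        "SELECT alert_id FROM alert_defn WHERE prj_id = ? AND alert_name = ?",
        (prj_id, alert_name)).fetchone()
    if row is None:
        return None
    row = conn.execute(
        "SELECT threshold_value FROM alert_threshold "
        "WHERE prj_id = ? AND alert_id = ? AND threshold_name = ?",
        (prj_id, row[0], threshold_name)).fetchone()
    # Threshold values are stored as strings, per the table above.
    return row[0] if row else None
```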
Each of the alerts comprises threshold levels for which new values of the threshold can be specified, wherein any new modification will take effect the next time the alerts are computed. The following alerts, for which there is no hierarchy, are currently available on the system, although this listing of alerts should not be considered limiting in that additional alerts can be added by users on an as-needed basis. These alerts represent common software practices, or common warning signs of software development projects getting into trouble.
Applied as a whole, these alerts represent a knowledge base of software development rules that serve to warn project managers of potential pitfalls before, during or after they have occurred. The benefit is obvious for early warnings of potential pitfalls, as project managers can take early corrective action. Timely presentation of accurate information during critical stages is also helpful for project managers. The title of the alert is generally descriptive of the message it is conveying to the user:

Defects Rising Too Fast
Not Enough New Defects
Not Enough New Tests Created
Test Creation Warning
Not Enough Tests Were Executed
Test Passing Rate
Critical Path Warning
Missing Milestone
Project Data Out of Date
Project Warning
Schedule Warning
Project Confidence Decreasing
Schedule Confidence Decreasing
Project Prediction Date Slipping
Schedule Prediction Date Slipping
Schedule Changed
Project Changed
Tasks Late
Summary Task Late
Summary Task Slip Predicted
Milestone Late
Milestone Slip Predicted
No Baseline Schedule
Requirements are not Linked to Development Tasks and/or Milestones
Task Slip Predicted
Schedule Slip Predicted
Project Slip Predicted
Project is Late
Schedule is Late
Requirements not Linked to Testing Tasks
Low Testing Effort

The alerts are computed at such time as the analyze file is executed. The analyze file will create rows in an alert table. After creation of the rows in the alert table, an alert page reads the rows to generate the proper set of alerts to display to the user.
Programs which calculate alerts are specified in the analyze directory, and can also generate pages or additional tables in the database, similar to other analyze directory programs. Furthermore, each alert program in the analyze directory can generate more than one alert, but the alerts are all of the same type. For example, if one alert program looks for late schedules, it can generate two alert messages as long as the alerts comprise the same identification.
The alerts are statically generated and presented to the user in a single html page.
Furthermore, each alert begins with an explanation of the alert, and may link to pages with more information, such as a link to customize, a link to receive help on this specific alert, and a link to delete the alert. The link for more information takes the user to the page that the alert generation program found was the primary trigger for the alert. The help link takes the user to the on-line help, which explains more information about the alert generated, what the alert message is trying to convey to the user, what the user can do about the alert generated, and the definitions of the threshold values. The customize link takes the user to a page enabling them to customize the threshold values. The delete link deletes the alert.
An alerts_generated table stores all alerts generated in the database of the server machine 20. A sample of an alerts_generated table is as follows:

Column      Type     Description
prj_id      integer  foreign key: projects (prj_id)
alert_id    integer  foreign key: alert_defn (alert_id)
message     string   this string may be long, contain html tags, tables...
priority    integer
alert_date  date
mail_sent   boolean

The alert programs add rows to the alerts_generated table. The same alert name is presented as that utilized in the analyze directory. The message and priority fields are set by the alert program, and the date, mail_sent and alert_id fields are automatically set. In addition, the user is not limited to alerts present in the system. A user can add new alerts by writing the alert program and creating new analyze files in the specific project's ANALYZE directory, adding an entry for them in the ANALYZE directory's index file.
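An alert program adding a row to the alerts_generated table might look like the following sketch. In the real system the date, mail sent and alert identification fields are set automatically; here they are filled in explicitly for illustration, with SQLite standing in for the actual database:

```python
import sqlite3

def generate_alert(conn, prj_id, alert_name, message, priority):
    """Add a row to the alerts_generated table, as an alert program in
    the analyze directory would.  The alert_id is resolved from the
    alert name via alert_defn; alert_date and mail_sent stand in for
    the fields the real system sets automatically."""
    alert_id = conn.execute(
        "SELECT alert_id FROM alert_defn WHERE prj_id = ? AND alert_name = ?",
        (prj_id, alert_name)).fetchone()[0]
    conn.execute(
        "INSERT INTO alerts_generated "
        "(prj_id, alert_id, message, priority, alert_date, mail_sent) "
        "VALUES (?, ?, ?, ?, date('now'), 0)",
        (prj_id, alert_id, message, priority))
```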
The results produced by the analyzers may then be utilized by the reports generators. The user may specify when reports should be generated. The user further has the option to force the regeneration of a given report, which may be useful after documents in the database are updated and the user wants to see the new confidence level predictions for meeting project delivery deadlines. Reports are generated by processing html files stored in a directory accessible to the Application Server 22, and may include calls to programs to generate some additional html or images. Furthermore, these html files may include applets, frames, or any other type of standard web-based logic.
There are several customization options available for the reports generated. For example, users can modify the web site for their project by editing these html files directly. Furthermore, users can add new links by editing the html pages directly, or they can add new pages by adding lines to the index file in the PROJECT/REPORTS directory. Finally, users can embed commands to generate graphs in existing or new html pages. Accordingly, the user has a plurality of options available for customizing the reports generated by the application.
The user further has the option to force data to be collected interactively.
In such circumstances, the user is prompted to designate which data sources to update and which reports to generate. Regardless of how or when the data is collected and the reports are generated, all of the reports are catalogued and archived by default once per day, immediately before the daily data collection and report generation occurs. Reports that are regenerated are placed into a LATEST directory, updating the previous information stored in this directory. However, the previous sets of reports can always be retrieved from the archive. Accordingly, the Application Server 22 first collects all of the data, then runs all of the analyzers, and finally generates all of the reports for a given project, wherein all of the reports are archived and accessible from other reports.
In a further embodiment of the invention, the invention has the ability to predict when projects will be completed. The system gathers information from a wide variety of tools to deliver a realistic assessment of project delivery, and a more realistic assessment than any one tool can provide individually. During the initial setup, as disclosed earlier, the user is requested to supply information that assists the system in producing accurate predictions.
In a preferred embodiment, the system uses the following data sources to provide prediction of project delivery, although additional data sources may be utilized with access to a related data collector:
Project schedules maintained in Microsoft Project; Defect reports stored in PVCS Tracker;
Requirements stored in Microsoft Word; and Testing information stored in the Acqua test management system. Based upon the initial information provided by the user and the information obtained by the data collectors, the prediction process incorporates a Monte Carlo algorithm to predict accuracy of project completion.
The prediction algorithm is described as follows: First, the algorithm builds a probability curve for each task. The curve is set upon at least three data points the user has entered for each task through the project gatherer utility, although fewer data points may be utilized. According to the probabilities calculated, a duration for this task is assessed. Second, the system analyzes the individual resources assigned to the task based upon the history of the resource in completing their tasks in a timely manner. Based upon the analysis, the duration is either increased or decreased according to the history collected and the percentage of timely task completions. Accordingly, this outlined analysis is applied over approximately one hundred iterations to establish a probability curve for this one task, which will then be used as input into the overall Monte Carlo simulation. In this novel manner, multiple sources of data are applied to each individual task in order to determine its likelihood (probability) of completing over a range of times. Applying additional sources of information other than those mentioned above is easily imagined and can be applied equally well to the overall algorithm.
Following this, a three step analysis is undertaken. First, a trend analysis is implemented. The trend analysis generates additional potential delivery delays by analyzing various trends, such as incoming defects, number of tests created, and number of regressions found. Second, an alert analysis is performed. Alerts that have been triggered might mean a delivery problem. In the event that factors causing the alert have not previously been taken into consideration in prior steps of the prediction algorithm, the duration of the prediction must be altered for early or delayed prediction of product delivery. Finally, a comparison of additional time required in the trend and alert analysis against contingency factors is attained, and time for project completion is increased or decreased from the total duration, as deemed appropriate.
The basic prediction calculation works by defining a probability map for each task, as described above, and then adding the additional tasks that represent potential delays.
The Monte Carlo simulation technique is then applied over a plurality of iterations so as to establish a refined probability curve for the entire schedule. The current date is then fitted to the curve and a confidence level is assigned to the task completion. In addition, the user can also access and review 80% and 90% confidence dates (or a different set of confidence factors of their choosing) for project completion.
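A stripped-down version of the simulation described above can be sketched in Python as follows. A triangular distribution stands in for the per-task probability curve built from the user's estimates, tasks are summed as if strictly sequential, and the resource-history and trend/alert adjustments are omitted; all of these are simplifying assumptions for illustration:

```python
import random

def simulate_schedule(tasks, iterations=1000, confidence=(0.8, 0.9)):
    """Monte Carlo sketch of the schedule prediction.  Each task is a
    (low, likely, high) duration triple; a triangular distribution is
    an assumed stand-in for the per-task probability curve, and tasks
    are treated as strictly sequential rather than as a dependency
    network."""
    totals = []
    for _ in range(iterations):
        totals.append(sum(random.triangular(lo, hi, likely)
                          for lo, likely, hi in tasks))
    totals.sort()
    # Fit the requested confidence levels to the simulated curve.
    return {c: totals[int(c * (iterations - 1))] for c in confidence}
```

With real data, the returned durations would be added to the schedule start date to produce the 80% and 90% confidence completion dates, and the current date would be fitted to the sorted curve to report a confidence level.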
In a preferred embodiment, the system comprises the ability to manage, monitor and predict completion of projects based upon a multi-faceted, multi-data, multi-tooled approach. As such, the prediction software may operate to evaluate completion of a plurality of tasks in order to accurately predict completion of an entire project comprising a plurality of tasks and/or deliverable units. The system evaluates information gathered from an array of software tools to deliver a realistic assessment of project completion.
The presently disclosed embodiments are therefore considered in all respects illustrative and not restrictive. The scope of the invention is indicated by the appended claims, rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (19)

1. A computer-readable medium having stored thereon computer executable instructions for predicting project completion from tools within a network, comprising the following steps:
collecting data from a plurality of project development tools within the network;
managing the collected data;
invoking an application for analyzing the collected data; and generating reports from said analyzed data.
2. The method of claim 1, further comprising presenting periodic alerts to a client workstation.
3. The method of claim 2, wherein said alerts being statically generated and presented to team members.
4. The method of claim 3, wherein presentation of alerts and reports to team members being through a web browser in hypertext markup language file format.
5. The method of claim 4, wherein the hypertext markup language file may be selected from the group consisting of applets, frames, or any other standard web-based logic.
6. The method of claim 3, further comprising executing a software quality management data collector for ensuring coordination and archiving of automated and manual testing activities within the network.
7. The method of claim 6, wherein said software quality management data collector collecting information for generating a schedule prediction for product completion.
8. The method of claim 1, further comprising the ability to add new data sources within the network, comprising the following steps:
identifying to the network system data sources available;
indicating message delivery locations of team members; and indicating a schedule for collection of data.
9. The method of claim 8, wherein said identification step comprising specifying a host name, a path to information sources that need to be collected, and specific tool information.
10. The method of claim 1, further comprising a prediction analysis algorithm for determining product completion, comprising the following steps:

prompting a user to assign confidence levels of project completion for a specific task;
building a probability curve based upon said user prompted confidence levels for said task;
analyzing individual resources within said network and histories of the resources for timely completing of said task;
adjusting duration for said task completion from said histories and percentage of timely task completion;
applying said analysis over a plurality of iterations for establishing a probability curve for said task.
11. The method of claim 10, further comprising applying said prediction analysis to the following steps:
implementing a trend analysis;
performing an alert analysis;
performing a comparison of additional time required in said trend and alert analyses against contingency factors; and adjusting time for project completion based upon said analysis.
12. The method of claim 11, wherein said comparison step further comprising defining a probability map for each task and adding additional tasks representing potential delay in project completion.
13. The method of claim 12, wherein said prediction analysis step further comprising establishing a refined probability curve for scheduled completion of a plurality of tasks within said project by applying a Monte Carlo simulation technique to a probability map for each task over a plurality of iterations.
14. The method of claim 13, wherein said prediction analysis further comprising fitting a current date to said refined probability curve and assigning a confidence level for task completion.
15. The method of claim 14, wherein said prediction analysis being performed for a plurality of tasks comprising the project.
16. A computer system for predicting project completion, comprising:
a network of interconnected devices, further comprising:
an application server comprising a database for storing project information;
a remote manager from which data is collected; and a client workstation;
said devices operating a computer readable medium containing a computer program for managing project completion, said program comprising:
data collector programs for collecting data from a plurality of tools operated during project development;

programs for analyzing collected data;
report generating programs for generating reports and alerts from collected and analyzed data; and a web browser for presenting the reports and alerts to a client workstation.
17. The computer system of claim 16, further comprising a software quality management data collector program for coordinating and archiving testing activities within the network.
18. The computer system of claim 17, wherein a schedule prediction for project completion is produced from said software quality management data collector program.
19. The computer system of claim 16, wherein said computer program for managing project completion further comprising a computer readable medium containing a prediction analysis computer program for determining project completion, said program comprising:
a tool for prompting a user to assign confidence levels for project completion of a task;
a curve building program for building a probability curve based upon said user assigned confidence levels for said task;
a resource analyzer program for analyzing individual resources within said system for determining timely completion of said task;
an adjustment tool for changing the duration of task completion based upon the task histories and percentage of timely task completion; and a program for applying said resource analyzer program over a plurality of iterations for establishing a probability curve for said task.
CA002267476A 1998-03-30 1999-03-30 Time management & task completion & prediction apparatus Abandoned CA2267476A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002267476A CA2267476A1 (en) 1998-03-30 1999-03-30 Time management & task completion & prediction apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CA 2233359 CA2233359A1 (en) 1998-03-30 1998-03-30 Computer software development time management and task completion and prediction apparatus
CA2,233,359 1998-03-30
CA002267476A CA2267476A1 (en) 1998-03-30 1999-03-30 Time management & task completion & prediction apparatus

Publications (1)

Publication Number Publication Date
CA2267476A1 true CA2267476A1 (en) 1999-09-30

Family

ID=29712976

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002267476A Abandoned CA2267476A1 (en) 1998-03-30 1999-03-30 Time management & task completion & prediction apparatus

Country Status (1)

Country Link
CA (1) CA2267476A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111240945A (en) * 2019-12-30 2020-06-05 中国建设银行股份有限公司 System, method and related device for automatically processing secondary alarm
CN113094243A (en) * 2020-01-08 2021-07-09 北京小米移动软件有限公司 Node performance detection method and device
CN115086363A (en) * 2022-05-23 2022-09-20 北京声智科技有限公司 Learning task early warning method and device, electronic equipment and storage medium
CN115086363B (en) * 2022-05-23 2024-02-13 北京声智科技有限公司 Early warning method and device for learning task, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US6519763B1 (en) Time management and task completion and prediction software
US7617210B2 (en) Global inventory warehouse
US8326870B2 (en) Critical parameter/requirements management process and environment
US6308164B1 (en) Distributed project management system and method
US7533008B2 (en) System and method for simulating a discrete event process using business system data
US7490319B2 (en) Testing tool comprising an automated multidimensional traceability matrix for implementing and validating complex software systems
Paim et al. DWARF: An approach for requirements definition and management of data warehouse systems
US20070288212A1 (en) System And Method For Optimizing Simulation Of A Discrete Event Process Using Business System Data
US6964044B1 (en) System and process for management of changes and modifications in a process
US20090070237A1 (en) Data reconciliation
Chung et al. Dealing with change: An approach using non-functional requirements
WO1998027489A1 (en) Software release document process control system and method
JPH10222351A (en) Computer execution integrating request system for managing change control in software release stream
JP2001256333A (en) Operation allocation system and method, decentralized client-server system, and computer program storage medium
US8027956B1 (en) System and method for planning or monitoring system transformations
Olsem An incremental approach to software systems re‐engineering
WO2005008397A2 (en) Systems and methods for categorizing charts
CN111914417A (en) Plan and budget simulation analysis system
CA2267476A1 (en) Time management & task completion & prediction apparatus
Sakamoto et al. Supporting business systems planning studies with the DB/DC data dictionary
CN114911773A (en) Universal meta-model design method
JP2010092387A (en) Created document navigation system
KR20060012572A (en) System and methods for managing distributed design chains
Wu Software project plan tracking intelligent agent
Fisseler et al. D2. 5-Initial Requirements and Architecture Specifications

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead