EP1272945A2 - Method for health solution models

Method for health solution models

Info

Publication number
EP1272945A2
Authority
EP
European Patent Office
Prior art keywords
files
management
data
data management
match
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01927029A
Other languages
English (en)
French (fr)
Inventor
Kevin W. Carley
Lisa Marie Harrington
Jennifer Dikeman
Megan Moody
Mary Michelle Gregory
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Accenture Global Services Ltd
Original Assignee
Accenture LLP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/549,237 (external priority: US6701345B1)
Application filed by Accenture LLP filed Critical Accenture LLP
Publication of EP1272945A2


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 - Integrating or interfacing systems involving database management systems
    • G06F16/252 - Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application

Definitions

  • The present invention relates to system frameworks and more particularly to a system framework for a health care system.
  • Computerized databases are commonly used to store large amounts of data for easy access and manipulation by multiple users.
  • In a centralized computer system, there is a single copy of the data stored at one location, typically a computer.
  • By maintaining a single, centralized database, such a system avoids inconsistencies which might otherwise occur with more than one copy of the data.
  • The centralized database approach, however, has several drawbacks. First, since only one copy of the data exists, if the data becomes corrupted or inaccessible, the entire system becomes unavailable. Second, with only one copy of data available for read and update purposes, the system may appear slow and time-consuming, especially to multiple users.
  • A transaction is a set of data-dependent operations requested by a user of the system. For example, a user may request some combination of retrieval, update, deletion or insertion operations.
  • The completion of a transaction is called a commitment, and the cancellation of a transaction prior to its completion is referred to as an abort. If a transaction is aborted, then any partial results (i.e., updates from those operations that were performed prior to the abort decision) must be undone. This process of returning the data items to their original values is also referred to as a roll back.
  • An important aspect of a transaction is atomicity. Atomicity means that all of the operations associated with a particular transaction must be performed or none of them can be performed.
  • Although replicated systems provide the above advantages over non-replicated systems, there are nonetheless inherent costs associated with the replication of databases.
  • To update a single data item, at least one message must be propagated to every replica of that data item, consuming substantial communications resources.
  • In addition, a complicated administrative support mechanism is required.
  • If the replicated system cannot guarantee consistent updates at all replicas, data integrity may be compromised.
  • In the two-phase commit (2PC) protocol, a single database manager associated with a single database facility is chosen as the coordinator of the transaction.
  • The coordinator first asks all of the participants (i.e., the other replicas), including itself if the coordinator is a participant, to prepare for the commitment of a transaction.
  • Each participant replies to the coordinator with either a READY message, signaling that the participant is ready and willing to commit the transaction, or an ABORT message, signaling that the participant is unable to commit the transaction.
  • Before sending the first prepare message, the coordinator typically enters a record in a log stored on stable storage, identifying all of the replicas participating in the transaction.
  • The coordinator also activates a time-out mechanism. Based on the replies received from the participants, the coordinator decides whether to commit or abort the transaction. If all participants answer READY, the coordinator decides to commit the transaction. Otherwise, if at least one participant replies with an ABORT message or has not yet answered when the time-out expires, the coordinator decides to abort the transaction.
  • The coordinator then begins the second phase of 2PC by recording its decision (i.e., commit or abort) in the log.
  • Next, the coordinator informs all of the participants, including itself, of its decision by sending them a command message, i.e., COMMIT or ABORT.
  • Upon receipt, all of the participants write a commit or abort record in their own logs.
  • Then, all participants send a final acknowledgment message to the coordinator and execute the relevant procedures for either committing or aborting the transaction.
  • The acknowledgment message, moreover, is not simply an acknowledgment that a command has been received, but a message informing the coordinator that the command has been recorded by the participant in its stable log record.
  • When the coordinator receives the acknowledgment messages from the participants, it enters a "complete" record in its log.
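  • For illustration only (the patent describes the protocol in prose, not code), the coordinator logic above might be sketched in Java roughly as follows; the Participant interface, vote messages, and in-memory log are hypothetical stand-ins, and a real 2PC implementation would also need stable storage and time-outs:

```java
import java.util.List;

// Hypothetical participant interface: each replica votes in phase one and
// obeys the coordinator's command in phase two.
interface Participant {
    enum Vote { READY, ABORT }
    Vote prepare();               // phase one: prepare to commit, then vote
    void command(boolean commit); // phase two: COMMIT (true) or ABORT (false)
}

class TwoPhaseCommitCoordinator {
    private final List<Participant> participants;
    private final StringBuilder log = new StringBuilder(); // stand-in for a stable log

    TwoPhaseCommitCoordinator(List<Participant> participants) {
        this.participants = participants;
    }

    /** Runs both phases; returns true if the transaction committed. */
    boolean run() {
        // Before the first prepare message: record the participants in the log.
        log.append("participants=").append(participants.size()).append('\n');

        // Phase one: ask every participant to prepare. Any ABORT vote (or, in a
        // fuller implementation, an expired time-out) forces the transaction to abort.
        boolean commit = true;
        for (Participant p : participants) {
            if (p.prepare() != Participant.Vote.READY) {
                commit = false;
                break;
            }
        }

        // Phase two: record the decision, then send the command message to everyone.
        log.append(commit ? "COMMIT" : "ABORT").append('\n');
        for (Participant p : participants) {
            p.command(commit);
        }

        // Entered once all participants have acknowledged recording the command.
        log.append("complete\n");
        return commit;
    }
}
```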
  • Although widely implemented, the 2PC protocol nonetheless has several disadvantages.
  • Among them, 2PC requires the transmission of at least three messages per replicated database per transaction. The protocol thus consumes substantial communications resources and reduces the system's response time and throughput.
  • Further, the primary-backup approach does not automatically allow for transparent failover to a backup site after a primary failure.
  • A state machine is an entity containing a set of states and a set of commands which transform those states such that all of the new states are also contained within the machine.
  • A method is disclosed for providing a multi-tier client/server architecture for storing files. First, a connection is maintained between multiple user stations and a server that has a database. A plurality of files and a command to load the files into the database are received from one of the user stations. Also, a data management template corresponding to the files is selected. Next, it is validated that all of the files to be loaded match the data management template. Then, the files are sent to the database for loading upon validation that the files match the data management template.
  • Optionally, files that match the data management template are separated from files that do not match the data management template. Also, a list of files that match the data management template and files that do not match the data management template may be compiled.
  • As an option, no files are sent to the database if any of the files do not match the data management template.
  • In one aspect, files that match the data management template are separated from files that do not match the data management template, and these separated files are sent to the user station if there are files that do not match the data management template.
  • In one embodiment, the files are medical files.
  • Further, a notification may be sent to the user station upon detecting a concurrently executing load process.
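  • As a rough sketch of the validation step (not the patent's implementation), the following Java fragment shows one way files might be matched against a data management template before being sent to the database; the template fields (expected file names, maximum size) are assumptions drawn from the options listed later:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative template: expected file names and a maximum size. The patent
// only says a template may list files, content, and sizes; this layout is assumed.
class DataManagementTemplate {
    private final List<String> expectedFiles;
    private final long maxBytes;

    DataManagementTemplate(List<String> expectedFiles, long maxBytes) {
        this.expectedFiles = expectedFiles;
        this.maxBytes = maxBytes;
    }

    boolean matches(String fileName, long sizeBytes) {
        return expectedFiles.contains(fileName) && sizeBytes <= maxBytes;
    }
}

class LoadValidator {
    /**
     * Returns the files to send to the database. Following the all-or-nothing
     * option, an empty list is returned if any file fails to match the template.
     */
    static List<String> validate(DataManagementTemplate template,
                                 List<String> fileNames, List<Long> sizes) {
        List<String> matching = new ArrayList<>();
        for (int i = 0; i < fileNames.size(); i++) {
            if (!template.matches(fileNames.get(i), sizes.get(i))) {
                return new ArrayList<>(); // reject the whole load
            }
            matching.add(fileNames.get(i));
        }
        return matching;
    }
}
```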
  • Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention.
  • Figure 2 illustrates a data load process in which a single user runs the process on an individual client desktop (user station);
  • Figure 3 is a flowchart depicting a process for providing a multi-tier client/server architecture for storing files and/or records
  • Figure 4 is a flowchart illustrating a process for providing a notification when multiple users attempt to alter the same data
  • Figure 5 depicts a process for providing status messaging during data loading in a multi-tier client/server architecture
  • Figure 6 is a flowchart that illustrates a process for generating error and summary reports for a data load
  • Figure 7 is a flowchart illustrating a process for loading data in a multi-tier client/server architecture
  • Figure 8 is an illustration of the Integrated Development Environment Architecture (IDEA);
  • Figure 9 is an illustration showing a Development Organization Framework in accordance with one embodiment of the present invention.
  • Figure 10 is an illustration showing a security organization according to one embodiment of the present invention;
  • Figure 11 is an illustration showing the responsibilities of an Environmental Management Team
  • Figure 12 is an illustration showing the responsibilities of an Application Team structure
  • Figure 13 is an illustration showing a model migration plan in accordance with one embodiment of the present invention.
  • Figure 14 is an illustration showing a single release capability development pipeline in accordance with one embodiment of the present invention.
  • Figure 15 is an illustration showing a multiple release capability development pipeline in accordance with one embodiment of the present invention.
  • Figure 16 is an illustration showing a multiple release capability development pipeline with code base synchronization among three pipelines
  • Figure 17 is an illustration showing a Development Tools Framework in accordance with one embodiment of the present invention.
  • Figure 18 is an illustration showing information captured in the Repository and reused
  • Figure 19 is an illustration showing the Repository's central role in the development environment.
  • Figure 20 is an illustration showing an Operational Architecture Framework in accordance with one embodiment of the present invention.
  • A preferred embodiment of a system in accordance with the present invention is preferably practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
  • A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with a preferred embodiment having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112.
  • The workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, a communication adapter 134 for connecting the workstation to a communication network (e.g., a data processing network) and a display adapter 136 for connecting the bus 112 to a display device 138.
  • The workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system.
  • Object oriented programming (OOP) is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program.
  • An object is a software package that contains both data and a collection of related structures and procedures.
  • Since an object contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
  • OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture.
  • A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point.
  • An object is a single instance of the class of objects, which is often just called a class.
  • A class of objects can be viewed as a blueprint, from which many objects can be formed.
  • OOP allows the programmer to create an object that is a part of another object.
  • For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston.
  • In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
  • OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition.
  • The ceramic piston engine does not make up a piston engine. Rather, it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic.
  • The object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it.
  • Thus, the object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
  • When the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class.
  • However, the ceramic piston engine object overrides these with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons.
  • Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.).
  • To access these functions, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
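  • The piston engine example can be made concrete in code. Below is a brief, illustrative Java sketch of the composition-relationship, inheritance, overriding, and polymorphism just described; the class and method names (and values) are ours, not the patent's:

```java
// Composition-relationship: a PistonEngine is made up of, among other things, a Piston.
class Piston {
    double thermalConductivity() { return 50.0; } // metal piston (illustrative value)
}

// A ceramic piston is one kind of piston with overriding thermal characteristics.
class CeramicPiston extends Piston {
    @Override
    double thermalConductivity() { return 3.0; } // ceramic-specific (illustrative value)
}

class PistonEngine {
    protected Piston piston = new Piston(); // the engine "has a" piston

    double pistonThermalConductivity() { return piston.thermalConductivity(); }
}

// Inheritance: the derived object inherits every aspect of PistonEngine and
// adds one limitation - its piston is made of ceramic.
class CeramicPistonEngine extends PistonEngine {
    CeramicPistonEngine() { this.piston = new CeramicPiston(); }
}

class EngineDemo {
    public static void main(String[] args) {
        // Polymorphism: the same call, behind the same name, behaves differently
        // depending on the kind of engine.
        PistonEngine[] engines = { new PistonEngine(), new CeramicPistonEngine() };
        for (PistonEngine engine : engines) {
            System.out.println(engine.pistonThermalConductivity());
        }
    }
}
```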
  • With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of the reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
  • Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
  • Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
  • An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
  • An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
  • OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
  • OOP enables software developers to build objects out of other, previously built objects.
  • C++ is an OOP language that offers a fast, machine-executable code.
  • C++ is suitable for both commercial-application and systems-programming projects.
  • C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
  • Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
  • Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.
  • Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real- world objects and the relationships among them.
  • Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way.
  • Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i.e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way.
  • Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.
  • Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again.
  • A relatively new extension of the class library concept is to have a framework of class libraries.
  • This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.
  • Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others.
  • Previously, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.
  • Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application.
  • The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
  • Application frameworks reduce the total amount of code that a programmer has to write from scratch.
  • Because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit.
  • The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
  • A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
  • A framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
  • Behavior versus protocol: class libraries are essentially collections of behaviors that you can call when you want those individual behaviors in your program.
  • A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
  • A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain. For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.
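  • The difference is essentially one of inversion of control, which a short, illustrative Java sketch can show: the framework owns the flow of control and calls application code only at defined points. All names here are hypothetical:

```java
// The "framework": a generic application that owns the event handling and
// flow of control, providing default behavior for windows, copy/paste, etc.
abstract class ApplicationFramework {
    void openWindow() { System.out.println("window opened"); }          // default behavior

    // The protocol: the framework decides *when* this runs; the
    // application programmer supplies *what* it does.
    abstract void populateDocument();

    final void run() {                       // flow of control stays with the framework
        openWindow();
        populateDocument();                  // application code called when needed
        System.out.println("copy/paste and event handling supplied by framework");
    }
}

// The application: inherits the default behavior and overrides only what is specific.
class InvoiceEditor extends ApplicationFramework {
    @Override
    void populateDocument() { System.out.println("loading invoice data"); }
}

class FrameworkDemo {
    public static void main(String[] args) {
        new InvoiceEditor().run(); // the programmer relinquishes detailed flow of control
    }
}
```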
  • A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation.
  • Information on these products is available in T. Berners-Lee, D. Connoly, "RFC 1866: Hypertext Markup Language - 2.0" (Nov. 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996).
  • HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains. HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing Text and Office Systems; Standard Generalized Markup Language (SGML).
  • HTML has been the dominant technology used in development of Web-based solutions.
  • HTML has proven to be inadequate in a number of areas, however, including poor performance.
  • With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved.
  • Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance.
  • Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
  • Sun's ® Java ® language has emerged as an industry-recognized language for "programming the Internet.”
  • Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language.
  • Java supports programming for the Internet in the form of platform-independent Java applets."
  • Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to client.
  • Java's core feature set is based on C++.
  • Sun's Java literature states that Java is basically, "C++ with extensions from Objective C for more dynamic method resolution.”
  • ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content.
  • The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies.
  • The group's building blocks are called ActiveX Controls: small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages.
  • ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi ®, Microsoft Visual Basic ® programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta ®.”
  • ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications.
  • ActiveX could be substituted for JAVA without undue experimentation to practice the invention.
  • An embodiment of the present invention is a data load process that automates the process of loading large volume configuration or conversion data into a database in a health care solution framework.
  • The process can be used to automate the normally manual, time-intensive process of loading data via a front-end user interface.
  • Among other benefits, the data load process minimizes configuration problems due to human error.
  • The keywords around which the data is organized are arranged into a tier structure, where all keywords within one tier must be loaded before the next tier can be started.
  • FIG. 2 illustrates a data load process 200 in which a single user runs the process on an individual client desktop 202 (user station).
  • An illustrative data load process may be embodied in a three tier client/server architecture including a Graphical User Interface (GUI) built in Microsoft Access, a server application built in C, Pro*C, Perl 5 and Unix korn shell scripts, Oracle SQL*Loader scripts, and a series of Oracle PL/SQL stored procedures.
  • A user logs onto the system. See arrow 1.
  • The user selects specific keywords within a tier 204 to load into the database 206.
  • The user executes a load process at arrow 3, and files to be loaded are transferred to the server at arrow 4.
  • A load process control module is executed and the corresponding DMT(s) for the selected keyword(s) are sent to the server application. See arrow 5.
  • A check for concurrently executing load processes is performed in operation 6.
  • A check of the success of the file transfer is performed in operation 7.
  • The files are then reformatted, and the server application loads the data into worktables.
  • The server application initiates stored PL/SQL procedures to perform validation. See operation 10.
  • Data is validated according to database and/or client-specific business rules. If no validation errors are found, data is loaded into the Diamond database. See operation 11. If errors are found, a file containing all the good records and a file containing all the bad records are sent back to the client desktop. See arrow 12. A report is produced listing all of the erred records and the corresponding row numbers and error messages. Also, a verification report is produced that provides control totals for data loaded into the database or written to the good/bad files. The reports can then be reviewed by the user. See arrow 13.
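  • The good-record/bad-record separation and reporting described above might look like the following Java sketch; the record representation, the business rule, and the report format are illustrative assumptions, since the actual process uses PL/SQL procedures and client-specific rules:

```java
import java.util.ArrayList;
import java.util.List;

class LoadResult {
    final List<String> goodRecords = new ArrayList<>();
    final List<String> badRecords = new ArrayList<>();
    final List<String> report = new ArrayList<>(); // row numbers and error messages
}

class DataLoadServer {
    /** Splits records into good and bad files and builds the two reports. */
    static LoadResult validate(List<String> records) {
        LoadResult result = new LoadResult();
        for (int row = 0; row < records.size(); row++) {
            String record = records.get(row);
            if (record != null && !record.isEmpty()) { // stand-in for a business rule
                result.goodRecords.add(record);
            } else {
                result.badRecords.add(record);
                result.report.add("row " + (row + 1) + ": record failed validation");
            }
        }
        // Verification report: control totals for data loaded or written to good/bad files.
        result.report.add("control totals: " + result.goodRecords.size() + " good, "
                + result.badRecords.size() + " bad");
        return result;
    }
}
```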
  • FIG. 3 is a flowchart depicting a process 300 for providing a multi-tier client/server architecture for storing files and/or records such as medical records.
  • First, a connection is maintained between multiple user stations and a server that has a database.
  • The connection may be maintained utilizing a local area network or a wide area network.
  • Alternatively, a dialup connection could be created periodically or upon user request.
  • A plurality of records/files and a command to load the records into the database are received from one of the user stations in operation 304.
  • The command may be ordered by the user, or may be executed automatically. If the command is executed automatically, it may be performed at predetermined intervals.
  • Next, a data management template corresponding to the files/records is selected.
  • The data management template may include a listing of all records/files that should be loaded.
  • Further, the data management template may specify particular content of the files/records that must be matched for verification.
  • The data management template may also specify particular sizes of the files/records.
  • Then, it is validated that all of the records/files to be loaded match the data management template.
  • The records/files are sent to the database for loading upon validation that the records match the data management template.
  • Optionally, files/records that match the data management template are separated from files/records that do not match the data management template. Also, a list of records/files that match the data management template and records/files that do not match the data management template may be compiled and may be sent to the user station.
  • As an option, no records are sent to the database if any of the records do not match the data management template. This prevents entry of erroneous data.
  • In one aspect, records that match the data management template are separated from records that do not match the data management template. The records are then sent to the user station if there are records that do not match the data management template.
  • In one embodiment, the records are medical records.
  • Further, a notification may be sent to the user station upon detecting a concurrently executing load process.
  • FIG. 4 is a flowchart illustrating a process 400 for providing a notification when multiple users attempt to alter the same data.
  • First, connections to a plurality of user stations are monitored. This may be done continuously or at predetermined intervals, for example.
  • An instruction for initiating a load process is received from one of the user stations in operation 404.
  • Data is downloaded from the one of the user stations in operation 406. In this and other embodiments of the present invention, the data may be in the form of files or records, for example.
  • Upon detecting a concurrently executing load process, a notification is sent to the user station from which the data is being downloaded. A notification is also sent to the user station that initiated the concurrently executing load process in operation 412.
  • Such notifications may include a pop-up window, an email, and/or a facsimile, for example. Both users are notified to allow them to coordinate their updates so that all alterations to the data are entered.
  • At least one of the load processes is suspended in operation 414 upon detecting the concurrently executed load process, to allow the users time to react to the notification.
  • One of the load processes, all but the first load process, all of the load processes, or any other combination can be suspended upon it being determined that another load process is being concurrently executed.
  • The suspended load process should be allowed to continue upon receiving a command to continue from the user station associated with it. See operation 416.
  • Optionally, the data includes medical records.
  • The connections to the user stations may be via a wide area or local area network.
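  • A minimal sketch of the concurrent-load detection and notification, assuming a single server process, is shown below in Java; the locking and notification mechanics are illustrative, as the patent leaves them open:

```java
import java.util.concurrent.atomic.AtomicReference;

class LoadProcessGuard {
    // Identifies the user station whose load process is currently running, if any.
    private final AtomicReference<String> activeLoader = new AtomicReference<>();

    /** Returns true if the load may start; otherwise both user stations are notified. */
    boolean tryStartLoad(String userStation) {
        if (activeLoader.compareAndSet(null, userStation)) {
            return true; // no concurrently executing load process detected
        }
        String other = activeLoader.get(); // simplification: may race with finishLoad()
        notifyStation(userStation, "Load suspended: " + other + " is altering the same data.");
        notifyStation(other, "Another user (" + userStation + ") attempted a concurrent load.");
        return false; // suspended until a command to continue is received
    }

    void finishLoad() { activeLoader.set(null); }

    private void notifyStation(String station, String message) {
        // Stand-in for a pop-up window, email, or facsimile notification.
        System.out.println("to " + station + ": " + message);
    }
}
```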
  • Figure 5 depicts a process 500 for providing status messaging during data loading in a multi-tier client/server architecture.
  • First, data is downloaded from a user station.
  • A status of the download of the data is transmitted to the user station in operation 504.
  • The status is displayed as it is received.
  • Next, the data is divided into divisible portions.
  • Each of the divisible portions of the data is checked in operation 508 to validate that the data meets predetermined criteria, such as that it includes certain content.
  • A message is sent to the user station indicating whether the divisible portions of the data meet the predetermined criteria.
  • The data is loaded into a database in operation 512.
  • The data may include medical records.
  • Optionally, a list of data that matches the predetermined criteria and data that does not match the predetermined criteria is compiled.
  • As an option, data that matches the predetermined criteria is separated from data that does not match the predetermined criteria.
  • In one aspect, the separated data is transmitted to the user station. The data may be transmitted during separation or may be transmitted after separation.
  • In another aspect, the divisible portions of the data are loaded into a table before validating that the data meets the predetermined criteria.
  • Further, a notification may be sent to the user station upon detecting a concurrently executing load process.
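  • The per-portion checking and status messaging could be sketched as follows, again in Java; the chunk size and the criteria check are assumptions for illustration:

```java
import java.util.List;

class ChunkedLoader {
    /** Checks each divisible portion of the data and reports status to the user station. */
    static void load(List<String> data, int chunkSize) {
        for (int start = 0; start < data.size(); start += chunkSize) {
            int end = Math.min(start + chunkSize, data.size());
            List<String> portion = data.subList(start, end);

            // Stand-in for the predetermined criteria (e.g., required content).
            boolean meetsCriteria = portion.stream().allMatch(r -> r != null && !r.isEmpty());

            // Status message, so the user station can display progress as it is received.
            System.out.println("portion " + (start / chunkSize + 1) + ": "
                    + (meetsCriteria ? "valid, loading" : "failed criteria, separated"));
        }
    }
}
```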
  • Figure 6 is a flowchart that illustrates a process 600 for generating error and summary reports for a data load.
  • First, a plurality of records to be loaded into a database is received.
  • The records may include medical records.
  • A data management template corresponding to the records is chosen in operation 604.
  • All of the records, or only the matching records, are sent to the database in operation 608 for loading upon validation that the records match the data management template.
  • A report of records that match the data management template and records that do not match the data management template is compiled in operation 610.
  • Optionally, records that match the data management template are separated from records that do not match the data management template. The separated records are sent to a user station if there are records that do not match the data management template.
  • As an option, no records are sent to the database if any of the records do not match the data management template.
  • In one aspect, the records are loaded into a table before validation of the records.
  • Further, a notification may be sent to a user station or log file upon detecting a concurrently executing load process.
  • Figure 7 is a flowchart illustrating a process 700 for loading data in a multi-tier client/server architecture.
  • First, a plurality of user-selected keywords is received.
  • Data is organized around the keywords.
  • The data can include medical-related data such as medical records.
  • A data management template which corresponds to the keywords is selected in operation 704.
  • A validation is performed in operation 706 to determine whether all of the data to be loaded matches the data management template.
  • The data is sent to a database in operation 708 to be loaded in the database upon validation that the data matches the data management template.
  • Optionally, data that matches the data management template is separated from data that does not match the data management template.
  • As an option, a list of data that matches the data management template and data that does not match the data management template is compiled.
  • In one aspect, no data is sent to the database if any of the data does not match the data management template, thereby eliminating insertion of erroneous data.
  • In another aspect, the data is loaded into a table before validation of the data.
  • Further, a notification may be sent to a user upon detecting a concurrently executing load process.
  • FIG. 8 is an illustration of the Integrated Development Environment Architecture (IDEA).
  • The Integrated Development Environment Architecture provides a development environment framework and associated guidelines that reduce the effort and costs involved with designing, implementing, and maintaining an integrated development environment.
  • IDEA takes a holistic approach to the development environment by addressing all three Business Integration components: organization, processes, and tools.
  • The development environment is a production environment for one or several systems development projects as well as for maintenance efforts. It requires the same attention as a similarly sized end-user execution environment.
  • The purpose of the development environment is to support the tasks involved in the analysis, design, construction, and maintenance of business systems, as well as the associated management processes.
  • The environment should adequately support all the development tasks, not just the code/compile/test/debug cycle. Given this, a comprehensive framework for understanding the requirements of the development environment should be used.
  • The investment required to design, set up, and tune a comprehensive, good development and maintenance environment is typically several hundred development days. Numbers between 400 and 800 days are commonly seen, depending on the platforms, target environment complexity, amount of reuse, and size of the system being developed and maintained.
  • Figure 9 is an illustration showing a Development Organization Framework in accordance with one embodiment of the present invention.
  • The development organization's size, structure, experience, and maturity should strongly influence the choice of tools and the way the tools are integrated. If this link is not understood, the benefit of tool support will be minimal in many areas, and may significantly reduce productivity.
  • The Business Integration Methodology provides valuable information on organizational issues.
  • A Responsibility, Accountability, and Authority (RAA) profiles deliverable should be produced for each role in the Development team, making sure that all the responsibilities listed earlier are covered.
  • The RAA profiles deliverable consists of statements about the responsibilities, accountability, and authority of each of the positions in the development organization. These statements define the role of each position in terms of:
  • Figure 10 is an illustration showing a security organization according to one embodiment of the present invention.
  • A Security Management Team may have a security management 1000, under which are an administration team 1002, a projects & planning team 1004, and a business process security team 1006.
  • The size of the Security Management team, and the way in which it is integrated into the development organization, depends on the degree to which security is a factor for each specific environment. For example, the security risks associated with an Internet-based online banking system are far greater than those of a fully isolated client/server system, and therefore warrant a larger team with broader responsibilities and greater influence.
  • The Information Management team is responsible for ensuring that the project's knowledge capital and information resources are managed effectively. This includes:
  • Information Management encompasses Repository management, but generally has a broader scope than merely the repository contents, because most repositories are not capable of holding all the information resources of a project. It is, for example, common to have key project information reside in a combination of repositories, teamware databases, flat files, and paper documents. It is the Information Management team's responsibility to ensure consistency across all these formats. The responsibilities of the Information Management team therefore cover:
  • In addition to managing the information for the System Building team, the Information Management team must also manage the information resources of the other management processes - quality management, environment management, and project management.
  • The Information Management team is ultimately responsible for the contents of the repository. They need to have an intimate understanding of the repository structure and the rules that govern how different objects should be stored in the repository. Although most of the input to the repository is entered by designers, the Repository Management team must manage this population process. Rather than taking a policing role on the project, they should work as facilitators - helping the designers do things correctly the first time, thereby maintaining the integrity of the repository. Without strong repository management, the benefits of using a repository quickly diminish. In many situations the Information Management team must make decisions that affect functional areas. To empower the Information Management team, the Application teams should include the Information Management team in relevant design discussions. This facilitates the validation of design outputs.
  • Folders can be very useful in gaining control over the overwhelming amount of information produced on a large project. Their utility greatly increases if they are managed appropriately. This management is based on easy-to-follow, easy-to-enforce standards.
  • The Quality team is responsible for defining and implementing the Quality Management Approach, which means defining what Quality means for the Program Leadership, and then implementing the procedures, standards, and tools required to ensure the delivery of a quality program.
  • The Quality Management Approach addresses concepts such as expectation management, quality verification, process management, metrics, and continuous improvement. Since quality is the result of the interaction of many teams working on multiple processes, the Quality team is responsible for ensuring effective cooperation between teams and good integration of the development processes. The Quality team must therefore forge strong links with all the other project teams.
  • The Quality team is not only responsible for ensuring the quality of the system building process; it is also directly involved in ensuring the quality of the other IDEA management processes.
  • The Program Management team is responsible for delivering business capability. In this respect, it is responsible for the System Building and other management teams. In addition, other management responsibilities that do not have a specific team or role defined within IDEA also belong to the Program Management team. These include:
  • The Project Management team is responsible for producing a deliverable or set of deliverables. As such, it is responsible for:
  • The Configuration Management team is responsible for defining the approach the program takes to deal with scope, change control, version control, and migration control, and for putting in place the policies, processes, and procedures required to implement this approach.
  • The team is also responsible for maintaining the integrity of software and critical documents as they evolve through the delivery life cycle from analysis through deployment.
  • Delivering a system on a release-based approach means delivering the system in a series of consecutive releases, increasing or refining functionality progressively.
  • Some of the main drivers to such an approach include:
  • The Release Management team is responsible for:
  • Release Management is more a role than a function. It is good practice to have as many areas as possible represented in the Release Management team; for example, Design, Construction, Configuration, and Environment Management team members would make up a typical Release Management team, each providing input based on their own perspective.
  • Figure 11 is an illustration showing the Environmental Management Team responsibilities.
  • The Service Group 1102 serves as a single point of contact for developers. It interfaces with the Architecture team to provide answers to questions from developers. To avoid adding overhead to the issue resolution process, the support group must be staffed adequately to ensure that all questions are answered. For example, the support group should recruit people from the Technology Infrastructure team at the completion of Technology Infrastructure development.
  • Problem Management is concerned with the discrepancies that result from the testing process and the management of design problems detected during verification or validation steps throughout the development process.
  • The Problem Management team is responsible for defining the problem tracking and solution process, and for providing tools and procedures to support the solution process.
  • Figure 12 is an illustration showing the Application Team structure and responsibilities.
  • The Application team 1200 consists of three separate subteams: Application Architecture 1202, Application Development 1204, and System Test 1206.
  • The structure of the Application team evolves as the development process continues - as the development of the application architecture components is completed, the Application Architecture team's roles may change. While the team continues maintaining the application architecture components, some team members may be deployed to the Application Development team. Here their roles can include helping application developers to correctly use the architecture components, providing development support, performing code reviews, and so forth.
  • The technology infrastructure evolves throughout the project, and responsibility for managing and evolving the infrastructure must be clearly defined. Therefore, rather than having a single amorphous 'technical team' (responsible for operations, support, architecture evolution, and more), it is important to define a dedicated technology infrastructure team. By allowing the technology infrastructure team to focus on the technology infrastructure, rather than the day-to-day running of the environment, the project increases the chances that the technology infrastructure will provide good support for the business applications.
  • The Technology Infrastructure team is the team that will implement the IDEA framework.
  • The Development Process Model is a framework that facilitates the analysis of the many concurrent processes of systems development. This analysis helps understand process interaction, which, in turn, affects organizational interaction and defines a need for tools integration.
  • The Process Model is simple - at its core is the system building process, which is surrounded by eight key management processes.
  • Information Management manages the information that supports the entire project - information that is used both in systems building and in other management processes.
  • Security Management covers all areas of development security, from coding standards, to security verification.
  • Release Management manages the simultaneous development of multiple releases.
  • Configuration Management, often closely linked with Release Management, covers the version control, migration control, and change control of system components such as code and its associated documentation.
  • Each of the processes must be defined at a greater level of detail than that which any methodology can achieve.
  • This additional specification consists of a set of procedures and standards that specify how to perform the work and what to produce at each step.
  • Standards specify what the results should look like. They may include industry standards and more formal (de jure) standards, such as POSIX compliance, but most standards are project specific and determine, for example, how to structure and name system components and where to place system components. Standards make it possible for a large team to exchange information effectively and to work productively together.
  • Procedures specify how to perform a task. They are generally guided by the methodology but provide information at a lower level of detail. They are highly environment-specific, and take into account the organization, the standards, and the tools in the environment. Procedures often specify the techniques to be used. They may specify which tools to use and how to use the tools that support these techniques.
  • Samples can sometimes convey a message much faster than pages of explanatory prose.
  • Sample programs are generally very useful.
  • Other samples may include logs, which demonstrate interaction with tools, a sample change request, or a sample request for technical support. Samples can sometimes be created efficiently by taking screen dumps. This can be much faster than specifying what the screen should look like in theory.
  • Samples and standards must be high quality - any quality breach will be multiplied when developers start using them. It is therefore imperative that samples and standards not be created in a vacuum but be based on concrete experience with the project's development environment. Some pilot development work often proves extremely useful when fine tuning the standards.
  • Security requirements are the outcome of the security Risk Assessment. This is the process of identifying business risks, identifying system vulnerabilities or weaknesses that can impact those risks, and recommending mechanisms to control the vulnerabilities. Specific confidentiality, integrity and availability requirements for the new system and the development environment are defined through this process.
  • Security standards, guidelines and procedures provide security direction to the implementation. They will help define how the security requirements developed through the Risk Assessment must be addressed in all areas of the development environment. They will include security standards for the development environment infrastructure, procedures for the development processes, standards for the design of the security architecture and security guidelines for programming. It is especially important to ensure the security of the development environment because if these systems are broken into and back doors are introduced, it may lead to later compromise of the production system. It will be the responsibility of all developers to ensure that these security controls are implemented and adhered to throughout the development process.
  • Further, periodic security audits should be arranged in order to verify that the processes, architecture, and application components being developed conform to security-proven practices. This may be done by an external body specializing in security (such as Global TIS - Security) in the form of interviews, architecture and code reviews, and automated tool assessment.
  • Information Management generally involves Repository Management, Folder Management and, where applicable, Object Management and Media Content Management.
  • Repository Management includes activities such as:
  • Because repositories do not provide sufficient versioning functionality, it is common to have more than one repository on large projects. Typically, there may be one repository for development, one for system test, and one for production. This allows better control, but also requires significant resources to move repository objects from the development environment to the system test environment.
  • A medium-sized project has a potential for productivity gains. If these gains are to be realized, great care must be taken when making corrections during system test.
  • Any error analysis involving repository objects must take into account the possibility that these objects could have changed since the previous migration to system test. This situation can be managed by meticulously maintaining a comprehensive change log.
  • Moreover, a single development environment may have to deal with multiple repositories:
  • For example, one repository might be integrated with an upper-CASE design tool and the other with a lower-CASE generation tool.
  • Repositories may also be distributed over different locations. In order to keep these repositories synchronized, well-defined development processes must be implemented.
  • Repository Management can be divided into the following areas:
  • The data elements should usually be controlled by the Repository Management team, because they are the basic building blocks of the system and have broad reuse. Poorly defined data elements can cause inconsistency, redundancy, and generation errors. Data elements should therefore be locked at least by the time construction starts, and possibly earlier, depending on the discipline of the team. Project members must be allowed to browse the data elements, but only the Repository Management team should be allowed to modify or unlock data elements. In some repositories, it is difficult to restrict the creation of repository objects. If this is the case, it may be acceptable to let designers create data elements if these are reviewed and locked at the end of each day. Increased control can be obtained by having designers submit requests for new data elements to the repository administrator. This allows the repository manager to evaluate whether the new data element is justified, or whether an existing one should be used.
  • Requests for data element changes can be forwarded using a database or paper-based system. Based on functional and technical knowledge, the repository administrator evaluates the requests and may involve other teams to make appropriate decisions.
  • The database used to request data element changes during design and programming should be separate from the project's change request database. This will simplify and speed up the change process. When data elements have to be changed during system test, however, the impact can be much greater, and the regular change request database should be used.
  • Dialog definitions, reports, messages, and so forth are usually maintained by the designers and programmers.
  • When dialogs and report programs are tested, approved, and ready to be promoted to the system test environment, the related objects must be locked. This is the responsibility of the Repository Management team.
  • Project-specific standards should exist for defining repository objects. These standards can form the basis for a repository validation program, which can run through the entire repository and report on detected deviations from standards. In some cases, this program can also enforce the standard.
  • Mass changes to the repository can be performed when the validation reports show the occurrence of many standards violations that follow a common pattern. This may occur in cases where:
  • Certain reports should be run daily, such as the list of new data elements or modified data elements. These reports can serve as an audit trail of changes and can be used to communicate changes to the entire team. Procedures should specify which reports are run daily and what their distribution should be.
  • the Repository Management team performs certain analyses repeatedly. Standard analyses such as impact analyses should be specified in detail to facilitate staffing flexibility.
  • the Repository Management team can provide custom reports or ad hoc queries that satisfy particular needs.
  • Folder Management: It is important to set up and communicate a detailed folder structure with specified access rights from the beginning. Contents of folders must be checked regularly to ensure that folders contain what they are supposed to.
  • Folders can be organized by type of component so that one folder contains all the include files, one folder contains the source modules, one folder contains executables, and so on.
  • Folders can also be organized functionally so that all the common components reside in one folder and each application area stores its components in its own folder.
  • while scratch folders may be useful in certain contexts, the proliferation of miscellaneous folders with cryptic names can make it very difficult to navigate the information.
  • Some useful guidelines include:
  • document each folder, either in a central location or in the form of a readme-type file within the folder itself.
  • the high-level documentation should include the purpose of the folder and the kinds of contents it should hold.
  • Storage management concerns the methods of storing and retrieving media content.
  • the cost of data storage may be decreasing, but it is still the case that for large volumes of media it is often uneconomical to store everything on-line. For this reason, processes must be implemented to manage where data should be stored, and how it may be transitioned from one location to another.
  • Metadata about the media that is being stored is an important commodity that must be managed. As the volume of media content grows, it is vital to be able to understand characteristics of the media, in order to be able to manage it correctly. Examples of metadata include:
  • Media type - for example, MPEG video or JPEG image
  • Media source - for example, author and creation date
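  • As a sketch of how such metadata might be managed, the record layout and the on-line/off-line transition rule below are purely illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MediaMetadata:
    name: str
    media_type: str      # for example "MPEG video" or "JPEG image"
    author: str
    creation_date: date
    size_mb: float

def storage_tier(item: MediaMetadata, today: date) -> str:
    """Illustrative rule: large, old media is transitioned off-line."""
    age_days = (today - item.creation_date).days
    if item.size_mb > 500 and age_days > 90:
        return "off-line archive"
    return "on-line storage"

clip = MediaMetadata("intro", "MPEG video", "J. Doe", date(2000, 1, 5), 750.0)
print(storage_tier(clip, date(2000, 12, 1)))   # -> off-line archive
```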
  • the more advanced media content management tools may provide much of the functionality required to support these processes, but where this is not the case, the processes must be implemented manually.
  • Object Management processes are very similar to those involved with Repository Management. However, they should also promote reuse through specific processes.
  • the objective of these tasks is to ensure that, early in the life of a program, program leadership explicitly defines what quality means for the program. This results in the production of the quality plan. Then the infrastructure and processes are put in place to ensure delivery of a quality program.
  • the Quality Management Approach defines the following processes:
  • Processes and deliverables are key candidates.
  • the V-model is the preferred method by which the quality verification process is managed.
  • the V-model ensures that deliverables are verified, validated, and tested. It is based on the concepts of stage containment (enforcing, for a given deliverable, the identification of problems before it goes to the next stage) and entry and exit criteria (which describe the conditions under which a deliverable passes from one stage to another).
  • the quality verification process owner may not be responsible for executing the V-model, but is responsible for making sure that the V-model is in place and complied with.
  • Sample metrics include:
  • the first stage of the Continuous Improvement Process is to capture continuous improvement opportunities. These may include:
  • the CIP then plans and manages improvement-related activities such as:
  • While maintaining quality at a program level, the Quality Management team must also liaise with each of the organizational units within the development environment in order to monitor the quality management processes within these units.
  • CMM Capability Maturity Model
  • the CMM provides a software organization with guidance on how to gain control over its processes for developing and maintaining software, and how to evolve toward a culture of software engineering and management excellence.
  • the model defines five levels of software process maturity as well as how to move from one level to the level above.
  • the V-model is a framework that promotes stage containment by organizing the verification, validation, and testing in and across all the methodology elements throughout the delivery phase of the Business Integration Methodology.
  • the IMPROVE Job Aid (provided with the BIM Guide) describes the process for solving problems or improving a process. In this Job Aid, you will find an introduction to the five-step process your team can use to solve both simple and complex problems.
  • the Quality Action Team (QAT) is responsible for applying IMPROVE to improve a process or solve a problem.
  • Program Management focuses on the continuous oversight needed to support the delivery of business capability through multiple projects and releases. Appropriate disciplines, techniques, and tools are used to plan and organize the work, and to manage the incremental delivery of the new business capability.
  • Program Management consists of three major activities, each split into a number of task packages.
  • Project Management focuses on providing specific deliverables through balanced management of scope, quality, effort, risk, and schedule. Project Management processes follow a cycle of planning the project's execution, organizing its resources, and controlling its work. The Project Management team oversees all other teams within the development environment.
  • Project Management comprises a single activity containing a number of task packages.
  • Configuration Management is not only the management of the components in a given environment to ensure that they collectively satisfy given requirements, but also the management of the environment itself.
  • the environment consists not only of system components, but also of the maintenance of these components and the hardware, software, processes, procedures, standards, and policies that govern the environment.
  • Packaging is the combination of systems software and application component configurations (source code, executable modules, DDL and scripts, HTML) together with their respective documentation. It may also include the test-data, test scripts, and other components that must be aligned with a given version of the configuration. Packaging allows the grouping of components into deliverable packets of application software that can be developed, tested, and eventually delivered to the production environment. Packaging defines the underlying architecture that drives version, change, and migration control. Each of these control processes defines how changes to configuration packages are versioned and migrated to the various development and test phases in the systems development life cycle.
  • a sample packaging strategy would take into consideration some of the following factors in determining a unique method to handle a given configuration packet in terms of version, change, and migration control:
  • Base package type - identifies the various types of application components that are developed during systems building such as executables, JCL, HTML scripts, and Java applets.
  • Package release type - identifies the types of commonality that components can have. There are usually four basic types of components that are developed during systems building:
  • Application packages - these packages are the most rudimentary of all packages developed. They consist of basic application components developed by application developers.
  • Package platform type - identifies the eventual delivery platform of the package. Identifying this early on in development and encapsulating this information within the package definition, allows developers to envisage the production environment at an early stage during the systems development life cycle.
  • a configuration management cube can be defined, which uniquely identifies version, change, and migration control characteristics of a given package.
  • the cube can be used to implement a table-driven configuration management control system for all software developed on the program.
  • the configuration control system consists of version and migration control. Therefore, the cube defines all processes associated with version control and migration of a package.
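  • One hypothetical way to realize such a table-driven control system is sketched below; the package types and the associated control rules are invented for illustration:

```python
# Each cube cell maps (base type, release type, platform) to the
# version and migration controls that apply to that kind of package.
CONFIG_CUBE = {
    ("executable", "application", "mainframe"): {
        "version_scheme": "major.minor.build",
        "migration_path": ["development", "assembly test", "system test", "production"],
    },
    ("html_script", "application", "web"): {
        "version_scheme": "major.minor",
        "migration_path": ["development", "system test", "production"],
    },
}

def controls_for(base_type, release_type, platform):
    """Look up the control characteristics of a given package."""
    key = (base_type, release_type, platform)
    if key not in CONFIG_CUBE:
        raise KeyError(f"no configuration management policy defined for {key}")
    return CONFIG_CUBE[key]

print(controls_for("executable", "application", "mainframe")["migration_path"])
```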
  • Version control and compatibility are key considerations when managing these packages. Note that version control not only applies to software components, but also to all components of a given package, including test scripts, test data, and design documentation. It is also of great importance to keep track of which version is in which environment. If incompatibilities are discovered, it must always be possible to "roll back" to a previous consistent state, that is, to revert to an earlier version of one or more components. It must be possible to define releases of a configuration — a list of version numbers, one for each component of the package which together form a consistent configuration. The smallest unit that can be version controlled should be the package as defined in the packaging plan. This ensures that the lowest common denominator in all version control activities is managed at the package level.
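  • The notion of a release as a consistent list of component versions, with the ability to roll back, might be sketched as follows (the component names and version numbers are hypothetical):

```python
# A release is a consistent configuration: one version number per component.
releases = [
    {"order_entry.exe": "1.0", "order_entry_test_data": "1.0", "design_doc": "1.0"},
    {"order_entry.exe": "1.1", "order_entry_test_data": "1.1", "design_doc": "1.0"},
]
current = len(releases) - 1

def roll_back():
    """Revert to the previous consistent state if incompatibilities appear."""
    global current
    if current == 0:
        raise RuntimeError("no earlier consistent configuration to roll back to")
    current -= 1
    return releases[current]

print(roll_back())   # every component reverts together, keeping the set consistent
```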
  • Migration of packages or consistent configurations from one stage to another is a central part of Configuration Management.
  • the key to successful migration is the knowledge of what constitutes each stage. Examples of migration include:
  • FIG. 13 is an illustration showing a model migration plan in accordance with one embodiment of the present invention.
  • the Figure 13 model allows the development and testing of architecture components independent of application components.
  • the Technology Architecture team can develop 1300, assembly test 1302, and system test 1304 their components before delivering them to the development environment for the application developers. This ensures that the architecture is thoroughly tested before being used by the Application teams.
  • the model also illustrates the progression of architecture and application components through the systems development life cycle.
  • the application developers can then develop 1306, assembly test 1308, and system test 1310 their components before user acceptance tests 1312.
  • the model is a temporal one and thus suggests that architecture must be present at a given stage before the introduction of application components.
  • the version control plan must align with the migration control plan.
  • the version control plan defines the points where version control activities will take place. In the above example, version control will take place at the development stages, architecture development and unit test, and application development and unit test.
  • Migration control defines how these version control configuration packages will be migrated successfully from one stage to the next until the package is eventually released to the production environment.
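  • A minimal sketch of migration control, with hypothetical stage names and a pluggable exit criterion per stage, could look like this:

```python
STAGES = ["development", "assembly test", "system test", "production"]

def migrate(package, exit_criteria_met):
    """Promote a version-controlled package to the next stage if its
    exit criteria for the current stage are satisfied."""
    stage = package["stage"]
    index = STAGES.index(stage)
    if index == len(STAGES) - 1:
        raise ValueError("package is already in production")
    if not exit_criteria_met(package, stage):
        raise ValueError(f"exit criteria not met at stage '{stage}'")
    package["stage"] = STAGES[index + 1]
    return package

pkg = {"name": "order_entry", "version": "1.1", "stage": "assembly test"}
print(migrate(pkg, lambda p, s: True))   # -> promoted to system test
```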
  • Configuration Management becomes more complex in a component-based development environment as the system is broken down to a greater level of granularity.
  • Release Management involves coordinating activities that contribute to a release (for example, cross-project management) and the coordination of products that contribute to a release (such as architecture, integration, and packaging). It is concerned with managing a single release rather than cross-release management.
  • the Release Management approach documents critical decisions regarding the management, tracking, and integrity of all components and configurations within a given release.
  • the Release Management approach must be closely coordinated with the definition of the Configuration Management approach and the Problem Management approach. Release Management involves two main components:
  • the coordination of products that contribute to a release is the maintenance of a bill of materials for a release. It is an inventory of all software and hardware components that are related to a given release.
  • the development environment is directly affected by the Release Management strategy. The way a program decides to plan releases affects the complexity of the development environment.
  • FIG. 14 is an illustration showing a single release capability development pipeline in accordance with one embodiment of the present invention.
  • the ability to perform all development stages for a given release can be defined as a development pipeline.
  • the pipeline consists of all development and testing stages necessary to release the software to production.
  • a pipeline consists of all the necessary development and testing stages required to deliver a piece of software to production. Therefore, when three code bases are being developed and tested simultaneously, three development and testing pipelines are needed to deliver software to production.
  • FIG. 15 is an illustration showing a multiple release capability development pipeline in accordance with one embodiment of the present invention.
  • FIG. 16 is an illustration showing a multiple release capability development pipeline 1600 with code base synchronization among three pipelines.
  • the present invention can include a comprehensive framework for the Management Of Distributed Environments (MODE), describing four central functions:
  • MODE provides an excellent framework for specifying the management responsibilities that apply to the development environment. These responsibilities are often assigned to the technical group, but as discussed above, there are benefits associated with establishing a dedicated environment management team.
  • the Environment Management component described here uses MODE as a framework, adopts MODE terminology, and focuses on those management tasks that are particularly important in the development environment.
  • the development environment is simpler than the production environment. It is, for example, generally smaller in terms of the number of hardware components and the number of locations. In other respects, however, the development environment is more complex. For example, the amount of change in this environment is generally higher than in the production environment. In fact, the environment can be so fluid that extreme care must be taken to maintain control. On a large engagement, one dedicated technical support person per ten designers and programmers is recommended. The greatest need for technical support is generally during detailed design and programming. It is, however, necessary to start building the technical support function before detailed design.
  • Service Management provides the interface between the Environment Management team, the Development teams, and external vendors or service providers. It manages the level of service that is provided to the developers. In order to maintain this service, three areas must be managed:
  • SLAs (Service Level Agreements)
  • Service Level Agreement: In order to plan and organize the development work appropriately, a Service Level Agreement (SLA) must be in place between the Service Management group (typically part of the Environment Management team) and the developers. As with all other components of the development environment, this agreement should be kept simple. It should specify the following:
  • service levels should be precise and the service must be measurable.
  • the SLA should also specify how to measure this service (for example, system response times, request service times, backup frequencies).
  • the SLA must be managed. It may have to be modified as the environment changes, and it must be reviewed with developers on a regular basis to see if the service level is adequate.
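  • Measurability is easiest to enforce if the SLA targets are recorded in machine-checkable form. The targets and measurements below are invented for illustration:

```python
# Hypothetical SLA targets for the development environment.
SLA = {
    "system_response_seconds": 2.0,     # maximum average response time
    "request_service_hours": 8.0,       # maximum time to service a request
    "backups_per_week": 5,              # minimum backup frequency
}

measured = {
    "system_response_seconds": 1.4,
    "request_service_hours": 12.5,      # breach: requests serviced too slowly
    "backups_per_week": 5,
}

def sla_breaches(targets, actuals):
    breaches = []
    for metric, target in targets.items():
        actual = actuals[metric]
        # backups_per_week is a minimum; the other metrics are maxima.
        ok = actual >= target if metric == "backups_per_week" else actual <= target
        if not ok:
            breaches.append((metric, target, actual))
    return breaches

for metric, target, actual in sla_breaches(SLA, measured):
    print(f"SLA breach: {metric} target={target} actual={actual}")
```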
  • the Environment Management team is responsible for providing the specified level of service, but frequently relies on external vendors and suppliers to perform certain tasks.
  • hardware service is typically provided by the hardware vendor.
  • the Environment Management team must ensure that external vendors provide their services as required. This generally means establishing a contract with the vendor and following up that the contract is respected.
  • the Help Desk function is an important part of the interface between the Service Management group and the developers.
  • the Help Desk makes sure that questions are answered and requests serviced in a timely manner by the right people.
  • the Help Desk is crucial to maintaining productivity.
  • the Help Desk needs particular focus when:
  • In addition to serving internal project needs, the Help Desk must be prepared to coordinate the activities of external suppliers to solve problems. This occurs when several new versions of hardware and system software are introduced, and compatibility issues arise. Part of the coordination is the tracking of request IDs, which refer to the same question but which are assigned differently by each supplier.
  • Defining the SLA, with its specific, measurable criteria, is the basis for continuous improvement.
  • the continuous improvement effort may focus on providing the same level of service with fewer resources, or on providing better service.
  • An important part of quality management is ensuring that the Environment Management team understands the key performance indicators for service delivery, that these indicators are monitored, and that all personnel are adequately equipped with the tools and training to fulfill their responsibilities. While the entire team is responsible for delivering quality, the responsibility for quality management should be assigned to a specific individual on the Environment Management team.
  • Control tasks may include checking and archiving activity logs. Standards and procedures that describe the control function must be established.
  • the Environment Management team must systematically monitor the development environment to ensure that it is stable, provides adequate response times, and satisfies the needs of the developers. This monitoring involves looking at trends and extrapolating them to anticipate problems with disk capacity, system performance, network traffic, and so forth.
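  • Extrapolating a trend to anticipate a problem can be as simple as fitting a line to recent observations. The disk-usage figures and the capacity threshold below are hypothetical:

```python
# Weekly disk usage observations in GB (hypothetical).
weeks = [0, 1, 2, 3, 4]
usage = [40.0, 44.5, 48.0, 53.0, 57.5]
CAPACITY_GB = 100.0

# Least-squares fit of usage = slope * week + intercept.
n = len(weeks)
mean_w = sum(weeks) / n
mean_u = sum(usage) / n
slope = sum((w - mean_w) * (u - mean_u) for w, u in zip(weeks, usage)) / \
        sum((w - mean_w) ** 2 for w in weeks)
intercept = mean_u - slope * mean_w

# Extrapolate to anticipate when the disk is projected to be full.
weeks_until_full = (CAPACITY_GB - intercept) / slope
print(f"projected to reach capacity in week {weeks_until_full:.1f}")
```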
  • Security management involves:
  • the LAN supplier may be willing to take responsibility for LAN support, upgrades, and so on.
  • an existing data processing center may be willing to take responsibility for host operations.
  • Such agreements are very beneficial and make it possible to use project team members more effectively.
  • outsourcing the development environment carries a risk, which can be mitigated by defining a Service Level Agreement with the provider. This will generally be very similar to the SLA established between the Environment Management team and the developers.
  • punitive measures to be applied if the SLA is not respected
  • the resources required for delivering the service can be specified. Questions to address include the staffing of these resources and training to ensure that they are equipped to deliver service as agreed.
  • Planning for change includes choosing options based on a thorough understanding of the positive and negative impacts of change to the environment. Changes to the development environments should be analyzed and planned for as orderly releases rather than a stream of small modifications. Changes should be packaged into releases, and each new release of the development environment should be tested by developing a small but representative part of the system using the new environment. Ideally, this test should be performed by real developers rather than by the Environment Management team. This may be very helpful in order to obtain better buy-in.
  • Strategic planning is traditionally regarded as being less important in a development environment than in the production environment, mainly because the development environment is often viewed as a temporary entity that does not warrant serious strategic considerations. This may be changing, however, with the concept of the enterprise-wide development environment - a single, generic development environment architecture that is tailored to each specific project. In this case, strategic planning for the development environment is vitally important if the environment is to evolve and allow the organization to remain competitive. Strategic planning of the environment management function may, for example, include such questions as support for multi-site development and coordination of multi-sourced systems management.
  • the development environment is subject to constant change (for example, the addition of new tools, or changes to code libraries), which needs to be managed carefully.
  • the Managing Change component comprises three sub-components: Controlling Change, Testing Change, and Implementing Change.
  • a hardware and systems software acceptance test environment where components from external suppliers are validated before the component is accepted into the environment.
  • One or more separate architecture build and test environments where new or modified custom-built components can be thoroughly verified before they are made available.
  • testing should also verify that the expected positive benefits of the change are indeed obtained.
  • Problem Management is generally associated with the discrepancies that result from the testing process, though it may also be applied to the management of design problems detected during verification or validation steps. Problem Management is a crucial process in the system development life cycle. It ensures that quality software is designed, developed, and tested so that initial benefits defined in the business case are in fact realized. A development environment must have a formally defined problem management process to ensure that this objective is met.
  • Formal problem tracking helps to control the analysis and design process by maintaining documentation of all problems and their solutions. Problem tracking improves communication between developers and business representatives, which is particularly helpful in minimizing misunderstandings at later stages of the development cycle.
  • Such formal problem tracking also helps to facilitate the solution process by formalizing a procedure for reviewing, acting on, and solving problems in a timely manner.
  • in this way, management can minimize the risk of misunderstandings at a later date.
  • the documentation serves as an audit trail to justify design and implementation decisions.
  • the development environment varies by segment of a systems development project. The following model is used when discussing different components of the development environment.
  • the development process is iterative and can be entered at different stages depending on the complexity of the changes. Small corrections may not require explicit design, and small enhancements may not require any high-level design.
  • the shaded, elliptical labels in the above figure indicate how the development process can be entered depending on the magnitude of the change.
  • the iterative nature of the development process is important since it implies that components of the development environment, which are put in place for design (for example), must be maintained, since they will continue to be used until the end of system test and beyond. Multiple releases of the business application may also be under concurrent development at different stages. This may lead to very active use of design, construction, and testing tools at the same time.
  • Tool support may help enforce standards, and such tools are discussed under Tools - System Building - Analysis & Design (below).
  • the design process includes numerous activities, which range from high-level general considerations to low-level detailed issues.
  • the overall objective of design is to transform functional and technical specifications into a blueprint of the system, one that will effectively guide construction and testing. While requirements analysis and specification deals with what the system must do, design addresses how the system will be constructed. Validating that the design actually meets the requirements for functionality, performance, reliability, and usability is essential.
  • the quality of the design process directly affects the magnitude of the efforts required to construct and test the system, as well as the maintenance effort. Investments in defining high-quality design standards and procedures and in integrating tools are therefore particularly important. They may, for example, have a direct impact on the degree of reuse achieved. In addition, adequate training must be provided to ensure that the designers make optimal use of the environment provided.
  • parts of design may occur after system test starts, as in the case of an urgent change request, or when a significant inconsistency is detected in system test.
  • Some reverse engineering work may also occur before design or during construction.
  • Usability is an important (and often overlooked) consideration in system design. Usability is more than a well-designed user interface - the way in which business processes are modeled, how they are implemented within the system, and how they are presented to the user all contribute to the overall usability of the system. Usability is an iterative process of refinement that results in systems that are easy to learn, efficient, and enjoyable. In the very broadest sense, usability is the thoughtful, deliberate design approach that considers users throughout the solutions-building process, from start to finish. For this reason, usability guidelines should be defined and followed at every stage of system design. This, along with regular usability reviews and tests both internally, and by target user groups (by using prototypes), helps to reduce the risk of a poorly received system.
  • the User Interface has become increasingly important as systems become more and more user-facing. As multimedia technologies evolve, allowing the development of richer user interfaces, so the design processes must adapt to reflect these new technologies. The processes that surround the design of media content are similar to those of regular system design, and many of the same issues that apply to designing traditional user interfaces also apply to the design of media content. The major change is the involvement of media content designers - a group of people not traditionally associated with system design and development. As their presence is relatively new to the scene of systems development, it is often the case that media content designers are not fully integrated into the development team - a potentially costly mistake. It is important to ensure that media content designers are involved in the design process at a very early stage, and that they are fully integrated into the application design and construction teams.
  • Valuable guidelines give assistance in areas where judgment is important and where standards are not easy to define. Valuable guidelines may include:
  • Reverse Engineering is a set of techniques used to assist in reusing existing system components. Most of the time, this work is performed manually: one person studies thick listings to understand data layouts and processing rules. The person gradually builds a higher-level understanding of how the components work and interact, effectively reverse engineering the system into a conceptual model. It may be necessary to study certain pieces of code to understand how they work, but reverse engineering is not limited to code. For example, these techniques might help understand the data-model of a legacy application, in order to better design the new applications that will coexist with it.
  • the supporting tools can, however, reduce the amount of manual effort needed and significantly lessen the amount of non value-added activities, such as "find all the places in a program that affect the value of a given variable".
  • round-trip reengineering provides the developer with a way of modifying a component model and generating the code, then at a later date modifying the code at predefined locations in the source code and regenerating, thus enabling the model and the code to maintain two-way synchronization.
  • components to be reverse engineered can be part of a custom-built system or part of a software package.
  • Tools should be chosen based on knowledge of the system, the amount of code to be processed, and the experience of the personnel involved.
  • Packaged Component Integration applies to the use of any third party (or previously developed) technical components that may be integrated into the target system. This can range from simple components offering limited functionality (worksheet or charting GUI components), to components handling a significant portion of the application architecture (data access components and firewalls). The process involves a number of stages:
  • Construction covers both generation of source code and other components as well as programming and unit test. It may also involve help text creation and string test. As construction is a large part of system building, the benefits of streamlining this process are significant. Since several aspects of construction are rather mechanical, it is often fairly easy to simplify this process and to automate parts of it, particularly if the design is of high quality.
  • system test is changing in nature. Firstly, the testing of interfaces to other systems is becoming an ever larger part of system test. Secondly, system test increasingly applies to a new release of an existing system. In addition, it is worth noting that as design and construction are increasingly automated, system test is becoming a larger part of the total development effort.
  • IMPORTANT: When planning system test, it is vital that the testing of all target platforms is included in the test plan. For each platform that is supported by the system, there must be a separate set of tests.
  • Component-based development may have an impact on the way in which testing should be performed.
  • Configuration management provides the basis for promoting a configuration from the construction environment to the system test environment. As test cycles are run and fixes implemented, migration can become complex, requiring flexible mechanisms for locking and unlocking system components and analyzing the impacts of change.
  • Component Test is the testing of an individual piece of the solution. All components, including application programs, conversion programs, and input/output modules, are subject to component test. The objective is to ensure that the component implements the program specifications. At the end of component test, all lines of code should have been exercised, keeping in mind the specified functional and quality requirements.
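  • As a sketch, a component test for a hypothetical input module might look like the following; the objective is that, by the end of the test, every line and branch of the component has been exercised against its specification:

```python
import unittest

def parse_member_id(raw: str) -> str:
    """Hypothetical component: normalizes a health-plan member ID.
    Specification: strip whitespace, upper-case, reject empty input."""
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("member ID must not be empty")
    return cleaned

class ParseMemberIdComponentTest(unittest.TestCase):
    def test_normalizes_valid_input(self):
        self.assertEqual(parse_member_id("  ab123 "), "AB123")

    def test_rejects_empty_input(self):
        # Exercises the error branch so all lines of code are covered.
        with self.assertRaises(ValueError):
            parse_member_id("   ")

if __name__ == "__main__":
    unittest.main()
```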
  • Assembly Test: The assembly test tests the interaction of related components to ensure that the components, when integrated, function properly. Assembly test ensures that data is passed correctly between screens in a conversation or batch process and that messages are passed correctly between a client and a server.
  • the specification tested is the technical design.
  • the application flow diagram within the technical design depicts the assemblies, either on-line conversations or batch assemblies, that will be assembly tested. Testing is therefore organized by assembly rather than by business function. By the completion of assembly testing, the system should be technically sound, and data flow throughout the system should be correct. Component and assembly testing ensures that all transactions, database updates, and conversation flows function accurately. Testing in later stages will concentrate on user requirements and business processes, including work flow.
  • Benefits Realization Test tests that the business case for the system will be met. The emphasis here is on measuring the benefits of the new system, for example: increased productivity, decreased lead times, or lower error rates. If the business case is not testable, the benefits realization test becomes more of a buyer signoff.
  • benefits realization test occurs prior to complete deployment of the system and utilizes the same environment that was used for the service-level test piece of operational readiness test.
  • Tools are put in place to collect data to prove the business case (e.g., count customer calls).
  • a team of people is still needed to monitor the reports from the tools and prove that the business case is achieved. The size of the team depends upon the number of users and the degree to which tools can collect and report the data.
  • the Product Test tests the entire application to ensure that all functional and quality requirements have been met. Product testing may occur at multiple levels. The first level tests assemblies within an application. The next level tests applications within a system, and a final level tests systems within a solution. Within the multiple levels, the purpose is the same.
  • the product test tests the actual functionality of the solution as it supports the user requirements: the various cycles of transactions, the resolution of suspense items, the work flow within organizational units and among these units.
  • the specification against which the product test is run includes all functional and quality requirements. The testing is organized by business function.
  • Operational Readiness Test: The objective of the operational readiness test is to ensure that the application can be correctly deployed.
  • the operational readiness test is also commonly known as the readiness test, roll-out test, release test, or the conversion test.
  • the operational readiness test becomes especially key in client/server environments. It has four parts:
  • Roll out test ensures that the roll out procedures and programs can install the application in the production environment.
  • Service level test ensures that once the application is rolled out, it provides the level of service to the users as specified in the Service Level Agreement (SLA).
  • Roll out verification ensures that the application has been correctly rolled out at each site. This test, developed by the work cell or team performing operational readiness test, should be executed during each site installation by the work cell or team in charge of the actual roll out of the application.
  • the operational readiness test assumes a completely stable application and architecture in order for it to be successful, and therefore, is heavily reliant on the previous testing stages.
  • the operational readiness test is the point in the development process where all the application development, architecture development, and preparation tasks come together.
  • the operational readiness test ensures that the application and architecture can be installed and operated in order to meet the SLA.
  • FIG. 17 is an illustration showing a Development Tools Framework in accordance with one embodiment of the present invention.
  • the development environment is built upon an integrated set of tools and components, each supporting a specific task or set of tasks in the development process.
  • the central component, System Building, is supported by the eight management components:
  • Information Management tools 902 manage the information that supports the entire project - information that is used both in systems building and in other management processes
  • Security Management tools 916 enable the development of security components
  • Program and Project Management tools 914 assist the management teams in their daily work
  • Environment Management tools 906 provide the facilities to maintain the development environment
  • Release Management tools 918 manage the simultaneous development of multiple releases
  • Configuration Management tools 910 cover the version control, migration control and change control of system components such as code and its associated documentation
  • Productivity tools 1702 provide the basic functionality required to create documents, spreadsheets, and simple graphics or diagrams
  • Collaborative tools 1704 enable groups of people to communicate and to share information, helping them work together effectively, regardless of location
  • Process Integration tools 1706 enforce the correct sequencing of tasks and tools in conformance with a pre-defined methodology. An efficient development environment requires good tools. For general issues regarding tool selection, please refer to the general Product Selection Considerations.
  • Productivity Tools: While many tools are developed in order to support a specific task (for example, a source code editor), there is a family of tools that are generally required across the board, often known as productivity tools or office automation tools. These tools, typically packaged as integrated suites of software, provide the basic functionality required to create documents, spreadsheets, and simple graphics or diagrams. More recently, the ability to access the Internet and browse electronic documentation has been added to the suite of productivity tools.
  • productivity tools include:
  • E-mail provides the capability of sending and receiving messages electronically. In addition to the ability to send simple ASCII text, e-mail systems usually provide the capability to attach binary files to messages. E-mail is a convenient tool for distributing information to a group of people, as it has the advantage of delivering content directly to the 'mailbox' of each individual, rather than relying on individuals to access a central data repository in order to retrieve the information.
  • a gateway will be required to manage communication beyond the local environment. This will bring with it security implications, as the local environment will no longer be isolated.
  • Teamware provides the ability to capture and share information across a project through the use of common-access, structured databases.
  • a good example of teamware is the Knowledge Xchange. Teamware may be used to share many different types of information, for example:
  • Resource reservation (for example, meeting rooms)
  • Teamware will generally only be effective when used within large groups of people. Unless a critical mass of people is achieved and content is regularly added to the system, interest will soon dwindle, and the system will no longer be of any value.
  • Group scheduling tools help to centrally manage the personal schedules of a group of people. This offers the advantage of being able to coordinate events that require the participation of a number of people automatically by checking 'group availability' rather than checking with each person individually. These tools may also be used to schedule other resources such as meeting rooms and equipment.
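  • Checking 'group availability' amounts to intersecting the free slots of each individual calendar, as in this sketch (the schedules are invented):

```python
# Free one-hour slots per person, keyed by hour of day (hypothetical data).
free_slots = {
    "alice": {9, 10, 11, 14, 15},
    "bob":   {10, 11, 13, 14},
    "carol": {9, 10, 14, 16},
}

def group_availability(calendars):
    """Hours at which every participant is free."""
    people = iter(calendars.values())
    common = set(next(people))
    for slots in people:
        common &= slots
    return sorted(common)

print(group_availability(free_slots))   # -> [10, 14]
```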
  • Audio and video conferencing tools allow many individuals in different locations to communicate simultaneously. Audio conferencing is not a new concept, but remains a valuable tool for conducting meetings where the issues being discussed do not require the support of visual aids. Video conferencing takes this one step further, allowing people to interact both aurally and visually, making for a much richer method of communication.
  • the video conferencing system should be designed with that fact in mind and provide for some degree of interoperability between dissimilar systems. For example, being able to connect a desktop-based video conference user with a room-based video conference user.
  • Video conferencing is an advantage when one person needs to see the other person's face and reactions, read body language, build relationships, and so on.
  • when communication is more technical, for example, fixing a bug, collaborative design, document writing, or presenting a demonstration, it is more critical to be able to see what the other person is seeing, or to be able to show information at hand.
  • application sharing assumes greater importance.
  • video conferencing replaces working in the same place.
  • the real value of synchronous communication is not in being able to see someone else at the other end, it is in being able to share a working session on a work object (see Collaboration - Shared Workspace, below).
  • Shared workspace systems may be categorized as follows:
  • An electronic whiteboard provides a large, clear screen that can be viewed close up and at a wide angle, upon which participants may 'write' with an infrared pen or a mouse. Images may also be pasted onto the whiteboard.
  • Regular workstations on a network may also be used for electronic whiteboarding, providing the appropriate software is installed.
  • Electronic whiteboarding often works in conjunction with video conferencing applications.
  • Application sharing allows participants to see and control the same application running on multiple PCs. In this way they can simultaneously create and edit a single, common file. Application sharing may be combined with audio conferencing.
  • Process Management may be categorized into two areas:
  • Simple process integration 848 which concerns the simple integration of a sequence of tasks, according to a prescribed development methodology.
  • Task integration must be provided in accordance with the methodology and should provide direct support for the methodology. Effective task integration therefore reduces the need to consult the methodology.
  • Simple Process Integration concerns the integration of a limited sequence of tasks, for an individual, according to a prescribed development methodology.
  • the construction process can be supported within an integrated development environment tool by a menu with the following choices:
  • Real-time tools integration is most commonly provided by vendors who deliver integrated environments.
  • Workflow Management tools address this problem by providing the ability to define, manage, and execute automated business processes through an electronic representation of the process, both in terms of what has to be done, and by whom.
  • Workflow Management can be applied to many processes within the development environment, such as quality assurance, migration, design/construction, system test, and standards development.
  • Security Management tools provide the components that make up the security layer of the final system, and may provide required security controls to the development environment. While some of these tools may be considered as nothing more than security-specific Packaged Components, many are an integral part of the development environment toolset. Security Management tools include:
  • Intrusion detection discovers and alerts administrators of intrusion attempts.
  • Network assessment performs scheduled and selective probes of the network's communication services, operating systems, and routers in search of those vulnerabilities most often used by unscrupulous individuals to probe, investigate, and attack a network.
  • Platform security minimizes the opportunities for intruders to compromise corporate systems by providing additional operating system security features.
  • Web-based access control enables organizations to control and manage user access to web based applications with restricted access.
  • Mobile code security protects corporate resources, computer files, confidential information, and corporate assets from possible mobile code attack.
  • E-mail content filtering allows organizations to define and enforce e-mail policies to ensure appropriate e-mail content.
  • Application development security toolkits allow programmers to integrate privacy, authentication, and additional security features into applications by using a cryptography engine and toolkit.
  • Encryption - provides confidential communications to prevent the disclosure of sensitive information as it travels over the network. This capability is essential for conducting business over an unsecured channel such as the Internet.
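  • As a minimal sketch of encrypting data before it travels over an unsecured channel, the example below uses the third-party Python cryptography package (assumed to be installed; key management is deliberately omitted):

```python
from cryptography.fernet import Fernet

# In practice the key would come from a key-management service, not be
# generated ad hoc and held in memory like this.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"patient record 4711: sensitive payload")
print(token)                  # ciphertext, safe to send over the network
print(cipher.decrypt(token))  # original plaintext, recovered with the key
```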
  • Public key infrastructure provides public-key encryption and digital signature services. The purpose of a public-key infrastructure is to manage keys and certificates. A PKI enables the use of encryption, digital signatures, and authentication services across a wide variety of applications.
  • Firewall protects against theft, loss, or misuse of important data on the corporate network, as well as protection against attempted denial of service attacks. Firewalls may be used at various points in the network to enforce different security policies. Product Considerations: a) Does the tool use role-based access control?
  • Role-based access control establishes access rights and profiles based on job functions within the environment. If different access rights are required for security administrators vs. code developers vs. code reviewers vs. testers, then the correct access can be established based on these functions.
  • b) What are the auditing capabilities of the tool? The security administrator should be able to granularly configure what is being audited by the tool.
  • the audit logs should be able to optionally record User ID, time-of-day, location of access, successful and unsuccessful access or change attempts, etc. c) What are the performance implications of the tool?
  • Some security services, such as content scanning or auditing, may add noticeable processing time and resource requirements to the system.
  • Tools should be architected in such a way that performance impacts are, or can be configured to be, minimal.
  • Information Management of the development architecture is provided through an integrated development repository.
  • tools share a common repository of development objects, design documents, source code, test plans and data.
  • ideally, the repository would be a single database with an all-encompassing information model.
  • in practice, however, the repository must be built by integrating the repositories of the different development tools through interfaces.
  • Tool vendors may also build part of the integrated repository by integrating specific products. Implementation Considerations: a) Is there a desire to enforce consistency in the development effort?
  • a repository can store standard data, process, design, and development objects for use during application development activities. Developers then use these standard objects during implementation. As objects are defined once in the repository and reused throughout the implementation process, applications display a consistent look, feel, and flow while enforcing the standards inherent in the repository objects.
  • a repository houses many application development components including data definitions, process models, page designs, window designs, common GUI widgets, message layouts, and copybooks.
  • a repository provides the development teams with the ability to reuse objects defined in the repository in a controlled manner. Most engagements consider using a repository once the number of developers exceeds ten.
  • a repository management tool may be required to provide an integration platform for existing and future tools, providing communication among all tools where appropriate.
  • the repository may need to be extended by the Engagement team to support custom objects defined by the Application Development team. Some repositories support user-defined objects as part of the base functionality. Others allow customization of the repository by the user while some are not designed for customization at all. If the repository requires extensive customization, a buy versus build decision may be required. b) Is a logical or physical repository more beneficial?
  • a physical repository is implemented as a single product. Many CASE tools employ this type of repository by housing all application development objects in a single source. Application development tools are then tightly integrated with the repository.
  • a logical repository integrates multiple tools to form an application development repository.
  • the various tools employed in the development environment are bridged together by custom architecture components. This approach is commonly used when the Engagement team takes a best of breed approach to tool selection.
  • the Engagement team should determine whether the repository must support multiple platforms.
  • the selected tool should not only support current platforms but also support the future platform direction of the project.
  • the repository should support multiple versions of objects. By doing this, the repository can support applications in multiple phases of development.
  • the repository tool should control access to the versions of objects by providing check-in and check-out functionality. This allows multiple developers in various phases of development to work from the same repository while allowing only one developer update access to a particular object at a time.
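  • The check-in/check-out discipline might be sketched as follows; the object identifiers and the in-memory lock store are illustrative only:

```python
class RepositoryLockError(Exception):
    pass

class Repository:
    """Grants update access to one developer per object at a time."""
    def __init__(self):
        self._checked_out = {}   # object id -> developer holding the lock

    def check_out(self, object_id, developer):
        holder = self._checked_out.get(object_id)
        if holder is not None:
            raise RepositoryLockError(f"{object_id} is checked out by {holder}")
        self._checked_out[object_id] = developer

    def check_in(self, object_id, developer):
        if self._checked_out.get(object_id) != developer:
            raise RepositoryLockError(f"{developer} does not hold {object_id}")
        del self._checked_out[object_id]

repo = Repository()
repo.check_out("customer_id", "designer_1")
# repo.check_out("customer_id", "designer_2")  # would raise RepositoryLockError
repo.check_in("customer_id", "designer_1")
```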
  • Engagement teams often choose a tool that can be used in other areas of the development environment. Many Engagement teams select data modeling tools that can double as Information Management tools. Using one tool for multiple purposes results in fewer integration points in the architecture and less time and cost training personnel on multiple tools.
  • when repositories do not provide sufficient versioning functionality, it is common to have more than one repository on large projects. Typically there would be one repository for development, one for system test, and one for production. This improves overall control. Another reason could be that there is concurrent development of different releases, each requiring its own repository. Hence, on a large project, a tool that supports multiple repositories is often a requirement.
  • the repository contents are effectively the building blocks of the system and have broad reuse.
  • a facility for security is required to prevent unauthorized changes to the repository elements and hence to ensure high quality and consistent repository content. For example, restrictions are often placed on making changes to data elements because ad-hoc changes by a single designer could have devastating impacts on other parts of the design.
  • Repository access control is important where developers in the development environment need to be assigned different rights to the repository. Typically, the developers will be placed in groups with diminishing access rights such as repository administrator, technical support, designer, or programmer. These access rights may relate to read/write/modify/delete authority. This method of access control is far more flexible than simple object locking. h) Does the tool provide repository reporting facilities?
  • Repository reports serve as an audit trail for changes to objects within a repository and can be used to communicate these changes to the entire team.
  • the Repository Management tool should provide this utility.
  • Reports for impact analysis are extremely useful in the change control process.
  • if the repository maintains relationships between repository objects, 'where-used' and 'contains' report facilities can be very useful when dealing with change requests.
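  • A 'where-used' report is essentially a traversal of the relationships the repository maintains, as in this sketch (the repository contents are hypothetical):

```python
# Hypothetical 'contains' relationships: object -> data elements it uses.
contains = {
    "DLG_ORDER_ENTRY": ["customer_id", "order_total"],
    "RPT_DAILY_SALES": ["order_total", "sale_date"],
    "DLG_CUSTOMER":    ["customer_id", "customer_name"],
}

def where_used(element):
    """Report every repository object that uses a given data element."""
    return sorted(obj for obj, elems in contains.items() if element in elems)

# Impact analysis for a change request against 'customer_id'.
print(where_used("customer_id"))   # -> ['DLG_CUSTOMER', 'DLG_ORDER_ENTRY']
```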
  • Active Information Management tools can be used to generate components, whereas passive tools are used to hold information about the system but are not used to build it.
  • the use of an active Information Management tool increases productivity because of the facility to generate components.
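  • An active tool generates components from repository definitions rather than merely recording them. A toy sketch follows; the data-element definitions and the generated artifact are invented:

```python
# Repository definition of a record, held as data rather than code.
definition = {
    "name": "Member",
    "elements": [("member_id", "str"), ("plan_code", "str"), ("age", "int")],
}

def generate_class(defn):
    """Generate Python source for a record class from its repository entry."""
    lines = [f"class {defn['name']}:",
             "    def __init__(self, " +
             ", ".join(f"{n}: {t}" for n, t in defn["elements"]) + "):"]
    lines += [f"        self.{n} = {n}" for n, _ in defn["elements"]]
    return "\n".join(lines)

source = generate_class(definition)
print(source)          # the generated component
exec(source)           # an active tool would write this to the code base instead
print(Member("M123", "GOLD", 42).plan_code)
```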
  • if the repository needs to be customized in order to integrate with all the required tools, then it is important that the Repository tool has a published interface and an underlying data model. Using such a repository makes interfacing other tools with the repository considerably easier and less time consuming.
  • Flexibility is important if a number of point tools are to be used in the development process, as opposed to using an integrated CASE tool.
  • naming standards must be validated to allow better navigation of the repository and easier reuse of elements.
  • Repository Management is the key information management tool.
  • the repository should be:
  • Open, with a published interface and an underlying data model.
  • multiple repositories may be used.
  • One repository can be integrated with an upper-CASE design tool, and another with a lower-CASE design tool, each of them offering the best capabilities in its respective domain. It is then key that repositories offer import/export capabilities, so proper bridging/synchronizing capabilities can be developed.
  • FIG. 18 is an illustration showing information captured in the Repository and reused.
  • a development repository results in three important benefits for a development organization and for the business units they support:
  • Information is kept in one place, in a known and organized structure. This means that effort is not wasted initially in recreating work that already exists, and effort is not wasted later on when reconciling relevant information. This is often referred to as “full life-cycle support.”
  • Design information, created for one step of the development process, can be fed to the next step, reducing effort and knowledge "gaps" or misunderstandings.
  • the repository captures information relevant to each stage in application development: design 1802, construction 1804, testing 1806, migration, execution, and operation 1808.

EP01927029A 2000-04-13 2001-04-13 Verfahren zur gesundheitslösungmodelle Withdrawn EP1272945A2 (de)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US54908600A 2000-04-13 2000-04-13
US54892500A 2000-04-13 2000-04-13
US548925 2000-04-13
US09/549,237 US6701345B1 (en) 2000-04-13 2000-04-13 Providing a notification when a plurality of users are altering similar data in a health care solution environment
US549086 2000-04-13
US549237 2000-04-13
PCT/US2001/012270 WO2001080092A2 (en) 2000-04-13 2001-04-13 Method for a health care solution framework

Publications (1)

Publication Number Publication Date
EP1272945A2 true EP1272945A2 (de) 2003-01-08

Family

ID=27415544

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01927029A Withdrawn EP1272945A2 (de) 2000-04-13 2001-04-13 Verfahren zur gesundheitslösungmodelle

Country Status (4)

Country Link
EP (1) EP1272945A2 (de)
AU (2) AU2001253522B2 (de)
CA (1) CA2406421C (de)
WO (1) WO2001080092A2 (de)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030088472A1 (en) * 2001-11-05 2003-05-08 Sabre Inc. Methods, systems, and articles of manufacture for providing product availability information
US8782097B2 (en) 2003-07-25 2014-07-15 Honeywell International Inc. Multiple system compatible database system and method
CN102467421B (zh) * 2010-11-19 2014-04-16 深圳市金蝶友商电子商务服务有限公司 一种基于租户数据的处理方法及计算机
US11636955B1 (en) 2019-05-01 2023-04-25 Verily Life Sciences Llc Communications centric management platform
CN111683066B (zh) * 2020-05-27 2023-06-23 平安养老保险股份有限公司 Heterogeneous system integration method and apparatus, computer device, and storage medium
CN111721536B (zh) * 2020-07-20 2022-05-27 哈尔滨理工大学 Rolling bearing fault diagnosis method with an improved model transfer strategy
CN112579013A (zh) * 2020-12-24 2021-03-30 安徽航天信息科技有限公司 File filling and printing method, apparatus, and storage medium
CN112964469B (zh) * 2021-02-28 2022-05-27 哈尔滨理工大学 Transfer-learning-based online fault diagnosis method for rolling bearings under variable load

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335343A (en) * 1992-07-06 1994-08-02 Digital Equipment Corporation Distributed transaction processing using two-phase commit protocol with presumed-commit without log force
US5721943A (en) * 1993-10-14 1998-02-24 International Business Machines Corporation Negotiable locks for concurrent access of control data by multiple programs
US5909570A (en) * 1993-12-28 1999-06-01 Webber; David R. R. Template mapping system for data translation
US5715397A (en) * 1994-12-02 1998-02-03 Autoentry Online, Inc. System and method for data transfer and processing having intelligent selection of processing routing and advanced routing features
US6023694A (en) * 1996-01-02 2000-02-08 Timeline, Inc. Data retrieval method and apparatus with multiple source capability
US5859972A (en) * 1996-05-10 1999-01-12 The Board Of Trustees Of The University Of Illinois Multiple server repository and multiple server remote application virtual client computer
US5787450A (en) * 1996-05-29 1998-07-28 International Business Machines Corporation Apparatus and method for constructing a non-linear data object from a common gateway interface
US5903889A (en) * 1997-06-09 1999-05-11 Telaric, Inc. System and method for translating, collecting and archiving patient records
US5805900A (en) * 1996-09-26 1998-09-08 International Business Machines Corporation Method and apparatus for serializing resource access requests in a multisystem complex
US5933824A (en) * 1996-12-23 1999-08-03 Lsi Logic Corporation Methods and apparatus for locking files within a clustered storage environment
JP2001513926A (ja) * 1997-02-28 2001-09-04 Siebel Systems, Inc. Partially replicated distributed database with multiple levels of remote clients

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0180092A2 *

Also Published As

Publication number Publication date
WO2001080092A3 (en) 2002-08-29
WO2001080092A2 (en) 2001-10-25
AU5352201A (en) 2001-10-30
CA2406421A1 (en) 2001-10-25
CA2406421C (en) 2012-02-21
AU2001253522B2 (en) 2007-04-05

Similar Documents

Publication Publication Date Title
US7403901B1 (en) Error and load summary reporting in a health care solution environment
US6701345B1 (en) Providing a notification when a plurality of users are altering similar data in a health care solution environment
US6405364B1 (en) Building techniques in a development architecture framework
US6370573B1 (en) System, method and article of manufacture for managing an environment of a development architecture framework
US6662357B1 (en) Managing information in an integrated development architecture framework
US6256773B1 (en) System, method and article of manufacture for configuration management in a development architecture framework
US7139999B2 (en) Development architecture framework
US6324647B1 (en) System, method and article of manufacture for security management in a development architecture framework
US7467198B2 (en) Architectures for netcentric computing systems
US7610233B1 (en) System, method and article of manufacture for initiation of bidding in a virtual trade financial environment
US7069234B1 (en) Initiating an agreement in an e-commerce environment
US7167844B1 (en) Electronic menu document creator in a virtual financial environment
US6629081B1 (en) Account settlement and financing in an e-commerce environment
US6721713B1 (en) Business alliance identification in a web architecture framework
US6473794B1 (en) System for establishing plan to test components of web based framework by displaying pictorial representation and conveying indicia coded components of existing network framework
US6536037B1 (en) Identification of redundancies and omissions among components of a web based architecture
Bass et al. Architecture-based development
US7315826B1 (en) Comparatively analyzing vendors of components required for a web-based architecture
US6615166B1 (en) Prioritizing components of a network framework required for implementation of technology
US6957186B1 (en) System method and article of manufacture for building, managing, and supporting various components of a system
US6519571B1 (en) Dynamic customer profile management
US8121874B1 (en) Phase delivery of components of a system required for implementation technology
US7885793B2 (en) Method and system for developing a conceptual model to facilitate generating a business-aligned information technology solution
Rozinat Process mining: conformance and extension
US8782598B2 (en) Supporting a work packet request with a specifically tailored IDE

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20021104

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17Q First examination report despatched

Effective date: 20061009

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ACCENTURE GLOBAL SERVICES GMBH

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ACCENTURE GLOBAL SERVICES LIMITED

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20101221