WO2001008004A2 - A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation - Google Patents

A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation Download PDF

Info

Publication number
WO2001008004A2
WO2001008004A2 (PCT/US2000/020280)
Authority
WO
WIPO (PCT)
Prior art keywords
assessment
management
capability
practices
level
Prior art date
Application number
PCT/US2000/020280
Other languages
French (fr)
Other versions
WO2001008004A8 (en)
Inventor
Nancy S. Greenberg
Colleen R. Winn
Original Assignee
Accenture Llp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Accenture Llp filed Critical Accenture Llp
Priority to AU63752/00A priority Critical patent/AU6375200A/en
Publication of WO2001008004A2 publication Critical patent/WO2001008004A2/en
Publication of WO2001008004A8 publication Critical patent/WO2001008004A8/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management

Definitions

  • the present invention relates to IT operations organizations and more particularly to evaluating a maturity of an operations organization by determining capability levels of a monitoring process area.
  • frameworks and gap analysis have been used to capture the best practices of IT management and to determine areas of improvement. While frameworks and gap analysis are intended to capture observable weaknesses in processes, they do not provide data with sufficient objectivity and granularity upon which a comprehensive improvement plan can be built.
  • a system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a monitoring process area when gauging a maturity of an operations organization. First, a plurality of process attributes are defined. Next, a plurality of generic practices are determined for each of the process attributes.
  • the generic practices include base practices such as polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages. Thereafter, a maturity of an operations organization is calculated based at least in part on the achievement of the generic practices.
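  • By way of illustration only, the base practices listed above can be modeled as a checklist whose degree of achievement feeds the maturity calculation. The following Java sketch is not taken from the patent: the class and method names are hypothetical, and the scoring rule (the fraction of base practices achieved) is merely one plausible way to quantify achievement.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a checklist of monitoring base practices whose
// achievement contributes to an operations-maturity calculation.
public class MonitoringProcessArea {

    // Base practices named in the description, keyed to an achieved/not-achieved flag.
    private final Map<String, Boolean> basePractices = new LinkedHashMap<>();

    public MonitoringProcessArea() {
        for (String practice : new String[] {
                "Poll for current status",
                "Gather and document monitoring information",
                "Classify events and assign severity levels",
                "Assess impact and analyze faults",
                "Route faults to be corrected",
                "Map event types to diagnostic/corrective procedures",
                "Log events locally and/or remotely",
                "Suppress messages until thresholds are reached",
                "Display status on consoles in multiple formats and locations",
                "Issue commands on remote processors",
                "Set up and change local/remote filters and threshold schemes",
                "Analyze traffic patterns and send broadcast messages" }) {
            basePractices.put(practice, Boolean.FALSE);
        }
    }

    // Record assessment evidence that a base practice is performed.
    public void markAchieved(String practice) {
        basePractices.computeIfPresent(practice, (k, v) -> Boolean.TRUE);
    }

    // One plausible scoring rule: the fraction of base practices achieved.
    public double achievementRatio() {
        long achieved = basePractices.values().stream().filter(Boolean::booleanValue).count();
        return (double) achieved / basePractices.size();
    }
}
```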
  • the present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment.
  • the present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability.
  • the present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.
  • the present invention can thus be used by organizations in a variety of contexts.
  • An organization can use the present invention to assess and improve its processes.
  • An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management.
  • the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.
  • Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention.
  • Figure 2 is a flowchart illustrating generally the steps associated with the present invention.
  • Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention.
  • Figure 4 is an illustration showing a measure of each process area to the capability levels according to one embodiment of the present invention.
  • Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention.
  • Figure 6 is an illustration showing an overview of the operational maturity model.
  • Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention.
  • Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention.
  • Figure 9 is an illustration showing a mapping of attribute ratings to the process capability levels determination in accordance with one embodiment of the present invention.
  • Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention.
  • Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.
  • the present invention comprises a collection of best practices, both from a technical and management perspective.
  • the collection of best practices is a set of processes that are fundamental to a good operations environment.
  • the present invention provides a definition of an "ideal” operations environment, and also acts as a road map towards achieving the "ideal" state.
  • Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
  • A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112.
  • the workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, a communication adapter 134 for connecting the workstation to a communication network 135 (e.g., a data processing network), and a display adapter 136 for connecting the bus 112 to a display device 138.
  • the workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system.
  • a preferred embodiment of the present invention is written using the Java, C, and C++ languages and utilizes object-oriented programming methodology.
  • Object oriented programming has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.
  • OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program.
  • An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task.
  • OOP therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
  • OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture.
  • a component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point.
  • An object is a single instance of the class of objects, which is often just called a class.
  • a class of objects can be viewed as a blueprint, from which many objects can be formed.
  • OOP allows the programmer to create an object that is a part of another object.
  • the object representing a piston engine is said to have a composition-relationship with the object representing a piston.
  • a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
  • OOP also allows creation of an object that "depends from” another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition.
  • a ceramic piston engine does not make up a piston engine. Rather it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic.
  • the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it.
  • the object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
  • Because the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class.
  • the ceramic piston engine object overrides these thermal characteristics with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original functions and uses new functions related to ceramic pistons.
  • Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.).
  • a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
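  • To make the composition, inheritance, and polymorphism discussion concrete, the following minimal Java sketch (Java being one of the languages named for the preferred embodiment) restates the piston-engine example; the class names and the single overridden method are illustrative assumptions rather than part of the invention.

```java
// Illustrative restatement of the piston-engine example.
class Piston {
    String material() { return "metal"; }
}

// Composition: a PistonEngine is made up of (among other parts) a Piston.
class PistonEngine {
    protected Piston piston = new Piston();

    // Behavior shared by all piston engines.
    String describePiston() {
        return "piston made of " + piston.material();
    }
}

// Inheritance: a ceramic piston engine "depends from" (is derived from) the piston engine.
class CeramicPiston extends Piston {
    @Override
    String material() { return "ceramic"; }
}

class CeramicPistonEngine extends PistonEngine {
    CeramicPistonEngine() {
        // Overrides the inherited material/thermal characteristics with ceramic-specific ones.
        this.piston = new CeramicPiston();
    }
}

public class EngineDemo {
    public static void main(String[] args) {
        // Polymorphism: the same call behaves differently depending on the engine type.
        PistonEngine[] engines = { new PistonEngine(), new CeramicPistonEngine() };
        for (PistonEngine engine : engines) {
            System.out.println(engine.describePiston());
        }
        // Prints "piston made of metal" then "piston made of ceramic".
    }
}
```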
  • With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
  • Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
  • Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
  • An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
  • An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
  • OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
  • OOP enables software developers to build objects out of other, previously built objects.
  • C++ is an OOP language that offers fast, machine-executable code.
  • C++ is suitable for both commercial-application and systems-programming projects.
  • C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
  • Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
  • Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.
  • Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again.
  • a relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers. Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others.
  • event loop programs require programmers to write a lot of code that should not need to be written separately for every application.
  • the concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
  • Application frameworks reduce the total amount of code that a programmer has to write from scratch.
  • Because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit.
  • the framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
  • a programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
  • a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
  • Behavior versus protocol: Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program.
  • a framework provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
  • a preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, "RFC 1866: Hypertext Markup Language - 2.0" (Nov. 1995), and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996).
  • HTML is a simple data format used to create hypertext documents that are portable from one platform to another.
  • HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.
  • HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing - Text and Office Systems - Standard Generalized Markup Language (SGML).
  • HTML has been the dominant technology used in development of Web-based solutions.
  • HTML has proven to be inadequate in areas such as user interface richness, client-side performance, and the creation of dynamic, real-time Web pages.
  • Sun's Java language addresses many of these client-side shortcomings. With Java, developers can build robust User Interface (UI) components, including custom "widgets" (e.g., real-time stock tickers, animated icons, etc.), and client-side performance is improved.
  • Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance.
  • Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
  • Sun's Java language has emerged as an industry-recognized language for "programming the Internet.”
  • Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language.
  • Java supports programming for the Internet in the form of platform-independent Java applets.”
  • Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API), allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser.
  • Microsoft offers ActiveX Technologies to give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers.
  • ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content.
  • the tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies.
  • the group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages.
  • ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta.”
  • ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications.
  • One embodiment of the present invention includes three different, but complementary, dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization.
  • The following three dimensions constitute the framework of the present invention: 1) the Operations Environment Dimension, 2) the Capability Dimension, and 3) the Maturity Dimension.
  • the first dimension describes and organizes the standard operational activities that any IT organization should perform.
  • the second dimension provides a context for evaluating the performance quality of these operational activities.
  • This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability.
  • the final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.
  • the Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations.
  • the Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment.
  • the Capability dimension can be construed as a qualitative mapping of a model operations environment.
  • the Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.
  • Figure 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention.
  • a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in operation 200.
  • the process areas are then grouped into categories, as indicated in operation 202.
  • the categories are grouped in terms of process areas having common characteristics.
  • process capabilities are received for the process areas of the operations organization.
  • Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention.
  • the questionnaire may be used to obtain information on the capability of the IT organization, or a specific IT area or project.
  • category capabilities are calculated for the categories of the process areas in operation 206. A maturity of the operations organization is subsequently determined based on the category capabilities of the categories in operation 208.
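  • The following Java sketch is an informal illustration of operations 200 through 208: process areas are defined and grouped into categories, capability ratings are received (for example, from maturity questionnaires), and category capabilities and an organizational maturity are derived. All identifiers are hypothetical, and the lowest-rating roll-up rule is borrowed from the rating discussion later in this description.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the Figure 2 flow: define process areas, group them into
// categories, receive capability ratings, then derive category and organizational maturity.
public class MaturityAssessment {

    // Operations 200/202: process areas defined and grouped into categories.
    private final Map<String, List<String>> categories = Map.of(
            "Service Management", List.of("Service Level Management", "Service Desk", "User Administration"),
            "Systems Management", List.of("Monitoring", "Performance Management"),
            "Managing Change", List.of("Migration Control"),
            "IT Operations Planning", List.of("Capacity Modeling"));

    // Capability levels received per process area (e.g., derived from questionnaire responses).
    private final Map<String, Integer> capabilityByArea = new HashMap<>();

    public void recordCapability(String processArea, int level) {
        capabilityByArea.put(processArea, level);
    }

    // Operation 206: a category's capability is the lowest rating of its process areas
    // (the roll-up rule described later in the document).
    public int categoryCapability(String category) {
        return categories.get(category).stream()
                .mapToInt(area -> capabilityByArea.getOrDefault(area, 0))
                .min().orElse(0);
    }

    // Operation 208: organizational maturity is the lowest category capability.
    public int organizationalMaturity() {
        return categories.keySet().stream()
                .mapToInt(this::categoryCapability)
                .min().orElse(0);
    }
}
```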
  • the user-specified or measured parameters i.e., capability of each of the process areas, may be inputted by any input device, such as the keyboard 124, the mouse 126, the microphone 132, a touch screen (not shown), or anything else such as an input port that is capable of relaying such information.
  • the definitions, grouping, calculations and determinations may be carried out manually or via the CPU 110, which in turn may be governed by a computer program stored on a computer readable medium, i.e., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program.
  • a computer readable medium i.e., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program.
  • dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same.
  • any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.
  • the outputting of the determination of the maturity of the operations organization may be effected by way of the display 138, the speaker 128, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but also may be a component of either a networked client and/or a server.
  • the Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.
  • goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area.
  • the goals signify the scope, boundaries, and intent of each process area.
  • the process goals and purpose may be achieved in an IT organization through various lower-level activities, such as tasks and practices that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced, are the indicators that demonstrate whether the specific process goals or purpose is being achieved.
  • work product describes evidence of base practice implementation. For example, a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.
  • Process Categories: The operations environment is partitioned into three levels (Process Categories, Process Areas, and Base Practices) which reflect the processes within any IT organization.
  • Figure 3 depicts and summarizes the relationship of the Process Categories 300, Process Areas 302, and Base Practices 304.
  • a Process Category has a defined purpose and measurable goals, and consists of a logically related set of Process Areas that collectively address that purpose and those goals in the same general area of activity.
  • the purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four process categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning. The Process Categories are described as follows:
  • the Managing Change category includes all the functions that enable controlled and repeatable management of software/hardware components as they evolve through the development life cycle into production.
  • Process Areas are the second level in the operations hierarchy.
  • the elements of this level are a collection of Base Practices that are performed to achieve the defined purpose of the Process Area.
  • Process Areas refer to a collection of Base Practices that are performed sequentially, concurrently and/or iteratively to achieve the defined purpose of the process area.
  • the purpose describes the unique functional objectives of the process area when instantiated in a particular environment. Satisfying the purpose statement of a process area represents the first step in building process area capability.
  • Process Areas for the Service Management Category include service level management, operations level management, service desk, user administration, and service pricing.
  • the purpose of service level management may be to document the information technology services to be delivered to users. Note that this purpose states a unique functional objective (to establish requirements), and provides a context (service level).
  • Base Practices are the lowest level in the operations hierarchy. Base Practices are essential activities that an IT organization performs to achieve the purpose of a Process Area. A base practice is what an IT organization does.
  • Base Practices of service level management may be to assess business strategy, audit current service levels, determine service requirements and IT's ability to deliver services, prepare a draft SLA, identify the charge-back structure, and agree to SLAs with customers.
  • the Process Areas are expressed in terms of their goals, whereas Base Practices are tasks that need to be carried out to achieve those goals.
  • Base Practices may have work products associated with them.
  • a work product is evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or a SLA report.
  • a service desk example of a process area and associated base practices is as follows:
  • the Capability Dimension refers to formalizing process performance into a quantifiable range of expected results based on the process capability level that can be achieved by following the process. The process capability dimension characterizes the level of capability of each process area within an organization. In other words, the process capability dimension describes how well the processes in the process dimension are performed.
  • the Capability Dimension measures how well an IT organization performs its operational processes. In determining capabilities, the Base Practices are viewed as a guide to what should be done. The related Generic Practices deal with the effectiveness with which the Base Practices are carried out. Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. The present invention has five levels of Process Capability that can be applied to any Process Area. The Capability Dimension provides a means to formalize and quantify the process performance. The Capability Dimension describes how well the processes are performed, as contrasted with Base Practices, which describe what an IT organization does.
  • the Capability Dimension consists of three components: Capability Levels, Process Attributes, and Generic Practices. These are described below.
  • Capability Levels indicate increasing levels of process maturity and are composed of one or more generic practices that work together to provide a major enhancement in the capability to perform the process.
  • the Capability Level is the highest level of the Capability dimension.
  • the Capability Level of a process determines its performance and effectiveness.
  • Each Capability Level has certain Process Attributes associated with it.
  • a Process Attribute is composed of a set of Generic Practices that provide criteria for improving performance.
  • a particular Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are present. Therefore, once the Capability Level is determined, those Process Attributes, and associated Generic Practices, that are required to enhance capability can be identified.
  • Capability Levels offer a staged guideline for improving the capability to perform the defined processes.
  • Capability Levels provide two benefits: they acknowledge dependencies and relationships among the Base Practices of a Process Area, and they help an IT organization identify which improvements should be performed first, based on a plausible sequence of process implementation.
  • Each level provides a major enhancement in capability to that provided by its predecessors in the fulfillment of the process purpose. For example, at Capability Level 1, Base Practices are performed; the performance is ad hoc, informal, and unpredictable. At Capability Level 2, the performance of Base Practices is planned and tracked rather than just performed, thereby offering a significant improvement over Level 1 practice.
  • Capability Levels are applied to each Process Area independent of other Process Areas. An assessment is performed to determine Process Capability for each Process Area, as illustrated in Figure 4.
  • an assessment refers to a diagnostic performed by a trained team to evaluate aspects of an organization's IT operations environment processes.
  • the trained team determines the state of the operational processes, identifies pressing operational process related issues, and obtains organizational support for a process improvement program.
  • Process Areas can, and often do, exist at different levels of capability.
  • the ability to rate Process Areas independently enables an IT organization to focus on process improvement priorities driven from business goals and strategic directions. An example of this is illustrated in Figure 4.
  • process attributes refer to features of a process that can be evaluated on a scale of achievement (performed, partially performed, not performed, etc.) which provide a measure of the capability of the process.
  • measures of capability are based on a set of nine Process Attributes. Process Attributes are used to determine whether a process has reached a given capability.
  • the nine Process Attributes are: Process Performance, Performance Management, Work Product Management, Process Resource, Process Definition, Process Measurement, Process Control, Continuous Improvement, and Process Change.
  • the attributes are evaluated on a four-point scale of achievement. Achieving a given Capability Level depends on the rating assigned to one or more of these attributes.
  • Generic Practices refer to activities that contribute to the capability of managing and improving the effectiveness of the operations environment Process Areas.
  • a generic practice is applicable to any and all Process Areas. It contributes to overall process management, measurement, and the institutionalization capability of the Process Areas.
  • the allocation of adequate resources to a process is a Generic Practice and is applicable to all processes.
  • Service Level Management and Migration Control are two different Process Areas with different Base Practices, goals, and purposes. However, they share the same Generic Practice of allocation of adequate resources.
  • Operational Maturity Dimension characterizes the maturity of an entire operations IT organization.
  • maturity refers to the degree of order (structure or systemization) and effectiveness of a process.
  • the degree of order determines its state of maturity. Less mature processes are less ordered and less effective; more mature processes are more ordered and more effective.
  • the Capability Dimension focuses on the determination of the capability of individual processes, within an operations organization, in achieving their stated goals and purpose.
  • the Operational Maturity Dimension determines the IT organizational maturity by focusing on a collection of processes at a certain level of capability in order to characterize the evolution of the operations IT organization as they improve.
  • Maturity, in the overall context of the present invention, is applied to an IT organization as a whole.
  • the Maturity Level is determined by the Capability Level of the four Process Categories.
  • Maturity Level refers to a sequence of key intermediate states leading to the goal state. Each state builds incrementally on the preceding state.
  • the assessment tool of the present invention is flexible to accommodate an assessment of a Process Category or just a Process Area. As shown in Figure 5, an assessment could end at the Process Area Level with the Process Capability Level or Process Area Maturity determined. An assessment could also be performed to assess all the Process Areas within a Process Category to determine the Process Category Maturity Level.
  • the framework of the present invention which consists of the three dimensions described previously, is illustrated in Figure 6.
  • the Operations Environment Dimension 600, the box in the center of Figure 6, divides all IT processes into Process Categories 300.
  • Process Categories 300 divide into a finite number of Process Areas 302.
  • Process Areas 302 consist of a finite number of Base Practices 304.
  • Each Process Area within a category is assigned a Capability Level 504 based on the performance of Process Attributes 601 comprised of a finite number of Generic Practices 602 applicable to that process (shown in the box on the right).
  • the IT organization's operational maturity 603 is based on a clustering of process capabilities, as illustrated in the third box, on the left.
  • the framework of the present invention is designed to support an IT organization's need to assess and improve its operational capability.
  • the structure of the model enables a consistent appraisal methodology to be used across diverse Process Areas. The distinction between essential operations and process management-focused elements therefore allows a systematic approach to process improvement.
  • the Capability Dimension of the present invention measures how capable an IT organization is in achieving the purpose of its various Process Areas.
  • Capability Levels, Process Attributes, and Generic Practices describe the Process Capability.
  • the Capability Levels, their characteristics, the Process Attributes, and the Generic Practices that comprise them are discussed in more detail.
  • the present invention has five levels of Process Capability that can be applied to any Process Area.
  • As mentioned before, Generic Practices are grouped by Process Attributes, and Process Attributes determine the Capability Level. Capability Levels build upon one another; levels cannot, therefore, be skipped.
  • ATT 1A Process Performance - the extent to which the execution of the process employs a set of practices which uses identifiable input work products to produce identifiable output work products that are adequate to satisfy the purpose of the process.
  • GP1.1 Ensure that Base Practices are performed. When all base practices are performed, the purpose of the process area is satisfied.
  • a process may exist but it may be informal and undocumented.
  • Process Area performance is dependent on how efficiently the Base Practices are implemented.
  • Work products such as completed change control requests, resolved trouble tickets, etc., which are related to base practice implementation are periodically reviewed and placed under version control. Corrective action is taken when variances in services and work products occur.
  • ATT 2A Performance Management - the extent to which the execution of the process is managed in order to produce work products within a stated time and resource requirement.
  • the related Generic Practices are:
  • GP2.1 Establish and maintain a policy for performing operational tasks.
  • Policy is a visible way for the operations environment personnel and the management team to set expectations.
  • the form of policies varies widely depending on the local culture. Policy typically specifies that plans are documented, managed and controlled, and that reviews are conducted. Policy provides guidance for performing the operational tasks and processes.
  • GP2.2 Allocate sufficient resources to meet expectations. Resources include adequate funding, appropriate physical facilities, skilled people, and appropriate tools. This practice ensures that the level of effort, appropriate skills mix, tools, workspace, and other direct resources are available to perform the operational task and processes.
  • GP2.3 Ensure personnel receive the appropriate type and amount of training. Ensure that the individuals are appropriately trained on how to perform the operational tasks and processes. Training provides a common basis for repeatable performance. Even if the operations personnel or management have satisfactory technical skills and knowledge, there is almost always a need to establish a common understanding of the operational process activities and how skills are applied in them. Training, and how it is delivered, may change with process capability due to changes in how the process is performed and managed.
  • GP2.4 Collect data to measure performance.
  • the use of measurement implies that the metrics have been defined and selected, and data has been collected. Building a history of measures, such as cost and schedule variances, is a foundation for managing by data. Quality measures may be collected and used, but result in maximum impact at Level 4 when they are subjected to quantitative process control.
  • Open communication ensures that there is common understanding, that decisions are consensual, and that team members are kept aware of decisions made. Communication is needed when changes are made to plans, products, processes, activities, requirements, and responsibilities. The commitments, expectations, and responsibilities are documented and agreed upon within the project group. Commitment may be obtained by negotiation, by using input and feedback, or through joint development of solutions to issues. Issues are tracked and resolved within the group. Communication occurs periodically and whenever the status changes. The participants have access to data, status information, and recommended actions.
  • ATT 2B Work Product Management - the extent to which the process is managed to produce work products that are documented and controlled, and that meet their functional and non-functional requirements, in line with the work product quality goals of the process. In order to achieve this capability, a process needs to have stated functional and non-functional requirements for work products, including integrity, and to produce work products that fulfill the stated requirements.
  • the related Generic Practices are:
  • Requirements may come from the business customer, policies, standards, laws, regulations, etc. The applicable requirements are documented and available for verification activities.
  • GP2.7 Employ version control to manage changes to work products. Place identified work products under version control, or configuration management to provide a means of controlling work products and services.
  • Base Practices are performed with the assistance of an available, well-defined, and operations-wide process infrastructure. The processes are tailored to meet the specific needs of a certain practice.
  • Data from using the process are gathered to determine if modifications or improvements should be made. This information is used in planning and managing the day-to-day execution of multiple projects within the IT organization, and for short- and long-term process improvement.
  • ATT 3A Process Resource - the extent to which the execution of the process uses suitably skilled human resources and process infrastructure effectively to contribute to the defined business goals of the operations environment.
  • GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly. This includes:
  • Process Attribute ATT 3B Process Definition - the extent to which the execution of the process uses a definition, based upon a standard process, that enables it to contribute to the defined business goals of the IT organization.
  • this practice embodies the pro-active planning of personnel. This includes the selection of proper work forces, training, and dissemination.
  • GP 3.4 Provide feedback in order to maintain knowledge and experience.
  • the standard process repository is to be kept up-to-date, through a continuous feedback system based on experiences gained from using the defined process.
  • ATT 4A Process Measurement - the extent to which measures are used to ensure that the implementation of the process supports its execution, and contributes to the achievement of IT organizational goals.
  • GP4.1 Establish measurable quality objectives for the operations environment.
  • Process definitions are modified to reflect the quantitative nature of process performance. Measurements become inherent in the process definition and are collected as the process is being performed.
  • Process Attribute ATT 4B Process Control - the extent to which the execution of the process is controlled through the collection and analysis of measures that correct the performance of the process in order to reliably achieve the defined process goals.
  • the related Generic Practices are:
  • GP4.3 Provide adequate resources and infrastructure for data collection.
  • Level 5 is the highest achievement level from the viewpoint of Process Capability.
  • Continuous process improvement is enabled by quantitative feedback from the process and from pilot studies of innovative ideas and new technology. A focus on widespread, continuous improvement should permeate the IT organization.
  • the IT organization should establish quantitative performance goals for process effectiveness and efficiency, based on its business goals and strategic objectives.
  • ATT 5A Continuous Improvement - the extent to which changes to the process are identified and implemented to ensure continuous improvement in the fulfillment of the defined business goals of the IT organization. In order to achieve this capability, it is necessary to continuously identify and implement improvements to the tailored process, and provide input to make changes to the standard process definition.
  • the related Generic Practices are:
  • Improvements may be based on incremental operational refinements or on innovations, such as new technologies. Improvements may typically be driven by the following activities:
  • ATT 5B Process Change - the extent to which changes to the definition, management, and performance of the process are controlled to better achieve the business goals of the IT organization.
  • GP5.2 Deploy "best practices" across the IT organization. Improved practices must be deployed across the operations environment to allow their benefit to be felt across the IT organization.
  • the deployment activities include: Identifying improvement opportunities in a systematic and proactive manner to continuously improve the process.
  • the rating framework requires identification of objective attributes or characteristics of a practice or work product of an implemented process to validate that Base Practices are performed, and Generic Practices are followed. Assessment Indicators determine Process Attribute ratings which then are used to determine Capability Level.
  • Assessment Indicators refer to objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process.
  • the cornerstone of a rating framework is the identification and description of Assessment Indicators to help rate the Process Attributes.
  • Assessment Indicators are objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process.
  • Assessment Indicators are evidence that Base Practices are performed, and Generic Practices are followed.
  • the indicators are not intended to be regarded as a mandatory checklist to be followed, but rather are a guide to enhance an assessment team's objectivity in making their judgments of a process's performance and capability.
  • the rating framework adds definition and reliability to the present invention, and thereby improves repeatability.
  • Assessment Indicators are determinants of Process Attribute ratings for each Process Capability attribute.
  • Each assessed process profile consists of a set of Process Attribute ratings.
  • Each attribute rating represents a judgment by the assessment team of the extent to which the attribute is achieved.
  • Figure 8 illustrates the Process Attribute rating represented on a four-point scale of achievement.
  • the indicators determine attribute ratings, which then are used to determine the Capability Level.
  • the rating scale defined below is used to describe the degree of achievement of the defined capability characterized by Process Attributes. Once the appropriate rating for each Process Attribute is determined, ratings can be combined to assign the Capability Level achieved by the assessed process.
  • Figure 9 represents the mapping of attribute ratings to the process Capability Levels determination.
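  • The mapping table of Figure 9 itself is not reproduced in this text. The Java fragment below sketches one plausible reading of the rule stated earlier (a Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are achieved); the grouping of the nine attributes by level follows their ATT numbering, while the requirement of a "fully achieved" rating at every level is an assumption rather than the patented method.

```java
import java.util.List;
import java.util.Map;

// Four-point achievement scale used to rate each Process Attribute.
enum Achievement { NOT_ACHIEVED, PARTIALLY_ACHIEVED, LARGELY_ACHIEVED, FULLY_ACHIEVED }

public class CapabilityLevelRating {

    // Grouping of the nine Process Attributes by the Capability Level they support
    // (following the ATT 1A through ATT 5B numbering in the description).
    private static final Map<Integer, List<String>> ATTRIBUTES_BY_LEVEL = Map.of(
            1, List.of("Process Performance"),
            2, List.of("Performance Management", "Work Product Management"),
            3, List.of("Process Resource", "Process Definition"),
            4, List.of("Process Measurement", "Process Control"),
            5, List.of("Continuous Improvement", "Process Change"));

    // A Process Area reaches the highest level whose attributes, and all attributes of
    // preceding levels, are rated FULLY_ACHIEVED (one plausible reading of the rating rule).
    public static int capabilityLevel(Map<String, Achievement> attributeRatings) {
        int achievedLevel = 0;
        for (int level = 1; level <= 5; level++) {
            boolean allFully = ATTRIBUTES_BY_LEVEL.get(level).stream()
                    .allMatch(att -> attributeRatings.getOrDefault(att, Achievement.NOT_ACHIEVED)
                            == Achievement.FULLY_ACHIEVED);
            if (!allFully) {
                break;
            }
            achievedLevel = level;
        }
        return achievedLevel;
    }
}
```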
  • the first step is to identify if the appropriate Base Practices are performed at all.
  • the necessary foundation for improving the capability of any process is to at least demonstrate that the Base Practices are being performed.
  • the assessment team may then formulate an objective judgment of the process performance attribute through different means, such as analysis of the work products.
  • Achievement of Base Practices is an indication that Process Area goals are being met.
  • the increasing capability of a process to effectively achieve its goals and objectives is based upon attribute rating.
  • the attribute rating is determined by the performance of the associated Generic Practices. Evidence of these Generic Practices supports the assessment team's judgement of the degree of achievement of the attributes.
  • Process Category capabilities are determined from the capability ratings of their Process Areas. Once all Process Areas of a category are rated, the lowest rating assigned to a Process Area becomes the category rating as well. Similarly, the operational maturity rating is determined from the Process Category ratings within the IT organization. Once all Process Categories are rated, the lowest rating assigned to a Process Category becomes the IT organizational maturity. For example, if the Process Categories of an IT organization are rated as follows, then this particular IT organization would receive a maturity level rating of "1".
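  • The ratings referred to above ("rated as follows") are not reproduced in this text, so the Java fragment below substitutes hypothetical category ratings purely to illustrate the lowest-rating rule: because one category sits at level 1, the organization as a whole is rated 1.

```java
import java.util.Map;

// Illustrative only: hypothetical category ratings showing the lowest-rating roll-up rule.
public class MaturityRollupExample {
    public static void main(String[] args) {
        Map<String, Integer> categoryRatings = Map.of(
                "Service Management", 3,
                "Systems Management", 2,
                "Managing Change", 1,
                "IT Operations Planning", 2);

        // The IT organizational maturity is the lowest rating assigned to any Process Category.
        int maturity = categoryRatings.values().stream()
                .mapToInt(Integer::intValue)
                .min().orElse(0);

        System.out.println("Organizational maturity level: " + maturity); // prints 1
    }
}
```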
  • an assessment team collects the evidence on the implementation of the processes being assessed and determines their compatibility as defined in the framework of the present invention.
  • the objective of the assessment is to identify the differences and the gaps between the actual implementations of the processes in the assessed operational IT organization with respect to the present invention.
  • Using the framework of the present invention ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
  • the assessment process is used to appraise an organization's IT operations environment process capability. Defining a reference model ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
  • An IT organization can perform an assessment for a variety of reasons. An assessment can be performed in order to assess the processes in the IT operations environment with the purpose of improving its own work and service processes. An IT organization can also perform an assessment to determine and better manage the risks associated with outsourcing. In addition, an assessment can be performed to better understand a single functional area such as systems management, a single process area such as performance management, or the entire IT operations environment. Three phases are defined in the assessment model: Planning and Preparing, Performing, and Distributing Results. All phases of the assessment are performed using a team-based approach. Team members include the client sponsor, the assessment team lead, assessment team members, and client participants.
  • assessment scope refers to organizational entities and components selected for inspection.
  • a clear understanding of the purpose of the framework, constraints, roles, responsibilities, and outputs is needed prior to the start of the assessment. Therefore, in preparation for the assessment, the assessment team lead and the client sponsor work together to reach agreement on the scope and goals of the assessment. Once agreement is reached, the assessment team lead ensures that the IT operational processes selected for the assessment are sufficient to meet the assessment purpose and may provide output that is representative of the assessment scope.
  • An assessment plan is developed based on the goals identified by the client sponsor.
  • the plan consists of detailed schedules for the assessment and potential risks identified with performing the assessment.
  • Assessment team members, assessment participants, and areas to be assessed are selected.
  • Work products are identified for initial review, and the logistics for the on-site visit are identified and planned.
  • the assessment team members must receive adequate training on the framework of the present invention and the assessment process. It is essential that the assessment team be well-trained on the present invention to ensure that they have the ability to interpret the data obtained during the assessment.
  • the team must have comprehensive understanding of the assessment process, its underlying principles, the tasks necessary to execute it, and their role in performing the tasks.
  • Maturity questionnaires are distributed to participants prior to the client site visit. Maturity questionnaires exist for each process area of the present invention, and tie back to base practices, process attributes and generic practices. Completed questionnaires provide the assessment team with an overview of the IT operational process capability of the IT organization. The responses assist the team in focusing their investigations, and provide direction for later activities such as interviews and document reviews. Assessment team members prepare exploratory questions based on Interview Aids and responses to the maturity questionnaires.
  • Interview Aids refers to a set of exploratory questions about the operations environment which are used during the interview process to obtain more detailed information on the capability of the IT organization.
  • the interview aids are used by the assessment team to guide them through interview sessions with assessment participants.
  • a kick-off meeting is scheduled at the start of the on-site activities.
  • the purpose of the meeting is to provide the participants with an overview of the present invention and the assessment process, to set expectations, and to answer any questions about the process.
  • a client sponsor of the assessment may participate in the presentation to show visible support and stress the importance of the assessment process to everyone involved.
  • Data for the assessment are obtained from several sources: responses to the maturity questionnaires, interview sessions, work products, and document reviews. Documents are reviewed in order to verify compliance. Interviewing provides an opportunity to gain a deeper understanding of the activities performed, how the work is performed, and processes currently in use. Interviewing provides the assessment team members with identifiable assessment indicators for each Process Area appraised. Interviewing also provides the opportunity to address all areas of the present invention within the scope of the assessment.
  • IT operations managers and supervisors are interviewed as a group in order to understand their view of how the work is performed in the IT organization, any problem areas of which they are aware, and improvements that they feel need to be made. IT operations personnel are interviewed to collect data within the scope of the assessment and to identify areas that they can and should improve in the IT organization.
  • the purpose of solidifying this information is to summarize and consolidate information into a manageable set of findings.
  • the data is then categorized into Process Areas of the present invention.
  • the assessment team must reach consensus on the validity of the data and whether sufficient information in the areas evaluated has been collected. It is the team's responsibility to obtain sufficient information on the components of the present invention within the scope of the assessment for the required areas of the IT organization before any rating can be done.
  • follow-up interviews may occur for clarification.
  • Initial findings are generated from the information collected thus far, and presented to the assessment participants.
  • the purpose of presenting initial findings is to obtain feedback from the individuals who provided information during the various interviews. Ratings are not considered until after the initial findings presentations, as the assessment team is still collecting data.
  • Initial findings are presented in multiple sessions in order to protect the confidentiality of the assessment participants. Feedback is recorded for the team to consider at the conclusion of all of the initial findings presentations. Examples of assessments associated with the foregoing service desk example are as follows:
  • the rating process may begin.
  • the first step in the rating process is to determine if Process Area goals are being met. Process Area goals are considered met when all base practices are performed. Each process attribute for each Process Area within the assessment scope is then rated. Process attributes are rated based on the existence of and compliance to generic practices.
  • Using the Assessment Indicator Rating template, the assessment team identifies assessment indicators for each process area to determine whether or not process attributes are achieved. Ratings are always established based on consensus of the entire assessment team. Questionnaire responses, interview notes, and documentation are used to support ratings; confirmation from two sources in different contexts (e.g., two people in different meetings) ensures compliance of an activity.
  • the team reviews all weaknesses that relate to the associated generic practices. If the team determines that a weakness is strong enough to impact the process attribute, the process attribute is rated "not achieved.” If it is decided that there are no significant weaknesses that have an impact on a process attribute, it is rated "fully achieved.” For a Process Area to be rated “fully achieved,” all process attributes for the Process Area must be rated “fully achieved.” A Process Area may be rated fully achieved, largely achieved, partially achieved, or not achieved.
  • Assignment of a maturity level rating is optional at the discretion of the sponsor. For a particular maturity level rating to be achieved, all Process Areas within and below the maturity level must be satisfied. For example, for an IT organization to be rated at maturity level 4, all Process Areas at level 4, level 3 and at level 2 must have been investigated during the assessment, and all Process Areas must have been rated achieved by the assessment team. The final findings presentation is developed by the team to present to the sponsor and the IT organization the strengths and weaknesses observed for each Process Area within the assessment scope, the ratings of each Process Area, and the maturity level rating if desired by the sponsor.
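Purely by way of illustration, the rating rules described above can be sketched in code: a Process Area is "fully achieved" only when every one of its process attributes is fully achieved, and a maturity level is awarded only when every Process Area at that level and below has been rated achieved. The class names and the four-value rating scale below mirror the ratings named above, but the sketch is an assumption-laden example, not a definition of the rating method of the present invention.

```java
// Minimal sketch of the rating roll-up described above (hypothetical names).
import java.util.List;

public class RatingRollUpSketch {

    enum Rating { NOT_ACHIEVED, PARTIALLY_ACHIEVED, LARGELY_ACHIEVED, FULLY_ACHIEVED }

    record ProcessArea(String name, int maturityLevel, List<Rating> attributeRatings) {

        // A Process Area is fully achieved only if all of its process
        // attributes were rated fully achieved by team consensus.
        boolean fullyAchieved() {
            return attributeRatings.stream().allMatch(r -> r == Rating.FULLY_ACHIEVED);
        }
    }

    // Highest maturity level for which every Process Area at that level and
    // at all lower levels is fully achieved; level 1 is the floor.
    static int maturityLevel(List<ProcessArea> areas, int maxLevel) {
        int achieved = 1;
        for (int level = 2; level <= maxLevel; level++) {
            final int l = level;
            boolean allSatisfied = areas.stream()
                    .filter(pa -> pa.maturityLevel() <= l)
                    .allMatch(ProcessArea::fullyAchieved);
            if (!allSatisfied) break;
            achieved = level;
        }
        return achieved;
    }

    public static void main(String[] args) {
        List<ProcessArea> areas = List.of(
                new ProcessArea("Service Desk", 2,
                        List.of(Rating.FULLY_ACHIEVED, Rating.FULLY_ACHIEVED)),
                new ProcessArea("SLA Management", 3,
                        List.of(Rating.FULLY_ACHIEVED, Rating.LARGELY_ACHIEVED)));
        System.out.println(maturityLevel(areas, 5)); // 2: level 3 blocked by SLA Management
    }
}
```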
  • the final assessment results are presented to the client sponsor. During the final presentation, the assessment team must ensure that the IT organization understands the issues that were discovered during the assessment and the key issues that it faces. Operational strengths are presented to validate what the IT organization is doing well. Strengths and weaknesses are presented for each process area within the assessment scope, as well as any issues that affect the process and are unrelated to the present invention. A Process Area profile is presented showing the individual Process Area ratings in detail.
  • An executive overview session is held in order to allow the senior IT Operations manager to clarify any issues with the assessment team, to confirm his or her understanding of the operational process issues, and to gain full understanding of the recommendations report.
  • the assessment team collects feedback from the assessment participants and the assessment team on the process, and packages information that needs to be saved for historical purposes.
  • Figure 10 describes the roles and responsibilities of those involved with the assessment process.
  • Figure 11 represents the indicator types and their relationship to the determination of the Process Area rating. As shown, evidence of process performance and process capability is provided by assessment indicators. Such assessment indicators, in turn, consist of base practices and generic practices. At the next level, the base practices and generic practices are assessed by process implements, work products, practice performance, resources and infrastructure.
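Purely by way of illustration, the indicator relationships of Figure 11 might be represented with a simple data structure. The names below are hypothetical; the sketch only shows how an assessment indicator could tie a base or generic practice to its supporting evidence.

```java
// Minimal sketch (hypothetical names): an assessment indicator refers to a base
// or generic practice and is supported by evidence such as work products,
// observed practice performance, and the resources and infrastructure in place.
import java.util.List;

public class IndicatorSketch {

    enum PracticeKind { BASE_PRACTICE, GENERIC_PRACTICE }

    enum EvidenceType { WORK_PRODUCT, PRACTICE_PERFORMANCE, RESOURCES_AND_INFRASTRUCTURE }

    record Evidence(EvidenceType type, String description) {}

    record AssessmentIndicator(String processArea, PracticeKind kind,
                               String practiceId, List<Evidence> evidence) {

        // An indicator only supports a rating if at least one piece of
        // evidence was collected during the assessment.
        boolean supportsRating() {
            return !evidence.isEmpty();
        }
    }

    public static void main(String[] args) {
        AssessmentIndicator indicator = new AssessmentIndicator(
                "Monitoring", PracticeKind.BASE_PRACTICE, "BP 2.1.3",
                List.of(new Evidence(EvidenceType.WORK_PRODUCT, "Resolved trouble ticket"),
                        new Evidence(EvidenceType.PRACTICE_PERFORMANCE,
                                "Operators observed classifying events by severity")));
        System.out.println(indicator.supportsRating()); // true
    }
}
```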
  • SLA Management involves the creation, management, reporting, and discussion of Service Level Agreements (SLAs) with users and the providers within Information Technology (IT).
  • An SLA is a formal agreement between a user who requires information services and the IT organization responsible for providing those services.
  • SLA Management involves the following areas:
  • SLA Definition: The SLA document defines, in specific and quantifiable terms, the level of service that is to be delivered to users. In the enterprise environment, many design and configuration alternatives are available that affect a given system's response time, availability, development cost, and ongoing operational costs. An SLA clarifies the business objectives and constraints for an application system, and forms the basis for both application design and system configuration choices.
  • SLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an SLA.
  • SLA Control: It is important that the services described in SLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
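Purely by way of illustration, an SLA that is defined in quantifiable terms can be represented and reported against as in the following sketch. The class names, metrics, and figures are hypothetical examples, not terms required by the present invention.

```java
// Minimal sketch (hypothetical names) of an SLA defined in quantifiable terms,
// with a simple reporting check of measured values against the agreed targets.
import java.util.List;

public class SlaSketch {

    // One quantifiable service-level target, e.g. availability >= 99.5 percent.
    record ServiceLevelTarget(String metric, double target, boolean higherIsBetter) {

        boolean met(double measured) {
            return higherIsBetter ? measured >= target : measured <= target;
        }
    }

    record ServiceLevelAgreement(String service, String userCommunity,
                                 List<ServiceLevelTarget> targets) {}

    // SLA reporting: print whether each measured value meets its target.
    static void report(ServiceLevelAgreement sla, List<Double> measurements) {
        for (int i = 0; i < sla.targets().size(); i++) {
            ServiceLevelTarget t = sla.targets().get(i);
            double measured = measurements.get(i);
            System.out.printf("%s / %s: target %.2f, measured %.2f -> %s%n",
                    sla.service(), t.metric(), t.target(), measured,
                    t.met(measured) ? "met" : "MISSED");
        }
    }

    public static void main(String[] args) {
        ServiceLevelAgreement sla = new ServiceLevelAgreement("Order entry", "Sales",
                List.of(new ServiceLevelTarget("Availability (%)", 99.5, true),
                        new ServiceLevelTarget("Response time (s)", 2.0, false)));
        report(sla, List.of(99.7, 2.4));
    }
}
```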
  • OLA Management: Operations Level Agreements with providers within the organization, as well as external suppliers and vendors.
  • An OLA is an agreement between the IT organization and those delivering the constituent services of the system.
  • OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement as supporting services are guaranteed in the OLA.
  • OLA Management involves the following:
  • OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
  • Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
  • OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
  • OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
  • OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and, if necessary, updated to be in line with those needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
  • PA's Base Practices: 1.2.1 Determine operational items; 1.2.2 Group related operational items
  • PA Goals: To define a quantifiable service level that represents a minimum level of service for each service delivered.
  • OLAs contain, e.g., workloads, cost of service, targets, type of support, etc.
  • OLAs outline each key business application, e.g., penalties, tools used to maintain the OLA.
  • KPIs: Key Performance Indicators
  • Are service measurement metrics specified in the OLA? Are targets for the service measurement metrics specified? If so, how are these targets determined; for example, is the supplier capability gauged and considered?
  • Process Area Description: OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with suppliers and vendors.
  • OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement as supporting services are guaranteed in the OLA.
  • An OLA is an agreement between the IT organization and those delivering the constituent services of the system.
  • Operational Level Management involves the following:
  • OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
  • Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
  • OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
  • OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
  • Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
  • Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk.
  • Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
  • the Service Desk provides a single point of contact for users with problems or specific service requests.
  • the Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
  • the Service Desk's main objectives are:
  • the Service Desk consists of the following functions:
  • Incident Management: An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
  • Problem Management: A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
  • Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
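Purely by way of illustration, the Request Management sub-functions listed above (Request Logging, Impact Analysis, Authorization, and Prioritization) might be traced for a single request as in the following sketch; the class names, request identifier, and routing rule are hypothetical.

```java
// Minimal sketch (hypothetical names) of a service request moving through the
// Request Management sub-functions: logging, impact analysis, authorization,
// and prioritization.
import java.util.ArrayList;
import java.util.List;

public class RequestManagementSketch {

    enum Stage { LOGGED, IMPACT_ANALYZED, AUTHORIZED, PRIORITIZED }

    static class ServiceRequest {
        final String id;
        final String description;
        final List<Stage> history = new ArrayList<>();
        int priority;          // set during prioritization
        boolean changeRequest; // true if routed to Change Control

        ServiceRequest(String id, String description) {
            this.id = id;
            this.description = description;
            history.add(Stage.LOGGED); // Request Logging
        }

        void analyzeImpact(boolean affectsProduction) {
            // Impact Analysis: high-impact requests are raised with Change Control.
            changeRequest = affectsProduction;
            history.add(Stage.IMPACT_ANALYZED);
        }

        void authorize() {
            history.add(Stage.AUTHORIZED);
        }

        void prioritize(int priority) {
            this.priority = priority;
            history.add(Stage.PRIORITIZED);
        }
    }

    public static void main(String[] args) {
        ServiceRequest request = new ServiceRequest("REQ-042", "Install reporting tool");
        request.analyzeImpact(false);
        request.authorize();
        request.prioritize(3);
        System.out.println(request.id + " " + request.history + " priority=" + request.priority
                + " changeRequest=" + request.changeRequest);
    }
}
```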
  • BP Description: Ensure that a list of services and their cost is made available. This list should be a living document that is updated frequently. This document should be broken down to the lowest level of each service if possible.
  • Example: A vendor may be asked to submit separate pricing for each of the service areas and service levels outlined in their "scope of service" agreement or contract.
  • Example: Cost of services does not only mean individual services per person or department. Cost also needs to be factored in for shared resources.
  • Example: Financial thresholds need to be set, along with what specific data should be reported and how often. A company may have the following reports:
  • Example: Monthly equipment support reports can be used to determine and plan for equipment upgrades and training for users to better understand their equipment.
  • Do budgets include contingencies for unanticipated growth or product/service needs?
  • Base Practice 1.4.9 Prepare, distribute, and maintain a catalogue of service prices for users
  • Process Area Description: Service Pricing is comprised of the following areas:
  • Service Pricing & Costing: Service Costing & Pricing projects and monitors costs for the management of operations, provision of service, equipment installation, etc. Based upon the projected cost and business needs, a service pricing strategy may be developed to re-allocate costs within the organization. If developed, the service pricing strategy will be documented, communicated to the users, monitored and adjusted to ensure that it is both comprehensive
  • Billing & Accounting: The purpose of Billing & Accounting is to gather information for calculating actual cost, determine chargeback costs, and bill users for services rendered.
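Purely by way of illustration, a chargeback calculation of the kind described above might bill each department for its direct usage and apportion the cost of shared resources by relative usage. The rates and figures in the sketch are hypothetical, and the apportionment rule is an assumption made for the example, not part of the present invention.

```java
// Minimal sketch (hypothetical names and rates) of a chargeback calculation:
// direct service costs are billed to the consuming department and the cost of
// shared resources is apportioned by relative usage.
import java.util.HashMap;
import java.util.Map;

public class ChargebackSketch {

    // usageByDept: units of service consumed per department (e.g. CPU hours).
    // ratePerUnit: price charged per unit of the service.
    // sharedCost: cost of shared resources to apportion by relative usage.
    static Map<String, Double> chargeback(Map<String, Double> usageByDept,
                                          double ratePerUnit, double sharedCost) {
        double totalUsage = usageByDept.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Double> bill = new HashMap<>();
        for (Map.Entry<String, Double> e : usageByDept.entrySet()) {
            double direct = e.getValue() * ratePerUnit;
            double sharedShare = totalUsage == 0 ? 0 : sharedCost * (e.getValue() / totalUsage);
            bill.put(e.getKey(), direct + sharedShare);
        }
        return bill;
    }

    public static void main(String[] args) {
        Map<String, Double> usage = Map.of("Sales", 120.0, "HR", 30.0);
        // A rate of 2.50 per unit and 600.00 of shared infrastructure cost are
        // illustrative figures only.
        System.out.println(chargeback(usage, 2.50, 600.0));
        // Sales: 120*2.50 + 600*(120/150) = 300 + 480 = 780.0
        // HR:     30*2.50 + 600*(30/150)  =  75 + 120 = 195.0
    }
}
```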
  • Process Area Description: User Administration handles the day-to-day tasks involved in administering users on a system. Tasks include adding new users, changing user IDs, re-establishing user passwords, and maintaining groups of users.
  • Process Area Description: Production Scheduling determines the requirements for the execution of scheduled jobs across a distributed environment. A production schedule is then put in place to meet these requirements, taking into consideration other processes occurring throughout the distributed environment (e.g., software and data distribution, and remote backup/restoration of data).
  • Examples of custom (or packaged) screens prompting for scheduling information needed to execute jobs or job streams.
  • Results of any network performance testing across the network (e.g., RMON, SNMP, etc.).
  • Process Definition GP3.1 Define policies and procedures at an organization level - There is one centralized print management system versus individual or independent use of various systems.
  • GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly - New print management personnel are trained and receive subsequent training on new technologies, equipment, procedures, etc. Future employment needs are also considered.
  • Process Resource GP3.3 Plan for human resources proactively - Print management print jobs are always handled according to policy vs. ad hoc.
  • GP3.4 Provide feedback in order to maintain knowledge and experience - Print management receives feedback via e-mail, reports, and meetings from customers and other process areas regarding issues, approvals, and reviews.
  • Process Area Description: Output and Print Management monitors all of the printing and/or output done across a distributed environment and is responsible for managing the printers and the printing for both central and remote locations.
  • A control system is set up to handle multiple transfers, and both remote systems and the host complete file transfers successfully.
  • Convert file types (e.g., VSAM, PDS, etc.)
  • Base Practice 2.3.1: Transfer files on a scheduled basis
  • Base Practice 2.3.3: Transfer files on an ad hoc basis
  • Can file types (e.g., VSAM, PDS, etc.) be converted?
  • Process Area Description: File Transfer and Control initiates and monitors the files being transferred throughout the system as part of the business processing (e.g., nightly batch runs). File transfers can take place in a bi-directional fashion between hosts, servers, and workstations.
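Purely by way of illustration, File Transfer and Control records might be represented as in the following sketch, with each transfer marked as scheduled or ad hoc and checked for successful completion; the class names and endpoints are hypothetical.

```java
// Minimal sketch (hypothetical names) of file-transfer control records: each
// transfer is either scheduled or ad hoc, runs in either direction between two
// endpoints, and the control function tracks whether it completed successfully.
import java.util.List;

public class FileTransferControlSketch {

    enum TransferType { SCHEDULED, AD_HOC }

    record Transfer(String fileName, String source, String destination,
                    TransferType type, boolean completedSuccessfully) {}

    // Control check: report any transfers in the nightly batch that did not
    // complete, so they can be re-run or escalated.
    static List<Transfer> failedTransfers(List<Transfer> batch) {
        return batch.stream().filter(t -> !t.completedSuccessfully()).toList();
    }

    public static void main(String[] args) {
        List<Transfer> nightlyBatch = List.of(
                new Transfer("orders.dat", "host-a", "server-b", TransferType.SCHEDULED, true),
                new Transfer("rates.csv", "workstation-7", "host-a", TransferType.AD_HOC, false));
        System.out.println(failedTransfers(nightlyBatch)); // the rates.csv transfer
    }
}
```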
  • HR has its own private directory with access to confidential data restricted to HR personnel.
  • Guiding principles for Communication Address Planning: A document specifying network-capacity guidelines is available. The guidelines take into consideration security, geographic location, and other business needs or requirements.
  • Process Definition GP3.1 Define policies and procedures at an organization level - Personnel are able to perform Directory or Communications Address Management functions in a consistent and repeatable manner.
  • GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly - When Network Management tasks are distributed, success of the tasks is ensured by common resources such as tools, training, and a company vision or direction.
  • GP3.3 Plan for human resources proactively - Network Services takes into consideration the growth of the company when planning for hiring, updating systems with new technology, and other tools that may make their tasks manageable.
  • GP3.4 Provide feedback in order to maintain knowledge and experience - The monitoring group provides performance reports on address management to the Network Services Team.
  • Measurement: measurable quality objectives for the operations environment - An example may be that the business requires 24-hour network availability. This can be set as a threshold and monitored for following this business requirement or driver.
  • GP4.2 Automate data collection - Metrics are automatically collected from the Directory and Address Management tools (vs. manual collection).
  • Continuous Improvement GP5.1 Continually improve tasks and processes - Current resources, applications, and procedures are periodically assessed or altered with the intent to promote continuous improvement (e.g., an upgrade to the latest NT Administration tools).
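Purely by way of illustration, the measurement practices above (a measurable quality objective expressed as a threshold, with automated metric collection) might be sketched as follows; the metric name, threshold, and sample values are hypothetical.

```java
// Minimal sketch (hypothetical names and figures): a measurable quality
// objective such as round-the-clock availability is expressed as a threshold,
// collected samples are checked against it, and breaches are flagged for the
// monitoring process.
import java.util.List;

public class MeasurementSketch {

    // A quality objective expressed as a measurable threshold,
    // e.g. monthly network availability of at least 99.9 percent.
    record QualityObjective(String metric, double minimumAcceptable) {}

    record Sample(String metric, double value) {}

    // Automated collection would feed samples in; here we simply check each
    // sample against the objective and flag breaches.
    static List<Sample> breaches(QualityObjective objective, List<Sample> samples) {
        return samples.stream()
                .filter(s -> s.metric().equals(objective.metric()))
                .filter(s -> s.value() < objective.minimumAcceptable())
                .toList();
    }

    public static void main(String[] args) {
        QualityObjective availability = new QualityObjective("network-availability-%", 99.9);
        List<Sample> collected = List.of(
                new Sample("network-availability-%", 99.95),
                new Sample("network-availability-%", 99.2)); // breach
        System.out.println(breaches(availability, collected));
    }
}
```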

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Operations Research (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Stored Programmes (AREA)

Abstract

A system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a monitoring process area when gauging a maturity of an operations organization. First, a plurality of process attributes are defined. Next, a plurality of generic practices are determined for each of the process attributes. The generic practices include base practices such as polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages. Thereafter, a maturity of an operations organization is calculated based at least in part on the achievement of the generic practices.

Description

A SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR DETERMINING
CAPABILITY LEVELS OF A MONITORING PROCESS AREA FOR PROCESS ASSESSMENT PURPOSES IN AN OPERATIONAL MATURITY INVESTIGATION
FIELD OF INVENTION
The present invention relates to IT operations organizations and more particularly to evaluating a maturity of an operations organization by determining capability levels of a monitoring process area.
BACKGROUND OF INVENTION
Triggered by a recent technology avalanche and a highly competitive global market, the management of information systems is undergoing a revolutionary change. Both information technology and business directions are driving information systems management to a fundamentally new paradigm. While business bottom lines are more tightly coupled with information technology than ever before, studies indicate that many CEOs and CFOs feel that they are not getting their money's worth from their IT investments. The complexity of this environment demands that a company have a formal way of assessing its IT capabilities, as well as a specific and measurable path for improving them.
In initiatives to address these issues, various frameworks and gap analysis have been used to capture the best practices of IT management and to determine areas of improvement. While the frameworks and gap analysis are intended to capture weaknesses in processes that are observable, they do not provide data with sufficient objectivity and granularity upon which a comprehensive improvement plan can be built.
There is thus a need to add further objectivity and consistency to conventional framework and gap analysis.
SUMMARY OF INVENTION
A system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a monitoring process area when gauging a maturity of an operations organization. First, a plurality of process attributes are defined. Next, a plurality of generic practices are determined for each of the process attributes. The generic practices include base practices such as polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages. Thereafter, a maturity of an operations organization is calculated based at least in part on the achievement of the generic practices.
The present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment. The present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability. The present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.
The present invention can thus be used by organizations in a variety of contexts. An organization can use the present invention to assess and improve its processes. An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management. In addition, the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.
BRIEF DESCRIPTION OF DRAWINGS
The invention may be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention;
Figure 2 is a flowchart illustrating generally the steps associated with the present invention;
Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention;
Figure 4 is an illustration showing a measure of each process area to the capability levels according to one embodiment of the present invention;
Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention;
Figure 6 is an illustration showing an overview of the operational maturity model;
Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention;
Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention;
Figure 9 is an illustration showing a mapping of attribute ratings to the process capability levels determination in accordance with one embodiment of the present invention;
Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention; and Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.
DISCLOSURE OF INVENTION
The present invention comprises a collection of best practices, both from a technical and management perspective. The collection of best practices is a set of processes that are fundamental to a good operations environment. In other words, the present invention provides a definition of an "ideal" operations environment, and also acts as a road map towards achieving the "ideal" state.
Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112. The workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, communication adapter 134 for connecting the workstation to a communication network 135 (e.g., a data processing network) and a display adapter 136 for connecting the bus 112 to a display device 138.
The workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system. Those skilled in the art may appreciate that the present invention may also be implemented on other platforms and operating systems.
A preferred embodiment of the present invention is written using JAVA, C, and the C++ language and utilizes object oriented programming methodology. Object oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.
OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each others capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.
OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine. Rather it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
When the object or class representing the ceramic piston engine inherits all of the aspects of the objects representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides these ceramic specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons. Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with it (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
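Purely by way of illustration, the piston engine example above may be expressed in Java as follows. The class and method names are hypothetical; the sketch simply shows composition (an engine contains a piston), inheritance (a ceramic piston engine derived from a piston engine), and polymorphism (the overriding thermal characteristic reached through the same method name).

```java
// Illustrative only: the piston-engine example expressed in Java.
public class EngineExample {

    static class Piston {
        double maxOperatingTempCelsius() { return 350.0; } // metal piston
    }

    static class CeramicPiston extends Piston {
        @Override
        double maxOperatingTempCelsius() { return 900.0; } // ceramic tolerates more heat
    }

    static class PistonEngine {
        protected Piston piston = new Piston(); // composition: an engine has a piston

        double pistonTempLimit() { return piston.maxOperatingTempCelsius(); }
    }

    static class CeramicPistonEngine extends PistonEngine {
        CeramicPistonEngine() { this.piston = new CeramicPiston(); } // inherits, adds a limitation
    }

    public static void main(String[] args) {
        PistonEngine standard = new PistonEngine();
        PistonEngine ceramic = new CeramicPistonEngine(); // used through the base type
        System.out.println(standard.pistonTempLimit()); // 350.0
        System.out.println(ceramic.pistonTempLimit());  // 900.0, overriding behavior
    }
}
```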
With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of the reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
• Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
• Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
• An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
• An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane. With this enormous capability of an object to represent just about any logically separable matters, OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.
This process closely resembles complex machinery being built out of assemblies and sub-assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increased speed of its development.
Programming languages are beginning to fully support the OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers a fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
The benefits of object classes can be summarized, as follows:
• Objects and their corresponding classes break down complex programming problems into many smaller, simpler problems.
• Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
• Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.
• Polymorphism and multiple inheritance make it possible for different programmers to mix and match characteristics of many different classes and create specialized objects that can still work with related objects in predictable ways.
• Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.
• Libraries of reusable classes are useful in many situations, but they also have some limitations. For example:
• Complexity. In a complex system, the class hierarchies for related classes can become extremely confusing, with many dozens or even hundreds of classes.
• Flow of control. A program written with the aid of class libraries is still responsible for the flow of control (i.e., it must control the interactions among all the objects created from a particular library). The programmer has to decide which functions to call at what times for which kinds of objects.
• Duplication of effort. Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way.
Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i.e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way. Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.
Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers. Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.
The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still "sits on top of" the system.
Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
There are three main differences between frameworks and class libraries:
• Behavior versus protocol. Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program. A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
• Call versus override. With a class library, the code the programmer writes instantiates objects and calls their member functions. It's possible to instantiate and call objects in the same way with a framework (i.e., to treat the framework as a class library), but to take full advantage of a framework's reusable design, a programmer typically writes code that overrides and is called by the framework. The framework manages the flow of control among its objects. Writing a program involves dividing responsibilities among the various pieces of software that are called by the framework rather than specifying how the different pieces should work together.
• Implementation versus design. With class libraries, programmers reuse only implementations, whereas with frameworks, they reuse design. A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain. For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.
Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, "RFC 1866: Hypertext
Markup Language - 2.0" (Nov. 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.
HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing Text and Office Systems, Standard Generalized Markup Language (SGML).
To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources. Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:
• Poor performance;
• Restricted user interface capabilities;
• Can only produce static Web pages;
• Lack of interoperability with existing applications and data; and
• Inability to scale.
Sun Microsystem's Java language solves many of the client-side problems by:
• Improving performance on the client side;
• Enabling the creation of dynamic, real-time Web applications; and
• Providing the ability to create a wide variety of user interface components.
With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
Sun's Java language has emerged as an industry-recognized language for "programming the Internet." Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets." Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g.,
Netscape Navigator) by copying code from the server to client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically, "C++ with extensions from Objective C for more dynamic method resolution."
Another technology that provides similar function to JAVA is provided by Microsoft and
ActiveX Technologies, to give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies. The group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages. ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta." ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications. One of ordinary skill in the art readily recognizes that ActiveX could be substituted for JAVA without undue experimentation to practice the invention.
One embodiment of the present invention includes three different, but complementary dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization. The following three dimensions constitute the framework of the present invention: 1) Operations Environment Dimension, 2) Capability Dimension, and 3) Maturity Dimension.
The first dimension describes and organizes the standard operational activities that any IT organization should perform. The second dimension provides a context for evaluating the performance quality of these operational activities. This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability. The final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.
The Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations. The Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment. In a similar manner, the Capability dimension can be construed as a qualitative mapping of a model operations environment. The Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.
Figure 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention. As shown, a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in operation 200. The process areas are then grouped into categories, as indicated in operation 202. It should be noted that the categories are grouped in terms of process areas having common characteristics.
Next, in operation 204, process capabilities are received for the process areas of the operations organization. Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention. The questionnaire may be used to obtain information on the capability of the IT organization, or a specific IT area or project.
Thereafter, category capabilities are calculated for the categories of the process areas in operation 206. A maturity of the operations organization is subsequently determined based on the category capabilities of the categories in operation 208. The user-specified or measured parameters, i.e., capability of each of the process areas, may be inputted by any input device, such as the keyboard 124, the mouse 126, the microphone 132, a touch screen (not shown), or anything else such as an input port that is capable of relaying such information. Further, the definitions, grouping, calculations and determinations may be carried out manually or via the CPU 110, which in turn may be governed by a computer program stored on a computer readable medium, i.e., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program. In the alternative, dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same. As an option, any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.
Further, the outputting of the determination of the maturity of the operations organization may be effected by way of the display 138, the speaker 128, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but also may be a component of either a networked client and/or a server.
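Purely by way of illustration, the flow of operations 200 through 208 may be sketched as follows. The aggregation rules in this sketch (a category's capability taken as the minimum of its process areas' capability levels, and the overall maturity taken as the minimum category capability) are assumptions made for the example only, as are the class and method names; they are not a definition of the calculation performed by the present invention.

```java
// Purely illustrative sketch of the flow of Figure 2 (operations 200-208).
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class MaturityFlowSketch {

    // Operation 200: a process area defined in terms of its goal or purpose;
    // operation 204: a capability level received for it, e.g. from questionnaires.
    record ProcessArea(String name, String category, int capabilityLevel) {}

    // Operations 202 and 206: group process areas into categories and derive a
    // capability per category (here, the minimum capability level in the category).
    static Map<String, Integer> categoryCapabilities(List<ProcessArea> areas) {
        return areas.stream().collect(Collectors.toMap(
                ProcessArea::category, ProcessArea::capabilityLevel, Math::min, TreeMap::new));
    }

    // Operation 208: determine an overall maturity from the category capabilities
    // (here, the minimum category capability).
    static int maturity(Map<String, Integer> categoryCapabilities) {
        return categoryCapabilities.values().stream().mapToInt(Integer::intValue).min().orElse(0);
    }

    public static void main(String[] args) {
        List<ProcessArea> areas = List.of(
                new ProcessArea("Service Desk", "Service Management", 3),
                new ProcessArea("SLA Management", "Service Management", 2),
                new ProcessArea("Monitoring", "Systems Management", 3));
        Map<String, Integer> byCategory = categoryCapabilities(areas);
        System.out.println(byCategory);           // {Service Management=2, Systems Management=3}
        System.out.println(maturity(byCategory)); // 2
    }
}
```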
Operations Environment Dimension
The Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.
In the present description, goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area. The goals signify the scope, boundaries, and intent of each process area.
The process goals and purpose may be achieved in an IT organization through the various lower level activities, such as tasks and practices that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced are the indicators that demonstrate whether the specific process goals or purpose is being achieved.
In the present description, work product describes evidence of base practice implementation. For example, a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.
The operations environment is partitioned into three levels: Process Categories, Process Areas and Base Practices, which reflect processes within any IT organization. Figure 3 depicts and summarizes the relationship of the Process Categories 300, Process Areas 302, and Base
Practices 304 of the Operations Environment Dimension. This breakdown provides a grouping by type of activity. The activities characterize the performance of a process. The three-level hierarchy is described as follows.
Process Categories (300)
In the present description, a Process Category has a defined purpose and measurable goals and consists of a logically related set of Process Areas that collectively address the purpose and goals in the same general area of activity.
The purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four process categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning. Process Categories are described as follows:
Figure imgf000017_0001
Figure imgf000018_0001
Figure imgf000018_0002
Category Number
Category Name: Managing Change
Category Description: Managing Change includes all the functions that enable controlled and repeatable management of Software/Hardware components as they evolve through the development life cycle into production.
Process Areas Release Management
Change Control
Validation
Deployment
Software / Data Distribution
Migration Control
Repository Management
Content Management
License Management
Asset Management
Procurement
Category Goals: To minimize the impact of change on day-to-day operations
To effectively deploy new technology to the user community
To effectively migrate new releases into the operational environment
Figure imgf000019_0001
Figure imgf000019_0002
Process Areas (302)
Process Areas are the second level in the operations hierarchy. The elements of this level are a collection of Base Practices that are performed to achieve the defined puφose of the Process Area.
In the present description, Process Areas refer to a collection of Base Practices that are performed sequentially, concurrently and/or iteratively to achieve the defined puφose of the process area. The puφose describes the unique functional objectives of the process area when instantiated in a particular environment. Satisfying the puφose statement of a process area represents the first step in building process area capability.
Examples of Process Areas for the Service Management Category include service level management, operations level management, service desk, user administration, and service pricing. To illustrate further, the puφose of service level management may be to document the information technology services to be delivered to users. Note that this puφose states a unique functional objective (to establish requirements), and provides a context (service level). Base Practices (304)
Base Practices are the lowest level in the operations hierarchy. Base Practices are essential activities that an IT organization performs to achieve the purpose of a Process Area. A base practice is what an IT organization does.
For example, Base Practices of service level management may be to assess business strategy, audit current service levels, determine service requirements and IT's ability to deliver services, prepare a draft SLA, identify the charge-back structure, and agree to SLAs with customers. The Process Areas are expressed in terms of their goals, whereas Base Practices are tasks that need to be carried out to achieve those goals. Base Practices may have work products associated with them. A work product is evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or a SLA report.
A service desk example of a process area and associated base practices is as follows:
Base Practices
Capability Dimension
In the present description, Capability Dimension refers to formalizing the process performance into a quantifiable range of expected results based on the process capability level that can be achieved by following the process. The process capability dimension characterizes the level of capability of each process area within an organization. In other words, the process capability dimension describes how well the processes in the process dimension are performed.
The Capability Dimension measures how well an IT organization performs its operational processes. In determining capabilities, the Base Practices are viewed as a guide to what should be done. The related Generic Practices deal with the effectiveness with which the Base Practices are carried out. Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. The present invention has five levels of Process Capability that can be applied to any Process Area. The Capability Dimension provides a means to formalize and quantify the process performance. The Capability Dimension describes how well the processes are performed, as contrasted with Base Practices, which describe what an IT organization does.
The Capability Dimension consists of three components: Capability Levels, Process Attributes, and Generic Practices. These are described below.
Capability Levels
In the present description, Capability Levels indicate increasing levels of process maturity and are composed of one or more generic practices that work together to provide a major enhancement in the capability to perform the process.
The Capability Level is the highest level of the Capability Dimension. The Capability Level of a process determines its performance and effectiveness. Each Capability Level has certain Process Attributes associated with it. A Process Attribute is composed of a set of Generic Practices that provide criteria for improving performance. A particular Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are present. Therefore, once the Capability Level is determined, those Process Attributes, and associated Generic Practices, that are required to enhance capability can be identified. In other words, Capability Levels offer a staged guideline for improving the capability to perform the defined processes. Capability Levels provide two benefits: they acknowledge dependencies and relationships among the Base Practices of a Process Area, and they help an IT organization identify which improvements should be performed first, based on a plausible sequence of process implementation.
Each level provides a major enhancement in capability to that provided by its predecessors in the fulfillment of the process purpose. For example, at capability Level 1, Base Practices are performed. The performance is ad hoc, informal, and unpredictable. At capability Level 2, the performance of Base Practices is planned and tracked rather than just performed, thereby offering a significant improvement over Level 1 practice.
In this architecture, the Capability Levels are applied to each Process Area independent of other Process Areas. An assessment is performed to determine Process Capability for each Process Area, as illustrated in Figure 4.
In the present description, an assessment refers to a diagnostic performed by a trained team to evaluate aspects of an organization's IT operations environment processes. The trained team determines the state of the operational processes, identifies pressing operational process related issues, and obtains organizational support for a process improvement program.
Therefore, different Process Areas can, and may, exist at different levels of capability. The ability to rate Process Areas independently enables an IT organization to focus on process improvement priorities driven from business goals and strategic directions. An example of this is illustrated in Figure 4.
Process Attributes
In the present description, process attributes refer to features of a process that can be evaluated on a scale of achievement (performed, partially performed, not performed, etc.) which provide a measure of the capability of the process.
Within the framework of the present invention, measures of capability are based on a set of nine Process Attributes. Process Attributes are used to determine whether a process has reached a given capability. The nine Process Attributes are:
• Process Performance
• Performance Management
• Work Product Management
• Process Definition
• Process Resource
• Process Measurement
• Process Control
• Process Change
• Continuous Improvement
The attributes are evaluated on a four-point scale of achievement. Achieving a given Capability Level depends on the rating assigned to one or more of these attributes.
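By way of illustration only, the nine Process Attributes and the four-point achievement scale may be represented as simple data structures. The following sketch is in Python, which the present description does not prescribe; the rating names follow the fully/largely/partially/not achieved scale used in the rating discussion later in this description, and all identifiers are hypothetical.

from enum import IntEnum

class AttributeRating(IntEnum):
    # Four-point achievement scale applied to each Process Attribute.
    NOT_ACHIEVED = 0
    PARTIALLY_ACHIEVED = 1
    LARGELY_ACHIEVED = 2
    FULLY_ACHIEVED = 3

# The nine Process Attributes listed above.
PROCESS_ATTRIBUTES = [
    "Process Performance",
    "Performance Management",
    "Work Product Management",
    "Process Definition",
    "Process Resource",
    "Process Measurement",
    "Process Control",
    "Process Change",
    "Continuous Improvement",
]

# Example: an assessed Process Area carries one rating per attribute.
service_desk_ratings = {attr: AttributeRating.NOT_ACHIEVED for attr in PROCESS_ATTRIBUTES}
service_desk_ratings["Process Performance"] = AttributeRating.FULLY_ACHIEVED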
Generic Practices
In the present description, Generic Practices refer to activities that contribute to the capability of managing and improving the effectiveness of the operations environment Process Areas. A generic practice is applicable to any and all Process Areas. It contributes to overall process management, measurement, and the institutionalization capability of the Process Areas.
For example, the allocation of adequate resources to a process is a Generic Practice and is applicable to all processes. Service Level Management and Migration Control are two different Process Areas with different Base Practices, goals, and purposes. However, they share the same Generic Practice of allocation of adequate resources.
Maturity Dimension
Operational Maturity Dimension characterizes the maturity of an entire operations IT organization. In the present description, maturity refers to the degree of order (structure or systemization) and effectiveness of a process. The degree of order determines its state of maturity. Less mature processes are less ordered and less effective; more mature processes are more ordered and more effective.
The Capability Dimension focuses on the determination of the capability of individual processes, within an operations organization, in achieving their stated goals and purpose. The Operational Maturity Dimension determines the IT organizational maturity by focusing on a collection of processes at a certain level of capability in order to characterize the evolution of the operations IT organization as it improves.
The term Maturity, in the overall context of the present invention, is applied to an IT organization as a whole. The Maturity Level is determined by the Capability Level of the four Process Categories: Service Management, Systems Management, Managing Change, and IT Operations Planning. Operational maturity is defined by a staged model, wherein an operational maturity level 500 cannot be reached until all Process Categories driving it have themselves reached a certain maturity level. Similarly, a category Capability Level 502 cannot be reached until all Process Areas 302 contained in it have reached a certain Process Capability Level 504. This staging is illustrated in Figure 5.
In the present description, Maturity Level refers to a sequence of key intermediate states leading to the goal state. Each state builds incrementally on the preceding state.
Even though it is recommended that an entire operational assessment be conducted, the assessment tool of the present invention is flexible to accommodate an assessment of a Process Category or just a Process Area. As shown in Figure 5, an assessment could end at the Process Area Level with the Process Capability Level or Process Area Maturity determined. An assessment could also be performed to assess all the Process Areas within a Process Category to determine the Process Category Maturity Level.
The framework of the present invention, which consists of the three dimensions described previously, is illustrated in Figure 6. The Operations Environment Dimension 600, the box in the center of Figure 6, divides all IT processes into Process Categories 300. Process Categories
300 divide into a finite number of Process Areas 302. Process Areas 302 consist of a finite number of Base Practices 304.
Each Process Area within a category is assigned a Capability Level 504 based on the performance of Process Attributes 601 comprised of a finite number of Generic Practices 602 applicable to that process (shown in the box on the right).
In turn, the IT organization's operational maturity 603 is based on a clustering of process capabilities, as illustrated in the third box to the left. The framework of the present invention is designed to support an IT organization's need to assess and improve their operational capability. The structure of the model enables a consistent appraisal methodology to be used across diverse Process Areas. The distinction between essential operations and process management-focused elements therefore allows a systematic approach to process improvement.
Capability Determination
As described in the previous section, the Capability Dimension of the present invention measures how capable an IT organization is in achieving the purpose of its various Process Areas. Within the context of the present invention, Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. In this section, the Capability Levels, their characteristics, the Process Attributes, and the Generic Practices that comprise them are discussed in more detail.
The present invention has five levels of Process Capability that can be applied to any Process
Area. As mentioned before, Generic Practices are grouped by Process Attributes, and Process Attributes determine the Capability Level. Capability Levels build upon one another; levels cannot, therefore, be skipped.
Figure 7 tabulates the relationship of Generic Practices and Process Attributes to Capability
Levels.
The following section explains in greater detail what is meant by Level 1, Level 2, and so forth. Each Level is described in terms of its characteristics and the Generic Practices (GP) assigned to it.
Level 1: Performed Informally
At this Level, all Base Practices are generally performed, but operations may be ad hoc and occasionally chaotic. Consistent planning and tracking of performance is not performed. Good performance depends on individual knowledge and effort. Operational support and services are generally adequate, but quality and efficiency depend on how well individuals within the IT organization perceive that tasks should be performed. The capability to perform an activity is not generally repeatable or transferable.
Process Attribute
ATT 1A: Process Performance - the extent to which the execution of the process employs a set of practices which uses identifiable input work products to produce identifiable output work products that are adequate to satisfy the purpose of the process.
In order to achieve this capability, Base Practices of the process must be implemented and work products must be produced that satisfy the process purpose. The related Generic Practice is:
GP1.1 Ensure that Base Practices are performed. When all base practices are performed, the purpose of the process area is satisfied. A process may exist but it may be informal and undocumented.
Level 2: Planned and Tracked
At this Level, performance of the Base Practices in the Process Area is planned and tracked. The necessary discipline is in place to repeat earlier successes with similar characteristics.
There is general recognition that the Process Area performance is dependent on how efficiently the Base Practices are implemented. Work products, such as completed change control requests, resolved trouble tickets, etc., which are related to base practice implementation are periodically reviewed and placed under version control. Corrective action is taken when variances in services and work products occur.
Process Attribute
ATT 2A: Performance Management - the extent to which the execution of the process is managed in order to produce work products within a stated time and resource requirement. The related Generic Practices are:
GP2.1 Establish and maintain a policy for performing operational tasks.
Policy is a visible way for the operations environment personnel and the management team to set expectations. The form of policies varies widely depending on the local culture. Policy typically specifies that plans are documented, managed and controlled, and that reviews are conducted. Policy provides guidance for performing the operational tasks and processes.
GP2.2 Allocate sufficient resources to meet expectations. Resources include adequate funding, appropriate physical facilities, skilled people, and appropriate tools. This practice ensures that the level of effort, appropriate skills mix, tools, workspace, and other direct resources are available to perform the operational tasks and processes.
GP2.3 Ensure personnel receive the appropriate type and amount of training. Ensure that the individuals are appropriately trained on how to perform the operational tasks and processes. Training provides a common basis for repeatable performance. Even if the operations personnel or management have satisfactory technical skills and knowledge, there is almost always a need to establish a common understanding of the operational process activities and how skills are applied in them. Training, and how it is delivered, may change with process capability due to changes in how the process is performed and managed.
GP2.4 Collect data to measure performance. The use of measurement implies that the metrics have been defined and selected, and data has been collected. Building a history of measures, such as cost and schedule variances, is a foundation for managing by data. Quality measures may be collected and used, but result in maximum impact at Level 4 when they are subjected to quantitative process control.
GP2.5 Maintain communication among team members.
Open communication ensures that there is common understanding, that decisions are consensual, and that team members are kept aware of decisions made. Communication is needed when changes are made to plans, products, processes, activities, requirements, and responsibilities. The commitments, expectations, and responsibilities are documented and agreed upon within the project group. Commitment may be obtained by negotiation, by using input and feedback, or through joint development of solutions to issues. Issues are tracked and resolved within the group. Communication occurs periodically and whenever the status changes. The participants have access to data, status information, and recommended actions.
Process Attribute
ATT 2B: Work Product Management - the extent to which the process is managed to produce work products that are documented and controlled, and that meet their functional and nonfunctional requirements, in line with the work product quality goals of the process. In order to achieve this capability, a process needs to have stated functional and non-functional requirements for work products, including integrity, and to produce work products that fulfill the stated requirements. The related Generic Practices are:
GP2.6 Ensure work products satisfy documented requirements.
Requirements may come from the business customer, policies, standards, laws, regulations, etc. The applicable requirements are documented and available for verification activities.
GP2.7 Employ version control to manage changes to work products. Place identified work products under version control, or configuration management to provide a means of controlling work products and services.
Level 3: Well-Defined
At Level 3, Base Practices are performed with the assistance of an available, well-defined, and operations-wide process infrastructure. The processes are tailored to meet the specific needs of a certain practice.
Data from using the process are gathered to determine if modifications or improvements should be made. This information is used in planning and managing the day-to-day execution of multiple projects within the IT organization, and for short- and long-term process improvement.
Once the environment is stable, common practices for performing the processes are collected, defined in a consistent manner, and used as the basis for long-term improvement across the operations environment. At this level, the proper mechanism is in place to distribute knowledge and experience throughout the operations environment.
Process Attribute
ATT 3A: Process Resource - the extent to which the execution of the process uses suitable skilled human resources and process infrastructure effectively to contribute to the defined business goals of the operations environment.
In order to achieve this capability, a process needs to have an infrastructure available that fulfills stated needs, and adequate human resources. The related Generic Practices are:
GP3.1 Define policies and procedures at an IT level.
Policies, standards, and procedures are established at an IT level for common use throughout the operations environment.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly. This includes:
Identifying the standard process, from those available in the IT organization, that is appropriate to the process purpose and the business goals of the IT organization; tailoring the standard process to obtain a defined process appropriate for the task at hand; and implementing the defined process to achieve the process purpose consistently and repeatedly, and to support the business goals of the organization.
Process Attribute
ATT 3B: Process Definition - the extent to which the execution of the process uses a definition, based upon a standard process, that enables it to contribute to the defined business goals of the IT organization.
In order to achieve this capability, a process needs to be executed according to a standard definition that has been suitably tailored to the needs of the process instance. The standard process needs to be capable of supporting the stated business goals of the IT organization. The related Generic Practices are:
GP3.3 Plan for human resources proactively. Unlike training at Capability Level 2, this practice embodies the pro-active planning of personnel. This includes the selection of proper work forces, training, and dissemination.
GP3.4 Provide feedback in order to maintain knowledge and experience. The standard process repository is to be kept up to date through a continuous feedback system based on experiences gained from using the defined process.
Level 4: Quantitatively Controlled
At this Level, processes and services are quantitatively measured, understood, and controlled.
Detailed measures of performance are collected and analyzed. Establishing common processes within an operations environment enables more sophisticated methods of performing activities. These activities include controlling processes and results quantitatively, integrating processes across groups, and fine-tuning processes to different services.
At this Level, measurable process goals are established for each defined process and associated services. Detailed measures of performance are collected and analyzed. This data enables quantitative understanding of the processes and an improved ability to predict performance. Performance is objectively managed, the quality of services is quantitatively known, and defects are selectively identified and corrected.
Process Attribute
ATT 4A: Process Measurement - the extent to which measures are used to ensure that the implementation of the process supports its execution, and contributes to the achievement of IT organizational goals.
In order to achieve this capability, a process needs to have defined measures that enable an execution to be controlled. The related Generic Practices are:
GP4.1 Establish measurable quality objectives for the operations environment.
These quality objectives can be tied to the strategic quality goals of the IT organization, the particular needs and priorities of the customer, or the tactical needs of a specific group or project. The measurements referred to here go beyond the traditional service level and end product measurements. They are intended to imply sufficient understanding of the processes being used to enable the IT organization to set and use intermediate goals for work-product quality.
GP4.2 Automate data collection.
Process definitions are modified to reflect the quantitative nature of process performance. Measurements become inherent in the process definition and are collected as the process is being performed.
Process Attribute
ATT 4B: Process Control - the extent to which the execution of the process is controlled through the collection and analysis of measures that correct the performance of the process in order to reliably achieve the defined process goals. The related Generic Practices are:
GP4.3 Provide adequate resources and infrastructure for data collection.
Since the success of Level 4 depends fundamentally on the collection of proper data, automated methods should be in place to collect them. This includes software tools and meaningful placement of appropriate metrics for collection of the relevant data.
GP4.4 Use data analysis methods and tools to manage and improve the process.
This includes the identification of analysis and control techniques appropriate to the process; the provision of adequate resources and infrastructure for analysis and process control; analysis of available measures to identify process control parameters; and, identification of deviations and employment of corrective actions.
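The present description does not prescribe a particular analysis or control technique for GP4.4. As one hedged illustration only, the Python sketch below derives simple statistical control limits from historical measurements and flags deviations that would trigger corrective action; the three-standard-deviation threshold and all identifiers are assumptions introduced for this example.

from statistics import mean, stdev

def control_limits(history: list[float], k: float = 3.0) -> tuple[float, float]:
    # Derive lower/upper control limits from historical performance measures.
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

def deviations(history: list[float], recent: list[float]) -> list[float]:
    # Flag recent measurements outside the control limits so that
    # corrective actions can be employed, in the spirit of GP4.4.
    lower, upper = control_limits(history)
    return [x for x in recent if not (lower <= x <= upper)]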
Level 5: Continuously Improving
Level 5 is the highest achievement level from the viewpoint of Process Capability.
Continuous process improvement is enabled by quantitative feedback from the process and from pilot studies of innovative ideas and new technology. A focus on widespread, continuous improvement should permeate the IT organization. The IT organization should establish quantitative performance goals for process effectiveness and efficiency, based on its business goals and strategic objectives.
Once critical business objectives are consistently evaluated and compared against process capability, continuous improvement can be institutionalized within the operations environment.
This results in a cycle of continuous learning.
Process Attribute
ATT 5A: Continuous Improvement - the extent to which changes to the process are identified and implemented to ensure continuous improvement in the fulfillment of the defined business goals of the IT organization. In order to achieve this capability, it is necessary to continuously identify and implement improvements to the tailored process, and provide input to make changes to the standard process definition. The related Generic Practices are:
GP5.1 Continually improve tasks and processes
Improvements may be based on incremental operational refinements or on innovations, such as new technologies. Improvements may typically be driven by the following activities:
• Identifying and approving changes to the standard process definition on the basis of quantitative understanding of the process.
• Providing adequate resources to effectively implement the approved changes in affected tailored processes.
• Implementing the approved changes to the affected tailored processes.
• Validating the effectiveness of process change on the basis of measurement of actual performance against the process and business goals.
Process Attribute
ATT 5B: Process Change - the extent to which changes to the definition, management, and performance of the process are controlled to better achieve the business goals of the IT organization.
In order to achieve this capability, a process may use quantitative methods to identify and implement changes to the standard process definition. The related Generic Practices are:
GP5.2 Deploy "best practices" across the IT organization. Improved practices must be deployed across the operations environment to allow their benefit to be felt across the IT organization. The deployment activities include:
• Identifying improvement opportunities in a systematic and proactive manner to continuously improve the process.
• Establishing an implementation strategy based on the identified opportunities to improve process performance according to business goals.
• Implementing changes to selected areas of the tailored process according to the implementation strategy.
• Validating the effectiveness of process change on the basis of measurements of actual performance against process and business goals, and then feeding the results back to the standard process definition.
Rating Framework
The rating framework requires identification of objective attributes or characteristics of a practice or work product of an implemented process to validate that Base Practices are performed, and Generic Practices are followed. Assessment Indicators determine Process Attribute ratings which then are used to determine Capability Level.
In the present description, Assessment Indicators refer to objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process.
Process Capability Rating
The cornerstone of a rating framework is the identification and description of Assessment Indicators to help rate the Process Attributes. Assessment Indicators are objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process. Assessment Indicators are evidence that Base Practices are performed, and Generic Practices are followed. The indicators are not intended to be regarded as a mandatory checklist to be followed, but rather are a guide to enhance an assessment team's objectivity in making their judgments of a process's performance and capability. The rating framework adds definition and reliability to the present invention, and thereby improves repeatability.
Assessment Indicators are determinants of Process Attribute ratings for each Process Capability attribute. Each assessed process profile consists of a set of Process Attribute ratings. Each attribute rating represents a judgment by the assessment team of the extent to which the attribute is achieved.
Figure 8 illustrates the Process Attribute rating represented on a four-point scale of achievement.
The indicators determine attribute ratings, which are then used to determine the Capability Level. The rating scale defined below is used to describe the degree of achievement of the defined capability characterized by Process Attributes. Once the appropriate rating for each Process Attribute is determined, ratings can be combined to assign the Capability Level achieved by the assessed process. Figure 9 represents the mapping of attribute ratings to the determination of process Capability Levels.
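Since Figure 9 itself is not reproduced in this text, the mapping can only be illustrated approximately. The Python sketch below assumes, for simplicity, that a Capability Level is achieved when all of its Process Attributes, and those of every preceding level, are rated fully achieved; the attribute-to-level grouping follows the ATT 1A through ATT 5B designations given in the level descriptions above, and all identifiers are illustrative rather than part of the claimed rating framework.

from enum import IntEnum

class Rating(IntEnum):
    NOT_ACHIEVED = 0
    PARTIALLY_ACHIEVED = 1
    LARGELY_ACHIEVED = 2
    FULLY_ACHIEVED = 3

# Process Attributes grouped by the Capability Level they characterize.
ATTRIBUTES_BY_LEVEL = {
    1: ("Process Performance",),
    2: ("Performance Management", "Work Product Management"),
    3: ("Process Resource", "Process Definition"),
    4: ("Process Measurement", "Process Control"),
    5: ("Continuous Improvement", "Process Change"),
}

def capability_level(attribute_ratings: dict[str, Rating]) -> int:
    # Return the highest level whose attributes, and those of all preceding
    # levels, are fully achieved; levels build on one another and cannot be skipped.
    achieved = 0
    for level in range(1, 6):
        attrs = ATTRIBUTES_BY_LEVEL[level]
        if all(attribute_ratings.get(a, Rating.NOT_ACHIEVED) == Rating.FULLY_ACHIEVED for a in attrs):
            achieved = level
        else:
            break
    return achieved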
As an example, to assess the capability of a particular instance of a Service Desk process, the first step is to identify if the appropriate Base Practices are performed at all. The necessary foundation for improving the capability of any process is to at least demonstrate that the Base Practices are being performed. The assessment team may then formulate an objective judgment of process performance attribute through different means such as analysis of the work products
(e.g., reviewing completed trouble tickets), demonstration of evidence of process implementations (e.g., are escalation procedures documented and understood?), interviews with process performers (e.g., discussing daily activities with Service Desk personnel), and other means as appropriate (e.g., does the Service Desk have a dedicated phone number that users should call to report incidents/problems/requests, or a dedicated email address, etc.).
Achievement of Base Practices is an indication that Process Area goals are being met. The increasing capability of a process to effectively achieve its goals and objectives is based upon attribute rating. The attribute rating is determined by the performance of the associated Generic Practices. Evidence of effective performance of the Generic Practices associated with a Process
Attribute supports the assessment team's judgement of the degree of achievement of the attributes.
Operational Maturity Rating
Up to now, the discussion has focused on the capability rating of Process Areas. To determine the maturity level of an organization (the third dimension of the architecture of the present invention), the capability ratings are used.
A Process Category's capability is determined from the capability ratings of its Process Areas. Once all Process Areas of a category are rated, the lowest rating assigned to a Process Area becomes the category rating as well. Similarly, the operational maturity rating is determined from the Process Category ratings within the IT organization. Once all Process Categories are rated, the lowest rating assigned to a Process Category becomes the IT organizational maturity. For example, if the Process Categories of an IT organization are rated as follows, then this particular IT organization would receive a maturity level rating of "1" (a minimal sketch of this roll-up rule follows the table).
Process Category Capability Rating
Service Management 2
Systems Management 1
IT Operations Planning 3
Managing Change 2
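The lowest-rating roll-up described above can be sketched as follows; this is a minimal illustration in Python (not prescribed by the present description), and the function and variable names are hypothetical.

def category_capability(process_area_levels: dict[str, int]) -> int:
    # A category's rating is the lowest Capability Level among its Process Areas.
    return min(process_area_levels.values())

def operational_maturity(category_ratings: dict[str, int]) -> int:
    # Organizational maturity is the lowest rating among the Process Categories.
    return min(category_ratings.values())

# The example ratings tabulated above yield a maturity level rating of 1.
example_ratings = {
    "Service Management": 2,
    "Systems Management": 1,
    "IT Operations Planning": 3,
    "Managing Change": 2,
}
assert operational_maturity(example_ratings) == 1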
In the present invention, the concept of capability is applied to processes, and the concept of maturity is applied to IT organizations.
Assessment Process
In performing an assessment, an assessment team collects the evidence on the implementation of the processes being assessed and determines their compatibility as defined in the framework of the present invention. The objective of the assessment is to identify the differences and the gaps between the actual implementations of the processes in the assessed operational IT organization with respect to the present invention. Using the framework of the present invention ensures that results of assessments can be reported in a common context and provides a basis on which comparisons can be made.
The assessment process is used to appraise an organization's IT operations environment process capability. Defining a reference model ensures that results of assessments can be reported in a common context and provides a basis on which comparisons can be made.
An IT organization can perform an assessment for a variety of reasons. An assessment can be performed in order to assess the processes in the IT operations environment with the purpose of improving its own work and service processes. An IT organization can also perform an assessment to determine and better manage the risks associated with outsourcing. In addition, an assessment can be performed to better understand a single functional area such as systems management, a single process area such as performance management, or the entire IT operations environment. Three phases are defined in the assessment model: Planning and Preparing, Performing, and Distributing Results. All phases of the assessment are performed using a team-based approach. Team members include the client sponsor, the assessment team lead, assessment team members, and client participants.
Plan and Prepare for the Assessment
Determine Assessment Scope
In the present description, assessment scope refers to the organizational entities and components selected for inspection. A clear understanding of the purpose of the framework, constraints, roles, responsibilities, and outputs is needed prior to the start of the assessment. Therefore, in preparation for the assessment, the assessment team lead and the client sponsor work together to reach agreement on the scope and goals of the assessment. Once agreement is reached, the assessment team lead ensures that the IT operational processes selected for the assessment are sufficient to meet the assessment purpose and may provide output that is representative of the assessment scope.
An assessment plan is developed based on the goals identified by the client sponsor. The plan consists of detailed schedules for the assessment and potential risks identified with performing the assessment. Assessment team members, assessment participants, and areas to be assessed are selected. Work products are identified for initial review, and the logistics for the on-site visit are identified and planned.
Train the Assessment Team
The assessment team members must receive adequate training on the framework of the present invention and the assessment process. It is essential that the assessment team be well trained on the present invention to ensure that they have the ability to interpret the data obtained during the assessment. The team must have a comprehensive understanding of the assessment process, its underlying principles, the tasks necessary to execute it, and their role in performing the tasks.
Gather Assessment Input
Maturity questionnaires are distributed to participants prior to the client site visit. Maturity questionnaires exist for each process area of the present invention, and tie back to base practices, process attributes and generic practices. Completed questionnaires provide the assessment team with an overview of the IT operational process capability of the IT organization. The responses assist the team in focusing their investigations, and provide direction for later activities such as interviews and document reviews. Assessment team members prepare exploratory questions based on Interview Aids and responses to the maturity questionnaires.
In the present description, Interview Aids refers to a set of exploratory questions about the operations environment which are used during the interview process to obtain more detailed information on the capability of the IT organization. The interview aids are used by the assessment team to guide them through interview sessions with assessment participants.
Assessment participants prepare documentation for the assessment team members to review. Documentation about the IT operational processes allows the assessment team to tie IT organization data to the present invention.
Conduct Assessment
A kick-off meeting is scheduled at the start of the on-site activities. The purpose of the meeting is to provide the participants with an overview of the present invention and the assessment process, to set expectations, and to answer any questions about the process. A client sponsor of the assessment may participate in the presentation to show visible support and stress the importance of the assessment process to everyone involved.
Gather Data
Data for the assessment are obtained from several sources: responses to the maturity questionnaires, interview sessions, work products, and document reviews. Documents are reviewed in order to verify compliance. Interviewing provides an opportunity to gain a deeper understanding of the activities performed, how the work is performed, and processes currently in use. Interviewing provides the assessment team members with identifiable assessment indicators for each Process Area appraised. Interviewing also provides the opportunity to address all areas of the present invention within the scope of the assessment.
Interviews are scheduled with IT operations managers, supervisors, and operations personnel. IT operations managers and supervisors are interviewed as a group in order to understand their view of how the work is performed in the IT organization, any problem areas of which they are aware, and improvements that they feel need to be made. IT operations personnel are interviewed to collect data within the scope of the assessment and to identify areas that they can and should improve in the IT organization.
Examples of maturity questionnaires associated with the foregoing service desk example are as follows:
Questions
Base Practice: 1.3.1 Call Attention
What methods are available to users for communication with the Service Desk, and do users have access to resources needed for such communication?
Are all users informed how and when to contact the Service Desk? If so, how?
Do all users receive the same level of support? If no, how does support differ?
Do you gather call statistics like total volume of calls and number of abandoned calls? If so, can we access this information?
Is there a need for after-hours support? If so, what type of after-hours support does the Service Desk provide?
Base Practice: 1.3.2 Incident/Request Logging
1. What is the procedure for logging incidents/requests, and is this followed in all cases?
Is a priority level assigned to the incident/request at time of receipt, and how is it determined?
Base Practice: 1.3.3 Incident/Request Qualification
Do Service Desk personnel have access to a catalogue/database of frequently occurring incidents and their solutions, and does its format allow for rapid access and search?
How often is this catalogue/database accessed to provide an immediate solution or work-around to the user? (e.g., all calls, some calls, very few calls)
How frequently is this catalogue/database updated?
What other resources exist to aid Service Desk personnel with immediate incident resolution?
Base Practice: 1.3.4 Incident/Request Assignment
Is there a defined time frame within which the incident/request should be assigned and is it usually followed?
Are users notified of receipt, status and approximate time to resolution (if possible) of incident/request and provided with the incident/request ID?
By what process are the appropriate personnel determined for handling an incident/request?
Is a defined system used for assigning responsibility for an incident/request to the appropriate personnel? (e.g. trouble tickets are generated and sent to appropriate personnel)
Is a record made of the person to whom the incident/request is assigned?
Base Practice: 1.3.5 Incident & Problem Resolution
Are non-resolved incidents/problems escalated according to procedures defined in SLAs?
2. How are appropriate resources notified that the incident/problem has been escalated?
While problem resolution is in process, is a work-around solution determined and conveyed to the user?
When a problem is escalated or a resolution has been determined, is the log updated?
Does the Service Desk or the party to whom the problem was escalated "own" the problem?
Base Practice: 1.3.6 SLA & OLA Tracking and Monitoring
What is the system for tracking and monitoring the problem resolution process for an incident/request?
What types of issues (e.g. excessive reassignments, deviations from estimated task times) are flagged, and what action is taken to address them?
Base Practice: 1.3.7 Resolution Confirmation
Are users notified of incident/request resolution?
Is confirmation sought from the user to verify that incident/request has been resolved satisfactorily?
If such confirmation is not obtained, what is done?
Base Practice: 1.3.8 Incident / Request Closure
How is an incident/request closed? What records are made?
If it exists, is a solution database updated with the incident/problem and solution for future reference?
What parties are informed of a closure?
Base Practice: 1.3.9 Trends and Repetitive Incident Analysis
Are incidents analyzed to detect trends and identify underlying problems? If so, by what process?
Are users notified of known incidents proactively before they report the incident?
Base Practice: 1.3.10 Service Level Control
Does the Service Desk generate reports comparing actual service levels (e.g. number of incidents resolved at initial call, resolution time by severity) with target service levels?
Who receives these reports and for what purposes?
How are service level targets set, and what is the process for reviewing/updating them?
Do the users communicate their views of support to the Service Desk and agree with the Service Desk's assessment of incident and problem management?
Base Practice: 1.3.11 Receive Requests
Are requests handled immediately or do they require provisioning/approval?
Does the Service Desk coordinate the approval of requests with the appropriate functions and notify the requester of approval/rejection?
If request requires functions outside the Service Desk, how does the Service Desk pass responsibility to the appropriate personnel?
Do SLAs exist between the Service Desk and the end user community?
Do agreements exist between the Service Desk and the next level of support (internal or external)?
Generic Questions for Process Area
Are the policies for Service Desk operation outlined in a document? How are employees made aware of these policies?
What mechanisms are in place to ensure policies are followed?
How frequently are Service Desk policies reviewed and/or modified? What is the process for such policy updates?
Are the current staff and resources of the Service Desk adequate for satisfactorily meeting user needs? What type of qualification and/or training do Service Desk personnel have?
Are Service Desk operations periodically reviewed in order to identify and implement potential improvements? Who manages this process?
Solidify Information
The purpose of solidifying this information is to summarize and consolidate information into a manageable set of findings. The data is then categorized into Process Areas of the present invention. The assessment team must reach consensus on the validity of the data and whether sufficient information in the areas evaluated has been collected. It is the team's responsibility to obtain sufficient information on the components of the present invention within the scope of the assessment for the required areas of the IT organization before any rating can be done. Follow-up interviews may occur for clarification.
Initial findings are generated from the information collected thus far, and presented to the assessment participants. The purpose of presenting initial findings is to obtain feedback from the individuals who provided information during the various interviews. Ratings are not considered until after the initial findings presentations, as the assessment team is still collecting data. Initial findings are presented in multiple sessions in order to protect the confidentiality of the assessment participants. Feedback is recorded for the team to consider at the conclusion of all of the initial findings presentations. Examples of assessments associated with the foregoing service desk example are as follows:
Level 1
Level 2
Level 3
Level 4
Level 5
Process Attribute | Generic Practice | Example of Assessment Indicator
Rating
After the assessment team consolidates all of the data, the rating process may begin. The experience and training that the assessment team has provides them with the knowledge needed to interpret the data obtained during the assessment. The first step in the rating process is to determine if Process Area goals are being met. Process Area goals are considered met when all base practices are performed. Each process attribute for each Process Area within the assessment scope is then rated. Process attributes are rated based on the existence of and compliance to generic practices. Using the Assessment Indicator Rating template, the assessment team identifies assessment indicators for each process area to determine whether or not process attributes are achieved. Ratings are always established based on consensus of the entire assessment team. Questionnaire responses, interview notes, and documentation are used to support ratings; confirmation from two sources in different contexts (e.g., two people in different meetings) ensures compliance of an activity.
For each process attribute, the team reviews all weaknesses that relate to the associated generic practices. If the team determines that a weakness is strong enough to impact the process attribute, the process attribute is rated "not achieved." If it is decided that there are no significant weaknesses that have an impact on a process attribute, it is rated "fully achieved." For a Process Area to be rated "fully achieved," all process attributes for the Process Area must be rated "fully achieved." A Process Area may be rated fully achieved, largely achieved, partially achieved, or not achieved.
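Under simplifying assumptions, the rating rules just described can be sketched as follows. The sketch treats only the two end-points of the scale (an attribute is rated not achieved when a related weakness is judged significant, otherwise fully achieved); the team-consensus judgment, the two-source corroboration, and the intermediate largely/partially achieved gradations are deliberately not mechanized, and all identifiers are illustrative.

def rate_attribute(weaknesses: list[str], is_significant) -> str:
    # A Process Attribute is rated "not achieved" if any weakness against its
    # associated Generic Practices is judged strong enough to impact it.
    if any(is_significant(w) for w in weaknesses):
        return "not achieved"
    return "fully achieved"

def rate_process_area(attribute_ratings: dict[str, str]) -> str:
    # A Process Area is rated "fully achieved" only when every one of its
    # Process Attributes is rated "fully achieved".
    if all(r == "fully achieved" for r in attribute_ratings.values()):
        return "fully achieved"
    return "not fully achieved"  # largely/partially achieved remains a team judgment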
Assignment of a maturity level rating is optional at the discretion of the sponsor. For a particular maturity level rating to be achieved, all Process Areas within and below the maturity level must be satisfied. For example, for an IT organization to be rated at maturity level 4, all Process Areas at level 4, level 3 and at level 2 must have been investigated during the assessment, and all Process Areas must have been rated achieved by the assessment team. The final findings presentation is developed by the team to present to the sponsor and the IT organization the strengths and weaknesses observed for each Process Area within the assessment scope, the ratings of each Process Area, and the maturity level rating if desired by sponsor.
Wrap up and Distribution of Results
The final assessment results are presented to the client sponsor. During the final presentation, the assessment team must ensure that the IT organization understands the issues that were discovered during the assessment and the key issues that it faces. Operational strengths are presented to validate what the IT organization is doing well. Strengths and weaknesses are presented for each process area within the assessment scope, as well as any issues that affect the process and are unrelated to the present invention. A Process Area profile is presented showing the individual Process Area ratings in detail.
An executive overview session is held in order to allow the senior IT Operations manager to clarify any issues with the assessment team, to confirm his or her understanding of the operational process issues, and to gain full understanding of the recommendations report.
When the assessment has been completed and findings have been presented, the assessment team collects feedback from the assessment participants and the assessment team on the process, and packages information that needs to be saved for historical purposes.
Figure 10 describes the roles and responsibilities of those involved with the assessment process.
As shown, various roles that may be involved with the execution of the present invention include a client sponsor, assessment participants, an assessment team leader, and assessment team members. It should be noted that any of such roles and responsibilities may be automated per the desires of the user.
Figure 11 represents the indicator types and their relationship to the determination of Process Area rating. As shown, evidence of process performance and process capability is provided by assessment indicators. Such assessment indicators, in turn, consist of base practices and generic practices. At the next level, the base practices and generic practices are assessed by process implements, work products, practice performance, resources and infrastructure.
A plurality of examples of additional process areas and associated generic/base practices will now be set forth. In addition, maturity questionnaires are also provided for each example. Given this information, the foregoing principles of the present invention may be employed for determining capability levels of various process areas for process assessment purposes in an operational maturity investigation.
PA's Metrics: Percentage of SLAs signed off on time
Number of iterations of the SLA before sign off
Percentage of SLAs not signed off at the same time as the corresponding OLAs.
Percentage of SLA Reports delivered on time
Base Practices
References
Process Area: SLA Management
Level 1
Assessment Indicators
Process Performance
Generic Practice: Ensure that Base Practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 1.1 SLA Management
Base Practice: 1.1.5 Prepare Draft SLA
What is the procedure for drafting SLAs? What parties are involved?
What does the SLA contain (e.g. specific applications, workload, cost of service, measure of service, type of support etc.)?
Does the SLA outline each key business application (e.g. penalties for SLA violation, tools to maintain SLAs, manager/owner of SLA etc.)?
Are separate user groups determined based on different service requirements and unique SLAs created for each group? If so, do standard guidelines exist?
Does the process of preparing SLAs include identifying potential suppliers to support the service requirements?
Are provisions for normal/contingency/disaster conditions specified in the SLA?
Are monitoring and reporting procedures defined?
Are escalation procedures defined for instances when SLAs are not met?
Has what constitutes a failed SLA and the penalties for failure been determined?
Are provisions for rewards made for cases when service exceeds requirements?
Base Practice: 1.1.6 Identify Charge Back, Budget or Cost Structure Components
Was a chargeback structure determined as part of the SLA preparation process? If so, for what components is the chargeback determined?
How is the chargeback structure utilized in relation to service level management?
Do you have or do any budgeting or costing that is used in SLA management?
Base Practice: 1.1.7 Agree to SLAs with Users
To what parties are SLAs submitted for approval?
How is approval of the SLA documented?
Where is information about the finalized SLA stored? Are SLA summaries available to users?
Is there a system for users to communicate desired changes to services provided?
Base Practice: 1.1.8 Report on SLA Performance
Are actual statistics required to measure service delivery gathered and in what format are they stored?
Is information on service delivery collected according to prescribed schedules?
Are actual service statistics compared to targets defined in the SLAs?
Are users' input on SLA performance obtained (e.g. surveys)?
What types of reports are produced based on the statistics gathered?
Who reviews these reports and what is the process for ascertaining SLA compliance? What procedures are in place to monitor and address SLA breaches?
Does the need for short-term deviations to SLAs due to business requirements arise, and how is it managed?
Generic Questions for Process Area
How often are SLAs re-examined and updated? Approximately how many hours are allocated to review and discuss SLAs?
Are there personnel who control and manage new and existing SLAs? What relevant qualifications and/or training do they have?
Do you think the resources allocated to managing SLAs are adequate? Please explain.
Is the SLA management process periodically evaluated with the intent of identifying possible improvements? How frequently does this occur and what is the process?
Process Capability Assessment Instrument
Process Area 1.1 SLA Management
Process Area Description: SLA Management involves the creation, management, reporting, and discussion of Service Level Agreements (SLAs) with users and the providers within Information Technology (IT). A SLA is a formal agreement between a user who requires information services and the IT organization responsible for providing those services. SLA Management involves the following areas:
SLA Definition: The SLA document defines, in specific and quantifiable terms, the level of service that is to be delivered to users. In the enterprise environment, many design and configuration alternatives are available that affect a given system's response time, availability, development cost, and ongoing operational costs. A SLA clarifies the business objectives and constraints for an application system, and forms the basis for both application design and system configuration choices.
SLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of a SLA.
SLA Control: It is important that the services described in SLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
SLA Review: The reports generated from tracking SLAs are reviewed to ensure that the SLAs are carefully aligned with current business needs and, if necessary, updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
Questionnaire
Process Area 1.1 SLA Management (Business Relationship Management)
Work Product list
Process Area 1.1 SLA Management (Business Relationship Management)
SLA process flow
Sample SLA document
IT capability report
SLA performance reports
User survey results
Charge back structure document
Responsibility matrix
SLA Communication flow
Job description of SLA manager and staff
OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with providers within the organization, as well as external suppliers and vendors. An OLA is an agreement between the IT organization and those delivering the constituent services of the system. OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement as supporting services are guaranteed in the OLA. OLA Management involves the following:
OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
Whether a particular service level can be met, and how it will be met through operational levels
Which provider(s) can supply a service, or part of a service
Roles and responsibilities
What constitutes a failure to meet the OLA, and corresponding penalties (if appropriate)
Procedures for monitoring operational levels
Cost structures
How the service will be measured
Contractual arrangements with the providers
Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and, if necessary, updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
PA's Base Practices: 1.2.1 Determine operational items
1.2.2 Group related operational items
1.2.3 Identify suppliers of operational items
1.2.4 Finalize service suppliers
1.2.5 Prepare OLAs
1.2.6 Agree to OLAs with suppliers
1.2.7 Report on OLA performance
PA Goals: To define a quantifiable service level that represents a minimum level of service for each service delivered.
To gather and compare provider service statistics, and to identify and resolve service deviations.
To regularly review services being delivered, as specified in the OLA, to determine if they are appropriately fulfilling the requirements.
To regularly report on OLA compliance.
PA's Metrics I Percentage of OLAs signed off on time
Number of iterations of the OLA before sign off Percentage of OLA Reports delivered on time
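The PA metrics listed above are simple ratios over whatever OLA tracking records an organization keeps. The following is only an illustrative sketch of how they might be computed; the record fields and function name are assumptions made for the example and are not part of the assessment instrument.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OlaRecord:
    name: str
    signoff_due: date                 # date sign-off was scheduled
    signoff_actual: Optional[date]    # None if not yet signed off
    iterations: int                   # draft iterations before sign-off
    reports_due: int                  # performance reports due this period
    reports_on_time: int              # reports actually delivered on time

def ola_metrics(records):
    # Compute the example PA metrics for OLA Management.
    signed = [r for r in records if r.signoff_actual is not None]
    on_time = [r for r in signed if r.signoff_actual <= r.signoff_due]
    pct_signed_on_time = 100.0 * len(on_time) / len(signed) if signed else 0.0
    avg_iterations = sum(r.iterations for r in signed) / len(signed) if signed else 0.0
    due = sum(r.reports_due for r in records)
    delivered = sum(r.reports_on_time for r in records)
    pct_reports_on_time = 100.0 * delivered / due if due else 0.0
    return {"pct_olas_signed_on_time": pct_signed_on_time,
            "avg_iterations_before_signoff": avg_iterations,
            "pct_ola_reports_on_time": pct_reports_on_time}

Metrics such as these would typically be reported each period and compared against targets agreed with the service partners.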
Base Practices
References
Process Area: OLA Management
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Level 2 Assessment Indicators
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 1 through 5 are not reproduced in this text.)
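Since each capability level above Level 1 is characterized by a set of generic practices, the ratings from an assessment can be rolled up by crediting a process area with the highest level whose generic practices, and those of all lower levels, are sufficiently achieved. The sketch below only illustrates that roll-up logic; the 0-to-1 rating scale and the 0.8 threshold are assumptions made for the example, not values prescribed by the assessment instrument.

ACHIEVEMENT_THRESHOLD = 0.8  # assumed cut-off for "largely achieved"

def capability_level(ratings_by_level):
    # ratings_by_level maps a level number (1..5) to the list of
    # generic-practice achievement ratings observed for that level.
    achieved = 0
    for level in sorted(ratings_by_level):
        ratings = ratings_by_level[level]
        if ratings and all(r >= ACHIEVEMENT_THRESHOLD for r in ratings):
            achieved = level
        else:
            break
    return achieved

# Example: base practices performed (Level 1) and Level 2 practices in
# place, but Level 3 definition/resource practices not yet institutionalized.
example = {1: [1.0], 2: [0.9, 0.85, 0.8, 0.9, 0.85, 0.8, 0.9],
           3: [0.6, 0.4, 0.7, 0.5]}
print(capability_level(example))  # prints 2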
Process Capability Assessment Instrument: Interview Guide
Process Area 1.2 OLA Management
Questions
Base Practice: 1.2.1 Determine Operational Items
What is the process by which the key operational items required to support the SLAs are determined?
What personnel are assigned responsibility for identifying these key operational items?
Base Practice: 1.2.2 Group Related Operational Items
What criteria are used to group operational items together?
Please describe or list the various groupings of operational items.
Does each defined group of operational items typically fall under one OLA?
Base Practice: 1.2.3 Identify Suppliers of Operational Items
What procedure is used to identify potential service providers?
Do service providers include both internal and external organizations?
What information about the service providers is collected?
Are any preliminary negotiations conducted with the suppliers to determine what type of contractual terms they would consider?
Base Practice: 1.2.4 Finalize Service Suppliers
What selection criteria (e.g. cost, training requirements, tools required) are considered when choosing the service providers?
Does a formal system for evaluating potential suppliers exist to aid in the selection process?
Is a list of alternative or back-up suppliers determined?
Base Practice: 1.2.5 Prepare OLAs
How are OLAs prepared and negotiated with suppliers? Is a standardized procedure followed for each OLA?
What do OLAs contain (e.g. workloads, cost of service, targets, type of support etc.)? Does the OLA outline each key business application (e.g. penalties, tools used to maintain the OLA)?
Has a document specifying standard contents of an OLA been created? Are OLAs prepared according to the specifications in this document?
Are Key Performance Indicators (KPIs) or service measurement metrics specified in the OLA? Are targets for the service measurement metrics specified? If so, how are these targets determined, for example is the supplier capability gauged and considered?
Are OLA monitoring and reporting procedures defined, including the specific reports that will be produced?
Are OLA violation escalation procedures determined?
Is a specification of what constitutes a failed OLA made, and are the penalties (if appropriate) for failure determined?
Are there any provisions for rewards if OLA requirements are exceeded?
Base Practice: 1.2.6 Agree to OLAs with Suppliers
To what parties are OLAs submitted for approval?
Process Capability Assessment Instrument
Process Area 1.2 OLA Management
Process Area Description: OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with suppliers and vendors. OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement, as supporting services are guaranteed in the OLA. An OLA is an agreement between the IT organization and those delivering the constituent services of the system. Operational Level Management involves the following:
OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
Whether a particular service level can be met, and how it will be met through operational levels
Which provider(s) can supply a service, or part of a service
Roles and responsibilities
What constitutes a failure to meet the OLA, and corresponding penalties (if appropriate)
Procedures for monitoring operational levels
Cost structures
How the service will be measured
Contractual arrangements with the providers
Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs remain aligned with current business needs and, if necessary, are updated accordingly. In enterprise environments, this process becomes more complex as more components are required to perform these services.
Questionnaire
Work Product list
Process Area 1.2 OLA Management (Service Partner Management)
Sample OLA document
Service level performance reports
OLA compliance reports
Vendor/supplier selection information
Responsibility matrix
OLA Communication flow
Job Description of OLA manager and staff
Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
Request Management - Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
Base Practices
Process Area: Service Desk
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Level 2 Assessment Indicators
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 1 through 5 are not reproduced in this text.)
Process Capability Assessment Instrument: Interview Guide
Process Area 1.3 Service Desk
Questions
Base Practice: 1.3.1 Call Attention
What methods are available to users for communication with the Service Desk, and do users have access to resources needed for such communication?
Are all users informed how and when to contact the Service Desk? If so, how?
Do all users receive the same level of support? If not, how does support differ?
Do you gather call statistics like total volume of calls and number of abandoned calls? If so, can we access this information?
Is there a need for after-hours support? If so, what type of after-hours support does the Service Desk provide?
Base Practice: 1.3.2 Incident/Request Logging
1. What is the procedure for logging incidents/requests, and is this followed in all cases?
Is a priority level assigned to the incident/request at time of receipt and how is it determined?
Base Practice: 1.3.3 Incident/Request Qualification
Do Service Desk personnel have access to a catalogue/database of frequently occurring incidents and their solutions, and does its format allow for rapid access and search?
How often is this catalogue/database accessed to provide an immediate solution or work-around to the user? (e.g., all calls, some calls, very few calls)
How frequently is this catalogue/database updated?
What other resources exist to aid Service Desk personnel with immediate incident resolution?
Base Practice: 1.3.4 Incident/Request Assignment
Is there a defined time frame within which the incident/request should be assigned and is it usually followed?
Are users notified of receipt, status, and approximate time to resolution (if possible) of the incident/request and provided with the incident/request ID? By what process are the appropriate personnel determined for handling an incident/request?
Is a defined system used for assigning responsibility for an incident/request to the appropriate personnel? (e.g. trouble tickets are generated and sent to appropriate personnel)
Is a record made of the person to whom the incident/request is assigned?
Base Practice: 1.3.5 Incident & Problem Resolution
Are non-resolved incidents/problems escalated according to procedures defined in SLAs?
2. How are appropriate resources notified that the incident/problem has been escalated?
While problem resolution is in process, is a work-around solution determined and conveyed to the user?
When a problem is escalated or a resolution has been determined, is the log updated?
Does the Service Desk or the party to whom the problem was escalated "own" the problem?
Base Practice: 1.3.6 SLA & OLA Tracking and Monitoring
What is the system for tracking and monitoring the problem resolution process for an incident/request?
What types of issues (e.g. excessive reassignments, deviations from estimated task times) are flagged and what action is taken to address them?
Base Practice: 1.3.7 Resolution Confirmation
Are users notified of incident/request resolution?
Is confirmation sought from the user to verify that incident/request has been resolved satisfactorily?
If such confirmation is not obtained what is done?
Base Practice: 1.3.8 Incident / Request Closure
How is an incident/request closed? What records are made?
If it exists, is a solution database updated with the incident/problem and solution for future reference?
What parties are informed of a closure?
Base Practice: 1.3.9 Trends and Repetitive Incident Analysis
Are incidents analyzed to detect trends and identify underlying problems? If so, by what process?
Are users notified of known incidents proactively before they report the incident?
Base Practice: 1.3.10 Service Level Control
Does the Service Desk generate reports comparing actual service levels (e.g. number of incidents resolved at initial call, resolution time by severity) with target service levels?
Who receives these reports and for what purposes?
How are service levels targets set and what is the process for reviewing/updating them?
Do the users communicate their views of support to the Service Desk and agree with the Service Desk's assessment of incident and problem management?
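Service level control for the Service Desk amounts to comparing measured indicators against the targets recorded in the SLA and flagging breaches for review. The sketch below is purely illustrative; the metric names, the direction conventions, and the example values are assumptions, not figures taken from any assessment.

def service_level_report(actuals, targets):
    # 'Higher is better' metrics breach when actual < target; time-based
    # metrics (e.g. resolution hours) breach when actual > target.
    higher_is_better = {"first_call_resolution_rate"}
    report = {}
    for metric, target in targets.items():
        actual = actuals.get(metric)
        if actual is None:
            report[metric] = "not measured"
        elif metric in higher_is_better:
            report[metric] = "met" if actual >= target else "breached"
        else:
            report[metric] = "met" if actual <= target else "breached"
    return report

# Example period: 78% of incidents resolved at initial call against an 80%
# target, and a 6-hour average severity-2 resolution time against an 8-hour target.
print(service_level_report(
    {"first_call_resolution_rate": 0.78, "severity2_resolution_hours": 6.0},
    {"first_call_resolution_rate": 0.80, "severity2_resolution_hours": 8.0}))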
Base Practice: 1.3.11 Receive Requests
Are requests handled immediately or do they require provisioning/approval?
Does the Service Desk coordinate the approval of requests with the appropriate functions and notify requester of approval/rejection?
If request requires functions outside the Service Desk, how does the Service Desk pass responsibility to the appropriate personnel?
Do SLAs exist between the Service Desk and the end user community?
Do agreements exist between the Service Desk and the next level of support (internal or external)?
Generic Questions for Process Area
Are the policies for Service Desk operation outlined in a document? How are employees made aware of these policies?
What mechanisms are in place to ensure policies are followed?
How frequently are Service Desk policies reviewed and/or modified? What is the process for such policy updates?
Are the current staff and resources of the Service Desk adequate for satisfactorily meeting user needs?
What type of qualification and/or training do Service Desk personnel have?
Are Service Desk operations periodically reviewed in order to identify and implement potential improvements? Who manages this process?
Are any metrics computed to assess the Service Desk performance? If so, please describe them. Are targets for these metrics established and performance assessed against them?
Process Capability Assessment Instrument
Process Area 1.3 Service Desk
Process Area Description: The Service Desk provides a single point of contact for users with problems or specific service requests. The Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
The Service Desk's main objectives are:
To help users when required.
To manage problem resolution.
To log and document problem types, their frequency, and associated workarounds.
To produce management reports on levels of service and user satisfaction.
The Service Desk consists of the following functions:
Incident Management - An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
Problem Management - A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
Request Management - Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
Questionnaire
Process Area 1.3 Service Desk
Trouble ticket
Employee training handbook
User surveys
Performance reports (resolution, response, trending, etc.)
SLA
Sample log record for an incident/request
Staffing plan document
Service Pricing (1.4)
PA's Metrics (Service Pricing & Cost; Billing & Accounting):
Percentage of chargebacks outstanding per month
Percentage of chargebacks paid on time each month
Total cost of software per month
Total cost of hardware per month
Total cost of services/support per month
Total amount of money spent per month by department
Base Practices
BP Description: Ensure that a list of services and their cost is made available. This list should be a living document that is updated frequently. This document should be broken down to the lowest level of each service if possible.
Example: A vendor may be asked to submit separate pricing for each of the service areas and service levels outlined in their "scope of service" agreement or contract.
BP Number: 1.4.10
BP Name: Inform users about costs
BP Description: Justify the charges assigned to the customer for a particular service. Provide a breakdown of charges.
Example: Cost of services does not only mean individual services per person or department. Cost also needs to be factored in for shared resources.
BP Number: 1.4.11
BP Name: Monitor and assess budgetary spending and actual costs vs. projected costs
BP Description: By reviewing reports and trending data, department/project/user budget auditing, monitoring, and assessment are possible.
Example: By cross-referencing the reports with an updated catalog of service pricing, it may be possible to identify cost reduction opportunities.
BP Number: 1.4.12
BP Name: Review current and planned budgets and cost allocation plans with management/user
BP Description: Establish checkpoints when creating budgets to verify the cost allocation and budgeting strategy with management.
Example: Review the budget plans with management and obtain sign-off.
BP Number: 1.4.13
BP Name: Define what the reporting specifications are and report on financial information
BP Description: The purpose of this activity is to collect consistent, detailed financial information regarding service costs for each department or project in a company. A specification for this collection needs to be set so that unnecessary information is not collected.
Example: Financial thresholds need to be set, along with what specific data should be reported and how often. A company may have the following reports (a brief computational sketch follows this list):
Cost of hardware support internal per department per month
Cost of hardware support by external vendor per department per month
Cost of software support per department per month
Cost of software support by external vendor per department per month
Report of how on or off budget a department may be
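Once the reporting specification fixes which cost records are collected, reports like those above reduce to simple aggregations over the recorded data. The sketch below is illustrative only; the record layout, field names, and categories are assumptions made for the example.

from collections import defaultdict

def monthly_cost_report(records):
    # Each record is assumed to carry: department, month ("YYYY-MM"),
    # category (e.g. "hardware support"), provider ("internal" or
    # "external vendor"), and amount.
    report = defaultdict(float)
    for r in records:
        key = (r["department"], r["month"], r["category"], r["provider"])
        report[key] += r["amount"]
    return dict(report)

def budget_variance(actuals_by_dept, budget_by_dept):
    # Report how far over or under budget each department is.
    return {dept: actuals_by_dept.get(dept, 0.0) - budget
            for dept, budget in budget_by_dept.items()}

The first function covers the per-department, per-month support cost reports; the second gives the "on or off budget" view once actuals are totaled per department.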
BP Number: 1.4.14
BP Name: Disseminate reports to appropriate parties
BP Description: Ensure that monthly reports are distributed to departments or project groups so they can verify any chargebacks and use them as a checkpoint to confirm that they are on track with their budgets.
Example: Monthly equipment support reports can be used to determine and plan for equipment upgrades and for training that helps users better understand their equipment.
Billing & Accounting
BP Number: 1.4.15
References
Process Area: Service Pricing
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Level 2 Assessment Indicators
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 1 through 5 are not reproduced in this text.)
Process Capability Assessment Instrument: Interview Guide
Process Area 1.4 Service Pricing
Questions
Base Practice: 1.4.1 Determine projected service/equipment costs and depreciation schedule for distributed technical environment
What is the process for projecting costs of service and equipment capacity enhancements? How frequently does this occur?
Can costs be projected on a customer-group basis?
Can service costs be broken down by implementation, operation and overhead for each service?
How are depreciation schedules determined?
Are projected costs and depreciation figures used to decide between leasing and purchasing?
Currently, what is the approximate percentage of leased and purchased equipment?
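Where depreciation schedules feed the lease-versus-purchase comparison, a simple straight-line schedule is often the starting point. The following sketch is purely illustrative; straight-line depreciation is one of several acceptable methods and is not mandated by this base practice, and the example figures are invented.

def straight_line_schedule(purchase_cost, salvage_value, useful_life_years):
    # Yearly depreciation expense and remaining book value over the
    # asset's useful life, using straight-line depreciation.
    annual_expense = (purchase_cost - salvage_value) / useful_life_years
    schedule = []
    book_value = purchase_cost
    for year in range(1, useful_life_years + 1):
        book_value -= annual_expense
        schedule.append((year, round(annual_expense, 2), round(book_value, 2)))
    return schedule

# Example: a $12,000 server with a $2,000 salvage value depreciated over 5 years.
for year, expense, value in straight_line_schedule(12000, 2000, 5):
    print(year, expense, value)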
Base Practice: 1.4.2 Determine if chargeback is appropriate
What criteria are used to determine which items will be charged back?
Are departments or other appropriate parties informed of the items with associated charges?
Are there any known "hidden costs" (e.g. users spending business time helping other users)?
What types of costs are not charged to department/project/individuals?
Base Practice: 1.4.3 Determine usage trends
What information is collected on service/equipment usage? Where is this information stored?
What type of trending analysis is performed using this data (e.g. frequency of calls to the Service Desk per department)?
For what purposes are trend data used?
Base Practice: 1.4.4 Prepare budgets and ensure that data is valid and correct
What is the process for creating budgets? Does each department follow a standard procedure?
What information is analyzed while preparing budgets? Are projected service/product costs, expected growth and past budgetary needs considered?
Are periodic audits of the budget performed to ensure the use of accurate and valid data?
Do budgets include contingencies for unanticipated growth or product/service needs?
Base Practice: 1.4.5 Identify product/service options associated with service level objectives
Are SLAs or service level objectives reviewed to verify that all needed products/services are being offered?
At present, are all products/services covered by SLAs?
If a cost cannot be tied back to an SLA, does an evaluation of the need or justification for that service/product occur?
Who is responsible for the process of checking product/service options against SLAs?
Base Practice: 1.4.6 Define products/services in terms useful to customers
How are appropriate parties informed of services/products offered?
Is information about additional costs for non-standard products/services communicated?
Base Practice: 1.4.7 Determine service price costs and model/evaluate costs
How are service costs finalized? Who is in charge of this process?
What type of cost modeling is done? Why was this strategy settled on?
Has a pricing strategy been defined? If yes, please describe.
Does the pricing strategy map back to the services being provided?
Base Practice: 1.4.8 Determine cost allocation plans for services and equipment
What is the procedure for creating cost allocation plans for services and equipment?
How are costs of shared resources (e.g. service desk, technical infrastructure) allocated?
Base Practice: 1.4.9 Prepare, distribute, and maintain a catalogue of service prices for users
What information does the catalogue of service prices for users contain?
How is the catalogue distributed, and how frequently?
Who receives the catalogue and for what purposes?
How frequently is the catalogue updated?
Base Practice: 1.4.10 Inform users about costs
How are users informed of the breakdown of costs (both individual and shared) allocated to them?
Have you found that informing users about costs affects their service expectations and/or the efficiency with which resources are used/requested?
Base Practice: 1.4.11 Monitor and assess budgetary spending and actual costs vs. projected costs
1. What is the procedure for monitoring budgetary spending?
What are the outputs of the budgetary spending assessment process? (i.e. what documents are produced?)
What occurs when spending deviates from budget?
What is the process for notifying management of deviations from proposed spending?
Base Practice: 1.4.12 Review current and planned budgets and cost allocation plans with management/user
Are budgets and allocation plans submitted to management and user representatives for review?
Process Capability Assessment Instrument
Process Area 1.4 Service Pricing
Process Area Description: Service Pricing is comprised of the following areas:
Service Pricing & Cost: Service Costing & Pricing projects and monitors costs for the management of operations, provision of service, equipment installation, etc. Based upon the projected cost and business needs, a service pricing strategy may be developed to re-allocate costs within the organization. If developed, the service pricing strategy will be documented, communicated to the users, monitored and adjusted to ensure that it is both comprehensive
Billing & Accounting: The purpose of Billing & Accounting is to gather information for calculating actual costs, determine chargeback costs, and bill users for services rendered.
Questionnaire
Process Area 1.4 Service Pricing
Yes No Don't Know N/A
Work Product list
Process Area 1.4 Service Pricing
Depreciation schedules
Sample budget
Service price listing or catalogue
Chargeback algorithm or strategy
Chargeback reports
Base Practices
References
Process Area: User Administration
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Level 2 Assessment Indicators
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 1 through 5 are not reproduced in this text; only the fragment "changes to policy, process, etc." survives from the Level 3 table.)
Process Capability Assessment Instrument: Interview Guide
Process Area 1.5 User Administration
Process Capability Assessment Instrument
Process Area 1.5 User Administration
Process Area Description: User Administration handles the day-to-day tasks involved in administering users on a system. Tasks include adding new users, changing user IDs, re-establishing user passwords, and maintaining groups of users.
Questionnaire
Yes No Don't Know N/A
Work Product list
User Administration Maintenance Status Report
Termination List
Change of Name Request Form
Access Control Profile Document
Network Group Access Property Document
Base Practices
References
Process Area: Production Scheduling
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Level 2 Assessment Indicators
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 1 through 5 are not reproduced in this text.)
Process Capability Assessment Instrument: Interview Guide
Process Area 2.1 Production Scheduling
Questions
Base Practice: 2.1.1 Identify requirements for jobs in the distributed environment
1. Describe any workload balancing capabilities provided.
2. Are forecasting mechanisms available? When/how are they used?
3. What reports are produced that provide network traffic data?
4. What tools are used to quantify that the production schedule is meeting goals?
5. What other historical data is used to maintain performance?
Generic Questions for Process Area
1. What are the procedures/policies for the current version of production scheduling? (e.g. Process of submitting a job.)
2. What reports are produced for management, operations and customers that show production performance measurements and verifications? How are these used to manage the production scheduling process?
3. Explain the training provided to the production scheduling staff regarding procedures, systems and interaction with other functions and their importance (e.g. event management, backup and restore, fault recovery, etc.)?
4. Is the process/procedure for production scheduling reviewed for continuous improvement? If yes, how?
5. Has there been a shortage of resources while performing the production scheduling process?
6. When continuous improvements are executed, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
7. What objectives are established to measure the quality of operation standards and processes?
8. What reports are distributed to customers, management and staff that provide feedback/verify adherence regarding the production scheduling process/procedure?
Process Capability Assessment Instrument
Process Area 2.1 Production Scheduling
Process Area Description: Production Scheduling determines the requirements for the execution of scheduled jobs across a distributed environment. A production schedule is then put in place to meet these requirements, taking into consideration other processes occurring throughout the distributed environment (e.g., software and data distribution, and remote backup/restoration of data).
Questionnaire
Process Area 2.1 Production Scheduling
Work Product list
Process Area 2.1 Production Scheduling
Example of an existing production schedule and work flow diagrams
Existing operating procedure manuals
Scheduling software documentation, detailed and quick reference
Examples of custom (or packaged) screens prompting for scheduling information needed to execute jobs or job streams
Phone list of who to call for different types of problems
Existing reports that analyze business customers' performance
Existing reports that review network traffic and hardware during the monitoring process
Existing reports that review network traffic trend data to validate job performance
Results of any network performance testing across the network (e.g. RMON, SNMP, etc.)
Base Practices
References
Process Area: Print Management
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
(The assessment indicator tables for Levels 1 and 2 are not reproduced in this text.)
Level 3 Assessment Indicators
Process Definition (GP3.1: Define policies and procedures at an organization level): There is one centralized print management system versus individual or independent use of various systems.
Process Definition (GP3.2: Define tasks that satisfy the process purpose and business goals consistently and repeatedly): New print management personnel are trained and receive subsequent training on new technologies, equipment, procedures, etc. Future employment needs are also considered.
Process Resource (GP3.3: Plan for human resources proactively): Print management print jobs are always handled according to policy rather than ad hoc.
Process Resource (GP3.4: Provide feedback in order to maintain knowledge and experience): Print management receives feedback via e-mail, reports, and meetings from customers and other process areas regarding issues, approvals, and reviews.
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 4 and 5 are not reproduced in this text.)
Process Capability Assessment Instrument: Interview Guide
Process Area 2.2 Print Management
1. Can print jobs be sent from one print queue to another without customer intervention? If so, how? Who redirects the print job?
What are the reasons that a print job would be redirected to another printer (e.g. off-line, out of paper, powered off, busy)?
Base Practice: 2.2.6 Batch print jobs
1. Are customers made aware of the batch print feature? How?
2. What is the typical length of time and size of a batch print job?
3. Do you schedule your batch print jobs during certain hours of the day? If yes, when?
4. Is there a software program that manages and monitors this process or does an administrator need to schedule and oversee? If yes, what is the process?
Base Practice: 2.2.7 Print forms
1. Does the output/print management personnel review forms prior to their use throughout the distributed system? Reports? How is this process of approval managed (e.g. meetings, requests, sample stock, test runs, etc.)?
2. Can forms be collated and packed?
3. Are there certain printers on the network where confidential forms/output is directed to? If yes, how are those printers managed (e.g. locked closet, attendant, specific time frame, etc.)?
4. How many different types of preprinted forms are used? How many standard paper stock reports are produced? Of these forms, how many are multi-part?
Generic Questions for Process Area
1. Are customers able to access a master map or listing of printer types and locations available to them? If yes, who updates this information?
2. Are customers notified of any delays or problems with regard to their print jobs? If yes, how are they informed (e.g. broadcast messages, e-mails, phone mail, etc.)?
3. What type of training is provided on the procedure/policy regarding output/print management? Is it followed? When is it provided (e.g. orientation, new hire review, process change meetings, etc.)?
4. Is there a standard procedure provided on how to perform output/print management? How is this documented/maintained (e.g. hardcopy, manual, service procedure updates)?
5. What measurements are used to qualify and quantify the output/print management process?
6. Are there enough resources for output/print management (e.g. printers, supplies, personnel, software, etc.)?
7. Are there any business or processing goals for the output/print management process? If yes, what are they? How are they qualified and quantified?
8. Is the output/print management process consistently reviewed for continuous improvement in its business and process aspects? If yes, are these recommendations acted on and tracked for their results?
Process Capability Assessment Instrument
Process Area 2.2 Print Management
Process Area Description: Output and Print Management monitors all of the printing and/or output done across a distributed environment and is responsible for managing the printers and the printing for both central and remote locations.
Questionnaire
Process Area 2.2 Print Management
Work Product list
Process Area 2.2 Print Management
Operator's manual for output/print management personnel
Customer's manual for available output/print resources
Examples of any forms/paper stock used for non-typical print jobs
List of equipment/supplies used for non-typical print jobs (e.g. feeders, inks, etc.)
Base Practices
and control system is set up to handle multiple transfers and both remote systems and the host complete file transfer successfully
BP Number: 2.3.4
BP Name: Location, format, and file verification
BP Description: Determine if the file to be transferred exists
Determine and check the version of the file to be transferred
Determine if there is room on the recipient machine for the file
Dynamically allocate space for file
Convert file types (e.g., VSAM, PDS, etc.)
Convert file formats (e.g., ASCII to EBCDIC)
Encrypt/decrypt file being transferred
Compress/decompress file at source and at target
Rename file at source and/or target
Create, write over, or delete files
Merge or append to transferred files
Example: File transfer type considerations include:
Host to Host
Remote System to Host
Remote System to Remote System
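Several of the steps above (format conversion, compression, space checks) are routinely scripted around whatever file transfer product is in use. The fragment below is only a rough sketch of two such steps; it assumes the EBCDIC variant in use is IBM code page 037 and that the target is a Unix-style filesystem, and it is not a description of any particular transfer tool.

import gzip
import os

def convert_and_compress(src_path, dst_path, target_codec="cp037"):
    # Read an ASCII text file, convert it to an EBCDIC code page
    # (cp037 assumed here), and write it out gzip-compressed.
    with open(src_path, "r", encoding="ascii") as src:
        data = src.read()
    ebcdic_bytes = data.encode(target_codec)   # ASCII -> EBCDIC
    with gzip.open(dst_path, "wb") as dst:     # compress at the source
        dst.write(ebcdic_bytes)

def free_space_ok(target_path, required_bytes):
    # Check that the recipient filesystem has room for the incoming file.
    stats = os.statvfs(os.path.dirname(target_path) or ".")
    return stats.f_bavail * stats.f_frsize >= required_bytes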
References
Process Area: File Transfer and Control
Level 1 Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Level 2 Assessment Indicators
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
(The assessment indicator tables for Levels 1 through 5 are not reproduced in this text.)
Process Capability Assessment Instrument: Interview Guide
Process Area 2.3 File Transfer and Control
Questions
Base Practice: 2.3.1 Transfer files on a scheduled basis
1. Has the schedule of file transfers to and from devices been determined? If yes, what is the schedule? Who is responsible for this task? Is it under version control? Does the schedule encompass all aspects of the service provider at the organizational level?
Can file transfers be initiated by the sender and/or the receiver? What is their customer level (e.g. administrator, all customers, some customers, etc.), and do they write scripts or assign priority levels via an interface?
Can concurrent file transfers be performed? If yes, please explain how?
Can automated conditional file transfers be performed? If yes, please explain how?
Base Practice: 2.3.2 Determine backup and recovery scheme
Are file transfer events logged? If yes, how, and is this information kept for historical purposes?
Are failed file transfers retried? If yes, by whom or is it automatic?
Has the backup/recovery scheme for a file transfer been invoked? If no, why? If yes, what was the end result (e.g. lost data, transfer complete, etc.)? Who is responsible for creating the scheme, and is it under version control?
Is there notification of a successful/failed file transfer? If yes, how is this performed (e.g. e-mail, banner message, report, etc.) and to whom (e.g. administrator, initiator, etc.)? Is fault management made aware of failures? If yes, how?
Is there a check for successful file transfers? If yes, how are these checks performed and logged?
Base Practice: 2.3.3 Transfer files on an ad hoc basis
Are files transferred on an ad hoc basis? If yes, what are the most common reasons and by whom? Do these transfers interfere with other process areas (e.g. production scheduling, output/print management, etc.)?
Who can perform or initiate an ad hoc file transfer (e.g. administrators, all customers, customers with permission, etc.)? Is it performed by senders, receivers or both?
Can ad hoc files be transferred concurrently? If yes, please explain how this is being done.
Base Practice: 2.3.4 Location, format, and file verification
Can space for a transferred file be dynamically allocated? If not, what is the customer's recourse if there is a problem?
Can file types (e.g. VSAM, PDS, etc.) be converted? If yes, what is the most common? How are they converted? What tools do you use to convert them?
Have file formats (e.g. ASCII to EBCDIC) been converted? If yes, what is the most common? What tools do you use and how are they converted?
4. Are files being compressed/decompressed at source and at target? If yes, how?
5. Can files be renamed at source and/or target? Can files be created, written over, or deleted? If yes to either, please explain the process of how this is done.
6. Can transferred files be merged or appended to? If yes, is this method used often?
What are the most common platforms encountered during file transfer? Has there been a problem with any particular platform? If yes, explain.
Are files transferred being encrypted/decrypted? If no, why? If yes, please explain how? What tools are being used?
Generic Questions for Process Area
Are file transfer times defined and/or evaluated for number of destinations, machines and platforms? If yes, explain?
Is there a policy established and maintained for file transfer and control? Is this process followed?
3. Are adequate resources available for file transfer and control? If not, explain.
4. Is training provided for all new employees within file transfer and control? If not, explain. Are subsequent training times available for file transfer and control personnel to learn new processes, technologies, etc.? If yes, explain. Are proactive plans made for future personnel needs? If yes, explain.
Are reports to customers, administration, and other groups provided as a means for process update and feedback? If yes, who gets these reports? If not, explain how feedback is provided.
Is the file transfer and control process and procedure reviewed for continuous improvement purposes? Are these improvements deployed and measured against process and business goals?
Are strategic goals in place for file transfer and control? If yes, what are they and can they be measured? Are metrics collected on the file transfer and control process? Is this process automated with use of software, tools, etc.? Are the metrics analyzed for process parameters and deviation identification?
Process Capability Assessment Instrument
Process Area 2.3 File Transfer and Control
Process Area Description: File Transfer and Control initiates and monitors the files being transferred throughout the system as part of the business processing (e.g., nightly batch runs). File transfers can take place in a bi-directional fashion between hosts, servers, and workstations.
Questionnaire
Process Area 2.3 File Transfer and Control
Work Product list
Process Area 2.3 File Transfer and Control
Sample of a file transfer and control schedule
Sample of a backup and recovery scheme
List of file types and formats used during file conversions
Reports, metrics, concerns, and/or issues regarding file transfer and control
Network Services (2.4)
Base Practices
References
MODE v2
MODE v1 Toolkit
Process Area: Network Services
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
(The indicator rows for base practices 2.4.1 through 2.4.3 are only partially legible; a surviving fragment reads: "... structure and needs are taken into consideration is: HR has their own private directory with access to confidential data restricted to HR personnel.")
2.4.4 Extract Information from Directories: The current system regularly checks and collects appropriate network directory information (e.g. configuration information, authentication, etc.) for reporting and trending purposes. Such reports are available.
2.4.5 Identify Component Options: Inventory lists of the various network components (e.g. physical and logical components) are consistently assessed and inventoried.
2.4.6 Document Strategic Drivers: A process is in place that tracks present and future directory management trends. Network Services personnel are aware of this process and act in accordance with it. For example, a large influx of new employees would necessitate more disk space and personal directories to be created; the Network Services team is prepared to handle this flux.
2.4.7 Outline Guiding Principles for Communication Address Planning: A document specifying network capacity guidelines is available. The guidelines take into consideration security, geographic location, and other business needs or requirements.
2.4.8 Address and Domain Maintenance: A schedule is available for address and domain maintenance tasks like configuring DHCP, performing updates to version control, and maintaining DNS.
2.4.9 Address Capacity Planning: Future IP capacity needs are anticipated and tracked by looking at trending growth from the past couple of years and the business direction or strategy.
2.4.10 Address Design Process: A process is in place for the overall communications address design. The business and technical requirements, such as the defined protocol, are taken into consideration for Address Management. A Visio chart is then developed to map IP address, business, and technical flows.
2.4.11 IP Technology Research Process: There is a review process in place for assessing and evaluating the relevancy of new emerging technologies (e.g. business case analyses are performed on new technologies).
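The capacity planning indicator (2.4.9) implies a simple trend projection over historical address consumption. Purely as an illustrative sketch, with invented growth figures and a plain least-squares line standing in for whatever forecasting method an organization actually uses:

def project_ip_demand(history, years_ahead=1):
    # history: list of (year, addresses_in_use) pairs.
    # Fit a least-squares line and project demand years_ahead years out.
    n = len(history)
    xs = [year for year, _ in history]
    ys = [count for _, count in history]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * (max(xs) + years_ahead) + intercept

# Example: growth over the past few years suggests roughly how many
# addresses (and hence how much subnet space) to plan for next year.
print(round(project_ip_demand([(1997, 410), (1998, 520), (1999, 655), (2000, 790)])))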
Level 2 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
(The rows for GP2.1 through GP2.4 are only partially legible; a surviving fragment of one example reads: "... files in a month, % of employees, average DNS or IP issue response time, etc.")
GP2.5 Maintain communication among team: Conflicts and network alterations are addressed and communicated to the Service Desk, Monitoring, and Performance Management groups.
Work Product Management, GP2.6 Ensure work products satisfy documented requirements: Work orders for additional IP addresses are filled out, processed, and filed for historical tracking.
GP2.7 Employ version control to manage changes to work products: The relationship between different directories is continuously maintained (e.g. the synchronization of two directories).
Level 3 Assessment Indicators
Process Definition, GP3.1: Define policies and procedures at an organization level: Personnel are able to perform Directory or Communications Address Management functions in a consistent and repeatable manner.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: When Network Management tasks are distributed, success of the tasks is ensured by common resources such as tools, training, and a company vision or direction.
Process Resource, GP3.3: Plan for human resources proactively: Network Services takes into consideration the growth of the company when planning for hiring, updating systems with new technology, and other tools that may make their tasks manageable.
GP3.4 Provide feedback in order to maintain knowledge and experience: The Monitoring group provides performance reports on address management to the Network Services team.
Level 4 Assessment Indicators
Process Measurement, GP4.1 Establish measurable quality objectives for the operations environment: A characteristic of the environment may be that the business requires 24-hour network availability. This can be set as a threshold and monitored as a business requirement or driver.
GP4.2 Automate data collection: Metrics are automatically collected from the Directory and Address Management tools (vs. manual collection).
GP4.3 Provide adequate resources and infrastructure for data collection: Tools to monitor and measure Network Services are in place. These tools distribute reports and send notifications to appropriate parties.
Process Control, GP4.4 Use data analysis methods and tools to manage and improve the process: Directory and Address maintenance processes are revised after reviewing collected metrics (e.g. % of lost or misplaced files per month, % of IP or DNS issues reported monthly, average response time to IP or DNS issues, etc.).
Level 5 Assessment Indicators
Continuous Improvement, GP5.1: Continually improve tasks and processes: Current resources, applications, and procedures are periodically assessed or altered with the intent to promote continuous improvement (e.g. an upgrade to the latest NT administration tools).
Process Change, GP5.2: Deploy "best practices" across the organization: Process improvements implemented in GP5.1 are validated via collected metrics and defined strategic drivers (e.g., as in the example above, the upgrade to the latest NT tools would be tested, rolled out, and monitored to measure the improvement in business performance).
Process Capability Assessment Instrument: Interview Guide
Process Area 2.4 Network Services
Questions
Base Practice: 2.4.1 Populate Directories
What is the process for adding first-time directory information to new directories? Is there a different process for populating old directories? If so, please describe.
How often does populating new directories occur and who approves this?
How are directory permission properties defined and gathered?
How often are directory permission properties surveyed and altered?
Does the process of populating existing directories take various system needs into consideration? (E.g. Does directory population follow a convenient and logical schedule?)
Base Practice: 2.4.2 Manage Directories
Who is responsible for managing the network directories? What is the overall process for managing the directories?
How is the directory content volume monitored and managed?
How are the relationships between directories managed?
How often is the interface between different directories updated?
How is the content of different directories maintained?
Do you have directories that require synchronization? What is the process for synchronizing the directories?
Base Practice: 2.4.3 Determine Organizational Impacts
Are organizational and business impacts taken into consideration when determining and designing various network services (e.g. directory structure, permissions, etc.)? If yes, how?
What processes are in place to determine organizational impacts?
Base Practice: 2.4.4 Extract Information from Directories
What type of information do you gather or extract from directories (e.g. authentication information, access control profiles, etc.)?
How do you store the information collected from directories?
Are you creating reports from this data? If yes, what types of reports are you creating?
Is anyone managing inconsistencies or flagging abnormalities? If yes, who, and how are they flagging or correcting the abnormalities? Is there communication between the Network Services and Fault Management or Monitoring teams when severe abnormalities occur?
Base Practice: 2.4.5 Identify Component Options
What physical and logical components have you identified in your environment? How did you determine what components were needed for your environment?
Is there a process for categorizing different network components? Are different people responsible for the different types of components? If yes, who are they, and do they receive training only on the specific component types they are responsible for?
Base Practice: 2.4.6 Document Strategic Drivers (e.g. geography, security, etc.)
What are some of the strategic drivers identified for providing the optimum network services? Is there an order of importance for the strategic drivers you have identified? If yes, please elaborate.
Are your strategic drivers documented? Are they revisited when a business or organizational change happens? How are they kept in line with the business or organizational needs?
Base Practice: 2.4.7 Outline Guiding Principles for Communication Address Planning
Do you have any guiding principles in place that allow the address team to develop and share a common vision for all addressing functions? If yes, what are some of these guiding principles?
Are there common processes and practices across several of your networking functions? If yes. which ones?
Is there a lot of cross functionality between your network groups? If yes, please explain the cross functionality?
Base Practice: 2.4.8 Address and Domain Maintenance
How often is address maintenance performed? What processes are used for the addition, deletion, maintenance, and modification of addresses?
How often is domain maintenance performed? What processes are used for the addition, deletion, maintenance, and modification of domains?
How are the address tables maintained?
What is the process for maintaining DNS?
Base Practice: 2.4.10 Address Design Process
1. Are address design and technical network diagrams created? Are they updated? If so, how often?
2. Are conflicts or network issues taken into consideration when the address system is being designed? If yes, what conflicts or issues are considered and how are the network solutions modified? Is there a process to follow for making changes?
Base Practice: 2.4.11 IP Technology Research Process
How often is emerging technology considered and evaluated for the current network?
Are there defined processes that determine whether a new technology would enhance or improve the current network system? If so, what are they?
If a new technology is being considered, what type of testing or research is done to ensure that the technology meets the business needs?
Generic Questions for Process Area
1. Are training classes provided and do all new Network Services personnel attend training on the defined Directory Maintenance and Communication Address Planning processes? If so what type of training ensures adequate execution of these established directory management and address servicing procedures?
2. Are current resources and procedures periodically assessed with the intent to promote continuous improvement? What is the approval process for proposed solutions? Are all potential stakeholders involved in the decision process? How often are these solutions implemented, and by whom?
3. How are routine network services and continuous improvement solutions evaluated for impact?
4. Do you find that the resources allocated to network services are adequate? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.4 Network Services
Process Area Description: The Network Services Process Area is comprised of the following two areas:
Directory Services: the function of publishing and maintaining organized inventories of information resources to make them available to networked customers. Directory Management can apply to internal directories as well as the publishing of directory information for global directory services.
DNS: ensures that IP services are provided to devices within an enterprise. Whether dealing with a new or existing capability, the communications address management function demands that high-level business requirements be taken into consideration.
Questionnaire
Process Area 2.4 Network Services
Yes No Don't Know N/A
1. Is there a formalized process for populating new directories?
2. Is maintenance of the directories performed on a regular basis?
3. Is the organizational structure taken into consideration when creating the directory management structure?
4. Is network directory information being collected (e.g. authentication, configuration information, etc.)?
5. Does a process exist which inventories the various network components (e.g. physical and logical components)?
6. Are strategic drivers documented?
7. Are network capacity guidelines outlined for the address plan?
8. Are address and domain maintenance performed regularly?
9. Do forecasts predicting future IP capacity needs exist?
10. Is there a specific process documented and followed when creating business and technical requirements for Communications Address Management?
11. Is there a review process for assessing and evaluating new emerging technologies for Communications Address Management and other Network Services functions?
Work Product list
Process Area 2.4 Network Services
Access Control Profiles
Network Traffic Flow Diagrams
IP Address Availability Report
DHCP Address Lease Contracts
IP Address Tables
Copy of current documented Address Plan
Backup/Restore/Archiving (2.5)
Base Practices
References
Process Area: Backup/Restore/Archiving
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
(The indicator rows for base practices 2.5.1 and 2.5.2 are only partially legible; a surviving fragment reads: "... priority to mission critical data.")
2.5.3 File restoration steps and considerations: Documented procedures for customers to request restoration of files are in place.
2.5.4 Compress and index information being archived: Data is compressed and indexed for storage.
2.5.5 Notify that the backup/restoration/archival process has been completed successfully/failed: Support personnel receive a page (via HP OpenView/NetView) when the backup server fails, or an e-mail confirmation of a successful backup.
2.5.6 Perform housekeeping on the backup/archival library: All archived tapes are adequately labeled for easy retrieval.
2.5.7 Synchronize backups and restores: Backups are scheduled during low network traffic times.
Level 2 Assessment Indicators
Performance Management, GP2.1 Establish and maintain a policy for performing operational tasks: A documented policy is maintained that describes the backup and archival plan and schedule. The SLA specifies the terms for restoration (e.g. the time within which a requested file will be restored).
GP2.2 Allocate sufficient resources to meet expectations: Adequate resources are allocated so that the backup/restore/archival process occurs according to plan and schedule.
GP2.3 Ensure personnel receive the appropriate type and amount of training: Personnel receive training on the backup/restore/archival tool and process.
GP2.4 Collect data to measure performance: Data such as the following are collected: percent of successful backup/restore/archive transactions per month, mean time to restore data, etc.
GP2.5 Maintain communication among team members: Backup/restore/archive personnel provide regular status reports to appropriate parties.
Work Product Management, GP2.6 Ensure work products satisfy documented requirements: Tapes are labeled according to specifications.
GP2.7 Employ version control to manage changes to work products: Records are maintained of any updates made to backed-up/stored data, and any restorations performed.
Level 3 Assessment Indicators
Process Definition, GP3.1: Define policies and procedures at an organization level: Channels exist (and customers are aware of them) for making restore requests.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: Planning for new backup/restore technologies includes projecting accompanying human resource needs.
Process Resource, GP3.3: Plan for human resources proactively: Backups occur in accordance with documented policies on frequency and type.
GP3.4 Provide feedback in order to maintain knowledge and experience: The backup/restore/archival team provides feedback on the process at periodic meetings.
Level 4 Assessment Indicators
Process Measurement, GP4.1 Establish measurable quality objectives for the operations environment: Quantitative targets for assessment metrics for the backup/restore/archival process are periodically set.
GP4.2 Automate data collection: The backup/restore tool automatically collects data needed for assessing the backup and restore processes.
GP4.3 Provide adequate resources and infrastructure for data collection: All specified data for backup/restore/archival process evaluation is collected.
Process Control, GP4.4 Use data analysis methods and tools to manage and improve the process: Assessment metrics collected are compared to targets and discrepancies addressed.
Level 5 Assessment Indicators
Continuous Improvement, GP5.1: Continually improve tasks and processes: The backup/restore/archival process and relevant technologies are periodically reviewed to identify potential enhancements or beneficial changes to the existing system. Actions are taken to implement the improvements identified.
Process Change, GP5.2: Deploy "best practices" across the organization: Policy documentation is updated to reflect any new/modified procedures, and all appropriate parties within the organization receive notification.
Process Capability Assessment Instrument: Interview Guide
Process Area 2.5 Backup/Restore/Archiving
Questions
Base Practice: 2.5.1 Test Central/Remote Backup/Restore/Archival Procedure Periodically
What type of periodic testing of the backup/restore/archival procedures is performed?
Are both central and remote backup/restore/archiving tested?
How (in what format) and to whom are the testing results reported?
Have your tests typically been successful? What constitutes a successful test?
Base Practice: 2.5.2 File Backup Steps and Considerations
Have the backup requirements been defined and documented for the following items:
Customer, operations, applications responsibilities
Remote vs. central backups
Frequency of backups
Components to be backed up
What type of application or automated process is used for backup?
Are backup and restore processes managed centrally or remotely?
What type of backup (full, incremental, export) is performed and how often?
What media (tape, magnetic disc, cartridge, etc.) is used for backup? Why was this medium chosen?
If the system is unavailable to customers during backups, how is system unavailability managed?
If parts of the system are down during a scheduled backup, is a manual backup performed when the system gets back online?
Where is backed-up/archived data stored? For what length of time is data stored?
Does the backup and restore process require manual intervention?
What type of monitoring of the backup process is performed?
Are backup records made? If yes, what information is documented?
Base Practice: 2.5.3 File Restoration Steps and Considerations
What events warrant a restoration and how is the process initiated? Are these policies documented?
Can customers submit requests for particular files to be restored? How are customer requests logged and tracked?
Can single/multiple objects be restored from the backup media?
Can a full/incremental backup be restored centrally and remotely?
What type of monitoring is done of the restoration process?
Are notification procedures in place to inform customers and service providers of success/failure of restoration?
Base Practice: 2.5.4 Compress and Index Information Being Archived
Is archiving triggered automatically or must it be manually initiated?
How is data compressed and indexed prior to being archived?
Base Practice: 2.5.5 Notify that Backup/Restoration/Archival Process has been Completed Successfully/Failed
Who receives notification of the outcome of the backup/restore/archival process?
How is this notification sent?
What action, if any, is taken on receipt of the notification?
Base Practice: 2.5.6 Perform Housekeeping on the Backup/Archival Library
What maintenance tasks are performed on the backup/archival library? Who is responsible for maintaining the library?
Is storage media labeled? What information is recorded on the label? Does labeling follow documented specifications?
How many copies of backup data are made, and how many generations are maintained? Are copies stored in different locations?
How is integrity of stored and retrieved files ensured (e.g., resurrecting relationships)?
Base Practice: 2.5.7 Synchronize Backups and Restores
Does a predefined schedule for regular backups and restores exist? If so, when do backups and restores occur?
What is the process for scheduling a backup/restore not regularly planned? Who manages this process?
Are there any indicators in the application that can help signal when a backup is needed if it does not fall on one of the scheduled backup times?
Generic Questions for Process Area
Are any quantitative targets set with regard to the backup/restore/archive process (e.g., % of successful backups per month)? If so, what are they? Are these targets achieved? How frequently are they evaluated?
Is the backup/restore/archive process periodically reviewed and new technologies evaluated with the purpose of identifying potential improvements? How frequently does this occur?
Do you find that adequate resources are allocated to managing the backup/restore/archive process?
What type of training do backup/restore/archive personnel receive?
Process Capability Assessment Instrument
Process Area 2.5 Backup/Restore/Archiving
Process Area Description: Backup/Restore/Archive Management considers all of the backups and restorations that need to take place across a distributed system for master copies of data. Archiving saves and stores information across the distributed environment. These processes may occur centrally or in distributed locations.
Questionnaire
Process Area 2.5 Backup/Restore/Archiving
Yes / No / Don't Know / N/A
Is the backup/restore/archival procedure periodically tested?
Are backups/restores performed both centrally and remotely?
Are backup/restore processes monitored?
Is an audit trail of backup/restore processes created?
Is the integrity and accuracy of backed-up/restored data ensured?
Are data compressed and indexed before being stored?
Do appropriate personnel receive notification of success/failure of the backup/restore/archival process?
Are general maintenance and housekeeping tasks performed for the backup/archival library?
Are backups and restores scheduled so that they do not interfere with other batch jobs or production activities?
Work Product list
Process Area 2.5 Backup/Restore/Archiving
Backup requirements document
Sample backup log
Document outlining schedule of backups (e.g., full, incremental, differential)
SLA outlining backup and restore agreements
Base Practices
References
Process Area: Monitoring
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Base Practice / Example of Assessment Indicator / Assessment Indicators at Client
2.6.1 Poll for current status, if necessary: SNMP commands are used to gather information on the distributed environment, when necessary. Example of the output of such polling is available.
2.6.2 Gather and document monitoring information: Standard log shows the event and fault information gathered and recorded.
2.6.3 Classify events: When questioned, personnel can explain the criteria under which events are categorized as informational, fault, etc.
2.6.4 Analyze faults: Example of the results of a preliminary analysis that occurred to identify the extent/scope of a fault.
2.6.5 Route faults to be corrected: Faults are routed to the appropriate resource for handling. Personnel can explain the policy on where and how particular faults should be routed.
2.6.6 Map event types to pre-defined procedures: Where possible, proactive procedures are predefined and triggered for particular events.
2.6.7 Log events locally and/or remotely: Event/fault logs are accessible both locally and remotely.
2.6.8 Suppress duplicated informational messages until thresholds are reached: Duplicate informational messages are suppressed until predefined thresholds are reached. Example of log shows that duplicate messages are not recorded. (A suppression sketch follows this table.)
2.6.9 Display status information on console(s) in multiple formats: Graphs, maps and logs are available to show status information.
2.6.10 Display status information in multiple locations: Status information is conveyed to the Service Desk and other relevant parties. When questioned, personnel can describe who is informed and the procedure for doing so.
2.6.11 Issue commands on remote processors/hosts: Commands can be issued on remote processors/hosts.
2.6.12 Set up and change local and/or remote filters: Router filters are in place to control access.
2.6.13 Set up and change local and/or remote threshold schemes: Thresholds have been specified for critical nodes and documentation of the thresholds is available.
2.6.14 Analyze traffic patterns: Traffic information is reviewed to detect errors, misrouted traffic, thresholds, bandwidth, repetitious data patterns, etc.
2.6.15 Send broadcast messages: Broadcast messages are sent when necessary.
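Base practice 2.6.8 above describes suppressing duplicate informational messages until a predefined threshold is reached. The following Python sketch shows one way such suppression could work; the threshold value and message fields are hypothetical and only illustrate the idea.

    from collections import defaultdict
    from typing import Dict, Tuple

    DUPLICATE_THRESHOLD = 10  # hypothetical: emit only the 10th repeat of an informational message

    class DuplicateSuppressor:
        """Counts repeated informational messages and only emits them
        when the configured threshold is reached."""

        def __init__(self, threshold: int = DUPLICATE_THRESHOLD):
            self.threshold = threshold
            self.counts: Dict[Tuple[str, str], int] = defaultdict(int)

        def should_log(self, source: str, message: str, severity: str) -> bool:
            # Faults and other non-informational events are always logged.
            if severity != "informational":
                return True
            key = (source, message)
            self.counts[key] += 1
            if self.counts[key] >= self.threshold:
                self.counts[key] = 0  # reset so the next burst is summarized again
                return True
            return False

A monitoring loop would call should_log() for each incoming message and write to the event log only when it returns True, so repeated informational noise is collapsed while faults always appear.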
Level 2
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Performance Management
GP2.1 Establish and maintain a policy for performing operational tasks: Documentation is maintained of all necessary monitoring activities to be performed.
GP2.2 Allocate sufficient resources to meet expectations: A monitoring tool is employed that is appropriate for the breadth and complexity of the distributed environment.
GP2.3 Ensure personnel receive the appropriate type and amount of training: Qualified staff are assigned to perform the monitoring tasks that require manual intervention (e.g., reviewing logs). New monitoring personnel receive training on the monitoring tool.
GP2.4 Collect data to measure performance: Metrics are collected, for example: average response time to network outages and issues, average time an application is available.
GP2.5 Maintain communication among team members: Issues are tracked and logged. Members of the monitoring team provide status reports to appropriate groups.
Work Product Management
GP2.6 Ensure work products satisfy documented requirements: Monitoring logs contain all required entries and information that are outlined in the monitoring policy.
GP2.7 Employ version control to manage changes to work products: Monitoring information from the various components of the distributed system is reformatted to a standard message type (a normalization sketch follows this table). Monitoring data is archived for future reference.
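GP2.7 above notes that monitoring information from the various components of the distributed system is reformatted to a standard message type. The following Python sketch illustrates one possible normalization; the source formats, field names and severity values are hypothetical, not definitions taken from the instrument.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Any, Dict

    @dataclass
    class StandardEvent:
        # Hypothetical standard message type used across the monitoring function.
        timestamp: datetime
        source: str
        severity: str      # "informational", "warning", or "fault"
        description: str

    def from_snmp_trap(trap: Dict[str, Any]) -> StandardEvent:
        """Map a (hypothetical) decoded SNMP trap dictionary to the standard type."""
        return StandardEvent(
            timestamp=datetime.now(timezone.utc),
            source=trap.get("agent_address", "unknown"),
            severity="fault" if trap.get("is_fault") else "informational",
            description=str(trap.get("varbinds", "")),
        )

    def from_syslog_line(line: str) -> StandardEvent:
        """Map a plain syslog-style line (hypothetical layout: 'host severity text')."""
        host, severity, text = line.split(" ", 2)
        return StandardEvent(
            timestamp=datetime.now(timezone.utc),
            source=host,
            severity=severity.lower(),
            description=text,
        )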
Level 3 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Definition
GP3.1: Define policies and procedures at an organization level: The monitoring provides mechanisms to alert appropriate resources when a fault is detected.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: As the monitoring technology changes, monitoring personnel receive the additional training that is needed.
Process Resource
GP3.3: Plan for human resources proactively: Fault management occurs according to stated policies and procedures (e.g., for every fault a ticket is logged, the problem is resolved within the SLA/OLA specifications, and the ticket is updated with the fault resolution).
GP3.4 Provide feedback in order to maintain knowledge and experience: The manager of the monitoring function solicits feedback from the monitoring team on processes involved with monitoring their business environment.
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Continuous Improvement
GP5.1: Continually improve tasks and processes: Monitoring software and process are periodically evaluated with the intent of identifying potential new technologies that may fit the needs of the business strategy better than the existing technologies.
Process Change
GP5.2: Deploy "best practices" across the organization: The same reporting procedures and requirements are met when implementing or evaluating a new technology.
Process Capability Assessment Instrument: Interview Guide
Process Area 2.6 Monitoring
Who accesses the event log and for what purposes?
Base Practice: 2.6.8 Suppress Duplicated Informational Messages Until Thresholds are Reached
What mechanism checks for duplicated informational messages and clears them from the event log unless a threshold is reached?
Base Practice: 2.6.9 Display Status Information on Console(s) in Multiple Formats
1. What types of current status information can be obtained?
In what formats can such status information be viewed (e.g. graph, map, log)?
Base Practice: 2.6.10 Display Status Information in Multiple Locations
In what locations is status information displayed?
Do personnel other than operations staff access this status information? If so, who does and for what purposes?
Base Practice: 2.6.11 Issue Commands on Remote Processors/Hosts
What types of commands can be run on remote processors/hosts?
Can commands to remote processors/hosts be initiated both manually and by an application?
Base Practice: 2.6.12 Set up and Change Local and/or Remote Filters
For what types of purposes are router filters set up?
How frequently does the need arise for these filters to be changed?
What is the procedure for changing filters? Who manages this process?
Base Practice: 2.6.13 Set up and Change Local and/or Remote Threshold Schemes
How are thresholds determined for critical nodes?
Do these thresholds meet SLAs?
Under what circumstances are these thresholds changed?
What is the procedure for changing threshold schemes? Who controls this process?
Base Practice: 2.6.14 Analyze Traffic Patterns
What information about network traffic is collected?
What types of conclusions are sought in analyzing the traffic data? Are there predefined guidelines for the analysis that needs to be done?
Who performs this analysis and how frequently?
Base Practice: 2.6.15 Send Broadcast Messages
Are there provisions for sending broadcast messages?
What circumstances necessitate broadcast messages?
Who has the ability/responsibility for sending broadcast messages?
How frequently are broadcast messages sent?
Generic Questions for Process Area
What personnel are involved in the monitoring process? What roles do they play? What type of relevant qualification/training do they have?
Are personnel trained to decipher monitoring data, understand the processes involved in monitoring a distributed environment, and make changes to the monitoring system?
Are the monitoring software and process periodically evaluated with the intent of identifying potential improvements? Who facilitates this evaluation process?
Do you feel that adequate resources are allocated for monitoring purposes? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.6 Monitoring
Process Area Description: Monitoring verifies that the system is continually functioning in accordance with defined SLAs. Monitoring consists of the following functions:
Event Management: receives, logs, classifies and presents event messages on console(s) based on pre-established filters or thresholds. Event information is sent from such components as: hardware, applications/system software, communications resources, etc. If an event is classified as "negative" (i.e., a fault), event management forwards the event on to fault management for diagnosis and correction.
Fault Management: once a negative event has been brought to the attention of the system, actions are undertaken within Fault Management to define, diagnose and correct the fault. Although it may be possible to automate this process, human intervention may be required to perform at least some of these management tasks.
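The Event Management and Fault Management functions described above receive event messages, classify them, and forward negative events (faults) for diagnosis and correction. The following Python sketch illustrates that flow in minimal form; the classification keywords and routing targets are hypothetical and stand in for the pre-established filters, thresholds and handling policies the instrument asks about.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Event:
        source: str        # e.g., hardware, application/system software, communications
        message: str
        severity: str      # "informational" or "negative" (fault)

    def classify(raw_message: str, source: str) -> Event:
        """Very simple rule-based classification; real filters and thresholds would be richer."""
        is_fault = any(word in raw_message.lower() for word in ("error", "down", "fail"))
        return Event(source=source,
                     message=raw_message,
                     severity="negative" if is_fault else "informational")

    # Hypothetical routing table: which handler corrects faults from which source.
    FAULT_HANDLERS: Dict[str, Callable[[Event], None]] = {
        "hardware": lambda e: print(f"Dispatching field engineer for: {e.message}"),
        "software": lambda e: print(f"Opening ticket with application support: {e.message}"),
    }

    def event_management(raw_messages: List[Tuple[str, str]]) -> None:
        """Log every event; forward negative events to fault management."""
        for source, raw in raw_messages:
            event = classify(raw, source)
            print(f"[{event.severity}] {event.source}: {event.message}")  # console display
            if event.severity == "negative":
                handler = FAULT_HANDLERS.get(event.source,
                                             lambda e: print("Route to service desk"))
                handler(event)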
Questionnaire
Process Area 2.6 Monitoring
Work Product list
Process Area 2.6 Monitoring
Sample of event log
Network status map
Reports on traffic patterns
Reports on faults
To provide hooks to allow utilization/capacity/performance to be monitored from end to end
To provide application transaction and/or nested transaction response times. This is done utilizing A.R.M. (application response measurement).
PA's Metrics:
% of troubles isolated correctly
% of bandwidth used
Average response time to access applications
Percentage of time that a server, router, hub, etc. is available
Base Practices
BP Name: Isolate the cause of the performance problem
BP Description: Gather and analyze data to determine the source of a performance problem.
Example: An application on a server appears to be having a performance problem. Performance and capacity planning applications such as BMC BEST/1 and HP Perf ANALYZER/MEASUREMENT can gather the performance data and analyze the complete system, revealing that the performance issue may be with the server capacity.
References
Process Area: Performance Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Base Practice / Example of Assessment Indicator / Assessment Indicators at Client
2.7.1 Monitor resource utilization/performance to ensure adequacy of resources: A monitoring tool collects usage information that is evaluated to ensure system adequacy. When questioned, personnel can describe the tools used to monitor resource utilization and performance.
2.7.2 Establish thresholds for each critical node: Thresholds for each critical node have been determined and documentation of these thresholds is available.
2.7.3 Prioritize information and flag abnormalities: When questioned, personnel can describe the signals that are sent when statistics near or cross threshold levels. (A flagging sketch follows this table.)
2.7.4 Capture, save, summarize and collate necessary capacity statistics: Performance statistics are collected on an ongoing basis. Personnel can provide examples of statistics collected, and explain the statistics collected and the reasons for collection.
2.7.5 Create reports on utilization/capacity/performance: Example of utilization/capacity/performance reports.
2.7.6 Disseminate reports to appropriate parties: Appropriate parties receive and review reports.
2.7.7 Determine where performance requires short-term adjustments: Personnel can explain when short-term adjustments to performance data are deemed necessary and how they are made.
2.7.8 Isolate the cause of the performance problem: Personnel can explain how performance data are analyzed to identify the source of a performance problem.
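Base practices 2.7.2 and 2.7.3 above concern establishing thresholds for critical nodes and flagging abnormalities as utilization approaches or crosses them. The following Python sketch shows one way such flagging could be expressed; the node names and threshold values are hypothetical examples, not values prescribed by the instrument.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Threshold:
        warn: float      # flag a warning when utilization reaches this fraction
        critical: float  # flag a critical abnormality at this fraction

    # Hypothetical per-node thresholds (documented per base practice 2.7.2).
    THRESHOLDS: Dict[str, Threshold] = {
        "db-server-01": Threshold(warn=0.70, critical=0.90),
        "core-router":  Threshold(warn=0.60, critical=0.85),
    }

    def flag_abnormalities(utilization: Dict[str, float]) -> List[str]:
        """Compare observed utilization (0.0 to 1.0) to thresholds and return flags."""
        flags = []
        for node, used in utilization.items():
            limits = THRESHOLDS.get(node)
            if limits is None:
                continue  # node is not a managed critical node
            if used >= limits.critical:
                flags.append(f"CRITICAL: {node} at {used:.0%}")
            elif used >= limits.warn:
                flags.append(f"WARNING: {node} at {used:.0%}")
        return flags

    # Example: flag_abnormalities({"db-server-01": 0.93, "core-router": 0.40})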
Level 2
GP2.5 Maintain communication among team members: Issues are tracked and logged. Members of the performance management team provide status reports to groups impacted by network performance issues.
Work Product Management
GP2.6 Ensure work products satisfy documented requirements: Performance reports follow predefined specifications on content and format.
GP2.7 Employ version control to manage changes to work products: Ensure that any new or updated formats for performance reports adhere to the same format and versioning as other reports.
Level 3 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Definition
GP3.1: Define policies and procedures at an organization level: The system-monitoring tool tracks all critical events and generates logs. Personnel are able to review the logs to isolate performance problems.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: As the performance management technology changes, personnel receive the additional training that is needed.
Process Resource
GP3.3: Plan for human resources proactively: Stated policies are followed in dealing with situations arising from thresholds being reached.
GP3.4 Provide feedback in order to maintain knowledge and experience: The manager of the performance management function solicits feedback from the team.
Level 4 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Measurement
GP4.1 Establish measurable quality objectives for the operations environment: Targets are regularly set for metrics assessing performance. Performance trending data can be utilized by the Capacity Modeling and Planning teams to aid in future capacity forecasting.
GP4.2 Automate data collection: System capabilities and thresholds have been identified and recorded.
GP4.3 Provide adequate resources and infrastructure for data collection: The performance management tool automatically collects the appropriate data and generates reports based on defined reporting structures (i.e., defined x and y axes, type of graph, etc.).
Process Control
GP4.4 Use data analysis methods and tools to manage and improve the process: Appropriate steps are taken to address and correct any discrepancies between actual performance and the targets set.
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.7 Performance Management
Questions
Base Practice: 2.7.1 Monitor Resource Utilization/Performance to Ensure Adequacy of Resources
How are systems/applications/network workloads monitored to check for adequacy?
What condition qualifies a resource as inadequate, and what action occurs if an inadequacy is noted? Are these procedural policies documented?
Who is responsible for monitoring adequacy of resources?
How is trending data reported to the service provider for planning?
Base Practice: 2.7.2 Establish Thresholds for Each Critical Node
How are thresholds measured and determined for managed resources?
Do these thresholds meet SLAs?
Base Practice: 2.7.3 Prioritize Information and Flag Abnormalities
How is utilization monitored vis-a-vis thresholds?
As utilization is monitored, what types of abnormalities are flagged?
What is the procedure for handling abnormalities and who is responsible for ensuring that the necessary action occurs?
Base Practice: 2.7.4 Capture, Save, Summarize and Collate Necessary Capacity Statistics
Are capacity statistics collected on an on-going basis?
For how long is this capacity data saved?
What types of summary or trend reports on capacity are generated? How often?
Who reviews these reports and for what purposes?
Base Practice: 2.7.5 Create Reports on Utilization/Capacity/Performance
What types of reports on utilization/capacity/performance are generated?
Are guidelines for the format and contents of regular reports documented?
Base Practice: 2.7.6 Disseminate Reports to Appropriate Parties
1. Who receives the utilization/capacity/performance reports and for what purposes? How frequently are these reports distributed?
Base Practice: 2.7.7 Determine Where Performance Requires Short-term Adjustments
Are adjustments to performance data made to account for down time related to repairs, upgrades, etc. (to ensure trending information is not skewed)? If so, in what situations are adjustments made?
Who decides on the appropriate adjustments, and on what basis?
Base Practice: 2.7.8 Isolate the Cause of the Performance Problem
Is system-wide data gathered and analyzed to identify the source of a performance problem? How is this data reported? Does any trending occur?
What is the mechanism or procedure by which the cause of a performance problem is isolated using system-wide data?
Generic Questions for Process Area
What personnel are involved in the Performance Management process? What roles do they play? What type of relevant qualification/training do they have?
Is a documented set of procedural policies followed in activities related to managing performance? Are any data collected for use in assessing performance management? If so, please describe the information collected and any metrics that are computed. Are targets for the metrics set and performance evaluated against those targets?
Do you feel that adequate resources are allocated to performance management? Please elaborate.
Process Capability Assessment Instrument
to the production environments to either enhance performance or to rectify degraded performance.
Questionnaire
Process Area 2.7 Performance Management
Work Product list
Process Area 2.7 Performance Management
Capacity reports
Utilization reports
Performance reports
Document listing thresholds for managed resources
PA's Metrics:
% of individuals with multiple IDs and passwords
Number of security modifications made per month
Number of security violations per month
Mean number of accounts deleted and created per month
Base Practices
References
"Network Security Best Practice," Alfred G. Leach, Andersen Consulting, 1997, 1998.
"Security Implications of Net-Centric Computing," Sharon K. Dietz, Andersen Consulting, 1997, 1998.
MODE v2
MODE v1 Toolkit
Process Area: Security Planning & Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Base Practice / Example of Assessment Indicator / Assessment Indicators at Client
2.8.1 Define security objectives and policies: A formal policy that describes the organization's security objectives, security approach and security actions exists.
2.8.2 Develop security plan: A plan is available that specifies all security measures to be implemented.
2.8.3 Obtain feedback & update security plan: Documentation shows that new technologies (both security threatening and enhancing) are periodically evaluated to assess impact on the existing system.
2.8.4 Establish security: A variety of logical and physical security controls are in place (e.g., firewall, authentication mechanisms, encryption systems, security awareness programs, etc.).
2.8.5 Receive information from Human Resources regarding employee comings and goings: A direct channel between HR and the systems administrator exists, and employee arrival/departure information is received in a timely manner.
2.8.6 Maintain accounts and IDs: Authorized system customers receive individual customer profiles and confidential passwords.
2.8.7 Log security events: Data such as access times, customer IDs, actions performed, etc. are logged for monitoring purposes. (A logging and audit sketch follows this table.)
2.8.8 Check for viruses and clean up any found: An anti-virus product is installed and periodically updated.
2.8.9 Audit logs: Security logs are reviewed to detect any questionable activity.
2.8.10 Take corrective actions for security violations: A set of procedures, including contact persons and appropriate steps, is detailed for dealing with security violations.
2.8.11 Monitor security plan for its effectiveness: The security plans and data are periodically evaluated to ascertain the effectiveness of security measures.
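Base practices 2.8.7 and 2.8.9 above call for logging security events (access times, customer IDs, actions performed) and reviewing the logs for questionable activity. The following Python sketch illustrates one simple form both could take; the log location, event fields and the "questionable activity" rule are hypothetical.

    import json
    from collections import Counter
    from datetime import datetime, timezone
    from pathlib import Path
    from typing import List

    LOG_PATH = Path("security_events.log")  # hypothetical log location

    def log_security_event(customer_id: str, action: str, succeeded: bool) -> None:
        """Append one security event (access time, customer ID, action) as a JSON line."""
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "customer_id": customer_id,
            "action": action,
            "succeeded": succeeded,
        }
        with LOG_PATH.open("a") as f:
            f.write(json.dumps(record) + "\n")

    def audit_failed_logins(max_failures: int = 5) -> List[str]:
        """Flag customer IDs with an unusually high number of failed logins (2.8.9 audit)."""
        failures = Counter()
        for line in LOG_PATH.read_text().splitlines():
            event = json.loads(line)
            if event["action"] == "login" and not event["succeeded"]:
                failures[event["customer_id"]] += 1
        return [cid for cid, count in failures.items() if count >= max_failures]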
Level 2
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Performance Management
GP2.1 Establish and maintain a policy for performing operational tasks: A policy outlining all security actions to be taken and the responsibilities of staff and security management personnel exists.
GP2.2 Allocate sufficient resources to meet expectations: Sufficient resources are allocated to security planning so that all planned actions can be implemented.
GP2.3 Ensure personnel receive the appropriate type and amount of training: All security management personnel receive training on the policy, procedures and technologies associated with security management.
GP2.4 Collect data to measure performance: Data such as the following are collected: percent of individuals with multiple IDs and passwords, number of security violations per month.
GP2.5 Maintain communication among team members: Security issues are tracked and logged. Appropriate parties are informed of security violations.
Work Product Management
GP2.6 Ensure work products satisfy documented requirements: All specified security events and information are logged.
GP2.7 Employ version control to manage changes to work products: All workstations receive the most recent version of the anti-virus product.
Level 3 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Definition
GP3.1: Define policies and procedures at an organization level: Security management personnel consistently handle security violations or issues according to documented policies and procedures.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: Security management provides reports and meets with the security function to discuss security issues and the effectiveness of the security program.
Process Resource
GP3.3: Plan for human resources proactively: All customers are aware of their responsibilities in supporting security and the procedures/channels for reporting security violations or concerns.
GP3.4 Provide feedback in order to maintain knowledge and experience: Training on new technologies is planned and provided for security management personnel. Security planning projects future human resource needs.
Level 4 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Measurement
GP4.1 Establish measurable quality objectives for the operations environment: Targets are set for metrics assessing the effectiveness of security programs. For example, a target ceiling for the number of security violations per month is established.
GP4.2 Automate data collection: A tool is used for automatically collecting security events, logging them and providing summary statistics.
GP4.3 Provide adequate resources and infrastructure for data collection: Sufficient resources are provided so that all specified metrics for assessing security management are collected.
Process Control
GP4.4 Use data analysis methods and tools to manage and improve the process: Actual security management metrics are compared to the targets or goals set and discrepancies are addressed.
Level 5 Assessment Indicators
... across the organization: ... importance of their role in supporting security. Topics such as periodically changing passwords, not giving passwords out on the phone, and physical site security are covered.
Process Capability Assessment Instrument: Interview Guide
Process Area 2.8 Security Planning & Management
Questions
Base Practice: 2.8.1 Define Security Objectives
What types of issues are covered by the formal security policy?
Was the security policy submitted to management for approval?
Is the security policy documented and available to customers and management?
Base Practice: 2.8.2 Develop security plan and policies
Please describe the contents of the security plan.
What was the process for creating the security plan and policies?
Who is involved in the creation of the security plan/policies and who views the completed document?
Base Practice: 2.8.3 Obtain Feedback & Update Security Plan
What is the procedure by which new factors that affect the system's security are determined and incorporated into security planning?
Who is responsible for identifying and monitoring factors that might necessitate changes to the current security plan?
How does the security planning function receive information on planned changes to the distributed environment? Who is responsible for communicating such information?
How are developments of new technology (that threatens or enhances security) tracked and taken into consideration for security planning?
Base Practice: 2.8.4 Establish Security
List all security software (encryption, authentication, virus protection, remote access, proactive evaluation, etc.) that currently protects your system.
What other types of security measures have been implemented?
How are customers informed of the importance of network security and their responsibilities in supporting security?
Base Practice: 2.8.5 Receive Information from Human Resources Regarding Employee Comings and Goings
How is information on employee comings and goings communicated by Human Resources?
How long after an employee's departure is the account disabled?
Who is responsible for creating and deleting accounts?
Base Practice: 2.8.6 Maintain Accounts and IDs
Who is responsible for maintaining accounts, passwords and IDs?
Are customer, supervisor and resource profiles maintained?
Do any shared login IDs exist on the system? If so, for what purposes?
Does a default "guest" login ID exist on the system? If so, for what purpose and how are access rights controlled?
Are there any specifications for valid customer passwords, such as minimum length, character combinations, etc.?
How frequently are customers required to change their passwords? Are customers required to change their password after an administrative reset (e.g., customer forgets password)?
Are customer accounts locked out when consecutive failed logins occur? If yes, how many failed login attempts cause a lock-out? How long is the account locked before it is reset automatically?
Are customer accounts disabled when they are inactive for a set period of time? If so, what is this time period?
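The two questions above concern lockout after consecutive failed logins and automatic reset after a waiting period. The following Python sketch shows one minimal way such a rule could be enforced; the attempt limit and lockout duration are hypothetical values of the kind an assessed organization would document in its security plan.

    from collections import defaultdict
    from datetime import datetime, timedelta, timezone
    from typing import Dict, Optional

    MAX_FAILED_ATTEMPTS = 5                    # hypothetical lock-out threshold
    LOCKOUT_DURATION = timedelta(minutes=30)   # hypothetical automatic-reset window

    class AccountLockout:
        """Tracks consecutive failed logins per account and enforces a temporary lockout."""

        def __init__(self):
            self.failed: Dict[str, int] = defaultdict(int)
            self.locked_until: Dict[str, datetime] = {}

        def is_locked(self, account: str, now: Optional[datetime] = None) -> bool:
            now = now or datetime.now(timezone.utc)
            until = self.locked_until.get(account)
            if until and now < until:
                return True
            if until:  # lockout window has expired; reset automatically
                del self.locked_until[account]
                self.failed[account] = 0
            return False

        def record_login(self, account: str, succeeded: bool,
                         now: Optional[datetime] = None) -> None:
            now = now or datetime.now(timezone.utc)
            if succeeded:
                self.failed[account] = 0
                return
            self.failed[account] += 1
            if self.failed[account] >= MAX_FAILED_ATTEMPTS:
                self.locked_until[account] = now + LOCKOUT_DURATION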
Base Practice: 2.8.7 Log Security Events
What types of event information are logged for security monitoring purposes?
Where are these logs stored and for what time period?
Who has access to the security event logs and for what purposes?
How are log records protected from alteration by unauthorized personnel?
Base Practice: 2.8.8 Check for Viruses and Clean up any Found
What forms of virus protection does your system have?
Are viruses checked for only when a virus scan is explicitly ordered by the customer, or does the virus checker implicitly monitor all file accesses? If the former is the case, is there a mechanism to ensure customers routinely run virus scans?
How frequently are updates to the anti-virus product received?
Base Practice: 2.8.9 Audit Logs
Is the security log monitoring process automated? If so, what types of events generate alerts? Are the logs reviewed regularly for abnormalities that might not be automatically flagged?
What types of summary reports are created from the log information? Who receives these reports and for what purposes?
Base Practice: 2.8.10 Take Corrective Actions for Security Violations
What is the procedure for dealing with security violations? Are these procedural guidelines documented and viewed by security personnel?
Are security violations handled off-line?
When are security violations escalated and what is the process for doing so? Are escalation policies documented?
What types of reports are generated on security violations? Who reviews these reports and for what purposes?
Base Practice: 2.8.11 Monitor Security Plan for its Effectiveness
At the time of security plan creation, were any means for judging plan effectiveness specified? If so, what are these methods, and are they routinely employed?
How frequently are security data reviewed to assess the effectiveness of the security plan? Who is responsible for performing these reviews?
Are any quantitative targets related to security set? Are these typically met? If they are not met, what is done?
What types of explicit testing (e.g. running hacker tools) of the system's security are performed? How frequently?
Generic Questions for Process Area
Do you find that adequate resources are devoted to planning, implementing and monitoring system security?
Are security policies and procedures documented and communicated to appropriate personnel? What type of training do security personnel receive?
Process Capability Assessment Instrument
Process Area 2.8 Security Planning & Management
Process Area Description: Security Planning initially involves defining the organization's security policy and developing a security "plan of action." An ongoing function of Security Planning is to evaluate the effectiveness of the existing security plan, particularly in the context of changing technologies, and to plan for future security needs.
Security Management controls both physical and logical security for the distributed system. Due to the nature of a distributed environment, security may need to be managed centrally, remotely or through a combination of the two methods. Security Management also handles the logging of proper and illegal access, provides a way to audit security information, rectify security breaches and address unauthorized use of the system.
Process Capability Assessment Instrument: Questionnaire
Process Area 2.8 Security Planning & Management
3. Is the security plan periodically reviewed to ensure ongoing appropriateness in the face of changes to the environment and security threats?
4. Are security mechanisms in place to protect the distributed system?
5. Are all customers made aware of their role in supporting security?
6. Is information on new/departing employees necessary for creating/disabling accounts communicated in a timely manner?
7. Are customer accounts and IDs maintained?
8. Are access events (i.e., account name, time logged in, duration of access) logged?
9. Are virus checks and eradication periodically performed?
10. Are security logs monitored and action taken if a security breach is detected?
11. If a security violation occurs, are the appropriate corrective actions taken?
12. Is the effectiveness of the security plan periodically tested?
Process Capability Assessment Instrument: Work Product list
Process Area 2.8 Security Planning & Management
Security policy document
Security plans and procedures document
Sample of security log
Security violations reports
Report on any tests of the security system
Base Practices
Process Area: Physical Site Planning & Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
2.9.2 Notify appropriate party of environmental failure on a per-site basis: ... other off-site customers regarding a planned power outage during a weekend period. This system unavailability notice is sent several weeks beforehand via various media.
2.9.3 Monitor progress of corrective actions to failure on a per-site basis: Progress is monitored for all sites regarding corrective action taken, for example: monitoring the installation of a larger air conditioning unit shows fewer temperature alert alarms.
2.9.4 Monitor physical site management plan for its effectiveness on a per-site basis: Each site is monitored to provide physical site management with areas of improvement or success, for example: HVAC alarms at three sites are down due to new thermostat installs.
2.9.5 Provide feedback on physical site management to physical site planning function: Feedback via e-mail survey to data center operators is collected and incorporated into the future physical site plan.
Level 2
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Performance Management
GP2.1 Establish and maintain a policy for performing operational tasks: A policy regarding physical site management procedures is established and followed.
GP2.2 Allocate sufficient resources to meet expectations: All physical site management personnel have the appropriate documentation regarding regulatory/environmental controls, diagrams, security and safety measures.
GP2.3 Ensure personnel receive the appropriate type and amount of training: New physical site management personnel are trained on the process, procedures and technologies of the group. Organization-wide, customers are aware of the capabilities of physical site planning and management.
GP2.4 Collect data to measure performance: Data are collected to provide information, for example: monthly count of alarm activations and types.
GP2.5 Maintain communication among team members: Physical site management collects/logs issues and feedback. These are both noted in reports and provided to physical site planning for knowledge-gain purposes.
Work Product Management
GP2.6 Ensure work products satisfy documented requirements: A complete tracking document or report showing the progression of a physical site management issue/problem through to resolution.
GP2.7 Employ version control to manage changes to work products: Version control is kept on the physical site management plan, and change control documents are produced noting any referencing departments (i.e., physical site planning, security, etc.).
Level 3 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Definition
GP3.1: Define policies and procedures at an organization level: Physical site management handles all site management issues according to a stated policy vs. ad hoc.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: Physical site management provides and subsequently obtains feedback from physical site planning and security via reports, meetings and e-mails.
Process Resource
GP3.3: Plan for human resources proactively: There is one centralized physical site management group that is responsible for all sites throughout the organization vs. each site using different methods or protocols.
GP3.4 Provide feedback in order to maintain knowledge and experience: New physical site management personnel receive training and all employees receive subsequent training on new skills, issues and technologies. Future employee requirements are addressed.
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Continuous Improvement
GP5.1: Continually improve tasks and processes: Physical site management is continuously improved via incremental changes, for example: upgrading of air filters to a smaller micron size for all computer room vents on three sites.
Process Change
GP5.2: Deploy "best practices" across the organization: The process improvement noted in the GP5.1 example is validated via metrics and business goals, for example: lower cost by 4% due to longer product life and a 10% decrease in air conditioning maintenance.
Interview Guide
Process Area 2.9 Physical Site Planning & Management
Questions
Base Practice: 2.9.0 Determine physical site needs
Is there a procedure in place that plans for the control and management of construction, development or changes to the physical site? If yes, what is it? Is it followed? Who is responsible for this plan?
Is the physical site planning handled via one plan or several? If more than one, why? Is feedback collected for one or all plans? If yes, how often and by whom?
Are plans determined by balancing implementation costs with estimated business benefits? If yes, by whom (e.g., team, individual, management, etc.)?
Does planning consider the following requirements and functions: hardware capacity and layout, HVAC and fire suppression, power, structural planning (i.e., mitigating man-made or natural disaster), and integration with security planning & management? If yes, explain.
Are business goals established for physical site planning and incorporated? If yes, by whom? How often is the plan reviewed?
Base Practice: 2.9.1 Test environmental/regulatory control plans periodically on a per-site basis
Is testing performed regarding environmental and regulatory controls on a periodic basis? If yes, how often for each site and by whom? If no, explain.
What are the main environmental/regulatory concerns for each site? Please prioritize and explain. Are the plans for testing updated to include new equipment, regulations, etc.? If yes, how often are they reviewed and by whom?
Base Practice: 2.9.2 Notify appropriate party of environmental failure on a per-site basis
When a failure is encountered, are there identified contacts who you notify for each site? If yes, how is notification done (e.g., pager, e-mail, phone, etc.)?
What are the most common failures within each site and how often do they occur? How is feedback from the various sites collected (e.g., reports, conference calls, e-mail, etc.)?
Are data collected regarding the types of failures, response time, locations, reasons, etc.? If yes, what data are collected and who receives this data? Are data collected on a manual basis or automatically?
Base Practice: 2.9.3 Monitor progress of corrective actions to failure on a per-site basis
Are corrective actions, in response to previous failures, monitored per site? If yes, how are they monitored and who is responsible for this? Are other related groups notified of changes or issues concerning any corrective action? If yes, how and when? If no, explain.
Are metrics collected on the progress or status of physical site management procedures for each site? If yes, how often, and are these collections done manually or are there software/automation tools in use? Are these metrics analyzed against goals and quantified objectives? If yes, by whom?
Base Practice: 2.9.4 Monitor physical site management plan for its effectiveness on a per-site basis
Are business goals and strategies for each site used to measure the success or failure of corrections and/or the general operation procedures for physical site management?
Are the physical site management tasks continuously improved? If yes, are these improvements deployed and measured for effectiveness?
Are enough resources available, as far as equipment, space, procedures, software and/or personnel on each site? If no, explain how the addition of resources would improve the effectiveness of a site (e.g., better monitoring, quicker response time, accurate data, etc.).
Base Practice: 2.9.5 Provide feedback on physical site management to physical site planning function
Is feedback from physical site management forwarded to physical site planning? If yes, how (e.g., conference calls, reports, e-mail, etc.)?
Are the plans, procedure reviews, issues and problems for each site collected and addressed via one centralized group or is each site a completely separate entity? If separate, does each communicate with physical site planning?
Generic Questions for Process Area
Is there a written policy regarding physical site management's procedures? If yes, is it followed? Is version control enacted on this plan? Are change control documents regarding the plan cut and forwarded to appropriate departments?
Is training made available to new hires within physical site management? Is follow-up training covering new technologies, procedures, etc. provided? Are plans made for future employment needs within physical site management?
Is the entire physical site management process reviewed for continuous improvement? If yes, by whom and how often? Are the improvements deployed and measured against business goals and metrics? If yes, by whom?
Process Capability Assessment Instrument
Questionnaire
Process Area 2.9 Physical Site Planning & Management
Work Product list
Process Area 2.9 Physical Site Planning & Management
Procedures noting physical site planning (e.g., expansion, new layout, etc.)
Procedures regarding environmental/regulatory control plans for each site
Failure monitoring/reporting procedures for each site
Reports noting status of physical site management for each site
List of risk issues for physical site management (e.g., earthquakes, wild fires, temperature extremes, brown/black outs, frequency of lightning strikes, tornadoes, etc.) for each site
Base Practices
References
Process Area: Mass Storage Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Base Practice / Example of Assessment Indicator / Assessment Indicators at Client
2.10.1 Monitor and control storage usage: A mass storage management team exists and an appropriate tool is employed to monitor and control storage usage.
2.10.2 Define usage standards for storage media: Documentation exists detailing storage policies, naming standards and storage hardware configurations, etc.
2.10.3 Disk space management for mass storage: Usage profiles are maintained to aid in allocating shared disk space.
2.10.4 Rectify problems with stateless file systems: During backups, all stateless files are tracked and subsequently backed up.
2.10.5 Locate datasets according to access priority: Data is stored in a "media hierarchy" (e.g., online, nearline and offline storage) which places frequently accessed data at levels or in storage media with faster retrieval provisions. (A tiering sketch follows this table.)
2.10.6 Tape management: Procedures for handling tapes (loading, read/write, labeling, etc.) are specified and followed.
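Base practice 2.10.5 above describes placing data in a media hierarchy (online, nearline, offline) according to access priority. The following Python sketch shows one simple way a tier could be chosen from a file's last access time; the tier names and age cut-offs are hypothetical, and a real storage manager would typically combine access counts, size and policy rules.

    from datetime import datetime, timedelta, timezone
    from pathlib import Path
    from typing import Dict, Optional

    # Hypothetical age cut-offs for each storage tier.
    NEARLINE_AFTER = timedelta(days=30)
    OFFLINE_AFTER = timedelta(days=180)

    def assign_tier(path: Path, now: Optional[datetime] = None) -> str:
        """Choose a storage tier for a file based on its last access time."""
        now = now or datetime.now(timezone.utc)
        last_access = datetime.fromtimestamp(path.stat().st_atime, tz=timezone.utc)
        age = now - last_access
        if age >= OFFLINE_AFTER:
            return "offline"   # e.g., tape library
        if age >= NEARLINE_AFTER:
            return "nearline"  # e.g., slower disk or optical jukebox
        return "online"        # fast, directly attached or SAN storage

    def plan_migrations(root: Path) -> Dict[str, str]:
        """Return a mapping of file path to proposed tier for everything under root."""
        return {str(p): assign_tier(p) for p in root.rglob("*") if p.is_file()}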
GP2.4 Collect data to measure performance: Data such as the following are collected: percentage of time when the system lacks appropriate storage items, number of times per month that backups are unsuccessful due to the breadth of the system.
GP2.5 Maintain communication among team members: The mass storage management team provides status reports to appropriate parties.
Work Product Management
GP2.6 Ensure work products satisfy documented requirements: Tape labeling and naming of other storage devices occurs according to documented specifications.
GP2.7 Employ version control to manage changes to work products: If multiple copies of data are stored, mechanisms are in place to ensure that the labeling corresponds and any changes are applied to all copies (or appropriate note of the changes is made).
Level 3 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Definition
GP3.1: Define policies and procedures at an organization level: All data storage occurring within the organization is tracked/controlled through a single mass storage management system.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: Planning of future human resource needs occurs for mass storage management, given organization and related storage growth projections and/or new technology considerations.
Process Resource
GP3.3: Plan for human resources proactively: The functioning of the mass storage monitoring and control processes according to specifications keeps backup problems at a minimum.
GP3.4 Provide feedback in order to maintain knowledge and experience: The storage management team provides feedback on the storage management process at periodic meetings.
Level 4 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Process Measurement
GP4.1 Establish measurable quality objectives for the operations environment: Quantitative targets for mass storage management performance are periodically set.
GP4.2 Automate data collection: The storage management tool automatically collects data on how frequently files are accessed and reassigns storage locations within the media hierarchy accordingly.
GP4.3 Provide adequate resources and infrastructure for data collection: All data specified as necessary for mass storage management (e.g., available disk space, access frequencies) or for assessing the process are collected.
Process Control
GP4.4 Use data analysis methods and tools to manage and improve the process: Assessment metrics collected are compared to targets and discrepancies addressed.
Level 5 Assessment Indicators
Process Attribute / Generic Practice / Example of Assessment Indicator / Assessment Indicators at Client
Continuous Improvement
GP5.1: Continually improve tasks and processes: The mass storage management process and relevant technologies are periodically reviewed to identify potential enhancements. Actions are taken to implement the improvements identified.
Process Change
GP5.2: Deploy "best practices" across the organization: Any modifications of standards or processes are applied to all storage activities throughout the organization, where applicable.
Process Capability Assessment Instrument: Interview Guide
Process Area 2.10 Mass Storage Management
Questions
Base Practice: 2.10.1 Monitor and Control Storage Usage
What type of system or tool do you have in place for monitoring and controlling storage usage? What utilities does it have?
Can the tool support all the operating systems within the distributed environment?
Does the tool have the ability to assess the physical file placement and determine space availability?
Does the tool allow for reordering of files to eliminate fragmentation?
What media types are used for storage? Can the tool monitor all these media types?
Who oversees or manages the monitoring and control process? What are their responsibilities?
Base Practice: 2.10.2 Define Usage Standards for Storage Media
What information is specified as part of the storage media's usage standards? Are system descriptions, operational procedures, help-desk/problem resolution contacts, Mass Storage Management configuration files, etc. included?
Where is the usage standards documentation stored and who accesses these documents? Who is responsible for maintaining usage standards documentation?
How frequently are usage standards reviewed and updated? What is the process for doing so?
Base Practice: 2.10.3 Disk Space Management for Mass Storage
What is the procedure for determining shared disk space requirements? On what basis is disk-space partitioning done?
How is disk space allocation kept track of?
How frequently are disk space requirements reevaluated and space reallocated?
Base Practice: 2.10.4 Rectify Problems with Stateless File Systems
What mechanisms are employed to rectify backup problems resulting from stateless file systems?
Has an assessment been made of how well these mechanisms deal with the problem? If so, what was the outcome of the assessment?
Base Practice: 2.10.5 Locate Datasets According to Access Priority
Does a storage media hierarchy (based on ease of access) exist and is data stored at particular levels based on defined strategies or priorities? If so, what are the levels of the hierarchy (e.g., online, nearline, offline) and how is data assigned to a particular level?
Are data moved around within the hierarchy? What circumstances initiate such location changes? Is there an automated process for discerning what datasets should be moved (e.g., the storage management software keeps track of the number of times particular files are accessed and determines which files should be moved to make retrieval more efficient)? If manual intervention is required, what needs to be done and who does it?
Do you have any means of gauging the efficiency of your data organization at a particular time? If so, how frequently is the efficiency assessed? Are any efficiency-related targets set?
Base Practice: 2.10.6 Tape Management
What is your procedure for requesting, locating and loading tapes?
Where are tapes stored? How is the location of each tape in storage tracked?
How do you ensure that all tapes are labeled? What information is recorded on the label?
Generic Questions for Process Area
Are problems ever experienced in running backups due to large data volumes, inadequate bandwidth or sub-optimal hardware/software support?
What type of training do storage management personnel receive on standards, policies and actual operation of the mass storage management system?
Are procedures audited to verify that standards and policies are being followed? Are storage management operations periodically reviewed with the purpose of identifying potential improvements?
Do you find that the resources devoted to mass storage management satisfactorily meet the storage needs of the organization?
Process Capability Assessment Instrument
Process Area 2.10 Mass Storage Management
Process Area Description: Mass storage involves those activities related to the handling of various types of centralized and distributed storage media (e.g., tapes, disks, etc.), including the monitoring and controlling of storage resources and their usage. Mass Storage Management can be viewed as providing the top level of storage management, with support from Archiving and Backup/Restore Management.
Questionnaire
Process Area 2.10 Mass Storage Management
Yes / No / Don't Know / N/A
1. Do you use a storage management tool for monitoring and controlling storage usage?
2. Do you have a dedicated mass storage management team for centralized and distributed storage media management?
3. Are usage standards for storage media defined?
4. Are disk space requirements determined and is disk space partitioned accordingly?
5. Are measures adopted to rectify problems with stateless file systems?
6. Are data stored and moved around in the media hierarchy (e.g., on-line, near-line, off-line) according to determined strategies or priorities?
7. Are tape management procedures defined for retention, rotation and labeling of files?
Work Product list
Process Area 2.10 Mass Storage Management
Storage policies document
Naming standards document
Tape management procedures
Usage level reports
Base Practices
References
Process Area: Release Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that base practices are performed
Base Practice / Example of Assessment Indicator / Assessment Indicators at Client
3.1.1 Analyze change request priorities: Each request is clearly marked with an emergency/non-emergency status (e.g., planned vs. unplanned). When questioned, rules of prioritization can be explained. (A prioritization sketch follows this table.)
3.1.2 Confirm technical feasibility of the release package: Results of impact analysis show that the technical feasibility of release packages was evaluated.
3.1.3 Perform release requirements analysis: Results of impact analysis show that payroll resource requirements (e.g., software and people) were assessed.
3.1.4 Define contents of the release package: Platform, distribution lists, configuration and data parameters are considered for the scope and contents of the release.
3.1.5 Plan release testing: Release work-plans are available that note testing schedules for modules, versions or packages and their subsequent release dates.
3.1.6 Agree on and document release schedule, confirm with appropriate parties: Payroll agrees on the release schedule after it is confirmed with migration control and validation.
3.1.7 Report on progress of release plan: Reports are distributed to various parties and feedback is coordinated.
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management - GP2.1 Establish and maintain a policy for performing operational tasks: A policy for release planning and tracking exists and is followed.
GP2.2 Allocate sufficient resources to meet expectations: Release management personnel have access to software, documentation and reports to perform their tasks.
GP2.3 Ensure personnel receive the appropriate type and amount of training: A training policy is in place for all new release management personnel. Organization-wide, customers are aware of release management's capabilities.
GP2.4 Collect data to measure performance: Data are collected, for example: number of rollouts per month, number of emergency releases per month, etc.
GP2.5 Maintain communication among team members: Status reports covering progress, issues and problems are provided to other related areas and management. Feedback is provided via meetings, reports, etc.
Work Product Management - GP2.6 Ensure work products satisfy documented requirements: All release contents are defined and accessible prior to scheduling.
GP2.7 Employ version control to manage changes to work products: Version control is placed on release components.
Level 3 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Definition - GP3.1 Define policies and procedures at an organization level: There is one centralized release management area vs. several throughout the organization.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: New release management personnel receive training on the process. Subsequent training is provided for new technologies, software or procedures. Future employee requirements are addressed.
Process Resource - GP3.3 Plan for human resources proactively: Release management handles releases according to the stated policy vs. ad hoc.
GP3.4 Provide feedback in order to maintain knowledge and experience: Release management receives feedback from SLA, Service Desk, etc. via e-mail, reports or meetings regarding changes, concerns or issues.
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement - GP4.1 Establish measurable quality objectives for the operations environment: Release management processes are based on strategic business needs vs. industry standards.
GP4.2 Automate data collection: Metrics are automatically collected from the release management software tool vs. manual collection, for example: % of releases approved per month, % of on-time delivery per month, etc.
GP4.3 Provide adequate resources and infrastructure for data collection: Metrics collected by release management personnel are analyzed and reported. Software tools may tie deployment and release management together for improved data collection.
Process Control - GP4.4 Use data analysis methods and tools to manage and improve the process: Release management is evaluated against performance goals and metrics for suggested improvements and process revisions.
Level 5 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Continuous Improvement - GP5.1 Continually improve tasks and processes: Release management is continuously improved via incremental changes; an example: change order requests are reviewed for improper customer installation on a weekly basis.
Process Change - GP5.2 Deploy "best practices" across the organization: Process improvement noted in the 5.1 example is validated via metrics and business goals, for example: review identifies what percentage of requests are truly Service Desk issues, decreasing man hours for release management by 2.5%.
Process Capability Assessment Instrument: Interview Guide
Process Area 3.1 Release Management
Questions
Base Practice: 3.1.1 Analyze change request priorities
Have change request priorities been analyzed (emergency, non-emergency)? How?
Are rollout plans put into place? Who is involved in this?
How are emergencies documented?
Base Practice: 3.1.2 Confirm technical feasibility of the release package
Are SLAs considered for technical/compliance issues? If not, why not?
How is the technical feasibility of the release package confirmed (e.g. meetings, conference calls)? With whom (e.g. operations, vendors, etc.)?
Are model and architecture requirements in place for "new" releases? If not, why?
Base Practice: 3.1.3 Perform release requirements analysis
1. Which workflow tools are utilized?
2. Have the resource requirements been analyzed (e.g. hardware, software, people, etc.)? If yes, by whom?
3. Is there a process model in place that provides a "big picture" view?
Base Practice: 3.1.4 Define contents of the release package
1. Are the versioned application/system software and hardware platforms defined? If yes, by whom?
2. Are operation test procedures clearly defined and distributed? If yes, what are the testing procedures? If no, why?
3. Are the required test reference data and configuration parameters documented? If no, why? If yes, what are the parameters?
Base Practice: 3.1.5 Plan release testing
1. Do the work plans include release dates for all modules/versions (e.g. hardware, software, application, etc.) being released? If no, why?
2. Is appropriate lead time provided to customers prior to release and subsequent training? How is this lead time and training communicated?
3. Who in management reviews release plans? Does management quantify and qualify objectives for all release management processes performed?
4. Is the release fallback/contingency approach defined? If yes, elaborate.
Base Practice: 3.1.6 Agree on and document release schedule, confirm with appropriate parties
1. How is the overall release plan/effort managed and coordinated with the appropriate parties?
2. Is feedback on implementation of releases received and reviewed? If yes, by whom?
Base Practice: 3.1.7 Report on progress of release plan
What reports are produced regarding release planning? How are these updates/reports distributed (e.g. meeting reviews, e-mail, hard copy, etc.)? Who are these reports distributed to?
Generic Questions for Process Area
1. Are there release management procedures/policies noting how change orders, schedules, reports, analysis and feedback are processed? If yes, what are the procedures? Are they followed? If not, why?
2. Please describe training for release management personnel. Is this enacted for new as well as existing staff members?
3. What metrics are collected on the release management process to measure success, completion/failure? Are adequate resources provided to gather these statistics? What is done with these metrics?
4. Are standardized checklists, processes and required deliverables noted to personnel who perform the release process? If yes, what is used (e.g. checklists, process, etc.)?
5. Is the release management process reviewed for continuous improvement and are these improvements enacted? If yes, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
Process Capability Assessment Instrument
Process Area 3.1 Release Management
Process Area Description: Release Management is the overall process of delivering an on-time release into production. Release Management is broken down into several areas, which are described below.
Release Planning
Release Planning coordinates the release of updates to the distributed and central sites. Because any change in the distributed environment may impact other components, releases must be planned carefully to ensure that a change will not negatively impact the distributed system.
Release Planning defines the content of a release; groups new or changed software, data, procedures, training material and upgrade packages for distribution and implementation; applies versions to the release components; and creates a release schedule.
Release Tracking
Release Tracking is the process of monitoring the progress of release contents and all releases.
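By way of a non-limiting illustration, the release-package idea described above (a release that groups versioned components and carries a schedule whose progress can be tracked) may be sketched as follows. The field and milestone names are assumptions made for the example and are not prescribed by the assessment instrument.

# Illustrative sketch: a release groups versioned components (software, data,
# procedures, training material) and tracks progress against milestones.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Component:
    name: str
    kind: str        # e.g. "software", "data", "procedure", "training"
    version: str

@dataclass
class Release:
    release_id: str
    scheduled_date: date
    components: list = field(default_factory=list)
    milestones: dict = field(default_factory=dict)  # milestone -> completed?

    def add(self, component: Component):
        self.components.append(component)

    def complete(self, milestone: str):
        self.milestones[milestone] = True

    def progress(self) -> float:
        """Fraction of tracked milestones completed (release tracking)."""
        if not self.milestones:
            return 0.0
        return sum(self.milestones.values()) / len(self.milestones)

r = Release("R-2000-07", date(2000, 7, 31),
            milestones={"contents defined": False, "testing planned": False,
                        "schedule agreed": False})
r.add(Component("billing module", "software", "2.3.1"))
r.complete("contents defined")
print(f"{r.release_id}: {r.progress():.0%} of milestones complete")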
Questionnaire
Process Area 3.1 Release Management
(Answer each question: Yes / No / Don't Know / N/A)
1. Are change requests prioritized over others because of their emergency status (e.g. emergency over planned requests)?
2. Are service level agreements and technical feasibility/impact considered when determining technical issues for the release?
3. Are procedures followed to ensure that the release is compatible with the existing distributed environment?
4. Is a procedure in place and used which helps determine the contents and scope of a future release?
5. Are release work plans/schedules prepared in advance of the actual release?
6. Are schedules agreed upon and dates confirmed with all related parties prior to the actual release?
7. Are reports on the progress of release plans provided?
Work Product list
Process Area 3.1 Release Management
Documented release procedures
Example of a past release schedule
Example of configuration parameters
Example of build procedures and scripts
Example of operations procedures
Example of customer procedures
Example of customer training materials
Example of legacy data interfaces
Example of early release rollout process successes and failures
Base Practices
BP Number 3.2.1
BP Name: Change Initiation
BP Description: The change request serves as a formal record to document and track the status of a change from identification to its eventual completion. In this activity, a change request is created and logged and the criticality of the change is determined. Receipt of the change request is confirmed with the requestor.
Example: The requester completes and submits a Change Request Form that contains information such as the date the Change Request Form was completed, the name and signature of the employee requesting the change, the type of change request (systems, security, or other), the description of the change, the rationale for the change, and the priority. The Change Request Form is logged in the change control database.
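A hypothetical sketch of the change-initiation record in the example above is given below. The in-memory log stands in for the change control database, and the field names are assumptions made for this illustration.

# Hypothetical sketch: create, log and acknowledge a change request.
import datetime
import itertools

_ids = itertools.count(1)
change_log = []  # stands in for the change control database

def initiate_change(requestor, change_type, description, rationale, priority):
    """Create a change request record, log it, and confirm receipt."""
    request = {
        "id": next(_ids),
        "date": datetime.date.today().isoformat(),
        "requestor": requestor,
        "type": change_type,          # e.g. systems, security, other
        "description": description,
        "rationale": rationale,
        "priority": priority,         # criticality determined at initiation
        "status": "logged",
    }
    change_log.append(request)
    print(f"Receipt confirmed to {requestor}: change request #{request['id']} logged")
    return request

initiate_change("J. Smith", "systems", "Apply OS patch to server A",
                "Vendor security advisory", "emergency")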
BP Number 3.2.2
References
Process Area: Change Control
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
3.2.1 Change Initiation: A log shows change requests are recorded at time of receipt.
3.2.2 Change Impact Analysis/Assessment: An impact analysis template or an example of a completed analysis report.
3.2.3 Change Approval: An example of a completed change request form shows authorization by appropriate personnel.
3.2.4 Change Communication and Scheduling: A system for coordinating the planned date of a change with other activities is in place.
3.2.5 Change Implementation Planning and Preparation: When questioned, the process for notifying all affected parties of a change can be explained.
3.2.6 Change Request Tracking: The change control log shows that each change request is tracked until completion and the log updated at important milestones.
3.2.7 Change Implementation: Results of testing of implemented changes exist.
3.2.8 Change Backout and Contingency Planning: Contingency and/or back-out plans are available for each planned change.
3.2.9 Change Reporting: Example of reports detailing completed, suspended and pending changes.
3.2.10 Change Post-Implementation Reviews: Example of notification of change completion that is sent to the requestor.
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management - GP2.1 Establish and maintain a policy for performing operational tasks: A documented policy is maintained that describes procedures for requesting changes, time-frames for implementing changes, and change reporting requirements.
GP2.2 Allocate sufficient resources to meet expectations: Adequate resources enable scheduled changes to be completed on time and according to plan.
GP2.3 Ensure personnel receive the appropriate type and amount of training: All new employees receive training on change control policies and procedures.
GP2.4 Collect data to measure performance: Data are collected to measure the change control process, for example: percentage of requests declined by Change Control, number of requests scheduled by Change Control each month, percentage of requests not completed by Change Control on time.
GP2.5 Maintain communication among team members: Issues are tracked and logged. The Change Control team provides status reports.
Work Product Management - GP2.6 Ensure work products satisfy documented requirements: All required information from a change request is logged. Logs are updated at all necessary milestones.
GP2.7 Employ version control to manage changes to work products: Modifications to reporting requirements or report formats are duly noted and applied.
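The example GP2.4 measures above can be illustrated with a short calculation. This is a sketch only; the record layout is an assumption made for the example rather than part of the instrument.

# Illustrative calculation of the example GP2.4 change control measures:
# % of requests declined, requests scheduled per month, % not completed on time.
def change_control_metrics(requests, month):
    """requests: list of dicts with 'month', 'declined', 'scheduled',
    'completed_on_time' flags; month: e.g. '2000-07'."""
    in_month = [r for r in requests if r["month"] == month]
    if not in_month:
        return {}
    declined = sum(r["declined"] for r in in_month)
    scheduled = sum(r["scheduled"] for r in in_month)
    late = sum(r["scheduled"] and not r["completed_on_time"] for r in in_month)
    return {
        "% declined": 100.0 * declined / len(in_month),
        "requests scheduled": scheduled,
        "% not completed on time": 100.0 * late / scheduled if scheduled else 0.0,
    }

sample = [
    {"month": "2000-07", "declined": False, "scheduled": True,  "completed_on_time": True},
    {"month": "2000-07", "declined": True,  "scheduled": False, "completed_on_time": False},
    {"month": "2000-07", "declined": False, "scheduled": True,  "completed_on_time": False},
]
print(change_control_metrics(sample, "2000-07"))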
Level 3 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Definition - GP3.1 Define policies and procedures at an organization level: Procedures for requesting changes are known throughout the organization. If applicable, change request forms are readily available.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: Planned changes are reviewed to project the need for additional staff or human resource qualifications.
Process Resource - GP3.3 Plan for human resources proactively: Change requests are always handled according to the documented policy (e.g. requests are not processed without appropriate approval).
GP3.4 Provide feedback in order to maintain knowledge and experience: On completion of a change request, feedback is solicited from the requester.
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement - GP4.1 Establish measurable quality objectives for the operations environment: Quantitative performance targets are periodically set.
GP4.2 Automate data collection: Capability numbers are estimated for the defined performance metrics.
GP4.3 Provide adequate resources and infrastructure for data collection: All predefined metrics are collected and distributed to appropriate parties.
Process Control - GP4.4 Use data analysis methods and tools to manage and improve the process: Performance metrics are compared to targets and discrepancies addressed.
Level 5 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Continuous Improvement - GP5.1 Continually improve tasks and processes: The Change Control process and relevant technologies are periodically reviewed to identify potential enhancements. Actions are taken to implement improvements identified.
Process Change - GP5.2 Deploy "best practices" across the organization: All appropriate parties in the organization are informed of new/modified procedures aimed at improving the Change Control process.
Process Capability Assessment Instrument: Interview Guide
Process Area 3.2 Change Control
Questions
Base Practice: 3.2.1 Change Initiation
How is a change initiated? Is a change-request form completed and submitted? What information is required on a change-request form?
Is confirmation of request receipt sent?
Where is a change-request logged? What information is recorded when a change-request is logged?
Does each change-request receive a priority level? If so, what are the various priority levels and what action or service level does a particular priority level warrant? Does a documented policy specify these actions/levels?
Does the requestor specify the criticality of the change, or do change control personnel determine the request's priority level? If the latter is the case, on what basis is a criticality level assigned to the request?
Base Practice: 3.2.2 Change Impact Analysis/Assessment
What type of analysis of the change's impact is performed? What issues are considered?
Are both technical and business implications taken into consideration?
Is the effort required to complete the change determined?
Who performs the analysis and who reviews it?
What are the consequences of the change impact analysis (i.e. is the change request rejected if the change analysis yields particular results)?
Base Practice: 3.2.3 Change Approval
Whose approval is needed before a change request can be implemented? Does the person(s) whose approval is necessary depend on the scope or priority level of the change?
How is approval obtained and documented?
Is the change requestor notified of change approval or rejection?
Base Practice: 3.2.4 Change Communication and Scheduling
Once approval is obtained, what is the process for estimating the time and scheduling the change?
Are other completion times and dates factored into the estimated time of a change to be implemented?
Does a master schedule exist on which the change is noted, or how is the scheduled change communicated to appropriate parties?
Base Practice: 3.2.5 Change Implementation Planning and Preparation
Who is notified of an impending change?
How does change notification take place?
How much time before the implementation of the change does notification occur?
If the system or parts of the system will be unavailable during the change implementation, how is this unavailability managed?
Base Practice: 3.2.6 Change Request Tracking
What is the process for tracking the implementation of a change request?
What events or conditions related to the change request are logged, i.e. when is the change request status updated?
Is the log reviewed to identify changes that might be overdue or that require additional action?
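The tracking and overdue-review questions above may be illustrated with the following hedged sketch of a change-request tracking log in which each status change is time-stamped and the log can be scanned for stale requests. The status names and cut-off are assumptions for the example.

# Hedged sketch: track change-request milestones and flag overdue requests.
from datetime import datetime, timedelta

class ChangeTracker:
    def __init__(self):
        self.requests = {}  # request id -> list of (timestamp, status)

    def update(self, request_id, status, when=None):
        """Record a milestone (e.g. approved, scheduled, implemented, closed)."""
        self.requests.setdefault(request_id, []).append(
            (when or datetime.now(), status))

    def overdue(self, max_age_days=14, now=None):
        """Requests whose latest status is not 'closed' and is older than the cut-off."""
        now = now or datetime.now()
        cutoff = now - timedelta(days=max_age_days)
        stale = []
        for request_id, history in self.requests.items():
            last_time, last_status = history[-1]
            if last_status != "closed" and last_time < cutoff:
                stale.append(request_id)
        return stale

tracker = ChangeTracker()
tracker.update("CR-101", "approved", datetime(2000, 6, 1))
tracker.update("CR-102", "closed", datetime(2000, 6, 10))
print(tracker.overdue(now=datetime(2000, 7, 1)))  # -> ['CR-101']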
Base Practice: 3.2.7 Change Implementation
If necessary, are change requests escalated/re-routed? What is the process for doing so? Is this process documented and followed?
How is successful completion of the requested change tested or verified?
Who is responsible for verifying the successful completion of the change?
Base Practice: 3.2.8 Change Backout and Contingency Planning
For what types of changes are back-out or contingency plans devised? Does a policy exist specifying changes that require such plans?
Where are back-out/contingency plans documented?
How frequently (often, rarely, never) are these back-out or contingency plans utilized?
Base Practice: 3.2.9 Change Reporting
What reports are generated pertaining to changes? What are the contents of these reports?
Do the reports follow documented guidelines on format and content?
How frequently are these reports created and disseminated?
Who views these reports and for what purposes?
Base Practice: 3.2.10 Change Post-Implementation Reviews
Is requestor notified of change completion and confirmation received?
What is the process for closing a change request?
Is an audit trail of each change request stored? If so, what documentation is saved?
Can the audit trail for a particular change request be obtained? If so, how?
Generic Questions for Process Area
Are any metrics (e.g. percent of change requests completed on time, percent of requests put on hold) collected to measure performance of the change control process? If so, what are they?
Are any quantitative performance targets set for change control? If so, please describe them. Is performance evaluated against these targets?
What type of training do change control personnel receive? Are employees aware of all documented policies and procedures?
Is the change control process periodically reviewed/ evaluated with the intent of identifying potential improvements?
Process Capability Assessment Instrument
Process Area 3.2 Change Control
Process Area Description: Change Control is responsible for coordinating and controlling all change administration activities within the enterprise environment (i.e. document, impact, authorize, schedule, implementation control). Change Control determines if and when a change will be carried through in the enterprise environment. Change potentially covers all events that impact application software, systems software, or hardware.
Changes may often be divided into categories, for example:
New capability, such as new applications or hardware components.
Modifications, which can change functionality, improve performance, etc.
Maintenance, typically to correct errors.
Emergency, which requires immediate attention and correction implementation.
Questionnaire
Process Area 3.2 Change Control
10. Is the change requestor informed when the change is complete?
11. Is an audit trail of all changes available?
Work Product list
Process Area 3.2 Change Control
Change request form
Sample change control log record
Change control reports
Complete audit trail of a change request
Impact analysis results
Master change control schedule
Example of back-out/contingency plan
Validation (3.3)

Base Practices

References
Process Area: Validation
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
3.3.1 Determine what needs to be tested for the product: An example testing requirements document shows that business requirements, technical standards, etc. are evaluated.
3.3.2 Prepare test plans: Test preparation documents, such as necessary resources/data lists and test execution schedules, exist.
3.3.3 Document test inputs and expected results: Test scripts are created that detail input data and expected results for each testing requirement.
3.3.4 Install new product in test environment: Prior to testing, a test environment is appropriately prepared and the product installed.
3.3.5 Test product and evaluate results: All specified testing requirements are tested and test reports are produced.
3.3.6 Perform regression testing on environment & system's functionality: Regression testing results show that previously existing functionality is tested.
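An illustrative sketch of a test script as described in 3.3.3 and 3.3.5 above is shown below: each entry pairs input data with an expected result for a testing requirement, and a simple runner reports pass/fail. The product function used here is a stand-in assumed for the example.

# Illustrative sketch: run a test script of input/expected-result pairs.
def run_test_script(script, function_under_test):
    """script: list of dicts with 'requirement', 'input', 'expected'."""
    results = []
    for case in script:
        actual = function_under_test(case["input"])
        results.append({"requirement": case["requirement"],
                        "passed": actual == case["expected"],
                        "actual": actual})
    return results

# Stand-in for the product behaviour being validated (an assumption).
def rounds_to_cents(amount):
    return round(amount, 2)

script = [
    {"requirement": "REQ-01 rounding", "input": 10.004, "expected": 10.0},
    {"requirement": "REQ-02 passthrough", "input": 3.14, "expected": 3.14},
]
for outcome in run_test_script(script, rounds_to_cents):
    print(outcome)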
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management - GP2.1 Establish and maintain a policy for performing operational tasks: A documented policy is maintained that specifies in general terms any standard tests that need to be performed for every new product.
GP2.2 Allocate sufficient resources to meet expectations: Sufficient resources exist so that all needed testing occurs on schedule.
GP2.3 Ensure personnel receive the appropriate type and amount of training: Validation personnel receive training on any testing tools used and all validation policies/procedures.
GP2.4 Collect data to measure performance: Data such as the following are collected: percentage of new items migrated successfully, average test set-up time.
GP2.5 Maintain communication among team members: The validation team provides status reports to appropriate parties.
Work Product Management - GP2.6 Ensure work products satisfy documented requirements: Test plans and test scripts are produced according to documented specifications.
GP2.7 Employ version control to manage changes to work products: If the product is tested at multiple levels (unit, system, etc.), a method for keeping track of all documents belonging to each test exists.
Level 3 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Definition - GP3.1 Define policies and procedures at an organization level: Validation of all new products for the distributed environment is controlled or overseen by a single central validation team.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: When new technology for testing is considered, corresponding human resource requirements are projected and planned for if the new technology is to be adopted.
Process Resource - GP3.3 Plan for human resources proactively: The validation process manages to keep problems with new products at a minimum.
GP3.4 Provide feedback in order to maintain knowledge and experience: The validation team provides feedback on the validation process and raises any issues or concerns at periodic meetings.
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.3 Validation
Questions
Base Practice: 3.3.1 Determine what needs to be tested for the product
What is the process for identifying all that needs to be tested for a new product? Are business requirements reviewed and taken into consideration?
Has a general set of technical standards been defined for components of the distributed environment? If so, are the testing requirements defined to ensure that compliance with these standards will be tested?
For any product, are there certain standard tests performed (e.g. capacity, operability, compatibility, etc.)? If so, what are these tests?
Base Practice: 3.3.2 Prepare test plans
What tasks are completed while preparing test plans?
Is a test environment specified, and the necessary preparations detailed?
How is the appropriate testing approach and test model developed?
What test plan documents are produced? Are these a standard set of documents produced for every testing project? If not, how might they vary?
Are all resources required for the testing process identified? Who is in charge of identifying them (i.e. are others consulted for this decision or is this just done by the validation team)?
Who is involved in creating the final test plans? Who reviews the final test plan documents?
Base Practice: 3.3.3 Document test inputs and expected results
What document(s) are prepared detailing all test inputs to be used and the expected results? What other information do these documents contain? Are these documents prepared according to predefined specifications?
Are the test inputs/expected results directly linked back to individual testing requirements identified earlier?
Base Practice: 3.3.4 Install new product in test environment
Please describe the test environment used for testing. Does a single environment exist for all testing purposes?
Does the test environment cover all operating systems, configurations, applications, etc. that are in the production environment?
What tasks or activities are involved in preparing the test environment for the installation of a new product (e.g. verifying proper setup of hardware, software and network, clearing data from previous tests, loading test data in appropriate regions)? Are these procedures documented?
Can information be copied from the production environment to the test environment? If so, typically what information is transferred? How is this information transferred?
Is the product's installation method documented and are installation issues noted? Does the installation follow a standard process or policy for all new installations? If yes, please describe this policy or process.
Base Practice: 3.3.5 Test product and evaluate results
Are all predefined testing requirements tested? Are any mechanisms in place to ensure that all specified test cases are run? If yes, what are these mechanisms?
Are any tools used for automated testing? If so, please describe them. Approximately what proportion of testing is automated and what proportion is performed manually?
Who manages/controls the testing process? What are his/her responsibilities?
In addition to testing the product functionality, is the product's business functionality verified (i.e. does the product meet the business requirements for which it is intended)? If so, what is the process for doing so?
If appropriate, is the product tested on customers to check system navigation/ease of use and adequacy of training/job aids that accompany the product?
What reports or documents are produced as the output of the testing process? What information is presented and who receives this information? Have reporting guidelines been defined?
Base Practice: 3.3.6 Perform regression testing on environment and system's functionality
What is the process for identifying the requirements for regression testing?
Is any tool employed for automated regression testing? If so, please elaborate. Does this tool meet all regression testing requirements? If not, where does it fall short? How are these shortcomings addressed?
Are any manual or automated test scripts created and retained for reuse during future regression testing activities? If yes, are these test scripts periodically updated or changed to accommodate new processes or requirements? Who updates these scripts?
If regression testing results show that the product has unintended impacts on other areas, what is done? Is the change rolled back? Who decides that a roll back should occur and at what point during the process does this happen?
Generic Questions for Process Area
Does a designated "validation team" exist? If so, please describe the roles and responsibilities of members of the team. How does the team coordinate its activities?
What other groups does validation interface with? Where do requests for testing of a particular product originate?
Are the testing process and new technologies periodically evaluated to identify potential improvements? Are associated future human resource requirements considered? How frequently does such a review occur? Who is involved in the process?
What type of training do testing personnel receive? Does formal training occur or does training primarily occur on-the-job?
Are any statistics collected for purposes of evaluating the testing process (e.g. percent of successful migrations of tested products)? If so, please describe them and the method by which they are collected. Are targets for these metrics set? What is the process for assessing performance against these targets? How has performance been vis-a-vis the targets defined?
Do you find that adequate resources are allocated for validation activities? Please elaborate.
Process Capability Assessment Instrument
Process Area 3.3 Validation
Process Area Description: Validation involves testing potential hardware and software for the distributed environment prior to procurement to determine how well a product will fulfill the requirements identified. Validation also ensures that the implementation of a new product will not adversely affect the existing environment.
Questionnaire
Process Area 3.3 Validation
(Answer each question: Yes / No / Don't Know / N/A)
1. In preparation for testing of a new product for the distributed environment, are all conditions that need to be tested determined?
2. Is a testing approach/test model developed?
3. Is a schedule for executing the test developed?
4. Are test scripts created that document all test inputs and expected results?
5. Is the new product installed in a test environment?
6. Is the product tested against all the predefined requirements?
7. Is regression testing performed to verify that the new product will not adversely affect existing functionality?
Work Product list
Process Area 3.3 Validation
Sample test plans (e.g. test requirements, test execution schedule)
Sample testing documents (e.g. test scripts)
Sample test report
Technical standards required of all products
Deployment (3.4)
Base Practices

References
Process Area: Deployment
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
GP2.3 Ensure personnel receive the appropriate type and amount of training: A training policy is in place for new deployment personnel regarding procedures, technologies, software, etc. Organization-wide, customers are aware of the capabilities within the deployment group.
GP2.4 Collect data to measure performance: Data are collected, for example: number of batch jobs per month, number of hours per batch job, etc.
GP2.5 Maintain communication among team members: Feedback might be collected via meetings and reports from physical planning and management regarding lead times.
Work Product Management - GP2.6 Ensure work products satisfy documented requirements: Deployment schedules are complete with all necessary data (e.g. lead time, external/internal groups affected, resources, etc.).
GP2.7 Employ version control to manage changes to work products: Approvals are gained on schedules to accommodate the latest issues noted on the current deployment plan.
Level 3 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Definition - GP3.1 Define policies and procedures at an organization level: A single deployment plan exists at the organization level vs. multiple plans throughout the company, in compliance with policies and procedures.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: Employees receive training on the deployment process and subsequent new technologies, processes, software, etc. Future employment needs are also considered.
Process Resource - GP3.3 Plan for human resources proactively: The deployment schedule is always handled according to stated policy vs. ad hoc.
GP3.4 Provide feedback in order to maintain knowledge and experience: Deployment solicits and provides feedback from external and internal groups for issues/problems and changes that should be reflected on the schedule.
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement - GP4.1 Establish measurable quality objectives for the operations environment: The deployment plan is based on strategic business needs vs. industry standards.
GP4.2 Automate data collection: Metrics are automatically collected from the deployment schedule vs. collected manually.
GP4.3 Provide adequate resources and infrastructure for data collection: Metrics automatically collected by deployment personnel are analyzed and reported. The deployment software tool may be linked to the physical site management schedule and might reflect scheduling conflicts via e-mail message.
Process Control - GP4.4 Use data analysis methods and tools to manage and improve the process: Deployment is evaluated against performance goals and metrics for suggested improvements and revisions to the process.
Level 5 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Continuous Improvement - GP5.1 Continually improve tasks and processes: Deployment issues are continuously improved via incremental changes, for example: e-mail survey information may be collected from external and internal parties for phase one of four vs. hard copy surveys.
Process Change - GP5.2 Deploy "best practices" across the organization: Process improvement noted in the 5.1 example is validated via metrics and business goals, for example: by collecting data via e-mail survey, 20% more external responses were received, which decreased deployment planning for phases two through four by 30%.
Process Capability Assessment Instrument: Interview Guide
Process Area 3.4 Deployment
Questions
Base Practice: 3.4.1 Confirm schedule with all key groups periodically
Does the deployment schedule include rollout dates, software verification, training time, backout strategy, physical site preparation, locations, and numbers and types of customers involved, either internal and/or external?
What is the procedure for identifying, assigning, and defining responsibilities and schedules to key internal and external deployment groups? Is this being done? How often are people re-assigned different responsibilities based on schedules and need?
How often are meetings set with internal and/or external groups to discuss deployment activities?
What avenues are available to internal/external groups to communicate and provide feedback regarding deployment activities (e.g. meetings, video/conference calls, hardcopy, email, etc.)?
Base Practice: 3.4.2 Determine whether schedule will be impacted based on issues or problems that arise
Is lead-time provided to internal/external groups so they may evaluate any issues or problems? If yes, how long?
Are required resources (e.g. personnel, time, software, hardware, etc.) reviewed individually for deployment schedule purposes? What resources do you take into consideration?
Who is responsible for collecting internal/external group responses regarding deployment issues? Is this communicated to all stakeholders?
Base Practice: 3.4.3 Change deployment schedule as necessary to accommodate issues/problems
Have deployment schedules been changed to reflect issues and problems from stakeholders? If so, please describe how.
In the past, what have been the recurring problems and issues considered by the deployment schedule?
Do the deployment schedules allow time for "catch-up" or recovery time for deployment errors?
Base Practice: 3.4.4 Report on progress of deployment plan
How are audits performed regarding rollout activities and reported? Are adequate resources provided for this task?
What mode of communication is used to distribute reports? How often?
Are qualified /quantifiable deployment milestones determined and reported to all internal/ external groups?
Are customers provided with a contact person to communicate progress/issues/problems (e.g. service desk personnel, deployment contact)?
What data is collected and reported upon with deployment?
Base Practice: 3.4.5 Disseminate reports to appropriate parties
1. Who receives reports noting progress/successes/failures/concerns about deployment?
2. How often are these reports disseminated to internal/external stakeholders?
Base Practice: 3.4.6 Provide feedback on the deployment to deployment planning
Do other departments monitor and respond to deployment feedback? If so, from whom and what type of feedback do you receive?
How does the deployment team/personnel receive feedback from stakeholders (e.g. through service desk request tickets, deployment public mailbox, etc.)?
Is this information tracked and used for current and future deployment ease and troubleshooting?
Generic Questions for Process Area
Is training provided that reviews the deployment process/procedure? If yes, describe the training. Is training provided for all customers affected by the deployment? If yes, describe the training.
Are the deployment activities and processes monitored for continuous improvement? If yes, how?
Have any changes been enacted and validated after they have been identified as a continuous improvement area?
Process Capability Assessment Instrument
Process Area 3.4 Deployment
Process Area Description: Deployment monitors the rollout schedule against the activities taking place to ensure that rollout happens smoothly according to the planned schedule. As there are many dependencies within a distributed system, deployment can become highly complex and must be synchronized.
In addition, numerous groups within and external to the organization will be involved in the rollout. Deployment is responsible for managing these groups, coordinating the information received from these groups, and determining whether or not the schedule will be negatively impacted by any activity taking place. If changes to the schedule are required, Deployment is responsible for coordinating the changes across all of the groups involved and seeking management approval for the changes.
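The schedule-impact determination described above may be illustrated with the following hedged sketch: given the planned rollout date and the activities reported by the groups involved, flag the activities whose projected finish would negatively impact the schedule. The data layout is an assumption made for the example.

# Hedged sketch: flag activities whose delays push past the planned rollout date.
from datetime import date, timedelta

def schedule_impact(rollout_date, activities):
    """activities: list of dicts with 'group', 'finish', 'delay_days'.
    Returns the groups whose delayed finish falls after the rollout date."""
    at_risk = []
    for activity in activities:
        projected = activity["finish"] + timedelta(days=activity["delay_days"])
        if projected > rollout_date:
            at_risk.append((activity["group"], projected))
    return at_risk

planned_rollout = date(2000, 9, 1)
reported = [
    {"group": "site preparation", "finish": date(2000, 8, 20), "delay_days": 5},
    {"group": "customer training", "finish": date(2000, 8, 25), "delay_days": 10},
]
for group, projected in schedule_impact(planned_rollout, reported):
    # A schedule change would then be coordinated and management approval sought.
    print(f"schedule impacted by {group}: projected finish {projected}")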
Questionnaire
Process Area 3.4 Deployment
(Answer each question: Yes / No / Don't Know / N/A)
1. Are deployment schedules confirmed with all key groups periodically?
2. Are deployment schedules sometimes impacted by problems that arise with other networking functions or business issues?
3. Is the deployment schedule changed to accommodate issues/problems?
4. Are reports generated which provide information on the deployment progress?
5. Are reports disseminated to the appropriate external and internal parties?
6. Is feedback provided to deployment planning personnel regarding the deployment process/progress?
Work Product list
Process Area 3.4 Deployment
Example of a previous deployment plan
Example of training schedule/materials provided to employees who recently received a deployed application
Example of previous deployment reports
A copy of the standard procedures regarding deployment
Example of a backout strategy if deployment is not successful
Software & Data Distribution (3.5)
PA Number 3.5
PA Name: Software & Data Distribution

Base Practices

References
Process Area: Software & Data Distribution
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
3.5.1 Identify architecture appropriate for environment: The quarterly reports for sales will be distributed using push architecture. Phased software distribution will take place for the new software. Software and data distribution personnel can identify an architecture for each distribution.
3.5.2 Identify architectures per business process: On-line processing is used with new software. Software and data distribution personnel can identify which business processes are used.
Detailed Design: A detailed design shows overlap into other process areas (e.g. SLA Management, Event Management, Reporting, etc.). Reports/issues/problems are identified between process areas and noted by personnel when asked.
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management - GP2.1 Establish and maintain a policy for performing operational tasks: A policy exists that covers the software distribution of an application, patch or upgrade, and is followed.
GP2.2 Allocate sufficient resources to meet expectations: Software copies are ordered and personnel have been assigned distribution tasks.
GP2.3 Ensure personnel receive the appropriate type and amount of training: A training policy is in place for new software and data distribution staff. Organization-wide, customers are aware of the capabilities of software and data distribution.
GP2.4 Collect data to measure performance: Data are collected, for example: total number of batch rollouts scheduled per month, number of hours used per month for rollouts, etc.
GP2.5 Maintain communication among team members: Status reports noting issues/scope of change are forwarded to customers and departments. Feedback via meetings, reports, etc. is collected.
Work Product Management - GP2.6 Ensure work products satisfy documented requirements: A complete copy of the detailed design plan is available.
GP2.7 Employ version control to manage changes to work products: A change control document references overlapping departments being notified of distribution, and related documents are updated accordingly.
Level 3 Assessment Indicators
GP3.4 Provide feedback in order to maintain knowledge and experience: New employees within the software and data distribution group receive training on the process. Subsequent process changes for distribution are noted to all distribution staff via training, meetings, videoconferencing, etc. Future employment needs are considered.
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement - GP4.1 Establish measurable quality objectives for the operations environment: Software and data distribution is based on strategic business needs vs. industry standards.
GP4.2 Automate data collection: Metrics are automatically collected from the distributed network vs. manually collected.
GP4.3 Provide adequate resources and infrastructure for data collection: Software and data distribution is tracked via a remote diagnostic tool that allows the administrator, for example, to perform remote troubleshooting of workstations and servers on the network. Problems can be solved without visiting each machine and analysis can be rendered.
Process Control - GP4.4 Use data analysis methods and tools to manage and improve the process: Completed software and data distribution processes are evaluated against performance goals and metrics for suggested improvements and revisions to the process.
Level 5 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Continuous Improvement - GP5.1 Continually improve tasks and processes: Software and data distribution is continuously improved via incremental changes, an example: push architecture might be identified as better for small patches vs. upgrades.
Process Change - GP5.2 Deploy "best practices" across the organization: Process improvement noted in the 5.1 example is validated via metrics and business goals, for example: it improved ease of installation, shortened downtime by 10% and resulted in fewer service calls from customers.
Process Capability Assessment Instrument
Process Area 3.5 Software & Data Distribution
Process Area Description: The Software and Data Distribution process allows software and data to be installed or updated on hosts, servers and workstations, providing customers with new and improved system functionality. Distributed architectures require compatibility between software and data on the various machines within the system and, at times, across different platforms (e.g. MVS host and Windows clients). Updates therefore must be carefully planned, synchronized, executed and, if necessary, regressed.
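The planning, compatibility checking and regression described above may be illustrated with the following sketch of a distribution job. The package names, platform labels and compatibility table are assumptions made for this illustration only.

# Illustrative sketch: a distribution job that checks a simple platform
# compatibility table before updating targets and that can be regressed.
COMPATIBLE = {("payroll-client", "windows"), ("payroll-client", "mvs-gateway")}

class DistributionJob:
    def __init__(self, package, version, targets):
        self.package, self.version, self.targets = package, version, targets
        self.installed = {}  # host -> previously installed version

    def execute(self, inventory):
        """inventory: host -> (platform, current_version)."""
        for host in self.targets:
            platform, current = inventory[host]
            if (self.package, platform) not in COMPATIBLE:
                print(f"skip {host}: {self.package} not compatible with {platform}")
                continue
            self.installed[host] = current          # remembered for regression
            inventory[host] = (platform, self.version)
            print(f"updated {host} to {self.package} {self.version}")

    def regress(self, inventory):
        """Restore previously installed versions if the update must be backed out."""
        for host, previous in self.installed.items():
            platform, _ = inventory[host]
            inventory[host] = (platform, previous)

inventory = {"ws-01": ("windows", "1.0"), "host-01": ("unix", "1.0")}
job = DistributionJob("payroll-client", "1.1", ["ws-01", "host-01"])
job.execute(inventory)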
Questionnaire
Process Area 3.5 Software & Data Distribution
Work Product list
Process Area 3.5 Software & Data Distribution
Example of Software Performance Evaluation
Example of "Manual" Distribution Package sent to users
Example output of Software/Data Distribution Reports (Successes/Failures, etc.)
Example of Asset Inventory Report for Software/Data Distribution
Current copy of Detailed Design Plan
Example of Change Control Document
Migration Control (3.6)

Base Practices
BP Number 3.6.1
BP Name: Assemble the release package
BP Description: The purpose of this activity is to bundle the required components of a release and ensure that it is correct and complete.
Example: Assurances are made that the tools, testing, software, space and version control are in place before a package is released.
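The bundle check in the example above can be sketched as a simple checklist verification before a package is released. This is a hedged sketch; the checklist items mirror the example, while the function and data layout are assumptions.

# Hedged sketch: verify the release-package assurances against a checklist.
REQUIRED_ASSURANCES = ["tools", "testing", "software", "space", "version control"]

def package_ready(assurances):
    """assurances: dict mapping each item to True/False.
    Returns (ready, list of missing items)."""
    missing = [item for item in REQUIRED_ASSURANCES if not assurances.get(item, False)]
    return (not missing, missing)

ready, missing = package_ready({
    "tools": True, "testing": True, "software": True,
    "space": True, "version control": False,
})
print("release package ready" if ready else f"hold release, missing: {missing}")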
BP Number 3.6.2
References
Process Area: Migration Control
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
3.6.1 Assemble the release package: Release packages are assembled/complete from development. A hardcopy or electronic checklist is available for bundle requirements. When questioned, personnel can explain the tools used.
Level 2
Level 3 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement - GP4.1 Establish measurable quality objectives for the operations environment: Addressing and responding to migration control issues is based on strategic business needs vs. industry standards.
GP4.2 Automate data collection: Metrics are automatically collected from migration control vs. a manual collection, for example: % of time a migrated piece is in testing, % of roll-backs a month that are patches, etc.
GP4.3 Provide adequate resources and infrastructure for data collection: Metrics collected by migration control personnel are analyzed and reported. Migration tools are tied to validation; in the event testing is complete, migration reports will be updated with the completion date.
Process Control - GP4.4 Use data analysis methods and tools to manage and improve the process: Migration control is evaluated against performance goals and metrics for suggested improvements and revisions to the process.
Level 5 Assessment Indicators
Interview Guide
Process Area 3.6 Migration Control
Questions
Base Practice: 3.6.1 Assemble the release package
Are tools, software, space and version controls always in place to secure a complete and bundled release? If yes, who does this and how? If no, explain.
Who does migration control coordinate this process with (e.g. Change Control, Validation, Deployment, Software and Data Distribution, etc.)? Explain the interactions.
Base Practice: 3.6.2 Maintain integrity of all master release packages
Are all master release packages maintained in their own file and directory structure? If no, explain.
Are all documents for the master release package archived/maintained? If yes, by whom (e.g. owners, developers, programmers, etc.) and are they accessible?
Base Practice: 3.6.3 Implement version control on release received from development
Is version control maintained on release software from development? If yes, how and who is responsible? How is feedback provided (e.g. reports, form provided, etc.)?
Is change control made aware of releases received from development? If yes, how? If no, explain.
Base Practice: 3.6.4 Migrate proper versions of release from development to test environment
Are versions validated to ensure that the correct versions of releases are migrated into the test environment? If yes, how and by whom?
Is validation made aware of release migration into the environment? If yes, how? If no, explain.
Base Practice: 3.6.5 Receive confirmation that release package has been tested successfully
1. How is confirmation received regarding successful testing? By whom and to whom is this information sent?
2. Are all schedules updated with this information? If yes, which ones? If no, why?
Base Practice: 3.6.6 Notify appropriate parties of status of release package's migration
How are other parties notified of the release package's migration? Who would be the typical receivers of such information?
Do other parties supply feedback to migration control regarding concerns, problems or collaborative efforts? If yes, how is typical communication handled (e.g. e-mail, reports, meetings, etc.)?
Base Practice: 3.6.7 Maintain migration libraries
1. Are migration libraries maintained? If yes, by whom and how? If no, explain how historical software or versions are kept.
2. How long are migration libraries maintained?
Generic Questions for Process Area
Is there a formal policy in place that covers the entire migration control process? If yes, is it followed and who is responsible for its maintenance? If no, explain.
Is there training in place for new employees? If yes, explain the training provided (e.g. ad hoc, on the job, formal, lecture). Is follow-up training provided on new technologies and procedures for all migration control employees? Explain.
Are data collected on the migration process? If yes, is this automated? Are metrics gathered noting more statistical information? If yes, explain what metrics are collected and what tools are used (e.g. software, programs, etc.).
Are strategic goals in place for migration control? If yes, what are they and are they measured against metrics? Are these metrics analyzed against business goals and reported on? If yes, how and by whom? If no, explain.
Is the migration control process reviewed for continuous improvement? If yes, are these improvements ever deployed and measured against metrics and business goals?
Are there enough resources provided for the migration control process (e.g. software, tools, personnel, etc.)? If no, explain.
Process Capability Assessment Instrument
Updates are: received from development; versioned according to the version strategy of Release Planning; moved into the test environment; and moved from the test environment into the production environment after the pre-release tests have been successfully completed.
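The update flow listed above may be illustrated as a small state machine. This is a sketch only; the state and transition names are assumptions chosen to mirror the steps in the description.

# Illustrative state machine for the migration control update flow.
TRANSITIONS = {
    "received": "versioned",
    "versioned": "in test",
    "in test": "in production",
}

class MigrationRecord:
    def __init__(self, package):
        self.package = package
        self.state = "received"
        self.tests_passed = False

    def advance(self):
        """Move to the next state; production requires successful pre-release tests."""
        if self.state == "in test" and not self.tests_passed:
            raise RuntimeError("pre-release tests not yet confirmed successful")
        self.state = TRANSITIONS[self.state]
        return self.state

update = MigrationRecord("billing patch 2.3.1")
update.advance()                 # received -> versioned
update.advance()                 # versioned -> in test
update.tests_passed = True       # confirmation received from validation
print(update.advance())          # in test -> in production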
Questionnaire
Work Product list
Process Area 3.6 Migration Control
A copy of the policy or procedure guide regarding migration control
Samples of change requests noting migration control information
Samples of reports noting migration control status and future schedules
A copy of a migration control schedule/calendar for a typical software migration process
Base Practices
References
MODE .
MODEv Toolkit
Base Practices
References
MODE v-2
MODE vl Toolkit — - NetCentric Designers Guide - Content Management
Process Area: Content Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
3.8.1 Content development: A formal procedure for content development has been established. Content management personnel are familiar, when asked, with procedures and standards for content.
3.8.2 Content approval: Content management personnel are aware of a formal approval process and can relay that process to the interviewer.
3.8.3 Content integration: When questioned, personnel can explain the process for migrating content into the production environment.
3.8.4 Technical review: Technical reviews are performed regarding content to ensure all requirements (e.g. links, tags) are in place. Notes from the review session are available.
3.8.5 Content testing: Content is tested in various environments/platforms (e.g. unix vs. pc) for performance problems and issues. Test plans for content management testing are available.
3.8.6 Content restoration: When questioned, personnel can explain the procedure for identifying content for removal and the archive location.
3.8.7 Content aging: There is a procedure for identifying content for removal and the archive location. A timetable is set up for removal.
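The content-aging practice above (identifying content due for removal and its archive location on a timetable) may be illustrated with the following hedged sketch; the cut-off, page fields and archive path layout are assumptions made for the example.

# Hedged sketch: identify pages due for removal and compute an archive location.
from datetime import date, timedelta

def pages_to_archive(pages, max_age_days=365, today=None):
    """pages: list of dicts with 'url' and 'last_updated' (date).
    Returns (url, archive_location) pairs for content due for removal."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    due = []
    for page in pages:
        if page["last_updated"] < cutoff:
            archive_location = f"/archive/{page['last_updated'].year}{page['url']}"
            due.append((page["url"], archive_location))
    return due

site = [
    {"url": "/promotions/spring99.html", "last_updated": date(1999, 4, 1)},
    {"url": "/products/index.html", "last_updated": date(2000, 6, 15)},
]
for url, location in pages_to_archive(site, today=date(2000, 7, 1)):
    print(f"remove {url}, archive to {location}")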
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management - GP2.1 Establish and maintain a policy for performing the process: A policy exists that covers content management from end to end and is followed.
GP2.2 Allocate adequate resources for performing the process: Software tools, languages, equipment and personnel are available for content management tasks.
GP2.3 Ensure adequate people skill: A training policy is in place for new content management personnel. Organization-wide, customers are aware of content management's capabilities.
GP2.4 Measure process performance: Data are collected, for example: number of successful content approvals each month, average number of successful content migrations per month, etc.
GP2.5 Coordinate and communicate: Status reports noting problems/concerns are forwarded to authors, content management and the web master. Feedback is provided via e-mail, meetings, etc.
Work Product Management - GP2.6 Verify adherence of work products to the applicable requirements: A completed tracking document or report shows the progression of a web page throughout the entire content management process.
GP2.7 Manage the configuration of work products: Version control has been established on all web pages, and change control documents are produced referencing any overlapping departments (e.g. marketing, legal, etc.).
Lev el 3 Assessment Indicators
Process Atmbute Genenc Pracnce Example ol Assessment Indicator Assessment Indicators at Chent
Process GP3.1: Perform the process Content management is always handled Definition according to a defined according to the stated policy vs. ad hoc process
GP3.2 Provide feedback Authors and customers report back to content management about reports they have received and results that impaa them.
Process GP33: Define and establish All web documents are processed through
Resource adequate process content management prior to migration in i infrastructure a production environment i GP3.4 Provide adequate .Yew employees within the content human resource management group receive training on I competencies the process. New processes and technologies are handled in subsequent training sessions. Future staff requirements are addressed
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement | GP4.1 Establish measurable quality objectives for the services of the operations environment's standard and defined processes | Content management is based on strategic business needs vs. industry standards.
GP4.2 Determine the quantitative process capability of the defined process | Metrics are automatically collected from the network vs. manually.
GP4.3 Provide adequate resources and infrastructure for data collection | Metrics are collected, for example, regarding hits per page and peak usage via an automated software program. Software tools are tied to another process area, for example: content management's readiness to migrate into production would automatically update a migration report.
Process Control | GP4.4 Use the quantitative process capability to manage the process | Content management is evaluated against performance goals and metrics for suggested improvements and revisions to the process.
Level 5 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Continuous Improvement | GP5.1 Continually improve the process | Content management is continuously improved via incremental changes, for example: a weekly content management meeting is held instead of monthly to discuss problems/threats.
Process Change | GP5.2 Deploy "best practices" across the organization | The process improvement noted in the GP5.1 example is validated via metrics and business goals, for example: resolution of identified problems is quicker by 12 days and there is a 3% decrease in needed restorations.
Process Capability Assessment Instrument: Interview Guide
Process Area 3.8 Content Management
4. Is version control established for all web related documents?
Base Practice: 3.8.3 Content Integration
Who is responsible for migrating documents into the production environment? Is migration performed on an ad hoc basis or on a scheduled basis? What is the process for migrating documents?
How is old or outdated material archived/stored when new data is migrated onto the system to replace it?
Base Practice: 3.8.4 Technical Review
Are technical standards and procedures established for content review? If yes, what are they? Who conducts these reviews?
How are technical problems/concerns reported to the author, customers, content management or the web master (e.g. meetings, reports, e-mails)? Does Content Management coordinate an action plan/corrections with the author (e.g. scheduled, prioritized, ad hoc etc)?
What are the most common technical problems encountered? What are the future technical threats or issues to be considered? How are these problems fixed or resolved?
Base Practice: 3.8.5 Content Testing
1. Is the content tested before or after it is integrated into the production environment?
When testing content, which environments/platforms are checked for problems/issues (e.g. Unix, standalone, network)?
Who is responsible for testing? How is feedback provided from and to content management, customers, authors, web masters, etc?
Base Practice: 3.8.6 Content Restoration
Has any part or all of an archived web site ever been migrated into a production environment? If yes, explain the reason?
Who handles content restoration? What are the most common problems encountered when replacing current pages with older versions?
Is there an approval procedure as to what is restored and when? If yes, what is the process?
Base Practice: 3.8.7 Content Aging
Does the web site contain date sensitive/volatile content that must be updated often? If yes, how often and by whom?
Is the site checked for relevant and current information on a scheduled basis? If yes, by whom? How frequently does such a check occur?
Are files removed from a site (eg. erased, archived), updated to include historical information/content or both? Is content volume an issue?
Are metrics gathered regarding content management? If yes, explain what data is gathered, why, and who is it distributed to?
Generic Questions for Process Area
Is a policy established, maintained and followed for the entire content management process? If yes, please describe it.
Are there enough personnel available in content management to perform all necessary tasks and manage the different types of content (video, voice, etc.)? If no, why?
Is training provided for new content management personnel? If yes, how is it performed (e.g. on the job, scheduled, ad hoc)?
Is formal training provided on a continuous basis for all content management personnel? If yes, describe the training.
Are metrics collected? Is software used to perform metric collection on an automated basis? If yes, what programs are used? What data is being collected?
Is the content management process reviewed for continuous improvement? If yes, is this process measured? How?
Are all documents processed through the content management personnel prior to migration into a production environment? If no, why?
Are strategic goals established for content management? Are these measured? If yes, how?
Is the content management process compared against goals and metrics? Do these comparisons lead to suggested improvements for the process? Are deployed improvements then validated via metrics?
Does content management lack any resources that are needed to perform tasks and follow procedure? If yes, what are they?
Process Capability Assessment Instrument
Process Area 3.8 Content Management
Process Area Description: Content Management represents the people, processes, and technologies that allow a net-centric site to maintain up-to-date, secure, and valid content for its customers.
Questionnaire
Work Product List
Process Area | 3.8 Content Management
Content Management Manual
Example of any Content Management Reports
Example of a web page that progressed through the Content Management cycle
Metrics collected for the Content Management process
Examples of tracking documents/reports noting the status of web pages throughout the Content Management process
Base Practices
BP Number 3.9.1
References
Process Area: License Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.9 License Management
Questions
Base Practice: 3.9.1 Acquire New/Increased Number of Licenses
How are new/increased numbers of licenses acquired? By whom?
Are the software programs used authorized by the original manufacturer? If no, explain.
Are housekeeping duties performed on license information? If yes, when? How?
Is the ability available to track, run detailed reports with version information, and measure the license management process regarding software licenses? If yes, how?
Does license management authorize license use? If yes, how?
Base Practice: 3.9.2 Delete expired software and corresponding licenses
Is there a process in place for removing software with expired licenses? If yes, what is the process? How often does this occur?
Are there any reports or data collected on software where the license has expired? If so, what detailed information is collected on the expired software? What is done with the data?
Base Practice: 3.9.3 Support Various License Types
1. Are various license types supported? If yes, identify them.
How are license renewals handled? By whom?
Are notices sent when license expiration dates are near (a sketch of such a check follows these questions)? If yes, how is notification sent? Is unlicensed software searched for? If yes, how (physical, system)?
What is done when unlicensed software is discovered?
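The questions for base practice 3.9.3 ask whether notices are sent as license expiration dates approach. A minimal sketch of that kind of check appears below; the license record layout and the 30-day warning window are assumptions chosen for illustration.

from datetime import date, timedelta

# Hypothetical license records; field names are illustrative assumptions.
licenses = [
    {"product": "Office Suite", "type": "per-seat",   "expires": date(2000, 8, 15)},
    {"product": "DB Server",    "type": "per-server", "expires": date(2001, 1, 31)},
]

def licenses_needing_notice(licenses, today, warning_days=30):
    """Return licenses that expire within the warning window (or already have)."""
    cutoff = today + timedelta(days=warning_days)
    return [lic for lic in licenses if lic["expires"] <= cutoff]

for lic in licenses_needing_notice(licenses, today=date(2000, 7, 26)):
    print(f"Notify owner: {lic['product']} ({lic['type']}) expires {lic['expires']}")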
Generic Questions for Process Areas
1. What is the license management process?
2. Are reviews for the license management process conducted for continual improvement?
3. If improvements are implemented, how are the outcomes measured?
4. What training is provided to new and existing personnel regarding the license management process?
5. What license management reports are generated to management for review/feedback?
6. What policy, standards or procedures have been established for license management?
7. What are the needs, priorities and quantitative goals for license management?
8. Are any resources lacking that would facilitate data collection regarding license management?
Process Capability Assessment Instrument
Process Area 3.9 License Management
Process Area Description: License Management ensures that software licenses are properly maintained. This is especially important since organizations are legally bound to maintain license arrangements. These arrangements are complex and can be based on the number of copies, on the number of shared servers, on dates, etc.
Questionnaire
Process Area 3.9 License Management
Responses: Yes / No / Don't Know / N/A
1. Do you track software licenses?
2. Is there a process for acquiring new or additional software licenses?
3. Are various license types supported?
4. Do software searches and reviews take place to ensure current licenses are being held?
5. Does license management authorize license use?
Work Product list
Process Area 3.9 License Management
Sample Software License Agreement
Sample of Software License Purchases
List of available software with details (expiration date, number of customers, etc.)
Customer's Guide for Software Tracking Program
Asset Management (3.10)
PA Number 3.10
PA Name Asset Management
PA Purpose Asset Management ensures that all assets are registered within the inventory system and that detailed information for registered assets is updated and validated throughout the asset's lifetime. This information will be required for such activities as managing service levels, managing change, assisting in incident and problem resolution and providing necessary financial information to the organization.
PA's Base Practices Manage and maintain asset information; Audit information in system; Report on discrepancies; Archive asset information; Log all assets in inventory
PA Goals To gather and maintain asset information to assist in incident and problem resolution.
To utilize auto-discovery capabilities within asset management to aid in the notification of new device implementation.
To provide an easy way to store version information about devices and software/data on a machine.
PA's Metrics Percentage of incorrect asset data
The difference between the prices paid and budgets for particular items
Percentage of requested products delivered on time
The cost of business items purchased unnecessarily or incorrectly
Base Practices
BP Number 3.10.1
BP Name Manage and maintain asset information
BP Description Update and delete asset information locally or remotely. The purpose of this activity is to ensure that asset information is accurate.
Example: Software packages, such as ValuWise, can allow an Asset Management team to maintain a database of which assets are assigned to whom.
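The example above describes a package that keeps a database of which assets are assigned to whom and supports local or remote updates. The sketch below shows one plausible in-memory shape for such records, together with a discrepancy check of the kind base practices 3.10.2 and 3.10.3 call for; the class name, fields and methods are assumptions, not a description of ValuWise or any other product.

# Hypothetical asset repository; field names and methods are illustrative assumptions.
class AssetRepository:
    def __init__(self):
        self._assets = {}  # asset tag -> attribute dictionary

    def register(self, tag, description, assigned_to, version=None):
        """Log an asset, e.g. when it enters inventory (BP 3.10.5)."""
        self._assets[tag] = {
            "description": description,
            "assigned_to": assigned_to,
            "version": version,
        }

    def update(self, tag, **changes):
        """Update attributes of a registered asset, locally or via a remote call (BP 3.10.1)."""
        self._assets[tag].update(changes)

    def discrepancies(self, physical_audit):
        """Compare system records against a physical audit mapping tag -> assigned_to (BP 3.10.2)."""
        return {
            tag: (rec["assigned_to"], physical_audit.get(tag))
            for tag, rec in self._assets.items()
            if physical_audit.get(tag) != rec["assigned_to"]
        }

repo = AssetRepository()
repo.register("PC-0042", "Desktop workstation", assigned_to="jsmith", version="rev B")
repo.update("PC-0042", assigned_to="mjones")
print(repo.discrepancies({"PC-0042": "jsmith"}))  # flags the mismatch for BP 3.10.3 reporting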
References
Process Area: Asset Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
3.10.1 Manage and maintain asset information | A software package (e.g. ValuWise) is utilized to store and update information on all assets.
3.10.2 Audit information in system | Test results of an audit of the asset management system are available and show that a check was done to verify that the information contained corresponds to the actual state of assets. When questioned, personnel can explain how the system is audited.
3.10.3 Report on discrepancies | Reports on the audit process are available.
3.10.4 Archive asset information | When questioned, personnel can describe the archival process for asset information.
3.10.5 Log all assets in inventory | Procedures are in place to ensure that all assets in inventory are also logged in the asset management system.
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management | GP2.1 Establish and maintain a policy for performing operational tasks | A documented policy is maintained that describes what asset information must be recorded, the frequency of updates, and reporting requirements.
GP2.7 Employ version control to manage changes to work products | If multiple records for individual assets are maintained, mechanisms are in place to ensure changes/updates are applied to all.
Level 3 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Definition | GP3.1 Define policies and procedures at an organization level | Channels exist for asset information to flow between asset management and relevant parties such as Service Desk, Procurement, etc.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly | Planning of future human resource needs occurs for asset management, given organization and related asset growth projections.
Process Resource | GP3.3 Plan for human resources proactively | Asset information is updated in accordance with documented policy on events and frequencies that should be logged.
GP3.4 Provide feedback in order to maintain knowledge and experience | The asset management team provides feedback on the asset management process at periodic meetings. Departments/projects provide feedback to asset management about the content of reports received.
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement | GP4.1 Establish measurable quality objectives for the operations environment | Quantitative targets for assessing asset management performance are periodically set.
GP4.2 Automate data collection and assessment | All predefined asset management data metrics are collected.
GP4.3 Provide adequate resources and infrastructure for data collection | Adequate staff is provided for manual checks to compare actual assets with information in the system.
Process Control | GP4.4 Use data analysis methods and tools to manage and improve the process | Assessment metrics collected are compared to targets and discrepancies addressed.
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.10 Asset Management
Questions
Base Practice: 3.10.1 Manage and Maintain Asset Information
What tool or system is used to maintain asset information?
What attribute information is initially recorded about the assets? What types of updates are made, and how frequently?
For what purposes is asset information used (e.g. financial reporting, managing service levels etc)? How does the asset management system interface with the other functions (such as accounting) that need access to asset information?
Does the tool enable detection and tracking of all hardware and software components installed on the network?
Can asset information be updated/deleted/browsed remotely and/or locally?
Base Practice: 3.10.2 Audit Information in System
How is information in the system audited for correctness, completeness and accuracy?
How frequently do audits occur?
Can asset information be searched based on customer-defined parameters?
Who is responsible for overseeing the audit process?
Base Practice: 3.10.3 Report on Discrepancies
What reports are generated based on discrepancies identified during the audit process? What information do these reports contain?
Are the content and format of these reports based on documented standards?
Who receives these reports and for what purposes?
What action is taken if discrepancies are identified? Does the action depend on the severity of the discrepancy? Are these procedures documented?
How frequently does this reporting process occur?
Base Practice: 3.10.4 Archive Asset Information
How long is asset information stored for? In what format and where is old asset information archived?
For what purposes and how frequently is archived asset information accessed?
Base Practice: 3.10.5 Log all Assets in Inventory
How is it ensured that, in addition to assets in use, all assets in inventory are logged in the asset management system?
What is the updating process when an asset in inventory is moved for use?
Does the process for auditing informational accuracy cover assets in inventory?
Generic Questions for Process Area
Is the asset management tool/process periodically reviewed to identify potential improvements? If so, how frequently does this occur and who controls this process?
How is performance of asset management functions measured? Are any performance targets (e.g. percent of incorrect asset data in the system) for the asset management process defined? If so, what are they and how is performance assessed against these targets?
Do you find that the existing asset management system adequately meets the organization's asset information needs?
What type of relevant qualifications and training do asset management personnel have?
Process Capability Assessment Instrument
Process Area 3.10 Asset Management
Process Area Description: Asset Management ensures that all assets are registered within the inventory system and that detailed information for registered assets is updated and validated throughout the asset's lifetime. This information will be required for such activities as managing service levels, managing change, assisting in incident and problem resolution and providing necessary financial information to the organization.
Questionnaire
Process Area 3.10 Asset Management
Work Product list
Process Area 3.10 Asset Management
Example list of assets and details related to each asset
Sample asset log
Audit reports
Discrepancy reports (if different from above)
Procurement (3.11)
To ensure all assets purchased are entered into the asset management system.
PA's Metrics Differential between actual and budgeted equipment costs; Percentage of requested items delivered on time; Costs incurred from returns due to incorrect purchases
Base Practices
References
Process Area: Procurement
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.11 Procurement
Questions
Base Practice: 3.11.1 Maintain vendor information
What was the process for creating a list of approved vendors? Have vendors been identified for each type of standard equipment? Does the list include more than one potential vendor for each type of standard equipment?
What information about potential vendors and those used in the past is stored? For example, is the history of transactions and quality of service received noted? Are special terms or conditions that apply to a vendor recorded?
Is information maintained on any regulatory requirements or existing contracts that could affect vendor selection?
When does vendor information get entered and who is responsible for maintaining it?
Who accesses the vendor information and for what purposes?
Base Practice: 3.11.2 Receive and log request
In what format does procurement receive a purchase request (e.g. a request form, on-line etc.)? What information does the purchase request contain?
Does procurement verify that the request carries the necessary approval or authorization? How is this done? Whose approval is required for purchases? Does a documented policy describe the necessary authorizations?
For non-standard orders, does procurement verify the technical compatibility of the equipment/software requested? What is the process for verifying compatibility?
Is every request logged when received? If so how? Are these procedures documented?
Base Practice: 3.11.3 Identify vendor and place order
What is the process for selecting a vendor for a particular order? Is the vendor listing and information used?
Does negotiation of specific terms occur with the vendor after selection, or does any preliminary negotiation occur with several potential vendors and then are the outcomes considered during selection?
Who is responsible for placing an order? Is a purchase order or other document used? If so, please describe. Is the log updated when the order is placed?
Is the requester notified of the order placement and estimated delivery date?
Base Practice: 3.11.4 Track orders
How are open orders tracked? Do specified checkpoints exist when all open orders are reviewed to identify any overdue deliveries?
Is a backlog and backorder information maintained? If yes, by whom?
In what instances does procurement need to communicate with rollout/release management? What information is exchanged?
What action is taken if an order is overdue?
Base Practice: 3.11.5 Ensure timely/accurate delivery & log assets received
What is the procedure for handling receipt of equipment delivered? How is procurement involved? Are any proactive steps taken to ensure timely delivery (e.g. the supplier is contacted shortly before the delivery date to verify the delivery)?
Does procurement verify that the correct equipment was received? How?
Is the receipt logged and the request record closed? What is the procedure for this?
Who is responsible for logging all assets received in the asset management system?
Process Capability Assessment Instrument
Process Area 3.11 Procurement
Process Area Description: Procurement is responsible for ensuring that the necessary quantities of equipment (both hardware and software) are purchased and delivered on time to the appropriate locations. Procurement is also responsible for logging all assets into the inventory as they are received.
Questionnaire
Process Area 3.11 Procurement
Work Product list
Process Area 3.11 Procurement
Purchase request form
Purchase order
Sample vendor profile
Procurement reports
Current Procurement catalogue of vendors/suppliers
Base Practices
Process Area: Quality Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.3 Quality Management
Process Capability Assessment Instrument
Process Area 4.3 Quality Management
Process Area Description: Quality Management is an on-going process which monitors how well the distributed environment is being managed and looks toward continually improving its management capabilities and service. Within this process, quality improvement actions are determined, agreed upon, planned and monitored.
Questionnaire
Process Area | 4.3 Quality Management
Work Product list
Process Area | 4.3 Quality Management
Quality improvement action plan
Quality improvement action schedule
Quality assessment reports
Organizational chart or hiring matrix of quality assessment team
Base Practices
References
Process Area: Legal Issues Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement | GP4.1 Establish measurable quality objectives for the operations environment | Addressing and responding to legal issues is based on strategic business needs vs. industry standards.
GP4.2 Automate data collection | Metrics are automatically collected from the web site for legal issues management purposes vs. a manual collection, for example: hits per page, peak usage, survey results, e-mails per month regarding a subject matter, etc.
GP4.3 Provide adequate resources and infrastructure for data collection | Metrics automatically collected by legal issues personnel are analyzed and reported.
Process Control | GP4.4 Use data analysis methods and tools to manage and improve the process | Legal issues management is evaluated against performance goals and metrics for suggested improvements and revisions to the process.
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.5 Legal Issues Management
Questions
Base Practice: 4.5.1 Identify legal risk areas
1. Is the web site reviewed for legal risk issues prior to publishing? If yes, by whom and how often? If no, why?
What issues have provided the most concern? Have these concerns been made known to and been addressed by the web master, content management or other related operational areas? If yes, how are they made known (e.g. symposiums, reports, conferences, phone mail, etc.) and addressed (e.g. policy, procedures, reviews, etc.)?
Are legal issues personnel consistently made aware of new issues, litigation and laws that might affect future web publishing? If yes, what is of concern?
Does the web site contain any disclaimers that would remove you from liability issues? If yes, what are they and what prompted their use?
Are legal issues reviewed on a state, domestic or worldwide scope? How has this view helped or hindered the process? Is jurisdiction a justification for the chosen scope?
Base Practice 4.5.2 Identify types of content where one may be legally at risk
Does the legal issues personnel review the different types of content (e.g. graphics, video, audio, Java applets, etc.) for risk? If yes, what types provide the most and least concern? Who is responsible for this review? How often is it done?
Is there a process in place to gain permission to use/publish copyrighted material? If yes, what is it? Is it consistently followed? Who is responsible for this? What types of content are the most protected/least protected?
Does the site allow any customers to download/FTP software? If yes, what software and what legal notifications are provided to the customers?
Are the graphics/text for any sales products provided with a disclaimer (e.g. color may be different than actual, size may be different, quantities are limited, etc.)? If yes, what are they?
Base Practice: 4.5.3 Identify customers
Are pages evaluated with customers, laws, business goals, and employees in mind? If yes, what are the areas of concentration/review for each of these audiences?
Do customers communicate with the firm regarding legal concerns or complaints? If yes, how do they do this? To whom is this communication directed? Which group of customers seem to be the most vocal about the content (e.g. system, public, corporations, government, etc.)?
Do all customers, who are not employed with the firm, have the ability to gain access to all parts of the web site (e.g. chat rooms, join e-mail lists, place orders, view inventory, etc.)? If yes, what are the most popular destinations and peak times? Are surveys offered to these customers? If no, what type of access control do you provide (log-on and password, return e-mail address, etc.) and are legal disclaimers provided for any legally sensitive areas? Please explain.
When responding to complaints or legal instruments initiated by a customer, do the legal issues management personnel meet with other counsel to respond or is the issue handed off to another department? In your experience, has this happened before and what were the circumstances?
Base Practice: 4.5.4 Legal process setup and refinement
Is the legal issues management procedure/policy maintained to address new net centric issues? If yes, by whom and how often? Is it consistently followed?
Does the legal issues management personnel forward documents in question to corporate counsel for review, approval/change and/or resolution? If yes, explain the procedure. Who is responsible for tracking the document once it is transferred to corporate counsel? Explain this tracking.
What legal requirements and issues (e.g. privacy, censorship, freedom of information, intellectual property, etc.) are gathered on an on-going basis to ensure legal credibility for the site?
Are new business offerings by the firm viewed for operational legal requirements? If yes, by whom and how often (e.g. scheduled vs. ad hoc)?
Does the legal issues group maintain contracts and ensure their deployment for compliance? If yes, who is responsible for this and how often are reviews performed?
Generic Questions for Process Area
What is the standard procedure/policy with regard to legal issues management tasks and procedures? Is it followed? At any time are some procedures done in an ad hoc manner ? If yes, please explain?
Are adequate tools and personnel available for legal issues management tasks and procedures? What are the tools and who are the personnel?
Is training held for new employees within the legal issues management group? If yes, is this done on the job or during formal training sessions? Are classes / training provided to all legal issues personnel which cover new issues/procedures/tasks etc.? If yes, how often is this planned?
Have measures been defined, selected and subsequent data collected for legal issues management? If yes, what type and how often?
What reports are provided to various departments within the firm from legal issues management regarding pertinent issues (e.g. changes to plans, decisions, process, requirements, etc.)? To whom do they go and how often? Do recipients of these reports provide feedback to legal issues management? If yes, what method is used (e.g. e-mail, meetings, hardcopy, etc.)?
Does the legal issues management group provide web pages with version control numbers and change order requests for updated page content?
Are all change order requests for web pages signed off by legal issues management? If no, why? If yes, by whom? How often is this done?
Are metrics automatically collected from the web site for use by legal issues personnel? If yes, what is it? How is it collected (e.g. automated, manually, both)?
Are the legal issues management processes continually improved? If yes, how? Are the improvements validated and quantified against business goals and objectives?
Is your legal issues team made up of qualified lawyers? What type of continuous education do they pursue?
Process Capability Assessment Instrument
Process Area 4.5 Legal Issues Management
Process Area Description: Legal Issues Management addresses the legal liability considerations associated with doing business on a public network. To ensure that legal risk is limited, there is a need for a close tie between the Service Provider's Operations departments and Legal department.
Questionnaire
Process Area | 4.5 Legal Issues Management
Work Product list
Process Area | 4.5 Legal Issues Management
Legal Issues Management procedure manual /policy
Examples of bulletins/notifications regarding new legislation that would affect content
Sample reports from the legal issues group noting complaints, issues or concerns for existing and future web development.
Example of a legal issues tracking document for web pages/sites showing the progression of the page/s through review/approval cycle.
Base Practices
If major changes to the system or the business occur, an entirely new capacity plan may need to be developed.
Example: If a company is projected to grow by 10% and it actually exceeds this, the Capacity Plan must be revisited to make appropriate adjustments for network/server space, software licensing issues/accommodations, hardware issues, etc.
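The example above turns on a simple piece of arithmetic: how quickly utilization grows toward the point at which the capacity plan must be revisited. A minimal sketch is given below; the starting utilization, growth rates and the 80% planning threshold are illustrative assumptions.

def months_until_threshold(current_utilization, monthly_growth_rate, threshold=0.80):
    """Project how many months of compound growth remain before utilization
    crosses the planning threshold; returns 0 if it is already exceeded."""
    months = 0
    utilization = current_utilization
    while utilization < threshold and months < 120:
        utilization *= (1.0 + monthly_growth_rate)
        months += 1
    return months

# Plan assumed roughly 10% annual growth (about 0.8% per month);
# actual growth is running at 2% per month.
planned = months_until_threshold(0.55, 0.008)
actual = months_until_threshold(0.55, 0.020)
if actual < planned:
    print(f"Revisit the capacity plan: threshold reached in {actual} months, not {planned}.")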
References
Process Area: Capacity Modeling & Planning
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 4.6 Capacity Modeling & Planning
Questions
Base Practice: 4.6.1 Define Overall Capacity Modeling & Planning Requirements
Has a base level model of the system's capacity been created and verified based on information from vendors, independent tests, etc.? Are service measures used as comparisons? If yes, what are they? If no, explain.
Explain your standard capacity planning process/policy, including CPU, memory, I/O and router usage and needs. All existing or future mainframe and server processors, storage, network configurations, and peripheral requirements should be addressed
Are the capacity requirements coordinated across distributed system based on SLAs/OLAs? If yes, explain. Are there outstanding SLA/OLA issues to be resolved? If yes, explain.
Are alarms activated when a SLA/OLA is not met? If yes, how and to whose attention? If no, explain.
Are workload balancing forecasts/plans in place? If yes, do they consider key transactions that have been collected and verified? Explain.
What are the existing and future applications/data requirements that drive the capacity plan?
What are the functional requirements/data that drive the capacity plan?
Is there a policy in place to ensure the capacity plan is updated regularly (semi-annually, annually, bi-annually) or only when changes/deviations are encountered? Please describe the policy.
Are possible future threats/changes to service levels noted in the capacity plan?
What is the plan of action for identified threats?
Base Practice: 4.6.2 Collect All Capacity Information (Based on Business Requirements)
What are the business drivers that affect the capacity model?
What are the verified capacity plan requirements for the networks/distributed system (e.g. financial, physical, operational, software, vendor, applications, constraints/limits)?
Is the current system/version reviewed on a scheduled, documented basis to see how well it is being utilized? How often?
Is performance/cost benefit analysis performed and tracked for each configuration? If yes, who does this and how often?
What tools have been used to measure the system's capacity?
What reports are produced regarding capacity planning? Who receives these reports? Are the accuracy of assumptions, forecasts and results tracked?
Base Practice: 4.6.3 Determine Ongoing Support Requirements
What projections have been created and reviewed that address ongoing support requirements for operations, personnel and functions?
2. Has the impact of planned business growth been evaluated with regards to support? If so, how?
3. Has the impact of planned future locations been evaluated with regards to support? If so, how?
Base Practice: 4.6.4 Build and Test Model
1. How is the base model calibrated prior to adding forecast parameters? (e.g. verify model parameters, account for discrepancies, verify accuracy of base model, etc.)
2. What forecast parameters/assumptions were added to the base model?
3. How are capacity shortfalls identified?
4. What model solutions address capacity shortfalls?
5. Have assumptions and strategies been documented?
Base Practice: 4.6.5 Deploy Model, and Adjust as Appropriate
1. How often are reports disseminated to appropriate parties (e.g. weekly, monthly, etc.)?
Is feedback received on utilization, capacity and performance?
Do management, development, and customers receive status reports that compare actual to planned utilization for review/discussion?
Does management review, revise and approve capacity plans? If no, explain.
What is the course of action/process regarding the capacity plan if major changes to the system or business occur? Are other groups/process informed (e.g. release management, SLA, procurement security, etc.)? Explain
Generic Questions for Process Area
1. Are training sessions held for personnel on a scheduled basis regarding the capacity planning process and its defined tasks? If so what type of training is provided to personnel to ensure adequate/competent execution of capacity plan?
2. Is there written documentation that covers the established capacity plan procedures for personnel?
3. How often is the capacity process reviewed for continuous improvement purposes? How often are improvements implemented and by whom?
4. When continuous improvement strategies are executed, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
Process Capability Assessment Instrument
Process Area 4.6 Capacity Modeling & Planning
Process Area Description: Capacity Planning attempts to ensure that adequate resources will be in place to meet SLA requirements. Resources include physical facilities, computers, memory, disk space, communications equipment, and personnel. Capacity Planning must be done for the system as a whole so that the planners can understand how the capacity of one portion of the system affects the capacity of another. Due to the large number of components typically found within a system, the interdependencies between business functions and resource components must be clearly defined.
Questionnaire
Process Area | 4.6 Capacity Modeling and Planning
Work Product list
Process Area | 4.6 Capacity Modeling and Planning
Example of an existing Capacity Plan/Reports
List of SLA/OLA requirements
List of resources referenced in the Capacity Plan (e.g. physical facilities, computers, memory, disk space, communication equipment and personnel)
PA Number 4.7
Base Practices
References
Process Area: Business/Disaster Recovery Planning & Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Performance Management | GP2.1 Establish and maintain a policy for performing operational tasks | A policy covers business/disaster recovery, whether it is for a system interruption, loss, natural disaster or malicious in nature.
GP2.2 Allocate sufficient resources to meet expectations | Business/disaster personnel have access to procedures, software, hardware and emergency contact information at all sites.
GP2.3 Ensure personnel receive the appropriate type and amount of training | A training policy is in place for new business/disaster recovery personnel and all personnel have attended this training.
GP2.4 Collect data to measure performance | Data are collected, for example: number of system interruptions per month.
GP2.5 Maintain communication among team members | Status reports that compare actual to planned business/disaster objectives are distributed to users, management and other process area personnel. Issues are tracked and reported.
Work Product Management | GP2.6 Ensure work products satisfy documented requirements | All applicable SLAs, sites, systems, applications, and types of disaster are considered when producing a business/disaster recovery plan.
GP2.7 Employ version control to manage changes to work products | The business/disaster recovery plan is placed under version control at each site so the most current copy is reviewed on a scheduled and documented basis.
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.7 Business/Disaster Recovery Planning & Management
Questions
Base Practice: 4.7.1 Determine what disaster recovery requirements are based on SLAs
Are business/disaster recovery plans based on SLAs or documented business requirements? If yes, how are these communicated to the group and how often?
What SLA requirements are difficult to address or have not been addressed thus far? Are these issues being examined for possible solutions? If yes, by whom?
Do SLA requirements note speed of recovery and capacity? Are they prioritized? If no, explain.
Base Practice: 4.7.2 Perform business and system risk assessment
Are business and system risk assessments done? If yes, by whom and how often? Is potential revenue loss considered during system failure or loss? (A sketch of such an exposure estimate follows these questions.)
Is cost-benefit analysis performed when additions or changes are made to the recovery plan? Is this based on servers, applications, SLAs? Explain.
Are business goals developed during the risk assessment? If yes, what are they?
Has it been determined what critical data should be moved off site when performing the risk assessment? If yes, how is this determined?
Are business risk assessments performed considering security management, political instability and malicious intent? If yes, by whom and how?
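The risk assessment questions above weigh potential revenue loss during an outage against the cost of recovery provisions; the sketch referenced there is given below. The outage probabilities, durations and hourly revenue figures are illustrative assumptions only.

# Hypothetical risk register; all figures are illustrative assumptions.
risks = [
    {"event": "server loss",      "annual_probability": 0.10, "outage_hours": 24, "revenue_per_hour": 4000},
    {"event": "site-wide outage", "annual_probability": 0.02, "outage_hours": 72, "revenue_per_hour": 4000},
]

def annualized_exposure(risk):
    """Expected yearly loss: probability of the event times revenue lost during the outage."""
    return risk["annual_probability"] * risk["outage_hours"] * risk["revenue_per_hour"]

# Rank risks so recovery spending can be weighed against the exposure it removes.
for risk in sorted(risks, key=annualized_exposure, reverse=True):
    print(f"{risk['event']}: expected loss about ${annualized_exposure(risk):,.0f} per year")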
Base Practice: 4.7.3 Determine recovery implementation plan
Is there a formal policy regarding the recovery plan at all sites? If yes, is it followed? Is it accessible to all recovery personnel? If no, explain. If yes, is it in multiple locations? Which sites? Is revision control maintained?
Are teams established within the plan for notification and at a predetermined location in case of a disaster declaration? If yes, explain.
Are metrics collected regarding the recovery plan? If yes, how often and what are they? Are they collected automatically or manually?
Are lists maintained showing hardware and supplies needed during a disaster? If yes, where is this list? Are copies maintained for each site and at a remote location for safeguard? Who is aware of these lists?
Does the plan examine the recovery of dependent or independent applications? If yes, which ones? Has a cost analysis been performed on the loss of each application?
Are any recovery procedures performed by hot/cold sites? If yes, do they have back-ups, procedures and schedules? If yes, how are these maintained/updated?
How often is the plan reviewed? Do other process area personnel (e.g. Backup/Restore/Archive, Fault Management, Monitoring) review the plan? If yes, explain the process and describe who participates in the review.
Base Practice: 4.7.4 Review recovery plan with management
Does the management team review business/disaster recovery plans? If yes, how often? Is the management team static or dynamic?
Does the plan call for the management team to resolve resource conflicts? If yes, is a procedure noted for each site?
Base Practice: 4.7.5 Plan disaster recovery testing procedures
Are tests performed on the business/disaster recovery procedures/tasks at each site? If yes, how often?
Explain what procedures pose the most concern (e.g. business or disaster) during the testing phase? Have modifications been implemented to improve process? If yes, what has been the outcome?
Are other departments brought into the testing environment for an end-to-end run through (e.g. Fault Management, Back-up/Restore/Archive, Monitoring, Physical Site Management, etc.)? If yes, which ones and how? Are other process areas tied with business/disaster recovery systems for automatic notification or metrics collection? If yes, explain.
Base Practice: 4.7.6 Produce and disseminate report on disaster recovery
Are reports produced and disseminated regarding the business/disaster recovery plan? If yes, to whom and how often? If no, explain.
What are the contents of the reports that are disseminated?
Do reports include the latest testing results? Metrics? If yes, which ones?
Base Practice: 4.7.7 Receive feedback on disaster recovery strategy
Is feedback sought and collected regarding the business/disaster recovery plan? If yes, by whom and how?
Is the feedback used for continuous improvement reasons? If yes, has this proven to be beneficial? If no, how could the feedback process be changed to provide benefit?
Generic Questions for Process Area
Is training provided to new business/disaster recovery personnel? If yes, in what format (e.g. on the job, formal training, computer based training, etc.)?
Are adequate resources (e.g. personnel, equipment, software, etc.) provided to perform the necessary recovery procedures?
Process Capability Assessment Instrument
Process Area 4.7 Business/Disaster Recovery Planning & Management
Process Area Description: Determines what the requirements are for disaster recovery based upon agreed-upon SLAs, and the strategies and plans to restore a business or service after it has been interrupted or has failed. This planning process develops the strategy for recovering a system or a portion of the system. The contingency plans must consider failure of both centralized and remote components and strategies for the recovery of these systems.
Questionnaire
Process Area 4.7 Business/Disaster Recovery Planning & Management
Work Product list
Process Area 4.7 Business/Disaster Recovery Planning & Management
1. Example of an existing business/disaster recovery procedure for each of the sites (on site copy and off site copy should be the same).
2. Example of a business/disaster recovery plan report.
3. List of SLAs prioritized for business/disaster recovery management.
4. Schedule of Back-up/Restore/ Archive tasks for each site.
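Across the process areas above, a capability level is reached only when its process attributes, and those of every lower level, are achieved through the associated generic practices. The sketch below illustrates one way an assessor's attribute ratings might be rolled up into a capability level for a process area; the 0-to-1 rating scale, the achievement threshold and the data layout are assumptions made for the illustration, not the claimed method itself.

# Hypothetical ratings: for each process attribute, the assessed degree of achievement
# of its generic practices on a 0.0-1.0 scale. The scale and threshold are assumptions.
ACHIEVED = 0.85

levels = [
    (1, ["process performance"]),
    (2, ["performance management", "work product management"]),
    (3, ["process definition", "process resource"]),
    (4, ["process measurement", "process control"]),
    (5, ["continuous improvement", "process change"]),
]

def capability_level(ratings):
    """Highest level whose attributes (and all lower levels' attributes) meet the threshold."""
    reached = 0
    for level, attributes in levels:
        if all(ratings.get(a, 0.0) >= ACHIEVED for a in attributes):
            reached = level
        else:
            break
    return reached

ratings = {
    "process performance": 0.95,
    "performance management": 0.90,
    "work product management": 0.88,
    "process definition": 0.60,   # not yet achieved, so Level 3 is not reached
}
print(capability_level(ratings))  # prints 2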
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

What is claimed is:
1. A method for determining capability levels of a monitoring process area when gauging a maturity of an operations organization comprising the steps of:
(a) defining a plurality of process attributes;
(b) determining a plurality of generic practices for each of the process attributes, the generic practices including base practices selected from the group consisting of polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages; and
(c) calculating a maturity of an operations organization based at least in part on the achievement of the generic practices.
2. The method as set forth in claim 1, and further comprising the steps of: defining a plurality of capability levels in terms of groups of the process attributes, rating each of the process attributes based on achievement of the corresponding generic practices, and determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels, wherein the maturity of the operations organization is calculated based on the capability level that is achieved.
3. The method as set forth in claim 2, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
4. The method as set forth in claim 1 , wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
5. The method as set forth in claim 2, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
6. The method as set forth in claim 1, wherein the generic practices are further selected from the group consisting of establishing and maintaining a policy for performing operational tasks, allocating resources to meet expectations, ensuring personnel receive the appropriate type and amount of training, collecting data to measure performance, maintaining communication among team members, ensuring work products satisfy documented requirements, employing version control to manage changes to work products.
7. The method as set forth in claim 1 , wherein the base practices include polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages.
8. A computer program embodied on a computer readable medium for determining capability levels of a monitoring process area when gauging a maturity of an operations organization comprising:
(a) a code segment that defines a plurality of process attributes;
(b) a code segment that determines a plurality of generic practices for each of the process attributes, the generic practices including base practices selected from the group consisting of polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages; and (c) a code segment that calculates a maturity of an operations organization based at least in part on the achievement of the generic practices.
9. The computer program as set forth in claim 8, and further comprising a code segment for defining a plurality of capability levels in terms of groups of the process attributes, rating each of the process attributes based on achievement of the corresponding generic practices, and determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels, wherein the maturity of the operations organization is calculated based on the capability level that is achieved.
10. The computer program as set forth in claim 9, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
11. The computer program as set forth in claim 8, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
12. The computer program as set forth in claim 9, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
13. The computer program as set forth in claim 8, wherein the generic practices are further selected from the group consisting of establishing and maintaining a policy for performing operational tasks, allocating resources to meet expectations, ensuring personnel receive the appropriate type and amount of training, collecting data to measure performance, maintaining communication among team members, ensuring work products satisfy documented requirements, employing version control to manage changes to work products.
14. The computer program as set forth in claim 8, wherein the base practices include polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages.
15. A system for determining capability levels of a monitoring process area when gauging a maturity of an operations organization comprising: (a) logic that defines a plurality of process attributes;
(b) logic that determines a plurality of generic practices for each of the process attributes, the generic practices including base practices selected from the group consisting of polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages; and (c) logic that calculates a maturity of an operations organization based at least in part on the achievement of the generic practices.
16. The system as set forth in claim 15, and further comprising logic for defining a plurality of capability levels in terms of groups of the process attributes, rating each of the process attributes based on achievement of the corresponding generic practices, and determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels, wherein the maturity of the operations organization is calculated based on the capability level that is achieved.
17. The system as set forth in claim 16, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
18. The system as set forth in claim 15, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
19. The system as set forth in claim 16, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
20. The system as set forth in claim 15, wherein the base practices include polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages.

A SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR DETERMINING CAPABILITY LEVELS OF A MONITORING PROCESS AREA FOR PROCESS ASSESSMENT PURPOSES IN AN OPERATIONAL MATURITY INVESTIGATION
ABSTRACT OF THE DISCLOSURE
A system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a monitoring process area when gauging a maturity of an operations organization. First, a plurality of process attributes are defined. Next, a plurality of generic practices are determined for each of the process attributes. The generic practices include base practices such as polling for a current status, gathering and documenting monitoring information, classifying events, assigning severity levels, assessing impact, analyzing faults, routing the faults to be corrected, mapping event types to pre-defined diagnostic and/or corrective procedures, logging the events locally and/or remotely, suppressing messages until thresholds are reached, displaying status information on at least one console in multiple formats, displaying status information in multiple locations, issuing commands on remote processors, setting up and changing local and/or remote filters, setting up and changing local and/or remote threshold schemes, analyzing traffic patterns, and sending broadcast messages. Thereafter, a maturity of an operations organization is calculated based at least in part on the achievement of the generic practices.
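The base practices recited in the claims form, in effect, a checklist that assessors score when rating the process attributes of the monitoring process area. The short Python sketch below illustrates one way such a scoring step could be implemented; the shortened practice names, the 0-to-1 rating scale, and the function itself are illustrative assumptions, not part of the claimed method.

# Monitoring base practices from the claims, shortened for readability and
# used here as a hypothetical assessment checklist.
MONITORING_BASE_PRACTICES = [
    "poll for current status",
    "gather and document monitoring information",
    "classify events",
    "assign severity levels",
    "assess impact",
    "analyze faults",
    "route faults for correction",
    "map event types to diagnostic/corrective procedures",
    "log events locally and/or remotely",
    "suppress messages until thresholds are reached",
    "display status on consoles in multiple formats and locations",
    "issue commands on remote processors",
    "set up and change local/remote filters",
    "set up and change local/remote threshold schemes",
    "analyze traffic patterns",
    "send broadcast messages",
]

def rate_process_attribute(observed, practices=MONITORING_BASE_PRACTICES):
    """Rate one process attribute as the fraction of its practices that the
    assessors observed being performed (hypothetical 0.0-1.0 scale)."""
    achieved = sum(1 for practice in practices if observed.get(practice, False))
    return achieved / len(practices)

# Example: 12 of the 16 base practices are observed, giving a rating of 0.75.
observed = {practice: True for practice in MONITORING_BASE_PRACTICES[:12]}
print(round(rate_process_attribute(observed), 2))  # 0.75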
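Claims 16, 17 and 19 then group the rated process attributes into capability levels, each level adding at least one attribute to those of the level below it, and calculate the organization's maturity from the level the process area achieves. A minimal sketch of that roll-up follows; the attribute-to-level grouping, the 0.5 achievement threshold, and the averaging across process areas are assumptions made for illustration rather than details taken from the disclosure.

# Hypothetical grouping of the process attributes of claim 18 into the
# capability levels named in claim 19; each level presupposes the attributes
# of the levels below it, per claim 17.
CAPABILITY_LEVELS = [
    ("performed informally",      ["process performance"]),
    ("planned and tracked",       ["performance management", "work product management"]),
    ("well defined",              ["process definition", "process resource"]),
    ("quantitatively controlled", ["process measurement", "process control"]),
    ("continuously improving",    ["continuous improvement", "process change"]),
]

def capability_level(ratings, threshold=0.5):
    """Return the highest capability level (1-5) whose attributes, together
    with those of every lower level, meet the rating threshold; 0 if none."""
    level = 0
    for index, (_, attributes) in enumerate(CAPABILITY_LEVELS, start=1):
        if all(ratings.get(attribute, 0.0) >= threshold for attribute in attributes):
            level = index
        else:
            break
    return level

def operational_maturity(level_by_process_area):
    """One possible maturity figure: the mean capability level achieved
    across the assessed process areas (an assumption; the claims require only
    that maturity be based at least in part on the achieved levels)."""
    return sum(level_by_process_area.values()) / len(level_by_process_area)

# Example: a monitoring process area whose level-1 and level-2 attributes are
# sufficiently rated achieves level 2, "planned and tracked".
ratings = {"process performance": 0.9, "performance management": 0.7,
           "work product management": 0.6, "process definition": 0.2}
assert capability_level(ratings) == 2

In an actual assessment the thresholds and weightings would come from the assessment method itself; the sketches above only show the shape of the rating and roll-up steps implied by the claims.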
PCT/US2000/020280 1999-07-26 2000-07-26 A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation WO2001008004A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU63752/00A AU6375200A (en) 1999-07-26 2000-07-26 A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US36162299A 1999-07-26 1999-07-26
US09/361,622 1999-07-26

Publications (2)

Publication Number Publication Date
WO2001008004A2 (en) 2001-02-01
WO2001008004A8 WO2001008004A8 (en) 2001-11-22

Family

ID=23422789

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/020280 WO2001008004A2 (en) 1999-07-26 2000-07-26 A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation

Country Status (2)

Country Link
AU (1) AU6375200A (en)
WO (1) WO2001008004A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109992652A (en) * 2019-03-25 2019-07-09 联想(北京)有限公司 Information reply method, device, equipment and storage medium
CN113222300A (en) * 2021-06-15 2021-08-06 中国银行股份有限公司 Method and device for processing product modification data, readable medium and equipment
CN114444917A (en) * 2022-01-21 2022-05-06 清华大学 Fire-fighting airplane putting effect evaluation method and system
CN114462737A (en) * 2020-11-09 2022-05-10 中核核电运行管理有限公司 Accurate matching method applied to nuclear power plant work order task and operation event report

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
No Search *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109992652A (en) * 2019-03-25 2019-07-09 联想(北京)有限公司 Information reply method, device, equipment and storage medium
CN109992652B (en) * 2019-03-25 2023-04-28 联想(北京)有限公司 Information reply method, device, equipment and storage medium
CN114462737A (en) * 2020-11-09 2022-05-10 中核核电运行管理有限公司 Accurate matching method applied to nuclear power plant work order task and operation event report
CN113222300A (en) * 2021-06-15 2021-08-06 中国银行股份有限公司 Method and device for processing product modification data, readable medium and equipment
CN113222300B (en) * 2021-06-15 2024-04-30 中国银行股份有限公司 Method, device, readable medium and equipment for processing product modification data
CN114444917A (en) * 2022-01-21 2022-05-06 清华大学 Fire-fighting airplane putting effect evaluation method and system
CN114444917B (en) * 2022-01-21 2022-09-13 清华大学 Fire-fighting airplane putting effect evaluation method and system

Also Published As

Publication number Publication date
WO2001008004A8 (en) 2001-11-22
AU6375200A (en) 2001-02-13

Similar Documents

Publication Publication Date Title
US6738736B1 (en) Method and estimator for providing capacity modeling and planning
US20060161444A1 (en) Methods for standards management
US20060161879A1 (en) Methods for managing standards
US8332807B2 (en) Waste determinants identification and elimination process model within a software factory operating environment
US7810067B2 (en) Development processes representation and management
US8448129B2 (en) Work packet delegation in a software factory
US8140367B2 (en) Open marketplace for distributed service arbitrage with integrated risk management
US20100017782A1 (en) Configuring design centers, assembly lines and job shops of a global delivery network into "on demand" factories
US20150356477A1 (en) Method and system for technology risk and control
WO2001025876A2 (en) Method and estimator for providing capacity modeling and planning
Niessink et al. The IT service capability maturity model
US20070073572A1 (en) Data collection and distribution system
US20030055697A1 (en) Systems and methods to facilitate migration of a process via a process migration template
US20080091676A1 (en) System and method of automatic data search to determine compliance with an international standard
US10460265B2 (en) Global IT transformation
WO2007030633A2 (en) Method and system for remotely monitoring and managing computer networks
WO2001008004A2 (en) A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation
Cleveland et al. Orchestrating End‐User Perspectives in the Software Release Process: An Integrated Release Management Framework
WO2001008035A2 (en) A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area
Spencer et al. Technology best practices
WO2001008038A2 (en) A system, method and computer program for determining operational maturity of an organization
WO2001008037A2 (en) A system, method and computer program for determining capability levels of processes to evaluate operational maturity of an organization
WO2001008074A2 (en) A system, method and article of manufacture for determining capability levels of a release management process area for process assessment purposes in an operational maturity investigation
Spasic et al. Information and Communication Technology Unit Service Management in a Non-Profit Organization Using ITIL Standards.
Rae A guide to SLAs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: C1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

D17 Declaration under article 17(2)a
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP