JP5605476B2 - System operation management apparatus, system operation management method, and program storage medium - Google Patents

System operation management apparatus, system operation management method, and program storage medium

Info

Publication number
JP5605476B2
Authority
JP
Japan
Prior art keywords
correlation
correlation model
performance information
analysis
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2013168691A
Other languages
Japanese (ja)
Other versions
JP2013229064A (en)
Inventor
清志 加藤
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2009238747
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2013168691A
Publication of JP2013229064A
Application granted
Publication of JP5605476B2


Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING; COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F11/00: Error detection; Error correction; Monitoring
            • G06F11/07: Responding to the occurrence of a fault, e.g. fault tolerance
              • G06F11/0703: Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
                • G06F11/0706: the processing taking place on a specific hardware platform or in a specific software environment
                  • G06F11/0709: in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
                • G06F11/0751: Error or fault detection not based on redundancy
                • G06F11/0766: Error or fault reporting or storing
                  • G06F11/0787: Storage of error reports, e.g. persistent data storage, storage using memory protection
                • G06F11/079: Root cause analysis, i.e. error or fault diagnosis
            • G06F11/30: Monitoring
              • G06F11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
                • G06F11/3452: Performance evaluation by statistical analysis
      • G06Q: DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
        • G06Q10/00: Administration; Management
          • G06Q10/02: Reservations, e.g. for tickets, services or events

Description

  The present invention relates to a system operation management apparatus, a system operation management method, and a program storage medium, and more particularly, to a system operation management apparatus, a system operation management method, and a program storage medium that determine the operating status of a system to be managed.

  In recent years, many services provided to customers rely on computer systems and information communication technology, such as mail-order sales over the Internet. Executing such services smoothly requires that the computer system always operate stably, and for this purpose operation management of the computer system is indispensable.

  Conventionally, however, such system operation management has been performed manually by a system administrator. As systems grow larger and more complex, the knowledge and experience required of system administrators become correspondingly more sophisticated, and administrators who lack such knowledge and experience may perform erroneous operations. This has been a problem.

  To avoid such problems, system operation management apparatuses have been provided that centrally monitor and control the state of the hardware constituting the system. Such an apparatus obtains data indicating the operating status of the hardware of the managed system (hereinafter referred to as performance information) online, analyzes the performance information to determine whether a failure has occurred on the managed system, and displays the result on a display unit (for example, a monitor) that forms part of the apparatus. Methods for determining the presence or absence of a failure include setting a threshold value in advance for the performance information, and providing a reference range in advance for the deviation between a measured value of the performance information and its calculated (theoretical) value.
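The two determination methods mentioned above can be sketched as follows. This is an illustrative sketch only; the function names and numeric values are assumptions, not taken from the patent.

```python
def threshold_check(measured: float, threshold: float) -> bool:
    """Method 1: a fault is suspected when a performance value exceeds a preset threshold."""
    return measured > threshold


def deviation_check(measured: float, calculated: float, reference_range: float) -> bool:
    """Method 2: a fault is suspected when the measured value deviates from the
    calculated (theoretical) value by more than a preset reference range."""
    return abs(measured - calculated) > reference_range


# CPU usage measured at 95% against a 90% threshold
print(threshold_check(95.0, 90.0))          # True
# Measured 80%, model predicts 50%, allowed deviation 20 points
print(deviation_check(80.0, 50.0, 20.0))    # True
```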

  In such a system operation management apparatus, as described above, information on the presence or absence of a failure is displayed on a display unit such as a monitor. When a failure is displayed, the displayed content indicates whether its cause is, for example, insufficient memory capacity or an overloaded CPU (Central Processing Unit), and the cause must be narrowed down in order to remedy the failure. This narrowing-down work, however, requires investigating the system history and the parameters of the parts likely to be involved in the failure, and therefore relies on the administrator's experience. Consequently, the system administrator operating the apparatus inevitably needs high skills, and resolving system failures through the apparatus imposes heavy burdens of time and labor on the administrator.

  It is therefore important that the system operation management apparatus automatically analyze combinations of abnormal conditions based on the processing-capacity information collected from the managed system, estimate the approximate problem and cause of failure, notify the administrator, and then accept management instructions.

  Various related technologies exist for system operation management apparatuses that reduce the administrator's burden of managing the system and repairing failures. These related technologies are introduced below.

  The technique disclosed in Japanese Patent Application Laid-Open No. 2004-062741 relates to a failure information display apparatus that displays system failure information. When a failure is discovered while managing the operating status of the managed data processing system, a failure message reflecting the order of occurrence and the physical arrangement of the failed units is presented. This makes the fault location easy to recognize visually, facilitates estimation of the failure source, and reduces the burden on the system administrator.

  The technique disclosed in Japanese Patent Application Laid-Open No. 2005-257416 relates to an apparatus that diagnoses a measurement target device based on time-series information of parameters acquired from it. The technique detects failures due to performance degradation of the device by calculating the strength of correlation between parameters from the degree of change of their time-series information. According to this technology, it can be appropriately determined whether the time-series changes of different parameters are similar.

  The technology disclosed in Japanese Patent Laid-Open No. 2006-024017 relates to a system for predicting the capacity of computer resources. By comparing the processing history of system elements with the history of changes in performance information, the technology identifies the amount of load caused by a specific process and analyzes the load at a future processing volume. According to this technology, the behavior of the system can be predicted when the relationship between processing and load is known in advance.

  The technique disclosed in Japanese Patent Laid-Open No. 2006-146668 relates to an operation management support apparatus. The technology acquires, at fixed intervals, hardware operating-status information such as CPU information from the managed system together with information on the volume of access to the Web management server, obtains correlations between the elements constituting this information, and determines from those correlations whether the current system state is normal. According to this technology, degradation of system performance can be detected more flexibly, and its cause and countermeasures can be presented in detail.

  The technique disclosed in Japanese Patent Application Laid-Open No. 2007-293393 relates to a failure monitoring system that searches for similar past failures. The technology periodically acquires information on various processing capabilities and displays it on a time axis together with information on failures that occurred in the past, so that future failures can be predicted based on whether the current situation resembles a past one.

  The technique disclosed in Japanese Patent Laid-Open No. 10-074188 relates to a data learning apparatus. The technology compares learning target information acquired from the managed device with previously created information on predicted values; when the similarity between the two falls below a predetermined standard, the acquired information is determined to be exceptional. The technique also corrects the predicted-value information based on the difference between the two. According to this technology, repeating these operations improves the processing accuracy of the managed device.

  However, the techniques disclosed in the above-described patent documents have the following problems.

  First, the technique of Japanese Patent Application Laid-Open No. 2004-062741 can deal with an actual system failure accurately and easily, but does not address prevention of failures that may occur in the future. Preventing future system failures therefore remains a burdensome task for an inexperienced system administrator.

  Next, the technique of Japanese Patent Application Laid-Open No. 2005-257416 requires an accurate understanding of the configuration and behavior of the target system in order to identify, from the number and content of broken correlations, the failure that actually occurred. In other words, one must know in advance what kind of failure corresponds to which broken correlation. The system administrator therefore needs a great deal of experience and knowledge, and implementing this technology imposes a great burden.

  Next, with the technique of Japanese Patent Application Laid-Open No. 2006-024017, when the prediction target system is large-scale or cooperates with other systems, the relationship between processing and load becomes extremely complicated, so accurately predicting the load requires collecting and analyzing the history of every process that could be involved.

  Accurate prediction therefore entails a heavy burden of data collection and analysis on those involved in the analysis, who in addition must possess extremely advanced knowledge.

  Next, the technology of Japanese Patent Application Laid-Open No. 2006-146668 accurately identifies the cause of a system abnormality that has actually occurred and the corrective action for it, but prediction of future system abnormalities must be performed by the system administrator himself, based on the results of the normality determinations. The administrator therefore needs substantial experience and bears a heavy burden.

  Next, with the technique of Japanese Patent Application Laid-Open No. 2007-293393, if the analyzed information is continuous in time series with no distinction between normal and abnormal, it is impossible to clearly identify from its values and changes alone which part constitutes a failure. In such cases the system administrator must detect the faulty part based on his or her own experience, which imposes a great burden.

  Next, with the technique of Japanese Patent Laid-Open No. 10-074188, the system administrator must personally create the predicted-value information described above. Since this creation requires considerable experience, it places a heavy burden on the administrator.

  As described above, each related art requires a certain level of skill and experience of the system administrator, and the burden on the administrator is large.

  In addition, since managed systems tend to become more sophisticated and complex, the burden on system administrators is expected to increase further in the future.

[Object of the invention]
An object of the present invention is to provide a system operation management apparatus, a system operation management method, and a program storage medium that solve the above-described problems and reduce the burden on the system administrator when assigning judgment criteria for future failure detection.

  The system operation management apparatus of the present invention includes: performance information storage means for storing, in time series, performance information including a plurality of types of performance values in a system; model assigning means for extracting, based on correlation models each generated for one of a plurality of periods from the performance information stored in the performance information storage means and each including one or more correlations between different types of performance values, one or more periods to which the same correlation model applies, assigning that correlation model to the one or more periods, and associating a calendar attribute with the correlation model by determining a calendar attribute suited to the one or more periods; and analysis means for detecting an abnormality in input performance information of the system by using that performance information and the correlation model for the calendar attribute of the period in which the performance information was acquired.

  The system operation management method of the present invention stores, in time series, performance information including a plurality of types of performance values in a system; extracts, based on correlation models each generated for one of a plurality of periods from the performance information and each including one or more correlations between different types of performance values, one or more periods to which the same correlation model applies; assigns the correlation model to the one or more periods; associates a calendar attribute with the correlation model by determining a calendar attribute suited to the one or more periods; and detects an abnormality in input performance information of the system by using that performance information and the correlation model for the calendar attribute of the period in which the performance information was acquired.

  The program of the present invention causes a computer to execute processing for: storing, in time series, performance information including a plurality of types of performance values in a system; extracting, based on correlation models each generated for one of a plurality of periods from the performance information and each including one or more correlations between different types of performance values, one or more periods to which the same correlation model applies; assigning the correlation model to the one or more periods; associating a calendar attribute with the correlation model by determining a calendar attribute suited to the one or more periods; and detecting an abnormality in input performance information of the system by using that performance information and the correlation model for the calendar attribute of the period in which the performance information was acquired.
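The claimed association between calendar attributes and correlation models might be sketched roughly as follows. The attribute scheme (weekday vs. weekend) and the model labels are purely illustrative assumptions, not taken from the specification.

```python
from datetime import datetime

# Correlation models are assumed to have already been generated per period and
# grouped; "model_A"/"model_B" are placeholder labels for the grouped models.
models_by_calendar_attribute = {
    "weekday": "model_A",
    "weekend": "model_B",
}


def calendar_attribute(ts: datetime) -> str:
    """Determine a calendar attribute for the period in which the
    performance information was acquired (here, weekday vs. weekend)."""
    return "weekend" if ts.weekday() >= 5 else "weekday"


def select_model(ts: datetime) -> str:
    """Select the correlation model associated with the calendar attribute
    of the acquisition period, as the claimed analysis means would."""
    return models_by_calendar_attribute[calendar_attribute(ts)]


print(select_model(datetime(2009, 10, 17)))  # a Saturday -> model_B
```

The point of the association is that anomaly detection is always performed against the model fitted to periods sharing the same calendar attribute, rather than against a single model for all periods.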

  An effect of the present invention is that the system operation management apparatus can greatly reduce the burden on the system administrator when assigning judgment criteria for future failure detection.

FIG. 1 is a block diagram showing the configuration of a first embodiment of the system operation management apparatus of the present invention.
FIG. 2 is an explanatory diagram showing an example of schedule information in the first embodiment of the present invention.
FIG. 3 is an explanatory diagram showing another example of the schedule information in the first embodiment of the present invention.
FIG. 4 is an explanatory diagram showing another example of the schedule information in the first embodiment of the present invention.
FIG. 5 is an explanatory diagram showing an example of the generation operation of a correlation change analysis result in the first embodiment of the present invention.
FIG. 6 is a flowchart showing the operation of the system operation management apparatus in the first embodiment of the present invention.
FIG. 7 is a block diagram showing the configuration of a second embodiment of the system operation management apparatus of the present invention.
FIG. 8 is a block diagram showing the configuration of the candidate information generation unit 21 in the second embodiment of the present invention.
FIG. 9 is an explanatory diagram showing an example of the generation operation of schedule candidate information in the second embodiment of the present invention.
FIG. 10 is an explanatory diagram showing an example of the generation operation of a correlation change analysis result in the second embodiment of the present invention.
FIG. 11 is a block diagram showing the configuration of the correction candidate generation unit 22 in the second embodiment of the present invention.
FIG. 12 is an explanatory diagram showing an example of the procedure for generating correction candidates of an analysis schedule in the second embodiment of the present invention.
FIG. 13 is an explanatory diagram showing an example (continued from FIG. 12) of the procedure for generating correction candidates of an analysis schedule in the second embodiment of the present invention.
FIG. 14 is an explanatory diagram showing an example of the content displayed by the administrator dialogue unit 14 in the second embodiment of the present invention.
FIG. 15 is a flowchart showing the operation of generating schedule candidate information in the second embodiment of the present invention.
FIG. 16 is a flowchart showing the operation of generating correction candidates of schedule information in the second embodiment of the present invention.
FIG. 17 is a block diagram showing the configuration of a third embodiment of the system operation management apparatus of the present invention.
FIG. 18 is an explanatory diagram showing an example of the content displayed by the administrator dialogue unit 14 in the third embodiment of the present invention.
FIG. 19 is a flowchart showing the operation of the conformity model determination unit 23 in the third embodiment of the present invention.
FIG. 20 is a block diagram showing a configuration that is a premise of the system operation management apparatus according to the present invention.
FIG. 21 is an explanatory diagram showing an example of the performance information of the system operation management apparatus shown in FIG. 20.
FIG. 22 is an explanatory diagram showing an example of a state in which the performance information shown in FIG. 21 is accumulated and stored.
FIG. 23 is an explanatory diagram showing an example of the correlation model of the system operation management apparatus shown in FIG. 20.
FIG. 24 is a flowchart showing the operation of the system operation management apparatus shown in FIG. 20.
FIG. 25 is an explanatory diagram showing an example of the content displayed by the administrator dialogue unit 14 of the system operation management apparatus shown in FIG. 20.
FIG. 26 is a block diagram showing the characteristic configuration of the first embodiment of the present invention.

  Embodiments of the system operation management apparatus according to the present invention will be described below with reference to the drawings.

[System Operation Management Device Premised on the Present Invention]
First, before describing the first embodiment, a system operation management apparatus 101 that is a premise of the present invention will be described with reference to the drawings.

  FIG. 20 is a block diagram showing a configuration as a premise of the system operation management apparatus according to the present invention.

  In FIG. 20, the system operation management apparatus 101 manages the operating state of a customer service execution system 4. The customer service execution system 4 receives information E desired by a customer through a telecommunication line and executes a service for providing that information to the customer.

  The customer service execution system 4 is composed of one or more servers. The customer service execution system 4 may be composed of a computer independent of the system operation management apparatus 101.

  As illustrated in FIG. 20, the system operation management apparatus 101 includes a performance information collection unit 11 and a performance information storage unit 12. The performance information collection unit 11 periodically acquires the performance information of the servers constituting the customer service execution system 4 from those servers, and the performance information storage unit 12 sequentially stores the acquired performance information. The performance information of the servers constituting the customer service execution system 4 can thereby be saved over time.

  Here, the server performance information is information constituted by a plurality of types of performance values obtained by quantifying the states of the various elements (for example, the CPU and memory) that affect the operation of the servers constituting the customer service execution system 4. Specific examples of performance values include the CPU usage rate and the remaining memory capacity.

  FIG. 21 is an explanatory diagram showing an example of performance information of the system operation management apparatus shown in FIG. FIG. 22 is an explanatory diagram showing an example of a state in which the performance information shown in FIG. 21 is accumulated and stored.

  For example, the performance information collection unit 11 acquires performance information as shown in FIG. 21, and the performance information storage unit 12 stores the performance information as shown in FIG.

  As shown in FIG. 20, the system operation management apparatus 101 includes a correlation model generation unit 16, an analysis model storage unit 17, and a correlation change analysis unit 18. The correlation model generation unit 16 generates a correlation model of the operating state of the customer service execution system 4, and the analysis model storage unit 17 stores the generated correlation model. The correlation change analysis unit 18 judges whether the difference between a measured performance value in the performance information and the value calculated by the conversion function of the correlation model stored in the analysis model storage unit 17 is within a preset reference range, and outputs the result; the operating state of the customer service execution system 4 can thereby be confirmed. Here, the correlation model generation unit 16 extracts time-series data of the performance information for a certain period from the performance information storage unit 12, and generates the correlation model by deriving, from that data, a conversion function between any two types of performance values in the performance information.

  Furthermore, as illustrated in FIG. 20, the system operation management apparatus 101 includes a failure analysis unit 13, an administrator dialogue unit 14, and a countermeasure execution unit 15. The failure analysis unit 13 analyzes the presence or absence of a system failure in the customer service execution system 4 based on the correlation change analysis unit 18's analysis result for the performance information. When the failure analysis unit 13 determines that a system failure may exist, the administrator dialogue unit 14 displays the determination result to the outside and accepts an externally input instruction to remedy the system abnormality in response to the displayed content. When such an improvement instruction is input, the countermeasure execution unit 15 receives the input information and, according to its content, executes a process for dealing with the system failure on the servers constituting the customer service execution system 4.

  The apparatus can thereby correctly detect abnormalities in the performance information of the servers constituting the customer service execution system 4 and respond to them appropriately.

  Next, each component of the system operation management apparatus 101 will be described in detail.

  The performance information collection unit 11 periodically accesses the servers of the customer service execution system 4, acquires their performance information, and stores it in the performance information storage unit 12. In the embodiments of the present invention, the performance information collection unit 11 acquires performance information periodically and stores it sequentially in the performance information storage unit 12.

  Next, the performance information storage unit 12 stores the performance information acquired by the performance information collection unit 11. As described above, the performance information storage unit 12 periodically and sequentially stores performance information.

  Next, the correlation model generation unit 16 receives the performance information stored in the performance information storage unit 12 for a preset acquisition period, selects any two types of performance values in the performance information, and derives a conversion function (hereinafter referred to as a correlation function) for converting the time series of performance values of one type into the time series of performance values of the other type.

  Further, the correlation model generation unit 16 derives this correlation function for all combinations of types, and generates a correlation model by combining the resulting correlation functions.
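As a rough illustration of this generation step, the following sketch fits a correlation function for every pair of performance-value types. The linear form y = a*x + b and the metric names are assumptions made for illustration; the patent does not fix a functional form at this point.

```python
from itertools import combinations


def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b between two performance-value series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx


def generate_correlation_model(series):
    """series maps a metric name to its time series; a correlation function
    (a, b) is derived for every pair of metric types, and the set of these
    functions forms the correlation model."""
    return {
        (u, v): fit_linear(series[u], series[v])
        for u, v in combinations(sorted(series), 2)
    }


model = generate_correlation_model({
    "cpu_usage": [10, 20, 30, 40],
    "disk_io": [5, 11, 14, 20],
})
a, b = model[("cpu_usage", "disk_io")]  # a = 0.48, b = 0.5 for this data
```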

  Further, the correlation model generation unit 16 stores the correlation model in the analysis model storage unit 17 after generating the above-described correlation model.

  The analysis model storage unit 17 stores the correlation model received from the correlation model generation unit 16.

  Next, for the performance information newly acquired by the performance information collection unit 11 for analysis, the correlation change analysis unit 18 compares the theoretical value (calculated value) of one type of performance value, obtained by substituting the performance value of the other type into the correlation function described above, with its actual value (measured value). It then determines whether the correlation between the two types of performance values is maintained by judging whether the difference between the two values is within a preset reference range (hereinafter, this is referred to as correlation change analysis).

  When the difference is within the reference range, the correlation change analysis unit 18 determines that the correlation between the two types of performance values is maintained normally. Based on this analysis result, the operating status of the servers constituting the source system, that is, the customer service execution system 4, at the time the processing-capability information was acquired can be confirmed.
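The correlation change analysis described above can be sketched as follows, assuming a linear correlation function y = a*x + b; the parameter values are illustrative assumptions.

```python
def correlation_holds(a, b, x_measured, y_measured, reference_range):
    """The correlation y = a*x + b is considered maintained when the measured y
    is within the preset reference range of the calculated (theoretical) y."""
    y_calculated = a * x_measured + b
    return abs(y_measured - y_calculated) <= reference_range


# With a = 0.48 and b = 0.5, a CPU usage of 50 predicts a disk I/O value of 24.5
print(correlation_holds(0.48, 0.5, 50, 26.0, 2.0))  # True: correlation maintained
print(correlation_holds(0.48, 0.5, 50, 35.0, 2.0))  # False: correlation broken
```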

  Thereafter, the correlation change analysis unit 18 sends the analysis result to the failure analysis unit 13.

  Next, based on the analysis result received from the correlation change analysis unit 18, the failure analysis unit 13 determines by a preset method whether there is a possibility of a failure in the servers constituting the customer service execution system 4, and sends the result of this determination to the administrator dialogue unit 14.

  Here, examples of the determination method include the following.

  As a first example, the failure analysis unit 13 checks whether the number of correlations determined to be abnormal in the correlation change analysis result of the performance information is greater than a preset value. If it is, the failure analysis unit 13 determines that there is a possibility of a failure in the customer service execution system 4.

  As a second example, the failure analysis unit 13 determines that there is a possibility of a failure in the customer service execution system 4 only when the number of correlations related to a specific element (for example, a CPU usage rate) among the correlations determined to be abnormal is equal to or greater than a preset threshold value.
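The two determination examples above can be sketched as follows; the function names, default thresholds, and the list-of-pairs representation of the analysis result are illustrative assumptions, not part of the patent:

```python
def failure_suspected_overall(broken_correlations, max_broken=5):
    """First example: a failure is suspected when more correlations are
    broken than a preset value allows."""
    return len(broken_correlations) > max_broken

def failure_suspected_element(broken_correlations, element, threshold=3):
    """Second example: a failure is suspected only when enough of the
    broken correlations involve a specific element (e.g. a CPU usage rate)."""
    involved = [pair for pair in broken_correlations if element in pair]
    return len(involved) >= threshold

# Hypothetical analysis result: each entry is an (X type, Y type) pair
# whose correlation was judged abnormal.
broken = [("A.CPU", "A.MEM"), ("A.CPU", "B.CPU"), ("A.CPU", "B.MEM")]
print(failure_suspected_overall(broken))           # False: only 3 broken
print(failure_suspected_element(broken, "A.CPU"))  # True: all 3 involve A.CPU
```

The second rule can fire even when the first does not, which matches the patent's point that element-specific counting localizes the suspected fault.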

  Next, the administrator dialogue unit 14 outputs the determination result regarding the possibility of failure received from the failure analysis unit 13 to an output unit (not shown; for example, a monitor provided in the administrator dialogue unit 14) for external display.

  FIG. 25 is a diagram showing an example of contents displayed on the administrator dialogue unit 14 of the system operation management apparatus 101 shown in FIG.

  For example, the administrator dialogue unit 14 displays the determination result as shown in display screen 14A in FIG. 25. As shown in display screen 14A, the administrator dialogue unit 14 displays a number of charts so that the system administrator can easily grasp the determination result.

  The display screen 14A will be described further. The display screen 14A includes a correlation destruction count 14Aa indicating the degree of abnormality in the performance information analysis result, a correlation diagram 14Ab indicating the abnormal locations, and a list 14Ac of elements with a large degree of abnormality. By displaying the results in this way, for example, when the degree of abnormality of the CPU of server C is large as shown in FIG. 25, the system administrator can be accurately notified that the CPU of server C may be faulty.

  Further, after displaying the determination result of the failure analysis (display screen 14A in FIG. 25), the administrator dialogue unit 14 receives an input of an improvement command for the failure from the system administrator who has confirmed the content, and sends this information to the countermeasure execution unit 15.

  Next, the countermeasure execution unit 15 executes, on the servers of the customer service execution system 4, a measure based on the failure improvement command input to the administrator dialogue unit 14.

  For example, when a command to reduce the amount of work is input from the administrator dialogue unit 14 because the load on a specific CPU is high, the countermeasure execution unit 15 takes a measure on the servers of the customer service execution system 4 to reduce their workload.

[Generate correlation model]
Here, generation of the correlation model by the correlation model generation unit 16 described above will be described more specifically.

  The correlation model generation unit 16 takes out, from the performance information storage unit 12, the performance information acquired during a certain period set in advance from the outside.

  Next, the correlation model generation unit 16 selects any two types in the performance information.

  Here, the description proceeds assuming that the correlation model generation unit 16 selects "A.CPU" (CPU usage rate of server A) and "A.MEM" (remaining memory of server A) from the types in the performance information 12B of FIG. 22.

  The correlation model generation unit 16 calculates a correlation function F for converting from a time series of performance values (input X) of “A.CPU” to a time series of performance values (output Y) of “A.MEM”.

  Here, in this embodiment of the present invention, the correlation model generation unit 16 can select a suitable function from various types of functions as the content of the function F. The description continues assuming that a function of the form "Y = αX + β" is selected as the conversion function F.

  The correlation model generation unit 16 compares the time series change of the performance value X of "A.CPU" in the performance information 12B with the time series change of the performance value Y of "A.MEM", and calculates the values of α and β in the conversion equation "Y = αX + β" from X to Y. Here, it is assumed that the calculation yields "−0.6" for α and "100" for β.

  Further, the correlation model generation unit 16 compares the time series of Y values obtained by converting X with the above correlation function "Y = −0.6X + 100" against the time series of the actual Y values, and calculates weight information w of this correlation function from the conversion error, that is, the difference between the two.
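The derivation of α, β, and w can be sketched as an ordinary least-squares fit. The weighting formula below is an illustrative assumption, since the patent only states that w is calculated from the conversion error:

```python
def fit_correlation_function(xs, ys):
    """Fit Y = alpha*X + beta by least squares and derive a weight w from
    the conversion error (smaller error -> weight closer to 1)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    alpha = sxy / sxx
    beta = mean_y - alpha * mean_x
    # Conversion error: mean absolute difference between the converted
    # and actual Y values. The 1/(1+err) weighting is illustrative only.
    err = sum(abs((alpha * x + beta) - y) for x, y in zip(xs, ys)) / n
    w = 1.0 / (1.0 + err)
    return alpha, beta, w

# A perfectly linear series reproduces the example Y = -0.6X + 100.
xs = [10, 20, 30, 40]
ys = [-0.6 * x + 100 for x in xs]
alpha, beta, w = fit_correlation_function(xs, ys)
print(round(alpha, 6), round(beta, 6), w)  # -0.6 100.0 1.0
```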

  The correlation model generation unit 16 performs the above operation for all pairs of types in the performance information 12B. For example, when the performance information 12B consists of performance values of five types, the correlation model generation unit 16 generates correlation functions F for the 20 pairs obtained from these five types.
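The count of 20 follows because the conversion function is directional (from X to Y), so every ordered pair of distinct types gets its own function. A minimal sketch, with hypothetical type names:

```python
from itertools import permutations

# Hypothetical five types of performance values.
types = ["A.CPU", "A.MEM", "B.CPU", "B.MEM", "C.CPU"]

# Ordered (X, Y) pairs: 5 choices of X times 4 remaining choices of Y.
pairs = list(permutations(types, 2))
print(len(pairs))  # 20
```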

  Here, since the correlation functions F serve as the reference for checking the stability of the managed customer service execution system 4, they are created based on performance information acquired during a period when the customer service execution system 4 is stable (normal operation).

  The correlation model is generated by the correlation model generation unit 16 combining the various correlation functions obtained in this way into one.

  FIG. 23 is an explanatory diagram showing an example of a correlation model of the system operation management apparatus shown in FIG. 20.

  The correlation model 17A shown in FIG. 23 includes a plurality of correlation functions based on combinations of two types.

[Correlation change analysis]
Next, the correlation change analysis performed by the correlation change analysis unit 18 will be described in more detail.

  Here, the description proceeds on the premise that the performance information collection unit 11 has acquired, as performance information for analysis, the performance information 12Ba (performance information acquired at 8:30 on November 7, 2007) shown in the bottom row of 12B in FIG. 22.

  When the correlation change analysis unit 18 receives the performance information 12Ba from the performance information collection unit 11, it accesses the analysis model storage unit 17, takes out the correlation model stored therein, and extracts, from the correlation functions constituting this correlation model, those suited to the analysis of the performance information 12Ba.

  Specifically, the correlation change analysis unit 18 extracts the correlation functions for all combinations of types in the performance information 12Ba. For example, when the types in the performance information 12Ba are "A.CPU", "A.MEM", and "B.CPU", the correlation change analysis unit 18 selects and extracts all correlation functions whose combinations of "X" and "Y" described above are "A.CPU" and "A.MEM", "A.MEM" and "B.CPU", and "A.CPU" and "B.CPU".

  Hereinafter, the description continues for the case in which a correlation function with the type combination "A.CPU" and "A.MEM" is extracted and a correlation change analysis is executed based on it.

  The correlation change analysis unit 18 substitutes the measured value of "A.CPU" in the performance information 12Ba into X of the correlation function and calculates the value of Y. Then, the correlation change analysis unit 18 compares the calculated Y value (that is, the theoretical value of "A.MEM") with the actual value (measured value) of "A.MEM" in the performance information.

  As a result of this comparison, if the difference between the theoretical value of "A.MEM" and the measured value of "A.MEM" is confirmed to be within a preset reference range (within an allowable error), the correlation change analysis unit 18 determines that the correlation between the two types "A.CPU" and "A.MEM" in the performance information 12Ba is maintained (that is, normal).

  On the other hand, when the above difference is confirmed to be outside the reference range, the correlation change analysis unit 18 determines that the correlation between the two types "A.CPU" and "A.MEM" in the performance information 12Ba has collapsed (that is, abnormal).
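The judgment above can be sketched as follows, reusing the example function Y = −0.6X + 100; the tolerance value is an illustrative assumption:

```python
def correlation_maintained(alpha, beta, x_measured, y_measured, tolerance):
    """Substitute the measured X into Y = alpha*X + beta and judge whether
    the theoretical Y stays within the preset reference range of the
    measured Y (True: maintained / False: correlation destruction)."""
    y_theoretical = alpha * x_measured + beta
    return abs(y_theoretical - y_measured) <= tolerance

# With Y = -0.6X + 100 and a hypothetical tolerance of 5, an A.CPU
# reading of 50 predicts A.MEM = 70.
print(correlation_maintained(-0.6, 100, 50, 72, 5))  # True:  |70 - 72| <= 5
print(correlation_maintained(-0.6, 100, 50, 80, 5))  # False: |70 - 80| > 5
```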

[Operation of System Operation Management Device in FIG. 20]
Next, the operation of the system operation management apparatus 101 will be described with reference to FIG. 24.

  FIG. 24 is a flowchart showing the operation of the system operation management apparatus shown in FIG.

  The performance information collection unit 11 periodically acquires performance information from the customer service execution system 4 (step S101) and stores it in the performance information storage unit 12 (step S102).

  Next, the correlation model generation unit 16 acquires the performance information stored in the performance information storage unit 12 for a preset period, and generates a correlation model based on these (step S103). The correlation model generated here is stored in the analysis model storage unit 17.

  Subsequently, the correlation change analysis unit 18 acquires performance information to be analyzed from the performance information collection unit 11 (step S104). At the same time, the correlation change analysis unit 18 acquires a correlation model used for the correlation change analysis from the analysis model storage unit 17.

  Subsequently, the correlation change analysis unit 18 performs a correlation change analysis on the performance information for analysis, and detects correlation destruction (step S105).

  After the correlation change analysis is completed, the correlation change analysis unit 18 sends the analysis result to the failure analysis unit 13.

  The failure analysis unit 13 that has received the analysis result checks the number of correlations determined to be broken in the analysis result (the correlation destruction count) and confirms whether this number exceeds a preset criterion (step S106). If it does (step S106 / Yes), the failure analysis unit 13 determines that there is a possibility of a failure in the customer service execution system 4 and sends the detailed analysis content to the administrator dialogue unit 14. If it does not (step S106 / No), the steps from the acquisition of the performance information for analysis in step S104 are repeated.

  Based on this information, the administrator dialogue unit 14 that has received the detailed analysis content from the failure analysis unit 13 displays that there is a possibility of a failure in the customer service execution system 4 (step S107).

  Subsequently, when the system administrator who has confirmed the analysis result displayed on the administrator dialogue unit 14 inputs an improvement command for the failure, the administrator dialogue unit 14 sends information on this improvement command to the countermeasure execution unit 15 (step S108).

  Subsequently, upon receiving the information on the improvement command, the countermeasure execution unit 15 executes an improvement measure for the customer service execution system 4 according to its content (step S109).

  Thereafter, the steps from the acquisition of the performance information for analysis (step S104) are repeated. Thereby, changes over time in the state of the customer service execution system 4 can be confirmed.

[First Embodiment]
Next, specific contents of the first embodiment of the present invention will be described with reference to FIGS.

  FIG. 1 is a block diagram showing the configuration of the first embodiment of the system operation management apparatus of the present invention.

  Here, as shown in FIG. 1, the system operation management apparatus 1 in the first exemplary embodiment of the present invention includes, like the system operation management apparatus 101 in FIG. 20, a performance information collection unit 11, a performance information storage unit 12, a correlation model generation unit 16, an analysis model storage unit 17, a correlation change analysis unit 18, a failure analysis unit 13, an administrator dialogue unit 14, and a countermeasure execution unit 15. The performance information collection unit 11 acquires performance information from the customer service execution system 4. The performance information storage unit 12 stores the acquired performance information. The correlation model generation unit 16 generates a correlation model based on the acquired performance information. The analysis model storage unit 17 stores the generated correlation model. The correlation change analysis unit 18 uses the correlation model to analyze abnormalities in the acquired performance information. The failure analysis unit 13 determines abnormalities in the customer service execution system 4 based on the analysis result of the correlation change analysis unit 18. The administrator dialogue unit 14 outputs the determination result of the failure analysis unit 13. When the countermeasure execution unit 15 receives an input of an improvement command for the content output by the administrator dialogue unit 14, it improves the customer service execution system 4 based on the command.

  Furthermore, the system operation management apparatus 1 includes an analysis schedule storage unit 19. The analysis schedule storage unit 19 stores schedule information, which is a schedule for switching the correlation model according to the acquisition timing of the performance information for analysis during the above correlation change analysis. Here, the schedule information is created in advance by a system administrator.

  The analysis schedule storage unit 19 is accessible from the correlation model generation unit 16 and the correlation change analysis unit 18. Thereby, a correlation model can be generated and performance information analysis can be performed based on the schedule information stored in the analysis schedule storage unit 19.

  In addition, the administrator dialogue unit 14, the correlation model generation unit 16, and the correlation change analysis unit 18 in the first embodiment of the present invention have new functions in addition to the various functions described above. These functions are described below.

  The administrator dialogue unit 14 receives an input of schedule information generated externally in advance and stores the input schedule information in the analysis schedule storage unit 19.

  FIGS. 2, 3 and 4 are explanatory diagrams showing examples of schedule information in the first embodiment of the present invention.

  For example, the schedule information 19A in FIG. 2 specifies a first-priority schedule representing weekly weekends and a second-priority schedule representing every day. Since the schedule information 19A is applied in priority order, the analysis periods are classified into two: Saturdays and Sundays, and the other days of the week (Monday to Friday).

  Similarly, the schedule information 19B in FIG. 3 specifies only a first-priority schedule representing every day.

  Further, the schedule information 19C in FIG. 4 specifies a first-priority schedule covering days that are both the last day of a month and weekdays, a second-priority schedule representing weekends, and a third-priority schedule representing every day.
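Applying schedules in priority order can be sketched as follows; the predicate encoding and the period labels are illustrative assumptions, not the patent's data format:

```python
from datetime import date

def analysis_period(day, schedules):
    """Return the label of the first (highest-priority) schedule whose
    predicate matches the given day; schedules are tried in order."""
    for label, matches in schedules:
        if matches(day):
            return label
    raise ValueError("no schedule matches")

# Hypothetical encoding of schedule information 19A: first priority is
# weekends, second priority is every day.
schedules_19a = [
    ("weekend", lambda d: d.weekday() >= 5),  # Saturday=5, Sunday=6
    ("weekday", lambda d: True),
]
print(analysis_period(date(2007, 11, 7), schedules_19a))   # weekday (Wednesday)
print(analysis_period(date(2007, 11, 10), schedules_19a))  # weekend (Saturday)
```

Because the catch-all "every day" entry sits last, lower-priority schedules only cover the days that higher-priority schedules leave unclaimed, which is exactly the two-way split described for 19A.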

[Generate correlation model]
Next, generation of a correlation model by the correlation model generation unit 16 in the first embodiment of the present invention will be further described.

  When generating the correlation model, the correlation model generation unit 16 acquires performance information for a preset period from the performance information storage unit 12 and receives schedule information from the analysis schedule storage unit 19. The correlation model generation unit 16 then classifies the performance information according to the analysis periods defined in the schedule information, based on the time at which the performance information collection unit 11 acquired it. Thereafter, the correlation model generation unit 16 generates a correlation model from each of the divided performance information groups by the method described above. Thereby, a correlation model for each analysis period is obtained.
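The classification step can be sketched as follows; the sample format and the weekend/weekday predicate are illustrative assumptions:

```python
from collections import defaultdict
from datetime import date

def split_by_period(samples, period_of):
    """Group time-stamped performance samples by the analysis period their
    acquisition time falls into; one correlation model is then generated
    per group."""
    groups = defaultdict(list)
    for acquired_at, values in samples:
        groups[period_of(acquired_at)].append((acquired_at, values))
    return dict(groups)

# Hypothetical samples and a weekend/weekday split as in schedule 19A.
samples = [
    (date(2007, 11, 9), {"A.CPU": 20}),   # Friday
    (date(2007, 11, 10), {"A.CPU": 5}),   # Saturday
]
period_of = lambda d: "weekend" if d.weekday() >= 5 else "weekday"
groups = split_by_period(samples, period_of)
print(sorted(groups))  # ['weekday', 'weekend']
```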

  For example, consider a case where the correlation model generation unit 16 acquires the schedule information 19A (FIG. 2) and generates a correlation model.

  First, the correlation model generation unit 16 derives correlation functions based on the performance information acquired by the performance information collection unit 11 during the analysis period of the first priority, that is, Saturdays and Sundays, and generates a correlation model from them.

  Next, the correlation model generation unit 16 derives correlation functions based on the performance information acquired from Monday to Friday, that is, the analysis period of the second priority obtained by excluding the first-priority period from every day, and generates a correlation model from them.

  Thereafter, the correlation model generation unit 16 stores all the generated correlation models for each analysis period in the analysis model storage unit 17 in association with each analysis period.

  In the first embodiment of the present invention, the model generation unit 30 includes the correlation model generation unit 16. The analysis unit 31 includes a correlation change analysis unit 18 and a failure analysis unit 13.

[Correlation change analysis]
Next, the correlation change analysis by the correlation change analysis unit 18 in the first embodiment of the present invention will be further described.

  First, the correlation change analysis unit 18 receives the performance information for analysis from the performance information collection unit 11 and extracts all the correlation models generated based on the schedule information from the analysis model storage unit 17. Further, the correlation change analysis unit 18 acquires the schedule information from the analysis schedule storage unit 19.

  Next, the correlation change analysis unit 18 confirms the acquisition date and time of the acquired performance information. As a method of confirming the acquisition date and time, for example, the correlation change analysis unit 18 may read the date and time information included in the performance information (see the performance information 12A in FIG. 21).

  Then, the correlation change analysis unit 18 determines whether the currently set correlation model is suitable for the correlation change analysis of the performance information acquired for analysis (that is, whether the acquisition timing of the performance information used to generate this correlation model falls in the same analysis period as the acquisition timing of the performance information for analysis).

  As a result of this confirmation, if the correlation model is not suitable for use in the correlation change analysis, the correlation change analysis unit 18 extracts a correlation model suitable for the analysis from the analysis model storage unit 17 and switches the setting to that correlation model.

  At this time, if a correlation model suitable for the analysis has not yet been generated, the correlation change analysis unit 18 sends information to that effect to the correlation model generation unit 16. Upon receiving this information, the correlation model generation unit 16 supplementally generates a correlation model suitable for the analysis, stores it in the analysis model storage unit 17, and sends information indicating that the generation of the correlation model is complete to the correlation change analysis unit 18.
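The switch-or-supplement behavior can be sketched as a cache keyed by analysis period; the names and the stubbed generation function are illustrative assumptions:

```python
def select_model(models, period, generate):
    """Return the correlation model registered for the given analysis
    period, supplementally generating and storing it when missing
    (corresponding to the request sent to the correlation model
    generation unit 16)."""
    if period not in models:
        models[period] = generate(period)  # supplemental generation
    return models[period]

# Hypothetical analysis model store with only a weekday model so far;
# generation is stubbed out as a string for illustration.
models = {"weekday": "model-weekday"}
picked = select_model(models, "weekend", lambda p: f"model-{p}")
print(picked)          # model-weekend, generated on demand
print(sorted(models))  # ['weekday', 'weekend']
```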

  FIG. 5 is an explanatory diagram illustrating an example of a correlation change analysis result generation operation according to the first embodiment of this invention.

  18A of FIG. 5 shows the analysis results when the analysis period switching determination and analysis execution described above are repeated. In 18Aa of FIG. 5, the analysis periods are divided into holidays (corresponding to the first-priority schedule of the schedule information 19A in FIG. 2) and weekdays (corresponding to the second-priority schedule of the schedule information 19A in FIG. 2), and a correlation model is generated and analyzed in each section. By extracting and combining these analysis results for each analysis period, the analysis result shown as 18Ab in FIG. 5 is obtained.

  In this case, the weekday correlation model is used on weekdays and the holiday correlation model on holidays, so the analysis results reflect the operating characteristics of each period. Thus, by automatically switching the correlation model according to the schedule information designated in advance, highly accurate analysis results can be obtained without increasing the burden on the administrator.

  Other functions of the above-described units are the same as those of the system operation management apparatus 101 in FIG. 20 described above.

[Operation of First Embodiment]
Next, the operation of the system operation management apparatus 1 in the first embodiment of the present invention will be described with reference to FIG. 6.

  FIG. 6 is a flowchart showing the operation of the system operation management apparatus in the first exemplary embodiment of the present invention.

  Here, in order to clarify the overall flow of the operation, parts that overlap with the operation of the system operation management apparatus 101 in FIG. 20 are also described.

  The administrator dialogue unit 14 sends schedule information input from the outside to the analysis schedule storage unit 19, which stores it (step S201, schedule information storage step).

  Further, the performance information collection unit 11 periodically acquires performance information from the server constituting the customer service execution system 4 (step S202, performance information acquisition step) and stores it in the performance information storage unit 12 (step S203, Performance information accumulation process).

  Next, the correlation model generation unit 16 acquires performance information for a certain period from the performance information storage unit 12. Furthermore, the correlation model generation unit 16 acquires analysis schedule information from the analysis schedule storage unit 19.

  Next, the correlation model generation unit 16 generates a correlation model for each analysis period included in the acquired analysis schedule information (step S204, correlation model generation step), and associates each analysis period with the analysis model storage unit 17. save.

  Subsequently, the correlation change analysis unit 18 acquires performance information for analysis from the performance information collection unit 11 (step S205, performance information acquisition process for analysis). The correlation change analysis unit 18 acquires a correlation model for each period from the analysis model storage unit 17 and schedule information from the analysis schedule storage unit 19 (step S206, correlation model and schedule information acquisition step).

  Then, the correlation change analysis unit 18 confirms the acquisition date and time of the performance information to be analyzed, checks whether the currently set correlation model is suitable for the analysis of that performance information, and determines whether the correlation model needs to be switched (step S207, analysis period selection step).

  That is, when the currently set correlation model is not suitable for the analysis of performance information, the correlation change analysis unit 18 determines to switch to the correlation model suitable for the analysis. On the other hand, when a correlation model suitable for analysis is already set, the correlation change analysis unit 18 determines that the correlation model is not switched.

  If it is determined in step S207 that the correlation model setting is to be switched (step S207 / Yes), the correlation change analysis unit 18 confirms whether a correlation model for the analysis period after switching has already been generated (step S208). If it has not yet been generated (step S208 / No), the correlation change analysis unit 18 notifies the correlation model generation unit 16 that the correlation model for the analysis period after switching has not been generated. Upon receiving this information, the correlation model generation unit 16 supplementally generates the correlation model (step S209, correlation model supplement generation step), stores it in the analysis model storage unit 17, and sends information indicating that supplementation of the correlation model after switching is complete to the correlation change analysis unit 18.

  If the correlation model after switching has already been generated (step S208 / Yes), the correlation change analysis unit 18 performs a correlation change analysis on the performance information using that correlation model (step S210, correlation change analysis step).

  If it is determined in step S207 that the correlation model is not to be switched (step S207 / No), the correlation change analysis unit 18 performs the correlation change analysis using the correlation model for the currently set analysis period as it is (step S210, correlation change analysis step).

  After the correlation change analysis is completed, the correlation change analysis unit 18 sends the analysis result to the failure analysis unit 13.

  The failure analysis unit 13 that has received the analysis result checks whether the number of correlations determined to be abnormal in the correlation change analysis result of the performance information exceeds a predetermined value (step S211, failure analysis step). If it does (step S211 / Yes), the failure analysis unit 13 sends information on the detailed content of the abnormality in the performance information to the administrator dialogue unit 14. If it does not (step S211 / No), the steps from the analysis performance information acquisition step in step S205 are repeated.

  When the administrator dialogue unit 14 receives the information on the detailed content of the abnormality in the performance information from the failure analysis unit 13, it displays, based on this information, that the customer service execution system 4 may have a failure (step S212, failure information output step).

  Subsequently, when the system administrator who has confirmed the analysis result displayed on the administrator dialogue unit 14 inputs an improvement command for the above system failure, the administrator dialogue unit 14 sends information on the improvement command input to the countermeasure execution unit 15 (step S213, improvement command information input step).

  Subsequently, upon receiving the information on the improvement command input from the administrator dialogue unit 14, the countermeasure execution unit 15 executes the improvement measure for the customer service execution system 4 according to the content of the information (step S214, system improvement step).

  Thereafter, the steps after the operation for acquiring the performance information for analysis (step S205) are repeatedly executed. Thereby, the change of the operation state of the customer service execution system 4 can be confirmed over time.

  Here, the specific contents executed in each step described above may be programmed and executed by a computer.

  Next, a characteristic configuration of the first embodiment of the present invention will be described. FIG. 26 is a block diagram showing a characteristic configuration of the first embodiment of the present invention.

  The system operation management device 1 includes a performance information storage unit 12, a model generation unit 30, and an analysis unit 31.

  Here, the performance information storage unit 12 stores, in time series, performance information including performance values of a plurality of types in the system. The model generation unit 30 generates, for each of a plurality of periods each having one of a plurality of attributes, a correlation model including one or more correlations between performance values of different types stored in the performance information storage unit 12. The analysis unit 31 detects an abnormality in input performance information of the system using the correlation model corresponding to the attribute of the period in which that performance information was acquired.

[Effect of the first embodiment]
According to the first embodiment of the present invention, since schedule information is introduced and the correlation change analysis is performed with a correlation model based on performance information acquired in the same analysis period as the performance information for analysis, a suitable correlation model can be appropriately selected before executing the correlation change analysis even when the environment of the customer service execution system 4 changes from moment to moment. Thereby, the operation of the customer service execution system 4 can be managed with high accuracy.

  Furthermore, according to the first embodiment of the present invention, by registering business patterns as schedule information, the creation and switching of the models required for a combination of business patterns are automated, which greatly reduces the burden on the system administrator.

  Here, the present invention is not limited to this example. The same effect can be obtained by any other method that can specify the switching of the correlation model for the analysis period corresponding to the acquisition date and time of the performance information for analysis.

  In the above description, the correlation change analysis unit 18 determines whether to switch the correlation model, but the present invention is not limited to this example. The correlation model generation unit 16 may make this determination, or one of the correlation model generation unit 16 and the correlation change analysis unit 18 may make the determination and control the other. Further, the correlation model generation unit 16 and the correlation change analysis unit 18 may jointly determine the analysis period.

  Regardless of which method is used, the system operation management apparatus 1 can provide the same effect as long as the analysis can be performed by switching the correlation model according to the acquisition date and time of the performance information for analysis.

[Second Embodiment]
Next, a second embodiment of the operation management system according to the present invention will be described with reference to FIGS.

  FIG. 7 is a block diagram showing the configuration of the second embodiment of the system operation management apparatus of the present invention.

  As shown in FIG. 7, the system operation management apparatus 2 according to the second embodiment of the present invention, like the system operation management apparatus 1 according to the first embodiment described above, includes a performance information collection unit 11, a performance information storage unit 12, a correlation model generation unit 16, an analysis model storage unit 17, a correlation change analysis unit 18, a failure analysis unit 13, an administrator dialogue unit 14, a coping execution unit 15, and an analysis schedule storage unit 19. The performance information collection unit 11 acquires performance information from the customer service execution system 4. The performance information storage unit 12 stores the acquired performance information. The correlation model generation unit 16 generates a correlation model based on the acquired performance information. The analysis model storage unit 17 stores the generated correlation model. The correlation change analysis unit 18 analyzes abnormalities in the acquired performance information using the correlation model. The failure analysis unit 13 determines abnormalities of the customer service execution system 4 based on the analysis result of the correlation change analysis unit 18. The administrator dialogue unit 14 outputs the determination result of the failure analysis unit 13. The coping execution unit 15 improves the customer service execution system 4 based on an improvement command input for the content output by the administrator dialogue unit 14. The analysis schedule storage unit 19 stores the analysis schedule.

  Further, as shown in FIG. 7, the system operation management apparatus 2 includes a regular model storage unit 20, a candidate information generation unit 21, and a correction candidate generation unit 22. The regular model storage unit 20 stores the correlation models that the correlation model generation unit 16 periodically generates. The candidate information generation unit 21 receives the correlation models from the regular model storage unit 20 and generates schedule candidate information, which is tentative schedule information, from the fluctuation of the contents of those correlation models. The correction candidate generation unit 22 sequentially applies calendar information, which describes calendar attributes, to each analysis period in the schedule candidate information generated by the candidate information generation unit 21 (that is, it compares each analysis period with the calendar information and extracts the calendar attribute suitable for that analysis period), thereby generating schedule information correction candidates.

  As shown in FIG. 7, the regular model storage unit 20 is connected to the correlation model generation unit 16. Thereby, the regular model storage unit 20 can sequentially store the correlation models generated one after another by the correlation model generation unit 16.

  FIG. 8 is a block diagram illustrating a configuration of the candidate information generation unit 21 in the second exemplary embodiment of the present invention.

  As shown in FIG. 8, the candidate information generation unit 21 includes a common correlation determination unit 21a, a static element change point extraction unit 21b, a dynamic element similarity determination unit 21c, and a necessary model group extraction unit 21d. The common correlation determination unit 21a extracts the correlations common to correlation models created by the correlation model generation unit 16 in consecutive periods. The static element change point extraction unit 21b extracts, from the increase and decrease in the number of common correlations extracted by the common correlation determination unit 21a, the time points at which the correlation model for performance information analysis is switched. The dynamic element similarity determination unit 21c checks the similarity between the correlations included in the correlation model of the new analysis period extracted by the static element change point extraction unit 21b and those of the correlation models used in past analysis periods. The necessary model group extraction unit 21d generates schedule candidate information based on each analysis period to which a correlation model is assigned by the static element change point extraction unit 21b and the dynamic element similarity determination unit 21c.

  FIG. 11 is a block diagram showing a configuration of the correction candidate generation unit 22 in the second embodiment of the present invention.

  As shown in FIG. 11, the correction candidate generation unit 22 includes a calendar information storage unit 22a, a calendar characteristic determination unit 22b, and a correction candidate generation unit 22c. The calendar information storage unit 22a stores information related to calendar attributes, such as day-of-week information and holiday information (hereinafter, calendar information). The calendar characteristic determination unit 22b receives the schedule candidate information from the necessary model group extraction unit 21d of the candidate information generation unit 21, applies the calendar information stored in the calendar information storage unit 22a to the contents of the schedule candidate information, and determines the characteristics of the dates of each analysis period in that information (hereinafter, calendar characteristics). The correction candidate generation unit 22c compares the calendar characteristics determined by the calendar characteristic determination unit 22b with the contents of the existing schedule information and, when there is a difference between them, generates a correction candidate for the schedule information based on the contents of the calendar characteristics.

  In the second embodiment of the present invention, the correlation model generation unit 16 and the administrator interaction unit 14 have new functions in addition to the various functions described above. Hereinafter, these functions will be described.

  The correlation model generation unit 16 generates a correlation model at a time interval set in advance from the outside. As a result, correlation models corresponding to various operational situations of the customer service execution system 4 can be obtained.

  The administrator dialogue unit 14 acquires a schedule information correction candidate from the analysis schedule storage unit 19 and displays it. As a result, the generated schedule information plan can be presented to the system administrator, and the system administrator can be asked about whether or not the schedule information can be changed.

  In the second embodiment of the present invention, the model generation unit 30 includes a correlation model generation unit 16, a candidate information generation unit 21, and a correction candidate generation unit 22. The analysis unit 31 includes a correlation change analysis unit 18 and a failure analysis unit 13.

[Regular generation of correlation model]
The generation of the correlation model in the second embodiment of the present invention will be described focusing on the differences from the first embodiment described above.

  As described above, the correlation model generation unit 16 creates a correlation model at a time interval (for each section) set in advance from the outside. As an example of such a setting, the system administrator can specify the content "generate a correlation model at 15:00 every day".

  The length of the time interval (section) may be the same for each time interval (section) or may be different.

  The correlation models generated in this way are sequentially stored not in the analysis model storage unit 17 but in the regular model storage unit 20.

[Generate schedule candidate information]
Next, generation of schedule candidate information by the above-described candidate information generation unit 21 will be described below.

  The common correlation determination unit 21a extracts a plurality of correlation models stored in the regular model storage unit 20. Then, among the extracted correlation models, it compares those whose underlying performance information was acquired in consecutive periods and extracts the correlations common to both (for example, correlation functions).

  The common correlation determination unit 21a performs this operation for combinations of correlation models created in all consecutive periods.
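A minimal sketch of this extraction, assuming each correlation model is represented as a set of (metric, metric) pairs standing in for correlation functions (the representation is an assumption; the patent only requires that common correlations be countable):

```python
def common_correlations(model_a: set, model_b: set) -> set:
    """Correlations present in both of two consecutively generated models."""
    return model_a & model_b

# Consecutive correlation models P and Q share three correlations.
P = {("cpu", "disk"), ("cpu", "net"), ("mem", "disk"), ("net", "disk")}
Q = {("cpu", "disk"), ("cpu", "net"), ("mem", "disk")}
print(len(common_correlations(P, Q)))  # 3, matching combination (a) below
```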

  Next, the static element change point extraction unit 21b confirms the temporal change in the number of the common correlations extracted by the common correlation determination unit 21a.

  The confirmation operation of the change in the number of correlations over time by the static element change point extraction unit 21b will be described using a specific example.

  As an example, consider the case where correlation models P, Q, R, S, and T exist, generated by the correlation model generation unit 16 based on the performance information acquired by the performance information collection unit 11 in each of the consecutive periods p, q, r, s, and t.

  The static element change point extraction unit 21b sequentially confirms (a) the number of common correlations between the correlation model P and the correlation model Q, (b) the number between the correlation model Q and the correlation model R, (c) the number between the correlation model R and the correlation model S, and (d) the number between the correlation model S and the correlation model T.

  As a result of the confirmation by the static element change point extraction unit 21b, it is assumed that the number of common correlations is 3 for the combination (a), 2 for the combination (b), 3 for the combination (c), and 0 for the combination (d).

  At this time, the static element change point extraction unit 21b determines the time point at which the change over time in the number of common correlations between the correlation models of consecutive periods exceeds a number set in advance from the outside as the time point at which the correlation model for performance information analysis is switched (a division point of the analysis period).

  In this example, it is assumed that the above setting is "the correlation model is switched when the change in the number of common correlations is 3 or more".

  In the above case, the amount of change is 1 from the combination (a) to the combination (b), 1 from the combination (b) to the combination (c), and 3 from the combination (c) to the combination (d).

  Therefore, since the change from the combination (c) to the combination (d) matches the above setting, the static element change point extraction unit 21b determines this time point as the time point at which the correlation model is switched, that is, as a division point of the analysis period. The static element change point extraction unit 21b then divides the analysis period at this division point.
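The division-point decision in this example can be reproduced with a short sketch (representing the counts as a plain list is an assumption):

```python
def division_points(common_counts, threshold=3):
    """Indices where the change in the number of common correlations
    between consecutive combinations reaches the threshold."""
    points = []
    for i in range(1, len(common_counts)):
        if abs(common_counts[i] - common_counts[i - 1]) >= threshold:
            points.append(i)
    return points

# Counts for combinations (a), (b), (c), (d) from the example above.
counts = [3, 2, 3, 0]
# The change from (c) to (d) is 3, so the analysis period is divided there.
print(division_points(counts))  # [3]
```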

  Next, the dynamic element similarity determination unit 21c temporarily assigns, to the new analysis period created by the division described above, the latest of the correlation models periodically generated by the correlation model generation unit 16.

  Furthermore, the dynamic element similarity determination unit 21c checks the similarity of contents between this temporarily assigned correlation model and the correlation models assigned before the analysis period was divided by the static element change point extraction unit 21b (the correlation models assigned to each analysis period before the division point).

  As a result, when it is confirmed that the two are similar beyond a preset similarity criterion, the dynamic element similarity determination unit 21c assigns, to the new analysis period, the correlation model used before the division (the correlation model, among those assigned to the analysis periods before the division point, that is similar to the temporarily assigned one).

  Here, the division of the analysis period and the assignment of the correlation model for each analysis period by the static element change point extraction unit 21b and the dynamic element similarity determination unit 21c described above will be further described with reference to FIG.

  FIG. 9 is an explanatory diagram illustrating an example of an operation for generating schedule candidate information according to the second embodiment of this invention.

  FIG. 9 shows the division of the analysis period and the assignment of a new correlation model. In stage 1 (21b1) of FIG. 9, the section in which the performance information analysis has been performed with the correlation model A is divided, and a correlation model B is newly set. In this case, while the performance information analysis is being performed with the correlation model A, the static element change point extraction unit 21b of the candidate information generation unit 21 finds a difference between the periodically generated correlation models, divides the analysis period, and assigns the correlation model B, which is the latest periodic correlation model, to the new period.

  In stage 2 (21b2) of FIG. 9, after the analysis using the correlation model B has continued, the static element change point extraction unit 21b sets a new analysis period in the same manner and assigns the correlation model C, which is the latest periodic correlation model. At the same time, the dynamic element similarity determination unit 21c of the candidate information generation unit 21 determines the similarity between the correlation model A and the correlation model C. When it determines that they are similar, the dynamic element similarity determination unit 21c assigns the correlation model A to the new analysis period, as shown in stage 3 (21c1) of FIG. 9.

  As a result, it is possible to prevent the situation in which a large number of correlation models are generated and storage memory runs short because a different analysis model is generated for each analysis period even though the correlation models of different analysis periods are similar. Furthermore, it is possible to prevent a decrease in the operation speed of the entire system operation management apparatus 2 and unstable operation caused by such a memory shortage.
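The similarity check and model reuse shown in stages 2 and 3 can be sketched as follows; the Jaccard measure and the 0.8 criterion are illustrative assumptions, since the patent leaves the similarity criterion to an external setting:

```python
def similarity(model_a: set, model_b: set) -> float:
    """Fraction of correlations shared by two models (Jaccard index)."""
    union = model_a | model_b
    return len(model_a & model_b) / len(union) if union else 1.0

def assign_model(latest, earlier_models, criterion=0.8):
    """Reuse a previously used model when similar enough, else keep the latest."""
    for name, earlier in earlier_models.items():
        if similarity(latest, earlier) >= criterion:
            return name
    return "latest"

A = {("cpu", "disk"), ("cpu", "net"), ("mem", "disk")}
C = {("cpu", "disk"), ("cpu", "net"), ("mem", "disk")}  # identical to A here
print(assign_model(C, {"A": A}))  # the earlier model A is reused
```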

  Next, the necessary model group extraction unit 21d combines the analysis periods to which the correlation models are assigned by the static element change point extraction unit 21b and the dynamic element similarity determination unit 21c into one, thereby generating the schedule candidate information.

  FIG. 10 is an explanatory diagram illustrating an example of an operation of generating a correlation change analysis result in the second embodiment of the present invention.

  Here, 21B in FIG. 10 shows the analysis result of the correlation change in the second embodiment of the present invention.

  As shown in 21c2 of FIG. 10, by the correlation model assignment operation described above by the static element change point extraction unit 21b and the dynamic element similarity determination unit 21c, the correlation model A or B is assigned to each of the analysis periods 1, 2, and 3. Here, among the analysis results for the analysis periods 1, 2, and 3, the analysis results using the correlation model A are A1 and A3, and the analysis result using the correlation model B is B2.

  Then, as shown in 21d1 of FIG. 10, the analysis result A1, the analysis result B2, and the analysis result A3 described above are generated as the analysis results.

  The necessary model group extraction unit 21d stores the correlation model assigned to each analysis period of the schedule candidate information in the analysis model storage unit 17 and sends the schedule candidate information to the calendar characteristic determination unit 22b of the correction candidate generation unit 22.

  FIG. 12 is an explanatory diagram illustrating an example of a procedure for generating analysis schedule correction candidates according to the second embodiment of this invention.

  For example, the necessary model group extraction unit 21d sends the schedule candidate information 21d2 of FIG. 12 to the calendar characteristic determination unit 22b.

[Generate correction candidates for schedule information]
The calendar characteristic determination unit 22b receives the schedule candidate information from the necessary model group extraction unit 21d and acquires the calendar information from the calendar information storage unit 22a. Here, the calendar information is created in advance by the system administrator.

  Then, the calendar characteristic determination unit 22b compares the contents of the schedule candidate information with the calendar information and sequentially applies the corresponding calendar information to each analysis period in the schedule candidate information. Thereby, the calendar characteristics are determined.

  Here, the determination of the calendar characteristic by the calendar characteristic determination unit 22b described above will be further described with reference to FIG.

  As shown in FIG. 12, consider the case where the schedule candidate information 21d2 for August 2009 received from the necessary model group extraction unit 21d is divided into three types of analysis periods A to C: Saturday and Sunday, Monday to Friday, and the last day of the month. Also assume that the calendar information 22a1 is set with calendar attributes such as "holiday" for Saturday and Sunday, "weekday" for Monday to Friday, and "end of month" for August 31, 2009.

  At this time, the calendar characteristic determination unit 22b compares the schedule candidate information 21d2 with the calendar information 22a1 and extracts the attribute of the calendar information 22a1 suitable for each analysis period of the schedule candidate information 21d2 (generation procedure 22b1). As a result, the calendar characteristic 22b2 is determined: "holiday" for the Saturday and Sunday analysis periods, "weekday" for the Monday to Friday analysis periods, and "end of month" for the August 31 analysis period.

  By determining the calendar characteristics, the calendar attributes of each analysis period can be automatically specified without examining the contents of each analysis period of the schedule candidate information.
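For the August 2009 example, the determination of calendar characteristics can be sketched as follows (giving "end of month" precedence over the day-of-week attributes is an assumption consistent with analysis period C covering the last day of the month):

```python
import datetime

def calendar_attribute(day: datetime.date) -> str:
    """Calendar information as in 22a1: end of month, then holiday/weekday."""
    if day == datetime.date(2009, 8, 31):
        return "end of month"
    if day.weekday() >= 5:   # Saturday or Sunday
        return "holiday"
    return "weekday"

# Determine calendar characteristics for some dates in August 2009.
for day in (datetime.date(2009, 8, 1),    # Saturday
            datetime.date(2009, 8, 3),    # Monday
            datetime.date(2009, 8, 31)):  # last day of the month
    print(day, calendar_attribute(day))
```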

  Next, the correction candidate generation unit 22c receives the calendar characteristics from the calendar characteristic determination unit 22b and receives the schedule information generated in advance by the system administrator from the analysis schedule storage unit 19. Then, the correction candidate generation unit 22c compares the contents of the calendar characteristics with the already generated schedule information.

  As a result of the comparison, when the contents indicated by the calendar characteristics have changed from the contents of the schedule information generated in advance, the correction candidate generation unit 22c generates a schedule information correction candidate based on the contents of the calendar characteristics. Then, the correction candidate generation unit 22c stores the schedule information correction candidate in the analysis schedule storage unit 19.

  FIG. 13 is an explanatory diagram showing an example of a procedure for generating correction candidates for an analysis schedule (continuation of FIG. 12) in the second embodiment of the present invention.

  Here, the function of generating the schedule information correction candidate by the correction candidate generation unit 22c described above will be further described with reference to FIG.

  As shown in FIG. 13, it is assumed that the calendar characteristic determination unit 22b has generated the calendar characteristic 22b2 and that the existing schedule information 19B is stored in the analysis schedule storage unit 19.

  When both are compared, the contents of the calendar characteristic 22b2 have clearly changed from the contents of the existing schedule information 19B (generation procedure 22c1). Therefore, the correction candidate generation unit 22c generates the schedule correction candidate 22c2 by reflecting the calendar characteristic 22b2 in the schedule information.

  Thereby, even if the existing schedule information is not suitable, suitable schedule information can be obtained automatically.
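The comparison in the correction candidate generation unit 22c can be sketched as follows, with the dict-per-analysis-period representation being an illustrative assumption:

```python
def correction_candidate(calendar_characteristics: dict, existing_schedule: dict):
    """Return a corrected schedule when the characteristics changed, else None."""
    if calendar_characteristics == existing_schedule:
        return None                                  # nothing to correct
    corrected = dict(existing_schedule)
    corrected.update(calendar_characteristics)       # reflect the characteristics
    return corrected

existing = {"Sat-Sun": "holiday", "Mon-Fri": "weekday"}
characteristics = {"Sat-Sun": "holiday", "Mon-Fri": "weekday",
                   "Aug 31": "end of month"}
# The end-of-month period is new, so a correction candidate is produced.
print(correction_candidate(characteristics, existing))
```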

[Display schedule information correction candidates]
The administrator dialogue unit 14 takes out the schedule information correction candidates from the analysis schedule storage unit 19 together with the schedule information generated in advance, and displays both on the same screen.

  FIG. 14 is an explanatory diagram illustrating an example of content displayed by the administrator dialogue unit 14 in the second exemplary embodiment of the present invention.

  For example, the administrator dialogue unit 14 displays the display screen 14B of FIG.

  As shown in the display screen 14B, the administrator dialogue unit 14 displays the schedule information generated in advance and the schedule information correction candidate side by side so that their contents can be easily compared.

  In addition, the administrator dialogue unit 14 simultaneously displays the correlation model (14Ba) and a list of the necessary correlation models (14Bb) for each analysis period in both the schedule information generated in advance and the schedule information correction candidate. This is because specifying the correlation models as components clarifies the difference between the schedule information generated in advance and the correction candidate.

  Further, the administrator dialogue unit 14 also displays an operation button 14Bc for changing the regular schedule information from the schedule information generated in advance to the schedule information correction candidate. When the system administrator inputs a change to the regular schedule information using the operation button 14Bc, information related to the input is sent from the administrator dialogue unit 14 to the analysis schedule storage unit 19, and the contents of the schedule information generated in advance are corrected based on the contents of the schedule information correction candidate.

  In this way, the system administrator generates rough schedule information in advance, and the system operation management apparatus 2 corrects its contents to those suitable for correlation change analysis, so that the burden on the system administrator at the time of schedule generation can be greatly reduced.

  Other functions of the above-described units are the same as those in the first embodiment described above.

[Operation of Second Embodiment]
Next, the operation of the system operation management apparatus 2 in the second exemplary embodiment of the present invention will be described below with reference to FIGS. 15 and 16, focusing on the differences from the first exemplary embodiment described above.

  FIG. 15 is a flowchart showing an operation of generating schedule candidate information in the second exemplary embodiment of the present invention.

  First, similarly to the system operation management apparatus 1 of the first embodiment described above, the performance information collection unit 11 periodically acquires performance information from the servers of the customer service execution system 4 and sequentially stores it in the performance information storage unit 12.

  Next, the correlation model generation unit 16 generates a correlation model at a time interval set in advance from the outside (FIG. 15: Step S301, correlation model periodic generation step). Thereafter, the generated correlation models are sequentially stored in the regular model storage unit 20.

  Subsequently, the common correlation determination unit 21a of the candidate information generation unit 21 acquires, from the regular model storage unit 20, the correlation models for a period set in advance from the outside. Then, the common correlation determination unit 21a compares consecutively generated correlation models among the acquired correlation models and extracts the correlations (correlation functions or the like) common to both (FIG. 15: step S302, common correlation extraction step).

  Next, the static element change point extraction unit 21b confirms the change over time in the number of the above-described common correlations (FIG. 15: step S303) and determines whether the change is within a reference range set in advance from the outside (FIG. 15: step S304).

  At this time, if the change in the number of correlation functions is within the reference range (step S304 / Yes), the static element change point extraction unit 21b determines that the performance information should be analyzed using the same correlation model. On the other hand, when the change in the number of correlation functions exceeds the reference range (step S304 / No), the static element change point extraction unit 21b determines that this is the time point at which the correlation model for correlation change analysis is switched, and divides the analysis period at this point (FIG. 15: step S305, correlation model dividing step).

  Next, the dynamic element similarity determination unit 21c temporarily assigns the latest correlation model to the new analysis period created by the static element change point extraction unit 21b. Thereafter, the contents of the correlation models assigned in the analysis periods before the division point are compared with the contents of the latest correlation model (FIG. 15: step S306), and the degree of similarity between the two is confirmed (FIG. 15: step S307).

  At this time, when it is confirmed that they are similar beyond a preset reference range (step S307 / Yes), the dynamic element similarity determination unit 21c assigns the correlation model used before the division point as the correlation model for this new analysis period (FIG. 15: step S308, correlation model assignment step). On the other hand, when it is confirmed that the similarity is below the reference range (step S307 / No), the dynamic element similarity determination unit 21c assigns the temporarily assigned correlation model as the correlation model for this new analysis period.

  Next, the necessary model group extraction unit 21d constructs schedule candidate information based on each analysis period to which a correlation model is assigned by the static element change point extraction unit 21b and the dynamic element similarity determination unit 21c, and sends it to the calendar characteristic determination unit 22b of the correction candidate generation unit 22 (FIG. 15: step S309, candidate information generating / transmitting step). At the same time, the necessary model group extraction unit 21d stores each correlation model assigned to each analysis period of the schedule candidate information in the analysis model storage unit 17 in association with that analysis period.

  FIG. 16 is a flowchart showing an operation of generating a schedule information correction candidate in the second embodiment of the present invention.

  Next, the calendar characteristic determination unit 22b receives the schedule candidate information from the necessary model group extraction unit 21d (FIG. 16: step S310, candidate information acquisition step) and acquires the calendar information from the calendar information storage unit 22a. Then, the calendar characteristic determination unit 22b compares the contents of the schedule candidate information with the contents of the calendar information and determines the calendar characteristics by applying the calendar information to each analysis period in the schedule candidate information (FIG. 16: step S311, calendar characteristic determination step).

  Next, the correction candidate generation unit 22c receives the calendar characteristics determined by the calendar characteristic determination unit 22b and compares the contents of the calendar characteristics with the contents of the already generated schedule information (FIG. 16: step S312).

  As a result of this comparison, when it is confirmed that the contents of the calendar characteristics have changed from the contents of the already created schedule information (step S313 / Yes), the correction candidate generation unit 22c generates a schedule information correction candidate based on the calendar characteristics and stores it in the analysis schedule storage unit 19 (FIG. 16: step S314, correction candidate generation and storage step). Then, the administrator dialogue unit 14 acquires the schedule information correction candidate from the analysis schedule storage unit 19 and displays it (FIG. 16: step S315, correction candidate output step). On the other hand, when it is confirmed that the contents of the calendar characteristics have not changed from the contents of the existing schedule information (step S313 / No), the correction candidate generation unit 22c does not generate a schedule information correction candidate.

  Then, when a change to the schedule information is input to the administrator dialogue unit 14 from the outside, the administrator dialogue unit 14 sends information related to the input to the analysis schedule storage unit 19, and the schedule information used for the correlation change analysis is changed to the contents of the correction candidate.

  Thereafter, the correlation change analysis unit 18 performs a correlation change analysis on the performance information acquired for analysis based on the generated schedule information.

  The subsequent steps are the same as those in the first embodiment described above.

  Here, the specific contents executed in each step described above may be programmed and executed by a computer.

[Effects of Second Embodiment]
According to the second embodiment of the present invention, since the system operation management apparatus 2 generates the schedule information, even when the system administrator has little knowledge and experience and finds it difficult to generate schedule information alone, the system administrator does not need to accurately grasp each business pattern and generate the schedule information one by one, and the burden can be greatly reduced.

  In addition, according to the second embodiment of the present invention, the system operation management apparatus 2 reads changes in the environment of the customer service execution system 4 as they occur and generates schedule information in response to those changes. Therefore, even when it is difficult to register a business pattern as schedule information because of its irregularity, a correlation model can be assigned automatically and accurately according to the changes in the customer service execution system 4, and a highly accurate analysis result matching the actual usage pattern can always be provided.

  As a case where this effect works most effectively, there is a case where the customer service execution system 4 is commonly used in a plurality of departments.

  In this case, since the system has a plurality of users, the usage pattern is complicated. However, as described above, in the second embodiment of the present invention, generation and switching of the necessary correlation models are automated, so the accuracy of the analysis results does not deteriorate due to improper scheduling, and appropriate analysis results are always maintained. This improves the efficiency of coping with performance degradation of the managed system.

  Here, in the above description, when a correlation model to be switched is detected, the system operation management apparatus 2 creates a schedule information correction candidate, displays the existing schedule information and the correction candidate side by side as shown on the display screen 14B (FIG. 12), and corrects the schedule information in response to an input of a schedule information correction command from a system administrator or the like. However, the present invention is not limited to this example. For example, the system operation management device 2 may automatically correct the schedule within a certain range, plan a future schedule change upon receiving input from the system administrator or the like, or re-analyze past performance data. That is, the same effect can be obtained as long as the system operation management apparatus automatically generates the schedule information that has conventionally been generated one by one by the system administrator.

[Third Embodiment]
Next, a third embodiment of the operation management system according to the present invention will be described with reference to FIGS.

  FIG. 17 is a block diagram showing the configuration of the third embodiment of the system operation management apparatus of the present invention.

  As shown in FIG. 17, the system operation management apparatus 3 in the third embodiment of the present invention includes, like the system operation management apparatus 2 in the second embodiment described above, a performance information collection unit 11, a performance information storage unit 12, a correlation model generation unit 16, an analysis model storage unit 17, a correlation change analysis unit 18, a failure analysis unit 13, an administrator dialogue unit 14, and a countermeasure execution unit 15. The performance information collection unit 11 acquires performance information from the customer service execution system 4. The performance information storage unit 12 stores the acquired performance information. The correlation model generation unit 16 generates a correlation model based on the acquired performance information. The analysis model storage unit 17 stores the generated correlation model. The correlation change analysis unit 18 analyzes abnormality of the acquired performance information using the correlation model. The failure analysis unit 13 determines abnormality of the customer service execution system 4 based on the analysis result by the correlation change analysis unit 18. The administrator dialogue unit 14 outputs the determination result by the failure analysis unit 13. When there is an improvement command for the content output by the administrator dialogue unit 14, the countermeasure execution unit 15 improves the customer service execution system 4 based on the command.

  As shown in FIG. 17, the system operation management apparatus 3 according to the third embodiment of the present invention also includes, like the system operation management apparatus 2 according to the second embodiment described above, an analysis schedule storage unit 19, a regular model storage unit 20, a candidate information generation unit 21, and a correction candidate generation unit 22. The analysis schedule storage unit 19 stores an analysis schedule. The regular model storage unit 20 sequentially stores the correlation models periodically generated by the correlation model generation unit 16. The candidate information generation unit 21 generates schedule candidate information, which is a draft of the schedule information, based on the correlation models stored in the regular model storage unit 20. The correction candidate generation unit 22 generates a schedule information correction candidate by fitting a calendar attribute to the schedule candidate information.

  Furthermore, the system operation management apparatus 3 includes a matching model determination unit 23 as shown in FIG. 17. When the correlation change analysis unit 18 produces a plurality of correlation change analysis results, the matching model determination unit 23 compares their degrees of abnormality and determines an order based on the degree of abnormality of each analysis result.

  Further, the correlation change analysis unit 18, the failure analysis unit 13, and the manager dialogue unit 14 have new functions in addition to the functions described above. Hereinafter, these functions will be described.

  The correlation change analysis unit 18 not only performs correlation change analysis using the correlation model assigned in accordance with the schedule information, but also performs correlation change analysis on the performance information received from the performance information collection unit 11 using the other correlation models stored in the analysis model storage unit 17.
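
As a rough sketch of this dual analysis, the same performance information can be scored against every stored correlation model, so that the results can be ranked later. The representation (linear pairwise correlations, abnormality as a count of broken correlations, the function names) is an assumption for illustration; the patent does not prescribe it.

```python
# Illustrative sketch: analyze the incoming performance information with
# the scheduled model and with every other stored correlation model.

def abnormality(model, metrics, tolerance=1.0):
    """Degree of abnormality = number of broken correlations."""
    return sum(
        1 for (src, dst), (a, b) in model.items()
        if abs(metrics[dst] - (a * metrics[src] + b)) > tolerance
    )

def analyze_with_all_models(models, metrics):
    """Return {model_id: degree_of_abnormality} for every stored model,
    so a downstream step can rank them."""
    return {mid: abnormality(m, metrics) for mid, m in models.items()}

models = {
    "model_A": {("cpu", "disk"): (2.0, 0.0)},
    "model_B": {("cpu", "disk"): (0.5, 3.0)},
}
metrics = {"cpu": 10.0, "disk": 20.5}
print(analyze_with_all_models(models, metrics))  # {'model_A': 0, 'model_B': 1}
```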

  The failure analysis unit 13 receives, from the matching model determination unit 23, the analysis result using the other correlation model in addition to the analysis result using the correlation model assigned according to the schedule information, performs the failure analysis, and sends the result to the administrator dialogue unit 14.

  The administrator dialogue unit 14 displays the analysis result according to the schedule information received from the failure analysis unit 13 together with the analysis result based on the other correlation model. In addition, upon receiving an input indicating that the analysis result using the other correlation model is to be adopted as the regular analysis result, the administrator dialogue unit 14 modifies the contents of the schedule information stored in the analysis schedule storage unit 19 based on the contents of that other correlation model.

  As a result, even if the contents of the schedule information in the first and second embodiments described above have some problem, a suitable correlation model can be selected from the other correlation models and applied to the correlation change analysis, so that a highly accurate correlation change analysis can be performed.

  In the third embodiment of the present invention, the model generation unit 30 includes the correlation model generation unit 16, the candidate information generation unit 21, the correction candidate generation unit 22, and the matching model determination unit 23. The analysis unit 31 includes the correlation change analysis unit 18 and the failure analysis unit 13.

  The contents of the third embodiment of the present invention will be described in detail below, focusing on the differences from the first and second embodiments described above.

  The correlation change analysis unit 18 acquires the performance information for analysis from the performance information collection unit 11, the schedule information from the analysis schedule storage unit 19, and each correlation model for the preset analysis period from the analysis model storage unit 17.

  Next, the correlation change analysis unit 18 performs a correlation change analysis on the performance information for analysis using a correlation model assigned according to the schedule information. Further, the correlation change analysis unit 18 performs correlation change analysis using various correlation models acquired from the analysis model storage unit 17.

  Then, the correlation change analysis unit 18 sends all of the analysis results obtained by the above-described correlation change analysis to the matching model determination unit 23.

  The matching model determination unit 23 compares the degree of abnormality (difference between the actual measurement value and the theoretical value) for all analysis results received from the correlation change analysis unit 18, and determines the rank of each analysis result.

  Then, the matching model determination unit 23 checks whether, among the analysis results using the other correlation models, there exists an analysis result having a lower degree of abnormality than the analysis result according to the schedule information. If such an analysis result exists, the matching model determination unit 23 determines the analysis result using that other correlation model as an alternative to the analysis result according to the schedule information, and determines that correlation model as the matching model. When there are a plurality of analysis results having a lower degree of abnormality than the analysis result according to the schedule information, the matching model determination unit 23 may determine the analysis result having the lowest degree of abnormality as the alternative.
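
The selection logic described above can be sketched as follows, under the assumption that each analysis result has already been reduced to a single degree-of-abnormality value per correlation model; the names are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the matching-model determination.

def determine_matching_model(results, scheduled_model):
    """results: {model_id: degree_of_abnormality}.
    Return the id of an alternative model with lower abnormality than the
    model assigned by the schedule information, or None if there is none."""
    scheduled_abnormality = results[scheduled_model]
    better = {
        m: d for m, d in results.items()
        if m != scheduled_model and d < scheduled_abnormality
    }
    if not better:
        return None                     # the scheduled result stands alone
    # when several results beat the scheduled one, take the lowest abnormality
    return min(better, key=better.get)

results = {"model_A": 2, "model_B": 7, "model_C": 5}
print(determine_matching_model(results, "model_B"))   # model_A
```

Here both model_A and model_C beat the scheduled model_B, and model_A is chosen because it has the lowest degree of abnormality, mirroring the tie-breaking rule described above.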

  Finally, the matching model determination unit 23 sends both the analysis result according to the schedule information and the alternative to the failure analysis unit 13.

  Here, as a method for comparing the degrees of abnormality of the analysis results, the matching model determination unit 23 may, for example, judge whether the degree of abnormality of one result remains consistently larger or smaller than that of another.

  As one specific example, referring to 21c2 in FIG. 10, consider a case where the analysis result A3, which is one of the results of analyzing the performance information with the correlation model A, is compared with the analysis result B3, which is one of the results of analyzing the performance information with the correlation model B.

  Comparing the two, the analysis result B3 has a higher degree of abnormality than the analysis result A3 over most of the period (FIG. 10, 21c2). In this case, therefore, the matching model determination unit 23 determines that the analysis result B3 is not a suitable analysis result, and, since the analysis result A3 has a lower degree of abnormality than B3, determines that A3 is the better analysis result.
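
One way to implement this "consistently higher or lower" comparison over an analysis period is sketched below. The 80 % threshold is purely an assumption for illustration; the patent only states that consistently large or small degrees of abnormality are compared.

```python
# Illustrative comparison of two abnormality time series.

def consistently_lower(series_a, series_b, fraction=0.8):
    """True if series_a has the lower abnormality in at least
    `fraction` of the sampled time points (assumed criterion)."""
    lower = sum(1 for a, b in zip(series_a, series_b) if a < b)
    return lower >= fraction * len(series_a)

a3 = [1, 0, 2, 1, 0, 1]    # abnormality of analysis result A3 over time
b3 = [4, 5, 3, 6, 5, 4]    # abnormality of analysis result B3 over time
print(consistently_lower(a3, b3))   # True: A3 is the better result
```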

  Therefore, when the correlation model assigned according to the schedule information is model B, the analysis result is B3, and the analysis result A3 using the correlation model A exists as an analysis result by another correlation model, the matching model determination unit 23 determines the analysis result A3 as the alternative to the analysis result.

  When the matching model determination unit 23 has determined an alternative, the failure analysis unit 13 receives both the analysis result according to the schedule information and the alternative from the matching model determination unit 23, performs the above-described failure analysis on the analysis result according to the schedule information, and then sends both to the administrator dialogue unit 14.

  When the analysis result according to the schedule information and the alternative are sent from the failure analysis unit 13, the administrator dialogue unit 14 receives both of them and simultaneously displays them.

  FIG. 18 is an explanatory diagram illustrating an example of contents displayed by the administrator dialogue unit 14 in the third embodiment of the present invention.

  For example, the administrator dialogue unit 14 displays the display screen 14C of FIG.

  This display screen 14C includes the current analysis result (the analysis result according to the schedule information) 14Ca, which indicates the degree of abnormality (the difference between the actual measurement value and the theoretical value based on the correlation function). Further, the display screen 14C includes, for each analysis period in which an alternative to the current analysis result exists, the analysis result 14Cb of that alternative and the information 14Cc on the correlation model used to obtain it. Furthermore, the display screen 14C includes an operation button 14Cd for adopting the alternative in place of the current analysis result as the regular analysis result.

  Based on the various information displayed on the display screen 14C, the system administrator can thereby input to the administrator dialogue unit 14 an improvement command corresponding to the degree of abnormality detected in the current analysis result (the analysis result according to the schedule information).

  Furthermore, the system administrator can input a command to the administrator dialogue unit 14 to adopt the alternative, instead of the current analysis result, as the regular analysis result of the performance information (operation button 14Cd in FIG. 18).

  In addition, when the alternative is adopted as the analysis result, the administrator dialogue unit 14 modifies the contents of the current schedule information stored in the analysis schedule storage unit 19 based on the contents of the matching model (it replaces the correlation model corresponding to the analysis period for which the alternative was presented with the matching model). Thereby, the accuracy of subsequent analysis results can be improved.
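
The schedule correction described here amounts to re-pointing one analysis period at the matching model. A minimal sketch follows, assuming (as an illustration only) that the stored schedule information is a mapping from analysis period to a correlation model id:

```python
# Sketch of the schedule-information correction: replace the model
# assigned to one analysis period with the adopted matching model.

def correct_schedule(schedule, period, matching_model):
    """Return a corrected copy of the schedule in which `period`
    is assigned the matching model adopted by the administrator."""
    corrected = dict(schedule)          # leave the stored schedule intact
    corrected[period] = matching_model
    return corrected

schedule = {"weekday": "model_A", "month_end": "model_B"}
print(correct_schedule(schedule, "month_end", "model_A"))
# {'weekday': 'model_A', 'month_end': 'model_A'}
```

Working on a copy mirrors the described flow, where the stored schedule is only rewritten once the administrator explicitly adopts the alternative.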

  Other functions of the above-described units are the same as those in the second embodiment described above.

[Operation of Third Embodiment]
Next, the operation of the system operation management device 3 according to the third exemplary embodiment of the present invention will be described below with reference to FIG. 19, focusing on parts different from the first and second exemplary embodiments described above.

  FIG. 19 is a flowchart showing the operation of the matching model determination unit 23 in the third embodiment of the present invention.

  Of the operations of the system operation management device 3 in the third exemplary embodiment of the present invention, the steps for generating schedule information are the same as those in the second exemplary embodiment.

  In the subsequent correlation change analysis step, the correlation change analysis unit 18 acquires the performance information for analysis from the performance information collection unit 11 and acquires, from the analysis model storage unit 17, all of the stored correlation models for the preset period.

  Then, the correlation change analysis unit 18 performs a correlation change analysis of the performance information using the correlation model assigned according to the schedule information (step S401, original-model analysis step).

  Subsequently, the correlation change analysis unit 18 performs a correlation change analysis of the performance information using another correlation model acquired from the analysis model storage unit 17 (step S402, other model analysis step).

  Then, the correlation change analysis unit 18 sends all of the analysis results according to the schedule information and the analysis results using the other correlation models to the matching model determination unit 23.

  Next, the matching model determination unit 23 compares the analysis result according to the schedule information with the analysis result using the other correlation model (step S403, matching model determination step).

  As a result, when an analysis result using another correlation model is superior (has a lower degree of abnormality) to the analysis result according to the schedule information (Yes in step S404), the matching model determination unit 23 sets the analysis result using that other correlation model as the alternative to the analysis result according to the schedule information. The matching model determination unit 23 then sets the other correlation model related to the alternative as the matching model, and sends the analysis result according to the schedule information and the alternative to the failure analysis unit 13.

  On the other hand, when no analysis result using another correlation model is superior to the analysis result according to the schedule information (No in step S404), the matching model determination unit 23 sends only the analysis result according to the schedule information to the failure analysis unit 13.

  Next, the failure analysis unit 13 receives the analysis result according to the schedule information and the alternative from the matching model determination unit 23, performs the failure analysis, and then sends the analysis result according to the schedule information and the alternative after the failure analysis to the administrator dialogue unit 14.

  Next, the administrator dialogue unit 14 displays the contents of the analysis results and alternatives according to the schedule information received from the failure analysis unit 13 (step S405, alternative output process).

  Then, the administrator interaction unit 14 receives an input related to a countermeasure command by a system administrator or the like who has browsed the display content described above, and sends information related to the input to the countermeasure execution unit 15 (step S406).

  Further, when the administrator dialogue unit 14 receives an input indicating that the alternative is to be adopted as the regular analysis result, the administrator dialogue unit 14 modifies the current schedule information stored in the analysis schedule storage unit 19 based on the matching model (the correlation model corresponding to the analysis period for which the alternative was presented is replaced with the matching model) (step S407, schedule information correction step).

  Thereafter, the steps after step S401 are repeatedly executed.

  Here, the specific contents executed in each step described above may be programmed and executed by a computer.

[Effects of Third Embodiment]
According to the third embodiment of the present invention, even when the operation pattern of the customer service execution system 4 changes from moment to moment (that is, even when the customer service execution system 4 is not necessarily operated as set in the schedule information), the system operation management apparatus 3 can execute the correlation change analysis with high accuracy. The reason is that the system operation management apparatus 3 also outputs correlation change analysis results using correlation models that are not assigned in the schedule information, so that even if a temporary disruption of the operation pattern occurs, the correlation change analysis result using the correlation model for the operation pattern during which the disruption occurred can be applied as an alternative to the analysis result.

  For example, even if work normally performed on the last day of the month is brought forward for some reason, according to the third embodiment an analysis result meaning "this is normal if the day is treated as the last day of the month" can be presented together with the analysis result according to the schedule information. As described above, even when a sudden deviation of the operation pattern occurs in the customer service execution system 4, the system operation management device 3 can present an appropriate analysis result to the system administrator.

  Furthermore, according to the third embodiment of the present invention, since the system operation management apparatus 3 can successively correct the contents of the schedule information stored in the analysis schedule storage unit 19 based on the contents of the matching model, the contents of the schedule information can always be kept up to date, and an operation management environment that can flexibly cope with various system errors can be obtained.

  Although the present invention has been described in the above embodiments, the present invention is not limited to the above embodiments.

  This application claims priority based on Japanese Patent Application No. 2009-238747 filed on October 15, 2009, the entire disclosure of which is incorporated herein.

  The system operation management apparatus, system operation management method, and program storage medium according to the present invention can be applied to information processing apparatuses that provide various information communication services, such as Web services and business services, as described above. Because such an information processing apparatus can detect system performance degradation, it can be used not only for Internet mail-order sales apparatuses and in-house information apparatuses, but also for various apparatuses that are expected to be accessed by many customers at once, such as seat reservation and ticketing apparatuses for railways and aircraft and automatic seat ticket purchasing apparatuses for movie theaters.

1, 2, 3, 101 System operation management device
4 Customer service execution system
11 Performance information collection unit
12 Performance information storage unit
13 Failure analysis unit
14 Administrator dialogue unit
15 Countermeasure execution unit
16 Correlation model generation unit
17 Analysis model storage unit
18 Correlation change analysis unit
19 Analysis schedule storage unit
20 Regular model storage unit
21 Candidate information generation unit
21a Common correlation determination unit
21b Static element change point extraction unit
21c Dynamic element similarity determination unit
21d Necessary model group extraction unit
22 Correction candidate generation unit
22a Calendar information storage unit
22b Calendar characteristic determination unit
22c Correction candidate generation unit
23 Matching model determination unit
30 Model generation unit
31 Analysis unit

Claims (21)

  1. Performance information storage means for storing performance information including multiple types of performance values in the system in time series,
    model generating means for extracting, based on correlation models that are generated for each of a plurality of periods based on the performance information stored in the performance information storage means and that each include one or more correlations between performance values of different types, one or more periods to which the same correlation model is applied, assigning the same correlation model to the one or more extracted periods, and specifying a calendar attribute that matches the one or more extracted periods, thereby associating the calendar attribute with the correlation model;
    Analyzing means for detecting abnormality of the performance information using the input performance information of the system and the correlation model for the calendar attribute of the period when the performance information was acquired;
    including,
    System operation management device.
  2. The system operation management apparatus according to claim 1, wherein the analysis unit performs abnormality detection based on the number of correlation destructions of the correlation calculated by applying the correlation model to the performance information.
  3. The system operation management apparatus according to claim 2, wherein the model generation unit generates the correlation model for each of a plurality of periods included in a predetermined period based on the performance information stored in the performance information storage unit for the predetermined period, sets an analysis period composed of one or more periods having a common correlation, and assigns any one of the correlation models generated for the periods in each analysis period to that analysis period.
  4. The system operation management apparatus according to claim 3, wherein, when the degree of increase or decrease in the number of correlations common between the correlation models of two consecutive periods is equal to or greater than a predetermined value, the model generation unit divides the predetermined period with the boundary between those periods as a division point, and sets the analysis period composed of one or more periods divided by the division point.
  5. The system operation management apparatus according to claim 4, wherein, when the correlation included in the correlation model set for an analysis period is similar to the correlation included in the correlation model set for another analysis period, the model generation means assigns the correlation model set for the other analysis period to that analysis period.
  6. The system operation management apparatus according to claim 2, wherein the model generation means acquires, for each of a plurality of calendar attributes, the performance information stored in the performance information storage means for periods having the calendar attribute, generates the correlation model based on the performance information, and sets the generated correlation model as the correlation model for the calendar attribute.
  7. The system operation management apparatus according to any one of claims 1 to 6, wherein the analysis means performs abnormality detection of the performance information using both the correlation model corresponding to the calendar attribute of the period in which the performance information was acquired and another correlation model, and, when the degree of abnormality of the abnormality detection using the other correlation model is lower than the degree of abnormality of the abnormality detection using the correlation model corresponding to the calendar attribute of the period in which the performance information was acquired, selects the other correlation model as a matching model for the calendar attribute.
  8. Stores performance information including multiple types of performance values in the system in time series,
    extracting, based on correlation models that are generated for each of a plurality of periods based on the performance information and that each include one or more correlations between performance values of different types, one or more periods to which the same correlation model is applied, assigning the same correlation model to the one or more extracted periods, and specifying a calendar attribute that matches the one or more extracted periods, thereby associating the calendar attribute with the correlation model; and
    Anomaly detection of the performance information is performed using the performance information of the system that has been input and the correlation model for the calendar attribute of the period in which the performance information was acquired.
    System operation management method.
  9. The system operation management method according to claim 8, wherein, when performing abnormality detection of the performance information, the abnormality detection is performed based on the number of correlation destructions of the correlations calculated by applying the correlation model to the performance information.
  10. The system operation management method according to claim 9, wherein, when associating the calendar attribute with the correlation model, the correlation model is generated for each of a plurality of periods included in a predetermined period based on the performance information of the predetermined period, an analysis period composed of one or more periods having a common correlation is set, and any one of the correlation models generated for the periods in each analysis period is assigned to that analysis period.
  11. The system operation management method according to claim 10, wherein, when associating the calendar attribute with the correlation model, the predetermined period is divided, with the boundary between two consecutive periods as a division point, when the degree of increase or decrease in the number of correlations common to the correlation models of those periods is equal to or greater than a predetermined value, and the analysis period composed of one or more periods divided by the division point is set.
  12. The system operation management method according to claim 11, wherein, when associating the calendar attribute with the correlation model, if the correlation included in the correlation model set for an analysis period is similar to the correlation included in the correlation model set for another analysis period, the correlation model set for the other analysis period is assigned to that analysis period.
  13. The system operation management method according to claim 9, wherein, for each of a plurality of calendar attributes, the performance information of periods having the calendar attribute is further acquired, the correlation model is generated based on that performance information, and the generated correlation model is set as the correlation model for the calendar attribute.
  14. The system operation management method according to any one of claims 8 to 13, wherein, when performing abnormality detection of the performance information, the abnormality detection is performed using both the correlation model corresponding to the calendar attribute of the period in which the performance information was acquired and another correlation model, and, when the degree of abnormality of the abnormality detection using the other correlation model is lower than the degree of abnormality of the abnormality detection using the correlation model corresponding to the calendar attribute of the period in which the performance information was acquired, the other correlation model is selected as a matching model for the calendar attribute.
  15. On the computer,
    Stores performance information including multiple types of performance values in the system in time series,
    extracting, based on correlation models that are generated for each of a plurality of periods based on the performance information and that each include one or more correlations between performance values of different types, one or more periods to which the same correlation model is applied, assigning the same correlation model to the one or more extracted periods, and specifying a calendar attribute that matches the one or more extracted periods, thereby associating the calendar attribute with the correlation model; and
    Anomaly detection of the performance information is performed using the performance information of the system that has been input and the correlation model for the calendar attribute of the period in which the performance information was acquired.
    A program that executes processing.
  16. The abnormality detection of the performance information is performed based on the number of correlation destructions of the correlation calculated by applying the correlation model to the performance information. The listed program.
  17. When associating the calendar attribute with the correlation model, the correlation model is generated for each of a plurality of periods included in the predetermined period based on the performance information of a predetermined period, and the one or more having a common correlation 17. The program according to claim 16, wherein an analysis period composed of the period is set, and a process of assigning one of the correlation models generated for each analysis period to the analysis period is executed.
  18. When associating the calendar attribute with the correlation model, the predetermined period is divided when the degree of increase or decrease in the number of correlations common to the correlation models in two consecutive periods is equal to or greater than a predetermined value. The program according to claim 17, wherein a process for setting the analysis period including one or more periods divided by the division point is executed as a division point.
  19. The program according to claim 18, wherein, when associating the calendar attribute with the correlation model, if the correlations included in the correlation model set for the analysis period are similar to the correlations included in the correlation model set for another analysis period other than that analysis period, a process of assigning the correlation model set for the other analysis period to the analysis period is executed.
  20. The program according to claim 16, which further executes, for each of a plurality of calendar attributes, a process of obtaining the performance information of a period having that calendar attribute, generating the correlation model based on that performance information, and setting the correlation model for that calendar attribute.
  21. The program according to any one of claims 15 to 20, wherein, when performing anomaly detection of the performance information, the anomaly detection is performed using both the correlation model corresponding to the calendar attribute of the period in which the performance information was acquired and another correlation model other than that correlation model, and a process of selecting the other correlation model as the fitting model for the calendar attribute is executed when the degree of abnormality detected using the other correlation model is lower than the degree of abnormality detected using the correlation model corresponding to the calendar attribute of the period in which the performance information was acquired.
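
The claims above describe building a correlation model per time period and detecting anomalies by counting "correlation destructions" (claim 16). As a rough illustration only, and not the patented implementation, the following Python sketch assumes a correlation model made of pairwise linear fits y ≈ a·x + b between strongly correlated metrics; the function names, the correlation threshold `min_corr`, and the error tolerance `tol` are all hypothetical choices for this sketch:

```python
# Hypothetical sketch of a per-period pairwise correlation model and
# "correlation destruction" counting. All names and thresholds are
# illustrative assumptions, not taken from the patent.
from itertools import combinations

def build_correlation_model(series, min_corr=0.95):
    """Fit y ~ a*x + b for each metric pair whose correlation is strong.

    series: dict mapping metric name -> list of sampled values
            (all lists the same length, from one analysis period).
    Returns: dict mapping (x, y) metric pairs -> (a, b) coefficients.
    """
    model = {}
    for x, y in combinations(sorted(series), 2):
        xs, ys = series[x], series[y]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((u - mx) ** 2 for u in xs)
        sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
        syy = sum((v - my) ** 2 for v in ys)
        if sxx == 0 or syy == 0:
            continue  # a constant metric carries no correlation
        r = sxy / (sxx * syy) ** 0.5  # Pearson correlation coefficient
        if abs(r) >= min_corr:        # keep only strong correlations
            a = sxy / sxx
            b = my - a * mx
            model[(x, y)] = (a, b)
    return model

def count_destructions(model, sample, tol=0.2):
    """Count correlations whose relative prediction error exceeds tol.

    sample: dict mapping metric name -> one newly observed value.
    A high count suggests an anomaly under the period's model.
    """
    broken = 0
    for (x, y), (a, b) in model.items():
        pred = a * sample[x] + b
        scale = max(abs(sample[y]), 1e-9)  # avoid division by zero
        if abs(pred - sample[y]) / scale > tol:
            broken += 1
    return broken
```

In the claimed scheme, one such model would be built per calendar attribute (e.g. weekday vs. weekend), and incoming performance samples would be checked against the model whose calendar attribute matches the sampling period; a destruction count above some threshold would then signal an anomaly.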
JP2013168691A 2009-10-15 2013-08-14 System operation management apparatus, system operation management method, and program storage medium Active JP5605476B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2009238747 2009-10-15
JP2009238747 2009-10-15
JP2013168691A JP5605476B2 (en) 2009-10-15 2013-08-14 System operation management apparatus, system operation management method, and program storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2013168691A JP5605476B2 (en) 2009-10-15 2013-08-14 System operation management apparatus, system operation management method, and program storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2011536206 Division 2010-10-13

Publications (2)

Publication Number Publication Date
JP2013229064A JP2013229064A (en) 2013-11-07
JP5605476B2 true JP5605476B2 (en) 2014-10-15

Family

ID=43876274

Family Applications (2)

Application Number Title Priority Date Filing Date
JP2011536206A Pending JPWO2011046228A1 (en) 2009-10-15 2010-10-13 System operation management apparatus, system operation management method, and program storage medium
JP2013168691A Active JP5605476B2 (en) 2009-10-15 2013-08-14 System operation management apparatus, system operation management method, and program storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
JP2011536206A Pending JPWO2011046228A1 (en) 2009-10-15 2010-10-13 System operation management apparatus, system operation management method, and program storage medium

Country Status (5)

Country Link
US (3) US8959401B2 (en)
EP (1) EP2490126A4 (en)
JP (2) JPWO2011046228A1 (en)
CN (1) CN102576328B (en)
WO (1) WO2011046228A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680329A (en) * 2015-03-17 2015-06-03 中国农业银行股份有限公司 Method and device for determining occurrence reasons of operation and maintenance problems

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5375829B2 (en) * 2008-09-18 2013-12-25 日本電気株式会社 Operation management apparatus, operation management method, and operation management program
CN103026344B (en) * 2010-06-07 2015-09-09 日本电气株式会社 Fault test set, fault detection method and program recorded medium
CN103262048B (en) * 2010-12-20 2016-01-06 日本电气株式会社 operation management device, operation management method and program thereof
SG191105A1 (en) * 2011-02-24 2013-07-31 Ibm Network event management
US9665630B1 (en) * 2012-06-18 2017-05-30 EMC IP Holding Company LLC Techniques for providing storage hints for use in connection with data movement optimizations
US20140040447A1 (en) * 2012-07-31 2014-02-06 Hitachi, Ltd. Management system and program product
EP2924580B1 (en) 2012-11-20 2017-10-04 NEC Corporation Operation management apparatus and operation management method
JP5958348B2 (en) * 2013-01-07 2016-07-27 富士通株式会社 Analysis method, analysis device, and analysis program
US9063966B2 (en) * 2013-02-01 2015-06-23 International Business Machines Corporation Selective monitoring of archive and backup storage
SG11201508013YA (en) * 2013-03-29 2015-10-29 Cumulus Systems Inc Organizing and fast searching of data
JP6126891B2 (en) * 2013-03-29 2017-05-10 富士通株式会社 Detection method, detection program, and detection apparatus
US20160055044A1 (en) * 2013-05-16 2016-02-25 Hitachi, Ltd. Fault analysis method, fault analysis system, and storage medium
JP6068296B2 (en) * 2013-08-29 2017-01-25 日本電信電話株式会社 Control device, computer resource management method, and computer resource management program
US10228994B2 (en) 2013-09-09 2019-03-12 Nec Corporation Information processing system, information processing method, and program
US20160283304A1 (en) * 2013-12-20 2016-09-29 Hitachi, Ltd. Performance prediction method, performance prediction system and program
US9450833B2 (en) * 2014-03-26 2016-09-20 International Business Machines Corporation Predicting hardware failures in a server
JP6369089B2 (en) * 2014-03-26 2018-08-08 セイコーエプソン株式会社 Information communication system, information processing apparatus, and information collection method
EP3152697A4 (en) * 2014-06-09 2018-04-11 Northrop Grumman Systems Corporation System and method for real-time detection of anomalies in database usage
JP6387777B2 (en) 2014-06-13 2018-09-12 富士通株式会社 Evaluation program, evaluation method, and evaluation apparatus
WO2016035338A1 (en) * 2014-09-03 2016-03-10 日本電気株式会社 Monitoring device and monitoring method thereof, monitoring system, and recording medium in which computer program is stored
US20170262561A1 (en) * 2014-09-11 2017-09-14 Nec Corporation Information processing apparatus, information processing method, and recording medium
JP6502062B2 (en) * 2014-11-04 2019-04-17 Kddi株式会社 Communication quality prediction device and communication quality prediction program
US20180032640A1 (en) * 2015-03-11 2018-02-01 Nec Corporation Information processing apparatus, information processing method, and recording medium
JP6627258B2 (en) * 2015-05-18 2020-01-08 日本電気株式会社 System model generation support device, system model generation support method, and program
JP6625839B2 (en) * 2015-07-08 2019-12-25 株式会社東芝 Load actual data determination device, load prediction device, actual load data determination method, and load prediction method
JP6555061B2 (en) 2015-10-01 2019-08-07 富士通株式会社 Clustering program, clustering method, and information processing apparatus
WO2018122889A1 (en) * 2016-12-27 2018-07-05 日本電気株式会社 Abnormality detection method, system, and program

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5528516A (en) * 1994-05-25 1996-06-18 System Management Arts, Inc. Apparatus and method for event correlation and problem reporting
JPH1074188A (en) 1996-05-23 1998-03-17 Hitachi Ltd Data learning device and plant controller
JPH10224990A (en) 1997-02-10 1998-08-21 Fuji Electric Co Ltd Method for correcting predicted value of electric power demand
JP3668642B2 (en) * 1999-06-30 2005-07-06 キヤノンシステムソリューションズ株式会社 Data prediction method, data prediction apparatus, and recording medium
JP2001142746A (en) 1999-11-11 2001-05-25 Nec Software Chubu Ltd Load monitor device for computer system
US7065566B2 (en) * 2001-03-30 2006-06-20 Tonic Software, Inc. System and method for business systems transactions and infrastructure management
CA2471013C (en) * 2001-12-19 2011-07-26 David Helsper Method and system for analyzing and predicting the behavior of systems
JP4089339B2 (en) 2002-07-31 2008-05-28 日本電気株式会社 Fault information display device and program
JP2004086897A (en) 2002-08-06 2004-03-18 Fuji Electric Holdings Co Ltd Method and system for constructing model
JP2004086896A (en) 2002-08-06 2004-03-18 Fuji Electric Holdings Co Ltd Method and system for constructing adaptive prediction model
US8479057B2 (en) * 2002-11-04 2013-07-02 Riverbed Technology, Inc. Aggregator for connection based anomaly detection
US20040093193A1 (en) * 2002-11-13 2004-05-13 General Electric Company System statistical associate
JP3922375B2 (en) * 2004-01-30 2007-05-30 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Maschines Corporation Anomaly detection system and method
JP4183185B2 (en) 2004-03-10 2008-11-19 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Maschines Corporation Diagnostic device, detection device, control method, detection method, program, and recording medium
JP2005316808A (en) * 2004-04-30 2005-11-10 Nec Software Chubu Ltd Performance monitoring device, performance monitoring method and program
JP4756675B2 (en) 2004-07-08 2011-08-24 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Maschines Corporation System, method and program for predicting computer resource capacity
JP2006146668A (en) 2004-11-22 2006-06-08 Ntt Data Corp Operation management support apparatus and operation management support program
JP4661250B2 (en) 2005-02-09 2011-03-30 富士電機ホールディングス株式会社 Prediction method, prediction device, and prediction program
US7802144B2 (en) * 2005-04-15 2010-09-21 Microsoft Corporation Model-based system monitoring
US8379538B2 (en) * 2005-06-22 2013-02-19 Hewlett-Packard Development Company, L.P. Model-driven monitoring architecture
US7246043B2 (en) * 2005-06-30 2007-07-17 Oracle International Corporation Graphical display and correlation of severity scores of system metrics
JP4896573B2 (en) 2006-04-20 2012-03-14 株式会社東芝 Fault monitoring system and method, and program
JP2009543233A (en) * 2006-07-06 2009-12-03 Akorri Networks, Inc. Application system load management
JP5018120B2 (en) * 2007-02-19 2012-09-05 Kddi株式会社 Mobile terminal, program, and display screen control method for mobile terminal
US8095830B1 (en) * 2007-04-03 2012-01-10 Hewlett-Packard Development Company, L.P. Diagnosis of system health with event logs
JP4990018B2 (en) * 2007-04-25 2012-08-01 株式会社日立製作所 Apparatus performance management method, apparatus performance management system, and management program
US20090171718A1 (en) * 2008-01-02 2009-07-02 Verizon Services Corp. System and method for providing workforce and workload modeling
JP4872944B2 (en) 2008-02-25 2012-02-08 日本電気株式会社 Operation management apparatus, operation management system, information processing method, and operation management program
JP4872945B2 (en) * 2008-02-25 2012-02-08 日本電気株式会社 Operation management apparatus, operation management system, information processing method, and operation management program
US8098585B2 (en) * 2008-05-21 2012-01-17 Nec Laboratories America, Inc. Ranking the importance of alerts for problem determination in large systems
US8230269B2 (en) * 2008-06-17 2012-07-24 Microsoft Corporation Monitoring data categorization and module-based health correlations
US8166351B2 (en) * 2008-10-21 2012-04-24 At&T Intellectual Property I, L.P. Filtering redundant events based on a statistical correlation between events
US8392760B2 (en) * 2009-10-14 2013-03-05 Microsoft Corporation Diagnosing abnormalities without application-specific knowledge


Also Published As

Publication number Publication date
US8959401B2 (en) 2015-02-17
US10496465B2 (en) 2019-12-03
WO2011046228A1 (en) 2011-04-21
EP2490126A4 (en) 2015-08-12
US20160274965A1 (en) 2016-09-22
US20150113329A1 (en) 2015-04-23
CN102576328B (en) 2015-09-09
EP2490126A1 (en) 2012-08-22
US9384079B2 (en) 2016-07-05
JP2013229064A (en) 2013-11-07
JPWO2011046228A1 (en) 2013-03-07
CN102576328A (en) 2012-07-11
US20110246837A1 (en) 2011-10-06

Similar Documents

Publication Publication Date Title
US9720941B2 (en) Fully automated SQL tuning
JP6160673B2 (en) Operation management apparatus, operation management method, and program
Armony et al. The impact of delay announcements in many-server queues with abandonment
Cherkasova et al. Automated anomaly detection and performance modeling of enterprise applications
US9038030B2 (en) Methods for predicting one or more defects in a computer program and devices thereof
US7444263B2 (en) Performance metric collection and automated analysis
TWI282949B (en) Alarm management method and apparatus therefor
US8401726B2 (en) Maintenance interval determination and optimization tool and method
DE60205356T2 (en) System, device and method for diagnosing a flow system
JP4541364B2 (en) Statistical analysis of automatic monitoring and dynamic process metrics to reveal meaningful variations
Ostrand et al. Predicting the location and number of faults in large software systems
JP4089427B2 (en) Management system, management computer, management method and program
US8645769B2 (en) Operation management apparatus, operation management method, and program storage medium
JP4710720B2 (en) Failure prevention diagnosis support system and failure prevention diagnosis support method
US8595685B2 (en) Method and system for software developer guidance based on analyzing project events
US7472388B2 (en) Job monitoring system for browsing a monitored status overlaps with an item of a pre-set browsing end date and time
US8019734B2 (en) Statistical determination of operator error
Gmach et al. Capacity management and demand prediction for next generation data centers
US7552067B2 (en) System and method supply chain demand satisfaction
JP2667376B2 (en) Client / server data processing system
US7418366B2 (en) Maintenance request systems and methods
US8762777B2 (en) Supporting detection of failure event
CN102216908B (en) Support the system, the method and apparatus that perform the action corresponding to detection event
US7050943B2 (en) System and method for processing operation data obtained from turbine operations
US20120123994A1 (en) Analyzing data quality

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20130814

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20140204

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140401

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20140729

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20140811

R150 Certificate of patent or registration of utility model

Ref document number: 5605476

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150