WO2022013944A1 - Failure analysis assistance device, failure analysis assistance method, and program - Google Patents

Failure analysis assistance device, failure analysis assistance method, and program

Info

Publication number
WO2022013944A1
Authority
WO
WIPO (PCT)
Prior art keywords
failure
program
target
unit
sequence
Prior art date
Application number
PCT/JP2020/027375
Other languages
French (fr)
Japanese (ja)
Inventor
振宇 徐
Original Assignee
Nippon Telegraph and Telephone Corporation (日本電信電話株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation
Priority to PCT/JP2020/027375 priority Critical patent/WO2022013944A1/en
Publication of WO2022013944A1 publication Critical patent/WO2022013944A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance

Definitions

  • the present invention relates to a failure analysis support device, a failure analysis support method, and a program.
  • As methods for reproducing failures that occur in a program, there are an approach that reproduces the failure based on the user's operations (for example, Non-Patent Document 1) and an approach that reapplies input data to the target (for example, Non-Patent Document 2).
  • The approach of Non-Patent Document 1 automatically reproduces an application crash from a bug report.
  • The approach of Non-Patent Document 2 reconstructs the state of an object efficiently and accurately as a unit test by utilizing time-travel debugging, and uses differential analysis of code coverage data.
  • In either case, the existing methods reproduce a failure by recording the inputs (external input data) to the application and reapplying them.
  • the present invention has been made in view of the above points, and an object of the present invention is to improve the efficiency of failure analysis of a program.
  • To solve the above problem, the failure analysis support device includes: an analysis unit that analyzes executable paths of the call relationships among the components of a program; a selection unit that selects, based on a predetermined rule, the components that are to output information indicating traces of their execution from among the components included in the paths; a modification unit that modifies the program so that the components selected by the selection unit output the information; and a generation unit that identifies a sequence of statements for causing a failure, based on the paths and on the sequence of components indicated by the information output from the program modified by the modification unit up to the occurrence of the failure of the program, and generates a test case for executing the sequence.
  • the failure analysis of the program can be streamlined.
  • In the present embodiment, information indicating the execution traces of the components of an application program (hereinafter referred to as the "target application") that were executed before the target application failed in the commercial environment is collected. Using the collected information, test cases are automatically generated and executed in the verification environment, and it is finally determined whether the same events as at the time of the failure (conventional logs or error messages) appear. This provides an approach that reproduces the failure directly.
  • Therefore, in the present embodiment, before information is collected in the commercial environment, a mechanism is prepared in the verification environment that allows the components that are to output execution traces to be selected from among the components of the target application.
  • the commercial environment refers to a computer or computer system in which a user who has purchased the target application uses the target application.
  • the verification environment refers to a computer or computer system used by the developer or maintenance person of the target application to analyze the failure of the target application.
  • FIG. 1 is a diagram for explaining an overall picture of the method proposed in the embodiment of the present invention. As shown in FIG. 1, the proposed method is composed of three parts, Parts 1 to 3.
  • Part 1: Recording of run-time information
  • In the verification environment, the structure of the entire target application is analyzed, and the components that are to output traces of execution (execution traces) (hereinafter referred to as "targets") are narrowed down from among the components of the target application.
  • a list of targets (hereinafter referred to as "target list”) is generated.
  • Examples of the types of components of the target application include classes, methods, and statements.
  • That is, the components of the target application form a hierarchy of granularities: a class is the component at the highest level (coarsest granularity), and a statement is the component at the lowest level (finest granularity). One of these levels (class, method, or statement) is selected as the target granularity.
  • After that, using the generated target list, the target application executed in the commercial environment is changed (modified) in the verification environment so that the call sequence between targets (the execution traces) can be acquired. As a result, when the target application is used in the commercial environment, the execution traces of the components corresponding to the targets are acquired (collected).
  • Part 2: Acquisition of the failure path
  • After a failure occurs in the target application in the commercial environment, a call graph between targets is generated in the verification environment as a failure path candidate for each function of the target application, based on the execution traces collected in Part 1.
  • Here, a function of the target application means, for example, the unit of processing from an input by the user until the corresponding output is produced, and is a concept delimited by a pair of input and output. For example, from the viewpoint of the target application's GUI (Graphical User Interface), functions may be distinguished per menu item.
  • Part 3: Automatic reproduction and determination of the failure
  • Using the failure path candidates (the call graphs between targets) generated in Part 2, test cases that cover the call sequences at the statement level are created. The created test cases are then executed in the verification environment to reproduce the failure.
  • To carry this out, the verification environment requires the following inputs: (1) the source code / binary code of the target application; (2) the setting of the target level (granularity) (Class / Method / Statement); (3) the conventional logs at the time of the failure (including error messages); and (4) a clone of the commercial environment (to build the verification environment).
  • The conventional logs in (3) are the logs that the target application outputs by default, separately from the execution traces, and include the error messages at the time of the failure.
  • The failures that can be handled in the present embodiment are those in which the call sequence (execution trace) differs from normal operation at the statement level; failures in which the program behaves exactly as it does during normal operation are not covered. For example, failures in which exception handling or functions that would not normally be executed are executed are targeted, whereas failures in which resources are gradually depleted, such as memory leaks, are excluded.
  • Next, the failure analysis support device 10, a computer that functions as the verification environment in the present embodiment, will be described.
  • FIG. 2 is a diagram showing a hardware configuration example of the failure analysis support device 10 according to the embodiment of the present invention.
  • The failure analysis support device 10 of FIG. 2 has a drive device 100, an auxiliary storage device 102, a memory device 103, a CPU 104, an interface device 105, a display device 106, an input device 107, and the like, which are connected to one another by a bus B.
  • The program that realizes the processing in the failure analysis support device 10 is provided on a recording medium 101 such as a CD-ROM. When the recording medium 101 storing the program is set in the drive device 100, the program is installed from the recording medium 101 into the auxiliary storage device 102 via the drive device 100.
  • the program does not necessarily have to be installed from the recording medium 101, and may be downloaded from another computer via the network.
  • the auxiliary storage device 102 stores the installed program and also stores necessary files, data, and the like.
  • the memory device 103 reads a program from the auxiliary storage device 102 and stores it when there is an instruction to start the program.
  • the CPU 104 realizes the function related to the failure analysis support device 10 according to the program stored in the memory device 103.
  • the interface device 105 is used as an interface for connecting to a network.
  • the display device 106 displays a GUI (Graphical User Interface) or the like by a program.
  • the input device 107 is composed of a keyboard, a mouse, and the like, and is used for inputting various operation instructions.
  • FIG. 3 is a diagram showing a functional configuration example of the failure analysis support device 10 according to the embodiment of the present invention.
  • the failure analysis support device 10 includes a code analysis unit 11, a target selection unit 12, a modification unit 13, a failure path candidate identification unit 14, a test case generation unit 15, a verification unit 16, and the like. Each of these parts is realized by a process of causing the CPU 104 to execute one or more programs installed in the failure analysis support device 10.
  • FIG. 4 is a flowchart for explaining an example of a processing procedure executed by the failure analysis support device 10 with respect to Part 1.
  • In step S101, the code analysis unit 11 reads, in the verification environment, the source code or binary code of the target application (hereinafter referred to as the "target application code"), analyzes the executable paths of the call relationships among the components of the target application based on the target application code, and generates a graph showing those paths (a CFG (Control Flow Graph), used here as a call graph) for each function of the target application.
  • FIG. 5 is a diagram showing an example of CFG.
  • Each node in the CFG shown in FIG. 5 corresponds to a component at the target level. That is, when "Class" is specified as the target level, each node corresponds to a class constituting the target application; when "Method" is specified, each node corresponds to a method of each class; and when "Statement" is specified, each node corresponds to a statement constituting each method.
  • L0 to L3 on the left side of FIG. 5 indicate the levels in the depth direction of the CFG.
  • The CFG can be generated using known techniques. In the present embodiment, the case where "Method" is designated in advance by the user as the target level is described. In this case, for example, the code analysis unit 11 analyzes the code line by line for each file constituting the target application code; each time a method call is found, it adds a new node corresponding to the called method to the CFG, and adds to the CFG an edge directed from the node corresponding to the calling method to the node corresponding to the called method. If the target level is the statement, a node is created for each statement; if the target level is the class, a node is created for each class.
  • In the class case, for example, every time the code analysis unit 11 finds a call to a method of another class from within a method of one class, it adds a node for the called class to the CFG, and adds to the CFG an edge directed from the node corresponding to the calling class to the node corresponding to the called class.
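  • As a non-authoritative illustration of the analysis step above, the following is a minimal Python sketch of how a method-level call graph (the CFG in this document's terminology) could be assembled by scanning source files line by line. The regular expressions, function names, and data shapes are assumptions for illustration; a real implementation would use a proper parser or bytecode analysis.

```python
import re

def build_call_graph(files, known_methods):
    """Assemble a method-level call graph from source text (sketch only).

    files: {filename: source text}.  known_methods: names of the target
    application's methods.  Regular expressions stand in for real parsing.
    """
    node_list = []     # target names; index = node number (shape of csv.txt below)
    adjacency = {}     # caller node number -> set of callee numbers (shape of cfg.txt below)

    def node_id(name):
        if name not in node_list:
            node_list.append(name)
        return node_list.index(name)

    # crude Java-like method definition and call-site patterns (illustrative)
    method_def = re.compile(r"^\s*(?:public|private|protected)?\s*\w[\w<>\[\]]*\s+(\w+)\s*\([^)]*\)\s*\{")
    call_site = re.compile(r"(\w+)\s*\(")

    for source in files.values():
        current = None
        for line in source.splitlines():
            d = method_def.match(line)
            if d:                                   # entering a new method body
                current = node_id(d.group(1))
                adjacency.setdefault(current, set())
                continue
            if current is None:
                continue
            for callee in call_site.findall(line):  # add edge caller -> callee
                if callee in known_methods and callee != node_list[current]:
                    adjacency[current].add(node_id(callee))

    cfg = [[caller, sorted(callees)] for caller, callees in sorted(adjacency.items())]
    return node_list, cfg
```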
  • the code analysis unit 11 generates two files as files for storing information indicating the CFG along with the generation of the CFG.
  • The first file stores the list of the target names corresponding to the nodes of the CFG (hereinafter referred to as the "node list"). In the present embodiment, the file name of this file is "csv.txt". A specific example of the contents of the file is as follows: ['aa','bb', ..., 'ff', ...]
  • Here, each of 'aa', 'bb', ..., 'ff' is a target name.
  • When the target level is the method, the target names are method names; when the target level is the class, they are class names; and when the target level is the statement, they are statements.
  • The second file stores the list of the adjacency relations of the nodes of the CFG (when the target granularity is the method, the method call relations) (hereinafter referred to as the "adjacency list").
  • the file name of the file is "cfg.txt”.
  • Specific examples of the contents of the file are as follows. [[0, [1,2,3]], [2, [3,4]], ...]
  • Each numerical value in the adjacency list (hereinafter referred to as a "node number") corresponds to the position of a method name in the node list, counting from 0 (that is, zero-origin). For example, [0, [1, 2, 3]] indicates that the 0th method in the node list calls the 1st, 2nd, and 3rd methods in the node list.
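  • The two files can then be read back into an in-memory graph. The sketch below assumes the csv.txt / cfg.txt contents are Python-style literals exactly as in the examples above; the function name is illustrative.

```python
import ast

def load_cfg(csv_path="csv.txt", cfg_path="cfg.txt"):
    """Load one function's CFG from the node-list and adjacency-list files."""
    with open(csv_path, encoding="utf-8") as f:
        node_list = ast.literal_eval(f.read())        # index = node number (0-origin)
    with open(cfg_path, encoding="utf-8") as f:
        pairs = ast.literal_eval(f.read())            # e.g. [[0,[1,2,3]],[2,[3,4]],...]
    adjacency = {caller: callees for caller, callees in pairs}
    return node_list, adjacency

# Usage sketch: print the targets called by node 0.
# node_list, adjacency = load_cfg()
# print(node_list[0], "->", [node_list[i] for i in adjacency.get(0, [])])
```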
  • The node list and the adjacency list are generated for each CFG, and a CFG is generated for each function of the target application; CFGs corresponding to different functions have different root nodes. FIG. 3 shows an example in which CFGs (cfg_A, cfg_B, cfg_C, cfg_D) are generated for the functions A, B, C, and D, respectively.
  • Subsequently, the target selection unit 12 accepts, from the user (a user of the verification environment), input of the functions to be monitored (hereinafter referred to as "target functions") among the functions of the target application, and of a trace pattern (S102).
  • For example, in the example of FIG. 3, any one or more of the functions A, B, C, and D are selected as target functions. Only some functions may be selected as target functions, or all functions may be selected as target functions.
  • Since a CFG is generated for each function, the target selection unit 12 may display on the display device 106 a GUI that presents the generated CFGs as options, and may set the function of the CFG corresponding to the option selected by the user as a target function.
  • A trace pattern is a rule composed of a combination of a criterion for selecting the nodes to be targets (trace targets) among the nodes constituting the CFG of a target function, and the information to be output in the execution traces for those targets. In the present embodiment, one of the following four trace patterns, the A pattern to the D pattern, can be selected.
  • The A pattern is a trace pattern in which the nodes of every other level in the depth direction of the CFG (that is, nodes that are not adjacent in the CFG) are selected as targets, and the date and time at which a target was executed and the target name are included in the output information. In the case of the A pattern, the nodes surrounded by thick lines in the CFG of FIG. 5 are the targets. The A pattern has the feature that the recording load of the execution traces is small and the reproduction time can also be kept short.
  • The B pattern is a trace pattern in which the branch-destination nodes of each branch in the CFG are selected as targets, and the date and time at which a target was executed and the target name are included in the output information. The B pattern has the feature that the recording load of the execution traces is relatively large but the reproduction time is short.
  • The C pattern is a trace pattern in which the A pattern is applied for a certain period and the targets are then revised based on execution frequency. For example, targets that are executed frequently under the A pattern may be excluded, and the nodes before and after targets that are executed infrequently may be added as targets. The C pattern has the feature that the recording load of the execution traces can be made even smaller and the reproduction time can be further suppressed.
  • The D pattern is a trace pattern in which, when the A, B, or C pattern is applied, the values of the target's arguments and return value are additionally included in the output information. The D pattern has the feature that the recording load of the execution traces is large but the reproduction time can be shortened.
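  • As one possible reading of the B pattern's selection criterion, the sketch below marks as targets all callees of any node that has two or more outgoing edges in the adjacency list. This interpretation of "branch destination" is an assumption, and the data shapes follow the csv.txt / cfg.txt structures above.

```python
def select_b_pattern_targets(node_list, adjacency):
    """B pattern (sketch): select the destination nodes of every branch,
    i.e. all callees of any node with two or more outgoing edges."""
    targets = set()
    for caller, callees in adjacency.items():
        if len(callees) >= 2:               # a branch point in the call graph
            targets.update(callees)
    return [node_list[i] for i in sorted(targets)]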
  • Subsequently, the target selection unit 12 selects the target nodes from the nodes of the CFG of each target function based on the trace pattern designated by the user, and generates the list of the target names of the selected nodes (that is, the "target list") (S103). Here, the case where the A pattern is designated as the trace pattern is described as an example.
  • Specifically, when the A pattern is designated, the target selection unit 12 performs a depth-first search over the adjacency list stored in the cfg.txt of the target function, selects the node numbers whose depth is an even number, and generates, from the node list stored in the csv.txt of the target function, the list of target names (for example, method names) corresponding to the selected node numbers. Therefore, for example, if the CFG in FIG. 5 is the CFG of the target function, the list of the target names surrounded by thick lines is generated; that is, nodes that are not adjacent in the CFG are selected as targets.
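  • A minimal sketch of the A-pattern selection is shown below, assuming the node_list / adjacency structures of csv.txt and cfg.txt, a known root node number, and that a node's depth is fixed by its first visit in the depth-first search; these details are assumptions for illustration.

```python
def select_a_pattern_targets(node_list, adjacency, root=0):
    """A pattern (sketch): depth-first search from the root of the function's
    CFG, keeping node numbers whose depth is even (0, 2, 4, ...), so that no
    two selected nodes are adjacent, then returning their target names."""
    selected, visited = set(), set()

    def dfs(node, depth):
        if node in visited:
            return
        visited.add(node)
        if depth % 2 == 0:
            selected.add(node)
        for child in adjacency.get(node, []):
            dfs(child, depth + 1)

    dfs(root, 0)
    return [node_list[i] for i in sorted(selected)]   # the "target list"
```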
  • Note that the code analysis unit 11 may always generate a CFG whose nodes are the targets of the lowest level (finest granularity; statements in the present embodiment), regardless of the designation of the target level (granularity). In that case, the target selection unit 12 may transform (aggregate) the CFG so that it corresponds to the designated target level and generate the target list based on the transformed CFG; that is, by aggregating the nodes of the lowest-level CFG, a CFG of another level can be generated using known techniques.
  • Subsequently, the modification unit 13 generates a setting file for modifying the target application so that, when each target (for example, a method) whose target name is included in the target list generated by the target selection unit 12 is called, the execution trace of that target is output (S104). The modification unit 13 then applies the setting file to the source code of the target application, modifying the source code so that the execution trace of each target in the target list is output when that target is called, and generates a modified binary code based on the modified source code (S105).
  • Steps S104 and S105 can be realized, for example, by creating and using a script that embeds print statements, using the mechanism of AspectJ. If the target level is the method, the code is modified so that a print statement is invoked at the beginning of each target method.
  • the execution trace should include the time when the target was called, the target name, and so on.
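  • As a hedged illustration of steps S104 and S105, the sketch below generates an AspectJ-style aspect source file from the target list when the targets are methods. The aspect name, the trace format, and the assumption that target names are fully qualified method names are illustrative and not taken from the document.

```python
ASPECT_TEMPLATE = """public aspect ExecutionTraceAspect {{
{advices}}}
"""

ADVICE_TEMPLATE = """    before(): execution(* {method}(..)) {{
        // execution trace: time the target was called + target name
        System.out.println(System.currentTimeMillis() + " TRACE {method}");
    }}
"""

def generate_trace_aspect(target_list, out_path="ExecutionTraceAspect.aj"):
    """Emit an aspect that prints an execution-trace line at the beginning
    of every method whose name appears in the target list (sketch)."""
    advices = "".join(ADVICE_TEMPLATE.format(method=m) for m in target_list)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(ASPECT_TEMPLATE.format(advices=advices))

# generate_trace_aspect(["com.example.OrderService.register",   # hypothetical names
#                        "com.example.OrderDao.insert"])
```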
  • the modified version of the target application is applied to the commercial environment.
  • As a result, as the user uses the target application in the commercial environment, the target application outputs run-time information, such as the conventional logs and the sequence of execution traces, to a predetermined file.
  • When a failure (hereinafter referred to as the "target failure") occurs in the target application in the commercial environment, the verification environment acquires the run-time information output by the target application before the target failure occurred.
  • the acquisition of run-time information may be performed by any method.
  • the target application may automatically upload the run-time information to the verification environment, or the run-time information may be manually sent from the commercial environment to the verification environment.
  • the run-time information acquired by the verification environment may be limited to the information for a predetermined period before the occurrence of the target failure.
  • FIG. 6 is a flowchart for explaining an example of the processing procedure executed by the failure analysis support device 10 with respect to Part 2 and Part 3.
  • In step S201, the failure analysis support device 10 receives input information from the user of the verification environment. The input information includes the conventional logs and the sequence of execution traces acquired from the commercial environment, the failure occurrence date and time identified by the user based on the conventional logs, and so on. For example, the time at which the error message corresponding to the target failure was recorded in the conventional log is identified as the failure occurrence date and time.
  • the failure path candidate identification unit 14 generates failure path candidates based on the sequence of execution traces included in the input information and the failure occurrence date and time (S202).
  • A failure path is a sequence (execution order) of targets (for example, methods) that were executed (called) up to the failure occurrence date and time, identified based on the sequence of execution traces.
  • Specifically, for each CFG of each function of the target application, the failure path candidate identification unit 14 generates failure path candidates indicating the call relationships at the target level, based on the sequence of execution traces up to the failure occurrence date and time and the trace pattern designated by the user. For example, when the trace pattern designated by the user is the A pattern and the sequence of execution traces is "aa, dd", that sequence should consist of execution traces of every other level in the depth direction of the CFG (that is, a sequence of non-contiguous execution traces). Therefore, for example, for the CFG (csv.txt, cfg.txt) in FIG. 5, three sequences are generated as failure path candidates.
  • Each failure path candidate includes, based on the CFG, the target names of targets that may have been executed between the targets included in the sequence of execution traces.
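  • One way such candidates could be assembled is sketched below: for each pair of consecutive observed targets, short paths between them are enumerated in the CFG, and the per-gap possibilities are combined. The two-hop bound reflects the A pattern's every-other-level selection and is an assumption, as are the function names.

```python
from itertools import product

def paths_between(adjacency, src, dst, max_hops=2):
    """Enumerate node sequences from src to dst within max_hops edges;
    under the A pattern, consecutive observed targets are expected to be
    about two hops apart, with one unobserved target in between."""
    results, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst and len(path) > 1:
            results.append(path)
            continue
        if len(path) > max_hops:            # prune: path already has max_hops edges
            continue
        for nxt in adjacency.get(node, []):
            if nxt not in path:             # avoid cycles in this sketch
                stack.append((nxt, path + [nxt]))
    return results

def failure_path_candidates(node_list, adjacency, trace_names):
    """Combine the per-gap possibilities into full target-level candidates."""
    ids = [node_list.index(name) for name in trace_names]   # assumes unique names
    per_gap = [paths_between(adjacency, a, b) for a, b in zip(ids, ids[1:])]
    candidates = []
    for combo in product(*per_gap):
        seq = [ids[0]]
        for segment in combo:
            seq.extend(segment[1:])         # drop the repeated start node of each gap
        candidates.append([node_list[i] for i in seq])
    return candidates
```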
  • Note that, in order to reduce the information output load, the execution traces include information about calls but not information about returns from those calls; this is because, when the CFG is combined with the call information, the return information is unnecessary in most cases.
  • the failure path candidate identification unit 14 may also refer to the source code or binary code of the target application to generate failure path candidates.
  • Subsequently, the test case generation unit 15 identifies, for each failure path candidate, all possible sequences (paths) at the statement level, and generates, for each identified sequence, a test case for reproducing that sequence. The content of a test case is a group of instructions for executing the identified sequence of statements. The test case generation unit 15 may identify the statement-level sequences that can be executed within each failure path based on the source code or binary code of the target application.
  • the verification unit 16 executes the loop process L1 including steps S204 to S206 for each test case included in the generated test case group.
  • the test case targeted for processing in the loop processing is referred to as a "target test case”.
  • In step S204, the verification unit 16 executes the target test case against the clone of the commercial environment constructed in the verification environment. Subsequently, the verification unit 16 determines whether the same phenomenon as the target failure has occurred as a result of executing the target test case (S205). Such a determination can be made by comparing the conventional log output by the target application as a result of executing the target test case (hereinafter referred to as the "verification log") with the conventional log included in the input information (hereinafter referred to as the "commercial log"). For example, if the verification log matches the portion of the commercial log corresponding to the target test case, it may be determined that the same phenomenon as the target failure has occurred.
  • Here, the match may tolerate, for example, differences in parameters that vary with the execution timing or with the executing user (login user). Alternatively, it may be determined that the same phenomenon as the target failure has occurred when the verification log contains the same error message as the error message corresponding to the target failure.
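  • A minimal sketch of such a comparison is shown below; the normalization patterns for timestamps and login users are illustrative assumptions, not formats defined by the document.

```python
import re

# Parameters that legitimately differ between runs (timestamps, login user, ...).
NORMALIZERS = [
    (re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}"), "<TIME>"),
    (re.compile(r"user=\S+"), "user=<USER>"),
]

def normalize(line):
    """Mask run-dependent parameters before comparing log lines."""
    for pattern, placeholder in NORMALIZERS:
        line = pattern.sub(placeholder, line)
    return line.strip()

def same_phenomenon(verification_log, commercial_log_portion, error_message=None):
    """Return True if the verification log matches the corresponding portion of
    the commercial log after normalization, or contains the target failure's
    error message."""
    v = [normalize(l) for l in verification_log]
    c = [normalize(l) for l in commercial_log_portion]
    if v == c:
        return True
    if error_message is not None:
        return any(error_message in line for line in verification_log)
    return False
```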
  • When the same phenomenon as the target failure has occurred (Yes in S205), the verification unit 16 outputs the target test case and the like (S206).
  • the file containing the target test case may be saved in a predetermined location (folder or the like) in the auxiliary storage device 102.
  • a user in the verification environment (for example, a maintenance person, etc.) can reproduce the target failure by executing the target test case (that is, the statement level failure path).
  • As described above, according to the present embodiment, a test case for reproducing a failure of the target application can be generated based on the execution traces of the target application. Therefore, the dependence on external input to the program when reproducing a failure of the target application can be reduced, and failure analysis of programs such as the target application can be made more efficient.
  • Non-Patent Document 3 discloses "log4j", an existing product with which the trace level can be set.
  • Log4j is a logging API for Java (registered trademark) programs developed in the Jakarta project. If a log output level is set in the configuration file, all logs at that level or higher are output. As for the log output levels, the arrangement shown in FIG. 7 is common. However, although the log output level can be set with log4j, the developer needs to select the code to be monitored and to add, to that code, the content to be output as a log and the output timing (the log output level).
  • In contrast, in the present embodiment, the trace level can be set by the level (class, method, statement) of the components of the target application. Furthermore, it is not necessary to select the log output locations in the code to be monitored, nor to set the output content and timing one by one.
  • the target application is an example of a predetermined program.
  • the code analysis unit 11 is an example of an analysis unit.
  • the target selection unit 12 is an example of the selection unit.
  • the failure path candidate identification unit 14 is an example of an identification unit.
  • the test case generation unit 15 is an example of the generation unit.
  • the verification unit 16 is an example of a determination unit.
  • 10 Failure analysis support device, 11 Code analysis unit, 12 Target selection unit, 13 Modification unit, 14 Failure path candidate identification unit, 15 Test case generation unit, 16 Verification unit, 100 Drive device, 101 Recording medium, 102 Auxiliary storage device, 103 Memory device, 104 CPU, 105 Interface device, 106 Display device, 107 Input device, B Bus

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

This failure analysis assistance device comprises: an analysis unit which analyzes executable paths for the calling relationships between components of a program; a selection unit which, on the basis of a prescribed rule, selects, from among the components included in the paths, the components to be caused to output information indicating traces of their execution; an altering unit which alters the program so that the components selected by the selection unit output the information; and a generation unit which, on the basis of the paths and of the component sequence indicated by the information output by the altered program up to the occurrence of a failure in the program, identifies a sequence of statements for causing the failure and generates a test case for executing the sequence. The failure analysis assistance device thereby improves the efficiency of program failure analysis.

Description

Failure analysis support device, failure analysis support method, and program

The present invention relates to a failure analysis support device, a failure analysis support method, and a program.
As methods for reproducing failures that occur in a program, there are an approach that reproduces the failure based on the user's operations (for example, Non-Patent Document 1) and an approach that reapplies input data to the target (for example, Non-Patent Document 2).

Specifically, the approach of Non-Patent Document 1 automatically reproduces an application crash from a bug report.

The approach of Non-Patent Document 2 reconstructs the state of an object efficiently and accurately as a unit test by utilizing time-travel debugging, and uses differential analysis of code coverage data.

In either case, the existing methods reproduce a failure by recording the inputs (external input data) to the application and reapplying them.

However, because the approach of reproducing a failure by reapplying inputs (external input) to the application is indirect, it is difficult to apply to a commercial environment in which the reproducibility of the failure is non-deterministic.

For this reason, in existing failure analysis, maintenance personnel form multiple hypotheses that could lead to the failure from the information indicating the failure situation (error messages and logs), and try the hypotheses one by one in a verification environment to check whether the failure is reproduced. Such a method, however, requires a large amount of work.

The present invention has been made in view of the above points, and an object of the present invention is to improve the efficiency of failure analysis of a program.

To solve the above problem, the failure analysis support device includes: an analysis unit that analyzes executable paths of the call relationships among the components of a program; a selection unit that selects, based on a predetermined rule, the components that are to output information indicating traces of their execution from among the components included in the paths; a modification unit that modifies the program so that the components selected by the selection unit output the information; and a generation unit that identifies a sequence of statements for causing a failure, based on the paths and on the sequence of components indicated by the information output from the program modified by the modification unit up to the occurrence of the failure of the program, and generates a test case for executing the sequence.

This makes it possible to streamline failure analysis of a program.
FIG. 1 is a diagram for explaining an overall picture of the method proposed in an embodiment of the present invention.
FIG. 2 is a diagram showing a hardware configuration example of the failure analysis support device 10 in the embodiment of the present invention.
FIG. 3 is a diagram showing a functional configuration example of the failure analysis support device 10 in the embodiment of the present invention.
FIG. 4 is a flowchart for explaining an example of the processing procedure that the failure analysis support device 10 executes for Part 1.
FIG. 5 is a diagram showing an example of a CFG.
FIG. 6 is a flowchart for explaining an example of the processing procedure that the failure analysis support device 10 executes for Parts 2 and 3.
FIG. 7 is a diagram for explaining log output levels.
In the present embodiment, information indicating the execution traces of the components of an application program (hereinafter referred to as the "target application") that were executed before the target application failed in the commercial environment is collected. Using the collected information, test cases are automatically generated and executed in the verification environment, and it is finally determined whether the same events as at the time of the failure (conventional logs or error messages) appear. This provides an approach that reproduces the failure directly.

However, collecting the execution information leading up to a failure places a load on the commercial environment, and the acceptable load limit is likely to differ depending on the site (commercial environment) in which the target application is operated.

Therefore, in the present embodiment, before information is collected in the commercial environment, a mechanism is prepared in the verification environment that allows the components that are to output execution traces to be selected from among the components of the target application.

The commercial environment refers to a computer or computer system in which a user who has purchased the target application uses the target application. The verification environment, on the other hand, refers to a computer or computer system used by the developer or maintenance personnel of the target application to analyze failures of the target application.

Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a diagram for explaining an overall picture of the method proposed in the embodiment of the present invention. As shown in FIG. 1, the proposed method is composed of three parts, Parts 1 to 3.
[Part 1: Recording of run-time information]
In the verification environment, the structure of the entire target application is analyzed, the components that are to output traces of execution (execution traces) (hereinafter referred to as "targets") are narrowed down from among the components of the target application, and a list of the targets (hereinafter referred to as the "target list") is generated. Examples of the types of components of the target application include classes, methods, and statements. That is, the components of the target application form a hierarchy of granularities: a class is the component at the highest level (coarsest granularity), and a statement is the component at the lowest level (finest granularity). One of these levels (class, method, or statement) is selected as the target granularity. After that, using the generated target list, the target application executed in the commercial environment is changed (modified) in the verification environment so that the call sequence between targets (the execution traces) can be acquired. As a result, when the target application is used in the commercial environment, the execution traces of the components corresponding to the targets are acquired (collected).

[Part 2: Acquisition of the failure path]
After a failure occurs in the target application in the commercial environment, a call graph between targets is generated in the verification environment as a failure path candidate for each function of the target application, based on the execution traces collected in Part 1. Here, a function of the target application means, for example, the unit of processing from an input by the user until the corresponding output is produced, and is a concept delimited by a pair of input and output. For example, from the viewpoint of the target application's GUI (Graphical User Interface), functions may be distinguished per menu item.

[Part 3: Automatic reproduction and determination of the failure]
Using the failure path candidates (the call graphs between targets) generated in Part 2, test cases that cover the call sequences at the statement level are created. The created test cases are then executed in the verification environment to reproduce the failure.
To carry out the above, the verification environment requires the following inputs:
(1) the source code / binary code of the target application;
(2) the setting of the target level (granularity) (Class / Method / Statement);
(3) the conventional logs at the time of the failure (including error messages);
(4) a clone of the commercial environment (to build the verification environment).
The conventional logs in (3) are the logs that the target application outputs by default, separately from the execution traces, and include the error messages at the time of the failure.

The failures that can be handled in the present embodiment are those in which the call sequence (execution trace) differs from normal operation at the statement level; failures in which the program behaves exactly as it does during normal operation are not covered. For example, failures in which exception handling or functions that would not normally be executed are executed are targeted, whereas failures in which resources are gradually depleted, such as memory leaks, are excluded.
Next, the computer (failure analysis support device 10) that functions as the verification environment in the present embodiment will be described.

FIG. 2 is a diagram showing a hardware configuration example of the failure analysis support device 10 in the embodiment of the present invention. The failure analysis support device 10 of FIG. 2 has a drive device 100, an auxiliary storage device 102, a memory device 103, a CPU 104, an interface device 105, a display device 106, an input device 107, and the like, which are connected to one another by a bus B.

The program that realizes the processing in the failure analysis support device 10 is provided on a recording medium 101 such as a CD-ROM. When the recording medium 101 storing the program is set in the drive device 100, the program is installed from the recording medium 101 into the auxiliary storage device 102 via the drive device 100. However, the program does not necessarily have to be installed from the recording medium 101, and may be downloaded from another computer via a network. The auxiliary storage device 102 stores the installed program and also stores necessary files, data, and the like.

When an instruction to start the program is given, the memory device 103 reads the program from the auxiliary storage device 102 and stores it. The CPU 104 realizes the functions of the failure analysis support device 10 according to the program stored in the memory device 103. The interface device 105 is used as an interface for connecting to a network. The display device 106 displays a GUI (Graphical User Interface) or the like provided by the program. The input device 107 is composed of a keyboard, a mouse, and the like, and is used for inputting various operation instructions.

FIG. 3 is a diagram showing a functional configuration example of the failure analysis support device 10 in the embodiment of the present invention. In FIG. 3, the failure analysis support device 10 includes a code analysis unit 11, a target selection unit 12, a modification unit 13, a failure path candidate identification unit 14, a test case generation unit 15, a verification unit 16, and the like. Each of these units is realized by processing that one or more programs installed in the failure analysis support device 10 cause the CPU 104 to execute.
The processing procedure executed by the failure analysis support device 10 will now be described. FIG. 4 is a flowchart for explaining an example of the processing procedure that the failure analysis support device 10 executes for Part 1.

In step S101, the code analysis unit 11 reads, in the verification environment, the source code or binary code of the target application (hereinafter referred to as the "target application code"), analyzes the executable paths of the call relationships among the components of the target application based on the target application code, and generates a graph showing those paths (a CFG (Control Flow Graph), used here as a call graph) for each function of the target application.

FIG. 5 is a diagram showing an example of a CFG. Each node in the CFG shown in FIG. 5 corresponds to a component at the target level. That is, when "Class" is specified as the target level, each node corresponds to a class constituting the target application; when "Method" is specified, each node corresponds to a method of each class; and when "Statement" is specified, each node corresponds to a statement constituting each method. L0 to L3 on the left side of FIG. 5 indicate the levels in the depth direction of the CFG.

The CFG can be generated using known techniques. In the present embodiment, the case where "Method" is designated in advance by the user as the target level is described. In this case, for example, the code analysis unit 11 analyzes the code line by line for each file constituting the target application code; each time a method call is found, it adds a new node corresponding to the called method to the CFG, and adds to the CFG an edge directed from the node corresponding to the calling method to the node corresponding to the called method. If the target level is the statement, a node is created for each statement; if the target level is the class, a node is created for each class. In that case, for example, every time the code analysis unit 11 finds a call to a method of another class from within a method of one class, it adds a node for the called class to the CFG, and adds to the CFG an edge directed from the node corresponding to the calling class to the node corresponding to the called class.

With the generation of the CFG, the code analysis unit 11 generates two files for storing the information indicating the CFG.
The first file stores the list of the target names corresponding to the nodes of the CFG (hereinafter referred to as the "node list"). In the present embodiment, the file name of this file is "csv.txt". A specific example of the contents of the file is as follows:
['aa','bb', ..., 'ff', ...]
Here, each of 'aa', 'bb', ..., 'ff' is a target name. When the target level is the method, the target names are method names; when the target level is the class, they are class names; and when the target level is the statement, they are statements.

The second file stores the list of the adjacency relations of the nodes of the CFG (when the target granularity is the method, the method call relations) (hereinafter referred to as the "adjacency list"). In the present embodiment, the file name of this file is "cfg.txt". A specific example of the contents of the file is as follows:
[[0, [1,2,3]], [2, [3,4]], ...]
Each numerical value in the adjacency list (hereinafter referred to as a "node number") corresponds to the position of a method name in the node list, counting from 0 (that is, zero-origin). For example, [0, [1, 2, 3]] indicates that the 0th method in the node list calls the 1st, 2nd, and 3rd methods in the node list.

The node list and the adjacency list are generated for each CFG, and a CFG is generated for each function of the target application; therefore, a node list and an adjacency list are generated for each function of the target application. CFGs corresponding to different functions have different root nodes. FIG. 3 shows an example in which CFGs (cfg_A, cfg_B, cfg_C, cfg_D) are generated for the functions A, B, C, and D, respectively.
Subsequently, the target selection unit 12 accepts, from the user (a user of the verification environment), input of the functions to be monitored (hereinafter referred to as "target functions") among the functions of the target application, and of a trace pattern (S102). For example, in the example of FIG. 3, any one or more of the functions A, B, C, and D are selected as target functions; only some functions may be selected as target functions, or all functions may be selected. Since a CFG is generated for each function, the target selection unit 12 may display on the display device 106 a GUI that presents the generated CFGs as options, and may set the function of the CFG corresponding to the option selected by the user as a target function.

A trace pattern is a rule composed of a combination of a criterion for selecting the nodes to be targets (trace targets) among the nodes constituting the CFG of a target function, and the information to be output in the execution traces for those targets. In the present embodiment, one of the following four trace patterns, the A pattern to the D pattern, can be selected.

The A pattern is a trace pattern in which the nodes of every other level in the depth direction of the CFG (that is, nodes that are not adjacent in the CFG) are selected as targets, and the date and time at which a target was executed and the target name are included in the output information. In the case of the A pattern, the nodes surrounded by thick lines in the CFG of FIG. 5 are the targets. The A pattern has the feature that the recording load of the execution traces is small and the reproduction time can also be kept short.

The B pattern is a trace pattern in which the branch-destination nodes of each branch in the CFG are selected as targets, and the date and time at which a target was executed and the target name are included in the output information. The B pattern has the feature that the recording load of the execution traces is relatively large but the reproduction time is short.

The C pattern is a trace pattern in which the A pattern is applied for a certain period and the targets are then revised based on execution frequency. For example, targets that are executed frequently under the A pattern may be excluded, and the nodes before and after targets that are executed infrequently may be added as targets. The C pattern has the feature that the recording load of the execution traces can be made even smaller and the reproduction time can be further suppressed.

The D pattern is a trace pattern in which, when the A, B, or C pattern is applied, the values of the target's arguments and return value are additionally included in the output information. The D pattern has the feature that the recording load of the execution traces is large but the reproduction time can be shortened.

Subsequently, the target selection unit 12 selects the target nodes from the nodes of the CFG of each target function based on the trace pattern designated by the user, and generates the list of the target names of the selected nodes (that is, the "target list") (S103). Here, the case where the A pattern is designated as the trace pattern will be described as an example.

Specifically, when the A pattern is designated, the target selection unit 12 performs a depth-first search over the adjacency list stored in the cfg.txt of the target function, selects the node numbers whose depth is an even number, and generates, from the node list stored in the csv.txt of the target function, the list of target names (for example, method names) corresponding to the selected node numbers. Therefore, for example, if the CFG in FIG. 5 is the CFG of the target function, the list of the target names surrounded by thick lines is generated; that is, nodes that are not adjacent in the CFG are selected as targets.

Note that the code analysis unit 11 may always generate a CFG whose nodes are the targets of the lowest level (finest granularity; statements in the present embodiment), regardless of the designation of the target level (granularity). In that case, the target selection unit 12 may transform (aggregate) the CFG so that it corresponds to the designated target level and generate the target list based on the transformed CFG; that is, by aggregating the nodes of the lowest-level CFG, a CFG of another level can be generated using known techniques.
 続いて、改変部13は、ターゲット選択部12によって生成されたターゲットリストにターゲット名が含まれる各ターゲット(例えば、メソッド)が呼び出された際に当該ターゲットの実行痕跡を出力させるように対象アプリを改変するための設定ファイルを生成する(S104)。 Subsequently, the modification unit 13 outputs the target application so that the execution trace of the target is output when each target (for example, a method) whose target name is included in the target list generated by the target selection unit 12 is called. Generate a setting file for modification (S104).
 続いて、改変部13は、設定ファイルを対象アプリのソースコードに適用することで、ターゲットリストにターゲット名が含まれる各ターゲットが呼び出された際に、当該ターゲットの実行痕跡が出力されるように当該ソースコードを改変し、改変後のソースコードに基づいて改変版のバイナリコードを生成する(S105)。 Subsequently, the modification unit 13 applies the setting file to the source code of the target application so that when each target whose target list includes the target name is called, the execution trace of the target is output. The source code is modified to generate a modified binary code based on the modified source code (S105).
Steps S104 and S105 can be realized, for example, by creating and using a script that embeds print statements, using the AspectJ mechanism. If the target layer is the method level, the code is modified so that a print statement is called at the beginning of each method.
The execution trace is made to include the time at which the target was called, the target name, and the like.
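By way of illustration only, an @AspectJ-style aspect such as the following could realize the modification of steps S104 and S105; the pointcut expression, package name, and trace format are assumptions and would in practice be derived from the setting file and target list.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Illustrative sketch only: prints an execution trace (timestamp and target name)
// at the entry of each targeted method.
@Aspect
public class ExecutionTraceAspect {

    // In practice the pointcut would be generated so that it matches exactly the
    // targets whose names appear in the target list.
    @Before("execution(* com.example.app..*(..))")
    public void recordTrace(JoinPoint jp) {
        String targetName = jp.getSignature().toShortString();
        System.out.println(System.currentTimeMillis() + " CALL " + targetName);
    }
}
```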
After that, the modified target application is deployed to the commercial environment. As a result, as users use the target application in the commercial environment, the target application outputs run-time information, such as conventional logs and a sequence of execution traces, to predetermined files. When a failure (hereinafter referred to as a "target failure") occurs in the target application in the commercial environment, the verification environment acquires the run-time information output from the target application up to the occurrence of the target failure. The run-time information may be acquired by any method. For example, the target application may automatically upload the run-time information to the verification environment, or the run-time information may be sent manually from the commercial environment to the verification environment. Further, the run-time information acquired by the verification environment may be limited to that of a predetermined period before the occurrence of the target failure.
FIG. 6 is a flowchart for explaining an example of the processing procedure executed by the failure analysis support device 10 with respect to Part 2 and Part 3.
In step S201, the failure analysis support device 10 receives input information from a user in the verification environment. The input information includes the conventional logs and the sequence of execution traces acquired from the commercial environment, as well as the failure occurrence date and time specified by the user based on the conventional logs, and the like. For example, the recording time of the error message corresponding to the target failure, included in the conventional logs, is specified as the failure occurrence date and time.
Subsequently, the failure path candidate identification unit 14 generates failure path candidates based on the sequence of execution traces included in the input information and the failure occurrence date and time (S202). A failure path is a sequence (execution order) of targets (for example, methods) executed (called) up to the failure occurrence date and time, identified based on the sequence of execution traces.
Specifically, for each CFG of each function of the target application, the failure path candidate identification unit 14 generates failure path candidates indicating the call relationships at the target level, based on the sequence of execution traces up to the failure occurrence date and time and the trace pattern specified by the user. For example, when the trace pattern specified by the user is the A pattern and the sequence of execution traces is "aa, dd", the sequence of execution traces should consist of traces taken from every other level in the depth direction of the CFG (that is, a sequence of discontinuous execution traces). Therefore, for example, for the CFG (csv.txt, cfg.txt) in FIG. 5, the following three sequences are generated as failure path candidates:
aa → bb → dd
aa → bb → dd → ee
aa → bb → dd → ff
That is, for a given CFG, the paths that can be taken consistently with the execution traces are generated as failure path candidates. Therefore, based on the CFG, the failure path candidates include the target names of targets that may have been executed between the targets included in the sequence of execution traces.
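A minimal sketch of this candidate generation under the A pattern might look as follows; the CFG map and target names are assumptions used only to reproduce the "aa, dd" example above.

```java
import java.util.*;

// Illustrative sketch: enumerate CFG paths whose nodes at even positions match the
// recorded trace, optionally extended by one untraced node after the last hit.
public class FailurePathCandidates {

    public static List<List<String>> candidates(Map<String, List<String>> cfg,
                                                List<String> trace) {
        List<List<String>> results = new ArrayList<>();
        walk(cfg, trace, trace.get(0), new ArrayList<>(List.of(trace.get(0))), results);
        return results;
    }

    private static void walk(Map<String, List<String>> cfg, List<String> trace,
                             String node, List<String> path, List<List<String>> out) {
        int evenHits = (path.size() + 1) / 2;            // traced nodes matched so far
        if (evenHits == trace.size()) {
            out.add(new ArrayList<>(path));              // e.g. aa -> bb -> dd
            for (String child : cfg.getOrDefault(node, List.of())) {
                List<String> extended = new ArrayList<>(path);
                extended.add(child);                      // e.g. aa -> bb -> dd -> ee
                out.add(extended);
            }
            return;
        }
        for (String child : cfg.getOrDefault(node, List.of())) {
            boolean evenPos = path.size() % 2 == 0;       // child would land on an even depth
            if (evenPos && !child.equals(trace.get(evenHits))) continue;
            path.add(child);
            walk(cfg, trace, child, path, out);
            path.remove(path.size() - 1);
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> cfg = Map.of(
                "aa", List.of("bb", "gg"),
                "bb", List.of("dd"),
                "dd", List.of("ee", "ff"));
        System.out.println(candidates(cfg, List.of("aa", "dd")));
        // [[aa, bb, dd], [aa, bb, dd, ee], [aa, bb, dd, ff]]
    }
}
```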
In the present embodiment, for convenience of explanation, an example has been described in which the execution traces include information about calls but, in order to suppress the information output load, do not include information about returns from calls. This is because, if the CFG and the call information are combined, the return information is unnecessary in most cases.
For this reason, complicated sequences involving round trips are not included in the above example of failure path candidates. For example, in the above example, a sequence such as aa → gg → bb → dd is also possible, but such a sequence is omitted from the example.
If the source code or binary code of the target application is available, it is possible to determine complicated calls that go back and forth, based on the order between methods and the dependencies between methods that are not included in the CFG. Therefore, the failure path candidate identification unit 14 may also refer to the source code or binary code of the target application to generate failure path candidates.
Subsequently, for each failure path candidate, the test case generation unit 15 identifies all sequences (paths) that can be taken at the statement level for that failure path, and generates, for each identified sequence, a test case for reproducing that sequence (S203). For example, when the target layer is the method level, a failure path is a sequence of methods. Within one method, a plurality of sequences (paths) may exist at the statement level due to branching or the like. For example, in a failure path indicating a sequence of two methods, if there are three statement-level sequences within one method and two statement-level sequences within the other method, 3 × 2 = 6 statement-level sequences are identified for that failure path. Therefore, six test cases are generated for that failure path. The content of a test case is a group of instructions for executing the identified sequence of statements.
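As an illustration of the 3 × 2 = 6 expansion described above, the statement-level sequences can be obtained as the Cartesian product of the per-method statement paths; the following sketch assumes those per-method paths have already been computed, and its class and method names are assumptions for illustration only.

```java
import java.util.*;

// Illustrative sketch: combine per-method statement-level paths into full sequences.
public class StatementSequenceExpander {

    /** perMethodPaths.get(i) holds the possible statement paths of the i-th method on the failure path. */
    public static List<List<String>> expand(List<List<List<String>>> perMethodPaths) {
        List<List<String>> sequences = new ArrayList<>();
        sequences.add(new ArrayList<>());                 // start with one empty prefix
        for (List<List<String>> methodPaths : perMethodPaths) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> prefix : sequences) {
                for (List<String> path : methodPaths) {
                    List<String> combined = new ArrayList<>(prefix);
                    combined.addAll(path);                // append one statement path of this method
                    next.add(combined);
                }
            }
            sequences = next;                             // e.g. 3 paths x 2 paths -> 6 sequences
        }
        return sequences;
    }
}
```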
If the CFG is generated using a layer (methods or classes) larger than statements as its unit (nodes), the test case generation unit 15 may identify the statement-level sequences that can be executed in the failure path based on the source code or binary code of the target application.
Subsequently, the verification unit 16 executes loop processing L1, which includes steps S204 to S206, for each test case included in the generated test case group. Hereinafter, the test case being processed in the loop processing is referred to as the "target test case".
In step S204, the verification unit 16 executes the target test case against a clone of the commercial environment constructed in the verification environment. Subsequently, the verification unit 16 determines whether the same phenomenon as the target failure has occurred as a result of executing the target test case (S205). Such a determination can be made by comparing the conventional log output from the target application by executing the target test case (hereinafter referred to as the "verification log") with the conventional log included in the input information (hereinafter referred to as the "commercial log"). For example, when the verification log matches the portion of the commercial log corresponding to the target test case, it may be determined that the same event as the target failure has occurred. Here, a match may allow parameters that vary depending on the execution timing or the executing user (logged-in user) to differ. Further, it may be determined that the same phenomenon as the target failure has occurred when the verification log contains the same error message as the error message corresponding to the target failure.
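A minimal sketch of the comparison in step S205 is shown below; the assumption that timestamps and logged-in-user tokens are masked before comparison, as well as the masking patterns themselves, are illustrative only.

```java
import java.util.*;
import java.util.stream.Collectors;

// Illustrative sketch: check whether the verification log reproduces the target failure,
// ignoring parameters that vary with execution timing or the executing user.
public class FailureReproductionChecker {

    static String normalize(String line) {
        return line.replaceAll("\\d{4}-\\d{2}-\\d{2}[ T]\\d{2}:\\d{2}:\\d{2}\\S*", "<TIME>")
                   .replaceAll("user=\\S+", "user=<USER>");
    }

    /** True if the (normalized) verification log appears in the (normalized) commercial log. */
    public static boolean sameFailure(List<String> verificationLog, List<String> commercialLog) {
        List<String> v = verificationLog.stream().map(FailureReproductionChecker::normalize)
                                        .collect(Collectors.toList());
        List<String> c = commercialLog.stream().map(FailureReproductionChecker::normalize)
                                      .collect(Collectors.toList());
        return Collections.indexOfSubList(c, v) >= 0;
    }
}
```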
When the same phenomenon as the target failure has occurred (Yes in S205), the verification unit 16 outputs the target test case and the like (S206). For example, a file containing the target test case may be saved in a predetermined location (a folder or the like) in the auxiliary storage device 102.
A user of the verification environment (for example, maintenance personnel) can reproduce the target failure by executing the target test case (that is, the statement-level failure path).
As described above, according to the present embodiment, it is possible to generate a test case for reproducing a failure of the target application based on the execution traces of the target application. Therefore, the dependence on external inputs to the program when reproducing a failure of the target application can be reduced, and failure analysis of programs such as the target application can be made more efficient.
Non-Patent Document 3 discloses a commercially available product, "log4j", that sets and reduces the trace level. "log4j" is a logging API for Java (registered trademark) programs developed in the Jakarta project. If a log output level is set in the configuration file, all logs of levels higher than that level are output. As for log output levels, the convention shown in FIG. 7 is common.
With "log4j", the log output level can be set. However, to realize log output, the developer needs to select the code to be monitored and add, to that code, the content to be output as a log and its output timing (the output level of that log).
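For contrast, typical log4j usage as described above might look as follows; the class, messages, and package are illustrative assumptions and are not part of the embodiment.

```java
import org.apache.log4j.Logger;

// Illustrative example of conventional log4j logging: the developer chooses the
// monitored code, writes each message, and picks its level by hand.
public class OrderService {

    private static final Logger logger = Logger.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        logger.debug("placeOrder called: " + orderId);   // emitted only if the level is DEBUG or lower
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            logger.error("placeOrder failed: " + orderId, e);
            throw e;
        }
    }
}
```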
On the other hand, in the present embodiment, the trace level can be set according to the layer (class, method, statement) of the components of the target application. Further, it is not necessary to select the log output locations in the monitored code, nor to set the content and timing of each output individually.
The present embodiment may also be applied to programs other than application programs.
In the present embodiment, the target application is an example of a predetermined program. The code analysis unit 11 is an example of an analysis unit. The target selection unit 12 is an example of a selection unit. The failure path candidate identification unit 14 is an example of an identification unit. The test case generation unit 15 is an example of a generation unit. The verification unit 16 is an example of a determination unit.
Although an embodiment of the present invention has been described in detail above, the present invention is not limited to such a specific embodiment, and various modifications and changes are possible within the scope of the gist of the present invention described in the claims.
10 Failure analysis support device
11 Code analysis unit
12 Target selection unit
13 Modification unit
14 Failure path candidate identification unit
15 Test case generation unit
16 Verification unit
100 Drive device
101 Recording medium
102 Auxiliary storage device
103 Memory device
104 CPU
105 Interface device
106 Display device
107 Input device
B Bus

Claims (7)

1. A failure analysis support device comprising:
an analysis unit that analyzes executable paths with respect to call relationships among components of a program;
a selection unit that selects, based on a predetermined rule, components that are to output information indicating a trace of execution from among the components included in the paths;
a modification unit that modifies the program so that the components selected by the selection unit output the information; and
a generation unit that identifies a sequence of statements for causing a failure, based on the paths and on a sequence of the components indicated by the information output from the program modified by the modification unit up to the occurrence of the failure of the program, and generates a test case for executing the sequence.
2. The failure analysis support device according to claim 1, wherein
the components have a hierarchical structure, and
the analysis unit analyzes the executable paths with respect to the call relationships among components of a layer specified by a user.
3. The failure analysis support device according to claim 1 or 2, wherein
the selection unit selects, from among the components included in the paths, components that are not adjacent to one another in the paths,
the failure analysis support device further comprises an identification unit that identifies, based on the paths, components that may have been executed between the components included in the sequence of the components indicated by the information output from the program modified by the modification unit up to the occurrence of the failure of the program, and
the generation unit generates the test case based on the sequence that also includes the components identified by the identification unit.
4. The failure analysis support device according to any one of claims 1 to 3, further comprising a determination unit that executes the test case generated by the generation unit and determines whether or not the same event as the failure has occurred due to the execution of the test case.
5. The failure analysis support device according to claim 4, wherein the determination unit outputs the test case for which the same event as the failure has occurred.
6. A failure analysis support method comprising a computer executing:
an analysis procedure of analyzing executable paths with respect to call relationships among components of a program;
a selection procedure of selecting, based on a predetermined rule, components that are to output information indicating a trace of execution from among the components included in the paths;
a modification procedure of modifying the program so that the components selected by the selection procedure output the information; and
a generation procedure of identifying a sequence of statements for causing a failure, based on the paths and on a sequence of the components indicated by the information output from the program modified by the modification procedure up to the occurrence of the failure of the program, and generating a test case for executing the sequence.
7. A program for causing a computer to function as the failure analysis support device according to any one of claims 1 to 5.
PCT/JP2020/027375 2020-07-14 2020-07-14 Failure analysis assistance device, failure analysis assistance method, and program WO2022013944A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/027375 WO2022013944A1 (en) 2020-07-14 2020-07-14 Failure analysis assistance device, failure analysis assistance method, and program

Publications (1)

Publication Number Publication Date
WO2022013944A1 true WO2022013944A1 (en) 2022-01-20

Family

ID=79555392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/027375 WO2022013944A1 (en) 2020-07-14 2020-07-14 Failure analysis assistance device, failure analysis assistance method, and program

Country Status (1)

Country Link
WO (1) WO2022013944A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009070322A (en) * 2007-09-18 2009-04-02 Nec Corp Data processor, system, program and method
JP2012163997A (en) * 2011-02-03 2012-08-30 Nec System Technologies Ltd Failure analysis support system, failure analysis support method, and failure analysis support program
US20150278074A1 (en) * 2014-03-28 2015-10-01 International Business Machines Corporation Logging code generation and distribution


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP