WO2022013944A1 - Failure analysis support device, failure analysis support method, and program - Google Patents

Failure analysis support device, failure analysis support method, and program

Info

Publication number
WO2022013944A1
Authority
WO
WIPO (PCT)
Prior art keywords
failure
program
target
unit
sequence
Prior art date
Application number
PCT/JP2020/027375
Other languages
English (en)
Japanese (ja)
Inventor
振宇 徐
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to PCT/JP2020/027375 priority Critical patent/WO2022013944A1/fr
Publication of WO2022013944A1 publication Critical patent/WO2022013944A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance

Definitions

  • the present invention relates to a failure analysis support device, a failure analysis support method, and a program.
  • As methods for reproducing failures that occur in a program, there are an approach that reproduces the failure based on the user's operations (for example, Non-Patent Document 1) and an approach that reapplies input data to an object (for example, Non-Patent Document 2).
  • Non-Patent Document 1 automatically reproduces the application crash from the bug report.
  • Non-Patent Document 2 utilizes time travel debugging to efficiently and accurately reconstruct the state of an object as a unit test, and uses differential analysis of code coverage data.
  • That is, the existing methods realize the reproduction of the failure by recording the input (external input data) to the application and reapplying it.
  • the present invention has been made in view of the above points, and an object of the present invention is to improve the efficiency of failure analysis of a program.
  • In order to solve the above problem, the failure analysis support device has: an analysis unit that analyzes an executable route with respect to the call relationship of the components of a program; a selection unit that selects, based on a predetermined rule, components that are to output information indicating a trace of execution from among the components included in the route; a modification unit that modifies the program so that the components selected by the selection unit output the information; and a generation unit that identifies a sequence of statements for causing a failure, based on the route and on the sequence of components indicated by the information output by the program modified by the modification unit until the failure occurred in the program, and generates a test case for executing the sequence.
  • the failure analysis of the program can be streamlined.
  • In the present embodiment, information indicating the execution traces of the components of an application program (hereinafter referred to as the "target application") that were executed before the target application failed in a commercial environment is collected, a test case is automatically generated in a verification environment based on the collected information, the test case is executed, and finally it is determined whether or not the same event (conventional log or error message) as at the time of the failure appears. In this way, an approach that directly reproduces the failure is realized.
  • In addition, a mechanism is prepared by which the components that output the execution trace can be selected from among the components of the target application in the verification environment before the information is collected in the commercial environment.
  • the commercial environment refers to a computer or computer system in which a user who has purchased the target application uses the target application.
  • the verification environment refers to a computer or computer system used by the developer or maintenance person of the target application to analyze the failure of the target application.
  • FIG. 1 is a diagram for explaining an overall picture of the method proposed in the embodiment of the present invention. As shown in FIG. 1, the proposed method is composed of three parts, Part 1 to Part 3.
  • Part 1: Recording of run-time information
  • In Part 1, the structure of the entire target application is analyzed, and among the components of the target application, the components that are to output a trace of execution (execution trace) (hereinafter referred to as "targets") are narrowed down.
  • At this time, a list of targets (hereinafter referred to as a "target list") is generated.
  • Class
  • Method
  • Statement
  • The components of the target application have a hierarchical structure (granularity).
  • The class is the component at the highest level (coarsest granularity), and the statement is the component at the lowest level (finest granularity).
  • In the present embodiment, any one of these hierarchies (granularities), that is, class, method, or statement, is selected as the target.
  • In Part 1, using the generated target list, the target application to be executed in the commercial environment is changed (modified) in the verification environment so that the call sequence (execution trace) between the targets can be acquired.
  • In the commercial environment, the execution traces of the components corresponding to the targets among the components of the target application are then acquired (collected).
  • Part 2: Acquisition of the failure path
  • In Part 2, in the verification environment, a call graph between targets is generated as a failure path candidate for each function of the target application, based on the execution traces collected in Part 1.
  • A function of the target application means, for example, a unit of processing from an input by the user until an output is performed, and is a concept classified by a pair of an input and an output.
  • For example, the functions may be distinguished for each menu item.
  • Part 3: Automatic reproduction and determination of the failure
  • In Part 3, test cases that cover the call sequence at the statement level are created using the failure path candidates, which are the call graphs between targets generated in Part 2. After that, the created test cases are executed in the verification environment, and the failure is reproduced.
  • The following are used by the proposed method: (1) the source code / binary code of the target application; (2) the setting of the target hierarchy (granularity) (Class / Method / Statement); (3) the conventional log at the time of the failure (including the error message); and (4) a clone of the commercial environment (for building the verification environment).
  • The conventional log in (3) is a log that the target application has output from the beginning, separately from the execution trace, and includes the error message at the time of the failure.
  • Note that failures in which the call sequence (execution trace) differs from that at normal times at the statement level are included in the failures that can be dealt with, while failures in which the program behaves in the same way as at normal times are not. For example, failures in which exception handling or functions that would not normally be executed are executed are targeted, while failures such as memory leaks in which resources are gradually depleted are excluded.
  • Hereinafter, the failure analysis support device 10 that functions as the verification environment in the present embodiment will be described.
  • FIG. 2 is a diagram showing a hardware configuration example of the failure analysis support device 10 according to the embodiment of the present invention.
  • The failure analysis support device 10 of FIG. 2 has a drive device 100, an auxiliary storage device 102, a memory device 103, a CPU 104, an interface device 105, a display device 106, an input device 107, and the like, which are connected to each other by a bus B.
  • the program that realizes the processing in the failure analysis support device 10 is provided by a recording medium 101 such as a CD-ROM.
  • the program is installed in the auxiliary storage device 102 from the recording medium 101 via the drive device 100.
  • the program does not necessarily have to be installed from the recording medium 101, and may be downloaded from another computer via the network.
  • the auxiliary storage device 102 stores the installed program and also stores necessary files, data, and the like.
  • the memory device 103 reads a program from the auxiliary storage device 102 and stores it when there is an instruction to start the program.
  • the CPU 104 realizes the function related to the failure analysis support device 10 according to the program stored in the memory device 103.
  • the interface device 105 is used as an interface for connecting to a network.
  • the display device 106 displays a GUI (Graphical User Interface) or the like by a program.
  • the input device 107 is composed of a keyboard, a mouse, and the like, and is used for inputting various operation instructions.
  • FIG. 3 is a diagram showing a functional configuration example of the failure analysis support device 10 according to the embodiment of the present invention.
  • the failure analysis support device 10 includes a code analysis unit 11, a target selection unit 12, a modification unit 13, a failure path candidate identification unit 14, a test case generation unit 15, a verification unit 16, and the like. Each of these parts is realized by a process of causing the CPU 104 to execute one or more programs installed in the failure analysis support device 10.
  • FIG. 4 is a flowchart for explaining an example of a processing procedure executed by the failure analysis support device 10 with respect to Part 1.
  • In step S101, the code analysis unit 11 inputs, in the verification environment, the source code or binary code of the target application (hereinafter referred to as the "target application code"), analyzes the executable routes with respect to the call relationship of the components of the target application based on the target application code, and generates a graph (CFG (Control Flow Graph): call graph) showing the routes for each function of the target application.
  • FIG. 5 is a diagram showing an example of CFG.
  • Each node in the CFG shown in FIG. 5 corresponds to a component corresponding to the target. That is, when "Class” is specified as the target hierarchy, each node corresponds to the class constituting the target application. When “Method” is specified as the target hierarchy, each node corresponds to the method of each class. When “Statement” is specified as the target hierarchy, each node corresponds to the statement that constitutes each method.
  • L0 to L3 described on the left side of FIG. 5 indicate the levels in the depth direction of the CFG.
  • Specifically, the code analysis unit 11 analyzes the code line by line for each file constituting the target application code; each time a method call is found, it newly adds a node corresponding to the called method to the CFG and adds to the CFG an edge directed from the node corresponding to the calling method to the node corresponding to the called method. If the target hierarchy is the statement, a node is created for each statement; if the target hierarchy is the class, a node is created for each class.
  • In the latter case, every time the code analysis unit 11 finds a call to a method of another class from within a method of one class, it adds a node corresponding to the called class to the CFG and adds to the CFG an edge directed from the node corresponding to the calling class to the node corresponding to the called class.
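  • By way of illustration only, a minimal Java sketch of such method-level call-edge extraction from a single source file is shown below; it assumes the target application code is Java source, uses the JavaParser library rather than the analyzer of the present embodiment, and omits the symbol resolution a full implementation would need (the class name CallEdgeExtractor is hypothetical).

      import com.github.javaparser.StaticJavaParser;
      import com.github.javaparser.ast.CompilationUnit;
      import com.github.javaparser.ast.body.MethodDeclaration;
      import com.github.javaparser.ast.expr.MethodCallExpr;
      import java.io.File;
      import java.io.FileNotFoundException;
      import java.util.LinkedHashMap;
      import java.util.LinkedHashSet;
      import java.util.Map;
      import java.util.Set;

      // Minimal sketch: every method call found inside a method declaration becomes
      // a directed edge from the calling method to the called method.
      public class CallEdgeExtractor {

          public static Map<String, Set<String>> extract(File javaSourceFile) throws FileNotFoundException {
              Map<String, Set<String>> edges = new LinkedHashMap<>();
              CompilationUnit cu = StaticJavaParser.parse(javaSourceFile);
              for (MethodDeclaration caller : cu.findAll(MethodDeclaration.class)) {
                  for (MethodCallExpr call : caller.findAll(MethodCallExpr.class)) {
                      // Unqualified method names only; resolving the declaring class of
                      // the callee would require symbol resolution and is omitted here.
                      edges.computeIfAbsent(caller.getNameAsString(), k -> new LinkedHashSet<>())
                           .add(call.getNameAsString());
                  }
              }
              return edges;
          }
      }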
  • Along with the generation of the CFG, the code analysis unit 11 generates two files for storing information indicating the CFG.
  • the first file is a file for storing a list of target names (hereinafter, referred to as "node list") corresponding to each node of CFG.
  • the file name of the file is "csv.txt".
  • A specific example of the contents of the file is as follows: ['aa', 'bb', ..., 'ff', ...]
  • Here, each of 'aa', 'bb', ..., and 'ff' is a target name.
  • If the target hierarchy is the method, the target name is a method name; if the target hierarchy is the class, the target name is a class name; if the target hierarchy is the statement, the target name is a statement.
  • The second file is a file for storing a list of the adjacency relations of the nodes of the CFG (when the target granularity is the method, the method call relations) (hereinafter referred to as the "adjacency list").
  • the file name of the file is "cfg.txt”.
  • Specific examples of the contents of the file are as follows. [[0, [1,2,3]], [2, [3,4]], ...]
  • Each numerical value in the adjacency list (hereinafter referred to as "node number”) corresponds to the order of method names in the node list, and the beginning of this order is 0 (that is, 0 origin). For example, [0, [1, 2, 3]] indicates that the 0th method in the node list calls the 1st, 2nd, and 3rd methods in the node list.
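  • As an illustrative sketch of these two file formats (not taken from the present disclosure; the class and method names are hypothetical), the following minimal Java class holds a CFG as a node list and an adjacency list and writes them out in the csv.txt / cfg.txt formats exemplified above.

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.ArrayList;
      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;
      import java.util.stream.Collectors;

      // Minimal sketch: node list (target names, 0-origin) plus adjacency list
      // (node number -> called node numbers), written in the documented formats.
      public class CfgWriter {
          private final List<String> nodeList = new ArrayList<>();
          private final Map<Integer, List<Integer>> adjacency = new LinkedHashMap<>();

          // Returns the node number of a target name, registering the name if it is new.
          int nodeNumber(String targetName) {
              int index = nodeList.indexOf(targetName);
              if (index >= 0) return index;
              nodeList.add(targetName);
              return nodeList.size() - 1;
          }

          // Records one edge "caller calls callee".
          void addCall(String caller, String callee) {
              int from = nodeNumber(caller);
              int to = nodeNumber(callee);
              adjacency.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
          }

          // csv.txt, e.g. ['aa', 'bb', ..., 'ff']
          void writeNodeList(String path) throws IOException {
              String body = nodeList.stream()
                      .map(name -> "'" + name + "'")
                      .collect(Collectors.joining(", ", "[", "]"));
              Files.write(Paths.get(path), body.getBytes());
          }

          // cfg.txt, e.g. [[0, [1, 2, 3]], [2, [3, 4]]]
          void writeAdjacencyList(String path) throws IOException {
              String body = adjacency.entrySet().stream()
                      .map(entry -> "[" + entry.getKey() + ", " + entry.getValue() + "]")
                      .collect(Collectors.joining(", ", "[", "]"));
              Files.write(Paths.get(path), body.getBytes());
          }
      }

  • Keeping the node list 0-origin makes the node numbers in the adjacency list directly usable as indexes into the node list, as described above.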
  • FIG. 3 shows an example in which CFGs (cfg_A, cfg_B, cfg_C, cfg_D) are generated for each of the functions A, B, C and D.
  • Subsequently, the target selection unit 12 accepts, from the user (a user in the verification environment), the input of the function to be monitored (hereinafter referred to as the "target function") among the functions of the target application and of a trace pattern (S102).
  • As the target function, any one or more of the functions A, B, C, and D is selected. Only some of the functions may be selected as target functions, or all of the functions may be selected.
  • As described above, a CFG is generated for each function. Therefore, the target selection unit 12 may display on the display device 106 a GUI that presents the generated CFGs as options, and set the function related to the CFG corresponding to the option selected by the user as a target function.
  • The trace pattern refers to a rule composed of a combination of a criterion for selecting the nodes to be targets (trace targets) from among the nodes constituting the CFG of the target function and the information to be output to the execution trace regarding each target.
  • The A pattern is a trace pattern in which nodes in every other layer in the depth direction of the CFG (that is, nodes that are not adjacent in the CFG) are selected as targets, and in which the date and time when the target is executed and the target name are included in the output information.
  • In the example of FIG. 5, the nodes surrounded by thick lines are targeted.
  • the A pattern has a feature that the recording load of the execution trace is small and the reproduction time can be suppressed.
  • the B pattern is a trace pattern in which the branch destination node of each branch in CFG is selected as a target, and the date and time when the target is executed and the target name are included in the output information.
  • the B pattern has a feature that the recording load of the execution trace is relatively large, but the reproduction time is small.
  • the C pattern is a trace pattern in which the target is modified based on the execution frequency after the A pattern is applied for a certain period of time. For example, when the A pattern is applied, the target with high execution frequency may be excluded, and the nodes before and after the target with low execution frequency may be added to the target.
  • the C pattern has a feature that the recording load of the execution trace can be further reduced and the reproduction time can be further suppressed.
  • the D pattern is a trace pattern in which the target argument and the return value are further added to the output information when the A pattern, the B pattern, or the C pattern is applied.
  • the D pattern has a feature that the recording load of the execution trace is large, but the reproduction time can be reduced.
  • Subsequently, the target selection unit 12 selects target nodes from among the nodes of the CFG of the target function based on the trace pattern specified by the user, and generates a list of the target names of the selected nodes (that is, a "target list") (S103).
  • Hereinafter, an example in which the A pattern is specified as the trace pattern will be described.
  • In this case, the target selection unit 12 performs a depth-first search on the adjacency list stored in the cfg.txt of the target function, selects the node numbers at even-numbered depths, and generates, from the node list stored in the csv.txt of the target function, a list of the target names (for example, method names) corresponding to the selected node numbers. Therefore, for example, if the CFG in FIG. 5 is the CFG of the target function, a list of the target names surrounded by thick lines is generated. That is, non-adjacent nodes in the CFG are selected as targets.
  • Note that the code analysis unit 11 may always generate a CFG having the components of the lowest layer (finest granularity), that is, statements in the present embodiment, as nodes, regardless of the designation of the target hierarchy (granularity).
  • In that case, the target selection unit 12 may change (aggregate) the CFG so as to correspond to the designated target hierarchy and generate the target list based on the changed CFG. That is, by aggregating the nodes of the CFG of the lowest layer, a CFG of another layer can be generated based on known techniques.
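  • A minimal Java sketch of the A-pattern selection is shown below for illustration; it assumes the adjacency list of cfg.txt and the node list of csv.txt have already been parsed into memory, and the class and method names are hypothetical.

      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Map;
      import java.util.Set;

      // Minimal sketch of A-pattern target selection: depth-first search over the
      // adjacency list, keeping the node numbers found at even depths and mapping
      // them back to target names via the node list.
      public class APatternSelector {

          static List<String> selectTargets(Map<Integer, List<Integer>> adjacency,
                                            List<String> nodeList, int rootNode) {
              Set<Integer> selected = new HashSet<>();
              dfs(rootNode, 0, adjacency, new HashSet<>(), selected);
              List<String> targetList = new ArrayList<>();
              for (int node : selected) {
                  targetList.add(nodeList.get(node));   // node number -> target name
              }
              return targetList;
          }

          private static void dfs(int node, int depth, Map<Integer, List<Integer>> adjacency,
                                  Set<Integer> visited, Set<Integer> selected) {
              if (!visited.add(node)) return;            // each node is visited once
              if (depth % 2 == 0) selected.add(node);    // even depth: every other layer
              for (int next : adjacency.getOrDefault(node, List.of())) {
                  dfs(next, depth + 1, adjacency, visited, selected);
              }
          }
      }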
  • Subsequently, the modification unit 13 modifies the source code of the target application so that, when each target (for example, each method) whose target name is included in the target list generated by the target selection unit 12 is called, the execution trace of the target is output (S104), and generates a modified binary code based on the modified source code (S105).
  • Steps S104 and S105 can be realized, for example, by creating and using a script that embeds print statements, using the mechanism of AspectJ. If the target hierarchy is the method, the code is modified so that the print statement is called at the beginning of each method.
  • the execution trace should include the time when the target was called, the target name, and so on.
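  • For illustration only, a minimal AspectJ (annotation style) sketch of such instrumentation is shown below; it prints the call date and time and the target name before each matched method. The pointcut expression and the package name com.example.app are placeholders, since in practice the pointcut would be generated from the target list.

      import org.aspectj.lang.JoinPoint;
      import org.aspectj.lang.annotation.Aspect;
      import org.aspectj.lang.annotation.Before;
      import java.time.LocalDateTime;

      // Minimal sketch: before each targeted method is executed, output one
      // execution-trace line containing the call time and the target name.
      @Aspect
      public class ExecutionTraceAspect {

          // Placeholder pointcut; in practice it would be generated so that only the
          // methods whose names appear in the target list are matched.
          @Before("execution(* com.example.app..*(..))")
          public void recordTrace(JoinPoint joinPoint) {
              String targetName = joinPoint.getSignature().toShortString();
              System.out.println(LocalDateTime.now() + " TRACE " + targetName);
          }
      }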
  • the modified version of the target application is applied to the commercial environment.
  • When executed, the target application outputs run-time information, such as the conventional log and the sequence of execution traces, to a predetermined file.
  • When a failure (hereinafter referred to as the "target failure") occurs in the target application in the commercial environment, the verification environment acquires the run-time information output by the target application before the target failure occurred.
  • the acquisition of run-time information may be performed by any method.
  • the target application may automatically upload the run-time information to the verification environment, or the run-time information may be manually sent from the commercial environment to the verification environment.
  • the run-time information acquired by the verification environment may be limited to the information for a predetermined period before the occurrence of the target failure.
  • FIG. 6 is a flowchart for explaining an example of the processing procedure executed by the failure analysis support device 10 with respect to Part 2 and Part 3.
  • the failure analysis support device 10 receives input information from the user in the verification environment.
  • The input information includes the series of conventional logs and execution traces acquired from the commercial environment, the failure occurrence date and time specified by the user based on the conventional log, and the like.
  • For example, the recording time of the error message corresponding to the target failure, which is included in the conventional log, is specified as the failure occurrence date and time.
  • the failure path candidate identification unit 14 generates failure path candidates based on the sequence of execution traces included in the input information and the failure occurrence date and time (S202).
  • The failure path is the sequence (execution order) of the targets (for example, methods) executed (called) up to the failure occurrence date and time, and is identified based on the series of execution traces.
  • Specifically, for the CFG of each function of the target application, the failure path candidate identification unit 14 generates failure path candidates indicating the call relationships at the target level, based on the sequence of execution traces up to the failure occurrence date and time and on the trace pattern specified by the user. For example, when the trace pattern specified by the user is the A pattern and the sequence of execution traces is "aa, dd", the sequence of execution traces should be a series of execution traces of every other layer in the depth direction of the CFG (that is, a series of discontinuous execution traces). Therefore, for example, for the CFG (csv.txt, cfg.txt) in FIG. 5, the following three sequences are generated as failure path candidates.
  • That is, the failure path candidates include, based on the CFG, the target names of targets that may have been executed between the targets included in the sequence of execution traces.
  • Note that, in order to suppress the information output load, the execution trace includes information about the call, but does not include information about the return from the call. This is because, if the CFG and the call information are combined, the return information is unnecessary in most cases.
  • the failure path candidate identification unit 14 may also refer to the source code or binary code of the target application to generate failure path candidates.
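  • As an illustrative sketch only, the following minimal Java code generates such candidates for the A pattern under the simplifying assumption that exactly one untraced target lies between two consecutive traced targets; every intermediate node permitted by the CFG yields one candidate sequence (the class and method names are hypothetical). For the example above, a trace "aa, dd" would be expanded into one candidate per intermediate node that the CFG permits between aa and dd.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;

      // Minimal sketch of failure path candidate generation: between two consecutive
      // traced node numbers, insert every intermediate node that the CFG allows
      // (caller -> intermediate -> callee). Each resulting list is one candidate.
      public class FailurePathCandidates {

          static List<List<Integer>> expand(List<Integer> tracedNodes,
                                            Map<Integer, List<Integer>> adjacency) {
              List<List<Integer>> candidates = new ArrayList<>();
              candidates.add(new ArrayList<>(List.of(tracedNodes.get(0))));
              for (int i = 1; i < tracedNodes.size(); i++) {
                  int from = tracedNodes.get(i - 1);
                  int to = tracedNodes.get(i);
                  List<List<Integer>> extendedCandidates = new ArrayList<>();
                  for (int middle : adjacency.getOrDefault(from, List.of())) {
                      if (!adjacency.getOrDefault(middle, List.of()).contains(to)) continue;
                      for (List<Integer> prefix : candidates) {
                          List<Integer> extended = new ArrayList<>(prefix);
                          extended.add(middle);
                          extended.add(to);
                          extendedCandidates.add(extended);
                      }
                  }
                  candidates = extendedCandidates;   // empty if the trace cannot be connected
              }
              return candidates;
          }
      }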
  • Subsequently, the test case generation unit 15 identifies all possible sequences (routes) at the statement level for each failure path candidate, and generates a test case for reproducing the sequence for each identified sequence.
  • The content of a test case is a group of instructions for executing the identified sequence of statements.
  • The test case generation unit 15 may identify the statement-level sequences that can be executed in each failure path based on the source code or binary code of the target application.
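  • For illustration only, a generated test case could take a JUnit form such as the following, in which the statements of one identified sequence are executed in order and the event observed at the time of the failure is expected at the end; the statements shown are placeholders invented for this sketch rather than output of the test case generation unit 15.

      import org.junit.jupiter.api.Test;
      import static org.junit.jupiter.api.Assertions.assertThrows;
      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical shape of one generated test case for one failure path candidate.
      public class FailurePathCandidate1Test {

          @Test
          public void reproducesTargetFailure() {
              // Placeholder statements standing in for the identified statement-level
              // sequence; a real generated test case would call the target application's
              // own targets in the identified order.
              List<String> orders = new ArrayList<>();
              orders.add("item-1");
              // The same event as the one recorded at the time of the target failure
              // (here an exception) is expected at the end of the sequence.
              assertThrows(IndexOutOfBoundsException.class, () -> orders.get(3));
          }
      }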
  • the verification unit 16 executes the loop process L1 including steps S204 to S206 for each test case included in the generated test case group.
  • the test case targeted for processing in the loop processing is referred to as a "target test case”.
  • In step S204, the verification unit 16 executes the target test case against the clone of the commercial environment constructed in the verification environment. Subsequently, the verification unit 16 determines whether or not the same phenomenon as the target failure has occurred as a result of executing the target test case (S205). Such a determination can be made by comparing the conventional log output by the target application when the target test case is executed (hereinafter referred to as the "verification log") with the conventional log included in the input information (hereinafter referred to as the "commercial log"). For example, if the verification log matches the portion of the commercial log corresponding to the target test case, it may be determined that the same event as the target failure has occurred.
  • The match may allow, for example, differences in parameters that change depending on the execution timing or the executing user (login user). Alternatively, it may be determined that the same phenomenon as the target failure has occurred when the verification log contains the same error message as the error message corresponding to the target failure.
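  • A minimal Java sketch of such a comparison is shown below for illustration; it masks a timestamp and a login-user parameter before comparing the verification log with the relevant portion of the commercial log. The regular expressions are assumptions about the log format, and the class name is hypothetical.

      import java.util.List;
      import java.util.stream.Collectors;

      // Minimal sketch of the determination in S205: parameters that legitimately
      // differ between environments (timestamps, login user) are masked before the
      // two logs are compared line by line.
      public class LogComparator {

          static boolean sameEvent(List<String> verificationLog, List<String> commercialLogPortion) {
              return normalize(verificationLog).equals(normalize(commercialLogPortion));
          }

          private static List<String> normalize(List<String> log) {
              return log.stream()
                      .map(line -> line.replaceAll("\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}", "<TIME>"))
                      .map(line -> line.replaceAll("user=\\S+", "user=<USER>"))
                      .collect(Collectors.toList());
          }
      }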
  • When the same phenomenon as the target failure has occurred (Yes in S205), the verification unit 16 outputs the target test case and the like (S206).
  • the file containing the target test case may be saved in a predetermined location (folder or the like) in the auxiliary storage device 102.
  • A user in the verification environment (for example, a maintenance person) can reproduce the target failure by executing the target test case (that is, the statement-level failure path).
  • As described above, according to the present embodiment, it is possible to generate a test case for reproducing a failure of the target application based on the execution traces of the target application. Therefore, the dependence on external inputs to the program in reproducing the failure of the target application can be reduced, and the efficiency of failure analysis of a program such as the target application can be improved.
  • Note that Non-Patent Document 3 discloses "log4j", an existing product that allows the trace level to be set and narrowed down.
  • Log4j is a logging API for Java (registered trademark) programs developed in the Jakarta project. In log4j, if the log output level is set in the configuration file, all logs at that level or higher are output. As for the log output levels, the arrangement shown in FIG. 7 is common.
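  • For illustration, with the log4j 1.x API the level-based behavior described above could look as follows; the logger category and the property value shown in the comment are assumptions.

      import org.apache.log4j.Logger;
      import org.apache.log4j.PropertyConfigurator;

      // Minimal sketch of log4j level-based output: with the root logger set to WARN
      // in the configuration file (e.g. log4j.rootLogger=WARN, console), the debug()
      // and info() calls below are suppressed while warn() and above are output.
      public class Log4jLevelExample {
          private static final Logger logger = Logger.getLogger(Log4jLevelExample.class);

          public static void main(String[] args) {
              PropertyConfigurator.configure("log4j.properties"); // load the level setting
              logger.debug("not output at WARN level");
              logger.info("not output at WARN level");
              logger.warn("output");
              logger.error("output");
          }
      }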
  • In log4j, however, the developer needs to select the code to be monitored, and needs to add to that code the content to be output as a log and the output timing (the output level of the log).
  • On the other hand, in the present embodiment, the trace level can be set by the hierarchy (class, method, statement) of the components of the target application. Further, it is not necessary to select the log output locations in the code to be monitored, and it is not necessary to set the content and the timing to be output one by one.
  • the target application is an example of a predetermined program.
  • the code analysis unit 11 is an example of an analysis unit.
  • the target selection unit 12 is an example of the selection unit.
  • The failure path candidate identification unit 14 is an example of an identification unit.
  • the test case generation unit 15 is an example of the generation unit.
  • the verification unit 16 is an example of a determination unit.
  • 10 Failure analysis support device 11 Code analysis unit 12 Target selection unit 13 Modification unit 14 Failure path candidate identification unit 15 Test case generation unit 16 Verification unit 100 Drive device 101 Recording medium 102 Auxiliary storage device 103 Memory device 104 CPU 105 Interface device 106 Display device 107 Input device B Bus

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

This failure analysis support device comprises: an analysis unit that analyzes an executable path with respect to the call relationship between components of a program; a selection unit that, based on a prescribed rule, selects, from among the components included in the path, components that are to be made to output information indicating traces of their execution; a modification unit that modifies the program so that the components selected by the selection unit output the information; and a generation unit that, based on said path and on the sequence of components indicated by the information output by the program modified by the modification unit until a failure occurred in the program, identifies a sequence of statements for causing such a failure, and generates a test case for executing the sequence. The failure analysis support device thereby improves the efficiency of program failure analysis.
PCT/JP2020/027375 2020-07-14 2020-07-14 Dispositif d'aide à l'analyse de défaillance, procédé d'aide à l'analyse de défaillance, et programme WO2022013944A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/027375 WO2022013944A1 (fr) 2020-07-14 2020-07-14 Dispositif d'aide à l'analyse de défaillance, procédé d'aide à l'analyse de défaillance, et programme

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/027375 WO2022013944A1 (fr) 2020-07-14 2020-07-14 Dispositif d'aide à l'analyse de défaillance, procédé d'aide à l'analyse de défaillance, et programme

Publications (1)

Publication Number Publication Date
WO2022013944A1 true WO2022013944A1 (fr) 2022-01-20

Family

ID=79555392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/027375 WO2022013944A1 (fr) 2020-07-14 2020-07-14 Dispositif d'aide à l'analyse de défaillance, procédé d'aide à l'analyse de défaillance, et programme

Country Status (1)

Country Link
WO (1) WO2022013944A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009070322A (ja) * 2007-09-18 2009-04-02 Nec Corp データ処理装置、システム、プログラム、及び、方法
JP2012163997A (ja) * 2011-02-03 2012-08-30 Nec System Technologies Ltd 障害解析支援システム、障害解析支援方法、および障害解析支援プログラム
US20150278074A1 (en) * 2014-03-28 2015-10-01 International Business Machines Corporation Logging code generation and distribution

Similar Documents

Publication Publication Date Title
Urli et al. How to design a program repair bot? insights from the repairnator project
US9703677B2 (en) Code coverage plugin
Vassallo et al. A tale of CI build failures: An open source and a financial organization perspective
US10430319B1 (en) Systems and methods for automatic software testing
US8561024B2 (en) Developing software components and capability testing procedures for testing coded software component
US20200250070A1 (en) Techniques for evaluating collected build metrics during a software build process
Xu et al. POD-Diagnosis: Error diagnosis of sporadic operations on cloud applications
US20130061210A1 (en) Interactive debugging environments and methods of providing the same
KR20140072726A (ko) 단위 테스트 케이스 재사용 기반의 함수 테스트 장치 및 그 함수 테스트 방법
JP2010134643A (ja) テストケースの選択方法及び選択システム
JPWO2014171047A1 (ja) 障害復旧手順生成装置、障害復旧手順生成方法および障害復旧手順生成プログラム
US11481245B1 (en) Program inference and execution for automated compilation, testing, and packaging of applications
CN114579467B (zh) 一种基于发布订阅机制的冒烟测试系统及方法
Naslavsky et al. Using scenarios to support traceability
KR101266565B1 (ko) 요구 인터페이스의 명세 정보를 이용한 소프트웨어 컴포넌트의 테스트 케이스 생성 방법 및 실행 방법
CN113722204A (zh) 一种应用调试方法、系统、设备及介质
Ghosh et al. A systematic review on program debugging techniques
WO2022013944A1 (fr) Dispositif d'aide à l'analyse de défaillance, procédé d'aide à l'analyse de défaillance, et programme
CN110990177B (zh) 故障修复方法、装置、系统、存储介质及电子设备
CN113094238A (zh) 一种业务系统异常监控方法及装置
Winzinger et al. Automatic test case generation for serverless applications
CN115168175A (zh) 程序错误解决方法、装置、电子设备和存储介质
Cao et al. CATMA: Conformance Analysis Tool For Microservice Applications
Zirkelbach et al. The collaborative modularization and reengineering approach CORAL for open source research software
CN113868140A (zh) 一种自动化测试的方法及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP