WO2004075061A1 - Dispositif de mesure/analyse des performances d'un systeme - Google Patents

Dispositif de mesure/analyse des performances d'un systeme Download PDF

Info

Publication number
WO2004075061A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing unit
performance
processing units
application
analyzing
Prior art date
Application number
PCT/JP2004/002011
Other languages
English (en)
Japanese (ja)
Inventor
Myongho Ahn
Masami Takai
Atsuko Yamada
Original Assignee
Intellasset, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intellasset, Inc. filed Critical Intellasset, Inc.
Publication of WO2004075061A1 publication Critical patent/WO2004075061A1/fr

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466: Performance evaluation by tracing or monitoring
    • G06F 11/3495: Performance evaluation by tracing or monitoring for systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409: Recording or statistical evaluation of computer activity for performance assessment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/865: Monitoring of software
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/875: Monitoring of systems including the internet

Definitions

  • The present invention relates to a method of optimizing the cooperative operation of a system by measuring and analyzing the operating performance of the entire system, in a system environment configured by a network and by a plurality of servers and software connected by the network. Background art
  • Patent Literature 1 and Patent Literature 2 disclose conventional systems for monitoring the load status of IT resources.
  • Patent Document 3 discloses a system for monitoring the performance of software without depending on the language.
  • Patent Document 4 discloses a method of visualizing and displaying a software configuration, and
  • Patent Document 5 discloses a method of analyzing a source code.
  • The application must be divided into smaller processing units, the performance of each processing unit evaluated, and the factors causing performance degradation identified both in the processing units and in the hardware resources on which they run. Disclosure of the invention
  • The system performance measurement/analysis apparatus of the present invention measures and analyzes the performance of a plurality of applications that are installed on a plurality of servers connected via a network and that, in part or in whole, cooperate in a virtual machine environment, and optimizes the cooperative operation based on the results. The apparatus comprises:
  • Application analysis means for analyzing the application to extract the processing units constituting it, and for analyzing and acquiring the calling relationships between the processing units;
  • Operating performance measurement means for measuring the operating performance of each processing unit;
  • Hardware resource identification means for analyzing and acquiring the correspondence between each processing unit and the hardware resources it uses; and
  • Display means for displaying together the calling relationships between the processing units, the operating performance of each processing unit, and the hardware resources on which each processing unit operates; this combined display is referred to as an object deployment diagram.
  • FIG. 1 is a diagram showing the configuration of .NET, which is one example of a virtual machine environment.
  • FIG. 2 shows the concept of generating a binary file from source code and restoring source code from the binary file.
  • FIG. 3 shows the flow of the source code restoration process.
  • FIG. 4 shows the flow of the preparation process of the lazy worker method.
  • FIG. 5 is a flowchart of the tree generation process.
  • FIG. 6 is a diagram showing the system configuration of the device according to the embodiment of the present invention.
  • FIG. 7 is a conceptual diagram of the functional configuration of software according to the embodiment of the present invention.
  • FIG. 8 is a flowchart of the process of measuring the performance of a smart object.
  • FIG. 9 is a flowchart of the process of collecting the object arrangement information.
  • FIG. 10 shows a display example of an object deployment diagram (flow type).
  • FIG. 11 shows a display example of an object deployment diagram (tree type).
  • The target is a system constituted by a plurality of hardware and software (applications) connected via a network, that is:
  • Hardware resources: computers such as servers and clients, networks, and the like, each consisting of a CPU, memory, hard disk, network card, etc.
  • Software resources: applications that are distributed and executed on the hardware resources.
  • Some or all of the applications run under a virtual machine (VM) environment (the CLR on the .NET platform, the JVM on the J2EE platform, etc.).
  • Environments in which VMs are used include Microsoft's .NET and J2EE, proposed by Sun Microsystems.
  • In the following, a method (with flowcharts) based on a .NET environment is described; similar methods can easily be devised and implemented in other VM environments based on the methods described in this specification.
  • Figure 1 shows the structure of the .NET framework.
  • The .NET framework consists of three main components.
  • CLR (Common Language Runtime): the VM engine
  • ASP.NET: the class library used to implement Web services and Web applications (excluding Windows applications)
  • Applications on the .NET framework are coded using the class libraries provided by the .NET framework.
  • The source code is converted by the compiler into executable code for the application or component; the generated code is not native code that depends on specific CPU instructions, but intermediate code called managed code.
  • The managed code is converted to final native code by the CLR's JIT compiler and executed.
  • The CLR requires managed code to carry two main pieces of information:
  • MSIL (Microsoft Intermediate Language): the executable code
  • Metadata: information about the MSIL
  • The software of the system whose performance is to be measured is analyzed, the processing units (objects) constituting the software are extracted, and the relationships between the processing units are clarified and shown in a figure.
  • This figure is called the call graph.
  • The operating performance of each processing unit (time required for execution, communication time between processing units, etc.) is measured. This is achieved without altering the source code of the application, by measuring the operating performance using data that can be read from the virtual machine. It is then clarified on which of the hardware resources that make up the system each processing unit is operating. The load status of each hardware resource at the execution stage is also measured and displayed.
  • Detailed description of the embodiments
  • the source code is restored from the binary file by using the method of the present invention.
  • FIG. 2 is a schematic diagram showing the process of generating a binary file from the source code and the process of restoring the source code from the binary file.
  • A binary file (203) is generated by compiling (202) source code (201) written in a high-level language such as C#, C++, or VB (Visual Basic).
  • FIG. 3 is a flowchart showing processing steps for restoring the source code.
  • In step S301, all server machines in the system are searched to find a server on which the .NET environment is installed, and a connection is made to it.
  • In step S302, the binary files of the .NET applications on the connected server are collected.
  • In step S303, a binary file is read, and in step S304, the metadata and MSIL (intermediate code) are extracted from it using the MSIL disassembler engine.
  • In step S305, syntax analysis of the MSIL of the module is performed, and in step S306, semantic analysis is performed: each keyword and its parameters are scrutinized and matched with the corresponding keyword in the original high-level language, identifying classes, methods, properties, and so on. As a result, the source code is restored.
  • the lazy worker method is one of the methods to analyze the source code of the application in the system, extract the processing units that make up the application, and analyze the calling relationship between the processing units.
  • the processing unit refers to a detailed execution unit of an application such as a class, a method, a property, and an instance or an object generated from the class.
  • The lazy worker method consists of two stages: a preparation process and a tree generation process.
  • In the preparation process, the callers and callees of each class are analyzed and acquired based on the source code.
  • In the tree generation process, for any given processing unit, the call relationships of all the processing units that it directly or indirectly calls are generated as a tree.
  • The tree structure generated by the tree generation process is represented by "nodes" and "pointers". A node corresponds to a processing unit, and a pointer corresponds to a call relation between processing units.
  • The call graph is generated by drawing the generated tree structure with an appropriate drawing application.
  • The source code is analyzed only in the preparation process; the tree generation process generates the tree structure without referring to the source code.
  • Therefore, a call graph starting from an arbitrary processing unit can be efficiently generated and drawn without re-analyzing the source code.
  • FIG. 4 is a flowchart showing an example of the processing steps of the preparation process of the lazy worker method.
  • a class list is prepared in step S401.
  • the class list is a data structure for registering classes.
  • Each class in the list has a tree structure, and stores information on the methods in each class and the methods of the caller and callee of the class.
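As a rough sketch of this class-list structure (in Python, which the patent does not use; all field and method names here are illustrative, not taken from the patent), one entry per class holds its methods plus caller/callee information, and registration adds missing classes and methods on demand:

```python
from dataclasses import dataclass, field

@dataclass
class ClassEntry:
    """One entry in the class list: a class, its methods, and its
    caller/callee information (field names are illustrative)."""
    name: str
    methods: set = field(default_factory=set)
    callers: set = field(default_factory=set)   # (class, method) pairs that call into this class
    callees: set = field(default_factory=set)   # (class, method) pairs this class calls

class ClassList:
    """Registry of classes. `list_data` mirrors the list data processing:
    a missing class is added (cf. step S441), and a missing method is
    added to its class (cf. step S443)."""
    def __init__(self):
        self.classes = {}

    def list_data(self, class_name, method=None):
        entry = self.classes.setdefault(class_name, ClassEntry(class_name))
        if method is not None:
            entry.methods.add(method)
        return entry
```

A single call such as `list_data("Billing", "Charge")` thus registers both the class and the method if either is not yet present.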
  • The source file processing of step S402 is performed on all the source code files in the application.
  • In the source file processing, classes are extracted by detecting the start of a class description in step S411 for all the source code in the source file, and the class processing of step S412 is performed on each detected class.
  • In the class processing, the detected class is set as the own class in step S421; methods and properties are extracted by detecting the start of a method or property description in step S422 from all the descriptions in the class, and the caller-callee processing of step S423 is performed on each detected method or property.
  • In the caller-callee processing, the detected method or property is set as the own method in step S431, and in step S432, the list data processing described below is performed on the own class and the own method.
  • From all the descriptions in the own method, descriptions calling another method or property are detected in step S433; the class to which the called method or property belongs is set as the callee class in step S434, and the called method or property is set as the callee method in step S435.
  • In step S436, list data processing is performed for the callee class and the callee method as in step S432. Then, in step S437, the callee method is added to the callee information of the own class, and
  • in step S438, the own method is added to the caller information of the callee class.
  • In the list data processing, if the class does not exist in the class list, it is added in step S441; if the method does not exist in the class, it is added in step S443.
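As a rough sketch of the preparation process as a whole (in Python, which the patent does not use), the caller/callee bookkeeping might look as follows; `parsed_sources` is a hypothetical stand-in for the already-analyzed source code, mapping each `(class, method)` to the list of `(callee_class, callee_method)` pairs it calls:

```python
def prepare(parsed_sources):
    """Toy sketch of the preparation process (cf. FIG. 4): returns, for
    each class, its caller and callee information."""
    info = {}

    def ensure(cls):
        # list data processing: add the class if absent (cf. S441/S443)
        return info.setdefault(cls, {"callers": set(), "callees": set()})

    for (own_cls, own_method), calls in parsed_sources.items():
        ensure(own_cls)
        for callee_cls, callee_method in calls:
            ensure(callee_cls)
            # callee method added to the own class's callee info (cf. S437)
            info[own_cls]["callees"].add((callee_cls, callee_method))
            # own method added to the callee class's caller info (cf. S438)
            info[callee_cls]["callers"].add((own_cls, own_method))
    return info
```

The result is exactly the per-class caller/callee table that the tree generation process later consumes without re-reading the source.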
  • In the tree generation process, a tree-type data structure for a specified class is generated, based on the caller/callee information obtained in the preparation process, according to the processing steps shown in FIG. 5.
  • In step S501, it is checked whether the specified class exists in the class list. If the class exists, a node corresponding to the class is generated in step S502, and node-pointer processing is performed on the node. If not, an appropriate error message is displayed in step S504, and the process ends.
  • In the node-pointer processing, the generated node is set as the current node in step S511; the callee information of the current node is referred to in step S512, and if it is not null, nodes are generated in step S513 for all of the processing units registered in that information. These nodes are called target nodes.
  • In step S514, pointers from the current node to all target nodes are generated.
  • In step S515, the same node-pointer processing is performed on all the generated target nodes.
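The recursive node-pointer construction might be sketched as follows (in Python, which the patent does not use; the `seen` cycle guard is my addition, since the patent does not discuss recursive call cycles):

```python
class Node:
    """A node corresponds to a processing unit; its pointers correspond
    to call relations to other processing units."""
    def __init__(self, name):
        self.name = name
        self.pointers = []   # pointers to target nodes

def generate_tree(cls_name, callees, seen=None):
    """Build the call tree rooted at `cls_name` from the callee
    information gathered in the preparation process (cf. S501-S515).
    `callees` maps a class name to the names of the classes it calls."""
    seen = set() if seen is None else seen
    node = Node(cls_name)                     # generate the node (cf. S502/S513)
    if cls_name in seen:                      # cycle guard (illustrative addition)
        return node
    seen.add(cls_name)
    for target in callees.get(cls_name, []):  # refer to callee info (cf. S512)
        # generate a pointer from the current node to each target node
        # and recurse into it (cf. S514-S515)
        node.pointers.append(generate_tree(target, callees, seen))
    return node
```

Drawing the resulting node/pointer structure with any graph-drawing tool yields the call graph, without touching the source code again.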
  • the calling relation between the processing units can be generated and drawn at a detailed level.
  • FIG. 6 is a diagram showing a configuration of a system in which a program according to the embodiment of the present invention is mounted.
  • Reference numeral 601 denotes a monitoring dedicated terminal
  • 602 denotes a monitoring target terminal. These terminals are connected via a network 603 such as a LAN, WAN, or the Internet.
  • 4. Software configuration
  • FIG. 7 is a conceptual diagram of a functional configuration of a program according to the embodiment of the present invention.
  • The agent handler 701 is software that supervises and manages agents #1 to #n running on servers 1 to n.
  • The agent handler 701 operates on the monitoring dedicated terminal and monitors each monitoring target terminal.
  • Agents are software that runs on each server.
  • Agents #1 to #n monitor the operating performance of the servers 1 to n on which they run. The agent handler obtains performance information from agents #1 to #n running on servers 1 to n, via consoles #1 to #n, and thereby monitors the performance of each server.
  • Agents #1 to #n on servers 1 to n have OS performance information acquisition means, application performance information acquisition means, database system (DBMS) performance information acquisition means, and network performance information acquisition means, respectively.
  • OS performance information is obtained from the OS level monitoring program.
  • The OS-level monitoring program is, for example, WMI (Windows Management Instrumentation) in the Windows environment.
  • Application performance information is obtained using the management service program.
  • The management service program is a utility such as the CLR (Common Language Runtime, which has the function of a virtual machine) provided in the Microsoft (registered trademark) .NET platform.
  • the performance information of the application is measured at a more detailed level by measuring the operation performance of the processing unit described later.
  • Database performance information can be obtained by accessing the database directly.
  • Network performance information can be obtained by monitoring network devices using SNMP and monitoring the connection status of the lines.
  • 5. Measurement of operating performance for each processing unit
  • the operation performance is measured without any modification to the source code by the method according to the present invention.
  • the VM manages code execution as an application execution engine and provides various services to the application. At the time of application execution, the VM performs various managements related to the execution of each processing unit. Therefore, in the present invention, by introducing small software called a hooker into the VM, it is possible to freely communicate directly with the VM and acquire data relating to the measurement of operation performance from the VM.
  • the VM manages the start, end, and call of the processing unit.
  • the hooker detects the event and collects necessary information to measure the operation performance of the processing unit.
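As a rough sketch of this event-driven collection (in Python, which the patent does not use; the event representation, field names, and clock injection are illustrative, and a real hooker would instead be driven by callbacks from the VM), one handler per VM event records creation time, cross-server communication time, and total execution time, mirroring FIG. 8 (S801-S810):

```python
import time

class Hooker:
    """Toy model of the hooker's event handling; events arrive as dicts."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.start = {}       # unit -> creation time
        self.comm_time = {}   # unit -> cross-server communication time
        self.exec_time = {}   # unit -> total execution time

    def on_event(self, event):
        kind, unit = event["kind"], event["unit"]   # check the event type
        if kind == "create":
            # processing unit generated: record its start time
            self.start[unit] = self.clock()
        elif kind == "call" and event.get("remote"):
            # call to a unit executed on another server: record the
            # time the inter-server communication required
            self.comm_time[unit] = event["duration"]
        elif kind == "end":
            # processing unit ended: compute and store total execution time
            self.exec_time[unit] = self.clock() - self.start[unit]
```

Injecting the clock makes the sketch testable with a deterministic time source.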
  • FIG. 8 is a flowchart showing the processing steps for measuring the operating performance of each processing unit. This process starts when the hooker detects one of the VM events mentioned above. In step S801, the VM is accessed and the type of the event is checked.
  • If the event type is generation of a processing unit (step S802), the metadata is accessed in step S803; information necessary for performance measurement, such as the class name, the methods and properties it contains, and the external classes and their methods and properties that it uses, is collected, and the time is saved.
  • If the event type is a call of a processing unit (step S805), it is determined in step S806 whether the call destination is a processing unit executed on another server; if so, the time required for communication between the servers is recorded in step S807. If the event type is the end of a processing unit (step S808), the total execution time is calculated in step S809 and stored in step S810.
  • 6. Hardware resource identification means
  • Information on the application components placed on a server can be collected by referring to the metadata. This identifies the processing units that are placed on the server and executed there.
  • FIG. 9 is a flowchart for collecting information on the arrangement status of processing units.
  • In step S901, a connection is made to a server connected to the network.
  • In step S902, a connection is made to the VM of the server; in step S903, the metadata is referred to, and in step S904, information on the processing units that can be executed on that server is collected. The collected information is saved in step S905.
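The collection flow of FIG. 9 (S901-S905) might be sketched as follows (in Python, which the patent does not use); the dictionary layout standing in for servers and their VM metadata is purely illustrative:

```python
def collect_placement(servers):
    """Toy sketch of FIG. 9: map each server name to the processing
    units its VM metadata reports as executable there."""
    placement = {}
    for server in servers:                      # connect to each networked server (cf. S901)
        vm = server["vm"]                       # connect to the server's VM (cf. S902)
        units = vm["metadata"]["units"]         # refer to metadata, list units (cf. S903-S904)
        placement[server["name"]] = list(units) # save the collected information (cf. S905)
    return placement
```

The resulting mapping is what lets the display stage associate each processing unit with the hardware resource it runs on.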
  • The operating performance of the processing units measured by the above means is displayed, together with the calling relationships between the processing units and the hardware resources used by each processing unit, according to a display method named the object deployment diagram (hereinafter abbreviated as ODD).
  • FIG. 10 and FIG. 11 show ODD display examples.
  • The call graph showing the relationships between the processing units occupies most of the display area; it is shown in flow form in FIG. 10 and in tree form in FIG. 11.
  • Each display unit has a color indicating the server machine of the application system on which it runs. For example, if "Application Server #2" is shown in red and "Billing Process" is also shown in red, it can be seen that the billing process is running on application server #2. All machine names in the system are displayed in different colors, and the color of the text displayed on the screen corresponds to the color of its server machine.
  • This operation performance includes network performance (communication time), operation performance per processing unit (execution time), and the like.
  • The ODDs shown in the examples of FIG. 10 and FIG. 11 mainly deal with network performance and the operating performance of the processing units; elements whose contribution to overall system performance is negligible are not shown.
  • A table is used to display detailed numerical information, and the user can obtain the history of specific data from it. The display on this screen is updated at intervals specified by the user. Users use this screen to identify problems when they want to see the performance of the system at a more detailed level, for example when a failure occurs that degrades system performance. Failure factors can be categorized as "application", "environment setting", "hardware resource", or "network", and handled accordingly. "Hardware resource" refers to a failure of IT equipment, such as a hard disk failure or display failure, and can be dealt with by repairing or replacing the failed part.
  • By comparing the "Average Response Time" and "Response Time" values in the table, the system administrator can easily notice that the performance of a particular processing unit has deteriorated. If there is a large difference between these two values, it can be assumed that something is wrong. In such a case, the operating performance of the other processing units can be examined to identify the processing unit whose performance has deteriorated the most. For example, if the current response time of a processing unit is 8.1 seconds and its average response time is 6.1 seconds, the system may have a failure, so the system administrator also checks the other running processing units.
  • Here, "Accounting" shows a difference of 2 seconds from its average.
  • If the response times of the other processing units are within 0.5 seconds of their averages, the system administrator can click "Accounting" and check the source code to determine the cause of the failure.
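The comparison described above can be expressed as a small check (in Python, which the patent does not use); the 1.5-second threshold is an arbitrary illustration, not a value from the patent:

```python
def degraded_units(stats, threshold=1.5):
    """Return processing units whose current response time exceeds
    their average response time by more than `threshold` seconds.
    `stats` maps a unit name to (average, current) response times."""
    return [unit for unit, (avg, cur) in stats.items() if cur - avg > threshold]

# Mirrors the example in the text: "Accounting" is 2 s over its
# average, while the other units are within 0.5 s of theirs.
stats = {"Accounting": (6.1, 8.1), "Search": (0.4, 0.6), "Login": (1.0, 1.3)}
```

With these figures, only "Accounting" is flagged for closer inspection.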
  • Faults can be found by carefully observing the degradation of operating performance, as in the case of applications.
  • In the case of a hardware resource failure, the consumption of the CPU, memory, hard disk, etc. of a specific machine may become unobtainable, or its consumption rate may drop to 0; such failures are therefore easier to find than application or environment-setting failures.
  • In such a case, server A can be considered to have failed.
  • Network failures affect the server machines that make up the system when the machines are distributed over two or more LANs connected via the Internet: even if the LAN at each site performs well, the performance of the Internet cannot be guaranteed.
  • The operating performance of an application can thus be measured in finer processing units, and the correspondence with the hardware resources on which each processing unit runs can easily be clarified.
  • ODD makes it possible to provide a visual and intuitive understanding of the correspondence between the performance of the individual processing units and the hardware resources on which they operate.
  • ODD makes it easy for users to investigate response performance by providing intuitively understandable information that helps identify the cause of a problem.

Abstract

The applications of the system whose performance is to be measured are analyzed, and the processing units executing the applications are extracted. The correlation between the processing units is determined, and the hardware resources constituting the system on which each processing unit runs are identified. Furthermore, the operating performance of each processing unit (required execution time, transmission time between processing units, and the like) is measured without modifying the application source code. In addition, at the execution stage, the load status of each hardware resource is measured and displayed together with the operating performance of each processing unit, the active hardware resources, and the load status of the hardware resources.
PCT/JP2004/002011 2003-02-24 2004-02-20 Dispositif de mesure/analyse des performances d'un systeme WO2004075061A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003-046120 2003-02-24
JP2003046120A JP2004264914A (ja) 2003-02-24 2003-02-24 システム性能測定分析装置

Publications (1)

Publication Number Publication Date
WO2004075061A1 (fr) 2004-09-02

Family

ID=32905540

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2004/002011 WO2004075061A1 (fr) 2003-02-24 2004-02-20 Dispositif de mesure/analyse des performances d'un systeme

Country Status (2)

Country Link
JP (1) JP2004264914A (fr)
WO (1) WO2004075061A1 (fr)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4938576B2 (ja) * 2007-07-24 2012-05-23 日本電信電話株式会社 情報収集システムおよび情報収集方法
US8566800B2 (en) * 2010-05-11 2013-10-22 Ca, Inc. Detection of method calls to streamline diagnosis of custom code through dynamic instrumentation
US8429187B2 (en) * 2011-03-21 2013-04-23 Amazon Technologies, Inc. Method and system for dynamically tagging metrics data
US9411616B2 (en) 2011-12-09 2016-08-09 Ca, Inc. Classloader/instrumentation approach for invoking non-bound libraries
CN102609351B (zh) * 2012-01-11 2015-12-02 华为技术有限公司 用于分析系统的性能的方法、设备和系统
CN111512285A (zh) 2017-12-25 2020-08-07 三菱电机株式会社 设计辅助装置、设计辅助方法及程序
JPWO2022123763A1 (fr) * 2020-12-11 2022-06-16
US11816364B2 (en) * 2022-01-13 2023-11-14 Microsoft Technology Licensing, Llc Performance evaluation of an application based on detecting degradation caused by other computing processes

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6435637A (en) * 1987-07-31 1989-02-06 Hitachi Ltd Program tracing system
JPH0580992A (ja) * 1991-09-20 1993-04-02 Hokkaido Nippon Denki Software Kk 手続き・関数関連図出力方式
JPH05274132A (ja) * 1992-03-25 1993-10-22 Matsushita Electric Ind Co Ltd プログラム解析装置
JP2000122901A (ja) * 1998-10-12 2000-04-28 Hitachi Ltd ジャーナル取得解析装置
JP2000315198A (ja) * 1999-05-06 2000-11-14 Hitachi Ltd 分散処理システム及びその性能モニタリング方法
JP2001282759A (ja) * 2000-03-31 2001-10-12 Suntory Ltd ウェブアプリケーションシステムに対する運用を行う運用機構およびウェブアプリケーションシステム
JP2002082926A (ja) * 2000-09-06 2002-03-22 Nippon Telegr & Teleph Corp <Ntt> 分散アプリケーション試験・運用管理システム


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MENDLEN DAVE: "Microsoft. NET no subete: Nyumon visual studio.NET: Web service to XML o tsukatte web application o jinsoku katsu kantan ni sakusei suru", MICROSOFT DEVELOPMENT NETWORK MAGAZINE JAPANESE VERSION, no. 7, 18 October 2000 (2000-10-18), pages 73 - 86, XP002982551 *
RICHTER JEFFREY: "Tokushu 1 microsoft.NET programming nyumon part I: NET frame work application to kata no build, package, haifu, kanri part I", MICROSOFT DEVELOPMENT NETWORK MAGAZINE JAPANESE VERSION, no. 12, 18 March 2001 (2001-03-18), pages 27 - 39, XP002982553 *
SAKAKIBARA KAZUYA: "Tokushu 2: Enterprise e mukau visual studio 6.0: Shin no tool suite o mezashite", DR. DOBB'S JOURNAL JAPAN., vol. 7, no. 11, 1 November 1998 (1998-11-01), pages 71 - 76, XP002982550 *
TSUZUKI KAORU: "Tokubetsu kikaku: Windows apli kaihatsu no saishin trend o saguru: Visual studio 6.0 no zen'yo", ASCII, vol. 22, no. 11, 1 November 1998 (1998-11-01), pages 252 - 257, XP002982552 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11222078B2 (en) 2019-02-01 2022-01-11 Hewlett Packard Enterprise Development Lp Database operation classification
US11755660B2 (en) 2019-02-01 2023-09-12 Hewlett Packard Enterprise Development Lp Database operation classification

Also Published As

Publication number Publication date
JP2004264914A (ja) 2004-09-24

Similar Documents

Publication Publication Date Title
US11659020B2 (en) Method and system for real-time modeling of communication, virtualization and transaction execution related topological aspects of monitored software applications and hardware entities
Zhao et al. lprof: A non-intrusive request flow profiler for distributed systems
CA2969131C (fr) Langage de traitement de flux de donnees pour l&#39;analyse de logiciel instrumente
US8832665B2 (en) Method and system for tracing individual transactions at the granularity level of method calls throughout distributed heterogeneous applications without source code modifications including the detection of outgoing requests
US10489264B2 (en) Monitoring activity on a computer
US20090049429A1 (en) Method and System for Tracing Individual Transactions at the Granularity Level of Method Calls Throughout Distributed Heterogeneous Applications Without Source Code Modifications
US20170357524A1 (en) Automated software configuration management
US20100070980A1 (en) Event detection system, event detection method, and program
US20070168994A1 (en) Debugging a computer program in a distributed debugger
US7496795B2 (en) Method, system, and computer program product for light weight memory leak detection
US20150220421A1 (en) System and Method for Providing Runtime Diagnostics of Executing Applications
US7720950B2 (en) Discovery, maintenance, and representation of entities in a managed system environment
JP2009516239A (ja) コンピュータアプリケーションの追跡及びモニタリングを行う汎用のマルチインスタンスメソッド及びgui検出システム
Jayaraman et al. Compact visualization of Java program execution
US10084637B2 (en) Automatic task tracking
WO2004075061A1 (fr) Dispositif de mesure/analyse des performances d&#39;un systeme
JP2010009411A (ja) 仮想化環境運用支援システム及び仮想環境運用支援プログラム
WO2018200961A1 (fr) Extension de gestion de java hyperdynamique
WO2012062515A1 (fr) Procédé et système permettant de visualiser un modèle de système
JPWO2012070294A1 (ja) 可用性評価装置及び可用性評価方法
US11354220B2 (en) Instrumentation trace capture technique
US11615015B2 (en) Trace anomaly grouping and visualization technique
JP2014092821A (ja) ログ取得プログラム、ログ取得装置及びログ取得方法
Keller et al. Towards a CIM schema for runtime application management
US11811804B1 (en) System and method for detecting process anomalies in a distributed computation system utilizing containers

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase