CN115202990A - Method, device, equipment and storage medium for acquiring IO performance data - Google Patents


Info

Publication number
CN115202990A
CN115202990A (application CN202211100074.3A)
Authority
CN
China
Prior art keywords
target
request
performance data
current system
system layer
Prior art date
Legal status
Granted
Application number
CN202211100074.3A
Other languages
Chinese (zh)
Other versions
CN115202990B (en)
Inventor
贺成
李宇奇
张健
杨满堂
朱明祖
徐斌
冯景华
Current Assignee
Tianjin Tianhe Computer Technology Co ltd
Original Assignee
Tianjin Tianhe Computer Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Tianhe Computer Technology Co ltd filed Critical Tianjin Tianhe Computer Technology Co ltd
Priority to CN202211100074.3A priority Critical patent/CN115202990B/en
Publication of CN115202990A publication Critical patent/CN115202990A/en
Application granted granted Critical
Publication of CN115202990B publication Critical patent/CN115202990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/302 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • G06F 11/3065 Monitoring arrangements determined by the means or processing involved in reporting the monitored data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Embodiments of the disclosure relate to a method, device, equipment and storage medium for acquiring IO performance data. The IO performance data acquisition method comprises the following steps: while a target device runs target kernel code corresponding to a target IO request, monitoring the running progress of the target kernel code based on a preset execution engine in the target device; when the target kernel code is monitored running to the kernel code corresponding to a preset trigger event, collecting IO performance data corresponding to the current system layer, where the current system layer is the system layer in the operating system of the target device that is currently processing the target IO request; and determining tracking metadata that identifies the source of the IO performance data corresponding to the current system layer. In this way, collection of IO performance data is completed in kernel space, and no instrumentation of user space is required as in the prior art, which improves portability.

Description

Method, device, equipment and storage medium for acquiring IO performance data
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for acquiring IO performance data.
Background
With the continuous development of computer technology, supercomputers have proven their value in scientific and technological fields such as weather forecasting and aerospace. A supercomputer's application programs generate or depend on large amounts of data, which are stored as files in a PB-scale shared high-performance file system. Because both the users of the supercomputer and the developers of the high-performance file system have very limited insight into the supercomputer's IO performance when these files are accessed, studying the IO performance of supercomputers is of great significance.
At present, collecting IO performance data of a supercomputer generally requires instrumentation in the supercomputer's user space. Because instrumentation is strongly tied to the development language of the application program in user space, existing IO performance data collection tools have low portability.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, embodiments of the present disclosure provide a method, an apparatus, a device, and a storage medium for acquiring IO performance data.
A first aspect of the embodiments of the present disclosure provides a method for acquiring IO performance data, where the method includes:
monitoring the running progress of a target kernel code based on a preset execution engine in the target equipment in the process of running the target kernel code corresponding to the target IO request by the target equipment;
when monitoring that the target kernel code runs to the kernel code corresponding to the preset trigger event, acquiring IO performance data corresponding to a current system layer, wherein the current system layer is a system layer which processes a target IO request currently in an operating system of the target device;
tracking metadata for identifying a source of IO performance data corresponding to a current system layer is determined.
A second aspect of the embodiments of the present disclosure provides an IO performance data acquisition apparatus, including:
the monitoring module is used for monitoring the running progress of the target kernel code based on a preset execution engine in the target equipment in the process that the target equipment runs the target kernel code corresponding to the target IO request;
the acquisition module is used for acquiring IO performance data corresponding to the current system layer when it is monitored that the target kernel code runs to the kernel code corresponding to a preset trigger event, where the current system layer is the system layer in the operating system of the target device that is currently processing the target IO request;
and the determining module is used for determining tracking metadata used for identifying the source of the IO performance data corresponding to the current system layer.
A third aspect of the embodiments of the present disclosure provides an electronic device, where the device includes: a processor and a memory, the memory storing a computer program which, when executed by the processor, performs the method of the first aspect described above.
A fourth aspect of embodiments of the present disclosure provides a computer-readable storage medium having a computer program stored therein, which, when executed by a processor, may implement the method of the first aspect described above.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program/instructions which, when executed by a processor, implements the method of the first aspect described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
according to the method and the device, in the process of operating the target kernel code corresponding to the target IO request by the target equipment, monitoring the operating progress of the target kernel code based on a preset execution engine in the target equipment; when monitoring that the target kernel code runs to the kernel code corresponding to the preset trigger event, acquiring IO performance data corresponding to a current system layer, wherein the current system layer is a system layer which processes a target IO request currently in an operating system of the target device; tracking metadata for identifying a source of IO performance data corresponding to a current system layer is determined. By adopting the technical scheme, the acquisition of the IO performance data can be completed in the kernel space, and the pile insertion in the application space is not needed like in the prior art, so that the portability can be improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
To explain the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below; it will be apparent to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
Fig. 1 is a logic diagram of an embodiment of the present disclosure for acquiring IO performance data through instrumentation;
fig. 2 is a schematic flowchart of an IO performance data acquisition method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the logic for implementing an eBPF according to an embodiment of the present disclosure;
fig. 4 is a logic diagram of an IO performance data collection process provided by an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another IO performance data acquisition method according to an embodiment of the present disclosure;
fig. 6 is a logic diagram of an IO performance data collection process according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an IO performance data acquisition device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
Fig. 1 is a logic diagram of collecting IO performance data through instrumentation according to an embodiment of the present disclosure. Referring to fig. 1, an application program written by a developer to implement certain application functions contains multiple pieces of application function code (i.e., user code), such as "first code, second code, third code, fourth code, ...". Some of this application function code calls kernel function code (i.e., kernel code) in kernel space; for example, the "third code" may call the "first kernel function code" and the "fourth code" may call the "second kernel function code". To collect IO performance data, instrumentation must be performed: for example, a "first insertion code" and "second insertion code" are inserted between the "second code" and "third code", and a "third insertion code" and "fourth insertion code" are inserted between the "fourth code" and "fifth code". The application developer must write the "first insertion code, second insertion code, third insertion code and fourth insertion code" in the same programming language as the application function code, which results in low portability. On the one hand, application-side instrumentation depends strongly on the programming language and running scale of the application program, has low portability, and cannot detect I/O performance data dynamically; on the other hand, system-side instrumentation relies on specific system support software, cannot adapt to supercomputing systems with different hardware architectures, and likewise has low portability.
In view of this, the present disclosure provides a method, an apparatus, a device, and a storage medium for acquiring IO performance data. Next, a detailed description is first given of an IO performance data acquisition method.
Fig. 2 is a schematic flowchart of an IO performance data collection method provided in an embodiment of the present disclosure, where the method may be executed by an electronic device. The electronic device may be, for example, a mobile phone, tablet computer, notebook computer, desktop computer, smart television, or the like. As shown in fig. 2, the method provided by this embodiment includes the following steps:
s210, monitoring the running progress of the target kernel code based on a preset execution engine in the target equipment in the process that the target equipment runs the target kernel code corresponding to the target IO request.
Specifically, the target device may include a supercomputer, but is not limited thereto.
Specifically, the target device may be divided into a plurality of nodes in its hardware system architecture. For example, a supercomputer with a two-layer hardware architecture includes two kinds of nodes, compute nodes and storage nodes, while a supercomputer with a three-layer hardware architecture includes three kinds, compute nodes, IO forwarding nodes and storage nodes.
A plurality of tasks, such as task A.1, task A.2 and task A.3, may run on one node, and one task may issue a plurality of IO (Input/Output) requests; for example, task A.1 issues IO requests A.1.IO.1 and A.1.IO.2.
The target IO request may be any IO request made by any task.
Specifically, the software operating system of the target device divides the memory space of the target device into user space and kernel space. User space runs the user code written by application developers, and kernel space runs the kernel code provided by the operating system.
The target kernel code corresponding to the target IO request is the kernel code in the kernel space called for responding to the target IO request.
Specifically, the default execution engine may include eBPF (extended BPF), but is not limited thereto.
eBPF is a general-purpose execution engine in the Linux kernel that evolved from the Berkeley Packet Filter (BPF). eBPF provides the ability to execute a piece of code when a specific kernel or application event occurs; that is, it offers a way to extend the kernel's capabilities and to observe the kernel as it executes, making the code running in kernel space observable.
And S220, when the target kernel code is monitored to run to the kernel code corresponding to the preset trigger event, acquiring IO performance data corresponding to the current system layer.
Specifically, the preset trigger event may be any event that can trigger the acquisition of IO performance data corresponding to the current system layer.
Specifically, the target device may be divided into a plurality of system layers in its software operating system architecture, such as a client, a buffer storage pool, an I/O service optimization layer and a shared storage layer. A system layer is a level of the IO software stack: the process of completing an IO request is divided logically by function into multiple IO software stack layers, with different layers responsible for completing different parts of the whole IO.
The current system layer is a system layer which is used for processing the target IO request currently in the operating system of the target device.
In particular, IO performance data is data that characterizes the quality and/or characteristics of IO.
Illustratively, fig. 3 is a schematic diagram of the execution logic of eBPF provided in an embodiment of the present disclosure. Referring to fig. 3, the code that collects the IO performance data and tracking metadata (the BPF program) is compiled into a bytecode file (prog.bpf) by a compiler (LLVM/Clang), and a loader submits the compiled bytecode file to eBPF in kernel space. eBPF checks through a verifier whether the bytecode would damage the current system; bytecode that passes the check is compiled into machine instructions by just-in-time (JIT) compilation and attached to hooks set by eBPF (i.e., the preset trigger events). When the execution flow reaches or leaves a hook, the attached bytecode is triggered and executed. BPF maps serve as a communication bridge between the bytecode in the kernel and the application program in user space; for example, the collected IO performance data may be transmitted to user space through BPF maps for visual display.
It should be noted that specific contents of the preset trigger event may be set by a person skilled in the art according to the type of the IO performance data that needs to be collected, and the setting is not limited herein. The following description is of some typical examples and should not be construed as limiting the present disclosure.
In some embodiments, the target IO request may include a block IO request, and at this time, S220 may include: s221, when the target kernel code is monitored to run to the block request sending tracking point, acquiring a request sending time corresponding to the block request sending tracking point;
s222, when the target kernel code is monitored to run to a block request completion tracking point, acquiring a request completion time corresponding to the block request completion tracking point;
and S223, taking the difference between the request completion time and the request sending time to determine the block IO request latency of the block IO request.
For example, fig. 4 is a logic diagram of an IO performance data collection process provided in an embodiment of the present disclosure. Referring to fig. 4, a loader is used in advance to attach the compiled bytecode for "acquire the request sending time at the block request sending tracking point" to the block request sending tracking point, block_rq_issue, and to attach the compiled bytecode for "acquire the request completion time at the block request completion tracking point" to the block request completion tracking point, block_rq_complete. When an application sends a block IO request, i.e., requests block device IO, block_rq_issue fires, which triggers collection of the request sending time (issue_time). Similarly, when the kernel completes the corresponding block device IO, block_rq_complete fires, which triggers collection of the request completion time (complete_time). By computing the difference between the request completion time and the request sending time, the block IO request latency of the block IO request is obtained, and it may also be transmitted to user space through the communication bridge.
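The kernel-side bookkeeping can be pictured in user-space terms: remember the issue timestamp keyed by request, then compute the delta on completion. A minimal Python sketch of that logic (in the real eBPF program the dictionary would be a BPF map keyed by the block request, and the helper names `record_issue`/`record_complete` are illustrative, not from the patent):

```python
# Sketch of the issue/complete delta computation described above.
issue_times = {}  # stands in for a BPF map: request id -> issue timestamp (ns)

def record_issue(req_id, ts_ns):
    """Triggered at the block_rq_issue tracepoint: remember the issue time."""
    issue_times[req_id] = ts_ns

def record_complete(req_id, ts_ns):
    """Triggered at block_rq_complete: return the request latency in ns."""
    start = issue_times.pop(req_id, None)
    if start is None:
        return None  # completion observed without a matching issue event
    return ts_ns - start

record_issue("req-1", 1_000_000)
latency = record_complete("req-1", 4_500_000)
print(latency)  # 3500000 ns
```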
In other embodiments, if the IO performance data includes the disk access pattern (random access/sequential access), the block request issue tracking point (block_rq_issue) is set as the preset trigger event.
And if the IO performance data includes the IO type, the file open function of the virtual file system (vfs_open) is set as the preset trigger event.
If the IO performance data includes the IO size, the virtual file system file read function (vfs_read), the virtual file system file write function (vfs_write) and the other file operation functions (struct file_operations) are set as preset trigger events.
If the IO performance data includes the number of read/write Operations performed Per Second (IOPS), it may be calculated from related data among the acquired IO performance data, for example by the following formulas:
IO Time = Seek Time + (60 sec / Rotational Speed) / 2 + IO Chunk Size / Transfer Rate;
IOPS = 1 / IO Time = 1 / (Seek Time + (60 sec / Rotational Speed) / 2 + IO Chunk Size / Transfer Rate);
where IO Time is the time required by a single input/output operation, Seek Time is the disk seek time, 60 sec is 60 seconds, Rotational Speed is the disk rotation speed, (60 sec / Rotational Speed) / 2 is the average rotational delay to position the sector, IO Chunk Size is the disk block size, Transfer Rate is the transfer rate, and IO Chunk Size / Transfer Rate is the data transfer time.
If the IO performance data includes the transfer rate (Transfer Rate), then Transfer Rate = IOPS × IO Chunk Size.
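The formulas above can be checked numerically. A small sketch using illustrative drive parameters (the example values, 4 ms seek, 7200 rpm, 4 KiB chunks, 100 MB/s, are assumptions, not values from the patent):

```python
def io_time(seek_time_s, rpm, chunk_bytes, transfer_rate_bps):
    """IO Time = Seek Time + (60 s / Rotational Speed) / 2 + Chunk Size / Rate."""
    rotational_delay = (60.0 / rpm) / 2.0        # average rotational latency
    transfer_time = chunk_bytes / transfer_rate_bps
    return seek_time_s + rotational_delay + transfer_time

def iops(seek_time_s, rpm, chunk_bytes, transfer_rate_bps):
    """IOPS = 1 / IO Time."""
    return 1.0 / io_time(seek_time_s, rpm, chunk_bytes, transfer_rate_bps)

def transfer_rate(iops_value, chunk_bytes):
    """Transfer Rate = IOPS x IO Chunk Size (bytes per second)."""
    return iops_value * chunk_bytes

# Illustrative 7200 rpm disk: 4 ms seek, 4 KiB chunks, 100 MB/s sustained rate.
rate = iops(0.004, 7200, 4096, 100e6)
print(round(rate, 1))
```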
It should be noted that when the preset execution engine is eBPF, since eBPF is a general-purpose engine of the Linux kernel, the code that collects the IO performance data and tracking metadata (the BPF program) may be compiled on a node other than the collection node and run on the node being collected from.
And S230, determining tracking metadata for identifying the source of the IO performance data corresponding to the current system layer.
Specifically, the specific content items included in the tracking metadata may be set by a person skilled in the art according to the practical situation, as long as the source of the IO performance data can be identified; this is not limited here.
For example, the tracking metadata may include a task identifier jobid of the task to which the target IO request belongs, an IO request identifier traceid of the target IO request, a parent node identifier parentid, a current processing node identifier ioid of the node to which the current system layer belongs, and a time identifier timestamp recorded with the tracking metadata. From the jobid it can be known which task issued the target IO request; from the traceid, which IO request of that task it is; from the parentid, from which node in the target device the IO stream corresponding to the target IO request flowed; and from the timestamp, at what time the current system layer started processing the target IO request. But the metadata is not limited thereto.
Thus, the structure of the tracking metadata I/OContext may be as follows:
I/OContext
{
jobid;     // task identifier
traceid;   // IO request identifier
parentid;  // parent node identifier
ioid;      // current processing node identifier
timestamp; // recording time
}
According to the method and device, while the target device runs the target kernel code corresponding to the target IO request, the running progress of the target kernel code is monitored based on a preset execution engine in the target device; when the target kernel code is monitored running to the kernel code corresponding to a preset trigger event, IO performance data corresponding to the current system layer is collected, where the current system layer is the system layer in the operating system of the target device that is currently processing the target IO request; and tracking metadata identifying the source of the IO performance data corresponding to the current system layer is determined. With this technical solution, collection of IO performance data is completed in kernel space, and no instrumentation of application space is required as in the prior art, which improves portability.
In another embodiment of the present disclosure, determining tracking metadata for identifying a source of IO performance data corresponding to a current system layer includes:
if the current system layer belongs to a first node of the target device and is a first preset trigger event in a plurality of preset trigger events corresponding to the first node, determining a task identifier of a task to which the target IO request belongs, determining an IO request identifier of the target IO request, setting a parent node identifier to be null, determining a current processing node identifier of the node to which the current system layer belongs, and determining a time identifier when tracking metadata is recorded.
Specifically, the first node described herein refers to a node in the target device that initiates the target IO request.
Specifically, the target device may be divided into a plurality of nodes from a hardware system architecture, and may be divided into a plurality of system layers from a software operating system architecture. Each node corresponds to a plurality of system layers, and in the plurality of system layers, which system layer corresponds to a preset trigger event can be set according to the actual situation by a person skilled in the art, and the setting is not limited here, so that IO performance data can be flexibly collected.
Specifically, the first preset trigger event among the multiple preset trigger events corresponding to the first node refers to the one that, among those events, is the first to trigger collection of IO performance data while the target device processes the target IO request.
For example, if task A.1 of node A (e.g., a compute node) initiates the target IO request A.1.IO.1, node A may generate IO tracking metadata in which jobid = A.1, from which the task that initiated the target IO request can be identified; traceid = A.1.IO.1, from which the specific IO request of that task can be identified; parentid = null; ioid = A, from which the node that initiated the target IO request can be identified; and timestamp = current time, from which the time the target IO request was initiated can be known.
Optionally, if the current system layer belongs to the first node of the target device and the trigger is not the first preset trigger event among the plurality of preset trigger events corresponding to the first node, the tracking metadata of the IO performance data acquired at the previous preset trigger event is inherited.
Specifically, for the inside of the same node, the tracking metadata of the IO performance data corresponding to different system layers does not need to be updated, and only the tracking metadata is inherited and continuously transmitted downward.
Specifically, when the tracking metadata is transmitted between different system layers, it may be attached to a header message of a Transmission Control Protocol/Internet Protocol (TCP/IP) network flow, but the present disclosure is not limited thereto.
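The patent does not specify a wire encoding for the metadata carried in the header message. As one illustrative possibility (the fixed field widths, the format string, and the helper names `pack_ctx`/`unpack_ctx` are all assumptions for the sketch), the five I/OContext fields could be serialized into a compact, fixed-size blob:

```python
import struct

# Hypothetical fixed-width layout: four 16-byte, NUL-padded ASCII identifier
# fields (jobid, traceid, parentid, ioid) followed by a 64-bit nanosecond
# timestamp, all in network byte order.
FMT = "!16s16s16s16sQ"

def pack_ctx(jobid, traceid, parentid, ioid, timestamp_ns):
    """Serialize one I/OContext record for attachment to a header message."""
    ids = [s.encode().ljust(16, b"\0") for s in (jobid, traceid, parentid, ioid)]
    return struct.pack(FMT, *ids, timestamp_ns)

def unpack_ctx(blob):
    """Parse a serialized I/OContext record back into its five fields."""
    *ids, ts = struct.unpack(FMT, blob)
    return [b.rstrip(b"\0").decode() for b in ids] + [ts]

blob = pack_ctx("A.1", "A.1.IO.1", "", "A", 1_700_000_000_000_000_000)
print(len(blob), unpack_ctx(blob))
```

A fixed-size record like this keeps the per-request overhead constant, which matters when the metadata rides along with every IO stream.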
Optionally, determining the tracking metadata for identifying the source of the IO performance data corresponding to the current system layer further includes: if the current system layer belongs to a non-first node of the target device and the trigger is the first preset trigger event among the plurality of preset trigger events corresponding to that non-first node, inheriting the task identifier and the IO request identifier from the parent node, and updating the parent node identifier, the current processing node identifier and the time identifier.
Specifically, the non-first node described herein refers to a node subsequent to a node that initiates a target IO request, with respect to the sequence in which each node in the target device processes the target IO request.
For example, for a supercomputer with a three-layer hardware architecture, if the target IO request is initiated by a compute node, both the IO forwarding node and the storage node are non-first nodes.
Specifically, the first preset trigger event among the multiple preset trigger events corresponding to a non-first node refers to the one that, among those events, is the first to trigger collection of IO performance data while the target device processes the target IO request.
Specifically, after the IO stream corresponding to the target IO request crosses to the next node, the first system layer on that node that corresponds to a preset trigger event parses the tracking metadata upon receiving it, and updates the parent node identifier parentid, the current processing node identifier ioid, and the start time identifier timestamp in the tracking metadata.
Specifically, the parent node described herein refers to a node that is previous to a node that currently processes a target IO request, with respect to a sequence in which each node in the target device processes the target IO request.
Specifically, the parent node identifier is updated, that is, the parent node identifier parentid is set to the ioid in the tracking metadata corresponding to the parent node.
Specifically, the current processing node identifier is updated, that is, the current system layer of the current processing target IO request generates an identifier of a node to which the current system layer belongs, and the identifier is used as the current processing node identifier ioid.
Specifically, the time identifier is updated, that is, the time when the current system layer starts to process the target IO request is recorded and is used as the start time identifier timestamp.
Illustratively, when node B (e.g., an IO forwarding node) receives tracking metadata from node A, it may parse the metadata. Since the target IO request may not terminate at node B, the request may continue to be passed to node C (e.g., a storage node). Node B therefore inherits the jobid and traceid received from node A, and sets parentid to the ioid (A) in the tracking metadata received from node A, recording that the IO stream corresponding to the target IO request was sent by node A. It sets ioid = B to record that the new tracking metadata is generated while B handles the request, and records the time at which the current system layer starts processing the IO request as the timestamp.
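The initialization and cross-node update rules described in this section can be sketched compactly. In the Python sketch below, a context is a dict with the five metadata fields; the node and task names and the helper names `init_ctx`/`cross_node` are illustrative, not from the patent:

```python
def init_ctx(task_id, io_id, node, now):
    """First trigger event on the node that initiated the IO request:
    set jobid/traceid, leave parentid empty, stamp the originating node."""
    return {"jobid": task_id, "traceid": io_id,
            "parentid": None, "ioid": node, "timestamp": now}

def cross_node(ctx, node, now):
    """First trigger event on a later (non-first) node: inherit jobid and
    traceid, point parentid at the sending node, and restamp the time."""
    return {"jobid": ctx["jobid"], "traceid": ctx["traceid"],
            "parentid": ctx["ioid"], "ioid": node, "timestamp": now}

ctx_a = init_ctx("A.1", "A.1.IO.1", "A", 100)   # compute node initiates
ctx_b = cross_node(ctx_a, "B", 130)             # e.g. IO forwarding node
ctx_c = cross_node(ctx_b, "C", 150)             # e.g. storage node
print(ctx_c["parentid"], ctx_c["ioid"])  # B C
```

Within a single node, per the text, later trigger events simply reuse the same context unchanged; only a node crossing calls `cross_node`, which is what lets the sub IO streams be stitched back into one complete IO stream.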
Optionally, if the current system layer belongs to a non-first node of the target device and the trigger is not the first preset trigger event among the plurality of preset trigger events corresponding to that non-first node, the tracking metadata of the IO performance data acquired at the previous preset trigger event is inherited.
Specifically, for the inside of the same node, the tracking metadata of the IO performance data corresponding to different system layers does not need to be updated, and only the tracking metadata is inherited and continuously passed down.
It can be understood that most current target devices, for example supercomputers, separate "computation" from "storage". As a result, after a computing node initiates an IO request, when the IO stream corresponding to that request flows to other, non-computing nodes for execution, some high-level information may be lost (for example, which task initiated the IO request), so the IO performance data collected at each node becomes an information island, and the independent pieces of IO performance data (i.e., sub IO streams) cannot be linked in series into a complete IO stream. In the embodiment of the present disclosure, however, setting the trace metadata breaks these information islands and associates the related sub IO streams to form complete IO stream data.
Fig. 5 is a schematic flowchart of another IO performance data collection method according to an embodiment of the present disclosure. The embodiments of the present disclosure are optimized based on the above embodiments, and the embodiments of the present disclosure may be combined with various alternatives in one or more of the above embodiments.
As shown in fig. 5, the IO performance data collection method may include the following steps.
S510, monitoring the running progress of the target kernel code based on a preset execution engine in the target device in the process that the target device runs the target kernel code corresponding to the target IO request.
Specifically, S510 is similar to S210, and is not described here.
S520, when the target kernel code is monitored to run to the kernel code corresponding to the preset trigger event, IO performance data corresponding to the current system layer is collected.
Specifically, S520 is similar to S220, and is not described herein again.
S530, determining tracking metadata for identifying the source of the IO performance data corresponding to the current system layer.
Specifically, S530 is similar to S230, and is not described herein again.
And S540, transmitting the IO performance data and the tracking metadata corresponding to the current system layer to a user space.
Specifically, the IO performance data and the tracking metadata corresponding to the current system layer may be transferred to the user space through a communication bridge (BPF Maps), but is not limited thereto.
It can be understood that, as described above, the collection of the IO performance data and tracking metadata is completed in kernel space; transmitting them to the user space facilitates visually displaying the IO performance data to the user after subsequent processing.
Optionally, the method further comprises: when the IO performance data and tracking metadata corresponding to a plurality of current system layers, collected as each of the plurality of preset trigger events corresponding to the target IO request is triggered, have been transmitted to the user space, storing the IO performance data and tracking metadata corresponding to the plurality of current system layers based on a preset storage rule.
Specifically, in the process of processing the target IO request, the target device triggers one acquisition of IO performance data and one determination of tracking metadata each time the target kernel code runs to a preset trigger event, so that IO performance data and tracking metadata corresponding to a plurality of current system layers are obtained over the whole process of handling the target IO request. Furthermore, the IO performance data corresponding to different IO requests can be distinguished by the tracking metadata, so that the IO performance data corresponding to the same IO request can be gathered together.
Specifically, the database may be used to store the IO performance data and the tracking metadata, but the present invention is not limited thereto, and the specific content of the preset storage rule may be set by a person skilled in the art according to the actual situation, and is not limited herein.
It can be understood that by storing the IO performance data and the tracking metadata, the IO performance data can be persisted so that a subsequent user can perform secondary processing such as viewing, copying, editing and the like on the IO performance data.
Optionally, based on a preset storage rule, storing IO performance data and tracking metadata corresponding to a plurality of current system layers, including: and storing the IO performance data and the tracking metadata corresponding to the current system layer based on taking the task identifier, the IO request identifier and the current processing node identifier in the tracking metadata corresponding to the current system layer as the primary key of the IO performance data corresponding to the current system layer.
In particular, the following data structure may be employed to store the IO performance data and tracking metadata: ((x1, x2, x3), x4, x5), where x1 represents jobid, x2 represents traceid, x3 represents ioid, x4 represents timestamp, and x5 represents the collected IO performance data.
It can be understood that since the jobid uniquely identifies the task running in the target device, the traceid uniquely identifies an IO request initiated by a given task, and the ioid uniquely identifies a sub IO stream within a given IO request, the primary key is designed to be (x1, x2, x3) and can uniquely identify a given piece of IO performance data, which is convenient for indexing and ordered storage.
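As a hedged illustration of this layout, the sketch below stores each record under the composite primary key (jobid, traceid, ioid), with the timestamp and collected performance data as the value. The class and method names are illustrative, and a real deployment would use a database as mentioned above:

```python
# In-memory sketch of the ((x1, x2, x3), x4, x5) layout: the composite key
# (jobid, traceid, ioid) uniquely identifies one piece of IO performance data.
class PerfDataStore:
    def __init__(self):
        # (jobid, traceid, ioid) -> (timestamp, perf_data)
        self._rows = {}

    def put(self, jobid, traceid, ioid, timestamp, perf_data):
        self._rows[(jobid, traceid, ioid)] = (timestamp, perf_data)

    def get(self, jobid, traceid, ioid):
        return self._rows.get((jobid, traceid, ioid))

    def rows_for_request(self, jobid, traceid):
        # All sub IO streams of one IO request, ordered by start time.
        rows = [(k[2], v[0], v[1]) for k, v in self._rows.items()
                if k[0] == jobid and k[1] == traceid]
        return sorted(rows, key=lambda r: r[1])

store = PerfDataStore()
store.put("job1", "a1.io.1", "B", 2.0, {"latency_us": 120})
store.put("job1", "a1.io.1", "A", 1.0, {"latency_us": 80})
# rows_for_request returns the A record first, since it started earlier.
```

Because the key prefix is (jobid, traceid), all records of one IO request cluster together, which is the indexing convenience the paragraph above refers to.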
In order to more clearly illustrate the IO performance data collection method provided by the embodiment of the present disclosure, a detailed example will be described below.
For example, fig. 6 is a logic diagram of an IO performance data collection process provided in an embodiment of the present disclosure. Referring to fig. 6, an acquisition tool for acquiring IO performance data and tracking metadata may include: probes (probes), agents (agents), collectors (collectors), and data stores (stores).
The detector is used for collecting IO performance data and adding tracking metadata (I/O Context) into the IO stream. The tracking metadata solves the following problem: because a complete IO stream flows through different nodes (such as computing nodes, IO forwarding nodes, and storage nodes), the IO performance data collected on any single node cannot, by itself, determine which IO stream it belongs to. The tracking metadata maintains this association, so that the IO performance data independently collected at each node can, once the tracking metadata is added, be linked through it into a complete IO stream.
The agent is used for monitoring the IO performance data and tracking metadata collected by the detector, processing them, and sending them to the collector. The agent is a daemon process that monitors communication: when the bytecode file (prog.bpf), compiled from the code that acquires the IO performance data and tracking metadata in kernel space, writes updated IO performance data and tracking metadata to the communication bridge, the agent reads them into user space, performs simple processing for visual display (such as generating tables and images), and then sends them to the collector. Because the agent is lightweight and the in-kernel handling of data is done by the bytecode file (prog.bpf), the agent's overhead on the system is very small. Meanwhile, data can also be written into the communication bridge through the agent so as to control the bytecode file (prog.bpf).
The collector is used for receiving IO performance data and tracking metadata sent by the agent, cleaning, extracting, associating and/or collecting the IO performance data and the tracking metadata, and storing the IO performance data and the tracking metadata to the data storage. Specifically, the processing of the IO performance data and the tracking metadata by the collector includes, but is not limited to, distinguishing IO requests initiated by different tasks according to jobid, distinguishing which IO request different sub IO streams belong to according to traceid, determining a sub IO stream sequence relation according to parentid and ioid, and the like.
With continued reference to fig. 4, when the computing node processes an IO request (for example, IO request a1.io.1 or IO request a1.io.2), a preset trigger event corresponding to the computing node may trigger the detector to collect IO performance data and tracking metadata, and the detector may transmit them through the communication bridge (BPF Maps) to the agent, so that the agent processes them and passes them on to the collector. When the computing node finishes processing and the IO forwarding node takes over the IO request, a preset trigger event corresponding to the IO forwarding node triggers the detector in the same way, and the data again flows through the communication bridge to the agent and on to the collector. Likewise, when the IO forwarding node finishes processing and the storage node takes over the IO request, a preset trigger event corresponding to the storage node triggers the detector, which transmits the collected data through the communication bridge to the agent for processing and forwarding to the collector.
Except for the computing node, the other nodes (such as IO forwarding nodes and storage nodes) only need to generate their own ioid and timestamp, inherit the jobid and traceid originating at the computing node, and set parentid to the ioid of the previous node. In this way, as an IO stream flows through the nodes, the task (jobid) and the IO request (traceid) to which it corresponds are always recorded, which guarantees that the sub IO stream association is not lost while the IO stream flows, and parentid together with ioid records the precedence relationship of the sub IO streams. After receiving the IO performance data and tracking metadata, the collector may clean, extract, associate, and/or aggregate them, and then store them in the data store.
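The parent/child linkage described above is what lets the collector reassemble an ordered IO stream from records gathered on different nodes. A minimal sketch of that association step, assuming each record of one IO request carries its ioid and parentid (with parentid empty at the compute node), might look like:

```python
def chain_sub_streams(parent_of):
    # parent_of maps each record's ioid to its parentid (None for the sub IO
    # stream that started at the compute node). Following the parentid ->
    # ioid links yields the complete, ordered IO stream.
    child_of = {parent: ioid for ioid, parent in parent_of.items()}
    chain, cursor = [], None
    while cursor in child_of:
        cursor = child_of[cursor]
        chain.append(cursor)
    return chain

# Records collected independently on three nodes, all sharing one
# jobid/traceid pair:
ordered = chain_sub_streams({"A": None, "B": "A", "C": "B"})
# ordered -> ["A", "B", "C"]
```

This is only the ordering step; the collector described in the disclosure would first group records by (jobid, traceid) and then apply such an association.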
To sum up, in a supercomputer scenario, the acquisition tool of the embodiment of the present disclosure makes the user space level of the software system unaware of the acquisition of IO performance data (system support software needs no instrumentation); IO performance data for IO requests initiated from user space can be acquired without instrumenting user space (e.g., application code or supporting software); the availability of the collection tool is independent of the development language of the user space application; IO performance data can be collected dynamically, in real time, while the program runs, and collection does not depend on whether the program later stops due to a fault; IO performance data on different system layers (such as the client, the buffer storage pool, the I/O service optimization layer, and the shared storage layer) can be collected independently, and the IO performance data independently collected on different system layers can be associated, from the perspective of the IO stream generated by a single I/O request, to form IO stream characteristic information; and the tool can be applied to supercomputers with different system architectures (such as supercomputers with two-layer and three-layer hardware system architectures).
Fig. 7 is a schematic structural diagram of an IO performance data acquisition apparatus provided in an embodiment of the present disclosure, where the IO performance data acquisition apparatus may be understood as the electronic device or a part of functional modules in the electronic device. As shown in fig. 7, the IO performance data acquisition apparatus 700 includes:
the monitoring module 710 is configured to monitor an operation progress of a target kernel code based on a preset execution engine in a target device in a process that the target device operates the target kernel code corresponding to the target IO request;
the acquisition module 720 is configured to acquire IO performance data corresponding to a current system layer when it is monitored that the target kernel code runs to a kernel code corresponding to a preset trigger event, where the current system layer is a system layer that currently processes a target IO request in an operating system of the target device;
a determining module 730, configured to determine tracking metadata for identifying a source of IO performance data corresponding to the current system layer.
In another embodiment of the present disclosure, the target IO request comprises a block IO request;
wherein, the collecting module 720 may include:
the first acquisition submodule is used for acquiring the request sending time corresponding to the block request sending tracking point when the target kernel code is monitored to run to the block request sending tracking point;
the second acquisition submodule is used for acquiring a request completion time corresponding to the block request completion tracking point when the target kernel code is monitored to run to the block request completion tracking point;
the first determining submodule is used for making a difference between the request sending time and the request finishing time and determining the block IO request delay of the block IO request.
In another embodiment of the present disclosure, the determining module 730 may include:
and the second determining submodule is used for determining a task identifier of a task to which the target IO request belongs, determining an IO request identifier of the target IO request, setting the father node identifier to be null, determining a current processing node identifier of a node to which the current system layer belongs, and determining a time identifier when the tracking metadata is recorded if the current system layer belongs to a first node of the target device and is a first preset trigger event in a plurality of preset trigger events corresponding to the first node.
In another embodiment of the present disclosure, the determining module 730 may further include:
and the third determining submodule is used for inheriting the task identifier and the IO request identifier from the parent node and updating the parent node identifier, the current processing node identifier and the update time identifier if the current system layer belongs to the non-first node of the target device and is the first preset trigger event in the plurality of preset trigger events corresponding to the non-first node.
In still another embodiment of the present disclosure, the apparatus may further include:
and the transmission module is used for transmitting the IO performance data and the tracking metadata corresponding to the current system layer to the user space.
In still another embodiment of the present disclosure, the apparatus may further include:
and the storage module is used for storing the IO performance data and the tracking metadata corresponding to the plurality of current system layers based on a preset storage rule when the IO performance data and the tracking metadata corresponding to the plurality of current system layers acquired by respectively triggering the plurality of preset trigger events corresponding to the target IO request are transmitted to the user space.
In still another embodiment of the present disclosure, the storage module includes:
and the storage sub-module is used for storing the IO performance data and the tracking metadata corresponding to the current system layer based on taking the task identifier, the IO request identifier and the current processing node identifier in the tracking metadata corresponding to the current system layer as the main key of the IO performance data corresponding to the current system layer.
The apparatus provided in this embodiment can execute the method of any of the above embodiments, and the execution manner and the beneficial effects are similar, and are not described herein again.
In addition to the method and apparatus described above, the present disclosure also provides a computer-readable storage medium, in which instructions are stored, and when the instructions are executed on a terminal device, the terminal device is caused to implement the method of any one of the above embodiments.
Embodiments of the present disclosure also provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement the method of any of the above embodiments.
An embodiment of the present disclosure further provides an electronic device, including: a memory having a computer program stored therein; a processor for executing the computer program, the computer program when executed by the processor may implement the method of any of the above embodiments.
For example, fig. 8 is a schematic structural diagram of an electronic device 800 suitable for implementing embodiments of the present disclosure. The electronic device 800 in the disclosed embodiment may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), or a vehicle terminal (e.g., a car navigation terminal), and a stationary terminal such as a digital TV or a desktop computer. The electronic device shown in fig. 8 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
monitoring the running progress of a target kernel code based on a preset execution engine in the target equipment in the process of running the target kernel code corresponding to the target IO request by the target equipment;
when monitoring that a target kernel code runs to a kernel code corresponding to a preset trigger event, acquiring IO performance data corresponding to a current system layer, wherein the current system layer is a system layer which processes a target IO request currently in an operating system of target equipment;
tracking metadata for identifying a source of IO performance data corresponding to a current system layer is determined.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method of any of the embodiments may be implemented, where an execution manner and beneficial effects are similar, and are not described herein again.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An IO performance data acquisition method is characterized by comprising the following steps:
monitoring the running progress of a target kernel code based on a preset execution engine in a target device in the process that the target device runs the target kernel code corresponding to a target IO request;
when monitoring that the target kernel code runs to a kernel code corresponding to a preset trigger event, acquiring IO performance data corresponding to a current system layer, wherein the current system layer is a system layer which processes the target IO request currently in an operating system of the target device;
and determining tracking metadata for identifying the source of the IO performance data corresponding to the current system layer.
2. The method of claim 1, wherein the target IO request comprises a block IO request;
when monitoring that the target kernel code runs to the kernel code corresponding to the preset trigger event, acquiring IO performance data corresponding to the current system layer, including:
when monitoring that the target kernel code runs to a block request sending tracking point, acquiring a request sending time corresponding to the block request sending tracking point;
when monitoring that the target kernel code runs to a block request completion tracking point, acquiring a request completion time corresponding to the block request completion tracking point;
and determining the block IO request latency of the block IO request by taking the difference between the request completion time and the request sending time.
3. The method of claim 1, wherein determining tracking metadata for identifying the source of the IO performance data corresponding to the current system layer comprises:
if the current system layer belongs to a first node of the target device and the preset trigger event is the first of a plurality of preset trigger events corresponding to the first node: determining a task identifier of the task to which the target IO request belongs, determining an IO request identifier of the target IO request, setting a parent node identifier to null, determining a current processing node identifier of the node to which the current system layer belongs, and determining a time identifier indicating when the tracking metadata is recorded.
4. The method of claim 3, wherein determining tracking metadata for identifying the source of the IO performance data corresponding to the current system layer further comprises:
if the current system layer belongs to a non-first node of the target device and the preset trigger event is the first of a plurality of preset trigger events corresponding to the non-first node, inheriting the task identifier and the IO request identifier from the parent node, and updating the parent node identifier, the current processing node identifier, and the time identifier.
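The metadata rules of claims 3 and 4 can be illustrated with a small sketch. The field and function names are hypothetical; the claims only specify which identifiers are created at the first node (parent set to null) and which are inherited and updated at non-first nodes:

```python
import time
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraceMeta:
    task_id: str                    # task the target IO request belongs to
    io_request_id: str              # identifies the target IO request
    parent_node_id: Optional[str]   # null (None) at the first node
    node_id: str                    # node currently processing the request
    timestamp: float                # when the metadata was recorded

def first_node_meta(task_id: str, node_id: str) -> TraceMeta:
    # First preset trigger event on the first node: create the task and
    # IO request identifiers, and set the parent node identifier to null.
    return TraceMeta(task_id, str(uuid.uuid4()), None, node_id, time.time())

def inherit_meta(parent: TraceMeta, node_id: str) -> TraceMeta:
    # Non-first node: inherit the task and IO request identifiers, and
    # update the parent node, current node, and time identifiers.
    return TraceMeta(parent.task_id, parent.io_request_id,
                     parent.node_id, node_id, time.time())
```

Because the task and IO request identifiers never change as the request moves between nodes, samples collected at different layers can later be joined back to a single logical request.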
5. The method of claim 1, further comprising:
transmitting the IO performance data and the tracking metadata corresponding to the current system layer to a user space.
6. The method according to claim 5, wherein, when the IO performance data and tracking metadata corresponding to a plurality of current system layers, whose collection is triggered respectively by a plurality of preset trigger events corresponding to the target IO request, are transmitted to the user space, the IO performance data and tracking metadata corresponding to the plurality of current system layers are stored based on a preset storage rule.
7. The method according to claim 6, wherein storing the IO performance data and tracking metadata corresponding to the plurality of current system layers based on the preset storage rule comprises:
storing the IO performance data and tracking metadata corresponding to each current system layer using the task identifier, the IO request identifier, and the current processing node identifier in the tracking metadata corresponding to that system layer as the primary key of the IO performance data corresponding to that system layer.
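The storage rule of claim 7 amounts to keying each sample by the composite (task identifier, IO request identifier, processing node identifier). A minimal in-memory sketch (the dictionary store and function name are hypothetical; a real implementation might use a database with a composite primary key):

```python
# Hypothetical user-space store: the composite key ensures that samples for
# the same IO request collected at different processing nodes never collide,
# while samples from the same node for the same request overwrite each other.
store = {}

def save_sample(meta: dict, perf_data: dict) -> None:
    key = (meta["task_id"], meta["io_request_id"], meta["node_id"])
    store[key] = {"meta": meta, "perf": perf_data}
```

With this key, querying all entries that share a (task_id, io_request_id) prefix reconstructs the request's path through the system layers.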
8. An IO performance data acquisition apparatus, comprising:
a monitoring module configured to monitor, based on a preset execution engine in a target device, the execution progress of target kernel code while the target device runs the target kernel code corresponding to a target IO request;
a collection module configured to collect IO performance data corresponding to a current system layer when it is monitored that the target kernel code has run to kernel code corresponding to a preset trigger event, wherein the current system layer is the system layer in the operating system of the target device that is currently processing the target IO request; and
a determining module configured to determine tracking metadata for identifying the source of the IO performance data corresponding to the current system layer.
9. An electronic device, comprising:
a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, implements the method of any one of claims 1-7.
10. A computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any one of claims 1-7.
CN202211100074.3A 2022-09-09 2022-09-09 Method, device, equipment and storage medium for acquiring IO performance data Active CN115202990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211100074.3A CN115202990B (en) 2022-09-09 2022-09-09 Method, device, equipment and storage medium for acquiring IO performance data


Publications (2)

Publication Number Publication Date
CN115202990A true CN115202990A (en) 2022-10-18
CN115202990B CN115202990B (en) 2022-12-06

Family

ID=83571770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211100074.3A Active CN115202990B (en) 2022-09-09 2022-09-09 Method, device, equipment and storage medium for acquiring IO performance data

Country Status (1)

Country Link
CN (1) CN115202990B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102098227A (en) * 2011-03-03 2011-06-15 成都市华为赛门铁克科技有限公司 Packet capture method and kernel module
CN110955631A (en) * 2018-09-26 2020-04-03 上海瑾盛通信科技有限公司 File access tracking method and device, storage medium and terminal
CN110955584A (en) * 2018-09-26 2020-04-03 Oppo广东移动通信有限公司 Block device access tracking method and device, storage medium and terminal
CN111756575A (en) * 2020-06-19 2020-10-09 星辰天合(北京)数据科技有限公司 Performance analysis method and device of storage server and electronic equipment
US20210026746A1 (en) * 2019-07-25 2021-01-28 Deep Factor, Inc. Systems, methods, and computer-readable media for processing telemetry events related to operation of an application
CN114461644A (en) * 2022-01-30 2022-05-10 中国农业银行股份有限公司 Data acquisition method and device, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant