CN117215850A - Method, device and storage medium for detecting memory leakage

Method, device and storage medium for detecting memory leakage

Info

Publication number
CN117215850A
Authority
CN
China
Prior art keywords
request
memory resource
memory
linked list
target
Prior art date
Legal status
Pending
Application number
CN202311256376.4A
Other languages
Chinese (zh)
Inventor
王永刚
仇锋利
杨善松
Current Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202311256376.4A
Publication of CN117215850A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The invention provides a method, a device and a storage medium for detecting memory leakage, belonging to the technical field of memory management. The method comprises the following steps: firstly, respectively creating corresponding memory resource pools for a plurality of service modules, wherein each memory resource pool is associated with a unidirectional linked list; then performing the following processing for each service module: each time an I/O request arrives, determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request; if it does not, adding the I/O request to the unidirectional linked list and recording the adding moment; finally, determining whether the service module generates a memory leak based on the waiting time of each I/O request stored in the unidirectional linked list, wherein the waiting time is obtained based on the adding moment. By this method for detecting memory leakage, the service module in which a memory resource leak occurs can be located accurately, and the time needed to discover that a service module leaks memory resources can be shortened.

Description

Method, device and storage medium for detecting memory leakage
Technical Field
The invention belongs to the technical field of memory management, and particularly relates to a method, a device and a storage medium for detecting memory leakage.
Background
For a storage device, the software is a system-level software system, and the IO stack of the whole system involves many modules. When an IO is processed, each module of the IO stack dynamically applies for part of the memory to store intermediate data produced during IO processing, and releases the applied memory resources once it has finished processing the IO. In day-to-day system development, code defects inevitably cause memory resource leaks, which waste system memory, slow down the program, and can even lead to serious consequences such as a system crash. Memory leaks are not easy to find, and even when one is found, it is often hard to confirm which module caused it because there are many service modules; investigating the modules one by one is time-consuming, labor-intensive, and expensive to resolve.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a method, apparatus and storage medium for detecting memory leaks, so as to overcome or at least partially solve the foregoing problems.
In a first aspect of the embodiment of the present application, a method for detecting a memory leak is provided, where the method includes:
Respectively creating corresponding memory resource pools for a plurality of service modules, wherein each memory resource pool is associated with a unidirectional linked list; wherein the following processing is performed for each of the service modules:
each time an I/O request arrives, determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request;
if the processing of the I/O request is not satisfied, adding the I/O request into the unidirectional linked list, and recording the adding time;
determining whether the service module generates memory leakage or not based on the waiting time of each I/O request stored in the unidirectional linked list; wherein the waiting time is obtained based on the addition time.
Further, the determining whether the service module generates the memory leak based on the waiting time of the I/O requests stored in the unidirectional linked list includes:
periodically traversing the first waiting time of each I/O request stored in the unidirectional linked list; wherein the first waiting time is obtained based on the traversing time of the current period and the adding time;
if there is an I/O request whose first waiting time is greater than or equal to a first preset waiting time, determining that the service module generates a memory leak.
Further, when determining, for each incoming I/O request, whether the current free memory resource size of the memory resource pool satisfies the processing of the I/O request, the method further comprises:
responding to the arrival of the I/O request, and inquiring second waiting time of the I/O request corresponding to the header stored in the unidirectional linked list; wherein the second waiting time is obtained based on the I/O request arrival time and the addition time;
and if the second waiting time of the I/O request is greater than or equal to a second preset waiting time, determining that the service module generates a memory leak.
Further, before the creating the corresponding memory resource pools for the plurality of service modules, the method further includes:
determining the working modes of a storage system where a plurality of service modules are located;
the creating a corresponding memory resource pool for each of the plurality of service modules includes:
under the condition that the working mode is a diagnosis mode, acquiring the minimum memory resource size of normal operation corresponding to each of a plurality of service modules;
and respectively creating corresponding memory resource pools for a plurality of service modules based on the minimum memory resource size.
Further, before determining whether the service module generates a memory leak based on the waiting time of the I/O requests stored in the unidirectional linked list, the method further includes:
determining an I/O request to be released in response to the memory resource release request; wherein the I/O request to be released comprises: the business module processes the completed I/O request and/or the I/O request being processed;
releasing the I/O request to be released, and updating the current idle memory resource size of the memory resource pool;
and processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resource is released and the stored target I/O request in the unidirectional linked list.
Further, the processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resource is released and the target I/O request stored in the singly linked list includes:
determining the size of a target memory resource required by each target I/O request;
determining whether a target I/O request with the idle memory resource meeting the size of the target memory resource exists currently from the unidirectional linked list;
If no such target I/O request exists, accumulating the number of memory leaks according to a preset increment;
if so, allocating memory resources for at least one target I/O request from the idle memory resources to process at least one target I/O request, and cleaning the at least one target I/O request from the unidirectional linked list;
based on the waiting time of the stored I/O requests in the unidirectional linked list, determining whether the service module generates memory leakage comprises the following steps:
and determining whether the service module generates memory leakage or not based on the current accumulated memory leakage times and the waiting time of the I/O requests stored in the unidirectional linked list.
Further, after determining that the service module generates the memory leak, the method further includes:
acquiring a module identifier of the service module, the memory resource size of the memory resource pool, the currently applied memory resource size and the current idle memory resource size;
and generating alarm information and an alarm log based on the module identifier, the memory resource size, the currently applied memory resource size and the current idle memory resource size.
Further, before determining, for each incoming I/O request, whether the current free memory resource size of the memory resource pool meets the processing of the I/O request, the method further comprises:
Determining a target service module for processing the I/O request for each incoming I/O request;
distributing the I/O request to the target service module for processing;
the determining whether the current free memory resource size of the memory resource pool meets the I/O request includes:
responding to the I/O request, and reading a data structure of a memory resource pool corresponding to the target service module; the data structure comprises first data and second data, wherein the first data represents currently applied memory resources, and the second data represents currently idle memory resources;
determining, from the data structure, whether a current free memory resource size of the memory resource pool meets processing of the I/O request;
if the processing of the I/O request is met, updating the first data based on the size of the target memory resource; and updating the second data based on the target memory resource size.
In a second aspect of the embodiment of the present application, there is provided an apparatus for detecting a memory leak, the apparatus including:
the creation module is used for respectively creating corresponding memory resource pools for the plurality of service modules, and each memory resource pool is associated with a unidirectional linked list; wherein the following processing is performed for each of the service modules:
A first determining module, configured to determine, for each incoming I/O request, whether a current free memory resource size of the memory resource pool meets a processing of the I/O request;
the adding module is used for adding the I/O request into the unidirectional linked list and recording the adding moment if the processing of the I/O request is not satisfied;
the second determining module is used for determining whether the service module generates memory leakage or not based on the waiting time of the I/O requests stored in the unidirectional linked list; wherein the waiting time is obtained based on the addition time.
Further, the second determining module includes:
the period module is used for periodically traversing the first waiting time of each I/O request stored in the unidirectional linked list; wherein the first waiting time is obtained based on the traversing time of the current period and the adding time;
if there is an I/O request whose first waiting time is greater than or equal to a first preset waiting time, determining that the service module generates a memory leak.
Further, the first determining module further includes:
a query module, configured to query, in response to an arrival of the I/O request, a second latency of the I/O request corresponding to a header already stored in the unidirectionally linked list; wherein the second waiting time is obtained based on the I/O request arrival time and the addition time;
and if the second waiting time of the I/O request is greater than or equal to a second preset waiting time, determining that the service module generates a memory leak.
Further, the creation module further includes:
the third determining module is used for determining the working modes of the storage system where the plurality of service modules are located;
the creating a corresponding memory resource pool for each of the plurality of service modules includes:
under the condition that the working mode is a diagnosis mode, acquiring the minimum memory resource size of normal operation corresponding to each of a plurality of service modules;
and respectively creating corresponding memory resource pools for a plurality of service modules based on the minimum memory resource size.
Further, the second determining module further includes:
the release module is used for responding to the memory resource release request and determining an I/O request to be released; wherein the I/O request to be released comprises: the business module processes the completed I/O request and/or the I/O request being processed;
releasing the I/O request to be released, and updating the current idle memory resource size of the memory resource pool;
and processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resource is released and the stored target I/O request in the unidirectional linked list.
Further, the release module includes:
the cleaning module is used for determining the size of the target memory resource required by each target I/O request;
determining whether a target I/O request with the idle memory resource meeting the size of the target memory resource exists currently from the unidirectional linked list;
if no such target I/O request exists, accumulating the number of memory leaks according to a preset increment;
if so, allocating memory resources for at least one target I/O request from the idle memory resources to process at least one target I/O request, and cleaning the at least one target I/O request from the unidirectional linked list;
based on the waiting time of the stored I/O requests in the unidirectional linked list, determining whether the service module generates memory leakage comprises the following steps:
and determining whether the service module generates memory leakage or not based on the current accumulated memory leakage times and the waiting time of the I/O requests stored in the unidirectional linked list.
Further, the apparatus further comprises:
the alarm module is used for acquiring the module identification of the service module, the memory resource size of the memory resource pool, the currently applied memory resource size and the current idle memory resource size;
And generating alarm information and an alarm log based on the module identifier, the memory resource size, the currently applied memory resource size and the current idle memory resource size.
Further, the first determining module further includes:
an updating module, configured to determine, for each incoming one of the I/O requests, a target service module that processes the I/O request;
distributing the I/O request to the target service module for processing;
the determining whether the current free memory resource size of the memory resource pool meets the I/O request includes:
responding to the I/O request, and reading a data structure of a memory resource pool corresponding to the target service module; the data structure comprises first data and second data, wherein the first data represents currently applied memory resources, and the second data represents currently idle memory resources;
determining, from the data structure, whether a current free memory resource size of the memory resource pool meets processing of the I/O request;
if the processing of the I/O request is met, updating the first data based on the size of the target memory resource; and updating the second data based on the target memory resource size.
In a third aspect of the embodiment of the present application, a computer readable storage medium is provided, where a computer program is stored, where the program when executed by a processor implements the method for detecting a memory leak according to the first aspect of the embodiment of the present application.
The embodiment of the application provides a method for detecting memory leakage, which comprises the following steps: firstly, respectively creating corresponding memory resource pools for a plurality of service modules, wherein each memory resource pool is associated with a unidirectional linked list; then performing the following processing for each service module: each time an I/O request arrives, determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request; if it does not, adding the I/O request to the unidirectional linked list and recording the adding moment; finally, determining whether the service module generates a memory leak based on the waiting time of each I/O request stored in the unidirectional linked list, wherein the waiting time is obtained based on the adding moment.
The method for detecting memory leakage is applied to a memory management system. Corresponding memory resource pools are created for a plurality of service modules respectively, so that each service module is guaranteed to be able to apply for memory resources successfully, and each memory resource pool is associated with a unidirectional linked list. When an I/O request arrives, the service module applies for the corresponding memory resources to process it; however, because each created memory resource pool is limited, it must first be determined whether the current free memory resources of the pool satisfy the processing of the I/O request. If they do not, the I/O request is added to the unidirectional linked list to wait and the adding moment is recorded, until the current free memory resources of the pool satisfy the processing of the I/O request by the service module, at which point the I/O request is processed.
Because processing an I/O request takes no more than one life cycle, an I/O request that remains unprocessed beyond the life cycle indicates that the service module has leaked memory. Therefore, by obtaining the waiting time of the I/O requests stored in the unidirectional linked list, it can be determined whether the memory resource pool associated with that list generates a memory leak, which makes it possible to accurately judge which of the plurality of service modules generates the leak. Moreover, since the capacity of each memory resource pool is a fixed value, setting it to an appropriate value shortens the time needed to discover that a service module leaks memory and improves the efficiency of discovering such leaks.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart illustrating a method for detecting memory leaks according to an embodiment of the present application;
FIG. 2 is a flowchart of applying for memory resources according to an embodiment of the present application;
FIG. 3 is a flowchart for releasing memory resources according to an embodiment of the present application;
fig. 4 is a schematic diagram of an apparatus for detecting memory leakage according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings in the embodiments of the present application. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The IO stack of a storage system involves numerous service modules, mainly including: a front-end protocol module, a cache module, a snapshot module, a remote copy/dual-active module, a storage pool module, a RAID module, a back-end protocol module, and the like. The front-end protocol module performs protocol interaction with the application host; the cache module accelerates read/write requests through caching; the snapshot module takes snapshots of data volumes in the storage system to protect local data; the remote copy/dual-active module provides disaster recovery protection of data across multiple storage systems; the storage pool module performs virtualized management of the storage space; the RAID module provides fault-redundancy protection for the data disks; the back-end protocol module manages the back-end data disks and their protocol interaction. Although each service module has a different function, when it works it dynamically applies to a common resource pool for part of the memory resources to save intermediate data produced while processing an I/O request, and releases the applied memory resources back into the common resource pool once the I/O request has been processed.
Because memory resources are applied for dynamically from the common resource pool, and the I/O requests handled by the service modules in the IO stack are processed one after another, many I/O requests may arrive at the same service module for processing at the same time. If that service module applies for memory resources from the common resource pool but then fails, the memory resources cannot be released back into the common resource pool, other service modules cannot apply for memory resources to process their I/O requests, and it cannot be judged whether the problem was caused by that module or by the others. Since the IO stack contains a great many service modules, checking them one by one wastes a great deal of time and is inefficient.
Therefore, the present embodiment provides a method for detecting memory leakage to solve the above-mentioned problems. Referring to fig. 1, fig. 1 is a flowchart of steps of a method for detecting a memory leak according to an embodiment of the present application, where the method step flow includes:
step S101: and respectively creating corresponding memory resource pools for a plurality of service modules, wherein each memory resource pool is associated with a unidirectional linked list.
In this embodiment, in order to avoid the situation in which the service modules that process I/O requests first occupy all of the memory resources of a common memory resource pool, and to ensure that every service module can successfully apply for memory resources, a corresponding memory resource pool is created for each service module, so that when an I/O request arrives, each service module can apply for memory resources to process it. A memory resource pool can only be used by its corresponding service module; other service modules cannot use it. Because each separately created memory resource pool has a limited, fixed size, when many I/O requests need to be processed concurrently the service module may be unable to apply for enough memory resources from its pool to process an I/O request. Therefore each memory resource pool is associated with a unidirectional linked list, and an I/O request that cannot yet be processed is temporarily stored in that list to wait until the service module is able to process it.
Step S102: wherein the following processing is performed for each of the service modules: each time an I/O request comes, determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request.
In this embodiment, since each I/O request carries the size of the memory resources that need to be applied for to process it, the following processing is performed for each service module: each time an I/O request arrives, the current free memory resource size of the memory resource pool corresponding to the service module is determined first. The service module can apply for memory from the pool to process the I/O request only if free memory exists in the pool, so it is necessary to determine whether the current free memory resource size of the pool meets the processing of the I/O request.
Step S103: if the processing of the I/O request is not satisfied, the I/O request is added to the single linked list, and the adding time is recorded.
In this embodiment, when the concurrency of I/O requests is relatively high, earlier I/O requests may already have applied for memory and not yet finished processing, so that the occupied memory reaches the upper limit of the memory resource pool, or the current free resources in the pool are otherwise insufficient for the new I/O request. In other words, if the current free memory resource size of the memory resource pool does not meet the processing of the I/O request, other I/O requests are occupying the pool and there is not enough free memory to process this one. The I/O request is therefore added to the unidirectional linked list, and its adding moment is recorded in the list. After the service module finishes processing other I/O requests, the memory resources applied for by the completed I/O requests are released, and once the free resources in the pool meet the processing of the waiting I/O request, that request is processed.
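As a concrete illustration of step S103, the following C sketch shows one possible shape of a wait-list node that stores the pending I/O request together with its adding moment, and a helper that appends such a node to the unidirectional linked list when the pool cannot satisfy the request. The node layout and the names wait_node and wait_list_push are assumptions made for this sketch only, not part of the embodiment itself.
#include <stdint.h>
#include <stdlib.h>
#include <time.h>
/* Hypothetical wait-list node: holds the pending I/O request and the moment it
 * was added to the unidirectional linked list (step S103). */
typedef struct wait_node {
    void             *io_request;   /* the I/O request that cannot be served yet */
    uint64_t          alloc_pages;  /* memory pages the request needs to apply for */
    time_t            add_time;     /* adding moment, later used to derive the waiting time */
    struct wait_node *next;         /* singly linked: only a next pointer */
} wait_node;
/* Append a request that cannot be served to the tail of the wait list. */
static void wait_list_push(wait_node **head, void *io_request, uint64_t alloc_pages)
{
    wait_node *node = malloc(sizeof(*node));
    if (node == NULL)
        return;                      /* allocating the bookkeeping node itself failed */
    node->io_request  = io_request;
    node->alloc_pages = alloc_pages;
    node->add_time    = time(NULL);  /* record the adding moment */
    node->next        = NULL;
    if (*head == NULL) {
        *head = node;
        return;
    }
    wait_node *tail = *head;
    while (tail->next != NULL)
        tail = tail->next;
    tail->next = node;
}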
Step S104: determining whether the service module generates memory leakage or not based on the waiting time of each I/O request stored in the unidirectional linked list; wherein the waiting time is obtained based on the addition time.
In this embodiment, since the adding moment is recorded when an I/O request is added to the unidirectional linked list, the adding moment of every I/O request in the list can be queried at the current moment, and the waiting time of each stored I/O request is obtained as the difference between the current moment and its adding moment. Because normally processing an I/O request takes no more than one life cycle, and one life cycle is typically 300 seconds, the waiting time of each I/O request stored in the unidirectional linked list can be queried; if a waiting time exceeds one life cycle, it can be determined that the service module generates a memory leak.
The method for detecting memory leakage is applied to a memory management system. Corresponding memory resource pools are created for a plurality of service modules respectively, so that each service module is guaranteed to be able to apply for memory resources successfully, and each memory resource pool is associated with a unidirectional linked list. When an I/O request arrives, the service module applies for the corresponding memory resources to process it; however, because each created memory resource pool is limited, it must first be determined whether the current free memory resources of the pool satisfy the processing of the I/O request. If they do not, the I/O request is added to the unidirectional linked list to wait and the adding moment is recorded, until the current free memory resources of the pool satisfy the processing of the I/O request by the service module, at which point the I/O request is processed.
Because processing an I/O request takes no more than one life cycle, an I/O request that remains unprocessed beyond the life cycle indicates that the service module has leaked memory. Therefore, by obtaining the waiting time of the I/O requests stored in the unidirectional linked list, it can be determined whether the memory resource pool associated with that list generates a memory leak, which makes it possible to accurately judge which of the plurality of service modules generates the leak. Moreover, since the capacity of each memory resource pool is a fixed value, setting it to an appropriate value shortens the time needed to discover that a service module leaks memory and improves the efficiency of discovering such leaks.
In one embodiment, determining whether the service module generates a memory leak based on the waiting time of the I/O requests stored in the unidirectional linked list includes: periodically traversing the first waiting time of each I/O request stored in the unidirectional linked list, where the first waiting time is obtained based on the traversing time of the current period and the adding moment; and if there is an I/O request whose first waiting time is greater than or equal to a first preset waiting time, determining that the service module generates a memory leak.
In this embodiment, when an I/O request is issued to the IO stack, the service modules in the IO stack apply for memory resources to process it, generally in the order in which the modules sit in the IO stack. A timer can therefore be set to periodically traverse the first waiting time of each I/O request stored in the unidirectional linked lists corresponding to the service modules; the timer does not affect how the service modules process I/O requests and only plays the role of a query. The first waiting time is obtained as the difference between the current moment of the traversal in the current period and the moment at which the I/O request was added to the unidirectional linked list. If the unidirectional linked list contains an I/O request whose first waiting time is greater than or equal to the first preset waiting time, the memory resource pool associated with that list leaks, and the service module corresponding to that pool is determined to generate a memory leak. The first preset waiting time may be one life cycle.
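A minimal sketch of the periodic check described above, assuming the wait-list node carries its adding moment as in the previous sketch and taking the first preset waiting time to be one life cycle of 300 seconds; the timer wiring is omitted, and the function would simply be invoked from a periodic timer callback.
#include <stdbool.h>
#include <stddef.h>
#include <time.h>
#define FIRST_PRESET_WAIT_SECONDS 300   /* one life cycle, as assumed in the text */
typedef struct wait_node {
    time_t            add_time;     /* adding moment recorded in step S103 */
    struct wait_node *next;
    /* other fields (I/O request pointer, requested pages) omitted in this sketch */
} wait_node;
/* Traverse the wait list in the current period; if any stored I/O request has
 * waited at least the first preset waiting time, report a memory leak. */
bool check_memory_leak_by_period(const wait_node *head)
{
    time_t now = time(NULL);                      /* traversing moment of the current period */
    for (const wait_node *n = head; n != NULL; n = n->next) {
        double first_wait = difftime(now, n->add_time);
        if (first_wait >= FIRST_PRESET_WAIT_SECONDS)
            return true;                          /* the service module is judged to leak */
    }
    return false;
}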
In one embodiment, when determining whether the current free memory resource size of the memory resource pool satisfies the processing of the I/O request, the method further comprises: in response to the arrival of the I/O request, querying the second waiting time of the I/O request corresponding to the header already stored in the unidirectional linked list, where the second waiting time is obtained based on the I/O request arrival time and the adding moment; and if the second waiting time is greater than or equal to a second preset waiting time, determining that the service module generates a memory leak.
In this embodiment, when determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request, the second waiting time of the I/O request at the header of the unidirectional linked list can first be queried in real time. According to the characteristics of a unidirectional linked list, the I/O request at the header position is generally the one with the earliest adding moment, and the I/O requests stored in the list are generally processed in order of longest waiting time first.
Therefore, the second waiting time of the I/O request corresponding to the header of the unidirectional linked list can be acquired and compared with the second preset waiting time; if the second waiting time is greater than or equal to the second preset waiting time, it can be determined that the service module generates a memory leak. The second waiting time is the difference between the moment of responding to the arriving I/O request and the adding moment of the I/O request corresponding to the header of the unidirectional linked list. The second preset waiting time may be one life cycle, the same as the first preset waiting time; alternatively, according to the time a specific service module takes to process an I/O request, it may be set to less than one life cycle and thus differ from the first preset waiting time.
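The on-arrival check can be sketched in the same way: only the header of the unidirectional linked list is examined, since it carries the longest-waiting request. The node layout repeats the assumption of the earlier sketch, and the 300-second threshold is only an example; as noted above, the second preset waiting time may also be set smaller than one life cycle.
#include <stdbool.h>
#include <stddef.h>
#include <time.h>
#define SECOND_PRESET_WAIT_SECONDS 300   /* example value; may differ from the first threshold */
typedef struct wait_node {
    time_t            add_time;
    struct wait_node *next;
} wait_node;
/* On arrival of a new I/O request, inspect only the list header: it is the
 * longest-waiting request, so if it has not exceeded the threshold, no other
 * stored request has either. */
bool check_memory_leak_on_arrival(const wait_node *head)
{
    if (head == NULL)
        return false;
    double second_wait = difftime(time(NULL), head->add_time);
    return second_wait >= SECOND_PRESET_WAIT_SECONDS;
}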
In one embodiment, before the creating the corresponding memory resource pools for the plurality of service modules, the method further includes: determining the working modes of a storage system where a plurality of service modules are located; the creating a corresponding memory resource pool for each of the plurality of service modules includes: under the condition that the working mode is a diagnosis mode, acquiring the minimum memory resource size of normal operation corresponding to each of a plurality of service modules; and respectively creating corresponding memory resource pools for a plurality of service modules based on the minimum memory resource size.
In this embodiment, before the corresponding memory resource pools are created for the plurality of service modules, the working mode of the storage system where the service modules are located is determined. Specifically, the working mode may be determined by reading an environment variable of the storage system. If the environment variable indicates that the storage system is in a diagnostic mode, the interface for creating a memory resource pool provided by the memory management module is called, and an interface function is used to create a dedicated resource pool for each service module. The creation rule is to allocate the minimum memory resources that guarantee normal operation of the service module, i.e., a corresponding memory resource pool is created for each of the plurality of service modules according to the minimum memory resource size of that module.
For example, the service module a needs 100000 memory pages to ensure that the storage system can provide the maximum system performance, but only 1000 memory pages are needed to simply process the I/O request normally without considering the storage system performance, and when a memory resource pool is created for the service module a, the number of memory resources in the resource pool is 1000. The basis for determining the minimum memory resource size may be derived from: the service module a multiplies the size of the data structure corresponding to the control block for I/O access by the number of I/O requests concurrently processed by I/O, for example: in the defined diagnosis mode, the number of I/Os which can be processed by the storage system in parallel is limited to 100, the memory resource required by the service module A for one I/O access is 1KB, and the minimum memory resource number of the module A is 1KB, 100=100 KB.
In addition, in order to ensure that the service modules have unique corresponding memory resource pools respectively, all service module IDs for creating the corresponding memory resource pools, namely component_ids, may be defined in a memory management module of the storage system, and specifically as follows:
typedef enum {
    component_a,
    component_b,
    /* ... one ID per service module ... */
    component_max,
} component_id;
then, the memory management module provides the service module with an interface for creating a memory resource pool as follows:
mem_pool *create_memory_pool(component_id component, int object_size, int object_num);
The parameter component is the ID of the corresponding service module, object_size is the memory resource size required by the module to process a single I/O request, and object_num is the number of such objects required by the module.
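A hedged usage sketch of this interface: the call below assumes the component_id enum and the create_memory_pool prototype given above, and the concrete sizes merely repeat the 1 KB-per-I/O, 100-parallel-I/O example used later in this description.
/* Illustrative only: create the dedicated pool for component_a when the
 * storage system starts in diagnostic mode. */
mem_pool *pool_a = create_memory_pool(component_a,
                                      1024,   /* object_size: bytes needed per I/O request */
                                      100);   /* object_num: number of such objects */
if (pool_a == NULL) {
    /* pool creation failed; diagnostic-mode detection cannot start for this module */
}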
When the storage system is started, an interface for creating a resource pool is called to create a memory resource pool corresponding to each service module when each service module is initialized, and the data structure of the memory resource pool is as follows:
typedef struct
{
    component_id component;        /* ID of the service module that owns this pool */
    uint64 total_mem_pages;        /* minimum memory size for normal operation of the module */
    uint64 alloct_mem_pages;       /* number of currently applied memory resources */
    uint64 free_mem_pages;         /* number of currently free memory resources */
    list *wait_list;               /* unidirectional linked list of I/O requests waiting for memory */
} mem_pool;
The component field is the service module ID corresponding to the resource pool, total_mem_pages is the minimum memory size that guarantees normal operation of the module, alloct_mem_pages is the number of currently applied memory resources, free_mem_pages is the number of currently free memory resources, and wait_list is a unidirectional linked list; when an I/O request waiting for memory resources is added to wait_list, the moment of adding it is recorded.
From the data structure of the memory resource pool it can be seen that the memory management module also needs to provide, for the resource pool corresponding to each module ID, interfaces for applying for and releasing memory, as follows:
void *mem_alloc(mem_pool *pmem_pool, uint64 alloc_num);
void mem_free(mem_pool *pmem_pool, uint64 free_num);
the parameter id is a module id of the applied/released memory, and mem_num is an applied/released memory size.
In one embodiment, before determining whether the service module generates a memory leak based on the respective latency of the I/O requests stored in the singly linked list, the method further includes: determining an I/O request to be released in response to the memory resource release request; wherein the I/O request to be released comprises: the business module processes the completed I/O request and/or the I/O request being processed; releasing the I/O request to be released, and updating the current idle memory resource size of the memory resource pool; and processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resource is released and the stored target I/O request in the unidirectional linked list.
In this embodiment, before determining whether the service module generates a memory leak, an I/O request to be released is determined in response to a memory resource release request. The I/O request to be released may include an I/O request that the service module has finished processing and/or an I/O request that is being processed. Depending on which released I/O requests are included in the I/O request to be released, the service module releases the memory resource size that those I/O requests had applied for from the memory resource pool, and then updates the current free memory resource size of the pool.
At this time, the current free memory resource size of the memory resource pool after the memory resources are released, i.e., the updated current free memory resource size, is obtained again; the stored target I/O requests are then queried in the unidirectional linked list and processed. A target I/O request is an I/O request stored in the unidirectional linked list whose waiting time is smaller than one life cycle and whose required memory resources do not exceed the updated current free resource size.
In one embodiment, processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resources are released and the target I/O requests stored in the unidirectional linked list includes: determining the target memory resource size required by each target I/O request; determining, from the unidirectional linked list, whether there is currently a target I/O request whose target memory resource size the free memory resources can satisfy; if no such target I/O request exists, accumulating the number of memory leaks according to a preset increment; if one exists, allocating memory resources for at least one target I/O request from the free memory resources to process the at least one target I/O request, and cleaning the at least one target I/O request out of the unidirectional linked list. Determining whether the service module generates a memory leak based on the waiting time of the I/O requests stored in the unidirectional linked list then includes: determining whether the service module generates a memory leak based on the currently accumulated number of memory leaks and the waiting time of the I/O requests stored in the unidirectional linked list.
In this embodiment, when processing target I/O requests, the target memory resource size that each target I/O request needs to apply for must be determined first. Since each I/O request carries the size of the memory resources it needs, the service module determines from the unidirectional linked list whether there is a target I/O request whose target memory resource size is smaller than or equal to the current free memory resource size of the memory resource pool, starting from the request with the longest waiting time. The target I/O request may be the single I/O request with the longest waiting time, or several of the longest-waiting I/O requests.
If no target I/O request in the unidirectional linked list can be satisfied, the number of memory leaks is accumulated according to the preset increment; the accumulated number of memory leaks is the accumulated number of failures to release memory resources.
If a target I/O request in the unidirectional linked list can be satisfied, memory resources equal in size to its target memory resources are allocated to the service module from the free memory resources of the memory resource pool, so that the service module processes at least one target I/O request; at the same time, the at least one processed target I/O request is cleaned out of the unidirectional linked list, so that the I/O requests stored in the list are updated in time.
In addition, because the number of memory leaks is accumulated according to the preset increment, whether the service module generates a memory leak can be determined not only from the waiting time of the I/O requests stored in the unidirectional linked list, but also from whether the currently accumulated number of memory leaks exceeds a preset number of leaks, combined with that waiting time.
For example, if the preset number of leaks is 3, then even if in one detection the waiting times of the I/O requests stored in the unidirectional linked list are all smaller than one life cycle, the service module can still be determined to generate a memory leak when its accumulated number of leaks exceeds 3.
In one embodiment, after determining that the service module generates a memory leak, the method further includes: acquiring a module identifier of the service module, the memory resource size of the memory resource pool, the currently applied memory resource size and the current idle memory resource size; and generating alarm information and an alarm log based on the module identifier, the memory resource size, the currently applied memory resource size and the current idle memory resource size.
In this embodiment, after it is determined that the service module generates a memory leak, the module identifier of the service module (i.e., the service module ID), the memory resource size of its memory resource pool, the currently applied memory resource size, and the current free memory resource size are acquired; the alarm information and alarm log generated from them are then uploaded to the operation page of the tester to remind service personnel to analyze the specific alarm content.
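A small sketch of assembling the alarm content named above into a log line; the record layout mirrors the mem_pool structure, and the printf call stands in for whatever alarm or log facility the storage system actually provides, both of which are assumptions of this sketch.
#include <stdint.h>
#include <stdio.h>
/* Illustrative alarm record for a leaking service module. */
typedef struct {
    int      component;              /* module identifier (service module ID) */
    uint64_t total_mem_pages;        /* memory resource size of the pool */
    uint64_t alloct_mem_pages;       /* currently applied memory resource size */
    uint64_t free_mem_pages;         /* current free memory resource size */
} leak_alarm;
/* Emit alarm information; a real system would also persist an alarm log and
 * surface it on the tester's operation page. */
static void report_memory_leak(const leak_alarm *a)
{
    printf("MEMORY LEAK: module=%d total_pages=%llu applied_pages=%llu free_pages=%llu\n",
           a->component,
           (unsigned long long)a->total_mem_pages,
           (unsigned long long)a->alloct_mem_pages,
           (unsigned long long)a->free_mem_pages);
}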
In one embodiment, before determining, for each incoming I/O request, whether the current free memory resource size of the memory resource pool meets the processing of the I/O request, the method further comprises: determining a target service module for processing the I/O request for each incoming I/O request; distributing the I/O request to the target service module for processing. Determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request includes: responding to the I/O request, and reading the data structure of the memory resource pool corresponding to the target service module, the data structure comprising first data and second data, wherein the first data represents the currently applied memory resources and the second data represents the currently free memory resources; determining, from the data structure, whether the current free memory resource size of the memory resource pool meets the processing of the I/O request; and if the processing of the I/O request is met, updating the first data based on the target memory resource size and updating the second data based on the target memory resource size.
In this embodiment, before determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request, a target service module for processing the I/O request must be determined for each incoming I/O request, because the IO stack contains multiple service modules that all process I/O requests; the I/O request is then distributed to the determined target service module for processing. The processing procedure of the target service module is as follows:
First, in response to the I/O request, the data structure of the memory resource pool corresponding to the target module is read; the data structure includes first data and second data, where the first data represents the currently applied memory resources, i.e., alloct_mem_pages, and the second data represents the currently free memory resources, i.e., free_mem_pages. From the data structure it is determined whether the current free memory resource size of the memory resource pool meets the processing of the I/O request. If it does, the first data and the second data are updated based on the target memory resource size: with the target memory resource size denoted alloc_num, alloc_num is added to alloct_mem_pages of the memory resource pool and subtracted from free_mem_pages, thereby updating the currently applied and currently free memory resources in the data structure of the memory resource pool.
With reference to fig. 2 and 3, fig. 2 is a flowchart of applying for a memory resource according to an embodiment of the present application, and fig. 3 is a flowchart of releasing a memory resource according to an embodiment of the present application.
A detailed description will be given below with reference to fig. 2 and 3:
Step S201: applying for memory resources. When an I/O request is issued to the IO stack, the service module in the IO stack that needs to process the I/O request first determines the size of the memory resources the I/O request needs to apply for, and applies for those memory resources to the memory resource pool corresponding to the service module in order to process the I/O request; the flow then enters step S202.
Step S202: determine whether the current free memory resources of the memory resource pool are greater than or equal to the memory resource size applied for by the I/O request. The current free memory resources of the pool are acquired; the service module can process the I/O request smoothly only when the current free memory resources of the pool are greater than or equal to the memory resource size that the I/O request needs to apply for. The current free memory resource size of the pool is therefore acquired, and if it is greater than or equal to the memory resource size applied for by the I/O request, the flow enters step S203; otherwise, the I/O request is added to the unidirectional linked list, i.e., the wait_list.
Step S203: at this time, the current memory resource size of the memory resource pool satisfies the processing of the I/O request, so the first data in the data structure of the memory resource pool is updated (the applied memory resources are increased by the memory resource size applied for by the I/O request) and the second data is updated (the free memory resources are decreased by that size). This indicates that the service module has finished applying for memory resources from the memory resource pool to process the I/O request, and this step is completed once the processing is completed.
Step S203: at this time, it is indicated that the current memory resource size of the memory resource pool does not meet the requirement of processing the I/O request, in order to ensure that the service module can successfully process the I/O request, the I/O request is added to the single-phase linked list, and the next time the service module can process the I/O request, processing is performed.
Meanwhile, in the process of applying for the memory resource by the service module, the memory resource is released along with the processing, as shown in fig. 3:
step S301: and releasing the memory resource. When the processing flow of the service module is finished, an I/O request generally invokes an interface for releasing memory resources to release the memory resources for recovering the memory resources, so that in the process of processing the I/O request by the service module, there is also release of the I/O request, and when the service module receives an instruction or a request for releasing the memory resources, step S302 is entered.
Step S302: at this time, the service module responds to the release of the memory resources, and releases the occupied memory resources according to the I/O request indicated by the release of the memory resources, and first obtains the memory resource size of the application memory in the memory resource pool of the I/O request to be released, and then updates and adjusts the first data and the second data included in the data structure corresponding to the memory resource pool, that is, the applied memory resources represented by the first data are reduced by the applied memory resource size, and the idle memory resources represented by the second data are added by the applied memory resource size, so that the memory resources of the memory resource pool are released, and the current idle memory resource size of the memory resource pool is updated, and step S303 is entered.
Step S303: and acquiring whether I/O requests waiting for application of memory resources exist in the single-phase linked list, wherein the I/O requests in the single-phase linked list are I/O requests which cannot be processed in time by a service module and need to be processed in a delayed manner, if the I/O requests exist in the single-phase linked list, entering a step S304, otherwise, ending.
Step S304: since the current free memory resource size of the memory resource pool has been updated in step S302, the waiting I/O request in the single-phase linked list needs to be processed, and if there is an I/O request satisfying the current free memory resource size in the single-phase linked list, the service module is triggered to process the I/O request in the single-phase linked list until the processing of the I/O request in the single-phase linked list is completed.
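Steps S302 to S304 can be sketched together as one pass: after a release credits the pool, the wait list is scanned from the header and every request whose target memory size now fits the free size is handed back to the module and removed from the list. The condensed pool view, the node layout and the dispatch_io_request hook are assumptions of this sketch, not part of the embodiment.
#include <stdint.h>
#include <stdlib.h>
typedef struct wait_node {
    void             *io_request;    /* the waiting I/O request */
    uint64_t          alloc_pages;   /* target memory resource size it needs */
    struct wait_node *next;
} wait_node;
typedef struct {
    uint64_t   alloct_mem_pages;     /* currently applied pages (first data) */
    uint64_t   free_mem_pages;       /* currently free pages (second data) */
    wait_node *wait_list;            /* pending I/O requests, header = longest waiting */
} mem_pool_view;                     /* condensed view of the mem_pool structure */
/* Hypothetical hook that hands a now-servable request back to the service module. */
extern void dispatch_io_request(void *io_request);
/* Step S302: credit the released pages; steps S303/S304: rescan the wait list
 * and process every request the updated free size can satisfy. */
void release_and_rescan(mem_pool_view *pool, uint64_t released_pages)
{
    pool->alloct_mem_pages -= released_pages;
    pool->free_mem_pages   += released_pages;

    wait_node **link = &pool->wait_list;
    while (*link != NULL) {
        wait_node *n = *link;
        if (n->alloc_pages <= pool->free_mem_pages) {
            pool->alloct_mem_pages += n->alloc_pages;
            pool->free_mem_pages   -= n->alloc_pages;
            dispatch_io_request(n->io_request);  /* the service module processes it */
            *link = n->next;                     /* clean the request out of the list */
            free(n);
        } else {
            link = &n->next;
        }
    }
}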
Combining the descriptions of fig. 2 and fig. 3 gives the whole process by which the service module processes an I/O request. Because this embodiment divides a separate memory resource pool for each service module and associates the pool with a unidirectional linked list, it must be determined whether the current free memory resources of the pool satisfy the processing of the I/O request; if they cannot, the I/O request is added to the unidirectional linked list to wait and the adding moment is recorded, until the current free memory resources of the pool satisfy the processing of the I/O request. Therefore, by acquiring the waiting time of the I/O requests stored in the unidirectional linked list, it can be determined whether the memory resource pool associated with that list generates a memory leak, which ensures an accurate judgement of which of the plurality of service modules generates the leak. Moreover, since the capacity of the memory resource pool is a fixed value, setting it to an appropriate value shortens the time needed to discover that a service module leaks memory and improves the efficiency of discovering such leaks.
Based on the same inventive concept, referring to fig. 4, fig. 4 is a schematic diagram of an apparatus for detecting memory leakage according to an embodiment of the present application, as shown in fig. 4, the apparatus includes: a creation module 401, a first determination module 402, an addition module 403, and a second determination module 404.
The creating module 401 is configured to create corresponding memory resource pools for the plurality of service modules, where each memory resource pool is associated with a unidirectional linked list. Wherein the following processing is performed for each of the service modules:
a first determining module 402 is configured to determine, for each incoming I/O request, whether a current free memory resource size of the memory resource pool meets a processing of the I/O request.
The adding module 403 is configured to add the I/O request to the unidirectional linked list and record the adding moment if the processing of the I/O request is not satisfied.
A second determining module 404, configured to determine whether the service module generates a memory leak based on the waiting time of each of the I/O requests stored in the unidirectional linked list; wherein the waiting time is obtained based on the addition time.
In this embodiment, the second determining module 404 includes a period module, configured to periodically traverse the first waiting time of each I/O request stored in the unidirectional linked list, where the first waiting time is obtained from the traversal moment of the current period and the adding moment; if there is an I/O request whose first waiting time is greater than or equal to a first preset waiting time, it is determined that the service module has a memory leak.
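A minimal sketch of such a periodic traversal, assuming the pending_io node layout used in the earlier sketches and an illustrative function name:

```c
#include <stdbool.h>
#include <time.h>

struct pending_io {
    time_t enqueue_time;       /* adding moment recorded when the request was queued */
    struct pending_io *next;
};

/* Traverse the unidirectional linked list once per period and compare each
 * request's first waiting time (traversal moment minus adding moment) with
 * the first preset waiting time. */
bool leak_check_periodic(const struct pending_io *head, double first_preset_wait)
{
    time_t now = time(NULL);   /* traversal moment of the current period */

    for (const struct pending_io *req = head; req; req = req->next) {
        if (difftime(now, req->enqueue_time) >= first_preset_wait)
            return true;       /* the service module is judged to leak memory */
    }
    return false;
}
```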
In this embodiment, the first determining module 402 further includes a query module, configured to query, in response to the arrival of the I/O request, the second waiting time of the I/O request corresponding to the header already stored in the unidirectional linked list, where the second waiting time is obtained from the arrival moment of the I/O request and the adding moment; if that second waiting time is greater than or equal to a second preset waiting time, it is determined that the service module has a memory leak.
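The header check could look like the following sketch, which only inspects the oldest waiter when a new request arrives; all names are illustrative.

```c
#include <stdbool.h>
#include <time.h>

struct pending_io {
    time_t enqueue_time;        /* adding moment of the header request */
    struct pending_io *next;
};

/* When a new I/O request arrives, compare the second waiting time of the list
 * header (arrival moment minus adding moment) with the second preset waiting
 * time; no full traversal is needed on this path. */
bool leak_check_on_arrival(const struct pending_io *head, double second_preset_wait)
{
    if (!head)
        return false;                        /* nothing is waiting */

    time_t arrival = time(NULL);             /* moment the new request arrived */
    return difftime(arrival, head->enqueue_time) >= second_preset_wait;
}
```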
In this embodiment, the creating module 401 further includes a third determining module, configured to determine the working mode of the storage system where the plurality of service modules are located. In that case, creating a corresponding memory resource pool for each of the plurality of service modules includes: when the working mode is a diagnosis mode, obtaining the minimum memory resource size required for normal operation of each of the plurality of service modules, and creating a corresponding memory resource pool for each service module based on that minimum memory resource size.
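Under the assumption that pools are plain fixed-capacity structures, diagnosis-mode creation could be sketched as follows; the enum work_mode, the per-module minimum sizes passed in, and the larger default used outside diagnosis mode are all illustrative choices.

```c
#include <stddef.h>
#include <stdlib.h>

enum work_mode { MODE_NORMAL, MODE_DIAGNOSIS };

struct mem_pool {
    size_t capacity;   /* fixed capacity of the pool */
    size_t applied;    /* first data: currently applied memory */
    size_t free_size;  /* second data: currently free memory */
};

/* In diagnosis mode each service module gets a pool sized to the minimum
 * memory it needs for normal operation, so a leaking module exhausts its own
 * small pool quickly and is noticed sooner. */
struct mem_pool *create_pools(enum work_mode mode,
                              const size_t *min_size, size_t module_count)
{
    struct mem_pool *pools = calloc(module_count, sizeof(*pools));
    if (!pools)
        return NULL;

    for (size_t i = 0; i < module_count; i++) {
        /* outside diagnosis mode a larger capacity could be chosen; the
         * factor of four here is only a placeholder */
        size_t cap = (mode == MODE_DIAGNOSIS) ? min_size[i] : min_size[i] * 4;
        pools[i].capacity  = cap;
        pools[i].applied   = 0;
        pools[i].free_size = cap;
    }
    return pools;
}
```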
In this embodiment, the second determining module 404 further includes a release module, configured to determine the I/O request to be released in response to a memory resource release request, where the I/O request to be released includes an I/O request whose processing the service module has completed and/or an I/O request being processed; to release the I/O request to be released and update the current free memory resource size of the memory resource pool; and to process the target I/O requests based on the current free memory resource size in the memory resource pool after the memory resources are released and the target I/O requests stored in the unidirectional linked list.
In this embodiment, the release module includes a cleaning module, configured to determine the target memory resource size required by each target I/O request; determine, from the unidirectional linked list, whether there is currently a target I/O request whose target memory resource size can be satisfied by the free memory resources; if not, accumulate the memory leak count by a preset increment; and if so, allocate memory resources from the free memory resources for at least one target I/O request, process that request, and clean it from the unidirectional linked list. Determining whether the service module has a memory leak based on the waiting time of the I/O requests stored in the unidirectional linked list then includes: determining whether the service module has a memory leak based on the currently accumulated memory leak count and the waiting time of the I/O requests stored in the unidirectional linked list.
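One possible shape of this cleaning step, reusing the hypothetical structures from the earlier sketches and adding an assumed leak_count field and service_process hook:

```c
#include <stdbool.h>
#include <stddef.h>

struct pending_io {
    size_t need;             /* target memory resource size of the waiting request */
    struct pending_io *next;
};

struct mem_pool {
    size_t applied;          /* first data */
    size_t free_size;        /* second data */
    struct pending_io *head; /* unidirectional linked list of target I/O requests */
    unsigned leak_count;     /* accumulated memory leak count */
};

/* hypothetical hook that resumes processing of a dequeued request */
void service_process(struct pending_io *req);

/* After a release: serve and clean every target request that fits into the
 * free memory; if memory came back but nothing could be served, accumulate
 * the leak count by the preset increment. */
void clean_or_count(struct mem_pool *pool, unsigned preset_increment)
{
    bool served = false;
    struct pending_io **pp = &pool->head;

    while (*pp) {
        struct pending_io *req = *pp;
        if (req->need <= pool->free_size) {
            pool->free_size -= req->need;
            pool->applied   += req->need;
            *pp = req->next;             /* clean the request from the list */
            service_process(req);
            served = true;
        } else {
            pp = &req->next;
        }
    }

    if (!served && pool->head)           /* waiters exist but none could run */
        pool->leak_count += preset_increment;
}
```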
In this embodiment, the apparatus further includes an alarm module, configured to obtain the module identifier of the service module, the memory resource size of the memory resource pool, the currently applied memory resource size and the current free memory resource size, and to generate alarm information and an alarm log based on the module identifier, the memory resource size, the currently applied memory resource size and the current free memory resource size.
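Purely as an example, the alarm information and alarm log could be produced as below; the message format, function name and output streams are not prescribed by the embodiment.

```c
#include <stddef.h>
#include <stdio.h>

/* Build the alarm text from the module identifier and the three pool sizes,
 * then write it both as alarm information and into an alarm log stream. */
void raise_leak_alarm(const char *module_id, size_t pool_size,
                      size_t applied, size_t free_size, FILE *alarm_log)
{
    char alarm[256];

    snprintf(alarm, sizeof(alarm),
             "memory leak suspected in %s: pool=%zu applied=%zu free=%zu",
             module_id, pool_size, applied, free_size);

    fprintf(alarm_log, "%s\n", alarm);   /* alarm log entry */
    fprintf(stderr, "%s\n", alarm);      /* alarm information for operators */
}
```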
In this embodiment, the first determining module 402 further includes an updating module, configured to determine, for each incoming I/O request, the target service module that processes the I/O request, and to distribute the I/O request to the target service module for processing. Determining whether the current free memory resource size of the memory resource pool satisfies the I/O request includes: in response to the I/O request, reading the data structure of the memory resource pool corresponding to the target service module, where the data structure includes first data representing the currently applied memory resources and second data representing the currently free memory resources; determining, from the data structure, whether the current free memory resource size of the memory resource pool satisfies the processing of the I/O request; and, if it does, updating the first data based on the target memory resource size and updating the second data based on the target memory resource size.
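A minimal sketch of this dispatch path, assuming one pool_stats record per service module whose first_data and second_data fields mirror the first and second data described above:

```c
#include <stdbool.h>
#include <stddef.h>

struct pool_stats {
    size_t first_data;   /* currently applied memory resources */
    size_t second_data;  /* currently free memory resources */
};

/* Route the incoming I/O request to its target service module, read that
 * module's pool data structure, and on success grow the first data and
 * shrink the second data by the target memory resource size. */
bool dispatch_io(struct pool_stats *pools, size_t module_count,
                 size_t target_module, size_t target_size)
{
    if (target_module >= module_count)
        return false;                        /* unknown target service module */

    struct pool_stats *ps = &pools[target_module];
    if (target_size > ps->second_data)
        return false;                        /* free memory does not satisfy it */

    ps->first_data  += target_size;          /* applied memory grows */
    ps->second_data -= target_size;          /* free memory shrinks */
    return true;
}
```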
Based on the same inventive concept, the present embodiment also provides a storage medium having a computer program stored thereon, which when executed by a processor, implements the method for detecting memory leaks according to the first aspect of the embodiment of the present invention.
In this specification, each embodiment is described in a progressive manner, with the description of each embodiment focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods and apparatus according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The above detailed description of the method, the device and the storage medium for detecting memory leakage provided by the invention uses specific examples to explain the principles and implementation of the invention, and these examples are only intended to help understand the method and its core idea; meanwhile, it will be apparent to those skilled in the art from this disclosure that the invention is not limited to these embodiments or to their scope of application.

Claims (10)

1. A method for detecting a memory leak, the method comprising:
respectively creating corresponding memory resource pools for a plurality of service modules, wherein each memory resource pool is associated with a unidirectional linked list; wherein the following processing is performed for each of the service modules:
determining whether the current free memory resource size of the memory resource pool meets the processing of the I/O request or not every time an I/O request comes;
if the processing of the I/O request is not satisfied, adding the I/O request into the unidirectional linked list, and recording the adding time;
determining whether the service module generates memory leakage or not based on the waiting time of each I/O request stored in the unidirectional linked list; wherein the waiting time is obtained based on the addition time.
2. The method of claim 1, wherein the determining whether the service module generates a memory leak based on the respective latencies of the I/O requests stored in the singly linked list comprises:
periodically traversing the first waiting time of each stored I/O request in the unidirectional linked list; the first waiting time is obtained based on the traversing time and the adding time of the current period;
if there is an I/O request whose first waiting time is greater than or equal to the first preset waiting time, determining that the service module generates memory leakage.
3. The method of claim 1, wherein, when determining whether the current free memory resource size of the memory resource pool satisfies the processing of the I/O request for each I/O request, the method further comprises:
responding to the arrival of the I/O request, and inquiring second waiting time of the I/O request corresponding to the header stored in the unidirectional linked list; wherein the second waiting time is obtained based on the I/O request arrival time and the addition time;
and if the second waiting time is greater than or equal to the second preset waiting time, determining that the service module generates memory leakage.
4. The method of claim 1, wherein prior to creating the corresponding memory resource pools for each of the plurality of business modules, the method further comprises:
determining the working modes of a storage system where a plurality of service modules are located;
the creating a corresponding memory resource pool for each of the plurality of service modules includes:
under the condition that the working mode is a diagnosis mode, acquiring the minimum memory resource size of normal operation corresponding to each of a plurality of service modules;
and respectively creating corresponding memory resource pools for a plurality of service modules based on the minimum memory resource size.
5. The method of claim 1, wherein prior to determining whether the traffic module has generated a memory leak based on the respective latencies of the I/O requests stored in the singly linked list, the method further comprises:
determining an I/O request to be released in response to the memory resource release request; wherein the I/O request to be released comprises: an I/O request whose processing the service module has completed and/or an I/O request being processed;
releasing the I/O request to be released, and updating the current idle memory resource size of the memory resource pool;
And processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resource is released and the stored target I/O request in the unidirectional linked list.
6. The method of claim 5, wherein the processing the target I/O request based on the current free memory resource size in the memory resource pool after the memory resource is released and the target I/O request stored in the singly linked list comprises:
determining the size of a target memory resource required by each target I/O request;
determining whether a target I/O request with the idle memory resource meeting the size of the target memory resource exists currently from the unidirectional linked list;
if no such target I/O request exists, accumulating the memory leakage count according to a preset increment;
if so, allocating memory resources for at least one target I/O request from the idle memory resources to process at least one target I/O request, and cleaning the at least one target I/O request from the unidirectional linked list;
based on the waiting time of the stored I/O requests in the unidirectional linked list, determining whether the service module generates memory leakage comprises the following steps:
and determining whether the service module generates memory leakage or not based on the currently accumulated memory leakage count and the waiting time of the I/O requests stored in the unidirectional linked list.
7. A method according to claim 2 or 3, wherein after determining that the traffic module has generated a memory leak, the method further comprises:
acquiring a module identifier of the service module, the memory resource size of the memory resource pool, the currently applied memory resource size and the current idle memory resource size;
and generating alarm information and an alarm log based on the module identifier, the memory resource size, the currently applied memory resource size and the current idle memory resource size.
8. The method of claim 1, wherein prior to determining whether the current free memory resource size of the memory resource pool satisfies the processing of the I/O request for each I/O request, the method further comprises:
determining a target service module for processing the I/O request for each incoming I/O request;
distributing the I/O request to the target service module for processing;
the determining whether the current free memory resource size of the memory resource pool meets the I/O request includes:
Responding to the I/O request, and reading a data structure of a memory resource pool corresponding to the target service module; the data structure comprises first data and second data, wherein the first data represents currently applied memory resources, and the second data represents currently idle memory resources;
determining, from the data structure, whether a current free memory resource size of the memory resource pool meets processing of the I/O request;
if the processing of the I/O request is met, updating the first data based on the size of the target memory resource; and updating the second data based on the target memory resource size.
9. An apparatus for detecting a memory leak, the apparatus comprising:
the creation module is used for respectively creating corresponding memory resource pools for the plurality of service modules, and each memory resource pool is associated with a unidirectional linked list; wherein the following processing is performed for each of the service modules:
a first determining module, configured to determine, for each incoming I/O request, whether a current free memory resource size of the memory resource pool meets a processing of the I/O request;
the adding module is used for adding the I/O request into the unidirectional linked list and recording the adding moment if the processing of the I/O request is not satisfied;
The second determining module is used for determining whether the service module generates memory leakage or not based on the waiting time of the I/O requests stored in the unidirectional linked list; wherein the waiting time is obtained based on the addition time.
10. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a method of detecting a memory leak as claimed in any of claims 1-8.
CN202311256376.4A 2023-09-26 2023-09-26 Method, device and storage medium for detecting memory leakage Pending CN117215850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311256376.4A CN117215850A (en) 2023-09-26 2023-09-26 Method, device and storage medium for detecting memory leakage

Publications (1)

Publication Number Publication Date
CN117215850A true CN117215850A (en) 2023-12-12

Family

ID=89042291

Country Status (1)

Country Link
CN (1) CN117215850A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination