CN113626181A - Memory cleaning method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN113626181A
CN113626181A
Authority
CN
China
Prior art keywords
memory
target
message
clearing
cleaning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110744436.1A
Other languages
Chinese (zh)
Other versions
CN113626181B (en)
Inventor
范瑞春 (Fan Ruichun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202110744436.1A priority Critical patent/CN113626181B/en
Publication of CN113626181A publication Critical patent/CN113626181A/en
Application granted granted Critical
Publication of CN113626181B publication Critical patent/CN113626181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0253Garbage collection, i.e. reclamation of unreferenced memory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a memory cleaning method, apparatus, device, and readable storage medium, wherein the method comprises the following steps: when a target memory needs to be cleaned, determining at least two target CPUs to participate in the memory cleaning; dividing the target memory into memory segments matching the number of target CPUs; allocating the memory segments to the respective target CPUs participating in the memory cleaning; having each target CPU independently determine the clearing message for each clearing operation; sending each clearing message to a clearing engine through the message sending queue and message waiting queue corresponding to each target CPU; and using the clearing engine to clean the target memory based on each clearing message. Because at least two target CPUs participate in the memory cleaning and need no additional communication among themselves, the memory cleaning efficiency can be greatly improved.

Description

Memory cleaning method, device, equipment and readable storage medium
Technical Field
The present application relates to the field of storage technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for clearing a memory.
Background
Memory cleaning is essential in the development and use of computer equipment. For example, during SSD (Solid State Drive) development, many services need to clear large regions of DDR memory, such as when performing a FORMAT operation or a TRIM operation on the SSD.
In the existing memory cleaning scheme, under multi-CPU conditions, in order to control the cleaning progress easily and accurately, whenever any CPU receives a FORMAT or TRIM command, the command is forwarded to one fixed CPU, which performs the cleaning operation while the other CPUs stay idle. This single-CPU approach cannot exploit the advantages of multiple CPUs, and the required message interaction between CPUs seriously slows the clearing. Moreover, during the fixed CPU's clearing process, after the CPU sends a message to the clearing engine, it does not start the next clearing operation until the clearing engine finishes clearing and returns a completion message, so the queue depth between the CPU and the clearing engine cannot be fully utilized and the clearing efficiency is low.
In summary, how to effectively improve the memory clearing speed is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a memory cleaning method, apparatus, device, and readable storage medium that enable multiple CPUs to clean memory simultaneously, so as to improve memory cleaning efficiency.
In order to solve the technical problem, the application provides the following technical scheme:
a memory cleaning method comprises the following steps:
under the condition that a target memory needs to be cleaned, determining at least two target CPUs participating in memory cleaning;
dividing the target memory into memory segments matched with the number of the target CPUs;
respectively distributing the memory segments to each target CPU participating in memory cleaning;
independently determining, by each of the target CPUs, the clearing message for each clearing operation;
sending each clearing message to a clearing engine by combining a message sending queue and a message waiting queue corresponding to each target CPU;
and cleaning the target memory based on each cleaning message by using the cleaning engine.
Preferably, the determining at least two target CPUs participating in memory cleaning when the target memory needs to be cleaned includes:
acquiring service parameters of each CPU under the condition that the target memory needs to be cleaned;
and determining at least two target CPUs from the CPUs by using the service parameters.
Preferably, dividing the target memory into memory segments matching the number of the target CPUs includes:
and dividing the target memory into memory segments with the length matched with the CPU service parameters and the number matched with the number of the target CPUs.
Preferably, dividing the target memory into memory segments matching the number of the target CPUs includes:
and averagely dividing the target memory into memory segments matched with the number of the target CPUs.
Preferably, the determining the clearing message for each clearing independently by each target CPU includes:
independently calculating, by each target CPU, the clearing message for each clearing operation based on the memory segment corresponding to that target CPU; the clearing message includes a clearing length and a clearing start position.
Preferably, after the allocating the memory segments to the target CPUs participating in memory cleaning, the method further includes:
acquiring the uncleaned length corresponding to each memory segment;
if there is a target memory segment whose uncleaned length is greater than a preset threshold, determining the uncleaned segment corresponding to the target memory segment;
and replacing the target CPU corresponding to the uncleaned section.
Preferably, the sending each of the purge messages to a purge engine in combination with a message sending queue and a message waiting queue corresponding to each of the target CPUs includes:
sending each clearing message to the clearing engine according to the message sending queue;
receiving a response message fed back by the clearing engine;
clearing the clearing message in the message sending queue by using the response message;
and storing the clearing message to be sent currently into the message waiting queue under the condition that the message sending queue is full.
A memory cleaning apparatus comprising:
the CPU determining module is used for determining at least two target CPUs participating in memory cleaning under the condition that the target memory needs to be cleaned;
the memory dividing module is used for dividing the target memory into memory segments matched with the number of the target CPUs;
the memory allocation module is used for respectively allocating the memory segments to the target CPUs participating in memory cleaning;
a clearing message determining module, configured to determine, by using each target CPU, a clearing message for each clearing independently;
the message sending module is used for sending each clearing message to a clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU;
and the memory cleaning module is used for cleaning the target memory based on each cleaning message by using the cleaning engine.
An electronic device, comprising:
a memory for storing a computer program;
and the processor is used for realizing the steps of the memory cleaning method when executing the computer program.
A readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the memory cleaning method described above.
By applying the method provided by the embodiments of the application, when the target memory needs to be cleaned, at least two target CPUs participating in the memory cleaning are determined; the target memory is divided into memory segments matching the number of target CPUs; the memory segments are allocated to the respective target CPUs participating in the memory cleaning; each target CPU independently determines the clearing message for each clearing operation; each clearing message is sent to the clearing engine through the message sending queue and message waiting queue corresponding to each target CPU; and the clearing engine cleans the target memory based on each clearing message.
In the application, when the target memory needs to be cleaned, the at least two target CPUs that will participate in the memory cleaning are first identified. The target memory is then segmented, and the memory segments are allocated to the corresponding target CPUs. Each target CPU can subsequently determine, on its own, the clearing message for each clearing operation. Because each target CPU independently handles its own memory segment, no communication link between CPUs is needed to determine the clearing messages, and no clearing confusion arises. The clearing messages determined by each target CPU can be sent to the clearing engine through the message sending queue and the message waiting queue, and the clearing engine can quickly complete the memory cleaning based on the received clearing messages. Since at least two target CPUs participate in the memory cleaning and need no additional communication, the memory cleaning efficiency can be greatly improved.
Accordingly, embodiments of the present application further provide a memory cleaning device, an apparatus, and a readable storage medium corresponding to the memory cleaning method, which have the above technical effects and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the related art, the drawings needed in the description of the embodiments or the related art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a memory cleaning method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a memory cleaning device in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a memory cleaning method according to an embodiment of the present application, where the method includes the following steps:
s101, under the condition that target memory needs to be cleaned, at least two target CPUs participating in memory cleaning are determined.
In the embodiment of the present application, the target memory may be considered to need cleaning when a FORMAT operation or a TRIM operation is performed on the SSD. The target memory may be determined from the object to be formatted or trimmed.
In a multi-core system, there are often multiple CPUs. In the present application, in order to improve the memory cleaning efficiency, at least 2 target CPUs are used to perform memory cleaning. That is to say, in the embodiment of the present application, a certain CPU is not fixed to perform memory cleaning, but at least 2 target CPUs are determined to participate in memory cleaning.
For example, if there are 4 CPUs in the multi-core system, 2 or 3 target CPUs may be randomly selected from the 4 CPUs, or all of the 4 CPUs may be directly determined as target CPUs.
S102, dividing the target memory into memory segments matched with the number of the target CPUs.
After the target CPU is determined, the target memory can be divided into memory segments matched with the number of the target CPU. The lengths of the memory segments may or may not be consistent.
For example, if the number of target CPUs is N (N is an integer greater than 1), the target memory may be divided into N segments, 2N segments, or more than N segments. That is, after the target memory is divided, each target CPU can be allocated at least one memory segment.
Preferably, in order to balance the load among the target CPUs, dividing the target memory into memory segments matching the number of the target CPUs in step S102 may include: dividing the target memory evenly into memory segments matching the number of the target CPUs. That is, when the target memory is divided, each memory segment can be made equal in length. The number of the memory segments may equal the number of the target CPUs, or be an integral multiple of it, so that the total length of the memory segments for which each target CPU is responsible remains consistent.
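The even division described above can be sketched as follows (a minimal Python illustration; the function name and the address arithmetic are assumptions, not taken from the patent — the last segment simply absorbs any rounding remainder):

```python
def divide_memory(start: int, length: int, num_cpus: int):
    """Split a memory region [start, start+length) into num_cpus
    roughly equal segments; the last segment absorbs the remainder."""
    base = length // num_cpus
    segments = []
    offset = start
    for i in range(num_cpus):
        seg_len = base if i < num_cpus - 1 else length - base * (num_cpus - 1)
        segments.append((offset, seg_len))
        offset += seg_len
    return segments

# Example: a 1 MiB region divided among 3 target CPUs.
print(divide_memory(0x1000, 1 << 20, 3))
```

With this rule the segments are contiguous and their lengths differ by at most one byte, which matches the goal of keeping each target CPU's share consistent.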
And S103, respectively distributing the memory segments to each target CPU participating in memory cleaning.
After the memory segments are partitioned, the memory segments can be allocated to the target CPUs respectively. That is, at least 2 target CPUs participate in the cleaning of the target memory, and each target CPU only processes the allocated memory segment, so that it can be ensured that the target CPUs can clean in order without mutual negotiation.
For example, if the target memory is divided into 3 memory segments, memory segment 1, memory segment 2, and memory segment 3, and the target CPUs are CPU1, CPU2, and CPU3, then memory segment 1 may be allocated to CPU1, memory segment 2 to CPU2, and memory segment 3 to CPU3.
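The allocation step can be illustrated with a small sketch (hypothetical names; a round-robin mapping is assumed so that the case where the number of segments is an integral multiple of the number of CPUs is also covered):

```python
def allocate_segments(segments, cpus):
    """Round-robin assignment of (start, length) segments to CPUs,
    so each CPU receives an equal number of segments."""
    assignment = {cpu: [] for cpu in cpus}
    for i, seg in enumerate(segments):
        assignment[cpus[i % len(cpus)]].append(seg)
    return assignment

# Three segments, three CPUs: a one-to-one mapping.
allocate_segments([(0, 10), (10, 10), (20, 10)], ["CPU1", "CPU2", "CPU3"])
```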
And S104, independently determining the cleaning message of each cleaning by using each target CPU.
After each target CPU knows the memory segment to be processed, the target CPU can respectively and independently determine the clearing message of each clearing aiming at the memory segment.
It should be noted that each cleaning is for the cleaning engine. Since the flush engine performs a memory flush upon receiving a flush message, each flush message corresponds to a flush by the flush engine. Each target CPU determines the clearing message for each clearing, that is, each target CPU determines to clear the clearing message corresponding to the memory segment based on the memory segment corresponding to the target CPU.
Generally, there is an upper limit on the length the clearing engine can clear at one time, so the number of clearing messages determined by a target CPU is related to the length of the allocated memory segment and to that per-clear upper limit. The present embodiment does not limit the number of clearing messages determined by the target CPU, nor the clearing length corresponding to each clearing message.
Specifically, the step of independently determining the clearing message for each clearing by each target CPU includes: independently calculating, by each target CPU, the clearing message for each clearing based on its corresponding memory segment, the clearing message including a clearing length and a clearing start position. That is, after the memory segments are allocated to the corresponding target CPUs, the target CPUs may be triggered so that all target CPUs simultaneously start cleaning the memory segments allocated to them. Specifically, before sending a clearing message to the clearing engine, each target CPU calculates the length that can be cleared this time (generally not more than 32K), updates the start address for this clearing and the remaining length to be cleared accordingly, and then tells the clearing engine, through the clearing message, to perform the clearing operation at this start address for this length. That is, the clearing length and the clearing start position are carried in the clearing message.
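The per-clearing calculation can be sketched like this (the 32K per-clear cap mirrors the "generally not more than 32k" remark; function and constant names are illustrative, not from the patent):

```python
MAX_CLEAR_LEN = 32 * 1024  # assumed per-clear upper limit of the engine

def clear_messages(seg_start: int, seg_len: int):
    """Yield (start, length) clearing messages for one memory segment,
    updating the next start address and remaining length after each one."""
    addr = seg_start
    remaining = seg_len
    while remaining > 0:
        step = min(MAX_CLEAR_LEN, remaining)
        yield (addr, step)
        addr += step
        remaining -= step
```

For a 70 KiB segment this yields two 32 KiB messages followed by one 6 KiB message, with contiguous start addresses.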
And S105, combining the message sending queue and the message waiting queue corresponding to each target CPU, and sending each clearing message to a clearing engine.
The message sending queue is the queue through which clearing messages are sent, and the message waiting queue temporarily stores clearing messages when the message sending queue is full.
In this embodiment, each target CPU may separately maintain a corresponding message sending queue and a message waiting queue to speed up sending a clear message to the clear engine.
Preferably, in an embodiment of the present application, in order to increase the memory clearing efficiency, the next clearing message may also be sent directly without waiting for a response message from the clearing engine. In a specific implementation, step S105 of sending each clearing message to the clearing engine in combination with the message sending queue and the message waiting queue corresponding to each target CPU may specifically include:
step one, each clearing message is sent to a clearing engine according to a message sending queue;
step two, receiving a response message fed back by the clearing engine;
clearing the clearing message in the message sending queue by using the response message;
and step four, storing the clearing message to be sent into the message waiting queue under the condition that the message sending queue is full.
For convenience of description, the above four steps will be described in combination.
After a target CPU sends a clearing message to the clearing engine, as long as the total remaining length to be cleared in its allocated memory segment is not 0, it can send the next clearing message directly, without waiting for the clearing engine's response message, until the message sending queue is full. The function with which the target CPU sends clearing messages may be registered as a callback function. Each time a clearing message is sent, the target CPU must accurately update the start address still to be cleared and the total remaining length to be cleared.
After the message sending queue is filled with clearing messages, sending stops and the next clearing message to be sent is added to the message waiting queue (PENDING LIST), at which point the target CPU can be released. That is, the released target CPU may process the messages of other queues (i.e., messages of services other than memory cleaning); after all other queue messages have been processed, it processes the PENDING LIST, and after the PENDING LIST is processed it continues to invoke the callback function corresponding to the sent clearing messages and to perform clearing operations, looping in this way until all clearing operations are completed.
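The interplay between the send queue and the PENDING LIST can be modeled with a toy sketch (queue depth, method names, and the synchronous completion callback are all simplifying assumptions for illustration):

```python
from collections import deque

class CpuSender:
    """One target CPU driving a fixed-depth send queue toward the
    clearing engine; when the queue is full, the next message goes to
    the pending list and the CPU is 'released' for other work."""

    def __init__(self, depth: int):
        self.depth = depth
        self.send_queue = deque()  # messages in flight to the engine
        self.pending = deque()     # PENDING LIST

    def submit(self, msg):
        if len(self.send_queue) < self.depth:
            self.send_queue.append(msg)
            return "sent"
        self.pending.append(msg)
        return "pending"

    def on_response(self):
        """Engine completion: free a send-queue slot, then move one
        waiting message from the pending list into the send queue."""
        self.send_queue.popleft()
        if self.pending:
            self.send_queue.append(self.pending.popleft())
```

This is why the queue depth is fully used: the CPU keeps submitting until the depth is exhausted, and completions (rather than the CPU blocking) pull work off the pending list.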
And S106, cleaning the target memory based on each cleaning message by using a cleaning engine.
After the purge engine receives the purge messages, the target memory may be purged based on each of the purge messages. That is, each time a clear message is received, memory cleaning is performed until all clear messages corresponding to the target memory are received and successfully processed by the clear engine, so that the target memory is cleaned by the multiple CPUs.
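A toy model of the clearing engine's side is shown below (it simply zeroes each requested range of a byte buffer; a real clearing engine would be a hardware block operating asynchronously on DDR, so this is only a behavioral sketch):

```python
def run_clear_engine(memory: bytearray, messages):
    """Toy clearing engine: zero each (start, length) range it receives."""
    for start, length in messages:
        memory[start:start + length] = bytes(length)

# Demo: clear a 16-byte region in two messages.
mem = bytearray(b"\xff" * 16)
run_clear_engine(mem, [(0, 8), (8, 8)])
```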
By applying the method provided by the embodiments of the application, when the target memory needs to be cleaned, at least two target CPUs participating in the memory cleaning are determined; the target memory is divided into memory segments matching the number of target CPUs; the memory segments are allocated to the respective target CPUs participating in the memory cleaning; each target CPU independently determines the clearing message for each clearing operation; each clearing message is sent to the clearing engine through the message sending queue and message waiting queue corresponding to each target CPU; and the clearing engine cleans the target memory based on each clearing message.
In the application, when the target memory needs to be cleaned, the at least two target CPUs that will participate in the memory cleaning are first identified. The target memory is then segmented, and the memory segments are allocated to the corresponding target CPUs. Each target CPU can subsequently determine, on its own, the clearing message for each clearing operation. Because each target CPU independently handles its own memory segment, no communication link between CPUs is needed to determine the clearing messages, and no clearing confusion arises. The clearing messages determined by each target CPU can be sent to the clearing engine through the message sending queue and the message waiting queue, and the clearing engine can quickly complete the memory cleaning based on the received clearing messages. Since at least two target CPUs participate in the memory cleaning and need no additional communication, the memory cleaning efficiency can be greatly improved.
It should be noted that, based on the above embodiments, the embodiments of the present application also provide corresponding improvements. In the preferred/improved embodiment, the same steps as those in the above embodiment or corresponding steps may be referred to each other, and corresponding advantageous effects may also be referred to each other, which are not described in detail in the preferred/improved embodiment herein.
In a specific embodiment of the present application, considering that different CPUs may be busy to different degrees due to service division and the like, the target CPUs participating in memory cleaning may be determined in combination with the relevant service parameters of the CPUs, in order to ensure memory cleaning efficiency. That is to say, determining at least two target CPUs participating in the memory cleaning when the target memory needs to be cleaned in step S101 may specifically include:
step one, acquiring service parameters of each CPU under the condition that a target memory needs to be cleaned;
and step two, determining at least two target CPUs from the CPUs by using the service parameters.
For convenience of description, the above two steps will be described in combination.
After it is clear that the target memory needs to be cleaned, the service parameters of each CPU may first be obtained; the service parameters may specifically be parameters, such as CPU occupancy, that indicate how busy a CPU is.
After the service parameters are obtained, several relatively idle target CPUs can be screened from the CPUs based on a threshold. For example, a service parameter threshold may be set, and any CPU whose service parameter is below the threshold is determined to be a target CPU. Alternatively, the CPUs may be sorted by their service parameters, and a fixed number of the most idle CPUs taken as target CPUs.
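The threshold-based selection can be sketched as follows (hypothetical occupancy values in [0, 1]; combining the two strategies from the paragraph, with sorting as a fallback when too few CPUs pass the threshold, is an assumption):

```python
def select_target_cpus(occupancy: dict, threshold: float, min_count: int = 2):
    """Pick CPUs whose occupancy is below the threshold; if fewer than
    min_count qualify, fall back to the min_count least-busy CPUs."""
    idle = [cpu for cpu, occ in occupancy.items() if occ < threshold]
    if len(idle) >= min_count:
        return idle
    return sorted(occupancy, key=occupancy.get)[:min_count]
```

The `min_count=2` default reflects the requirement that at least two target CPUs participate in the memory cleaning.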
Further, when the memory segments are divided, in order to balance the CPU load, the target memory may be divided into memory segments whose lengths match the CPU service parameters and whose number matches the number of target CPUs. That is, a relatively idle target CPU may be allocated a longer memory segment, and a relatively busy target CPU a shorter one.
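The load-aware division can be sketched by weighting each CPU's segment length by its idle share (the proportional rule is an assumption; the patent only says idler CPUs get longer segments):

```python
def divide_by_idleness(total_len: int, occupancy: dict):
    """Give each target CPU a segment length proportional to its idle
    share (1 - occupancy); the last CPU absorbs the rounding remainder."""
    cpus = list(occupancy)
    idle = [1.0 - occupancy[cpu] for cpu in cpus]
    total_idle = sum(idle)
    lengths = [int(total_len * w / total_idle) for w in idle[:-1]]
    lengths.append(total_len - sum(lengths))  # remainder to the last CPU
    return dict(zip(cpus, lengths))
```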
In one embodiment of the present application, it is considered that in practical applications the service load changes over time. To guarantee memory cleaning efficiency, the cleaning progress/efficiency of each CPU can be effectively monitored after the memory segments are allocated to the target CPUs, so that the memory segment allocation can be fine-tuned. That is, after step S103 of allocating the memory segments to the target CPUs participating in memory cleaning, the method further includes the following steps:
step one, acquiring an uncleaned length corresponding to each memory segment;
step two, if there is a target memory segment whose uncleaned length is greater than a preset threshold, determining the uncleaned segment corresponding to the target memory segment;
and step three, replacing the target CPU corresponding to the uncleaned section.
For convenience of description, the above three steps will be described in combination.
After each memory segment is allocated to its corresponding target CPU, the cleaning progress of each CPU can be effectively monitored, and the uncleaned length corresponding to each memory segment can be obtained. In this embodiment, a threshold may be set on the uncleaned length for different time periods; when a target memory segment whose uncleaned length exceeds the preset threshold is found, the uncleaned segment is first determined within that target memory segment, and the uncleaned segment is then allocated to a target CPU other than the one currently assigned. For example, if it is found that 78K of the memory segment corresponding to CPU1 has not been cleaned while the preset threshold is 50K, CPU1 may be considered busy and unlikely to finish quickly, and the remaining uncleaned 78K may be allocated to CPU2 for cleaning.
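The monitoring-and-reassignment rule can be sketched as follows (handing the stalled segment to the CPU with the least remaining work is an assumed policy; the patent only requires replacing the target CPU):

```python
def rebalance(progress: dict, threshold: int):
    """progress maps CPU -> (segment_id, uncleaned_length). Any segment
    whose uncleaned length exceeds the threshold is reassigned to the
    CPU with the least remaining work. Returns {old_cpu: new_cpu}."""
    moves = {}
    for cpu, (seg, remaining) in progress.items():
        if remaining > threshold:
            target = min(progress, key=lambda c: progress[c][1])
            if target != cpu:
                moves[cpu] = target
    return moves
```

With the 78K/50K example from the text, the segment still held by CPU1 would be moved to CPU2.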
Corresponding to the above method embodiments, the present application further provides a memory cleaning device, and the memory cleaning device described below and the memory cleaning method described above may be referred to correspondingly.
Referring to fig. 2, the apparatus includes the following modules:
the CPU determining module 101 is used for determining at least two target CPUs participating in memory cleaning under the condition that the target memory needs to be cleaned;
a memory dividing module 102, configured to divide a target memory into memory segments whose number is matched with that of target CPUs;
the memory allocation module 103 is configured to allocate memory segments to target CPUs participating in memory cleaning respectively;
a clear message determination module 104, configured to determine, by using each target CPU, a clear message for each clearing independently;
a message sending module 105, configured to send each purge message to the purge engine in combination with the message sending queue and the message waiting queue corresponding to each target CPU;
and a memory cleaning module 106, configured to clean the target memory based on each cleaning message by using the cleaning engine.
By applying the device provided by the embodiment of the application, under the condition that the target memory needs to be cleaned, at least two target CPUs participating in memory cleaning are determined; dividing a target memory into memory segments matched with the number of target CPUs; respectively allocating the memory segments to each target CPU participating in memory cleaning; respectively and independently determining the clearing message of each clearing by using each target CPU; sending each clearing message to a clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU; and cleaning the target memory based on each cleaning message by using a cleaning engine.
In this application, when the target memory needs to be cleaned, at least two target CPUs to participate in the cleaning are first identified. The target memory is then segmented, and the memory segments are allocated to the corresponding target CPUs. Each target CPU can subsequently determine the clearing message for each clearing operation on its own. Because each target CPU independently handles its own memory segment, no inter-CPU communication link is needed to determine the clearing messages, and no clearing conflicts arise. The clearing message determined by each target CPU is sent to the clearing engine via the message sending queue and the message waiting queue, and the clearing engine can quickly complete the memory cleaning based on the received clearing messages. Since at least two target CPUs participate in the cleaning and require no extra communication among themselves, memory cleaning efficiency can be greatly improved.
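The segmentation step above can be sketched as a simple even split of the target region, one segment per participating CPU. This is a minimal illustration under assumed names; the patent also allows unequal splits based on CPU service parameters.

```python
# Illustrative sketch (not the patent's implementation): split a target
# memory region evenly into one segment per participating CPU, so each
# CPU can later build its purge work from its own (start, length) pair.

def split_memory(base, total_len, n_cpus):
    """Divide [base, base + total_len) into n_cpus near-equal segments."""
    seg = total_len // n_cpus
    segments = []
    for i in range(n_cpus):
        start = base + i * seg
        # The last segment absorbs any remainder so the region is fully covered.
        length = total_len - i * seg if i == n_cpus - 1 else seg
        segments.append((start, length))
    return segments

# 1 MiB region, 4 target CPUs -> four 256 KiB segments.
print(split_memory(0x1000, 1 << 20, 4))
```

Because the segments are disjoint by construction, the CPUs never need to coordinate over which addresses each one clears.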
In a specific embodiment of the present application, the CPU determining module 101 is specifically configured to obtain service parameters of each CPU when a target memory needs to be cleaned; and determining at least two target CPUs from the CPUs by using the service parameters.
In an embodiment of the present application, the memory dividing module 102 is specifically configured to divide the target memory into memory segments whose lengths match the CPU service parameters and whose number matches the number of the target CPUs.
In an embodiment of the present application, the memory dividing module 102 is specifically configured to divide the target memory evenly into memory segments matching the number of the target CPUs.
In a specific embodiment of the present application, the clearing message determining module 104 is specifically configured to have each target CPU independently calculate the clearing message for each clearing operation based on its corresponding memory segment; the clearing message includes a clearing length and a clearing start position.
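A clearing message of the form (clearing start position, clearing length) can be derived per CPU as sketched below. The fixed maximum chunk size and the generator shape are assumptions for illustration; the patent does not specify a chunking scheme.

```python
# Hedged sketch: one CPU derives its clearing messages from its own
# segment, chunked to an assumed maximum size the engine accepts per
# message. Names and the chunk limit are hypothetical.

def purge_messages(seg_start, seg_len, max_chunk):
    """Yield (clear_start, clear_length) messages covering the segment."""
    offset = 0
    while offset < seg_len:
        length = min(max_chunk, seg_len - offset)
        yield (seg_start + offset, length)
        offset += length

# A 100-byte segment with a 32-byte limit yields four messages:
# 32 + 32 + 32 + 4 bytes.
msgs = list(purge_messages(seg_start=0x4000, seg_len=100, max_chunk=32))
```

Each CPU runs this over its own segment only, which is why no inter-CPU coordination is needed to produce the messages.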
In one embodiment of the present application, the method further includes:
a dynamic adjustment module, configured to obtain the uncleaned length of each memory segment after the memory segments are allocated to the target CPUs participating in memory cleaning; if a target memory segment whose uncleaned length exceeds a preset threshold exists, determine the uncleaned portion of that target memory segment; and reassign the uncleaned portion to a different target CPU.
In a specific embodiment of the present application, the message sending module 105 is specifically configured to send each clearing message to the clearing engine according to the message sending queue; receive the response message fed back by the clearing engine; clear the acknowledged clearing message from the message sending queue based on the response message; and, when the message sending queue is full, store the clearing message to be sent into the message waiting queue.
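The send-queue/wait-queue protocol above can be modeled as a bounded queue with an overflow queue. The class name, capacity, and promotion policy (oldest waiting message first) are illustrative assumptions.

```python
# Sketch of the bounded send-queue protocol described above. New clearing
# messages go to the send queue; when it is full they wait; an engine
# response frees a send-queue slot and promotes the oldest waiter.
from collections import deque


class PurgeSender:
    def __init__(self, capacity):
        self.capacity = capacity
        self.send_queue = deque()  # messages dispatched to the engine
        self.wait_queue = deque()  # overflow, waiting for a free slot

    def submit(self, msg):
        if len(self.send_queue) < self.capacity:
            self.send_queue.append(msg)  # slot free: dispatch immediately
        else:
            self.wait_queue.append(msg)  # send queue full: park the message

    def on_response(self, msg):
        # The engine acknowledged `msg`: clear it from the send queue ...
        self.send_queue.remove(msg)
        # ... and promote the oldest waiting message, if any.
        if self.wait_queue:
            self.send_queue.append(self.wait_queue.popleft())
```

With capacity 2, submitting three messages leaves the third waiting; once the engine acknowledges the first, the waiter moves into the send queue automatically.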
Corresponding to the above method embodiments, an electronic device is further provided in the embodiments of the present application; the electronic device described below and the memory cleaning method described above may be cross-referenced.
Referring to fig. 3, the electronic device includes:
a memory 332 for storing a computer program;
the processor 322 is configured to implement the steps of the memory cleaning method according to the foregoing method embodiment when executing the computer program.
Specifically, referring to fig. 4, fig. 4 is a schematic structural diagram of the electronic device provided in this embodiment. The electronic device may differ considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 322 and a memory 332, where the memory 332 stores one or more computer applications 342 or data 344. The memory 332 may be transient or persistent storage. The program stored in the memory 332 may include one or more modules (not shown), each of which may include a sequence of instruction operations on a data processing device. Further, the central processor 322 may be configured to communicate with the memory 332 and execute the series of instruction operations in the memory 332 on the electronic device 301.
The electronic device 301 may also include one or more power sources 326, one or more wired or wireless network interfaces 350, one or more input-output interfaces 358, and/or one or more operating systems 341.
The steps of the memory cleaning method described above may be implemented by the structure of the electronic device.
Corresponding to the above method embodiments, the present application further provides a readable storage medium; the readable storage medium described below and the memory cleaning method described above may be cross-referenced.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the memory cleaning method of the above-mentioned method embodiments.
The readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other readable storage medium capable of storing program code.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (10)

1. A memory cleaning method is characterized by comprising the following steps:
under the condition that a target memory needs to be cleaned, determining at least two target CPUs participating in memory cleaning;
dividing the target memory into memory segments matched with the number of the target CPUs;
respectively distributing the memory segments to each target CPU participating in memory cleaning;
respectively and independently determining the clearing message of each clearing by using each target CPU;
sending each clearing message to a clearing engine by combining a message sending queue and a message waiting queue corresponding to each target CPU;
and cleaning the target memory based on each cleaning message by using the cleaning engine.
2. The method according to claim 1, wherein determining at least two target CPUs participating in memory cleaning when the target memory needs to be cleaned comprises:
acquiring service parameters of each CPU under the condition that the target memory needs to be cleaned;
and determining at least two target CPUs from the CPUs by using the service parameters.
3. The method according to claim 2, wherein dividing the target memory into memory segments matching the number of the target CPUs comprises:
and dividing the target memory into memory segments with the length matched with the CPU service parameters and the number matched with the number of the target CPUs.
4. The method according to claim 1, wherein dividing the target memory into memory segments matching the number of the target CPUs comprises:
and evenly dividing the target memory into memory segments matching the number of the target CPUs.
5. The method according to claim 1, wherein the determining the clear message for each clearing independently by each target CPU comprises:
independently calculating, by each target CPU, the clearing message of each clearing operation based on the memory segment corresponding to that target CPU; the clearing message includes a clearing length and a clearing start position.
6. The method according to claim 1, wherein after the allocating the memory segments to the target CPUs participating in memory scrubbing, further comprises:
acquiring the uncleaned length corresponding to each memory segment;
if a target memory segment with an uncleaned length larger than a preset threshold exists, determining the uncleaned segment within the target memory segment;
and replacing the target CPU corresponding to the uncleaned section.
7. The method according to any one of claims 1 to 6, wherein the sending each of the purge messages to a purge engine in combination with a message sending queue and a message waiting queue corresponding to each of the target CPUs includes:
sending each clearing message to the clearing engine according to the message sending queue;
receiving a response message fed back by the clearing engine;
clearing the clearing message in the message sending queue by using the response message;
and storing the clearing message to be sent currently into the message waiting queue under the condition that the message sending queue is full.
8. A memory cleaning apparatus, comprising:
the CPU determining module is used for determining at least two target CPUs participating in memory cleaning under the condition that the target memory needs to be cleaned;
the memory dividing module is used for dividing the target memory into memory segments matched with the number of the target CPUs;
the memory allocation module is used for respectively allocating the memory segments to the target CPUs participating in memory cleaning;
a clearing message determining module, configured to determine, by using each target CPU, a clearing message for each clearing independently;
the message sending module is used for sending each clearing message to a clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU;
and the memory cleaning module is used for cleaning the target memory based on each cleaning message by using the cleaning engine.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the memory cleaning method according to any one of claims 1 to 7 when executing the computer program.
10. A readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the memory cleaning method according to any one of claims 1 to 7.
CN202110744436.1A 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium Active CN113626181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110744436.1A CN113626181B (en) 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110744436.1A CN113626181B (en) 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113626181A true CN113626181A (en) 2021-11-09
CN113626181B CN113626181B (en) 2023-07-25

Family

ID=78378820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110744436.1A Active CN113626181B (en) 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113626181B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070239949A1 (en) * 2006-03-31 2007-10-11 Lenovo (Singapore) Pte. Ltd. Method and apparatus for reclaiming space in memory
CN101055533A (en) * 2007-05-28 2007-10-17 中兴通讯股份有限公司 Multithreading processor dynamic EMS memory management system and method
CN101178669A (en) * 2007-12-13 2008-05-14 华为技术有限公司 Resource recovery method and apparatus
CN101178679A (en) * 2007-12-14 2008-05-14 华为技术有限公司 EMS memory checking method and system in multi-nucleus system
CN102799471A (en) * 2012-05-25 2012-11-28 上海斐讯数据通信技术有限公司 Method and system for process recycling of operating system
CN103544063A (en) * 2013-09-30 2014-01-29 三星电子(中国)研发中心 Process clearing method and device applied to Android platform
CN103581008A (en) * 2012-08-07 2014-02-12 杭州华三通信技术有限公司 Router and software upgrading method thereof
CN105205409A (en) * 2015-09-14 2015-12-30 浪潮电子信息产业股份有限公司 Method for preventing data leakage during memory multiplexing and computer system
CN107315622A (en) * 2017-06-19 2017-11-03 杭州迪普科技股份有限公司 A kind of method and device of cache management
CN110069422A (en) * 2018-01-23 2019-07-30 普天信息技术有限公司 Core buffer recovery method based on MIPS multi-core processor
CN112286688A (en) * 2020-11-05 2021-01-29 北京深维科技有限公司 Memory management and use method, device, equipment and medium

Also Published As

Publication number Publication date
CN113626181B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
WO2020211579A1 (en) Processing method, device and system for distributed bulk processing system
CN110389843B (en) Service scheduling method, device, equipment and readable storage medium
CN109525410B (en) Distributed storage system upgrading management method and device and distributed storage system
CN102255926B (en) Method for allocating tasks in Map Reduce system, system and device
US20060095247A1 (en) Predictive analysis of availability of systems and/or system components
CN107682417B (en) Task allocation method and device for data nodes
CN111309644B (en) Memory allocation method and device and computer readable storage medium
CN105740063A (en) Data processing method and apparatus
CN110708377A (en) Data transmission method, device and storage medium
CN111338779B (en) Resource allocation method, device, computer equipment and storage medium
CN112269661B (en) Partition migration method and device based on Kafka cluster
CN112256433B (en) Partition migration method and device based on Kafka cluster
CN116954816A (en) Container cluster control method, device, equipment and computer storage medium
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN106775975B (en) Process scheduling method and device
CN112261125B (en) Centralized unit cloud deployment method, device and system
CN113626173A (en) Scheduling method, device and storage medium
CN116483546B (en) Distributed training task scheduling method, device, equipment and storage medium
CN112631994A (en) Data migration method and system
CN112559179A (en) Job processing method and device
CN113626181A (en) Memory cleaning method, device, equipment and readable storage medium
CN110750362A (en) Method and apparatus for analyzing biological information, and storage medium
CN115878309A (en) Resource allocation method, device, processing core, equipment and computer readable medium
CN107656810B (en) Method for ensuring service quality of delay sensitive program under data center environment
CN112748883A (en) IO request pipeline processing device, method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant