CN109522113B - Memory management method and device - Google Patents


Info

Publication number
CN109522113B
Authority
CN
China
Prior art keywords
memory, service module, core, service, allocated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811121410.6A
Other languages
Chinese (zh)
Other versions
CN109522113A (en)
Inventor
张贤义
叶国洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maipu Communication Technology Co Ltd
Original Assignee
Maipu Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maipu Communication Technology Co Ltd filed Critical Maipu Communication Technology Co Ltd
Priority to CN201811121410.6A
Publication of CN109522113A
Application granted
Publication of CN109522113B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources to service a request
    • G06F9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5016: Allocation of resources to service a request, the resource being the memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/50: Indexing scheme relating to G06F9/50
    • G06F2209/5011: Pool

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Memory System (AREA)

Abstract

Embodiments of the invention disclose a memory management method and device in the field of data communication. The method comprises the following steps: a control core pre-allocates memory for a service module according to the service module's memory requirement parameters on a forwarding core; when the forwarding core detects that the remaining amount of the memory pre-allocated to the service module is smaller than a set threshold, it triggers the control core to expand that memory. Because expansion proceeds by a set proportion, the service module occupies memory step by step according to actual demand, saving device memory.

Description

Memory management method and device
Technical Field
The invention belongs to the field of data communication, and particularly relates to a memory management method and device.
Background
A Data Forwarding Platform (DFP) is a framework module that runs on forwarding cores and handles packet forwarding. In a multi-core environment containing a DFP framework module, the forwarding cores perform data forwarding while a control core allocates and reclaims resources. In the prior art, a delayed memory release technique keeps critical inter-core resources lock-free to improve packet forwarding efficiency; the technique relies on the maximum time each forwarding core takes to process its current packet being bounded. However, when a forwarding-plane service module receives a data packet and must allocate memory, the operating system's existing allocation and release interfaces may block. If a forwarding-plane service thread blocks, the delayed-release scheme breaks down and the CPU may access memory that has already been freed. Two workarounds exist today. In the first, the forwarding core asks the control core to allocate memory via an inter-core message queue and drops the current packet; once the control core completes the allocation, the forwarding core can process the subsequently retransmitted packet. In the second, the forwarding core hands the current packet to the control core via the inter-core message queue, and the control core performs the packet processing, including the memory allocation. Both workarounds degrade packet forwarding efficiency.
Disclosure of Invention
The invention provides a memory management method and device to solve the problem that existing memory allocation techniques degrade packet forwarding efficiency.
In order to achieve the above object, in a first aspect, the present invention provides a memory management method, including:
a control core pre-allocates memory for a service module according to the service module's memory requirement parameters on a forwarding core; and
when the forwarding core detects that the remaining amount of memory pre-allocated to the service module is smaller than a set threshold, it triggers the control core to expand the memory pre-allocated to the service module.
In a second aspect, the present invention provides a memory management device comprising a control core and at least one forwarding core, wherein:
the control core is configured to pre-allocate memory for a service module according to the service module's memory requirement parameters on the forwarding core; and
the forwarding core is configured to trigger the control core to expand the memory pre-allocated to the service module when it detects that the remaining amount of that memory is smaller than a set threshold.
By pre-allocating the memory that the service module on the forwarding core needs for processing data packets, the invention avoids memory allocation during packet processing and improves packet forwarding efficiency. When the forwarding core detects that the memory pre-allocated for the service module is insufficient, the memory is expanded by a set proportion, so the service module occupies memory step by step according to actual demand, saving device memory.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method of memory management according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a memory pool according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an architecture of a control core and a forwarding core to which the method according to the embodiment of the present invention is applied;
fig. 4 is a schematic structural diagram of a memory management device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
The system architecture and service scenarios described in the embodiments are intended to illustrate the technical solutions more clearly and do not limit them; as system architectures evolve and new service scenarios appear, the technical solutions provided here remain applicable to similar technical problems.
The method is described in detail below with reference to specific examples.
An embodiment of the present invention provides a memory management method that improves data packet forwarding efficiency through a memory pre-allocation mechanism of the control core. As shown in fig. 1, the method includes:
Step 101: the control core pre-allocates memory for a service module according to the service module's memory requirement parameters on a forwarding core.
Before step 101, the method further includes: the control core acquires the memory requirement parameters of the service module. The parameters include a pre-allocation proportion, i.e. the proportion of the pre-allocated memory to the total memory requirement.
In this embodiment, the memory requirement parameters further include the memory size of a single service object, the total number of service objects, and the number of those objects reserved for multi-core contention. A service object is a data unit that stores the service module's data.
In this step, the control core pre-allocates memory as follows: it computes the service module's total memory requirement from the classes of service objects the module uses on the forwarding core and the memory requirement parameters of each class, then pre-allocates memory according to the pre-allocation proportion in those parameters.
In this embodiment, the total number of objects in each class is a predicted value, for example a fixed proportion of the number of connection-tracking entries. During packet processing, connection tracking is used to track and record connection state; each flow corresponds to one connection-tracking entry, and the number of entries is configurable by command. A service module such as a DPI module only needs to process packets that hit its DPI rules. Since in a practical deployment perhaps only about 100 of every 10000 packets need DPI processing, the total number of objects in each class for the DPI module can be set to one percent of the connection-tracking entry count; that is, one percent serves as the sizing proportion when allocating memory to each class of the DPI module's service objects.
Specifically, the total memory requirement of each object class is computed from the per-object memory size and the total object count of that class, and the per-class requirements are summed to obtain the module's total memory requirement. The memory reserved for multi-core contention in each class is then computed from the per-object memory size and the contention count within the total object count. Finally, memory is pre-allocated for the service module according to each class's pre-allocation proportion.
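The sizing arithmetic just described can be sketched as follows. This is an illustrative model, not code from the patent; all names (`prealloc_bytes`, `obj_size`, and so on) are hypothetical:

```python
def prealloc_bytes(object_classes):
    """Total memory requirement and pre-allocated share for a service module.

    object_classes: one dict per service-object class with
      obj_size        - memory for a single service object (bytes)
      total_count     - predicted total number of objects in the class
      contended_count - objects out of total_count reserved for multi-core
                        contention (a carve-out of the total, not extra memory)
      prealloc_ratio  - fraction of the class requirement allocated up front
    """
    total_need = 0
    prealloc = 0
    for c in object_classes:
        class_need = c["obj_size"] * c["total_count"]
        total_need += class_need
        # Only a fraction is handed out at start-up; the remainder is
        # added later by the control core's step-wise expansion.
        prealloc += int(class_need * c["prealloc_ratio"])
    return total_need, prealloc

# A DPI-like module: object count sized at 1% of 10000 connection-tracking
# entries, 256-byte objects, a quarter of the requirement pre-allocated.
dpi = [{"obj_size": 256, "total_count": 100, "contended_count": 10,
        "prealloc_ratio": 0.25}]
print(prealloc_bytes(dpi))  # (25600, 6400)
```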
Step 102: when the forwarding core detects that the remaining amount of memory pre-allocated to the service module is smaller than a set threshold, it triggers the control core to expand the memory pre-allocated to the service module.
In this step, the memory pre-allocated to the service module comprises memory corresponding to each forwarding core and memory for multi-core contention. The detection and expansion specifically comprise:
after obtaining any service object of the service module, the forwarding core checks, within the pre-allocated memory, the memory corresponding to the forwarding core for that service object and the memory for multi-core contention; and
when the forwarding core detects that the sum of the two is smaller than the set threshold, it triggers the control core to expand the memory pre-allocated for that service object according to a set expansion proportion. The set threshold may be a proportion of the pre-allocated memory, for example 20%. The expansion proportion may be set equal to the pre-allocation proportion or to another value; it is not specifically limited here.
In this embodiment, the memory pre-allocated to the service module may be managed with memory pools. The following describes in detail how the pre-allocated memory is managed with the memory pool technique and how the method of this embodiment is implemented.
Memory pools are created according to the classes of service objects the service module uses: each object class corresponds to one memory pool. For example, as shown in fig. 2, if the service objects of service module 1 comprise n (n >= 1, n a natural number) classes, n memory pools must be created for service module 1, shown schematically as memory pool 1, memory pool 2, ..., memory pool n.
Specifically, the service module calls the memory pool's Open interface, which determines the number of pools to create from the number of object classes and returns a handle. A handle records the memory pool information for one service module: one module corresponds to one handle, and the handle contains the information of all pools the module needs, including the number of pools and the object-class information of each pool. A service object here is a data unit that stores the service module's data.
Using the handle, the service module calls the pool creation interface to specify, for each pool, the per-object memory size, the total object count, the contention count within that total, and the pre-allocation proportion.
After the service module is initialized on the control core, the control core obtains the module's object classes and the memory requirement parameters of each class: the per-object memory size, the total object count, the contention count within the total, and the pre-allocation proportion (the proportion of pre-allocated memory to the total requirement).
The control core then computes the module's total memory requirement from the object classes and their parameters, and pre-allocates memory according to the pre-allocation proportion. It builds the memory pools from the pre-allocated memory, dividing it into blocks of the per-object size and storing the blocks in the pools.
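The Open/create sequence can be modelled roughly as below. The class and function names (`MemPoolHandle`, `mempool_open`, `create_pool`) are invented for illustration; the patent names only an Open interface and a creation interface:

```python
class MemPoolHandle:
    """One handle per service module, recording every pool the module needs."""

    def __init__(self, module_name, num_classes):
        self.module_name = module_name
        self.num_classes = num_classes   # one pool per service-object class
        self.pools = []                  # filled in by create_pool()

    def create_pool(self, obj_size, total_count, contended_count, prealloc_ratio):
        """Register one pool's parameters; returns the pool's index."""
        self.pools.append({
            "obj_size": obj_size,
            "total_count": total_count,
            "contended_count": contended_count,
            "prealloc_ratio": prealloc_ratio,
        })
        return len(self.pools) - 1

def mempool_open(module_name, num_object_classes):
    """Sketch of the Open interface: derive the pool count from the number
    of object classes and hand the module its handle."""
    return MemPoolHandle(module_name, num_object_classes)

# A module with one object class registers one pool through its handle.
handle = mempool_open("dpi", 1)
handle.create_pool(obj_size=256, total_count=100,
                   contended_count=10, prealloc_ratio=0.25)
```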
In this embodiment, to improve allocation efficiency and avoid the forwarding-performance loss caused by locking, each memory pool contains memory-occupied linked lists (Usedlist) and memory-free linked lists (Freelist), as shown in fig. 2: for memory pool 2, the figure schematically shows the Usedlist and Freelist of each of m (m >= 1, m a natural number) forwarding cores, plus a Usedlist and Freelist for multi-core contention.
As shown in fig. 3, when a service module on a forwarding core needs memory, it always takes a service object from a Usedlist; when releasing memory, it always returns the object to the corresponding Freelist. Dividing the pre-allocated memory into blocks of the per-object size and storing them in the pool therefore means mounting the blocks on the Usedlist of each forwarding core and on the Usedlist for multi-core contention.
Fig. 3 schematically shows n forwarding cores and, on the control core, the n corresponding Usedlists plus one Usedlist for multi-core contention. Whichever core's Usedlist a service module allocates from, the memory is returned on release to that same core's Freelist: an object taken from forwarding core 1's Usedlist is returned to forwarding core 1's Freelist.
Likewise, an object taken from the contention Usedlist is returned to the contention Freelist when released. When a Usedlist becomes empty, it is swapped with its Freelist.
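The Usedlist/Freelist pairing can be modelled with the single-threaded sketch below; it mirrors the swap rule, but real inter-core safety would of course need more than Python lists:

```python
class CorePool:
    """One forwarding core's list pair inside a memory pool: objects are
    taken from used_list (the Usedlist above) and returned to free_list
    (the Freelist). Allocation and release therefore never touch the same
    list, which is what lets the design avoid locking."""

    def __init__(self, objects):
        self.used_list = list(objects)   # objects currently available
        self.free_list = []              # objects returned after release

    def alloc(self):
        if not self.used_list:
            # Usedlist exhausted: swap in everything released so far.
            self.used_list, self.free_list = self.free_list, self.used_list
        return self.used_list.pop() if self.used_list else None

    def free(self, obj):
        self.free_list.append(obj)

pool = CorePool(["obj-a", "obj-b"])
x = pool.alloc()
pool.free(x)   # goes back to free_list, not used_list
```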
As a preferred implementation of this embodiment, a check for double release of memory may be performed when memory is released; the specific detection method is not limited here.
In a preferred implementation of this embodiment, when a memory request is made against the current forwarding core's Usedlist and that list is exhausted, the core's Usedlist and Freelist are swapped; if all of the current core's service objects are still in use, a service object is taken from the Usedlist for multi-core contention.
In this embodiment, when a service module on a forwarding core identifies a data packet it must process, it takes a service object from the Usedlist of the module's memory pool and stores the packet's processing policy in that object. When all of the current forwarding core's service objects are in use, an object is taken from the contention Usedlist instead, and the processing policy is stored there.
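The allocation order just described (the core's own Usedlist, then the list swap, then the contention Usedlist) can be sketched as follows; `alloc_object` and its parameters are illustrative names:

```python
def alloc_object(core_used, core_free, contended_used):
    """Hypothetical allocation path for one forwarding core: try the core's
    Usedlist; if empty, swap it with the core's Freelist; if still empty,
    fall back to the Usedlist reserved for multi-core contention."""
    if not core_used:
        # In-place swap so the caller's lists are updated.
        core_used[:], core_free[:] = core_free[:], core_used[:]
    if core_used:
        return core_used.pop()
    if contended_used:
        return contended_used.pop()
    return None  # pool exhausted; expansion should already be under way

used, free, contended = [], ["obj-1", "obj-2"], ["obj-9"]
print(alloc_object(used, free, contended))  # "obj-2" after the swap
```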
In a preferred implementation of this embodiment, when the forwarding core detects that the remaining service objects in its own lists of a memory pool plus those in the contention lists fall below a set proportion of the pool, it triggers the control core to expand that pool.
Specifically, after a service module on a forwarding core obtains any service object from a pool's lists, the forwarding core checks the remaining objects in that core's lists of the object's pool and in the lists for multi-core contention.
When the sum of those remaining objects is below the set proportion of the pool, the control core is triggered to expand that object's pool by the expansion proportion. A stepping policy may be adopted: the step proportion is specified by the service module when the pool is created, and the pool grows step by step while device memory is plentiful. The step proportion may equal the pre-allocation proportion or another configured value; it is not specifically limited here. For example, when the remaining objects in a forwarding core's lists plus those in the contention lists fall below 20% of the pool, expansion of that object's pool is triggered.
In a preferred implementation of this embodiment, a memory pool may be deactivated when the service module no longer needs it. Deactivation does not destroy the pool immediately; the pool is destroyed only after all service objects allocated to the module have been returned to it. When the module no longer needs the pool on the forwarding plane, for example because the module's function is disabled or its configuration is cleared, the module must call the pool-disable interface so that all of its occupied service objects are returned to the operating system.
In a preferred implementation of this embodiment, a memory pool may also be reset. Resetting is used together with the connection-tracking clear command: the user can issue a clear command to remove the connection-tracking entries associated with IPv4 or IPv6. When an entry is cleared, all memory associated with the connection is released in turn and the pool returns to its initial state. Memory-leak monitoring may be performed during a pool reset.
With the memory management method provided by the invention, the memory a service module on a forwarding core needs for processing data packets is pre-allocated, so no memory allocation occurs during packet processing and packet forwarding efficiency improves. When the forwarding core detects that the pre-allocated memory is insufficient, it is expanded by a set proportion, so the service module occupies memory step by step according to actual demand, saving device memory.
An embodiment of the present invention provides a memory management device comprising a control core and at least one forwarding core. Fig. 4 schematically shows a device 40 comprising a control core 401 and m (m >= 1, m a natural number) forwarding cores: forwarding core 1 (402), forwarding core 2 (403), ..., forwarding core m (404). The device is described in detail below taking forwarding core 1 (402), hereinafter "forwarding core 402", as an example.
The control core 401 is configured to pre-allocate memory for a service module according to the service module's memory requirement parameters on the forwarding core 402.
The forwarding core 402 is configured to trigger the control core 401 to expand the memory pre-allocated to the service module when it detects that the remaining amount of that memory is smaller than a set threshold.
The control core 401 is further configured to acquire the service module's memory requirement parameters, which include a pre-allocation proportion: the proportion of pre-allocated memory to the total memory requirement.
The control core 401 is specifically configured to compute the module's total memory requirement from the object classes the module uses on the forwarding core and the memory requirement parameters of each class, and to pre-allocate memory according to the pre-allocation proportion in those parameters.
The memory the control core 401 pre-allocates for the service module comprises memory corresponding to each forwarding core and memory for multi-core contention.
The forwarding core 402 is configured to check, after obtaining any service object of the module, the memory corresponding to the forwarding core for that object within the pre-allocated memory and the memory for multi-core contention; and,
when it detects that the sum of the two is smaller than the set threshold, to trigger the control core 401 to expand the memory pre-allocated for that service object according to a set expansion proportion.
The control core 401 is further configured to manage the memory pre-allocated to the service module using memory pools.
With the memory management device provided by the invention, the memory a service module on a forwarding core needs for processing data packets is pre-allocated, so no memory allocation occurs during packet processing and packet forwarding efficiency improves. When the forwarding core detects that the pre-allocated memory is insufficient, it is expanded by a set proportion, so the service module occupies memory step by step according to actual demand, saving device memory.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (4)

1. A memory management method, the method comprising:
the method comprises the steps that a control core obtains memory requirement parameters of a service module on a forwarding core, wherein the memory requirement parameters comprise a pre-allocation proportion; the pre-allocation proportion is the proportion of pre-allocated memory to the total memory requirement;
the control core pre-allocates a memory for the service module according to the memory requirement parameter of the service module on the forwarding core; the method comprises the steps that a memory pre-allocated to a service module is managed by adopting a memory pool, the memory pool is created according to the type of a service object corresponding to the service module, and each type of service object corresponds to one memory pool; the service object is a data unit for storing service module data;
when the forwarding core detects that the residual amount of the memory in the memory pre-allocated to the service module is smaller than a set threshold value, triggering a control core to expand the memory pre-allocated to the service module;
the memory pre-allocated for the service module comprises memory corresponding to each forwarding core and memory used for multi-core contention; the step in which the forwarding core, upon detecting that the remaining amount of the memory pre-allocated to the service module is smaller than the set threshold, triggers the control core to expand the memory pre-allocated to the service module specifically comprises:
after acquiring any service object of the service module, the forwarding core checks, within the memory pre-allocated to the service module, the memory corresponding to the forwarding core for that service object and the memory used for multi-core contention;
and when the forwarding core detects that the sum of the memory corresponding to the forwarding core for that service object and the memory used for multi-core contention is smaller than the set threshold, it triggers the control core to expand the memory pre-allocated to the service object according to a set expansion proportion.
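The triggering condition of claim 1 can be sketched as follows. This is an illustrative sketch under assumed names (`obj_pool`, `need_expand`, a fixed four-core layout), not the actual claimed implementation: each forwarding core sums its own per-core share and the shared multi-core contention region for a service object's pool and compares the sum against the threshold.

```c
#include <assert.h>
#include <stddef.h>

#define NUM_FWD_CORES 4  /* assumed core count, for illustration only */

/* Remaining memory for one service object's pool: a private share per
 * forwarding core plus one region contended for by all cores. */
typedef struct {
    size_t per_core_free[NUM_FWD_CORES]; /* bytes left in each core's share */
    size_t contended_free;               /* bytes left in the shared region */
} obj_pool;

/* Nonzero if forwarding core `core` should trigger the control core to
 * expand this service object's pre-allocated memory. */
static int need_expand(const obj_pool *p, int core, size_t threshold) {
    return p->per_core_free[core] + p->contended_free < threshold;
}
```

Because each core adds the shared contention region to its own remaining share, expansion is triggered only when both the core-local reserve and the fallback region are close to exhaustion.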
2. The method according to claim 1, wherein the control core pre-allocating memory for the service module according to the memory requirement parameters of the service module on the forwarding core specifically comprises:
the control core obtains the total memory requirement of the service module according to the types of the service objects of the service module on the forwarding core and the memory requirement parameters of each type of service object, and pre-allocates memory for the service module according to the pre-allocation proportion in the memory requirement parameters.
3. A memory management device, characterized in that the device comprises a control core and at least one forwarding core, wherein
the control core is configured to obtain memory requirement parameters of a service module on the forwarding core, wherein the memory requirement parameters comprise a pre-allocation proportion; the pre-allocation proportion is the ratio of the pre-allocated memory to the total memory requirement;
the control core is configured to pre-allocate memory for the service module according to the memory requirement parameters of the service module on the forwarding core; the memory pre-allocated to the service module is managed using memory pools, each memory pool being created according to a type of service object corresponding to the service module, with each type of service object corresponding to one memory pool; a service object is a data unit that stores service module data;
the forwarding core is configured to trigger the control core to expand the memory pre-allocated to the service module when it detects that the remaining amount of the memory pre-allocated to the service module is smaller than a set threshold;
the memory pre-allocated for the service module by the control core comprises memory corresponding to each forwarding core and memory used for multi-core contention;
the forwarding core is specifically configured to check, after acquiring any service object of the service module, the memory corresponding to the forwarding core for that service object within the memory pre-allocated to the service module and the memory used for multi-core contention;
and, upon detecting that the sum of the memory corresponding to the forwarding core for that service object and the memory used for multi-core contention is smaller than the set threshold, to trigger the control core to expand the memory pre-allocated to the service object according to a set expansion proportion.
4. The device according to claim 3, wherein the control core is specifically configured to obtain the total memory requirement of the service module according to the types of the service objects of the service module on the forwarding core and the memory requirement parameters of each type of service object, and to pre-allocate memory for the service module according to the pre-allocation proportion in the memory requirement parameters.
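The total-requirement computation recited in claims 2 and 4 can be sketched as follows. This is an illustrative sketch, not the claimed implementation: the structure and names (`obj_type_req`, `total_requirement`, `preallocate`) are assumptions, with each service-object type's requirement taken as object size times expected object count.

```c
#include <assert.h>
#include <stddef.h>

/* Assumed per-type memory requirement parameters for one service module:
 * each type of service object contributes size * count bytes. */
typedef struct {
    size_t obj_size;   /* bytes per service object of this type */
    size_t obj_count;  /* expected number of objects            */
} obj_type_req;

/* Sum the requirement over every service-object type of the module. */
static size_t total_requirement(const obj_type_req *types, int n) {
    size_t total = 0;
    for (int i = 0; i < n; i++)
        total += types[i].obj_size * types[i].obj_count;
    return total;
}

/* Pre-allocate the pre-allocation proportion (in percent) of the total. */
static size_t preallocate(const obj_type_req *types, int n, unsigned pct) {
    return total_requirement(types, n) * pct / 100;
}
```

For example, a module with 100 objects of 64 bytes and 50 objects of 128 bytes has a total requirement of 12 800 bytes; a 25% pre-allocation proportion reserves 3 200 bytes up front.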
CN201811121410.6A 2018-09-28 2018-09-28 Memory management method and device Active CN109522113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811121410.6A CN109522113B (en) 2018-09-28 2018-09-28 Memory management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811121410.6A CN109522113B (en) 2018-09-28 2018-09-28 Memory management method and device

Publications (2)

Publication Number Publication Date
CN109522113A CN109522113A (en) 2019-03-26
CN109522113B true CN109522113B (en) 2020-12-18

Family

ID=65769968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811121410.6A Active CN109522113B (en) 2018-09-28 2018-09-28 Memory management method and device

Country Status (1)

Country Link
CN (1) CN109522113B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542055A (en) * 2021-06-15 2021-10-22 新华三信息安全技术有限公司 Message processing method, device, equipment and machine readable storage medium
CN113849309B (en) * 2021-09-26 2022-09-16 北京元年科技股份有限公司 Memory allocation method and device for business object

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7231504B2 (en) * 2004-05-13 2007-06-12 International Business Machines Corporation Dynamic memory management of unallocated memory in a logical partitioned data processing system
CN100407655C (en) * 2005-11-15 2008-07-30 华为技术有限公司 Method of dynamic allocation of network communication apparatus system resource
CN100487660C (en) * 2007-05-28 2009-05-13 中兴通讯股份有限公司 Multithreading processor dynamic EMS memory management system and method
CN101799772B (en) * 2010-02-26 2014-06-11 上海华为技术有限公司 Kernel dispatching method, kernel backup method and multi-core processor
CN102508717B (en) * 2011-11-17 2013-07-10 大唐移动通信设备有限公司 Memory scheduling method and memory scheduling device for multi-core processor
CN105701019A (en) * 2014-11-25 2016-06-22 阿里巴巴集团控股有限公司 Memory management method and memory management device
CN106557427B (en) * 2015-09-25 2021-11-12 中兴通讯股份有限公司 Memory management method and device for shared memory database
CN106844041B (en) * 2016-12-29 2020-06-16 华为技术有限公司 Memory management method and memory management system

Also Published As

Publication number Publication date
CN109522113A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN112269641B (en) Scheduling method, scheduling device, electronic equipment and storage medium
CN105468450A (en) Task scheduling method and system
CN109522113B (en) Memory management method and device
CN109710416B (en) Resource scheduling method and device
KR101997816B1 (en) System and method for using a sequencer in a concurrent priority queue
CN115220921B (en) Resource scheduling method, image processor, image pickup device, and medium
CN108255608B (en) Management method of memory pool
CN109062681A (en) A kind of execution method, system, device and the storage medium of periodic cycle task
CN103841562A (en) Time slot resource occupation processing method and time slot resource occupation processing device
CN115617497A (en) Thread processing method, scheduling component, monitoring component, server and storage medium
CN114385370A (en) Memory allocation method, system, device and medium
JP3683082B2 (en) Call processing equipment
CN111835797A (en) Data processing method, device and equipment
CN112260962B (en) Bandwidth control method and device
CN112073532A (en) Resource allocation method and device
CN110543357B (en) Method, related device and system for managing application program object
CN110489232A (en) Resource isolation method, apparatus, electronic equipment and storage medium
CN113986458A (en) Container set scheduling method, device, equipment and storage medium
CN114706663A (en) Computing resource scheduling method, medium and computing device
CN116450328A (en) Memory allocation method, memory allocation device, computer equipment and storage medium
CN113886082A (en) Request processing method and device, computing equipment and medium
CN105612727B (en) A kind of dispositions method and device based on cloud environment system
WO2017070869A1 (en) Memory configuration method, apparatus and system
CN111857992B (en) Method and device for allocating linear resources in Radosgw module
CN117519988B (en) RAID-based memory pool dynamic allocation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 610041 nine Xing Xing Road 16, hi tech Zone, Sichuan, Chengdu

Patentee after: MAIPU COMMUNICATION TECHNOLOGY Co.,Ltd.

Address before: 610041 15-24 floor, 1 1 Tianfu street, Chengdu high tech Zone, Sichuan

Patentee before: MAIPU COMMUNICATION TECHNOLOGY Co.,Ltd.