WO2016011811A1 - Memory management method, apparatus and storage medium - Google Patents

Memory management method, apparatus and storage medium

Info

Publication number
WO2016011811A1
Authority
WO
WIPO (PCT)
Prior art keywords
buffer
linked list
memory
buffers
node
Prior art date
Application number
PCT/CN2015/073575
Other languages
English (en)
French (fr)
Inventor
张晓艳
沈寒
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2016011811A1 publication Critical patent/WO2016011811A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication

Definitions

  • the present invention relates to the field of memory management technologies, and in particular, to a method, an apparatus, and a storage medium for efficient memory management on a terminal.
  • memory management plays an important role in software development and application, and has an important impact on the performance of software.
  • with the rapid development of the software industry and the emergence of all kinds of terminals, users place more and more demands on terminals, and their requirements keep rising.
  • although the memory capacity of terminals keeps growing, current memory management technology still cannot fully meet the needs of software development; memory therefore remains a precious and scarce resource, and how to manage it effectively has long been a hot research topic.
  • because the operating system must account for underlying hardware management, memory limits, memory fragmentation, multiple programs running simultaneously, and multithreaded environments, existing memory allocation and recovery operations are very complicated, and the time cost of memory management operations has become a bottleneck for software development and memory management technology.
  • at present, the existing memory management methods on all kinds of terminals are basically implemented through dynamic memory allocation, requesting memory on demand according to the memory size the system needs; because the memory sizes required by external systems are inconsistent, memory fragmentation is easily produced when memory is requested dynamically, which invisibly lowers memory usage efficiency.
  • the embodiments of the present invention are expected to provide a memory management method, device, and storage medium that can, to a certain extent, improve the efficiency of memory allocation and reclamation, reduce the generation of memory fragmentation, improve the efficiency of memory management, and thereby increase the software running speed and data transfer rate on the terminal.
  • the embodiment of the invention provides a memory management method, and the method includes:
  • allocating physical memory having a fixed start address and a fixed end address; dividing the physical memory into multiple fixed-size buffers and linking the buffers into a singly linked list; allocating a buffer by deleting a singly-linked-list node; and reclaiming a buffer by inserting a singly-linked-list node.
  • allocating the buffer by deleting a singly-linked-list node includes: allocating the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, pointing the head pointer of the singly linked list to the next buffer, and decrementing the number of buffers by 1.
  • reclaiming the buffer by inserting a singly-linked-list node includes: inserting the first address of the buffer released by the thread or function module at the tail of the singly linked list, pointing the tail pointer of the singly linked list to the inserted buffer, and incrementing the number of buffers by 1.
  • the method further includes: setting, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free.
  • the method further includes: setting, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether the current buffer has an out-of-bounds access.
  • the embodiment of the present invention further provides a memory management device, where the device includes: a memory allocation module, a buffer processing module, and an allocation recovery module, where
  • the memory allocation module is configured to allocate physical memory having a fixed start address and an end address
  • the buffer processing module is configured to divide the physical memory into a plurality of buffer buffers of a fixed size, and connect the buffers into a single linked list structure;
  • the allocation and recovery module is configured to allocate the buffer by deleting a single linked list node, and recover the buffer by inserting a single linked list node.
  • the allocation and recovery module allocating the buffer by deleting a singly-linked-list node includes: the allocation and recovery module allocates the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, points the head pointer of the singly linked list to the next buffer, and decrements the number of buffers by 1;
  • the allocation and recovery module reclaiming the buffer by inserting a singly-linked-list node includes: the allocation and recovery module inserts the first address of the buffer released by the thread or function module at the tail of the singly linked list, points the tail pointer of the singly linked list to the inserted buffer, and increments the number of buffers by 1.
  • the buffer processing module is further configured to: in the data part of each buffer node in the single linked list, set an idle flag for marking whether the current buffer is free.
  • the buffer processing module is further configured to: in the data part of each buffer node in the singly linked list, set an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for executing a memory management method of an embodiment of the present invention.
  • with the memory management method, device, and storage medium provided by the embodiments of the present invention, physical memory having a fixed start address and a fixed end address is allocated first; the physical memory is then divided into multiple fixed-size buffers, and the buffers are linked into a singly linked list; finally, the buffers are allocated and reclaimed by deleting and inserting singly-linked-list nodes. In this way, the efficiency of memory allocation and reclamation can be improved to a certain extent, and the memory fragmentation produced during memory allocation can be effectively reduced; moreover, the singly linked list structure adopted in the embodiments of the present invention is simple and avoids the free-memory traversal lookups used in the prior art, greatly increasing memory processing speed and the utilization of memory resources, effectively improving memory management efficiency, and thereby greatly increasing the software running speed and data transmission rate on the terminal.
  • FIG. 1 is a schematic flowchart of a memory management method according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a fixed-size physical memory allocated according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of an uplink memory and a downlink memory according to an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of a buffer single-link list after serial connection according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a buffer allocation method according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a buffer recovery method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a memory management apparatus according to an embodiment of the present invention.
  • physical memory having a fixed start address and a fixed end address is first allocated; then the physical memory is divided into multiple fixed-size buffers, and the buffers are linked into a singly linked list; finally, the buffers are allocated and reclaimed by deleting and inserting singly-linked-list nodes.
  • the single linked list includes a head pointer, a tail pointer, a buffer counter, and one or more nodes.
  • Each node of the single linked list structure corresponds to a buffer, and each node includes a data part and a pointer part, and a buffer counter of the single linked list records the total number of the buffers.
  • Allocating the buffer by deleting a singly-linked-list node includes: the buffer pointed to by the head pointer of the singly linked list is allocated to the thread or function module applying for memory, the head pointer of the singly linked list is pointed to the next buffer, and the number of buffers is decreased by 1.
  • Reclaiming the buffer by inserting a singly-linked-list node includes: the first address of the buffer released by the thread or function module is inserted at the tail of the singly linked list, the tail pointer of the singly linked list is pointed to the inserted buffer, and the number of buffers is increased by 1.
  • the method further includes: setting, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free; and setting, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
  • the flow of the memory management method in the embodiment of the present invention includes the following steps:
  • Step 101: Allocate physical memory having a fixed start address and an end address;
  • a fixed-size block of physical memory is planned and allocated in advance from DDR_RAM, and its start address and end address are determined when the block is carved out; the size of the physical memory can be determined according to the size of the IP packets to be transmitted, the number of IP packets transmitted simultaneously, and the size of the IP packet management structures.
  • FIG. 2 is a schematic structural diagram of a fixed-size physical memory allocated according to an embodiment of the present invention.
  • the starting address of the physical memory is 0x22A00000, and the ending address is 0x23400000.
  • Step 102: Divide the physical memory into a plurality of fixed-size buffers, and link the buffers into a singly linked list structure;
  • the physical memory is divided into a plurality of fixed-size buffers, and the fixed-size buffers are concatenated using a singly linked list structure.
  • Each node of the singly linked list corresponds to one buffer; after initialization, the head pointer of the singly linked list points to the base address of the allocated fixed-size physical memory, and the buffer counter of the singly linked list records the total number of buffers.
  • for uplink and downlink data transmission of the terminal, the physical memory is divided at initialization into two parts, uplink memory and downlink memory, as shown in FIG. 3; the first address of the uplink memory is the starting address of the pre-allocated physical memory, and the first address of the downlink memory is obtained by offsetting N bytes from the first address of the uplink memory plus the memory occupied by all the uplink buffers, for example by offsetting 1024 bytes; this slight offset wastes no memory while preventing the uplink and downlink memory addresses from overlapping.
  • the uplink and downlink memory are each divided into fixed-size buffers.
  • the number of uplink and downlink buffers can each be designed as 1024, and each set is linked into its own list.
  • the structure of the linked buffer list is shown in FIG. 4 and includes a head pointer, a tail pointer, a buffer counter, and one or more nodes.
  • Each node of the single linked list structure corresponds to a buffer, and each node includes a data part and a pointer part, and a buffer counter of the single linked list records the total number of the buffers.
  • Step 103: Allocate the buffer by deleting a singly-linked-list node, and reclaim the buffer by inserting a singly-linked-list node;
  • allocating the buffer by deleting a singly-linked-list node includes: allocating the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, pointing the head pointer of the singly linked list to the next buffer, and decreasing the number of buffers by 1;
  • FIG. 5 is a schematic diagram of a buffer allocation method according to an embodiment of the present invention.
  • when uplink data is sent, the buffer[0] pointed to by the head pointer of the uplink buffer list is allocated to the function module or thread applying for memory, then the head pointer is moved backward to the next buffer[1] and the number of buffers is decreased by 1, until the head and tail pointers of the list are the same, after which no more uplink buffers are allocated.
  • when downlink data is received, the buffer[0] pointed to by the head pointer of the downlink buffer list is allocated to the function module or thread applying for memory, then the head pointer is moved backward to the next buffer[1] and the number of buffers is decreased by 1, until the head and tail pointers of the list are the same, after which no more buffers are allocated.
  • reclaiming the buffer by inserting a singly-linked-list node includes: inserting the first address of the buffer released by the thread or function module at the tail of the singly linked list, pointing the tail pointer of the singly linked list to the inserted buffer, and increasing the number of buffers by 1.
  • FIG. 6 is a schematic diagram of a buffer recovery method according to an embodiment of the present invention.
  • after uplink data transmission ends, the first address of the uplink buffer[0] is inserted at the tail of the uplink buffer list, the next-node pointer of that buffer[0] is set to null, the tail pointer of the singly linked list is pointed to the inserted buffer[0], and the number of buffers is increased by 1.
  • after downlink data reception is complete, the first address of the downlink buffer[0] is inserted at the tail of the downlink buffer list, the next-node pointer of that buffer[0] is set to null, the tail pointer of the singly linked list is pointed to the inserted buffer[0], and the number of buffers is increased by 1.
  • before the uplink and downlink buffers are allocated, it is first determined whether the head and tail pointers are equal or whether the number of buffers has dropped to 0; if the head and tail pointers are equal or the number of buffers has dropped to 0, the buffers are exhausted and no more memory can be allocated; since the pre-allocated physical memory block is large enough and the number of buffers is sufficient, an exhausted buffer pool indicates a problem in the system itself.
  • to prevent a buffer from being released repeatedly, a Free flag is set in the data part of each node buffer in the singly linked list to record whether the corresponding buffer is free.
  • the buffer is marked Free at initialization, set to NotFree after allocation, and set back to Free when reclaimed; in this way, repeated release of the buffer is prevented.
  • an OverFlud flag indicating whether an out-of-bounds access has occurred is set in the data part of each node buffer; at initialization the flag is set to "NoFlud", meaning no out-of-bounds access has occurred, and if the flag is still "NoFlud" when the buffer is reclaimed, no out-of-bounds access exception has occurred.
  • an out-of-bounds access refers to an address overlap between two buffers.
  • since a buffer may be used by different function modules or threads, the buffer must be protected with a mutex during allocation and reclamation;
  • Mutexes ensure that any thread accessing a block of memory has exclusive access to that memory, thereby ensuring data integrity.
  • when a thread or function module applies for a buffer, it acquires the mutex; during this period, other threads or function modules cannot operate on the linked list. After the thread or function module finishes its request, the mutex is released, and other modules can then operate on the linked list to apply for and release buffers. In this way, different function modules or threads cannot operate on the same buffer linked list at the same time.
  • the embodiment of the present invention further provides a memory management device. As shown in FIG. 7, the device includes a memory allocation module 71, a buffer processing module 72, and an allocation recovery module 73.
  • the memory allocation module 71 is configured to allocate physical memory having a fixed start address and an end address;
  • the memory allocation module 71 plans and allocates a fixed-size block of physical memory in advance from DDR_RAM, where the starting address and ending address of the physical memory are determined when the block is carved out; the size of the fixed-size physical memory may be determined according to the size of the IP packets to be transmitted, the number of IP packets transmitted simultaneously, and the size of the IP packet management structures.
  • the buffer processing module 72 is configured to divide the physical memory into a plurality of fixed-size buffers, and connect the buffers into a single-linked list structure;
  • the buffer processing module 72 divides the physical memory into a plurality of fixed-size buffers, and uses a singly linked list structure to concatenate the fixed-size buffers.
  • Each node of the single-linked list structure corresponds to a buffer.
  • the head pointer of the single-linked list points to the base address of the allocated fixed-size physical memory, and the buffer counter of the single-linked list records the total number of the buffers.
  • the buffer processing module 72 divides the physical memory into two parts, the uplink memory and the downlink memory; the first address of the uplink memory is the starting address of the pre-allocated physical memory, and the first address of the downlink memory is obtained by offsetting N bytes from the base address of the uplink memory plus the memory occupied by all the uplink buffers, for example by offsetting 1024 bytes; this slight offset wastes no memory while preventing the uplink and downlink memory addresses from overlapping.
  • the uplink and downlink memory are each divided into fixed-size buffers; for example, the number of uplink and downlink buffers can each be designed as 1024, and each set is linked into its own list.
  • the allocation and recovery module 73 is configured to allocate the buffer by deleting a singly-linked-list node, and to reclaim the buffer by inserting a singly-linked-list node.
  • the allocation and recovery module 73 allocating the buffer by deleting a singly-linked-list node includes: the allocation and recovery module 73 allocates the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, points the head pointer of the singly linked list to the next buffer, and decreases the number of buffers by 1;
  • when uplink data is sent, the allocation and recovery module 73 allocates the buffer pointed to by the head pointer of the uplink buffer list to the function module or thread applying for memory, then moves the head pointer backward to the next buffer and decreases the number of buffers by 1, until the head and tail pointers of the list are the same, after which no more uplink buffers are allocated.
  • when downlink data is received, the allocation and recovery module 73 allocates the buffer pointed to by the head pointer of the downlink buffer list to the function module or thread applying for memory, then moves the head pointer backward to the next buffer and decreases the number of buffers by 1, until the head and tail pointers of the list are the same, after which no more buffers are allocated.
  • the allocation and recovery module 73 reclaiming the buffer by inserting a singly-linked-list node includes: the allocation and recovery module 73 inserts the first address of the buffer released by the thread or function module at the tail of the singly linked list, points the tail pointer of the singly linked list to the inserted buffer, and increases the number of buffers by 1.
  • after uplink data transmission ends, the allocation and recovery module 73 inserts the first address of the uplink buffer at the tail of the uplink buffer list, sets that buffer's next-node pointer to null, points the tail pointer of the singly linked list to the inserted buffer, and increases the number of buffers by 1.
  • after downlink data reception is complete, the allocation and recovery module 73 inserts the first address of the downlink buffer at the tail of the downlink buffer list, sets that buffer's next-node pointer to null, points the tail pointer of the singly linked list to the inserted buffer, and increases the number of buffers by 1.
  • before allocating uplink and downlink buffers, the allocation and recovery module 73 is further configured to determine whether the head and tail pointers are equal or whether the number of buffers has dropped to 0; if the head and tail pointers are equal or the number of buffers has dropped to 0, the buffers are exhausted and no more memory can be allocated; since the pre-allocated physical memory block is large enough and the number of buffers is sufficient, an exhausted buffer pool indicates a problem in the system itself.
  • the buffer processing module 72 is further configured to: in the data portion of each buffer node in the singly linked list, set an idle flag for marking whether the current buffer is free.
  • the buffer processing module 72 sets a Free flag in the data part of each node buffer in the singly linked list to record whether the corresponding buffer is free.
  • the buffer is marked Free at initialization, set to NotFree after allocation, and set back to Free when reclaimed; in this way, repeated release of the buffer is prevented.
  • the buffer processing module 72 is further configured to set, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
  • the buffer processing module 72 sets an OverFlud flag in the data part of each node buffer in the singly linked list, which is used to determine whether an out-of-bounds access has occurred.
  • at initialization the flag is set to "NoFlud", meaning no out-of-bounds access has occurred; if the flag is still "NoFlud" when the buffer is reclaimed, no out-of-bounds access exception has occurred.
  • an out-of-bounds access refers to an address overlap between two buffers.
  • since a buffer may be used by different function modules or threads, the allocation and recovery module 73 must protect the buffer with a mutex during allocation and reclamation;
  • by adding a mutex, the allocation and recovery module 73 can ensure that an application or thread has exclusive access to a single resource, protecting a memory block accessed by multiple threads; a mutex guarantees that any thread accessing the memory block has exclusive access to it, thereby ensuring data integrity.
  • when a thread or function module applies for a buffer, it acquires the mutex; during this period, other threads or function modules cannot operate on the linked list. After the thread or function module finishes its request, the mutex is released, and other modules can then operate on the linked list to apply for and release buffers. In this way, different function modules or threads cannot operate on the same buffer linked list at the same time.
  • the memory allocation module, buffer processing module, and allocation and recovery module in the memory management device proposed in the embodiments of the present invention may be implemented by a processor, or by dedicated logic circuits; the processor may be a processor on a mobile terminal or a server.
  • in practical applications, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • if the above memory management method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read only memory (ROM), a magnetic disk, or an optical disk.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to execute the foregoing memory management method of the embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Information Transfer Systems (AREA)

Abstract

A memory management method, a memory management apparatus, and a storage medium. The method includes: allocating physical memory having a fixed start address and a fixed end address (101); dividing the physical memory into multiple fixed-size buffers and linking the buffers into a singly linked list (102); and allocating the buffers through a deletion operation on singly-linked-list nodes and reclaiming the buffers through an insertion operation on singly-linked-list nodes (103).

Description

Memory management method, apparatus and storage medium
Technical Field
The present invention relates to the field of memory management technologies, and in particular to a method, an apparatus, and a storage medium for efficient memory management on a terminal.
Background
At present, memory management plays an important role in software development and application and has a significant impact on software performance. With the rapid development of the software industry and the emergence of all kinds of terminals, users place more and more demands on terminals, and their requirements keep rising. Although the memory capacity of terminals keeps growing as software technology advances, current memory management technology still cannot fully meet the needs of software development; memory therefore remains a precious and scarce resource, and how to manage it effectively has long been a hot research topic.
Because the operating system must account for underlying hardware management, memory limits, memory fragmentation, multiple programs running simultaneously, multithreaded environments, and similar conditions, existing memory allocation and recovery operations are very complicated, and the time cost of memory management operations has become a bottleneck for software development and memory management technology. At present, the existing memory management methods on all kinds of terminals are basically implemented through dynamic memory allocation, requesting memory on demand according to the memory size the system needs. Because the memory sizes required by external systems are inconsistent, memory fragmentation is easily produced when memory is requested dynamically, which invisibly lowers memory usage efficiency.
In addition, some traditional methods require a large number of memory-block structure lookups during both allocation and reclamation before the operations can complete, which makes memory management inefficient. In high-performance software development and application fields, and especially on rapidly evolving terminals, memory management is a key basic function, and its performance directly affects the operation of the entire software system.
It can be seen that in existing memory management methods, low memory management efficiency and the large amount of memory fragmentation produced during memory allocation and reclamation are problems that urgently need to be solved.
Summary of the Invention
In view of this, embodiments of the present invention are expected to provide a memory management method, apparatus, and storage medium that can, to a certain extent, improve the efficiency of memory allocation and reclamation, reduce the generation of memory fragmentation, improve the efficiency of memory management, and thereby increase the software running speed and data transmission rate on a terminal.
To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides a memory management method, the method including:
allocating physical memory having a fixed start address and a fixed end address;
dividing the physical memory into multiple fixed-size buffers, and linking the buffers into a singly linked list; and
allocating the buffers through a deletion operation on singly-linked-list nodes, and reclaiming the buffers through an insertion operation on singly-linked-list nodes.
In the above solution, allocating the buffer through a deletion operation on a singly-linked-list node includes: allocating the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, pointing the head pointer of the singly linked list to the next buffer, and decrementing the number of buffers by 1.
In the above solution, reclaiming the buffer through an insertion operation on a singly-linked-list node includes: inserting the first address of the buffer released by the thread or function module at the tail of the singly linked list, pointing the tail pointer of the singly linked list to the inserted buffer, and incrementing the number of buffers by 1.
In the above solution, the method further includes: setting, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free.
In the above solution, the method further includes: setting, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
An embodiment of the present invention further provides a memory management apparatus, the apparatus including a memory allocation module, a buffer processing module, and an allocation and recovery module, wherein:
the memory allocation module is configured to allocate physical memory having a fixed start address and a fixed end address;
the buffer processing module is configured to divide the physical memory into multiple fixed-size buffers and link the buffers into a singly linked list; and
the allocation and recovery module is configured to allocate the buffers through a deletion operation on singly-linked-list nodes and to reclaim the buffers through an insertion operation on singly-linked-list nodes.
In the above solution, the allocation and recovery module allocating the buffer through a deletion operation on a singly-linked-list node includes: the allocation and recovery module allocates the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, points the head pointer of the singly linked list to the next buffer, and decrements the number of buffers by 1.
In the above solution, the allocation and recovery module reclaiming the buffer through an insertion operation on a singly-linked-list node includes: the allocation and recovery module inserts the first address of the buffer released by the thread or function module at the tail of the singly linked list, points the tail pointer of the singly linked list to the inserted buffer, and increments the number of buffers by 1.
In the above solution, the buffer processing module is further configured to set, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free.
In the above solution, the buffer processing module is further configured to set, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
An embodiment of the present invention further provides a computer storage medium storing a computer program, the computer program being used to execute the memory management method of the embodiments of the present invention.
With the memory management method, apparatus, and storage medium provided by the embodiments of the present invention, physical memory having a fixed start address and a fixed end address is allocated first; the physical memory is then divided into multiple fixed-size buffers, and the buffers are linked into a singly linked list; finally, the buffers are allocated and reclaimed through deletion and insertion operations on the singly-linked-list nodes. In this way, the efficiency of memory allocation and reclamation can be improved to a certain extent, and the memory fragmentation produced during memory allocation can be effectively reduced; moreover, the singly linked list structure adopted in the embodiments of the present invention is simple and avoids the free-memory traversal lookups used in the prior art, greatly increasing memory processing speed and the utilization of memory resources, effectively improving memory management efficiency, and thereby greatly increasing the software running speed and data transmission rate on the terminal.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a memory management method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of fixed-size physical memory allocated according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of uplink memory and downlink memory according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a linked buffer list according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a buffer allocation method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a buffer reclamation method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a memory management apparatus according to an embodiment of the present invention.
Detailed Description
In the development of multi-mode communication system protocol software, the transmission of uplink and downlink data and the transfer of data between different radio access technologies both require memory buffers to cache or temporarily store the uplink and downlink data.
To improve the efficiency of memory allocation and reclamation, in embodiments of the present invention, physical memory having a fixed start address and a fixed end address is allocated first; the physical memory is then divided into multiple fixed-size buffers, and the buffers are linked into a singly linked list; finally, the buffers are allocated and reclaimed through deletion and insertion operations on the singly-linked-list nodes.
The singly linked list in the embodiments of the present invention includes a head pointer, a tail pointer, a buffer counter, and one or more nodes. Each node of the singly linked list corresponds to one buffer, each node includes a data part and a pointer part, and the buffer counter of the singly linked list records the total number of buffers.
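For illustration only (this sketch is not part of the patent text), one possible C layout for such a node and list is shown below; the type and field names (buffer_node_t, buffer_list_t, BUFFER_SIZE) and the flag encodings are assumptions introduced here, mirroring the data part / pointer part, head and tail pointers, and buffer counter just described.

```c
#include <stddef.h>
#include <stdint.h>

#define BUFFER_SIZE 1536u                 /* assumed fixed payload size per buffer */

typedef enum { BUF_FREE, BUF_NOT_FREE } free_flag_t;      /* "Free" / "NotFree"    */
typedef enum { BUF_NO_FLUD, BUF_FLUD }  overflow_flag_t;  /* "NoFlud" / "OverFlud" */

/* One node of the singly linked list: a data part (flags + payload)
 * and a pointer part (the next-node pointer). */
typedef struct buffer_node {
    free_flag_t         free_flag;        /* marks whether this buffer is free           */
    overflow_flag_t     overflow_flag;    /* marks whether out-of-bounds access occurred */
    uint8_t             payload[BUFFER_SIZE];
    struct buffer_node *next;             /* pointer part */
} buffer_node_t;

/* The list itself: head pointer, tail pointer, and a buffer counter. */
typedef struct {
    buffer_node_t *head;
    buffer_node_t *tail;
    size_t         count;                 /* total number of buffers currently in the list */
} buffer_list_t;
```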
Allocating a buffer through a deletion operation on a singly-linked-list node includes: allocating the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, pointing the head pointer of the singly linked list to the next buffer, and decrementing the number of buffers by 1.
Reclaiming a buffer through an insertion operation on a singly-linked-list node includes: inserting the first address of the buffer released by the thread or function module at the tail of the singly linked list, pointing the tail pointer of the singly linked list to the inserted buffer, and incrementing the number of buffers by 1.
The method further includes: setting, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free; and setting, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
The implementation of the technical solutions of the embodiments of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. As shown in FIG. 1, the flow of the memory management method in an embodiment of the present invention includes the following steps:
Step 101: allocate physical memory having a fixed start address and a fixed end address;
A fixed-size block of physical memory is planned and allocated in advance from DDR_RAM (Double Data Rate Random Access Memory), and the start address and end address of the physical memory are determined when the block is carved out; the size of the fixed-size physical memory can be determined according to the size of the IP packets to be transmitted, the number of IP packets transmitted simultaneously, and the size of the IP packet management structures.
For example, in an embodiment of the present invention, 10 MB of fixed-size physical memory may be allocated. FIG. 2 is a schematic structural diagram of the fixed-size physical memory allocated in this embodiment: the starting address of the physical memory is 0x22A00000 and the ending address is 0x23400000.
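As a quick sanity check of those example figures (a sketch only, not part of the patent), the region size follows directly from the two addresses, and a pool size can be estimated from assumed packet parameters; the macro names and the sizing numbers below are illustrative assumptions.

```c
#include <assert.h>
#include <stdio.h>

#define POOL_BASE 0x22A00000u
#define POOL_END  0x23400000u

int main(void) {
    /* 0x23400000 - 0x22A00000 = 0x00A00000 bytes = 10 MiB. */
    unsigned long pool_bytes = POOL_END - POOL_BASE;
    assert(pool_bytes == 10ul * 1024ul * 1024ul);

    /* Hypothetical sizing rule: packets in flight * (payload + management structure). */
    unsigned long max_packets = 2048, packet_size = 1500, mgmt_size = 64;
    unsigned long needed = max_packets * (packet_size + mgmt_size);
    printf("pool = %lu bytes, estimated need = %lu bytes\n", pool_bytes, needed);
    return 0;
}
```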
Step 102: divide the physical memory into multiple fixed-size buffers, and link the buffers into a singly linked list;
At initialization, the physical memory is divided into multiple fixed-size buffers, and these fixed-size buffers are linked together using a singly linked list structure.
Each node of the singly linked list corresponds to one buffer; after initialization, the head pointer of the singly linked list points to the base address of the allocated fixed-size physical memory, and the buffer counter of the singly linked list records the total number of buffers.
For uplink and downlink data transmission of the terminal, the physical memory is divided at initialization into two parts, uplink memory and downlink memory, as shown in FIG. 3; the first address of the uplink memory is the starting address of the pre-allocated physical memory, and the first address of the downlink memory is obtained by offsetting N bytes from the first address of the uplink memory plus the memory occupied by all the uplink buffers, for example by offsetting 1024 bytes; this slight offset wastes no memory while preventing the uplink and downlink memory addresses from overlapping. The uplink and downlink memory are each divided into fixed-size buffers; for example, the number of uplink and downlink buffers can each be designed as 1024, and each set is linked into its own list. The structure of the linked buffer list is shown in FIG. 4 and includes a head pointer, a tail pointer, a buffer counter, and one or more nodes. Each node of the singly linked list corresponds to one buffer, each node includes a data part and a pointer part, and the buffer counter of the singly linked list records the total number of buffers.
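A minimal initialization sketch along these lines follows, reusing the illustrative buffer_node_t/buffer_list_t types and POOL_BASE from the earlier sketches; the 1024-buffer count and the 1024-byte guard offset come from the example above, while the helper names (pool_init, pools_init) and the exact layout are assumptions.

```c
#define NUM_BUFFERS  1024u                /* per direction, as in the example          */
#define GUARD_OFFSET 1024u                /* small offset between uplink and downlink  */

/* Carve a contiguous region into NUM_BUFFERS nodes and chain them into one list. */
static void pool_init(buffer_list_t *list, uintptr_t region_base) {
    buffer_node_t *nodes = (buffer_node_t *)region_base;
    for (size_t i = 0; i < NUM_BUFFERS; i++) {
        nodes[i].free_flag     = BUF_FREE;
        nodes[i].overflow_flag = BUF_NO_FLUD;
        nodes[i].next          = (i + 1 < NUM_BUFFERS) ? &nodes[i + 1] : NULL;
    }
    list->head  = &nodes[0];              /* head points at the base of the region  */
    list->tail  = &nodes[NUM_BUFFERS - 1];
    list->count = NUM_BUFFERS;            /* buffer counter records the total count */
}

/* The uplink pool starts at the pre-allocated base; the downlink pool starts after
 * all uplink buffers plus a small guard offset, so the two regions never overlap. */
static void pools_init(buffer_list_t *uplink, buffer_list_t *downlink) {
    pool_init(uplink,   (uintptr_t)POOL_BASE);
    pool_init(downlink, (uintptr_t)POOL_BASE + NUM_BUFFERS * sizeof(buffer_node_t) + GUARD_OFFSET);
}
```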
Step 103: allocate the buffers through a deletion operation on singly-linked-list nodes, and reclaim the buffers through an insertion operation on singly-linked-list nodes;
Allocating a buffer through a deletion operation on a singly-linked-list node includes: allocating the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, pointing the head pointer of the singly linked list to the next buffer, and decreasing the number of buffers by 1;
FIG. 5 is a schematic diagram of the buffer allocation method in an embodiment of the present invention. When uplink data is sent, the buffer[0] pointed to by the head pointer of the uplink buffer list is allocated to the function module or thread applying for memory, then the head pointer is moved backward to the next buffer[1] and the number of buffers is decreased by 1, until the head and tail pointers of the list are the same, after which no more uplink buffers are allocated.
When downlink data is received, the buffer[0] pointed to by the head pointer of the downlink buffer list is allocated to the function module or thread applying for memory, then the head pointer is moved backward to the next buffer[1] and the number of buffers is decreased by 1, until the head and tail pointers of the list are the same, after which no more buffers are allocated.
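A sketch of that head-removal allocation, again using the illustrative types introduced above (pool_alloc is an assumed helper name, not one taken from the patent):

```c
/* Allocate by deleting the node at the head of the singly linked list.
 * Returns NULL when the pool is exhausted (head meets tail or the counter is 0). */
static buffer_node_t *pool_alloc(buffer_list_t *list) {
    if (list->count == 0 || list->head == list->tail)
        return NULL;                      /* exhausted: no further allocation allowed  */
    buffer_node_t *buf = list->head;
    list->head = buf->next;               /* head pointer moves to the next buffer     */
    list->count--;                        /* number of buffers decreases by 1          */
    buf->next      = NULL;
    buf->free_flag = BUF_NOT_FREE;        /* mark the buffer as allocated ("NotFree")  */
    return buf;
}
```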
Reclaiming a buffer through an insertion operation on a singly-linked-list node includes: inserting the first address of the buffer released by the thread or function module at the tail of the singly linked list, pointing the tail pointer of the singly linked list to the inserted buffer, and increasing the number of buffers by 1.
FIG. 6 is a schematic diagram of the buffer reclamation method in an embodiment of the present invention. After uplink data transmission ends, that is, after the function module or thread that applied for memory has finished using an uplink buffer[0], the first address of the uplink buffer[0] is inserted at the tail of the uplink buffer list, the next-node pointer of that buffer[0] is set to null, the tail pointer of the singly linked list is pointed to the inserted buffer[0], and the number of buffers is increased by 1.
After downlink data reception is complete, that is, after the function module or thread that applied for memory has finished using a downlink buffer[0], the first address of the downlink buffer[0] is inserted at the tail of the downlink buffer list, the next-node pointer of that buffer[0] is set to null, the tail pointer of the singly linked list is pointed to the inserted buffer[0], and the number of buffers is increased by 1.
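A matching tail-insertion sketch (pool_free is likewise an assumed name); it also shows where the Free and OverFlud checks described in the following paragraphs would naturally sit:

```c
#include <stdbool.h>

/* Reclaim by inserting the released buffer at the tail of the singly linked list. */
static bool pool_free(buffer_list_t *list, buffer_node_t *buf) {
    if (buf->free_flag == BUF_FREE)
        return false;                     /* already free: refuse the repeated release  */
    if (buf->overflow_flag != BUF_NO_FLUD)
        return false;                     /* an out-of-bounds access was detected       */
    buf->next        = NULL;              /* the released node becomes the new tail     */
    buf->free_flag   = BUF_FREE;          /* mark the buffer free again on reclamation  */
    list->tail->next = buf;
    list->tail       = buf;               /* tail pointer points to the inserted buffer */
    list->count++;                        /* number of buffers increases by 1           */
    return true;
}
```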
In embodiments of the present invention, before uplink and downlink buffers are allocated, it is first determined whether the head and tail pointers are equal or whether the number of buffers has dropped to 0; if the head and tail pointers are equal or the number of buffers has dropped to 0, the buffers are exhausted and no more memory can be allocated. Since the pre-allocated physical memory block is large enough and the number of buffers is sufficient, an exhausted buffer pool indicates a problem in the system itself.
In embodiments of the present invention, to prevent a buffer from being released repeatedly, an idle flag for marking whether the current buffer is free also needs to be set in the data part of each buffer node in the singly linked list.
A Free flag is set in the data part of each node buffer in the singly linked list to record whether the corresponding buffer is free; the buffer is marked Free at initialization, set to NotFree after allocation, and set back to Free when reclaimed. In this way, repeated release of the buffer can be prevented.
In embodiments of the present invention, to prevent out-of-bounds access to a buffer, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer also needs to be set in the data part of each buffer node in the singly linked list.
An OverFlud flag indicating whether an out-of-bounds access has occurred is set in the data part of each node buffer in the singly linked list and is used to determine whether an out-of-bounds access has occurred; at initialization the flag is set to "NoFlud", meaning no out-of-bounds access has occurred, and if the flag is still "NoFlud" when the buffer is reclaimed, no out-of-bounds access exception has occurred.
Here, an out-of-bounds access refers to an address overlap between two buffers.
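The patent does not specify how the OverFlud mark is driven; one common way to detect the overrun it describes is a canary word written into the last bytes of each payload when the buffer is handed out and checked when it is reclaimed. The sketch below is such an assumed implementation; the canary value and helper names are not from the patent.

```c
#include <string.h>

#define CANARY 0xDEADBEEFu

/* Arm a guard word in the last four bytes of the payload when the buffer is handed out. */
static void guard_arm(buffer_node_t *buf) {
    uint32_t c = CANARY;
    memcpy(buf->payload + BUFFER_SIZE - sizeof c, &c, sizeof c);
}

/* Check the guard word on reclamation and drive the OverFlud mark accordingly. */
static void guard_check(buffer_node_t *buf) {
    uint32_t c;
    memcpy(&c, buf->payload + BUFFER_SIZE - sizeof c, sizeof c);
    if (c != CANARY)
        buf->overflow_flag = BUF_FLUD;    /* an out-of-bounds access has occurred */
}
```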
In embodiments of the present invention, since a buffer may be used by different function modules or threads, the buffer must be protected with a mutex during allocation and reclamation;
Here, by adding a mutex, an application or thread can be guaranteed exclusive access to a single resource, protecting a memory block accessed by multiple threads. A mutex guarantees that any thread accessing the memory block has exclusive access to it, thereby ensuring data integrity. When a thread or function module applies for a buffer, it acquires the mutex; during this period, other threads or function modules cannot operate on the linked list. After the thread or function module finishes its request, it releases the mutex, and other modules can then operate on the linked list to apply for and release buffers. In this way, different function modules or threads cannot operate on the same buffer linked list at the same time.
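A sketch of that locking discipline using POSIX threads; the pthread calls are standard, but wrapping them around the earlier pool_alloc/pool_free helpers (and the wrapper names) is an assumption about how the scheme might be realised:

```c
#include <pthread.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Only one thread or function module may operate on the buffer list at a time. */
static buffer_node_t *pool_alloc_locked(buffer_list_t *list) {
    pthread_mutex_lock(&pool_lock);       /* acquire the mutex while applying for a buffer */
    buffer_node_t *buf = pool_alloc(list);
    pthread_mutex_unlock(&pool_lock);     /* release it so other modules can proceed       */
    return buf;
}

static bool pool_free_locked(buffer_list_t *list, buffer_node_t *buf) {
    pthread_mutex_lock(&pool_lock);
    bool ok = pool_free(list, buf);
    pthread_mutex_unlock(&pool_lock);
    return ok;
}
```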
An embodiment of the present invention further provides a memory management apparatus. As shown in FIG. 7, the apparatus includes a memory allocation module 71, a buffer processing module 72, and an allocation and recovery module 73, wherein:
the memory allocation module 71 is configured to allocate physical memory having a fixed start address and a fixed end address;
The memory allocation module 71 plans and allocates a fixed-size block of physical memory in advance from DDR_RAM, and the start address and end address of the physical memory are determined when the block is carved out; the size of the fixed-size physical memory can be determined according to the size of the IP packets to be transmitted, the number of IP packets transmitted simultaneously, and the size of the IP packet management structures.
The buffer processing module 72 is configured to divide the physical memory into multiple fixed-size buffers and link the buffers into a singly linked list;
At initialization, the buffer processing module 72 divides the physical memory into multiple fixed-size buffers and links these fixed-size buffers together using a singly linked list structure.
Each node of the singly linked list corresponds to one buffer; after initialization, the head pointer of the singly linked list points to the base address of the allocated fixed-size physical memory, and the buffer counter of the singly linked list records the total number of buffers.
For uplink and downlink data transmission of the terminal, at initialization the buffer processing module 72 divides the physical memory into two parts, uplink memory and downlink memory; the first address of the uplink memory is the starting address of the pre-allocated physical memory, and the first address of the downlink memory is obtained by offsetting N bytes from the base address of the uplink memory plus the memory occupied by all the uplink buffers, for example by offsetting 1024 bytes; this slight offset wastes no memory while preventing the uplink and downlink memory addresses from overlapping. The uplink and downlink memory are each divided into fixed-size buffers; for example, the number of uplink and downlink buffers can each be designed as 1024, and each set is linked into its own list.
The allocation and recovery module 73 is configured to allocate the buffers through a deletion operation on singly-linked-list nodes and to reclaim the buffers through an insertion operation on singly-linked-list nodes.
The allocation and recovery module 73 allocating the buffer through a deletion operation on a singly-linked-list node includes: the allocation and recovery module 73 allocates the buffer pointed to by the head pointer of the singly linked list to the thread or function module applying for memory, points the head pointer of the singly linked list to the next buffer, and decreases the number of buffers by 1;
When uplink data is sent, the allocation and recovery module 73 allocates the buffer pointed to by the head pointer of the uplink buffer list to the function module or thread applying for memory, then moves the head pointer backward to the next buffer and decreases the number of buffers by 1, until the head and tail pointers of the list are the same, after which no more uplink buffers are allocated.
When downlink data is received, the allocation and recovery module 73 allocates the buffer pointed to by the head pointer of the downlink buffer list to the function module or thread applying for memory, then moves the head pointer backward to the next buffer and decreases the number of buffers by 1, until the head and tail pointers of the list are the same, after which no more buffers are allocated.
The allocation and recovery module 73 reclaiming the buffer through an insertion operation on a singly-linked-list node includes: the allocation and recovery module 73 inserts the first address of the buffer released by the thread or function module at the tail of the singly linked list, points the tail pointer of the singly linked list to the inserted buffer, and increases the number of buffers by 1.
After uplink data transmission ends, that is, after the function module or thread that applied for memory has finished using an uplink buffer, the allocation and recovery module 73 inserts the first address of that uplink buffer at the tail of the uplink buffer list, sets the buffer's next-node pointer to null, points the tail pointer of the singly linked list to the inserted buffer, and increases the number of buffers by 1.
After downlink data reception is complete, that is, after the function module or thread that applied for memory has finished using a downlink buffer, the allocation and recovery module 73 inserts the first address of that downlink buffer at the tail of the downlink buffer list, sets the buffer's next-node pointer to null, points the tail pointer of the singly linked list to the inserted buffer, and increases the number of buffers by 1.
Before allocating uplink and downlink buffers, the allocation and recovery module 73 is further configured to determine whether the head and tail pointers are equal or whether the number of buffers has dropped to 0; if the head and tail pointers are equal or the number of buffers has dropped to 0, the buffers are exhausted and no more memory can be allocated. Since the pre-allocated physical memory block is large enough and the number of buffers is sufficient, an exhausted buffer pool indicates a problem in the system itself.
To prevent a buffer from being released repeatedly, the buffer processing module 72 is further configured to set, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free.
The buffer processing module 72 sets a Free flag in the data part of each node buffer in the list to record whether the corresponding buffer is free; the buffer is marked Free at initialization, set to NotFree after allocation, and set back to Free when reclaimed. In this way, repeated release of the buffer can be prevented.
To prevent out-of-bounds access to a buffer, the buffer processing module 72 is further configured to set, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
The buffer processing module 72 sets an OverFlud flag indicating whether an out-of-bounds access has occurred in the data part of each node buffer in the singly linked list, which is used to determine whether an out-of-bounds access has occurred; at initialization the flag is set to "NoFlud", meaning no out-of-bounds access has occurred, and if the flag is still "NoFlud" when the buffer is reclaimed, no out-of-bounds access exception has occurred.
Here, an out-of-bounds access refers to an address overlap between two buffers.
Since a buffer may be used by different function modules or threads, the allocation and recovery module 73 must protect the buffer with a mutex during allocation and reclamation;
Here, by adding a mutex, the allocation and recovery module 73 can ensure that an application or thread has exclusive access to a single resource, protecting a memory block accessed by multiple threads. A mutex guarantees that any thread accessing the memory block has exclusive access to it, thereby ensuring data integrity. When a thread or function module applies for a buffer, it acquires the mutex; during this period, other threads or function modules cannot operate on the linked list. After the thread or function module finishes its request, it releases the mutex, and other modules can then operate on the linked list to apply for and release buffers. In this way, different function modules or threads cannot operate on the same buffer linked list at the same time.
The memory allocation module, buffer processing module, and allocation and recovery module in the memory management apparatus proposed in the embodiments of the present invention may all be implemented by a processor, or by dedicated logic circuits; the processor may be a processor on a mobile terminal or a server, and in practical applications it may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
In the embodiments of the present invention, if the above memory management method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. In this way, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program, the computer program being used to execute the above memory management method of the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (11)

  1. A memory management method, the method comprising:
    allocating physical memory having a fixed start address and a fixed end address;
    dividing the physical memory into multiple fixed-size buffers, and linking the buffers into a singly linked list; and
    allocating the buffers through a deletion operation on singly-linked-list nodes, and reclaiming the buffers through an insertion operation on singly-linked-list nodes.
  2. The method according to claim 1, wherein allocating the buffer through the deletion operation on a singly-linked-list node comprises: allocating the buffer pointed to by the head pointer of the singly linked list to a thread or function module applying for memory, pointing the head pointer of the singly linked list to the next buffer, and decrementing the number of buffers by 1.
  3. The method according to claim 1, wherein reclaiming the buffer through the insertion operation on a singly-linked-list node comprises: inserting the first address of the buffer released by the thread or function module at the tail of the singly linked list, pointing the tail pointer of the singly linked list to the inserted buffer, and incrementing the number of buffers by 1.
  4. The method according to claim 1, wherein the method further comprises: setting, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free.
  5. The method according to claim 1, wherein the method further comprises: setting, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
  6. A memory management apparatus, the apparatus comprising a memory allocation module, a buffer processing module, and an allocation and recovery module, wherein:
    the memory allocation module is configured to allocate physical memory having a fixed start address and a fixed end address;
    the buffer processing module is configured to divide the physical memory into multiple fixed-size buffers and link the buffers into a singly linked list; and
    the allocation and recovery module is configured to allocate the buffers through a deletion operation on singly-linked-list nodes and to reclaim the buffers through an insertion operation on singly-linked-list nodes.
  7. The apparatus according to claim 6, wherein the allocation and recovery module allocating the buffer through the deletion operation on a singly-linked-list node comprises: the allocation and recovery module allocates the buffer pointed to by the head pointer of the singly linked list to a thread or function module applying for memory, points the head pointer of the singly linked list to the next buffer, and decrements the number of buffers by 1.
  8. The apparatus according to claim 6, wherein the allocation and recovery module reclaiming the buffer through the insertion operation on a singly-linked-list node comprises: the allocation and recovery module inserts the first address of the buffer released by the thread or function module at the tail of the singly linked list, points the tail pointer of the singly linked list to the inserted buffer, and increments the number of buffers by 1.
  9. The apparatus according to claim 6, wherein the buffer processing module is further configured to: set, in the data part of each buffer node in the singly linked list, an idle flag for marking whether the current buffer is free.
  10. The apparatus according to claim 6, wherein the buffer processing module is further configured to: set, in the data part of each buffer node in the singly linked list, an out-of-bounds access mark for determining whether an out-of-bounds access has occurred on the current buffer.
  11. A computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the memory management method according to any one of claims 1 to 5.
PCT/CN2015/073575 2014-07-21 2015-03-03 Memory management method, apparatus and storage medium WO2016011811A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410348883.5A CN105302739A (zh) 2014-07-21 2014-07-21 Memory management method and device
CN201410348883.5 2014-07-21

Publications (1)

Publication Number Publication Date
WO2016011811A1 true WO2016011811A1 (zh) 2016-01-28

Family

ID=55162481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/073575 WO2016011811A1 (zh) 2014-07-21 2015-03-03 一种内存管理方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN105302739A (zh)
WO (1) WO2016011811A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000588A (zh) * 2020-07-30 2020-11-27 北京浪潮数据技术有限公司 fifo链表管理方法、装置、设备及可读存储介质
CN112328389A (zh) * 2020-10-12 2021-02-05 长沙新弘软件有限公司 一种用于二叉树添加和删除结点的内存分配方法
CN113419715A (zh) * 2021-06-17 2021-09-21 吕锦柏 一种基于链表的动态内存管理方法和设备
CN113453276A (zh) * 2021-05-18 2021-09-28 翱捷科技股份有限公司 一种提高lte终端上下行内存利用率的方法及装置
CN115934000A (zh) * 2023-03-07 2023-04-07 苏州浪潮智能科技有限公司 一种存储系统的定时方法及相关装置
CN117032995A (zh) * 2023-10-08 2023-11-10 苏州元脑智能科技有限公司 内存池管理方法、装置、计算机设备和存储介质

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868014A (zh) * 2016-04-08 2016-08-17 京信通信技术(广州)有限公司 内存优化的队列方法和系统
CN106028144A (zh) * 2016-06-15 2016-10-12 青岛海信宽带多媒体技术有限公司 电视终端中的音视频资源监控方法、装置及电视终端
CN106201910A (zh) * 2016-08-27 2016-12-07 浪潮(北京)电子信息产业有限公司 一种小块内存的管理方法和装置
CN107329833B (zh) * 2017-07-03 2021-02-19 苏州浪潮智能科技有限公司 一种利用链表实现内存连续的方法和装置
CN109101438B (zh) * 2018-07-25 2020-07-28 百度在线网络技术(北京)有限公司 用于存储数据的方法和装置
CN109144892A (zh) * 2018-08-27 2019-01-04 南京国电南自轨道交通工程有限公司 一种管理内存中高频变化数据的缓冲链表数据结构设计方法
CN110674053B (zh) * 2019-09-30 2021-09-14 深圳忆联信息系统有限公司 Ssd数据存储节点管理方法、装置、计算机设备及存储介质
CN111259014B (zh) * 2020-02-04 2023-01-10 苏州浪潮智能科技有限公司 一种fpga的单向链表数据存储方法及系统
CN113422793B (zh) * 2021-02-05 2024-06-21 阿里巴巴集团控股有限公司 数据传输方法、装置、电子设备及计算机存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5577243A (en) * 1994-03-31 1996-11-19 Lexmark International, Inc. Reallocation of returned memory blocks sorted in predetermined sizes and addressed by pointer addresses in a free memory list
CN101630992A (zh) * 2008-07-14 2010-01-20 中兴通讯股份有限公司 共享内存管理方法
CN102455976A (zh) * 2010-11-02 2012-05-16 上海宝信软件股份有限公司 一种中间件内存管理方案
CN102999434A (zh) * 2011-09-15 2013-03-27 阿里巴巴集团控股有限公司 一种内存管理方法及装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678172B (zh) * 2013-12-25 2017-05-03 Tcl集团股份有限公司 一种本地数据缓存管理方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5577243A (en) * 1994-03-31 1996-11-19 Lexmark International, Inc. Reallocation of returned memory blocks sorted in predetermined sizes and addressed by pointer addresses in a free memory list
CN101630992A (zh) * 2008-07-14 2010-01-20 中兴通讯股份有限公司 共享内存管理方法
CN102455976A (zh) * 2010-11-02 2012-05-16 上海宝信软件股份有限公司 一种中间件内存管理方案
CN102999434A (zh) * 2011-09-15 2013-03-27 阿里巴巴集团控股有限公司 一种内存管理方法及装置

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000588A (zh) * 2020-07-30 2020-11-27 北京浪潮数据技术有限公司 fifo链表管理方法、装置、设备及可读存储介质
CN112328389A (zh) * 2020-10-12 2021-02-05 长沙新弘软件有限公司 一种用于二叉树添加和删除结点的内存分配方法
CN112328389B (zh) * 2020-10-12 2024-04-30 长沙新弘软件有限公司 一种用于二叉树添加和删除结点的内存分配方法
CN113453276A (zh) * 2021-05-18 2021-09-28 翱捷科技股份有限公司 一种提高lte终端上下行内存利用率的方法及装置
CN113453276B (zh) * 2021-05-18 2024-01-16 翱捷科技股份有限公司 一种提高lte终端上下行内存利用率的方法及装置
CN113419715A (zh) * 2021-06-17 2021-09-21 吕锦柏 一种基于链表的动态内存管理方法和设备
CN115934000A (zh) * 2023-03-07 2023-04-07 苏州浪潮智能科技有限公司 一种存储系统的定时方法及相关装置
CN117032995A (zh) * 2023-10-08 2023-11-10 苏州元脑智能科技有限公司 内存池管理方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN105302739A (zh) 2016-02-03

Similar Documents

Publication Publication Date Title
WO2016011811A1 (zh) Memory management method, apparatus and storage medium
US10693787B2 (en) Throttling for bandwidth imbalanced data transfers
CN1282339C (zh) 一种用于以太网无源光网络的数据帧缓存设备和方法
CN104731569B (zh) 一种数据处理方法及相关设备
CN105511954A (zh) 一种报文处理方法及装置
WO2015027806A1 (zh) 一种内存数据的读写处理方法和装置
CN104102586A (zh) 一种地址映射处理的方法、装置
CN107479833B (zh) 一种面向键值存储的远程非易失内存访问与管理方法
WO2011015055A1 (zh) 一种存储管理的方法和系统
WO2014135038A1 (zh) 基于pcie总线的报文传输方法与装置
CN105573922B (zh) 一种实现数据格式转换的方法和装置
CN104065588A (zh) 一种数据包调度和缓存的装置及方法
WO2023160088A1 (zh) 一种区块链交易的处理方法、区块链节点及电子设备
WO2019024763A1 (zh) 报文处理
WO2016202113A1 (zh) 一种队列管理方法、装置及存储介质
CN104572498A (zh) 报文的缓存管理方法和装置
US11385900B2 (en) Accessing queue data
CN102629235A (zh) 一种提高ddr存储器读写速率的方法
CN104679507B (zh) NAND Flash编程器烧录映像文件的生成方法及装置
CN104486442A (zh) 分布式存储系统的数据传输方法、装置
US10579308B2 (en) Hardware system for data conversion and storage device
US20160085683A1 (en) Data receiving device and data receiving method
CN107846328B (zh) 基于并发无锁环形队列的网络速率实时统计方法
CN111045817A (zh) 一种PCIe传输管理方法、系统和装置
CN114024844B (zh) 数据调度方法、数据调度装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15825455

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15825455

Country of ref document: EP

Kind code of ref document: A1