WO2017133439A1 - Data management method and apparatus, and computer storage medium - Google Patents

Data management method and apparatus, and computer storage medium

Info

Publication number
WO2017133439A1
Authority
WO
WIPO (PCT)
Prior art keywords
linked list
queue
data
cache
queue data
Prior art date
Application number
PCT/CN2017/071323
Other languages
English (en)
French (fr)
Inventor
胡永春
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2017133439A1 publication Critical patent/WO2017133439A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/123Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure

Definitions

  • the present invention relates to data management technologies, and in particular, to a data management method and apparatus, and a computer storage medium.
  • One method of data storage is to use an exclusive space configuration, that is, each queue is allocated a separate space for use, and the address spaces are independent of each other and do not affect each other.
  • This approach is simple in structure: if on-chip or off-chip storage space is used, only an independent address range needs to be allocated; if a first-in first-out (FIFO) cache structure is used, a FIFO of sufficient depth must be provided for each queue.
  • Another method of data storage is to use a shared space configuration, that is, to configure a total shared space for all queues, which is generally implemented by means of a linked list.
  • the implementation of the linked list can make full use of the shared space.
  • the shared space can be occupied relatively fairly when there are many queues, and greedily when there are few queues, greatly improving resource utilization.
  • the advantage of this approach is that it can save on-chip resources and handle bursty data traffic well, but it has the following disadvantage: when data traffic is light, linked list enqueue and dequeue operations cost time, and especially with an off-chip linked list, the performance of the off-chip storage device (e.g., DDR) can prevent data stored in the linked list from being scheduled quickly.
  • an embodiment of the present invention provides a data management method and apparatus, and a computer storage medium.
  • when it is determined that the queue data is to be sent to the cache, the queue data is sent to the cache for storage; when it is determined that the queue data is to be sent to the linked list, the queue data is sent to the linked list for storage;
  • when scheduling queue data, the queue data is looked up in the cache; when the cache determines that part or all of the queue data is stored in the linked list, the queue data is scheduled from the linked list through the cache.
  • the determining whether the queue data is sent to a cache or sent to a linked list includes:
  • the queue data is sent to the cache when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list.
  • the sending the queue data to the linked list for storage comprises:
  • the scheduling the queue data to the linked list by using the cache includes:
  • when the linked list receives the scheduling instruction, the linked list address of the queue is read according to the queue number, and the queue data is read from the linked list according to the linked list address.
  • the reading the queue data from the linked list according to the linked list address includes:
  • the linked list address is cached, and after the linked list stores the queue data successfully, the queue data is read from the linked list according to the linked list address.
  • the data management apparatus includes: a cache and a linked list
  • a push module configured to: when queue data is obtained, determine whether the queue data is to be sent to the cache or to the linked list; when it is determined that the queue data is to be sent to the cache, send the queue data to the cache for storage;
  • the linked list enqueue module is configured to: when it is determined that the queue data is sent to the linked list, send the queue data to the linked list for storage;
  • the cache is configured to: when queue data is scheduled, look up the queue data in the cache; when determining that part or all of the queue data is stored in the linked list, send a scheduling instruction to the pull module;
  • the pull module is configured to send queue request information to the linked list dequeue module when receiving the scheduling instruction sent by the cache;
  • the linked list dequeue module is configured to schedule the queue data from the linked list when receiving the queue request information.
  • the Push module is further configured to: determine whether the space allocated to the queue in the cache is full, and determine whether the queue is being serviced in the linked list; and send the queue data to the cache when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list;
  • the linked list enqueue module is further configured to send the queue data to the linked list when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list.
  • the linked list enqueue module is further configured to: read the linked list address of the queue according to the queue number, store the queue data into the linked list according to the linked list address, and update the occupancy information and free-space information of the linked list.
  • the linked list dequeue module is further configured to: when receiving the queue request information, read the linked list address of the queue according to the queue number, and read the queue data from the linked list according to the linked list address.
  • the device further includes:
  • the dequeue instruction buffer module is configured to buffer the linked list address; after the linked list stores the queue data successfully, send the linked list address to the linked list dequeue module;
  • the linked list dequeue module is further configured to: after receiving the linked list address, read the queue data from the linked list according to the linked list address.
  • the embodiment of the invention further provides a computer storage medium storing a computer program configured to execute the above data management method.
  • in the embodiment of the present invention, when queue data is obtained, it is determined whether the queue data is to be sent to the cache or to the linked list; when it is determined that the queue data is to be sent to the cache, the queue data is sent to the cache for storage; when it is determined that the queue data is to be sent to the linked list, the queue data is sent to the linked list for storage; when scheduling queue data, the queue data is looked up in the cache; when the cache determines that part or all of the queue data is stored in the linked list, the queue data is scheduled from the linked list through the cache. It can be seen that the embodiment of the present invention combines a cache and a linked list for queue management, which improves the data access rate, makes full use of both the exclusive space of the cache and the shared space of the linked list, solves the problem of insufficient data processing capability caused by bursty network traffic, and greatly improves the processing capability of the linked list.
  • FIG. 1 is a schematic flowchart of a data management method according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a cache according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a data management method according to another embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a data management apparatus according to an embodiment of the present invention.
  • for the use of an off-chip linked list, the embodiment of the present invention proposes a method of managing queue data through the combined use of a cache and the off-chip linked list.
  • the technical solution of the embodiment of the invention ensures that data can be managed directly by the high-speed cache when data traffic is light, and by the combination of the cache and the off-chip linked list under bursty traffic. Combined with the push-pull mechanism between the high-speed cache and the linked list, the data processing performance of the off-chip linked list can be greatly improved.
  • FIG. 1 is a schematic flowchart of a data management method according to an embodiment of the present invention. As shown in FIG. 1, the data management method includes the following steps:
  • Step 101: When queue data is obtained, determine whether the queue data is to be sent to the cache or to the linked list.
  • the data management method of the embodiment of the present invention is applied to a data management apparatus, and the data management apparatus includes: a cache, a linked list, a Push module, a linked list enqueue module, a pull module, a linked list dequeue module, and a dequeue instruction buffer module.
  • the Push module controls when data is pushed directly into the cache for high-speed processing without using the off-chip linked list space. Specifically, when the Push module obtains queue data, it determines whether the queue data is to be sent to the cache or to the linked list; that is, it determines whether the space allocated to the queue in the cache is full, and whether the queue is being serviced in the linked list.
  • Step 102: When it is determined that the queue data is to be sent to the cache, send the queue data to the cache for storage; when it is determined that the queue data is to be sent to the linked list, send the queue data to the linked list for storage.
  • when it is determined that the queue data is to be sent to the cache, the Push module sends the queue data to the cache for storage.
  • when it is determined that the queue data is to be sent to the linked list, the linked list enqueue module sends the queue data to the linked list for storage.
  • specifically, when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list, the Push module sends the queue data to the cache for storage; when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list, the linked list enqueue module sends the queue data to the linked list for storage.
  • the linked list enqueue module reads the linked list address of the queue according to the queue number, stores the queue data into the linked list according to the linked list address, and updates the occupancy information and free-space information of the linked list.
  • Step 103: When scheduling queue data, look up the queue data in the cache; when the cache determines that part or all of the queue data is stored in the linked list, schedule the queue data from the linked list through the cache.
  • when the queue data occupies the off-chip linked list space and data in the cache is scheduled, the pull module issues queue request information indicating that data needs to be pulled from the off-chip linked list space to refill the cache; if the data does not occupy the off-chip linked list space, the pull module is not started.
  • the cache module is configured to store queue data that is pushed directly and queue data pulled from the off-chip linked list. The cache module is implemented with a block-partitioned random access memory (RAM); that is, a single RAM is organized into multiple FIFO-like devices, with each queue exclusively occupying one FIFO.
  • FIG. 2 is a schematic diagram of a cache according to an embodiment of the present invention.
  • the RAM is used to implement multiple FIFO functions, comprising a write cache control module, write pointers, a read cache control module, and read pointers.
  • when the linked list dequeue module receives the queue request information, the linked list address of the queue is read according to the queue number, and the queue data is read from the linked list according to the linked list address.
  • the dequeue instruction buffer module buffers the linked list address; after the linked list stores the queue data successfully, it sends the linked list address to the linked list dequeue module; after receiving the linked list address, the linked list dequeue module reads the queue data from the linked list according to the linked list address.
  • the dequeue instruction buffer module buffers linked list addresses from which queue data needs to be extracted from the off-chip linked list space. Because data written to the off-chip linked list space cannot be guaranteed to be written quickly, the write-success flag of the off-chip space must be awaited. Therefore, when the cache issues a scheduling instruction that requires reading data from the off-chip linked list space, the linked list address must first be buffered, and the read instruction is issued to the off-chip linked list space only after the preceding write-success flag becomes valid.
  • the technical solution of the embodiment of the present invention combines the characteristics of exclusive and shared storage: when data traffic is light, data can be managed directly through the cache without linked list operations; when data traffic is heavy, the push-pull mechanism between the off-chip linked list and the cache can greatly accelerate data processing.
  • FIG. 3 is a flowchart of a data management method according to another embodiment of the present invention.
  • when queue data needs queue management, the push module first makes a decision; if the push condition is satisfied, the data enters the cache directly, with the data stream divided by queue and sent straight into the cache for management.
  • otherwise, the queue data enters the off-chip linked list: the off-chip linked list enqueue module reads the queue's linked list addresses according to the queue number, including the queue head pointer, the queue tail pointer, and the free list tail pointer, and updates the occupancy information and free-space information of the linked list.
  • when a data scheduling action occurs, data is read directly from the cache. If the queue does not occupy linked list space, the pull module is not started; otherwise the pull module is triggered and sends dequeue request information to the off-chip linked list dequeue module.
  • the off-chip linked list dequeue module reads the linked list addresses according to the queue number, including the queue head pointer, the queue tail pointer, and the free list head pointer, and updates the linked list information after the dequeue completes.
  • the off-chip linked list dequeue module computes the address in the off-chip linked list space and stores the off-chip dequeue information in the off-chip dequeue instruction buffer, where the off-chip write-success flag is checked.
  • only after the data has been successfully written to the off-chip DDR and a success indication is returned can the information stored in the off-chip dequeue instruction buffer module be sent to the DDR to perform the read operation; after the DDR returns valid data, the data is sent to the cache, completing the pull operation.
  • FIG. 4 is a schematic structural diagram of a data management apparatus according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes: a cache 11, a linked list 12;
  • the Push module 13 is configured to, when queue data is obtained, determine whether the queue data is to be sent to the cache 11 or to the linked list 12; when it is determined that the queue data is to be sent to the cache 11, send the queue data to the cache 11 for storage;
  • the linked list enqueue module 14 is configured to, when it is determined that the queue data is sent to the linked list 12, send the queue data to the linked list 12 for storage;
  • the cache 11 is configured to: when queue data is scheduled, look up the queue data in the cache 11; when determining that part or all of the queue data is stored in the linked list 12, send a scheduling instruction to the pull module 15;
  • the pull module 15 is configured to send queue request information to the linked list dequeue module 16 when receiving the scheduling instruction sent by the cache 11;
  • the linked list dequeue module 16 is configured to schedule the queue data from the linked list 12 when receiving the queue request information.
  • the Push module 13 is further configured to determine whether the space allocated to the queue in the cache 11 is full, and to determine whether the queue is being serviced in the linked list 12; when the space allocated to the queue in the cache 11 is not full and the queue is not being serviced in the linked list 12, the queue data is sent to the cache 11;
  • the linked list enqueue module 14 is further configured to send the queue data to the linked list 12 when the space allocated to the queue in the cache 11 is full, or the queue is being serviced in the linked list 12.
  • the linked list enqueue module 14 is further configured to read the linked list 12 address of the queue according to the queue number, store the queue data into the linked list 12 according to the linked list 12 address, and update the occupancy information and free-space information of the linked list 12.
  • the linked list dequeue module 16 is further configured to, when receiving the queue request information, read the linked list 12 address according to the queue number, and read the queue data from the linked list 12 according to the linked list 12 address.
  • the device also includes:
  • the dequeue instruction buffer module 17 is configured to buffer the linked list 12 address; after the linked list 12 stores the queue data successfully, the linked list 12 address is sent to the linked list dequeue module 16;
  • the linked list dequeue module 16 is further configured to, after receiving the linked list 12 address, read the queue data from the linked list 12 according to the linked list 12 address.
  • each unit in the data management device may be implemented by a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA) located in the data management device.
  • if the data management device is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc.
  • an embodiment of the present invention further provides a computer storage medium storing a computer program configured to perform the data management method of the embodiment of the present invention.
  • the disclosed method and smart device may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division; in actual implementation there may be other divisions, for example: multiple units or components may be combined, or integrated into another system, or some features may be ignored or not performed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in the embodiments of the present invention may be integrated into one processing unit, or each unit may be used separately as one unit, or two or more units may be integrated into one unit;
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • in summary, when queue data is obtained, it is determined whether the queue data is to be sent to the cache or to the linked list; when it is determined that the queue data is to be sent to the cache, the queue data is sent to the cache for storage; when it is determined that the queue data is to be sent to the linked list, the queue data is sent to the linked list for storage; when scheduling queue data, the queue data is looked up in the cache.
  • when the cache determines that part or all of the queue data is stored in the linked list, the queue data is scheduled from the linked list through the cache.
  • the embodiment of the present invention combines the cache and the linked list for queue management, improves the data access rate, makes full use of the exclusive space of the cache and the shared space of the linked list, solves the problem of insufficient data processing capability caused by bursty network traffic, and greatly improves the processing capability of the linked list.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Multi Processors (AREA)

Abstract

A data management method and apparatus, and a computer storage medium. The method includes: when queue data is obtained, determining whether the queue data is to be sent to a cache or to a linked list (101); when it is determined that the queue data is to be sent to the cache, sending the queue data to the cache for storage; when it is determined that the queue data is to be sent to the linked list, sending the queue data to the linked list for storage (102); when scheduling queue data, looking up the queue data in the cache, and when the cache determines that part or all of the queue data is stored in the linked list, scheduling the queue data from the linked list through the cache (103).

Description

Data Management Method and Apparatus, and Computer Storage Medium
Technical Field
The present invention relates to data management technologies, and in particular to a data management method and apparatus, and a computer storage medium.
Background
In the management of network data, the way data is stored often affects data processing performance. Current data storage approaches include the following two:
One approach is an exclusive space configuration: each queue is allocated its own dedicated space, and the address spaces are mutually independent and do not affect one another. This approach is simple in structure: if on-chip or off-chip storage space is used, only an independent address range needs to be allocated; if a first-in first-out (FIFO, First-In First-Out) cache structure is used, a FIFO of sufficient depth must be provided for each queue. The drawback of this approach is its large resource footprint: with few queues it easily wastes space, and its ability to handle data bursts is rather weak.
The other approach is a shared space configuration: a single overall shared space is configured for all queues, generally implemented with a linked list. An implementation using a linked list can make full use of the shared space: with many queues the shared space can be occupied relatively fairly, and with few queues a single queue can occupy it greedily, greatly improving resource utilization. The advantages of this approach are that it saves on-chip resources and handles bursty data traffic well, but it has the following drawback:
When data traffic is light, enqueuing to and dequeuing from the linked list both cost time. Especially when an off-chip linked list is used, the performance of the off-chip storage device (such as double data rate synchronous dynamic random access memory (DDR, Double Data Rate)) easily prevents data stored in the linked list from being scheduled quickly, severely constraining the data processing capability of the linked list.
Summary
To solve the above technical problem, embodiments of the present invention provide a data management method and apparatus, and a computer storage medium.
The data management method provided by an embodiment of the present invention includes:
when queue data is obtained, determining whether the queue data is to be sent to a cache or to a linked list;
when it is determined that the queue data is to be sent to the cache, sending the queue data to the cache for storage; when it is determined that the queue data is to be sent to the linked list, sending the queue data to the linked list for storage;
when scheduling queue data, looking up the queue data in the cache; and when the cache determines that part or all of the queue data is stored in the linked list, scheduling the queue data from the linked list through the cache.
In an embodiment of the present invention, determining whether the queue data is to be sent to the cache or to the linked list includes:
determining whether the space allocated to the queue in the cache is full, and determining whether the queue is being serviced in the linked list;
when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list, sending the queue data to the linked list;
when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list, sending the queue data to the cache.
In an embodiment of the present invention, sending the queue data to the linked list for storage includes:
reading the linked list address of the queue according to the queue number, and storing the queue data into the linked list according to the linked list address;
updating the occupancy information and free-space information of the linked list.
In an embodiment of the present invention, scheduling the queue data from the linked list through the cache includes:
sending a scheduling instruction to the linked list through the cache;
when the linked list receives the scheduling instruction, reading the linked list address of the queue according to the queue number, and reading the queue data from the linked list according to the linked list address.
In an embodiment of the present invention, reading the queue data from the linked list according to the linked list address includes:
buffering the linked list address, and after the linked list has successfully stored the queue data, reading the queue data from the linked list according to the linked list address.
The data management apparatus provided by an embodiment of the present invention includes: a cache and a linked list;
a push module configured to, when queue data is obtained, determine whether the queue data is to be sent to the cache or to the linked list, and when it is determined that the queue data is to be sent to the cache, send the queue data to the cache for storage;
a linked list enqueue module configured to, when it is determined that the queue data is to be sent to the linked list, send the queue data to the linked list for storage;
the cache configured to, when queue data is scheduled, look up the queue data in the cache, and when determining that part or all of the queue data is stored in the linked list, send a scheduling instruction to a pull module;
the pull module configured to, upon receiving the scheduling instruction sent by the cache, send queue request information to a linked list dequeue module;
the linked list dequeue module configured to, upon receiving the queue request information, schedule the queue data from the linked list.
In an embodiment of the present invention, the push module is further configured to determine whether the space allocated to the queue in the cache is full, and to determine whether the queue is being serviced in the linked list; and when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list, to send the queue data to the cache;
the linked list enqueue module is further configured to, when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list, send the queue data to the linked list.
In an embodiment of the present invention, the linked list enqueue module is further configured to read the linked list address of the queue according to the queue number, store the queue data into the linked list according to the linked list address, and update the occupancy information and free-space information of the linked list.
In an embodiment of the present invention, the linked list dequeue module is further configured to, upon receiving the queue request information, read the linked list address of the queue according to the queue number, and read the queue data from the linked list according to the linked list address.
In an embodiment of the present invention, the apparatus further includes:
a dequeue instruction buffer module configured to buffer the linked list address, and after the linked list has successfully stored the queue data, send the linked list address to the linked list dequeue module;
the linked list dequeue module further configured to, after receiving the linked list address, read the queue data from the linked list according to the linked list address.
An embodiment of the present invention further provides a computer storage medium storing a computer program configured to perform the above data management method.
In the technical solutions of the embodiments of the present invention, when queue data is obtained, it is determined whether the queue data is to be sent to a cache or to a linked list; when it is determined that the queue data is to be sent to the cache, the queue data is sent to the cache for storage; when it is determined that the queue data is to be sent to the linked list, the queue data is sent to the linked list for storage; when scheduling queue data, the queue data is looked up in the cache, and when the cache determines that part or all of the queue data is stored in the linked list, the queue data is scheduled from the linked list through the cache. It can be seen that the embodiments of the present invention combine a cache and a linked list for queue management, which improves the data access rate, makes full use of both the exclusive space of the cache and the shared space of the linked list, solves the problem of insufficient data processing capability caused by bursty network traffic, and greatly improves the processing capability of the linked list.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a data management method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a cache according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a data management method according to another embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a data management apparatus according to an embodiment of the present invention.
Detailed Description
To provide a more detailed understanding of the features and technical content of the present invention, the implementation of the present invention is described below with reference to the accompanying drawings, which are for reference and illustration only and are not intended to limit the present invention.
As linked list techniques are applied ever more widely in Internet switching chips, how to improve the processing capability of the linked list has attracted increasing attention. The enqueue and dequeue operations of the linked list itself can cause insufficient data processing capability, especially when an off-chip linked list is used, in which case data processing performance is heavily constrained by the off-chip device. For the use of an off-chip linked list, the embodiments of the present invention propose a method of managing queue data through the combined use of a cache and the off-chip linked list. The technical solutions of the embodiments ensure that when data traffic is light, the high-speed cache can manage the data directly, and that under bursty traffic, the combination of the cache and the off-chip linked list manages the data. Combining the push-pull mechanism between the high-speed cache and the linked list can greatly improve the data processing performance of the off-chip linked list.
FIG. 1 is a schematic flowchart of a data management method according to an embodiment of the present invention. As shown in FIG. 1, the data management method includes the following steps:
Step 101: when queue data is obtained, determine whether the queue data is to be sent to the cache or to the linked list.
The data management method of this embodiment is applied in a data management apparatus, which includes: a cache, a linked list, a push module, a linked list enqueue module, a pull module, a linked list dequeue module, and a dequeue instruction buffer module.
The push module controls when data is pushed directly into the cache for high-speed processing without using the off-chip linked list space. Specifically, when the push module obtains queue data, it determines whether the queue data is to be sent to the cache or to the linked list; that is, it determines whether the space allocated to the queue in the cache is full, and whether the queue is being serviced in the linked list. A minimal sketch of this decision follows.
Step 102: when it is determined that the queue data is to be sent to the cache, send the queue data to the cache for storage; when it is determined that the queue data is to be sent to the linked list, send the queue data to the linked list for storage.
When it is determined that the queue data is to be sent to the cache, the push module sends the queue data to the cache for storage.
When it is determined that the queue data is to be sent to the linked list, the linked list enqueue module sends the queue data to the linked list for storage.
Specifically, when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list, the push module sends the queue data to the cache for storage; when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list, the linked list enqueue module sends the queue data to the linked list for storage.
In this embodiment, the linked list enqueue module reads the linked list address of the queue according to the queue number, stores the queue data into the linked list according to the linked list address, and updates the occupancy information and free-space information of the linked list, as the sketch below illustrates.
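Continuing the same hypothetical C sketch, the fragment below illustrates the enqueue bookkeeping just described: a node is taken from the free list, linked at the queue tail, and the occupancy and free-space information is updated. The table sizes are arbitrary, initialization is omitted, and which end of the free list supplies or reclaims nodes is a simplification chosen here; the payload itself would be written to off-chip memory at the returned node address.

    #define NUM_QUEUES 64
    #define LIST_NODES 4096
    #define NULL_NODE  0xFFFFFFFFu

    /* Hypothetical linked list control state: per-queue head/tail pointers,
       a free list, per-node next pointers, and per-queue occupancy counts. */
    typedef struct {
        uint32_t head[NUM_QUEUES];
        uint32_t tail[NUM_QUEUES];
        uint32_t free_head;
        uint32_t free_tail;
        uint32_t next[LIST_NODES];
        uint32_t occupied[NUM_QUEUES];
    } list_ctrl_t;

    /* Enqueue one node for queue qid and return its address, or NULL_NODE
       if the shared space is exhausted. */
    static uint32_t list_enqueue(list_ctrl_t *l, uint32_t qid)
    {
        uint32_t node = l->free_head;
        if (node == NULL_NODE)
            return NULL_NODE;              /* shared space exhausted */
        l->free_head = l->next[node];      /* pop from the free list */
        l->next[node] = NULL_NODE;
        if (l->head[qid] == NULL_NODE)
            l->head[qid] = node;           /* queue was empty */
        else
            l->next[l->tail[qid]] = node;  /* link behind the current tail */
        l->tail[qid] = node;
        l->occupied[qid]++;                /* update occupancy information */
        return node;
    }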
Step 103: when scheduling queue data, look up the queue data in the cache; when the cache determines that part or all of the queue data is stored in the linked list, schedule the queue data from the linked list through the cache.
In this embodiment, when queue data occupies the off-chip linked list space and data in the cache is scheduled, the pull module issues queue request information indicating that data needs to be pulled from the off-chip linked list space to refill the cache; if the data does not occupy the off-chip linked list space, the pull module is not started.
In this embodiment, the cache module stores both queue data pushed directly and queue data pulled from the off-chip linked list. The cache module is implemented with a block-partitioned random access memory (RAM, Random-Access Memory); that is, a single RAM is organized into multiple FIFO-like devices, with each queue exclusively occupying one FIFO. Referring to FIG. 2, a schematic diagram of a cache according to an embodiment of the present invention, a RAM is used to implement multiple FIFO functions, comprising a write cache control module, write pointers, a read cache control module, and read pointers. A sketch of this organization follows.
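A minimal sketch of that organization, reusing the constants above (the depth and word width are again assumptions): one RAM array is divided into fixed per-queue regions, and the write cache control and read cache control of FIG. 2 reduce to per-queue write and read pointers.

    #define FIFO_DEPTH 128                 /* words per queue; illustrative */

    typedef struct {
        uint64_t ram[NUM_QUEUES * FIFO_DEPTH]; /* one shared RAM block      */
        uint32_t wr[NUM_QUEUES];               /* per-queue write pointers  */
        uint32_t rd[NUM_QUEUES];               /* per-queue read pointers   */
    } multi_fifo_t;

    /* Write path: refuse the word when this queue's FIFO region is full. */
    static bool fifo_push(multi_fifo_t *c, uint32_t qid, uint64_t word)
    {
        if (c->wr[qid] - c->rd[qid] >= FIFO_DEPTH)
            return false;                  /* this queue's FIFO is full */
        c->ram[qid * FIFO_DEPTH + (c->wr[qid]++ % FIFO_DEPTH)] = word;
        return true;
    }

    /* Read path: fail when the FIFO region of queue qid is empty. */
    static bool fifo_pop(multi_fifo_t *c, uint32_t qid, uint64_t *word)
    {
        if (c->rd[qid] == c->wr[qid])
            return false;                  /* empty */
        *word = c->ram[qid * FIFO_DEPTH + (c->rd[qid]++ % FIFO_DEPTH)];
        return true;
    }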
In this embodiment, when the linked list dequeue module receives the queue request information, it reads the linked list address of the queue according to the queue number and reads the queue data from the linked list according to the linked list address.
In this embodiment, the dequeue instruction buffer module buffers the linked list address; after the linked list has successfully stored the queue data, it sends the linked list address to the linked list dequeue module; after receiving the linked list address, the linked list dequeue module reads the queue data from the linked list according to the linked list address.
Specifically, the dequeue instruction buffer module buffers linked list addresses from which queue data needs to be extracted from the off-chip linked list space. When the off-chip linked list space is used to store data, the data cannot be guaranteed to be written quickly, so the write-success flag of the off-chip space must be awaited. Therefore, when the cache issues a scheduling instruction that requires reading data from the off-chip linked list space, the linked list address must first be buffered, and the read instruction is issued to the off-chip linked list space only after the preceding write-success flag becomes valid, as sketched below.
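Under the same assumptions, the gating can be pictured as follows; write_done and ddr_read are stand-ins for the real off-chip interfaces, which the patent does not specify:

    #define PENDING_DEPTH 32

    /* Hypothetical dequeue instruction buffer: a small ring of linked list
       addresses whose DDR reads still await the write-success flag. */
    typedef struct {
        uint32_t addr[PENDING_DEPTH];
        uint32_t head, tail;
    } deq_instr_buf_t;

    static void buffer_read(deq_instr_buf_t *b, uint32_t list_addr)
    {
        b->addr[b->tail++ % PENDING_DEPTH] = list_addr; /* assume not full */
    }

    /* Release the oldest buffered read only once the off-chip write of that
       address is confirmed; the returned data is then sent on to the cache. */
    static bool try_issue_read(deq_instr_buf_t *b,
                               bool (*write_done)(uint32_t),
                               uint64_t (*ddr_read)(uint32_t),
                               uint64_t *out)
    {
        if (b->head == b->tail)
            return false;                  /* nothing pending */
        uint32_t a = b->addr[b->head % PENDING_DEPTH];
        if (!write_done(a))
            return false;                  /* write-success flag not yet valid */
        *out = ddr_read(a);                /* read the node back from DDR */
        b->head++;
        return true;
    }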
The technical solutions of the embodiments of the present invention combine the characteristics of exclusive and shared storage: when data traffic is light, data can be managed directly through the cache without linked list operations; when data traffic is heavy, the push-pull mechanism between the off-chip linked list and the cache can greatly accelerate data processing.
FIG. 3 is a flowchart of a data management method according to another embodiment of the present invention. When queue data needs queue management, it first passes through the push module for a decision: if the push condition is satisfied, the data enters the cache directly, with the data stream divided by queue and sent straight into the cache for management. If, while queue data is arriving, the space allocated to the queue in the cache is full, or the queue space is not full but the queue's pull module is working, the queue data is sent directly into the off-chip linked list; the off-chip linked list enqueue module reads the queue's linked list addresses according to the queue number, including the queue head pointer, the queue tail pointer, and the free list tail pointer, and updates the occupancy information and free-space information of the linked list.
When a data scheduling action occurs, data is read directly from the cache. If the queue does not occupy any linked list space, the pull module is not started; otherwise the pull module is triggered and sends dequeue request information to the off-chip linked list dequeue module. The off-chip linked list dequeue module reads the linked list addresses according to the queue number, including the queue head pointer, the queue tail pointer, and the free list head pointer, and updates the linked list information after the dequeue completes. The off-chip linked list dequeue module computes the address in the off-chip linked list space and stores the off-chip dequeue information in the off-chip dequeue instruction buffer. The write-success flag of the off-chip space is then checked: only after the data has been successfully written to the off-chip DDR and a success indication has been returned can the information stored in the off-chip dequeue instruction buffer module be sent to the DDR to perform the read operation. After the DDR returns valid data, the data is sent to the cache, completing the pull operation. The sketch after this paragraph ties these steps together.
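Still under the same hypothetical types, the fragment below sketches the dequeue bookkeeping (the mirror of list_enqueue) and the scheduling path of FIG. 3: data is always served through the cache, and the pull is triggered only when the queue also occupies the off-chip list space.

    /* Unlink the head node of queue qid, return it to the free list, and
       update the occupancy information; mirrors list_enqueue above. */
    static uint32_t list_dequeue(list_ctrl_t *l, uint32_t qid)
    {
        uint32_t node = l->head[qid];
        if (node == NULL_NODE)
            return NULL_NODE;
        l->head[qid] = l->next[node];
        if (l->head[qid] == NULL_NODE)
            l->tail[qid] = NULL_NODE;      /* queue emptied */
        l->next[node] = NULL_NODE;
        if (l->free_head == NULL_NODE)
            l->free_head = node;           /* free list was empty */
        else
            l->next[l->free_tail] = node;
        l->free_tail = node;
        l->occupied[qid]--;
        return node;
    }

    /* Scheduling path: serve from the cache, and when the queue occupies
       off-chip list space, dequeue one node and buffer the DDR read until
       its write-success flag is valid (see try_issue_read above). */
    static bool schedule_queue(multi_fifo_t *cache, queue_state_t *qs,
                               list_ctrl_t *list, deq_instr_buf_t *dib,
                               uint32_t qid, uint64_t *word)
    {
        bool hit = fifo_pop(cache, qid, word);
        if (qs[qid].in_linked_list) {      /* pull: refill the cache */
            uint32_t node = list_dequeue(list, qid);
            if (node != NULL_NODE)
                buffer_read(dib, node);
        }
        return hit;
    }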
FIG. 4 is a schematic structural diagram of a data management apparatus according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes: a cache 11 and a linked list 12;
a push module 13 configured to, when queue data is obtained, determine whether the queue data is to be sent to the cache 11 or to the linked list 12, and when it is determined that the queue data is to be sent to the cache 11, send the queue data to the cache 11 for storage;
a linked list enqueue module 14 configured to, when it is determined that the queue data is to be sent to the linked list 12, send the queue data to the linked list 12 for storage;
the cache 11 configured to, when queue data is scheduled, look up the queue data in the cache 11, and when determining that part or all of the queue data is stored in the linked list 12, send a scheduling instruction to a pull module 15;
the pull module 15 configured to, upon receiving the scheduling instruction sent by the cache 11, send queue request information to a linked list dequeue module 16;
the linked list dequeue module 16 configured to, upon receiving the queue request information, schedule the queue data from the linked list 12.
The push module 13 is further configured to determine whether the space allocated to the queue in the cache 11 is full, and to determine whether the queue is being serviced in the linked list 12; when the space allocated to the queue in the cache 11 is not full and the queue is not being serviced in the linked list 12, the queue data is sent to the cache 11;
the linked list enqueue module 14 is further configured to, when the space allocated to the queue in the cache 11 is full, or the queue is being serviced in the linked list 12, send the queue data to the linked list 12.
The linked list enqueue module 14 is further configured to read the linked list 12 address of the queue according to the queue number, store the queue data into the linked list 12 according to the linked list 12 address, and update the occupancy information and free-space information of the linked list 12.
The linked list dequeue module 16 is further configured to, upon receiving the queue request information, read the linked list 12 address of the queue according to the queue number, and read the queue data from the linked list 12 according to the linked list 12 address.
The apparatus further includes:
a dequeue instruction buffer module 17 configured to buffer the linked list 12 address, and after the linked list 12 has successfully stored the queue data, send the linked list 12 address to the linked list dequeue module 16;
the linked list dequeue module 16 further configured to, after receiving the linked list 12 address, read the queue data from the linked list 12 according to the linked list 12 address.
Those skilled in the art should understand that the functions implemented by the units in the data management apparatus shown in FIG. 4 can be understood with reference to the foregoing description of the data management method.
In practical applications, the functions implemented by the units in the data management apparatus may be implemented by a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA) located in the data management apparatus.
If the above data management apparatus of the embodiments of the present invention is implemented in the form of software function modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program configured to perform the data management method of the embodiments of the present invention.
The technical solutions described in the embodiments of the present invention may be combined arbitrarily as long as there is no conflict.
In the several embodiments provided by the present invention, it should be understood that the disclosed method and smart device may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and in actual implementation there may be other divisions, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as one unit separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Industrial Applicability
In the technical solutions of the embodiments of the present invention, when queue data is obtained, it is determined whether the queue data is to be sent to a cache or to a linked list; when it is determined that the queue data is to be sent to the cache, the queue data is sent to the cache for storage; when it is determined that the queue data is to be sent to the linked list, the queue data is sent to the linked list for storage; when scheduling queue data, the queue data is looked up in the cache; and when the cache determines that part or all of the queue data is stored in the linked list, the queue data is scheduled from the linked list through the cache. It can be seen that the embodiments of the present invention combine a cache and a linked list for queue management, improving the data access rate, making full use of both the exclusive space of the cache and the shared space of the linked list, solving the problem of insufficient data processing capability caused by bursty network traffic, and greatly improving the processing capability of the linked list.

Claims (11)

  1. A data management method, the method comprising:
    when queue data is obtained, determining whether the queue data is to be sent to a cache or to a linked list;
    when it is determined that the queue data is to be sent to the cache, sending the queue data to the cache for storage; when it is determined that the queue data is to be sent to the linked list, sending the queue data to the linked list for storage;
    when scheduling queue data, looking up the queue data in the cache; and when the cache determines that part or all of the queue data is stored in the linked list, scheduling the queue data from the linked list through the cache.
  2. The data management method according to claim 1, wherein determining whether the queue data is to be sent to the cache or to the linked list comprises:
    determining whether the space allocated to the queue in the cache is full, and determining whether the queue is being serviced in the linked list;
    when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list, sending the queue data to the linked list;
    when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list, sending the queue data to the cache.
  3. The data management method according to claim 1 or 2, wherein sending the queue data to the linked list for storage comprises:
    reading the linked list address of the queue according to the queue number, and storing the queue data into the linked list according to the linked list address;
    updating the occupancy information and free-space information of the linked list.
  4. The data management method according to claim 1 or 2, wherein scheduling the queue data from the linked list through the cache comprises:
    sending a scheduling instruction to the linked list through the cache;
    when the linked list receives the scheduling instruction, reading the linked list address of the queue according to the queue number, and reading the queue data from the linked list according to the linked list address.
  5. The data management method according to claim 4, wherein reading the queue data from the linked list according to the linked list address comprises:
    buffering the linked list address, and after the linked list has successfully stored the queue data, reading the queue data from the linked list according to the linked list address.
  6. A data management apparatus, the apparatus comprising: a cache and a linked list;
    a push module configured to, when queue data is obtained, determine whether the queue data is to be sent to the cache or to the linked list, and when it is determined that the queue data is to be sent to the cache, send the queue data to the cache for storage;
    a linked list enqueue module configured to, when it is determined that the queue data is to be sent to the linked list, send the queue data to the linked list for storage;
    the cache configured to, when queue data is scheduled, look up the queue data in the cache, and when determining that part or all of the queue data is stored in the linked list, send a scheduling instruction to a pull module;
    the pull module configured to, upon receiving the scheduling instruction sent by the cache, send queue request information to a linked list dequeue module;
    the linked list dequeue module configured to, upon receiving the queue request information, schedule the queue data from the linked list.
  7. The data management apparatus according to claim 6, wherein the push module is further configured to determine whether the space allocated to the queue in the cache is full, and to determine whether the queue is being serviced in the linked list; and when the space allocated to the queue in the cache is not full and the queue is not being serviced in the linked list, to send the queue data to the cache;
    the linked list enqueue module is further configured to, when the space allocated to the queue in the cache is full, or the queue is being serviced in the linked list, send the queue data to the linked list.
  8. The data management apparatus according to claim 6 or 7, wherein the linked list enqueue module is further configured to read the linked list address of the queue according to the queue number, store the queue data into the linked list according to the linked list address, and update the occupancy information and free-space information of the linked list.
  9. The data management apparatus according to claim 6 or 7, wherein the linked list dequeue module is further configured to, upon receiving the queue request information, read the linked list address of the queue according to the queue number, and read the queue data from the linked list according to the linked list address.
  10. The data management apparatus according to claim 6 or 7, wherein the apparatus further comprises:
    a dequeue instruction buffer module configured to buffer the linked list address, and after the linked list has successfully stored the queue data, send the linked list address to the linked list dequeue module;
    the linked list dequeue module further configured to, after receiving the linked list address, read the queue data from the linked list according to the linked list address.
  11. A computer storage medium, wherein the computer storage medium stores computer-executable instructions configured to perform the data management method according to any one of claims 1 to 5.
PCT/CN2017/071323 2016-02-01 2017-01-16 Data management method and apparatus, and computer storage medium WO2017133439A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610069676.5A CN107025184B (zh) 2016-02-01 2016-02-01 Data management method and apparatus
CN201610069676.5 2016-02-01

Publications (1)

Publication Number Publication Date
WO2017133439A1 true WO2017133439A1 (zh) 2017-08-10

Family

ID=59499309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/071323 WO2017133439A1 (zh) 2016-02-01 2017-01-16 一种数据管理方法及装置、计算机存储介质

Country Status (2)

Country Link
CN (1) CN107025184B (zh)
WO (1) WO2017133439A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835898B * 2017-11-29 2024-03-01 北京忆芯科技有限公司 Memory allocator
CN108763109B * 2018-06-13 2022-04-26 成都心吉康科技有限公司 Data storage method and device, and application thereof
CN111782578B * 2020-05-29 2022-07-12 西安电子科技大学 Cache control method and system, storage medium, computer device, and application

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69127936T2 * 1990-06-29 1998-05-07 Digital Equipment Corp Bus protocol for a processor with write-back cache
US6915360B2 (en) * 2001-04-06 2005-07-05 Texas Instruments Incorporated Cell buffering system with priority cache in an ATM system
US8352265B1 (en) * 2007-12-24 2013-01-08 Edward Lin Hardware implemented backend search engine for a high-rate speech recognition system
CN101499956B (zh) * 2008-01-31 2012-10-10 中兴通讯股份有限公司 Hierarchical buffer management system and method
CN101246460A (zh) * 2008-03-10 2008-08-20 华为技术有限公司 Cache data write system and method, and cache data read system and method
CN101621469B (zh) * 2009-08-13 2012-01-04 杭州华三通信技术有限公司 Data packet access control apparatus and method
US8266344B1 (en) * 2009-09-24 2012-09-11 Juniper Networks, Inc. Recycling buffer pointers using a prefetch buffer
WO2011157136A2 (zh) * 2011-05-31 2011-12-22 华为技术有限公司 Data management method and apparatus, and data chip
CN102546417B (zh) * 2012-01-14 2014-07-23 西安电子科技大学 Network-on-chip router scheduling method based on network information
CN103514177A (zh) * 2012-06-20 2014-01-15 盛趣信息技术(上海)有限公司 Data storage method and system
CN104125168A (zh) * 2013-04-27 2014-10-29 中兴通讯股份有限公司 Scheduling method and system for shared resources
CN104462549B (zh) * 2014-12-25 2018-03-23 瑞斯康达科技发展股份有限公司 Data processing method and apparatus
CN106302238A (zh) * 2015-05-13 2017-01-04 深圳市中兴微电子技术有限公司 Queue management method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0466339A2 (en) * 1990-07-13 1992-01-15 International Business Machines Corporation A method of passing task messages in a data processing system
US5951658A (en) * 1997-09-25 1999-09-14 International Business Machines Corporation System for dynamic allocation of I/O buffers for VSAM access method based upon intended record access where performance information regarding access is stored in memory
CN1378143A * 2001-03-30 2002-11-06 深圳市中兴通讯股份有限公司 Method for implementing fast data transfer
CN1694433A * 2001-03-30 2005-11-09 中兴通讯股份有限公司 Method for implementing fast data transfer
CN1694434A * 2001-03-30 2005-11-09 中兴通讯股份有限公司 Method for implementing fast data transfer

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113032295A * 2021-02-25 2021-06-25 西安电子科技大学 Two-level data packet caching method and system, and application thereof
CN117389915A * 2023-12-12 2024-01-12 北京象帝先计算技术有限公司 Cache system, read command scheduling method, system on chip, and electronic device
CN117389915B 2023-12-12 2024-04-16 北京象帝先计算技术有限公司 Cache system, read command scheduling method, system on chip, and electronic device

Also Published As

Publication number Publication date
CN107025184A (zh) 2017-08-08
CN107025184B (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
WO2017133439A1 (zh) Data management method and apparatus, and computer storage medium
WO2016179968A1 (zh) Queue management method and apparatus, and storage medium
US8325603B2 (en) Method and apparatus for dequeuing data
WO2018107681A1 (zh) Processing method and device in queue operations, and computer storage medium
US11425057B2 (en) Packet processing
US8886787B2 (en) Notification for a set of sessions using a single call issued from a connection pool
US10248350B2 (en) Queue management method and apparatus
WO2023193441A1 (zh) Multi-core circuit, data exchange method, electronic device, and storage medium
CN107613529B (zh) Message processing method and base station
CN105162724A (zh) Data enqueue and dequeue method and queue management unit
JP2016195375A (ja) Method and apparatus utilizing multiple linked memory lists
US8640135B2 (en) Schedule virtual interface by requesting locken tokens differently from a virtual interface context depending on the location of a scheduling element
CN106095604A (zh) Inter-core communication method and apparatus for multi-core processor
WO2017219993A1 (zh) Packet scheduling
CN111949568A (zh) Packet processing method and apparatus, and network chip
US20150304124A1 (en) Message Processing Method and Device
WO2020147253A1 (zh) Data read/write method and apparatus, switch chip, and storage medium
CN111400212B (zh) Transmission method and device based on remote direct memory access
CN112698959A (zh) Multi-core communication method and apparatus
CN113010297A (zh) Message-queue-based database write scheduler, write method, and storage medium
CN110058816A (zh) DDR-based high-speed multi-user queue manager and method
EP3440547B1 (en) QoS class based servicing of requests for a shared resource
CN116414534A (zh) Task scheduling method and apparatus, integrated circuit, network device, and storage medium
CN107911317B (zh) Packet scheduling method and apparatus
CN115955441A (zh) Management and scheduling method and device based on TSN queues

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17746759

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17746759

Country of ref document: EP

Kind code of ref document: A1