WO2022127873A1 - Method for realizing high-speed scheduling of network chip, device, and storage medium - Google Patents

Method for realizing high-speed scheduling of network chip, device, and storage medium

Info

Publication number
WO2022127873A1
WO2022127873A1 · PCT/CN2021/138905 · CN2021138905W
Authority
WO
WIPO (PCT)
Prior art keywords
queue
linked list
memory
address
queue number
Prior art date
Application number
PCT/CN2021/138905
Other languages
French (fr)
Chinese (zh)
Inventor
徐子轩
夏杰
Original Assignee
苏州盛科通信股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州盛科通信股份有限公司
Publication of WO2022127873A1 publication Critical patent/WO2022127873A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues

Definitions

  • the present invention claims the priority of the Chinese patent application with application number 202011491125.0, entitled "Method, Device and Storage Medium for Realizing High-speed Scheduling of Network Chips", filed with the China Patent Office on December 16, 2020, the entire contents of which are incorporated herein by reference.
  • the embodiments of the present invention belong to the field of communication technologies, and mainly relate to a method, a device and a storage medium for realizing high-speed scheduling of network chips.
  • a typical data packet storage-scheduling model is shown in FIG. 1; the input signal includes: {queue number, data, linked list address (write information address)}.
  • the storage scheduling model is mainly composed of the following modules: a data memory, which caches "data” according to the "write information address" of the input signal.
  • the linked list control module is used to control the conventional linked list "enqueue" and "dequeue" operations; linked list control belongs to a general technical category and is not described in detail in the embodiments of the present invention; the linked list control module mainly includes four submodules: {head pointer memory, tail pointer memory, linked list memory, queue read status}.
  • the head pointer memory is used to store the storage address pointed to by the data head pointer
  • the tail pointer memory is used to store the storage address pointed to by the data tail pointer
  • the linked list memory is used to store the storage address corresponding to the data;
  • the queue read status is used to indicate the status of the linked list control module: when it is "0", no other data in the queue is waiting to be scheduled; when it is "1", other data in the queue is waiting to be scheduled.
  • the scheduler: if the queue read status is 1, that is, the queue is not empty, the queue participates in scheduling; the scheduler sends the scheduled queue to the "linked list control module" to obtain the read "linked list address" of that queue and triggers the "linked list control module" to update the queue read status information.
  • the read information module accesses the data memory according to the read "linked list address" obtained by the scheduler, obtains the data, and outputs it.
  • since the queue status directly determines whether the current queue can participate in scheduling, there is a delay limit between the scheduler generating a "queue number" and the triggered "linked list control module" updating the "queue status".
  • when the scheduler is complex, the number of queues is large, and the linked list memory is large, it often takes multiple clock cycles to complete the update of the "queue status". For example, in a complex system, completing one "queue status" update takes J clock cycles; if the system requires a minimum scheduling interval of K clock cycles per queue (K<J), the scheduling performance of the queue cannot be guaranteed. Based on this, typical scheduler design principles are only applicable to systems with low complexity and low rate requirements.
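As a rough illustration with hypothetical numbers (not from the text): if one "queue status" update takes J = 8 cycles but the system demands a K = 2-cycle scheduling interval, a single linked list per queue cannot keep up. The embodiments below resolve this by splitting each queue into Y secondary linked lists; the smallest sufficient Y (so that each secondary list is revisited no sooner than its own update latency, Y·K ≥ J) can be sketched as:

```python
import math

def min_secondary_lists(J: int, K: int) -> int:
    """Smallest Y such that each of the Y secondary queues, visited
    round-robin every K cycles, is revisited no sooner than its own
    J-cycle state-update latency (i.e. Y*K >= J)."""
    return math.ceil(J / K)

# Hypothetical figures: a state update takes J=8 cycles, but the system
# demands a K=2-cycle scheduling interval per queue.
J, K = 8, 2
Y = min_secondary_lists(J, K)
print(Y)                 # 4 secondary linked lists per queue
print(Y * K >= J)        # each secondary queue's revisit interval covers J
```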
  • the purpose of the embodiments of the present invention is to provide a method for realizing high-speed scheduling of network chips, a network chip, and a readable storage medium.
  • an embodiment of the present invention provides a method for realizing high-speed scheduling of network chips, the method including: configuring Y secondary linked lists with the same structure for each storage queue, where Y is an integer and Y ≥ ⌈J/K⌉; J is the queue status update period; K is the minimum scheduling interval period of each queue; each secondary linked list includes: a head pointer memory, a tail pointer memory, and a secondary linked list memory;
  • the secondary queue number includes: the original queue number and the secondary number offset value.
  • a queue polling status register is configured
  • the method further includes: configuring a secondary queue state memory for each secondary linked list, and judging whether the queried secondary linked list is empty by querying the state of the secondary queue state memory.
  • replacing the original queue number carried by the current message with the secondary queue number, and performing the linked list operation with the secondary queue number corresponding to the current message as the new queue number includes:
  • if the linked list operation is an enqueue operation, then:
  • if the matching secondary linked list is not empty, the linked list address carried by the current message is used as the value and the value of the tail pointer register as the address to write into the secondary linked list memory matched by the current secondary linked list; at the same time, the secondary queue number carried by the current message is used as the address, and the linked list address carried by the current message is written as the value, replacing the tail pointer register matched by the current secondary linked list.
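The enqueue behavior just described can be sketched as a toy model (a minimal sketch; the register and memory names are illustrative, not from the text):

```python
class SecondaryList:
    """Toy model of one secondary linked list: head/tail pointer
    registers, a next-pointer memory, and a one-bit state (non-empty)."""
    def __init__(self):
        self.head = None
        self.tail = None
        self.next_mem = {}      # linked list memory: address -> next address
        self.non_empty = False  # secondary queue state bit

    def enqueue(self, addr):
        if not self.non_empty:
            # empty list: the message's linked list address becomes
            # both head and tail, and the state bit is enabled
            self.head = self.tail = addr
            self.non_empty = True
        else:
            # non-empty: chain the address after the old tail,
            # then replace the tail pointer register
            self.next_mem[self.tail] = addr
            self.tail = addr

sl = SecondaryList()
for a in (100, 101, 102):
    sl.enqueue(a)
print(sl.head, sl.tail, sl.next_mem)   # 100 102 {100: 101, 101: 102}
```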
  • the method further includes: configuring the final scheduler, the final queue state memory, the linked list address memory, and configuring the secondary scheduler, the secondary scheduler state memory, the secondary total queue state memory;
  • the final-level scheduler executes the final-level scheduling logic
  • the final queue state memory is used to store the storage state of each queue
  • the linked list address memory is used to store the storage address of any data
  • the secondary scheduler executes secondary scheduling logic
  • the secondary total queue state memory is used to store the storage state of each secondary queue
  • when the secondary queue state memory is enabled, whether the secondary scheduling logic can be executed on the current secondary queue is determined by querying whether the corresponding storage location of the secondary scheduler state memory is enabled.
  • the method further includes:
  • if the linked list operation is a dequeue operation corresponding to the current queue, the final scheduling logic and the secondary scheduling logic are executed;
  • the final scheduling logic includes:
  • M12: access the linked list address memory with the first secondary queue number to obtain the access address, and read the data from the data memory at that access address;
  • the secondary scheduling logic includes:
  • using the head pointer as the address, the corresponding secondary linked list memory is accessed to obtain the next-hop pointer; the obtained next-hop pointer is taken as the value and, with the second secondary queue number as the address, written into the corresponding head pointer memory;
  • the method further includes:
  • step M11 includes:
  • if the secondary queue state memory is disabled, the first secondary queue number is used as the address, and the corresponding secondary scheduler state memory is disabled;
  • the secondary scheduling logic is executed preferentially with the first secondary queue number.
  • an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and when the processor executes the program, the steps in the above method for realizing high-speed scheduling of network chips are implemented.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above method for realizing high-speed scheduling of network chips are implemented.
  • the beneficial effects of the embodiments of the present invention are: the method, device, and storage medium for realizing high-speed scheduling of network chips according to the embodiments of the present invention cut off, through hierarchical scheduling, the coupling between the queue status update period of the entire queue and the minimum queue scheduling interval, greatly improving the flexibility of network chip design.
  • FIG. 1 is a schematic structural diagram of a data storage-scheduling model provided by the background technology
  • FIG. 2 is a schematic flowchart of a method for implementing high-speed scheduling of network chips provided by an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a scheduling model according to Embodiment 1 of the present invention.
  • a method for implementing high-speed scheduling of network chips includes:
  • each secondary linked list includes: head pointer memory, tail pointer memory and secondary linked list memory;
  • the secondary queue number includes: the original queue number and the secondary number offset value.
  • the linked list control module sets Y secondary linked lists to store the link status of each queue respectively.
  • the scheduling interval of the queues on the scheduler can be reduced to J/Y; accordingly, the value of Y can be taken as the smallest integer satisfying Y ≥ J/K, that is, Y = ⌈J/K⌉.
  • a queue polling state register is configured; after any message is received, the polling state register is queried to obtain the secondary number offset value matched by the current message.
  • the secondary number offset value is in fact the number of the secondary linked list written during the data enqueue operation; for any queue, the number of secondary linked lists is Y. Taking Y equal to 2 as an example, the secondary number offset value takes two values, strictly alternating 0, 1; taking Y equal to 4 as an example, it takes four values, strictly alternating 0, 1, 2, 3;
  • the secondary number offset values can be set as required, as long as different secondary linked lists can be distinguished; this is not described further here.
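A minimal sketch of such a polling state register, assuming the strict 0..Y-1 rotation described above (class and method names are illustrative):

```python
class PollRegister:
    """Per-queue polling state register handing out secondary-number
    offsets in strict rotation (Y=2 -> 0,1,0,1,...; Y=4 -> 0,1,2,3,...)."""
    def __init__(self, Y: int):
        self.Y = Y
        self.state = 0          # next offset to hand out

    def next_offset(self) -> int:
        off = self.state
        self.state = (self.state + 1) % self.Y  # advance the rotation
        return off

reg = PollRegister(Y=4)
print([reg.next_offset() for _ in range(6)])   # [0, 1, 2, 3, 0, 1]
```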
  • the queue information carried is represented as {secondary queue number, data, linked list address}; correspondingly, the linked list address is used as the address and the data as the value to write the data memory, and the linked list operation is performed with the secondary queue number.
  • a secondary queue state memory is configured for each secondary linked list, and whether the queried secondary linked list is empty is determined by querying the state of the secondary queue state memory. If the secondary queue state memory is disabled, its corresponding secondary linked list is empty, and the data can be linked into the current secondary linked list; if the secondary queue state memory is enabled, its corresponding secondary linked list is not empty, and the data must wait until the data corresponding to the current secondary linked list has been read before it can be linked into the current secondary linked list.
  • step S2 replaces the original queue number carried by the current message with the secondary queue number, and performs the linked list operation with the secondary queue number corresponding to the current message as the new queue number;
  • the linked list operation is an enqueue operation.
  • if the matching secondary linked list is not empty, the linked list address carried by the current message is used as the value and the value of the tail pointer register as the address to write into the secondary linked list memory matched by the current secondary linked list; at the same time, the secondary queue number carried by the current message is used as the address, and the linked list address carried by the current message is written as the value, replacing the tail pointer register matched by the current secondary linked list.
  • the corresponding secondary linked list is obtained through the secondary queue number, whether the secondary linked list is empty is determined by querying the secondary queue state memory corresponding to it, and the corresponding secondary linked list is then linked.
  • the linked list operation is likewise performed with the secondary queue number, thereby satisfying the system's scheduling performance requirements.
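The text states that the secondary queue number is composed of {original queue number, secondary number offset value} but does not pin down a bit layout. One possible packing, shown purely as an assumption for illustration, multiplies the original number by Y and adds the offset:

```python
def secondary_queue_number(orig: int, offset: int, Y: int) -> int:
    """One possible encoding (an assumption, not specified in the text):
    pack the original queue number and the secondary-number offset into
    a single wider number, orig * Y + offset."""
    assert 0 <= offset < Y
    return orig * Y + offset

def split_secondary(sec: int, Y: int):
    """Recover (original queue number, offset) from the packed number."""
    return divmod(sec, Y)

sec = secondary_queue_number(orig=7, offset=2, Y=4)
print(sec, split_secondary(sec, 4))   # 30 (7, 2)
```

In hardware, this packing amounts to appending log2(Y) offset bits below the original queue number, so the original number is recoverable by a shift.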
  • the method further includes: configuring a final scheduler, a final queue state memory, a linked list address memory, and configuring a secondary scheduler, a secondary scheduler state memory, secondary total queue state memory;
  • the final-level scheduler executes the final-level scheduling logic
  • the final queue state memory is used to store the storage state of each queue; the final queue state memory is queried with the initial queue number, and if the storage location corresponding to the queue number is enabled, the queue stores data that can be read out; if it is not enabled, the queue stores no data.
  • the linked list address memory is used to store the storage address of any data
  • the secondary scheduler executes secondary scheduling logic
  • the secondary total queue state memory is used to store the storage state of each secondary queue; for example, for a queue corresponding to two secondary linked lists, two locations of the secondary total queue state memory store the states of the corresponding two secondary linked lists, that is, the two positions correspond to the enabled states of the two secondary queue state memories respectively; correspondingly, when the position corresponding to at least one of the two secondary linked lists is enabled, there is data in the corresponding queue on which queue operations can be performed.
  • the secondary scheduling logic can be executed on the current secondary queue only when the position of the secondary scheduler state memory corresponding to the enabled secondary queue state memory is also enabled.
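A sketch of this two-condition check, using dictionaries as stand-ins for the two state memories (names are illustrative):

```python
def can_run_secondary_scheduling(queue_state: dict,
                                 sched_state: dict,
                                 sec_q: int) -> bool:
    """Secondary scheduling runs only when BOTH the secondary queue
    state memory and the matching position of the secondary scheduler
    state memory are enabled, as described above."""
    return bool(queue_state.get(sec_q)) and bool(sched_state.get(sec_q))

queue_state = {0: True, 1: True, 2: False}   # secondary queue state memory
sched_state = {0: True, 1: False, 2: True}   # secondary scheduler state memory
print([q for q in range(3)
       if can_run_secondary_scheduling(queue_state, sched_state, q)])  # [0]
```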
  • when any queue reads data, a linked list operation is performed with the secondary queue number;
  • the linked list operation is a dequeue operation corresponding to the current queue, and the final scheduling logic and the secondary scheduling logic are executed;
  • the final scheduling logic includes:
  • M11: schedule the queue number corresponding to the message currently being dequeued, poll each secondary queue corresponding to that queue number in turn, and obtain the secondary queue number stored in the first polled secondary queue whose secondary queue state memory is enabled, denoted the first secondary queue number.
  • the final scheduler schedules a queue number according to the actual queue state and the preset scheduling policy; this queue number is the initial queue number carried by the data. Optionally, the secondary total queue state memory is accessed with this queue number to obtain the status of the Y secondary members of the queue. In particular, each secondary member corresponding to the queue is still accessed by polling to obtain a valid secondary queue number, that is, the first secondary queue number. It should be noted that, at the same time, the first secondary queue number obtained by the final scheduler and the second secondary queue number scheduled by the secondary scheduler described below may be the same or different; this is not elaborated further here.
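The polling for a valid secondary member can be sketched as follows; keying the state memory by `(queue, offset)` pairs is an assumed representation, not from the text:

```python
def first_enabled_secondary(qnum: int, Y: int, sec_state: dict, start: int = 0):
    """Poll the Y secondary members of queue `qnum` in round-robin
    order starting at `start`, returning the first whose state bit is
    set (the 'first secondary queue number'), or None if none is set."""
    for i in range(Y):
        off = (start + i) % Y
        if sec_state.get((qnum, off)):
            return (qnum, off)
    return None

# Queue 5 has Y=4 secondary members; only offsets 2 and 3 hold data.
sec_state = {(5, 0): False, (5, 1): False, (5, 2): True, (5, 3): True}
print(first_enabled_secondary(5, 4, sec_state))   # (5, 2)
```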
  • the access address is the head pointer cached in the head pointer register during the execution of the secondary scheduling logic described below.
  • the secondary scheduling logic includes:
  • the secondary scheduler schedules the second secondary queue number according to the predetermined scheduling policy.
  • the secondary linked lists must be scheduled strictly in polling order; otherwise, the data within the queue will be out of order.
  • using the head pointer as the address, the corresponding secondary linked list memory is accessed to obtain a next-hop pointer; the obtained next-hop pointer is taken as the value and, with the second secondary queue number as the address, written into the corresponding head pointer memory.
  • the dequeue operation of the secondary linked list is performed using the second secondary queue number scheduled by the secondary scheduler.
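A toy sketch of this dequeue step, pre-loaded with three chained addresses (the addresses and structure are illustrative, not from the text):

```python
class SecondaryList:
    """Toy secondary linked list, initialized already holding three
    chained addresses, to illustrate the dequeue step described above."""
    def __init__(self):
        self.head, self.tail = 100, 102
        self.next_mem = {100: 101, 101: 102}  # secondary linked list memory
        self.non_empty = True                 # secondary queue state bit

    def dequeue(self):
        addr = self.head               # access address for the data memory
        if addr == self.tail:
            # last element read out: the list becomes empty
            self.head = self.tail = None
            self.non_empty = False
        else:
            # the next-hop pointer becomes the new head (written back
            # to the head pointer memory for this secondary queue)
            self.head = self.next_mem.pop(addr)
        return addr

sl = SecondaryList()
print([sl.dequeue() for _ in range(3)], sl.non_empty)  # [100, 101, 102] False
```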
  • the linked list needs to be updated, that is:
  • the queue state corresponding to the parsed initial queue number is set to be enabled.
  • step M11 also includes: the final scheduler initiates a scheduling request to the secondary scheduler with the first secondary queue number; in this process, the secondary queue state memory is accessed with the first secondary queue number, and if the secondary queue state memory is disabled, the first secondary queue number is used as the address to disable the corresponding secondary scheduler state memory;
  • the secondary scheduling logic is executed preferentially with the first secondary queue number.
  • if the secondary queue state memory is disabled, there is no data to be scheduled; if the secondary queue state memory is enabled, there is data to be scheduled, and the secondary scheduling logic continues to be executed until all data is scheduled; that is, after the final scheduler completes the dequeue operation, the secondary scheduler needs to be triggered in the reverse direction, thereby achieving high-speed scheduling.
  • an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory stores a computer program that can run on the processor, and the processor implements the above when executing the program. Describe the steps in the method for realizing high-speed scheduling of network chips.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps in the above-described method for implementing high-speed scheduling of network chips.
  • the method, device, and storage medium for realizing high-speed scheduling of network chips cut off, through specially designed hierarchical scheduling and without changing the scheduler, the coupling between the queue status update period of the entire queue and the minimum queue scheduling interval, achieving high-speed scheduling and greatly improving the flexibility of network chip design.
  • the modules described as separate components may or may not be physically separated; components shown as modules are logic modules, which may be located in one module of the chip or distributed across multiple data processing modules in the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this implementation. Those of ordinary skill in the art can understand and implement it without creative effort.
  • This application can be used in many general-purpose or special-purpose communication chips. For example: switch chips, router chips, server chips, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present invention provide a method for realizing high-speed scheduling of a network chip, a device, and a storage medium. The method comprises: configuring, for each storage queue, Y secondary linked lists having the same structure; any queue receiving data; polling Y secondary linked lists of a current queue, and obtaining the serial number of a currently matched secondary linked list as a secondary serial number offset value corresponding to a current packet; and replacing, with a secondary queue serial number, an original queue serial number carried in the current packet, and taking the secondary queue serial number corresponding to the current packet as a new queue serial number for linked list operation, the secondary queue serial number comprising the original queue serial number and the secondary serial number offset value. In the embodiments of the present invention, by means of hierarchical scheduling, the coupling between a queue status update cycle and a queue minimum scheduling interval of a queue is cut off, thereby greatly improving design flexibility of the network chip.

Description

Method, device and storage medium for realizing high-speed scheduling of network chips
The present invention claims the priority of the Chinese patent application with application number 202011491125.0, entitled "Method, Device and Storage Medium for Realizing High-speed Scheduling of Network Chips", filed with the China Patent Office on December 16, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present invention belong to the field of communication technologies, and mainly relate to a method, a device and a storage medium for realizing high-speed scheduling of network chips.
Background Art
In high-density network chips, there are a large number of data packet storage-scheduling requirements. A typical data packet storage-scheduling model is shown in FIG. 1; the input signal includes: {queue number, data, linked list address (write information address)}.
The storage-scheduling model is mainly composed of the following modules. A data memory caches the "data" according to the "write information address" of the input signal. A linked list control module controls the conventional linked list "enqueue" and "dequeue" operations; linked list control belongs to a general technical category and is not described in detail in the embodiments of the present invention. The linked list control module mainly includes four submodules: {head pointer memory, tail pointer memory, linked list memory, queue read status}. The head pointer memory stores the storage address pointed to by the data head pointer, the tail pointer memory stores the storage address pointed to by the data tail pointer, and the linked list memory stores the storage address corresponding to the data. The queue read status indicates the status of the linked list control module: when it is "0", no other data in the queue is waiting to be scheduled; when it is "1", other data in the queue is waiting to be scheduled. A scheduler: if the queue read status is 1, that is, the queue is not empty, the queue participates in scheduling; the scheduler sends the scheduled queue to the linked list control module to obtain the read "linked list address" of that queue and triggers the linked list control module to update the queue read status information. A read information module accesses the data memory according to the read "linked list address" obtained by the scheduler, obtains the data, and outputs it.
Since the queue status directly determines whether the current queue can participate in scheduling, there is a delay limit between the scheduler generating a "queue number" and the triggered update of the "queue status" by the linked list control module. When the scheduler is complex, the number of queues is large, and the linked list memory is large, it often takes multiple clock cycles to complete the update of the "queue status". For example, in a complex system, completing one "queue status" update takes J clock cycles; if the system requires a minimum scheduling interval of K clock cycles per queue (K<J), the scheduling performance of the queue cannot be guaranteed. Based on this, typical scheduler design principles are only applicable to systems with low complexity and low rate requirements.
SUMMARY OF THE INVENTION
In order to solve the above technical problems, the purpose of the embodiments of the present invention is to provide a method for realizing high-speed scheduling of network chips, a network chip, and a readable storage medium.
In order to achieve one of the above purposes, an embodiment of the present invention provides a method for realizing high-speed scheduling of network chips, the method including: configuring Y secondary linked lists with the same structure for each storage queue, where Y is an integer and Y ≥ ⌈J/K⌉; J is the queue status update period; K is the minimum scheduling interval period of each queue; each secondary linked list includes: a head pointer memory, a tail pointer memory, and a secondary linked list memory;
any queue receives data;
polling the Y secondary linked lists of the current queue, and obtaining the currently matched secondary linked list number as the secondary number offset value corresponding to the current message;
replacing the original queue number carried by the current message with the secondary queue number, and performing the linked list operation with the secondary queue number corresponding to the current message as the new queue number, the secondary queue number including: the original queue number and the secondary number offset value.
As an optional implementation of the present invention, a queue polling status register is configured;
after any message is received, the polling status register is queried to obtain the secondary number offset value matched by the current message.
As an optional implementation of the present invention, the method further includes: configuring a secondary queue state memory for each secondary linked list, and determining whether the queried secondary linked list is empty by querying the state of the secondary queue state memory.
As an optional implementation of the present invention, replacing the original queue number carried by the current message with the secondary queue number, and performing the linked list operation with the secondary queue number corresponding to the current message as the new queue number, includes:
if the linked list operation is an enqueue operation, then:
querying the currently matched secondary linked list; if the matched secondary linked list is empty, using the secondary queue number carried by the current message as the address and the linked list address carried by the current message as the value, writing them into the head pointer register and tail pointer register matched by the current secondary linked list respectively; at the same time, using the secondary queue number as the address, setting the secondary queue state memory to enabled;
if the matched secondary linked list is not empty, using the linked list address carried by the current message as the value and the value of the tail pointer register as the address, writing into the secondary linked list memory matched by the current secondary linked list; at the same time, using the secondary queue number carried by the current message as the address, writing the linked list address carried by the current message as the value to replace the tail pointer register matched by the current secondary linked list.
As an optional embodiment of the present invention, the method further includes: configuring a final scheduler, a final queue state memory, and a linked-list address memory, and configuring a secondary scheduler, a secondary scheduler state memory, and a secondary total-queue state memory;
the final scheduler executes the final scheduling logic;
the final queue state memory is used to store the storage state of each queue;
the linked-list address memory is used to store the storage address of any piece of data;
the secondary scheduler executes the secondary scheduling logic;
the secondary total-queue state memory is used to store the storage state of each secondary queue;
when a secondary queue state memory is enabled, whether the secondary scheduling logic can be executed on the current secondary queue is determined by querying whether the corresponding storage location of the secondary scheduler state memory is enabled.
As an optional embodiment of the present invention, the method further includes:
when any queue reads out data, performing the linked-list operation with the secondary queue number;
when the linked-list operation is a dequeue operation corresponding to the current queue, executing the final scheduling logic and the secondary scheduling logic;
the final scheduling logic includes:
M11: scheduling the queue number corresponding to the packet currently being dequeued, accessing each secondary queue corresponding to the queue number in turn in a round-robin manner, and obtaining the secondary queue number stored by the first polled secondary queue whose secondary queue state memory is enabled, denoted as the first secondary queue number;
M12: accessing the linked-list address memory according to the first secondary queue number to obtain an access address, and accessing the data memory with the access address to read out the data;
the secondary scheduling logic includes:
M21: when the secondary queue state memory corresponding to the scheduled queue is enabled and the secondary scheduler state memory is not enabled, scheduling the secondary queue number matched by the current queue in a round-robin manner, denoted as the second secondary queue number; then, using the second secondary queue number as the address, setting the secondary scheduler state memory to enabled;
M22: accessing the secondary linked list corresponding to the scheduled second secondary queue number, obtaining the head pointer and the tail pointer from the head pointer memory and the tail pointer memory corresponding to the secondary linked list, respectively, and writing the obtained head pointer as the value, with the second secondary queue number as the address, into the linked-list address memory;
determining whether the obtained head pointer and tail pointer are the same; if so, using the second secondary queue number as the address, setting the secondary queue state memory of the corresponding secondary linked list to not enabled; if not, accessing the corresponding secondary linked-list memory according to the obtained head pointer to obtain the next-hop pointer, and writing the obtained next-hop pointer as the value, with the second secondary queue number as the address, into the corresponding head pointer memory;
M23: using the scheduled second secondary queue number as the address, setting the corresponding secondary total-queue state to enabled; at the same time, parsing the second secondary queue number to obtain its corresponding original queue number and secondary number offset value; and, using the original queue number as the address, setting the corresponding final queue state memory to enabled.
As an optional embodiment of the present invention, the method further includes:
after the scheduling of any piece of data is completed, setting the secondary total-queue state memory in the secondary linked list corresponding to the first secondary queue number to not enabled; at the same time, parsing the first secondary queue number to obtain the initial queue number; and, if the secondary queue state memories of all secondary queues corresponding to the initial queue number are not enabled, setting the queue state corresponding to the parsed initial queue number to not enabled.
As an optional embodiment of the present invention, step M11 includes:
accessing the secondary queue state memory with the first secondary queue number;
if the secondary queue state memory is not enabled, using the first secondary queue number as the address, setting the corresponding secondary scheduler state memory to not enabled;
if the secondary queue state memory is enabled, executing the secondary scheduling logic preferentially with the first secondary queue number.
To achieve one of the above objects of the invention, an embodiment of the present invention provides an electronic device including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps of the method for realizing high-speed scheduling of a network chip as described above.
To achieve one of the above objects of the invention, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for realizing high-speed scheduling of a network chip as described above.
Compared with the prior art, the beneficial effects of the embodiments of the present invention are as follows: the method, device, and storage medium for realizing high-speed scheduling of a network chip according to the embodiments of the present invention cut off, through hierarchical scheduling, the coupling between the queue state update period of a whole queue and the minimum scheduling interval of a queue, greatly improving the flexibility of network chip design.
Description of the Drawings
FIG. 1 is a schematic structural diagram of the data storage and scheduling model provided in the background art;
FIG. 2 is a schematic flowchart of the method for realizing high-speed scheduling of a network chip provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a scheduling model according to Embodiment 1 of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the optional implementations shown in the accompanying drawings. These implementations, however, do not limit the embodiments of the present invention, and structural, methodological, or functional changes made by those of ordinary skill in the art on the basis of these implementations are all included within the protection scope of the embodiments of the present invention.
As shown in FIG. 2 and FIG. 3, an embodiment of the present invention provides a method for realizing high-speed scheduling of a network chip, the method including:
S1. Configure, for each storage queue, Y secondary linked lists of identical structure, where Y is an integer and Y ≥ J/K, J is the queue state update period, and K is the minimum scheduling interval period of each queue; each secondary linked list includes a head pointer memory, a tail pointer memory, and a secondary linked-list memory;
S2. Any queue receives a packet;
query the Y secondary linked lists of the current queue in a round-robin manner, and obtain the currently matched secondary linked-list number as the secondary number offset value corresponding to the current packet;
replace the original queue number carried by the current packet with the secondary queue number, and perform the linked-list operation with the secondary queue number corresponding to the current packet as the new queue number, where the secondary queue number includes the original queue number and the secondary number offset value.
For step S1, the linked-list control module sets up Y secondary linked lists to store the link state of each queue separately; in this way, the scheduling interval of a queue on the scheduler can be reduced to J/Y clock cycles. Accordingly, it can be seen from this expression that the larger the value of Y, the shorter the interval at which the scheduler schedules a queue; at the same time, the larger the value of Y, the more storage space is occupied. In an optional implementation of the embodiment of the present invention, the value of Y may be the smallest integer greater than or equal to J/K.
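The sizing rule can be illustrated with a short sketch (the values of J and K below are hypothetical, and the function name `min_sublists` is not from the patent):

```python
import math

def min_sublists(j_cycles: int, k_cycles: int) -> int:
    """Smallest integer Y satisfying Y >= J/K, i.e. Y = ceil(J/K)."""
    return math.ceil(j_cycles / k_cycles)

# Hypothetical numbers: queue state update period J = 12 cycles,
# minimum scheduling interval K = 5 cycles per queue.
Y = min_sublists(12, 5)   # ceil(12/5) = 3 secondary linked lists per queue
interval = 12 / Y         # effective per-queue interval J/Y = 4.0 cycles
```

Choosing the smallest such Y keeps the extra pointer storage minimal while still bringing the effective interval J/Y down to at most K.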
For step S2, in an optional implementation of the embodiment of the present invention, a queue polling state register is configured to facilitate, during the enqueue operation, querying the idle linked list corresponding to the current queue; after any packet is received, the polling state register is queried to obtain the secondary number offset value matched by the current packet.
The secondary number offset value is in fact the number of the secondary linked list written to during the data enqueue operation. For any queue, the number of secondary linked lists is Y. Taking Y equal to 2 as an example, there are 2 secondary number offset values, alternating strictly between 0 and 1; taking Y equal to 4 as an example, there are 4 secondary number offset values, alternating strictly among 0, 1, 2, and 3. Of course, the specific values of the secondary number offset can be set as required, as long as different secondary linked lists can be distinguished; details are not repeated here.
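The strict alternation described above can be modeled as a per-queue round-robin counter. This is a software illustration only — in the patent this role is played by the queue polling state register, and the class and method names below are assumptions:

```python
class RoundRobinOffset:
    """Per-queue round-robin generator for the secondary number offset.

    Emulates the queue polling state register: each enqueue on a queue
    takes the current offset and advances it, so a queue's offsets follow
    the strict 0, 1, ..., Y-1, 0, 1, ... order.
    """
    def __init__(self, num_queues: int, y: int):
        self.y = y
        self.next_offset = [0] * num_queues

    def take(self, queue: int) -> int:
        off = self.next_offset[queue]
        self.next_offset[queue] = (off + 1) % self.y  # advance for next enqueue
        return off

rr = RoundRobinOffset(num_queues=4, y=2)
offsets_q0 = [rr.take(0) for _ in range(4)]   # strict 0, 1 alternation
```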
The original queue number carried by the current packet is replaced with the secondary queue number to form new enqueue information. Accordingly, for a piece of data, the enqueue information it carries is represented as {secondary queue number, data, linked-list address}; the linked-list address is used as the address and the data as the value to write the data memory, and the secondary queue number is used for the linked-list operation.
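The {original queue number, secondary number offset} composition of the secondary queue number can be realized by simple arithmetic packing. The patent only requires that both fields be recoverable; the Y-based multiply/add encoding below is an assumption:

```python
def make_secondary_qnum(orig_qnum: int, offset: int, y: int) -> int:
    """Pack {original queue number, secondary number offset} into one
    secondary queue number (assumed encoding, not fixed by the patent)."""
    assert 0 <= offset < y
    return orig_qnum * y + offset

def parse_secondary_qnum(sec_qnum: int, y: int) -> tuple:
    """Recover (original queue number, offset), as needed e.g. in step M23."""
    return sec_qnum // y, sec_qnum % y

sec = make_secondary_qnum(orig_qnum=7, offset=1, y=2)   # 7*2 + 1 = 15
orig, off = parse_secondary_qnum(sec, y=2)              # (7, 1)
```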
Optionally, in this embodiment of the present invention, a secondary queue state memory is configured for each secondary linked list, and whether a queried secondary linked list is empty is determined by querying the state of its secondary queue state memory. If the secondary queue state memory is not enabled, its corresponding secondary linked list is empty, and the data can perform a link operation on the current secondary linked list; if the secondary queue state memory is enabled, its corresponding secondary linked list is not empty, and the data must wait until the data corresponding to the current secondary linked list has been read out before a link operation can be performed on the current secondary linked list.
Typically, one of the binary characters "0" and "1" denotes enabled and the other denotes not enabled; the other memories likewise represent enabled and not enabled with binary characters. In the embodiments of the present invention, the binary value "0" denotes not enabled and "1" denotes enabled.
Optionally, when step S2 replaces the original queue number carried by the current packet with the secondary queue number and performs the linked-list operation with the secondary queue number corresponding to the current packet as the new queue number, and the linked-list operation is an enqueue operation, the following steps are performed:
Query the currently matched secondary linked list. If the matched secondary linked list is empty, use the secondary queue number carried by the current packet as the address and the linked-list address carried by the current packet as the value, and write them into the head pointer register and the tail pointer register matched by the current secondary linked list, respectively; at the same time, use the secondary queue number as the address to set the secondary queue state memory to enabled.
If the matched secondary linked list is not empty, use the linked-list address carried by the current packet as the value and the value of the tail pointer register as the address, and write it into the secondary linked-list memory matched by the current secondary linked list; at the same time, use the secondary queue number carried by the current packet as the address and the linked-list address carried by the current packet as the value, and write it into, replacing, the tail pointer register matched by the current secondary linked list.
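The two enqueue branches (empty vs. non-empty secondary linked list) can be sketched in software as follows. The dictionaries stand in for the head/tail pointer registers, the secondary linked-list memory, and the secondary queue state memory; all class and field names are illustrative assumptions:

```python
class SecondaryList:
    """Software model of the secondary linked lists of a storage queue."""
    def __init__(self):
        self.head = {}        # secondary queue number -> head pointer register
        self.tail = {}        # secondary queue number -> tail pointer register
        self.next_ptr = {}    # secondary linked-list memory: addr -> next-hop
        self.state = {}       # secondary queue state memory: number -> enabled?

    def enqueue(self, sec_qnum: int, list_addr: int) -> None:
        if not self.state.get(sec_qnum, False):
            # Empty list: write the carried linked-list address into both the
            # head and tail pointer registers, and enable the state memory.
            self.head[sec_qnum] = list_addr
            self.tail[sec_qnum] = list_addr
            self.state[sec_qnum] = True
        else:
            # Non-empty list: link behind the old tail in the linked-list
            # memory, then replace the tail pointer register.
            self.next_ptr[self.tail[sec_qnum]] = list_addr
            self.tail[sec_qnum] = list_addr

sl = SecondaryList()
sl.enqueue(0, 100)   # empty branch: head = tail = 100, state enabled
sl.enqueue(0, 101)   # non-empty branch: next_ptr[100] = 101, tail = 101
```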
During this enqueue operation, the corresponding secondary linked list is obtained through the secondary queue number, whether the secondary linked list is empty is determined by querying the secondary queue state memory corresponding to it, and the link operation is then performed on the corresponding secondary linked list.
Optionally, when any queue reads out data, the linked-list operation is likewise performed with the secondary queue number, thereby guaranteeing the scheduling performance required by the system.
In an optional implementation of the embodiment of the present invention, continuing with FIG. 3, the method further includes: configuring a final scheduler, a final queue state memory, and a linked-list address memory, and configuring a secondary scheduler, a secondary scheduler state memory, and a secondary total-queue state memory;
the final scheduler executes the final scheduling logic;
the final queue state memory is used to store the storage state of each queue; the final queue state memory is queried with the initial queue number: if the storage location corresponding to the queue number is enabled, the queue stores data that can be read out; if it is not enabled, the queue stores no data;
the linked-list address memory is used to store the storage address of any piece of data;
the secondary scheduler executes the secondary scheduling logic;
the secondary total-queue state memory is used to store the storage state of each secondary queue; for example, if a certain queue corresponds to two secondary linked lists, two locations of the secondary total-queue state memory store the states of the two secondary linked lists, respectively, that is, the two locations correspond to the enabled states of the two secondary queue state memories; accordingly, when the location corresponding to at least one of the two secondary linked lists is enabled, the corresponding queue contains data on which a dequeue operation can be performed;
when a secondary queue state memory is enabled, whether the secondary scheduling logic can be executed on the current secondary queue is determined by querying whether the corresponding storage location of the secondary scheduler state memory is enabled; here, the secondary scheduling logic can be executed only when the location of the secondary scheduler state memory corresponding to the enabled secondary queue state memory is enabled.
Optionally, when any queue reads out data, the linked-list operation is performed with the secondary queue number; when the linked-list operation is a dequeue operation corresponding to the current queue, the final scheduling logic and the secondary scheduling logic are executed.
The final scheduling logic includes:
M11. Schedule the queue number corresponding to the packet currently being dequeued, access each secondary queue corresponding to the queue number in turn in a round-robin manner, and obtain the secondary queue number stored by the first polled secondary queue whose secondary queue state memory is enabled, denoted as the first secondary queue number.
Here, the final scheduler schedules a queue number according to the actual queue state and the preset scheduling policy; this queue number is the initial queue number carried by the data. Optionally, by accessing the secondary total-queue state memory with this queue number, the states of the Y secondary members of the queue can be obtained. In particular, each secondary member corresponding to the queue still needs to be accessed in a round-robin manner to obtain a valid secondary queue number, namely the first secondary queue number. It should be noted that, at the same moment, the first secondary queue number obtained by the final scheduler and the second secondary queue number scheduled by the secondary scheduler described below may or may not be the same; details are not repeated here.
M12. Access the linked-list address memory according to the first secondary queue number to obtain an access address, and access the data memory with the access address to read out the data.
Here, the access address is the head pointer cached into the linked-list address memory during the execution of the secondary scheduling logic described below.
The secondary scheduling logic includes:
M21. When the secondary queue state memory corresponding to the scheduled queue is enabled and the secondary scheduler state memory is not enabled, schedule the secondary queue number matched by the current queue in a round-robin manner, denoted as the second secondary queue number; then, using the second secondary queue number as the address, set the secondary scheduler state memory to enabled.
Here, when the secondary queue state memory is 1 and the secondary scheduler state memory is 0, the secondary scheduler schedules the second secondary queue number according to the predetermined scheduling policy. It should be emphasized that the multiple secondary linked lists corresponding to the same queue must be scheduled strictly in a round-robin manner; otherwise, the data within the queue will be delivered out of order.
M22. Access the secondary linked list corresponding to the scheduled second secondary queue number, obtain the head pointer and the tail pointer from the head pointer memory and the tail pointer memory corresponding to the secondary linked list, respectively, and write the obtained head pointer as the value, with the second secondary queue number as the address, into the linked-list address memory.
Determine whether the obtained head pointer and tail pointer are the same. If so, use the second secondary queue number as the address to set the secondary queue state memory of the corresponding secondary linked list to not enabled; if not, access the corresponding secondary linked-list memory according to the obtained head pointer to obtain the next-hop pointer, and write the obtained next-hop pointer as the value, with the second secondary queue number as the address, into the corresponding head pointer memory.
Here, the dequeue operation on the secondary linked list is performed with the second secondary queue number scheduled by the secondary scheduler.
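The pointer handling of step M22 admits a similar software sketch; the dictionaries stand in for the head pointer memory, tail pointer memory, secondary linked-list memory, and secondary queue state memory, and all names are illustrative:

```python
def dequeue_step(head, tail, next_ptr, state, sec_qnum):
    """One M22-style dequeue step: return the address handed to the
    linked-list address memory, then advance or retire the list."""
    addr = head[sec_qnum]                # current head pointer -> access address
    if head[sec_qnum] == tail[sec_qnum]:
        state[sec_qnum] = False          # head == tail: last element, disable
    else:
        head[sec_qnum] = next_ptr[addr]  # otherwise follow the next-hop pointer
    return addr

# A two-element secondary linked list: 100 -> 101.
head, tail = {0: 100}, {0: 101}
next_ptr, state = {100: 101}, {0: True}
first = dequeue_step(head, tail, next_ptr, state, 0)   # reads 100, head -> 101
second = dequeue_step(head, tail, next_ptr, state, 0)  # reads 101, list retired
```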
M23. Using the scheduled second secondary queue number as the address, set the corresponding secondary total-queue state to enabled; at the same time, parse the second secondary queue number to obtain its corresponding original queue number and secondary number offset value; and, using the original queue number as the address, set the corresponding final queue state memory to enabled.
Optionally, after the data completes the dequeue operation, the linked list needs to be updated, that is:
after the scheduling of any piece of data is completed, the secondary total-queue state memory in the secondary linked list corresponding to the first secondary queue number is set to not enabled; at the same time, the first secondary queue number is parsed to obtain the initial queue number, and if the secondary queue state memories of all secondary queues corresponding to the initial queue number are not enabled, the queue state corresponding to the parsed initial queue number is set to not enabled;
correspondingly, if at least one of the secondary queue state memories of the secondary queues corresponding to the initial queue number is enabled, the queue state corresponding to the parsed initial queue number is set to enabled.
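This queue-state update amounts to an OR-reduction over the queue's Y secondary queue states; a minimal sketch (the function name is assumed):

```python
def final_queue_state(secondary_states) -> bool:
    """A queue stays enabled iff at least one of its Y secondary
    queue state memories is still enabled."""
    return any(secondary_states)

still_enabled = final_queue_state([False, True])    # one sublist non-empty
now_disabled = final_queue_state([False, False])    # all sublists empty
```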
In addition, step M11 further includes: the final scheduler initiates a scheduling request to the secondary scheduler using the first secondary queue number. In this process, the secondary queue state memory is accessed with the first secondary queue number; if the secondary queue state memory is not enabled, the first secondary queue number is used as the address to set the corresponding secondary scheduler state memory to not enabled;
if the secondary queue state memory is enabled, the secondary scheduling logic is executed preferentially with the first secondary queue number.
Here, the secondary queue state memory being not enabled indicates that there is no data to be scheduled; the secondary queue state memory being enabled indicates that there is data to be scheduled, in which case this secondary queue number is scheduled preferentially and the secondary scheduling logic continues to execute until all data has been scheduled. That is, after the final scheduler completes a dequeue operation, the secondary scheduler needs to be triggered in the reverse direction, thereby realizing high-speed scheduling.
Optionally, an embodiment of the present invention provides an electronic device including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps of the method for realizing high-speed scheduling of a network chip as described above.
Optionally, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for realizing high-speed scheduling of a network chip as described above.
To sum up, the method, device, and storage medium for realizing high-speed scheduling of a network chip according to the embodiments of the present invention cut off, through specially designed hierarchical scheduling, the coupling between the queue state update period of a whole queue and the minimum scheduling interval of a queue, achieving high-speed scheduling without changing the scheduler policy or the number of queues and greatly improving the flexibility of network chip design.
The system implementations described above are merely illustrative: the modules described as separate components may or may not be physically separated, and the components shown as modules are logic modules, which may be located in one module of the chip logic or distributed across multiple data processing modules within the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this implementation. Those of ordinary skill in the art can understand and implement it without creative effort.
The present application can be used in numerous general-purpose or special-purpose communication chips, for example: switch chips, router chips, server chips, and the like.
It should be understood that although this specification is described in terms of implementations, not every implementation includes only one independent technical solution; this manner of description is adopted merely for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions in the implementations may also be appropriately combined to form other implementations understandable to those skilled in the art.
The detailed descriptions listed above are merely descriptions of feasible implementations of the embodiments of the present invention and are not intended to limit the protection scope of the embodiments of the present invention; equivalent implementations or modifications made without departing from the technical spirit of the embodiments of the present invention shall all be included within the protection scope of the embodiments of the present invention.

Claims (10)

  1. A method for realizing high-speed scheduling of a network chip, the method comprising:
    configuring, for each storage queue, Y secondary linked lists of identical structure, where Y is an integer and Y ≥ J/K, J is the queue state update period, and K is the minimum scheduling interval period of each queue, each secondary linked list comprising a head pointer memory, a tail pointer memory, and a secondary linked-list memory;
    receiving data by any queue;
    querying the Y secondary linked lists of the current queue in a round-robin manner, and obtaining the currently matched secondary linked-list number as a secondary number offset value corresponding to the current packet;
    replacing an original queue number carried by the current packet with a secondary queue number, and performing a linked-list operation with the secondary queue number corresponding to the current packet as a new queue number, the secondary queue number comprising: the original queue number and the secondary number offset value.
  2. The method for realizing high-speed scheduling of a network chip according to claim 1, wherein a queue polling state register is configured; and
    after any packet is received, the polling state register is queried to obtain the secondary number offset value matched by the current packet.
  3. The method for realizing high-speed scheduling of a network chip according to claim 1, wherein the method further comprises: configuring a secondary queue state memory for each secondary linked list, and determining whether a queried secondary linked list is empty by querying the state of its secondary queue state memory.
  4. The method for realizing high-speed scheduling of a network chip according to claim 3, wherein replacing the original queue number carried by the current packet with the secondary queue number, and performing the linked list operation using the secondary queue number corresponding to the current packet as the new queue number, comprises:
    when the linked list operation is an enqueue operation:
    querying the currently matched secondary linked list; if the matched secondary linked list is empty, using the secondary queue number carried by the current packet as the address and the linked list address carried by the current packet as the value, writing the value into the head pointer register and the tail pointer register matched by the current secondary linked list respectively; at the same time, using the secondary queue number as the address, setting the secondary queue state memory to enabled;
    if the matched secondary linked list is not empty, using the linked list address carried by the current packet as the value and the value of the tail pointer register as the address, writing the value into the secondary linked list memory matched by the current secondary linked list; at the same time, using the secondary queue number carried by the current packet as the address, writing the linked list address carried by the current packet as the value to replace the tail pointer register matched by the current secondary linked list.
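By way of a non-limiting illustration (not part of the claimed method), the two enqueue branches of claim 4 might be sketched as follows, with the head pointer memory, tail pointer memory, secondary queue state memory, and secondary linked list memory modelled as plain arrays; all identifiers are hypothetical:

```python
# Hypothetical sketch of the claim-4 enqueue operation. The secondary
# linked list memory maps a node's list address to its successor address.
class SecondaryLinkedLists:
    def __init__(self, num_secondary_queues, list_size):
        self.head = [0] * num_secondary_queues          # head pointer memory
        self.tail = [0] * num_secondary_queues          # tail pointer memory
        self.enabled = [False] * num_secondary_queues   # secondary queue state memory
        self.next_ptr = [0] * list_size                 # secondary linked list memory

    def enqueue(self, sq_no, list_addr):
        if not self.enabled[sq_no]:
            # Empty list: write the packet's linked list address into both
            # the head and tail pointer registers, then enable the state bit.
            self.head[sq_no] = list_addr
            self.tail[sq_no] = list_addr
            self.enabled[sq_no] = True
        else:
            # Non-empty list: link the new address after the current tail,
            # then replace the tail pointer register with the new address.
            self.next_ptr[self.tail[sq_no]] = list_addr
            self.tail[sq_no] = list_addr

lists = SecondaryLinkedLists(num_secondary_queues=4, list_size=16)
lists.enqueue(2, 5)   # first packet of secondary queue 2
lists.enqueue(2, 9)   # second packet links 5 -> 9
```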
  5. The method for realizing high-speed scheduling of a network chip according to claim 3, wherein the method further comprises: configuring a final scheduler, a final queue state memory, and a linked list address memory, and configuring a secondary scheduler, a secondary scheduler state memory, and a secondary total queue state memory;
    the final scheduler executes the final scheduling logic;
    the final queue state memory is used to store the storage state of each queue;
    the linked list address memory is used to store the storage address of any data;
    the secondary scheduler executes the secondary scheduling logic;
    the secondary total queue state memory is used to store the storage state of each secondary queue;
    when the secondary queue state memory is enabled, whether the secondary scheduling logic can be executed on the current secondary queue is determined by querying whether the corresponding storage location of the secondary scheduler state memory is enabled.
  6. The method for realizing high-speed scheduling of a network chip according to claim 5, wherein the method further comprises:
    when any queue reads out data, performing the linked list operation using the secondary queue number;
    the linked list operation is a dequeue operation corresponding to the current queue, and the final scheduling logic and the secondary scheduling logic are executed;
    the final scheduling logic comprises:
    M11: scheduling the queue number corresponding to the packet currently undergoing the dequeue operation, accessing each secondary queue corresponding to the queue number in turn in a polling manner, and obtaining the secondary queue number stored in the first-polled secondary queue whose secondary queue state memory is enabled, denoted as the first secondary queue number;
    M12: accessing the linked list address memory according to the first secondary queue number to obtain an access address, and accessing the data memory with the access address to read out data;
    the secondary scheduling logic comprises:
    M21: when the secondary queue state memory corresponding to the scheduled queue is enabled and the secondary scheduler state memory is disabled, scheduling the secondary queue number matched by the current queue in polling order, denoted as the second secondary queue number; then, using the second secondary queue number as the address, setting the secondary scheduler state memory to enabled;
    M22: accessing the secondary linked list corresponding to the scheduled second secondary queue number, obtaining the head pointer and the tail pointer from the head pointer memory and the tail pointer memory corresponding to the secondary linked list respectively, and writing the obtained head pointer as the value, with the second secondary queue number as the address, into the linked list address memory;
    determining whether the obtained head pointer and tail pointer are the same; if so, using the second secondary queue number as the address, setting the secondary queue state memory of the corresponding secondary linked list to disabled; if not, accessing the corresponding secondary linked list memory according to the obtained head pointer to obtain the next-hop pointer, and writing the obtained next-hop pointer as the value, with the second secondary queue number as the address, into the corresponding head pointer memory;
    M23: using the scheduled second secondary queue number as the address, setting the corresponding secondary total queue state to enabled; at the same time, parsing the second secondary queue number to obtain its corresponding original queue number and secondary number offset value; and using the original queue number as the address, setting the corresponding final queue state memory to enabled.
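By way of a non-limiting illustration (not part of the claimed method), the head-pointer advance of step M22 might be sketched as follows: the current head becomes the access address, and the list is emptied when head and tail coincide, otherwise the head follows the next-hop pointer; all identifiers are hypothetical:

```python
# Hypothetical sketch of step M22 of claim 6 for one secondary linked list.
class SecondaryList:
    def __init__(self, size):
        self.head = 0                # head pointer memory entry
        self.tail = 0                # tail pointer memory entry
        self.enabled = False         # secondary queue state memory bit
        self.next_ptr = [0] * size   # secondary linked list memory

    def pop_head(self):
        """Return the access address and advance (or empty) the list."""
        assert self.enabled, "scheduling an empty secondary queue"
        addr = self.head             # value written to the linked list address memory
        if self.head == self.tail:
            self.enabled = False     # head == tail: list is now empty
        else:
            # Follow the next-hop pointer and write it back as the new head.
            self.head = self.next_ptr[self.head]
        return addr

sl = SecondaryList(16)
sl.head, sl.tail, sl.enabled = 5, 9, True   # two nodes: 5 -> 9
sl.next_ptr[5] = 9
```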
  7. The method for realizing high-speed scheduling of a network chip according to claim 6, wherein the method further comprises:
    after any data scheduling is completed, setting the secondary total queue state memory in the secondary linked list corresponding to the first secondary queue number to disabled; at the same time, parsing the first secondary queue number to obtain the initial queue number; and if the secondary queue state memories of all secondary queues corresponding to the initial queue number are disabled, setting the queue state corresponding to the parsed initial queue number to disabled.
  8. The method for realizing high-speed scheduling of a network chip according to claim 6, wherein step M11 comprises:
    accessing the secondary queue state memory with the first secondary queue number;
    if the secondary queue state memory is disabled, using the first secondary queue number as the address, setting the corresponding secondary scheduler state memory to disabled;
    if the secondary queue state memory is enabled, preferentially executing the secondary scheduling logic with the first secondary queue number.
  9. An electronic device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein when executing the program the processor implements the steps in the method for realizing high-speed scheduling of a network chip according to any one of claims 1-8.
  10. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the steps in the method for realizing high-speed scheduling of a network chip according to any one of claims 1-8 are implemented.
PCT/CN2021/138905 2020-12-16 2021-12-16 Method for realizing high-speed scheduling of network chip, device, and storage medium WO2022127873A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011491125.0A CN112433839B (en) 2020-12-16 2020-12-16 Method, equipment and storage medium for realizing high-speed scheduling of network chip
CN202011491125.0 2020-12-16

Publications (1)

Publication Number Publication Date
WO2022127873A1 true WO2022127873A1 (en) 2022-06-23

Family

ID=74692530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138905 WO2022127873A1 (en) 2020-12-16 2021-12-16 Method for realizing high-speed scheduling of network chip, device, and storage medium

Country Status (2)

Country Link
CN (1) CN112433839B (en)
WO (1) WO2022127873A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885045A (en) * 2022-07-07 2022-08-09 浙江锐文科技有限公司 Method and device for saving DMA channel resources in high-speed intelligent network card/DPU

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433839B (en) * 2020-12-16 2024-04-02 苏州盛科通信股份有限公司 Method, equipment and storage medium for realizing high-speed scheduling of network chip

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180159802A1 (en) * 2015-07-30 2018-06-07 Huawei Technologies Co., Ltd. Data enqueuing method, data dequeuing method, and queue management circuit
CN109840145A (en) * 2019-01-08 2019-06-04 盛科网络(苏州)有限公司 A kind of multi-stage scheduling method, apparatus, network chip and storage medium
CN110519180A (en) * 2019-07-17 2019-11-29 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Network card virtualization queue scheduling method and system
CN112433839A (en) * 2020-12-16 2021-03-02 盛科网络(苏州)有限公司 Method, equipment and storage medium for realizing high-speed scheduling of network chip

Also Published As

Publication number Publication date
CN112433839A (en) 2021-03-02
CN112433839B (en) 2024-04-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21905799; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 101123))
122 Ep: pct application non-entry in european phase (Ref document number: 21905799; Country of ref document: EP; Kind code of ref document: A1)