CN115905046A - Network card drive data packet processing method and device, electronic equipment and storage medium - Google Patents

Network card drive data packet processing method and device, electronic equipment and storage medium

Info

Publication number
CN115905046A
CN115905046A
Authority
CN
China
Prior art keywords
cache
network card
target
data
descriptor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211667383.9A
Other languages
Chinese (zh)
Other versions
CN115905046B (en)
Inventor
彭元志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kedong Guangzhou Software Technology Co Ltd
Original Assignee
Kedong Guangzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kedong Guangzhou Software Technology Co Ltd
Priority to CN202211667383.9A
Publication of CN115905046A
Application granted
Publication of CN115905046B
Legal status: Active
Anticipated expiration

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the invention discloses a network card driver data packet processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: determining the current cached quantity of target descriptors in the cache memory; upon determining that the current cached quantity of target descriptors equals a set residual cache quantity, instructing the cache memory to prefetch a set cache quantity of target descriptors from memory; instructing the cache memory to prefetch the cache address of the target data according to the data pointer of each target descriptor; and, in response to a cache address read instruction sent by the network card, sending the cache address prefetched by the cache memory to the network card, so that the network card can receive and transmit the target data in real time based on that cache address. The technical scheme of the embodiment of the invention can optimize the data prefetching capability and caching performance of the cache memory, thereby improving the network card's real-time data processing capability.

Description

Network card drive data packet processing method and device, electronic equipment and storage medium
Technical Field
Embodiments of the invention relate to the technical fields of computers and data processing, and in particular to a network card driver data packet processing method and apparatus, an electronic device, and a storage medium.
Background
In recent years, as CPU (Central Processing Unit) clock frequencies and core counts have increased, CPU performance has improved significantly. However, the overall performance of devices that use a CPU as their processor has not improved proportionally. The main factor limiting overall device performance is the stall latency incurred when the CPU fetches instructions from memory. Currently, to bridge the speed gap between the CPU and main memory, a cache memory (Cache) can be added between them and a cache prefetch mechanism introduced. FIG. 1 is a diagram of a prior-art CPU cache structure. As shown in FIG. 1, according to the data access order and the closeness of coupling to the CPU, the CPU cache is divided into a first-level cache and a second-level cache, and some high-end CPUs are additionally configured with a third-level cache. The Cache is transparent and invisible to most programmers: when writing a program, a programmer need not pay attention to how the Cache works or to related details such as whether a Cache exists, how many levels it has, or the size of each level, nor to the Cache's policy for loading instructions and/or data from memory or the time at which processed data is written back to memory.
Because program execution exhibits temporal locality and spatial locality, a Cache prefetch mechanism can exploit these locality phenomena to improve system performance. Cache prefetching predicts which data/instructions will be needed and stores them in the Cache in advance. The mechanism combines spatial and temporal locality with the current execution state, the execution history, software hints, and other relevant information, and uses a reasonable method to fetch data/instructions into the Cache before they are used. In this way, when the processor needs the data/instructions, it can quickly load them from the Cache for operation and execution. Temporal locality means that the instructions/data a program is about to use may be the instructions/data currently in use; therefore, the currently used instructions/data can be kept in the Cache after use, ready for the processor. Taking the instructions of a loop statement as an example, the processor must repeatedly execute the instructions in the loop body until the loop termination condition is satisfied, so those instructions can be prefetched into the Cache for the processor's use. Spatial locality means that the instructions/data a program is about to use may be spatially adjacent or close to the instructions/data currently in use. Therefore, while the processor handles the current instructions/data, the instructions/data of adjacent regions can be prefetched from memory into the Cache, so that when the processor needs to process instructions/data from an adjacent memory region, it can read them directly from the Cache, saving memory access time.
For example, taking an array that must be processed sequentially, the other data adjacent to the element currently being processed can be prefetched into the Cache in order for the processor's use.
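The array example above can be sketched in C using a compiler prefetch builtin. This is an illustrative sketch, not part of the patent: `__builtin_prefetch` is a GCC/Clang builtin, and the prefetch distance of 16 elements is an assumed tuning value.

```c
#include <stddef.h>

/* Sketch: exploit spatial locality by prefetching array elements a few
 * iterations ahead of their use. PREFETCH_DISTANCE is an illustrative
 * tuning value. __builtin_prefetch(addr, rw, locality) is a GCC/Clang
 * builtin that compiles to the target's prefetch instruction. */
#define PREFETCH_DISTANCE 16

long sum_with_prefetch(const int *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)
            /* rw = 0 (read), locality = 3 (keep in all cache levels) */
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 3);
        sum += a[i];
    }
    return sum;
}
```

Whether this helps in practice depends on the access pattern and the hardware prefetcher, which is exactly the point the background section makes about hardware prefetching not always paying off.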
In the process of implementing the invention, the inventor found the following defects in the prior art. The network card is a hardware module that must interact with memory data to receive and transmit data, and processing a single data packet during transceiving requires reading memory many times. The network card driver must therefore ensure that all data to be read has been prefetched into the Cache; otherwise, once a Cache miss occurs, network card performance degrades severely. At present, however, the Cache prefetch mechanism is basically performed automatically by the CPU's hardware prefetch unit, and whether the hardware prefetch unit actually improves program execution efficiency depends on how the program runs. Although processors of some system architectures provide software instructions capable of prefetching the Cache, current operating systems expose no unified software prefetch interface for network card drivers to use, so current network card drivers do not employ software prefetching to optimize and improve their performance. That is, the data prefetching capability and caching performance of the current Cache cannot meet the requirement of efficient execution of the network card program.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for processing a network card driver packet, an electronic device, and a storage medium, which can optimize data prefetching capability and caching performance of a cache memory, thereby improving real-time data processing capability of a network card.
According to an aspect of the present invention, a method for processing a network card driver packet is provided, including:
determining a current cache amount of the target descriptor in the cache memory;
under the condition that the current cache number of the target descriptor is determined to be equal to the set residual cache number, the cache memory is instructed to prefetch the target descriptor with the set cache number from the memory;
instructing the cache memory to prefetch a cache address of target data according to a data pointer of the target descriptor;
responding to a cache address reading instruction sent by a network card, sending the cache address prefetched by the cache memory to the network card, so that the network card can carry out real-time transceiving processing on the target data based on the cache address.
According to another aspect of the present invention, there is provided a network card drive packet processing apparatus, including:
a current cache number determination module for determining a current cache number of the target descriptor in the cache memory;
the target descriptor prefetching module is used for indicating the cache memory to prefetch the target descriptors with the set cache number from the memory under the condition that the current cache number of the target descriptors is determined to be equal to the set residual cache number;
a cache address prefetching module for instructing the cache memory to prefetch a cache address of target data according to the data pointer of the target descriptor;
and the cache address sending module is used for responding to a cache address reading instruction sent by a network card and sending the cache address prefetched by the cache memory to the network card so that the network card can carry out real-time transceiving processing on the target data based on the cache address.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to execute the network card driver packet processing method according to any embodiment of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the network card driver packet processing method according to any one of the embodiments of the present invention when the computer instructions are executed.
In the embodiment of the invention, the processor determines the current cached quantity of target descriptors in the cache memory and, upon determining that this quantity equals the set residual cache quantity, prefetches a set cache quantity of target descriptors from memory, so as to then prefetch the cache addresses of the target data according to the data pointers of the target descriptors. When the network card sends a cache address read instruction, the cache address prefetched by the cache memory can be sent to the network card, so that the network card can receive and transmit the target data in real time based on that address. This solves the prior-art problem that the network card's data processing capability is degraded by the cache memory's poor data prefetching capability and caching performance; the scheme optimizes both, thereby improving the network card's real-time data processing capability.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of a prior art CPU cache structure;
FIG. 2 is a diagram illustrating a data packet processing flow in a network card packet receiving and sending process in the prior art;
fig. 3 is a flowchart of a method for processing a network card driver packet according to an embodiment of the present invention;
fig. 4 is a flowchart of a network card driver packet processing method according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of a network card driver packet processing apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be understood that the terms "target" and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 2 is a schematic diagram of the data packet processing flow during network card packet reception and transmission in the prior art. In a specific example, as shown in Fig. 2, the basic flow of packet processing is as follows:
First, the CPU configures the mapping relationship between the data buffer pointer of each receive descriptor (Rx Desc) and a data buffer (Mbuf, in CPU memory), and likewise configures the mapping relationship between the data buffer pointer of each transmit descriptor (Tx Desc) and the data buffer.
When the network card receives a data packet, it reads a receive descriptor from memory and stores the received packet into the corresponding data buffer according to the data buffer pointer in that descriptor. The network card then updates the member and field information of the control structure in the receive descriptor; once the update is complete, the received packet can be confirmed through the control structure. The purpose of updating the receive descriptor's control structure is to let the CPU confirm information about the received packet, such as its length and type. Meanwhile, the network card must also update the receive queue register to count received packets: for example, if the register's current value is 6, then after one more packet is received the value is updated to 7, indicating that a new packet has arrived.
When the network card needs to send a data packet, it reads the packet header from the data buffer and determines the forwarding port according to the header information. Specifically, the network card reads the packet information from the control structure of the transmit descriptor, fills it into a transmit descriptor of the transmit queue (a queue formed of several transmit descriptors), and updates the current value of the transmit queue register. Further, the network card reads the transmit descriptor back from memory and checks whether the hardware has transmitted the packet; if it determines that the packet has been transmitted, it reads the corresponding transmit descriptor's control structure from memory and releases the data buffer.
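The driver-side half of the receive flow above can be sketched as follows. The structure layout, field names, and the `RX_DESC_DONE` flag are hypothetical, since the patent does not give a concrete descriptor format; real NIC descriptors are device-specific.

```c
#include <stdint.h>
#include <stddef.h>

#define RX_DESC_DONE 0x1u   /* hypothetical "NIC wrote this packet" flag */

/* Illustrative receive descriptor: a data buffer pointer plus the
 * members the NIC fills in on reception (length, status). */
struct rx_desc {
    uint64_t buf_addr;   /* data buffer pointer (maps to an Mbuf) */
    uint16_t length;     /* packet length, written by the NIC */
    uint16_t status;     /* NIC sets RX_DESC_DONE when the packet is stored */
};

/* Poll the ring starting at *head: count descriptors the NIC has
 * completed, hand each back (clear status), and advance the head the
 * way the receive queue register counts received packets. */
unsigned rx_ring_poll(struct rx_desc *ring, size_t ring_size, size_t *head)
{
    unsigned done = 0;
    while (ring[*head].status & RX_DESC_DONE) {
        ring[*head].status = 0;           /* descriptor is free again */
        *head = (*head + 1) % ring_size;  /* advance the ring head */
        done++;
    }
    return done;
}
```

Each iteration of this loop is a memory read of a descriptor, which is why the description section insists these descriptors be resident in the Cache.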
Example one
Fig. 3 is a flowchart of a network card driver packet processing method according to an embodiment of the present invention. The method is applicable to prefetching cache addresses by means of descriptors in a cache memory, and may be executed by a network card driver packet processing apparatus, which may be implemented in software and/or hardware and is generally integrated in a processor. The processor may belong to any type of electronic device, such as a terminal device or a server, used in cooperation with a cache memory that provides a data prefetching function. Accordingly, as shown in Fig. 3, the method includes the following operations:
s110, determining the current caching quantity of the target descriptors in the cache memory.
The target descriptor may be a receive descriptor or a transmit descriptor, and the embodiment of the present invention does not limit the type of the target descriptor. The current Cache number may be the number of target descriptors currently stored in the Cache.
As can be seen from the way the network card currently processes a single data packet, the network card must read memory many times when receiving and transmitting packets. The processor needs about 3-5 clock cycles to read data from the first-level Cache, on the order of ten clock cycles for the second-level Cache, several tens of clock cycles for the third-level Cache, and hundreds of clock cycles to read main memory. It is therefore necessary to ensure that all data the network card driver will read is in the Cache, so as to avoid Cache misses that would reduce the performance of network card data processing.
In the embodiment of the invention, in order to improve the data processing performance of the network card driver, the processor can introduce a software instruction capable of prefetching the Cache into the network card driver, so that the Cache is controlled by the software instruction, and the high-efficiency execution of the network card driver is accelerated.
Specifically, the processor may send a software prefetch instruction to the Cache, and first prefetch a certain number of target descriptors, such as a send descriptor and/or a receive descriptor, in the Cache. After the target descriptor is obtained through prefetching, the processor can continue to send the software prefetching instruction, so that the Cache can continue to prefetch the corresponding Cache address according to the data buffer pointer in the prefetched target descriptor. Correspondingly, the network card can access the Cache to read target data in the Cache address from the Cache or write the target data into the Cache address, so that the target data can be rapidly received and transmitted.
In the process that the processor sends the software prefetching instruction to indicate the Cache to prefetch the target descriptor and the Cache address, the processor can judge the current Cache number of the target descriptor in the Cache in real time so as to continuously issue the software prefetching instruction according to the current Cache number of the target descriptor in the Cache.
Optionally, the Cache software prefetch instructions may be assembly instructions, which can first be wrapped to provide the following general interfaces: (1) HW_PREFETCH0: place data in every cache level; (2) HW_PREFETCH1: place data in every cache level except L1 (the first-level cache); (3) HW_PREFETCH2: place data in every cache level except L1 and L2 (the second-level cache).
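On a GCC/Clang toolchain, one plausible packaging of these three interfaces maps each to `__builtin_prefetch` with a different temporal-locality hint; on x86 these typically lower to the `prefetcht0`/`prefetcht1`/`prefetcht2` instructions. Only the macro names come from the text above; the mapping shown is an assumption.

```c
/* Sketch of the wrapped prefetch interfaces. __builtin_prefetch's third
 * argument is a temporal-locality hint: 3 keeps the line in all cache
 * levels, 2 skips L1, 1 skips L1 and L2. The second argument 0 means
 * the data will be read, not written. */
#define HW_PREFETCH0(addr) __builtin_prefetch((addr), 0, 3) /* all cache levels      */
#define HW_PREFETCH1(addr) __builtin_prefetch((addr), 0, 2) /* levels except L1      */
#define HW_PREFETCH2(addr) __builtin_prefetch((addr), 0, 1) /* outside L1 and L2     */
```

A prefetch is only a hint: it has no architecturally visible effect on the data, so the macros are safe to issue speculatively on any valid address.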
S120, under the condition that the current cache number of the target descriptors is equal to the set residual cache number, the cache memory is instructed to prefetch the target descriptors with the set cache number from the memory.
S130, instructing the cache memory to prefetch the cache address of the target data according to the data pointer of the target descriptor.
The set residual cache quantity may be a value set according to actual requirements, such as 1 or 2. The set cache quantity may be a preset number of target descriptors to prefetch at one time; for example, the Cache may be instructed to prefetch 4 or 8 target descriptors at once. The embodiment of the invention does not limit the specific values of the set residual cache quantity or the set cache quantity. The target data may be data already cached at the cache address, or data that needs to be stored at the cache address; its data type depends on the type of the target descriptor.
It should be noted that software prefetch instructions, used in hot-spot or performance-critical regions, can explicitly load data into the Cache and thereby improve program execution efficiency. If used incorrectly, however, they overload the Cache or increase the proportion of useless data in it, reducing the program's performance and possibly affecting the execution efficiency of other programs. For example, if one program loads a large amount of data into the third-level Cache, the normal execution of other programs is affected. Therefore, when software prefetches data into the Cache, the prefetching must be kept reasonable so that program performance is correctly optimized.
In the embodiment of the invention, to ensure that software prefetch instructions are used reasonably, the processor can instruct the Cache to prefetch a set cache quantity of target descriptors at one time, and determine in real time, during program execution, the current cached quantity of target descriptors in the Cache. If the current cached quantity of target descriptors in the Cache is determined to equal the set residual cache quantity, indicating that the cached target descriptors have reached the cache boundary, the processor can continue sending software prefetch instructions to the Cache: it instructs the Cache to prefetch another set cache quantity of target descriptors from memory, for example prefetching them sequentially in descriptor order, and to continue prefetching the cache addresses of the target data according to the descriptors' data pointers.
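The refill rule of S120 can be sketched as follows. The constants and all identifiers are illustrative stand-ins for the "set cache quantity" and "set residual cache quantity"; the patent does not fix their values.

```c
#include <stddef.h>

#define PREFETCH_BATCH     8  /* stands in for the "set cache quantity"          */
#define PREFETCH_WATERMARK 1  /* stands in for the "set residual cache quantity" */

/* Minimal stand-in for a descriptor; only its address matters here. */
struct desc { unsigned long data_ptr; };

/* Stand-in for the software prefetch instruction. */
static void prefetch_one(const struct desc *d) { __builtin_prefetch(d, 0, 3); }

/* When the count of still-cached prefetched descriptors (*cached) drops
 * to the watermark, issue one batched prefetch for the next group, in
 * descriptor order, and account for them. Returns how many descriptors
 * were newly prefetched (0 if the watermark was not reached). */
unsigned refill_if_needed(const struct desc *ring, size_t ring_size,
                          size_t next, size_t *cached)
{
    if (*cached != PREFETCH_WATERMARK)
        return 0;
    unsigned n = 0;
    for (; n < PREFETCH_BATCH; n++)
        prefetch_one(&ring[(next + n) % ring_size]);
    *cached += n;
    return n;
}
```

Batching at a watermark is what keeps the prefetching "reasonable" in the sense of the preceding paragraph: the Cache is topped up just before it runs dry, rather than flooded speculatively.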
S140, responding to a cache address reading instruction sent by a network card, sending the cache address prefetched by the cache memory to the network card, so that the network card can carry out real-time transceiving processing on the target data based on the cache address.
The cache address reading instruction may be an instruction for reading a cache address.
Correspondingly, since the Cache has already prefetched the cache address used for data transceiving by the network card, when the network card sends a cache address read instruction, the processor can directly send the cache address prefetched by the Cache to the network card. The network card can then rapidly receive and transmit the target data based on that prefetched cache address, without accessing memory again, thereby realizing real-time processing of the target data and improving the real-time performance of the network card's packet processing.
At present, various industrial Ethernet networks place high demands on network real-time performance. For example, in the EtherCAT (Ethernet for Control Automation Technology) industrial protocol, under the master-slave DC (Distributed Clock) synchronization mode, the master station's requirement on the accuracy of process data transmission time is very high, and the key factors influencing the real-time performance of process data transmission are the operating system and the network card driver. The operating system is responsible for sending the process data out on time, and the network card driver needs to be optimized so that the transmission time and jitter delay of the process data are reduced as much as possible.
The network card driver data packet processing method provided by the embodiment of the invention uses Cache software prefetch instructions to load, in advance, the data used by the network card driver during packet reception and transmission, such as the receive/transmit descriptors and the data buffers, from memory into the Cache, so that the data the network card is about to use is already stored in the Cache. This realizes a real-time network card driver optimization method based on Cache software prefetching. It greatly reduces the cost of the network card reading data directly from memory, shortens the processor's waiting time, and improves data prefetching capability and Cache performance. It thereby raises the Cache hit rate and the real-time performance of packet processing, and reduces the packet transmit/receive delay so as to reduce communication jitter. The network card thus gains good real-time behavior and stability, its real-time data processing capability is improved, and the method can be effectively applied in various fields with high requirements on network card real-time performance.
In the embodiment of the invention, the processor determines the current cached quantity of target descriptors in the cache memory and, upon determining that this quantity equals the set residual cache quantity, prefetches a set cache quantity of target descriptors from memory, so as to then prefetch the cache addresses of the target data according to the data pointers of the target descriptors. When the network card sends a cache address read instruction, the cache address prefetched by the cache memory can be sent to the network card, so that the network card can receive and transmit the target data in real time based on that address. This solves the prior-art problem that the network card's data processing capability is degraded by the cache memory's poor data prefetching capability and caching performance; the scheme optimizes both, thereby improving the network card's real-time data processing capability.
Example two
Fig. 4 is a flowchart of a network card driver packet processing method according to a second embodiment of the present invention, which is further developed on the basis of the foregoing embodiment. This embodiment gives a specific optional implementation of the data prefetching initialization process and of the network card's data transceiving process based on the cache address. Correspondingly, as shown in Fig. 4, the method of this embodiment may include:
s210, initializing the network card and the target descriptor.
It will be appreciated that, before sending software prefetch instructions to the Cache to prefetch data, the processor first needs to perform the initialization operations related to data prefetching. Since the software prefetch instructions are applied to prefetching network card data, the network card and the target descriptors can be initialized.
In an optional embodiment of the present invention, initializing the network card and the target descriptor may include: allocating space for the network card control block structure and initializing its values; allocating space for the transmit queue buffer and/or the receive queue buffer; allocating space for the target descriptors and configuring the mapping relationship between the data buffer pointer in each target descriptor and a data buffer; and configuring the relevant registers of the transmit unit and/or receive unit of the network card and enabling the transmit unit and/or receive unit.
The network card control block structure may be a structure for managing the network card's own data, such as the MAC (Media Access Control) address, the start address of the descriptors, and the data buffer pointers. The start address of the descriptors may be used for prefetching the target descriptors. The transmit queue buffer and the receive queue buffer are the memory buffers (mbuf) in memory.
Specifically, when the processor initializes the network card and the target descriptor, the processor may allocate a space of the network card control block structure, and initialize the value of the network card control block structure, so as to implement an initialization process of the network card control block structure. Meanwhile, the space of the sending queue buffer and/or the receiving queue buffer and the space of the target descriptor are required to be allocated. After the space of the target descriptor is allocated, the mapping relationship between the data buffer pointer in the target descriptor and the data buffer needs to be configured, so as to implement the initialization process of the target descriptor. Further, a sending unit and/or a receiving unit of the network card also need to be configured, an associated register is set for the sending unit and/or the receiving unit, and the sending unit and/or the receiving unit are enabled.
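The initialization order described above can be condensed into the following sketch. Every name is illustrative, the 16-byte descriptor size is an assumption, and the device-register programming of the final step is elided because it is hardware-specific.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative control block: MAC address, descriptor start addresses,
 * and the data buffer pointer mapping. */
struct nic_ctrl_blk {
    uint8_t  mac[6];
    void    *rx_desc_base;   /* start address of the receive descriptors  */
    void    *tx_desc_base;   /* start address of the transmit descriptors */
    void   **data_bufs;      /* descriptor-index -> data buffer mapping   */
};

struct nic_ctrl_blk *nic_init(size_t ring_size, size_t buf_size)
{
    /* 1. allocate the control block structure and zero-initialize it */
    struct nic_ctrl_blk *cb = calloc(1, sizeof *cb);
    if (!cb) return NULL;
    /* 2. allocate the transmit/receive descriptor ring space
     *    (16 bytes per descriptor is an assumed size) */
    cb->rx_desc_base = calloc(ring_size, 16);
    cb->tx_desc_base = calloc(ring_size, 16);
    /* 3. allocate the data buffers and record the pointer mapping */
    cb->data_bufs = calloc(ring_size, sizeof(void *));
    for (size_t i = 0; i < ring_size; i++)
        cb->data_bufs[i] = malloc(buf_size);
    /* 4. configuring the TX/RX unit registers and enabling the units
     *    would happen here via MMIO (device-specific, omitted) */
    return cb;
}
```

In a real driver the descriptor rings and buffers would come from DMA-capable memory, not plain `malloc`; the sketch only mirrors the ordering of the four initialization steps.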
In an optional embodiment of the present invention, before initializing the network card and the target descriptor, the method may further include: determining the data structures of the network card driver and the target descriptor, and the data buffer; and performing cache memory line alignment on the data structures of the network card driver and the target descriptor and on the data buffer according to the line structure of the cache memory.
It can be understood that if a data structure of the network card driver is not aligned with a Cache line, the data structure easily spans multiple Cache lines. Therefore, the Cache line alignment operation needs to be completed before data prefetching. Specifically, the objects that need to be Cache-line aligned, such as the data structures of the network card driver and the target descriptor (including the sending descriptor and the receiving descriptor) and the data buffer, may be determined, and Cache line alignment may be performed on them through a related instruction. For example, the alignment may be declared with "__attribute__((aligned(CACHE_LINE_SIZE)))" to complete the Cache line alignment operation.
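The alignment declaration quoted above can be applied as follows. The type and variable names here are illustrative, and `CACHE_LINE_SIZE` is assumed to match the line size of the target processor (64 bytes is common but not universal).

```c
#include <stdint.h>

#define CACHE_LINE_SIZE 64   /* must match the target CPU's cache line */

/* Align the descriptor type on a cache-line boundary so that a single
   descriptor never straddles two lines; the attribute also pads the
   struct out to a whole line. */
struct tx_desc {
    uint64_t buf_addr;
    uint16_t len;
    uint16_t type;
} __attribute__((aligned(CACHE_LINE_SIZE)));

/* Because the element type is line-aligned, every ring entry starts on
   its own cache line. The packet buffer is aligned the same way. */
static struct tx_desc tx_ring[8];
static uint8_t pkt_buf[2048] __attribute__((aligned(CACHE_LINE_SIZE)));
```

With this layout, prefetching one descriptor pulls in exactly one line, and writes to adjacent descriptors do not contend for the same line.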
S220, prefetching the network card control block structure and the target descriptors with the set cache number through a prefetching function.
The network card control block structure is used for storing the starting address of the target descriptor.
A prefetch function may be used to load the required data into the Cache in advance.
In the initialization phase, the processor may send a software prefetch instruction to the Cache. In response to the software prefetch instruction, the Cache prefetches the network card control block structure and the set cache number of target descriptors through the prefetch function. By prefetching the network card control block structure, the Cache can obtain the start address of the target descriptors stored in it, and then prefetch the set cache number of target descriptors according to that start address.
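This two-step prefetch (control block first, then a batch of descriptors) can be sketched with GCC's `__builtin_prefetch` standing in for the prefetch function. The structures and the batch size `PREFETCH_BATCH` are assumptions for illustration; the patent's "set cache number" is a configurable value.

```c
#include <stdint.h>

#define CACHE_LINE_SIZE 64
#define PREFETCH_BATCH   4   /* the "set cache number" of descriptors */

struct rx_desc {
    void    *buf;
    uint16_t len;
} __attribute__((aligned(CACHE_LINE_SIZE)));

struct nic_cb {
    struct rx_desc *desc_base;   /* start address of the descriptors */
    unsigned        num_desc;
};

/* Prefetch the control block first, then the next batch of descriptors
   starting from ring position `head`; returns the index of the last
   descriptor touched, which is useful for ring bookkeeping. */
static unsigned prefetch_descriptors(const struct nic_cb *cb, unsigned head)
{
    __builtin_prefetch(cb, 0 /* read */, 3 /* keep in cache */);
    unsigned idx = head % cb->num_desc;
    for (unsigned i = 0; i < PREFETCH_BATCH; i++) {
        idx = (head + i) % cb->num_desc;
        __builtin_prefetch(&cb->desc_base[idx], 0, 3);
    }
    return idx;
}
```

`__builtin_prefetch` is a hint and never faults, so issuing it speculatively for descriptors that may not be used yet is safe.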
And S230, determining the current caching quantity of the target descriptor in the cache.
S240, determining whether the current cache number of the target descriptor is equal to the set remaining cache number; if so, executing S250, otherwise, executing S260.
And S250, indicating the cache memory to prefetch the target descriptors with the set caching quantity from the memory.
And S260, instructing the cache memory to prefetch the cache address of the target data according to the data pointer of the target descriptor.
Correspondingly, if the processor determines that the current Cache number of the target descriptor is equal to the set remaining Cache number, indicating that the target descriptor is at the Cache boundary, the processor may continue to send a software prefetch instruction to the Cache to instruct it to prefetch the next set cache number of target descriptors from the memory. Otherwise, if the current Cache number of the target descriptor is not at the Cache boundary, a software prefetch instruction may be sent to the Cache to instruct it to prefetch the cache address of the target data according to the data pointer of the target descriptor.
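The boundary decision of S240 to S260 reduces to a simple comparison. In this sketch the names and the threshold value are illustrative; the patent leaves both the "set remaining cache number" and the "set cache number" as configurable parameters.

```c
#define REMAIN_THRESHOLD 1   /* the "set remaining cache number" */

enum prefetch_action {
    REFILL_DESCRIPTORS,    /* S250: prefetch the next batch of descriptors */
    PREFETCH_DATA_BUFFER   /* S260: prefetch the buffer the data pointer names */
};

static enum prefetch_action choose_prefetch(unsigned cached_descriptors)
{
    /* At the cache boundary only REMAIN_THRESHOLD descriptors remain in
       the Cache, so the descriptor supply must be refilled first;
       otherwise the data buffer of the current descriptor is prefetched. */
    return cached_descriptors == REMAIN_THRESHOLD
               ? REFILL_DESCRIPTORS
               : PREFETCH_DATA_BUFFER;
}
```

Checking the count on every iteration keeps descriptor refills overlapped with packet processing instead of stalling at an empty ring.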
S270, responding to a cache address reading instruction sent by a network card, sending the cache address prefetched by the cache memory to the network card, so that the network card can receive and send the target data in real time based on the cache address.
In an optional embodiment of the present invention, the target descriptor includes a receiving descriptor, and the target data is a target data packet received by the network card; the network card is used for: and storing the received target data message in a cache address prefetched by the cache memory, and updating the structural data of the receiving descriptor and the receiving queue register after determining that the target data message is completely stored.
Because the data required by the network card has been prefetched into the Cache, when the network card processes a received target data message, it can send a cache address reading instruction to the Cache, obtain the cache address that the Cache further prefetched through the prefetched receiving descriptor, and store the received target data message at that cache address. After the target data message is stored, the network card may update the structure data of the receiving descriptor and the receive queue register to confirm that the target data message has been received.
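The receive-completion step can be modeled as below. This is only a software stand-in for hardware behavior: `rx_complete`, the descriptor fields, and the `rx_queue_reg` variable simulating the receive queue register are all hypothetical names.

```c
#include <stdint.h>
#include <string.h>

struct rx_desc {
    void    *buf;    /* prefetched cache address for the packet */
    uint16_t len;    /* structure data: packet length */
    uint16_t type;   /* structure data: descriptor type */
};

static volatile uint32_t rx_queue_reg;   /* stand-in for the receive queue register */

/* On packet arrival: store the message at the prefetched address, then
   update the descriptor's structure data and the receive queue register. */
static void rx_complete(struct rx_desc *d, const void *pkt, uint16_t len)
{
    memcpy(d->buf, pkt, len);
    d->len  = len;
    d->type = 1;        /* e.g. mark as a completed receive descriptor */
    rx_queue_reg++;     /* advance the receive queue register */
}
```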
It should be noted that, since the received target data packet is written to the cache address by the network card, the network card may set the structure data of the receiving descriptor, such as its packet length and descriptor type.
In an optional embodiment of the present invention, the target descriptor includes a sending descriptor, and the target data is a target data packet sent by the network card; the network card is used for: and reading the target data message from the cache address prefetched from the cache memory, performing packet sending operation on the target data message, and updating the structural body data of the sending descriptor and the sending queue register after the target data message is determined to be sent completely.
Since the data required by the network card has been prefetched into the Cache, when the network card processes a target data message to be sent, it can send a cache address reading instruction to the Cache, obtain the cache address that the Cache further prefetched through the prefetched sending descriptor, and read the target data message to be sent from that cache address so as to perform the packet sending operation on it. After the target data message is sent, the network card can update the structure data of the sending descriptor and the send queue register to confirm that the target data message has been sent.
Specifically, performing the packet sending operation on the target data message may include reading the packet header of the target data message and determining the forwarding port according to the header information. The network card reads the message information of the target data message from the control structure of the sending descriptor, fills the message information into the sending descriptor of the send queue, and updates the current value of the send queue register. Furthermore, the network card reads the sending descriptor prefetched by the Cache and checks whether the hardware has sent the data packet out. If it determines that the data packet has been sent, it reads the control structure of the corresponding sending descriptor from the Cache prefetch and releases the data buffer.
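The fill-then-reclaim pattern described above can be sketched as two functions. The hardware DMA and forwarding-port lookup are elided, and all names (`tx_send`, `tx_reclaim`, `tx_queue_reg`) are illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <stdlib.h>

struct tx_desc {
    void    *buf;    /* packet read from the prefetched cache address */
    uint16_t len;
    uint16_t done;   /* set once hardware has sent the packet */
};

static volatile uint32_t tx_queue_reg;   /* stand-in for the send queue register */

/* Fill a sending descriptor with the message and advance the queue
   register; hardware DMA and header/port lookup are elided. */
static int tx_send(struct tx_desc *d, void *pkt, uint16_t len)
{
    if (pkt == NULL || len == 0)
        return -1;
    d->buf = pkt;
    d->len = len;
    tx_queue_reg++;      /* update the send queue register */
    d->done = 1;         /* here: pretend hardware completed instantly */
    return 0;
}

/* Reclaim pass: check the descriptor for completion and, if the packet
   has gone out, release its data buffer. */
static void tx_reclaim(struct tx_desc *d)
{
    if (d->done) {
        free(d->buf);
        d->buf  = NULL;
        d->done = 0;
    }
}
```

Separating send and reclaim lets the driver batch buffer releases instead of waiting on each packet individually.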
In an optional embodiment of the present invention, before the instructing the cache memory to prefetch the cache address of the target data according to the data pointer of the target descriptor, the method may further include: setting structure data of the sending descriptor; wherein the structure data of the transmission descriptor includes a packet length and a descriptor type.
It should be noted that, since the target data packet to be sent is written to the cache address by the processor, the processor may set the structure data of the sending descriptor, such as its packet length and descriptor type.
In summary, the method for processing network card driver data packets according to the embodiment of the present invention prefetches the Cache at different points of the receiving flow and the sending flow. Specifically, in the data receiving stage, the processor instructs the Cache to prefetch the network card control block structure and the receiving descriptors into the Cache through the prefetch function. The processor determines whether the prefetched receiving descriptors in the Cache are at a Cache boundary. If a cache boundary is detected, e.g., only one receiving descriptor remains, the next set cache number (e.g., 4) of receiving descriptors is prefetched. Furthermore, the Cache is instructed to prefetch the cache address for storing the message according to the data buffer pointer of the prefetched receiving descriptor, so that a subsequent data packet can be quickly stored at the corresponding cache address and then read and parsed. In the data sending stage, the processor instructs the Cache to prefetch the network card control block structure and the sending descriptors into the Cache through the prefetch function. The processor determines whether the prefetched sending descriptors in the Cache are at a Cache boundary. If a cache boundary is detected, e.g., only one sending descriptor remains, the next set cache number (e.g., 4) of sending descriptors is prefetched. Furthermore, the Cache is instructed to prefetch the cache address of the stored message according to the data buffer pointer of the prefetched sending descriptor, so that the network card can read the data packet from the cache address prefetched by the Cache and send data quickly.
In the packet receiving and sending process, the network card driver data packet processing method provided by the embodiment of the invention uses reasonable Cache software prefetch instructions to preload a certain amount of data, such as receiving descriptors, sending descriptors, and data buffers, from the memory into the Cache, so that part of the data to be used by the network card is already stored in the Cache. This realizes a real-time network card driver optimization method based on Cache software prefetching. It greatly reduces the overhead of the network card reading data directly from the memory, reduces the waiting time of the processor, and improves the data prefetching capability and Cache performance, thereby improving the Cache hit rate and the real-time performance of the network card in processing data messages. It also reduces the delay of packet receiving and sending and thus the jitter during communication, ensuring that the network card has good real-time performance and stability, improving the real-time data processing capability of the network card, and making the method effectively applicable to various fields with high real-time requirements on the network card.
It should be noted that any permutation and combination between the technical features in the above embodiments also belong to the scope of the present invention.
EXAMPLE III
Fig. 5 is a schematic diagram of a network card drive packet processing apparatus according to a third embodiment of the present invention, and as shown in fig. 5, the apparatus includes: a current cache number determination module 310, a target descriptor prefetching module 320, a cache address prefetching module 330, and a cache address sending module 340, wherein:
a current cache number determination module 310, configured to determine a current cache number of the target descriptor in the cache memory;
a target descriptor prefetching module 320, configured to instruct the cache memory to prefetch a target descriptor with a set cache amount from a memory if it is determined that the current cache amount of the target descriptor is equal to the set remaining cache amount;
a cache address prefetching module 330, configured to instruct the cache memory to prefetch a cache address of target data according to the data pointer of the target descriptor;
the cache address sending module 340 is configured to send, in response to a cache address reading instruction sent by a network card, a cache address prefetched by the cache memory to the network card, so that the network card performs real-time transceiving processing on the target data based on the cache address.
The embodiment of the invention determines, through the processor, the current cache number of the target descriptors in the cache memory and, when the current cache number of the target descriptors is determined to be equal to the set remaining cache number, prefetches the set cache number of target descriptors from the memory, so as to prefetch the cache addresses of the target data according to the data pointers of the target descriptors. When the network card sends a cache address reading instruction, the cache address prefetched by the cache memory can be sent to the network card, so that the network card can receive and send target data in real time based on the cache address. This solves the problem in the prior art that the data processing capability of the network card is reduced by the low data prefetching capability and cache performance of the cache memory, optimizes the data prefetching capability and cache performance of the cache memory, and improves the real-time data processing capability of the network card.
Optionally, the target descriptor includes a receiving descriptor, and the target data is a target data packet received by the network card; the network card is used for: and storing the received target data message in a cache address prefetched by the cache memory, and updating the structural data of the receiving descriptor and the receiving queue register after determining that the target data message is completely stored.
Optionally, the target descriptor includes a sending descriptor, and the target data is a target data message sent by the network card; the network card is used for: and reading the target data message from the cache address prefetched from the cache memory, performing packet sending operation on the target data message, and updating the structural data of the sending descriptor and the sending queue register after determining that the sending of the target data message is finished.
Optionally, the network card drive data packet processing apparatus may further include a structure data setting module, configured to: setting structure data of the sending descriptor; wherein the structure data of the transmission descriptor includes a packet length and a descriptor type.
Optionally, the network card drive data packet processing apparatus may further include an initialization module, configured to: initializing the network card and the target descriptor; prefetching the network card control block structure and the target descriptors with the set cache number through a prefetching function; the network card control block structure is used for storing the starting address of the target descriptor.
Optionally, the initialization module is specifically configured to: allocating the space of the network card control block structure body and initializing the value of the network card control block structure body; allocating the space of sending queue buffer and/or receiving queue buffer; allocating the space of the target descriptor, and configuring the mapping relation between a data buffer pointer and a data buffer in the target descriptor; and configuring a relevant register of a sending unit and/or a receiving unit of the network card, and enabling the sending unit and/or the receiving unit.
Optionally, the network card drive packet processing apparatus may further include a cache line alignment module, configured to: determining a network card driver, a data structure of the target descriptor and a data buffer area; and according to the line structure of the cache memory, performing cache memory line alignment on the network card driver, the data structure of the target descriptor and the data buffer.
The network card drive data packet processing device can execute the network card drive data packet processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For details of the technology that is not described in detail in this embodiment, reference may be made to the network card driver packet processing method provided in any embodiment of the present invention.
Since the network card driver packet processing apparatus described above is an apparatus capable of executing the network card driver packet processing method in the embodiment of the present invention, based on the network card driver packet processing method described in the embodiment of the present invention, a person skilled in the art can understand a specific implementation of the network card driver packet processing apparatus in the embodiment and various variations thereof, and therefore, a detailed description of how the network card driver packet processing apparatus implements the network card driver packet processing method in the embodiment of the present invention is not repeated here. As long as those skilled in the art implement the apparatus used in the method for processing the network card driving data packet in the embodiment of the present invention, the apparatus is within the scope of the present application.
Example four
FIG. 6 illustrates a schematic structural diagram of an electronic device 10 that may be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as a network card driven packet processing method.
In some embodiments, the network card drive packet processing method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the network card driven packet processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the network card driven packet processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service are overcome.

Claims (10)

1. A network card driving data packet processing method is characterized by comprising the following steps:
determining a current cache amount of the target descriptor in the cache memory;
under the condition that the current cache number of the target descriptor is determined to be equal to the set residual cache number, instructing the cache memory to prefetch the target descriptor with the set cache number from the memory;
instructing the cache memory to prefetch a cache address of target data according to a data pointer of the target descriptor;
responding to a cache address reading instruction sent by a network card, sending the cache address prefetched by the cache memory to the network card, so that the network card can carry out real-time transceiving processing on the target data based on the cache address.
2. The method according to claim 1, wherein the target descriptor comprises a receive descriptor, and the target data is a target data message received by the network card; the network card is used for:
and storing the received target data message in a cache address pre-fetched by the cache memory, and updating the structural data of the receiving descriptor and a receiving queue register after determining that the target data message is completely stored.
3. The method according to claim 1, wherein the target descriptor comprises a sending descriptor, and the target data is a target data message sent by the network card; the network card is used for:
and reading the target data message from the cache address prefetched from the cache memory, performing packet sending operation on the target data message, and updating the structural data of the sending descriptor and the sending queue register after determining that the sending of the target data message is finished.
4. The method of claim 3, further comprising, prior to the instructing the cache memory to prefetch the cache address of the target data according to the data pointer of the target descriptor:
setting structure data of the sending descriptor;
wherein the structure data of the transmission descriptor includes a packet length and a descriptor type.
5. The method of claim 1, further comprising, prior to said determining a current cache amount of target descriptors in the cache memory:
initializing the network card and the target descriptor;
prefetching the network card control block structure and the target descriptors with the set cache number through a prefetching function;
the network card control block structure is used for storing the starting address of the target descriptor.
6. The method of claim 5, wherein initializing the network card and the target descriptor comprises:
allocating the space of the network card control block structure body and initializing the value of the network card control block structure body;
allocating the space for sending queue buffer and/or receiving queue buffer;
allocating the space of the target descriptor, and configuring the mapping relation between a data buffer pointer and a data buffer in the target descriptor;
and configuring a relevant register of a sending unit and/or a receiving unit of the network card, and enabling the sending unit and/or the receiving unit.
7. The method of claim 1, further comprising:
determining a network card driver, a data structure of the target descriptor and a data buffer area;
and according to the line structure of the cache memory, performing cache memory line alignment on the network card driver, the data structure of the target descriptor and the data buffer.
8. A network card drive packet processing apparatus, comprising:
a current cache number determination module for determining a current cache number of the target descriptor in the cache memory;
the target descriptor prefetching module is used for indicating the cache memory to prefetch the target descriptors with the set cache number from the memory under the condition that the current cache number of the target descriptors is determined to be equal to the set residual cache number;
a cache address prefetching module for instructing the cache memory to prefetch a cache address of target data according to the data pointer of the target descriptor;
and the cache address sending module is used for responding to a cache address reading instruction sent by a network card and sending the cache address prefetched by the cache memory to the network card so that the network card can carry out real-time transceiving processing on the target data based on the cache address.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the network card driver packet processing method of any one of claims 1-7.
10. A computer storage medium, wherein the computer-readable storage medium stores computer instructions for causing a processor to implement the network card driver packet processing method according to any one of claims 1 to 7 when the computer instructions are executed.
CN202211667383.9A 2022-12-23 2022-12-23 Network card driving data packet processing method and device, electronic equipment and storage medium Active CN115905046B (en)

Publications (2)

Publication Number Publication Date
CN115905046A (en) 2023-04-04
CN115905046B (en) 2023-07-07

Family

ID=86493743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211667383.9A Active CN115905046B (en) 2022-12-23 2022-12-23 Network card driving data packet processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115905046B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010013129A1 (en) * 1996-06-25 2001-08-09 Matsushita Electric Industrial Co., Ltd. Video network server for distributing sound and video image information to a plurality of terminals
US6330616B1 (en) * 1998-09-14 2001-12-11 International Business Machines Corporation System for communications of multiple partitions employing host-network interface, and address resolution protocol for constructing data frame format according to client format
US20040034718A1 (en) * 2002-08-15 2004-02-19 Dror Goldenberg Prefetching of receive queue descriptors
US7647436B1 (en) * 2005-04-29 2010-01-12 Sun Microsystems, Inc. Method and apparatus to interface an offload engine network interface with a host machine
CN102184151A (en) * 2011-04-29 2011-09-14 杭州华三通信技术有限公司 PCI-E (peripheral component interconnect express) to PCI bridge device and method for actively prefetching data thereof
CN113225307A (en) * 2021-03-18 2021-08-06 西安电子科技大学 Optimization method, system and terminal for pre-reading descriptors in offload engine network card
CN113535395A (en) * 2021-07-14 2021-10-22 西安电子科技大学 Descriptor queue and memory optimization method, system and application of network storage service

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
REN Minhua; LIU Yu; LUO Yunbao; ZHAO Yongjian; ZHANG Ji: "Application of Two-Level Linked Lists in Descriptor Management of a Switch Control Chip", Computer Engineering, pages 82-84 *
SU Wen; ZHANG Longbing; GAO Xiang; SU Menghao: "Network Processing Optimization Method Based on Cache Locking and Direct Cache Access", Journal of Computer Research and Development, pages 681-690 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116827880A (en) * 2023-08-29 2023-09-29 珠海星云智联科技有限公司 Cache space management method and device
CN116827880B (en) * 2023-08-29 2023-11-17 珠海星云智联科技有限公司 Cache space management method and device

Also Published As

Publication number Publication date
CN115905046B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN110275841B (en) Access request processing method and device, computer equipment and storage medium
US11620255B2 (en) Time sensitive networking device
CN108958157B (en) Control program control scheduling method, control program control scheduling device, computer equipment and storage medium
US8578069B2 (en) Prefetching for a shared direct memory access (DMA) engine
US11803490B2 (en) Apparatus and method for data transmission and readable storage medium
US7299341B2 (en) Embedded system with instruction prefetching device, and method for fetching instructions in embedded systems
CN115905046B (en) Network card driving data packet processing method and device, electronic equipment and storage medium
CN115098412B (en) Peripheral access controller, data access device and corresponding method, medium and chip
CN114936173B (en) Read-write method, device, equipment and storage medium of eMMC device
CN115934625B (en) Doorbell knocking method, equipment and medium for remote direct memory access
CN114911596B (en) Scheduling method and device for model training, electronic equipment and storage medium
CN113141288B (en) Mailbox message receiving and sending method and device of CAN bus controller
CN112799723A (en) Data reading method and device and electronic equipment
CN115904259B (en) Processing method and related device of nonvolatile memory standard NVMe instruction
CN117132446A (en) GPU data access processing method, device and storage medium
CN116301627A (en) NVMe controller and initialization and data read-write method thereof
CN112949847B (en) Neural network algorithm acceleration system, scheduling system and scheduling method
CN112565474B (en) Batch data transmission method oriented to distributed shared SPM
CN107085557A (en) Direct memory access system and associated method
CN117312202B (en) System on chip and data transmission method for system on chip
CN116841773B (en) Data interaction method and device, electronic equipment and storage medium
CN113778526B (en) Cache-based pipeline execution method and device
KR102260820B1 (en) Symmetrical interface-based interrupt signal processing device and method
CN116185670B (en) Method and device for exchanging data between memories, electronic equipment and storage medium
EP2799979B1 (en) Hardware abstract data structure, data processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant