WO2017032178A1 - Checksum calculation method, network processor and computer storage medium - Google Patents

Checksum calculation method, network processor and computer storage medium Download PDF

Info

Publication number
WO2017032178A1
Authority
WO
WIPO (PCT)
Prior art keywords
calculation
checksum
thread
current thread
storage unit
Prior art date
Application number
PCT/CN2016/089700
Other languages
English (en)
French (fr)
Inventor
胡达
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2017032178A1 publication Critical patent/WO2017032178A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt

Definitions

  • the present invention relates to the field of network processor technologies, and in particular, to a method for calculating a checksum, a network processor, and a computer storage medium.
  • a checksum is the sum of a set of data items used for verification purposes in the field of data processing and data communication. It is usually used in communications, especially in long-distance communications to ensure data integrity and correctness.
  • a checksum field is provided in the Internet Protocol (IP) header, the Transmission Control Protocol (TCP) header and the User Datagram Protocol (UDP) header of a packet.
  • IP Internet Protocol
  • TCP Transmission Control Protocol
  • UDP User Datagram Protocol
  • embodiments of the present invention are expected to provide a method for calculating a checksum, a network processor, and a computer storage medium.
  • an embodiment of the present invention provides a method for calculating a checksum, including: acquiring, according to a received user instruction and a descriptor field in a data storage unit of a network processor, a calculation parameter corresponding to a current thread; performing a checksum calculation based on the source data read from the data storage unit and the calculation parameter, and scheduling the current thread to enter a sleep state; when the calculation is completed, writing the calculated checksum to the checksum register of the current thread, and scheduling the current thread to enter an awake state; and, when the current thread is scheduled from the awake state into the working state, writing the calculated checksum to the location in the data storage unit corresponding to the current thread.
  • the acquiring of the calculation parameter corresponding to the current thread according to the received user instruction and the descriptor field in the data storage unit includes: receiving and parsing the user instruction to obtain a parsing result; and, after confirming that the user instruction is a checksum calculation instruction, obtaining the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
  • the performing of the checksum calculation based on the source data read from the data storage unit and the calculation parameter includes: accumulating the source data in 16-bit units according to the calculation parameter.
  • the method further includes: when the calculation is completed, placing a calculation completion identifier in a calculation completion register of the network processor; correspondingly, when the current thread is scheduled from the awake state into the working state and the calculation completion identifier is read from the calculation completion register, the calculated checksum is written to the location in the data storage unit corresponding to the current thread.
  • after the current thread is scheduled to enter the sleep state, the method further includes: acquiring a calculation parameter corresponding to the next thread; and performing a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
  • an embodiment of the present invention provides a network processor, including a multi-threaded micro-engine, a data storage unit, a register unit, a computing unit and a thread scheduling module; the multi-threaded micro-engine is configured to acquire a calculation parameter corresponding to the current thread based on the received user instruction and a descriptor field in the data storage unit and to send the calculation parameter to the computing unit, and is further configured to write the calculated checksum to the location in the data storage unit corresponding to the current thread when the thread scheduling module schedules the current thread from the awake state into the working state; the computing unit is configured to perform a checksum calculation based on the source data read from the data storage unit and the calculation parameter and, when the calculation is completed, to write the calculated checksum to the checksum register corresponding to the current thread in the register unit and to instruct the thread scheduling module to schedule the current thread to enter the awake state; the thread scheduling module is configured to schedule the current thread to enter a sleep state while the computing unit is calculating the checksum of the current thread, to schedule the current thread to enter the awake state according to the instruction of the computing unit, and to schedule the current thread from the awake state into the working state; the data storage unit is configured to store the source data used for the checksum calculation and the descriptor field; and the register unit has a plurality of checksum registers, each configured to store a calculated checksum.
  • the multi-threaded micro-engine is configured to receive and parse the user instruction to obtain a parsing result and, after confirming that the user instruction is a checksum calculation instruction, to obtain the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
  • the computing unit is configured to accumulate the source data in 16-bit units according to the calculation parameter.
  • the network processor further includes a calculation completion register; correspondingly, the computing unit is further configured to place a calculation completion identifier in the calculation completion register when the calculation is completed, and the multi-threaded micro-engine is configured to write the calculated checksum to the location in the data storage unit corresponding to the current thread when the calculation completion identifier is read from the calculation completion register.
  • the multi-threaded micro-engine is further configured to, after the thread scheduling module schedules the current thread to enter the sleep state, acquire the calculation parameter corresponding to the next thread and send it to the computing unit; the computing unit is further configured to perform a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
  • an embodiment of the present invention provides a computer storage medium, where the computer storage medium includes a set of instructions that, when executed, cause at least one processor to perform the above-described checksum calculation method.
  • An embodiment of the present invention provides a method for calculating a checksum, a network processor and a computer storage medium: a calculation parameter is acquired according to a received user instruction and a descriptor field in the data storage unit of the network processor;
  • a checksum calculation is then performed based on the source data read from the data storage unit and the calculation parameter;
  • while the calculation is in progress the current thread is scheduled to enter a sleep state; when the calculation is completed, the calculated checksum is written to the checksum register of the current thread and the thread is scheduled to enter the awake state; when the thread is scheduled from the awake state into the working state, the checksum is written back to the data storage unit.
  • in this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine, the scheduling overhead is reduced, and the calculations of multiple threads proceed in parallel, which greatly improves the efficiency of the checksum calculation and the performance of the network processor.
  • FIG. 1 is a schematic structural diagram of a network processor according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for calculating a checksum according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of thread switching in an embodiment of the present invention.
  • in various embodiments of the present invention, the multi-threaded micro-engine of a network processor acquires, according to a received user instruction and a descriptor field in the data storage unit of the network processor, the calculation parameters used to calculate the checksum, and sends them to the computing unit of the network processor; the computing unit then performs the checksum calculation based on the source data read from the data storage unit and these calculation parameters.
  • meanwhile, the thread scheduling module of the network processor schedules the current thread to enter a sleep state; when the calculation is completed, the computing unit writes the calculated checksum to the checksum register of the current thread and instructs the thread scheduling module to schedule the current thread to enter the awake state.
  • finally, when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the checksum to the location in the data storage unit corresponding to the current thread.
  • in this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine, the scheduling overhead is reduced, and the calculations of multiple threads proceed in parallel, which greatly improves the efficiency of the checksum calculation and the performance of the network processor.
  • An embodiment of the present invention provides a method for calculating a checksum, which is applied to a network processor.
  • as shown in FIG. 1, the network processor includes at least a multi-threaded micro-engine 11, a data storage unit 12, a register unit 13, a computing unit 14 and a thread scheduling module 15.
  • the multi-threaded micro-engine 11 includes a plurality of threads that can be processed in parallel, and is configured to receive the user instruction, obtain the calculation parameters used to calculate the checksum based on the user instruction and the descriptor field in the data storage unit 12, and send the calculation parameters to the computing unit 14; it is further configured to write the calculated checksum to the location in the data storage unit 12 corresponding to the current thread when the thread scheduling module 15 schedules the current thread from the awake state into the working state;
  • in a specific implementation, the above calculation parameters are composed of the parsing result obtained by parsing the user instruction and the descriptor field of the source data stored in the data storage unit 12, for example the starting position of the source data in the data storage unit and the length (in two-byte units) of the data that is to take part in the calculation;
  • the data storage unit 12 is configured to store the source data used for the checksum calculation and the data used to determine the calculation parameters; further, the data storage unit 12 can be read and written by the multi-threaded micro-engine 11, and can also be read by the computing unit 14;
  • the register unit 13 includes the general-purpose registers and the special-purpose registers of the multi-threaded micro-engine 11; the special-purpose registers are discrete and include the checksum registers and the calculation completion registers, with each thread of the multi-threaded micro-engine 11 corresponding to one checksum register and one calculation completion register; the checksum register is configured to store the calculated checksum;
  • the computing unit 14 is configured to add up the source data read out from the data storage unit 12 and to judge whether the calculation is finished; if the calculation has finished, it places the calculation completion flag in the calculation completion register corresponding to the thread and writes the checksum to the checksum register corresponding to the thread; if the calculation has not finished, it adjusts the calculation parameters passed in by the micro-engine, reads the descriptor field in the data storage unit again, and continues the checksum calculation;
  • the thread scheduling module 15 is configured to schedule the current thread to enter a sleep state while the computing unit 14 is calculating the checksum of the current thread, to schedule the current thread to enter the awake state according to the instruction of the computing unit 14, and to schedule the current thread from the awake state into the working state.
  • the calculation method of the checksum includes:
  • S201: based on the received user instruction and the descriptor field in the data storage unit, the multi-threaded micro-engine obtains the calculation parameter corresponding to the current thread and sends the calculation parameter to the computing unit;
  • in a specific implementation, S201 includes: the multi-threaded micro-engine receives and parses the user instruction and obtains the parsing result; after confirming that the user instruction is a checksum calculation instruction, the multi-threaded micro-engine obtains the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
  • specifically, since different users correspond to different threads of the multi-threaded micro-engine, the multi-threaded micro-engine receives and parses the user instruction and, after confirming that the instruction is a checksum calculation instruction, can identify from the user instruction the thread it corresponds to, that is, the current thread, and then obtains the calculation parameters corresponding to the current thread from the parsing result and the descriptor field; each thread can have different calculation parameters. After obtaining the calculation parameters of the current thread, the micro-engine sends these parameters to the computing unit.
  • S202: the computing unit performs the checksum calculation based on the source data read from the data storage unit and the calculation parameter; at the same time, the thread scheduling module schedules the current thread to enter a sleep state;
  • specifically, after receiving the calculation parameter, the computing unit reads, according to the parameter, the source data corresponding to the current thread from the data storage unit; here, the source data is the data over which the checksum of the current thread is calculated. The computing unit then accumulates the source data in 16-bit units according to the calculation parameter to obtain the checksum of the current thread. While the computing unit is performing the checksum calculation, the thread scheduling module schedules the current thread to enter a sleep state to wait for the checksum.
  • the IP header data is: 4500 0030 804c 4000 8006 b52e d343 117b cb51 153d
  • the message is modified to reduce the time-to-live value (TTL) by one
  • the modified IP header data is: 4500 0030 804c 4000 7f06 b52e d343 117b cb51 153d.
  • the checksum is 0xb62e.
  • S203: when the calculation is completed, the computing unit writes the calculation result, that is, the calculated checksum, into the checksum register of the current thread and at the same time instructs the thread scheduling module to schedule the current thread to enter the awake state; after receiving the indication, the thread scheduling module moves the current thread from the sleep state to the awake state, so that when the thread timing reaches this thread the multi-threaded micro-engine can wake it up and carry out the subsequent operations.
  • S204: the thread scheduling module schedules the threads in the awake state into the working state one by one according to the thread timing; when the current thread is scheduled into the working state, the multi-threaded micro-engine writes the calculated checksum to the location in the data storage unit corresponding to the current thread.
  • in a specific implementation, when the calculation is completed, the computing unit may further place the calculation completion identifier in the calculation completion register corresponding to the current thread;
  • correspondingly, S204 may include: when the thread scheduling module schedules the current thread from the awake state into the working state and the multi-threaded micro-engine reads the calculation completion identifier from the calculation completion register, the calculated checksum is written to the location in the data storage unit corresponding to the current thread.
  • S301: the multi-threaded micro-engine acquires the calculation parameter corresponding to the next thread and sends the calculation parameter to the computing unit.
  • S302: the computing unit performs a checksum calculation based on the calculation parameter corresponding to the next thread.
  • it should be noted that, while the checksum of the next thread is being calculated, the computing unit also executes S203 for the current thread; when the computing unit starts to calculate the checksum of the next thread, the next thread likewise enters a sleep state.
  • when the checksum calculation of the current thread is completed but the thread timing is still at the next thread, the calculation of the next thread continues until the thread timing reaches the current thread, and S204 is then executed.
  • in this way, the multi-threaded micro-engine can use one computing unit to perform the checksum calculations of multiple threads at the same time.
  • while the checksum of one thread is being calculated, the other threads can perform their calculations in the background, which improves computational efficiency and at the same time saves resources to a certain extent.
  • in this embodiment, the multi-threaded micro-engine in the network processor acquires the calculation parameters according to the received user instruction and sends them to the computing unit; the computing unit then performs the checksum calculation based on the source data read from the data storage unit and these calculation parameters.
  • while the calculation is in progress, the thread scheduling module schedules the current thread to enter a sleep state; when the calculation is completed, the computing unit writes the calculated checksum into the checksum register of the current thread and notifies the thread scheduling module to schedule the current thread to enter the awake state.
  • finally, when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the checksum to the location in the data storage unit corresponding to the current thread.
  • in this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine and the scheduling overhead is reduced; moreover, since the instruction execution of multiple threads proceeds in parallel, the checksum calculation is effectively hidden behind the instruction execution of the other threads, which saves checksum calculation time and greatly improves the efficiency of the checksum calculation and the performance of the network processor.
  • an embodiment of the present invention further provides a network processor, which is consistent with the network processor in one or more of the foregoing embodiments.
  • the network processor includes at least a multi-threaded micro-engine 11, a data storage unit 12, a register unit 13, a computing unit 14, and a thread scheduling module 15.
  • the multi-threaded micro-engine 11 is configured to acquire the calculation parameter corresponding to the current thread based on the received user instruction and the descriptor field in the data storage unit 12 and to send the calculation parameter to the computing unit 14; it is further configured to write the calculated checksum to the location in the data storage unit 12 corresponding to the current thread when the thread scheduling module 15 schedules the current thread from the awake state into the working state;
  • the computing unit 14 is configured to perform the checksum calculation based on the source data read from the data storage unit 12 and the calculation parameter; when the calculation is completed, it writes the calculated checksum into the checksum register 131 corresponding to the current thread in the register unit 13 and instructs the thread scheduling module 15 to schedule the current thread to enter the awake state;
  • the thread scheduling module 15 is configured to schedule the current thread to enter a sleep state while the computing unit 14 is calculating the checksum of the current thread; it is further configured to schedule the current thread to enter the awake state according to the instruction of the computing unit 14, and to schedule the current thread from the awake state into the working state;
  • the data storage unit 12 is configured to store the source data used for the checksum calculation and the data used to determine the calculation parameters;
  • the register unit 13 has a plurality of checksum registers 131, wherein the checksum register 131 is configured to store the calculated checksum.
  • it should be noted that the register unit 13 includes the general-purpose registers and the special-purpose registers of the multi-threaded micro-engine 11, where the special-purpose registers include at least the checksum registers 131; each thread of the multi-threaded micro-engine 11 corresponds to one checksum register 131.
  • in an embodiment, the multi-threaded micro-engine 11 is specifically configured to receive and parse the user instruction to obtain a parsing result and, after confirming that the user instruction is a checksum calculation instruction, to obtain the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field in the data storage unit 12.
  • in an embodiment, the computing unit 14 is specifically configured to accumulate the source data in 16-bit units according to the calculation parameter.
  • the network processor further includes: a calculation completion register
  • correspondingly, the computing unit 14 is further configured to place the calculation completion identifier in the calculation completion register when the calculation is completed; and the multi-threaded micro-engine 11 is configured to write the calculated checksum to the location in the data storage unit 12 corresponding to the current thread when it reads the calculation completion identifier from the calculation completion register.
  • the multi-threaded micro-engine 11 is further configured to: after the thread scheduling module 15 schedules the current thread to enter the sleep state, acquire the calculation parameters corresponding to the next thread, and send the calculation parameters to the computing unit 14;
  • the calculating unit 14 is further configured to perform a checksum calculation based on the calculation parameter corresponding to the next thread, wherein the current thread is in a sleep state.
  • in practical applications, the multi-threaded micro-engine 11 and the thread scheduling module 15 may be implemented by a controller in the network processor; the data storage unit 12 may be implemented by a cache in the network processor; the register unit 13 may be implemented by registers in the network processor; and the computing unit 14 may be implemented by an arithmetic unit in the network processor.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • an embodiment of the present invention further provides a computer storage medium, the computer storage medium comprising a set of instructions, when executed, causing at least one processor to perform the above-described checksum calculation method.
  • in the solution provided by the embodiments of the present invention, the multi-threaded micro-engine obtains the calculation parameters according to the received user instruction and the descriptor field in the data storage unit and sends them to the computing unit; the computing unit performs the checksum calculation based on the source data read from the data storage unit and these calculation parameters.
  • while the calculation is in progress, the thread scheduling module schedules the current thread to enter a sleep state; when the calculation is completed, the computing unit writes the calculated checksum to the checksum register of the current thread and notifies the thread scheduling module to schedule the current thread to enter the awake state.
  • when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the checksum to the location in the data storage unit corresponding to the current thread.
  • the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine, the scheduling is reduced, and the calculation of multiple threads is performed in parallel, which greatly improves the efficiency of the checksum calculation and the performance of the network processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)
  • Detection And Correction Of Errors (AREA)

Abstract

A checksum calculation method, a network processor and a computer storage medium. The method includes: a multi-threaded micro-engine (11) acquires, based on a received user instruction and a descriptor field in a data storage unit (12), a calculation parameter corresponding to a current thread, and sends the calculation parameter to a computing unit (14) (S201); the computing unit (14) performs a checksum calculation based on the source data read from the data storage unit (12) and the calculation parameter, and at the same time the current thread is scheduled to enter a sleep state (S202); when the calculation is completed, the computing unit (14) writes the calculated checksum to the checksum register of the current thread and instructs a thread scheduling module (15) to schedule the current thread to enter an awake state (S203); and, when the thread scheduling module (15) schedules the current thread from the awake state into the working state, the multi-threaded micro-engine (11) writes the calculated checksum to the location in the data storage unit (12) corresponding to the current thread (S204).

Description

Checksum calculation method, network processor and computer storage medium
Technical Field
The present invention relates to the field of network processor technologies, and in particular to a checksum calculation method, a network processor and a computer storage medium.
Background
A checksum is the sum of a group of data items used for verification purposes in the fields of data processing and data communication. It is commonly used in communication, especially long-distance communication, to guarantee the integrity and correctness of data. A checksum field is present in the Internet Protocol (IP) header, the Transmission Control Protocol (TCP) header and the User Datagram Protocol (UDP) header of a packet. During packet forwarding, the correctness of transmission is guaranteed by calculating, modifying and verifying the checksum field; the checksum calculation is therefore a very important and indispensable function for a network processor.
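To make the verification step concrete, the following is a minimal C sketch (not part of the patent text) of the standard ones' complement check a receiving or forwarding node can apply to an IP header: when every 16-bit word of the header, including the checksum field itself, is summed and the carries are folded back in, an intact header yields 0xFFFF. The function name and interface are illustrative assumptions.

    #include <stdint.h>
    #include <stddef.h>

    /* Returns 1 if the IP header verifies, 0 otherwise.
     * words:  the header as big-endian 16-bit words, checksum field included
     * nwords: header length in 16-bit words (10 for a 20-byte header)        */
    static int ip_header_ok(const uint16_t *words, size_t nwords)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < nwords; i++)
            sum += words[i];                    /* 16-bit ones' complement add */
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16); /* fold carries back in        */
        return sum == 0xFFFF;                   /* all ones means "no error"   */
    }

The calculation described in the embodiments below is the complementary operation: the checksum field is first cleared, the remaining words are summed in the same way, and the ones' complement of the folded sum is written back into the header.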
At present, there are multiple ways to implement checksum calculation in a network processor. One is a completely independent checksum calculation coprocessor: for every calculation the micro-engine has to read out the data to be calculated and send it to the coprocessor, and the result is returned to the micro-engine once the calculation is finished. This approach is the most economical in resources, since several micro-engines can share one coprocessor, but it adds the scheduling of data into and out of the coprocessor, increases the coprocessor's extra overhead, and increases the packet's dwell time in the network processor, which affects the performance of the network processor.
Summary of the Invention
In view of this, the embodiments of the present invention are expected to provide a checksum calculation method, a network processor and a computer storage medium.
To achieve the above purpose, the technical solutions of the embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present invention provides a checksum calculation method, including: acquiring, based on a received user instruction and a descriptor field in a data storage unit of a network processor, a calculation parameter corresponding to a current thread; performing a checksum calculation based on the source data read from the data storage unit and the calculation parameter, and at the same time scheduling the current thread to enter a sleep state; when the calculation is completed, writing the calculated checksum to the checksum register of the current thread and scheduling the current thread to enter an awake state; and, when the current thread is scheduled from the awake state into the working state, writing the calculated checksum to the location in the data storage unit corresponding to the current thread.
In the above solution, the acquiring of the calculation parameter corresponding to the current thread based on the received user instruction and the descriptor field in the data storage unit includes: receiving and parsing the user instruction to obtain a parsing result; and, after confirming that the user instruction is a checksum calculation instruction, obtaining the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
In the above solution, the performing of the checksum calculation based on the source data read from the data storage unit and the calculation parameter includes: accumulating the source data in 16-bit units according to the calculation parameter.
In the above solution, the method further includes: when the calculation is completed, placing a calculation completion identifier in a calculation completion register of the network processor; correspondingly, when the current thread is scheduled from the awake state into the working state and the calculation completion identifier is read from the calculation completion register, writing the calculated checksum to the location in the data storage unit corresponding to the current thread.
In the above solution, after the current thread is scheduled to enter the sleep state, the method further includes: acquiring a calculation parameter corresponding to the next thread; and performing a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
In a second aspect, an embodiment of the present invention provides a network processor, including a multi-threaded micro-engine, a data storage unit, a register unit, a computing unit and a thread scheduling module, where: the multi-threaded micro-engine is configured to acquire a calculation parameter corresponding to the current thread based on a received user instruction and a descriptor field in the data storage unit and to send the calculation parameter to the computing unit, and is further configured to write the calculated checksum to the location in the data storage unit corresponding to the current thread when the thread scheduling module schedules the current thread from the awake state into the working state; the computing unit is configured to perform a checksum calculation based on the source data read from the data storage unit and the calculation parameter and, when the calculation is completed, to write the calculated checksum to the checksum register corresponding to the current thread in the register unit and to instruct the thread scheduling module to schedule the current thread to enter the awake state; the thread scheduling module is configured to schedule the current thread to enter a sleep state while the computing unit is calculating the checksum of the current thread, to schedule the current thread to enter the awake state according to the instruction of the computing unit, and to schedule the current thread from the awake state into the working state; the data storage unit is configured to store the source data used for the checksum calculation and the descriptor field; and the register unit has a plurality of checksum registers, where a checksum register is configured to store the calculated checksum.
In the above solution, the multi-threaded micro-engine is configured to receive and parse the user instruction to obtain a parsing result and, after confirming that the user instruction is a checksum calculation instruction, to obtain the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
In the above solution, the computing unit is configured to accumulate the source data in 16-bit units according to the calculation parameter.
In the above solution, the network processor further includes a calculation completion register; correspondingly, the computing unit is further configured to place a calculation completion identifier in the calculation completion register when the calculation is completed, and the multi-threaded micro-engine is configured to write the calculated checksum to the location in the data storage unit corresponding to the current thread when it reads the calculation completion identifier from the calculation completion register.
In the above solution, the multi-threaded micro-engine is further configured to, after the thread scheduling module schedules the current thread to enter the sleep state, acquire a calculation parameter corresponding to the next thread and send it to the computing unit; the computing unit is further configured to perform a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
In a third aspect, an embodiment of the present invention provides a computer storage medium, the computer storage medium including a set of instructions that, when executed, cause at least one processor to perform the checksum calculation method described above.
The embodiments of the present invention provide a checksum calculation method, a network processor and a computer storage medium. A calculation parameter is acquired according to a received user instruction and a descriptor field in the data storage unit of the network processor; a checksum calculation is then performed based on the source data read from the data storage unit and the calculation parameter, and while the calculation is in progress the current thread is scheduled to enter a sleep state; when the calculation is completed, the calculated checksum is written into the checksum register of the current thread and the current thread is scheduled to enter the awake state; finally, when the current thread is scheduled from the awake state into the working state, the checksum is written to the location in the data storage unit corresponding to the current thread. In this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine and the scheduling overhead is reduced, and, since the calculations of multiple threads proceed in parallel, the efficiency of the checksum calculation and the performance of the network processor are greatly improved.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a network processor in an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a checksum calculation method in an embodiment of the present invention;
FIG. 3 is a schematic flowchart of thread switching in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present invention.
In the various embodiments of the present invention, the multi-threaded micro-engine of a network processor acquires, according to a received user instruction and a descriptor field in the data storage unit of the network processor, the calculation parameters used to calculate the checksum, and sends them to the computing unit of the network processor; the computing unit then performs the checksum calculation based on the source data read from the data storage unit and these calculation parameters, and at that point the thread scheduling module of the network processor schedules the current thread to enter a sleep state; when the calculation is completed, the computing unit writes the calculated checksum into the checksum register of the current thread and instructs the thread scheduling module to schedule the current thread to enter the awake state; finally, when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the checksum to the location in the data storage unit corresponding to the current thread. In this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine, the scheduling overhead is reduced, and, since the calculations of multiple threads proceed in parallel, the efficiency of the checksum calculation and the performance of the network processor are greatly improved.
The above technical solution is described in detail below.
An embodiment of the present invention provides a checksum calculation method, which is applied in a network processor. As shown in FIG. 1, the network processor includes at least a multi-threaded micro-engine 11, a data storage unit 12, a register unit 13, a computing unit 14 and a thread scheduling module 15.
The multi-threaded micro-engine 11 includes a plurality of threads that can be processed in parallel, and is configured to receive the user instruction, obtain, based on the user instruction and the descriptor field in the data storage unit 12, the calculation parameters used to calculate the checksum, and then send the calculation parameters to the computing unit 14; it is further configured to write the calculated checksum to the location in the data storage unit 12 corresponding to the current thread when the thread scheduling module 15 schedules the current thread from the awake state into the working state.
In a specific implementation, the above calculation parameters are composed of the parsing result obtained by parsing the user instruction and the descriptor field of the source data stored in the data storage unit 12, for example the starting position of the source data in the data storage unit and the length (in two-byte units) of the data that is to take part in the calculation.
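As an illustration only, such a parameter set could be modelled with the following C structure; the field names and widths are assumptions made for this sketch and do not correspond to the micro-engine's actual register layout.

    #include <stdint.h>

    /* Hypothetical calculation parameters handed from the multi-threaded
     * micro-engine to the computing unit (illustrative names only).        */
    struct checksum_params {
        uint16_t src_offset;  /* starting position of the source data in the
                                 data storage unit                           */
        uint16_t len_words;   /* length of the data taking part in the
                                 calculation, in two-byte (16-bit) units     */
        uint8_t  thread_id;   /* thread whose checksum register and
                                 completion register will receive the result */
    };

In the flow described below, one such record would exist per thread, since each thread can carry different calculation parameters.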
The data storage unit 12 is configured to store the source data used for the checksum calculation and the data used to determine the calculation parameters; further, the data storage unit 12 can be read and written by the multi-threaded micro-engine 11, and can also be read by the computing unit 14.
The register unit 13 includes the general-purpose registers and the special-purpose registers of the multi-threaded micro-engine 11; the special-purpose registers are discrete and include the checksum registers and the calculation completion registers; each thread of the multi-threaded micro-engine 11 corresponds to one checksum register and one calculation completion register, and the checksum register is configured to store the calculated checksum.
The computing unit 14 is configured to add up the source data read out from the data storage unit 12 and to judge whether the calculation is finished; if the calculation has finished, it places the calculation completion flag in the calculation completion register corresponding to the thread and writes the checksum into the checksum register corresponding to the thread; if the calculation has not finished, it adjusts the calculation parameters passed in by the micro-engine, reads the descriptor field in the data storage unit again, and continues the checksum calculation.
The thread scheduling module 15 is configured to schedule the current thread to enter a sleep state while the computing unit 14 is calculating the checksum of the current thread; it is further configured to schedule the current thread to enter the awake state according to the instruction of the computing unit 14, and to schedule the current thread from the awake state into the working state.
The checksum calculation method provided by the embodiment of the present invention is described below with reference to the above network processor.
As shown in FIG. 2, the checksum calculation method includes:
S201: based on the received user instruction and the descriptor field in the data storage unit, the multi-threaded micro-engine obtains the calculation parameter corresponding to the current thread and sends the calculation parameter to the computing unit.
In a specific implementation, S201 includes: the multi-threaded micro-engine receives and parses the user instruction and obtains the parsing result; after confirming that the user instruction is a checksum calculation instruction, the multi-threaded micro-engine obtains the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
Specifically, since different users correspond to different threads of the multi-threaded micro-engine, the multi-threaded micro-engine receives and parses the user instruction and, after confirming that the instruction is a checksum calculation instruction, can identify from the user instruction the thread it corresponds to, that is, the current thread, and then obtains the calculation parameters corresponding to the current thread from the parsing result and the descriptor field; each thread can have different calculation parameters. After obtaining the calculation parameters of the current thread, the micro-engine sends these parameters to the computing unit.
S202: the computing unit performs the checksum calculation based on the source data read from the data storage unit and the calculation parameter; at the same time, the thread scheduling module schedules the current thread to enter a sleep state.
Specifically, after receiving the calculation parameter, the computing unit reads, according to the parameter, the source data corresponding to the current thread from the data storage unit; here, the source data is the data over which the checksum of the current thread is calculated. The computing unit then accumulates the source data in 16-bit units according to the calculation parameter to obtain the checksum of the current thread. While the computing unit is performing the checksum calculation, the thread scheduling module schedules the current thread to enter a sleep state to wait for the checksum.
For example, suppose the IP header data is 4500 0030 804c 4000 8006 b52e d343 117b cb51 153d and the packet is modified so that the time-to-live (TTL) value is decreased by one; the modified IP header data is then 4500 0030 804c 4000 7f06 b52e d343 117b cb51 153d.
The specific checksum calculation is then as follows:
First, the checksum in the modified IP header data is cleared to zero, that is, the 0xb52e in the header above is replaced with 0x0000, giving the data to be calculated; next, the data to be calculated is added up in 16-bit units, i.e. 4500+0030+804c+4000+7f06+0000+d343+117b+cb51+153d=349ce; then the carry is added back to the low-order part of the result, i.e. 49ce+3=49d1; finally, the result is inverted to obtain the final result, that is, the checksum is 0xb62e.
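The arithmetic above can be reproduced with a short C model of the 16-bit accumulation; this is only an illustrative software rendering of the calculation performed by the computing unit, not the hardware implementation. Compiled and run, it prints 0xb62e for the modified header with its checksum field cleared, matching the result above.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* 16-bit ones' complement checksum over big-endian 16-bit words. */
    static uint16_t checksum16(const uint16_t *words, size_t nwords)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < nwords; i++)
            sum += words[i];                    /* accumulate in 16-bit units */
        while (sum >> 16)                       /* add carries to the low end */
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)~sum;                  /* final ones' complement     */
    }

    int main(void)
    {
        /* Modified IP header from the example, checksum field set to 0x0000. */
        const uint16_t hdr[10] = {
            0x4500, 0x0030, 0x804c, 0x4000, 0x7f06,
            0x0000, 0xd343, 0x117b, 0xcb51, 0x153d
        };
        printf("checksum = 0x%04x\n", checksum16(hdr, 10));  /* prints 0xb62e */
        return 0;
    }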
S203: when the calculation is completed, the computing unit writes the calculated checksum to the checksum register of the current thread and instructs the thread scheduling module to schedule the current thread to enter the awake state.
Specifically, when the calculation is completed, the computing unit writes the calculation result, that is, the calculated checksum, into the checksum register of the current thread and at the same time instructs the thread scheduling module to schedule the current thread to enter the awake state; after receiving the indication, the thread scheduling module moves the current thread from the sleep state to the awake state, so that when the thread timing reaches this thread the multi-threaded micro-engine can wake it up and carry out the subsequent operations.
S204: when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the calculated checksum to the location in the data storage unit corresponding to the current thread.
Specifically, the thread scheduling module schedules the threads in the awake state into the working state one by one according to the thread timing; when the current thread is scheduled into the working state, the multi-threaded micro-engine writes the calculated checksum to the location in the data storage unit corresponding to the current thread.
In a specific implementation, when the calculation is completed, the computing unit may further place the calculation completion identifier in the calculation completion register corresponding to the current thread.
Correspondingly, S204 may include: when the thread scheduling module schedules the current thread from the awake state into the working state and the multi-threaded micro-engine reads the calculation completion identifier from the calculation completion register, the calculated checksum is written to the location in the data storage unit corresponding to the current thread.
In another embodiment, in order to improve the efficiency of the checksum calculation and the performance of the network processor, after the current thread enters the sleep state the flow switches to the next thread and a checksum calculation is performed for the next thread. As shown in FIG. 3, the specific process is as follows:
S301: the multi-threaded micro-engine acquires the calculation parameter corresponding to the next thread and sends it to the computing unit.
S302: the computing unit performs a checksum calculation based on the calculation parameter corresponding to the next thread.
It should be noted that, while the checksum of the next thread is being calculated, the computing unit also executes S203 for the current thread; when the computing unit starts to calculate the checksum of the next thread, the next thread likewise enters a sleep state; when the checksum calculation of the current thread is completed but the thread timing is still at the next thread, the calculation of the next thread continues until the thread timing reaches the current thread, and S204 is then executed. In this way, the multi-threaded micro-engine can use one computing unit to perform the checksum calculations of multiple threads at the same time; while the checksum of one thread is being calculated, the other threads can perform their calculations in the background, which improves computational efficiency and at the same time saves resources to a certain extent.
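The thread switching described above can be pictured with a small software model. The state names and the round-robin helper below are assumptions made for illustration rather than the micro-engine's real interface; they only show how, while one thread sleeps waiting for its checksum, the thread timing simply moves on to other threads, so the computing unit's latency is hidden behind their instruction execution.

    /* The three thread states used in the text. */
    enum thread_state { WORKING, SLEEPING, AWAKE };

    /* Illustrative round-robin pass over the hardware threads: a thread that
     * the computing unit has already woken re-enters the working state when
     * its turn in the thread timing arrives, at which point the micro-engine
     * would write its checksum back to the data storage unit.               */
    static int next_working_thread(enum thread_state st[], int nthreads, int cur)
    {
        for (int step = 1; step <= nthreads; step++) {
            int i = (cur + step) % nthreads;   /* follow the thread timing    */
            if (st[i] == AWAKE) {
                st[i] = WORKING;               /* resume this thread          */
                return i;
            }
        }
        return -1;                             /* all threads still sleeping  */
    }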
At this point, the process of performing the checksum calculation in the pipeline of the network processor has been realized.
In this embodiment, the multi-threaded micro-engine in the network processor acquires the calculation parameters according to the received user instruction and sends them to the computing unit; the computing unit then performs the checksum calculation based on the source data read from the data storage unit and these calculation parameters, and while the calculation is in progress the thread scheduling module schedules the current thread to enter a sleep state; when the calculation is completed, the computing unit writes the calculated checksum into the checksum register of the current thread and notifies the thread scheduling module to schedule the current thread to enter the awake state; finally, when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the checksum to the location in the data storage unit corresponding to the current thread. In this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine and the scheduling overhead is reduced; moreover, since the instruction execution of multiple threads proceeds in parallel, the checksum calculation is effectively hidden behind the instruction execution of the other threads, which saves checksum calculation time and greatly improves the efficiency of the checksum calculation and the performance of the network processor.
Based on the same inventive concept, an embodiment of the present invention further provides a network processor, which is consistent with the network processor in one or more of the foregoing embodiments.
As shown in FIG. 1, the network processor includes at least a multi-threaded micro-engine 11, a data storage unit 12, a register unit 13, a computing unit 14 and a thread scheduling module 15.
The multi-threaded micro-engine 11 is configured to acquire the calculation parameter corresponding to the current thread based on the received user instruction and the descriptor field in the data storage unit 12 and to send the calculation parameter to the computing unit 14; it is further configured to write the calculated checksum to the location in the data storage unit 12 corresponding to the current thread when the thread scheduling module 15 schedules the current thread from the awake state into the working state.
The computing unit 14 is configured to perform the checksum calculation based on the source data read from the data storage unit 12 and the calculation parameter; when the calculation is completed, it writes the calculated checksum into the checksum register 131 corresponding to the current thread in the register unit 13 and instructs the thread scheduling module 15 to schedule the current thread to enter the awake state.
The thread scheduling module 15 is configured to schedule the current thread to enter a sleep state while the computing unit 14 is calculating the checksum of the current thread; it is further configured to schedule the current thread to enter the awake state according to the instruction of the computing unit 14, and to schedule the current thread from the awake state into the working state.
The data storage unit 12 is configured to store the source data used for the checksum calculation and the data used to determine the calculation parameters.
The register unit 13 has a plurality of checksum registers 131, where a checksum register 131 is configured to store the calculated checksum.
It should be noted that the register unit 13 includes the general-purpose registers and the special-purpose registers of the multi-threaded micro-engine 11, where the special-purpose registers include at least the checksum registers 131; each thread of the multi-threaded micro-engine 11 corresponds to one checksum register 131.
In an embodiment, the multi-threaded micro-engine 11 is specifically configured to receive and parse the user instruction to obtain a parsing result and, after confirming that the user instruction is a checksum calculation instruction, to obtain the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field in the data storage unit 12.
In an embodiment, the computing unit 14 is specifically configured to accumulate the source data in 16-bit units according to the calculation parameter.
In an embodiment, the network processor further includes a calculation completion register.
Correspondingly, the computing unit 14 is further configured to place the calculation completion identifier in the calculation completion register when the calculation is completed; the multi-threaded micro-engine 11 is specifically configured to write the calculated checksum to the location in the data storage unit 12 corresponding to the current thread when it reads the calculation completion identifier from the calculation completion register.
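A minimal software analogue of this handshake, with invented register names, might look as follows: the computing unit publishes the checksum and then raises the per-thread completion flag, and the micro-engine writes the result back to the data storage unit only after it has read that flag. Keeping the data register and the completion flag separate is what allows the write-back in S204 to be deferred safely until the thread's turn in the thread timing arrives.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical per-thread registers (names and sizes are assumptions). */
    static volatile bool     calc_done[32];     /* calculation completion registers */
    static volatile uint16_t checksum_reg[32];  /* per-thread checksum registers    */

    /* Computing-unit side: publish the result, then raise the flag. */
    void computing_unit_finish(int tid, uint16_t csum)
    {
        checksum_reg[tid] = csum;
        calc_done[tid]    = true;
    }

    /* Micro-engine side: on re-entering the working state, write back only
     * if the completion flag has been set for this thread.                 */
    bool microengine_writeback(int tid, uint16_t *dst)
    {
        if (!calc_done[tid])
            return false;          /* calculation still in flight                  */
        *dst = checksum_reg[tid];  /* location of this thread in the data storage unit */
        calc_done[tid] = false;
        return true;
    }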
In an embodiment, the multi-threaded micro-engine 11 is further configured to acquire the calculation parameter corresponding to the next thread after the thread scheduling module 15 schedules the current thread to enter the sleep state, and to send it to the computing unit 14.
The computing unit 14 is further configured to perform a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
In practical applications, the multi-threaded micro-engine 11 and the thread scheduling module 15 may be implemented by a controller in the network processor; the data storage unit 12 may be implemented by a cache in the network processor; the register unit 13 may be implemented by registers in the network processor; and the computing unit 14 may be implemented by an arithmetic unit in the network processor.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Based on this, an embodiment of the present invention further provides a computer storage medium, the computer storage medium including a set of instructions that, when executed, cause at least one processor to perform the checksum calculation method described above.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.
Industrial Applicability
In the solution provided by the embodiments of the present invention, the multi-threaded micro-engine obtains the calculation parameters according to the received user instruction and the descriptor field in the data storage unit and sends them to the computing unit; the computing unit performs the checksum calculation based on the source data read from the data storage unit and these calculation parameters, and while the calculation is in progress the thread scheduling module schedules the current thread to enter a sleep state; when the calculation is completed, the computing unit writes the calculated checksum into the checksum register of the current thread and notifies the thread scheduling module to schedule the current thread to enter the awake state; when the thread scheduling module schedules the current thread from the awake state into the working state, the multi-threaded micro-engine writes the checksum to the location in the data storage unit corresponding to the current thread. In this way, the checksum calculation is embedded in the pipeline of the multi-threaded micro-engine, the scheduling overhead is reduced, and, since the calculations of multiple threads proceed in parallel, the efficiency of the checksum calculation and the performance of the network processor are greatly improved.

Claims (11)

  1. A checksum calculation method, the method comprising:
    acquiring, based on a received user instruction and a descriptor field in a data storage unit of a network processor, a calculation parameter corresponding to a current thread;
    performing a checksum calculation based on source data read from the data storage unit and the calculation parameter, and at the same time scheduling the current thread to enter a sleep state;
    when the calculation is completed, writing the calculated checksum to a checksum register of the current thread, and scheduling the current thread to enter an awake state; and
    when the current thread is scheduled from the awake state into a working state, writing the calculated checksum to a location in the data storage unit corresponding to the current thread.
  2. The method according to claim 1, wherein the acquiring, based on the received user instruction and the descriptor field in the data storage unit, of the calculation parameter corresponding to the current thread comprises:
    receiving and parsing the user instruction to obtain a parsing result; and
    after confirming that the user instruction is a checksum calculation instruction, obtaining the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
  3. The method according to claim 1, wherein the performing of the checksum calculation based on the source data read from the data storage unit and the calculation parameter comprises:
    accumulating the source data in 16-bit units according to the calculation parameter.
  4. The method according to claim 1, wherein the method further comprises:
    when the calculation is completed, placing a calculation completion identifier in a calculation completion register of the network processor;
    correspondingly, when the current thread is scheduled from the awake state into the working state and the calculation completion identifier is read from the calculation completion register, writing the calculated checksum to the location in the data storage unit corresponding to the current thread.
  5. The method according to claim 1, wherein, after the current thread is scheduled to enter the sleep state, the method further comprises:
    acquiring a calculation parameter corresponding to a next thread; and
    performing a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
  6. A network processor, comprising a multi-threaded micro-engine, a data storage unit, a register unit, a computing unit and a thread scheduling module, wherein
    the multi-threaded micro-engine is configured to acquire a calculation parameter corresponding to a current thread based on a received user instruction and a descriptor field in the data storage unit and to send the calculation parameter to the computing unit, and is further configured to write the calculated checksum to a location in the data storage unit corresponding to the current thread when the thread scheduling module schedules the current thread from an awake state into a working state;
    the computing unit is configured to perform a checksum calculation based on source data read from the data storage unit and the calculation parameter and, when the calculation is completed, to write the calculated checksum to a checksum register corresponding to the current thread in the register unit and to instruct the thread scheduling module to schedule the current thread to enter the awake state;
    the thread scheduling module is configured to schedule the current thread to enter a sleep state while the computing unit is calculating the checksum of the current thread, to schedule the current thread to enter the awake state according to the instruction of the computing unit, and to schedule the current thread from the awake state into the working state;
    the data storage unit is configured to store the source data used for the checksum calculation and the descriptor field; and
    the register unit has a plurality of checksum registers, wherein the checksum registers are configured to store the calculated checksums.
  7. The network processor according to claim 6, wherein the multi-threaded micro-engine is configured to receive and parse the user instruction to obtain a parsing result and, after confirming that the user instruction is a checksum calculation instruction, to obtain the calculation parameter corresponding to the current thread based on the parsing result and the descriptor field.
  8. The network processor according to claim 6, wherein the computing unit is configured to accumulate the source data in 16-bit units according to the calculation parameter.
  9. The network processor according to claim 6, wherein the network processor further comprises a calculation completion register;
    correspondingly, the computing unit is further configured to place a calculation completion identifier in the calculation completion register when the calculation is completed; and
    the multi-threaded micro-engine is configured to write the calculated checksum to the location in the data storage unit corresponding to the current thread when it reads the calculation completion identifier from the calculation completion register.
  10. The network processor according to claim 6, wherein the multi-threaded micro-engine is further configured to acquire a calculation parameter corresponding to a next thread after the thread scheduling module schedules the current thread to enter the sleep state, and to send it to the computing unit; and
    the computing unit is further configured to perform a checksum calculation based on the calculation parameter corresponding to the next thread while the current thread is in the sleep state.
  11. A computer storage medium, the computer storage medium comprising a set of instructions that, when executed, cause at least one processor to perform the checksum calculation method according to any one of claims 1 to 5.
PCT/CN2016/089700 2015-08-27 2016-07-11 Checksum calculation method, network processor and computer storage medium WO2017032178A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510536324.1A CN106484503B (zh) 2015-08-27 2015-08-27 Checksum calculation method and network processor
CN201510536324.1 2015-08-27

Publications (1)

Publication Number Publication Date
WO2017032178A1 true WO2017032178A1 (zh) 2017-03-02

Family

ID=58099566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089700 WO2017032178A1 (zh) 2015-08-27 2016-07-11 一种校验和的计算方法、网络处理器及计算机存储介质

Country Status (2)

Country Link
CN (1) CN106484503B (zh)
WO (1) WO2017032178A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113972989A (zh) * 2020-07-06 2022-01-25 宇龙计算机通信科技(深圳)有限公司 Data verification method and apparatus, storage medium, and electronic device
CN113973111A (zh) * 2021-10-29 2022-01-25 北京天融信网络安全技术有限公司 Data forwarding method and apparatus, gateway device, and computer-readable storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108196975B (zh) * 2017-11-21 2021-09-17 深信服科技股份有限公司 Data verification method and apparatus based on multiple checksums, and storage medium
US11468037B2 (en) * 2019-03-06 2022-10-11 Semiconductor Components Industries, Llc Memory device and data verification method
CN112612518B (zh) * 2020-12-08 2022-04-01 麒麟软件有限公司 Method for optimizing a network checksum algorithm on the Phytium platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751665B2 (en) * 2002-10-18 2004-06-15 Alacritech, Inc. Providing window updates from a computer to a network interface device
CN1677952A (zh) * 2004-03-30 2005-10-05 武汉烽火网络有限责任公司 Wire-speed parallel packet forwarding method and device
CN1964312A (zh) * 2005-11-10 2007-05-16 中国科学院计算技术研究所 Method for maintaining the ingress and egress order of IP packets in a network processor
CN101221550A (zh) * 2008-01-30 2008-07-16 许新朋 Serial communication method and chip
CN101309184A (zh) * 2008-05-28 2008-11-19 华为技术有限公司 Method and device for detecting micro-engine faults

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8037224B2 (en) * 2002-10-08 2011-10-11 Netlogic Microsystems, Inc. Delegating network processor operations to star topology serial bus interfaces
CN103595706A (zh) * 2013-10-15 2014-02-19 航天科工深圳(集团)有限公司 General-purpose temperature-sensing data server and communication method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751665B2 (en) * 2002-10-18 2004-06-15 Alacritech, Inc. Providing window updates from a computer to a network interface device
CN1677952A (zh) * 2004-03-30 2005-10-05 武汉烽火网络有限责任公司 Wire-speed parallel packet forwarding method and device
CN1964312A (zh) * 2005-11-10 2007-05-16 中国科学院计算技术研究所 Method for maintaining the ingress and egress order of IP packets in a network processor
CN101221550A (zh) * 2008-01-30 2008-07-16 许新朋 Serial communication method and chip
CN101309184A (zh) * 2008-05-28 2008-11-19 华为技术有限公司 Method and device for detecting micro-engine faults

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113972989A (zh) * 2020-07-06 2022-01-25 宇龙计算机通信科技(深圳)有限公司 Data verification method and apparatus, storage medium, and electronic device
CN113972989B (zh) * 2020-07-06 2023-09-15 宇龙计算机通信科技(深圳)有限公司 Data verification method, storage medium, and electronic device
CN113973111A (zh) * 2021-10-29 2022-01-25 北京天融信网络安全技术有限公司 Data forwarding method and apparatus, gateway device, and computer-readable storage medium
CN113973111B (zh) * 2021-10-29 2023-12-22 珠海天融信网络安全技术有限公司 Data forwarding method and apparatus, gateway device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN106484503A (zh) 2017-03-08
CN106484503B (zh) 2019-10-18

Similar Documents

Publication Publication Date Title
WO2017032178A1 (zh) Checksum calculation method, network processor and computer storage medium
US10079772B2 (en) Queue scheduling method and device, and computer storage medium
JP4805281B2 (ja) System and method for device management scheduling
US20140115046A1 (en) Stream processing using a client-server architecture
US20130198729A1 (en) Automated improvement of executable applications based on evaluating independent execution heuristics
US10158543B2 (en) Method of estimating round-trip time (RTT) in content-centric network (CCN)
RU2014100914A (ru) Method and apparatus for transmitting/receiving multimedia content in a multimedia system
WO2016029738A1 (zh) Method and apparatus for processing stream data
CN104038846A (zh) Buffer state estimation method and device
CN110019386B (zh) Stream data processing method and device
CN104683457A (zh) Concurrency control method and device
CN105874852A (zh) Mobile device power control
US9880923B2 (en) Model checking device for distributed environment model, model checking method for distributed environment model, and medium
CN104750545A (zh) Process scheduling method and device
CN103207775B (zh) Processing method for real-time network stream applications using GPU acceleration
RU2014143340A (ru) Communication device, control device, communication system, communication method, control method for communication device, and program
CN104104969B (zh) Video capture method and device
WO2015165323A1 (zh) Data processing method, processor and data processing device
CN104270243A (zh) Method for implementing security functions of an industrial Internet-of-Things chip
CN111431892B (zh) Accelerator management architecture and method, and accelerator interface controller
CN107704273B (zh) Method for tracking scene state changes, trigger device and receiving device
CN113986144A (zh) Communication signal processing method and apparatus, chip and electronic device
JP2018538632A (ja) Method and device for processing data after node restart
CN106302261B (zh) Control command forwarding method and device
Ashjaei et al. MTU configuration for real-time switched Ethernet networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16838438

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16838438

Country of ref document: EP

Kind code of ref document: A1