CN110618966B - Message processing method and device and electronic equipment - Google Patents

Message processing method and device and electronic equipment

Info

Publication number: CN110618966B
Application number: CN201910931664.2A
Authority: CN (China)
Prior art keywords: size, input cache, service message, service, input
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110618966A
Inventor: 李建国
Current Assignee: Maipu Communication Technology Co Ltd
Original Assignee: Maipu Communication Technology Co Ltd
Priority and filing date: 2019-09-27
Publication of CN110618966A: 2019-12-27
Grant and publication of CN110618966B: 2022-05-17

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/781On-chip cache; Off-chip memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a message processing method and device and an electronic device. The method includes: receiving a service message sent by a core processor; determining, from a plurality of input cache units, an input cache unit capable of storing the service message, and caching the service message in the determined input cache unit, where the input cache units correspond one-to-one with different algorithm engines; processing the service message through the algorithm engine corresponding to the determined input cache unit; and sending the processed service message to the core processor. Because the storage capacity of each input cache unit has an upper limit, a large number of service messages cannot all be dispatched to the same algorithm engine, so the load is distributed as evenly as possible across the algorithm engines, the load on each engine is reduced, processing efficiency is improved, and device performance is improved.

Description

Message processing method and device and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a message processing method and apparatus, and an electronic device.
Background
To improve device performance, the core processor of an electronic device may offload a portion of its traffic to a chip on an acceleration board, and the chip on the acceleration board then processes the service messages dispatched by the core processor using preset algorithm engines.
Because the operating speed of an algorithm engine is limited, if a large number of service messages are dispatched to a single algorithm engine, that engine becomes overloaded and its performance falls short, which reduces the processing efficiency of the acceleration board and prevents any improvement in device performance.
Disclosure of Invention
An embodiment of the present application aims to provide a message processing method and apparatus, and an electronic device, so as to improve processing efficiency and device performance.
In a first aspect, an embodiment of the present application provides a message processing method, applied to a chip that accelerates services of a core processor in an electronic device, the method including:
receiving a service message sent by the core processor;
determining, from a plurality of input cache units, an input cache unit capable of storing the service message, and caching the service message in the determined input cache unit; the input cache units correspond one-to-one with different algorithm engines;
processing the service message through an algorithm engine corresponding to the determined input cache unit;
and sending the processed service message to the core processor.
In the embodiment of the application, a different input cache unit is provided for each of the plurality of algorithm engines in one-to-one correspondence, so that the service messages to be processed by each algorithm engine are cached in its corresponding input cache unit. Because the storage capacity of each input cache unit has an upper limit, a large number of service messages cannot all be dispatched to the same algorithm engine, so the load is distributed as evenly as possible across the algorithm engines, the load on each engine is reduced, processing efficiency is improved, and device performance is improved.
With reference to the first aspect, in a first possible implementation manner, determining an input cache unit capable of storing the service message from a plurality of input cache units includes:
determining the data size of the service message, and obtaining the remaining space size of each input cache unit;
and determining that an input cache unit whose remaining space size is greater than or equal to the data size is an input cache unit capable of storing the service message.
In the embodiment of the present application, comparing the data size of the service message with the remaining space size of each input cache unit is a convenient and direct way to find an input cache unit capable of storing the message.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, determining that an input cache unit whose remaining space size is greater than or equal to the data size is an input cache unit capable of storing the service message includes:
sequentially judging whether the remaining space size of each input cache unit, sorted by storage space size, is greater than or equal to the data size, and determining that the first input cache unit found to have a remaining space size greater than or equal to the data size is the input cache unit capable of storing the service message; or
randomly selecting one input cache unit from the input cache units whose remaining space size is greater than or equal to the data size.
In the embodiment of the application, sorting the units and comparing sizes in order stores the service message quickly in the first unit that passes the check; alternatively, random selection spreads the service messages across the input cache units as evenly as possible.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, the chip includes the multiple input cache units, and the storage space size of the last input cache unit in the sorted order is larger than the maximum data size of the service message.
In the embodiment of the application, because the storage space of the last input cache unit in the sorted order is larger than the maximum data size of a service message, a jumbo-frame service message can always be cached in that last unit, which makes processing jumbo-frame service messages straightforward.
With reference to the second possible implementation manner of the first aspect, in a fourth possible implementation manner, after sequentially judging whether the remaining space size of each input cache unit is greater than or equal to the data size, the method further includes:
determining that the remaining space size of an input cache unit near the end of the sorted order is still smaller than the data size;
sending a service suspension request to the core processor, so that the core processor suspends sending new service messages to the chip; or
after determining the data size of the service message and obtaining the remaining space size of each input cache unit, the method further includes:
determining the proportion of input cache units, among all input cache units, whose remaining space size is smaller than the data size;
and if the proportion is greater than a preset proportion threshold, sending a service suspension request to the core processor, so that the core processor suspends sending new service messages to the chip.
In the embodiment of the application, the remaining space of an input cache unit near the end of the sorted order reflects the overall storage water level of the input cache units, so when even that unit's remaining space is smaller than the data size, requesting the core processor to suspend the service effectively lets the storage water level fall. Likewise, the determined proportion reflects the overall storage water level, so when it exceeds the preset threshold, requesting the core processor to suspend the service effectively lets the storage water level fall.
With reference to the first aspect, in a fifth possible implementation manner, sending the processed service message to the core processor includes:
judging whether other messages received before the service message are still being processed;
if so, waiting until those other messages are processed, and then outputting the processed service message and the processed other messages to the core processor in the order in which the service message and the other messages were received.
In the embodiment of the application, because the processed service messages are sent in the order in which they were received, the core processor does not need to resolve out-of-order delivery itself, which further reduces its overhead.
With reference to the first aspect, in a sixth possible implementation manner, sending the processed service message to the core processor includes:
caching the processed service message in the output cache unit corresponding to the determined algorithm engine;
and when the processed service message can be output, extracting it from the output cache unit and sending it to the core processor.
In the embodiment of the present application, by caching the processed service message in an output cache unit and releasing it only when it can be output, output errors and out-of-order delivery of service messages are effectively avoided.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner, after the processed service message is cached in the output cache unit corresponding to the determined algorithm engine, the method further includes:
determining that the storage amount of the output cache unit has reached an upper limit;
and sending a service suspension request to the determined algorithm engine, so that the determined algorithm engine suspends its ongoing processing.
In the embodiment of the application, sending a service suspension request to the algorithm engine effectively controls the storage water level of the corresponding output cache unit and prevents it from exceeding the upper limit.
In a second aspect, an embodiment of the present application provides a message processing apparatus, applied to a chip for accelerating services of a core processor in an electronic device, where the apparatus includes:
a data transceiving unit, configured to receive a service message sent by the core processor;
a data processing unit, configured to determine, from a plurality of input cache units, an input cache unit capable of storing the service message, and cache the service message in the determined input cache unit, where the input cache units correspond one-to-one with different algorithm engines; and to process the service message through the algorithm engine corresponding to the determined input cache unit;
the data transceiving unit is further configured to send the processed service message to the core processor.
With reference to the second aspect, in a first possible implementation manner,
the data processing unit is configured to determine the data size of the service message and obtain the remaining space size of each input cache unit; and to determine that an input cache unit whose remaining space size is greater than or equal to the data size is an input cache unit capable of storing the service message.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, the data processing unit is configured to sequentially judge whether the remaining space size of each input cache unit, sorted by storage space size, is greater than or equal to the data size, and to determine that the first input cache unit found to have a remaining space size greater than or equal to the data size is the input cache unit capable of storing the service message; or
the data processing unit is configured to randomly select one input cache unit from the input cache units whose remaining space size is greater than or equal to the data size.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the chip includes the multiple input cache units, and the storage space size of the last input cache unit in the sorted order is larger than the maximum data size of the service message.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner, after the data processing unit sequentially judges whether the remaining space size of each input cache unit is greater than or equal to the data size,
the data processing unit is further configured to determine that the remaining space size of an input cache unit near the end of the sorted order is still smaller than the data size, and to send a service suspension request to the core processor so that the core processor suspends sending new service messages to the chip; or
after the data processing unit determines the data size of the service message and obtains the remaining space size of each input cache unit,
the data processing unit is further configured to determine the proportion of input cache units, among all input cache units, whose remaining space size is smaller than the data size, and, if the proportion is greater than a preset proportion threshold, to send a service suspension request to the core processor so that the core processor suspends sending new service messages to the chip.
With reference to the second aspect, in a fifth possible implementation manner,
the data processing unit is configured to judge whether other messages received before the service message are still being processed;
if so, the data processing unit is configured to wait until those other messages are processed, and then control the data transceiving unit to output the processed service message and the processed other messages to the core processor in the order in which they were received.
With reference to the second aspect, in a sixth possible implementation manner,
the data processing unit is configured to cache the processed service message in the output cache unit corresponding to the determined algorithm engine;
and, when the processed service message can be output, to extract it from the output cache unit and control the data transceiving unit to send it to the core processor.
With reference to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner, after the data processing unit caches the processed service message in the output cache unit corresponding to the determined algorithm engine,
the data processing unit is further configured to determine that the storage amount of the output cache unit has reached an upper limit, and to send a service suspension request to the determined algorithm engine so that the determined algorithm engine suspends its ongoing processing.
In a third aspect, an embodiment of the present application provides an electronic device, including: a core processor, a memory, and a chip;
the core processor is configured to send a service message to the chip;
the memory is configured to provide a plurality of input cache units, where the input cache units correspond one-to-one with different algorithm engines, and the algorithm engines are preset in the chip;
the chip is configured to perform, on the service message and using the plurality of input cache units and the algorithm engines, the message processing method according to the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having program code stored thereon, where the program code, when executed by a computer, performs the message processing method according to the first aspect or any possible implementation manner of the first aspect.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a first block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a second block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for processing a packet according to an embodiment of the present application;
fig. 4 is a third structural block diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a fourth block diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a block diagram of a structure of a message processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, an embodiment of the present application provides an electronic device 10, where the electronic device 10 includes: a core processor 11 and a chip 12.
The core processor 11 may be a Central Processing Unit (CPU), a Network Processor (NP), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like; the chip 12 may be a Field Programmable Gate Array (FPGA).
In this embodiment, the core processor 11 may allocate part of its traffic to the chip 12 for processing, for example encryption and decryption traffic based on the CBC (Cipher Block Chaining) mode. Correspondingly, a plurality of algorithm engines 121 are provided in the chip 12; the chip 12 processes the service messages sent by the core processor 11 through the algorithm engines 121, for example by performing CBC-mode encryption and decryption on them, and then sends the processed service messages back to the core processor 11.
To improve the processing efficiency of the chip 12 for service messages, the plurality of algorithm engines 121 must be fully utilized.
For example, a total cache unit 122 for service messages and a number of input cache units 123 equal to the number of algorithm engines 121 may be partitioned in a memory inside the chip 12, such as Block RAM (Block Random Access Memory). Each input cache unit 123 corresponds to one of the algorithm engines 121; that is, the input cache units 123 correspond one-to-one with different algorithm engines 121. After receiving a service message, the chip 12 first caches it in the total cache unit 122, determines from the plurality of input cache units 123 an input cache unit 123 capable of caching the service message, and moves the service message from the total cache unit 122 into the determined input cache unit 123. Since each algorithm engine 121 processes the service messages in its own input cache unit 123, and the storage space of each input cache unit 123 has an upper limit, the number of service messages queued for each algorithm engine 121 is bounded. In this way, the service messages can be distributed as evenly as possible across the algorithm engines 121 through the plurality of input cache units 123, so that every algorithm engine 121 is fully utilized and the situation where some engines 121 are overloaded while others sit idle is avoided.
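To make this layout concrete, the following is a minimal C sketch of the structures involved; the type names, field names, and the engine count are illustrative assumptions, not taken from the patent:

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_ENGINES 8              /* assumed number of algorithm engines */

    /* One input cache unit, paired one-to-one with an algorithm engine. */
    typedef struct {
        size_t capacity;               /* fixed storage space of this unit */
        size_t used;                   /* bytes currently cached */
    } input_cache_unit;

    /* An algorithm engine together with its dedicated input cache unit. */
    typedef struct {
        int id;
        input_cache_unit in;
    } algo_engine;

    /* The total cache unit that absorbs received messages before dispatch. */
    typedef struct {
        size_t capacity;
        size_t used;
    } total_cache_unit;

Because each input_cache_unit has a fixed capacity, the number of messages queued for any one engine is naturally bounded, which is the load-spreading property described above.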
Of course, partitioning the memory inside the chip 12 is only an example of this embodiment and is not limiting. For example, as shown in fig. 2, the electronic device 10 may instead partition the total cache unit 122 and the plurality of input cache units 123 in the memory 13 of the electronic device 10.
The following describes, through the steps of the method, how the chip 12 uses the total cache unit 122 and the plurality of input cache units 123 to process service messages efficiently.
Referring to fig. 3 in conjunction with fig. 1, an embodiment of the present application provides a message processing method, executed by the chip 12 that accelerates services of the core processor 11 in the electronic device 10. The message processing method may include:
step S100: and receiving a service message sent by the core processor.
Step S200: determining, from a plurality of input cache units, an input cache unit capable of storing the service message, and caching the service message in the determined input cache unit; the input cache units correspond one-to-one with different algorithm engines.
Step S300: processing the service message through the algorithm engine corresponding to the determined input cache unit.
Step S400: sending the processed service message to the core processor.
The following describes steps S100 to S400 in detail with reference to examples.
Step S100: receiving the service message sent by the core processor.
The core processor 11 may send the service messages that need processing by the chip 12 to the chip 12. Accordingly, the chip 12 may receive each service message through an interface of the board on which it is located, for example an rx interface. After receiving a service message, the chip 12 may first cache it in the total cache unit 122, buying time for the chip 12 to decide which algorithm engine 121 will process it.
It should be noted that the total cache unit 122 should be given a relatively large space, so that when the algorithm engines 121 cannot keep up with a large number of service messages, the total cache unit 122 can absorb a backlog of unprocessed messages and buy time for the algorithm engines 121.
Step S200: determining, from a plurality of input cache units, an input cache unit capable of storing the service message, and caching the service message in the determined input cache unit; the input cache units correspond one-to-one with different algorithm engines.
To determine the input cache unit 123, the chip 12 may use either sequential selection or random selection.
Referring to fig. 1 and 3, for the sequential selection approach:
after the service packet is cached in the total cache unit 122, on one hand, the chip 12 may determine the data size of the service packet, and on the other hand, the chip 12 may obtain the remaining space size of each current input cache unit 123. In this embodiment, the chip 12 sorts the plurality of input buffer units 123 in advance according to the size of the storage space of the input buffer units 123. Thus, the chip 12 can sequentially determine whether the size of the remaining space of each input buffer unit 123 is greater than or equal to the data size of the service packet according to the sorting.
As an exemplary manner of the determination, comparators with the same number as that of the input cache units 123 may be deployed in the chip 12 for a plurality of input cache units 123, and each comparator is configured to compare the size of the remaining space of a corresponding one of the input cache units 123 with the data size of the service packet. The chip 12 may input the remaining space of each input buffer unit 123 into a corresponding one of the comparators, and input the data size of the service packet into each comparator.
Therefore, the chip 12 can sequentially determine, according to the sequence, whether the comparison result output by each comparator indicates that the size of the remaining space of the corresponding input buffer unit 123 is greater than or equal to the size of the service message data. By judging according to the judgment sequence, the chip 12 may determine that the input cache unit 123 determined that the size of the remaining space is greater than or equal to the size of the data for the first time, where the input cache unit 123 is an input cache unit 123 capable of storing the service packet, and the algorithm engine 121 corresponding to the input cache unit 123 is an algorithm engine 121 capable of processing the service packet.
For example, the comparator determines that the size of the remaining space of the input buffer unit 123 is greater than or equal to the size of the service packet data through comparison, and the comparator may output a logic signal "1", otherwise, output a logic signal "0"; then the chip 12 may determine in sequence which comparator outputs the logic signal "1"; when it is determined for the first time that a certain comparator outputs a logic signal "1", the chip 12 does not continue the subsequent determination, ends the current determination process, and determines the input buffer unit 123 corresponding to the comparator that outputs the logic signal "1".
After the input buffer unit 123 is determined, the chip 12 may extract the service packet from the buffer area and store the service packet in the determined input buffer unit 123.
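Under the assumptions of the sketch above, this in-order (first-fit) selection over units sorted by storage space can be modeled as follows; returning -1 corresponds to every comparator outputting logic "0":

    /* First-fit selection: units[] is assumed pre-sorted by storage space.
     * Returns the index of the first unit whose remaining space can hold
     * the message, or -1 if no unit can. */
    static int select_sequential(const input_cache_unit *units, int n,
                                 size_t msg_size)
    {
        for (int i = 0; i < n; i++) {
            size_t remaining = units[i].capacity - units[i].used;
            if (remaining >= msg_size)   /* comparator outputs logic "1" */
                return i;                /* stop at the first match */
        }
        return -1;                       /* all comparators output "0" */
    }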
It should be noted that, to keep the algorithm engines 121 fully utilized while still allowing them to process jumbo-frame service messages, the last input cache unit or units 123 in the sorted order may be given a larger space when the input cache units 123 are partitioned, for example at least four 18-kbit Block RAMs each, so that their storage space exceeds the maximum data size of a service message. The last unit or units in the sorted order can then store a jumbo-frame service message of the maximum length of 8192 bytes. It can be understood that, with this arrangement, because the input cache units 123 capable of holding jumbo frames sit at the end of the sorted order, they have the best space surplus of all input cache units 123. After receiving a jumbo-frame service message, the chip 12 will find that the units earlier in the sorted order cannot store it and will settle on a unit at the end of the order. Since that unit both meets the space requirement of the jumbo frame and has the best space surplus, the jumbo-frame service message can be stored, and processing of jumbo-frame service messages is thereby realized.
In addition, based on this in-order judgment, the chip 12 can also control the storage water level of the plurality of input cache units 123.
For example, the chip 12 may designate an input cache unit 123 near the end of the sorted order as a threshold node. If, during in-order judgment, no input cache unit 123 capable of storing the service message has been found by the time the threshold node is reached, then all input cache units 123 before the threshold node are full, which means the storage of the plurality of input cache units 123 is close to its upper limit, that is, their storage water level is nearly full. The chip 12 may therefore send a service suspension request to the core processor 11, so that the core processor 11 suspends sending new service messages to the chip 12 and the storage water level of the input cache units 123 starts to fall, thereby controlling the water level.
For example, if the units are judged in order from the 1st input cache unit 123 to the 100th, the 95th unit in the order may be set as the threshold node. If the remaining space of the 95th input cache unit 123 is still smaller than the data size of the service message, the storage water level of the input cache units 123 has likely reached 95%, so a service suspension request is sent to the core processor 11 to bring the water level down from 95%.
It is worth pointing out that the threshold node needs to be placed before the input cache units 123 that store jumbo-frame service messages, to avoid small messages piling up in the jumbo-frame units.
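A sketch of this threshold-node check, continuing the assumed structures above (the threshold index, like the 95th-of-100 example, is a configuration choice, not something the patent fixes):

    /* Water-level check for sequential selection: if even the unit at the
     * threshold position cannot hold the message, every unit before it is
     * already full, so the chip should ask the core processor to pause.
     * The threshold index must precede the jumbo-frame units. */
    static int input_level_high(const input_cache_unit *units,
                                int threshold_idx, size_t msg_size)
    {
        size_t remaining = units[threshold_idx].capacity
                         - units[threshold_idx].used;
        return remaining < msg_size;     /* nonzero => request suspension */
    }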
For the random selection mode:
after the service packet is cached in the total cache unit 122, on one hand, the chip 12 may determine the data size of the service packet, and on the other hand, the chip 12 may obtain the remaining space size of each current input cache unit 123. In this way, the chip 12 may compare the size of the remaining space of each input buffer unit 123 with the data size of the service packet, so as to determine the input buffer unit 123 with the size of the remaining space being greater than or equal to the data size. The specific comparison method between the size of the remaining space of each input buffer unit 123 and the size of the data of the service packet may refer to the foregoing, and will not be described again here.
After the determination, the chip 12 may randomly select one input buffer unit 123 from the input buffer units 123 with the remaining space size greater than or equal to the data size, and buffer the service packet into the determined input buffer unit 123.
It can be understood that, since the input buffer units 123 are randomly selected, the service messages can be uniformly stored in each input buffer unit 123 as much as possible, so that the algorithm engines 121 are fully utilized.
It should be noted that, when the input cache unit 123 is randomly selected, if the storage of the macro frame service packet is to be implemented, the macro frame service packet may be identified, and the input cache unit 123 capable of storing the macro frame service packet may be identified. Thus, when the service packet is identified as a service packet that is not a macro frame, the service packet can be stored in the other unidentified input cache unit 123 by the identifier; and when the service message is identified as a macro frame service message, the macro frame service message is stored in the identified input cache unit 123.
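A sketch of random selection with jumbo-frame tagging, again under the assumed structures; the is_jumbo_unit[] tag array and the use of rand() are illustrative choices:

    #include <stdlib.h>   /* rand() */

    /* Random selection honouring the jumbo-frame tags: collect the indices
     * of units in the right class with enough remaining space, then pick
     * one uniformly at random. */
    static int select_random(const input_cache_unit *units,
                             const int *is_jumbo_unit, int n,
                             size_t msg_size, int msg_is_jumbo)
    {
        int eligible[n > 0 ? n : 1];     /* C99 variable-length array */
        int count = 0;

        for (int i = 0; i < n; i++) {
            if (is_jumbo_unit[i] != msg_is_jumbo)
                continue;                /* wrong class of unit */
            if (units[i].capacity - units[i].used >= msg_size)
                eligible[count++] = i;
        }
        if (count == 0)
            return -1;                   /* no unit can store the message */
        return eligible[rand() % count];
    }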
In addition, the chip 12 can also control the storage water level of the plurality of input cache units 123 under random selection.
For example, the chip 12 may set a proportion threshold for the share of input cache units 123, among all input cache units 123, whose remaining space size is smaller than the data size of the service message. If, during the determination, that share is found to exceed the preset proportion threshold, the storage water level of the plurality of input cache units 123 has reached its upper limit. The chip 12 may therefore send a service suspension request to the core processor 11, so that the core processor 11 suspends sending new service messages to the chip 12 and the storage water level of the input cache units 123 starts to fall, thereby controlling the water level.
For example, with 100 input cache units 123 in total, the proportion threshold may be set to 95%. If the measured share exceeds this threshold, the storage water level of the input cache units 123 has likely reached 95%, so a service suspension request is sent to the core processor 11 to bring the water level down from 95%.
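The proportion check is a single pass over the units; a sketch, with the 0.95 threshold mirroring the example above:

    /* Water-level check for random selection: returns nonzero when the
     * share of units too full for this message exceeds the threshold,
     * e.g. input_ratio_high(units, 100, msg_size, 0.95). */
    static int input_ratio_high(const input_cache_unit *units, int n,
                                size_t msg_size, double ratio_threshold)
    {
        int too_full = 0;
        for (int i = 0; i < n; i++)
            if (units[i].capacity - units[i].used < msg_size)
                too_full++;
        return (double)too_full / (double)n > ratio_threshold;
    }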
In this embodiment, after caching the service message in the determined input cache unit 123, the chip 12 may continue to execute step S300.
Step S300: processing the service message through the algorithm engine corresponding to the determined input cache unit.
Each algorithm engine 121 extracts the service messages from its own input cache unit 123 and processes them in the order in which they were stored there. When the algorithm engine 121 processes a service message, the chip 12 extracts the message from the input cache unit 123 corresponding to that engine and processes it through the engine, for example performing CBC-mode encryption and decryption, to obtain the processed service message.
After obtaining the processed service message, the chip 12 may continue to execute step S400.
Step S400: sending the processed service message to the core processor.
In some embodiments, after obtaining the processed service message, the chip 12 may send it directly to the core processor 11.
In other embodiments, after obtaining the processed service messages, the chip 12 may send them to the core processor 11 in the order in which the service messages were received, to avoid burdening the core processor 11 (the core processor 11 is spared from resolving out-of-order delivery itself).
Specifically, each time it receives a service message, the chip 12 may add a unique sequence number to the header of the message, for example a 16-bit unique sequence number. The rule for adding the unique sequence number is: each newly received service message gets the previous unique sequence number plus 1. For example, service message A received first gets the unique sequence number 0x0000, and service message B received next gets 0x0001; the unique sequence number counts up from 0x0000 to 0xffff and wraps back to 0x0000 after 0xffff.
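This numbering is ordinary 16-bit wraparound arithmetic; a sketch (the function name is our own):

    /* 16-bit unique sequence number stamped into each received message's
     * header.  uint16_t arithmetic wraps from 0xffff back to 0x0000. */
    static uint16_t next_seq = 0x0000;

    static uint16_t stamp_sequence(void)
    {
        return next_seq++;   /* 0x0000, 0x0001, ..., 0xffff, 0x0000, ... */
    }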
Based on the unique sequence numbers, when outputting each processed service message the chip 12 can judge whether other messages (other service messages) received before it are still being processed; if so, it waits until those messages are processed and then outputs the processed service message and the other processed messages to the core processor 11 in the order in which they were received.
Referring to fig. 4 and fig. 5, it can be understood that when another message received before the service message is still being processed, the processed service message must be cached to delay its output. Output cache units 124, equal in number to the algorithm engines 121, may therefore be partitioned in the memory of the chip 12 or in the memory 13 of the electronic device 10, so that the processed service messages are cached in the output cache units 124.
Specifically, each output cache unit 124 corresponds to one algorithm engine 121, so the chip 12 may store the processed service messages output by each algorithm engine 121 in the output cache unit 124 corresponding to that engine. When it is determined that a processed service message can be output (that is, the other messages before it have all been processed or have all been output), the message is extracted from the output cache unit 124 and sent to the core processor 11.
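In-order release can be expressed with the same sequence numbers; a sketch, with names of our own choosing:

    /* A processed message may leave its output cache unit only when its
     * sequence number is the next one expected, so messages return to the
     * core processor in arrival order no matter which engine finished
     * first. */
    static uint16_t expected_seq = 0x0000;

    static int can_output(uint16_t seq)
    {
        return seq == expected_seq;      /* all earlier messages sent */
    }

    static void on_output(void)
    {
        expected_seq++;                  /* wraps like the stamping side */
    }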
In this embodiment, the chip 12 may also control the storage water level of each output cache unit 124. For example, the chip 12 may preset an upper storage limit for the output cache units 124; when it determines that the storage amount of an output cache unit 124 has reached the upper limit, the chip 12 may send a service suspension request to the algorithm engine 121 corresponding to that output cache unit 124, so that the engine suspends its ongoing processing and the storage water level of the output cache unit 124 falls.
Referring to fig. 6 in conjunction with fig. 1, based on the same inventive concept, an embodiment of the present application further provides a message processing apparatus 100, applied to the chip 12 that accelerates services of the core processor 11 in the electronic device 10. The message processing apparatus 100 includes:
a data transceiving unit 110, configured to receive the service message sent by the core processor 11;
a data processing unit 120, configured to determine, from the plurality of input cache units 123, an input cache unit 123 capable of storing the service message, and cache the service message in the determined input cache unit 123, where the input cache units 123 correspond one-to-one with different algorithm engines 121; and to process the service message through the algorithm engine 121 corresponding to the determined input cache unit 123.
The data transceiving unit 110 is further configured to send the processed service message to the core processor 11.
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Some embodiments of the present application further provide a computer-readable storage medium storing non-volatile, computer-executable program code. The storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk; the program code stored on it, when executed by a computer, performs the steps of the message processing method of any of the above embodiments.
The program code product of the message processing method provided in the embodiments of the present application includes a computer-readable storage medium storing the program code, and the instructions in the program code may be used to execute the method in the foregoing method embodiment; for details of the implementation, refer to that embodiment, which is not repeated here.
In summary, the embodiments of the present application provide a message processing method and apparatus, and an electronic device. A different input cache unit is provided for each of the plurality of algorithm engines in one-to-one correspondence, so that the service messages to be processed by each algorithm engine are cached in its corresponding input cache unit. Because the storage capacity of each input cache unit has an upper limit, a large number of service messages cannot all be dispatched to the same algorithm engine, so the load is distributed as evenly as possible across the algorithm engines, the load on each engine is reduced, processing efficiency is improved, and device performance is improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. A message processing method, applied to a chip for accelerating services of a core processor in an electronic device, the method comprising:
receiving a service message sent by the core processor;
determining, from a plurality of input cache units, an input cache unit capable of storing the service message, and caching the service message in the determined input cache unit; the input cache units correspond one-to-one with different algorithm engines;
processing the service message through an algorithm engine corresponding to the determined input cache unit;
sending the processed service message to the core processor;
wherein determining, from a plurality of input cache units, an input cache unit capable of storing the service message comprises:
determining the data size of the service message, and obtaining the remaining space size of each input cache unit;
sequentially judging whether the remaining space size of each input cache unit, sorted by storage space size, is greater than or equal to the data size, and determining that the first input cache unit found to have a remaining space size greater than or equal to the data size is the input cache unit capable of storing the service message; the chip comprises the plurality of input cache units, the chip sorts the input cache units in advance by their storage space size, and the storage space size of the last input cache unit in the sorted order is larger than the maximum data size of the service message;
or
randomly selecting one input cache unit from the input cache units whose remaining space size is greater than or equal to the data size, which comprises:
tagging the input cache units capable of storing jumbo-frame service messages;
when the service message is not a jumbo-frame service message, randomly selecting one input cache unit from the untagged input cache units whose remaining space size is greater than or equal to the data size;
and when the service message is a jumbo-frame service message, randomly selecting one input cache unit from the tagged input cache units whose remaining space size is greater than or equal to the data size.
2. The message processing method according to claim 1, wherein after sequentially judging whether the remaining space size of each input cache unit is greater than or equal to the data size, the method further comprises:
determining that the remaining space size of an input cache unit near the end of the sorted order is still smaller than the data size;
sending a service suspension request to the core processor, so that the core processor suspends sending new service messages to the chip; or
after determining the data size of the service message and obtaining the remaining space size of each input cache unit, the method further comprises:
determining the proportion of input cache units, among all input cache units, whose remaining space size is smaller than the data size;
and if the proportion is greater than a preset proportion threshold, sending a service suspension request to the core processor, so that the core processor suspends sending new service messages to the chip.
3. The method according to claim 1, wherein sending the processed service message to the core processor comprises:
judging whether other messages received before the service message are still being processed;
if so, waiting until those other messages are processed, and then outputting the processed service message and the processed other messages to the core processor in the order in which the service message and the other messages were received.
4. The method according to claim 1, wherein sending the processed service message to the core processor comprises:
caching the processed service message in an output cache unit corresponding to the determined algorithm engine;
and when the processed service message can be output, extracting it from the output cache unit and sending it to the core processor.
5. The message processing method according to claim 4, wherein after the processed service message is cached in the output cache unit corresponding to the determined algorithm engine, the method further comprises:
determining that the storage amount of the output cache unit has reached an upper limit;
and sending a service suspension request to the determined algorithm engine, so that the determined algorithm engine suspends its ongoing processing.
6. A message processing apparatus, applied to a chip for accelerating services of a core processor in an electronic device, the apparatus comprising:
a data transceiving unit, configured to receive a service message sent by the core processor;
a data processing unit, configured to determine, from a plurality of input cache units, an input cache unit capable of storing the service message, and cache the service message in the determined input cache unit, where the input cache units correspond one-to-one with different algorithm engines; and to process the service message through the algorithm engine corresponding to the determined input cache unit;
the data transceiving unit is further configured to send the processed service message to the core processor;
wherein the data processing unit is specifically configured to:
determine the data size of the service message, and obtain the remaining space size of each input cache unit;
sequentially judge whether the remaining space size of each input cache unit, sorted by storage space size, is greater than or equal to the data size, and determine that the first input cache unit found to have a remaining space size greater than or equal to the data size is the input cache unit capable of storing the service message; the chip comprises the plurality of input cache units, the chip sorts the input cache units in advance by their storage space size, and the storage space size of the last input cache unit in the sorted order is larger than the maximum data size of the service message;
or
tag the input cache units capable of storing jumbo-frame service messages;
when the service message is not a jumbo-frame service message, randomly select one input cache unit from the untagged input cache units whose remaining space size is greater than or equal to the data size;
and when the service message is a jumbo-frame service message, randomly select one input cache unit from the tagged input cache units whose remaining space size is greater than or equal to the data size.
7. An electronic device, comprising: a core processor, a memory, and a chip;
the core processor is configured to send a service message to the chip;
the memory is configured to provide a plurality of input cache units, where the input cache units correspond one-to-one with different algorithm engines, and the algorithm engines are preset in the chip;
the chip is configured to perform, on the service message and using the plurality of input cache units and the algorithm engines, the message processing method according to any one of claims 1 to 5.
CN201910931664.2A 2019-09-27 2019-09-27 Message processing method and device and electronic equipment Active CN110618966B (en)

Priority Applications (1)

Application Number: CN201910931664.2A (CN110618966B)
Priority Date: 2019-09-27
Filing Date: 2019-09-27
Title: Message processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN110618966A (en) 2019-12-27
CN110618966B (en) 2022-05-17

Family

ID=68924862

Family Applications (1)

Application Number: CN201910931664.2A (CN110618966B, Active)
Title: Message processing method and device and electronic equipment

Country Status (1)

Country: CN, CN110618966B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116635840A (en) * 2020-10-30 2023-08-22 华为技术有限公司 Instruction processing method and processor based on multi-instruction engine


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018164495A1 (en) * 2017-03-08 2018-09-13 엘지전자 주식회사 Method and apparatus for transmitting and receiving wireless signal in wireless communication system
CN109600423B (en) * 2018-11-20 2021-07-06 深圳绿米联创科技有限公司 Data synchronization method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5592622A (en) * 1995-05-10 1997-01-07 3Com Corporation Network intermediate system with message passing architecture
CN1937591A (en) * 2006-11-02 2007-03-28 杭州华为三康技术有限公司 Multi-core processor for realizing adaptive dispatching and multi-core processing method
CN101018122A (en) * 2007-03-13 2007-08-15 杭州华为三康技术有限公司 Mode matching processing method and system
CN105159779A (en) * 2015-08-17 2015-12-16 深圳中兴网信科技有限公司 Method and system for improving data processing performance of multi-core CPU
CN107347039A (en) * 2016-05-05 2017-11-14 深圳市中兴微电子技术有限公司 A kind of management method and device in shared buffer memory space

Also Published As

Publication number Publication date
CN110618966A (en) 2019-12-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant