CN105791021A - Hardware acceleration device and method - Google Patents

Hardware acceleration device and method

Info

Publication number
CN105791021A
CN105791021A CN201610224220.1A
Authority
CN
China
Prior art keywords
central processor CPU
data packet
follow-up data packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610224220.1A
Other languages
Chinese (zh)
Inventor
刘华敏 (Liu Huamin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Feixun Data Communication Technology Co Ltd
Original Assignee
Shanghai Feixun Data Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Feixun Data Communication Technology Co Ltd
Priority to CN201610224220.1A
Publication of CN105791021A
Pending legal status


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus

Abstract

The present invention discloses a hardware acceleration device and method. The hardware acceleration device is connected to the CPU and a subsequent processing module of a terminal device and accelerates the device's data processing rate. The hardware acceleration device comprises: a learning module connected to the CPU, configured to learn the CPU's processing rule for the first incoming data packet while the CPU processes it; a processing module configured to receive subsequent data packets before they reach the CPU and process them according to the learned rule; and an output module configured to transmit the processed subsequent packets to the subsequent processing module. The hardware acceleration device and method relieve the terminal device's processor of network packet processing, improving the terminal device's data processing speed so that it can match the speed of a fiber-optic network without an upgrade, thereby saving cost and improving the user experience.

Description

Hardware acceleration device and method
Technical field
The present invention relates to the field of data processing, and in particular to a hardware acceleration device and method.
Background technology
Networks are now ubiquitous, and daily life increasingly depends on them. Ordinary household bandwidth can no longer meet users' demands: even in second-tier cities, average households now have intelligent fiber access, yet the performance of domestic terminal devices increasingly falls short. Substantial improvements in both terminal performance and bandwidth speed are therefore urgently needed.
Raising raw throughput alone is not enough; data flows must also be scheduled, i.e. so-called QoS scheduling. QoS scheduling taxes the CPU: when a fiber-optic network is adopted, older network equipment cannot process the volume of incoming data, causing packet congestion and even packet loss. The usual prior-art solution to this problem is to replace the equipment, which requires considerable additional investment.
Summary of the invention
The technical problem mainly solved by the present invention is to provide a hardware acceleration device and method that relieve the terminal device's processor of network packet processing, improve the terminal device's data processing speed, and allow the terminal device to match the speed of a fiber-optic network without an upgrade, thereby saving cost and improving the user experience.
To solve the above technical problem, the technical solution adopted by the present invention is to provide a hardware acceleration device connected to the central processing unit (CPU) and the subsequent processing module of a terminal device, for accelerating the terminal device's data processing speed. The device includes: a learning module, connected to the CPU, for learning the CPU's processing rule for the first incoming data packet while the CPU processes it; a processing module, for receiving subsequent data packets before the CPU does, after the CPU has received the first packet, and processing them according to the learned rule; and an output module, for transmitting the processed subsequent packets to the subsequent processing module.
To solve the above technical problem, the present invention also provides a hardware acceleration method for accelerating the data processing speed of a terminal device, comprising the steps of: learning the CPU's processing rule for the first incoming data packet while the terminal device's CPU processes it; receiving subsequent data packets before the CPU does, after the CPU has received the first packet, and processing them according to the learned rule; and transmitting the processed subsequent packets to a subsequent processing mechanism.
Unlike the prior art, the hardware acceleration device of the present invention lets the first pending data packet enter the CPU for processing, learns the CPU's processing rule during that processing, and then has the processing module handle subsequent packets according to the learned rule; once processed, the packets need not pass through the CPU but are output directly to the subsequent processing module. The invention thus relieves the terminal device's processor of network packet processing, improves the terminal device's data processing speed, and allows the terminal device to match the speed of a fiber-optic network without an upgrade, saving cost and improving the user experience.
Brief description of the drawings
Fig. 1 is a structural diagram of a first embodiment of the hardware acceleration device provided by the present invention;
Fig. 2 is a flow diagram of a first embodiment of the hardware acceleration method provided by the present invention.
Detailed description of the invention
Many specific details are set forth in the following description to give a full understanding of the present invention. However, the invention can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from its spirit; the invention is therefore not limited by the specific embodiments disclosed below.
The present invention is described in detail with reference to schematic diagrams. These diagrams are examples given for purposes of illustration and should not be taken to limit the scope of protection of the invention.
Refer to Fig. 1, a structural diagram of a first embodiment of the hardware acceleration device provided by the present invention. The hardware acceleration device 100 connects the central processing unit (CPU) 102 and the subsequent processing module 103 of a terminal device 101.
In the present invention, the terminal device 101 mainly refers to network management equipment in places such as enterprises or shopping malls. With the development of the Internet, demand for network speed keeps rising; even in second-tier cities, ordinary households use fiber-optic networks at home, and fiber has become common at small scale. Optical fiber's low loss and high fidelity make it a good carrier for high-speed networks. When users upgrade from an ordinary network to fiber, they must pay a considerable fee, and when the network management device (usually a router) can barely cope, they often choose to skip upgrading it. But old network management devices generally cannot sustain fiber-network data rates, causing packets to be blocked or even lost and giving users a poor experience. The root cause is that the CPU of the old network management device cannot match the data transmission speed of the fiber-optic network. The hardware acceleration device of the present invention accelerates packet processing on top of the existing network management device, eliminating packet blocking and loss.
The device 100 includes a learning module 110, a processing module 120 and an output module 130.
The learning module 110 is connected to the CPU 102 and learns the CPU's processing rule for the first incoming data packet while the CPU 102 processes it. The CPU 102 usually processes a packet by marking it according to its specific content. Packets arriving within the same time period are generally given the same marking by the CPU 102, so only the CPU's handling of the first packet needs to be learned; subsequent incoming packets are then processed according to the same rule and, once processed, forwarded to the subsequent processing module 103. The learning module 110 includes a monitoring unit 111 and a storage unit 112. The monitoring unit 111 monitors the CPU 102, which performs packet processing during data interaction, i.e. when the user uploads or downloads data. When the monitoring unit 111 observes the CPU 102 processing the first incoming packet, the storage unit 112 writes the processing rule to a file. The file records the CPU 102's processing rule, typically the marking applied to a particular field of the packet. The rule may differ between time periods, so whenever the monitoring unit 111 later observes the CPU 102 processing a packet, the storage unit 112 generates a new rule file and compares it with the stored one: if the two are identical, the new file is discarded; if they differ, it replaces the old one.
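The storage unit's learn-compare-replace behavior can be sketched as follows. This is a minimal illustration, not the patented implementation; the `RuleStore` class and the rule representation (a marked field name and value) are assumptions introduced for illustration only.

```python
# Minimal sketch of the learning module's rule storage: when the CPU is
# observed marking the first packet, the rule is captured; later observations
# are compared with the stored rule, and the rule file is replaced only when
# the processing behavior has changed. All names here are illustrative.

class RuleStore:
    def __init__(self):
        self.rule = None  # the stored "rule file": a (field, mark) pair

    def observe(self, field, mark):
        """Record the rule the CPU applied to an observed packet.

        Returns True if the stored rule was created or replaced,
        False if the newly generated rule was identical and discarded.
        """
        new_rule = (field, mark)
        if self.rule == new_rule:
            return False          # identical: discard the new file
        self.rule = new_rule      # first observation or changed rule: store
        return True

store = RuleStore()
assert store.observe("dscp", 0x2E) is True    # first packet: rule stored
assert store.observe("dscp", 0x2E) is False   # same rule: new file discarded
assert store.observe("dscp", 0x00) is True    # rule changed: file replaced
```

The comparison step mirrors the patent's description of the storage unit 112: identical rule files are deleted, differing ones replace the stored file.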
The processing module 120 receives subsequent data packets before the CPU 102 does, after the CPU has received the first packet, and processes them according to the rule held in the storage unit 112. The processing module 120 includes a receiving unit 121, an extraction unit 122 and a processing unit 123. The receiving unit 121 intercepts the second and subsequent packets in the gap between the CPU 102 finishing the first packet and starting the second, so they are no longer forwarded to the CPU 102 for processing. In other embodiments, the receiving unit 121 may take over only after the CPU 102 has processed several packets: the CPU has a certain processing capacity, and once its packet-processing rate comes under pressure, the receiving unit 121 takes the packets instead of forwarding them to the CPU 102. The extraction unit 122 fetches the stored rule file from the storage unit 112 for the processing module 120. The processing unit 123 parses the rule in the file and processes the packets received by the receiving unit 121 in the same way the CPU 102 would, marking the relevant field of each packet. To speed up processing, multiple processing units 123 may be arranged in parallel and work simultaneously. Once the processing module 120 has finished with a packet, the output module 130 forwards it to the subsequent processing module 103 for further handling.
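The division of labor between the CPU (slow path, first packet only) and the processing module (fast path, every later packet) can be sketched as below. This is a hypothetical model, not code from the patent: packets are represented as dicts, and the "processing rule" is reduced to a single learned mark value.

```python
# Sketch of the accelerator's fast path: the first packet goes through the
# CPU, whose marking is learned; every later packet of the flow is handled
# by the accelerator according to the learned rule and never reaches the
# CPU again. All structures are illustrative.

def cpu_process(packet):
    # The CPU's (expensive) classification: mark by packet content.
    packet["mark"] = "video" if packet["dport"] == 554 else "default"
    return packet["mark"]

def run(packets):
    """Process a flow; return how many packets the CPU had to touch."""
    learned_mark = None
    cpu_hits = 0
    for pkt in packets:
        if learned_mark is None:
            learned_mark = cpu_process(pkt)   # slow path: first packet only
            cpu_hits += 1
        else:
            pkt["mark"] = learned_mark        # fast path: apply learned rule
    return cpu_hits

flow = [{"dport": 554} for _ in range(5)]
assert run(flow) == 1                          # CPU touched exactly one packet
assert all(p["mark"] == "video" for p in flow)
```

The point of the design is visible in the assertion: however long the flow, the CPU processes only the first packet, which is what frees it from bulk packet processing.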
Further, the device 100 also includes a switch module 140 connected to the learning module 110. When the switch module 140 is on, the learning module 110 learns the CPU 102's packet-processing rule and the processing module 120 handles subsequent packets; when it is off, the learning module 110 stops learning and all packets are processed by the CPU 102. The latter mode suits situations where the volume of uploaded or downloaded data is small or an important network transfer is in progress, since packet processing by the CPU 102 is more reliable.
Further, the device 100 also includes a scheduling module 150 for scheduling the transmission order of subsequent packets according to a preset scheduling rule before the processing module 120 processes them. In network transmission, queue scheduling determines how a relay node or router in the network selects which of one or more packet queues to forward from. The scheduling rule commonly adopted is strict priority (SP) scheduling: each packet queue is assigned a different priority, and a higher-priority queue always takes precedence over a lower-priority one, so as long as a higher-priority queue contains packets, it is scheduled first.
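The strict priority discipline described above can be sketched in a few lines. The queue layout, priority values and packet names below are illustrative, not taken from the patent:

```python
# Strict priority (SP) scheduling sketch: lower numeric value means higher
# priority; a lower-priority queue is served only when every higher-priority
# queue is empty. Within one queue, packets leave in FIFO order.

class SPScheduler:
    def __init__(self):
        self.queues = {}          # priority -> list of packets (FIFO)

    def enqueue(self, priority, packet):
        self.queues.setdefault(priority, []).append(packet)

    def dequeue(self):
        for prio in sorted(self.queues):   # highest priority first
            if self.queues[prio]:
                return self.queues[prio].pop(0)
        return None                        # all queues empty

sched = SPScheduler()
sched.enqueue(2, "bulk-1")
sched.enqueue(0, "voice-1")
sched.enqueue(1, "video-1")
sched.enqueue(0, "voice-2")
order = [sched.dequeue() for _ in range(4)]
assert order == ["voice-1", "voice-2", "video-1", "bulk-1"]
```

Note the known trade-off of SP scheduling: the `bulk-1` packet is served last and would starve entirely if higher-priority traffic never paused.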
The device 100 of the present invention adopts a Linux platform architecture. The processing rule learned by the learning module 110 can be queried with the command cat /proc/net/nf_conntrack, and the on/off state of the switch module 140 can be queried with fcstatus: if disabled, rule learning is not started; if enabled, it is. After nf_conntrack has learned the rule, executing cat /proc/fcache/* shows the rule acquired by the learning module 110; from the second packet onward, traffic enters the processing module 120 directly and no longer passes through the CPU 102. Throughput then reaches 600-700 Mbps, greatly improving QoS performance.
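The /proc files mentioned above are plain text and can be inspected programmatically. The sketch below parses one connection-tracking entry into its key=value fields; the sample line follows the typical layout of /proc/net/nf_conntrack and is an assumption for illustration, not output quoted in the patent:

```python
# Sketch: parsing a connection-tracking entry of the kind the learning
# module's flow rules are derived from. The sample line imitates the common
# /proc/net/nf_conntrack format; it is illustrative, not from the patent.

sample = ("ipv4     2 tcp      6 431999 ESTABLISHED "
          "src=192.168.1.10 dst=8.8.8.8 sport=51512 dport=443 "
          "src=8.8.8.8 dst=192.168.1.10 sport=443 dport=51512 "
          "[ASSURED] mark=0 use=1")

def parse_conntrack_line(line):
    """Collect key=value tokens, keeping the original direction's values."""
    fields = {}
    for tok in line.split():
        if "=" in tok:
            k, v = tok.split("=", 1)
            fields.setdefault(k, v)   # first occurrence = original direction
    return fields

entry = parse_conntrack_line(sample)
assert entry["src"] == "192.168.1.10"
assert entry["dport"] == "443"
assert entry["mark"] == "0"
```

On a real device the same parser could be fed the lines of /proc/net/nf_conntrack directly; here a literal string keeps the sketch self-contained.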
Unlike the prior art, the hardware acceleration device of the present invention lets the first pending data packet enter the CPU for processing, learns the CPU's processing rule during that processing, and then has the processing module handle subsequent packets according to the learned rule; once processed, the packets need not pass through the CPU but are output directly to the subsequent processing module. The invention thus relieves the terminal device's processor of network packet processing, improves the terminal device's data processing speed, and allows the terminal device to match the speed of a fiber-optic network without an upgrade, saving cost and improving the user experience.
Refer to Fig. 2, a flow diagram of a first embodiment of the hardware acceleration method provided by the present invention. The method comprises the following steps:
S201: while the terminal device's CPU processes the first incoming data packet, learn the CPU's processing rule for that packet.
While the CPU processes the first incoming packet, its processing rule for that packet is learned. The CPU usually processes a packet by marking it according to its specific content, and packets arriving within the same time period generally receive the same marking, so only the CPU's handling of the first packet needs to be learned; subsequent incoming packets are then processed according to the same rule and, once processed, forwarded to the subsequent processing mechanism. This step requires monitoring the CPU, which performs packet processing during data interaction, i.e. when the user uploads or downloads data. When the CPU is observed processing the first incoming packet, its processing rule is written to a file. The file records the rule, typically the marking applied to a particular field of the packet. The rule may differ between time periods, so each time the CPU is later observed processing a packet, a new rule file is generated and compared with the stored one: if the two are identical, the new file is discarded; if they differ, it replaces the old one.
S202: after the CPU has received the first packet, receive subsequent data packets before the CPU does and process them according to the learned rule.
After the CPU has received the first packet, subsequent packets are received before reaching the CPU and processed according to the rule in the file. The second and subsequent packets are intercepted in the gap between the CPU finishing the first packet and starting the second, so they are no longer forwarded to the CPU for processing. In other embodiments, subsequent packets may be taken over only after the CPU has processed several packets: the CPU has a certain processing capacity, and once its packet-processing rate comes under pressure, packets are no longer forwarded to it. The stored rule file is extracted and its rule parsed, and subsequent packets are processed according to that rule in the same way the CPU would, by marking the relevant field of each packet. To speed up processing, multiple processing mechanisms may be arranged in parallel and work simultaneously.
S203: transmit the processed subsequent data packets to the subsequent processing mechanism.
After a packet has been processed, it is forwarded to the subsequent processing mechanism for further handling.
Further, before the step of learning the CPU's processing rule for the first packet, the learning process is controlled. In the present invention this is configured through Linux: when the parameter is set to enable, the CPU's processing rule is learned and subsequent packets are processed according to it; when the parameter is set to disable, the rule is no longer learned and all packets are processed by the CPU. The latter suits situations where the volume of uploaded or downloaded data is small or an important network transfer is in progress, since packet processing by the CPU is more reliable.
Before the step of processing subsequent packets according to the rule, their transmission order is scheduled according to a preset scheduling rule. In network transmission, queue scheduling determines how a relay node or router in the network selects which of one or more packet queues to forward from. The commonly adopted rule is strict priority (SP) scheduling: each packet queue is assigned a different priority, and a higher-priority queue always takes precedence over a lower-priority one, so as long as a higher-priority queue contains packets, it is scheduled first.
The method of the present invention is based on a Linux platform architecture. The learned processing rule can be queried with the command cat /proc/net/nf_conntrack, and whether the learning step runs can be queried with fcstatus: if disabled, the learning step is not performed; if enabled, it is. After nf_conntrack has learned the rule, executing cat /proc/fcache/* shows the acquired rule; the second and subsequent packets then no longer enter the CPU. Throughput reaches 600-700 Mbps, greatly improving QoS performance.
Unlike the prior art, the hardware acceleration method of the present invention lets the first pending data packet enter the CPU for processing, learns the CPU's processing rule during that processing, and then processes subsequent packets according to the learned rule; once processed, the packets need not pass through the CPU but are output directly to the subsequent processing module. The invention thus relieves the terminal device's processor of network packet processing, improves the terminal device's data processing speed, and allows the terminal device to match the speed of a fiber-optic network without an upgrade, saving cost and improving the user experience.
Although the present invention is disclosed above with preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the spirit and scope of the invention, make possible variations and modifications to the technical solution of the invention using the methods and technical content disclosed above. Therefore, any simple modification, equivalent change or refinement made to the above embodiments according to the technical substance of the present invention, without departing from the content of its technical solution, falls within the protection scope of the technical solution of the present invention.

Claims (10)

1. A hardware acceleration device, connected to a central processing unit (CPU) and a subsequent processing module of a terminal device, for accelerating the data processing speed of the terminal device, characterized by comprising:
a learning module, connected to the CPU, for learning the CPU's processing rule for a first incoming data packet while the CPU processes it;
a processing module, for receiving subsequent data packets before the CPU does, after the CPU has received the first data packet, and processing the subsequent data packets according to the processing rule; and
an output module, for transmitting the processed subsequent data packets to the subsequent processing module.
2. The hardware acceleration device according to claim 1, characterized in that the learning module comprises:
a monitoring unit, for monitoring the CPU;
a storage unit, for writing the processing rule to a file and storing it when the monitoring unit observes the CPU processing the first incoming data packet.
3. The hardware acceleration device according to claim 2, characterized in that the processing module comprises:
a receiving unit, for receiving the subsequent data packets before they are transmitted to the CPU;
an extraction unit, for extracting the file containing the processing rule from the storage unit;
a processing unit, for processing the subsequent data packets according to the processing rule in the file.
4. The hardware acceleration device according to claim 3, characterized by further comprising:
a switch module, connected to the learning module, for controlling whether the learning module learns the processing rule; when the switch module is off, the subsequent data packets are transmitted to the CPU for processing.
5. The hardware acceleration device according to claim 3, characterized by further comprising a scheduling module, for scheduling the transmission order of the subsequent data packets according to a preset packet-queue scheduling rule before they are processed by the processing module.
6. A hardware acceleration method for accelerating the data processing speed of a terminal device, characterized by comprising:
while a central processing unit (CPU) of the terminal device processes a first incoming data packet, learning the CPU's processing rule for the first data packet;
after the CPU has received the first data packet, receiving subsequent data packets before the CPU does and processing them according to the processing rule;
transmitting the processed subsequent data packets to a subsequent processing mechanism.
7. The hardware acceleration method according to claim 6, characterized in that the step of learning the CPU's processing rule for the first data packet comprises the steps of:
monitoring the CPU;
when the CPU is observed processing the first incoming data packet, writing the processing rule to a file and storing it.
8. The hardware acceleration method according to claim 7, characterized in that the step of processing the subsequent data packets according to the processing rule comprises the steps of:
receiving the subsequent data packets before they are transmitted to the CPU;
extracting the file containing the processing rule;
processing the subsequent data packets according to the processing rule in the file.
9. The hardware acceleration method according to claim 8, characterized by, before the step of learning the CPU's processing rule for the first data packet, further comprising the step of: controlling whether the terminal device learns the processing rule, and, when hardware acceleration is not used, transmitting the subsequent data packets to the CPU for processing.
10. The hardware acceleration method according to claim 8, characterized in that, before the step of transmitting the processed subsequent data packets to the subsequent processing mechanism, the method comprises the step of:
scheduling the transmission order of the subsequent data packets according to a preset packet-queue scheduling rule.
CN201610224220.1A 2016-04-12 2016-04-12 Hardware acceleration device and method Pending CN105791021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610224220.1A CN105791021A (en) 2016-04-12 2016-04-12 Hardware acceleration device and method


Publications (1)

Publication Number Publication Date
CN105791021A true CN105791021A (en) 2016-07-20

Family

ID=56396250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610224220.1A Pending CN105791021A (en) 2016-04-12 2016-04-12 Hardware acceleration device and method

Country Status (1)

Country Link
CN (1) CN105791021A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1781078A * 2003-02-28 2006-05-31 Lockheed Martin Corp Hardware accelerator personality compiler
CN103986585A (en) * 2014-05-13 2014-08-13 杭州华三通信技术有限公司 Message preprocessing method and device
US20140289445A1 (en) * 2013-03-22 2014-09-25 Antony Savich Hardware accelerator system and method
CN104102542A (en) * 2013-04-10 2014-10-15 华为技术有限公司 Network data packet processing method and device



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160720