CN105025098B - Method and system for network interface data classification - Google Patents

Method and system for network interface data classification

Info

Publication number
CN105025098B
Authority
CN
China
Prior art keywords
data
server
network
fpga
transmitted data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510413467.3A
Other languages
Chinese (zh)
Other versions
CN105025098A (en)
Inventor
施文进
阎九吉
吴青
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Huiyin Science & Technology Co Ltd
ZHENJIANG HUILONG YANGTSE RIVER PORT CO Ltd
WELLONG ETOWN INTERNATIONAL LOGISTICS Co Ltd
Original Assignee
Jiangsu Huiyin Science & Technology Co Ltd
ZHENJIANG HUILONG YANGTSE RIVER PORT CO Ltd
WELLONG ETOWN INTERNATIONAL LOGISTICS Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Huiyin Science & Technology Co Ltd, ZHENJIANG HUILONG YANGTSE RIVER PORT CO Ltd and WELLONG ETOWN INTERNATIONAL LOGISTICS Co Ltd
Priority to CN201510413467.3A
Publication of CN105025098A
Application granted
Publication of CN105025098B
Legal status: Active (current)
Anticipated expiration


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H04L 67/1014 - Server selection for load balancing based on the content of a request
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/53 - Network services using third party service providers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Of Solid Wastes (AREA)
  • Stored Programmes (AREA)

Abstract

The present invention proposes a method and system for classifying network interface data. The method comprises the following steps: (1) the network transmission data is encoded and classified; (2) the protection device, an FPGA, decodes, stores and evaluates the pre-processed network transmission data; data streams that satisfy the judgment condition are handled by the servers, while the other data streams are handled by a garbage-collection server; (3) the FPGA checks the state of the server: if CPU utilization is low, the FPGA transmits the network data in the buffer directly to the server; otherwise the FPGA automatically enters the classified-transmission process; (4) based on steps (1)-(3), the network transmission data is routed to the corresponding server; (5) steps (1)-(4) are repeated. The invention makes full and rational use of system resources, avoids cache overflow under emergency conditions, guarantees safe and accurate data delivery, and improves the user experience.

Description

Method and system for network interface data classification
Technical field
The present invention relates to a method and system for classifying network interface data, and in particular to a method and system that pre-classify interface data by means of an FPGA and forward the data to the corresponding servers.
Background technology
For large data processing systems, such as e-commerce data processing systems, different subsystems need to transmit large amounts of data in parallel at the same time. With the usual transmission methods for network data interfaces, two situations arise: when the data inflow rate exceeds the processing rate, much of the data is lost; when the data inflow rate is below the processing rate, the CPU sits idle and resources are wasted.
To address these problems, the following approaches are currently in use:
(1) Introducing cache space. This largely solves the practical application problem, but buffer overflow can still occur when burst events are handled.
(2) Adding a protection device. When CPU utilization is low, data is transmitted directly at the network interface; when CPU utilization is high, the device automatically enters classified processing, in which data of different types are handled in different ways.
(3) In the prior art, the data type has also been determined from the protocol type identifier of the network data, for example station-control-layer communication data or digitized sampled data, and the handling of the data is then decided from this classification. The options include forwarding the cached data directly to the CPU, transferring the data to a memory block and sending it to the CPU once the block is full, or emptying the cache to receive the next network data. After the network data is classified, the timing of storage and transmission can be controlled according to the situation, so that data are protected when an accident is handled. This technique, however, cannot solve the problem faced by existing e-commerce data processing systems of delivering data to different servers with relatively inexpensive and simple equipment so as to reduce cost.
In view of the above drawbacks of the current technology, the present invention provides a method and system for handling network interface data transmission in large-scale systems. The method and system make full and rational use of system resources, avoid cache overflow under emergency conditions, and guarantee safe and accurate data delivery, so that the processing efficiency of the system is greatly improved and the user experience is enhanced.
Invention content
The object of the present invention is to provide a method and system for handling network interface data transmission in large-scale systems. The method and system make full and rational use of system resources, avoid cache overflow under emergency conditions, and guarantee safe and accurate data delivery, so that the processing efficiency of the system is greatly improved and the user experience is enhanced.
An embodiment of the present invention provides a method for classifying network interface data. The method mainly uses a protection device, an FPGA: the network transmission data is first pre-processed, the FPGA determines the type of the data, the working state of the server is then checked, and the data is finally routed to the corresponding server according to these steps. The method mainly comprises the following steps:
(1) encoding and classifying the network transmission data;
(2) decoding, storing and evaluating the pre-processed network transmission data by the protection device FPGA; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams;
(3) checking, by the FPGA, the state of the server, chiefly the CPU utilization: if the utilization is low, the FPGA transmits the network data in the buffer directly to the server; otherwise the FPGA automatically enters the classified-transmission process;
(4) routing the network transmission data to the corresponding server according to steps (1)-(3);
(5) repeating steps (1)-(4).
Preferably, the pre-processing of the network transmission data is step (1): the data is encoded and classified. The code is represented by an 8-bit binary number; the first four bits (the head field) represent the first-level index information and the last four bits (the tail field) represent the second-level index information. The first-level index can take 2^4 possible values and the second-level index can also take 2^4 possible values, so the 8-bit code can represent 2^8 kinds of indexed information.
For example, the fields 00010001 and 00010010 represent two different second-level indexes under the same first-level index, while the fields 00100001 and 00010001 represent second-level indexes under two different first-level indexes. In practice, if a first-level index stands for "query cargo", its head field may be set to 0001; if there are two second-level indexes under "query cargo", namely "member queries cargo" and "non-member queries cargo", their tail fields may be set to 0001 and 0010 (or to any other four-bit values). Encoding the network transmission data in this way assigns different data to different fields and thereby achieves the classification.
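As a purely illustrative aid (not part of the patent disclosure), the following minimal Python sketch shows how such an 8-bit code could be packed from a 4-bit first-level index and a 4-bit second-level index and parsed back; the function names and the example index tables are assumptions made only for this sketch.

    # Hypothetical sketch of the 8-bit head/tail encoding described above.
    FIRST_LEVEL = {"query_cargo": 0b0001}                      # head field (first four bits)
    SECOND_LEVEL = {"member": 0b0001, "non_member": 0b0010}    # tail field (last four bits)

    def encode_field(first_level, second_level):
        """Pack a 4-bit first-level index and a 4-bit second-level index into one
        8-bit code: head field in the high nibble, tail field in the low nibble."""
        assert 0 <= first_level <= 0xF and 0 <= second_level <= 0xF
        return (first_level << 4) | second_level

    def decode_field(code):
        """Split an 8-bit code back into its (first-level, second-level) indexes."""
        return (code >> 4) & 0xF, code & 0xF

    # "member queries cargo" -> 00010001, "non-member queries cargo" -> 00010010
    member_query = encode_field(FIRST_LEVEL["query_cargo"], SECOND_LEVEL["member"])
    guest_query = encode_field(FIRST_LEVEL["query_cargo"], SECOND_LEVEL["non_member"])
    print(f"{member_query:08b}", decode_field(member_query))   # 00010001 (1, 1)
    print(f"{guest_query:08b}", decode_field(guest_query))     # 00010010 (1, 2)

The two nibbles keep the classification hierarchical: the high nibble alone is enough to pick a target server, while the low nibble refines the class within that server.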
Preferably, determining the type of the network transmission data by the FPGA is step (2): the protection device FPGA decodes, stores and evaluates the pre-processed network transmission data; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams.
The identification fields of the FPGA are preset as 8-bit binary values, i.e. from 00000000 to 11111111, and each identification field of the FPGA corresponds to a hardware server, all of which are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields. If a field matches, the FPGA opens the logical channel to the corresponding server while the logical channels to the other servers remain closed.
Preferably, all the hardware servers corresponding to the identification fields of the FPGA are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields. If no identification field in the FPGA matches the field of the data, all server logical channels of the FPGA are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
Preferably, the garbage-collection server works concurrently with the other servers. When no preset identification field of the FPGA matches the field of the network transmission data, the data enters the logical channel of the garbage-collection server and is imported into it. At the garbage-collection server, an operator judges whether the network transmission data is valid: if it is valid, the preset identification fields of the FPGA are updated; otherwise the data remains in the garbage-collection server and is cleared at a preset time.
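The matching and routing behaviour described in the preceding paragraphs can be illustrated with a minimal Python sketch. It assumes a simple dictionary of preset identification fields and plain lists standing in for the server logical channels and the garbage-collection server; all names here (route_packet, id_fields, review_garbage, and the field-update policy) are illustrative assumptions, not the disclosed hardware design.

    # Minimal sketch, under the stated assumptions, of the FPGA matching step:
    # a packet whose field matches a preset identification field is routed to
    # the corresponding server channel; otherwise it is imported into the
    # garbage-collection server, where an operator decision may later add a
    # new identification field.
    id_fields = {0b00010001: "server_1", 0b00010010: "server_2"}   # preset identification fields
    server_channels = {"server_1": [], "server_2": []}
    garbage_server = []

    def route_packet(field, payload):
        """Open only the matching server channel; on a mismatch, keep every server
        channel closed and import the field and data into the garbage-collection server."""
        target = id_fields.get(field)
        if target is not None:
            server_channels[target].append(payload)
            return target
        garbage_server.append((field, payload))
        return "garbage_collection"

    def review_garbage(is_valid):
        """Operator-side step: valid data updates the preset identification fields;
        invalid data stays behind until it is cleared by a timer."""
        remaining = []
        for field, payload in garbage_server:
            if is_valid(field, payload):
                new_server = "server_%d" % (len(id_fields) + 1)    # assumed update policy
                id_fields[field] = new_server
                server_channels.setdefault(new_server, [])
            else:
                remaining.append((field, payload))
        garbage_server[:] = remaining

    route_packet(0b00010001, b"member cargo query")   # -> "server_1"
    route_packet(0b00110001, b"unknown category")     # -> "garbage_collection"
    review_garbage(lambda field, payload: False)      # nothing valid: data waits to be cleared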
Preferably, checking the working state of the server is step (3). When the CPU utilization exceeds 75%, the FPGA automatically enters the process of encoding and classifying the network transmission data; when the CPU utilization does not exceed 75%, the FPGA transfers the network transmission data in the buffer directly to the server.
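A short sketch of this 75% rule follows; the threshold comes from the description above, while the utilisation probe and the two handler callbacks are assumptions made only for illustration.

    # Sketch of the step-(3) decision: at or below 75% CPU utilisation the
    # buffered data goes straight to the server, above 75% the FPGA enters
    # the classified-transmission process.
    CPU_THRESHOLD = 0.75

    def dispatch(buffered_data, cpu_utilisation, send_direct, classify_and_send):
        if cpu_utilisation <= CPU_THRESHOLD:
            send_direct(buffered_data)         # low utilisation: direct transfer
        else:
            classify_and_send(buffered_data)   # high utilisation: classified transmission

    dispatch([b"pkt"], 0.40, print, lambda data: print("classify", data))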
Preferably, routing the network transmission data to the corresponding server corresponds to steps (4) and (5).
To reduce the overall load on the servers, several servers work together on the pre-processed network transmission data, and each data class is assigned to a different server. For example, data streams that do not satisfy the judgment condition finally enter the garbage-collection server, and no other server logical channel is opened for them.
Finally, the whole process is repeated in a loop, which ensures maximum utilization of the data and avoids wasting resources.
In addition, the present invention also provides a network interface data classification system, characterized in that the system comprises a network transmission data interface group, an FPGA module and a server group;
the network transmission data interface group comprises multiple data interfaces, which may be of one or more interface types;
the server group comprises multiple servers, at least one of which is a garbage-collection server; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams;
the FPGA module comprises a decoding module, a storage module, a judgment module, and multiple logical channels for transmitting network interface data, one for each of the multiple servers.
Preferably, the network transmission data interface group further comprises an encoding module for encoding and classifying the network interface data. The code is represented by an 8-bit binary number; the first four bits (the head field) represent the first-level index information and the last four bits (the tail field) represent the second-level index information. The first-level index can take 2^4 possible values and the second-level index can also take 2^4 possible values, so the 8-bit code can represent 2^8 kinds of indexed information.
For example, the fields 00010001 and 00010010 represent two different second-level indexes under the same first-level index, while the fields 00100001 and 00010001 represent second-level indexes under two different first-level indexes. In practice, if a first-level index stands for "query cargo", its head field may be set to 0001; if there are two second-level indexes under "query cargo", namely "member queries cargo" and "non-member queries cargo", their tail fields may be set to 0001 and 0010 (or to any other four-bit values). Encoding the network transmission data in this way assigns different data to different fields and thereby achieves the classification.
Preferably, the judgment module decodes the data transmitted by the network transmission data interface group and matches the code in the data against the identification fields stored in the storage module; on a successful match it opens the logical channel to the corresponding server and closes the logical channels to the other servers.
The identification fields of the FPGA are preset as 8-bit binary values, i.e. from 00000000 to 11111111, and each identification field of the FPGA corresponds to a hardware server, all of which are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if a field matches, the FPGA opens the logical channel to the corresponding server while the logical channels to the other servers remain closed.
Preferably, if the match fails, the logical channels to all servers are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
All the hardware servers corresponding to the identification fields of the FPGA are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if no identification field in the FPGA matches the field of the data, all server logical channels of the FPGA are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
The server group is configured mainly to perform classified processing of the data: network transmission data with different identified fields is routed to different servers for handling. For example, pre-processed network transmission data whose first-level index field is 0001 is routed to server No. 1 for further processing, while data whose first-level index field is 0010 is routed to server No. 2.
The garbage-collection server works concurrently with the other servers. When no preset identification field of the FPGA matches the field of the network transmission data, the data enters the logical channel of the garbage-collection server and is imported into it; an operator at the garbage-collection server judges whether the data is valid, and if it is valid the preset identification fields of the FPGA are updated, otherwise the data remains in the garbage-collection server and is cleared at a preset time. Finally, the whole process is repeated in a loop, which ensures maximum utilization of the data and avoids wasting resources.
Description of the drawings
In order to describe the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below; the drawings described below show only some embodiments of the present invention.
Fig. 1 is a flow chart of a network interface data classification algorithm provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a network interface data classification method provided by an embodiment of the present invention;
Fig. 3 is a flow chart of a network interface data classification system provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the basic structure of an FPGA provided by an embodiment of the present invention;
Fig. 5 is a structural diagram of another network interface data classification system provided by an embodiment of the present invention.
Specific embodiment
To make the technical problems to be solved, the technical solutions and the advantages of the present invention clearer, they are described in detail below with reference to the drawings and specific embodiments. Those skilled in the art will understand that the following specific embodiments are a series of preferred arrangements that further explain the content of the invention; these arrangements may be combined with or related to one another, unless the present invention explicitly states that a particular embodiment cannot be combined with or used together with the others. The following specific embodiments are intended only as preferred arrangements and are not to be understood as limiting the scope of protection of the present invention.
Embodiment 1
An embodiment of the present invention provides a method for classifying network interface data. The method mainly uses a protection device, an FPGA: the network transmission data is first pre-processed, the FPGA determines the type of the data, the working state of the server is then checked, and the data is finally routed to the corresponding server according to these steps. Specifically, this embodiment of the invention provides a network interface data classification system whose classification algorithm, shown in Fig. 1, comprises the following steps (a consolidated sketch of the cycle is given after this list):
(1) encode and classify the network transmission data;
(2) the FPGA decodes and stores the pre-processed network transmission data;
(3) check whether an identification field of the FPGA matches the data field; if the match succeeds, check whether the server utilization exceeds 75%; if it does not, transfer the data in the buffer to the server, otherwise go to step (2);
(4) if the match fails, the network transmission data is transferred to the garbage-collection server; the garbage-collection server judges whether the data is valid; if it is invalid, the invalid data is removed, otherwise the data identification fields of the FPGA are updated;
(5) after the data identification fields of the FPGA are updated, the data is matched again; go to step (3);
(6) repeat steps (1)-(5).
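For readers who prefer pseudocode to the flow chart, the following self-contained Python sketch walks through one pass of the cycle in Fig. 1; the data structures, the validity callback and the utilisation value are illustrative assumptions rather than the patented implementation.

    # Illustrative walk-through of the Fig. 1 cycle under assumed data structures:
    # decode, match against the preset identification fields, check CPU utilisation,
    # and fall back to the garbage-collection server on a mismatch.
    id_fields = {0b00010001: "server_1", 0b00010010: "server_2"}
    buffers = {name: [] for name in id_fields.values()}
    garbage = []

    def one_cycle(packets, cpu_utilisation, is_valid):
        for field, payload in packets:                # steps (1)/(2): coded input, decoded and stored
            target = id_fields.get(field)             # step (3): match the identification field
            if target is None:                        # step (4): mismatch -> garbage-collection server
                if is_valid(field, payload):
                    new_server = "server_%d" % (len(id_fields) + 1)   # step (5): update fields
                    id_fields[field] = new_server
                    buffers.setdefault(new_server, [])
                else:
                    garbage.append((field, payload))  # invalid data is cleared later by a timer
                continue
            if cpu_utilisation <= 0.75:               # step (3): low utilisation -> direct transfer
                print("transfer", payload, "to", target)
            else:                                     # high utilisation -> keep buffering for classified transmission
                buffers[target].append(payload)

    one_cycle([(0b00010001, b"member query"), (0b01110000, b"noise")],
              cpu_utilisation=0.5,
              is_valid=lambda field, payload: False)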
In a specific embodiment, shown in Fig. 2, the present invention can be realized with the following cyclic data processing flow, comprising the following steps:
(1) encoding and classifying the network transmission data;
(2) decoding, storing and evaluating the pre-processed network transmission data by the protection device FPGA; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams;
(3) checking, by the FPGA, the state of the server, chiefly the CPU utilization: if the utilization is low, the FPGA transmits the network data in the buffer directly to the server; otherwise the FPGA automatically enters the classified-transmission process;
(4) routing the network transmission data to the corresponding server according to steps (1)-(3);
(5) repeating steps (1)-(4).
In a specific embodiment, shown in Fig. 3, in order to classify the network transmission data more conveniently and effectively and to reduce the amount of data processing as far as possible, the present invention processes coded identification fields. This includes the following:
The system consists of a network transmission data interface group, an FPGA chip and a server group. The network transmission interface group is configured mainly to encode and classify the network transmission data. The code is represented by an 8-bit binary number; the first four bits (the head field) represent the first-level index information and the last four bits (the tail field) represent the second-level index information. The first-level index can take 2^4 possible values and the second-level index can also take 2^4 possible values, so the 8-bit code can represent 2^8 kinds of indexed information.
For example, the fields 00010001 and 00010010 represent two different second-level indexes under the same first-level index, while the fields 00100001 and 00010001 represent second-level indexes under two different first-level indexes. In practice, if a first-level index stands for "query cargo", its head field may be set to 0001; if there are two second-level indexes under "query cargo", namely "member queries cargo" and "non-member queries cargo", their tail fields may be set to 0001 and 0010 (or to any other four-bit values). Encoding the network transmission data in this way assigns different data to different fields and thereby achieves the classification.
In a specific embodiment, determining the type of the network transmission data by the FPGA is step (2): the protection device FPGA decodes, stores and evaluates the pre-processed network transmission data; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams.
The identification fields of the FPGA are preset as 8-bit binary values, i.e. from 00000000 to 11111111, and each identification field of the FPGA corresponds to a hardware server, all of which are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if a field matches, the FPGA opens the logical channel to the corresponding server while the logical channels to the other servers remain closed.
All the hardware servers corresponding to the identification fields of the FPGA are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if no identification field in the FPGA matches the field of the data, all server logical channels of the FPGA are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
The garbage-collection server works concurrently with the other servers. When no preset identification field of the FPGA matches the field of the network transmission data, the data enters the logical channel of the garbage-collection server and is imported into it. At the garbage-collection server, an operator judges whether the network transmission data is valid: if it is valid, the preset identification fields of the FPGA are updated; otherwise the data remains in the garbage-collection server and is cleared at a preset time.
In a specific embodiment, checking the working state of the server is step (3). When the CPU utilization exceeds 75%, the FPGA automatically enters the process of encoding and classifying the network transmission data; when the CPU utilization does not exceed 75%, the FPGA transfers the network transmission data in the buffer directly to the server.
The fourth stage: routing the network transmission data to the corresponding server corresponds to steps (4) and (5).
To reduce the overall load on the servers, several servers work together on the pre-processed network transmission data, and each data class is assigned to a different server. For example, data streams that do not satisfy the judgment condition finally enter the garbage-collection server, and no other server logical channel is opened for them.
Finally, the whole process is repeated in a loop, which ensures maximum utilization of the data and avoids wasting resources.
The embodiment of the present invention also provides a network interface data classification system. The system consists of a network transmission data interface group, an FPGA chip and a server group. The server group is configured mainly to perform classified processing of the data: network transmission data with different fields is routed to different servers for handling. For example, pre-processed network transmission data whose first-level index field is 0001 is routed to server 1 for further processing, while data whose first-level index field is 0010 is routed to server 2.
The network transmission interface group is configured mainly to encode and classify the network transmission data. The code is represented by an 8-bit binary number; the first four bits (the head field) represent the first-level index information and the last four bits (the tail field) represent the second-level index information. The first-level index can take 2^4 possible values and the second-level index can also take 2^4 possible values, so the 8-bit code can represent 2^8 kinds of indexed information.
For example, the fields 00010001 and 00010010 represent two different second-level indexes under the same first-level index, while the fields 00100001 and 00010001 represent second-level indexes under two different first-level indexes. In practice, if a first-level index stands for "query cargo", its head field may be set to 0001; if there are two second-level indexes under "query cargo", namely "member queries cargo" and "non-member queries cargo", their tail fields may be set to 0001 and 0010 (or to any other four-bit values). Encoding the network transmission data in this way assigns different data to different fields and thereby achieves the classification.
The FPGA chip is configured mainly to decode, store and evaluate the pre-processed network transmission data; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams.
The identification fields of the FPGA are preset as 8-bit binary values, i.e. from 00000000 to 11111111, and each identification field of the FPGA corresponds to a hardware server, all of which are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if a field matches, the FPGA opens the logical channel to the corresponding server while the logical channels to the other servers remain closed.
All the hardware servers corresponding to the identification fields of the FPGA are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if no identification field in the FPGA matches the field of the data, all server logical channels of the FPGA are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
The server group is configured mainly to perform classified processing of the data: network transmission data with different identified fields is routed to different servers for handling. For example, pre-processed network transmission data whose first-level index field is 0001 is routed to server No. 1 for further processing, while data whose first-level index field is 0010 is routed to server No. 2.
The garbage-collection server works concurrently with the other servers. When no preset identification field of the FPGA matches the field of the network transmission data, the data enters the logical channel of the garbage-collection server and is imported into it; an operator at the garbage-collection server judges whether the data is valid, and if it is valid the preset identification fields of the FPGA are updated, otherwise the data remains in the garbage-collection server and is cleared at a preset time. Finally, the whole process is repeated in a loop, which ensures maximum utilization of the data and avoids wasting resources.
Embodiment 2
It should be noted that the network interface data classification system provided by the present invention is formed jointly by an FPGA chip, a network interface group, a server group, and the multiple logical channels obtained by performing the corresponding logic design on the FPGA chip. It also includes the functional support needed to drive the data transmission logic between these hardware modules. Those skilled in the art will appreciate that such functional support can be realized either by writing specific software programs or by hardware design of the programmable FPGA chip; this is something a person skilled in the art should know.
Specifically, the structural diagram of the network interface data classification system provided by the present invention, shown in Fig. 5, includes the following:
the system comprises a network transmission data interface group, an FPGA module and a server group;
the network transmission data interface group comprises multiple data interfaces, which may be of one or more interface types;
the server group comprises multiple servers, at least one of which is a garbage-collection server; the servers handle the data streams that satisfy the judgment condition, while the garbage-collection server handles the other data streams;
the FPGA module comprises a decoding module, a storage module, a judgment module, and multiple logical channels for transmitting network interface data, one for each of the multiple servers.
An embodiment of the present invention provides a schematic diagram of the internal basic structure of an FPGA, shown in Fig. 4, which includes the following:
There are many FPGA manufacturers and product families, but their basic composition is roughly the same. An FPGA adopts the concept of a logic cell array (LCA); internally it usually consists of configurable logic blocks (CLB), programmable input/output blocks (IOB), interconnect resources (ICR) and static random-access memory (SRAM) used to store the programming data.
The CLB array implements the logic functions specified by the user and is distributed across the FPGA in array form. The IOBs provide a programmable interface between the internal logic and the device package pins and are generally arranged around the periphery of the chip. The programmable interconnect resources are distributed in the gaps between the CLBs; they can be programmed to configure the signal networks over which the modules communicate, implementing the connections between the CLBs, between the CLBs and the IOBs, and between the global signals and the CLBs and IOBs. The FPGA implements its logic blocks with programmable look-up tables, and program-controlled multiplexers implement the function selection.
The functional configuration of the FPGA is determined by the programming data, which control the logic functions of each CLB, IOB and internal routing line and the interconnections between them. A storage cell of the SRAM consists of two CMOS inverters and one pass transistor used for reading and writing data; the two CMOS inverters are cross-coupled into a loop to form a bistable element. Owing to this unique design, the structure has strong immunity to interference and very high reliability.
In a specific embodiment, the network transmission data interface group further comprises an encoding module for encoding and classifying the network interface data. The code is represented by an 8-bit binary number; the first four bits (the head field) represent the first-level index information and the last four bits (the tail field) represent the second-level index information. The first-level index can take 2^4 possible values and the second-level index can also take 2^4 possible values, so the 8-bit code can represent 2^8 kinds of indexed information.
For example, the fields 00010001 and 00010010 represent two different second-level indexes under the same first-level index, while the fields 00100001 and 00010001 represent second-level indexes under two different first-level indexes. In practice, if a first-level index stands for "query cargo", its head field may be set to 0001; if there are two second-level indexes under "query cargo", namely "member queries cargo" and "non-member queries cargo", their tail fields may be set to 0001 and 0010 (or to any other four-bit values). Encoding the network transmission data in this way assigns different data to different fields and thereby achieves the classification.
In a specific embodiment, the judgment module decodes the data transmitted by the network transmission data interface group and matches the code in the data against the identification fields stored in the storage module; on a successful match it opens the logical channel to the corresponding server and closes the logical channels to the other servers.
The identification fields of the FPGA are preset as 8-bit binary values, i.e. from 00000000 to 11111111, and each identification field of the FPGA corresponds to a hardware server, all of which are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if a field matches, the FPGA opens the logical channel to the corresponding server while the logical channels to the other servers remain closed.
In a specific embodiment, if the match fails, the logical channels to all servers are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
All the hardware servers corresponding to the identification fields of the FPGA are enabled. When network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if no identification field in the FPGA matches the field of the data, all server logical channels of the FPGA are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server.
The server group is configured mainly to perform classified processing of the data: network transmission data with different identified fields is routed to different servers for handling. For example, pre-processed network transmission data whose first-level index field is 0001 is routed to server No. 1 for further processing, while data whose first-level index field is 0010 is routed to server No. 2.
The garbage-collection server works concurrently with the other servers. When no preset identification field of the FPGA matches the field of the network transmission data, the data enters the logical channel of the garbage-collection server and is imported into it; an operator at the garbage-collection server judges whether the data is valid, and if it is valid the preset identification fields of the FPGA are updated, otherwise the data remains in the garbage-collection server and is cleared at a preset time. Finally, the whole process is repeated in a loop, which ensures maximum utilization of the data and avoids wasting resources.
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art may also make several improvements and modifications without departing from the principles of the present invention; such improvements and modifications are natural developments of the invention and should likewise be regarded as falling within its scope of protection.

Claims (8)

1. A method for classifying network interface data, comprising the following steps:
(1) encoding and classifying the network transmission data;
(2) decoding, storing and evaluating the pre-processed network transmission data by a protection device FPGA, and transferring the evaluated network transmission data to a server, wherein the servers handle the data streams that satisfy the judgment condition and a garbage-collection server handles the other data streams;
the garbage-collection server works concurrently with the other servers; when no preset identification field of the FPGA matches the field of the network transmission data, the network transmission data enters the logical channel of the garbage-collection server and is then imported into the garbage-collection server;
the step in which the garbage-collection server handles the other data streams that do not satisfy the judgment condition specifically comprises:
enabling all the servers corresponding to the identification fields of the FPGA; when the network transmission data arrives at the data input, the FPGA checks whether the field carried by the data matches one of its preset identification fields; if no identification field in the FPGA matches the field of the network transmission data, the logical channels to all servers of the FPGA are closed, and the field together with the corresponding network transmission data is imported into the garbage-collection server;
judging, at the garbage-collection server, whether the network transmission data is valid; if it is valid, updating the preset identification fields of the FPGA; otherwise keeping the data in the garbage-collection server and clearing it at a preset time;
(3) checking, by the FPGA, the state of the server: if the CPU utilization of the server is low, the FPGA transfers the network data in the buffer directly to the server; otherwise the FPGA automatically enters the process of classified network data transmission;
(4) routing the network transmission data to the corresponding server according to the judgment results of steps (1)-(3);
(5) repeating steps (1)-(4).
2. The method for classifying network interface data according to claim 1, characterized in that step (1), encoding and classifying the network transmission data, specifically comprises:
representing the code with an 8-bit binary number, wherein the first four bits (the head field) correspond to the first-level index information and the last four bits (the tail field) correspond to the second-level index information, the 8-bit code thus representing 2^8 kinds of indexed information.
3. The method for classifying network interface data according to claim 1, characterized in that step (2), in which the FPGA decodes, stores and evaluates the pre-processed network transmission data, specifically comprises:
presetting the identification fields of the FPGA as 8-bit binary values; enabling all the servers corresponding to the identification fields of the FPGA; when the network transmission data arrives at the data input, checking, by the FPGA, whether the field carried by the data matches one of its preset identification fields; if a field matches, opening the logical channel to the corresponding server and closing the logical channels of the other servers.
4. The method for classifying network interface data according to claim 1, characterized in that step (3), in which the FPGA checks the state of the server, specifically comprises:
defining the CPU utilization of the server as high when it exceeds 75% and as low when it does not exceed 75%.
5. The method for classifying network interface data according to claim 1, characterized in that step (4), routing the network transmission data to the corresponding server, specifically comprises:
processing the pre-processed network transmission data with multiple servers working together, each data class corresponding to a different server.
6. A system for classifying network interface data, characterized in that the system comprises a network transmission data interface group, an FPGA module and a server group;
the network transmission data interface group comprises multiple data interfaces, which may be of one or more interface types;
the server group comprises multiple servers;
the FPGA module comprises a decoding module, a storage module, a judgment module, and multiple logical channels for transmitting network interface data, one for each of the multiple servers;
the judgment module decodes the data transmitted by the network transmission data interface group and matches the code in the data against the identification fields stored in the storage module; on a successful match it opens the logical channel to the corresponding server and closes the logical channels of the other servers; if the match fails, the field together with the corresponding network transmission data is imported into a garbage-collection server; the garbage-collection server works concurrently with the other servers;
when no preset identification field of the FPGA matches the field of the network transmission data, the network transmission data enters the logical channel of the garbage-collection server and is then imported into the garbage-collection server; the garbage-collection server judges whether the network transmission data is valid; if it is valid, the preset identification fields of the FPGA are updated; otherwise the data remains in the garbage-collection server and is cleared at a preset time.
7. The system for classifying network interface data according to claim 6, characterized in that the network transmission data interface group further comprises an encoding module for encoding and classifying the network interface data.
8. The system for classifying network interface data according to claim 6, characterized in that, if the match fails, the logical channels to all servers are closed.
CN201510413467.3A 2015-07-14 2015-07-14 Method and system for network interface data classification Active CN105025098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510413467.3A CN105025098B (en) 2015-07-14 2015-07-14 The method and system of network interface data classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510413467.3A CN105025098B (en) 2015-07-14 2015-07-14 The method and system of network interface data classification

Publications (2)

Publication Number Publication Date
CN105025098A CN105025098A (en) 2015-11-04
CN105025098B (en) 2018-06-29

Family

ID=54414792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510413467.3A Active CN105025098B (en) 2015-07-14 2015-07-14 The method and system of network interface data classification

Country Status (1)

Country Link
CN (1) CN105025098B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112445827B (en) * 2020-11-26 2023-01-03 中孚信息股份有限公司 Data security processing system, method and device in cloud office environment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140558A (en) * 2006-09-05 2008-03-12 深圳迈瑞生物医疗电子股份有限公司 Embedded system and satellite communication method thereof
CN101610253A (en) * 2009-07-22 2009-12-23 天津市电力公司 A kind of processing method that can realize protecting device data classification transmission
CN102023978A (en) * 2009-09-15 2011-04-20 腾讯科技(深圳)有限公司 Mass data processing method and system
CN101763278A (en) * 2010-01-11 2010-06-30 华为技术有限公司 Loading method and device of field programmable gate array

Also Published As

Publication number Publication date
CN105025098A (en) 2015-11-04

Similar Documents

Publication Publication Date Title
CN104077420B (en) Method and device for importing data into HBase database
CN110110269B (en) Event subscription method and device based on block chain
CN108200544A (en) Short message delivery method and SMS platform
CN101262352B (en) Uniform data accelerated processing method in integrated secure management
CN102523219A (en) Regular expression matching system and regular expression matching method
CN109660639A (en) A kind of data uploading method, equipment, system and medium
CN105025098B (en) The method and system of network interface data classification
CN109714264A (en) The implementation method of sliding window current limliting based on buffer queue
CN108111399A (en) Method, apparatus, terminal and the storage medium of Message Processing
Lee et al. Bundle-updatable SRAM-based TCAM design for openflow-compliant packet processor
CN107589990A (en) A kind of method and system of the data communication based on thread pool
CN108984514A (en) Acquisition methods and device, storage medium, the processor of word
CN107066341B (en) Event routing framework and method between software modules
CN105022716A (en) Multi-data link GPU server
CN103714446A (en) Intelligent express box system and method with multiple opening modes to be selected
CN104008130B (en) A kind of network message categorizing system and method based on mixing computing hardware
CN110324204A (en) A kind of high speed regular expression matching engine realized in FPGA and method
CN104462062B (en) A kind of method of text anti-spam
CN104182501B (en) Remote reserved clinic system
CN103279442B (en) Message filtering system and message filtering method of high-speed interconnection bus
CN107239316A (en) The optimized treatment method and device of a kind of function
CN107358125A (en) A kind of processor
CN110380952A (en) Mail transmission/reception method and device
CN107979683A (en) Terminal applies control method, apparatus and system
CN112396071A (en) Information monitoring method and device, terminal and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant