CN103391256B - Base station user-plane data processing optimization method based on a Linux system - Google Patents

Base station user-plane data processing optimization method based on a Linux system

Info

Publication number
CN103391256B
CN103391256B (application CN201310315568.8A)
Authority
CN
China
Prior art keywords
packet
packet reception
buffer
base station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310315568.8A
Other languages
Chinese (zh)
Other versions
CN103391256A (en)
Inventor
Jiang Wei (姜炜)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Research Institute of Posts and Telecommunications Co Ltd
Original Assignee
Wuhan Research Institute of Posts and Telecommunications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Research Institute of Posts and Telecommunications Co Ltd filed Critical Wuhan Research Institute of Posts and Telecommunications Co Ltd
Priority to CN201310315568.8A priority Critical patent/CN103391256B/en
Publication of CN103391256A publication Critical patent/CN103391256A/en
Application granted granted Critical
Publication of CN103391256B publication Critical patent/CN103391256B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention provides a base station user-plane data processing optimization method based on a Linux system. The method uses a packet processing acceleration module to perform packet classification, reduces receive interrupts through adaptive switching between interrupt and polling modes, avoids memory copies through kernel-space-to-user-space memory mapping, and reduces context switches between kernel mode and user mode through a lock-free queue, thereby significantly improving the performance of base station user-plane data processing.

Description

Base station user-plane data processing optimization method based on a Linux system
Technical field
The present invention relates to the field of wireless communication technology, and in particular to a base station user-plane data processing optimization method based on a Linux system.
Background technology
With the progress of wireless communication technology, the requirements for LTE (3GPP Long Term Evolution) base station user-plane data processing, particularly in the downlink direction, keep increasing. LTE base station user-plane data is mainly carried as GTPU (GTP user-plane tunneling protocol) traffic over UDP (User Datagram Protocol). In the traditional approach, the network interface card (NIC) receives a packet and notifies the Linux kernel via an interrupt; the kernel then processes the packet layer by layer through the network protocol stack — for GTPU traffic, the Ethernet layer, IP (Internet Protocol) layer, UDP layer, and socket layer — before waking the user-space program, which copies the GTPU packet into user space.
The drawbacks of the conventional approach include:
Too many copies: after kernel processing, every GTPU packet must be copied from kernel space to user space.
Too many context switches: the GTPU receive socket uses blocking I/O, so every call to the receive function for a single UDP packet incurs a system call, i.e., a context switch between kernel mode and user mode.
Too many interrupts: under heavy network load, the NIC raises far too many receive interrupts, which severely degrades the real-time responsiveness of the Linux system.
Summary of the invention
The present invention provides a base station user-plane data processing optimization method based on a Linux system, whose objective is to reduce context switches, interrupts, and copies during LTE base station user-plane data processing.
The technical solution of the present invention is a base station user-plane data processing optimization method based on a Linux system. A packet processing acceleration module is used for user-plane data processing; the module comprises a buffer management unit, a network frame management unit, and a queue management unit, the network frame management unit containing a packet classifier. The method comprises an initialization procedure and a data transmission procedure.
The initialization procedure comprises the following sub-steps.
Step 1.1: define the packet classification rules of the packet classifier of the packet processing acceleration module. The classification rules distinguish the base station's user-plane data, which consists of GTPU packets (GTPU denoting the GTP user-plane tunneling protocol).
Step 1.2: set up the buffer pool of the buffer management unit. During kernel initialization, a memory block is reserved as the buffer pool, divided into multiple equal-sized cells, and the physical address and size of each cell are registered with the buffer management unit.
Step 1.3: in the kernel's device tree file, associate the network frame management unit port that connects the LTE base station to the core network with the buffer pool set up in step 1.2.
Step 1.4: set up the ring buffer. During driver initialization, a memory block is reserved and divided into multiple equal-sized cells used to store packet descriptors; each descriptor records the address offset and length of a GTPU packet's payload. The header structure of the memory block holds the read and write pointers that control the ring buffer: the write pointer is the number of the cell up to which the kernel has filled data into the ring buffer, and the read pointer is the number of the cell from which the user-space receive process reads data. The user-space receive process directly accesses the buffer pool of the buffer management unit according to the information in the descriptors read from the ring buffer, reassembles the payload of each GTPU packet into a message, and delivers it to the other service modules for processing.
Step 1.5: during initialization of the user-space receive process, map the physical address spaces of the buffer pool set up in step 1.2 and the ring buffer set up in step 1.4 into user space.
The data transmission procedure comprises the following sub-steps.
Step 2.1: the packet classifier of the packet processing acceleration module identifies base station user-plane data, stores it in the buffer pool of the buffer management unit, enqueues the corresponding packet descriptor into the appropriate queue of the network frame management unit, and raises an interrupt to notify the kernel to receive.
Step 2.2: kernel packet reception, performed by the QMAN receive-interrupt callback. The callback first disables the receive interrupt and enters polling mode; it fills the information from the packet descriptors into the ring buffer, increments the ring buffer's write pointer, and wakes the user-space receive process. In each polling round, the callback counts the tunneling-protocol packets received in that round; if the count is below a preset threshold, polling ends and the receive interrupt is re-enabled. While receive interrupts are enabled, the user-space receive process sleeps on the wait queue defined in the ring buffer driver.
Step 2.3: after the user-space receive process sleeping on the wait queue defined in the ring buffer driver is woken, it directly accesses the buffer pool of the buffer management unit according to the information in the packet descriptor, skips the other header information, reassembles the GTPU payload into a message, and delivers it to the other service modules for processing.
Furthermore, in step 2.3 the user-space receive process strips the Ethernet header, VLAN header, IP header, and UDP header, assembles the payload of the GTPU packet into a message, and delivers it to the other service modules.
Furthermore, each cell of the ring buffer carries a flag marking whether the cell has been used.
Furthermore, for GTPU packets that bypass the protocol stack, when the network interface enters promiscuous mode a copy of each packet is submitted to the protocol stack, with a corresponding flag set in the packet so that the upper layers do not process it twice.
Furthermore, in step 2.2, if the ring buffer is full, newly received packet descriptors are discarded and the corresponding cells in the buffer pool of the buffer management unit are released.
Furthermore, in step 1.2, when setting up the buffer pool of the buffer management unit, more than one memory block may be reserved, each memory block being divided into cells of a different size.
Compared with traditional LTE user-plane data processing techniques, the present invention has the following innovations:
1. Packet classification distinguishes the LTE user-plane data that can bypass the standard protocol stack and needs optimized processing from packets that still require standard protocol stack processing, such as ICMP (Internet Control Message Protocol) and ARP (Address Resolution Protocol) packets. The latter are few in number and not performance-critical, so they are still handed to the Linux protocol stack; this avoids the clean-cut approach of general optimization techniques that must reimplement the entire protocol stack in user space.
2. Adaptive interrupt/poll switching drastically reduces the number of hardware interrupts, preventing excessive hard interrupts from harming the real-time behavior of real-time service tasks. The mechanism switches between interrupt mode and polling mode according to the packet arrival rate, ensuring processing efficiency at high packet rates while preserving low latency at low packet rates.
3. A lock-free queue synchronizes the single producer and single consumer, avoiding frequent system calls.
4. Kernel-to-user-space memory mapping avoids memory copies.
Brief description of the drawings
Fig. 1 is a structural diagram of the BMAN buffer pool in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the operating principle of the ring buffer in an embodiment of the present invention.
Fig. 3 is a schematic diagram of an empty ring buffer in an embodiment of the present invention.
Fig. 4 is a schematic diagram of a full ring buffer in an embodiment of the present invention.
Fig. 5 is an interaction diagram between the BMAN buffer pool and the ring buffer in an embodiment of the present invention.
Fig. 6 is the classification logic diagram of the packet classifier in an embodiment of the present invention.
Detailed description of the embodiments
The present invention mainly targets the optimization of LTE base station user-plane data processing. It is suitable for, but not limited to, LTE base stations; the scheme applies equally to any other system that needs efficient user-plane data processing on embedded Linux. The scheme meets the demand of radio base stations for high-speed data transfer while effectively reducing system resource consumption. The design makes full use of packet classification, adaptive interrupt/poll switching, lock-free queues, memory mapping, and related techniques, which together effectively reduce the number of interrupts, reduce process context switches, and completely avoid data copies.
The technical solution of the present invention is described in detail below with reference to the drawings and embodiments.
The initialization procedure comprises performing the following sub-steps in order:
Step 1.1: define the packet classification rules of the packet classifier of the packet processing acceleration module to distinguish the base station's user-plane data (i.e., GTPU packets, characterized by UDP destination port number 2152).
In a concrete implementation, this step can be realized with the coprocessor of the CPU on which the packet processing acceleration module runs. The embodiment uses an existing Freescale PowerPC processor, which implements the packet processing acceleration module with DPAA (Data Path Acceleration Architecture). DPAA provides a network frame management unit, a buffer management unit, and a queue management unit, generally abbreviated FMAN, BMAN, and QMAN respectively, with a packet classifier built into the network frame management unit. The classification rules of the Freescale PowerPC packet classifier are defined and configured into the hardware.
The purpose of this step is to distinguish the packets that need optimized processing, which will no longer pass through the complex processing of the Linux network protocol stack; for LTE user-plane data, these are GTPU packets (UDP destination port number 2152). Packets that do not need optimized processing — i.e., packets requiring TCP/IP protocol stack processing, such as ICMP and ARP packets — are handled normally. The classification rule can be set on the GTPU characteristic that the UDP destination port number is 2152; in the subsequent data transmission procedure, the rule enqueues the two kinds of packets into different queues.
Step 1.2: set up the BMAN buffer pool. During kernel initialization, a memory block is reserved by allocating kernel memory and divided into multiple equal-sized cells, and the physical address and size of each cell are registered with the BMAN (buffer management) unit of the DPAA module. To satisfy the Ethernet standard MTU (maximum transmission unit), the cell size is generally set to about 2 KB.
In the embodiment, a memory block is reserved as the BMAN buffer pool during kernel initialization. As shown in Fig. 1, the block is divided into equal-sized cells of 2112 bytes each; a 4 MB memory block yields 1985 cells, at offsets 0, 2112, 2112*2, ..., 2112*1984 (in computing, * conventionally denotes multiplication, ×). 2112 bytes is an odd multiple of 64 bytes; this size can hold the standard MTU of 1500 bytes, leaves room for the hardware to append auxiliary information, and evenly utilizes the L2 and L3 caches of the PowerPC P4080. After partitioning, the physical address and size of each cell are registered with the BMAN unit of the DPAA module. To avoid wasting physical memory, multiple memory blocks can also be reserved, each serving as a separate BMAN buffer pool with a different cell size, e.g., 64 bytes, 172 bytes, 320 bytes. The DPAA hardware module then intelligently selects the most suitable pool according to the size of the network frame: a 173-byte frame can use the 320-byte BMAN buffer pool, and a 100-byte frame the 172-byte pool.
Step 1.3: associate the FMAN (network frame management unit) port with the BMAN buffer pool. In the kernel's DTS (device tree source) file, the FMAN port that connects the LTE base station to the core network is associated with the BMAN buffer pool set up in step 1.2; if multiple BMAN buffer pools have been allocated, they can all be bound to this port.
Step 1.4: set up the ring buffer. During driver initialization, a memory block is reserved by allocating kernel memory and divided into multiple equal-sized cells that store the address offset and length of each GTPU packet's payload. All the cells share one header structure, which holds the read and write pointers controlling the ring buffer and thereby synchronizes the kernel (producer) and the user-space receive process (consumer). The write pointer in the header structure is the number of the cell the producer has filled with data; the read pointer is the number of the cell the consumer reads data from. In a concrete implementation, defining the ring buffer size as a power of two helps make the read/write pointer operations efficient.
In the embodiment, the synchronization scheme of the ring buffer is shown in Fig. 2. With a ring buffer, the traditional mutex structure is no longer needed. Initially, both the read and write pointers of the ring buffer are zero. Each time the producer produces data — i.e., each time the kernel driver receives a GTPU packet — the write pointer is incremented; once it exceeds the total number of cells in the ring buffer, it is taken modulo the cell count. For example, suppose the ring buffer has 256 cells, numbered 0 to 255. If the write pointer is 255 before incrementing, it becomes 256 after incrementing; 256 mod 256 is 0, so the write pointer wraps around to 0 and again points at the first cell. Denoting this operation NEXT, NEXT(X) = (X+1) % N, where X is the write pointer and N is the number of cells in the ring buffer. Likewise, each time the consumer takes data away — i.e., each time the user-space receive process finishes processing a GTPU packet — the read pointer is updated in the same way. As shown in Fig. 3, when the read pointer equals the write pointer, all data produced by the producer has been taken by the consumer and no new data has been produced yet; the user-space program must now sleep and wait for the producer to produce more data. As shown in Fig. 4, when applying the NEXT operation to the write pointer would yield the read pointer, the buffer is full and the producer must actively discard subsequently received GTPU packets; the number of cells in the ring buffer represents how large a gap between producer and consumer processing speeds the system can tolerate. The driver must also implement a memory-mapping function that serves as the bridge between the BMAN buffer pool memory and the user-space virtual address space.
The embodiment provides a user-space receive process, which directly accesses the BMAN memory according to the information in the descriptors read from the ring buffer, reassembles the GTPU packets into messages, and delivers them to the other service modules for processing.
Step 1.5: during initialization of the user-space receive process, map the kernel memory allocated in steps 1.2 and 1.4 into user space.
In the embodiment, during initialization of the user-space receive process, the physical address spaces of the BMAN buffer pool set up in step 1.2 and the ring buffer set up in step 1.4 are mapped into user space — that is, a stretch of kernel space is mapped to a stretch of user space — so that copying network data from kernel mode to user mode is avoided.
The data transmission procedure comprises performing the following sub-steps in order:
Step 2.1: the DPAA packet classifier identifies base station user-plane data and stores it in the DPAA BMAN buffer pool, and the corresponding packet descriptor (describing the memory address and size of the packet) is enqueued into the appropriate queue of the DPAA QMAN (queue management) unit, which raises an interrupt to notify the kernel to receive.
In the embodiment, after the LTE base station's network interface receives a packet sent from the core-network side, the hardware packet classifier in the DPAA module classifies it according to the classification rules defined in step 1.1 during initialization: descriptors of GTPU packets with UDP destination port 2152 are enqueued into one queue, and all other packet descriptors into another. The queue numbers are set at configuration time and, in a concrete implementation, may be any distinct values between 0x1 and 0xFFFF. The classification logic is shown in Fig. 6: a packet enters the classifier, which first checks whether it is an IP fragment — if so, it is enqueued to queue 0x2000. Otherwise the classifier checks whether the destination port is 2152 — if so, it is enqueued to queue 0x2001, otherwise again to queue 0x2000 — and classification ends. The packet descriptor records the starting physical address of the cell in the BMAN buffer pool registered during initialization and the size of the packet; after enqueuing a descriptor, the hardware raises the corresponding hardware interrupt.
Step 2.2: kernel packet reception, performed by the QMAN receive-interrupt callback. While receive interrupts are enabled, the user-space receive process sleeps on the wait queue defined in the ring buffer driver. The callback first disables the receive interrupt and enters polling mode; it fills the information from the packet descriptors into the ring buffer, increments the ring buffer's write pointer, and wakes the user-space receive process. In one polling round, the callback counts the GTPU packets received in that round. If the count is below a preset threshold (which those skilled in the art may define in advance, e.g., as 64), the receive interrupt is re-enabled and the system returns from polling mode to interrupt mode; if the count is not below the budget, the interrupt is not re-enabled and the system remains in polling mode.
In the embodiment, kernel packet reception is performed by the QMAN receive-interrupt callback. The callback first disables the receive interrupt and enters polling mode. Since step 2.1 enqueues GTPU and non-GTPU packets into different queues, the callback can act differently according to the queue number. For a GTPU packet, the information in the packet descriptor is filled into the ring buffer. The interaction between the BMAN buffer pool and the ring buffer is shown in Fig. 5: the kernel/user-space mapped memory in the figure is the BMAN buffer pool allocated by the kernel, X is the starting address of the region, and X+2112, X+2112×2, etc., are the starting addresses of its cells (each cell is 2112 bytes long). The user-space receive process maps this address range into its own virtual address space. When a GTPU packet arrives, the DPAA hardware module places it into some cell of the BMAN buffer pool — say into the physical address range from X+2112×4 to X+2112×5. The kernel driver fills the cell's address offset within the kernel/user-space mapped memory (2112×4) and the physical length of the GTPU packet into the ring buffer cell that the current write pointer points at, increments the write pointer, and then wakes the user-space receive process waiting on the wait queue defined by the ring buffer. During kernel receive, if the ring buffer is full, the packet descriptor is discarded and the corresponding cell of BMAN buffer pool memory is released. After the write pointer is incremented, the BMAN buffer cell previously pointed at by it could in principle be released immediately; however, the hardware operation that releases cells of a BMAN buffer pool can release at most 8 cells per call, and each hardware operation call is relatively expensive, so for efficiency the cell addresses at the write pointer can be cached and the corresponding BMAN buffer pool memory released in batches of 8. Because releasing requires the cell address and size saved in the ring buffer, a variable is needed to record whether a cell has been used before — i.e., whether it has been filled with the address of a BMAN buffer cell. Each ring buffer cell is therefore marked with a valid flag indicating whether it has been used, to avoid releasing an unfilled buffer.
In addition, when debugging network problems, the most common method is to capture packets with TCPDUMP (a network packet capture tool); since GTPU packets now bypass the Linux network protocol stack, they would otherwise be invisible to it. Therefore, when the network interface enters promiscuous mode — i.e., when TCPDUMP is running — each GTPU packet is copied into an sk_buff (the data structure describing a packet in the Linux network protocol stack), a special flag is set in the sk_buff structure, and the copy is submitted to the protocol stack. This satisfies the debugging requirement, while the special flag prevents the protocol stack from redundantly delivering such packets to upper-layer user-mode handlers. For a non-GTPU packet — i.e., a packet that requires Linux network protocol stack processing — the stack must use the sk_buff structure, so in the callback the packet described by the descriptor is copied into an sk_buff (and its auxiliary structures) dynamically allocated by the kernel memory allocator, and the relevant function is called to hand it to the protocol stack. Because a copy is made, the BMAN buffer cell can be released immediately, while the memory associated with the sk_buff is released by the network protocol stack.
If the receive callback processes fewer packets than the budget in one polling round, network packets are now entering the system slowly — possibly, after several invocations of the callback, no data remains in the BMAN buffer pool — so the receive interrupt is re-enabled and the system returns from polling mode to interrupt mode. The receive path thus switches adaptively between interrupt and polling modes according to how fast packets enter the system.
Step 2.3: after the user-space receive process sleeping on the wait queue defined by the ring buffer driver is woken by step 2.2, it directly accesses the BMAN memory according to the information in the packet descriptor, skips the Ethernet header, VLAN header, IP header, UDP header, and other such information, reassembles the GTPU packet into a message, and delivers it to the other service modules for processing.
After the user-space receive process is woken, using the packet address offset and packet size filled into the ring buffer cell pointed at by the current read pointer, it can access the corresponding GTPU packet in the BMAN buffer pool without any copy. In the embodiment, after start-up the user-space receive process sleeps on the wait queue defined by the ring buffer driver. Once woken by step 2.2, and because it has mapped the BMAN buffer pool into its own address space, it can access the BMAN buffer pool directly according to the information in the packet descriptor (which contains the packet's physical address offset, packet size, and so on). Depending on the packet format — e.g., whether a VLAN tag is present, and whether the IP version is IPv4 or IPv6 — the user program skips the Ethernet header, VLAN header, IP header, and UDP header, reassembles the GTPU packet into a message, and delivers it to the other service modules for processing.
Due to a limitation of the P4080's packet classifier, when the GTPU packet sent by the transmitter is IP-fragmented, only the first IP fragment contains the UDP header, while the classifier rule simply checks the UDP destination port number of an IP packet. The other fragments would therefore be handed to the Linux network protocol stack, but since the first fragment would not be, IP reassembly in the protocol stack would fail. In the implementation, to avoid this problem, all such fragmented packets are classified as non-GTPU packets; the packet classifiers in the more high-end CPUs of the PowerPC series do not have this problem.
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (6)

1. the base station user face data processing optimization method based on linux system, bag process accelerating module is adopted to carry out user face data process, described bag process accelerating module comprises buffer management unit, network data frame administrative unit and queue management unit, network data frame administrative unit comprises packet classifier, it is characterized in that: comprise initialization procedure and data transmission procedure
Described initialization procedure comprises following sub-step,
Step 1.1, the bag classifying rules of the packet classifier of definition bag process accelerating module, described bag classifying rules is the rule that the user face data of base station is distinguished in classification, and described user face data is GTPU message, and GTPU represents tunnel protocol;
Step 1.2, set up the buffering area of buffer management unit, when being included in kernel initialization, reserved memory block is as the buffering area of buffer management unit, be multiple equal-sized grid by a memory block cutting, the physical address of each grid and size are informed to buffer management unit;
Step 1.3, in the device tree file of kernel, in the network data frame administrative unit port that LTE base station is connected with core net and step 1.2 set up buffer management unit buffering area be associated together;
Step 1.4, sets up buffer circle, when being included in driving initialization, a reserved memory block, be multiple equal-sized grid by memory block cutting, be used for stored messages descriptor, the information in described message descriptor comprises address offset and the length of the packet of GTPU message; The read pointer controlling buffer circle and write pointer is preserved in the header structure of memory block, described write pointer is kernel fills in numbering from the corresponding grid of data to buffer circle, and read pointer is User space packet receiving process reads the corresponding grid of data numbering from buffer circle; Described User space packet receiving process is the buffering area according to the information direct access buffer district administrative unit the message descriptor read from buffer circle, gives the process of other business module process by data division recomposition message delivery in the GTPU message of reading;
Step 1.5: during initialization of the user-space packet-receiving process, map the physical address spaces holding the buffer area of the buffer management unit established in step 1.2 and the ring buffer established in step 1.4 into user space.
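The step-1.5 mapping is the standard `mmap` pattern. In the real system the file descriptor would come from `/dev/mem` or a custom character device exposing the reserved physical range (a privileged operation); the sketch below stands in with an ordinary file path so the call sequence itself, which is the point, can be shown. `map_region` is an invented helper name.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch of step 1.5: map a reserved region (buffer area or
 * descriptor ring) into the receive process's address space.
 * phys_offset must be page-aligned. */
void *map_region(const char *path, off_t phys_offset, size_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return MAP_FAILED;
    /* MAP_SHARED so kernel-side writes are visible to user space. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, phys_offset);
    close(fd); /* the mapping survives the close */
    return p;
}
```

Once both regions are mapped, the descriptors' address offsets can be turned into user-space pointers with simple base-plus-offset arithmetic, with no copy through the kernel.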
The data transmission procedure comprises the following sub-steps:
Step 2.1: the packet classifier of the packet-processing acceleration module identifies base station user-plane data, stores them in the buffer area of the buffer management unit, enqueues the corresponding message descriptors to the corresponding queue of the network data frame management unit, and raises an interrupt to notify the kernel to receive the packets.
Step 2.2: kernel packet reception is performed by the packet-receiving interrupt callback function of the QMAN, where QMAN is the queue management unit. The callback first disables the packet-receiving interrupt and enters a polling state, collects the information from the message descriptors into the ring buffer, increments the write pointer of the ring buffer, and wakes up the user-space packet-receiving process. On each poll, the callback counts the number of tunnelling-protocol messages received during that poll; if the count is below a predetermined threshold, it leaves the polling state and re-enables the packet-receiving interrupt. While the packet-receiving interrupt is enabled, the user-space packet-receiving process sleeps on a wait queue defined in the ring-buffer driver.
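The interrupt/poll switch in step 2.2 is the same idea as Linux NAPI: stay in polling mode while traffic is heavy, fall back to interrupts when it quiets down. A minimal sketch under simplifying assumptions (the "hardware" queue is just a counter, and all names are invented):

```c
#include <assert.h>

/* Sketch of the step-2.2 poll loop. pending models descriptors
 * waiting in the hardware queue; batch is the most one poll drains. */
typedef struct {
    int pending;
    int batch;
    int irq_enabled;
    int wakeups;     /* times the user-space receiver was woken */
} rx_state;

static int poll_once(rx_state *s)
{
    int n = s->pending < s->batch ? s->pending : s->batch;
    s->pending -= n;              /* collect descriptors into the ring */
    return n;                     /* messages handled this poll */
}

/* Packet-receiving interrupt callback; returns total messages handled. */
int rx_interrupt_callback(rx_state *s, int threshold)
{
    int total = 0;
    s->irq_enabled = 0;           /* close RX interrupt, enter polling */
    for (;;) {
        int n = poll_once(s);
        total += n;
        if (n > 0)
            s->wakeups++;         /* wake the user-space receiver */
        if (n < threshold)
            break;                /* per-poll count below threshold */
    }
    s->irq_enabled = 1;           /* reopen the RX interrupt */
    return total;
}
```

The threshold is the tuning knob: a higher value drops back to interrupt mode sooner (lower latency at low load), a lower value keeps polling longer (fewer interrupts at high load).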
Step 2.3: after the user-space packet-receiving process sleeping on the wait queue defined by the ring-buffer driver is woken up, it directly accesses the buffer area of the buffer management unit according to the information in the message descriptors, skips over the other information, reassembles the data portions of the GTPU messages into messages, and delivers them to the other service modules for processing.
2. The base station user-plane data processing optimization method based on the Linux system according to claim 1, characterized in that in step 2.3 the user-space packet-receiving process strips off the Ethernet header, VLAN header, IP header, and UDP header, assembles the data portion of the GTPU message into a message, and delivers it to the other service modules.
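Header stripping per claim 2 reduces to computing the offset of the GTPU message behind the Ethernet, optional 802.1Q VLAN, IPv4, and UDP headers. The sketch below decodes only the fields needed for the offsets and assumes IPv4; a production parser would also validate total lengths and checksums, and the function name is invented.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ETH_HLEN  14
#define VLAN_HLEN 4
#define UDP_HLEN  8

/* Returns the offset of the GTPU message in the frame, or -1 if the
 * frame is too short. */
long gtpu_offset(const uint8_t *frame, size_t len)
{
    size_t off = ETH_HLEN;
    if (len < off + 2)
        return -1;
    /* EtherType at bytes 12..13; 0x8100 is an 802.1Q VLAN tag. */
    uint16_t etype = (uint16_t)(frame[12] << 8 | frame[13]);
    if (etype == 0x8100) {
        off += VLAN_HLEN;            /* skip the VLAN header */
        if (len < off)
            return -1;
    }
    if (len < off + 20)              /* minimum IPv4 header */
        return -1;
    /* IPv4 IHL (header length in 32-bit words) is the low nibble. */
    size_t ip_hlen = (size_t)(frame[off] & 0x0F) * 4;
    off += ip_hlen;                  /* skip the IP header  */
    off += UDP_HLEN;                 /* skip the UDP header */
    return len >= off ? (long)off : -1;
}
```

With the buffer area mapped into user space (step 1.5), "stripping" is just pointer arithmetic: the process hands downstream modules a pointer at this offset rather than copying.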
3. The base station user-plane data processing optimization method based on the Linux system according to claim 1, characterized in that each cell of the ring buffer carries a flag marking whether the cell is in use.
4. The base station user-plane data processing optimization method based on the Linux system according to claim 1, characterized in that, for GTPU messages that are not processed by the protocol stack, when the network interface enters promiscuous mode a copy of the data packet is delivered to the protocol stack and a corresponding flag is set in the message, so that the upper layers do not process it a second time.
5. The base station user-plane data processing optimization method based on the Linux system according to claim 1, characterized in that in step 2.2, if the ring buffer is full, the newly received message descriptor is discarded and the cell in the buffer area of the buffer management unit corresponding to that message descriptor is released.
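The point of claim 5 is that dropping the descriptor alone would leak the buffer cell it references, so the drop path must also return the cell to the pool. A minimal sketch (all types and names invented for the example):

```c
#include <assert.h>
#include <stdint.h>

enum { RING_CAP = 4, POOL_CELLS = 8 };

/* Sketch of claim 5: a bounded descriptor ring plus a per-cell
 * "returned to pool" flag standing in for the buffer manager. */
typedef struct {
    unsigned head, tail;        /* ring occupancy = head - tail */
    uint32_t ring[POOL_CELLS];  /* enqueued cell indices        */
    int cell_free[POOL_CELLS];  /* 1 = cell back in the pool    */
} rx_ring;

/* Returns 1 if enqueued; 0 if dropped, with the cell released. */
int rx_enqueue(rx_ring *r, uint32_t cell_idx)
{
    if (r->head - r->tail == RING_CAP) {
        r->cell_free[cell_idx] = 1; /* full: release the buffer cell */
        return 0;                   /* and discard the descriptor    */
    }
    r->ring[r->head % RING_CAP] = cell_idx;
    r->head++;
    return 1;
}
```

Releasing on the drop path keeps the buffer area's occupancy bounded by the ring depth even when the user-space reader stalls.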
6. The base station user-plane data processing optimization method based on the Linux system according to any one of claims 1 to 5, characterized in that in step 1.2, when establishing the buffer area of the buffer management unit, more than one memory block is reserved as the buffer area, and each memory block is cut into cells of a different size.
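Claim 6's multiple blocks with different cell sizes form size classes: an incoming frame takes a cell from the smallest class that fits, which wastes less memory than one-size cells. A sketch with example sizes (the sizes and the function name are illustrative, not from the patent):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of claim 6: several memory blocks, each cut into cells of a
 * different size, treated as size classes in ascending order. */
static const size_t cell_sizes[] = { 256, 2048, 9600 };
#define NCLASSES (sizeof cell_sizes / sizeof cell_sizes[0])

/* Returns the index of the pool to allocate from, or -1 if the frame
 * exceeds the largest cell size. */
int pick_pool(size_t frame_len)
{
    for (size_t i = 0; i < NCLASSES; i++)
        if (frame_len <= cell_sizes[i])
            return (int)i;
    return -1;
}
```

For example, small control-plane-sized frames land in the 256-byte class while full-MTU data frames use the 2048-byte class, so neither starves the other's pool.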
CN201310315568.8A 2013-07-25 2013-07-25 A kind of base station user face data processing optimization method based on linux system Active CN103391256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310315568.8A CN103391256B (en) 2013-07-25 2013-07-25 A kind of base station user face data processing optimization method based on linux system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310315568.8A CN103391256B (en) 2013-07-25 2013-07-25 A kind of base station user face data processing optimization method based on linux system

Publications (2)

Publication Number Publication Date
CN103391256A CN103391256A (en) 2013-11-13
CN103391256B true CN103391256B (en) 2016-01-13

Family

ID=49535417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310315568.8A Active CN103391256B (en) 2013-07-25 2013-07-25 A kind of base station user face data processing optimization method based on linux system

Country Status (1)

Country Link
CN (1) CN103391256B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104731711A (en) * 2013-12-23 2015-06-24 中兴通讯股份有限公司 Table filling method and device of network equipment
CN104811391B (en) * 2014-01-24 2020-04-21 中兴通讯股份有限公司 Data packet processing method and device and server
CN103945456B (en) * 2014-05-12 2017-06-27 武汉邮电科学研究院 A kind of efficient UDP message of LTE base station user plane based on linux system sends optimization method
WO2016000147A1 (en) * 2014-06-30 2016-01-07 华为技术有限公司 Method for accessing storage device, and host
CN104102494B (en) * 2014-07-31 2017-07-25 武汉邮电科学研究院 Air-interface data encryption acceleration method for radio communication base stations
CN105635045B (en) * 2014-10-28 2019-12-13 北京启明星辰信息安全技术有限公司 Tcpdump packet capture implementation method and device based on drive zero copy mode system
CN105873038A (en) * 2016-06-07 2016-08-17 武汉邮电科学研究院 Method for safely processing LTE (Long Term Evolution) base station user plane data
CN106161110B (en) * 2016-08-31 2019-05-17 东软集团股份有限公司 Data processing method and system in a kind of network equipment
CN106411778B (en) * 2016-10-27 2019-07-19 东软集团股份有限公司 The method and device of data forwarding
CN106844242B (en) * 2016-12-30 2019-07-05 中国移动通信集团江苏有限公司 A kind of method for interchanging data and system
CN107086948B (en) * 2017-04-14 2019-11-12 重庆邮电大学 A kind of data processing method promoting virtualization network performance at SDWN
CN107809366B (en) * 2017-10-27 2020-10-20 浙江宇视科技有限公司 Method and system for safely sharing UNP tunnel
CN107908365A (en) * 2017-11-14 2018-04-13 郑州云海信息技术有限公司 The method, apparatus and equipment of User space memory system data interaction
CN109936502B (en) * 2017-12-15 2022-05-17 迈普通信技术股份有限公司 Data receiving method and data transmission equipment
CN109120665B (en) * 2018-06-20 2020-05-29 中国科学院信息工程研究所 High-speed data packet acquisition method and device
CN110134439B (en) * 2019-03-30 2021-09-28 北京百卓网络技术有限公司 Lock-free data structure construction method and data writing and reading methods
CN110167197B (en) * 2019-04-16 2021-01-26 中信科移动通信技术有限公司 GTP downlink data transmission optimization method and device
CN110083311B (en) * 2019-04-26 2022-03-29 深圳忆联信息系统有限公司 SSD descriptor-based software and hardware interaction issuing method and system
CN110138797B (en) * 2019-05-27 2021-12-14 北京知道创宇信息技术股份有限公司 Message processing method and device
CN110602225A (en) * 2019-09-19 2019-12-20 北京天地和兴科技有限公司 Efficient packet receiving and sending method of linux system suitable for industrial control environment
CN111211942A (en) * 2020-01-03 2020-05-29 山东超越数控电子股份有限公司 Data packet receiving and transmitting method, equipment and medium
CN112491979B (en) * 2020-11-12 2022-12-02 苏州浪潮智能科技有限公司 Network card data packet cache management method, device, terminal and storage medium
CN113596171B (en) * 2021-08-04 2024-02-20 杭州网易数之帆科技有限公司 Cloud computing data interaction method, system, electronic equipment and storage medium
CN117579386B (en) * 2024-01-16 2024-04-12 麒麟软件有限公司 Network traffic safety control method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101594307A (en) * 2009-06-30 2009-12-02 中兴通讯股份有限公司 Dispatching method and system based on multi-queue
US7953002B2 (en) * 2005-11-10 2011-05-31 Broadcom Corporation Buffer management and flow control mechanism including packet-based dynamic thresholding
CN102740369A (en) * 2011-03-31 2012-10-17 北京新岸线无线技术有限公司 Data processing method, device and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7787463B2 (en) * 2006-01-26 2010-08-31 Broadcom Corporation Content aware apparatus and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7953002B2 (en) * 2005-11-10 2011-05-31 Broadcom Corporation Buffer management and flow control mechanism including packet-based dynamic thresholding
CN101594307A (en) * 2009-06-30 2009-12-02 中兴通讯股份有限公司 Dispatching method and system based on multi-queue
CN102740369A (en) * 2011-03-31 2012-10-17 北京新岸线无线技术有限公司 Data processing method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Pervasive Packet Capture Technology Based on Multi-core Processors; Sun Jiang; China Master's Theses Full-text Database, Information Science and Technology; 2012-07-31; full text *

Also Published As

Publication number Publication date
CN103391256A (en) 2013-11-13

Similar Documents

Publication Publication Date Title
CN103391256B (en) A kind of base station user face data processing optimization method based on linux system
US11916781B2 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC)
JP7039685B2 (en) Traffic measurement methods, devices, and systems
CN108833299B (en) Large-scale network data processing method based on reconfigurable switching chip architecture
US7756990B2 (en) Configurable protocol engine
CN103945456B (en) A kind of efficient UDP message of LTE base station user plane based on linux system sends optimization method
CN103888386B (en) The transmission method and device, system of expansible virtual local area network packet
CN103069757B (en) Packet reassembly and resequence method, apparatus and system
US7324540B2 (en) Network protocol off-load engines
US20050021558A1 (en) Network protocol off-load engine memory management
CN107659515A (en) Message processing method, device, message processing chip and server
CN103731409B (en) The distributed measurement device of embedded type automobile acquisition equipment for accelerating with TCP
WO2023116340A1 (en) Data message forwarding method and apparatus
CN118264617B (en) Method, system, equipment and storage medium for transmitting data of overlay network
CN103986585A (en) Message preprocessing method and device
CN110167197A (en) GTP downlink data transmission optimization method and device
CN104350488A (en) Systems and methods for selective data redundancy elimination for resource constrained hosts
CN104584492A (en) Packet processing method, device and system
CN101964751B (en) Transmission method and device of data packets
CN105512075A (en) High-speed output interface circuit, high-speed input interface circuit and data transmission method
CN101938399A (en) Routing method and device
US7532644B1 (en) Method and system for associating multiple payload buffers with multidata message
CN103200086B (en) The information interacting method of Routing Protocol in a kind of ForCES system
Dai et al. Design of remote upgrade of equipment monitoring system software
CN116193000B (en) FPGA-based intelligent packet rapid forwarding system and forwarding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 430074, No. 88, postal academy road, Hongshan District, Hubei, Wuhan

Patentee after: Wuhan post and Telecommunications Science Research Institute Co., Ltd.

Address before: 430074, No. 88, postal academy road, Hongshan District, Hubei, Wuhan

Patentee before: Wuhan Inst. of Post & Telecom Science