CN101227341A - Method for fast catching Ethernet card on Linux system - Google Patents
Publication number: CN101227341A (application CN 200710115379 A)
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The invention discloses a method for fast packet capture with an Ethernet card on a Linux system. The method maps the Ethernet card's DMA receive buffer into user space, reducing the number of times a packet is copied on its way to the user program, and starts one capture thread per Ethernet card, so that packets can be captured from several cards simultaneously. Interrupts are disabled in favor of active polling, which cuts the scheduling overhead of interrupt handling. Reusable packet buffers and their management are defined, which cuts the cost of buffer allocation and release and reduces contention over buffer access between the capture threads and the packet-processing threads. The result is fast capture of Ethernet packets and of the IP packets carried in them.
Description
1. Technical Field
The present invention relates to the field of computer communication, and specifically to a method for fast packet capture with an Ethernet card on a Linux system.
2. Technical Background
Local area networks are now in very wide use, and Ethernet is one kind of LAN, indeed the mainstream kind. Ethernet can carry IP packets and thereby the connections of the TCP/IP Internet. In network management and network measurement, the packets in transit must be captured in order to analyze network traffic, monitor communication between specified source and destination addresses, perform intrusion detection, and so on. To improve monitoring speed and related performance, packets need to be captured completely, or as completely as possible.
On a Linux system, the traditional Ethernet card driver and the kernel's network stack suffer from Ethernet interrupt scheduling and from the packet being copied too many times within the kernel and from kernel to user space, so the receive rate is not very good. The usual way to capture link-layer packets on Linux today is to receive them through a socket of the PF_PACKET type; such a socket only picks up link-layer frames from the NIC in the netif_receive_skb function at the bottom of the Linux protocol stack, and receiving data through it requires one more copy to user space. Moreover, the kernel network stack must balance speed against generality, and several aspects of the way it receives packets from a network device limit capture speed. Specifically:
(1) In the non-NAPI mode, packet reception is driven entirely by NIC interrupts, and at high packet rates the scheduling of the NIC interrupt handler consumes considerable CPU.
(2) The NAPI mode combines NIC interrupts with active polling, which greatly reduces the number of interrupt schedules; but each poll is limited to a fixed time (one system clock tick, a jiffy), after which reception must end, so it is hard to guarantee that no packets are lost after polling stops.
(3) In both the non-NAPI and the NAPI mode, a new buffer must be allocated for every packet taken out of the DMA receive buffer; buffers cannot be reused. Although allocation and release of buffer space are handled efficiently by the system's memory manager, the computational cost is still significant.
(4) In both modes the queue of received packet buffers is quite short, so when packet-processing speed is uneven the queue easily fills up and packets are dropped; and if an extra buffer is added in user space, yet another copy is needed.
Because of these limitations in speed and performance of the traditional Linux packet-capture path, a faster and better capture method is needed.
3. Summary of the Invention
To improve the receive rate and performance of packet capture with an Ethernet card, the invention provides a method for fast packet capture with an Ethernet card on a Linux system.
The steps of the fast packet-capture method provided by the invention are:
(1) In the Ethernet card driver, register a miscellaneous device (misc device) at module load time, denoted misc_dev, and define its open, release, mmap, and ioctl handler functions. The mmap function maps a designated NIC's ring of receive descriptors and its packet-receive buffers into user space; the ioctl function lets the user-space program advance the designated NIC's receive-descriptor ring tail pointer so that the NIC can keep receiving. Correspondingly, the misc device is unregistered at module unload.
(2) In the Ethernet card driver, define a global structure variable at module load time, denoted g_map_dev, which stores, for every NIC served by this driver, the number of receive descriptors, the virtual address of the descriptor ring, and the virtual addresses of the packet-receive buffers. In the mmap function of misc_dev, g_map_dev is used to translate those buffer virtual addresses into physical addresses for mapping into user space. Correspondingly, the global structure is released at module unload.
(3) In the Ethernet card driver, allocate the receive-descriptor ring and the packet-receive buffers with the pci_alloc_consistent function, so that the buffers can be accessed from both the NIC side and the CPU side at once; the NIC transfers into these two buffer areas by direct memory access (DMA). At the same time, save the buffers' virtual addresses in the global structure g_map_dev described in step (2). Correspondingly, when the buffers are released (on NIC stop or NIC reset), use the pci_free_consistent function.
(4) In user space, define a series of linked-list buffers, to allow packet buffers to be reused and to reduce lock contention between the capture threads and the packet-processing threads when packets are fetched.
The buffers fall into two kinds of group: packet-receive cache groups and packet-processing cache groups. For a configuration with N capture NICs and M packet-processing cache groups, (2+M)×N + 4×M buffers are defined in total: N receive cache groups and M processing cache groups, each receive cache group containing 2+M buffers and each processing cache group containing 4 buffers. Every buffer is organized as a linked list, each list element being a fixed-size packet buffer together with its state (the length of the packet actually received).
Each receive cache group consists of a two-level free-buffer area made up of 2 free buffers, plus M receive buffers. The free-buffer area collects released, already-processed packet buffers for reuse; the receive buffers hold the packets received. The packets received by each NIC are divided into M classes, either evenly or by packet content, and delivered into the M receive buffers accordingly.
Each packet-processing cache group consists of a two-level pending area made up of 2 pending buffers, 1 in-processing buffer, and 1 processed buffer.
(5) In user space, start one capture thread per capture NIC; this is called the capture thread.
(6) The capture and buffer-circulation flow is as follows. The capture program exposes the interface functions pktcap_open, pktcap_close, pktcap_get_onepkt_block, pktcap_get_onepkt_noblock, and pktcap_free_onepkt. For a configuration of N capture NICs, M processing cache groups, and K packet buffers, pktcap_open first allocates the K packet buffers, each larger than the maximum Ethernet packet size (normally 1500 bytes), and distributes them evenly over the N first-level free-buffer areas. It then starts the N capture threads, each of which uses one NIC and one receive cache group. When a capture thread starts running, it disables the interrupt of its NIC, calls the mmap function of the misc device misc_dev to map the NIC's DMA receive-descriptor ring and DMA packet-receive buffers from physical addresses into user space, calls the misc device's ioctl function to read and set the NIC's receive state (for Intel PRO/100 and PRO/1000 series NICs, this means reading the DMA receive-descriptor head pointer and setting the DMA receive-descriptor tail pointer), and then enters the packet polling-receive loop:
(7) First query the status word of the current DMA receive descriptor to decide whether the packet has finished being received.
[1] If not, sleep for a while. The sleep length is chosen from the number of DMA receive descriptors and the NIC bandwidth. For a 1000 Mb/s link with 4096 receive descriptors, to ensure that the DMA receive buffer cannot fill up and drop packets during the sleep, the sleep time t_sleep should be less than the time needed to receive 4096/2 minimum-size packets at full bandwidth, i.e. t_sleep = (64 × 8 × 4096/2)/10^9 s ≈ 1.05 ms, where 64 is the minimum Ethernet packet length in bytes.
[2] If the current packet has finished being received, copy it into a receive buffer. First check whether the first-level free-buffer area holds any buffers.
[2.1] If it does, copy the packet contents from the NIC's DMA receive buffer into the first buffer of the first-level free area, then classify the packet either evenly or by content. Even classification means that, for this NIC, each packet goes to one of the M receive buffers, delivered in rotation from the 1st to the Mth. Classification by content applies a hash to the IP packet's source address, destination address, source port, destination port, and so on, taking the result modulo M to obtain an index with 0 ≤ index ≤ M−1; the packet is then delivered into receive buffer index+1. Then check the number of buffers in that receive buffer; if it exceeds NUM_RECVED, transfer the buffer's packets to the (index+1)-th second-level pending buffer, locking and unlocking the second-level pending buffer around the transfer.
[2.2] If the first-level free area has no buffers, lock the second-level free area, move its buffers up to the first-level free area, and unlock it again.
[2.2.1] If the number of buffers obtained from the second-level free area is at most NUM_FREEBUF_L2 (NUM_FREEBUF_L2 being between 0 and a few tens), then, to keep the free areas from running out quickly and the loop from falling into a vicious circle of repeatedly locking and polling the second-level area, sleep for a while, an order of magnitude shorter than the t_sleep above, roughly 1 µs to 10 µs. After the sleep, if the first-level free area is no longer empty, copy, classify, and deliver the packet; otherwise do not copy, i.e. drop the packet, but mark its descriptor as received, and loop back to receiving.
[2.2.2] If the number of buffers obtained from the second-level free area exceeds NUM_FREEBUF_L2, copy, classify, and deliver the packet, mark the descriptor as received, and loop back to receiving.
The above is the receiving flow of the capture thread; the input of the user-space packet buffers it uses is the second-level free area, and the output is the second-level pending area. The flow by which a packet travels from the second-level pending area back to the second-level free area is as follows.
Each packet-processing thread calls the pktcap_get_onepkt_block or pktcap_get_onepkt_noblock interface function to obtain one packet, and calls pktcap_free_onepkt to release it. pktcap_get_onepkt_block is the blocking variant: it returns immediately when a packet is available, and otherwise waits on a pthread_cond_wait condition variable of the Linux thread library until the capture thread has transferred received packets from a receive buffer to the second-level pending area and signalled the condition variable to wake the processing thread. pktcap_get_onepkt_noblock is the non-blocking variant: it returns a packet pointer if a packet is available and a null pointer immediately otherwise. Although each processing cache group can be used by several packet-processing threads, for efficiency, to reduce concurrent locking of the buffers between threads, each group is preferably used by a single processing thread.
The pktcap_get_onepkt_block and pktcap_get_onepkt_noblock functions move a packet buffer from the second-level pending area to the in-processing buffer. They first check whether the first-level pending buffer is non-empty; if so, one packet is moved from the first-level pending buffer to the in-processing buffer, and its address is returned as a pointer. If the first-level pending buffer is empty, the second-level pending buffer is locked, its packets are moved up to the first-level pending buffer, and it is unlocked again; then one packet is taken from the first-level pending buffer, moved to the in-processing buffer, and its pointer returned.
After handling a packet, a processing thread releases it with the pktcap_free_onepkt interface function, passing the packet's pointer as the function argument. pktcap_free_onepkt moves the packet buffer from the in-processing buffer to the second-level free area: it first locks the in-processing buffer, detaches the packet from it, and unlocks it; then it locks the processed buffer and moves the packet there, and checks whether the number of packet buffers in the processed buffer exceeds NUM_MIN_FREE. If so, it also locks the second-level free area, moves all packet buffers from the processed buffer to the second-level free area, unlocks both buffers, and returns; otherwise it unlocks the processed buffer and returns.
The beneficial effects of the invention are: mapping the Ethernet card's DMA receive buffers into user space reduces the number of copies a packet undergoes on its way to the user program; starting one thread per NIC allows several NICs to be captured from at once; disabling interrupts in favor of active polling reduces interrupt-scheduling overhead; and defining reusable buffers and their management reduces the cost of buffer allocation and release, and reduces the frequency of lock conflicts over the shared buffers between the capture threads and the processing threads, together with the resource consumption and loss of efficiency those conflicts cause. The result is fast capture of Ethernet packets, including the IP packets carried in them.
4. Description of the Drawings
Fig. 1 is a schematic diagram of the structure of the fast capture program of the invention;
Fig. 2 is the buffer-transfer flow diagram for the packet-receive cache group of the invention;
Fig. 3 is the buffer-transfer flow diagram for the packet-processing cache group of the invention;
Fig. 4 is the flow chart of the capture thread of the invention.
5. Embodiment
The concrete steps of the method for fast packet capture with an Ethernet card provided by the invention are:
1) Map the Ethernet card's DMA receive buffers into user space, reducing the number of times a packet is copied within the kernel and from kernel to user space.
2) Register a misc device that provides user space with the mapping of the Ethernet card's DMA receive buffers and with the reading and setting of the NIC's receive state.
3) Disable interrupts and receive packets by active polling, reducing the overhead brought by interrupt scheduling.
4) Start one capture thread per NIC, so that several NICs can be captured from quickly at the same time.
5) Allocate the specified number of packet buffers at program start, buffering packets so that uneven processing speed does not cause packet loss.
6) Reuse the packet buffers while the capture program runs, instead of allocating new buffer space for every packet, reducing the cost of system allocation and release of buffer space.
7) Group the packet buffers, providing packet classification and reducing the frequency of lock conflicts between threads accessing the buffers. The concrete buffering and circulation scheme is:
8) The buffers fall into two kinds of group: packet-receive cache groups and packet-processing cache groups. For a configuration with N capture NICs and M processing cache groups, (2+M)×N + 4×M buffers are defined: N receive cache groups and M processing cache groups, each receive cache group containing 2+M buffers and each processing cache group 4 buffers. Every buffer is organized as a linked list, each list element being a fixed-size packet buffer together with the length of the packet actually received.
9) Each capture thread uses one packet-receive cache group, consisting of a two-level free area of 2 buffers and M receive buffers. During receiving, free packet buffers are obtained from the group's two-level free area: first everything cached at the second level is moved up to the first level; after that, free buffers are taken one at a time from the first level until it is exhausted and refilled from the second level again. Each packet received in the NIC's DMA receive buffer mapped into user space is copied onto a free packet buffer and then divided into one of M classes, either evenly or by packet content; after classification it is delivered to the receive buffer corresponding to its class. When the number of packets in a receive buffer reaches a threshold, they are transferred to the second-level pending buffer of the corresponding processing cache group.
10) Each packet-processing cache group consists of a two-level pending area of 2 buffers, 1 in-processing buffer, and 1 processed buffer, used by one or more processing threads; one processing cache group corresponds to one or more processing threads. A processing thread obtains a packet as follows: first all packets are moved from the second-level pending buffer to the first-level pending buffer; thereafter packets are fetched from the first-level pending buffer one at a time, each being moved to the in-processing buffer. After handling a packet, the thread moves it from the in-processing buffer to the processed buffer, then checks whether the number of packet buffers in the processed buffer has reached a threshold; if so, all packet buffers in the processed buffer are moved into the second-level free area of a receive cache group; where there are several receive groups, delivery rotates among them to keep the supply even. In this way the packet buffers circulate and are reused.
Claims (4)
1. A method for fast packet capture with an Ethernet card on a Linux system, characterized in that the Ethernet card's DMA receive buffers are mapped into user space; that capture from several NICs at once is achieved by reducing the number of copies a packet undergoes on its way to the user program and by starting one capture thread per NIC; that interrupts are disabled in favor of active polling, reducing interrupt-scheduling overhead; and that reusable buffers and their management are defined, reducing the cost of buffer allocation and release and reducing the contention over buffer access between the capture threads and the packet-processing threads, thereby achieving fast capture of Ethernet packets including the IP packets carried in them; the steps of the method being as follows:
(1) in the Ethernet card driver, registering a miscellaneous device (misc device) at module load time, denoted misc_dev, and defining its open, release, mmap, and ioctl handler functions, wherein the mmap function maps a designated NIC's ring of receive descriptors and its packet-receive buffers into user space, and the ioctl function lets the user-space program advance the designated NIC's receive-descriptor ring tail pointer so that the NIC can keep receiving; correspondingly, unregistering the misc device at module unload;
(2) in the Ethernet card driver, defining a global structure variable at module load time, denoted g_map_dev, which stores, for every NIC served by this driver, the number of receive descriptors, the virtual address of the descriptor ring, and the virtual addresses of the packet-receive buffers; using g_map_dev in the mmap function of misc_dev to translate those buffer virtual addresses into physical addresses for mapping into user space; correspondingly, releasing the global structure at module unload;
(3) in the Ethernet card driver, allocating the receive-descriptor ring and the packet-receive buffers with the pci_alloc_consistent function, so that the buffers can be accessed from both the NIC side and the CPU side at once, the NIC transferring into these two buffer areas by direct memory access (DMA); saving the buffers' virtual addresses in the global structure g_map_dev described in step (2); correspondingly, using the pci_free_consistent function when the buffers are released on NIC stop or NIC reset;
(4) in user space, defining a series of linked-list buffers, to allow packet buffers to be reused and to reduce lock contention between the capture threads and the packet-processing threads when packets are fetched;
the buffers falling into two kinds of group, packet-receive cache groups and packet-processing cache groups: for a configuration with N capture NICs and M packet-processing cache groups, (2+M)×N + 4×M buffers are defined in total, namely N receive cache groups and M processing cache groups, each receive cache group containing 2+M buffers and each processing cache group containing 4 buffers, every buffer being organized as a linked list with each list element a fixed-size packet buffer together with the length of the packet actually received; each receive cache group consists of a two-level free-buffer area made up of 2 free buffers, plus M receive buffers, the free-buffer area collecting released, already-processed packet buffers for reuse and the receive buffers holding the packets received, the packets received by each NIC being divided into M classes, either evenly or by packet content, and delivered into the M receive buffers; each processing cache group consists of a two-level pending area made up of 2 pending buffers, 1 in-processing buffer, and 1 processed buffer;
(5) in user space, starting one capture thread per capture NIC, called the capture thread;
(6) the capture and buffer-circulation flow being: the capture program exposes the interface functions pktcap_open, pktcap_close, pktcap_get_onepkt_block, pktcap_get_onepkt_noblock, and pktcap_free_onepkt; for a configuration of N capture NICs, M processing cache groups, and K packet buffers, pktcap_open first allocates the K packet buffers, each larger than the maximum Ethernet packet size, normally 1500 bytes, and distributes them evenly over the N first-level free-buffer areas; it then starts the N capture threads, each using one NIC and one receive cache group; when a capture thread starts running, it disables the interrupt of its NIC, calls the mmap function of the misc device misc_dev to map the NIC's DMA receive-descriptor ring and DMA packet-receive buffers from physical addresses into user space, calls the misc device's ioctl function to read and set the NIC's receive state, and then enters the packet polling-receive loop; for Intel PRO/100 and PRO/1000 series NICs, reading and setting the receive state means reading the DMA receive-descriptor head pointer and setting the DMA receive-descriptor tail pointer;
(7) First query the status word of the current DMA receive descriptor to judge whether the current packet has been fully received. 1) If it has not, sleep for a period whose length is chosen according to the number of DMA receive descriptors and the NIC bandwidth: for a 1000 Mb/s link with 4096 receive descriptors, to guarantee that no packets are lost during the sleep because the DMA receive buffers fill up, the sleep time t_sleep must be shorter than the time needed to fill half of the 4096 descriptors with minimum-size packets at full line rate, i.e. t_sleep < (64×8×4096/2)/10^9 s ≈ 1.05 ms, where 64 bytes is the minimum Ethernet frame length. 2) If the current packet has been fully received, copy it into a receive buffer. First check whether the level-1 free buffer pool has a free buffer. If it does, copy the packet content from the NIC's DMA receive buffer into the first buffer of the level-1 free pool, then classify the packet either evenly or by content: even distribution delivers successive packets from this NIC to the M receive buffers cyclically, from the 1st to the M-th; content-based classification applies a hash to the source address, destination address, source port and destination port of the IP packet and takes the result modulo M to obtain an index (0 ≤ index ≤ M−1), then delivers the packet to receive buffer index+1. Next check the number of packets cached in that receive buffer; if it exceeds NUM_RECVED, transfer the packets of this receive buffer to the (index+1)-th level-2 pending buffer pool, locking the level-2 pending pool before the transfer and unlocking it afterwards. If the level-1 free pool has no buffers, lock the level-2 free pool, move its buffers into the level-1 free pool, and unlock it. If the number of buffers obtained from the level-2 free pool is at most NUM_FREEBUF_L2, then, to keep the free pools from running out quickly and entering a vicious circle of repeatedly locking and polling the level-2 free pool, sleep for a period an order of magnitude shorter than t_sleep, roughly 1–10 µs; after the sleep, if the level-1 free pool is no longer empty, copy, classify and deliver the packet, otherwise do not copy, i.e. drop the packet, mark its descriptor as received, and loop to receive the next packet. If the number of buffers obtained from the level-2 free pool exceeds NUM_FREEBUF_L2, copy, classify and deliver the packet, mark its descriptor as received, and loop to receive the next packet.
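The sleep-time bound and the hash classification of step (7) can be sketched as follows. This is a minimal illustration: the names `max_sleep_ns` and `classify_index` are hypothetical, and the XOR-fold hash stands in for the hash function, which the claim does not specify.

```c
#include <assert.h>
#include <stdint.h>

/* Upper bound on the receive thread's sleep from step (7): the time to fill
 * half of the DMA descriptor ring with minimum-size (64-byte) Ethernet
 * frames at full line rate. Function name is illustrative. */
static uint64_t max_sleep_ns(uint64_t bandwidth_bps, uint64_t n_desc,
                             uint64_t min_frame_bytes)
{
    uint64_t bits = min_frame_bytes * 8 * (n_desc / 2); /* half the ring */
    return bits * 1000000000ULL / bandwidth_bps;        /* nanoseconds */
}

/* Content-based classification: hash the IP 4-tuple and take it modulo M;
 * the packet is then delivered to receive buffer index+1. The XOR-fold
 * hash is a stand-in, since the claim does not fix a hash function. */
static unsigned classify_index(uint32_t saddr, uint32_t daddr,
                               uint16_t sport, uint16_t dport, unsigned m)
{
    uint32_t h = saddr ^ daddr ^ (((uint32_t)sport << 16) | dport);
    return h % m; /* 0 <= index <= m-1 */
}
```

For the claim's example of a 1000 Mb/s link with 4096 descriptors, `max_sleep_ns(1000000000, 4096, 64)` yields 1,048,576 ns ≈ 1.05 ms, matching the t_sleep bound above.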
2. The method according to claim 1, wherein the user-space packet buffering takes the level-2 free buffer pool as input and the level-2 pending buffer pool as output, and the flow by which packets pass from the level-2 pending pool back to the level-2 free pool is: each packet-processing thread calls the pktcap_get_onepkt_block or pktcap_get_onepkt_noblock interface function to obtain a packet and calls pktcap_free_onepkt to release it. pktcap_get_onepkt_block is a blocking call: when a packet is available it returns immediately, otherwise it waits on a Linux pthread condition variable via pthread_cond_wait until the capture thread transfers received packets from the receive buffers to the level-2 pending pool and signals the condition variable to wake the processing thread. pktcap_get_onepkt_noblock is non-blocking: if a packet is available it returns the packet pointer, otherwise it returns a null pointer immediately. Although each packet buffer group could be shared by several packet-processing threads, each group is used by a single processing thread so as to reduce concurrent locking of the buffers and improve efficiency.
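The blocking/non-blocking pattern of claim 2 can be sketched with a one-slot "level-2 pending pool". The pktcap_* names follow the patent, but the slot structure and the `deliver` helper are simplifications introduced here for illustration.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* One-slot stand-in for the level-2 pending pool of claim 2. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    void           *pkt; /* NULL means the pending pool is empty */
} pending_pool;

/* Blocking get: waits on the condition variable until a packet arrives. */
static void *pktcap_get_onepkt_block(pending_pool *p)
{
    pthread_mutex_lock(&p->lock);
    while (p->pkt == NULL)                 /* guard against spurious wakeups */
        pthread_cond_wait(&p->nonempty, &p->lock);
    void *pkt = p->pkt;
    p->pkt = NULL;
    pthread_mutex_unlock(&p->lock);
    return pkt;
}

/* Non-blocking get: returns the packet pointer, or NULL immediately. */
static void *pktcap_get_onepkt_noblock(pending_pool *p)
{
    pthread_mutex_lock(&p->lock);
    void *pkt = p->pkt;
    p->pkt = NULL;
    pthread_mutex_unlock(&p->lock);
    return pkt;
}

/* Capture side: deliver a packet and signal a waiting processing thread. */
static void deliver(pending_pool *p, void *pkt)
{
    pthread_mutex_lock(&p->lock);
    p->pkt = pkt;
    pthread_cond_signal(&p->nonempty);
    pthread_mutex_unlock(&p->lock);
}
```

In the patent's scheme `deliver` corresponds to the capture thread moving packets from the receive buffers into the level-2 pending pool and setting the condition variable.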
3. The method according to claim 2, wherein the pktcap_get_onepkt_block or pktcap_get_onepkt_noblock interface function moves a packet buffer from the level-2 pending pool to the in-processing pool. It first judges whether the level-1 pending pool is empty; if it is not, it moves one packet from the level-1 pending pool to the in-processing pool and returns the packet's address as a pointer. If the level-1 pending pool is empty, it locks the level-2 pending pool, transfers its packets to the level-1 pending pool, unlocks it, then takes one packet from the level-1 pending pool, moves it to the in-processing pool, and returns that packet's pointer.
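The refill step of claim 3 amortizes locking: the level-1 pending list is private to one processing thread and needs no lock, and the shared level-2 pool is locked only when level-1 runs dry, at which point it is drained in bulk. A sketch under the assumption of simple array-based lists (the patent's actual buffer chains may differ):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

#define POOL_CAP 64 /* illustrative capacity */

typedef struct {
    pthread_mutex_t lock;          /* protects the shared level-2 pool only */
    void *l2[POOL_CAP]; int l2_n;  /* level-2 pending pool (shared) */
    void *l1[POOL_CAP]; int l1_n;  /* level-1 pending pool (thread-private) */
} pending_pools;

/* Return one packet, bulk-refilling level-1 from level-2 when level-1 is
 * empty; NULL if both pools are empty (non-blocking variant). */
static void *get_onepkt(pending_pools *p)
{
    if (p->l1_n == 0) {
        /* level-1 empty: lock level-2, move everything over, unlock */
        pthread_mutex_lock(&p->lock);
        while (p->l2_n > 0)
            p->l1[p->l1_n++] = p->l2[--p->l2_n];
        pthread_mutex_unlock(&p->lock);
    }
    return p->l1_n > 0 ? p->l1[--p->l1_n] : NULL;
}
```

The design choice is that the mutex is taken once per batch rather than once per packet, which is the efficiency argument the claims make for the two-level pools.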
4. The method according to claim 2, wherein after a packet-processing thread has handled a packet it releases it with the pktcap_free_onepkt interface function, passing the packet's pointer as a function parameter. pktcap_free_onepkt moves the packet buffer from the in-processing pool to the level-2 free pool: it first locks the in-processing pool, detaches the packet from it, and unlocks the in-processing pool; it then locks the processed pool and moves the packet into it. It then judges whether the number of packet buffers in the processed pool exceeds NUM_MIN_FREE; if so, it also locks the level-2 free pool, transfers all packet buffers of the processed pool into the level-2 free pool, unlocks both pools, and returns; otherwise it unlocks the processed pool and returns.
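The release path of claim 4 can be sketched the same way: a freed packet first lands in a "processed" staging pool, and only when that pool exceeds NUM_MIN_FREE is the shared level-2 free pool locked and the whole batch returned at once. The value of NUM_MIN_FREE and the array-based lists are illustrative; the detach from the in-processing pool is elided.

```c
#include <assert.h>
#include <pthread.h>

#define NUM_MIN_FREE 4 /* batch threshold; value is illustrative */
#define CAP 64

typedef struct {
    pthread_mutex_t in_proc_lock, processed_lock, l2_free_lock;
    void *processed[CAP]; int processed_n; /* "processed" staging pool */
    void *l2_free[CAP];   int l2_free_n;   /* shared level-2 free pool */
} free_pools;

static void pktcap_free_onepkt(free_pools *f, void *pkt)
{
    /* (detaching pkt from the in-processing pool is elided here) */
    pthread_mutex_lock(&f->processed_lock);
    f->processed[f->processed_n++] = pkt;
    if (f->processed_n > NUM_MIN_FREE) {
        /* batch-return the whole processed pool under one level-2 lock */
        pthread_mutex_lock(&f->l2_free_lock);
        while (f->processed_n > 0)
            f->l2_free[f->l2_free_n++] = f->processed[--f->processed_n];
        pthread_mutex_unlock(&f->l2_free_lock);
    }
    pthread_mutex_unlock(&f->processed_lock);
}
```

As with the get path, the shared level-2 lock is contended once per NUM_MIN_FREE packets rather than once per packet.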
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2007101153790A CN101227341A (en) | 2007-12-18 | 2007-12-18 | Method for fast catching Ethernet card on Linux system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2007101153790A CN101227341A (en) | 2007-12-18 | 2007-12-18 | Method for fast catching Ethernet card on Linux system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101227341A true CN101227341A (en) | 2008-07-23 |
Family
ID=39859108
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2007101153790A Pending CN101227341A (en) | 2007-12-18 | 2007-12-18 | Method for fast catching Ethernet card on Linux system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101227341A (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101714991B (en) * | 2009-10-30 | 2012-06-20 | 清华大学 | Method for realizing heartbeat mechanism |
CN101702676B (en) * | 2009-11-23 | 2012-01-25 | 华为终端有限公司 | Data buffering process and device |
CN101841470B (en) * | 2010-03-29 | 2012-10-10 | 东南大学 | High-speed capturing method of bottom-layer data packet based on Linux |
CN101841470A (en) * | 2010-03-29 | 2010-09-22 | 东南大学 | High-speed capturing method of bottom-layer data packet based on Linux |
CN102033938A (en) * | 2010-12-10 | 2011-04-27 | 天津神舟通用数据技术有限公司 | Secondary mapping-based cluster dynamic expansion method |
CN102547864A (en) * | 2010-12-27 | 2012-07-04 | 北京中电华大电子设计有限责任公司 | Method for receiving data through serial port 802.11n wireless network card chip |
CN102521140B (en) * | 2011-12-01 | 2015-04-29 | 瑞斯康达科技发展股份有限公司 | Method and device for acquiring descriptor group of activities |
CN102521140A (en) * | 2011-12-01 | 2012-06-27 | 瑞斯康达科技发展股份有限公司 | Method and device for acquiring descriptor group of activities |
CN103441941A (en) * | 2013-08-13 | 2013-12-11 | 广东睿江科技有限公司 | High performance data message capture method and device based on Linux |
CN103634230A (en) * | 2013-11-29 | 2014-03-12 | 华中科技大学 | Dynamic prediction-based network driver layer data packet receiving method and system |
CN103634230B (en) * | 2013-11-29 | 2016-09-07 | 华中科技大学 | A kind of network driver layer data packet receiving method based on dynamic prediction and system |
CN104506379A (en) * | 2014-12-12 | 2015-04-08 | 北京锐安科技有限公司 | Method and system for capturing network data |
CN104506379B (en) * | 2014-12-12 | 2018-03-23 | 北京锐安科技有限公司 | Network Data Capturing method and system |
CN105187235A (en) * | 2015-08-12 | 2015-12-23 | 广东睿江科技有限公司 | Message processing method and device |
CN106452979A (en) * | 2016-12-06 | 2017-02-22 | 郑州云海信息技术有限公司 | Online packet capturing method and tool |
CN108462682A (en) * | 2017-02-22 | 2018-08-28 | 成都鼎桥通信技术有限公司 | The distribution method and device of initial dialog protocol SIP messages |
CN107070809A (en) * | 2017-04-11 | 2017-08-18 | 南通大学 | A kind of real-time retransmission method of large-scale sensor data |
CN107070809B (en) * | 2017-04-11 | 2020-05-12 | 南通大学 | Real-time forwarding method for large-scale sensor data |
WO2019091361A1 (en) * | 2017-11-10 | 2019-05-16 | 北京金山云网络技术有限公司 | Network card mode switching method, apparatus, electronic device and storage medium |
CN109787777A (en) * | 2017-11-10 | 2019-05-21 | 北京金山云网络技术有限公司 | A kind of network interface card mode switching method, device, electronic equipment and storage medium |
CN109787777B (en) * | 2017-11-10 | 2020-04-03 | 北京金山云网络技术有限公司 | Network card mode switching method and device, electronic equipment and storage medium |
CN108920276A (en) * | 2018-06-27 | 2018-11-30 | 郑州云海信息技术有限公司 | Linux system memory allocation method, system and equipment and storage medium |
CN110286743A (en) * | 2019-07-03 | 2019-09-27 | 浪潮云信息技术有限公司 | A kind of data center's power-saving method, terminal, computer readable storage medium |
CN111831596A (en) * | 2020-07-28 | 2020-10-27 | 山东有人信息技术有限公司 | RTOS serial port network transmission method and device |
CN111831596B (en) * | 2020-07-28 | 2022-01-21 | 山东有人物联网股份有限公司 | RTOS serial port network transmission method and device |
CN113794607A (en) * | 2021-09-27 | 2021-12-14 | 广东汉为信息技术有限公司 | Automatic test method for network card of circuit mainboard and computer readable storage medium |
CN115442319A (en) * | 2022-08-31 | 2022-12-06 | 北京天融信网络安全技术有限公司 | Data transmission method, electronic device, and computer-readable storage medium |
CN115442319B (en) * | 2022-08-31 | 2024-03-12 | 北京天融信网络安全技术有限公司 | Data transmission method, electronic device, and computer-readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101227341A (en) | Method for fast catching Ethernet card on Linux system | |
US8375145B2 (en) | Doorbell handling with priority processing function | |
US8505013B2 (en) | Reducing data read latency in a network communications processor architecture | |
US8537832B2 (en) | Exception detection and thread rescheduling in a multi-core, multi-thread network processor | |
US7853951B2 (en) | Lock sequencing to reorder and grant lock requests from multiple program threads | |
US20220261367A1 (en) | Persistent kernel for graphics processing unit direct memory access network packet processing | |
US6822959B2 (en) | Enhancing performance by pre-fetching and caching data directly in a communication processor's register set | |
US11875183B2 (en) | Real-time arbitration of shared resources in a multi-master communication and control system | |
US20070124728A1 (en) | Passing work between threads | |
US20110225372A1 (en) | Concurrent, coherent cache access for multiple threads in a multi-core, multi-thread network processor | |
WO2004107189A3 (en) | Uniform interface for a functional node in an adaptive computing engine | |
CN103946803A (en) | Processor with efficient work queuing | |
CN102082698A (en) | Network data processing system of high performance core based on improved zero-copy technology | |
US8576864B2 (en) | Host ethernet adapter for handling both endpoint and network node communications | |
EP1856623A2 (en) | Including descriptor queue empty events in completion events | |
US20230127722A1 (en) | Programmable transport protocol architecture | |
US20110225394A1 (en) | Instruction breakpoints in a multi-core, multi-thread network communications processor architecture | |
US20070044103A1 (en) | Inter-thread communication of lock protected data | |
CN104102542A (en) | Network data packet processing method and device | |
US7860120B1 (en) | Network interface supporting of virtual paths for quality of service with dynamic buffer allocation | |
US20060209827A1 (en) | Systems and methods for implementing counters in a network processor with cost effective memory | |
US20220286412A1 (en) | Real-time, time aware, dynamic, context aware and reconfigurable ethernet packet classification | |
CN111385222B (en) | Real-time, time-aware, dynamic, context-aware, and reconfigurable Ethernet packet classification | |
CN116501657B (en) | Processing method, equipment and system for cache data | |
CN110535827A (en) | Method and system for realizing TCP (Transmission control protocol) full-unloading IP (Internet protocol) core of multi-connection management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20080723 |