CN111988244B - Network data scheduling and distributing method, computer device and computer readable storage medium - Google Patents

Network data scheduling and distributing method, computer device and computer readable storage medium

Info

Publication number
CN111988244B
CN111988244B · Application CN202010841240.XA
Authority
CN
China
Prior art keywords
data
data packet
output interface
transmitted
network
Prior art date
Legal status
Active
Application number
CN202010841240.XA
Other languages
Chinese (zh)
Other versions
CN111988244A (en)
Inventor
李俊
宋磊
Current Assignee
Guangdong Yizhi Security Technology Co ltd
Guangzhou Yizhi Security Technology Co ltd
Original Assignee
Zhuhai Yizhi Security Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Yizhi Security Technology Co ltd filed Critical Zhuhai Yizhi Security Technology Co ltd
Priority claimed from CN202010841240.XA
Publication of CN111988244A
Application granted
Publication of CN111988244B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/30 — Packet switching elements; peripheral units, e.g. input or output ports — H04L 49/3027 Output queuing
    • H04L 47/50 — Traffic control in data switching networks; queue scheduling — H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/70 — Traffic control; admission control and resource allocation — H04L 47/82 Miscellaneous aspects
    • H04L 49/90 — Packet switching elements; buffering arrangements — H04L 49/9005 Using dynamic buffer space allocation
    • H04L 49/90 — Packet switching elements; buffering arrangements — H04L 49/9057 Supporting packet reassembly or resequencing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a network data scheduling and distribution method, a computer device, and a computer-readable storage medium. The method comprises: determining a data source and pre-allocating a memory buffer; establishing a mapping between the memory buffer and the application layer; establishing an output queue for each output interface; acquiring the data packets transmitted by the data source; storing the packets in the memory buffer, numbering each packet, determining the output interface to which each packet is to be transmitted, and adding each packet to the output queue of the corresponding interface. If the output interface is a physical network adapter, the packet is transmitted to the adapter; if it is a file, the packet is written to the file; if it is a transmission queue, the packet's number is written into the queue. The invention also provides a computer device and a computer-readable storage medium implementing the method. The invention improves the flexibility of network data scheduling and distribution and addresses the bottleneck in data distribution and scheduling.

Description

Network data scheduling and distributing method, computer device and computer readable storage medium
Technical Field
The present invention relates to the technical field of network data processing, and in particular, to a network data scheduling and allocating method, and a computer apparatus and a computer-readable storage medium for implementing the method.
Background
With the continued rapid development of network technology, networks carrying gigabit-per-second (1 Gbps) traffic have become commonplace, and gigabit networks are widely deployed at the centers and core nodes of networks. Higher-speed equipment, such as 20 Gbps, 40 Gbps, and 100 Gbps devices, is also becoming widespread. Tasks such as evaluating network quality, monitoring network security, and auditing network assets and equipment generally require analyzing the complete real-time traffic of the network, that is, obtaining the data packets transmitted in the network and analyzing their contents.
The steps for analyzing the traffic of high-speed and ultra-high-speed networks generally include traffic mirroring/optical splitting, traffic capture, traffic analysis, and processing of analysis results. Traffic mirroring and optical splitting are usually configured on a core switch or implemented by dedicated hardware; traffic capture extracts the mirrored or split traffic from a network adapter into an analysis application; traffic analysis processes the captured data according to different analysis models and business logic; and result processing stores, transmits, and persists the outcomes.
Existing traffic capture methods fall into two main categories. The first is implemented in the kernel processing logic of Linux or other operating systems, for example via libpcap. Because traffic must traverse the entire kernel network protocol stack, with frequent switching between kernel mode and user mode, this approach is inefficient and is generally only practical for capturing traffic in networks below 1 Gbps. The second is zero-copy capture, such as DPDK, PF_RING ZC, and netmap, which bypasses the operating-system kernel so that the analysis application obtains data directly from the network adapter. Both existing techniques focus on extracting traffic data efficiently from the network adapter, but neither considers the subsequent data processing, so capture and processing are not linked and the overall efficiency of data analysis suffers.
In high-speed and ultra-high-speed network environments, the above capture methods can remove the bottleneck in the capture stage, so packet loss during capture is rare, but they cannot remove the bottleneck in the analysis stage: once data are captured, downstream processing cannot keep pace, that is, the analysis application cannot analyze the captured data quickly enough.
For this reason, improvements to the scheduling of high-speed and ultra-high-speed traffic analysis have been considered. Existing approaches generally take one of two forms. The first is the single mode, in which each application corresponds one-to-one with the network adapter used to capture packets, and the data of a given adapter are delivered directly and exclusively to one specific application. The second is the pipeline mode, in which a shareable memory buffer is pre-allocated for the applications, captured adapter data are stored in the buffer (with old data deleted), and each analysis application is then notified in turn to read the data from the buffer itself.
However, these approaches still fail to remove the analysis bottleneck. The single mode basically satisfies the requirements for 10 Gbps network data, but only for a single analysis service; it cannot support rapid, simultaneous analysis by multiple service analysis models. The pipeline mode does not adequately account for differences in computational cost between analysis models, so resources are easily allocated unevenly, and a timeout in a program at the head of the pipeline easily causes packet loss and failures in programs at its tail.
Disclosure of Invention
The invention mainly aims to provide a network data scheduling and distributing method capable of effectively solving the bottleneck problems of data capture and data analysis.
Another object of the present invention is to provide a computer apparatus for implementing the above network data scheduling allocation method.
Still another object of the present invention is to provide a computer readable storage medium for implementing the network data scheduling assignment method.
To achieve the main object, the network data scheduling and distribution method provided by the invention comprises: determining a data source that transmits the data packets to be distributed, and pre-allocating a memory buffer according to the number and type of data sources; establishing a mapping between the memory buffer and the application layer; establishing an output queue for each output interface; acquiring the data packets transmitted by the data source; and storing the acquired packets in the memory buffer, numbering each packet, determining the output interface to which each packet is to be transmitted, and adding each packet to the output queue of the corresponding interface. If the target output interface is a physical network adapter, the packet is transmitted directly to the adapter; if it is a file, the packet is written to the file; and if it is a transmission queue, the packet's number is written into the queue.
Under this scheme, output queues are established for the output interfaces, and the packets transmitted by the data source are numbered and then distributed among those queues, so a packet is no longer restricted to a single analysis application, which increases the carrying capacity of data analysis. In addition, using different transmission strategies for different interface types raises the packet transmission rate and addresses the bottlenecks in data capture and data analysis.
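The claimed steps can be sketched as a minimal model. The `Scheduler` class, its member names, and the interface-type strings below are illustrative assumptions, not the patent's actual implementation:

```python
from collections import defaultdict

class Scheduler:
    """Sketch of the claimed flow: buffer, number, classify, enqueue."""

    def __init__(self, interfaces):
        self.buffer = {}                 # number -> packet (the "memory buffer")
        self.queues = defaultdict(list)  # interface name -> output queue
        self.interfaces = interfaces     # name -> type ("nic" | "file" | "queue")
        self.next_no = 0

    def ingest(self, packet, choose_interface):
        no = self.next_no
        self.next_no += 1
        self.buffer[no] = packet          # store the packet, assign a unique number
        iface = choose_interface(packet)  # confirm the output interface
        self.queues[iface].append(no)     # add to that interface's output queue
        return no

s = Scheduler({"eth0": "nic"})
n = s.ingest(b"\x01\x02", lambda p: "eth0")
```

The point of the sketch is the decoupling the scheme describes: a packet is buffered and numbered once, and only its number travels through the per-interface queues.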
Preferably, after the data packet is transmitted to the physical network adapter, the application program reads the data in the data packet directly from the physical network adapter.
Thus, the application can read the packet's data directly from the physical network adapter, which improves data transmission efficiency and effectively addresses the data analysis bottleneck.
In a further scheme, before a packet's number is written into the transmission queue, it is determined whether the queue is full; if so, a preset number of packets are evicted in the order in which they were written into the queue.
Thus, when the transmission queue is full, the packets that have been queued longest are deleted. This frees space for new packets while keeping recently added packets from being improperly cleared, avoiding packet loss.
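A minimal sketch of this eviction rule, assuming a bounded queue and a configurable eviction count (the function name, `capacity`, and `evict_count` are hypothetical parameters for illustration):

```python
from collections import deque

def enqueue_number(queue, number, capacity, evict_count=1):
    """Write a packet number into a bounded transmission queue.
    If the queue is full, first evict the oldest `evict_count` numbers
    (oldest = written into the queue earliest), then append the new one."""
    if len(queue) >= capacity:
        for _ in range(min(evict_count, len(queue))):
            queue.popleft()  # drop the longest-resident number
    queue.append(number)

q = deque()
for i in range(5):
    enqueue_number(q, i, capacity=4)
# With capacity 4, number 0 (the earliest) is evicted to make room for 4.
```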
In a further aspect, after writing the packet number to the transmit queue, the application reads the data of each packet from the transmit queue.
Thus, the application can obtain a packet's data directly by its number, which reduces the time spent on data transmission and avoids the storage overhead of transmitting the packet repeatedly.
Further, after the application program reads the data of each data packet from the transmission queue, the number of the data packet that has been read is deleted from the transmission queue.
Thus, the application deletes the numbers of read packets promptly, which prevents packet numbers from accumulating in the transmission queue and keeps the queue from growing too long.
Further, the step of confirming the output interface of each data packet to be transmitted comprises: data packets with the same source address and destination address are distributed to the same output interface.
Because packets with the same source and destination addresses are usually related, transmitting them through the same output interface improves the relevance and accuracy of data analysis and reduces the time it consumes.
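One possible way to realize this rule is to hash the address pair onto an interface index. This sketch uses CRC32 as an arbitrary deterministic hash, and sorting the pair is an added assumption (so both directions of a connection land on the same interface); neither detail is specified by the patent:

```python
import zlib

def pick_interface(src_ip, dst_ip, n_interfaces):
    """Map an address pair deterministically to one output interface,
    so packets sharing source and destination use the same interface."""
    key = "|".join(sorted((src_ip, dst_ip)))  # symmetric key for the pair
    return zlib.crc32(key.encode()) % n_interfaces

a = pick_interface("10.0.0.1", "10.0.0.2", 4)
b = pick_interface("10.0.0.2", "10.0.0.1", 4)  # reverse direction
```

Any stable hash over the pair would do; the essential property is that repeated packets of the same conversation always select the same queue.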
Further, the step of confirming the output interface of each data packet to be transmitted comprises: and mapping a plurality of interfaces of the data source to form output interfaces respectively, and distributing the data packets transmitted by the same data source to the corresponding output interfaces.
Therefore, the data source interface and the output interface are consistent, the output interface is set more simply and quickly, and the data analysis efficiency is improved.
Further, the step of confirming the output interface of each data packet to be transmitted comprises: and circularly distributing the data packets to a plurality of output interfaces in sequence.
Therefore, the data packets are sequentially transmitted to the plurality of output interfaces in a polling mode, so that the data packets are very simply distributed, a large amount of complex calculation is avoided, and the efficiency of distributing the data packets can be improved.
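The polling distribution above can be sketched in a few lines (the function and interface names are illustrative):

```python
import itertools

def round_robin(packets, interfaces):
    """Assign packets to output interfaces cyclically, in arrival order."""
    cycle = itertools.cycle(interfaces)
    return [(pkt, next(cycle)) for pkt in packets]

out = round_robin(["p0", "p1", "p2", "p3", "p4"], ["if0", "if1", "if2"])
```

Each assignment costs a single increment, which is why the document can claim the mode avoids complex computation.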
In order to achieve the above another object, the present invention provides a computer device, which includes a processor and a memory, wherein the memory stores a computer program, and the computer program implements the steps of the network data scheduling allocation method when executed by the processor.
To achieve the further object above, the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the network data scheduling and distribution method.
Drawings
Fig. 1 is a flowchart of an embodiment of a network data scheduling assignment method according to the present invention.
The invention is further explained with reference to the drawings and the embodiments.
Detailed Description
The network data scheduling and distributing method is applied to network equipment, such as a server or a core node, and mainly aims at capturing and analyzing the flow of a high-speed and ultra-high-speed network. The computer device of the present invention may be a device such as a server, and is provided with a processor and a memory, and the memory stores a computer program, and the computer program can implement the network data scheduling assignment method described above when executed.
The embodiment of the network data scheduling and distributing method comprises the following steps:
This embodiment automatically captures network data and distributes it. Capture is performed by a data capture engine; a memory buffer must be allocated, and multiple output interfaces must be set up, each with its own output queue. The embodiment therefore first initializes the system: it initializes the data capture engine, initializes memory, determines the output interfaces, and initializes each interface's output queue. During initialization, parameters entered by the user are received; the data source, the selectable data distribution modes, the output interface types, and so on are configured accordingly; and the output queues of the data output interfaces are initialized.
Referring to fig. 1, initialization first performs step S1 to determine the data sources. In this embodiment, a data source may be one or more physical network adapters, or the output interface of an upstream system. Once determined, each data source is bound to the data capture engine so that packets can be captured at high speed. The memory buffer is then sized and pre-allocated according to the number and type of data sources; for performance, this embodiment uses huge pages (Hugepages) for the pre-allocation.
The data capture engine may be configured according to the user's settings; for example, the user may select the underlying high-speed capture engine supported by DPDK, PF_RING, netmap, or another network adapter framework.
Step S2 is then executed to establish the mapping between the memory buffer and the application layer. For example, the memory buffer is mapped uniformly into the application layer so that applications can read its data directly, avoiding the data loss that can occur while copying the buffer's contents.
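The mapping idea can be illustrated with a memory-mapped region. This is a sketch, not the patent's mechanism: an anonymous map stands in for the hugepage-backed buffer, and a real system would share the region between the capture process and the applications (for example via a file descriptor) so that reads need no kernel-to-user copy:

```python
import mmap

# Anonymous mapping as a stand-in for the pre-allocated packet buffer.
buf = mmap.mmap(-1, 4096)

# Capture side: write a packet at a known offset.
packet = b"\xde\xad\xbe\xef"
buf.seek(0)
buf.write(packet)

# Application side: read the same bytes in place from the mapped region.
buf.seek(0)
view = buf.read(len(packet))
```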
Step S3 is then executed: after the output interfaces are determined, an output queue is established for each interface according to its type (alternatively, one queue may be established per interface type). In this embodiment, an output interface may be a physical network adapter (a network card), a ring queue, a logical network adapter, or a file. After the output queues are created, each must be initialized, for example by emptying it, to ensure that no stale data remains after initialization.
If the output interface is a physical or logical network adapter, the system attempts to bind the adapter to the data capture engine to support high-speed packet transmission, and attempts to enable the adapter's hardware features, such as hash computation during transmission and multi-queue support.
System initialization also includes determining the data distribution mode. In this embodiment, the selectable modes include polling, IP scheduling, replication, IP-port scheduling, and interface mapping. Polling assigns the packets received from the data source to the output interfaces cyclically in sequence; it is the simplest distribution method and performs well. IP scheduling computes over each packet's source and destination addresses to ensure that packets with the same source and destination are transmitted to the same interface, or the same type of interface, which improves the accuracy of packet allocation. Replication copies each packet synchronously to multiple output interfaces. IP-port scheduling extends IP scheduling by also including the packet's source and destination ports in the computation, taking both addresses and port information into account. Interface mapping maps the interfaces of the data source one-to-one onto the output interfaces, so that each output interface corresponds to exactly one data-source interface.
Of course, the distribution modes need not be used singly; they may be combined, that is, one or more modes may be used simultaneously.
After initialization, step S4 is executed to obtain the packets transmitted by the data source and store them in the memory buffer. Because there may be multiple data sources, data from different sources can be stored in different regions of the buffer; alternatively, each packet can be tagged on receipt with the data source it came from, to facilitate its subsequent transmission.
Step S5 is then executed to number each packet stored in the memory buffer. Preferably, each packet receives a unique number that indicates which data source it came from, that is, the number includes an identifier of the data source.
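One conventional way to embed a source identifier in a number is bit packing; the 64-bit width and 8-bit source field below are assumptions for illustration, not values given by the patent:

```python
SOURCE_BITS = 8                 # assumption: up to 256 data sources
SEQ_BITS = 64 - SOURCE_BITS     # remaining bits carry the sequence number

def make_number(source_id, seq):
    """Pack a data-source identifier into the number's high bits."""
    return (source_id << SEQ_BITS) | (seq & ((1 << SEQ_BITS) - 1))

def source_of(number):
    """Recover the data-source identifier from a packet number."""
    return number >> SEQ_BITS

n = make_number(source_id=3, seq=12345)
```

With this layout, the number is both a unique handle into the memory buffer and a self-describing record of where the packet entered the system.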
After each data packet is numbered, an output interface to be transmitted of each data packet is determined according to a preset data packet distribution mode. Since the allocation manner of the data packets is preset, step S6 only needs to determine the output interface of each data packet according to the preset allocation manner, for example, allocating a plurality of data packets to each output interface according to a polling manner, or determining the output interface corresponding to each data packet according to the source address, the destination address, and the like of the data packet.
After the output interface of each data packet is determined, step S7 is executed to add each data packet to the output queue of the corresponding output interface, and finally step S8 is executed to obtain the data packet corresponding to each output interface by the application program.
Specifically, if the output interface to which a packet is allocated is a physical or logical network adapter, the packet can be sent directly to that adapter through the packet-sending function of the high-speed data capture engine. The application in the application layer can then read the adapter's data directly through the system interface and the interface of the data processing engine, achieving fast packet distribution and analysis.
If the output interface to which a packet is allocated is a file, the packet's data can be written directly into that file through a file operation function, so that the packet can subsequently be read quickly.
If the output interface to which a packet is allocated is a transmission queue, the packet's number is written directly into the queue, for example by inserting it. Preferably, there are multiple output interfaces and multiple transmission queues, and each packet is allocated to only one transmission queue, so its number is inserted into only one of them.
Because a transmission queue may already hold the numbers of other packets, and each queue's capacity is limited, before a packet's number is added it must first be determined whether the queue has reached its upper limit, that is, whether it is full. If so, some packets in the queue must be cleared before the new packet's number can be written.
When clearing, the number of the packet added to the queue earliest is removed first, in order of insertion time, so that recently added packets are not cleared. The number of packets cleared each time may be predetermined, for example one or two; alternatively, it may equal the number of new packets currently waiting to be added to the queue.
After the number of the data packet is added to the transmission queue, the application program may read data from the corresponding transmission queue through the API interface, for example, obtain the number of each data packet in the transmission queue, and read data of the corresponding data packet from the memory buffer according to the number of each data packet. Preferably, in order to avoid the number of the data packets in the transmission queue being too large, the number of each data packet is deleted from the transmission queue after the data of the data packet is read.
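The read-then-delete cycle described above can be sketched as follows; a plain list stands in for the transmission queue and a dict for the memory buffer, both illustrative stand-ins rather than the patent's structures:

```python
def drain_queue(tx_queue, buffer):
    """Application-side read: fetch each numbered packet's data from the
    memory buffer, then remove the consumed number from the queue."""
    data = []
    while tx_queue:
        no = tx_queue.pop(0)     # take the oldest number and delete it
        data.append(buffer[no])  # look the packet up by its number
    return data

buffer = {0: b"a", 1: b"b"}  # memory buffer: number -> packet data
tx = [0, 1]                  # transmission queue holds only numbers
got = drain_queue(tx, buffer)
```

Deleting each number as it is consumed is what keeps the queue short, as the document notes.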
When network data distribution and analysis are no longer needed, they can be shut down through the application program. Specifically, the application closes the system via a close function, and after the close instruction is received, the following operations are executed in sequence: stop the packet-capture loop, stop each data-transmission API, close the data input and output interfaces, clear the memory buffer, and let the operating system reclaim the pre-allocated memory.
To verify the performance of this embodiment, an existing data capture and analysis system was compared before and after being upgraded with this scheme. Before the upgrade, a single CPU could process 50,000 pps, about 400 Mbps; the input network adapter was 10 Gbps with an average traffic of 4-5 Gbps; and the system ran at least two mutually independent service analysis applications.
Before the upgrade, an optical splitter had to split the traffic across two servers, each running one of the analysis applications. Because the analysis applications are slow to start, an occasional abnormal restart caused a long interruption in analysis. After the upgrade, a single server suffices; load balancing reduces both the traffic handled by each analysis process and its start-up time, so an abnormality in a single process has a markedly smaller impact on the analysis of the whole system.
Thus, the invention improves the flexibility of network data capture and analysis and removes the limitation in existing methods that one data source can be bound to only one network adapter. It also improves the stability of the distribution and analysis system: the analysis capacity of the system can be predicted, and the traffic on the network output interfaces can be adjusted by adding or removing output interfaces and output queues through configuration, adapting to the capacity of the analysis system. Finally, the invention improves scalability: for ultra-high-speed networks (such as 40-100 Gbps), 100 Gbps of traffic can be fanned out to 10 Gbps output interfaces through two or more cascaded stages of the system, making full use of existing servers without configuring additional high-performance machines.
The embodiment of the computer device comprises:
the computer device of this embodiment may be a server or a switch, and the computer device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the steps of the network data scheduling allocation method.
For example, the computer program may be partitioned into one or more modules, which are stored in the memory and executed by the processor to complete the invention. One or more of the modules may be a sequence of computer program instruction segments capable of performing particular functions, the segments describing the execution of the computer program in the computer device.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the computer device, connecting all parts of the device through various interfaces and lines.
The memory stores the computer program and/or modules, and the processor implements the various functions of the computer device by running or executing them and by invoking the data stored in the memory. The memory may mainly comprise a program storage area and a data storage area: the program area may store the operating system, the application programs required for at least one function, and so on, while the data area may store data created during use of the device. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Computer-readable storage medium embodiments:
If the computer program stored in the computer device is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method in the above embodiments may be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the network data scheduling and distribution method.
The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
Finally, it should be emphasized that the present invention is not limited to the above embodiments; variations such as a change of data source type or a change of data distribution mode shall also fall within the protection scope of the claims of the present invention.

Claims (10)

1. A network data scheduling and distributing method, comprising the following steps:
determining the data sources transmitting the data packets to be distributed, and pre-allocating a memory buffer according to the number and types of the data sources;
the method is characterized by:
establishing a mapping relationship between the memory buffer and an application layer;
establishing an output queue for each output interface;
acquiring the data packets transmitted by the data sources;
storing the acquired data packets to be distributed in the memory buffer, numbering each data packet, determining the output interface to which each data packet is to be transmitted, and adding each data packet to the output queue of the corresponding output interface;
if the output interface to be transmitted to is a physical network adapter, transmitting the data packet directly to the physical network adapter;
if the output interface to be transmitted to is a file, writing the data packet into the file;
and if the output interface to be transmitted to is a transmission queue, writing the number of the data packet into the transmission queue.
2. The network data scheduling assignment method of claim 1, wherein:
after the data packet is transmitted to the physical network adapter, an application program reads the data in the data packet directly from the physical network adapter.
3. The network data scheduling assignment method of claim 1, wherein:
before writing the number of a data packet into the transmission queue, judging whether the transmission queue is full; if so, clearing a preset number of data packets in the order in which the data packets were written into the transmission queue.
4. The network data scheduling assignment method of claim 3, wherein:
after the number of a data packet is written into the transmission queue, the application program reads the data of the data packet from the transmission queue.
5. The network data scheduling assignment method of claim 4, wherein:
after the application program reads the data of a data packet from the transmission queue, deleting the number of the read data packet from the transmission queue.
6. The network data scheduling assignment method according to any one of claims 1 to 5, wherein:
determining the output interface to which each data packet is to be transmitted comprises: distributing data packets having the same source address and the same destination address to the same output interface.
7. The network data scheduling assignment method according to any one of claims 1 to 5, wherein:
determining the output interface to which each data packet is to be transmitted comprises: mapping the plurality of interfaces of the data source to respective output interfaces, and distributing the data packets transmitted by a given data source to its corresponding output interface.
8. The network data scheduling assignment method according to any one of claims 1 to 5, wherein:
determining the output interface to which each data packet is to be transmitted comprises: distributing the data packets to the output interfaces cyclically, in sequence.
9. A computer device, characterized in that it comprises a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the network data scheduling assignment method according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the network data scheduling assignment method of any of claims 1 to 8.
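The dispatch flow of claims 1 and 3 to 5, with the round-robin interface selection of claim 8, can be sketched in Python as follows. This is an illustrative model only, not the patented implementation: the class name `PacketScheduler` and the `queue_limit` and `evict_count` parameters are invented for the sketch, the physical network adapter is mocked by a plain list, and the file by an in-memory `BytesIO`.

```python
from collections import deque
from io import BytesIO
from itertools import cycle

class PacketScheduler:
    """Sketch of the claimed flow: packets land in a memory buffer, each
    gets a sequential number, an output interface is chosen, and the packet
    is sent to a NIC, written to a file, or its number is placed in a
    bounded transmission queue."""

    def __init__(self, interfaces, queue_limit=8, evict_count=2):
        self.buffer = {}              # memory buffer: packet number -> data
        self.next_number = 0
        self.interfaces = interfaces  # name -> (kind, sink); kind in {"nic", "file", "queue"}
        self.rr = cycle(interfaces)   # claim 8: cyclic (round-robin) distribution
        self.queue_limit = queue_limit
        self.evict_count = evict_count

    def dispatch(self, packet):
        number = self.next_number     # claim 1: number each packet
        self.next_number += 1
        self.buffer[number] = packet  # store in the pre-allocated buffer
        name = next(self.rr)          # choose the output interface
        kind, sink = self.interfaces[name]
        if kind == "nic":
            sink.append(packet)       # transmit directly to the (mock) adapter
        elif kind == "file":
            sink.write(packet)        # write the packet into the file
        else:                         # transmission queue
            if len(sink) >= self.queue_limit:
                # claim 3: queue full -> clear a preset number of entries
                # in the order they were written
                for _ in range(self.evict_count):
                    sink.popleft()
            sink.append(number)       # enqueue the packet NUMBER only
        return number, name

    def consume(self, queue_name):
        """Claims 4-5: the application reads a packet's data through its
        number in the transmission queue, then deletes the number."""
        _, sink = self.interfaces[queue_name]
        number = sink.popleft()
        return self.buffer[number]
```

Note the design point the claims hinge on: only the packet's number enters the transmission queue, while the data stays in the shared memory buffer mapped to the application layer, so enqueueing and eviction never copy packet payloads.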
CN202010841240.XA 2020-08-20 2020-08-20 Network data scheduling and distributing method, computer device and computer readable storage medium Active CN111988244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010841240.XA CN111988244B (en) 2020-08-20 2020-08-20 Network data scheduling and distributing method, computer device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010841240.XA CN111988244B (en) 2020-08-20 2020-08-20 Network data scheduling and distributing method, computer device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111988244A CN111988244A (en) 2020-11-24
CN111988244B true CN111988244B (en) 2022-10-18

Family

ID=73442304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010841240.XA Active CN111988244B (en) 2020-08-20 2020-08-20 Network data scheduling and distributing method, computer device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111988244B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835902B (en) * 2021-09-22 2023-12-05 抖音视界有限公司 Data processing method, device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371925A (en) * 2016-08-31 2017-02-01 北京中测安华科技有限公司 High-speed big data detection method and device
CN107171980A (en) * 2016-03-08 2017-09-15 迈络思科技Tlv有限公司 Flexible Buffer allocation in the network switch
CN110380992A (en) * 2019-07-24 2019-10-25 南京中孚信息技术有限公司 Message processing method, device and network flow acquire equipment
CN110995678A (en) * 2019-11-22 2020-04-10 北京航空航天大学 Industrial control network-oriented efficient intrusion detection system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9485188B2 (en) * 2013-02-01 2016-11-01 International Business Machines Corporation Virtual switching based flow control

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107171980A (en) * 2016-03-08 2017-09-15 迈络思科技Tlv有限公司 Flexible Buffer allocation in the network switch
CN106371925A (en) * 2016-08-31 2017-02-01 北京中测安华科技有限公司 High-speed big data detection method and device
CN110380992A (en) * 2019-07-24 2019-10-25 南京中孚信息技术有限公司 Message processing method, device and network flow acquire equipment
CN110995678A (en) * 2019-11-22 2020-04-10 北京航空航天大学 Industrial control network-oriented efficient intrusion detection system

Also Published As

Publication number Publication date
CN111988244A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN105511954B (en) Message processing method and device
US9742671B2 (en) Switching method
CN107179879B (en) Method and apparatus for data migration of storage device
CN106571978B (en) Data packet capturing method and device
US9858096B2 (en) Communication device migration method of extension function and communication system
CN110990415A (en) Data processing method and device, electronic equipment and storage medium
US8954702B2 (en) Extended address volume (EAV) allocation verification
CN111988244B (en) Network data scheduling and distributing method, computer device and computer readable storage medium
CN107294865B (en) load balancing method of software switch and software switch
US8589610B2 (en) Method and system for receiving commands using a scoreboard on an infiniband host channel adaptor
CN113535319A (en) Method, equipment and storage medium for realizing multiple RDMA network card virtualization
WO2017166997A1 (en) Inic-side exception handling method and device
CN108228099A (en) A kind of method and device of data storage
CN112631994A (en) Data migration method and system
CN101682551A (en) Method, apparatus, and computer program product for implementing bandwidth capping at logical port level for shared Ethernet port
CN110830385A (en) Packet capturing processing method, network equipment, server and storage medium
CN103841200A (en) Method and device for controlling software licensing
US11829335B1 (en) Using machine learning to provide a single user interface for streamlines deployment and management of multiple types of databases
US8041902B2 (en) Direct memory move of multiple buffers between logical partitions
US9189370B2 (en) Smart terminal fuzzing apparatus and method using multi-node structure
CN114153607A (en) Cross-node edge computing load balancing method, device and readable storage medium
CN106559439B (en) A kind of method for processing business and equipment
CN111125011B (en) File processing method, system and related equipment
CN113535370A (en) Method and equipment for realizing multiple RDMA network card virtualization of load balancing
CN109639447B (en) Method and device for mapping network function virtualization service chain under ring networking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 1901, No. 120, Huangpu Avenue West, Tianhe District, Guangzhou, Guangdong 510,000 (office only)

Patentee after: Guangdong Yizhi Security Technology Co.,Ltd.

Address before: Room 1211, Building A2, No. 23, Middle Spectra Road, Huangpu District, Guangzhou, Guangdong 510000

Patentee before: Guangzhou Yizhi Security Technology Co.,Ltd.

Address after: Room 1211, Building A2, No. 23, Middle Spectra Road, Huangpu District, Guangzhou, Guangdong 510000

Patentee after: Guangzhou Yizhi Security Technology Co.,Ltd.

Address before: 519000 room 105-44388, No. 6, Baohua Road, Hengqin new area, Zhuhai, Guangdong (centralized office area)

Patentee before: ZHUHAI YIZHI SECURITY TECHNOLOGY Co.,Ltd.
