WO2023071272A1 - Traffic migration method, apparatus and system, and electronic device and storage medium - Google Patents

Traffic migration method, apparatus and system, and electronic device and storage medium Download PDF

Info

Publication number
WO2023071272A1
WO2023071272A1 (PCT/CN2022/103219, CN2022103219W)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
data
target
network card
queue
Prior art date
Application number
PCT/CN2022/103219
Other languages
French (fr)
Chinese (zh)
Inventor
孙晓
火一莽
万月亮
Original Assignee
北京锐安科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京锐安科技有限公司
Publication of WO2023071272A1 publication Critical patent/WO2023071272A1/en

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • The embodiments of the present application relate to computer technologies, and in particular to a traffic migration method, apparatus, system, electronic device, and storage medium.
  • The embodiments of the present application provide a traffic migration method, apparatus, system, electronic device, and storage medium that handle sudden traffic jitter and the failure of a processing node within the device, ensuring that traffic is not lost and realizing traffic migration.
  • An embodiment of the present application provides a traffic migration method, the method including:
  • the network card receives the data to be processed distributed by the switch;
  • the network card calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed;
  • the network card determines the cache status of the first target cache queue;
  • when the cache status of the first target cache queue is full, the network card determines a second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • the network card distributes the data to be processed to the second target cache queue.
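The claimed method steps can be sketched as the following dispatch loop. This is a minimal illustration, not the patented implementation; the class and method names are hypothetical, and a capacity of 8 slots is borrowed from the examples later in the text.

```python
from collections import deque

QUEUE_CAPACITY = 8  # illustrative capacity; the examples in this application use 8 slots


class Nic:
    """Minimal sketch of the claimed NIC-side dispatch logic (hypothetical names)."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def first_target(self, additional_info):
        # Map the packet's additional information to a cache queue index.
        return hash(additional_info) % len(self.queues)

    def is_full(self, idx):
        return len(self.queues[idx]) >= QUEUE_CAPACITY

    def dispatch(self, packet, additional_info):
        idx = self.first_target(additional_info)
        if not self.is_full(idx):
            self.queues[idx].append(packet)  # normal path: first target cache queue
            return idx
        # First target is full: choose a second target among the other queues.
        for other in range(len(self.queues)):
            if other != idx and not self.is_full(other):
                self.queues[other].append(packet)
                return other
        return None  # every queue is full: return the data to the switch
```

`dispatch` returns the index of the queue the packet landed in, or `None` when the traffic has to be returned to the switch (the standby-server path described further below in the source).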
  • This embodiment also provides a traffic migration device, which includes:
  • a data receiving module, configured for the network card to receive the data to be processed distributed by the switch;
  • a queue calculation module, configured for the network card to calculate, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed;
  • a state determination module, configured for the network card to determine the cache status of the first target cache queue;
  • a queue determination module, configured to determine the second target cache queue according to the cache status of the other cache queues when the cache status of the first target cache queue is full, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • a data distribution module, configured for the network card to distribute the data to be processed to the second target cache queue.
  • the embodiment of the present application also provides a traffic migration system.
  • the traffic migration system includes a switch, a network card as described in the first aspect, and a target server.
  • The network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes; each processing process processes the data cached in its corresponding cache queue.
  • An embodiment of the present application also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor.
  • When the processor executes the program, the traffic migration method described in the first aspect of the present application is implemented.
  • the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the traffic migration method as described in the first aspect of the present application is implemented.
  • FIG. 1 is a schematic flow chart of a traffic migration method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of traffic migration within a target server according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of migrating traffic to a standby server according to an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of a traffic migration method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a traffic migration device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a traffic migration system according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • Words such as "optional" or "exemplary" are used as examples, illustrations, or explanations. Any embodiment or design described as "optional" or "exemplary" in the embodiments of the present application shall not be interpreted as more preferred or advantageous than other embodiments or designs. Rather, such words are intended to present the related concepts in a concrete manner.
  • Figure 1 is a schematic flow chart of the flow migration method of the embodiment of the present application.
  • the method can be executed by the flow migration device provided by the embodiment of the present application.
  • the device can be implemented by software and/or hardware.
  • the device may be integrated in an electronic device, and the electronic device may be a network card.
  • the integration of the traffic migration device in the network card is used as an example for illustration.
  • The method is applied to a traffic migration system.
  • the traffic migration system includes a switch, a network card, and a target server.
  • The network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes; each processing process processes the data cached in its corresponding cache queue. The method includes the following steps:
  • the network card receives the data to be processed distributed by the switch.
  • The network card is based on a multi-core, multi-threaded network processor architecture and implements features such as virtual switching.
  • it may be a plug-in network card, or may be a snap-in network card, etc., which is not limited in this embodiment.
  • a switch is a device that performs an information exchange function in a communication system.
  • it may be an Ethernet switch, or may be a fiber optic switch, which is not limited in this embodiment.
  • The data to be processed may come from outside, for example from an operator's network, or may be generated internally, for example by a built network with directed input traffic.
  • The network card is attached to the target server, and each target server is equipped with a network card.
  • the target server is mainly set up for packet acceleration processing and traffic management.
  • The target server generally uses a general-purpose x86 server as its basic form; business-related applications run on the basic hardware resources it provides.
  • Its hardware may include a central processing unit (CPU), dynamic random access memory (Dynamic Random Access Memory, DRAM), a hard disk drive (Hard Disk Drive, HDD), and so on, which is not limited in this embodiment.
  • the switch evenly distributes the data traffic to multiple target servers, and then the target servers receive the data traffic through the network card.
  • the network card calculates the first target cache queue to which the data to be processed should be distributed according to the additional information of the data to be processed.
  • The additional information includes at least one of: input port information, source Media Access Control (MAC) address, destination MAC address, network type, network identification number, source Internet Protocol (IP) address, destination IP address, IP port information, source Transmission Control Protocol (TCP) port information, and destination TCP port information.
  • the cache queue is a buffer for storing pre-allocated data traffic.
  • a target server will load a network card.
  • A single cache queue may not be able to meet the performance requirements, so multiple cache queues can be created; one of them is determined as the first target cache queue according to the additional information of the data to be processed, and the remaining queues are the other cache queues.
  • the network card determines the cache status of the first target cache queue.
  • One processing process serves one cache queue. If the processing process corresponding to a cache queue is killed by the operating system, for example because of a code error or an out-of-bounds memory access, the process becomes abnormal and a backlog builds up in its cache queue.
  • The network card then detects in time that the cache queue is full; if the processing process continues to run normally, the cache status shows as not full. In other words, the status of a cache queue reflects the status of its corresponding processing process.
  • The network card determines whether the cache status of the first target cache queue allows it to store more data traffic.
  • The network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues.
  • the remaining cache queues are called other cache queues.
  • a second target cache queue is determined.
  • Figure 2 is a schematic diagram of traffic migration within the target server of the embodiment of the present application. There are n cache queues in total. Assuming the first target cache queue is cache queue 1, when the cache status of the first target cache queue is full, the first target cache queue can no longer receive new data traffic, its corresponding processing process can no longer keep up, and a backlog forms in the first target cache queue.
  • The same method as in S130 is then used again on the other cache queues:
  • the second target cache queue is determined among cache queues 2 to n, i.e. the cache queues other than the first target cache queue, according to their cache status.
  • The network card first distributes the data to be processed to cache queue 1, as shown by the black arrow.
  • When cache queue 1 is fully loaded, it cannot store any more data; the network card then determines that cache queue 2 and cache queue n can still cache data traffic,
  • and the data to be processed is cached in cache queue 2 and cache queue n respectively, as shown by the black boxes in cache queue 2 and cache queue n in Figure 2.
  • Relieving the full load of a single cache queue mainly depends on the daemon program of the data processing node, which continuously monitors the running status of the program; after a full load occurs, for example because of the out-of-bounds memory error mentioned above, the program can be restarted.
  • Relieving a fully loaded cache queue may also depend on the system. For example, if the server loses power and its port is disconnected from the switch, then when power is restored the server automatically executes its program startup script after booting, restoring normal operation of the target server; once the switch detects that the processing processes of the cache queues have resumed, it forwards the traffic back.
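The daemon behaviour described above, restarting the processing program after an abnormal exit, can be sketched as a small supervisor loop. The function name and return value are hypothetical; this is a sketch of the monitoring idea, not the actual daemon.

```python
import subprocess
import sys

def run_with_restarts(cmd, max_restarts):
    """Hypothetical daemon sketch: keep restarting the processing program so a
    cache queue left full by a crashed process (e.g. an out-of-bounds memory
    error) is eventually drained again. Returns the number of abnormal exits
    observed before a clean exit or before the restart limit is exceeded."""
    restarts = 0
    while True:
        proc = subprocess.Popen(cmd)
        if proc.wait() == 0:       # clean exit: stop supervising
            return restarts
        restarts += 1              # abnormal exit: restart the program
        if restarts > max_restarts:
            return restarts
```

A real daemon would run indefinitely; the restart limit here only makes the sketch terminate for demonstration.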
  • When the cache status of the first target cache queue is full, the second target cache queue can be determined from the other cache queues. For example, if one of the other cache queues has only two of its eight data-flow storage slots occupied, that cache queue can be determined as the second target cache queue, since it has no backlog.
  • the network card distributes the data to be processed to the second target cache queue.
  • The traffic migration system also includes a standby server, and the method further includes: when the network card cannot determine a second target cache queue according to the cache status of the other cache queues, the network card returns the data to be processed to the switch, so that the switch distributes the pending data to the standby server.
  • the standby server may be the same server as the target server, or may be a different server, which is not limited in this embodiment.
  • If the network card cannot determine a second target cache queue according to the cache status of the other cache queues, then all remaining target cache queues are fully loaded and can no longer receive the data to be processed. In this case the incoming data traffic is returned to the switch, and the switch distributes the data to be processed to the standby server for further processing.
  • Figure 3 is a schematic diagram of migrating traffic to a standby server according to the embodiment of the present application. When all target cache queues in the target server are fully loaded, or the instantaneous data traffic far exceeds the performance limit of the target server, the network card does not discard subsequently received traffic but forwards it to the standby server through the switch.
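The switch-side fallback can be sketched as follows. The two stub classes and the `switch_forward` function are illustrative names; the point is only the decision rule: a `None` result from the NIC means every queue is full, so the packet goes to the standby server instead of being dropped.

```python
class FullNic:
    """Simulates a target server whose cache queues are all full."""
    def dispatch(self, packet):
        return None


class HealthyNic:
    """Simulates a target server with at least one non-full cache queue."""
    def dispatch(self, packet):
        return 0


def switch_forward(packet, nic, standby_queue):
    """Sketch of the switch-side fallback: if the NIC cannot cache the packet
    in any queue (dispatch returns None), the switch redirects the packet to
    the standby server instead of discarding it."""
    if nic.dispatch(packet) is None:
        standby_queue.append(packet)  # migrate the traffic to the standby server
        return "standby"
    return "target"
```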
  • The technical solution of the embodiments of the present application provides a traffic migration method: the network card receives the data to be processed distributed by the switch and calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed; the network card determines the cache status of the first target cache queue; when the cache status of the first target cache queue is full, the network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; and the network card distributes the data to be processed to the second target cache queue.
  • A flexible load balancing strategy is achieved by the network card using its own computing power to migrate traffic, so the traffic received by the data nodes is more stable and balanced. That is, in the embodiments of the present application, the network card installed on the device can be used to distribute traffic evenly, its high-performance programmable capability can be used to handle sudden traffic migration, and when a processing process is abnormal, traffic can be migrated through the network card installed on the device, thereby avoiding traffic being discarded, ensuring normal operation of the business, and improving the user's business experience.
  • The following embodiment refines how the network card performs traffic migration on the data to be processed.
  • Fig. 4 is a schematic flowchart of another traffic migration method according to an embodiment of the present application. As shown in Fig. 4, the method includes:
  • the network card receives the data to be processed distributed by the switch.
  • the switch can maintain a data distribution table and distribute the data to be processed according to the data distribution table.
  • the entries in the data distribution table are generally 256, 512 or 1024.
  • the data distribution table maintained in the switch can be shown in Table 1 below:
  • For example, if the table has 1024 entries and the data needs to be distributed to 16 network ports (network cards), then for a piece of data to be processed, the source IP and destination IP can be parsed out; both are 32-bit values, which must be processed to map to a table entry (1024 entries correspond to 10 bits).
  • The usual processing is an XOR (exclusive-or) operation, namely:
  • source IP xor destination IP yields a 32-bit value, which is reduced to 10 bits to index the 1024-entry table.
  • The data distribution table maintained in the switch is queried with the resulting hash value to determine to which network port the data to be processed is distributed, and the data is then sent to that port. For example, if the hash value of certain data, calculated by the switch according to the above rules, is 15, looking up Table 1 determines that the data can be distributed to network port 15.
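The switch-side lookup described above can be sketched as follows. The constants come from the example in the text (1024 entries, 16 ports); the table contents and the exact 32-to-10-bit folding step are not specified, so a round-robin table fill and a modulo reduction are assumed here.

```python
import ipaddress

ENTRIES = 1024    # table size from the example (10 bits)
NUM_PORTS = 16    # network ports (network cards) from the example

# Hypothetical fill of the switch's data distribution table: the text does not
# give the actual mapping, so a simple round-robin over the 16 ports is assumed.
distribution_table = [entry % NUM_PORTS for entry in range(ENTRIES)]


def switch_hash(src_ip: str, dst_ip: str) -> int:
    """XOR the two 32-bit addresses, then fold the result down to 10 bits.
    The folding step is not specified in the text; modulo is assumed here."""
    value = int(ipaddress.IPv4Address(src_ip)) ^ int(ipaddress.IPv4Address(dst_ip))
    return value % ENTRIES


def port_for(src_ip: str, dst_ip: str) -> int:
    """Look the hash value up in the data distribution table."""
    return distribution_table[switch_hash(src_ip, dst_ip)]
```

Note that XOR is symmetric, so both directions of a flow hash to the same entry and therefore reach the same port.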
  • the network card performs hash calculation on the additional information of the data to be processed to obtain a hash value.
  • Hash calculation, also called hashing, transforms an input of any length into a fixed-length output through a hash algorithm; the output is the hash value.
  • This conversion is a compressive mapping: the space of hash values is usually much smaller than the space of inputs, different inputs may hash to the same output, and the input cannot be uniquely determined from the hash value. Simply put, a hash function compresses a message of any length into a fixed-length message digest.
  • the hash value can be understood as the identity of a piece of traffic data.
  • The hash algorithm may, for example, be the MD5 message-digest algorithm (MD5 Message-Digest Algorithm, MD5) or secure hash algorithm 1 (Secure Hash Algorithm 1, SHA-1), which is not limited in this embodiment.
  • The network card hashes at least one item of the additional information of the data to be processed, such as input port information, source MAC address, destination MAC address, network type, network identification number, source IP address, destination IP address, IP port information, source TCP port information, and destination TCP port information, mapping a long piece of data to a short piece of data.
  • This short piece of data is the hash value of the original data, and it is effectively unique: once the original data changes, even slightly, its hash value also changes.
  • a data distribution table may also be maintained in the network card, and the data distribution table maintained in the network card may be shown in Table 2 below:
  • After the network card calculates the hash value of the data to be processed according to the above rules, it can query the data distribution table maintained in the network card to determine the first target cache queue to which the data should be distributed. For example, for certain data to be processed, if the hash value calculated by the network card is 2, the data distribution table maintained in the network card is queried to determine that the data should be distributed to cache queue 2.
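The NIC-side step can be sketched the same way. MD5 is one of the algorithms named above; the table size, queue count, and round-robin table fill are assumptions for illustration only.

```python
import hashlib

NUM_QUEUES = 8    # illustrative cache-queue count

# Hypothetical NIC-side data distribution table: hash value -> cache queue.
nic_table = [entry % NUM_QUEUES for entry in range(256)]


def first_target_queue(additional_info: tuple) -> int:
    """Hash the packet's additional information (e.g. addresses and ports)
    with a stable digest and look the result up in the NIC's table."""
    digest = hashlib.md5(repr(additional_info).encode()).digest()
    return nic_table[int.from_bytes(digest[:4], "big") % len(nic_table)]
```

A stable digest (rather than Python's process-randomized `hash`) matters here: packets of the same flow must always map to the same cache queue.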
  • the network card determines whether the first target cache queue is fully loaded.
  • For example, with a queue capacity of 8 slots, a count equal to 8 indicates a full state, and a count less than 8 indicates a non-full state.
  • the network card distributes the data to be processed to the first target cache queue.
  • a processing process of a target server is used for processing.
  • The non-full state means the processing process is handling the data traffic of the first target cache queue normally and the queue can continue to buffer data traffic; in this case the data to be processed can be distributed to the first target cache queue for processing.
  • For example, if the total capacity of the first target cache queue is 8 and it currently holds 4 entries (a non-full state), and the network card has 4 data flows to distribute to this queue, then there is buffer space in the first target cache queue and the 4 pending data flows can be distributed to it.
  • the network card determines a second target cache queue according to cache states of other cache queues, where the other cache queues are cache queues other than the first target cache queue among the multiple cache queues.
  • When the cache status of the first target cache queue is full, that is, the processing process has a backlog while handling the queue's data traffic, the network card promptly recalculates the hash for the data flows that would subsequently have been distributed to the first target cache queue and distributes them to the other cache queues;
  • that is, a second target cache queue is determined among the other cache queues, and the original distribution scheme is restored once the full load on the queue is relieved.
  • Relieving the full load of a cache queue mainly depends on the daemon program of the target server, which continuously monitors the running status of the program.
  • The full load can be relieved by restarting the program.
  • the network card distributes the data to be processed to the second target cache queue.
  • The network card determining the second target cache queue according to the cache status of the other cache queues includes: the network card searching, among the other cache queues, for a cache queue whose cache status is not full and whose cached data volume is below a preset threshold, to obtain the second target cache queue.
  • The preset threshold is a theoretical upper limit; in many cases it is affected by traffic or hardware, and it can be a dynamic upper limit: within a period of time, if the threshold is reached for a certain duration, the upper limit of the preset threshold can be considered reached.
  • The network card determines, according to the cache status of the other cache queues (those other than the first target cache queue), whether each is full or not full; with a queue capacity of 8, less than 8 is judged not full and equal to 8 is judged full. When a queue is judged not full, the volume of data to be cached is determined.
  • For example, if a non-full cache queue is determined to have a cacheable data volume of 4 and the volume of data to be processed is less than or equal to 4, that queue can be determined as the second target cache queue; if the volume of data to be processed is greater than the queue's cacheable volume of 4, the search continues among the remaining other cache queues.
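The selection rule in the example above can be sketched as a single scan over the queue occupancies. The function name and the usage-list representation are illustrative; the check mirrors the two conditions in the text: the queue is not full, and its free space covers the pending data.

```python
def second_target_queue(queue_usage, first_idx, pending, capacity=8):
    """Pick the second target cache queue among the other queues: the queue
    must not be full and must have room for the pending data volume.
    Returns a queue index, or None when the traffic must be returned to
    the switch (standby-server path)."""
    for idx, used in enumerate(queue_usage):
        if idx == first_idx or used >= capacity:
            continue  # skip the first target queue and any full queue
        if pending <= capacity - used:
            return idx
    return None
```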
  • the technical solutions of the embodiments of the present application provide a traffic migration method, wherein the network card receives the data to be processed distributed by the switch.
  • The network card performs hash calculation on the additional information of the data to be processed to obtain a hash value and determines the first target cache queue from it; when the cache status of the first target cache queue is not full, the network card distributes the data to be processed to the first target cache queue; the network card determines the cache status of the first target cache queue; when that status is full, the network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; the network card then distributes the data to be processed to the second target cache queue. That is, in the embodiments of the present application, the hash value is used to determine the first target cache queue, thereby implementing traffic migration and ensuring that the traffic received by the data nodes is more stable and balanced.
  • FIG. 5 is a schematic structural diagram of a traffic migration device according to an embodiment of the present application.
  • the traffic migration device may be a network card.
  • The traffic migration device includes: a data receiving module 510, a queue calculation module 520, a state determination module 530, a queue determination module 540, and a data distribution module 550.
  • the data receiving module 510 is configured to receive the data to be processed distributed by the switch;
  • the queue calculation module 520 is configured to calculate the first target cache queue to which the data to be processed should be distributed according to the additional information of the data to be processed;
  • the state determination module 530 is configured to determine the cache state of the first target cache queue
  • The queue determination module 540 is configured to determine the second target cache queue according to the cache status of the other cache queues when the cache status of the first target cache queue is full, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • the data distribution module 550 is configured to distribute the data to be processed to the second target cache queue.
  • the additional information includes input port information, source media access control MAC address, destination MAC address, network type, network identification number, source Internet Protocol IP address, destination IP address, IP port information, source transmission control At least one of protocol TCP port information and target TCP port information.
  • the queue calculation module 520 includes:
  • the hash calculation unit is configured to perform hash calculation on the additional information of the data to be processed to obtain a hash value
  • the queue determination unit is configured to determine the first target cache queue according to the hash value.
  • the data distribution module 550 is also set to:
  • the data to be processed is distributed to the first target cache queue.
  • the queue determination module 540 is set to:
  • searching, among the other cache queues, for a cache queue whose cache status is not full and whose cached data volume is less than a preset threshold, to obtain the second target cache queue.
  • the traffic migration system also includes a standby server, and the device also includes:
  • the data rollback module is configured to roll back the data to be processed to the switch when the second target cache queue cannot be determined according to the cache status of other cache queues, so that the switch distributes the data to be processed to the standby server.
  • a traffic migration device provided in an embodiment of the present application can execute the traffic migration method provided in any embodiment of the present application, and has corresponding functional modules for executing the method.
  • FIG. 6 is a schematic structural diagram of a traffic migration system according to an embodiment of the present application.
  • the traffic migration system includes a network card 601 , a switch 602 , a target server 603 and a standby server 604 .
  • the network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes, and each processing process processes the data cached in the corresponding cache queue.
  • The network card 601 is configured to: receive the data to be processed distributed by the switch 602; calculate, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed; determine the cache status of the first target cache queue; when the cache status of the first target cache queue is full, determine the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; and distribute the data to be processed to the second target cache queue. Finally, the target server 603 continues to process the data in the cache queues. When the target server 603 fails and its smart network card cannot be started, the data is distributed to the standby server 604.
  • the switch 602 is configured to: distribute the data to be processed to the network card 601; and, when the network card 601 cannot cache the data to be processed, receive the data that the network card 601 cannot process and distribute it to the standby server 604.
  • the target server 603 is configured to: host the network card 601, with the processing processes corresponding to the multiple cache queues on the target server 603 processing the data traffic in those cache queues.
  • the standby server 604 is configured to: receive and process the pending data traffic when all cache queues of the target server 603 are full, or when the instantaneous traffic far exceeds the processing capacity of the target server 603; like the target server 603, it may process the traffic through a network card, which is not limited in this embodiment.
  • the traffic migration system includes a network card, a switch, a target server and a standby server.
  • the network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes, and each processing process processes the data cached in the corresponding cache queue.
  • the network card receives the data to be processed distributed by the switch and calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed; the network card determines the cache status of the first target cache queue; when the cache status of the first target cache queue is full, the network card determines a second target cache queue according to the cache statuses of other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; the network card distributes the data to be processed to the second target cache queue. That is, in the embodiment of the present application, the network card uses its own computing power to migrate traffic, achieving a flexible load balancing strategy and making the traffic received by data nodes more stable and balanced.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes a processor 710, a memory 720, an input device 730, and an output device 740; the electronic device may include at least one processor 710, and FIG. 7 takes one processor 710 as an example. The processor 710, the memory 720, the input device 730 and the output device 740 in the electronic device may be connected through a bus or in other ways; FIG. 7 takes a bus connection as an example.
  • the memory 720 may be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the traffic migration method in the embodiments of the present application (for example, the data receiving module 510, queue calculation module 520, state determination module 530, queue determination module 540 and data distribution module 550 in the traffic migration device). The processor 710 executes the various functional applications and data processing of the electronic device by running the software programs, instructions and modules stored in the memory 720, thereby implementing the above traffic migration method.
  • the memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, and the like.
  • the memory 720 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • memory 720 may include memory located remotely from processor 710, and such remote memory may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 730 may be configured to receive input numbers or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the output device 740 may include a display device such as a display screen.
  • the embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a traffic migration method, the method including:
  • the network card receives the data to be processed distributed by the switch;
  • the network card calculates, according to the additional information of the data to be processed, the first target cache queue to which the data to be processed should be distributed;
  • the network card determines the cache status of the first target cache queue;
  • in response to the cache status of the first target cache queue being full, the network card determines the second target cache queue according to the cache statuses of other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • the network card distributes the data to be processed to the second target cache queue.
  • the computer-executable instructions are not limited to the above-mentioned method operations, and may also perform related operations of the traffic migration method provided in any embodiment of the present application.
  • the units and modules included are only divided according to functional logic, but are not limited to the above-mentioned divisions, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application.
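The division of labor among the switch, network card, and standby server described in the modules above can be sketched as follows (a minimal Python sketch with illustrative class names; none of these names come from the application):

```python
class Switch:
    """Sketch of the switch-side fallback: retry rejected traffic on the standby server."""

    def __init__(self, nic, standby_server):
        self.nic = nic
        self.standby = standby_server

    def distribute(self, packet) -> str:
        # Normal path: hand the packet to the network card on the target server.
        if self.nic.accept(packet):
            return "target"
        # Rollback path: the NIC could not cache the packet, forward it to standby.
        self.standby.process(packet)
        return "standby"

class IdleNic:
    def accept(self, packet) -> bool:
        return True   # models a NIC with cache-queue space available

class FullNic:
    def accept(self, packet) -> bool:
        return False  # models a NIC whose cache queues are all full

class Standby:
    def __init__(self):
        self.received = []

    def process(self, packet):
        self.received.append(packet)
```

In this sketch the rejection signal stands in for the NIC rolling the data back to the switch; only when the NIC reports no cache space does the switch involve the standby server.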


Abstract

The present application discloses a traffic migration method, apparatus and system, and an electronic device and a storage medium. The method is applied to a traffic migration system. The system comprises a switch, a network card and a target server, wherein the network card is installed on the target server; the target server has a plurality of cache queues and a plurality of corresponding processing processes; and each processing process processes data which is cached in a corresponding cache queue. The method comprises: a network card receiving data to be processed, which is distributed by a switch; according to additional information of said data, calculating a first target cache queue to which said data shall be distributed; the network card determining a cache state of the first target cache queue; in response to the cache state of the first target cache queue being a full-load state, the network card determining a second target cache queue according to cache states of other cache queues, wherein the other cache queues are cache queues, other than the first target cache queue, among a plurality of cache queues; and the network card distributing said data to the second target cache queue.

Description

A traffic migration method, apparatus, system, electronic device and storage medium

This application claims priority to the Chinese patent application with application number 202111260898.2, filed with the China Patent Office on October 28, 2021, the entire contents of which are incorporated herein by reference.
Technical Field

The embodiments of the present application relate to computer technologies, for example, to a traffic migration method, apparatus, system, electronic device, and storage medium.
Background

Global communication technology is currently developing very rapidly, bringing rapid growth in mobile data traffic. According to research forecasts, by 2026 the global monthly mobile data traffic will be 6.65 times that of 2019, reaching 226 billion GB.

In the face of such explosive traffic growth, it is difficult for traditional data centers to cope with sudden traffic jitter. When a processing node in a device fails, the data traffic sent to that node is usually discarded directly, which degrades the service experience.
Summary

The embodiments of the present application provide a traffic migration method, apparatus, system, electronic device, and storage medium, which cope with sudden traffic jitter, ensure that traffic is not lost when a processing node in a device fails, and realize traffic migration.

In a first aspect, an embodiment of the present application provides a traffic migration method, including:

the network card receives the data to be processed distributed by the switch;

the network card calculates, according to the additional information of the data to be processed, the first target cache queue to which the data to be processed should be distributed;

the network card determines the cache status of the first target cache queue;

in response to the cache status of the first target cache queue being full, the network card determines a second target cache queue according to the cache statuses of other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;

the network card distributes the data to be processed to the second target cache queue.
In a second aspect, this embodiment further provides a traffic migration apparatus, including:

a data receiving module, configured for the network card to receive the data to be processed distributed by the switch;

a queue calculation module, configured for the network card to calculate, according to the additional information of the data to be processed, the first target cache queue to which the data to be processed should be distributed;

a state determination module, configured for the network card to determine the cache status of the first target cache queue;

a queue determination module, configured for the network card to determine, when the cache status of the first target cache queue is full, a second target cache queue according to the cache statuses of other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;

a data distribution module, configured for the network card to distribute the data to be processed to the second target cache queue.
In a third aspect, an embodiment of the present application further provides a traffic migration system, including a switch, the network card described in the first aspect, and a target server; the network card is installed on the target server, the target server has multiple cache queues and corresponding multiple processing processes, and each processing process processes the data cached in its corresponding cache queue.

In a fourth aspect, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the traffic migration method described in the first aspect of the present application.

In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the traffic migration method described in the first aspect of the present application.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a traffic migration method according to an embodiment of the present application;

FIG. 2 is a schematic diagram of traffic migration within a target server according to an embodiment of the present application;

FIG. 3 is a schematic diagram of migrating traffic to a standby server according to an embodiment of the present application;

FIG. 4 is another schematic flowchart of a traffic migration method according to an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a traffic migration apparatus according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a traffic migration system according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description

The present application is described in detail below in conjunction with the accompanying drawings and embodiments.

In addition, in the embodiments of the present application, words such as "optional" or "exemplary" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "optional" or "exemplary" in the embodiments of the present application should not be interpreted as more preferred or advantageous than other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete manner.
FIG. 1 is a schematic flowchart of a traffic migration method according to an embodiment of the present application. The method may be executed by the traffic migration apparatus provided by an embodiment of the present application, and the apparatus may be implemented in software and/or hardware. In a specific embodiment, the apparatus may be integrated in an electronic device, and the electronic device may be a network card. In this embodiment, the integration of the traffic migration apparatus in a network card is used as an example for illustration.

The method is applied to a traffic migration system, which includes a switch, a network card, and a target server. The network card is installed on the target server; the target server has multiple cache queues and corresponding multiple processing processes, and each processing process processes the data cached in its corresponding cache queue. The method includes the following steps:
S110. The network card receives the data to be processed distributed by the switch.

The network card has a multi-core, multi-threaded network processor architecture implementing features such as virtual switching; it may be, for example, a plug-in network card or a snap-in network card, which is not limited in this embodiment. A switch is a device that performs an information exchange function in a communication system; it may be, for example, an Ethernet switch or a fiber optic switch, which is not limited in this embodiment. The data to be processed may come from an external source, such as an operator, or may be generated internally, for example by building a network and directing traffic into it.

The network card is attached to the target server, and each target server is equipped with one network card. The target server is mainly configured for packet acceleration processing and traffic management. The target server generally uses a general-purpose x86 server as its basic form, and business-related applications run normally using the basic hardware resources it provides, such as a central processing unit (CPU), dynamic random access memory (DRAM), or a hard disk drive (HDD), which is not limited in this embodiment.

Exemplarily, the switch evenly distributes data traffic to multiple target servers, and each target server receives the data traffic through its network card.
S120. The network card calculates, according to the additional information of the data to be processed, the first target cache queue to which the data to be processed should be distributed.

The additional information includes at least one of input port information, a source Media Access Control (MAC) address, a destination MAC address, a network type, a network identification number, a source Internet Protocol (IP) address, a destination IP address, IP port information, source Transmission Control Protocol (TCP) port information, and destination TCP port information.

A cache queue is a buffer for storing pre-allocated data traffic. Generally, one network card is loaded in a target server. With a large amount of incoming data traffic, a single cache queue may not meet the performance requirements, so multiple cache queues can be created. According to the additional information of the data to be processed, one of the multiple cache queues is determined as the first target cache queue, and the remaining cache queues are the other cache queues.
S130. The network card determines the cache status of the first target cache queue.

Exemplarily, before storing data traffic into the first target cache queue, the network card determines whether the cache status of the first target cache queue allows the data traffic to be stored. One processing process may handle one cache queue. If the processing process corresponding to a cache queue is killed by the operating system, for example due to a code error or an out-of-bounds memory access, the processing process becomes abnormal, causing a backlog in the cache queue, and the network card promptly detects that the cache queue is fully backlogged. If the processing process keeps working normally, the cache status shows that the queue is not full. That is, the status of a cache queue reflects the status of its corresponding processing process.

Exemplarily, when the network card needs to store current data traffic into the first target cache queue, and the processing process corresponding to the first target cache queue is normal and the queue has no backlog, the network card determines that the cache status of the first target cache queue allows data traffic to be stored.
S140. When the cache status of the first target cache queue is full, the network card determines a second target cache queue according to the cache statuses of other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues.

Among all the cache queues, once one is determined as the first target cache queue, the remaining queues are called the other cache queues. Exemplarily, when the cache status of the first target cache queue is full, the second target cache queue can be determined among the other cache queues.

As shown in FIG. 2, which is a schematic diagram of traffic migration within the target server according to an embodiment of the present application, there are n cache queues in total. Assume the first target cache queue is cache queue 1. When the cache status of the first target cache queue is full, that is, the first target cache queue can no longer receive new data traffic and the corresponding processing process cannot keep up, a backlog forms in the first target cache queue. At this point, the same method as in S130 is used to determine the cache statuses of the other cache queues again, and the second target cache queue is determined among cache queues 2 to n according to those statuses. Correspondingly, in FIG. 2, the network card first distributes the data to be processed to cache queue 1 (black arrow). When cache queue 1 becomes full and can no longer store data, the network card determines that cache queue 2 and cache queue n can cache data traffic, and caches the data to be processed into cache queue 2 and cache queue n respectively (the black boxes in cache queue 2 and cache queue n in FIG. 2).
Recovering a single cache queue from the full state mainly relies on the daemon of the data processing node, which continuously monitors the running status of the program; after a full-load condition occurs, for example the out-of-bounds memory segment error mentioned above, the program can be restarted. Recovering multiple cache queues from the full state mainly relies on the system's auto-start service. For example, if the server loses power and the port drops off the switch, then after power is restored the server automatically executes the program startup script on boot, restoring the target server to its normal running state; once the switch detects that the processing processes of the cache queues have started processing, it forwards the traffic back.

Exemplarily, suppose each cache queue is initially given storage for 8 units of data traffic. If all 8 storage slots of the first target cache queue are currently occupied, the cache status of the first target cache queue is called the full state, and the second target cache queue can be determined among the other cache queues. For example, if some other cache queue has only 2 of its 8 slots occupied, that queue can be determined as the second target cache queue, which has no backlog.
S150. The network card distributes the data to be processed to the second target cache queue.

Exemplarily, when it is determined that the second target cache queue is not full, that is, there is no backlog of data traffic and the processing process handles data traffic normally, the network card distributes the data to be processed to the second target cache queue.
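Steps S110 to S150 can be summarized in a short sketch (hypothetical names; a capacity-bounded queue stands in for the network card's cache-status check):

```python
from collections import deque
from typing import Optional

QUEUE_CAPACITY = 8  # per-queue slots, matching the 8-slot example above

class NicDispatcher:
    """Sketch of the NIC choosing between first and second target cache queues."""

    def __init__(self, num_queues: int):
        self.queues = [deque() for _ in range(num_queues)]

    def full(self, idx: int) -> bool:
        # Stand-in for the cache-status check: a backlog at capacity means "full".
        return len(self.queues[idx]) >= QUEUE_CAPACITY

    def dispatch(self, packet, first_target: int) -> Optional[int]:
        # S130/S150: use the first target queue if its cache status allows it.
        if not self.full(first_target):
            self.queues[first_target].append(packet)
            return first_target
        # S140: otherwise determine a second target queue among the other queues.
        for idx in range(len(self.queues)):
            if idx != first_target and not self.full(idx):
                self.queues[idx].append(packet)
                return idx
        # No queue has room: roll the data back to the switch.
        return None
```

Returning `None` models the case where no second target cache queue can be determined, so the data is rolled back to the switch and handled by the standby server.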
Optionally, the traffic migration system further includes a standby server, and the method further includes: when the network card cannot determine a second target cache queue according to the cache statuses of the other cache queues, the network card rolls the data to be processed back to the switch, so that the switch distributes the data to be processed to the standby server.

The standby server may be the same type of server as the target server or a different one, which is not limited in this embodiment.

Exemplarily, when the network card cannot determine a second target cache queue according to the cache statuses of the other cache queues, that is, all the remaining target cache queues are full and can no longer receive the data to be processed, the incoming data traffic is returned to the switch, and the switch then distributes the data to be processed to the standby server for further processing.

As shown in FIG. 3, which is a schematic diagram of migrating traffic to a standby server according to an embodiment of the present application, when all target cache queues in the target server are full, or the instantaneous data traffic far exceeds the processing capacity of the target server, the network card does not discard subsequently received traffic but forwards it to the standby server through the switch.
The technical solution of the embodiments of the present application provides a traffic migration method: the network card receives the data to be processed distributed by the switch; calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed; determines the cache status of the first target cache queue; when that status is full, determines a second target cache queue according to the cache statuses of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; and distributes the data to be processed to the second target cache queue. That is, in the embodiments of the present application, the network card uses its own computing power to migrate traffic, achieving a flexible load balancing strategy so that the traffic received by data nodes is more stable and balanced. Traffic can be distributed through the network card installed on the device, achieving balanced traffic distribution, and the high-performance programmability of the network card enables migration of traffic bursts. When a processing node (processing process) inside the device is abnormal, traffic can be migrated through the network card installed on the device, thereby avoiding discarded traffic, ensuring normal business operation, and improving the user's service experience.
Exemplarily, on the basis of the foregoing embodiments, the result of traffic migration performed by the network card according to the data to be processed is refined.

FIG. 4 is a schematic flowchart of another traffic migration method according to an embodiment of the present application. As shown in FIG. 4, the method includes:

S410. The network card receives the data to be processed distributed by the switch.

That is, the data to be processed received by the network card is distributed by the switch. Exemplarily, the switch may maintain a data distribution table and distribute the data to be processed according to it; the table generally has 256, 512, or 1024 entries. An example of a data distribution table maintained in the switch is shown in Table 1 below:
Index    Network port
0        0
1        1
2        2
15       15
16       0
17       1
1023     15

Table 1
表1所示,即表项为1024,需要分发给16个网口(网卡),则针对一个待 处理数据,可以对其进行解析得到源IP和目的IP,源IP和目的IP都是32bit的值,需要进行处理才能跟表项对应起来(1024个表项代表10bit),一般的处理是xor,xor表示异或操作,即:As shown in Table 1, that is, the entry is 1024, which needs to be distributed to 16 network ports (network cards), then for a piece of data to be processed, it can be analyzed to obtain the source IP and destination IP, both of which are 32bit The value needs to be processed to correspond to the entry (1024 entries represent 10bit). The general processing is xor, which means XOR operation, namely:
(1) XOR the source IP with the destination IP to obtain a 32-bit value;
(2) XOR the high 16 bits of the value obtained in step (1) with its low 16 bits to obtain a 16-bit value;
(3) XOR bits 15-12 of the 16-bit value with bits 11-8, and replace bits 11-8 with the resulting 4 bits to obtain a 12-bit value;
(4) Shift the 12-bit value right by 2 bits (discarding the low 2 bits) to obtain a 10-bit hash value.
The data distribution table maintained in the switch is queried according to the obtained hash value to determine the network port to which the data to be processed is distributed, and the data is then sent to that port. For example, if the hash value computed by the switch for a piece of data according to the above rules is 15, looking up Table 1 determines that the data can be distributed to network port 15.
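The four XOR-folding steps and the subsequent table lookup can be sketched as follows. This is a minimal illustration, not the switch's actual implementation; the table layout (entry i mapping to port i % 16) is an assumption that is merely consistent with the rows shown in Table 1:

```python
def switch_hash(src_ip: int, dst_ip: int) -> int:
    """Fold two 32-bit IP addresses into a 10-bit table index (steps 1-4)."""
    v32 = src_ip ^ dst_ip                        # (1) 32-bit XOR of the two IPs
    v16 = ((v32 >> 16) ^ v32) & 0xFFFF           # (2) high 16 bits XOR low 16 bits
    top4 = ((v16 >> 12) ^ (v16 >> 8)) & 0xF      # (3) bits 15-12 XOR bits 11-8...
    v12 = (top4 << 8) | (v16 & 0xFF)             #     ...result replaces bits 11-8
    return v12 >> 2                              # (4) drop the low 2 bits -> 10 bits

# Hypothetical 1024-entry distribution table over 16 ports, as in Table 1.
DISTRIBUTION_TABLE = [i % 16 for i in range(1024)]

def pick_port(src_ip: int, dst_ip: int) -> int:
    return DISTRIBUTION_TABLE[switch_hash(src_ip, dst_ip)]
```

For example, source 192.168.0.1 (0xC0A80001) and destination 10.0.0.1 (0x0A000001) fold to index 426, which this table layout maps to port 10.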
S420. The network card performs a hash calculation on the additional information of the data to be processed to obtain a hash value.
A hash calculation, also called a hash algorithm, maps an input of arbitrary length to an output of fixed length; this output is the hash value. The transformation is a compression mapping: the space of hash values is usually much smaller than the input space, different inputs may hash to the same output, and the input value cannot be uniquely determined from the hash value. Simply put, it is a function that compresses a message of arbitrary length into a message digest of a fixed length.
The hash value can be understood as the identity of a piece of traffic data, produced by a certain hash algorithm, for example the MD5 Message-Digest Algorithm (MD5) or the Secure Hash Algorithm 1 (SHA-1), which this embodiment does not limit.
Exemplarily, the network card performs a hash calculation on at least one item of the additional information of the data to be processed, such as input port information, source media access control (MAC) address, destination MAC address, network type, network identification number, source Internet Protocol (IP) address, destination IP address, IP port information, source Transmission Control Protocol (TCP) port information and destination TCP port information, mapping a long piece of data to a short piece of data. This short piece of data is the hash value of the long data, and it is effectively unique: once the data changes, even slightly, its hash value also changes.
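The text names MD5 as one admissible algorithm, so a small sketch of hashing a set of additional-information fields with MD5 looks as follows. The field names and serialization format are assumptions for illustration only; any subset of the fields listed above could be fed in the same way:

```python
import hashlib

def additional_info_hash(fields: dict) -> str:
    """MD5 digest over a packet's additional-information fields (illustrative)."""
    # Serialize the fields in a fixed order so equal inputs hash identically.
    blob = "|".join(f"{k}={fields[k]}" for k in sorted(fields)).encode()
    return hashlib.md5(blob).hexdigest()

a = additional_info_hash({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 80})
b = additional_info_hash({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 81})
# Even a small change in the input (port 80 -> 81) yields a different digest.
```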
S430. Determine the first target cache queue according to the hash value.
For example, the network card may also maintain a data distribution table, which may be as shown in Table 2 below:
Index    Cache queue
1        1
2        2
3        3
…        …
7        1
8        2
9        3

Table 2
As shown in Table 2, where data needs to be distributed to three cache queues, after the network card calculates the hash value of the data to be processed according to the above rules, it can query the data distribution table it maintains to determine the first target cache queue to which the data should be distributed. For example, if the network card computes a hash value of 2 for a piece of data, it can look up its data distribution table and determine that the data needs to be distributed to cache queue 2.
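The rows shown in Table 2 follow a repeating pattern, so the network-card-side lookup can be sketched as below. The closed-form mapping is an assumption inferred from the visible rows; a real table could hold arbitrary index-to-queue assignments:

```python
NUM_QUEUES = 3  # queues numbered 1..3, as in Table 2

def queue_for_index(index: int) -> int:
    """Map a table index to a cache queue following Table 2's repeating pattern."""
    return ((index - 1) % NUM_QUEUES) + 1
```

With this mapping, index 2 selects cache queue 2, matching the example in the text.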
S440. The network card determines whether the first target cache queue is in the full-load state.
Exemplarily, when the original fixed buffer size of the first target cache queue is 8 bits, a value equal to 8 bits indicates the full-load state, and a value less than 8 bits indicates a non-full-load state.
S450. The network card distributes the data to be processed to the first target cache queue.
Exemplarily, each target cache queue is served by one processing process of the target server. The non-full-load state means that the processing process is handling the data traffic of the first target cache queue normally and the queue can still buffer more traffic; in this case the data to be processed can be distributed to the first target cache queue for processing.
Exemplarily, if the total buffer capacity of the first target cache queue is 8 and the queue is not fully loaded, for example it currently holds 4 entries, and there are 4 pieces of data traffic in the network card that need to be distributed to a cache queue, it is determined that the first target cache queue has buffer space, and the 4 pieces of data traffic to be processed can be distributed to it.
S460. The network card determines a second target cache queue according to the cache states of the other cache queues, where the other cache queues are the cache queues among the multiple cache queues other than the first target cache queue.
Exemplarily, when the cache state of the first target cache queue is the full-load state, that is, the processing process has built up a backlog while handling data traffic, the network card promptly recalculates the hash result for subsequent data traffic that would otherwise go to the first target cache queue and distributes it to other cache queues, that is, it determines a second target cache queue among the other cache queues, and restores the original distribution once the full-load condition of the cache queue is cleared. Clearing a fully loaded cache queue mainly relies on the daemon program of the target server, which continuously monitors the running state of the processing program; typically, when the cache queue is fully loaded due to, for example, an out-of-bounds memory segmentation fault, restarting the program clears the full-load condition.
S470. The network card distributes the data to be processed to the second target cache queue.
As an optional solution in this embodiment, the network card determining the second target cache queue according to the cache states of the other cache queues includes: the network card searches the other cache queues for a cache queue whose cache state is the non-full-load state and whose amount of cached data is less than a preset threshold, to obtain the second target cache queue.
The preset threshold is a theoretical upper limit. In many cases it is affected by traffic or hardware and can be a dynamically identified upper limit: within a period of time there is a threshold, and once that upper limit is reached, the queue can be considered to have reached the upper limit of the preset threshold.
Exemplarily, when the first target cache queue is fully loaded, the network card determines, according to the cache states of the cache queues other than the first target cache queue, whether each of those queues is fully loaded. With a buffer capacity of 8, a queue holding fewer than 8 entries is judged to be in the non-full-load state, and one holding exactly 8 is judged fully loaded. After a queue is judged non-full, the amount of data to be cached is determined: if the non-full cache queue can buffer 4 entries and the amount of data to be processed is less than or equal to 4, the second target cache queue is obtained; if the amount of data to be processed is greater than the 4 entries the queue can buffer, the search for the second target cache queue continues among the remaining cache queues.
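The S440-S470 decision described above can be sketched with a minimal queue model. The capacity of 8 and the threshold value are taken from the examples in the text purely for illustration; the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CacheQueue:
    qid: int
    capacity: int = 8   # total buffer slots, as in the example above
    depth: int = 0      # currently cached entries

    @property
    def full(self) -> bool:
        return self.depth >= self.capacity

def pick_target(first: CacheQueue, others: list, threshold: int):
    """Return the queue to receive the data, or None (fall back to the switch)."""
    if not first.full:
        return first                      # S450: first target has room
    for q in others:                      # S460: scan the remaining queues
        if not q.full and q.depth < threshold:
            return q                      # second target cache queue found
    return None                           # no queue available

q1 = CacheQueue(1, depth=8)               # first target is fully loaded
q2 = CacheQueue(2, depth=7)               # not full, but above the threshold
q3 = CacheQueue(3, depth=3)
target = pick_target(q1, [q2, q3], threshold=6)   # selects q3
```

A `None` result corresponds to the optional behavior described later, where the data is returned to the switch for the standby server.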
The technical solution of the embodiments of the present application provides a traffic migration method: the network card receives the data to be processed distributed by the switch; it performs a hash calculation on the additional information of the data to obtain a hash value and determines the first target cache queue according to the hash value; when the cache state of the first target cache queue is the non-full-load state, the network card distributes the data to the first target cache queue; the network card determines the cache state of the first target cache queue; when that state is the full-load state, the network card determines a second target cache queue according to the cache states of the other cache queues, the other cache queues being the cache queues among the multiple cache queues other than the first target cache queue; and the network card distributes the data to the second target cache queue. That is, in the embodiments of the present application, the first target cache queue is determined through the hash value, successfully implementing traffic migration and ensuring that the traffic received by the data nodes is more stable and balanced.
Fig. 5 is a schematic structural diagram of a traffic migration apparatus according to an embodiment of the present application. The traffic migration apparatus may be a network card. As shown in Fig. 5, the apparatus includes: a data receiving module 510, a queue calculation module 520, a state determination module 530, a queue determination module 540 and a data distribution module 550. Among them:
The data receiving module 510 is configured to receive the data to be processed distributed by the switch;
The queue calculation module 520 is configured to calculate, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed;
The state determination module 530 is configured to determine the cache state of the first target cache queue;
The queue determination module 540 is configured to determine, when the cache state of the first target cache queue is the full-load state, a second target cache queue according to the cache states of the other cache queues, the other cache queues being the cache queues among the multiple cache queues other than the first target cache queue;
The data distribution module 550 is configured to distribute the data to be processed to the second target cache queue.
Optionally, the additional information includes at least one of input port information, source media access control (MAC) address, destination MAC address, network type, network identification number, source Internet Protocol (IP) address, destination IP address, IP port information, source Transmission Control Protocol (TCP) port information and destination TCP port information.
Optionally, the queue calculation module 520 includes:
A hash calculation unit, configured to perform a hash calculation on the additional information of the data to be processed to obtain a hash value;
A queue determination unit, configured to determine the first target cache queue according to the hash value.
Optionally, the data distribution module 550 is further configured to:
distribute the data to be processed to the first target cache queue when the cache state of the first target cache queue is the non-full-load state.
Optionally, the queue determination module 540 is configured to:
search the other cache queues for a cache queue whose cache state is the non-full-load state and whose amount of cached data is less than a preset threshold, to obtain the second target cache queue.
Optionally, the traffic migration system further includes a standby server, and the apparatus further includes:
A data rollback module, configured to return the data to be processed to the switch when the second target cache queue cannot be determined according to the cache states of the other cache queues, so that the switch distributes the data to the standby server.
The traffic migration apparatus provided in the embodiments of the present application can execute the traffic migration method provided in any embodiment of the present application, and has functional modules corresponding to the executed method.
Fig. 6 is a schematic structural diagram of a traffic migration system according to an embodiment of the present application. As shown in Fig. 6, the traffic migration system includes a network card 601, a switch 602, a target server 603 and a standby server 604.
The network card is installed on the target server; the target server has multiple cache queues and corresponding multiple processing processes, and each processing process handles the data cached in its corresponding cache queue. Exemplarily:
The network card 601 is configured to: receive the data to be processed distributed by the switch 602, and calculate, according to the additional information of the data, the first target cache queue to which the data should be distributed; the network card 601 further determines the cache state of the first target cache queue; when that state is the full-load state, the network card 601 determines a second target cache queue according to the cache states of the other cache queues, the other cache queues being the cache queues among the multiple cache queues other than the first target cache queue; the network card 601 then distributes the data to the second target cache queue, and finally the target server 603 continues processing the data in the cache queue. When a failure of the target server 603 prevents the smart network card from starting, the data is distributed to the standby server 604.
The switch 602 is configured to: distribute the data to be processed to the network card 601; when the network card 601 cannot cache the data, the switch 602 receives the data the network card 601 cannot handle and distributes it to the standby server 604.
The target server 603 is configured so that the network card 601 is attached to it, and the processing processes corresponding to the multiple cache queues on the target server 603 handle the data traffic in those cache queues.
The standby server 604 is configured to: receive and process the data traffic to be processed when all cache queues of the target server 603 are fully loaded, or when the instantaneous traffic far exceeds the processing capacity of the target server 603; the processing may use a network card in the same way as the target server 603, which this embodiment does not limit.
The traffic migration system provided in this embodiment includes a network card, a switch, a target server and a standby server. The network card is installed on the target server; the target server has multiple cache queues and corresponding multiple processing processes, and each processing process handles the data cached in its corresponding cache queue. The network card receives the data to be processed distributed by the switch and calculates, according to the additional information of the data, the first target cache queue to which the data should be distributed; the network card determines the cache state of the first target cache queue; when that state is the full-load state, the network card determines a second target cache queue according to the cache states of the other cache queues, the other cache queues being the cache queues among the multiple cache queues other than the first target cache queue; and the network card distributes the data to the second target cache queue. That is, in the embodiments of the present application, the network card uses its own computing power to migrate traffic, enabling a flexible load balancing strategy and making the traffic received by the data nodes more stable and balanced.
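The end-to-end dispatch just summarized (network-card queues first, fallback through the switch to the standby server) can be sketched in a few lines. Everything here is illustrative: the queue class, the threshold of 6 and the use of a list as a stand-in for the standby server are assumptions, and `packet_hash % len(queues)` stands in for the table lookup described earlier:

```python
class Q:
    """Minimal cache-queue model: a depth counter against a fixed capacity."""
    def __init__(self, capacity=8):
        self.capacity, self.depth = capacity, 0
    def full(self):
        return self.depth >= self.capacity

def dispatch(packet_hash, queues, standby, threshold=6):
    """NIC-side dispatch with fallback to the standby server (illustrative)."""
    first = queues[packet_hash % len(queues)]          # stand-in for the table lookup
    target = first if not first.full() else next(
        (q for q in queues
         if q is not first and not q.full() and q.depth < threshold),
        None)
    if target is None:
        standby.append(packet_hash)   # returned to the switch -> standby server
    else:
        target.depth += 1             # cached for the corresponding process

queues = [Q(), Q(), Q()]
queues[0].depth = 8                   # first target queue is fully loaded
standby = []
dispatch(0, queues, standby)          # migrated to the next available queue
```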
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor 710, a memory 720, an input apparatus 730 and an output apparatus 740. The number of processors 710 in the electronic device may be at least one; one processor 710 is taken as an example in Fig. 7. The processor 710, memory 720, input apparatus 730 and output apparatus 740 in the electronic device may be connected through a bus or in other ways; connection through a bus is taken as an example in Fig. 7.
As a computer-readable storage medium, the memory 720 can be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the traffic migration method in the embodiments of the present application (for example, the data receiving module 510, queue calculation module 520, state determination module 530, queue determination module 540 and data distribution module 550 in the traffic migration apparatus). By running the software programs, instructions and modules stored in the memory 720, the processor 710 executes the various functional applications and data processing of the electronic device, that is, implements the above traffic migration method.
The memory 720 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the memory 720 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the memory 720 may include memory remotely located relative to the processor 710, and such remote memory may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input apparatus 730 may be configured to receive input digit or character information and to generate key signal input related to user settings and function control of the electronic device. The output apparatus 740 may include a display device such as a display screen.
An embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a traffic migration method, the method including:
The network card receives the data to be processed distributed by the switch;
The network card calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed;
The network card determines the cache state of the first target cache queue;
When the cache state of the first target cache queue is the full-load state, the network card determines a second target cache queue according to the cache states of the other cache queues, the other cache queues being the cache queues among the multiple cache queues other than the first target cache queue;
The network card distributes the data to be processed to the second target cache queue.
Of course, in the storage medium containing computer-executable instructions provided in the embodiments of the present application, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the traffic migration method provided in any embodiment of the present application.
Through the above description of the implementations, those skilled in the art can clearly understand that the present application can be implemented by software together with necessary general-purpose hardware, and of course can also be implemented by hardware. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
It is worth noting that in the above apparatus embodiments, the included units and modules are divided only according to functional logic but are not limited to the above division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application.

Claims (10)

  1. A traffic migration method, applied to a traffic migration system, the traffic migration system including a switch, a network card and a target server, the network card being installed on the target server, the target server having multiple cache queues and corresponding multiple processing processes, each processing process handling data cached in the corresponding cache queue, the method comprising:
    the network card receiving data to be processed distributed by the switch;
    the network card calculating, according to additional information of the data to be processed, a first target cache queue to which the data to be processed should be distributed;
    the network card determining a cache state of the first target cache queue;
    in response to the cache state of the first target cache queue being a full-load state, the network card determining a second target cache queue according to cache states of other cache queues, the other cache queues being cache queues among the multiple cache queues other than the first target cache queue; and
    the network card distributing the data to be processed to the second target cache queue.
  2. The traffic migration method according to claim 1, wherein the additional information comprises at least one of input port information, a source media access control (MAC) address, a destination MAC address, a network type, a network identification number, a source Internet Protocol (IP) address, a destination IP address, IP port information, source Transmission Control Protocol (TCP) port information and destination TCP port information.
  3. The traffic migration method according to claim 1 or 2, wherein the network card calculating, according to the additional information of the data to be processed, the first target cache queue to which the data to be processed should be distributed comprises:
    the network card performing a hash calculation on the additional information of the data to be processed to obtain a hash value; and
    determining the first target cache queue according to the hash value.
  4. The traffic migration method according to claim 1, further comprising, after determining the cache state of the first target cache queue:
    in response to the cache state of the first target cache queue being a non-full-load state, the network card distributing the data to be processed to the first target cache queue.
  5. The traffic migration method according to claim 1, wherein the network card determining the second target cache queue according to the cache states of the other cache queues comprises:
    the network card searching the other cache queues for a cache queue whose cache state is a non-full-load state and whose amount of cached data is less than a preset threshold, to obtain the second target cache queue.
  6. The traffic migration method according to claim 1, wherein the traffic migration system further includes a standby server, and the method further comprises:
    in response to the network card failing to determine the second target cache queue according to the cache states of the other cache queues, the network card returning the data to be processed to the switch, so that the switch distributes the data to be processed to the standby server.
  7. A traffic migration apparatus, applied to a traffic migration system, the traffic migration system including a switch, a network card and a target server, the network card being installed on the target server, the target server having multiple cache queues and corresponding multiple processing processes, each processing process handling data cached in the corresponding cache queue, the apparatus comprising:
    a data receiving module, configured for the network card to receive data to be processed distributed by the switch;
    a queue calculation module, configured for the network card to calculate, according to additional information of the data to be processed, a first target cache queue to which the data to be processed should be distributed;
    a state determination module, configured for the network card to determine a cache state of the first target cache queue;
    a queue determination module, configured for the network card to determine, when the cache state of the first target cache queue is a full-load state, a second target cache queue according to cache states of other cache queues, the other cache queues being cache queues among the multiple cache queues other than the first target cache queue; and
    a data distribution module, configured for the network card to distribute the data to be processed to the second target cache queue.
  8. A traffic migration system, comprising a switch, a target server, and the network card according to any one of claims 1 to 6, wherein the network card is installed on the target server, the target server has multiple cache queues and multiple corresponding processing processes, and each processing process processes data cached in a corresponding cache queue.
  9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the traffic migration method according to any one of claims 1 to 6.
  10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the traffic migration method according to any one of claims 1 to 6.
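The dispatch behavior recited in claims 6 and 7 — compute a first target cache queue from a packet's additional information, migrate to a second queue when the first is full, and return the packet to the switch when no queue is available — can be sketched as a minimal software model. This is an illustrative sketch only, not the claimed network-card implementation; the class names, the hash-based queue selection, and the fixed queue capacities are assumptions.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class CacheQueue:
    """One of the target server's cache queues (capacity is an assumption)."""
    capacity: int
    buffer: deque = field(default_factory=deque)

    def is_full(self) -> bool:
        return len(self.buffer) >= self.capacity


class NetworkCard:
    """Hypothetical model of the claimed network-card dispatch logic."""

    def __init__(self, queues):
        self.queues = queues

    def dispatch(self, packet, extra_info):
        # Claim 7: compute the first target cache queue from the packet's
        # additional information (hashing is one plausible mapping).
        first = hash(extra_info) % len(self.queues)
        if not self.queues[first].is_full():
            self.queues[first].buffer.append(packet)
            return ("enqueued", first)
        # Claim 7: first target full -> choose a second target among the
        # other (non-full) cache queues.
        for i, q in enumerate(self.queues):
            if i != first and not q.is_full():
                q.buffer.append(packet)
                return ("migrated", i)
        # Claim 6: no second target can be determined -> return the packet
        # to the switch, which redistributes it to the standby server.
        return ("returned_to_switch", None)
```

In this model the per-queue processing processes of the claims are abstracted away; only the queue-selection and fallback path is exercised.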
PCT/CN2022/103219 2021-10-28 2022-07-01 Traffic migration method, apparatus and system, and electronic device and storage medium WO2023071272A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111260898.2 2021-10-28
CN202111260898.2A CN114024915B (en) 2021-10-28 2021-10-28 Traffic migration method, device and system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023071272A1 true WO2023071272A1 (en) 2023-05-04

Family

ID=80058061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103219 WO2023071272A1 (en) 2021-10-28 2022-07-01 Traffic migration method, apparatus and system, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN114024915B (en)
WO (1) WO2023071272A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117640796A (en) * 2024-01-03 2024-03-01 北京火山引擎科技有限公司 Network message processing method and device

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN114024915B (en) * 2021-10-28 2023-06-16 北京锐安科技有限公司 Traffic migration method, device and system, electronic equipment and storage medium
CN116016092A (en) * 2022-12-13 2023-04-25 杭州领祺科技有限公司 MQTT synchronous message method based on multithreading

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109450816A (en) * 2018-11-19 2019-03-08 迈普通信技术股份有限公司 A kind of array dispatching method, device, the network equipment and storage medium
CN111131074A (en) * 2018-10-31 2020-05-08 中移(杭州)信息技术有限公司 Data processing method, device, system, server and readable storage medium
CN111371866A (en) * 2020-02-26 2020-07-03 厦门网宿有限公司 Method and device for processing service request
US20210133110A1 (en) * 2019-10-30 2021-05-06 International Business Machines Corporation Migrating data between block pools in a storage system
CN114024915A (en) * 2021-10-28 2022-02-08 北京锐安科技有限公司 Traffic migration method, device and system, electronic equipment and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN110493145B (en) * 2019-08-01 2022-06-24 新华三大数据技术有限公司 Caching method and device
CN112463654A (en) * 2019-09-06 2021-03-09 华为技术有限公司 Cache implementation method with prediction mechanism


Also Published As

Publication number Publication date
CN114024915B (en) 2023-06-16
CN114024915A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
WO2023071272A1 (en) Traffic migration method, apparatus and system, and electronic device and storage medium
US9632839B2 (en) Dynamic virtual machine consolidation
KR102304416B1 (en) Automatic Tuning of Hybrid WAN Links by Adaptive Replication of Packets on Alternate Links
US9866479B2 (en) Technologies for concurrency of cuckoo hashing flow lookup
CN107079060B (en) System and method for carrier-level NAT optimization
US10715622B2 (en) Systems and methods for accelerating object stores with distributed caching
US8937942B1 (en) Storing session information in network devices
EP2793436B1 (en) Content router forwarding plane architecture
US9774531B2 (en) Hash-based forwarding in content centric networks
US20140301388A1 (en) Systems and methods to cache packet steering decisions for a cluster of load balancers
WO2019237594A1 (en) Session persistence method and apparatus, and computer device and storage medium
WO2014101777A1 (en) Flow table matching method and device, and switch
US20200364080A1 (en) Interrupt processing method and apparatus and server
US10089131B2 (en) Compute cluster load balancing based on disk I/O cache contents
Liu et al. Memory disaggregation: Research problems and opportunities
WO2022111313A1 (en) Request processing method and micro-service system
US20150220438A1 (en) Dynamic hot volume caching
Mendelson et al. Anchorhash: A scalable consistent hash
TW201738781A (en) Method and device for joining tables
EP3977707B1 (en) Hardware load balancer gateway on commodity switch hardware
US10678754B1 (en) Per-tenant deduplication for shared storage
Vardoulakis et al. Tebis: index shipping for efficient replication in lsm key-value stores
EP3685567B1 (en) Load shedding of traffic based on current load state of target capacity
WO2023029485A1 (en) Data processing method and apparatus, computer device, and computer-readable storage medium
CN115766729A (en) Data processing method for four-layer load balancing and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22885177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE