WO2023071272A1 - Traffic migration method, apparatus and system, and electronic device and storage medium

Traffic migration method, apparatus and system, and electronic device and storage medium

Info

Publication number
WO2023071272A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
data
target
network card
queue
Prior art date
Application number
PCT/CN2022/103219
Other languages
English (en)
Chinese (zh)
Inventor
孙晓
火一莽
万月亮
Original Assignee
北京锐安科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京锐安科技有限公司
Publication of WO2023071272A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/12 - Avoiding congestion; Recovering from congestion
    • H04L 47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/50 - Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • The embodiments of the present application relate to computer technologies, for example, to a traffic migration method, apparatus, system, electronic device, and storage medium.
  • The embodiments of the present application provide a traffic migration method, apparatus, system, electronic device, and storage medium, so that when sudden traffic jitter occurs or a processing node in the device fails, traffic is not lost and traffic migration is achieved.
  • In a first aspect, the embodiment of the present application provides a traffic migration method. The method includes:
  • the network card receives the data to be processed distributed by the switch;
  • the network card calculates, according to additional information of the data to be processed, a first target cache queue to which the data should be distributed;
  • the network card determines the cache status of the first target cache queue;
  • when the cache status of the first target cache queue is fully loaded, the network card determines a second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the plurality of cache queues;
  • the network card distributes the data to be processed to the second target cache queue (a minimal sketch of this flow appears below).
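  • As a concrete illustration of these steps, the following minimal Python sketch mirrors the claimed flow; the queue structure, the capacity of 8 slots, and all names are hypothetical, chosen only to make the steps executable, not taken from the application.

    from collections import deque

    QUEUE_CAPACITY = 8  # hypothetical per-queue slot count (the description uses 8)

    class CacheQueue:
        def __init__(self):
            self.buf = deque()

        def full(self) -> bool:
            return len(self.buf) >= QUEUE_CAPACITY

        def put(self, packet) -> None:
            self.buf.append(packet)

    def distribute(packet, extra_info, queues, return_to_switch):
        # Step 2: compute the first target cache queue from the additional info.
        first = hash(extra_info) % len(queues)
        # Step 3: determine the cache status of the first target cache queue.
        if not queues[first].full():
            queues[first].put(packet)
            return first
        # Step 4: the first queue is fully loaded, so determine a second target
        # cache queue among the other, not fully loaded, cache queues.
        for i, q in enumerate(queues):
            if i != first and not q.full():
                q.put(packet)  # Step 5: distribute to the second target queue.
                return i
        # No queue can take the data: hand it back to the switch (standby path).
        return_to_switch(packet)
        return None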
  • In a second aspect, the embodiment of the present application provides a traffic migration device, which includes:
  • a data receiving module, configured to receive, by the network card, the data to be processed distributed by the switch;
  • a queue calculation module, configured to calculate, by the network card according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed;
  • a state determination module, configured to determine, by the network card, the cache status of the first target cache queue;
  • a queue determination module, configured to determine the second target cache queue according to the cache status of the other cache queues when the cache status of the first target cache queue is fully loaded, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • a data distribution module, configured to distribute, by the network card, the data to be processed to the second target cache queue.
  • In a third aspect, the embodiment of the present application also provides a traffic migration system.
  • The traffic migration system includes a switch, the network card as described in the first aspect, and a target server.
  • The network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes; each processing process processes the data cached in its corresponding cache queue.
  • In a fourth aspect, the embodiment of the present application also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, the traffic migration method described in the first aspect of the present application is implemented.
  • the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the traffic migration method as described in the first aspect of the present application is implemented.
  • FIG. 1 is a schematic flow chart of a traffic migration method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of traffic migration in a target server according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of migrating traffic to a standby server according to an embodiment of the present application;
  • FIG. 4 is another schematic flowchart of a traffic migration method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a traffic migration device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a traffic migration system according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • Words such as “optional” or “exemplary” are used to mean serving as an example or illustration. Any embodiment or design described as “optional” or “exemplary” in the embodiments of the present application shall not be interpreted as more preferred or more advantageous than other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete manner.
  • Figure 1 is a schematic flow chart of the traffic migration method of the embodiment of the present application.
  • The method can be executed by the traffic migration device provided by the embodiment of the present application.
  • the device can be implemented by software and/or hardware.
  • the device may be integrated in an electronic device, and the electronic device may be a network card.
  • the integration of the traffic migration device in the network card is used as an example for illustration.
  • As shown in Fig. 1, the traffic migration method is applied to a traffic migration system.
  • The traffic migration system includes a switch, a network card, and a target server.
  • The network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes; each processing process processes the data cached in its corresponding cache queue. The method includes the following steps:
  • the network card receives the data to be processed distributed by the switch.
  • The network card is based on a multi-core, multi-threaded network processor architecture and supports features such as virtual switching.
  • it may be a plug-in network card, or may be a snap-in network card, etc., which is not limited in this embodiment.
  • a switch is a device that performs an information exchange function in a communication system.
  • it may be an Ethernet switch, or may be a fiber optic switch, which is not limited in this embodiment.
  • The data to be processed may be accessed from outside, for example from an operator, or may be generated internally, for example by building a network and directing input traffic into it.
  • The network card is attached to the target server, and each target server is equipped with a network card.
  • the target server is mainly set up for packet acceleration processing and traffic management.
  • the target server generally uses a general-purpose x86 server as its basic form, and business-related applications use the basic hardware resources provided by it to run normally.
  • The hardware resources may include a central processing unit (CPU), dynamic random access memory (DRAM), a hard disk drive (HDD), and the like, which is not limited in this embodiment.
  • the switch evenly distributes the data traffic to multiple target servers, and then the target servers receive the data traffic through the network card.
  • the network card calculates the first target cache queue to which the data to be processed should be distributed according to the additional information of the data to be processed.
  • The additional information includes at least one of: input port information, source Media Access Control (MAC) address, destination MAC address, network type, network identification number, source Internet Protocol (IP) address, destination IP address, IP port information, source Transmission Control Protocol (TCP) port information, and destination TCP port information.
  • the cache queue is a buffer for storing pre-allocated data traffic.
  • A target server loads one network card.
  • A single cache queue may not meet the performance requirements, so multiple cache queues are divided; one cache queue is determined from the multiple cache queues as the first target cache queue according to the additional information of the data to be processed, and the remaining cache queues are the other cache queues.
  • the network card determines the cache status of the first target cache queue.
  • Each processing process handles one cache queue. If the processing process corresponding to a cache queue is killed by the operating system, for example because of a code error or an out-of-bounds memory access, the processing process becomes abnormal and a backlog builds up in the cache queue.
  • The network card detects in time that the cache queue is full; if the processing process continues to process normally, the cache status shows as not full. That is, the status of a cache queue reflects the status of its corresponding processing process.
  • The network card determines whether the cache status of the first target cache queue is such that it is ready to store data traffic.
  • When the cache status of the first target cache queue is fully loaded, the network card determines the second target cache queue according to the cache states of the other cache queues, where the other cache queues are the cache queues other than the first target cache queue among the multiple cache queues.
  • Among the multiple cache queues, the remaining cache queues are called the other cache queues, and a second target cache queue is determined from them according to their cache states.
  • Figure 2 is a schematic diagram of traffic migration within the target server according to the embodiment of the present application; there are n cache queues in total. Assume the first target cache queue is cache queue 1. When the cache status of the first target cache queue is fully loaded, the first target cache queue can no longer receive new data traffic and the corresponding processing process can no longer keep up, resulting in a backlog in the first target cache queue.
  • In this case, the same method as in S130 needs to be used again on the other cache queues:
  • the second target cache queue is determined, according to the cache status of the other cache queues, among cache queues 2 to n, excluding the first target cache queue.
  • As shown by the black arrow in Figure 2, the network card first decides to distribute the data to be processed to cache queue 1.
  • When cache queue 1 is fully loaded, it cannot store any more data, and the network card judges that cache queue 2 and cache queue n can still cache data traffic.
  • The data to be processed is therefore cached in cache queue 2 and cache queue n respectively, as shown by the black boxes in cache queue 2 and cache queue n in Figure 2.
  • Relieving the full load of a single cache queue mainly depends on the daemon program of the data processing node, which continuously monitors the running status of the program; after a full load occurs, for example because of the out-of-bounds memory error mentioned above, the program can be restarted.
  • Relieving a fully loaded cache queue may also depend on the system. For example, if the server is powered off, its port is disconnected from the switch; when power is restored, the server automatically executes the program startup script after booting, restoring normal operation of the target server. Once the switch detects that the processing processes of the cache queues have resumed, it forwards the traffic back.
  • When the cache status of the first target cache queue is fully loaded, the second target cache queue can be determined from the other cache queues. For example, if one of the other cache queues has only two of its eight data-flow storage slots occupied, that cache queue can be determined as the second target cache queue, since there is no backlog in it.
  • the network card distributes the data to be processed to the second target cache queue.
  • The traffic migration system may also include a standby server, and the method then also includes: when the network card cannot determine a second target cache queue according to the cache states of the other cache queues, the network card returns the data to be processed to the switch, so that the switch distributes the pending data to the standby server.
  • the standby server may be the same server as the target server, or may be a different server, which is not limited in this embodiment.
  • That the network card cannot determine the second target cache queue according to the cache status of the other cache queues means that, when the second target cache queue is being determined, the remaining cache queues are all fully loaded and can no longer receive the data to be processed. In this case, the incoming data traffic is returned to the switch, and the switch distributes the data to be processed to the standby server for further processing.
  • Figure 3 is a schematic diagram of migrating traffic to a standby server according to the embodiment of the present application. When all target cache queues in the target server are fully loaded, or the instantaneous data traffic far exceeds the performance limit of the target server, the network card does not discard the subsequently received traffic, but forwards the traffic to the standby server through the switch.
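  • A minimal sketch of this fallback path follows, reusing CacheQueue from the earlier sketch; the Nic class and its try_distribute method are hypothetical names for the behaviour described above.

    class Nic:
        def __init__(self, queues):
            self.queues = queues

        def try_distribute(self, packet) -> bool:
            # Returns False when every cache queue is fully loaded.
            for q in self.queues:
                if not q.full():
                    q.put(packet)
                    return True
            return False

    def switch_forward(packet, target_nic: Nic, standby_nic: Nic) -> None:
        # The NIC does not drop traffic it cannot cache; the data goes back to
        # the switch, which distributes it to the standby server.
        if not target_nic.try_distribute(packet):
            standby_nic.try_distribute(packet)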
  • The technical solution of the embodiment of the present application provides a traffic migration method: the network card receives the data to be processed distributed by the switch and calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed; the network card determines the cache status of the first target cache queue; when the cache status of the first target cache queue is fully loaded, the network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; and the network card distributes the data to be processed to the second target cache queue.
  • In this way, the network card uses its own computing power to migrate traffic, achieving a flexible load-balancing strategy, and the traffic received by the data nodes is more stable and balanced. That is, in the embodiment of the present application, the network card installed on the device can be used to distribute traffic evenly, and its high-performance programmable capability can be used to migrate sudden traffic. When a processing node (processing process) is abnormal, traffic can be migrated through the network card installed on the device, thereby avoiding traffic being discarded, ensuring the normal operation of the business, and improving the user's business experience.
  • The following embodiment details how the network card performs traffic migration according to the data to be processed.
  • Fig. 4 is a schematic flowchart of another traffic migration method according to an embodiment of the present application. As shown in Fig. 4, the method includes:
  • the network card receives the data to be processed distributed by the switch.
  • the switch can maintain a data distribution table and distribute the data to be processed according to the data distribution table.
  • the entries in the data distribution table are generally 256, 512 or 1024.
  • the data distribution table maintained in the switch can be shown in Table 1 below:
  • Assume the table has 1024 entries and traffic needs to be distributed to 16 network ports (network cards). For a piece of data to be processed, the switch parses it to obtain the source IP and destination IP, both 32-bit values, which must be reduced to correspond to a table entry (1024 entries are represented by 10 bits).
  • The usual processing is XOR, the exclusive-or operation, namely:
  • source IP xor destination IP yields a 32-bit value, which is then reduced to a 10-bit hash value.
  • The data distribution table maintained in the switch is queried according to the obtained hash value to determine to which network port the data to be processed should be distributed, and the data is then distributed to that network port. For example, if the hash value the switch calculates for some data according to the above rules is 15, looking up Table 1 determines that the data should be distributed to network port 15.
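  • A minimal sketch of this switch-side lookup follows; the application does not spell out how the 32-bit XOR result is reduced to 10 bits, so the folding below (XOR-ing the value onto itself) and the round-robin table contents are assumptions.

    TABLE_ENTRIES = 1024          # 1024 entries correspond to a 10-bit index
    NUM_PORTS = 16

    # Hypothetical Table 1: entry index -> network port, filled round-robin.
    distribution_table = [i % NUM_PORTS for i in range(TABLE_ENTRIES)]

    def entry_index(src_ip: int, dst_ip: int) -> int:
        v = src_ip ^ dst_ip       # source IP xor destination IP: a 32-bit value
        v ^= v >> 16              # assumed fold: 32 bits -> 16 bits
        v ^= v >> 10              # assumed fold: mix the upper bits into the low 10
        return v & (TABLE_ENTRIES - 1)

    def port_for(src_ip: int, dst_ip: int) -> int:
        return distribution_table[entry_index(src_ip, dst_ip)]

    # If entry_index(...) is 15, the data goes to distribution_table[15], i.e.
    # network port 15 with this round-robin table, matching the example above.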
  • the network card performs hash calculation on the additional information of the data to be processed to obtain a hash value.
  • Hash calculation, also called a hash algorithm, transforms an input of arbitrary length into a fixed-length output; the output is the hash value.
  • This conversion is a compressive mapping: the space of hash values is usually much smaller than the space of inputs, different inputs may hash to the same output, and the input cannot be uniquely determined from the hash value. Simply put, it is a function that compresses a message of arbitrary length into a fixed-length message digest.
  • the hash value can be understood as the identity of a piece of traffic data.
  • The hash algorithm may be, for example, the MD5 message-digest algorithm (MD5) or the Secure Hash Algorithm 1 (SHA-1), which is not limited in this embodiment.
  • The network card performs the hash calculation on the additional information of the data to be processed, such as at least one of input port information, source media access control MAC address, destination MAC address, network type, network identification number, source Internet Protocol IP address, destination IP address, IP port information, source transmission control protocol TCP port information, and destination TCP port information, mapping a long piece of data to a short, fixed-length piece of data.
  • This small piece of data is the hash value of the large data and serves as its identity: once the large data changes, even slightly, its hash value changes as well.
  • a data distribution table may also be maintained in the network card, and the data distribution table maintained in the network card may be shown in Table 2 below:
  • After the network card calculates the hash value of the data to be processed according to the above rules, it queries the data distribution table maintained in the network card to determine the first target cache queue to which the data should be distributed. For example, for certain data to be processed, if the hash value calculated by the network card is 2, querying the data distribution table maintained in the network card determines that the data needs to be distributed to cache queue 2.
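  • A sketch of this NIC-side selection might look as follows; MD5 is used here only because the text names it as one possible hash algorithm, and the table contents and field tuple are hypothetical.

    import hashlib

    NUM_QUEUES = 8                # hypothetical number of cache queues

    # Hypothetical Table 2: hash value -> cache queue index.
    nic_distribution_table = {h: h % NUM_QUEUES for h in range(1024)}

    def first_target_queue(extra_info: tuple) -> int:
        # Hash the additional information (e.g. MACs, IPs, TCP ports) and
        # reduce the digest to a key of the 1024-entry table.
        digest = hashlib.md5(repr(extra_info).encode()).digest()
        h = int.from_bytes(digest[:4], "big") % 1024
        return nic_distribution_table[h]

    # extra_info = (src_mac, dst_mac, src_ip, dst_ip, src_port, dst_port)
    # A hash value of 2 maps to cache queue 2, as in the example above.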
  • the network card determines whether the first target cache queue is fully loaded.
  • A cached-data count equal to 8 indicates a fully loaded state, and a count less than 8 indicates a non-fully-loaded state.
  • the network card distributes the data to be processed to the first target cache queue.
  • The data distributed to the first target cache queue is handled by a processing process of the target server.
  • The non-fully-loaded state means the processing process is handling the data traffic of the first target cache queue normally and the queue can continue to buffer data traffic; in this case, the data to be processed can be distributed to the first target cache queue for processing.
  • For example, if the total number of slots in the first target cache queue is 8 and 4 are currently occupied (a non-full state), and the network card has 4 data flows that need to be distributed to the queue, there is buffer space in the first target cache queue and the 4 pending data flows can be distributed to it.
  • the network card determines a second target cache queue according to cache states of other cache queues, where the other cache queues are cache queues other than the first target cache queue among the multiple cache queues.
  • When the cache status of the first target cache queue is fully loaded, that is, when the processing process has a backlog while handling the data traffic of the cache queue, the network card promptly recalculates the hash for the data flows that would subsequently be distributed to the first target cache queue and distributes them to the other cache queues; that is, the second target cache queue is determined among the other cache queues, and the original distribution method is restored after the full load of the cache queue is relieved.
  • Relieving the full load of the cache queue mainly depends on the daemon program of the target server, which continuously monitors the running status of the program; the full load can be relieved by restarting the program.
  • the network card distributes the data to be processed to the second target cache queue.
  • The network card determining the second target cache queue according to the cache status of the other cache queues includes: the network card searches the other cache queues for a cache queue whose cache status is not fully loaded and whose cached data amount is less than a preset threshold, and takes that queue as the second target cache queue.
  • The preset threshold is a theoretical upper limit. In many cases it is affected by traffic or hardware, so it may also be identified as a dynamic upper limit: if, within a period of time, the value stays at the limit for a certain length of time, the preset threshold can be considered reached.
  • The network card judges, from the cache states of the other cache queues (those other than the first target cache queue), whether each queue is fully loaded: with 8 slots per cache queue, fewer than 8 occupied is judged not fully loaded, and exactly 8 is judged fully loaded. For a queue judged not fully loaded, the network card determines how much of the data to be processed it can cache.
  • If the cacheable amount of the non-fully-loaded cache queue is 4 and the amount of data to be processed is less than or equal to 4, that queue is determined as the second target cache queue; if the amount of data to be processed is greater than the cacheable amount of 4, the network card continues to determine the second target cache queue among the remaining other cache queues.
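  • The selection just described might be sketched as follows, reusing the CacheQueue and QUEUE_CAPACITY definitions from the earlier sketch; treating the preset threshold as a static cached-amount bound is an assumption (the text notes it may also be identified dynamically).

    def second_target_queue(queues, first, pending, threshold=8):
        # Look for a queue that is not fully loaded, is below the preset
        # threshold, and has enough free slots for the pending data amount.
        for i, q in enumerate(queues):
            if i == first:
                continue                     # skip the first target cache queue
            cached = len(q.buf)
            if cached >= QUEUE_CAPACITY:     # equal to 8: fully loaded, skip
                continue
            if cached >= threshold:          # at or above the preset threshold
                continue
            free = QUEUE_CAPACITY - cached   # e.g. 4 cacheable slots
            if pending <= free:              # the pending amount fits (<= 4)
                return i
        return None                          # nothing fits: fall back to the switch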
  • the technical solutions of the embodiments of the present application provide a traffic migration method, wherein the network card receives the data to be processed distributed by the switch.
  • The network card performs the hash calculation on the additional information of the data to be processed to obtain the hash value and determines the first target cache queue according to the hash value. When the cache status of the first target cache queue is not fully loaded, the network card distributes the data to be processed to the first target cache queue; the network card determines the cache status of the first target cache queue; when the cache status of the first target cache queue is fully loaded, the network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; and the network card distributes the data to be processed to the second target cache queue. That is, in the embodiment of the present application, the hash value is used to determine the first target cache queue, thereby implementing traffic migration and ensuring that the traffic received by the data nodes is more stable and balanced.
  • FIG. 5 is a schematic structural diagram of a traffic migration device according to an embodiment of the present application.
  • the traffic migration device may be a network card.
  • As shown in FIG. 5, the traffic migration device includes a data receiving module 510, a queue calculation module 520, a status determination module 530, a queue determination module 540, and a data distribution module 550, wherein:
  • the data receiving module 510 is configured to receive the data to be processed distributed by the switch;
  • the queue calculation module 520 is configured to calculate the first target cache queue to which the data to be processed should be distributed according to the additional information of the data to be processed;
  • the state determination module 530 is configured to determine the cache state of the first target cache queue
  • the queue determination module 540 is configured to determine the second target cache queue according to the cache status of the other cache queues when the cache status of the first target cache queue is fully loaded, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • the data distribution module 550 is configured to distribute the data to be processed to the second target cache queue.
  • The additional information includes at least one of: input port information, source media access control (MAC) address, destination MAC address, network type, network identification number, source Internet Protocol (IP) address, destination IP address, IP port information, source Transmission Control Protocol (TCP) port information, and destination TCP port information.
  • the queue calculation module 520 includes:
  • the hash calculation unit is configured to perform hash calculation on the additional information of the data to be processed to obtain a hash value
  • the queue determination unit is configured to determine the first target cache queue according to the hash value.
  • the data distribution module 550 is also set to:
  • the data to be processed is distributed to the first target cache queue.
  • the queue determination module 540 is set to:
  • search the other cache queues for a cache queue whose cache status is not fully loaded and whose cached data amount is less than a preset threshold, to obtain the second target cache queue.
  • the traffic migration system also includes a standby server, and the device also includes:
  • the data rollback module is configured to roll back the data to be processed to the switch when the second target cache queue cannot be determined according to the cache status of other cache queues, so that the switch distributes the data to be processed to the standby server.
  • a traffic migration device provided in an embodiment of the present application can execute the traffic migration method provided in any embodiment of the present application, and has corresponding functional modules for executing the method.
  • FIG. 6 is a schematic structural diagram of a traffic migration system according to an embodiment of the present application.
  • the traffic migration system includes a network card 601 , a switch 602 , a target server 603 and a standby server 604 .
  • the network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes, and each processing process processes the data cached in the corresponding cache queue.
  • The network card 601 is configured to receive the data to be processed distributed by the switch 602 and to calculate, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed. The network card 601 further determines the cache status of the first target cache queue. When the cache status of the first target cache queue is fully loaded, the network card 601 determines the second target cache queue according to the cache states of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues. The network card 601 then distributes the data to be processed to the second target cache queue, and the target server 603 continues to process the data in the cache queues. When the target server 603 fails and the smart network card cannot be started, the data is distributed to the standby server 604.
  • the switch 602 is set to: distribute the data to be processed to the network card 601; when the network card 601 cannot cache the data to be processed, the switch 602 receives the data that the network card 601 cannot process, and distributes the data to the backup server 604.
  • the target server 603 is set to: attach the network card 601 to the target server 603, and the processing processes corresponding to the multiple cache queues on the target server 603 process the data traffic in the cache queues.
  • The standby server 604 is configured to receive and process the pending data traffic when all cache queues of the target server 603 are fully loaded or the instantaneous traffic far exceeds the performance limit of the target server 603; the processing may, like that of the target server 603, be performed in the form of a network card, which is not limited in this embodiment.
  • the traffic migration system includes a network card, a switch, a target server and a backup server.
  • the network card is installed on the target server, and the target server has multiple cache queues and corresponding multiple processing processes, and each processing process processes the data cached in the corresponding cache queue.
  • The network card receives the data to be processed distributed by the switch and calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed; the network card determines the cache status of the first target cache queue; when the cache status of the first target cache queue is fully loaded, the network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues; and the network card distributes the pending data to the second target cache queue. That is, in the embodiment of the present application, the network card uses its own computing power to migrate traffic, achieving a flexible load-balancing strategy, and the traffic received by the data nodes is more stable and balanced.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes a processor 710, a memory 720, an input device 730, and an output device 740; the number of processors 710 in the electronic device may be at least one, as shown in FIG. 7 Take a processor 710 as an example; the processor 710, the memory 720, the input device 730 and the output device 740 in the electronic device may be connected through a bus or in other ways. In FIG. 7, the connection through a bus is taken as an example.
  • The memory 720, as a computer-readable storage medium, can be configured to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the traffic migration method in the embodiment of the present application (for example, the data receiving module 510, queue calculation module 520, status determination module 530, queue determination module 540, and data distribution module 550 in the traffic migration device). The processor 710 executes the various functional applications and data processing of the electronic device by running the software programs, instructions, and modules stored in the memory 720, thereby implementing the above traffic migration method.
  • the memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, and the like.
  • the memory 720 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • memory 720 may include memory located remotely from processor 710, and such remote memory may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 730 may be configured to receive input numbers or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the output device 740 may include a display device such as a display screen.
  • The embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a traffic migration method, the method including:
  • the network card receives the data to be processed distributed by the switch;
  • the network card calculates, according to the additional information of the data to be processed, the first target cache queue to which the data should be distributed;
  • the network card determines the cache status of the first target cache queue;
  • when the cache status of the first target cache queue is fully loaded, the network card determines the second target cache queue according to the cache status of the other cache queues, the other cache queues being the cache queues other than the first target cache queue among the multiple cache queues;
  • the network card distributes the data to be processed to the second target cache queue.
  • The computer-executable instructions are not limited to the above method operations, and may also perform related operations in the traffic migration method provided by any embodiment of the present application.
  • In the above embodiments, the units and modules included are divided only according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a traffic migration method, apparatus and system, and an electronic device and storage medium. The method is applied to a traffic migration system. The system comprises a switch, a network card and a target server, wherein the network card is installed on the target server; the target server has a plurality of cache queues and a corresponding plurality of processing processes; and each processing process processes data cached in a corresponding cache queue. The method comprises: a network card receiving data to be processed, which is distributed by a switch; calculating, according to additional information of said data, a first target cache queue to which said data should be distributed; the network card determining a cache status of the first target cache queue; in response to the cache status of the first target cache queue being a fully loaded state, the network card determining a second target cache queue according to the cache statuses of other cache queues, wherein the other cache queues are cache queues, other than the first target cache queue, among the plurality of cache queues; and the network card distributing said data to the second target cache queue.
PCT/CN2022/103219 2021-10-28 2022-07-01 Procédé, appareil et système de migration de trafic, et dispositif électronique et support de stockage WO2023071272A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111260898.2 2021-10-28
CN202111260898.2A CN114024915B (zh) 2021-10-28 2021-10-28 一种流量迁移方法、装置、系统、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023071272A1 (fr)

Family

ID=80058061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103219 WO2023071272A1 (fr) 2021-10-28 2022-07-01 Procédé, appareil et système de migration de trafic, et dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN114024915B (fr)
WO (1) WO2023071272A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117640796A (zh) * 2024-01-03 2024-03-01 北京火山引擎科技有限公司 网络报文处理方法及设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114024915B (zh) * 2021-10-28 2023-06-16 北京锐安科技有限公司 一种流量迁移方法、装置、系统、电子设备及存储介质
CN116016092A (zh) * 2022-12-13 2023-04-25 杭州领祺科技有限公司 一种基于多线程的mqtt同步消息方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109450816A (zh) * 2018-11-19 2019-03-08 迈普通信技术股份有限公司 一种队列调度方法、装置、网络设备及存储介质
CN111131074A (zh) * 2018-10-31 2020-05-08 中移(杭州)信息技术有限公司 一种数据处理方法、装置、系统、服务器及可读存储介质
CN111371866A (zh) * 2020-02-26 2020-07-03 厦门网宿有限公司 一种处理业务请求的方法和装置
US20210133110A1 (en) * 2019-10-30 2021-05-06 International Business Machines Corporation Migrating data between block pools in a storage system
CN114024915A (zh) * 2021-10-28 2022-02-08 北京锐安科技有限公司 一种流量迁移方法、装置、系统、电子设备及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110493145B (zh) * 2019-08-01 2022-06-24 新华三大数据技术有限公司 一种缓存方法及装置
CN112463654A (zh) * 2019-09-06 2021-03-09 华为技术有限公司 一种带预测机制的cache实现方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131074A (zh) * 2018-10-31 2020-05-08 中移(杭州)信息技术有限公司 一种数据处理方法、装置、系统、服务器及可读存储介质
CN109450816A (zh) * 2018-11-19 2019-03-08 迈普通信技术股份有限公司 一种队列调度方法、装置、网络设备及存储介质
US20210133110A1 (en) * 2019-10-30 2021-05-06 International Business Machines Corporation Migrating data between block pools in a storage system
CN111371866A (zh) * 2020-02-26 2020-07-03 厦门网宿有限公司 一种处理业务请求的方法和装置
CN114024915A (zh) * 2021-10-28 2022-02-08 北京锐安科技有限公司 一种流量迁移方法、装置、系统、电子设备及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117640796A (zh) * 2024-01-03 2024-03-01 北京火山引擎科技有限公司 网络报文处理方法及设备

Also Published As

Publication number Publication date
CN114024915B (zh) 2023-06-16
CN114024915A (zh) 2022-02-08

Similar Documents

Publication Publication Date Title
WO2023071272A1 (fr) Procédé, appareil et système de migration de trafic, et dispositif électronique et support de stockage
US9632839B2 (en) Dynamic virtual machine consolidation
KR102304416B1 (ko) 대체 링크 상의 패킷의 적응형 복제에 의한 하이브리드 wan 링크의 자동 튜닝
US9866479B2 (en) Technologies for concurrency of cuckoo hashing flow lookup
CN107079060B (zh) 用于运营商级nat优化的系统和方法
US10715622B2 (en) Systems and methods for accelerating object stores with distributed caching
US8937942B1 (en) Storing session information in network devices
EP2793436B1 (fr) Architecture de plan de transmission pour routeur de contenu
US9774531B2 (en) Hash-based forwarding in content centric networks
US20140301388A1 (en) Systems and methods to cache packet steering decisions for a cluster of load balancers
WO2019237594A1 (fr) Procédé et appareil de persistance de session, dispositif informatique et support de données
WO2014101777A1 (fr) Procédé et dispositif de mise en correspondance de tables de flux, et commutateur
US20200364080A1 (en) Interrupt processing method and apparatus and server
US10089131B2 (en) Compute cluster load balancing based on disk I/O cache contents
Liu et al. Memory disaggregation: Research problems and opportunities
WO2022111313A1 (fr) Procédé de traitement de requête et système de micro-services
US20150220438A1 (en) Dynamic hot volume caching
Mendelson et al. Anchorhash: A scalable consistent hash
TW201738781A (zh) 資料表連接方法及裝置
EP3977707B1 (fr) Passerelle d'équilibreur de charge matérielle sur un matériel de commutation de grande série
US10678754B1 (en) Per-tenant deduplication for shared storage
Vardoulakis et al. Tebis: index shipping for efficient replication in lsm key-value stores
EP3685567B1 (fr) Répartition de charge du traffic basée sur la charge actuelle de la destination
WO2023029485A1 (fr) Procédé et appareil de traitement de données, dispositif informatique, et support de stockage lisible par ordinateur
CN115766729A (zh) 一种四层负载均衡的数据处理方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22885177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE