WO2023186046A1 - 一种发送报文的方法和装置 (Method and apparatus for sending packets) - Google Patents

一种发送报文的方法和装置 (Method and apparatus for sending packets)

Info

Publication number
WO2023186046A1
Authority
WO
WIPO (PCT)
Prior art keywords
physical
queue
port
queues
target
Prior art date
Application number
PCT/CN2023/085243
Other languages
English (en)
French (fr)
Inventor
梁晨
Original Assignee
阿里巴巴(中国)有限公司 (Alibaba (China) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴(中国)有限公司
Publication of WO2023186046A1 publication Critical patent/WO2023186046A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/30: Peripheral units, e.g. input or output ports
    • H04L49/70: Virtual switches
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L47/25: Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L47/50: Queue scheduling
    • H04L47/52: Queue scheduling by attributing bandwidth to queues
    • H04L47/56: Queue scheduling implementing delay-aware scheduling
    • H04L47/62: Queue scheduling characterised by scheduling criteria
    • H04L47/622: Queue service order

Definitions

  • the present application relates to the field of computers, and more specifically, to a method and device for sending messages.
  • Virtual machines (VMs).
  • The virtual network card takes the packet out of a queue used to temporarily store packets in virtual machine memory, sends it to the physical network card, and the packet reaches the physical network through one of the physical network card's multiple physical ports.
  • After the virtual network card takes a packet out of a queue, it places the packet in the virtual network card's cache area, calculates a hash value from the five-tuple carried in the packet, and uses the hash value to determine which physical port of the physical network card the packet is sent from. However, because the virtual network card reads packets from the queues blindly, a packet may be sent to an overloaded physical port, increasing that port's delay and causing traffic congestion, while a lightly loaded physical port may not yet have reached its bandwidth limit; the load on the physical ports is therefore unbalanced.
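The prior-art behavior just described (hash the five-tuple, pick a port, never consult load) can be sketched in Python. This is an illustrative reconstruction, not the patent's code; the hash function and field names are assumptions.

```python
import hashlib

def pick_port_by_five_tuple(src_ip: str, dst_ip: str,
                            src_port: int, dst_port: int,
                            protocol: str, num_ports: int) -> int:
    """Prior-art selection: hash the packet's five-tuple onto a port index.
    Per-port load is never consulted, so an overloaded port can still be
    chosen."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_ports
```

The same flow always maps to the same port, which preserves per-flow ordering but ignores congestion; this is the imbalance the application sets out to fix.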
  • This application provides a method and device for sending messages, in order to achieve load balancing of physical ports.
  • this application provides a method for sending messages.
  • The method includes: determining, from multiple physical ports of a physical network card, a target port that meets preset conditions; and determining, based on a mapping relationship between the multiple physical ports and multiple queues, a target queue corresponding to the target port.
  • One or more queues among the multiple queues correspond to one physical port among the multiple physical ports.
  • The multiple queues are used to cache packets to be sent; the packets in the target queue are sent through the target port.
  • When the virtual network card needs to send a packet, it can exclude overloaded physical ports from the multiple physical ports based on the load of each port of the physical network card and the preset conditions, determine a lightly loaded physical port as the target port, and determine the target queue corresponding to the target port based on the mapping relationship between the multiple physical ports and the multiple queues. The packet is then retrieved from the target queue and sent via the target port. Because the lightly loaded physical port is determined first, and a mapping relationship exists between physical ports and queues, the virtual network card can take the packet from the corresponding queue and send it via the lightly loaded physical port it determined. On the one hand, this increases the traffic of lightly loaded physical ports, raising their transmission rate toward the bandwidth limit.
  • On the other hand, sending packets through overloaded physical ports can be suspended, avoiding further aggravation of transmission delay and congestion.
  • In this way, the bandwidth of each physical port can reach its upper limit, transmission delay and congestion are alleviated, and load balancing between physical ports is achieved.
  • the preset condition includes: the number of accumulated packets is less than a preset threshold.
  • the preset condition includes: within a unit time, the rate at which packets enter the physical port is higher than the rate at which packets are sent out from the physical port.
  • the method also includes: obtaining mapping relationships between multiple physical ports and multiple queues.
  • the method further includes: adjusting mapping relationships between multiple physical ports and multiple queues based on the number of packets in each queue in the multiple queues.
  • this application provides a device for sending messages, including modules or units for implementing the method in the first aspect and any possible implementation of the first aspect. It should be understood that each module or unit can implement the corresponding function by executing a computer program.
  • this application provides a device for sending messages.
  • The device includes a processor coupled to a memory; the processor can execute a computer program in the memory to implement the method in the first aspect or any possible implementation of the first aspect.
  • The device for sending packets may further include a memory for storing computer-readable instructions; the processor reads the computer-readable instructions so that the device can implement the method in the first aspect or any possible implementation of the first aspect.
  • The device for sending packets may also include a communication interface, which is used for communication between the device and other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module or other type of communication interface.
  • The present application provides a chip system, which includes at least one processor for supporting the functions involved in the first aspect and any possible implementation of the first aspect, such as the determination of the target port and the target queue involved in the above method.
  • the chip system further includes a memory, the memory is used to store program instructions and data, and the memory is located within the processor or outside the processor.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • The present application provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and runnable on the processor.
  • When the processor executes the computer program, the method in the first aspect or any possible implementation of the first aspect is implemented.
  • the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • When the computer program is executed by a processor, the processor is caused to implement the method in the first aspect or any possible implementation of the first aspect.
  • Figure 1 is a schematic diagram of the network architecture provided by the embodiment of this application.
  • Figure 2 is a schematic flow chart of a method for sending packets provided by an embodiment of the present application.
  • Figure 3 is a schematic block diagram of a device for sending messages provided by an embodiment of the present application.
  • Figure 4 is another schematic block diagram of a device for sending messages provided by an embodiment of the present application.
  • Virtual machine refers to a complete computer system that has complete hardware system functions through software simulation and runs in a completely isolated environment. Everything that can be done on a physical computer can be done on a virtual machine. Virtual machines make every cloud computing user think that they have an independent hardware environment. One or more virtual machines can be built on a cloud server. Different operating systems and application layer software can be installed on each virtual machine based on the different needs of users.
  • Virtual switch is widely used in Internet services based on infrastructure as a service. Through the virtual switch running on the virtualization platform, it provides layer 2 network access and some layer 3 network functions for the virtual machines built on the server. The virtual machine connects to the network through a virtual switch, and the virtual switch uses the physical network card on the physical host as an uplink to connect to the external network. Each virtual switch contains a certain number of ports that can be used to connect to virtual or physical network cards.
  • the virtual machine monitor (hypervisor) is a software layer installed on the physical hardware, which can divide the physical machine into many virtual machines through virtualization. This allows multiple operating systems to run simultaneously on one physical hardware.
  • the hypervisor is responsible for managing and allocating system resources to virtual machines.
  • A physical network card, commonly known simply as a network card, is a piece of computer hardware designed to allow computers to communicate on a computer network.
  • a network card is a network component that works at the physical layer. It is the interface that connects computers and transmission media in a local area network. It not only realizes the physical connection and electrical signal matching with the LAN transmission medium, but also involves the sending and receiving of frames, encapsulation and unpacking of frames, media access control, data encoding and decoding, and data caching functions.
  • A processor may be, for example, a central processing unit (CPU).
  • Memory includes read-only memory (ROM) and random access memory (RAM).
  • the communication between the network card and the LAN is carried out in serial transmission through cables or twisted pairs, while the communication between the network card and the computer is carried out in parallel transmission through the I/O bus on the computer motherboard. Therefore, an important function of the network card is to perform serial/parallel conversion. Since the data rate on the network is not the same as the data rate on the computer bus, the network card will be equipped with a memory chip that caches the data.
  • a physical network card can include multiple physical ports, and packets can be sent and received through the physical ports.
  • The data rate generally refers to the data transfer rate, i.e., the speed at which information is transmitted on a communication line, measured as the number of bits transmitted per unit time (usually one second).
  • Each physical network card may include at least one physical port, and packets may be sent to the physical network via the physical port.
  • A virtual network card, also called a virtual network adapter, uses software to simulate the network environment and the network adapter.
  • a virtual network card can establish a LAN between remote computers, simulate the function of a hub, and realize the function of a virtual private network (VPN), allowing the system to recognize the software as a network card.
  • virtual NICs can include cache areas that can be used to cache data. For example, in this embodiment of the present application, the cache area can be used to cache mapping relationships between multiple physical ports and multiple queues, as well as packets obtained from the queues and about to be sent to the physical ports.
  • the virtual network card exposes the virtual network card interface to the virtual machine.
  • the function of the virtual network card can be implemented through software, hardware, or a combination of software and hardware.
  • the function of the virtual network card in the embodiment of this application can be implemented by a physical network card, which can be a board (such as a printed circuit board (PCB)) inserted into a physical device.
  • The board card contains a chip, and the chip can implement the method described in the embodiments below by executing a computer program or by using a logic circuit or integrated circuit solidified on the chip.
  • Queue: used to temporarily store communication packets between the host and the network card.
  • queues can also be divided into sending queues and receiving queues. Multiple queues can be created in the memory of each virtual machine, and different queues can be distinguished by different identifiers.
  • the queue can be regarded as the communication interface between the application layer software and the virtual network card, and the access of messages can follow the first in first out (FIFO) principle.
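As a minimal illustration of the FIFO access principle described above (a sketch, not part of the patent), a send queue between application-layer software and the virtual network card can be modeled with a double-ended queue:

```python
from collections import deque

# A send queue accessed first-in first-out: the application appends
# packets at the tail, the virtual NIC dequeues from the head.
send_queue = deque()
send_queue.append("packet-1")   # application enqueues
send_queue.append("packet-2")
first = send_queue.popleft()    # virtual NIC takes out the oldest packet
```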
  • Physical switch a network device used for electrical (optical) signal forwarding, can provide an exclusive electrical signal path for any two network nodes connected to the switch, and can transmit messages sent by the physical network card to the physical network.
  • A physical network is a network composed of various physical devices (such as hosts, routers, and switches) and media (such as optical cables, electrical cables, and twisted pairs) connected together.
  • the physical network is the underlying network carried by the Internet and is the first layer in the seven-layer architecture of the open system interconnection reference model (open system interconnect, OSI).
  • OSI open system interconnect
  • OSI provides a functional structure framework for open interconnected information systems. From low to high, they are: physical layer, data link layer, network layer, transport layer, session layer, presentation layer and application layer.
  • Figure 1 is a schematic diagram of a network architecture suitable for embodiments of this application.
  • the network architecture can include: virtual machines, virtual switches and physical switches. Among them, the virtual machine and the virtual switch are connected through the virtual network card on the virtual machine and the port on the virtual switch, and the virtual switch and the physical switch are connected through the physical network card.
  • One or more virtual machines are built on the server. As shown in Figure 1, virtual machine 1 and virtual machine 2 can be built. Different application layer software is installed on virtual machine 1 and virtual machine 2. Messages can be generated and sent through application layer software.
  • each virtual machine has created four queues, namely queue 0, queue 1, queue 2 and queue 3, which are used to temporarily store packets sent by the application layer software in the virtual machine.
  • each virtual machine can have at least one virtual network card. As shown in Figure 1, virtual machine 1 has one virtual network card, and virtual machine 2 has two virtual network cards. Each virtual network card has a corresponding buffer area for temporarily storing packets taken out of the queue.
  • a virtual switch includes multiple ports that can be used to connect to a virtual network card or a physical network card to implement multi-layer data forwarding. As shown in Figure 1, packets sent by the virtual network card can be sent to the physical network card through the port of the virtual switch.
  • a physical network card can include multiple physical ports. As shown in Figure 1, the physical network card may include three ports, namely physical port 0, physical port 1 and physical port 2. Messages sent to the physical network card can be sent to the physical switch through the physical port on the physical network card, and then the message is sent to the physical network through the physical switch. It should be understood that the number of physical ports of each physical network card may be the same or different.
  • hypervisor is responsible for managing virtual machine 1 and virtual machine 2, and allocating system resources to virtual machine 1 and virtual machine 2.
  • the virtual network card can take out the message from the queue according to the FIFO principle, send the message through the port of the virtual switch to the physical port of the physical network card, and send the message through a certain physical port. In this way, the packet can be sent to the physical network via the physical switch.
  • After the virtual network card takes a packet out of the queue, it places the packet in the cache area and uses the five-tuple carried in the packet to calculate a hash value, which determines the physical port of the physical network card from which the packet is sent.
  • Because the virtual network card blindly fetches packets from the queue, which physical port a packet will be sent to can only be determined after the packet has been taken out and the hash value calculated. If the packet turns out to be destined for an overloaded physical port (suppose a large number of packets already waiting at physical port 1 exceeds the carrying capacity of physical port 1), traffic congestion may result and the delay of the overloaded physical port may increase.
  • this application provides a message sending method.
  • When the virtual network card needs to send a packet, it can exclude overloaded physical ports from the multiple physical ports according to preset conditions and determine a lightly loaded physical port as the target port.
  • The target queue corresponding to the target port is determined based on the mapping relationship between the multiple physical ports and the multiple queues, and the packet is taken out of the target queue and sent through the target port. Because the lightly loaded physical port is determined first, and a mapping relationship exists between physical ports and queues, the virtual network card can take the packet from the corresponding queue and send it via the determined lightly loaded physical port. On the one hand, this increases the traffic of lightly loaded physical ports, raising their transmission rate toward the bandwidth limit.
  • Figure 2 is a schematic flow chart of a method for sending packets provided by an embodiment of the present application. It should be understood that the method 200 shown in Figure 2 can be executed by a virtual network card; by a physical device (such as a server) that can provide the function of a virtual network card; by a component (such as a chip or chip system) configured in a physical device; or by a module that can realize part or all of the virtual network card's functions.
  • the physical device may be a server, for example.
  • the physical device may implement the functions performed by the virtual network card in the following embodiments by executing computer programs and other methods to provide services to users to achieve load balancing of the physical ports.
  • the method 200 may include steps 201 to 203. Each step in the method 200 shown in Figure 2 is described in detail below.
  • Step 201 Determine a target port whose load condition meets preset conditions from multiple physical ports of the physical network card.
  • the virtual network card can collect statistics on the load of each physical port of the physical network card, and use preset conditions to first determine the lightly loaded physical port as the target port.
  • the number of target ports can be one or more.
  • the preset conditions include: the number of accumulated packets is less than the preset threshold.
  • The target port is a physical port at which the number of accumulated packets is less than the preset threshold.
  • the preset threshold is a critical value used to determine whether a physical port is a heavily loaded port or a lightly loaded port.
  • Each physical port can correspond to a preset threshold, and the preset thresholds of each physical port can be the same or different. It should be understood that those skilled in the art can set specific values of the preset threshold according to actual needs, and this application does not limit this.
  • the virtual network card can count the number of packets accumulated at each physical port of the physical network card, and combine it with the preset threshold of each physical port to determine whether the physical port is overloaded or lightly loaded.
  • When the number of packets accumulated at a physical port is greater than or equal to the preset threshold, it is a heavily loaded port; when the number is less than the preset threshold, it is a lightly loaded port. The physical ports whose accumulated packet count is below the preset threshold are selected, i.e., the lightly loaded ports are used as target ports.
  • the preset threshold of physical port 0 is 100 packets
  • the preset threshold of physical port 1 is 250 packets
  • the preset threshold of physical port 2 is 250 packets.
  • Suppose the virtual network card finds that 80 packets have accumulated at physical port 0, 300 packets at physical port 1, and 200 packets at physical port 2. Physical port 0 and physical port 2 can therefore be used as target ports.
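The backlog-threshold condition and the worked example above can be written out as a short Python sketch (illustrative names and data layout, not the patent's implementation):

```python
def target_ports_by_backlog(backlog: dict, thresholds: dict) -> list:
    """A port is lightly loaded (a target port) when its accumulated
    packet count is strictly below its per-port preset threshold."""
    return [port for port, count in backlog.items()
            if count < thresholds[port]]

# The worked example from the text: thresholds 100/250/250 packets,
# backlogs 80/300/200 packets -> ports 0 and 2 qualify as targets.
backlog = {0: 80, 1: 300, 2: 200}
thresholds = {0: 100, 1: 250, 2: 250}
```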
  • the preset condition includes: within a unit time, the rate at which packets enter the physical port is higher than the rate at which packets are sent out from the physical port.
  • The target port is a physical port at which, per unit time, the rate at which packets enter the port is higher than the rate at which packets are sent out of it.
  • The virtual network card can determine whether a physical port is heavily or lightly loaded from the relationship between the rate at which packets enter the port and the rate at which packets are sent out of it. When the entering rate is lower than the sending rate, port delay occurs and the port is a heavily loaded port; when the entering rate is higher than the sending rate, there is no port delay and the port is a lightly loaded port, so such ports can be used as target ports. It should be understood that when the entering rate equals the sending rate, the physical port is in a critical state between heavy and light load and is not used as a target port.
  • Suppose the rate at which packets enter physical port 0 is 1500 bits per second (bps) and the rate at which packets are sent out of physical port 0 is 1000 bps; packets enter physical port 1 at 2000 bps and are sent out at 2200 bps; packets enter physical port 2 at 1700 bps and are sent out at 900 bps. Physical port 0 and physical port 2 can therefore be used as target ports.
  • The bit rate refers to the number of bits transmitted per unit time.
  • the number of target ports determined by the virtual network card can be one or multiple.
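The rate-based condition, with the worked example above (1500/1000, 2000/2200, and 1700/900 bps), can likewise be sketched. The function name and data layout are illustrative; the comparison follows the text's stated condition, including the exclusion of the equal-rate critical state.

```python
def target_ports_by_rate(rates_bps: dict) -> list:
    """Per the text's condition: a port is a target port when the rate at
    which packets enter it exceeds the rate at which packets are sent out
    of it; equal rates are a critical state and are excluded."""
    return [port for port, (rate_in, rate_out) in rates_bps.items()
            if rate_in > rate_out]

# Worked example: port 0 (1500 in / 1000 out) and port 2 (1700 / 900)
# qualify; port 1 (2000 / 2200) does not.
rates = {0: (1500, 1000), 1: (2000, 2200), 2: (1700, 900)}
```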
  • Step 202 Determine the target queue corresponding to the target port based on the mapping relationship between multiple physical ports and multiple queues.
  • One or more queues in the plurality of queues correspond to one physical port in the plurality of physical ports, and the plurality of queues are used to cache messages to be sent. That is, each physical port can correspond to one or more queues.
  • The mapping relationship between physical ports and queues may specifically be a mapping between physical port numbers and queue identifiers (IDs).
  • The following exemplifies two ways of expressing the mapping relationship.
  • If the queue IDs of each virtual machine differ from each other (for example, the queue IDs in virtual machine 2 are queue 0, queue 1, queue 2, and queue 3, while those in virtual machine 1 are queue 4, queue 5, queue 6, and queue 7), a mapping relationship between queue ID and physical port number can be established directly.
  • The queue IDs in different virtual machines may instead be the same; for example, the queue IDs in one virtual machine are queue 0, queue 1, queue 2, and queue 3, and the queue IDs in another virtual machine are also queue 0, queue 1, queue 2, and queue 3.
  • In that case, different IDs can be set for different virtual machines; for example, one virtual machine's ID is set to virtual machine 1 and another's to virtual machine 2. A queue is then identified by the combination of virtual machine ID and queue ID, so the mapping relationship can be established between (virtual machine ID, queue ID) and physical port number.
  • the virtual network card can determine the target queue corresponding to the target port based on the created mapping relationship. For example, for virtual machine 2, the created mapping relationship is: physical port 0 corresponds to queue 0 and queue 3, physical port 1 corresponds to queue 1, and physical port 2 corresponds to queue 2. Assume that the target ports determined by the virtual network card are physical port 0 and physical port 2, then the target queues corresponding to physical port 0 are queue 0 and queue 3, and the target queue corresponding to physical port 2 is queue 2.
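The lookup in the example above (physical port 0 maps to queues 0 and 3, port 1 to queue 1, port 2 to queue 2) amounts to a dictionary lookup. A sketch with assumed names:

```python
# Mapping from the text's example for virtual machine 2:
# physical port 0 -> queues 0 and 3; port 1 -> queue 1; port 2 -> queue 2.
port_to_queues = {0: [0, 3], 1: [1], 2: [2]}

def target_queues(target_ports: list, mapping: dict) -> dict:
    """Collect, for each target port, every queue mapped to it."""
    return {port: mapping[port] for port in target_ports}
```

With target ports 0 and 2 (as determined in step 201), the target queues are queues 0 and 3 for port 0 and queue 2 for port 2, matching the text's example.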
  • the number of target ports can be one or multiple. Based on the above mapping relationship, the number of target queues corresponding to the target port can also be one or more.
  • the method also includes: obtaining mapping relationships between multiple physical ports and multiple queues.
  • the virtual network card can obtain the mapping relationship in the following two ways:
  • One possible implementation method is that the virtual network card obtains a preconfigured mapping relationship.
  • mapping relationships between multiple queues and multiple physical ports can be configured based on the queues in the virtual machine and the physical ports in the physical network card, and saved locally.
  • When the virtual network card needs to use the mapping relationship, it can be obtained locally. This method allows the virtual network card to quickly determine the target queue of the target port, increasing the speed of packet sending.
  • the virtual network card can establish a mapping relationship based on the remainder of the queue ID divided by the number of physical ports.
  • the queues that can participate in establishing the mapping relationship are queues that temporarily store messages, and the empty queues that do not temporarily store messages in the queues do not need to participate in the establishment of the mapping relationship. For example, when queue 3 is an empty queue, you can only establish the mapping relationship between queue 0 to queue 2 and physical port 0 to physical port 2, but not establish the mapping relationship between queue 3 and physical ports.
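The remainder-based construction described above, with empty queues excluded, can be sketched as follows (an illustrative reading of the text, assuming the per-queue backlog is known):

```python
def build_mapping(queue_backlogs: dict, num_ports: int) -> dict:
    """Map each non-empty queue to port (queue_id % num_ports), as the
    text suggests; queues with no pending packets are left unmapped."""
    mapping = {}
    for queue_id, pending in queue_backlogs.items():
        if pending > 0:                          # skip empty queues
            port = queue_id % num_ports
            mapping.setdefault(port, []).append(queue_id)
    return mapping

# Queue 3 is empty in the text's example, so only queues 0 to 2 are
# mapped onto physical ports 0 to 2.
```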
  • Another possible implementation is for the virtual network card to create the mapping relationship temporarily.
  • the virtual network card can also temporarily create a mapping relationship based on the queue that currently has messages to be sent and the physical port in the physical network card when sending a message.
  • the mapping relationship created in this way is more in line with the current actual situation and can more effectively achieve load balancing of physical ports.
  • the method further includes: adjusting mapping relationships between multiple physical ports and multiple queues based on the number of packets in each queue in the multiple queues.
  • One possible situation is adding a mapping relationship. After the mapping relationship has been created, if the virtual network card finds a new queue in the memory of a virtual machine, it can count the load on each physical port of the physical network card, determine the physical port with the smallest load, and create a mapping relationship between the new queue and that physical port, thereby adjusting the mapping relationship. Alternatively, the virtual network card can still establish the mapping between the new queue and a physical port using the remainder of the queue ID divided by the number of physical ports.
  • The new queue is a queue that needs to send packets and in which packets are temporarily stored. It can be a newly added queue; for example, on the basis of existing queues 0 to 4, an additional queue 5 is created, and packets are temporarily stored in queue 5, so queue 5 is a newly added queue.
  • The new queue can also be an existing queue that was previously empty. For example, queue 3 was previously an empty queue and had no mapping relationship with a physical port; at a certain moment, packets that need to be sent are temporarily stored in queue 3, and the virtual network card can then treat queue 3 as a new queue and establish a mapping relationship with a physical port.
  • mapping relationship Another possible situation is to cancel the mapping relationship. After the mapping relationship has been created, if the virtual network card finds that a queue continues to be empty within a preset time, the mapping relationship between the queue and the corresponding physical port can be released.
  • Another possible situation is to change the mapping relationship between queues and physical ports.
  • the virtual network card After establishing the mapping relationship between the lightly loaded physical port and the queue, the virtual network card sends the packets in the queue corresponding to the lightly loaded physical port to the lightly loaded physical port one by one. If after a period of time, the lightly loaded physical port If the number of packets accumulated at the overloaded physical port increases and the port becomes a heavily loaded physical port, a lightly loaded physical port is redetermined from multiple physical ports, and a mapping relationship is established between the queue and the redetermined physical port. For example, when lightly loaded physical port 1 corresponds to queue 1, the virtual network card sends the packets in queue 1 to physical port 1.
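The first two adjustment cases above, adding a mapping for a new queue and releasing the mapping of an empty queue, can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation; all function and variable names (`adjust_mappings`, `port_loads`, and so on) are hypothetical, and the "remained empty for a preset time" check is simplified to "currently empty":

```python
def adjust_mappings(mapping, queue_lengths, port_loads):
    """Adjust the queue -> physical-port mapping from per-queue packet counts.

    mapping:       dict queue_id -> port_id (the current mapping)
    queue_lengths: dict queue_id -> packets currently buffered in the queue
    port_loads:    dict port_id  -> packets accumulated at the port
    """
    # Case 1 (add): a queue that holds packets but has no mapping is a
    # "new" queue; map it to the least-loaded physical port.
    for q, n in queue_lengths.items():
        if n > 0 and q not in mapping:
            mapping[q] = min(port_loads, key=port_loads.get)
    # Case 2 (release): a queue that has stayed empty loses its mapping
    # (the preset observation time is simplified to "currently empty").
    for q in [q for q in mapping if queue_lengths.get(q, 0) == 0]:
        del mapping[q]
    return mapping

# Queue 2 is new and port 0 carries the smallest backlog, so queue 2 is
# mapped to port 0; queue 1 is empty, so its mapping is released.
adjusted = adjust_mappings(mapping={0: 0, 1: 1},
                           queue_lengths={0: 5, 1: 0, 2: 3},
                           port_loads={0: 80, 1: 300, 2: 200})
```

The third case, remapping a queue whose port has become heavily loaded, would reuse the same least-loaded-port selection shown in case 1.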
  • Step 203: send the packets in the target queue through the target port.
  • After determining the target queue corresponding to the target port, the virtual network card can take packets out of the target queue, deliver them to the corresponding target port, and send them to the physical network through that port.
  • There can be one or more target ports, and likewise one or more target queues.
  • Depending on the case, the specific process of step 203 may differ.
  • When there is one target port, the virtual network card can directly take the packets out of the target queue and deliver them to the target port, to be sent to the physical network through that port.
  • One example is the case where the target port corresponds to a single target queue. Suppose the target port is physical port 2 and it corresponds to queue 2; the virtual network card then takes packets from queue 2, delivers them to physical port 2, and sends them to the physical network through physical port 2.
  • Another example is the case where the target port corresponds to multiple target queues. Suppose the target port is physical port 0 and it corresponds to queue 0 and queue 3. The virtual network card can first send the packets in queue 0 to physical port 0 and, once they are sent, send the packets in queue 3; it can instead send the packets in queue 3 first and then, once they are sent, those in queue 0; or it can send packets from queue 0 and queue 3 to physical port 0 alternately.
  • When there are multiple target ports, the virtual network card can sort them by load, sending packets to the more lightly loaded target ports first and to the relatively heavily loaded ones afterwards.
  • Specifically, the target ports can be sorted by the number of packets accumulated at each port, or by the ratio of, or difference between, the rate at which packets enter the port and the rate at which they leave it; the sorted target ports are obtained, and packets are then sent in that order.
  • The physical ports determined as target ports are all lightly loaded; they differ only in how lightly loaded they are.
  • Alternatively, the target ports' load can be ignored, and each packet fetch can take a packet at random from one of the multiple target queues and send it to the corresponding target port.
  • The following uses the case that takes the target ports' load into account as an illustrative example.
  • In one example, the target ports are physical port 0 and physical port 2; the target queues of physical port 0 are queue 0 and queue 3, and the target queue of physical port 2 is queue 2. There are 80 packets accumulated at physical port 0 and 200 packets accumulated at physical port 2.
  • When the ports' preset thresholds differ, the ordering can be based on the degree of packet accumulation at each physical port.
  • The degree of packet accumulation can be expressed as the ratio of the number of packets accumulated at the port to its preset threshold, or as the absolute value of their difference: the larger the ratio, or the smaller the absolute difference, the more severe the accumulation.
  • For example, the preset threshold of physical port 2 is 250 packets and that of physical port 0 is 100 packets.
  • The packet accumulation at physical port 2 is then less severe than at physical port 0, indicating that physical port 2 is less loaded than physical port 0; packets are therefore taken from queue 2 first and sent to physical port 2, and then taken from queue 0 and queue 3 and sent to physical port 0.
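As a worked check of the example above (a hypothetical sketch; the patent prescribes no code and the names are illustrative), the difference metric ranks port 2 as the less loaded of the two. Note that by the ratio metric both example ports sit at 0.8 of their thresholds, so the comparison here rests on the absolute difference:

```python
def accumulation_degree(accumulated, threshold):
    # A smaller absolute difference from the preset threshold means the
    # backlog is closer to the port's limit, i.e. more severe.
    return abs(threshold - accumulated)

degree_port0 = accumulation_degree(80, 100)    # port 0: threshold 100, backlog 80
degree_port2 = accumulation_degree(200, 250)   # port 2: threshold 250, backlog 200

# The port with the larger difference (less severe backlog) is served first.
first_served = 2 if degree_port2 > degree_port0 else 0
```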
  • In another example, the target ports are again physical port 0 and physical port 2; the target queues of physical port 0 are queue 0 and queue 3, and the target queue of physical port 2 is queue 2. Packets enter physical port 0 at 1500 bps and leave it at 1000 bps, a rate difference of 500 bps; packets enter physical port 2 at 1700 bps and leave it at 900 bps, a rate difference of 800 bps.
  • The rate difference of physical port 2 is greater than that of physical port 0, which means physical port 2 carries the smaller load; packets are therefore taken from queue 2 first and sent to physical port 2, and then taken from queue 0 and queue 3 and sent to physical port 0.
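The rate-difference ordering above can be sketched as follows (an illustrative sketch; the names are hypothetical). Following the description, an ingress rate above the egress rate marks a lightly loaded port, and a larger gap is treated as a lighter load:

```python
def order_by_rate_gap(rates):
    """Sort target ports lightest-first by ingress-minus-egress rate gap.

    rates: dict port_id -> (ingress_bps, egress_bps)
    """
    return sorted(rates, key=lambda p: rates[p][0] - rates[p][1], reverse=True)

# Port 0: 1500 in / 1000 out (gap 500); port 2: 1700 in / 900 out (gap 800).
# Port 2 has the larger gap, so its queue (queue 2) is drained first.
serve_order = order_by_rate_gap({0: (1500, 1000), 2: (1700, 900)})
```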
  • The mapping between multiple physical ports and multiple queues proposed in this application differs from a mapping between a packet's five-tuple hash value and a physical port. In current technology the virtual network card fetches packets from the queues blindly: only after a packet has been fetched can its five-tuple hash be computed and the corresponding physical port determined, so sending packets to an overloaded physical port cannot be avoided.
  • In this application, the target queue corresponding to the target port is determined from the mapping between multiple physical ports and multiple queues, and packets are taken from the target queue and sent through the target port. Because lightly loaded physical ports are identified first and each port maps to specific queues, the virtual network card can fetch packets from the corresponding queues based on the identified lightly loaded ports. On the one hand, this increases the traffic of lightly loaded ports, raising their transmission rate toward the bandwidth limit.
  • On the other hand, sending packets through overloaded physical ports can be deferred, preventing transmission delay and congestion from worsening. As a result, each physical port's bandwidth can reach its upper limit, transmission delay and congestion are alleviated, and load balancing across the physical ports is achieved.
  • Moreover, packets taken out of a queue can be delivered directly to the corresponding physical port for transmission, without introducing an additional buffer to hold them, which avoids occupying storage space.
  • Figure 3 is a schematic block diagram of a device provided by an embodiment of the present application.
  • the device 300 may include: a processing module 310 and a transceiver module 320.
  • Each unit in the device 300 can be used to implement the corresponding functions of the virtual network card in the method 200 shown in FIG. 2 .
  • the processing module 310 can be used to perform steps 201 and 202 in the method 200
  • the transceiving module 320 can be used to perform step 203 in the method 200.
  • Specifically, the processing module 310 can be used to determine, from the multiple physical ports of the physical network card, a target port whose load meets a preset condition, and to determine the target queue corresponding to the target port based on the mapping between the multiple physical ports and multiple queues, where one or more of the multiple queues correspond to one of the multiple physical ports and the multiple queues buffer packets to be sent; the transceiver module 320 can be used to send the packets in the target queue through the target port.
  • the preset condition includes: the number of accumulated packets is less than a preset threshold.
  • the preset condition includes: within a unit time, the rate at which packets enter the physical port is higher than the rate at which packets are sent out from the physical port.
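The two preset conditions can be expressed as simple predicates. This is a hedged sketch: the function names and signatures are assumptions for illustration, not part of the patent:

```python
def meets_backlog_condition(accumulated, threshold):
    # Condition 1: fewer packets are accumulated at the port than its
    # preset threshold.
    return accumulated < threshold

def meets_rate_condition(ingress_bps, egress_bps):
    # Condition 2: per unit time, packets enter the port faster than they
    # leave it (treated as the lightly loaded case in this description).
    return ingress_bps > egress_bps
```

With the figures used earlier, port 0 (backlog 80, threshold 100) satisfies condition 1, while port 1 (backlog 300, threshold 250) does not.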
  • processing module 310 can also be used to obtain mapping relationships between multiple physical ports and multiple queues.
  • the processing module 310 may also be used to adjust the mapping relationship between multiple physical ports and multiple queues based on the number of packets in each queue in the multiple queues.
  • each functional module in various embodiments of the present application can be integrated into a processor, or can exist physically alone, or two or more modules can be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software function modules.
  • Figure 4 is another schematic block diagram of a device provided by an embodiment of the present application.
  • the device 400 can be used to implement the function of the virtual network card in the above method 200.
  • the device 400 may be a system on a chip.
  • The chip system can consist of a chip, or can include a chip together with other discrete devices.
  • the device 400 may include at least one processor 410, which is used to implement the function of the virtual network card in the method 200 provided by the embodiment of this application.
  • For example, the processor 410 can be used to determine, from the multiple physical ports of the physical network card, a target port whose load meets a preset condition; to determine, based on the mapping between the multiple physical ports and multiple queues, the target queue corresponding to the target port, where one or more of the multiple queues correspond to one of the multiple physical ports and the multiple queues buffer packets to be sent; and to send the packets in the target queue through the target port.
  • the apparatus 400 may also include at least one memory 420 for storing program instructions and/or data.
  • Memory 420 and processor 410 are coupled.
  • the coupling in the embodiment of this application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
  • Processor 410 may cooperate with memory 420.
  • The processor 410 may execute the program instructions stored in the memory 420. At least one of the at least one memory may be included in the processor.
  • the device 400 may also include a communication interface 430 for communicating with other devices through a transmission medium, so that the device 400 can communicate with other devices.
  • the other device may be a physical network card;
  • The communication interface 430 may be, for example, a transceiver, an interface, a bus, a circuit, or any device capable of transmitting and receiving.
  • the processor 410 can use the communication interface 430 to send and receive data and/or information, and is used to implement the method performed by the virtual network card in the corresponding embodiment of FIG. 2 .
  • the embodiment of the present application does not limit the specific connection medium between the above-mentioned processor 410, memory 420 and communication interface 430.
  • the processor 410, the memory 420 and the communication interface 430 are connected through a bus.
  • the bus is represented by a thick line in Figure 4, and the connection methods between other components are only schematically illustrated and not limited thereto.
  • the bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in Figure 4, but it does not mean that there is only one bus or one type of bus.
  • the processor in the embodiment of the present application may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method embodiment can be completed through an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • The above processor can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • The memory may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • The non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can be random access memory (RAM), which is used as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • This application also provides a chip system, which includes at least one processor and is used to implement the functions related to the virtual network card in the embodiment shown in FIG. 2 .
  • the chip system further includes a memory, the memory is used to store program instructions and data, and the memory is located within the processor or outside the processor.
  • the chip system can be composed of chips or include chips and other discrete devices.
  • the above method can be implemented by executing a computer program, or by using logic circuits, integrated circuits, etc. solidified on the chip. Therefore, the present application also provides a chip, which includes a logic circuit or an integrated circuit.
  • the mapping relationships between multiple physical ports and multiple queues and each preset threshold described in the above method embodiment can be implemented through external configuration. This application does not limit this.
  • This application also provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method of the embodiment shown in Figure 2 is implemented.
  • This application also provides a computer-readable storage medium that stores a computer program (which may also be called code, or instructions); when the computer program is executed by a processor, the method of the foregoing embodiments is implemented.
  • unit may be used to refer to computer-related entities, hardware, firmware, a combination of hardware and software, software, or software in execution.
  • the unit described as a separate component may or may not be physically separated, and the component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • each functional unit may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • software When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are produced in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another over a wired connection (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless connection (such as infrared, radio, or microwave).
  • The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • The available media can be magnetic media (for example, floppy disks, hard disks, or tapes), optical media (for example, digital video discs (DVDs)), or semiconductor media (for example, solid-state disks (SSDs)).
  • If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of this application, in essence, or the part of it that contributes over the existing technology, or a part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the various embodiments of this application.
  • The aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, ROM, RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of this application provide a method and apparatus for sending packets. The method includes: determining, from multiple physical ports of a physical network card, a target port whose load meets a preset condition; determining, based on a mapping between the multiple physical ports and multiple queues, a target queue corresponding to the target port, where one or more of the multiple queues correspond to one of the multiple physical ports and the multiple queues buffer packets to be sent; and sending the packets in the target queue through the target port. By excluding heavily loaded physical ports, the queues corresponding to lightly loaded physical ports are determined, and packets are taken from those queues and sent via the lightly loaded ports, thereby achieving load balancing across the physical ports.

Description

A method and apparatus for sending packets
This application claims priority to Chinese patent application No. 202210349326.X, entitled "A method and apparatus for sending packets", filed with the Chinese Patent Office on April 1, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and more specifically, to a method and an apparatus for sending packets.
Background
With the continuous development of computer technology, more and more users use virtual network cards in virtual machines (VMs) to send packets. When sending a packet, the virtual network card takes the packet out of a queue in the virtual machine's memory that temporarily stores packets and sends it to the physical network card, and the packet reaches the physical network through one of the multiple physical ports of the physical network card.
Currently, after taking a packet out of a queue, the virtual network card places it in the virtual network card's buffer, computes a hash over the packet's five-tuple, and uses the hash value to decide from which physical port of the physical network card the packet is sent. However, because the virtual network card fetches packets from the queues blindly, a packet may be sent to a heavily loaded physical port, increasing that port's delay and congesting its traffic, while a lightly loaded physical port may still be below its bandwidth limit; the load on the physical ports is thus unbalanced.
Therefore, how to balance the load across physical ports has become an urgent technical problem.
Summary
This application provides a method and apparatus for sending packets, with a view to balancing the load across physical ports.
In a first aspect, this application provides a method for sending packets. The method includes: determining, from multiple physical ports of a physical network card, a target port that meets a preset condition; determining, based on a mapping between the multiple physical ports and multiple queues, a target queue corresponding to the target port, where one or more of the multiple queues correspond to one of the multiple physical ports and the multiple queues buffer packets to be sent; and sending the packets in the target queue through the target port.
Based on the above solution, when the virtual network card needs to send packets, it can exclude heavily loaded physical ports from the multiple physical ports according to the load of each port of the physical network card and the preset condition, determine a lightly loaded physical port as the target port, determine the target queue corresponding to the target port according to the mapping between the multiple physical ports and the multiple queues, and take packets out of the target queue to send through the target port. Because lightly loaded physical ports are identified first, and the physical ports map to queues, the virtual network card can fetch packets from the corresponding queues based on the identified lightly loaded ports. On the one hand, this increases the traffic of lightly loaded ports, raising their transmission rate toward the bandwidth limit. On the other hand, sending packets through heavily loaded ports can be deferred, preventing transmission delay and congestion from worsening. In summary, each physical port's bandwidth can reach its upper limit, transmission delay and congestion are alleviated, and load balancing across the physical ports is achieved.
Optionally, the preset condition includes: the number of accumulated packets is less than a preset threshold.
Optionally, the preset condition includes: per unit time, the rate at which packets enter a physical port is higher than the rate at which packets leave it.
Optionally, the method further includes: obtaining the mapping between the multiple physical ports and the multiple queues.
Optionally, the method further includes: adjusting the mapping between the multiple physical ports and the multiple queues according to the number of packets in each of the multiple queues.
In a second aspect, this application provides an apparatus for sending packets, including modules or units for implementing the method of the first aspect and any possible implementation thereof. It should be understood that each module or unit can implement its function by executing a computer program.
In a third aspect, this application provides an apparatus for sending packets. The apparatus includes a processor coupled to a memory and operable to execute a computer program in the memory to implement the method of the first aspect and any possible implementation thereof.
Optionally, the apparatus for sending packets may further include a memory for storing computer-readable instructions; the processor reads the computer-readable instructions so that the apparatus can implement the method of the first aspect and any possible implementation thereof.
Optionally, the apparatus for sending packets may further include a communication interface for communicating with other devices; for example, the communication interface may be a transceiver, a circuit, a bus, a module, or another type of communication interface.
In a fourth aspect, this application provides a chip system, which includes at least one processor for supporting the functions involved in the first aspect and any possible implementation thereof, for example the determination of the target port and the target queue in the above method.
In one possible design, the chip system further includes a memory for storing program instructions and data; the memory is located inside or outside the processor.
The chip system can consist of a chip, or can include a chip together with other discrete devices.
In a fifth aspect, this application provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method of the first aspect and any possible implementation thereof is implemented.
In a sixth aspect, this application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor implements the method of the first aspect and any possible implementation thereof.
It should be understood that the second through sixth aspects of this application correspond to the technical solution of the first aspect; the benefits obtained by each aspect and its corresponding feasible implementations are similar and are not repeated here.
Brief Description of the Drawings
Figure 1 is a schematic diagram of a network architecture provided by an embodiment of this application;
Figure 2 is a schematic flowchart of a method for sending packets provided by an embodiment of this application;
Figure 3 is a schematic block diagram of an apparatus for sending packets provided by an embodiment of this application;
Figure 4 is another schematic block diagram of an apparatus for sending packets provided by an embodiment of this application.
Detailed Description
The technical solution of this application is described below with reference to the drawings.
To facilitate understanding of the embodiments, the relevant terms involved in this application are briefly explained first.
1. Virtual machine: a complete computer system that is simulated by software, has complete hardware-system functionality, and runs in a fully isolated environment. Anything that can be done on a physical computer can be done in a virtual machine. Virtual machines let every cloud-computing user believe they own an independent hardware environment. One or more virtual machines can be built on a cloud server, and different operating systems and application-layer software can be installed on each virtual machine according to different user needs.
2. Virtual switch: widely used in Internet services based on infrastructure as a service (IaaS). A virtual switch running on a virtualization platform provides layer-2 network access and some layer-3 network functions for the virtual machines built on a server. Virtual machines connect to the network through the virtual switch, and the virtual switch uses the physical network card on the physical host as the uplink to the outside network. Each virtual switch contains a certain number of ports, which can connect to virtual network cards or physical network cards.
3. Hypervisor (virtual machine monitor): a software layer installed on physical hardware that can virtualize a physical machine into many virtual machines, so that multiple operating systems run simultaneously on one piece of physical hardware. The hypervisor is responsible for managing and allocating system resources to the virtual machines.
4. Physical network card, commonly called a network card: a piece of computer hardware designed to allow a computer to communicate over a computer network. The network card is a network component working at the physical layer and is the interface connecting a computer to the transmission medium in a local area network. It not only implements the physical connection and electrical-signal matching with the LAN's transmission medium, but also handles frame transmission and reception, frame encapsulation and decapsulation, medium access control, data encoding and decoding, and data buffering. A network card carries a processor (CPU) and memory, the latter including read-only memory (ROM) and random access memory (RAM). Communication between the network card and the LAN is serial, over cable or twisted pair, while communication between the network card and the computer is parallel, over the I/O bus on the computer's motherboard; an important function of the network card is therefore serial/parallel conversion. Because the data rate on the network differs from the data rate on the computer's bus, the network card contains memory chips that buffer the data. A physical network card can include multiple physical ports, through which packets are sent and received.
It should be understood that data rate generally refers to the data transfer rate, i.e., the speed at which information is transmitted on a communication line, measured as the number of bits transmitted per unit time (usually one second). Each physical network card can include at least one physical port, and packets can be sent to the physical network via a physical port.
5. Virtual network card, also called a virtual network adapter: a network adapter simulated in software within a simulated network environment. A virtual network card can build a local area network between remote computers, simulate hub functionality, and implement virtual private network (VPN) functions, so that the system recognizes the software as a network card. Like a physical network card, a virtual network card can include a buffer for caching data; for example, in the embodiments of this application, the buffer can cache the mapping between multiple physical ports and multiple queues, as well as packets fetched from queues and about to be sent to physical ports. What the virtual network card exposes to the virtual machine is a virtual network card interface.
The functions of the virtual network card can be implemented in software, in hardware, or in a combination of both. In the embodiments of this application, the functions of the virtual network card can be implemented by a physical network card, which can be a board (e.g., a printed circuit board (PCB)) plugged into a physical device. The board contains a chip, which can implement the method described in the embodiments below by executing a computer program, or via logic circuits or integrated circuits fixed in the chip.
6. Queue: used to temporarily store packets exchanged between the host and the network card. In the embodiments of this application, a queue can refer to an area of the virtual machine's memory used to temporarily store packets exchanged between the virtual machine and the virtual network card, including, for example, packets issued by application-layer software and/or packets received from other devices over the physical network. Based on the direction of transmission, queues can be divided into send queues and receive queues. Multiple queues can be created in each virtual machine's memory, and different queues can be distinguished by different identifiers.
It should be understood that a queue can be viewed as the communication interface between application-layer software and the virtual network card, and packets can be stored and fetched on a first in, first out (FIFO) basis.
7. Physical switch: a network device for forwarding electrical (or optical) signals, which can provide an exclusive electrical-signal path for any two network nodes connected to the switch and can deliver packets sent by the physical network card to the physical network.
8. Physical network: a network formed by connecting various physical devices (such as hosts, routers, and switches) and media (such as optical cables, electrical cables, and twisted pairs). The physical network is the underlying network carrying the Internet and is the first layer of the seven-layer open system interconnect (OSI) reference model. It should be understood that OSI provides a framework of functional structure for open interconnected information systems; from low to high, the layers are: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer.
Figure 1 is a schematic diagram of a network architecture applicable to the embodiments of this application. As shown in Figure 1, the architecture can include a virtual machine, a virtual switch, and a physical switch. The virtual machine connects to the virtual switch through the virtual network card on the virtual machine and a port on the virtual switch, and the virtual switch connects to the physical switch through the physical network card.
One or more virtual machines are built on the server; as shown in Figure 1, virtual machine 1 and virtual machine 2 can be built, each with different application-layer software installed. The application-layer software can generate and send packets.
Multiple queues can be created in each virtual machine's memory. Taking virtual machine 2 as an example, it creates four queues, queue 0, queue 1, queue 2, and queue 3, to temporarily store packets sent by the application-layer software in the virtual machine. Each virtual machine can have at least one virtual network card; as shown in Figure 1, virtual machine 1 has one virtual network card and virtual machine 2 has two. Each virtual network card has a buffer for temporarily storing packets fetched from the queues.
The virtual switch includes multiple ports, which can connect to virtual network cards or the physical network card and forward multi-layer data. As shown in Figure 1, packets sent by a virtual network card can be sent to the physical network card via a port of the virtual switch.
The physical network card can include multiple physical ports. As shown in Figure 1, it can include three ports: physical port 0, physical port 1, and physical port 2. Packets delivered to the physical network card can be sent through a physical port to the physical switch, which then delivers them to the physical network. It should be understood that different physical network cards can have the same or different numbers of physical ports.
It should be understood that the hypervisor is responsible for managing virtual machine 1 and virtual machine 2 and for allocating system resources to them.
A user can send packets through the application-layer software in a virtual machine. Packets issued by the application-layer software are temporarily stored in queues in the virtual machine's memory, waiting to be processed or sent. When sending, the virtual network card can take packets out of a queue on a FIFO basis, send them through a port of the virtual switch to a physical port of the physical network card, and send them out via that physical port. The packets then reach the physical network via the physical switch.
It should be understood that this application places no specific limit on the number of virtual machines built, virtual network cards, virtual switch ports, physical network cards, physical ports, or queues; all can be set by those skilled in the art according to actual needs.
Currently, after taking a packet out of a queue, the virtual network card places it in the buffer and computes a hash over the packet's five-tuple to decide from which physical port of the physical network card it is sent. However, because the virtual network card fetches packets blindly, it can determine the destination physical port only after the packet has been fetched and the hash computed. If a packet turns out to be destined for a heavily loaded physical port (say the number of packets queued at physical port 1 is already large, exceeding its carrying capacity), traffic may become congested and the heavily loaded port's delay grows. Meanwhile, a lightly loaded port (say physical ports 0 and 2 have few packets queued, far below their capacity) has a low transmission rate and cannot reach its bandwidth limit. The current way of sending packets therefore cannot effectively balance the load on the physical ports. Moreover, packets placed in the buffer occupy cache; once the cache is full, no more packets can be fetched from the queues for sending, reducing transmission efficiency.
In view of this, this application provides a packet-sending method. When the virtual network card needs to send packets, it can exclude heavily loaded physical ports from the multiple physical ports according to a preset condition, determine a lightly loaded physical port as the target port, determine the target queue corresponding to the target port according to the mapping between the multiple physical ports and the multiple queues, and take packets from the target queue to send through the target port. Because lightly loaded ports are identified first, and the ports map to queues, the virtual network card can fetch packets from the corresponding queues based on the identified lightly loaded ports. On the one hand, this increases the traffic of lightly loaded ports, raising their transmission rate toward the bandwidth limit. On the other hand, sending through heavily loaded ports can be deferred, preventing transmission delay and congestion from worsening. Each port's bandwidth can therefore reach its upper limit, delay and congestion are alleviated, and load balancing across the ports is achieved. In addition, packets taken from the queues need not pass through the virtual network card's buffer, avoiding buffer occupation.
The method for sending packets provided by the embodiments of this application is described in detail below with reference to the drawings.
Refer to Figure 2, a schematic flowchart of the method for sending packets provided by an embodiment of this application. It should be understood that the method 200 shown in Figure 2 can be executed by a virtual network card, by a physical device capable of providing virtual-network-card functionality (such as a server), by a component configured in a physical device (such as a chip or chip system), or by a module that implements some or all of the virtual network card's functions.
When the method 200 is executed by a physical device, the device can be, for example, a server. For example, one or more virtual network cards can be obtained by virtualization on top of the server's physical network card and exposed to users. Furthermore, the physical device can implement the functions performed by the virtual network card in the embodiments below, e.g., by executing a computer program, to provide services to users and balance the load on the physical ports.
For convenience, the method provided by the embodiments of this application is described below taking the virtual network card as an example.
It should also be understood that the method 200 shown in Figure 2 can be applied on a cloud server.
The method 200 can include steps 201 to 203, each of which is described in detail below.
Step 201: determine, from the multiple physical ports of the physical network card, a target port whose load meets a preset condition.
The virtual network card can gather statistics on the load of each physical port of the physical network card and use the preset condition to first identify lightly loaded physical ports as target ports; there can be one or more target ports.
Two examples of the preset condition are given below.
In one example, the preset condition includes: the number of accumulated packets is less than a preset threshold. Accordingly, the target ports are the physical ports at which fewer packets have accumulated than the preset threshold.
The preset threshold is the critical value for judging whether a physical port is heavily or lightly loaded. Each physical port can have its own preset threshold, and the thresholds of different ports can be the same or different. It should be understood that those skilled in the art can set the specific threshold values according to actual needs; this application places no limit on them.
The virtual network card can count the packets accumulated at each physical port and, together with each port's preset threshold, judge whether the port is heavily or lightly loaded: a port at which the accumulated packet count is greater than or equal to the threshold is heavily loaded, and a port at which it is less than the threshold is lightly loaded. The ports whose accumulated packet count is below the threshold, i.e., the lightly loaded ports, are selected as the target ports.
For example, suppose the preset thresholds are 100 packets for physical port 0 and 250 packets each for physical ports 1 and 2. The virtual network card counts 80 packets accumulated at physical port 0, 300 at physical port 1, and 200 at physical port 2. Physical port 0 and physical port 2 can therefore be taken as the target ports.
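The selection in this example can be reproduced with a short sketch (illustrative only; the function and variable names are hypothetical, not part of the patent):

```python
def select_target_ports(backlogs, thresholds):
    # A port is a target (lightly loaded) port when its accumulated packet
    # count is below its own preset threshold.
    return [p for p in backlogs if backlogs[p] < thresholds[p]]

# Thresholds 100/250/250 and backlogs 80/300/200 from the example:
targets = select_target_ports({0: 80, 1: 300, 2: 200},
                              {0: 100, 1: 250, 2: 250})
```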
In another example, the preset condition includes: per unit time, the rate at which packets enter a physical port is higher than the rate at which they leave it. Accordingly, the target ports are the physical ports for which, per unit time, the ingress packet rate exceeds the egress packet rate.
The virtual network card can judge whether a port is heavily or lightly loaded from the relationship between the rate at which packets enter the port and the rate at which they leave it. When packets enter a port more slowly than they leave it, port delay is occurring and the port is heavily loaded; when packets enter a port faster than they leave it, there is no port delay, the port is lightly loaded, and such ports can be taken as target ports. It should be understood that when the ingress rate equals the egress rate, the port is at the boundary between heavy and light load and is not taken as a target port.
For example, within one second, packets enter physical port 0 at 1500 bits per second (bps) and leave it at 1000 bps; they enter physical port 1 at 2000 bps and leave it at 2200 bps; they enter physical port 2 at 1700 bps and leave it at 900 bps. Physical port 0 and physical port 2 can therefore be taken as the target ports. It should be understood that the bit rate is the number of bits transmitted per unit time.
The virtual network card can determine one or more target ports.
It should be understood that when no physical port of the physical network card meets the preset condition, there is no target port; in that case, fetching packets from the queues for sending can be temporarily suspended.
Step 202: determine, based on the mapping between the multiple physical ports and the multiple queues, the target queue corresponding to the target port.
One or more of the multiple queues correspond to one of the multiple physical ports, and the multiple queues buffer packets to be sent; that is, each physical port can correspond to one or more queues.
Specifically, the mapping between physical ports and queues can be a mapping between physical port numbers and queue identifiers (IDs).
Two ways of expressing the mapping are given below.
In one example, the queue IDs of the different virtual machines are mutually distinct; for example, the queue IDs in virtual machine 2 are queue 0, queue 1, queue 2, and queue 3, and those in virtual machine 1 are queue 4, queue 5, queue 6, and queue 7. A mapping can therefore be established between queue IDs and physical port numbers.
In another example, the queue IDs in different virtual machines are the same; for example, the queue IDs in one virtual machine are queue 0 through queue 3, and those in another virtual machine are also queue 0 through queue 3. Different IDs can then be assigned to the virtual machines, e.g., one virtual machine is given the ID virtual machine 1 and the other virtual machine 2, so that a queue's identity can be recognized from the virtual machine ID together with the queue ID. A mapping can therefore be established among virtual machine IDs, queue IDs, and physical port numbers.
The virtual network card can determine the target queue corresponding to the target port based on the created mapping. For example, for virtual machine 2, the created mapping is: physical port 0 corresponds to queue 0 and queue 3, physical port 1 corresponds to queue 1, and physical port 2 corresponds to queue 2. If the virtual network card determines the target ports to be physical ports 0 and 2, then the target queues of physical port 0 are queues 0 and 3 and the target queue of physical port 2 is queue 2.
As noted, there can be one or more target ports; based on the above mapping, there can likewise be one or more target queues corresponding to a target port.
Optionally, the method further includes: obtaining the mapping between the multiple physical ports and the multiple queues.
The virtual network card can obtain the mapping in the following two ways:
One possible implementation is that the virtual network card obtains a pre-configured mapping.
For example, when the virtual network card is created, the mapping between the multiple queues in the virtual machine and the multiple physical ports of the physical network card can be configured and stored locally; whenever the virtual network card needs the mapping, it can obtain it locally. This lets the virtual network card quickly determine a target port's target queue, speeding up packet sending.
In a specific implementation, the virtual network card can establish the mapping by taking the queue ID modulo the number of physical ports.
Take the queues of virtual machine 2 and the physical ports shown in Figure 1 as an example. For queue 0, queue ID 0 modulo the port count 3 is 0, so queue 0 maps to physical port 0; for queue 1, 1 mod 3 is 1, so queue 1 maps to physical port 1; for queue 2, 2 mod 3 is 2, so queue 2 maps to physical port 2; for queue 3, 3 mod 3 is 0, so queue 3 maps to physical port 0. The mapping between each queue and each physical port is thus obtained.
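The modulo rule just described can be written in a few lines (an illustrative sketch; the names are hypothetical):

```python
def build_mapping(queue_ids, num_ports):
    # Each queue maps to the port whose number equals the queue ID
    # modulo the number of physical ports.
    return {q: q % num_ports for q in queue_ids}

# Queues 0-3 of virtual machine 2 against the three physical ports:
mapping = build_mapping(range(4), 3)
```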
It should be understood that the queues participating in the mapping can be those with buffered packets; empty queues holding no packets need not participate. For example, when queue 3 is empty, only the mappings between queues 0 through 2 and physical ports 0 through 2 need be established, without mapping queue 3 to a physical port.
Another possible implementation is that the virtual network card creates the mapping on the fly.
The virtual network card can also, at send time, create the mapping ad hoc from the queues that currently have packets awaiting transmission and the physical ports of the physical network card. A mapping created this way better reflects the current situation and balances the load on the physical ports more effectively.
Because the number of packets in each queue changes in real time and each port's load also changes, the above mapping can be adjusted.
Optionally, the method further includes: adjusting the mapping between the multiple physical ports and the multiple queues according to the number of packets in each of the multiple queues.
One possible case is adding a mapping. After the mapping has been created, if the virtual network card finds a new queue in the virtual machine's memory, it can gather statistics on the load at each physical port of the physical network card, determine the least-loaded port, and establish a mapping between the new queue and that port, thereby adjusting the mapping. Alternatively, the virtual network card can still map the new queue to a port by taking the queue ID modulo the number of physical ports.
It should be understood that a new queue is a newly added queue that needs to send packets and holds buffered packets. That is, a new queue can be a newly created queue: for example, if queue 5 is created in addition to the existing queues 0 through 4 and packets are buffered in it, queue 5 is a new queue. Alternatively, a new queue can be an existing queue that was previously empty. For example, queue 3 was empty before and had no mapping to any physical port; once packets to be sent are buffered in queue 3, the virtual network card can treat queue 3 as a new queue and map it to a physical port.
Another possible case is releasing a mapping. After the mapping has been created, if the virtual network card finds that a queue has remained empty for a preset time, the mapping between that queue and its physical port can be released.
Yet another possible case is changing the mapping between a queue and a physical port. After a mapping between a lightly loaded port and a queue is established, the virtual network card sends the packets in that queue to the port one by one. If, after some time, packets accumulate at the port and it becomes heavily loaded, a lightly loaded port is re-determined from the multiple physical ports and the queue is mapped to the re-determined port. For example, when lightly loaded physical port 1 corresponds to queue 1, the virtual network card sends the packets in queue 1 to physical port 1; as packets gradually accumulate at physical port 1 and it becomes heavily loaded, the virtual network card re-determines that physical port 2 is lightly loaded and maps queue 1 to physical port 2.
Step 203: send the packets in the target queue through the target port.
After determining the target queue corresponding to the target port, the virtual network card can take packets out of the target queue, deliver them to the corresponding target port, and send them to the physical network through the target port.
As noted, there can be one or more target ports and one or more target queues; the specific process of step 203 can differ from case to case.
When there is one target port, the virtual network card can directly take the packets out of the target queue and deliver them to the target port, to be sent to the physical network through that port.
One example is the case where the target port corresponds to a single target queue. Suppose the target port is physical port 2 and physical port 2 corresponds to queue 2; the virtual network card then takes packets from queue 2, delivers them to physical port 2, and sends them to the physical network through physical port 2.
Another example is the case where the target port corresponds to multiple target queues. Suppose the target port is physical port 0 and it corresponds to queue 0 and queue 3. The virtual network card can first send the packets in queue 0 to physical port 0 and, once they are sent, send the packets in queue 3; it can instead send the packets in queue 3 first and then, once they are sent, those in queue 0; or it can send packets from queue 0 and queue 3 to physical port 0 alternately.
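One of the options just described, alternating between the two queues mapped to the same port, can be sketched as follows (sequential draining is equally valid per the description; the function name and packet labels are hypothetical):

```python
from itertools import chain, zip_longest

def interleave(*queues):
    # Alternate packets across the queues that map to one physical port;
    # shorter queues simply drop out once exhausted.
    skip = object()
    return [pkt
            for pkt in chain.from_iterable(zip_longest(*queues, fillvalue=skip))
            if pkt is not skip]

# Queue 0 and queue 3 both map to physical port 0:
send_order = interleave(["q0-pkt1", "q0-pkt2"],
                        ["q3-pkt1", "q3-pkt2", "q3-pkt3"])
```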
When there are multiple target ports, the virtual network card can sort them by load, sending packets to the more lightly loaded target ports first and to the relatively heavily loaded ones afterwards. Specifically, when the virtual network card determines multiple target ports, it can sort them by the number of packets accumulated at each port, or by the ratio of, or difference between, the rate at which packets enter the port and the rate at which they leave it; the sorted target ports are obtained, and packets are then sent in that order. It should be understood that the ports determined as target ports are all lightly loaded; they differ only in how lightly loaded they are.
Alternatively, the target ports' load can be ignored, and each packet fetch can take a packet at random from one of the multiple target queues and send it to the corresponding target port.
The following uses the case that takes the target ports' load into account as an illustrative example.
In one example, suppose the target ports are physical ports 0 and 2; the target queues of physical port 0 are queues 0 and 3, and the target queue of physical port 2 is queue 2. There are 80 packets accumulated at physical port 0 and 200 at physical port 2.
When the ports' preset thresholds are the same, the ports can be sorted directly by the number of accumulated packets: fewer packets are accumulated at physical port 0 than at physical port 2, meaning physical port 0 is less loaded than physical port 2, so packets are first taken from queues 0 and 3 and sent to physical port 0, and then taken from queue 2 and sent to physical port 2.
When the ports' preset thresholds differ, the ports can be sorted by the degree of packet accumulation, which can be expressed as the ratio of the accumulated packet count to the preset threshold, or as the absolute value of their difference: the larger the ratio, or the smaller the absolute difference, the more severe the accumulation. For example, with a preset threshold of 250 packets for physical port 2 and 100 packets for physical port 0, the accumulation at physical port 2 is less severe than at physical port 0, meaning physical port 2 is less loaded; packets are therefore first taken from queue 2 and sent to physical port 2, and then taken from queues 0 and 3 and sent to physical port 0.
In another example, suppose again that the target ports are physical ports 0 and 2, with queues 0 and 3 as the target queues of port 0 and queue 2 as the target queue of port 2. Packets enter physical port 0 at 1500 bps and leave it at 1000 bps, a rate difference of 500 bps; packets enter physical port 2 at 1700 bps and leave it at 900 bps, a rate difference of 800 bps. The rate difference of physical port 2 is greater than that of physical port 0, meaning physical port 2 is less loaded, so packets are first taken from queue 2 and sent to physical port 2, and then taken from queues 0 and 3 and sent to physical port 0.
It should be understood that the order in which packets are taken from queues 0 and 3 for sending can follow the earlier description of the case where a target port corresponds to multiple target queues, and is not repeated.
In summary, the mapping between multiple physical ports and multiple queues proposed by this application differs from a mapping between a packet's five-tuple hash value and a physical port. In current technology the virtual network card fetches packets from the queues blindly; only after a packet is fetched can its five-tuple hash be computed and the corresponding physical port determined, so the possibility of sending packets to a heavily loaded port cannot be avoided.
In this application, by contrast, the physical ports are mapped to the queues; heavily loaded ports are excluded from the multiple physical ports according to the preset condition, lightly loaded ports are determined as target ports, the target queues corresponding to the target ports are determined from the mapping between the multiple physical ports and the multiple queues, and packets are taken from the target queues and sent through the target ports. Because lightly loaded ports are identified first and the ports map to queues, the virtual network card can fetch packets from the corresponding queues based on the identified lightly loaded ports. On the one hand, this increases the traffic of lightly loaded ports, raising their transmission rate toward the bandwidth limit. On the other hand, sending through heavily loaded ports can be deferred, preventing transmission delay and congestion from worsening. Each port's bandwidth can therefore reach its upper limit, delay and congestion are alleviated, and load balancing across the ports is achieved. Moreover, packets taken from the queues can be sent directly to the corresponding physical port, without introducing an additional buffer to hold them, avoiding storage-space occupation.
以上,结合图2详细描述了本申请实施例提供的方法。以下,结合图3至图4详细说明本申请实施例提供的装置。
图3是本申请实施例提供的装置的示意性框图。如图3所示,该装置300可以包括:处理模块310和收发模块320。该装置300中的各单元可用于实现图2所示的方法200中虚拟网卡的相应功能。例如,处理模块310可用于执行方法200中的步骤201和步骤202,收发模块320可用于执行方法200中的步骤203。
具体地,处理模块310,可用于从物理网卡的多个物理端口中确定负载情况满足预设条件的目标端口;基于多个物理端口与多个队列的映射关系,确定该目标端口对应的目标队列,该多个队列中的一个或多个队列对应于该多个物理端口中的一个物理端口,该多个队列用于缓存待发送的报文;收发模块320,可用于通过目标端口发送目标队列中的报文。
Optionally, the preset condition includes: the number of backlogged packets is below a preset threshold.

Optionally, the preset condition includes: within a unit of time, the rate at which packets enter a physical port is higher than the rate at which packets are sent out from the physical port.

Optionally, the processing module 310 may further be configured to obtain the mapping between the multiple physical ports and the multiple queues.

Optionally, the processing module 310 may further be configured to adjust the mapping between the multiple physical ports and the multiple queues according to the number of packets in each of the multiple queues.
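The optional adjustment of the port-to-queue mapping according to per-queue packet counts could, for instance, be done greedily: order the queues by depth and deal them round-robin across the ports so that no single port accumulates all of the deep queues. The patent does not prescribe a concrete algorithm; the following is only a sketch under that assumption:

```python
def rebalance_mapping(queue_depths, port_ids):
    """queue_depths: queue id -> buffered packet count.
    Returns port id -> list of queue ids, with queues ordered deepest
    first and assigned round-robin so depth spreads across ports."""
    mapping = {p: [] for p in port_ids}
    ordered = sorted(queue_depths, key=queue_depths.get, reverse=True)
    for i, q in enumerate(ordered):
        mapping[port_ids[i % len(port_ids)]].append(q)
    return mapping
```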
It should be understood that the division into modules in the embodiments of this application is illustrative and is merely a division by logical function; other divisions are possible in actual implementations. Moreover, the functional modules in the embodiments of this application may be integrated into one processor, may exist separately as physical units, or two or more modules may be integrated into one module. An integrated module may be implemented in the form of hardware or as a software functional module.
Figure 4 is another schematic block diagram of an apparatus provided by an embodiment of this application. The apparatus 400 may be used to implement the functions of the virtual NIC in the method 200 above. The apparatus 400 may be a chip system; in the embodiments of this application, a chip system may consist of chips, or may include chips together with other discrete components.

As shown in Figure 4, the apparatus 400 may include at least one processor 410 for implementing the functions of the virtual NIC in the method 200 provided by the embodiments of this application.

For example, when the apparatus 400 is used to implement the functions of the virtual NIC in the method 200, the processor 410 may be configured to determine, from multiple physical ports of a physical NIC, a target port whose load satisfies a preset condition; determine, based on the mapping between the multiple physical ports and multiple queues, the target queue corresponding to the target port, where one or more of the multiple queues correspond to one of the multiple physical ports and the multiple queues are used to buffer packets to be sent; and send the packets in the target queue through the target port. See the detailed description in the method example; it is not repeated here.

The apparatus 400 may further include at least one memory 420 for storing program instructions and/or data, the memory 420 being coupled to the processor 410. Coupling in the embodiments of this application is an indirect coupling or communication connection between apparatuses, units, or modules, which may be electrical, mechanical, or of another form, and is used for the exchange of information between them. The processor 410 may operate in cooperation with the memory 420 and may execute the program instructions stored in the memory 420. At least one of the at least one memory may be included in the processor.

The apparatus 400 may further include a communication interface 430 for communicating with other devices over a transmission medium, so that the apparatus 400 can communicate with them. For example, when the apparatus 400 is used to implement the functions of the virtual NIC in the method 200, the other device may be a physical NIC, and the communication interface 430 may be, for example, a transceiver, an interface, a bus, a circuit, or any device capable of sending and receiving. The processor 410 may use the communication interface 430 to send and receive data and/or information, and to carry out the method performed by the virtual NIC in the embodiment corresponding to Figure 2.

The embodiments of this application do not limit the specific connection medium among the processor 410, the memory 420, and the communication interface 430. In Figure 4 they are connected by a bus, shown as a thick line; the manner of connection among other components is merely illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration only one thick line is drawn in Figure 4, but this does not mean there is only one bus or only one type of bus.
It should be understood that the processor in the embodiments of this application may be an integrated circuit chip with signal-processing capability. In implementation, the steps of the method embodiments above may be completed by integrated logic circuits in hardware within the processor, or by instructions in software form. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of this application may be embodied directly as execution by a hardware decoding processor, or may be executed by a combination of hardware and software modules within a decoding processor. A software module may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the methods above in combination with its hardware.

It should also be understood that the memory in the embodiments of this application may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
This application further provides a chip system including at least one processor for implementing the functions of the virtual NIC in the embodiment shown in Figure 2.

In one possible design, the chip system further includes a memory for holding program instructions and data; the memory may be located inside or outside the processor.

The chip system may consist of chips, or may include chips together with other discrete components.

As noted above, the methods above may be implemented by executing a computer program, or by logic circuits, integrated circuits, or the like fixed on a chip. This application therefore further provides a chip that includes a logic circuit or an integrated circuit. The mapping between the multiple physical ports and the multiple queues described in the method embodiments, as well as the various preset thresholds, may be supplied by external configuration; this application places no limitation on this.
This application further provides an electronic device including a processor, a memory, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the method of the embodiment shown in Figure 2 is implemented.

This application further provides a computer-readable storage medium storing a computer program (which may also be called code or instructions); when the computer program is run, it causes a computer to perform the method of the embodiment shown in Figure 2.

The terms "unit" and "module" as used in this specification may denote a computer-related entity, hardware, firmware, a combination of hardware and software, software, or software in execution.
A person of ordinary skill in the art will appreciate that the illustrative logical blocks and steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application. In the several embodiments provided in this application, it should be understood that the disclosed apparatuses, devices, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementations; multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or of another form.

Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of this application may be integrated into one processing unit, may exist separately as physical units, or two or more units may be integrated into one unit.

In the embodiments above, the functions of the functional units may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented wholly or partly in the form of a computer program product, which includes one or more computer instructions (programs). When the computer program instructions (programs) are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid-state drive (SSD)).

If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. On this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, ROM, RAM, a magnetic disk, or an optical disc.
The above are only specific implementations of this application, but the scope of protection of this application is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the scope of protection of this application. The scope of protection of this application shall therefore be subject to the scope of protection of the claims.

Claims (10)

  1. A method of sending packets, characterized in that the method comprises:
    determining, from multiple physical ports of a physical network interface card, a target port whose load satisfies a preset condition;
    determining, based on a mapping between the multiple physical ports and multiple queues, a target queue corresponding to the target port, wherein one or more of the multiple queues correspond to one of the multiple physical ports, and the multiple queues are used to buffer packets to be sent;
    sending the packets in the target queue through the target port.
  2. The method of claim 1, characterized in that the preset condition comprises: the number of backlogged packets is below a preset threshold.
  3. The method of claim 1, characterized in that the preset condition comprises: within a unit of time, the rate at which packets enter a physical port is higher than the rate at which packets are sent out from the physical port.
  4. The method of any one of claims 1 to 3, characterized in that, before determining, based on the mapping between the multiple physical ports and the multiple queues, the target queue corresponding to the target port, the method further comprises:
    obtaining the mapping between the multiple physical ports and the multiple queues.
  5. The method of claim 4, characterized in that the method further comprises:
    adjusting the mapping between the multiple physical ports and the multiple queues according to the number of packets in each of the multiple queues.
  6. An apparatus for sending packets, characterized in that the apparatus comprises:
    a processing module, configured to determine, from multiple physical ports of a physical network interface card, a target port whose load satisfies a preset condition; and to determine, based on a mapping between the multiple physical ports and multiple queues, a target queue corresponding to the target port, wherein one or more of the multiple queues correspond to one of the multiple physical ports, and the multiple queues are used to buffer packets to be sent;
    a transceiver module, configured to send the packets in the target queue through the target port.
  7. The apparatus of claim 6, characterized in that the processing module is further configured to obtain the mapping between the multiple physical ports and the multiple queues.
  8. The apparatus of claim 7, characterized in that the processing module is further configured to adjust the mapping between the multiple physical ports and the multiple queues according to the number of packets in each of the multiple queues.
  9. An electronic device, characterized by comprising: a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein when the processor executes the computer program, the method of any one of claims 1 to 5 is implemented.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the method of any one of claims 1 to 5.
PCT/CN2023/085243 2022-04-01 2023-03-30 Method and apparatus for sending packets WO2023186046A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210349326.X 2022-04-01
CN202210349326.XA CN114666276B (zh) 2022-04-01 2022-04-01 Method and apparatus for sending packets

Publications (1)

Publication Number Publication Date
WO2023186046A1 true WO2023186046A1 (zh) 2023-10-05

Family

ID=82033693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085243 WO2023186046A1 (zh) 2022-04-01 2023-03-30 一种发送报文的方法和装置

Country Status (2)

Country Link
CN (1) CN114666276B (zh)
WO (1) WO2023186046A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666276B (zh) * 2022-04-01 2024-09-06 阿里巴巴(中国)有限公司 一种发送报文的方法和装置
CN115794317B (zh) * 2023-02-06 2023-04-21 天翼云科技有限公司 一种基于虚拟机的处理方法、装置、设备及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102137018A (zh) 2011-03-21 2011-07-27 Huawei Technologies Co., Ltd. Load sharing method and apparatus
CN110677358A (zh) 2019-09-25 2020-01-10 Hangzhou DPtech Technologies Co., Ltd. Packet processing method and network device
CN113422731A (zh) 2021-06-22 2021-09-21 Eversec (Beijing) Technology Co., Ltd. Load-balanced output method and apparatus, aggregation/distribution device, and medium
CN114666276A (zh) 2022-04-01 2022-06-24 Alibaba (China) Co., Ltd. Method and apparatus for sending packets

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7855957B2 (en) * 2006-08-30 2010-12-21 Hewlett-Packard Development Company, L.P. Method and system of transmit load balancing across multiple physical ports
US8948193B2 (en) * 2008-08-19 2015-02-03 Check Point Software Technologies, Ltd. Methods for intelligent NIC bonding and load-balancing
CN107995199A (zh) 2017-12-06 2018-05-04 Ruijie Networks Co., Ltd. Port rate-limiting method and apparatus for a network device
CN112272933B (zh) 2018-06-05 2022-08-09 Huawei Technologies Co., Ltd. Queue control method, apparatus, and storage medium
CN111726299B (zh) 2019-03-18 2023-05-09 Huawei Technologies Co., Ltd. Traffic balancing method and apparatus
CN113037640A (zh) 2019-12-09 2021-06-25 Huawei Technologies Co., Ltd. Data forwarding method, data caching method, apparatus, and related device
CN113285878B (zh) 2020-02-20 2022-08-26 Huawei Technologies Co., Ltd. Load sharing method and first network device


Also Published As

Publication number Publication date
CN114666276A (zh) 2022-06-24
CN114666276B (zh) 2024-09-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23778411

Country of ref document: EP

Kind code of ref document: A1