CN113973086B - Data transmission method, device and storage medium - Google Patents


Info

Publication number
CN113973086B
Authority
CN
China
Prior art keywords
load balancer
data packet
backhaul
data request
server
Prior art date
Legal status
Active
Application number
CN202010647499.0A
Other languages
Chinese (zh)
Other versions
CN113973086A (en)
Inventor
曲悦
李宙洲
张�浩
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Suzhou Software Technology Co Ltd
Priority to CN202010647499.0A
Publication of CN113973086A
Application granted
Publication of CN113973086B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16: Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a data transmission method comprising the following steps: the node corresponding to the load balancer marks the backhaul data packet sent by the server, and sends the marked backhaul data packet to the load balancer. The application also discloses a storage medium and a data transmission device. Through the data transmission method, storage medium, and device disclosed by the application, the backhaul data packet can be guided into the load balancer, realizing data transmission in a cloud computing environment.

Description

Data transmission method, device and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a data transmission method, apparatus, and storage medium.
Background
Cloud computing is an internet-based computing mode through which shared software and hardware resources and information can be provided on demand to computers and other devices. In the related art, load balancing is generally realized in a reverse proxy mode so as to implement data transmission in a cloud computing environment. However, the source internet protocol (Internet Protocol, IP) addresses of all data packets flowing to the server through the proxy end are the service IP address of the proxy end, so the server cannot learn the real IP of the client. Conversely, if the service IP address of the proxy end is replaced with the real IP of the client, the gateway will send the backhaul data packet directly to the client, completely bypassing the load balancer. Therefore, how to overcome these problems of the related art, so that the server can obtain the real IP of the client while the backhaul data packet is still successfully guided into the virtual load balancer, thereby realizing data transmission in the cloud computing environment, is a technical problem to be solved.
Disclosure of Invention
The embodiments of the application provide a data transmission method, a data transmission device, and a storage medium, which can guide a backhaul data packet into a virtual load balancer to realize data transmission in a cloud computing environment.
In a first aspect, the present application provides a data transmission method, including:
the node corresponding to the load balancer marks the backhaul data packet sent by the server;
and sending the marked backhaul data packet to the load balancer.
In the above solution, before the node corresponding to the load balancer marks the backhaul data packet sent by the server, the method further includes:
setting, in the node, a switch for controlling modification of the source IP of the data request sent by the load balancer.
In the above scheme, the setting a switch in the node corresponding to the load balancer includes:
modifying a Neutron Lbaas listener resource in an OpenStack environment and adding a first attribute to the Neutron Lbaas listener resource; the switch is opened when the first attribute is true, and closed when the first attribute is false.
In the above scheme, the method further comprises:
when the switch is turned on, the node modifies the source IP of the data request to the IP of the client side sending the data request;
And sending the data request to the server.
In the above solution, the marking, by the node corresponding to the load balancer, of the backhaul data packet sent by the server includes:
marking the backhaul data packet sent by the server based on the Mangle table included in the node.
In the above solution, the sending the marked backhaul data packet to the load balancer includes:
modifying IP rules and routing rules of nodes corresponding to the load balancer;
and sending the marked backhaul data packet to a load balancer based on the IP rule and the routing rule.
In a second aspect, the present application provides a data transmission method, including:
the server receives a data request sent by a node corresponding to the load balancer;
sending the backhaul data packet corresponding to the data request to a node corresponding to the load balancer;
the Source IP (SIP) of the data request includes an IP of a client that sends the data request; the destination IP of the backhaul packet includes the IP of the client that sent the data request.
In the above solution, the sending the backhaul data packet corresponding to the data request to the node corresponding to the load balancer includes:
transmitting the backhaul data packet to a Loopback interface of the node corresponding to the load balancer; the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy.
In a third aspect, the present application provides a data transmission apparatus, including:
the marking unit is used for marking the backhaul data packet sent by the server;
and the sending unit is used for sending the marked backhaul data packet to the load balancer.
In the above scheme, the device further includes:
and the setting unit is used for setting a switch for controlling and modifying the source IP of the data request sent by the load balancer in the node corresponding to the load balancer.
In the above solution, the setting unit is further configured to:
modifying a Neutron Lbaas listener resource in an OpenStack environment and adding a first attribute to the Neutron Lbaas listener resource, wherein the switch is opened when the first attribute is true; and when the first attribute is false, the switch is closed.
In the above scheme, the device further includes:
the modifying unit is used for modifying the source IP of the data request into the IP of the client side sending the data request under the condition that the switch is opened;
the sending unit is further configured to send the data request to the server.
In the above aspect, the marking unit is further configured to:
and marking the backhaul data packet sent by the server based on a Mangle table included by the node corresponding to the load balancer.
In the above scheme, the modifying unit is configured to modify an IP rule and a routing rule of a node corresponding to the load balancer;
the sending unit is further configured to send the marked backhaul data packet to a load balancer based on the IP rule and the routing rule.
In a fourth aspect, the present application provides a data transmission apparatus, including:
the receiving unit is used for the server to receive the data request sent by the node corresponding to the load balancer;
a sending unit, configured to send a backhaul data packet corresponding to the data request to a node corresponding to the load balancer;
the source IP of the data request comprises the IP of the client that sends the data request; the Destination IP (DIP) of the backhaul data packet comprises the IP of the client.
In the above solution, the sending unit is further configured to:
transmitting the backhaul data packet to a Loopback interface of the node corresponding to the load balancer; the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy.
According to the data transmission method, device, and storage medium provided by the application, the node corresponding to the load balancer marks the backhaul data packet sent by the server and sends the marked backhaul data packet to the load balancer, so that the backhaul data packet can be guided into the load balancer. The server sends the backhaul data packet toward the load balancer, so that it can be transmitted to the client through the load balancer, realizing the Transmission Control Protocol (TCP) transparent transmission function of the load balancer in an OpenStack cloud environment. In addition, by setting, in the node provided by the embodiments of the application, the switch for controlling modification of the SIP of the data request sent by the load balancer, and modifying the SIP of the data request when the switch is opened, the server can learn from the data request the real IP of the client that sent it.
Drawings
Fig. 1 is an optional flowchart of a data transmission method of a node end corresponding to a load balancer provided in an embodiment of the present application;
fig. 2 is an optional flowchart of a data transmission method at a server provided in an embodiment of the present application;
fig. 3 is a schematic diagram of data transmission of a load balancer in the related art;
fig. 4 is a schematic diagram of data transmission of a load balancer modifying SIP to client IP;
fig. 5 is a data transmission schematic diagram of a data transmission method according to an embodiment of the present application;
fig. 6 is an alternative flow chart of a data transmission method according to an embodiment of the present application;
fig. 7 is an optional structural schematic diagram of a node end corresponding to a load balancing service of a data transmission device provided in an embodiment of the present application;
fig. 8 is a schematic diagram of an alternative structure of a server side of the data transmission device according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Before proceeding to the further detailed description of the present application, terms and nouns involved in embodiments of the present application are described, the terms and nouns involved in embodiments of the present application are applicable to the following explanation.
(1) Cloud computing: an internet-based computing scheme by which shared software and hardware resources and information can be provided to computers and other devices on demand. OpenStack has been used as a resource management and scheduling platform in cloud environments in important fields such as communications, finance, and industry.
(2) OpenStack: an open-source cloud computing management platform project is a combination of a series of software open-source projects. And providing extensible and elastic cloud computing services for private clouds and public clouds. The project aims to provide a cloud computing management platform which is simple to implement, can be expanded in a large scale, is rich and has unified standards.
(3) Neutron: the core component providing network services in OpenStack. Based on the idea of software-defined networking, it realizes management of software-based network resources; its implementation makes full use of various network-related technologies in the Linux system and supports third-party plugins.
(4) Lbaas: the load balancing service of OpenStack, which by default adopts HAProxy as the driver to realize the load balancing function. By default, Lbaas does not provide high availability; that is, the failure of one Lbaas instance may affect the load balancing function of the service.
(5) HAProxy: free and open-source software written in the C language that provides high availability, load balancing, and application proxying based on TCP and the Hypertext Transfer Protocol (Hyper Text Transfer Protocol, HTTP).
In the related art, a load balancing service is provided by a load balancer, which distributes traffic evenly across multiple back-end servers to avoid overloading any single server, thereby preventing single points of failure in products and further improving server performance, network throughput, and response time. In the traffic model of the load balancing service, the load balancer receives a data request from a client, forwards the data request to an available back-end server for processing, and transmits the back-end server's response back to the client for further processing.
OpenStack initially provided the load balancing service as a sub-service of the virtual network service, exposing interfaces outward through the Neutron component; via the Lbaas plugin, the Lbaas agent, and various drivers, Neutron dispatches underlying software or controls hardware devices to construct load balancing devices and realize the load balancing function. Later, the independent Octavia component was created to provide the load balancing service: cooperating with components such as Neutron, Nova, and Glance, it creates a load balancer in the form of an Amphora virtual machine and realizes the load balancing function in software inside the virtual machine. Whether via the Lbaas plugin or Octavia, the default load balancing service provider in the OpenStack project is HAProxy.
In the related art, the following three methods are mainly used for realizing data transmission through load balancing:
the method comprises the following steps: the method comprises the steps that a load balancing node receives a first service request message from a service request end, wherein the first service request comprises address information of the service request end, address information of a load balancing instance to be processed and a media access control (Media Access Control, MAC) address of the load balancing node; determining a to-be-processed service member according to address information of a to-be-processed load balancing instance, wherein the to-be-processed service member is used for processing a first service request message; and modifying the MAC address of the load balancing node in the first service request message into the MAC address of the member to be processed to obtain a second service request message, and sending the second service request message to the computing node to which the member to be processed belongs.
The second method is as follows: the system comprises a load balancer and a virtual router created in a software-defined mode. A request data packet sent by a client arrives at the server after being processed by the load balancing system, and a response data packet returned by the server is sent to the client after being processed by the load balancing system, realizing a transparent proxy. When performing the transparent proxy, the server does not need to be modified, so a user can select the server's operating system as needed, which is convenient; the user can create a load balancer at any time without adding hardware, reducing hardware cost; the transmission of network data can be reduced; and the load capacity can be horizontally expanded through cluster deployment.
The third method is as follows: the system comprises a client, a Cache server, an Array load balancing server, and a plurality of World Wide Web (Web) application servers. The client is connected with the Cache server, and the Cache server is connected with the Array load balancing server. The client sends a content acquisition request; the Array load balancing server, connected with the plurality of Web application servers, acquires and compares their connection counts and forwards the content acquisition request to a Web application server with relatively fewer connections.
In the first method, a load balancing device receives the first service request from the client, first converts the MAC address in the first service request and sends it to the computing node, and then the computing node performs a destination network address translation (Destination Network Address Translation, DNAT) operation on the converted first service request (that is, the second service request) and sends it to the processing member. This method requires the computing node to complete the DNAT operation on the second service request, which increases the load of the computing node, and it relies on a specific physical load balancing device, which is not conducive to service expansion in a cloud computing environment.
In the second method, the load balancer and the virtual router perform a DNAT operation on the request data packet sent by the client so as to send it to the server side selected according to the load balancing rule. For backhaul data packets, the system uses the virtual router and the load balancer to perform two source network address translation (Source Network Address Translation, SNAT) operations so as to send the backhaul data packets out of the system to the client. The traffic path of this system requires multiple network address translation (Network Address Translation, NAT) operations, which places a high load on the physical bearers of the virtual network devices. In addition, the system swaps the locations of the load balancer and the virtual router in the traffic path of the OpenStack architecture, and therefore cannot be implemented in an OpenStack-based infrastructure as a service (Infrastructure as a Service, IaaS) solution.
In the third method, a proxy is constructed between the client and the Web application servers using the Cache server and the Array load balancing server; the proxy receives requests from clients, selects a suitable Web application server according to a least-connections-first principle to process each request, and returns the processing result to the client. Unlike the first and second methods, the third method completes the load balancing task in proxy mode, which is closest to the technical scheme given herein. However, in the third method the Source IP (SIP) of every data packet flowing to the server through the proxy is the same, namely the service IP of the proxy, so the server cannot learn the real IP of the client.
Based on the problems existing in the data transmission of the OpenStack cloud computing platform, the application provides a data transmission method which can solve the technical problems and disadvantages which cannot be solved in the prior art.
Fig. 1 is a schematic flowchart of an alternative data transmission method of a node end corresponding to a load balancer according to an embodiment of the present application, and will be described according to each step.
Step S101, the node corresponding to the load balancer marks the backhaul packet sent by the server.
In some embodiments, the backhaul data packet sent by the node marking server corresponding to the load balancer includes: and the node corresponding to the load balancer receives the backhaul data packet sent by the server, and marks the backhaul data packet based on a Mangle table included by the node. The backhaul data packet corresponds to a data request sent to a server by a client through a load balancer; the SIP of the backhaul packet is a server IP, and the Destination IP (DIP) of the backhaul packet is an IP of a client that sends a data request corresponding to the backhaul packet.
In some embodiments, the marking, by the node corresponding to the load balancer, of the backhaul data packet based on the Mangle table included in the node includes: setting iptables at the node corresponding to the load balancer; based on the iptables policy, the Mangle table modifies the corresponding flag bits of backhaul data packets conforming to the policy.
In some embodiments, the iptables policy may include: backhaul data packets flowing into a socket (Socket) of the node corresponding to the virtual load balancer, and/or backhaul data packets whose SIP is the server and whose destination IP is the client.
In some embodiments, the setting of the iptables at the node corresponding to the load balancer may include: modifying the HAProxy driver of Neutron Lbaas, encoding the iptables change logic into the HAProxy driver, and applying the change logic before the HAProxy driver initializes the HAProxy process. The modifying, by the Mangle table, of the corresponding flag bits of backhaul data packets conforming to the iptables policy includes: the Mangle table marks, for example with an FWMARK, the backhaul data packets that conform to the iptables policy.
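The Mangle-table marking described above can be sketched as a standard TPROXY-style divert chain. The chain name, mark value, and TCP filter below are illustrative assumptions not taken from the patent; the `socket` match selects packets whose 4-tuple corresponds to an existing local proxy socket, matching the policy described in the text.

```shell
# Hypothetical sketch of the mangle-table marking rules (chain name DIVERT,
# mark value 1, and the TCP filter are assumptions).
iptables -t mangle -N DIVERT
# Match backhaul packets that belong to a socket already open on this node,
# i.e. a socket of the transparent proxy.
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1   # the FWMARK
iptables -t mangle -A DIVERT -j ACCEPT
```

These commands require root privileges and would, per the text, be emitted by the modified HAProxy driver before the HAProxy process starts.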
In some embodiments, the node corresponding to the load balancer may be a node where the load balancer is located.
Step S102, the marked backhaul data packet is sent to the load balancer.
In some embodiments, before the node corresponding to the load balancer sends the marked backhaul data packet to the load balancer, the method further includes: and the node corresponding to the load balancer modifies the IP rule and the routing rule of the node corresponding to the load balancer.
In some embodiments, the modifying the IP rule and the routing rule of the node corresponding to the load balancer includes: modifying HAproxy Driver of Neutron Lbias, encoding IP Rule and change logic of routing Rule into the HAproxy Driver, and applying the change logic before the HAproxy Driver initializes HAproxy process.
In some embodiments, the sending, by the node corresponding to the load balancer, the marked backhaul data packet to the load balancer includes: and the node corresponding to the load balancer sends the marked backhaul data packet to the load balancer based on the IP rule and the routing rule.
In some embodiments, the sending, by the node corresponding to the load balancer, of the marked backhaul data packet to the load balancer based on the IP rule and the routing rule includes: the node corresponding to the load balancer directs the marked backhaul data packet to the routing rule based on the IP rule; based on the routing rule, the node directs the marked backhaul data packet to the Loopback interface included in the node, and after passing through the Loopback interface the backhaul data packet is transmitted to the load balancer through TProxy.
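A minimal sketch of the IP rule and routing rule changes, assuming the Mangle table sets firewall mark 1 and that routing table 100 is free (both values are assumptions): the rule sends marked packets to table 100, whose single route delivers them locally via the Loopback device, where TProxy can pick them up.

```shell
# Assumed values: fwmark 1 (set by the Mangle table) and routing table 100.
ip rule add fwmark 1 lookup 100
# Deliver every marked packet locally through the Loopback interface.
ip route add local 0.0.0.0/0 dev lo table 100
```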
In some embodiments, the transmitting the backhaul data packet to the load balancer through TProxy after passing through the Loopback interface may include: the backhaul data packet is imported into a Loopback interface and then received by a TProxy included in a node corresponding to a load balancer, and the TProxy sends the backhaul data packet to the load balancer; or the load balancer monitors and acquires the backhaul data packet imported into the Loopback interface.
In some embodiments, before step S101, further comprising:
step S100, a switch for controlling SIP for modifying the data request sent by the load balancer is arranged in the node.
In some embodiments, the setting, in the node, of the switch for controlling modification of the SIP of the data request sent by the load balancer includes: modifying a Neutron Lbaas listener resource in an OpenStack environment and adding a first attribute to the Neutron Lbaas listener resource, wherein the switch is opened when the first attribute is true; and when the first attribute is false, the switch is closed. The first attribute may be tcp_transfer.
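Toggling the switch might look like the hypothetical CLI sketch below. The attribute name `tcp_transfer` comes from the text, but the exact client command and flag spelling are assumptions; the patent only states that a first attribute is added to the Neutron Lbaas listener resource.

```shell
# Hypothetical: open the switch (first attribute true) on a listener.
neutron lbaas-listener-update my-listener --tcp-transfer True
# Hypothetical: close the switch (first attribute false).
neutron lbaas-listener-update my-listener --tcp-transfer False
```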
In some embodiments, the data request sent by the load balancer includes: and the client side sends a data request to the server through the load balancer and/or the load balancer sends a data request to the server.
In some embodiments, with the switch on, the node modifies the SIP of the data request from the IP of the load balancer to the IP of the client that sent the data request, and sends the modified data request to the server. With the switch off, the node forwards the data request with the SIP being the IP of the load balancer; in this case, the server cannot learn the real IP of the client that sent the data request.
According to the data transmission method provided by the embodiments of the application, a switch for controlling modification of the SIP of the data request sent by the load balancer is set in the node, so that when the switch is opened the server can learn the real IP of the client that sent the data request. Moreover, the node corresponding to the load balancer marks the backhaul data packet sent by the server and sends it to the load balancer, so that the backhaul data packet can be guided into the load balancer, realizing the TCP transparent transmission function of the load balancer in the OpenStack cloud environment.
Fig. 2 is a schematic flowchart of an alternative data transmission method at a server side according to an embodiment of the present application, and will be described according to various steps.
In step S201, the server receives a data request sent by a node corresponding to the load balancer.
In some embodiments, the receiving, by the server, a data request sent by a node corresponding to a load balancer includes: and the server receives a data request sent by a node corresponding to the load balancer, wherein the SIP of the data request comprises the IP of the client for sending the data request.
In some embodiments, the method further comprises: and the server confirms a backhaul data packet corresponding to the data request according to the data request and sends the backhaul data packet.
In this way, the server can learn the IP of the client that sent the data request according to the SIP of the data request.
Step S202, sending the backhaul data packet corresponding to the data request to the node corresponding to the load balancer.
In some embodiments, sending the backhaul data packet corresponding to the data request to the node corresponding to the load balancer includes: the server sends the backhaul data packet to the Loopback interface of the node corresponding to the load balancer, and the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy.
The backhaul data packet being transmitted to the load balancer through the Loopback interface and TProxy may include: the backhaul data packet is imported into the Loopback interface and then received by the TProxy included in the node corresponding to the load balancer, and the TProxy sends the backhaul data packet to the load balancer; or the load balancer listens on and acquires the backhaul data packet imported into the Loopback interface.
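One way to realize such a transparent path with HAProxy is the `transparent` bind option plus `source ... usesrc clientip`, which are real HAProxy directives; the addresses, names, and file path below are illustrative assumptions, and the configuration only works together with the Mangle/IP-rule setup on the node.

```shell
# Illustrative HAProxy configuration (addresses and names are assumptions).
cat > /etc/haproxy/haproxy.cfg <<'EOF'
frontend vip_listener
    bind 203.0.113.10:80 transparent    # listen on the balancer VIP
    default_backend web_servers

backend web_servers
    # Connect to the back end with the client's real IP as the SIP; this is
    # the behavior enabled by the "switch". It requires CAP_NET_ADMIN plus
    # the mangle and routing configuration on the node so that replies
    # return to HAProxy instead of leaving via the gateway.
    source 0.0.0.0 usesrc clientip
    server web1 198.51.100.21:80 check
EOF
```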
In some embodiments, the DIP of the backhaul packet includes the IP of the client and the SIP of the backhaul packet includes the IP of the server.
In some embodiments, the method further comprises: modifying the routing rule of the server so that the next-hop IP of the backhaul data packet is changed from the gateway IP to the IP of the load balancer. In this way, the backhaul data packet is transmitted to the load balancer rather than directly to the client, avoiding the situation in which the client cannot receive it.
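The server-side routing change can be sketched as follows; the addresses are illustrative assumptions, and in practice a policy route scoped to the service traffic may be preferable to replacing the whole default route.

```shell
# Assumed address: 192.0.2.5 is the node hosting the load balancer.
# Send backhaul traffic toward that node instead of the old gateway, so
# replies re-enter the node, are marked there, and are diverted to the
# load balancer rather than bypassing it.
ip route replace default via 192.0.2.5
```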
Thus, by the data transmission method provided by the embodiments of the application, the server can learn the IP of the client that sent the data request from the received data request whose SIP is the client's IP; and by modifying the routing rule so that the next-hop IP of the backhaul data packet is changed from the gateway IP to the IP of the load balancer, backhaul transmission of the backhaul data packet can be realized, avoiding the situation in which the DIP is the client but the client cannot receive the packet.
Fig. 3 shows a schematic diagram of data transmission of a load balancer in the related art; fig. 4 shows a schematic diagram of data transmission when the load balancer modifies the SIP to the client IP; fig. 5 shows a schematic diagram of data transmission of the transmission method according to an embodiment of the present application.
Fig. 6 shows an alternative flow chart of a data transmission method according to an embodiment of the present application, which will be described with reference to fig. 3, fig. 4, and fig. 5.
As shown in fig. 3, the client obtains service by accessing the IP of the load balancer. For a data request whose SIP is the client IP and whose DIP is the load balancer IP, the load balancer selects an optimal server according to information such as the protocol type and access port of the request packet and policies such as least-connections-first and weighted round robin, and, in the role of a proxy, forwards the request from the client to that server. The DIP of the data request forwarded by the load balancer to the server defaults to the IP of the server, and the SIP of the forwarded request is the IP of the load balancer. Thus, the server is not aware of the real IP of the client that sent the data request.
In view of the defect of the data transmission method shown in fig. 3, in fig. 4 the client obtains service by accessing the IP of the load balancer; for a data request whose SIP is the client IP and whose DIP is the load balancer IP, the load balancer modifies the SIP of the data request from the IP of the load balancer to the real IP of the client that sent the request and then sends the request to the server. In this way, the server can learn the real IP of the client that sent the data request. However, during backhaul, the server sends the backhaul data packet directly to the gateway according to the real IP of the client and the default routing rule, and the gateway in turn sends the backhaul data packet to the client; such a backhaul network path completely bypasses the load balancer.
In TCP or other connection-oriented communication under the unidirectional traffic mode shown in Fig. 4, the client sends a synchronize sequence number (Synchronize Sequence Numbers, SYN) packet to the IP of the load balancer, but it cannot accept a [SYN, acknowledgement character (Acknowledge character, ACK)] packet whose SIP is the real IP of the back-end server rather than the load balancer IP it contacted. This directly prevents the TCP connection from being established; that is, the unidirectional traffic mode does not satisfy the communication conditions of the TCP protocol.
To solve the two technical problems in data transmission in an OpenStack cloud computing environment, namely that the server cannot obtain the client IP and that the TCP connection cannot be established, the method of the present application includes steps S301 to S309, described below with reference to Figs. 5 and 6.
In step S301, the node corresponding to the load balancer sets, in that node, a switch for controlling whether the SIP of a data request sent by the load balancer is modified.
In some embodiments, setting, in the node corresponding to the load balancer, the switch for controlling modification of the SIP of the data request sent by the load balancer includes: modifying the Neutron LBaaS listener resource in an OpenStack environment and adding a first attribute to the listener resource, where the switch is on when the first attribute is true and off when the first attribute is false. The first attribute may be TCP_transparent.
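As an illustrative sketch only, such a listener attribute could be exposed through the LBaaS v2 command line roughly as follows; the --tcp-transparent flag does not exist in stock Neutron and merely stands in for the first attribute this embodiment adds, and the listener and load balancer names are invented:

```shell
# Hypothetical flag: stock "neutron lbaas-listener-create" has no
# --tcp-transparent option; it illustrates the added first attribute.
neutron lbaas-listener-create \
  --name web-listener \
  --loadbalancer my-lb \
  --protocol TCP \
  --protocol-port 80 \
  --tcp-transparent True
```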
In some embodiments, the data request sent by the load balancer includes: a data request sent by the client to the server through the load balancer, and/or a data request sent by the load balancer itself to the server.
In some embodiments, when the switch is on, the node modifies the SIP of the data request from the IP of the load balancer to the IP of the client that sent the request, and sends the request with the modified source IP to the server. When the switch is off, the node forwards the data request with the IP of the load balancer as its SIP; in that case, the server cannot learn the real IP of the client that sent the request.
In step S302, the client sends a data request to the load balancer.
In some embodiments, the client sends a data request to the load balancer, the SIP of the data request being the client IP and the DIP of the data request being the IP of the load balancer.
Step S303, the load balancer forwards the data request.
In some embodiments, with the switch on, the load balancer forwarding the data request includes: the load balancer receives the data request and sends it to a server; in sending the data request to the server, the load balancer modifies the SIP of the request, changing it from the load balancer IP to the client IP.
In some embodiments, in conjunction with step S301, the load balancer forwarding the data request includes: modifying the HAProxy driver of Neutron LBaaS and adding logic that inspects the listener object assembled by the LBaaS plug-in and determines whether the listener's TCP_transparent attribute is true. When the attribute is true, the switch is on: the driver modifies the HAProxy process configuration file and writes "source 0.0.0.0 usesrc clientip" into the backend section governed by the corresponding listener, so that the real IP of the client is preserved. When the attribute is false, the switch is off and the load balancer performs no further processing.
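For illustration, the backend section written for such a listener might look like the fragment below. The backend name and server addresses are invented; "source 0.0.0.0 usesrc clientip" is HAProxy's real directive for connecting to back-end servers with the client's address as the source, and it requires an HAProxy build with TPROXY support and sufficient privileges:

```shell
# haproxy.cfg fragment (backend name and addresses are illustrative)
backend pool_web
    mode tcp
    balance roundrobin
    # Connect to back-end servers using the client's IP as the
    # source address (requires TPROXY support and CAP_NET_ADMIN).
    source 0.0.0.0 usesrc clientip
    server web1 192.168.0.11:80 check
    server web2 192.168.0.12:80 check
```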
Step S304, the server receives the data request sent by the node corresponding to the load balancer.
In some embodiments, the server receiving a data request sent by the node corresponding to the load balancer includes: the server receives the data request sent by the node, wherein the SIP of the data request includes the IP of the client that sent the data request.
In some embodiments, the method further includes: the server determines, according to the data request, the backhaul data packet corresponding to the data request, and sends the backhaul data packet.
In this way, the server can learn the IP of the client that sent the data request according to the SIP of the data request.
In step S305, the server sends the backhaul data packet corresponding to the data request to the node corresponding to the load balancer.
In some embodiments, sending the backhaul data packet corresponding to the data request to the node corresponding to the load balancer includes: the server sends the backhaul data packet to a loopback (Loopback) interface of the node corresponding to the load balancer, and the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy. Transmitting the backhaul data packet to the load balancer through the Loopback interface and TProxy may include: the backhaul data packet is directed into the Loopback interface and then received by a TProxy included in the node corresponding to the load balancer, which sends it to the load balancer; or the load balancer listens for and obtains the backhaul data packet directed into the Loopback interface.
In some embodiments, the DIP of the backhaul data packet includes the IP of the client, and the SIP of the backhaul data packet includes the IP of the server.
In some embodiments, the method further includes: modifying the routing rule of the server so that the IP of the next hop is changed from the gateway IP to the IP of the load balancer. In this way, the backhaul data packet is transmitted to the load balancer rather than directly to the client, avoiding the situation in which the client cannot receive it.
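On the server side this routing change can be sketched with a single command, assuming an illustrative load balancer address of 192.168.0.5 and interface eth0 (requires root privileges):

```shell
# Replace the next hop: backhaul packets addressed to the client's
# real IP now go to the load balancer node instead of the gateway.
ip route replace default via 192.168.0.5 dev eth0
```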
Step S306, the node corresponding to the load balancer marks the backhaul packet sent by the server.
In some embodiments, the node corresponding to the load balancer marking the backhaul data packet sent by the server includes: the node receives the backhaul data packet sent by the server and marks it based on a Mangle table included in the node. The backhaul data packet corresponds to a data request sent by a client to the server through the load balancer; the SIP of the backhaul data packet is the server IP, and its destination IP is the IP of the client that sent the corresponding data request.
In some embodiments, the node corresponding to the load balancer marking the backhaul data packet based on the Mangle table included in the node includes: setting IPtables rules at the node, where the Mangle table modifies, based on the IPtables policy, the corresponding flag bits of backhaul data packets that match the policy.
In some embodiments, the IPtables policy may cover: backhaul data packets flowing into a socket (Socket) of the node corresponding to the load balancer, and/or backhaul data packets whose SIP is the server and whose destination IP is the client.
In some embodiments, setting the IPtables rules at the node corresponding to the load balancer may include: modifying the HAProxy driver of Neutron LBaaS, encoding the IPtables change logic into the driver, and applying the change logic before the driver initializes the HAProxy process. The Mangle table modifying the corresponding flag bits of backhaul data packets that match the IPtables policy includes: the Mangle table marks such packets, for example with an fwmark.
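A hedged sketch of the Mangle-table rules such change logic could install is given below; the chain name DIVERT and the mark value 1 are illustrative, while the `-m socket` match is the standard way to express the "flowing into a socket of the node" policy mentioned above (requires root privileges):

```shell
# Mark backhaul packets that belong to an existing local socket
# (e.g. HAProxy's established connection) with fwmark 1.
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
```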
In some embodiments, the node corresponding to the load balancer may be a node where the load balancer is located.
Step S307, the node corresponding to the load balancer modifies the IP rule and the routing rule of the node corresponding to the load balancer.
In some embodiments, the node corresponding to the load balancer modifying its IP rule and routing rule includes: modifying the HAProxy driver of Neutron LBaaS, encoding the change logic for the IP rule and the routing rule into the driver, and applying the change logic before the driver initializes the HAProxy process.
In step S308, the node corresponding to the load balancer sends the marked backhaul data packet to the load balancer.
In some embodiments, the sending, by the node corresponding to the load balancer, the marked backhaul data packet to the load balancer includes: and the node corresponding to the load balancer sends the marked backhaul data packet to the load balancer based on the IP rule and the routing rule.
In some embodiments, the node corresponding to the load balancer sending the marked backhaul data packet to the load balancer based on the IP rule and the routing rule includes: the node directs the marked backhaul data packet to the routing rule based on the IP rule; the node then directs the marked backhaul data packet, based on the routing rule, to the Loopback interface included in the node, and after passing through the Loopback interface the backhaul data packet is transmitted to the load balancer through TProxy.
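The IP rule and routing rule of this step can be sketched with the standard Linux policy-routing commands below, using an illustrative mark value of 1 (matching the fwmark set in step S306) and an arbitrary table number 100 (requires root privileges):

```shell
# IP rule: packets carrying fwmark 1 consult routing table 100.
ip rule add fwmark 1 lookup 100
# Routing rule: table 100 delivers every destination locally through
# the Loopback interface, where TProxy hands it to the load balancer.
ip route add local 0.0.0.0/0 dev lo table 100
```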
In some embodiments, transmitting the backhaul data packet to the load balancer through TProxy after it passes through the Loopback interface may include: the backhaul data packet is directed into the Loopback interface and then received by a TProxy included in the node corresponding to the load balancer, which sends it to the load balancer; or the load balancer listens for and obtains the backhaul data packet directed into the Loopback interface.
In step S309, the load balancer sends the backhaul packet to the client.
In some embodiments, the load balancer sending the backhaul data packet to the client includes: the load balancer sends to the client a backhaul data packet whose SIP is the IP of the load balancer and whose DIP is the IP of the client.
Thus, with the data transmission method provided by the embodiments of the present application, the server can obtain the real IP of the client because the load balancer modifies the SIP of the data request; the TCP transparent transmission function of the load balancing service provider can be turned on or off by setting the TCP_transparent attribute of the listener managed by the Neutron load balancing service. Further, by enabling the usesrc clientip function of HAProxy, the processing of packets from the load balancer to the back-end server is completed; by setting fwmark marks through IPtables and steering backhaul data packets through the IP rule and the routing rule, the processing of packets from the back-end server to the load balancer is completed, thereby realizing the TCP transparent transmission function. In addition, the embodiments of the present application use functions and modules such as TProxy, IPtables, and ip rule to implement a transparent proxy, which can complete the proxy task while preserving the client-facing SIP of the backhaul data packet, and fully conforms to the OpenStack architecture.
Fig. 7 is a schematic diagram of an optional structure of the node side, corresponding to the load balancing service, of a data transmission apparatus according to an embodiment of the present application, described below part by part.
In some embodiments, the data transmission apparatus 400 includes: a marking unit 401 and a transmitting unit 402.
The marking unit 401 is configured to mark a backhaul packet sent by the server.
The sending unit 402 is configured to send the marked backhaul data packet to a load balancer.
In some embodiments, the apparatus 400 further comprises:
The setting unit 403 is configured to set, in a node corresponding to the load balancer, a switch for controlling modification of the source IP of the data request sent by the load balancer.
The setting unit 403 is further configured to modify a Neutron LBaaS listener resource in an OpenStack environment and add a first attribute to the listener resource, where the switch is on when the first attribute is true and off when the first attribute is false.
In some embodiments, the apparatus 400 further comprises:
The modifying unit 404 is configured to modify, when the switch is on, the source IP of the data request to the IP of the client that sends the data request.
The sending unit 402 is further configured to send the data request to the server.
The marking unit 401 is further configured to mark a backhaul packet sent by the server based on a Mangle table included in a node corresponding to the load balancer.
The modifying unit 404 is further configured to modify an IP rule and a routing rule of a node corresponding to the load balancer.
The sending unit 402 is further configured to send the marked backhaul data packet to a load balancer based on the IP rule and the routing rule.
Fig. 8 is a schematic diagram of an optional structure of the server side of the data transmission apparatus according to an embodiment of the present application, described below part by part.
In some embodiments, the apparatus 500 comprises: a receiving unit 501 and a transmitting unit 502.
The receiving unit 501 is configured to receive, by a server, a data request sent by a node corresponding to a load balancer.
The sending unit 502 is configured to send a backhaul data packet corresponding to the data request to a node corresponding to the load balancer.
In some embodiments, the SIP of the data request includes an IP of a client that sent the data request; the destination IP of the backhaul packet includes the IP of the client that sent the data request.
The sending unit 502 is further configured to send the backhaul data packet to a Loopback interface of the node corresponding to the load balancer; the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be accomplished by hardware related to program instructions; the foregoing program may be stored in a storage medium and, when executed, performs the steps of the above method embodiments. The foregoing storage medium includes: a removable storage device, a read-only memory (ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or another medium capable of storing program code.
Alternatively, the integrated units described above, if implemented in the form of software functional modules and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The foregoing storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk, an optical disk, or another medium capable of storing program code.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present application, and such changes and substitutions are intended to fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method of data transmission, the method comprising:
setting a switch for controlling and modifying a source Internet Protocol (IP) of a data request sent by a load balancer in a node corresponding to the load balancer;
when the switch is turned on, the node modifies the source IP of the data request to the IP of the client side sending the data request;
transmitting the data request to a server;
the node corresponding to the load balancer marks a backhaul data packet sent by the server;
and sending the marked backhaul data packet to the load balancer.
2. The method of claim 1, wherein the setting a switch in a node corresponding to the load balancer comprises:
modifying a Neutron LBaaS listener resource in an OpenStack environment, adding a first attribute in the Neutron LBaaS listener resource, and opening the switch under the condition that the first attribute is true; in the case that the first attribute is false, the switch is closed.
3. The method of claim 1, wherein the node corresponding to the load balancer marking the backhaul data packet sent by the server comprises:
and marking the backhaul data packet sent by the server based on the Mangle table included by the node.
4. The method of claim 1, wherein the sending the tagged backhaul data packet to a load balancer comprises:
modifying IP rules and routing rules of nodes corresponding to the load balancer;
and sending the marked backhaul data packet to a load balancer based on the IP rule and the routing rule.
5. A method of data transmission, the method comprising:
the method comprises the steps that a server receives a data request sent by a node corresponding to a load balancer, wherein a source Internet Protocol (IP) of the data request comprises an IP of a client that sends the data request, the IP of the client being obtained by turning on a switch, provided in the node, for modifying the source IP of the data request;
and sending the backhaul data packet corresponding to the data request to a node corresponding to the load balancer, wherein the destination IP of the backhaul data packet comprises the IP of the client side sending the data request.
6. The method of claim 5, wherein the sending the backhaul data packet corresponding to the data request to the node corresponding to the load balancer comprises:
transmitting the backhaul data packet to a loopback (Loopback) interface of the node corresponding to the load balancer; and the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy.
7. A data transmission apparatus, the apparatus comprising:
the setting unit is used for setting a switch for controlling and modifying a source Internet Protocol (IP) of a data request sent by the load balancer in a node corresponding to the load balancer;
a modifying unit, configured to modify, when the switch is turned on, a source IP of the data request to an IP of a client that sends the data request;
the first sending unit is used for sending the data request to a server;
the marking unit is used for marking the backhaul data packet sent by the server;
and the second sending unit is used for sending the marked backhaul data packet to the load balancer.
8. The apparatus of claim 7, wherein the setting unit is further configured to:
Modifying a Neutron LBaaS listener resource in an OpenStack environment, adding a first attribute in the Neutron LBaaS listener resource, and opening the switch under the condition that the first attribute is true; in the case that the first attribute is false, the switch is closed.
9. The apparatus of claim 7, wherein the marking unit is further configured to:
and marking the backhaul data packet sent by the server based on a Mangle table included by the node corresponding to the load balancer.
10. The apparatus of claim 7, wherein the apparatus further comprises:
the modification unit is used for modifying the IP rule and the routing rule of the node corresponding to the load balancer;
the second sending unit is further configured to send the marked backhaul data packet to a load balancer based on the IP rule and the routing rule.
11. A data transmission apparatus, the apparatus comprising:
the receiving unit is used for the server to receive a data request sent by a node corresponding to a load balancer, wherein a source Internet Protocol (IP) of the data request comprises an IP of a client that sends the data request, the IP of the client being obtained by turning on a switch, provided in the node, for modifying the source IP of the data request;
And the sending unit is used for sending the backhaul data packet corresponding to the data request to the node corresponding to the load balancer, wherein the destination IP of the backhaul data packet comprises the IP of the client side sending the data request.
12. The apparatus of claim 11, wherein the transmitting unit is further configured to:
transmitting the backhaul data packet to a loopback (Loopback) interface of the node corresponding to the load balancer; and the backhaul data packet is transmitted to the load balancer through the Loopback interface and TProxy.
13. A storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the data transmission method of any one of claims 1 to 4.
14. A storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the data transmission method of any one of claims 5 to 6.
15. A data transmission apparatus comprising a memory, a processor, and an executable program stored on the memory and capable of being run by the processor, wherein the processor performs the steps of the data transmission method according to any one of claims 1 to 4 when executing the executable program.
16. A data transmission apparatus comprising a memory, a processor, and an executable program stored on the memory and capable of being run by the processor, wherein the processor performs the steps of the data transmission method according to any one of claims 5 to 6 when executing the executable program.
CN202010647499.0A 2020-07-07 2020-07-07 Data transmission method, device and storage medium Active CN113973086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010647499.0A CN113973086B (en) 2020-07-07 2020-07-07 Data transmission method, device and storage medium


Publications (2)

Publication Number Publication Date
CN113973086A CN113973086A (en) 2022-01-25
CN113973086B (en) 2024-01-26

Family

ID=79584503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010647499.0A Active CN113973086B (en) 2020-07-07 2020-07-07 Data transmission method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113973086B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8825792B1 (en) * 2008-03-11 2014-09-02 United Services Automobile Association (Usaa) Systems and methods for online brand continuity
CN104780115A (en) * 2014-01-14 2015-07-15 上海盛大网络发展有限公司 Load balancing method and load balancing system in cloud computing environment
CN105554065A (en) * 2015-12-03 2016-05-04 华为技术有限公司 Method, conversion unit and application unit for message processing
CN106506700A (en) * 2016-12-28 2017-03-15 北京优帆科技有限公司 A kind of transparent proxy method of load equalizer and SiteServer LBS
CN106686129A (en) * 2017-01-23 2017-05-17 天地融科技股份有限公司 Load balancing method and load balancing system
CN106686085A (en) * 2016-12-29 2017-05-17 华为技术有限公司 Load balancing method, apparatus and system
CN107306289A (en) * 2016-04-21 2017-10-31 中国移动通信集团重庆有限公司 A kind of load-balancing method and equipment based on cloud computing
KR20180024062A (en) * 2016-08-25 2018-03-08 엔에이치엔엔터테인먼트 주식회사 Method and system for processing load balancing using virtual switch in virtual network invironment
KR20180024063A (en) * 2016-08-25 2018-03-08 엔에이치엔엔터테인먼트 주식회사 Method and system for processing direct server return load balancing using loop back interface in virtual network invironment
CN107786669A (en) * 2017-11-10 2018-03-09 华为技术有限公司 A kind of method of load balance process, server, device and storage medium
CN108449282A (en) * 2018-05-29 2018-08-24 华为技术有限公司 A kind of load-balancing method and its device
US10091098B1 (en) * 2017-06-23 2018-10-02 International Business Machines Corporation Distributed affinity tracking for network connections
CN110932992A (en) * 2019-11-29 2020-03-27 深圳供电局有限公司 Load balancing communication method based on tunnel mode
CN110971698A (en) * 2019-12-09 2020-04-07 北京奇艺世纪科技有限公司 Data forwarding system, method and device
CN111010342A (en) * 2019-11-21 2020-04-14 天津卓朗科技发展有限公司 Distributed load balancing implementation method and device
CN111008075A (en) * 2019-12-05 2020-04-14 安超云软件有限公司 Load balancing system, method, device, equipment and medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8819275B2 (en) * 2012-02-28 2014-08-26 Comcast Cable Communications, Llc Load balancing and session persistence in packet networks
US9560126B2 (en) * 2013-05-06 2017-01-31 Alcatel Lucent Stateless load balancing of connections
WO2015187946A1 (en) * 2014-06-05 2015-12-10 KEMP Technologies Inc. Adaptive load balancer and methods for intelligent data traffic steering
JP6505171B2 (en) * 2016-08-25 2019-04-24 エヌエイチエヌ エンターテインメント コーポレーションNHN Entertainment Corporation Method and system for handling DSR load balancing utilizing a loopback interface in a virtual network environment
US10212071B2 (en) * 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
CN107087023B (en) * 2017-04-06 2019-11-05 平安科技(深圳)有限公司 Data forwarding method and system
US10541925B2 (en) * 2017-08-31 2020-01-21 Microsoft Technology Licensing, Llc Non-DSR distributed load balancer with virtualized VIPS and source proxy on load balanced connection




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant