CN116366541B - Cloud scene network storage load balancing access method and system - Google Patents

Cloud scene network storage load balancing access method and system

Info

Publication number
CN116366541B
Authority
CN
China
Prior art keywords
watch
intranet gateway
bucket
group
ovs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310258352.6A
Other languages
Chinese (zh)
Other versions
CN116366541A (en)
Inventor
赵晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202310258352.6A priority Critical patent/CN116366541B/en
Publication of CN116366541A publication Critical patent/CN116366541A/en
Application granted granted Critical
Publication of CN116366541B publication Critical patent/CN116366541B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/645 Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
    • H04L 45/655 Interaction between route computation entities and forwarding entities, e.g. for route determination or for flow table update
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/74 Address processing for routing
    • H04L 45/741 Routing in networks with a plurality of addressing schemes, e.g. with both IPv4 and IPv6
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/09 Mapping addresses
    • H04L 61/10 Mapping addresses of different types
    • H04L 61/103 Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/50 Address allocation
    • H04L 61/5007 Internet protocol [IP] addresses
    • H04L 61/5014 Internet protocol [IP] addresses using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H04L 67/1044 Group management mechanisms
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the field of network transmission in data communication and provides a cloud scene network storage load balancing access method and system. A new watch_ip field is added to the group table bucket, so that the link state between the host and the remote intranet gateway can be perceived, the group table bucket on the host can be switched quickly, and traffic is sent to other healthy intranet gateways.

Description

Cloud scene network storage load balancing access method and system
Technical Field
The invention relates to the field of network transmission in data communication, in particular to a cloud scene network storage load balancing access method and system.
Background
SDN is an architecture that abstracts a network into a control plane and a forwarding plane, making the network agile and flexible. With the rise of the cloud, SDN provides flexible network support for various cloud services and application loads. Meanwhile, because different virtualization technologies differ in implementation details, full-stack network interworking faces issues such as the asymmetry of the information stored by each virtualization control plane, how to synchronize and uniformly manage information from multiple control planes, and how to implement cross-region routing, all of which pose great challenges to SDN.
OpenFlow is a network communication protocol that operates at the data link layer and can control the forwarding plane of a network switch or router, so it is widely used in SDN architectures. SDN controls the forwarding plane of the OVS through OpenFlow, thereby changing the network path taken by packets.
Network storage is one way of storing data. It consists of storage devices and embedded system software and can provide cross-platform file sharing. In this configuration, the network storage centrally manages and processes all data on the network, offloading load from application or enterprise servers, effectively reducing total cost of ownership and protecting the user's investment. Therefore, in cloud scenarios, various network storage technologies are mostly adopted to store user data.
In the cloud scenario, some storage devices are deployed outside the cloud or on other clouds, so they must be accessed through a public network address, while the user's cloud host uses an in-cloud private network address. The cloud host therefore needs the SDN to steer its traffic so that it can access the storage device, opening up the network path between the cloud host and the storage device.
The prior art generally implements the following method:
Traffic from the cloud host to the storage undergoes NAT processing at an intranet gateway, and the corresponding traffic is sent to the storage network. Several pairs of intranet gateway clusters are usually deployed in advance in the resource pool and shared by all tenants. When a tenant creates a storage service, an intranet gateway port is allocated to the corresponding subnet (its IP falls on the corresponding intranet gateway) so that traffic can be steered to that intranet gateway. The specific flow is as follows:
1. After cloud host 1 starts, it obtains an address through DHCP;
2. After the DHCP message reaches the virtual switch, it matches an OpenFlow flow entry and is sent to the SDN controller;
3. The SDN controller looks up the lease according to the MAC address carried in the DHCP message, encapsulates the IP address and the related Options into a DHCP response message, and sends it to the virtual machine. The Options of the DHCP response include the host route Option (Option 121), which carries a default route (next hop: the gateway IP) and a storage route (destination: the storage network segment, next hop: the intranet gateway port IP). Note: when there are multiple intranet gateway clusters, only one of them is currently selected and carried to the cloud host (see the Option 121 encoding sketch after this list);
4. After receiving the DHCP response, cloud host 1 configures its own routes according to the Option, including the storage route;
5. When cloud host 1 accesses the storage, it looks up the network segment route and finds that the next hop is intranet gateway IP1, so it first sends an ARP request for the MAC address corresponding to IP1;
6. The ARP message is sent to the OVS and answered on its behalf (proxy ARP) by an OpenFlow flow entry;
7. Cloud host 1 learns the MAC address of IP1, encapsulates it as the destination MAC of the storage access packet, and sends the packet out;
8. The packet is sent to the OVS, matches the MAC forwarding table in OpenFlow, and is forwarded out of a VXLAN port to intranet gateway cluster 1;
9. After receiving the packet, intranet gateway cluster 1 performs NAT translation on the packet's IP according to the configured NAT rules and then sends it out to the storage device.
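For reference, the host route Option mentioned in step 3 is the classless static route Option 121 defined in RFC 3442, which packs each route as a prefix length, the significant octets of the destination, and the router address. The sketch below is a minimal illustration of that encoding; the addresses are purely illustrative assumptions, not values from this patent.

```python
# Minimal sketch of DHCP Option 121 (RFC 3442 classless static routes).
# All addresses below are assumed, for illustration only.
import ipaddress

def encode_option_121(routes):
    """routes: list of (destination_cidr, next_hop) string pairs."""
    payload = bytearray()
    for dest, next_hop in routes:
        net = ipaddress.ip_network(dest)
        significant = (net.prefixlen + 7) // 8        # only significant destination octets
        payload.append(net.prefixlen)
        payload += net.network_address.packed[:significant]
        payload += ipaddress.ip_address(next_hop).packed
    return bytes([121, len(payload)]) + bytes(payload)  # option code 121, length, data

if __name__ == "__main__":
    opt = encode_option_121([
        ("0.0.0.0/0", "192.168.1.1"),        # default route -> subnet gateway (assumed)
        ("100.64.0.0/10", "192.168.1.254"),  # storage segment -> intranet gateway port (assumed)
    ])
    print(opt.hex())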
While this scheme can meet the need for connectivity to the storage, it has some drawbacks:
1. Poor reliability: with the rise of 5G services, the reliability requirements on the system are increasingly high. In the original traffic steering scheme, when the next hop fails, traffic cannot be switched automatically to a healthy next hop, so the service is interrupted for a long time. Because the steering scheme relies on the host route carried in the DHCP message, no automatic traffic switch happens when an intranet gateway fails: after the customer notices the service abnormality, the port corresponding to intranet gateway cluster 1 has to be deleted, and then every cloud host must manually release its address and perform DHCP interaction again to obtain a static route to intranet gateway cluster 2, so the outage lasts a long time. System reliability therefore needs to be strengthened and the failure problem addressed.
2. Poor ability to handle burst traffic: as more and more users come on board, the load the system must bear keeps growing. Intranet gateways are assigned per cloud host, and when the traffic on the intranet gateways becomes too large, capacity is expanded, for example by adding intranet gateway cluster 3. The storage traffic of existing cloud hosts cannot be switched automatically to intranet gateway cluster 3, and the next hop of the storage network segment allocated to newly created cloud hosts is still selected at random among the three clusters. As a result, even after expansion the service pressure on intranet gateway 1 and intranet gateway 2 cannot be fully relieved; the ability to handle burst traffic is poor, and the network cannot automatically adapt to elastic scaling. A traffic steering scheme is therefore needed that load balances per flow and automatically adapts to scaling out and in.
3. Poor compatibility: IPv6 is being promoted vigorously at the national level, and in the long-term evolution of the system both IPv4 and IPv6 networks must be supported. The existing scheme uses Option 121 in DHCP to carry route information to the cloud host, which accesses the storage according to that storage route. When the network is later switched to IPv6, DHCPv6 has no host route Option, i.e. IPv4 and IPv6 cannot share the same scheme, which increases the complexity of the system. A general method is therefore needed to simplify the system.
Disclosure of Invention
In view of the above, a new storage access method and system are provided herein to solve the problems of the original scheme, such as poor reliability, poor ability to handle burst traffic, poor compatibility, and high system complexity.
The invention provides a cloud scene network storage load balancing access method, characterized by comprising the following steps:
S1. The SDN controller deploys N intranet gateway clusters and configures a select-type group table, i.e. the intranet gateway group table, through OpenFlow; the load balancing mode is five-tuple HASH, and N buckets are added accordingly;
S2. The OVS probes the N intranet gateway clusters separately and sets the corresponding bucket state to valid or invalid according to the probe results;
S3. The SDN controller issues a flow table through OpenFlow that directs traffic whose destination network segment is the storage network segment to the intranet gateway group table;
S4. The controller receives the DHCP message sent by cloud host 1 and no longer carries the storage route in the response;
S5. When cloud host 1 accesses the storage, the default route is matched and an ARP request is sent to obtain the MAC address of the subnet gateway;
S6. The ARP message is sent to the OVS and answered on its behalf (proxy ARP) by an OpenFlow flow entry;
S7. Cloud host 1 learns the MAC address of the subnet gateway, encapsulates it as the destination MAC of the storage access packet, and sends the packet out;
S8. The packet is sent to the OVS, matches the IP forwarding table in OpenFlow, and is directed to the intranet gateway group table; the storage access traffic is shared among the corresponding intranet gateway clusters, and the intranet gateway cluster that receives the packet performs NAT translation on the packet's IP according to the configured NAT rules and then sends it on to the storage device (a configuration sketch of S1 and S3 follows this list).
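As a rough illustration of steps S1 and S3 only, the following sketch builds the corresponding OVS commands in Python. The bridge name, group id, table number, tunnel port numbers and IP addresses are assumed values, the watch_ip bucket field is the extension proposed here rather than a stock Open vSwitch feature, and the exact command syntax may differ from the actual implementation.

```python
# Sketch of the controller-side setup for S1 (select group) and S3 (steering flow).
# Assumed values throughout; watch_ip is the patent's proposed extension and is
# NOT understood by stock ovs-ofctl.

STORAGE_SEGMENT = "100.64.0.0/10"   # assumed storage network segment
# assumed clusters: (cluster IP, OpenFlow port of the VXLAN tunnel toward it)
GATEWAY_CLUSTERS = [("10.0.0.11", 10), ("10.0.0.12", 11)]

def group_cmd(bridge: str = "br-int", group_id: int = 1) -> str:
    """S1: select-type group with one bucket per intranet gateway cluster."""
    buckets = ",".join(
        f"bucket=bucket_id:{i + 1},watch_ip:{ip},output:{port}"  # watch_ip: patent extension
        for i, (ip, port) in enumerate(GATEWAY_CLUSTERS)
    )
    # selection_method=hash asks OVS to hash packet fields (five-tuple style) over the buckets
    return (f"ovs-ofctl -O OpenFlow15 add-group {bridge} "
            f"'group_id={group_id},type=select,selection_method=hash,{buckets}'")

def flow_cmd(bridge: str = "br-int", group_id: int = 1, table: int = 30) -> str:
    """S3: steer traffic whose destination is the storage segment to the group."""
    return (f"ovs-ofctl -O OpenFlow15 add-flow {bridge} "
            f"'table={table},priority=100,ip,nw_dst={STORAGE_SEGMENT},actions=group:{group_id}'")

if __name__ == "__main__":
    print(group_cmd())
    print(flow_cmd())
```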
In a second aspect, the invention provides a cloud scene network storage load balancing access system for implementing the above method, the system comprising an SDN controller, an OVS, a subnet gateway, an intranet gateway cluster, a cloud host and a network storage device;
the SDN controller is used for deploying N intranet gateway clusters and configuring a select-type group table, i.e. the intranet gateway group table, through OpenFlow, wherein the load balancing mode is five-tuple HASH and N buckets are added accordingly;
The OVS is used for probing the N intranet gateway clusters separately and determining the current state of the corresponding bucket;
The cloud host is used for initiating network storage access.
In a third aspect, the present invention provides a computing device comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is used for storing at least one executable program, and the executable program enables the processor to execute the operation corresponding to the cloud scene network storage load balancing access method.
In a fourth aspect, the present invention provides a computer storage medium, where at least one executable program is stored in the storage medium, where the executable program makes a processor execute operations corresponding to the cloud scenario network storage load balancing access method.
In the technical scheme of the invention, the cloud host only needs to follow the default route to the gateway; the traffic is sent to the OVS, and a flow table on the OVS directs the storage access traffic to a select-type group table, realizing flow-based load sharing. Meanwhile, a new watch_ip field is added to the group table bucket, so that the link state between the host and the remote intranet gateway can be perceived and the group table bucket on the host can be switched quickly, sending the traffic to other healthy intranet gateways; this improves the reliability of the system and is transparent to the user. When the intranet gateways scale elastically, the user does not need to change anything and load sharing of the traffic is still guaranteed, which improves the system's ability to cope with burst traffic. In the invention, the cloud host sends storage traffic to the OVS via the default route, so the invention can meet both IPv4 and IPv6 traffic demands and simplifies the system.
The foregoing is only an overview of the technical solution of the present invention; it may be implemented according to the content of the description so that the above and other objects, features and advantages of the present invention can be understood more clearly.
Drawings
In order to illustrate the invention or the technical solutions of the prior art more clearly, the drawings used in the description are briefly introduced below. It is obvious that the drawings described below show some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow schematic of the present invention;
FIG. 2 is a flow chart of the process of adding fields in the group table according to the present invention;
FIG. 3 is a schematic flow diagram of the intranet gateway according to the present invention when it fails;
FIG. 4 is a flow chart of the expansion according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, the "plurality" generally includes at least two.
It should be understood that the term "and/or" as used herein is merely one relationship describing the association of the associated objects, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such product or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a product or system comprising that element.
In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
The invention provides a cloud scene network storage load balancing access method, which comprises the following steps:
1. The SDN controller deploys N intranet gateway clusters and configures a select-type group table, i.e. the intranet gateway group table, through OpenFlow; the load balancing mode is five-tuple HASH, and N buckets are added accordingly.
In the specific implementation, fig. 1 takes N=2 as an example, i.e. intranet gateway cluster 1 and intranet gateway cluster 2 are deployed.
Preferably, the bucket data structure is as follows:
bucket_id: the 32-bit integer id of a bucket. Values greater than 0xffffff00 are reserved. This field was added in Open vSwitch 2.4 to conform to the OpenFlow 1.5 specification; it is not supported when an earlier version of OpenFlow is used. Open vSwitch assigns a bucket_id arbitrarily when one is not specified.
actions: the corresponding actions. Specifying actions= is optional, and any unknown bucket parameter is interpreted as an action.
weight: the weight value; the relative weight of the bucket, as an integer. For a group of type select, the switch may use it when selecting a bucket.
watch_port: a monitored port used to determine the liveness of the bucket. This field or the watch_group field is required for a group of type ff or fast_failover, and either may also be used for a group of type select.
watch_group: the id of a watched group, used to determine the liveness of the bucket. This field or the watch_port field is required for a group of type ff or fast_failover, and either may also be used for a group of type select.
Preferably, the bucket of the open-source OpenFlow group table is extended with a new watch_ip field, and a select-type group table can configure this watch_ip field. The watch_ip field in bucket 1 is the IP of intranet gateway cluster 1, and the bucket's action steers traffic to intranet gateway cluster 1;
similarly, the watch_ip field in bucket N is the IP of intranet gateway cluster N, and its action steers traffic to intranet gateway cluster N. A data structure sketch follows.
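The following is a minimal data structure sketch of the extended bucket, using the field names described above. Types and defaults are assumptions for illustration; watch_ip is the field proposed by this invention and not part of stock OpenFlow/OVS.

```python
# Sketch of the extended group-table bucket described above (assumed types/defaults).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GroupBucket:
    bucket_id: int                                      # 32-bit bucket id
    actions: List[str] = field(default_factory=list)    # e.g. ["output:10"]
    weight: int = 1                                      # relative weight for select groups
    watch_port: Optional[int] = None                     # liveness tied to an OpenFlow port
    watch_group: Optional[int] = None                    # liveness tied to another group
    watch_ip: Optional[str] = None                       # NEW: remote intranet gateway cluster IP
    valid: bool = True                                   # set by the OVS probe result (step 2)

@dataclass
class SelectGroup:
    group_id: int
    buckets: List[GroupBucket] = field(default_factory=list)

    def live_buckets(self) -> List[GroupBucket]:
        """Buckets whose watch_ip probe currently succeeds."""
        return [b for b in self.buckets if b.valid]
```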
2. The OVS probes the N intranet gateway clusters separately and sets the corresponding bucket state to valid or invalid according to the probe result.
In a specific implementation, as shown in fig. 2, the OVS probes each watch_ip periodically. If the probe fails a predetermined number of consecutive times, the OVS notifies the group table processing module of the probe failure; preferably, the predetermined number of times is 3. If the probe succeeds, the OVS notifies the group table processing module of the success. When the group table processing module receives a failure notification and the current bucket state is valid, it sets the bucket state to invalid; when it receives a success notification and the current bucket state is invalid, it sets the bucket state to valid. A sketch of this state machine follows.
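The sketch below illustrates this probe handling as just described, reusing the GroupBucket structure sketched earlier; the threshold of three consecutive failures and the probe callable are assumptions for illustration, not the patent's actual code.

```python
# Sketch of the fig. 2 logic: N consecutive probe failures invalidate a bucket,
# a single success restores it. The probe itself is abstracted as a callable.
from typing import Callable

class BucketLivenessTracker:
    def __init__(self, bucket: "GroupBucket", fail_threshold: int = 3):
        self.bucket = bucket
        self.fail_threshold = fail_threshold   # "preferably 3 times"
        self.consecutive_failures = 0

    def on_probe_result(self, success: bool) -> None:
        if success:
            self.consecutive_failures = 0
            if not self.bucket.valid:          # invalid -> valid on success
                self.bucket.valid = True
        else:
            self.consecutive_failures += 1
            if (self.consecutive_failures >= self.fail_threshold
                    and self.bucket.valid):    # valid -> invalid after N failures
                self.bucket.valid = False

def run_probe_cycle(tracker: BucketLivenessTracker,
                    probe: Callable[[str], bool]) -> None:
    """Probe the bucket's watch_ip once and feed the result to the tracker."""
    tracker.on_probe_result(probe(tracker.bucket.watch_ip))
```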
3. The controller issues a flow table through OpenFlow that directs traffic whose destination network segment is the storage network segment to the intranet gateway group table.
4. The controller receives the DHCP message sent by cloud host 1 and no longer carries the storage route in the response.
5. When cloud host 1 accesses the storage, the default route is matched and an ARP request is sent to obtain the MAC address of the subnet gateway; the subnet gateway is not the intranet gateway.
6. The ARP message is sent to the OVS and answered on its behalf (proxy ARP) by an OpenFlow flow entry;
7. Cloud host 1 learns the MAC address of the subnet gateway, encapsulates it as the destination MAC of the storage access packet, and sends the packet out;
8. The packet is sent to the OVS, matches the IP forwarding table in OpenFlow, and is directed to the intranet gateway group table; the storage access traffic is shared among the corresponding intranet gateway clusters, and the intranet gateway cluster that receives the packet performs NAT translation on the packet's IP according to the configured NAT rules and then sends it out to the storage device.
In a specific implementation, the buckets in the valid state are determined, and the storage access traffic is load-shared among the valid buckets by hashing the packet's five-tuple.
As shown in fig. 1, if both bucket 1 and bucket 2 are in the valid state, the storage access traffic is shared to intranet gateway cluster 1 (GW1) or intranet gateway cluster 2 (GW2) by hashing the packet's five-tuple; a selection sketch follows below.
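A simple illustration of the five-tuple HASH selection over the valid buckets is sketched below; the hash function, tuple layout and example addresses are assumptions, and the OVS datapath uses its own hash in practice.

```python
# Sketch of selecting a valid bucket from a packet's five-tuple (assumed hash).
import zlib
from typing import Optional, Tuple

FiveTuple = Tuple[str, str, int, int, int]   # src_ip, dst_ip, proto, src_port, dst_port

def select_bucket(group: "SelectGroup", pkt: FiveTuple) -> Optional["GroupBucket"]:
    live = group.live_buckets()               # only valid (probed-alive) buckets
    if not live:
        return None                           # no healthy intranet gateway cluster
    key = "|".join(str(f) for f in pkt).encode()
    # The same flow maps to the same bucket while the set of live buckets is unchanged.
    return live[zlib.crc32(key) % len(live)]

# Example (assumed addresses): a TCP flow from cloud host 1 to the storage address
# bucket = select_bucket(gw_group, ("192.168.1.10", "100.64.1.20", 6, 40000, 2049))
```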
For example, as shown in fig. 3, when an intranet gateway cluster fails and only one of bucket 1 and bucket 2 remains in the valid state, the storage access traffic of cloud host 1 reaching the group table is steered to intranet gateway cluster 2 because only bucket 2 is live in the group table. In summary, during the whole link failure no manual intervention is needed and traffic switches automatically to a healthy intranet gateway cluster, achieving high availability of the system.
If the intranet gateway capacity is expanded, the SDN controller adds a bucket N+1 that steers traffic to intranet gateway cluster N+1 in the intranet gateway group table and configures its watch_ip as the IP of intranet gateway cluster N+1; after the OVS successfully probes the IP of intranet gateway cluster N+1, bucket N+1 takes effect.
As shown in fig. 4, when intranet gateway cluster 3 is added, the SDN controller adds a bucket 3 that steers traffic to intranet gateway cluster 3 in the intranet gateway group table and configures its watch_ip as the IP of intranet gateway cluster 3; after the OVS successfully probes the IP of intranet gateway cluster 3, bucket 3 takes effect. Subsequent storage access traffic is hashed on the five-tuple directly in the group table (across intranet gateway clusters 1, 2 and 3); no manual intervention in the network is needed, and load sharing after expansion is achieved automatically, as the sketch below illustrates.
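The following sketch shows the scale-out step, reusing the structures above. The address and port number are assumed, and the commented ovs-ofctl command uses the hypothetical watch_ip syntax of this patent rather than anything stock OVS accepts.

```python
# Sketch of the fig. 4 scale-out: add bucket N+1 for a new intranet gateway cluster.
def expand_gateway(group: "SelectGroup", new_cluster_ip: str, out_port: int) -> "GroupBucket":
    new_bucket = GroupBucket(
        bucket_id=len(group.buckets) + 1,
        actions=[f"output:{out_port}"],   # assumed VXLAN port toward the new cluster
        watch_ip=new_cluster_ip,
        valid=False,                      # takes effect only after the probe succeeds
    )
    group.buckets.append(new_bucket)
    # Conceptually equivalent OVS-side change (hypothetical watch_ip syntax):
    #   ovs-ofctl -O OpenFlow15 insert-buckets br-int \
    #     "group_id=1,command_bucket_id=last,bucket=bucket_id:3,watch_ip:<ip>,output:<port>"
    return new_bucket

# Example: expand with intranet gateway cluster 3 (assumed IP and port)
# b3 = expand_gateway(gw_group, "10.0.0.13", out_port=12)
# ...once the OVS probe of 10.0.0.13 succeeds, the tracker sets b3.valid = True
```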
The invention also provides a cloud scene network storage load balancing access system which comprises an SDN controller, an OVS, a subnet gateway, an intranet gateway cluster, a cloud host and network storage equipment.
It will be appreciated that the system/device/apparatus provided by this embodiment may also be used to implement the steps of the methods provided by other embodiments of the present invention.
The invention also provides computer equipment. The computer device is in the form of a general purpose computing device. Components of a computer device may include, but are not limited to: one or more processors or processing units, system memory, and buses connecting the different system components.
Computer devices typically include a variety of computer system readable media. Such media can be any available media that can be accessed by the computer device and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory may include a computer system readable medium in the form of volatile memory and the memory may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
The processing unit executes various functional applications and data processing by running programs stored in the system memory, such as the methods provided by other embodiments of the present invention.
The present invention also provides a storage medium containing computer-executable instructions, on which a computer program is stored which, when executed by a processor, implements methods provided by other embodiments of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A cloud scenario network storage load balancing access method, characterized by comprising the following steps:
S1. The SDN controller deploys N intranet gateway clusters and configures a select-type group table, i.e. the intranet gateway group table, through OpenFlow; the load balancing mode is five-tuple HASH, and N buckets are added accordingly; when the intranet gateway capacity is expanded, the SDN controller adds a bucket N+1 that steers traffic to intranet gateway cluster N+1 in the intranet gateway group table and configures its watch_ip as the IP of intranet gateway cluster N+1, and bucket N+1 takes effect after the OVS successfully probes the IP of intranet gateway cluster N+1; the bucket configuration parameters include: bucket_id, actions, weight, watch_port, watch_group, watch_ip; wherein bucket_id is the 32-bit integer id of the bucket; actions is the corresponding operation; weight is the weight value; watch_port is the monitored port; watch_group is the watched group id; and the watch_ip field is the intranet gateway cluster IP corresponding to the bucket;
S2. The OVS probes the N intranet gateway clusters separately and sets the corresponding bucket state to valid or invalid according to the probe results;
S3. The SDN controller issues a flow table through OpenFlow that directs traffic whose destination network segment is the storage network segment to the intranet gateway group table;
S4. The controller receives the DHCP message sent by cloud host 1 and no longer carries the storage route in the response;
S5. When cloud host 1 accesses the storage, the default route is matched and an ARP request is sent to obtain the MAC address of the subnet gateway;
S6. The ARP message is sent to the OVS and answered on its behalf (proxy ARP) by an OpenFlow flow entry;
S7. Cloud host 1 learns the MAC address of the subnet gateway, encapsulates it as the destination MAC of the storage access packet, and sends the packet out;
S8. The packet is sent to the OVS, matches the IP forwarding table in OpenFlow, and is directed to the intranet gateway group table; the storage access traffic is shared among the corresponding intranet gateway clusters, and the intranet gateway cluster that receives the packet performs NAT translation on the packet's IP according to the configured NAT rules and then sends it on to the storage device.
2. The method of claim 1, wherein the OVS probing the N intranet gateway clusters separately comprises: the OVS probes according to the watch_ip periodically, and if the probe fails several consecutive times, it notifies the group table processing module of the probe failure; if the probe succeeds, it notifies the group table processing module of the success; when the group table processing module receives a failure notification and the current bucket state is valid, it sets the bucket state to invalid; when it receives a success notification and the current bucket state is invalid, it sets the bucket state to valid.
3. The method of claim 1, wherein sharing the storage access traffic to the corresponding intranet gateway cluster comprises determining the buckets in the valid state and load-sharing the storage access traffic among the valid buckets by hashing the packet's five-tuple.
4. A cloud scenario network storage load balancing access system for implementing the method of claim 1, the system comprising an SDN controller, an OVS, a subnet gateway, an intranet gateway cluster, a cloud host and a network storage device;
The SDN controller is used for deploying N intranet gateway clusters and configuring a select-type group table, i.e. the intranet gateway group table, through OpenFlow, wherein the load balancing mode is five-tuple HASH and N buckets are added accordingly; when the intranet gateway capacity is expanded, the SDN controller adds a bucket N+1 that steers traffic to intranet gateway cluster N+1 in the intranet gateway group table and configures its watch_ip as the IP of intranet gateway cluster N+1, and bucket N+1 takes effect after the OVS successfully probes the IP of intranet gateway cluster N+1; the bucket configuration parameters include: bucket_id, actions, weight, watch_port, watch_group, watch_ip; wherein bucket_id is the 32-bit integer id of the bucket; actions is the corresponding operation; weight is the weight value; watch_port is the monitored port; watch_group is the watched group id; and the watch_ip field is the intranet gateway cluster IP corresponding to the bucket;
The OVS is used for probing the N intranet gateway clusters separately and determining the current state of the corresponding bucket;
The cloud host is used for initiating network storage access.
5. The system of claim 4, wherein the bucket configuration parameters comprise: bucket_id, actions, weight, watch_port, watch_group, watch_ip;
wherein bucket_id is the 32-bit integer id of the bucket; actions is the corresponding operation; weight is the weight value; watch_port is the monitored port; watch_group is the watched group id; and the watch_ip field is the intranet gateway cluster IP corresponding to the bucket.
6. The system of claim 4, wherein the OVS probing the N intranet gateway clusters separately comprises: the OVS probes according to the watch_ip periodically, and if the probe fails several consecutive times, it notifies the group table processing module of the probe failure; if the probe succeeds, it notifies the group table processing module of the success; when the group table processing module receives a failure notification and the current bucket state is valid, it sets the bucket state to invalid; when it receives a success notification and the current bucket state is invalid, it sets the bucket state to valid.
7. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
The memory is configured to store at least one executable program, where the executable program causes the processor to execute operations corresponding to the cloud scenario network storage load balancing access method according to any one of claims 1 to 3.
8. A computer storage medium having at least one executable program stored therein, the executable program causing a processor to perform operations corresponding to the cloud scenario network storage load balancing access method of any one of claims 1-3.
CN202310258352.6A 2023-03-12 2023-03-12 Cloud scene network storage load balancing access method and system Active CN116366541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310258352.6A CN116366541B (en) 2023-03-12 2023-03-12 Cloud scene network storage load balancing access method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310258352.6A CN116366541B (en) 2023-03-12 2023-03-12 Cloud scene network storage load balancing access method and system

Publications (2)

Publication Number Publication Date
CN116366541A CN116366541A (en) 2023-06-30
CN116366541B (en) 2024-08-02

Family

ID=86927613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310258352.6A Active CN116366541B (en) 2023-03-12 2023-03-12 Cloud scene network storage load balancing access method and system

Country Status (1)

Country Link
CN (1) CN116366541B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466016A (en) * 2022-03-04 2022-05-10 烽火通信科技股份有限公司 Method and system for realizing dynamic load balance of data center gateway
CN115037602A (en) * 2022-04-12 2022-09-09 新华三技术有限公司 Fault processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3895031A4 (en) * 2018-12-15 2022-07-20 Telefonaktiebolaget LM Ericsson (publ) Efficient network address translation (nat) in cloud networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466016A (en) * 2022-03-04 2022-05-10 烽火通信科技股份有限公司 Method and system for realizing dynamic load balance of data center gateway
CN115037602A (en) * 2022-04-12 2022-09-09 新华三技术有限公司 Fault processing method and device

Also Published As

Publication number Publication date
CN116366541A (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US10298449B2 (en) Automatically generated virtual network elements for virtualized packet networks
US11349687B2 (en) Packet processing method, device, and system
CN105610632B (en) Virtual network equipment and related method
US11102164B1 (en) Software defined networking operations for programmable connected devices
US11258729B2 (en) Deploying a software defined networking (SDN) solution on a host using a single active uplink
US20220086025A1 (en) Flexible network interfaces as a framework for a network appliance
US20210320865A1 (en) Flow-based local egress in a multisite datacenter
US11190406B1 (en) Injecting network endpoints into a SDN
CN114070723A (en) Virtual network configuration method and system of bare metal server and intelligent network card
US11477270B1 (en) Seamless hand-off of data traffic in public cloud environments
CN106254095B (en) The backup processing method and equipment of tunnel traffic
US11558220B2 (en) Uplink-aware monitoring of logical overlay tunnels
JP2017038218A (en) Communication system and setting method
CN116366541B (en) Cloud scene network storage load balancing access method and system
US9876689B1 (en) Automatically generated virtual network elements for virtualized local area networks
CN106209634B (en) Learning method and device of address mapping relation
US20200274799A1 (en) Multi-vrf and multi-service insertion on edge gateway virtual machines
EP4211886B1 (en) Fault tolerance for sdn gateways using network switches
US12088493B2 (en) Multi-VRF and multi-service insertion on edge gateway virtual machines
US10944585B1 (en) Migration for network appliances
KR20240007787A (en) Edge platform management device for supporting management of layer 2 of edge platforms and operation method thereof
CN117938800A (en) Method, device and computer program product for rapidly switching IP addresses
CN113132221A (en) Method and device for processing routing information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant