CN115865932A - Traffic scheduling method and device, electronic equipment and storage medium - Google Patents


Publication number
CN115865932A
CN115865932A
Authority
CN
China
Prior art keywords
data center
link
access request
service access
load balancing
Prior art date
Legal status
Granted
Application number
CN202310170635.5A
Other languages
Chinese (zh)
Other versions
CN115865932B (en)
Inventor
周阳
吕玉超
石凤
张义飞
李政
王建超
Current Assignee
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority claimed from CN202310170635.5A
Publication of CN115865932A
Application granted
Publication of CN115865932B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention provides a traffic scheduling method, a traffic scheduling apparatus, an electronic device, and a storage medium, relating to the technical field of traffic scheduling. In an embodiment of the invention, an optimal egress link of a first data center may be determined according to a pre-trained routing model, so that the response traffic of a first application server to a service access request is returned to the user over that link. This improves the overall processing efficiency of the system and shortens the user's waiting time.

Description

Traffic scheduling method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of traffic scheduling, in particular to a traffic scheduling method, a traffic scheduling device, electronic equipment and a storage medium.
Background
With the development of information technology, data centers have become the hubs where governments, enterprises, financial institutions, and the like process data and provide services. The integrity of data and the continuity and reliability of services have become important factors in economic efficiency and social well-being. A single data center carries the operational risk of a single point of failure: if one machine room is hit by adverse events involving infrastructure, electric power, natural disasters, and the like, services are inevitably affected and are difficult to restore in the short term. Hence the requirements for building dual data centers, "two sites, three centers" architectures, and the like. Ensuring that the services of a data center continue to be provided even under a serious disaster is a technical problem that must be considered in data center construction, and all technical choices and architectural designs need to revolve around reliability and continuity.
In the related art, two or more data centers are generally built for disaster tolerance, with a main data center and a standby data center designed and built as mirror images. The main data center provides user services, while the standby data center backs up the main center's services, configuration, data, and so on. When the main data center fails, the standby data center takes over and continues to provide services externally. The standby data center thus carries no services in normal operation, which wastes resources and increases operation and maintenance costs. A dual-active data center solves this problem: it refers to two data centers that provide services simultaneously and back each other up. Even in the absence of faults, services can be distributed across the two data centers, avoiding the waste of an idle center, expanding service capacity, and generating greater economic benefit.
However, a dual-active data center leads to more complex network architecture and network policy design. Traffic can be loaded onto two data centers, which complicates the traffic model; placing different services in different data centers also diversifies traffic scheduling; a fixed, single data center egress strategy degrades user experience when network quality fluctuates; and accurately locating network faults, switching quickly between the main and standby centers, and minimizing service interruption time are further challenges for a dual-active data center.
Therefore, a traffic scheduling method is needed to solve these problems.
Disclosure of Invention
Embodiments of the present invention provide a traffic scheduling method and apparatus, an electronic device, and a storage medium, so as to at least partially solve the problems in the related art.
A first aspect of the embodiments of the present invention provides a traffic scheduling method, applied to a global scheduling server in a dual-active data center traffic scheduling system, where the system includes a global scheduling server, a first data center, and a second data center, each data center including at least one application server. The method includes:
receiving a service access request sent by a user;
forwarding the service access request to a first application server of a first data center to process the service access request, wherein the first application server of the first data center is determined according to the service access request and a load balancing strategy;
determining an optimal exit link of the first data center according to a pre-trained routing model, wherein the optimal exit link is used for returning response flow of the first application server to the service access request to a user;
the input of the routing model is the current link configuration data and the current link quality data, and the output is the optimal exit link.
Optionally, the method further comprises:
collecting link quality configuration data of each exit link of the first data center;
determining the link price value of each exit link according to the position of the current user;
determining a user weighted value of each exit link according to a current user;
determining current link configuration data according to the link quality configuration data, the link price value and the user weighted value;
and acquiring the current packet loss rate and/or delay rate of each exit link of the first data center, and determining the current link quality data.
Optionally, each data center further comprises: at least one application load balancing server; forwarding the service access request to a first application server of a first data center, comprising:
determining a corresponding first data center according to a domain name carried by the service access request, and forwarding the service access request to the first data center, wherein the first data center forwards the service access request to an application load balancing server in the first data center, and the application load balancing server is used for determining a first application server according to the configuration and load balancing strategy of each application server in the first data center;
forwarding the service access request to the first application server.
Optionally, each data center further comprises: the system comprises at least one application load balancing server, wherein the application load balancing servers arranged in different data centers form a load balancing cluster; forwarding the service access request to a first application server of a first data center, comprising:
determining a corresponding first data center according to a domain name carried by the service access request, wherein the first data center forwards the service access request to a load balancing cluster, and the load balancing cluster is used for determining a first application server according to the configuration and a load balancing strategy of each application server of the first data center and a second data center;
and forwarding the service access request to an application server.
Optionally, the method further comprises:
periodically sending a detection message and detecting the reachability of target objects, wherein the target objects include: data center exit links, application load balancing servers, data center interconnection links, and data center intranet links;
in the case where any target object is unreachable, alarm information is generated.
Optionally, the method further comprises:
and under the condition that the application load balancing server of the first data center fails, forwarding the service access request accessing the first data center to the application load balancing server of a second data center, where the application load balancing server of the second data center forwards the service access request, according to the configuration and load balancing strategy of each application server of the first data center, to an application server of the first data center that provides the service.
Optionally, the method further comprises:
and under the condition that the exit link of the first data center has a fault, resolving the access domain name carried in the service access request into an address of a second data center, and processing the service access request by an application server arranged in the second data center.
Optionally, the method further comprises:
and under the condition of an interconnection link failure between the data centers, resolving all domain names carried in service access requests to the address of a first data center, wherein the first data center is the main data center.
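The three fault handling strategies above (load balancer failure, egress link failure, interconnection link failure) can be sketched as a single dispatch. The failure labels and data center addresses below are hypothetical placeholders, chosen only to illustrate the resolution behavior described in the claims.

```python
def resolve_on_failure(failure: str) -> str:
    """Return the data center address a service domain should resolve to,
    given the detected failure. Addresses are illustrative placeholders."""
    dc1_addr = "203.0.113.1"   # hypothetical first (primary) data center
    dc2_addr = "198.51.100.1"  # hypothetical second data center
    if failure == "lb_failure":
        # Second center's load balancer takes the request, but still
        # forwards it to the first center's application servers.
        return dc2_addr
    if failure == "egress_failure":
        # Domain resolves to the second center; its own app servers serve.
        return dc2_addr
    if failure == "interconnect_failure":
        # All domains pin to the primary (first) data center.
        return dc1_addr
    return dc1_addr  # no failure: normal resolution (kept simple here)
```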
A second aspect of the embodiments of the present invention provides a traffic scheduling apparatus, which is applied to a global scheduling server in a dual-active data center traffic scheduling system, where the dual-active data center traffic scheduling system includes: global scheduling server, first data center, second data center, every data center includes: at least one application server, the apparatus comprising:
the receiving module is used for receiving a service access request sent by a user;
the forwarding module is used for forwarding the service access request to a first application server of a first data center so as to process the service access request, wherein the first application server of the first data center is determined according to the service access request and a load balancing strategy;
a determining module, configured to determine an optimal egress link of the first data center according to a pre-trained routing model, where the optimal egress link is used to return a response traffic of the first application server to the service access request to a user;
the input of the routing model is the current link configuration data and the current link quality data, and the output is the optimal exit link.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring link quality configuration data of each exit link of the first data center;
the link price value determining module is used for determining the link price value of each exit link according to the position of the current user;
the user weighted value determining module is used for determining the user weighted value of each exit link according to the current user;
the link configuration data determining module is used for determining the current link configuration data according to the link quality configuration data, the link price value and the user weighted value;
and the link quality data determining module is used for acquiring the current packet loss rate and/or delay rate of each exit link of the first data center and determining the current link quality data.
Optionally, each data center further comprises: at least one application load balancing server; the forwarding module is specifically configured to:
determining a corresponding first data center according to a domain name carried by the service access request, and forwarding the service access request to the first data center, wherein the first data center forwards the service access request to an application load balancing server in the first data center, and the application load balancing server is used for determining a first application server according to the configuration and load balancing strategy of each application server in the first data center;
forwarding the service access request to the first application server.
Optionally, each data center further comprises: the system comprises at least one application load balancing server, wherein the application load balancing servers arranged in different data centers form a load balancing cluster; the forwarding module is specifically configured to:
determining a corresponding first data center according to a domain name carried by the service access request, wherein the first data center forwards the service access request to a load balancing cluster, and the load balancing cluster is used for determining a first application server according to the configuration and a load balancing strategy of each application server of the first data center and a second data center;
and forwarding the service access request to an application server.
Optionally, the apparatus further comprises:
a sending module, configured to periodically send a probe packet and detect the reachability of target objects, where the target objects include: data center exit links, application load balancing servers, data center interconnection links, and data center intranet links;
and the generating module is used for generating alarm information under the condition that any target object is unreachable.
Optionally, the apparatus further comprises:
the first fault processing module is used for forwarding a service access request accessing the first data center to the application load balancing server of a second data center when the application load balancing server of the first data center fails, where the application load balancing server of the second data center forwards the service access request, according to the configuration and load balancing strategy of each application server of the first data center, to an application server of the first data center that provides the service.
Optionally, the apparatus further comprises:
and the second fault processing module is used for resolving the access domain name carried in the service access request into an address of a second data center under the condition that the first data center outlet link has a fault, and processing the service access request by an application server arranged in the second data center.
Optionally, the apparatus further comprises:
and the third fault processing module is used for resolving all domain names carried in the service access request into an address of a first data center under the condition that an interconnection link between the data centers is in fault, wherein the first data center is a main data center.
A third aspect of embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method according to the first aspect of the present invention.
A fourth aspect of the embodiments of the present invention provides an electronic device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps in the method according to the first aspect of the present invention are implemented.
In the embodiment of the present invention, an optimal egress link of the first data center may be determined according to a pre-trained routing model, so that the response traffic of the first application server to the service access request is returned to the user over that link. This improves the overall processing efficiency of the system and shortens the user's waiting time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a traffic scheduling method according to an embodiment of the present invention;
fig. 2 is a flow chart of another traffic scheduling method according to an embodiment of the present invention;
fig. 3 is a flow chart of another traffic scheduling method according to an embodiment of the present invention;
fig. 4 is a block diagram of a traffic scheduling apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart of a traffic scheduling method according to an embodiment of the present invention is shown. The method is applied to a global scheduling server in a dual-active data center traffic scheduling system, where the system includes a global scheduling server, a first data center, and a second data center, each data center including at least one application server. The traffic scheduling method provided by the embodiment of the invention includes the following steps:
s101, receiving a service access request sent by a user.
In the embodiment of the invention, the global scheduling server may be connected to all data centers in the dual-active data center traffic scheduling system and dispatches service access requests from users to the corresponding first data center.
In the embodiment of the present invention, a service access request sent by a user may carry user information, where the user information at least includes: user identification and user location.
S102, the service access request is forwarded to a first application server of a first data center to process the service access request, and the first application server of the first data center is determined according to the service access request and a load balancing strategy.
In this embodiment of the present invention, the global scheduling server may forward, according to the service access request and the load balancing policy, the service access request to a first application server of the first data center that corresponds to the service access request and conforms to the load balancing policy, so that the first application server processes the service access request.
In the embodiment of the present invention, the application servers arranged in the first data center and the second data center may be configured to process different services, or multiple application servers may be configured to process the same service; in the latter case, each application server may be configured with its own priority.
In the embodiment of the present invention, the load balancing strategy refers to evenly distributing the traffic load across the multiple application servers in a data center that process the same service.
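As a minimal sketch of such a strategy, assuming weighted round-robin with hypothetical server names and weights (the patent does not fix a particular algorithm):

```python
import itertools

def weighted_round_robin(servers: dict):
    """Yield server names endlessly, each in proportion to its weight."""
    expanded = [name for name, weight in servers.items() for _ in range(weight)]
    return itertools.cycle(expanded)

# Hypothetical pool: two application servers in one data center that
# process the same service, app-1 taking 3x the traffic of app-2.
pool = {"app-1": 3, "app-2": 1}
scheduler = weighted_round_robin(pool)
first_cycle = [next(scheduler) for _ in range(4)]
```

Real load balancers would also weigh live connection counts and health checks; this shows only the static weight distribution.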
S103, determining an optimal exit link of the first data center according to a pre-trained routing model, wherein the optimal exit link is used for returning the response flow of the first application server to the service access request to a user.
In the embodiment of the invention, the input of the routing model is the current link configuration data and the current link quality data, and the output is the optimal exit link.
In this embodiment of the present invention, the current link configuration data refers to configuration data for each link, determined by the user or the system administrator from the fixed link-configuration values and the current user information. The current link quality data refers to the current real-time network quality status of each link.
In the embodiment of the present invention, the preset training model is a Support Vector Machine (SVM) model. Link quality data sample values and link configuration data sample values are used as inputs, a sample routing result (i.e., an optimal exit link) is used as the output, and these input-output pairs are fed to the SVM model as training vectors, yielding the trained model, i.e., the routing model.
In the embodiment of the invention, the sample routing result is an optimal exit link manually established according to the link quality data sample value and the link configuration data sample value.
In practical application, the current link configuration data and the current link quality data are input into the trained routing model to obtain a routing result, i.e., the optimal exit link for the user. Dynamic link adjustment is then performed based on this link, and the response traffic of the first application server to the service access request is returned to the user through it. During operation, real-time link configuration data and link quality data can be continuously used as inputs, with the optimal exit link as the output, to iteratively train the model, so that the most reliable exit link is determined adaptively according to network quality conditions.
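The inference step can be sketched as follows. The patent trains an SVM; the sketch below substitutes a simple linear scoring model with hypothetical hand-set weights (standing in for learned parameters), purely to show the input/output shape: per-link configuration and quality features in, one selected exit link out.

```python
# Hypothetical weights standing in for what a trained model would learn.
WEIGHTS = {
    "quality_config": 1.0,   # configured quality level (higher is better)
    "price_value": -0.5,     # link price value (distance from the user)
    "user_weight": 0.8,      # user's weighting for this link
    "packet_loss": -2.0,     # measured packet loss rate
    "delay": -1.0,           # measured delay
}

def score(features: dict) -> float:
    """Score one exit link from its current configuration and quality data."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def optimal_exit_link(links: dict) -> str:
    """Return the name of the best-scoring exit link."""
    return max(links, key=lambda name: score(links[name]))

# Hypothetical candidate exit links with their current feature values.
links = {
    "link_A": {"quality_config": 80, "price_value": 10, "user_weight": 5,
               "packet_loss": 0.1, "delay": 20},
    "link_B": {"quality_config": 90, "price_value": 30, "user_weight": 5,
               "packet_loss": 2.0, "delay": 35},
}
best = optimal_exit_link(links)
```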
In the embodiment of the invention, the exit link into the data center can first be selected according to the operator of the user IP carried in the service access request (for example, exit link A for a China Unicom user and exit link B for a China Telecom user), and the optimal exit link can then be determined from the multiple candidate exit links according to the current link configuration data and current link quality data, so as to dynamically adjust the exit link and provide the user with the optimal access route.
Referring to fig. 2, a flowchart of a traffic scheduling method according to an embodiment of the present invention is shown, where the traffic scheduling method according to the embodiment of the present invention is applied to a global scheduling server in a dual-active data center traffic scheduling system, where the dual-active data center traffic scheduling system includes: global scheduling server, first data center, second data center, every data center includes: at least one application server, at least one application load balancing server. Specifically, the traffic scheduling method may include the following steps:
s201, receiving a service access request sent by a user.
Step S201 is similar to step S101 and is not repeated here.
S202, the service access request is forwarded to a first application server of a first data center to process the service access request, and the first application server of the first data center is determined according to the service access request and a load balancing strategy.
In this embodiment of the present invention, at least one application load balancing server is disposed in each data center, and specifically, the step S202 includes the following sub-steps:
S2021A, determining a corresponding first data center according to the domain name carried in the service access request, and forwarding the service access request to the first data center, where the first data center forwards the service access request to an application load balancing server in the first data center, and the application load balancing server is configured to determine a first application server according to a configuration and a load balancing policy of each application server in the first data center.
S2022A, forwards the service access request to the first application server.
In the embodiment of the invention, when a user accesses a service of a data center, the global scheduling server resolves the domain name carried in the service access request to the egress service IP address of the data center closest to the user, maps the public-network service IP to the private-network IP address of an application load balancing server, associates the application load balancing server with the application servers, and schedules the traffic to the corresponding application server through load balancing to provide the service.
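A minimal sketch of that two-step mapping, with hypothetical domains and RFC 5737 documentation addresses:

```python
# Step 1: domain + user region -> public service IP of the nearest
# data-center egress (what the global scheduling server's DNS returns).
DNS_TABLE = {
    ("app.example.com", "north"): "203.0.113.10",
    ("app.example.com", "south"): "198.51.100.10",
}
# Step 2: public service IP -> private-network IP of the application
# load balancing server behind it.
NAT_TABLE = {
    "203.0.113.10": "10.0.1.100",
    "198.51.100.10": "10.0.2.100",
}

def resolve_to_balancer(domain: str, user_region: str) -> str:
    """Return the private load-balancer address that will serve this user."""
    public_ip = DNS_TABLE[(domain, user_region)]
    return NAT_TABLE[public_ip]
```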
In the embodiment of the invention, the application load balancing servers arranged in different data centers can form a load balancing cluster. Specifically, the application load balancing servers of the data center are all deployed and hung on the aggregation switch by using the cluster.
In the embodiment of the invention, the load balancing cluster uses a virtual service IP to proxy the actual IPs of the application systems. The virtual IP of the first data center's application load balancing server and the virtual IP of the second data center's application load balancing server share the same node pool of application-system or database servers, and by setting priorities for the application-system or database nodes, identical scale-out deployment and disaster-recovery backup of an application across the two data centers are achieved.
In the embodiment of the invention, the dual-active scheme adopted by the traffic scheduling system at the service layer is as follows: the user access address is globally scheduled by the global scheduling server and recursively resolved via DNS back to an application system server of the first or second data center, which then provides the service.
The dual-active scheme at the application layer is as follows: the application load balancing servers of the first and second data centers jointly provide application-layer proxy services externally. For each application group, the load balancing cluster exposes different proxy IPs to Internet users, and global load balancing points the same domain name at the two IPs with different weight priorities, achieving load balancing of user service traffic and backup of the application systems.
The dual-active scheme at the database layer is as follows: databases are deployed across the first and second data centers in a master-slave layout. Some application systems use the first data center as the master database and the second as the slave; others use the second as the master and the first as the slave. After the master database of an application system fails, the slave database of the same application system in the other data center becomes the master and carries the service.
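The database-layer scheme can be sketched as follows, with hypothetical application and data center names:

```python
# Per-application master/slave layout across the two data centers, as in
# the database-layer dual-active scheme: masters are split between centers.
db_layout = {
    "app_x": {"master": "dc1", "slave": "dc2"},
    "app_y": {"master": "dc2", "slave": "dc1"},
}

def promote_on_failure(layout: dict, app: str, failed_dc: str) -> str:
    """If `app`'s master ran in the failed center, promote its slave.
    Returns the data center now acting as master for `app`."""
    roles = layout[app]
    if roles["master"] == failed_dc:
        roles["master"], roles["slave"] = roles["slave"], roles["master"]
    return roles["master"]
```

After dc1 fails, app_x's master moves to dc2, while app_y (whose master was already dc2) is untouched.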
In this case, the step S202 includes the following sub-steps:
and S2021B, determining a corresponding first data center according to the domain name carried by the service access request, forwarding the service access request to a load balancing cluster by the first data center, wherein the load balancing cluster is used for determining a first application server according to the configuration and load balancing strategy of each application server of the first data center and the second data center.
S2022B, forwards the service access request to an application server.
In the embodiment of the invention, the load balancing cluster can perform unified traffic scheduling across the application servers of both data centers based on the configuration of the application servers and the load balancing strategy.
S203, collecting link quality configuration data of each exit link of the first data center; determining the link price value of each exit link according to the position of the current user; and determining the user weighted value of each exit link according to the current user.
In the embodiment of the present invention, the link quality configuration data of each exit link is a pre-configured fixed value, for example a value from 1 to 100 representing the link quality level. The link price value of each exit link refers to the distance of the link from the user, and the user weighted value of each exit link refers to the user's requirement for that exit link.
And S204, determining the current link configuration data according to the link quality configuration data, the link price value and the user weighted value.
S205, collecting the current packet loss rate and/or delay rate of each exit link of the first data center, and determining the current link quality data.
In the embodiment of the present invention, the link quality data is probe traffic information for each exit link, including the delay rate, packet loss rate, and so on.
S206, determining an optimal exit link of the first data center according to a pre-trained routing model, wherein the optimal exit link is used for returning the response flow of the first application server to the service access request to the user.
In the embodiment of the invention, the optimal exit link can be determined from the plurality of exit links so as to dynamically adjust the exit link and provide the optimal access route for the user.
Referring to fig. 3, a flowchart of a traffic scheduling method according to an embodiment of the present invention is shown, where the traffic scheduling method according to the embodiment of the present invention may specifically include the following steps:
s301, receiving a service access request sent by a user.
S302, forward the service access request to a first application server of a first data center to process the service access request, where the first application server of the first data center is determined according to the service access request and a load balancing policy.
S303, determining an optimal egress link of the first data center according to a pre-trained routing model, wherein the optimal egress link is used for returning the response traffic of the first application server to the service access request to the user.
The steps S301 to S303 are similar to the steps S101 to S103, and the embodiment of the present invention is not described herein again.
S304, periodically sending a probe message and detecting the reachability of target objects, wherein the target objects comprise: the data center egress links, the application load balancing servers, the data center interconnection links, and the data center intranet links.
S305, generating alarm information when any target object is unreachable.
In the embodiment of the invention, the global scheduling server can periodically send probe messages to detect the reachability of the target objects, specifically probing the network connectivity and network quality at specific positions, and generate interruption information as feedback when the network fails; traffic is then rescheduled in time according to a preset fault handling policy based on the network interruption.
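The probe-and-alarm loop of steps S304/S305 can be sketched as follows. The target names and the reachability check are illustrative assumptions; a real deployment would send ICMP/TCP probes to the actual egress links, load balancers, interconnection links and intranet links:

```python
# Minimal sketch of the periodic reachability probe (S304/S305).
# Target names below are illustrative placeholders.
TARGETS = [
    "dc-egress-link",
    "application-load-balancing-server",
    "dc-interconnect-link",
    "dc-intranet-link",
]

def probe_all(targets, is_reachable):
    # Return one alarm message for every unreachable target.
    return [f"ALARM: {t} unreachable" for t in targets if not is_reachable(t)]

# Simulated probe result: only the interconnect link is down.
alarms = probe_all(TARGETS, lambda t: t != "dc-interconnect-link")
```

Each generated alarm would then select the matching fault-handling strategy described below.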
In the embodiment of the present invention, for the case that the target object is unreachable, the following fault handling strategies are also proposed, which specifically include:
1. Application load balancing server failure: when the application load balancing server of the first data center fails, the service access request for accessing the first data center is forwarded to an application load balancing server of the second data center, and the application load balancing server of the second data center forwards the service access request to a service-providing application server of the first data center according to the configuration of each application server of the first data center and the load balancing policy.
In the embodiment of the invention, each data center is provided with at least one application load balancing server, forming a cluster. When one application load balancing server fails, traffic is preferentially switched to another application load balancing server in the same data center, which takes over all services; the traffic scheduling does not change in this case.
When all the application load balancing servers of the first data center fail, traffic is switched to an application load balancing server of the second data center, which takes over all services.
In the embodiment of the invention, when all the application load balancing servers of the first data center fail, the user's access traffic can be scheduled to an application load balancing server of the second data center. The virtual service of the second data center can still schedule the traffic to the application servers of the first data center through the data center interconnection link, and schedules to the application servers of the second data center only when the routes to the first data center's application servers are unreachable.
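Fault-handling strategy 1 can be sketched as two decisions: which balancer receives the request, and which data center's application servers it schedules to. The function and resource names below are illustrative assumptions:

```python
# Hedged sketch of strategy 1: prefer a same-DC balancer, then a DC2
# balancer; the chosen balancer still prefers DC1's application servers
# over the interconnect link while their routes are reachable.
def pick_balancer(dc1_balancers_up, dc2_balancers_up):
    if any(dc1_balancers_up):
        return "dc1-balancer"
    if any(dc2_balancers_up):
        return "dc2-balancer"
    raise RuntimeError("no application load balancing server reachable")

def pick_app_server(dc1_route_reachable):
    # DC2's virtual service schedules to DC1 servers over the interconnect,
    # falling back to DC2 servers only when the DC1 routes are unreachable.
    return "dc1-app-server" if dc1_route_reachable else "dc2-app-server"
```

This reflects the behaviour described above: the balancer changes on failure, but the service-providing application server stays in the first data center whenever its routes remain reachable.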
2. First data center egress link failure: when the egress link of the first data center fails, the access domain name carried in the service access request is resolved to an address of the second data center, and the service access request is processed by an application server deployed in the second data center.
In the embodiment of the invention, although the application servers and database of the first data center are not faulty, the default route of the first data center points to the data center egress; a failure of the egress link therefore only requires the second data center to host the service.
Taking the first data center as the primary center of the application system and the second data center as the standby center as an example: when an upstream switch or the egress firewall of the primary center fails, the global load server resolves the user's service access request to the IP address of the standby center through its link health check and scheduling mechanisms, and can demote the priority of the server node pool in the primary data center so that the standby center carries the service.
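Fault-handling strategy 2 reduces to a DNS-style resolution decision in the global load server. The domain, record names, and addresses below are assumptions for illustration:

```python
# Minimal sketch of strategy 2: when the primary center's egress link
# fails the health check, the global load server resolves the access
# domain to the standby center's address. Domain and IPs are made up.
RECORDS = {
    "app.example.com": {"primary": "203.0.113.10", "standby": "198.51.100.10"},
}

def resolve(domain, primary_egress_healthy, records=RECORDS):
    # Link health check result selects which node pool answers.
    pool = "primary" if primary_egress_healthy else "standby"
    return records[domain][pool]

answer = resolve("app.example.com", primary_egress_healthy=False)
```

Because the failover happens at resolution time, the switch is imperceptible to the user's service, as stated above.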
3. Interconnection link failure between data centers: when the interconnection link between the data centers fails, all domain names carried in the service access request are resolved to the address of the first data center, wherein the first data center is the primary data center.
Specifically, taking the first data center as the primary center of the application system and the second data center as the standby center as an example: under normal conditions a northern user is dispatched to the first data center to access the master database, and a southern user is dispatched to the second data center, which accesses the master database through a private line. After the data center interconnection link fails, the global load resolves the domain names accessed by users to the first data center, which accesses the master database directly.
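Strategy 3 can be sketched in a few lines; the domain names and address are illustrative placeholders:

```python
# Sketch of strategy 3: while the inter-data-center link is down, every
# accessed domain name resolves to the primary (first) data center,
# which holds the master database.
def resolve_all_to_primary(domains, primary_ip):
    return {domain: primary_ip for domain in domains}

mapping = resolve_all_to_primary(
    ["app.example.com", "api.example.com"], "203.0.113.10")
```

Unlike strategy 2, no per-domain decision is needed here: the resolution is unconditional for all domains until the interconnection link recovers.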
4. Data center intranet link failure: when the intranet link of the first data center fails, the service access request for accessing the first data center is forwarded to the application load balancing server of the first data center, which lowers the priority configuration of each application server of the first data center and forwards the service access request to a service-providing application server of the second data center according to the load balancing policy.
In the embodiment of the present invention, a data center intranet link failure specifically means that the network port from the application load balancing server to the application server is unreachable. At this point, a user request can reach the virtual IP of the first data center's application load balancing server but not the nodes that actually provide the service, so the service access request is forwarded to a service-providing application server of the second data center. The intranet link failure of the first data center therefore does not affect normal service access.
Taking the first data center as the primary center of the application system and the second data center as the standby center as an example: when the intranet link of the first data center fails, the service is interrupted. The application load balancing server of the first data center lowers the priority configuration of its local application servers, so that traffic is scheduled to the application servers of the second data center through the data center interconnection link. Meanwhile, the application load balancing server of the second data center raises the priority configuration of its local application servers and schedules traffic to them, ensuring that all users can access the service normally.
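The coordinated priority adjustment of strategy 4 can be sketched as follows; the node names, priority values, and step size are assumptions:

```python
# Sketch of strategy 4: on a DC1 intranet link failure, DC1's balancer
# lowers the priority of its local application servers while DC2 raises
# its own, so traffic is scheduled to DC2 servers over the
# interconnection link. Data layout is illustrative.
def reprioritise(nodes, failed_dc, step=10):
    # nodes maps node name -> (data center, priority); higher wins.
    return {
        name: (dc, prio - step if dc == failed_dc else prio + step)
        for name, (dc, prio) in nodes.items()
    }

nodes = {"app1": ("dc1", 100), "app2": ("dc2", 90)}
updated = reprioritise(nodes, "dc1")
serving = max(updated, key=lambda n: updated[n][1])
```

After the adjustment the highest-priority node sits in the second data center, matching the takeover behaviour described above.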
In the embodiment of the invention, the specific path information of a user's access (for example, the physical locations, IP addresses, and other information of important nodes in the path) can be collected and displayed on the front-end interface of the dual-active data center traffic scheduling system. The configuration and description information of all application servers may also be stored, including the service name, the domain name, the public network IP providing the service, the machine room to which the IP belongs, the intranet IP address mapped from the public network, the list of service-providing nodes, the machine room of each node, node priorities, node on/off states, and the like, and an interface is provided for other modules or administrators to query and call.
Therefore, in the embodiment of the invention, when the egress link of a data center fails, the global load can resolve the accessed domain name to the address of the other data center, so that the failure is imperceptible to the user's service. When an application load balancing server fails, the global load server and the application load balancing servers cooperate to direct traffic to the application load balancing server of the other data center. When the data center interconnection link fails, the global load server resolves all domain names accessed by users to the service IP address of the primary data center, and traffic is scheduled to the primary data center. When a data center intranet fails, the network between the application load balancing server and the application nodes is interrupted; the priority of the other data center's nodes is then raised, and traffic reaches them through the interconnection private line.
Therefore, the embodiment of the invention distributes user traffic across both data centers, greatly improving resource utilization, and iteratively adjusts the routing strategy using machine learning to provide the optimal egress link. Meanwhile, network faults under different conditions can be quickly detected and responded to, services can be recovered in a short time, and fault points can be located, reducing operation and maintenance cost and pressure. Finally, the service configuration and the specific path information of user access can be packaged for network administrators, greatly improving operation and maintenance efficiency.
Based on the same inventive concept, an embodiment of the present invention provides a traffic scheduling apparatus, and referring to fig. 4, fig. 4 is a schematic diagram of the traffic scheduling apparatus provided in the embodiment of the present invention, where the apparatus is applied to a global scheduling server in a dual active data center traffic scheduling system, and the dual active data center traffic scheduling system includes: global scheduling server, first data center, second data center, every data center includes: at least one application server, the apparatus comprising:
a receiving module 401, configured to receive a service access request sent by a user;
a forwarding module 402, configured to forward the service access request to a first application server of a first data center to process the service access request, where the first application server of the first data center is determined according to the service access request and a load balancing policy;
a determining module 403, configured to determine an optimal egress link of the first data center according to a pre-trained routing model, where the optimal egress link is used to return the response traffic of the first application server to the service access request to the user;
the input of the routing model is the current link configuration data and the current link quality data, and the output is the optimal exit link.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring link quality configuration data of each exit link of the first data center;
the link price value determining module is used for determining the link price value of each exit link according to the position of the current user;
the user weighted value determining module is used for determining the user weighted value of each exit link according to the current user;
the link configuration data determining module is used for determining the current link configuration data according to the link quality configuration data, the link price value and the user weighted value;
and the link quality data determining module is used for acquiring the current packet loss rate and/or delay rate of each outlet link of the first data center and determining the current link quality data.
Optionally, each data center further comprises: at least one application load balancing server; the forwarding module 402 is specifically configured to:
determining a corresponding first data center according to a domain name carried by the service access request, and forwarding the service access request to the first data center, wherein the first data center forwards the service access request to an application load balancing server in the first data center, and the application load balancing server is used for determining a first application server according to the configuration and load balancing strategy of each application server in the first data center;
forwarding the service access request to the first application server.
Optionally, each data center further comprises: the system comprises at least one application load balancing server, wherein the application load balancing servers arranged in different data centers form a load balancing cluster; the forwarding module 402 is specifically configured to:
determining a corresponding first data center according to a domain name carried by the service access request, wherein the first data center forwards the service access request to a load balancing cluster, and the load balancing cluster is used for determining a first application server according to the configuration and a load balancing strategy of each application server of the first data center and a second data center;
and forwarding the service access request to an application server.
Optionally, the apparatus further comprises:
a sending module, configured to send a probe packet periodically and detect the reachability of target objects, where the target objects include: the data center egress link, the application load balancing server, the data center interconnection link, and the data center intranet link;
and the generating module is used for generating alarm information under the condition that any target object is unreachable.
Optionally, the apparatus further comprises:
the first fault processing module is used for forwarding a service access request accessing the first data center to an application load balancing server of a second data center under the condition that the application load balancing server of the first data center fails, wherein the application load balancing server of the second data center is used for forwarding the service access request to a service-providing application server of the first data center according to the configuration of each application server of the first data center and the load balancing policy.
Optionally, the apparatus further comprises:
and the second fault processing module is used for resolving the access domain name carried in the service access request into an address of a second data center under the condition that the exit link of the first data center has a fault, and processing the service access request by an application server arranged in the second data center.
Optionally, the apparatus further comprises:
and the third fault processing module is used for resolving all domain names carried in the service access request to the address of a first data center under the condition that an interconnection link between the data centers fails, wherein the first data center is a main data center.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Based on the same inventive concept, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the traffic scheduling method according to any of the embodiments.
Based on the same inventive concept, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps in the traffic scheduling method according to any of the above embodiments.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable traffic scheduling terminal apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable traffic scheduling terminal apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable traffic scheduling terminal apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable traffic scheduling terminal apparatus to cause a series of operational steps to be performed on the computer or other programmable terminal apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another like element in a process, method, article, or terminal that comprises the element.
The traffic scheduling method, apparatus, electronic device and storage medium provided by the present invention are described in detail above, and specific examples are applied herein to explain the principles and embodiments of the present invention, and the descriptions of the above embodiments are only used to help understanding the method and its core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (15)

1. A traffic scheduling method is applied to a global scheduling server in a dual-active data center traffic scheduling system, and the dual-active data center traffic scheduling system includes: global scheduling server, first data center, second data center, every data center includes: at least one application server, the method comprising:
receiving a service access request sent by a user;
forwarding the service access request to a first application server of a first data center to process the service access request, wherein the first application server of the first data center is determined according to the service access request and a load balancing strategy;
determining an optimal exit link of the first data center according to a pre-trained routing model, wherein the optimal exit link is used for returning the response traffic of the first application server to the service access request to a user;
the input of the routing model is the current link configuration data and the current link quality data, and the output is the optimal exit link.
2. The traffic scheduling method of claim 1, further comprising:
collecting link quality configuration data of each exit link of the first data center;
determining the link price value of each exit link according to the position of the current user;
determining a user weighted value of each exit link according to the current user;
determining current link configuration data according to the link quality configuration data, the link price value and the user weighted value;
and acquiring the current packet loss rate and/or delay rate of each exit link of the first data center, and determining the current link quality data.
3. The traffic scheduling method of claim 1, wherein each data center further comprises: at least one application load balancing server; forwarding the service access request to a first application server of a first data center, comprising:
determining a corresponding first data center according to a domain name carried by the service access request, and forwarding the service access request to the first data center, wherein the first data center forwards the service access request to an application load balancing server in the first data center, and the application load balancing server is used for determining a first application server according to the configuration and load balancing strategy of each application server in the first data center;
forwarding the service access request to the first application server.
4. The traffic scheduling method of claim 1, wherein each data center further comprises: the application load balancing servers arranged in different data centers form a load balancing cluster; forwarding the service access request to a first application server of a first data center, comprising:
determining a corresponding first data center according to a domain name carried by the service access request, wherein the first data center forwards the service access request to a load balancing cluster, and the load balancing cluster is used for determining a first application server according to the configuration and a load balancing strategy of each application server of the first data center and a second data center;
and forwarding the service access request to an application server.
5. The traffic scheduling method according to claim 4, wherein the method further comprises:
periodically sending a detection message and detecting the reachability of a target object, wherein the target object comprises: the data center exit link, the application load balancing server, the data center interconnection link, and the data center intranet link;
in the case where any target object is unreachable, alarm information is generated.
6. The traffic scheduling method according to claim 5, wherein the method further comprises:
and under the condition that the application load balancing server of the first data center fails, forwarding the service access request accessing the first data center to an application load balancing server of a second data center, wherein the application load balancing server of the second data center is used for forwarding the service access request to a service-providing application server of the first data center according to the configuration of each application server of the first data center and the load balancing strategy.
7. The traffic scheduling method of claim 5, further comprising:
and under the condition that the outlet link of the first data center is in fault, resolving the access domain name carried in the service access request into an address of a second data center, and processing the service access request by an application server arranged in the second data center.
8. The traffic scheduling method according to claim 5, wherein the method further comprises:
and under the condition of interconnection link failure among the data centers, resolving all domain names carried in the service access request to the address of a first data center, wherein the first data center is a main data center.
9. The traffic scheduling device is applied to a global scheduling server in a dual-active data center traffic scheduling system, and the dual-active data center traffic scheduling system includes: global scheduling server, first data center, second data center, every data center includes: at least one application server, the apparatus comprising:
the receiving module is used for receiving a service access request sent by a user;
the forwarding module is used for forwarding the service access request to a first application server of a first data center so as to process the service access request, wherein the first application server of the first data center is determined according to the service access request and a load balancing strategy;
a determining module, configured to determine an optimal egress link of the first data center according to a pre-trained routing model, where the optimal egress link is used to return a response traffic of the first application server to the service access request to a user;
the input of the routing model is the current link configuration data and the current link quality data, and the output is the optimal exit link.
10. The traffic scheduling device of claim 9, wherein the device further comprises:
the acquisition module is used for acquiring link quality configuration data of each exit link of the first data center;
the link price value determining module is used for determining the link price value of each exit link according to the position of the current user;
the user weighted value determining module is used for determining the user weighted value of each exit link according to the current user;
the link configuration data determining module is used for determining the current link configuration data according to the link quality configuration data, the link price value and the user weighted value;
and the link quality data determining module is used for acquiring the current packet loss rate and/or delay rate of each exit link of the first data center and determining the current link quality data.
11. The traffic scheduling device of claim 10, wherein each data center further comprises: at least one application load balancing server; the forwarding module is specifically configured to:
determining a corresponding first data center according to a domain name carried by the service access request, and forwarding the service access request to the first data center, wherein the first data center forwards the service access request to an application load balancing server in the first data center, and the application load balancing server is used for determining a first application server according to the configuration and load balancing strategy of each application server in the first data center;
forwarding the service access request to the first application server.
12. The traffic scheduling device of claim 10, wherein each data center further comprises: the system comprises at least one application load balancing server, wherein the application load balancing servers arranged in different data centers form a load balancing cluster; the forwarding module is specifically configured to:
determining a corresponding first data center according to a domain name carried by the service access request, wherein the first data center forwards the service access request to a load balancing cluster, and the load balancing cluster is used for determining a first application server according to the configuration of each application server of the first data center and the second data center and the load balancing strategy;
and forwarding the service access request to an application server.
13. The traffic scheduling device of claim 9, wherein the device further comprises:
a sending module, configured to send a probe packet periodically and detect the reachability of target objects, where the target objects include: the data center exit link, the application load balancing server, the data center interconnection link, and the data center intranet link;
and the generating module is used for generating alarm information under the condition that any target object is unreachable.
14. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the traffic scheduling method according to any of claims 1-8 when executing the computer program.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the traffic scheduling method according to any one of claims 1 to 8.
CN202310170635.5A 2023-02-27 2023-02-27 Traffic scheduling method and device, electronic equipment and storage medium Active CN115865932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310170635.5A CN115865932B (en) 2023-02-27 2023-02-27 Traffic scheduling method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115865932A true CN115865932A (en) 2023-03-28
CN115865932B CN115865932B (en) 2023-06-23

Family

ID=85659103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310170635.5A Active CN115865932B (en) 2023-02-27 2023-02-27 Traffic scheduling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115865932B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120144014A1 (en) * 2010-12-01 2012-06-07 Cisco Technology, Inc. Directing data flows in data centers with clustering services
CN105159775A (en) * 2015-08-05 2015-12-16 浪潮(北京)电子信息产业有限公司 Load balancer based management system and management method for cloud computing data center
CN106506588A (en) * 2016-09-23 2017-03-15 北京许继电气有限公司 How polycentric data center's dual-active method and system
CN107465721A (en) * 2017-06-27 2017-12-12 国家电网公司 Whole load equalizing method and system and dispatch server based on dual-active framework
CN109828868A (en) * 2019-01-04 2019-05-31 新华三技术有限公司成都分公司 Date storage method, device, management equipment and dual-active data-storage system
CN114143324A (en) * 2021-10-27 2022-03-04 上海卓悠网络科技有限公司 Load balancing method and device based on application market architecture
CN114465954A (en) * 2021-12-27 2022-05-10 天翼云科技有限公司 Self-adaptive routing method, device and equipment for special cloud line and readable storage medium
CN114500340A (en) * 2021-12-23 2022-05-13 天翼云科技有限公司 Intelligent scheduling distributed path calculation method and system

Also Published As

Publication number Publication date
CN115865932B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN113037560B (en) Service flow switching method and device, storage medium and electronic equipment
CN112671882B (en) Same-city double-activity system and method based on micro-service
CN112000448A (en) Micro-service architecture-based application management method
CN107465721B (en) Global load balancing method and system based on double-active architecture and scheduling server
US8200743B2 (en) Anomaly management scheme for a multi-agent system
CN111274027A (en) Multi-live load balancing method and system applied to openstack cloud platform
WO2019210580A1 (en) Access request processing method, apparatus, computer device, and storage medium
CN108900598B (en) Network request forwarding and responding method, device, system, medium and electronic equipment
CN104158707A (en) Method and device of detecting and processing brain split in cluster
WO2020001409A1 (en) Virtual network function (vnf) deployment method and apparatus
CN107707644A (en) Processing method, device, storage medium, processor and the terminal of request message
CN114900430B (en) Container network optimization method, device, computer equipment and storage medium
CN106060125A (en) Distributed real-time data transmission method based on data tags
CN115080436A (en) Test index determination method and device, electronic equipment and storage medium
CN113242299A (en) Disaster recovery system, method, computer device and medium for multiple data centers
CN115865932B (en) Traffic scheduling method and device, electronic equipment and storage medium
CN114338670B (en) Edge cloud platform and network-connected traffic three-level cloud control platform with same
CN114500340B (en) Intelligent scheduling distributed path calculation method and system
CN111193674A (en) Method and system for realizing load distribution based on scene and service state
Nathaniel et al. Istio API gateway impact to reduce microservice latency and resource usage on kubernetes
Oliveira et al. Design and implementation of fault tolerance techniques to improve QoS in SOA
CN116723111B (en) Service request processing method, system and electronic equipment
CN102904957A (en) Information updating method and system in high-availability cluster
CN114785465B (en) Implementation method, server and storage medium for multiple activities in different places
CN115801858A (en) Method and device for selecting and optimizing load balancing strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 100007 room 205-32, floor 2, building 2, No. 1 and No. 3, qinglonghutong a, Dongcheng District, Beijing

Patentee after: Tianyiyun Technology Co.,Ltd.

Address before: 100093 Floor 4, Block E, Xishan Yingfu Business Center, Haidian District, Beijing

Patentee before: Tianyiyun Technology Co.,Ltd.
