CN109688191B - Traffic scheduling method and communication device - Google Patents

Traffic scheduling method and communication device

Info

Publication number
CN109688191B
CN109688191B
Authority
CN
China
Prior art keywords
micro
service
service instance
instances
instance
Prior art date
Legal status
Active
Application number
CN201811242554.7A
Other languages
Chinese (zh)
Other versions
CN109688191A (en)
Inventor
蒙泽超
Current Assignee
Shenzhen Huawei Cloud Computing Technology Co ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201811242554.7A priority Critical patent/CN109688191B/en
Publication of CN109688191A publication Critical patent/CN109688191A/en
Application granted granted Critical
Publication of CN109688191B publication Critical patent/CN109688191B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a traffic scheduling method and a communication device, which can improve the stability of micro-service instance invocation, reduce the network delay of micro-service instance invocation, and improve the reliability and efficiency of processing network services. The method comprises the following steps: acquiring the running environment marks of the running environments where K candidate micro-service instances are located, where a running environment mark indicates the network location of a candidate micro-service instance and K is a positive integer; and invoking a first target micro-service instance, namely the candidate micro-service instance, among the K candidates, whose running environment mark is the same as the running environment mark of the running environment where the current micro-service instance is located. The method can be applied to processing network services in a cloud computing system by means of micro-service instance invocation.

Description

Traffic scheduling method and communication device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a traffic scheduling method and a communications apparatus.
Background
To meet disaster recovery requirements and reduce deployment cost, a cloud computing system usually adopts the physical architecture of distributed active/active data centers and introduces technologies such as virtualization and micro-services (MS) to complete network deployment. In particular, a cloud computing system typically includes multiple independently deployable micro-services, each performing a single task. Here, "independently deployable" can be understood as follows: multiple micro-service instances corresponding to the same micro-service can be deployed at different network locations, and micro-service instances corresponding to different micro-services can be deployed at the same network location. Illustratively, in the cloud computing system shown in fig. 1, instances 1 and 2 of micro-service 1 through micro-service 3 are located in data center (DC) 1, and instances 3 and 4 of micro-service 1 through micro-service 3 are located in data center 2. The reference sign MSx-Ey denotes instance y of micro-service x, where x takes one of the values 1, 2, 3 and y takes one of the values 1, 2, 3, 4.
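The deployment just described can be sketched as a small data model. This is an illustrative aid only; the class and field names below are assumptions, not part of the patent.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MicroServiceInstance:
    service: int      # x in the reference sign MSx-Ey
    index: int        # y in the reference sign MSx-Ey
    data_center: int  # DC hosting the instance

    @property
    def label(self) -> str:
        return f"MS{self.service}-E{self.index}"


# Instances 1 and 2 of micro-services 1-3 sit in DC1; instances 3 and 4 in DC2.
deployment = [
    MicroServiceInstance(service=x, index=y, data_center=1 if y <= 2 else 2)
    for x in (1, 2, 3)
    for y in (1, 2, 3, 4)
]
```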
When large, complex software comprising multiple single tasks runs on a cloud computing system, multiple micro-service instances can be invoked interactively to execute those single tasks, thereby realizing the functions of the software. In general, each micro-service instance can be regarded as a process, so "interactively invoking multiple micro-service instances" can be realized through inter-process communication (IPC), forming a complete micro-service instance call chain.
However, when multiple micro-service instances are invoked interactively as above, the scheduling algorithm determines the next micro-service instance to invoke only according to the single task that the current micro-service instance needs to call; it does not consider the network location or network delay of the next micro-service instance, so remote micro-service instances may be invoked frequently. For example, as shown in fig. 1, a load balancer (LB) calls MS1-E2 to process a received cloud service request, MS1-E2 calls MS2-E3, and MS2-E3 calls MS3-E1.
It is easy to understand that, compared with calling a local micro-service instance, calling a remote micro-service instance offers poorer network stability and longer network delay. As a result, the cloud computing system is less reliable, takes longer, and is less efficient when processing a cloud service request.
Disclosure of Invention
Embodiments of the present application provide a traffic scheduling method and a communication apparatus, which improve stability of micro-service instance invocation, reduce network latency of micro-service instance invocation, and improve reliability and efficiency of processing a network service.
In order to achieve the above purpose, the embodiments of the present application provide the following technical solutions:
in a first aspect, a traffic scheduling method is provided, including: acquiring the running environment marks of the running environments where K candidate micro-service instances are located, and then invoking a first target micro-service instance. A running environment mark indicates the network location of a candidate micro-service instance, and K is a positive integer. The first target micro-service instance is the candidate, among the K candidate micro-service instances, whose running environment mark is the same as the running environment mark of the running environment where the current micro-service instance is located.
The traffic scheduling method provided by the application preferentially invokes a candidate micro-service instance whose running environment mark is the same as that of the running environment where the current micro-service instance is located; that is, it preferentially invokes a micro-service instance at the same network location as the current micro-service instance. This reduces the probability that the current micro-service instance invokes candidates at different network locations, shortens the micro-service instance call chain, improves the stability of micro-service instance invocation, and reduces its network delay, thereby improving the reliability and efficiency of processing network services through micro-service instance invocation.
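A minimal sketch of this rule follows; the function and variable names are assumptions made for illustration. Among the K candidates, it returns one whose running environment mark equals that of the current instance, or None when no such candidate exists.

```python
def pick_first_target(current_mark, candidates):
    """candidates: list of (instance_name, env_mark) pairs."""
    for name, mark in candidates:
        if mark == current_mark:
            return name  # same network location as the current instance
    return None          # no first target; fall back to priority comparison


# e.g. an instance running in DC1 choosing among two instances of micro-service 2:
candidates = [("MS2-E3", "DC2"), ("MS2-E1", "DC1")]
```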
Illustratively, the first target micro-service instance may be the candidate, among the K candidate micro-service instances, that is located in the same virtual machine (VM)/physical machine (PM), the same data center (DC), the same local network, the same virtual private cloud (VPC), or the same cloud computing system as the current micro-service instance.
It is easy to understand that if none of the K candidate micro-service instances is at the same network location as the current micro-service instance, the candidate that is closest to the current micro-service instance, or that has the shortest response time, still needs to be invoked. Therefore, in a possible design, the traffic scheduling method may further include: if the first target micro-service instance does not exist, invoking a second target micro-service instance. The second target micro-service instance is the candidate determined by comparing the calling priorities of the K candidate micro-service instances.
For example, the calling priorities of the K candidate micro-service instances may be determined according to a preset priority determination rule. Optionally, for each of the following environments, the rule may specify that a candidate sharing that environment with the current micro-service instance has a higher scheduling priority than a candidate that does not: the same virtual machine VM/physical machine PM, the same DC, the same local network, the same virtual private cloud VPC, and the same cloud computing system.
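The hierarchy of rules can be sketched as a scoring function, with the most specific shared environment winning; the level names and dictionary layout below are assumptions for illustration, not the patent's data model.

```python
# Most specific first: VM/PM, then DC, local network, VPC, cloud system.
LEVELS = ("vm_pm", "dc", "local_net", "vpc", "cloud")


def call_priority(current_env, candidate_env):
    """Higher value = higher scheduling priority; 0 = nothing shared."""
    for rank, level in enumerate(LEVELS):
        value = current_env.get(level)
        if value is not None and value == candidate_env.get(level):
            return len(LEVELS) - rank
    return 0


def pick_second_target(current_env, candidates):
    """candidates: list of (name, env) pairs; keep the highest-priority one."""
    return max(candidates, key=lambda c: call_priority(current_env, c[1]))[0]
```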
In a possible design, before the running environment marks of the running environments where the K candidate micro-service instances are located are obtained, the traffic scheduling method may further include: configuring running environment marks for the running environments where N preset micro-service instances are located. The running environments of the N preset micro-service instances include at least one of: the physical machine PM, virtual machine VM, data center DC, local network, private cloud, and cloud computing system where the N preset micro-service instances are located. Specifically, obtaining the running environment marks of the K candidate micro-service instances may include: acquiring the running environment marks of the running environments where the N preset micro-service instances are located according to the call records of the N preset micro-service instances, and then determining, among the N preset micro-service instances, the K preset micro-service instances that support the function to be called by the current micro-service instance as the K candidate micro-service instances.
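A sketch of this design, with a data layout invented purely for illustration: tag the N preset instances, then keep as the K candidates those that support the function the current instance needs to call.

```python
preset_instances = [
    {"name": "MS2-E1", "env_mark": "DC1", "functions": {"checkout"}},
    {"name": "MS2-E3", "env_mark": "DC2", "functions": {"checkout"}},
    {"name": "MS3-E1", "env_mark": "DC1", "functions": {"billing"}},
]


def candidates_for(function, instances):
    """The K candidates: preset instances supporting the wanted function."""
    return [inst for inst in instances if function in inst["functions"]]


def env_marks(instances):
    """Map instance name -> running environment mark."""
    return {inst["name"]: inst["env_mark"] for inst in instances}
```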
In a second aspect, a communication device is provided. The communication device includes an acquisition module and a scheduling module. The acquisition module is configured to acquire the running environment marks of the running environments where K candidate micro-service instances are located, and the scheduling module is configured to invoke a first target micro-service instance. A running environment mark indicates the network location of a candidate micro-service instance, and K is a positive integer; the first target micro-service instance is the candidate, among the K candidate micro-service instances, whose running environment mark is the same as that of the running environment where the current micro-service instance is located.
Illustratively, the first target micro-service instance may be the candidate, among the K candidate micro-service instances, that is located in the same virtual machine (VM)/physical machine (PM), the same data center (DC), the same local network, the same virtual private cloud (VPC), or the same cloud computing system as the current micro-service instance.
In one possible design, the scheduling module is further configured to invoke a second target micro-service instance if the first target micro-service instance does not exist. The second target micro-service instance is the candidate determined by comparing the calling priorities of the K candidate micro-service instances.
Optionally, the scheduling module is further configured to determine the calling priorities of the K candidate micro-service instances according to a preset priority determination rule. For each of the following environments, the rule may specify that a candidate sharing that environment with the current micro-service instance has a higher scheduling priority than a candidate that does not: the same virtual machine VM/physical machine PM, the same DC, the same local network, the same virtual private cloud VPC, and the same cloud computing system.
In one possible design, the communication device may further include a configuration module. The configuration module is configured to configure running environment marks for the running environments where N preset micro-service instances are located before the acquisition module obtains the running environment marks of the running environments where the K candidate micro-service instances are located. The running environments of the N preset micro-service instances include at least one of: the physical machine PM, virtual machine VM, data center DC, local network, private cloud, and cloud computing system where the N preset micro-service instances are located. The acquisition module is further configured to acquire the running environment marks of the running environments where the N preset micro-service instances are located according to the call records of the N preset micro-service instances. The scheduling module is further configured to determine, among the N preset micro-service instances, the K preset micro-service instances that support the function to be called by the current micro-service instance as the K candidate micro-service instances.
In a third aspect, there is also provided a communication apparatus, including: a processor, a transceiver, and a memory. Wherein the memory stores one or more programs comprising computer-executable instructions. When the processor executes the computer-executable instructions described above, the communication device is caused to perform the traffic scheduling method according to the first aspect and any one of its various alternative implementations.
In a fourth aspect, a readable storage medium is provided, storing a program or instructions which, when run on a computer, cause the computer to perform the traffic scheduling method according to the first aspect and any one of its various alternative implementations.
In a fifth aspect, a computer program product is provided that contains a program or instructions. When the above program or instructions are run on a computer, the computer performs the traffic scheduling method according to any one of the above first aspect and its various alternative implementations.
In a sixth aspect, a cloud computing system is provided, which comprises one or more clients, and one or more of the above communication devices.
Drawings
FIG. 1 illustrates a schematic diagram of a scenario for scheduling microservice instances across data centers;
fig. 2A shows a first structural diagram of a cloud computing system provided in an embodiment of the present application;
fig. 2B shows a second schematic structural diagram of a cloud computing system provided in an embodiment of the present application;
fig. 3 illustrates a first flowchart of a traffic scheduling method according to an embodiment of the present application;
fig. 4 illustrates a second flowchart of a traffic scheduling method according to an embodiment of the present application;
fig. 5 illustrates a third flowchart of a traffic scheduling method according to an embodiment of the present application;
fig. 6 is a first schematic structural diagram of a communication device according to an embodiment of the present application;
fig. 7 is a second schematic structural diagram of a communication device according to an embodiment of the present application;
fig. 8 is a third schematic structural diagram of a communication device according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist. For example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or descriptions. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
In the embodiments of the present application, a subscripted form such as W₁ may sometimes be written in a non-subscripted form such as W1; the intended meaning is the same when the distinction is not emphasized.
The network architecture and the service scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and as a person of ordinary skill in the art knows that along with the evolution of the network architecture and the appearance of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
The technical solution of the embodiment of the present application may be applied to various virtualized traffic scheduling systems including a plurality of micro services independently deployed by using a distributed physical architecture, such as a cloud computing system shown in fig. 2A.
As shown in fig. 2A, the cloud computing system includes: LB, DC1 and DC 2.
DC1 includes four VMs/PMs, configured from top to bottom with a configuration instance (CE), a management instance (ME), and micro-service instances MS1-E1 and MS2-E1; DC2 includes micro-service instances MS1-E2 and MS2-E2. In addition, each VM/PM configured with a micro-service instance is also configured with a corresponding scheduling instance, which determines the next micro-service instance that the micro-service instance calls. As shown in fig. 2A, the VM/PM in DC2 configured with MS1-E2 is also configured with the scheduling instance MS1-S2, and MS1-S2 determines that the next micro-service instance called by MS1-E2 is MS2-E2. Since micro-service instances correspond one-to-one with scheduling instances, the scheduling operation is also initiated by the micro-service instance; that is, scheduling can be regarded as completed by the micro-service instance itself. This scheduling manner, in which the micro-service instance itself invokes the next micro-service instance, is called client scheduling. Since a micro-service instance can be regarded as a process, client scheduling may also be referred to as in-process scheduling.
A running environment may be configured with only one micro-service instance and its corresponding scheduling instance, that is, with a single micro-service instance/scheduling instance pair. Of course, multiple micro-service instances, together with the scheduling instances corresponding to them one-to-one, may also be configured in one running environment; that is, multiple such pairs may exist in the same running environment. This is not limited in this application.
It should be noted that, in the cloud computing system shown in fig. 2A, each micro service instance is configured with a corresponding scheduling instance. It should be understood that, in order to save scheduling resources, one scheduling instance may also provide scheduling services for multiple micro-service instances, i.e., multiple micro-service instances may share the same scheduling instance. That is, in addition to the above-described client scheduling, there are two scheduling methods as follows.
Proxy scheduling
Specifically, the scheduling operations of multiple micro-service instances running in the same running environment, such as a VM/PM, are all performed by an independent proxy process, which determines the next micro-service instance to be invoked by each of those micro-service instances. Since the scheduling operation is performed by an independent proxy process in the same running environment, proxy scheduling may also be referred to as independent-process scheduling.
External LB scheduling
Specifically, the scheduling operations of multiple micro-service instances running in multiple running environments may be performed by a scheduling instance external to those running environments.
For example, assuming that the plurality of operating environments are a plurality of VM/PMs, the external scheduling instance may be a scheduling instance running on other VM/PMs than the plurality of VM/PMs. For another example, assuming that the plurality of operating environments are a plurality of DCs, the external scheduling instance may be a scheduling instance operating in a network element or a subsystem other than the plurality of DCs, such as a scheduling instance in another DC, or a scheduling instance in another network element or subsystem in communication connection with the plurality of DCs.
It should be noted that the three scheduling manners of the micro service instance are defined according to whether the scheduling instance and one or more micro service instances served by the scheduling instance are located in the same operating environment. The operating environment may be defined as a certain network hierarchy of the cloud computing system. For example, assuming that the cloud computing system includes multiple DCs, each DC including multiple VMs/PMs, the operating environment may be defined as a DC or as a VM/PM. This is not limited in this application.
The configuration instance (CE) is configured to receive configuration instructions and deployment instructions input by a system administrator. A configuration instruction is used to configure running environment marks for network elements and subsystems in the cloud computing system, such as a DC or a VM/PM. A running environment mark indicates the network location of the network element or subsystem and may be the name, identifier, or network address of the network element or subsystem, or a combination of these. For example, as shown in fig. 2A, the running environment mark of the VM/PM configured with MS1-E1 may be {the identifier of the VM/PM configured with MS1-E1 itself}, or {the identifier of DC1, the identifier of the VM/PM configured with MS1-E1 itself}.
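Such a combined mark can be represented, for instance, as a tuple of identifiers ordered from outermost to innermost element. The tuple layout and identifier strings below are assumptions for illustration.

```python
def make_env_mark(*identifiers):
    """A running environment mark built from one or more identifiers."""
    return tuple(identifiers)


# The VM/PM hosting MS1-E1, identified on its own or together with its DC:
mark_vm_only = make_env_mark("VM-MS1E1")
mark_with_dc = make_env_mark("DC1", "VM-MS1E1")
```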
It should be noted that the configuration instruction may be a configuration command input by a system administrator, or may be a preset start instruction of an executable configuration script or a configuration program. This is not limited in this application.
A deployment instruction instructs the ME to acquire the running environment marks of the running environments where the micro-service instances in the cloud computing system are located. Illustratively, the running environment mark of the running environment where a micro-service instance is located can be obtained in the following ways.
Configuration analysis method
The configuration commands, configuration files, and configuration programs each record a correspondence between the running environment marks of the running environments to be configured and the subsystems and network elements of the cloud computing system. Therefore, in the embodiment of the present application, the running environment mark of the running environment where a micro-service instance is located may be obtained by analyzing the configuration command, configuration file, or configuration program.
Active collection mode
Specifically, the method may include the following steps:
Step 1: The management instance actively sends a tag probe request to scheduling instances, for example by broadcast. The probed scheduling instances may be some, or all, of the scheduling instances managed by the management instance.
Step 2: Each scheduling instance receives a probe response, returned by each micro-service instance it invokes, carrying the operating environment tag of the environment in which that micro-service instance runs.
Step 3: The management instance records and stores the correspondence between each invoked micro-service instance and the operating environment tag of the environment in which it runs.
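Illustratively, steps 1 to 3 of the active-collection mode can be sketched as follows (a minimal sketch; class and method names are assumptions, and the micro-service-instance hop of step 2 is collapsed into the scheduler's response for brevity):

```python
class SchedulingInstance:
    """Holds the env tags of the micro-service instances this scheduler
    invokes, as gathered from their probe responses (step 2)."""
    def __init__(self, known_tags):
        self.known_tags = known_tags        # {instance_id: env_tag}

    def on_tag_probe(self):
        # Respond to the management instance's probe with the collected tags.
        return dict(self.known_tags)

class ManagementInstance:
    def __init__(self):
        self.tag_registry = {}              # step 3: recorded correspondence

    def broadcast_tag_probe(self, schedulers):
        # Step 1: probe some or all managed scheduling instances.
        for sched in schedulers:
            self.tag_registry.update(sched.on_tag_probe())

mgr = ManagementInstance()
mgr.broadcast_tag_probe([
    SchedulingInstance({"MS2-E1": "DC1"}),
    SchedulingInstance({"MS2-E2": "DC2"}),
])
print(mgr.tag_registry)  # {'MS2-E1': 'DC1', 'MS2-E2': 'DC2'}
```

In a real deployment the probe would travel over the network (e.g. as a broadcast message) rather than as a direct method call.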
It should be noted that a deployment instruction may be a tag-collection command entered by a system administrator, or a preset start instruction of an executable deployment script or deployment program. In practice, tag collection may be started periodically or triggered by a preset condition. This is not limited in this application.
Calling record reporting mode
The management instance maintains communication connections with the scheduling instances of all the micro-service instances it serves. When one micro-service instance calls another, the scheduling instance may report the call record to the management instance. A call record includes the operating environment tag of the environment in which the current micro-service instance runs and that of the environment in which the next micro-service instance called on its behalf runs. The management instance derives the operating environment tags of the micro-service instances from the reported call records. Call records may be reported in real time, periodically, or after a cloud service request has been processed; this is not limited in this application.
For example, the management instance may also acquire the call record in a listening manner, which is not described herein again.
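Illustratively, the call-record reporting mode can be sketched as follows (a minimal sketch; names and the record layout are assumptions for illustration):

```python
from collections import Counter

class ManagementInstance:
    """Receives call records and tallies, per (instance, env tag) pair,
    how often that correspondence has been observed."""
    def __init__(self):
        self.records = []
        self.tag_counts = Counter()   # (instance_id, env_tag) -> occurrences

    def report_call(self, caller, caller_tag, callee, callee_tag):
        # A call record carries the env tag of the current instance and of
        # the next instance invoked on its behalf.
        self.records.append((caller, caller_tag, callee, callee_tag))
        self.tag_counts[(caller, caller_tag)] += 1
        self.tag_counts[(callee, callee_tag)] += 1

mgr = ManagementInstance()
mgr.report_call("MS1-E2", "DC2", "MS2-E2", "DC2")  # same-DC invocation
mgr.report_call("MS1-E2", "DC2", "MS2-E1", "DC1")  # cross-DC invocation
print(mgr.tag_counts[("MS1-E2", "DC2")])  # 2
```

Whether records arrive in real time, in periodic batches, or after a request completes only changes when `report_call` is invoked, not the bookkeeping itself.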
The LB receives cloud service requests initiated by client applications and invokes the first micro-service instance required to process each request. Illustratively, as shown in FIG. 2A, the LB schedules micro-service instance MS1-E2 in DC2 for a cloud service request. MS1-E2 then requests its scheduling instance MS1-S2 to schedule the next micro-service instance, e.g., MS2-E2, and so on, until the cloud service request is processed and a cloud service response is sent to the client application.
It should be understood that fig. 2A is a simplified schematic diagram that is merely exemplary for ease of understanding, and that other network elements or subsystems, which are not depicted in fig. 2A, may also be included in the cloud computing system.
The cloud computing system provided by this application is explained below with a specific example. Fig. 2B illustrates another schematic structural diagram of a cloud computing system provided herein.
As shown in fig. 2B, the cloud computing system includes infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) layers, and two data centers (DCs).
The IaaS is a physical resource management platform in the cloud computing system, and can provide virtualized resources, such as running environments and storage resources of virtual machines and/or physical machines, for the upper PaaS platform.
The PaaS is a middleware platform that provides a development environment for constructing applications, such as a customized software development platform. A core function of the PaaS platform is application lifecycle management, including deployment, upgrade, and uninstallation, which may be provided by a deployment management unit. The deployment management unit cooperates with deployment agents in each operating environment: it issues application management commands uniformly, and the agents execute the corresponding actions in their environments.
The SaaS layer is the service network of the cloud computing system; it exposes the application services deployed in it to the Internet and mainly includes network devices such as the LB.
Software and hardware resources such as virtual machines or physical machines managed by the IaaS can be deployed in different DCs, so that the purpose of high availability of multiple DCs is achieved.
In an embodiment of the present application, the management flow in fig. 2B may include the deployment instruction and the configuration instruction in fig. 2A, and the traffic flow in fig. 2B includes the cloud service request in fig. 2A.
In a micro-service architecture, the main management unit is the micro-service registry, in which every micro-service must be registered. A micro-service client (which may itself be another micro-service) therefore looks up an available micro-service instance address in the registry and sends its service request to that instance.
In practice, a micro-service instance usually has several replicas, and the client needs to perform route selection to find a suitable one. Specifically, routing may be the responsibility of a traffic scheduling unit, which can take one of three forms: centralized, in-process, or independent-process.
It should be noted that the names of the unit modules in the cloud computing system do not limit the devices themselves; in an actual implementation, the unit modules may appear under other names. As long as the functions of each unit module are similar to those in the embodiments of this application, they can be regarded as falling within the technical solutions provided by this application.
In addition, fig. 2A and 2B only illustrate one possible division manner of the unit modules of the cloud computing system. In practical applications, there may be other dividing manners, such as dividing one unit module in fig. 2A and fig. 2B into a plurality of unit modules, or combining a plurality of unit modules in fig. 2A and fig. 2B into one unit module. For example, the management instance in FIG. 2A may include the deployment management unit and the deployment flag unit in FIG. 2B. As another example, the scheduling instance in fig. 2A may include the traffic scheduling unit and the flag scheduling unit in fig. 2B. As long as the functions executed by the repartitioning of the unit modules are similar to the embodiments of the present application, the technical solutions equivalent to those provided in the present application can be considered.
Fig. 3 shows a flowchart of a traffic scheduling method according to an embodiment of the present application. As shown in fig. 3, the traffic scheduling method includes S301 to S302:
S301: Obtain the operating environment tags of the environments in which the K candidate micro-service instances run.
The running environment mark is used for indicating the network positions of K candidate micro-service instances, and K is a positive integer.
It should be noted that the operating environment tags of all micro-service instances in a cloud computing system are configured in advance by an administrator. Therefore, referring to fig. 3 and as shown in fig. 4, in one possible design, before step S301 is executed, the traffic scheduling method may further include step S401:
S401: Configure operating environment tags for the environments in which the N preset micro-service instances run.
The environments in which the N preset micro-service instances run include at least one of the following: a physical machine (PM), a virtual machine (VM), a data center (DC), a local network, a private cloud, or a cloud computing system in which the N preset micro-service instances are located.
For the configuration of the runtime environment flag, reference may be made to the above description of the configuration example and the configuration instruction, which is not described herein again.
Optionally, the step S301 of obtaining the running environment tags of the running environments where the K candidate microservice instances are located may include steps S402 to S403:
S402: Obtain the operating environment tags of the environments in which the N preset micro-service instances run from the call records of those instances.
Specifically, reference is made to the above description of the management example and the deployment instruction, which is not repeated herein.
S403: Determine, among the N preset micro-service instances, the K preset micro-service instances that support the function to be called by the current micro-service instance as the K alternative micro-service instances.
It should be noted that each of the K alternative micro-service instances may support a task supported by the next micro-service instance to be invoked by the current micro-service instance.
For example, as shown in fig. 2A, assume that the current micro-service instance is MS1-E2 in DC2, that the next micro-service to be invoked by MS1-E2 is MS2, and that MS2 has 2 micro-service instances in total, MS2-E1 and MS2-E2. Then MS2-E1 and MS2-E2 may be regarded as the K alternative micro-service instances, with K equal to 2.
It is to be understood that the K alternative micro-service instances may be located at the same network location or at different ones. That is, their operating environment tags may be identical or different; this is not limited in this application.
For example, as shown in FIG. 2A, the 2 micro-service instances of micro-service MS2, MS2-E1 and MS2-E2, are located in DC1 and DC2, respectively. As another example, as shown in fig. 1, micro-service MS2 has 4 micro-service instances MS2-E1 through MS2-E4; MS2-E1 and MS2-E2 are located in DC1, and MS2-E3 and MS2-E4 in DC2.
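Illustratively, step S403 (filtering the N preset instances down to the K alternatives that support the needed function) can be sketched as follows (a minimal sketch; the tuple layout is an assumption for illustration):

```python
def select_alternatives(preset, needed_service):
    """preset: list of (instance_id, service_name, env_tag) for the N preset
    micro-service instances; keep the K that belong to the needed service."""
    return [(iid, tag) for iid, svc, tag in preset if svc == needed_service]

preset = [
    ("MS2-E1", "MS2", "DC1"),
    ("MS2-E2", "MS2", "DC2"),
    ("MS3-E1", "MS3", "DC1"),
]
print(select_alternatives(preset, "MS2"))
# [('MS2-E1', 'DC1'), ('MS2-E2', 'DC2')] -> K = 2, as in the FIG. 2A example
```

The env tags carried along here are exactly the ones obtained in S402 and consumed in S301/S302.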
S302: Invoke a first target micro-service instance.
The first target micro-service instance is the alternative micro-service instance, among the K, whose operating environment tag is the same as that of the environment in which the current micro-service instance runs. That is, the first target micro-service instance is located at the same network location as the current micro-service instance.
Illustratively, the first target microservice instance may include one of:
in the K candidate micro-service instances, the candidate micro-service instance is positioned in the same virtual machine VM/physical machine PM as the current micro-service instance;
in the K candidate micro-service instances, the candidate micro-service instance and the current micro-service instance are positioned in the same data center DC;
in the K candidate micro-service instances, the candidate micro-service instance which is positioned in the same local network as the current micro-service instance;
in the K alternative micro-service instances, the alternative micro-service instances which are positioned in the same Virtual Private Cloud (VPC) as the current micro-service instance;
and in the K candidate micro-service instances, the candidate micro-service instance and the current micro-service instance are positioned in the same cloud computing system.
Illustratively, as shown in FIG. 2A, the current micro-service instance MS1-E2 needs to invoke micro-service MS2, which has 2 instances, MS2-E1 in DC1 and MS2-E2 in DC2. The scheduling instance MS1-S2 of MS1-E2 can preferentially schedule MS2-E2 in DC2 rather than MS2-E1 in DC1. This reduces cross-DC invocations, shortens the micro-service invocation chain, improves the reliability of instance invocation, shortens response time, and improves the processing efficiency of cloud service requests.
In practice, several of the K candidate micro-service instances may share the operating environment tag of the environment in which the current micro-service instance runs. To further improve the processing efficiency of cloud service requests, the candidate with the shortest response time among them may be selected as the first target micro-service instance.
Illustratively, as shown in fig. 1, the current micro-service instance is MS1-E3 in DC2 and needs to invoke micro-service MS2, so the K alternative micro-service instances are the 4 instances MS2-E1 through MS2-E4, of which MS2-E3 and MS2-E4 are both located in DC2 together with the current micro-service instance MS1-E3. In this case, one of MS2-E3 and MS2-E4 may be determined as the first target micro-service instance based on additional conditions, for example whichever of the two has the shorter response time.
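Illustratively, the first-target selection of S302, including the response-time tie-break just described, can be sketched as follows (a minimal sketch; names and the tuple layout are assumptions for illustration):

```python
def pick_first_target(current_tag, alternatives):
    """alternatives: list of (instance_id, env_tag, response_ms) for the K
    alternative micro-service instances. Returns the first-target instance id,
    or None when no alternative shares the current instance's env tag."""
    local = [a for a in alternatives if a[1] == current_tag]
    if not local:
        return None   # no first target exists; fall through to S501
    # Tie-break among same-location alternatives by shortest response time.
    return min(local, key=lambda a: a[2])[0]

alts = [("MS2-E1", "DC1", 5.0), ("MS2-E3", "DC2", 9.0), ("MS2-E4", "DC2", 4.0)]
print(pick_first_target("DC2", alts))  # MS2-E4: same DC, shorter response time
```

Returning `None` here corresponds to the "first target micro-service instance does not exist" case handled by S501 below.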
It is easy to understand that even when none of the K candidate micro-service instances is located at the same network location as the current micro-service instance, one of them still needs to be selected and invoked, for example the candidate closest to the current micro-service instance or with the shortest response time, so that the cloud service request received by the cloud computing system can be processed. Therefore, in another possible design, with reference to fig. 3 or fig. 4 (taking fig. 3 as an example), as shown in fig. 5, the traffic scheduling method may further include S501:
S501: If the first target micro-service instance does not exist, invoke a second target micro-service instance.
The second target micro-service instance is an alternative micro-service instance determined by comparing the invocation priorities of the K alternative micro-service instances.
"The first target micro-service instance does not exist" means that the operating environments of all K alternative micro-service instances differ from that of the current micro-service instance. In this case, the second target micro-service instance may be determined according to preset priorities; that is, the invocation priorities of the K alternative micro-service instances may be determined according to a preset priority-determination rule.
Optionally, the priority determination rule may include at least one of:
the scheduling priority of the alternative micro-service instances which are positioned in the same virtual machine VM/physical machine PM with the current micro-service instance is higher than that of the alternative micro-service instances which are not positioned in the same VM/PM with the current micro-service instance;
the scheduling priority of the alternative micro-service instance which is positioned in the same DC with the current micro-service instance is higher than that of the alternative micro-service instance which is not positioned in the same DC with the current micro-service instance;
the scheduling priority of the alternative micro-service instances which are positioned in the same local network with the current micro-service instance is higher than that of the alternative micro-service instances which are not positioned in the same local network with the current micro-service instance;
the scheduling priority of the alternative micro-service instance which is positioned in the same private cloud VPC with the current micro-service instance is higher than that of the alternative micro-service instance which is not positioned in the same VPC with the current micro-service instance;
the scheduling priority of an alternative microservice instance that is located on the same cloud computing system as the current microservice instance is higher than the scheduling priority of an alternative microservice instance that is not located on the same cloud computing system as the current microservice instance.
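Illustratively, the priority-determination rule above ranks candidates by how much of the location hierarchy they share with the current instance (VM/PM, then DC, then local network, then VPC, then cloud computing system). A minimal sketch, with the level names and dict layout as assumptions for illustration:

```python
# Location hierarchy, from highest scheduling priority to lowest.
LEVELS = ("vm", "dc", "local_net", "vpc", "cloud")

def call_priority(current_loc, candidate_loc):
    """Locations are dicts over LEVELS; a smaller return value means a
    higher invocation priority (earlier shared level)."""
    for rank, level in enumerate(LEVELS):
        if level in current_loc and current_loc[level] == candidate_loc.get(level):
            return rank
    return len(LEVELS)   # shares nothing with the current instance

def pick_second_target(current_loc, candidates):
    """candidates: list of (instance_id, location_dict)."""
    return min(candidates, key=lambda c: call_priority(current_loc, c[1]))[0]

cur = {"vm": "VM-1", "dc": "DC2", "cloud": "C1"}
cands = [
    ("MS2-E1", {"vm": "VM-9", "dc": "DC1", "cloud": "C1"}),  # same cloud only
    ("MS2-E3", {"vm": "VM-5", "dc": "DC2", "cloud": "C1"}),  # same DC
]
print(pick_second_target(cur, cands))  # MS2-E3: shares DC2 with the caller
```

Each of the five listed rules corresponds to one comparison level in `LEVELS`; a deployment could use any subset of them.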
It should be noted that the processing efficiency of an alternative micro-service instance located in the same operating environment as the current micro-service instance is generally higher than that of one located in a different operating environment. However, because the network environment (load, network latency, and so on) changes dynamically, the opposite may sometimes hold. In that case, the scheduling priorities of same-environment and different-environment alternatives may also be adjusted dynamically as the network environment changes.
For example, when the response time of the alternative micro-service instance in the same operating environment as the current micro-service instance exceeds a first duration threshold, its scheduling priority is lowered below that of the alternative micro-service instances not in the same operating environment.
For another example, when the response time of the alternative micro-service instance in the same operating environment as the current micro-service instance exceeds that of an alternative micro-service instance in a different operating environment by more than a second duration threshold, the scheduling priority of the same-environment alternative is lowered below that of the different-environment alternative.
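Illustratively, the two dynamic-adjustment examples can be sketched as a single predicate (a minimal sketch; the threshold values and exact comparison direction are assumptions for illustration, not prescribed by this application):

```python
# Assumed illustrative thresholds, in milliseconds.
FIRST_THRESHOLD_MS = 100.0    # absolute limit on the local response time
SECOND_THRESHOLD_MS = 20.0    # tolerated gap between local and remote

def prefer_local(local_rt_ms, remote_rt_ms):
    """Return True while the same-environment alternative keeps its higher
    scheduling priority; False once either demotion rule fires."""
    if local_rt_ms > FIRST_THRESHOLD_MS:
        return False          # first rule: local response time too long
    if local_rt_ms - remote_rt_ms > SECOND_THRESHOLD_MS:
        return False          # second rule: local lags remote by too much
    return True

print(prefer_local(8.0, 30.0))    # True: local is fast enough
print(prefer_local(150.0, 30.0))  # False: exceeds the first threshold
```

In practice the response times fed to such a predicate would come from the call records or probe responses collected earlier.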
The traffic scheduling method provided by this application preferentially invokes the alternative micro-service instance whose operating environment tag matches that of the environment in which the current micro-service instance runs, i.e., the micro-service instance at the same network location as the current one. This reduces the probability that the current micro-service instance invokes an alternative at a different network location, shortens the micro-service invocation chain, improves the stability of micro-service invocation, and lowers its network latency, thereby improving the reliability and efficiency with which network services are processed through micro-service invocation.
In this embodiment of the application, the communication device may be divided into functional modules according to the above method examples: for example, each function may map to one functional module, or two or more functions may be integrated into one processing module. The integrated module can be implemented in hardware or as a software functional module. It should be noted that the division into modules in this embodiment is schematic and merely one kind of logical function division; other divisions are possible in actual implementations.
Fig. 6 shows a schematic diagram of a possible structure of a communication device capable of performing the traffic scheduling method. As shown in fig. 6, the communication apparatus 600 includes: an acquisition module 601 and a scheduling module 602.
The obtaining module 601 is configured to obtain running environment tags of running environments where the K candidate microservice instances are located.
A scheduling module 602 is configured to invoke the first target micro-service instance. The operating environment tag indicates the network locations of the K candidate micro-service instances, and K is a positive integer; the first target micro-service instance is the alternative micro-service instance, among the K, whose operating environment tag is the same as that of the environment in which the current micro-service instance runs.
The communication device 600 may further include a storage module 603. The storage module 603 is configured to store related instructions and data.
Illustratively, the first target microservice instance may include one of: in the K candidate micro-service instances, the candidate micro-service instance is positioned in the same virtual machine VM/physical machine PM as the current micro-service instance; in the K candidate micro-service instances, the candidate micro-service instance and the current micro-service instance are positioned in the same data center DC; in the K candidate micro-service instances, the candidate micro-service instance which is positioned in the same local network as the current micro-service instance; in the K alternative micro-service instances, the alternative micro-service instances which are positioned in the same Virtual Private Cloud (VPC) as the current micro-service instance; and in the K candidate micro-service instances, the candidate micro-service instance and the current micro-service instance are positioned in the same cloud computing system.
In one possible design, the scheduling module 602 is further configured to invoke a second target microservice instance if the first target microservice instance does not exist. Wherein the second target microservice instance is: and determining the alternative microservice instances according to the comparison result of the calling priorities of the K alternative microservice instances.
Optionally, the scheduling module 602 is further configured to determine, according to a preset priority determination rule, the invocation priority of the K candidate microservice instances.
Wherein the priority determination rule includes at least one of:
the scheduling priority of the alternative micro-service instances which are positioned in the same virtual machine VM/physical machine PM with the current micro-service instance is higher than that of the alternative micro-service instances which are not positioned in the same VM/PM with the current micro-service instance;
the scheduling priority of the alternative micro-service instance which is positioned in the same DC with the current micro-service instance is higher than that of the alternative micro-service instance which is not positioned in the same DC with the current micro-service instance;
the scheduling priority of the alternative micro-service instances which are positioned in the same local network with the current micro-service instance is higher than that of the alternative micro-service instances which are not positioned in the same local network with the current micro-service instance;
the scheduling priority of the alternative micro-service instance which is positioned in the same private cloud VPC with the current micro-service instance is higher than that of the alternative micro-service instance which is not positioned in the same VPC with the current micro-service instance;
the scheduling priority of an alternative microservice instance that is located on the same cloud computing system as the current microservice instance is higher than the scheduling priority of an alternative microservice instance that is not located on the same cloud computing system as the current microservice instance.
In one possible design, as shown in fig. 7 in conjunction with fig. 6, the communication device 600 may also include a configuration module 604.
The configuration module 604 is configured to configure the running environment tags for the running environments where the N preset micro-service instances are located before the obtaining module 601 obtains the running environment tags of the running environments where the K candidate micro-service instances are located.
Illustratively, the environments in which the N preset micro-service instances run include at least one of the following: a physical machine (PM), a virtual machine (VM), a data center (DC), a local network, a private cloud, or a cloud computing system in which the N preset micro-service instances are located.
The obtaining module 601 is further configured to obtain an operation environment flag of an operation environment where the N preset micro-service instances are located according to the call record of the N preset micro-service instances.
The scheduling module 602 is further configured to determine, as K candidate microservice instances, K preset microservice instances that support a function to be called by the current microservice instance, from among the N preset microservice instances.
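Illustratively, the cooperation of the configuration module 604, the obtaining module 601, and the scheduling module 602 can be sketched as one class (a minimal sketch mirroring the module names above; the method signatures and logic are assumptions for illustration):

```python
class CommunicationDevice:
    """Composes the functional modules of FIG. 6/FIG. 7."""
    def __init__(self):
        self.env_tags = {}                       # tags held by the device

    def configure(self, instance_id, tag):
        # Configuration module 604: assign env tags before scheduling.
        self.env_tags[instance_id] = tag

    def obtain(self, candidates):
        # Obtaining module 601: look up the tags of the K candidates.
        return {c: self.env_tags.get(c) for c in candidates}

    def schedule(self, current_tag, candidates):
        # Scheduling module 602: invoke the candidate sharing the current
        # instance's env tag, or report that no first target exists.
        tags = self.obtain(candidates)
        same = [c for c, t in tags.items() if t == current_tag]
        return same[0] if same else None

dev = CommunicationDevice()
dev.configure("MS2-E1", "DC1")
dev.configure("MS2-E2", "DC2")
print(dev.schedule("DC2", ["MS2-E1", "MS2-E2"]))  # MS2-E2
```

A `None` result here would hand control to the second-target logic of the scheduling module (S501).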
Fig. 8 shows a schematic diagram of a possible structure of a communication device capable of performing the traffic scheduling method. As shown in fig. 8, the communication apparatus 800 includes: a processor 801, a transceiver 802, and a memory 803. Memory 803 stores one or more programs, including computer-executable instructions, among others.
A processor 801 configured to execute a computer program stored in the memory 803, so as to enable the communication apparatus 800 to execute the traffic scheduling method according to the embodiment of the present application.
The communication device 800 also includes a bus 804.
The memory 803 may be a memory in the communication apparatus 800, and may include a volatile memory, such as a random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; the memory may also comprise a combination of memories of the kind described above.
The processor 801 may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. It may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor 801 may also be a combination of computing elements, e.g., one or more microprocessors, or a DSP together with a microprocessor.
The bus 804 may be an Extended Industry Standard Architecture (EISA) bus or the like. The bus 804 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
The application provides a cloud computing system. The cloud computing system comprises one or more clients and one or more communication devices, so as to execute the traffic scheduling method provided by the embodiment of the application. For the communication device, reference may be made to the above description of the method embodiment and the device embodiment, which are not described herein again.
The present application provides a readable storage medium storing a program or instructions. When the above-mentioned program or instructions are run on a computer, the computer performs the traffic scheduling method as described in the above-mentioned method embodiments.
A computer program product containing a program or instructions is also provided. When the program or instructions are run on a computer, the computer executes the traffic scheduling method according to the method embodiments.
It should be understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), and the processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of this application can be volatile memory, nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory can be random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SynchLink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above embodiments may be realized in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions according to the embodiments of this application are wholly or partially produced when the computer instructions or program are loaded or executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects, but may also indicate an "and/or" relationship, as can be understood from the surrounding text.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "At least one of the following" or similar expressions refers to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, or the part thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A traffic scheduling method, comprising:
acquiring running environment tags of the running environments in which K candidate microservice instances are located; wherein the running environment tags indicate the network locations of the K candidate microservice instances, and K is a positive integer;
invoking a first target microservice instance; wherein the first target microservice instance is a candidate microservice instance, among the K candidate microservice instances, whose running environment tag is the same as the running environment tag of the running environment in which the current microservice instance is located, and the first target microservice instance comprises one of: among the K candidate microservice instances, a candidate microservice instance located in the same virtual machine (VM)/physical machine (PM) as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same data center (DC) as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same local network as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same virtual private cloud (VPC) as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same cloud computing system as the current microservice instance;
if the first target microservice instance does not exist, invoking a second target microservice instance; wherein the second target microservice instance is a candidate microservice instance determined according to a comparison of the invocation priorities of the K candidate microservice instances.
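Outside the claim language, the two-stage selection of claim 1 can be illustrated with a minimal sketch. All names (`Instance`, `schedule`, the tag strings) are hypothetical, chosen for illustration only, and the priority values are assumed to be precomputed:

```python
# Illustrative sketch of claim 1: prefer a candidate whose running
# environment tag matches the current instance's tag (first target);
# otherwise fall back to the highest-priority candidate (second target).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Instance:
    name: str
    env_tag: str   # running environment tag (network location)
    priority: int  # invocation priority (higher = preferred)


def schedule(current: Instance, candidates: List[Instance]) -> Optional[Instance]:
    """Return the first target if one exists, else the second target."""
    same_env = [c for c in candidates if c.env_tag == current.env_tag]
    if same_env:
        return same_env[0]  # first target microservice instance
    if candidates:
        # second target: decided by comparing invocation priorities
        return max(candidates, key=lambda c: c.priority)
    return None
```

For example, with a current instance tagged `vm-1`, a candidate also tagged `vm-1` is invoked even if a remote candidate carries a higher priority; the priority comparison only applies when no tag matches.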
2. The traffic scheduling method according to claim 1, wherein the invocation priorities of the K candidate microservice instances are determined according to a preset priority determination rule; wherein the priority determination rule comprises at least one of:
a candidate microservice instance located in the same virtual machine (VM)/physical machine (PM) as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same VM/PM as the current microservice instance;
a candidate microservice instance located in the same data center (DC) as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same DC as the current microservice instance;
a candidate microservice instance located in the same local network as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same local network as the current microservice instance;
a candidate microservice instance located in the same virtual private cloud (VPC) as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same VPC as the current microservice instance;
a candidate microservice instance located in the same cloud computing system as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same cloud computing system as the current microservice instance.
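One way to realize the rules of claim 2 together is a lexicographic sort key over the shared scopes, from narrowest (VM/PM) to broadest (cloud computing system). The sketch below is hypothetical: the dictionary keys and scope names are illustrative, not defined by the patent:

```python
# Hypothetical composite of the priority rules of claim 2: a candidate
# sharing a narrower scope with the current instance outranks one that
# only shares a broader scope. True > False in Python tuple comparison,
# so comparing keys lexicographically applies the rules in order.
SCOPES = ("vm_pm", "dc", "local_net", "vpc", "cloud")


def priority_key(current_env: dict, cand_env: dict) -> tuple:
    """Sort key: one boolean per scope, narrowest scope first."""
    return tuple(current_env.get(s) == cand_env.get(s) for s in SCOPES)
```

A scheduler could then pick `max(candidates, key=lambda c: priority_key(cur_env, c))`; a candidate in the same VM/PM always beats one merely in the same DC, and so on down the list.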
3. The traffic scheduling method according to any one of claims 1 to 2, wherein before the acquiring of the running environment tags of the running environments in which the K candidate microservice instances are located, the traffic scheduling method further comprises:
configuring running environment tags for the running environments in which N preset microservice instances are located; wherein the running environments in which the N preset microservice instances are located comprise at least one of: the physical machine (PM), the virtual machine (VM), the data center (DC), the local network, the virtual private cloud, and the cloud computing system in which the N preset microservice instances are located;
the acquiring of the running environment tags of the running environments in which the K candidate microservice instances are located comprises:
acquiring the running environment tags of the running environments in which the N preset microservice instances are located according to the call records of the N preset microservice instances;
and determining, among the N preset microservice instances, K preset microservice instances that support the function to be called by the current microservice instance as the K candidate microservice instances.
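The preparation step of claim 3 — tagging N preset instances from their call records, then narrowing them to the K candidates that support the required function — can be sketched as follows. The record format, function names, and helper names are all assumptions for illustration:

```python
# Hypothetical sketch of claim 3's preparation: attach a running
# environment tag to each of the N preset instances (looked up from
# call records), then keep only those supporting the function the
# current instance needs to call; the survivors are the K candidates.
from typing import Dict, List


def tag_environments(preset: List[dict], call_records: Dict[str, str]) -> None:
    """Configure a running environment tag on each preset instance,
    derived here from a simple name -> tag call-record mapping."""
    for inst in preset:
        inst["env_tag"] = call_records.get(inst["name"], "unknown")


def build_candidates(preset: List[dict], required_function: str) -> List[dict]:
    """Filter the N preset instances down to the K candidates that
    support the function to be called."""
    return [p for p in preset if required_function in p["functions"]]
```

With tags in place, the scheduler of claim 1 can compare each candidate's `env_tag` against the current instance's tag without probing the network at call time.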
4. A communication apparatus, comprising: an acquisition module and a scheduling module; wherein:
the acquisition module is configured to acquire running environment tags of the running environments in which K candidate microservice instances are located; wherein the running environment tags indicate the network locations of the K candidate microservice instances, and K is a positive integer;
the scheduling module is configured to invoke a first target microservice instance; wherein the first target microservice instance is a candidate microservice instance, among the K candidate microservice instances, whose running environment tag is the same as the running environment tag of the running environment in which the current microservice instance is located, and the first target microservice instance comprises one of: among the K candidate microservice instances, a candidate microservice instance located in the same virtual machine (VM)/physical machine (PM) as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same data center (DC) as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same local network as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same virtual private cloud (VPC) as the current microservice instance; among the K candidate microservice instances, a candidate microservice instance located in the same cloud computing system as the current microservice instance;
the scheduling module is further configured to invoke a second target microservice instance if the first target microservice instance does not exist; wherein the second target microservice instance is a candidate microservice instance determined according to a comparison of the invocation priorities of the K candidate microservice instances.
5. The communication apparatus according to claim 4, wherein
the scheduling module is further configured to determine the invocation priorities of the K candidate microservice instances according to a preset priority determination rule; wherein the priority determination rule comprises at least one of:
a candidate microservice instance located in the same virtual machine (VM)/physical machine (PM) as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same VM/PM as the current microservice instance;
a candidate microservice instance located in the same data center (DC) as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same DC as the current microservice instance;
a candidate microservice instance located in the same local network as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same local network as the current microservice instance;
a candidate microservice instance located in the same virtual private cloud (VPC) as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same VPC as the current microservice instance;
a candidate microservice instance located in the same cloud computing system as the current microservice instance has a higher scheduling priority than a candidate microservice instance not located in the same cloud computing system as the current microservice instance.
6. The communication apparatus according to any one of claims 4 to 5, wherein the communication apparatus further comprises a configuration module; wherein:
the configuration module is configured to configure running environment tags for the running environments in which N preset microservice instances are located before the acquisition module acquires the running environment tags of the running environments in which the K candidate microservice instances are located; wherein the running environments in which the N preset microservice instances are located comprise at least one of: the physical machine (PM), the virtual machine (VM), the data center (DC), the local network, the virtual private cloud, and the cloud computing system in which the N preset microservice instances are located;
the acquisition module is further configured to acquire the running environment tags of the running environments in which the N preset microservice instances are located according to the call records of the N preset microservice instances;
the scheduling module is further configured to determine, among the N preset microservice instances, K preset microservice instances that support the function to be called by the current microservice instance as the K candidate microservice instances.
7. A communication apparatus, comprising: a processor, a transceiver, and a memory; wherein the memory stores one or more programs comprising computer-executable instructions that, when executed by the processor, cause the communication apparatus to perform the traffic scheduling method according to any one of claims 1 to 3.
8. A computer-readable storage medium storing a program or instructions which, when run on a computer, cause the computer to perform the traffic scheduling method according to any one of claims 1 to 3.
CN201811242554.7A 2018-10-24 2018-10-24 Traffic scheduling method and communication device Active CN109688191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811242554.7A CN109688191B (en) 2018-10-24 2018-10-24 Traffic scheduling method and communication device


Publications (2)

Publication Number Publication Date
CN109688191A CN109688191A (en) 2019-04-26
CN109688191B true CN109688191B (en) 2021-02-12

Family

ID=66185220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811242554.7A Active CN109688191B (en) 2018-10-24 2018-10-24 Traffic scheduling method and communication device

Country Status (1)

Country Link
CN (1) CN109688191B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110620727B (en) * 2019-09-09 2022-02-22 平安科技(深圳)有限公司 Gateway automatic routing method and related equipment in multi-environment
CN110995829B (en) * 2019-11-29 2022-07-22 广州市百果园信息技术有限公司 Instance calling method and device and computer storage medium
CN113395310A (en) * 2020-03-12 2021-09-14 华为技术有限公司 Micro-service calling method, device, equipment and medium
CN111770176B (en) * 2020-06-29 2023-04-07 北京百度网讯科技有限公司 Traffic scheduling method and device
CN112202929B (en) * 2020-12-01 2021-03-26 湖南新云网科技有限公司 Service access method, device and equipment in micro-service architecture
CN113037812A (en) * 2021-02-25 2021-06-25 中国工商银行股份有限公司 Data packet scheduling method and device, electronic equipment, medium and intelligent network card

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105187531A (en) * 2015-09-09 2015-12-23 上海赛为信息技术有限公司 Cloud computing virtualized server cluster load balancing system and method
CN106612188A (en) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 Method and device for extending software function based on micro service architecture
CN107733726A (en) * 2017-11-29 2018-02-23 新华三云计算技术有限公司 A kind of processing method and processing device of service request

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US7881261B2 (en) * 2002-09-26 2011-02-01 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for efficient dormant handoff of mobile stations having multiple packet data service instances
US9391884B2 (en) * 2014-01-31 2016-07-12 Google Inc. Consistent hashing using exact matching with application to hardware load balancing
CN104270418B (en) * 2014-09-15 2017-09-15 中国人民解放军理工大学 Users ' Need-oriented Deadline cloud agency's reservation distribution method
US20170039636A1 (en) * 2015-08-07 2017-02-09 GetBid, Inc. Procuring spatially feasible on-demand household-related goods or services

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN105187531A (en) * 2015-09-09 2015-12-23 上海赛为信息技术有限公司 Cloud computing virtualized server cluster load balancing system and method
CN106612188A (en) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 Method and device for extending software function based on micro service architecture
CN107733726A (en) * 2017-11-29 2018-02-23 新华三云计算技术有限公司 A kind of processing method and processing device of service request

Non-Patent Citations (2)

Title
"Design and Implementation of a Distributed Service Governance Framework"; Zhang Yu; China Master's Theses Full-text Database (Information Science and Technology); 20170115; pp. I138-280 *
"Research on Several Key Issues of Microservices"; Deng Jiewen et al.; Journal of Wuyi University (Natural Science Edition); 20160515; vol. 30, no. 2; full text *

Also Published As

Publication number Publication date
CN109688191A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN109688191B (en) Traffic scheduling method and communication device
US10437629B2 (en) Pre-triggers for code execution environments
CN109729143B (en) Deploying a network-based cloud platform on a terminal device
CN114930295B (en) Serverless call allocation method and system utilizing reserved capacity without inhibiting scaling
CN106537338B (en) Self-expanding clouds
CN113243005A (en) Performance-based hardware emulation in on-demand network code execution systems
EP3355187A1 (en) Loading method and device for terminal application (app)
US10033816B2 (en) Workflow service using state transfer
EP3839726B1 (en) Software modification initiation method and apparatus
CN106371889B (en) Method and device for realizing high-performance cluster system of scheduling mirror image
US11106492B2 (en) Workflow service for a cloud foundry platform
CN110661647A (en) Life cycle management method and device
CN107005435B (en) Network service descriptor shelving method and device
US10055393B2 (en) Distributed version control of orchestration templates
US20240111549A1 (en) Method and apparatus for constructing android running environment
CN110659104B (en) Service monitoring method and related equipment
CN114168179A (en) Micro-service management method, device, computer equipment and storage medium
CN113986539A (en) Method, device, electronic equipment and readable storage medium for realizing pod fixed IP
CN113760543A (en) Resource management method and device, electronic equipment and computer readable storage medium
CN113032125A (en) Job scheduling method, device, computer system and computer-readable storage medium
US20220229689A1 (en) Virtualization platform control device, virtualization platform control method, and virtualization platform control program
CN114979286A (en) Access control method, device and equipment for container service and computer storage medium
CN114662102A (en) File processing method and device and storage medium
CN111435320B (en) Data processing method and device
CN110347473B (en) Method and device for distributing virtual machines of virtualized network elements distributed across data centers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220216

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20221216

Address after: 518129 Huawei Headquarters Office Building 101, Wankecheng Community, Bantian Street, Longgang District, Shenzhen, Guangdong

Patentee after: Shenzhen Huawei Cloud Computing Technology Co.,Ltd.

Address before: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee before: Huawei Cloud Computing Technology Co.,Ltd.