Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may also include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
In the examples of the present application, a subscripted form such as W₁ may sometimes be written in a non-subscripted form such as W1; the intended meaning is the same when the distinction is not emphasized.
The network architecture and the service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of those embodiments more clearly, and do not limit the technical solutions provided herein. A person of ordinary skill in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
The technical solutions of the embodiments of the present application may be applied to various virtualized traffic scheduling systems that include a plurality of independently deployed micro-services on a distributed physical architecture, such as the cloud computing system shown in fig. 2A.
As shown in fig. 2A, the cloud computing system includes an LB, DC1, and DC2.
The DC1 comprises 4 VMs/PMs, on which a configuration instance (CE), a management instance (ME), and micro-service instances MS1-E1 and MS2-E1 are configured from top to bottom; the DC2 comprises micro-service instances MS1-E2 and MS2-E2. In addition, each VM/PM configured with a micro-service instance is also configured with a corresponding scheduling instance, which determines the next micro-service instance to be called by that micro-service instance. As shown in fig. 2A, the VM/PM configured with MS1-E2 in DC2 is also configured with MS1-S2, and MS1-S2 determines that the next micro-service instance called by MS1-E2 is MS2-E2. Since the micro-service instances are in one-to-one correspondence with the scheduling instances, the scheduling operation is initiated by the micro-service instance itself, i.e., the scheduling operation can be regarded as being completed by the micro-service instance itself. This scheduling method, in which the micro-service instance itself invokes the next micro-service instance, is called client scheduling. Since a micro-service instance can be regarded as a process, client scheduling may also be referred to as in-process scheduling.
A runtime environment may be configured with only one microservice instance and one corresponding schedule instance, i.e. the runtime environment may be configured with only one microservice instance-schedule instance pair. Of course, a plurality of microservice instances and a plurality of scheduling instances corresponding to the plurality of microservice instances one to one may also be configured on one operating environment, that is, a plurality of microservice instance-scheduling instance pairs may exist on the same operating environment. This is not limited in this application.
It should be noted that, in the cloud computing system shown in fig. 2A, each micro service instance is configured with a corresponding scheduling instance. It should be understood that, in order to save scheduling resources, one scheduling instance may also provide scheduling services for multiple micro-service instances, i.e., multiple micro-service instances may share the same scheduling instance. That is, in addition to the above-described client scheduling, there are two scheduling methods as follows.
Proxy scheduling
Specifically, the scheduling operations of multiple micro-service instances running in the same operating environment, such as a VM/PM, are all performed by an independent proxy process, which determines the next micro-service instance to be invoked by each of those micro-service instances. Since the scheduling operation is performed by an independent proxy process in the same operating environment, proxy scheduling may also be referred to as independent process scheduling.
External LB scheduling
Specifically, the scheduling operations of multiple microservice instances running in multiple execution environments may be performed by a scheduling instance external to the execution environments.
For example, assuming that the plurality of operating environments are a plurality of VM/PMs, the external scheduling instance may be a scheduling instance running on other VM/PMs than the plurality of VM/PMs. For another example, assuming that the plurality of operating environments are a plurality of DCs, the external scheduling instance may be a scheduling instance operating in a network element or a subsystem other than the plurality of DCs, such as a scheduling instance in another DC, or a scheduling instance in another network element or subsystem in communication connection with the plurality of DCs.
It should be noted that the three scheduling manners of the micro service instance are defined according to whether the scheduling instance and one or more micro service instances served by the scheduling instance are located in the same operating environment. The operating environment may be defined as a certain network hierarchy of the cloud computing system. For example, assuming that the cloud computing system includes multiple DCs, each DC including multiple VMs/PMs, the operating environment may be defined as a DC or as a VM/PM. This is not limited in this application.
The configuration instance (CE) is used for receiving configuration instructions and deployment instructions input by a system administrator. The configuration instructions are used for configuring running environment marks for network elements and subsystems in the cloud computing system, such as DCs and VMs/PMs. The running environment mark indicates the network location of a network element or subsystem, and may be the name, identifier, or network address of the network element or subsystem, or a combination thereof. For example, as shown in fig. 2A, the running environment mark of the VM/PM configured with MS1-E1 may be {the identifier of that VM/PM itself}, or {the identifier of DC1, the identifier of that VM/PM itself}.
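As an illustrative sketch (the class, field, and identifier names below are invented for illustration and do not appear in the application), a running environment mark of this kind can be modeled as an ordered sequence of identifiers from the outermost network level inward:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EnvironmentMark:
    # Ordered identifiers from the outermost network level inward,
    # e.g. (DC identifier, VM/PM identifier).
    levels: tuple

    def shares_level(self, other: "EnvironmentMark", depth: int) -> bool:
        # True if both marks agree on the first `depth` hierarchy levels.
        return self.levels[:depth] == other.levels[:depth]


# Two valid marks for the VM/PM configured with MS1-E1 (cf. fig. 2A):
short_mark = EnvironmentMark(levels=("VM-1",))
full_mark = EnvironmentMark(levels=("DC1", "VM-1"))
```

Either form identifies the same VM/PM; the longer form additionally records which DC it belongs to, which is what makes same-DC comparisons possible later in the scheduling flow.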
It should be noted that the configuration instruction may be a configuration command input by a system administrator, or may be a preset start instruction of an executable configuration script or a configuration program. This is not limited in this application.
The deployment instruction is used for instructing the ME to acquire the running environment marks of the running environments where the micro-service instances in the cloud computing system are located. Illustratively, these running environment marks can be obtained in any of the following ways.
Configuration analysis method
The running environment marks of the running environments to be configured in configuration commands, configuration files, and configuration programs correspond to the subsystems and network elements of the cloud computing system. Therefore, in the embodiments of the present application, the running environment mark of the running environment where a micro-service instance is located may be obtained by analyzing the configuration commands, configuration files, and configuration programs.
Active collection mode
Specifically, the method may include the following steps:
Step 1: The management instance actively sends a mark probe request to scheduling instances, for example in a broadcast manner. These may be some of the scheduling instances managed by the management instance, or all of them.
Step 2: The scheduling instance receives a probe response returned by a micro-service instance, the response carrying the running environment mark of the running environment where the micro-service instance invoked by the scheduling instance is located.
Step 3: The management instance records and stores the correspondence between the invoked micro-service instance and the running environment mark of the running environment where that micro-service instance is located.
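The three steps above can be sketched as follows. This is a simplified, hypothetical model: the class and method names are invented, direct method calls stand in for network messages, and the probe response is returned by the scheduling instance on behalf of the micro-service instances it serves:

```python
class SchedulingInstance:
    """Holds, for each micro-service instance it serves, the running
    environment mark of that instance's running environment."""

    def __init__(self, served_instances):
        self.served_instances = served_instances  # {instance_id: env_mark}

    def handle_probe(self):
        # Step 2 (simplified): return a probe response carrying the marks
        # of the micro-service instances invoked through this scheduler.
        return dict(self.served_instances)


class ManagementInstance:
    def __init__(self):
        self.mark_table = {}  # instance_id -> running environment mark

    def collect_marks(self, scheduling_instances):
        # Step 1: actively send a mark probe request (here, to some or
        # all of the scheduling instances this manager is responsible for).
        for sched in scheduling_instances:
            response = sched.handle_probe()
            # Step 3: record the instance -> environment mark correspondence.
            self.mark_table.update(response)


me = ManagementInstance()
me.collect_marks([
    SchedulingInstance({"MS1-E2": ("DC2", "VM-3")}),
    SchedulingInstance({"MS2-E2": ("DC2", "VM-4")}),
])
```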
It should be noted that the deployment instruction may be a mark collection command input by a system administrator, or may be a preset start instruction of an executable deployment script or a deployment program. In practical application, the mark acquisition may be started periodically, or may be triggered according to a preset starting condition. This is not limited in this application.
Calling record reporting mode
The management instance is in communication connection with the scheduling instances of all the micro-service instances it serves. When one micro-service instance calls another micro-service instance, the scheduling instance may report a call record to the management instance. The call record comprises the running environment mark of the running environment where the current micro-service instance is located and the running environment mark of the running environment where the next micro-service instance called for the current micro-service instance is located. The management instance compiles the running environment marks of the running environments where the micro-service instances are located from the reported call records. The call records may be reported in real time, periodically, or after a cloud service request has been processed; this is not limited in the present application.
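One possible shape for such a call record, and for the statistics the management instance derives from it, with field and function names invented for illustration:

```python
from collections import namedtuple

# A call record pairs the current instance's environment mark with the mark
# of the next micro-service instance scheduled for it (fields illustrative).
CallRecord = namedtuple(
    "CallRecord", ["caller_id", "caller_mark", "callee_id", "callee_mark"])


def compile_marks(records):
    # The management instance derives its instance -> environment mark
    # table from the call records reported by the scheduling instances.
    marks = {}
    for r in records:
        marks[r.caller_id] = r.caller_mark
        marks[r.callee_id] = r.callee_mark
    return marks


table = compile_marks([CallRecord("MS1-E2", ("DC2",), "MS2-E2", ("DC2",))])
```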
For example, the management instance may also acquire the call record in a listening manner, which is not described herein again.
The LB is used for receiving a cloud service request initiated by a client application program and calling the first micro-service instance required for processing the cloud service request. Illustratively, as shown in fig. 2A, the LB schedules micro-service instance MS1-E2 in DC2 for the cloud service request. Thereafter, MS1-E2 requests the scheduling instance MS1-S2 to schedule the next micro-service instance, e.g., MS2-E2, and so on, until the cloud service request has been processed and a cloud service response is sent to the client application.
It should be understood that fig. 2A is a simplified schematic diagram that is merely exemplary for ease of understanding, and that other network elements or subsystems, which are not depicted in fig. 2A, may also be included in the cloud computing system.
The cloud computing system provided by the application is explained in a specific example. Fig. 2B illustrates another schematic structural diagram of a cloud computing system provided herein.
As shown in fig. 2B, the cloud computing system includes an infrastructure as a service (IaaS) layer, a platform as a service (PaaS) layer, and a software as a service (SaaS) layer, as well as 2 data centers (DCs).
The IaaS is a physical resource management platform in the cloud computing system, and can provide virtualized resources, such as running environments and storage resources of virtual machines and/or physical machines, for the upper PaaS platform.
The PaaS is a middleware platform that can provide a development environment for constructing application programs, such as a customized software development platform. One core function of the PaaS platform is lifecycle management of applications, including deployment, upgrade, and uninstallation, which may be provided by a deployment management unit. The deployment management unit cooperates with deployment agents in each operating environment: it uniformly issues application management commands, and the deployment agents execute the specific actions in their operating environments according to those commands.
The SaaS is the service network of the cloud computing system; it exposes the application services deployed therein to the Internet and mainly includes network devices such as the LB.
Software and hardware resources such as virtual machines or physical machines managed by the IaaS can be deployed in different DCs, thereby achieving high availability across multiple DCs.
In an embodiment of the present application, the management flow in fig. 2B may include the deployment instruction and the configuration instruction in fig. 2A, and the traffic flow in fig. 2B includes the cloud service request in fig. 2A.
In the micro-service architecture, the main management unit is a micro-service registry, and each micro-service needs to be registered in it. A micro-service client (which may of course be another micro-service) then looks up an available micro-service instance address in the micro-service registry and sends a service request to the micro-service instance according to that address.
In practical application, a micro-service instance usually has a plurality of replicas, and the client needs to perform routing selection to find a suitable micro-service instance. Specifically, a traffic scheduling unit may be responsible for this routing. Traffic scheduling units can be divided into three types: centralized, in-process, and independent-process.
It should be noted that the names of the unit modules in the cloud computing system do not limit the devices themselves; in an actual implementation, the unit modules may appear under other names. As long as the function of each unit module is similar to that in the embodiments of the present application, it falls within the scope of the technical solutions provided by the present application.
In addition, fig. 2A and fig. 2B only illustrate one possible division of the unit modules of the cloud computing system. In practical applications, there may be other divisions, such as splitting one unit module in fig. 2A or fig. 2B into a plurality of unit modules, or combining a plurality of unit modules in fig. 2A or fig. 2B into one. For example, the management instance in fig. 2A may include the deployment management unit and the deployment flag unit in fig. 2B. As another example, the scheduling instance in fig. 2A may include the traffic scheduling unit and the flag scheduling unit in fig. 2B. As long as the functions performed after repartitioning the unit modules are similar to those in the embodiments of the present application, the result can be considered equivalent to the technical solutions provided in the present application.
Fig. 3 shows a flowchart of a traffic scheduling method according to an embodiment of the present application. As shown in fig. 3, the traffic scheduling method includes S301 to S302:
S301. Obtain the running environment marks of the running environments where K candidate micro-service instances are located.
The running environment mark is used for indicating the network positions of K candidate micro-service instances, and K is a positive integer.
It should be noted that the running environment marks of the running environments where all the micro-service instances in a cloud computing system are located are configured in advance by an administrator. Therefore, with reference to fig. 3, as shown in fig. 4, in a possible design, before step S301 of obtaining the running environment marks of the running environments where the K candidate micro-service instances are located is executed, the traffic scheduling method may further include step S401:
S401. Configure running environment marks for the running environments where N preset micro-service instances are located.
The running environments where the N preset micro-service instances are located include at least one of the following: a physical machine (PM), a virtual machine (VM), a data center (DC), a local network, a private cloud, and a cloud computing system.
For the configuration of the runtime environment flag, reference may be made to the above description of the configuration example and the configuration instruction, which is not described herein again.
Optionally, the step S301 of obtaining the running environment tags of the running environments where the K candidate microservice instances are located may include steps S402 to S403:
S402. Obtain the running environment marks of the running environments where the N preset micro-service instances are located according to the call records of the N preset micro-service instances.
Specifically, reference is made to the above description of the management example and the deployment instruction, which is not repeated herein.
S403. Determine, among the N preset micro-service instances, the K preset micro-service instances that support the function to be called by the current micro-service instance as the K alternative micro-service instances.
It should be noted that each of the K alternative micro-service instances can perform the task of the next micro-service to be invoked by the current micro-service instance.
For example, as shown in fig. 2A, assuming that the current micro-service instance is MS1-E2 in DC2 and the next micro-service to be invoked by MS1-E2 is MS2 and there are 2 micro-service instances MS2-E1 and MS2-E2 in total for MS2, then the micro-service instances MS2-E1 and MS2-E2 may be considered as the above K alternative micro-service instances, where K is equal to 2.
It is to be understood that the K alternative micro-service instances may be located at the same network location or at different network locations. That is to say, the running environment marks of the running environments where the K alternative micro-service instances are located may be the same or different; this is not limited in the present application.
For example, as shown in fig. 2A, the 2 micro-service instances of micro-service MS2, MS2-E1 and MS2-E2, are located in DC1 and DC2, respectively. For another example, as shown in fig. 1, micro-service MS2 has 4 micro-service instances in total, MS2-E1 to MS2-E4; MS2-E1 and MS2-E2 are located in DC1, and MS2-E3 and MS2-E4 are located in DC2.
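Step S403 amounts to a filter over the N preset instances. A minimal sketch, with dictionary keys and identifiers chosen for illustration:

```python
def select_candidates(preset_instances, required_service):
    # Keep only the instances that support the function (micro-service)
    # the current instance needs to invoke next.
    return [inst for inst in preset_instances
            if inst["service"] == required_service]


preset = [  # N = 3 preset instances with their running environment marks
    {"id": "MS2-E1", "service": "MS2", "mark": ("DC1",)},
    {"id": "MS2-E2", "service": "MS2", "mark": ("DC2",)},
    {"id": "MS1-E1", "service": "MS1", "mark": ("DC1",)},
]
candidates = select_candidates(preset, "MS2")  # K = 2, as in fig. 2A
```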
S302. Call a first target micro-service instance.
The first target micro-service instance is an alternative micro-service instance, among the K alternative micro-service instances, whose running environment mark of the running environment where it is located is the same as the running environment mark of the running environment where the current micro-service instance is located. That is, the first target micro-service instance is located at the same network location as the current micro-service instance.
Illustratively, the first target microservice instance may include one of:
among the K alternative micro-service instances, an alternative micro-service instance located in the same virtual machine (VM)/physical machine (PM) as the current micro-service instance;
among the K alternative micro-service instances, an alternative micro-service instance located in the same data center (DC) as the current micro-service instance;
among the K alternative micro-service instances, an alternative micro-service instance located in the same local network as the current micro-service instance;
among the K alternative micro-service instances, an alternative micro-service instance located in the same virtual private cloud (VPC) as the current micro-service instance;
or, among the K alternative micro-service instances, an alternative micro-service instance located in the same cloud computing system as the current micro-service instance.
Illustratively, as shown in fig. 2A, the current micro-service instance MS1-E2 needs to invoke micro-service MS2, and micro-service MS2 has 2 micro-service instances, MS2-E1 and MS2-E2, respectively located in DC1 and DC2. The scheduling instance MS1-S2 of the current micro-service instance MS1-E2 can preferentially schedule the micro-service instance MS2-E2 in DC2 rather than the micro-service instance MS2-E1 in DC1. This reduces the frequency of cross-DC micro-service instance calls, shortens the micro-service instance call chain, improves the reliability of micro-service instance calls, shortens response time, and thereby improves the processing efficiency of cloud service requests.
In practical applications, among the K alternative micro-service instances, there may be multiple alternative micro-service instances whose running environment marks are the same as the running environment mark of the running environment where the current micro-service instance is located. To further improve the processing efficiency of the cloud service request, the alternative micro-service instance with the shortest response time may be selected from these as the first target micro-service instance.
Exemplarily, as shown in fig. 1, the current micro-service instance is MS1-E3 in DC2, and it needs to invoke micro-service MS2; the K alternative micro-service instances are then the 4 instances MS2-E1 to MS2-E4, of which MS2-E3 and MS2-E4 are both located in DC2 together with the current micro-service instance MS1-E3. In this case, one of MS2-E3 and MS2-E4 may be determined as the first target micro-service instance based on additional conditions, for example whichever of the two has the shorter response time.
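The selection of the first target micro-service instance, including the response-time tie-break, can be sketched as follows; the response times are assumed to have been measured elsewhere, and all names are illustrative:

```python
def find_first_target(candidates, current_mark, response_time_ms):
    # Keep candidates whose environment mark matches the current instance's.
    co_located = [c for c in candidates if c["mark"] == current_mark]
    if not co_located:
        return None  # no first target; fall back to a second target (S501)
    # Several co-located candidates: prefer the shortest response time.
    return min(co_located, key=lambda c: response_time_ms[c["id"]])


candidates = [
    {"id": "MS2-E3", "mark": ("DC2",)},
    {"id": "MS2-E4", "mark": ("DC2",)},
    {"id": "MS2-E1", "mark": ("DC1",)},
]
target = find_first_target(candidates, ("DC2",),
                           {"MS2-E3": 12, "MS2-E4": 7, "MS2-E1": 3})
```

Note that MS2-E1 is never considered despite being the fastest overall: only co-located candidates compete on response time, matching the priority the method gives to network location.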
It is easy to understand that even if none of the K alternative micro-service instances is located at the same network location as the current micro-service instance, one still needs to be selected and called, for example the alternative micro-service instance closest to the current micro-service instance or with the shortest response time, so as to complete the processing of the cloud service request received by the cloud computing system. Therefore, in another possible design, with reference to fig. 3 or fig. 4, and taking fig. 3 as an example, as shown in fig. 5, the traffic scheduling method may further include S501:
S501. If the first target micro-service instance does not exist, call a second target micro-service instance.
The second target micro-service instance is an alternative micro-service instance determined according to a comparison of the calling priorities of the K alternative micro-service instances.
The absence of the first target micro-service instance means that the running environments where the K alternative micro-service instances are located all differ from the running environment where the current micro-service instance is located. In this case, the second target micro-service instance may be determined according to preset priorities. That is, the calling priorities of the K alternative micro-service instances may be determined according to a preset priority determination rule.
Optionally, the priority determination rule may include at least one of:
the scheduling priority of an alternative micro-service instance located in the same virtual machine (VM)/physical machine (PM) as the current micro-service instance is higher than that of an alternative micro-service instance not located in the same VM/PM as the current micro-service instance;
the scheduling priority of an alternative micro-service instance located in the same DC as the current micro-service instance is higher than that of an alternative micro-service instance not located in the same DC as the current micro-service instance;
the scheduling priority of an alternative micro-service instance located in the same local network as the current micro-service instance is higher than that of an alternative micro-service instance not located in the same local network as the current micro-service instance;
the scheduling priority of an alternative micro-service instance located in the same virtual private cloud (VPC) as the current micro-service instance is higher than that of an alternative micro-service instance not located in the same VPC as the current micro-service instance;
the scheduling priority of an alternative micro-service instance located in the same cloud computing system as the current micro-service instance is higher than that of an alternative micro-service instance not located in the same cloud computing system as the current micro-service instance.
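The rules above form a hierarchy: the more specific the shared running environment, the higher the calling priority. A sketch under the assumption that each environment mark lists its levels from coarse (cloud computing system) to fine (VM/PM); all identifiers are invented:

```python
def call_priority(candidate_mark, current_mark):
    """Number of leading hierarchy levels the candidate shares with the
    current instance (cloud system > VPC > local network > DC > VM/PM);
    a deeper match means a higher scheduling priority."""
    depth = 0
    for a, b in zip(candidate_mark, current_mark):
        if a != b:
            break
        depth += 1
    return depth


current = ("cloud-A", "vpc-1", "net-1", "DC2", "VM-3")
same_dc = ("cloud-A", "vpc-1", "net-1", "DC2", "VM-9")
same_vpc = ("cloud-A", "vpc-1", "net-2", "DC1", "VM-5")
# same_dc outranks same_vpc when choosing the second target instance
```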
It should be noted that the processing efficiency of an alternative micro-service instance located in the same running environment as the current micro-service instance is generally higher than that of an alternative micro-service instance not located in the same running environment. However, since the network environment, such as its load and network latency, changes dynamically, the opposite may sometimes hold. In such cases, the scheduling priorities of alternative micro-service instances located in the same running environment as the current micro-service instance and of those not located in the same running environment may be dynamically adjusted according to the changes in the network environment.
For example, when the response time of the alternative micro-service instance located in the same running environment as the current micro-service instance exceeds a preset duration, its scheduling priority is adjusted to be lower than that of an alternative micro-service instance not located in the same running environment as the current micro-service instance.
For another example, when the response time of the alternative micro-service instance located in the same running environment as the current micro-service instance exceeds the response time of an alternative micro-service instance not located in the same running environment by more than a second duration threshold, the scheduling priority of the former is adjusted to be lower than that of the latter.
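Both adjustment triggers can be sketched as a single predicate. The parameter names are invented, and the second trigger is read here as the co-located instance being slower than the remote one by more than the second duration threshold:

```python
def demote_co_located(co_located_rt_ms, remote_rt_ms,
                      preset_duration_ms, second_threshold_ms):
    # Trigger 1: the co-located candidate's response time exceeds a
    # preset duration.
    if co_located_rt_ms > preset_duration_ms:
        return True
    # Trigger 2: the co-located candidate is slower than the remote
    # candidate by more than the second duration threshold.
    return co_located_rt_ms - remote_rt_ms > second_threshold_ms
```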
The traffic scheduling method provided by the present application preferentially calls an alternative micro-service instance whose running environment mark is the same as the running environment mark of the running environment where the current micro-service instance is located, i.e., it preferentially calls a micro-service instance at the same network location as the current micro-service instance. This reduces the probability that the current micro-service instance calls an alternative micro-service instance at a different network location, shortens the micro-service instance call chain, improves the stability of micro-service instance calls, and reduces their network delay, thereby improving the reliability and efficiency with which network services are processed in this manner of calling micro-service instances.
In the embodiments of the present application, the terminal may be divided into functional modules according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and merely a logical functional division; there may be other division manners in actual implementations.
Fig. 6 shows a schematic diagram of a possible structure of a communication device capable of performing the traffic scheduling method. As shown in fig. 6, the communication apparatus 600 includes: an acquisition module 601 and a scheduling module 602.
The obtaining module 601 is configured to obtain running environment tags of running environments where the K candidate microservice instances are located.
A scheduling module 602, configured to invoke the first target micro-service instance. The running environment marks are used for indicating the network locations of the K candidate micro-service instances, and K is a positive integer; the first target micro-service instance is the alternative micro-service instance, among the K alternative micro-service instances, whose running environment mark of the running environment where it is located is the same as that of the current micro-service instance.
The communication device 600 may further include a storage module 603. The storage module 603 is configured to store related instructions and data.
Illustratively, the first target micro-service instance may include one of: among the K alternative micro-service instances, an alternative micro-service instance located in the same virtual machine (VM)/physical machine (PM) as the current micro-service instance; an alternative micro-service instance located in the same data center (DC) as the current micro-service instance; an alternative micro-service instance located in the same local network as the current micro-service instance; an alternative micro-service instance located in the same virtual private cloud (VPC) as the current micro-service instance; or an alternative micro-service instance located in the same cloud computing system as the current micro-service instance.
In one possible design, the scheduling module 602 is further configured to invoke a second target microservice instance if the first target microservice instance does not exist. The second target microservice instance is the candidate microservice instance determined according to a comparison of the invocation priorities of the K candidate microservice instances.
Optionally, the scheduling module 602 is further configured to determine the invocation priorities of the K candidate microservice instances according to a preset priority determination rule.
Wherein the priority determination rule includes at least one of:
the invocation priority of a candidate microservice instance located in the same virtual machine (VM) or physical machine (PM) as the current microservice instance is higher than that of a candidate microservice instance not located in the same VM/PM as the current microservice instance;
the invocation priority of a candidate microservice instance located in the same data center (DC) as the current microservice instance is higher than that of a candidate microservice instance not located in the same DC as the current microservice instance;
the invocation priority of a candidate microservice instance located in the same local network as the current microservice instance is higher than that of a candidate microservice instance not located in the same local network as the current microservice instance;
the invocation priority of a candidate microservice instance located in the same virtual private cloud (VPC) as the current microservice instance is higher than that of a candidate microservice instance not located in the same VPC as the current microservice instance;
the invocation priority of a candidate microservice instance located in the same cloud computing system as the current microservice instance is higher than that of a candidate microservice instance not located in the same cloud computing system as the current microservice instance.
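The ordered rules above can be sketched as a simple scoring function: each progressively wider shared environment earns a lower priority, so sharing a VM/PM outranks sharing only a DC, and so on. This is a minimal illustrative sketch, not the patent's implementation; the `EnvTag` fields, scores, and helper names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvTag:
    """Hypothetical running-environment tag (one field per locality level)."""
    vm_pm: str
    dc: str
    local_net: str
    vpc: str
    cloud: str

def call_priority(current: EnvTag, candidate: EnvTag) -> int:
    """Higher value = higher invocation priority. The checks mirror the
    rule order: same VM/PM first, then same DC, local network, VPC,
    and finally same cloud computing system."""
    if candidate.vm_pm == current.vm_pm:
        return 5
    if candidate.dc == current.dc:
        return 4
    if candidate.local_net == current.local_net:
        return 3
    if candidate.vpc == current.vpc:
        return 2
    if candidate.cloud == current.cloud:
        return 1
    return 0

def second_target(current: EnvTag, candidates: dict) -> str:
    """Pick the candidate with the highest invocation priority, used
    when no candidate shares the exact running environment tag."""
    return max(candidates, key=lambda name: call_priority(current, candidates[name]))
```

A candidate in the same data center but a different VM would thus be preferred over one reachable only through the same cloud computing system.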
In one possible design, as shown in fig. 7 in conjunction with fig. 6, the communication device 600 may also include a configuration module 604.
The configuration module 604 is configured to configure running environment tags for the running environments where N preset microservice instances are located, before the obtaining module 601 obtains the running environment tags of the running environments where the K candidate microservice instances are located.
Illustratively, the running environments where the N preset microservice instances are located include at least one of the following: a physical machine (PM), a virtual machine (VM), a data center (DC), a local network, a virtual private cloud (VPC), and a cloud computing system.
The obtaining module 601 is further configured to obtain the running environment tags of the running environments where the N preset microservice instances are located according to the call records of the N preset microservice instances.
The scheduling module 602 is further configured to determine, from the N preset microservice instances, the K preset microservice instances that support the function to be called by the current microservice instance as the K candidate microservice instances.
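The filtering step above can be sketched as follows. The registry layout (instance name mapped to its tag and supported functions) and the instance names are illustrative assumptions, not details from the patent.

```python
# Hypothetical registry of N preset instances:
# instance name -> (running environment tag, set of supported functions)
registry = {
    "inst-a": ({"dc": "dc1"}, {"pay", "query"}),
    "inst-b": ({"dc": "dc2"}, {"query"}),
    "inst-c": ({"dc": "dc1"}, {"pay"}),
}

def candidates_for(function: str, registry: dict) -> list:
    """From the N preset instances, keep the K candidates that support
    the function the current microservice instance needs to call."""
    return [name for name, (_tag, funcs) in registry.items() if function in funcs]
```

The resulting K candidates would then be compared by running environment tag, as described above, to pick the target instance.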
Fig. 8 shows a schematic diagram of a possible structure of a communication device capable of performing the traffic scheduling method. As shown in fig. 8, the communication device 800 includes: a processor 801, a transceiver 802, and a memory 803. The memory 803 stores one or more programs, which include computer-executable instructions.
The processor 801 is configured to execute the computer program stored in the memory 803, so as to enable the communication device 800 to perform the traffic scheduling method according to the embodiment of the present application.
The communication device 800 also includes a bus 804.
The memory 803 may be a memory in the communication device 800, and may include a volatile memory, such as a random access memory; the memory may also include a non-volatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid-state drive; the memory may also include a combination of the above kinds of memory.
The processor 801 may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein. The processor may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Further, the processor 801 may also be a combination implementing a computing function, for example, a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 804 may be an Extended Industry Standard Architecture (EISA) bus or the like. The bus 804 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
The application provides a cloud computing system. The cloud computing system comprises one or more clients and one or more communication devices, so as to execute the traffic scheduling method provided by the embodiment of the application. For the communication device, reference may be made to the above description of the method embodiment and the device embodiment, which are not described herein again.
The present application provides a readable storage medium storing a program or instructions. When the above-mentioned program or instructions are run on a computer, the computer performs the traffic scheduling method as described in the above-mentioned method embodiments.
The present application further provides a computer program product containing a program or instructions. When the program or instructions are run on a computer, the computer executes the traffic scheduling method according to the method embodiments above.
It should be understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), and the processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application can be either volatile memory or non-volatile memory, or can include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a network of computers, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more collections of available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. In addition, the "/" in this document generally indicates that the former and latter associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which may be understood with reference to the surrounding text.
In the present application, "at least one" means one or more, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.