CN114003387A - Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning - Google Patents

Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning

Info

Publication number
CN114003387A
CN114003387A
Authority
CN
China
Prior art keywords
micro
container
service
load
reward
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111297596.2A
Other languages
Chinese (zh)
Inventor
陈雷鸣
张卫山
王玉乾
董次浩
袁晓晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202111297596.2A priority Critical patent/CN114003387A/en
Publication of CN114003387A publication Critical patent/CN114003387A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a micro-service load balancing and elastic expansion and contraction method based on reinforcement learning, relating mainly to the fields of reinforcement learning, load scheduling of micro-service applications, and elastic scaling of containers. The method mainly comprises the following steps. First, a micro-service application environment is constructed, specifically comprising a micro-service traffic-simulation component, an index-monitoring component, an intelligent decision component, a load-regulation component, and a container-scaling component. The state of the micro-services is monitored through a monitoring plug-in to obtain indexes such as service response time, and a formalized representation of the various resources and load information is established. Then, for the scenarios of load adjustment and dynamic container scaling, the key elements of reinforcement learning (environment, agent, action space, and reward function) are designed. Interaction between the micro-service application and the traffic environment is realized by simulating the load environment and resource information, and adjustment of the micro-service load and resources is realized by the intelligent decision component, whose decision action space is stored in an intelligence base. Finally, the decision results in the intelligence base are applied to load adjustment in the actual environment, realizing dynamic adjustment of the micro-service load-adjustment strategy and elastic container scaling, so as to achieve the optimal performance and response time of the micro-service application.

Description

Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning
Technical Field
The invention relates to the field of micro-service application load balancing, the field of reinforcement learning and the field of container resource dynamic adjustment, in particular to a micro-service application load balancing and container expansion and contraction method based on reinforcement learning.
Background
With the rapid development of the mobile internet and the internet of things, the number of terminal devices of all kinds keeps growing, and internet and IoT service applications face ever-increasing access demands. Applications built on the traditional monolithic architecture can no longer meet these growing and diverse access needs. Owing to its flexibility, the micro-service architecture allows a whole application to be split into multiple micro-service applications according to various requirements. Each micro-service application can flexibly adjust its load and elastically scale its capacity, so that the traffic-access requirements of different scenarios can be met. Major internet enterprises have therefore adopted the more flexible, resource-scheduling-friendly micro-service architecture to build their applications, and how to realize automatic load balancing and elastic scaling of micro-service applications has become a hot topic of current research.
The main existing approaches to load balancing include: adjusting the overall average response time with a load-balancing algorithm oriented to message-queue chains; achieving better response time, throughput, and stability with an improved dominant-resource-fairness allocation algorithm; distributing load with an improved consistent-hashing algorithm; and reducing the overall response time of the system based on the average request delay across the micro-service dependency chain.
However, the existing methods can only adjust the load in a specific scenario and cannot realize load-balance adjustment and resource adjustment under the dynamic changes of complex scenarios. The invention therefore builds on reinforcement-learning theory: information interaction with the decision agent is realized by simulating various traffic scenarios, and by designing a reasonable reward function and constructing a load-balancing decision component and a dynamic container-scaling component, both load adjustment of the micro-service application and dynamic adjustment of container resources are achieved.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a micro-service application load balancing and container resource expansion method based on reinforcement learning, which is used for realizing automatic adjustment of micro-service application loads and dynamic expansion of resources in different application scenes.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
and (1) constructing a micro-service application component operating environment, wherein the micro-service application component operating environment comprises a load balancing component, a micro-service application component, a container capacity expansion component, a container resource pool, an index monitoring component and a flow simulation component.
And (2) respectively designing basic elements of reinforcement learning based on a reinforcement learning theory, wherein the basic elements comprise an environment space, an intelligent agent, an action space, a reward function and the like. A decision agent is then designed, the agent functions including: coordinating the flexible expansion of load and container resources. And finally, realizing the strategies of load regulation and container expansion.
And (3) after the basic environment and the related algorithm are successfully designed. And performing access flow pressure test on the micro-service application based on the test component, then acquiring indexes such as response time, throughput and the like by using the index monitoring component, and inputting the acquired indexes into the decision-making intelligent agent.
And (4) the intelligent agent randomly makes a decision and feeds the decision back to the load balancing component and the container expansion component. And the load balancing and container expansion component makes corresponding load adjustment and container expansion after receiving the decision of the intelligent agent to change the current operation state of the micro-service application and feed back the reward and punishment values to the decision intelligent agent. And then the micro service application enters index statistics of the next stage.
And (5) after repeating the steps (3) and (4) for a plurality of rounds, giving a corresponding reward and punishment value based on interaction of the intelligent decision-making body and the external environment until the decision-making body reaches a stable reward value. And finally, applying the trained intelligent decision library action to the actual micro-service application load balancing and capacity expansion requirements.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a general architecture diagram of the system of the present invention
FIG. 2 is a flow chart of the intelligent decision module of the present invention
FIG. 3 is a flow chart of the container resource scaling according to the present invention
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort shall fall within the protection scope of the present invention.
The system structure of the container load scheduling method based on reinforcement learning comprises five modules: the system comprises a flow simulation module, an index acquisition module, a load balancing decision module, a container resource scheduling module and a decision intelligent agent module.
The following describes in detail a specific process of the micro-service load balancing and container scaling method based on reinforcement learning with reference to fig. 1 and fig. 2:
S1, construct a micro-service application component operating environment, comprising a load-balancing component, a micro-service application component, a container-scaling component, a container resource pool, an index-monitoring component, a pressure-test component, and a traffic-simulation component.
S2, design the basic elements of reinforcement learning (environment space, agent, action space, reward function, and so on) based on reinforcement-learning theory. Then design a decision agent whose functions include coordinating load adjustment and the elastic scaling of container resources. Finally, realize the load-regulation and container-scaling strategies.
S3, after the basic environment and the related algorithms have been designed, run an access-traffic pressure test against the micro-service application using a test tool, collect indexes such as response time and throughput with the index-monitoring component, and input the collected indexes into the decision agent.
S4, the agent makes an (initially random) decision and feeds it back to the load-balancing and container-scaling components, which perform the corresponding load adjustment and container scaling, changing the current operating state of the micro-service application and returning a reward or punishment value to the decision agent. The micro-service application then enters the index statistics of the next stage.
S5, repeat steps S3 and S4 for a number of rounds, with the corresponding reward or punishment values given through the interaction of the decision agent with the external environment, until the agent reaches a stable reward value. Finally, apply the trained decision-library actions to the actual load-balancing and capacity-scaling requirements of the micro-service application.
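As a concrete illustration, the S1-S5 loop can be sketched as a minimal tabular Q-learning routine. The patent does not name a specific algorithm, so the algorithm choice and all names below (`ToyMicroserviceEnv`, `ACTIONS`, `train`, the `demand` parameter) are illustrative assumptions, not the invention's implementation:

```python
import random

# A minimal, hypothetical sketch of the S3-S5 interaction loop: the agent
# decides, the environment applies the decision, and a reward/punishment
# value flows back until the policy stabilizes.

ACTIONS = ["rebalance", "scale_out", "scale_in", "noop"]

class ToyMicroserviceEnv:
    """Crude stand-in for the traffic-simulation and index-monitoring components."""
    def __init__(self, demand=6):
        self.demand = demand    # container count needed to serve the simulated load
        self.instances = 2      # current container count

    def state(self):
        # Coarse observation: 1 under-provisioned, 0 matched, -1 over-provisioned.
        return (self.instances < self.demand) - (self.instances > self.demand)

    def step(self, action):
        if action == "scale_out":
            self.instances += 1
        elif action == "scale_in":
            self.instances = max(1, self.instances - 1)
        # Reward/punishment value: best when capacity matches demand (a crude
        # stand-in for the response-time reward of S23).
        reward = -abs(self.instances - self.demand)
        return self.state(), reward

def train(steps=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}                      # the "decision library": (state, action) -> value
    env = ToyMicroserviceEnv()
    s = env.state()
    for _ in range(steps):
        # S4: the agent decides (randomly at first, via epsilon-greedy) ...
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
        # ... the components apply it and feed back a reward/punishment value.
        s2, r = env.step(a)
        best_next = max(q.get((s2, x), 0.0) for x in ACTIONS)
        q[(s, a)] = (1 - alpha) * q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
        s = s2
    return q, env

q, env = train()
print(env.instances)  # should hover near the simulated demand of 6
```

After enough rounds the table `q` plays the role of the trained decision library of S5, which would then drive the real load-balancing and scaling components.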
The invention relates to a micro-service load balancing and elastic expansion and contraction method based on reinforcement learning, which exploits the self-learning characteristic of reinforcement learning. The method can automatically learn load-adjustment strategies for different traffic volumes under different access scenarios. Meanwhile, when the number of micro-service instances cannot meet the response requirements of the access load, the learned memory actions can automatically expand the container resources, so as to achieve the optimal service quality and performance indexes of the micro-service application.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. The micro-service load balancing and elastic expansion and contraction method based on reinforcement learning is characterized in that core components for traffic simulation, index acquisition, intelligent decision-making, load balancing, and container scaling are constructed, while the basic elements of reinforcement-learning theory (measurement values, the space environment, actions, the reward function, and so on) are designed. Through repeated tests on simulated access-request scenarios using the functions provided by the core components, interaction between the decision agent and the actual environment is realized, an optimized decision is learned, and automatic load balancing of the micro-service application and elastic scaling of container resources are achieved. The specific steps are as follows:
and (1) constructing a micro-service application component operating environment, wherein the micro-service application component operating environment comprises a load balancing component, a micro-service application component, a container capacity expansion component, a container resource pool, an index monitoring component and a flow simulation component.
And (2) respectively designing basic elements of reinforcement learning based on a reinforcement learning theory, wherein the basic elements comprise an environment space, an intelligent agent, an action space, a reward function and the like. A decision agent is then designed, the agent functions including: coordinating the flexible expansion of load and container resources.
And (3) after the basic environment and the related algorithm are successfully designed. And performing access flow pressure test on the micro-service application based on the test component, then acquiring indexes such as response time, container load and the like by using the index monitoring component, and inputting the acquired indexes into the decision intelligent agent.
And (4) the intelligent agent randomly makes a decision and feeds the decision back to the load balancing component and the container expansion component. And the load balancing and container expansion component makes corresponding load adjustment and container expansion after receiving the decision of the intelligent agent to change the current operation state of the micro-service application and feed back the reward and punishment values to the decision intelligent agent. And then the micro service application enters index statistics of the next stage.
And (5) after repeating the steps (3) and (4) for a plurality of rounds, giving a corresponding reward and punishment value based on interaction of the intelligent decision-making body and the external environment until the decision-making body reaches a stable reward value. And finally, applying the trained intelligent decision library action to the actual load balancing and capacity expansion requirements of the micro-service application so as to realize the automatic state adjustment of the micro-service application.
2. The reinforcement-learning microservice application container load scheduling method of claim 1, wherein in step (1), the components are described in detail as follows.
S11, the traffic-simulation component is mainly used to perform traffic simulation and pressure testing under different scenarios, ensuring that the intelligent decision component can carry out load balancing and container-resource adjustment repeatedly under the same access-traffic scenario, so that the optimal adjustment strategy is found through repeated training.
S12, the index-monitoring component is mainly used to collect the response-time information of the micro-service application instances and the information indexes of the allocated resources. It monitors the micro-service instances in real time, converts the monitoring information into environment observations, and sends them to the intelligent decision component.
S13, the intelligent decision component is the core component of the system. First, the environment information collected by the index-monitoring component is sent to the intelligent decision component; the component makes the related decision (load adjustment or container-resource adjustment) according to this information; the current action information is then sent to the load-balancing component and the container-scaling component, and the external environment gives a reward or punishment value for the current adjustment strategy according to the action information. The system then enters the information observation of the next stage, and the process repeats until the reward reaches a stable value.
S14, the load-balancing component adjusts the load according to the action instructions of the intelligent decision component; mainly by implementing and calling the related interfaces of the service gateway, it realizes traffic load balancing across the different micro-service instances and steers traffic to newly created micro-service instances.
S15, the container-scaling component performs elastic scaling according to the action instructions of the intelligent decision component. Its main function is to implement the interfaces of the various container resource pools, through which the container management platform can be scheduled to start, stop, and otherwise manage containers.
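The component roles of S14 and S15 suggest small scheduling interfaces. The sketch below is one hypothetical shape for them; the patent describes only the roles (gateway interfaces, container-pool interfaces), so every class and method name here is invented for illustration:

```python
from abc import ABC, abstractmethod

# Hypothetical Python interfaces for the S14 load-balancing and S15
# container-scaling components. All names are illustrative assumptions.

class LoadBalancingComponent(ABC):
    @abstractmethod
    def apply_weights(self, weights: dict) -> None:
        """Push per-instance traffic weights to the service gateway (S14)."""

class ContainerScalingComponent(ABC):
    @abstractmethod
    def scale(self, service: str, delta: int) -> int:
        """Start (delta > 0) or stop (delta < 0) containers via the
        container management platform (S15); return the new count."""

class InMemoryScaler(ContainerScalingComponent):
    """Toy implementation backed by a dict instead of a real container platform."""
    def __init__(self):
        self.counts = {}

    def scale(self, service, delta):
        # Never go below zero containers; default to one existing container.
        new_count = max(0, self.counts.get(service, 1) + delta)
        self.counts[service] = new_count
        return new_count

scaler = InMemoryScaler()
print(scaler.scale("svc-a", 2))  # 1 existing + 2 started = 3
```

A real implementation would back these interfaces with a service gateway and a container orchestration platform; the in-memory version only demonstrates the contract the decision component would call.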
3. The reinforcement learning microservice application container load scheduling method of claim 1, wherein in step (2), the reinforcement learning basic element definition comprises: the measured value, the action space and the reward and punishment value of the interactive environment. The specific elements are formalized as follows.
S21, interactive-environment measurement-index definition. The interactive-environment measurement indexes comprise the response time of each micro-service application and the load of the container resources corresponding to the micro-service. The specific definition is as follows:
Let the total number of access requests be R, and let S_i be the service corresponding to the i-th micro-service instance, where S_i is allocated R_i requests and is backed by M_i container resources; L_i denotes the load of the current service instance, and T_i is the response time of the corresponding micro-service instance. The observation corresponding to service S_i is then defined as V_Si = {R_i, T_i, M_i, L_i}.
S22, action-space definition. The actions mainly comprise two types: load adjustment and container-resource adjustment. The invention defines the action-space set as A, the load-adjustment action set as B, and the container-resource-adjustment action set as C. The action set thus covers four scenarios, defined as A1, A2, A3, and A4:
A1 is the agent performing a single micro-service-application load-adjustment action, defined as A1 = {B};
A2 is the agent performing a single container-resource-adjustment action, defined as A2 = {C};
A3 is a composite action in which a load adjustment is executed after the container resources have been expanded, defined as A3 = {C|B};
A4 is the composite action in which the load adjustment is performed first and the container-resource adjustment afterwards, defined as A4 = {B|C}.
The total action set is thus defined as A = A1 ∪ A2 ∪ A3 ∪ A4.
For the load-adjustment action B: the access requests R are distributed over the micro-service instance set U, where R is the total number of access requests and instance U_i is allocated the access amount R_i, with R_1 + R_2 + ... + R_i + ... + R_n = R. The decision agent distributes the access requests R over the instance set U according to the environment observation values, i.e. it forwards R_i to U_i. The load-adjustment action is therefore defined as B = {R_i → U_i | 0 ≤ i ≤ n}, meaning that R_i access requests are distributed to U_i, where n is the total number of instances.
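The constraint R_1 + ... + R_n = R means any concrete load-adjustment action must split the total request count exactly. One hedged way to do this is a largest-remainder split; the weighting scheme below is an assumption for illustration, not part of the claim:

```python
# Sketch of the load-adjustment action B = {R_i -> U_i}: split the total
# request count R across n instances in proportion to given weights while
# keeping sum(R_i) == R exactly, as required by R_1 + ... + R_n = R.

def distribute_requests(total, weights):
    """Largest-remainder split of `total` requests by `weights`."""
    scaled = [total * w / sum(weights) for w in weights]
    base = [int(x) for x in scaled]          # integer part of each share
    remainder = total - sum(base)            # requests still unassigned
    # Give the leftover requests to the instances with the largest fractions.
    order = sorted(range(len(weights)), key=lambda i: scaled[i] - base[i], reverse=True)
    for i in order[:remainder]:
        base[i] += 1
    return base

print(distribute_requests(10, [1, 1, 2]))  # prints [3, 2, 5], which sums to 10
```

In the patent's setting the weights would come from the decision agent's observations (for example, inversely from each instance's load L_i) rather than being fixed constants.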
For the container-adjustment action C: this is, in effect, increasing or decreasing the number of container resources corresponding to a micro-service instance. Define the number of container resources in the container resource pool as M. The main work of the container-scaling decision agent is to adjust the number of containers corresponding to the application instances of each application service in the micro-service system S.
Define M_i as the number of containers corresponding to the i-th application service S_i.
Define C_0 as the action that preserves the current container counts, performing no container-resource adjustment on any application. Define C_n as the set of operations that increase or decrease the number of containers of a micro-service, where Δe is the container-adjustment step size.
C_i denotes the container-number adjustment operation corresponding to the i-th service; for example, C_i = +2 means adding two container resources to service i, and C_i = -2 means removing two container resources from it.
C_n = {C_i | 1 ≤ i ≤ m}, where m is the number of micro-services and Δe is the container-adjustment step size. The action space of the reinforcement-learning agent in the elastic-scaling scenario can therefore be expressed as {C_0} ∪ C_n.
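A minimal sketch of applying a container-adjustment action, covering both the no-op C_0 and a per-service increase or decrease by multiples of the step size Δe. Parameter names are illustrative assumptions:

```python
# Sketch of container-adjustment actions: C_0 keeps the current counts,
# while C_i adjusts service i by `multiples` of the step size Δe (`step`).

def apply_container_action(counts, service_index=None, multiples=0, step=1):
    """Return new container counts; C_0 when service_index is None."""
    new = list(counts)
    if service_index is not None:
        # Never drop below zero containers for a service.
        new[service_index] = max(0, new[service_index] + multiples * step)
    return new

counts = [2, 4, 1]
print(apply_container_action(counts))                                # C_0: unchanged
print(apply_container_action(counts, service_index=1, multiples=2))  # +2 containers for service 1
```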
and S23, designing a reward function, wherein the reward function determines whether the decision-making intelligent agent can automatically select and adjust the parameters. The present invention presents two possible reward function definitions: linear reward functions based on example statistics, gaussian reward functions based on non-linearities.
Linear reward function based on example statistics: the main role of the intelligent decision-making body is to reasonably distribute the access requests within the optimal response time, so that the boundary of the reward function and the penalty function need to be reasonably defined when the reward function is designed. The invention defines the reward upper limit rmaxAnd a lower prize limit rmin
Reward upper bound rmaxThe strategy used to limit the intelligence attempts to be overly aggressive, i.e., choosing an absolute low response time, results in parts of the request being permanently in the wait queue and not being processed.
Lower limit of reward rminFor ensuring that the decision agent does not receive too low a reward value after redistributing access requests, especially in extreme access scenarios.
Definition of rfailFor a punitive reward value, if the decision-making agent does not act to increase the response time of each microservice instance and reduce the load on the container, a negative reward, i.e., a punitive reward value, is given by the environment.
Defining the maximum response time of a micro-service instance in a period from t time as tt,j,maxThe decision agent then receives the smallest reward at the maximum response time.
Defining a minimum response time as tt,j,min. I.e. the maximum prize value is achieved at the minimum response time.
Defining an average response time of
Figure FDA0003337158340000051
Representing the response speed obtained by most requests in the period t, and defining the reward value obtained by the intelligent agent with the average response time as
Figure FDA0003337158340000052
The formula defining the total response time reward is therefore expressed as follows.
Figure FDA0003337158340000053
The argument represents the processing time of the request t, and defines the field as t ∈ [0, + ∞).
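The linear reward described here fixes r_max at the minimum response time, r_min at the maximum, and a punitive r_fail on severe timeout. Assuming simple linear interpolation between those anchor points (the exact formula appears only as an image in the original filing, so this is a reconstruction), it could be computed as:

```python
# Hedged sketch of the S23 linear reward: r_max at t_min, r_min at t_max,
# punitive r_fail beyond t_max, and linear interpolation in between.

def linear_reward(t, t_min, t_max, r_max, r_min, r_fail):
    if t <= t_min:
        return r_max            # fastest responses earn the capped maximum
    if t > t_max:
        return r_fail           # severe timeout: punitive reward value
    # Interpolate from r_max down to r_min on [t_min, t_max].
    frac = (t - t_min) / (t_max - t_min)
    return r_max + frac * (r_min - r_max)

print(linear_reward(0.5, t_min=0.1, t_max=1.0, r_max=10.0, r_min=1.0, r_fail=-5.0))
```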
Nonlinear Gaussian reward function: under this function, the decision agent obtains the punitive reward value r_fail if an assigned access request leads to a severe response-time timeout.
The key to the Gaussian-distributed reward parameters is how to design a random variable X ~ N(μ, δ²) that follows a Gaussian distribution, with most of its values distributed around the mean. The global average response time is used here as the estimate of the mean μ of the random variable t, and the global response-time standard deviation δ_t as the estimate of the standard deviation of the random variables t_i. Finally, a reward amplification ratio r_k and a reward offset r_b are added.
Reward limits are defined as follows: the upper limit of the reward function is r_max = r_k / (δ_t · sqrt(2π)) + r_b, attained at the mean response time, and the lower limit is r_min = r_b. The reward function is constructed as:
r(t) = r_k · exp(-(t - μ)² / (2δ_t²)) / (δ_t · sqrt(2π)) + r_b.
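A sketch of a Gaussian reward consistent with this description: a normal density centered on the global mean response time, scaled by the amplification ratio r_k and shifted by the offset r_b. The exact formula is an image in the original filing, so the reconstruction is an assumption:

```python
import math

# Hedged sketch of the nonlinear Gaussian reward:
#   r(t) = r_k * N(t; mean, std) + r_b
# with mean = global average response time and std = its standard deviation.

def gaussian_reward(t, mean, std, r_k=1.0, r_b=0.0):
    density = math.exp(-((t - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))
    return r_k * density + r_b

# The peak (the r_max upper bound r_k / (std * sqrt(2*pi)) + r_b) occurs at t == mean:
print(gaussian_reward(0.3, mean=0.3, std=0.1, r_k=2.0, r_b=0.5))
```

Responses far from the mean decay toward the floor r_b (the r_min lower limit), which matches the stated reward bounds.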
4. The method for scheduling the container load of a micro-service application by reinforcement learning as claimed in claim 1, wherein in step (4) the basic rules for each action decision under different access scenarios are as follows.
If the number of instances of the micro-service application can meet the response-time requirement of a given traffic level, the load-balancing component is scheduled to distribute the access-request traffic across the different instances. That is, the access requests are redirected and distributed to realize load balancing without increasing the number of micro-service instances; in this case the container resources need not be adjusted.
If the number of micro-service instances cannot meet the access requirement, for example when the response time of a single micro-service times out severely and the resource load of the micro-service instances reaches a full-load state, the micro-service must undergo container-resource expansion: the number of container resources corresponding to the micro-service is increased to divert the access requests. In this case the container-resource expansion action is performed first, and the traffic is then redistributed.
If the number of access requests is small and the current number of micro-service instances far exceeds the access requirement, the container resources are reduced and the traffic is redistributed, so as to use resources effectively.
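The three rules of this claim can be summarized as a simple decision dispatch. The numeric thresholds below are illustrative assumptions, since the claim states the conditions only qualitatively:

```python
# The three decision rules of claim 4 as a dispatch function. The SLA and
# load thresholds are invented for illustration; the claim is qualitative.

def choose_action(response_time, sla, load, instances, min_instances=1):
    if response_time <= sla:
        # Rule 3: capacity far exceeds demand, so shrink and redistribute.
        if load < 0.2 and instances > min_instances:
            return "scale_in_then_rebalance"
        # Rule 1: instances meet the response-time requirement; only rebalance.
        return "rebalance_only"
    # Rule 2: severe timeout at full load calls for expansion first.
    if load >= 0.95:
        return "scale_out_then_rebalance"
    # Timeout without full load: assume rebalancing alone may still help.
    return "rebalance_only"

print(choose_action(response_time=2.5, sla=1.0, load=0.98, instances=3))
```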
CN202111297596.2A 2021-11-04 2021-11-04 Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning Pending CN114003387A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111297596.2A CN114003387A (en) 2021-11-04 2021-11-04 Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning


Publications (1)

Publication Number Publication Date
CN114003387A true CN114003387A (en) 2022-02-01

Family

ID=79927108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111297596.2A Pending CN114003387A (en) 2021-11-04 2021-11-04 Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN114003387A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116610454A (en) * 2023-07-17 2023-08-18 中国海洋大学 MADDPG algorithm-based hybrid cloud resource elastic expansion system and operation method
CN116610454B (en) * 2023-07-17 2023-10-17 中国海洋大学 MADDPG algorithm-based hybrid cloud resource elastic expansion system and operation method
CN116680201A (en) * 2023-07-31 2023-09-01 南京争锋信息科技有限公司 System pressure testing method based on machine learning
CN116680201B (en) * 2023-07-31 2023-10-17 南京争锋信息科技有限公司 System pressure testing method based on machine learning
CN117648123A (en) * 2024-01-30 2024-03-05 中国人民解放军国防科技大学 Micro-service rapid integration method, system, equipment and storage medium
CN117648123B (en) * 2024-01-30 2024-06-11 中国人民解放军国防科技大学 Micro-service rapid integration method, system, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114003387A (en) Micro-service load balancing and elastic expansion and contraction method based on reinforcement learning
CN103401947A (en) Method and device for allocating tasks to multiple servers
CN107579518A (en) Power system environment economic load dispatching method and apparatus based on MHBA
CN113806018B (en) Kubernetes cluster resource mixed scheduling method based on neural network and distributed cache
CN113627545B (en) Image classification method and system based on isomorphic multi-teacher guiding knowledge distillation
CN104899100A (en) Resource scheduling method for cloud system
CN116227757A (en) Comprehensive energy management and control method and system based on intelligent cloud gateway
CN113964853A (en) 5G macro base station group optimal scheduling method, device, medium and terminal equipment
CN109639498A (en) A kind of resource flexibility configuration method of the service-oriented quality based on SDN and NFV
CN112261120A (en) Cloud-side cooperative task unloading method and device for power distribution internet of things
CN116760771A (en) On-line monitoring data multichannel transmission control strategy processing method
CN114546646A (en) Processing method and processing apparatus
CN114217944A (en) Dynamic load balancing method for neural network aiming at model parallelism
CN115550373B (en) Combined test task environment load balancing modeling method based on cloud platform management and control
CN115622087B (en) Power regulation and control method, device and equipment for power distribution network
Mobasheri et al. Toward developing fog decision making on the transmission rate of various IoT devices based on reinforcement learning
CN105187488A (en) Method for realizing MAS (Multi Agent System) load balancing based on genetic algorithm
Caliciotti et al. On optimal buffer allocation for guaranteeing quality of service in multimedia internet broadcasting for mobile networks
CN114327925A (en) Power data real-time calculation scheduling optimization method and system
CN110322369B (en) Building load optimal combination determination method, terminal device and storage medium
RU2296362C1 (en) Method for servicing varying priority requests from users of computer system
Ran Influence of government subsidy on high-tech enterprise investment based on artificial intelligence and fuzzy neural network
Grum et al. The construction of a common objective function for analytical infrastructures
CN107729150A (en) A kind of addressing method of isomeric group safety supervision equipment least energy consumption node
CN110879335B (en) Method for evaluating heavy overload condition of power distribution network line

Legal Events

Date Code Title Description
PB01 Publication