CN106407007B - Cloud resource configuration optimization method for elastic analysis process - Google Patents

Cloud resource configuration optimization method for elastic analysis process

Info

Publication number
CN106407007B
CN106407007B
Authority
CN
China
Prior art keywords: component, response time, node, components, average response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610790447.2A
Other languages
Chinese (zh)
Other versions
CN106407007A (en)
Inventor
曹健 (Cao Jian)
姚艳 (Yao Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201610790447.2A priority Critical patent/CN106407007B/en
Publication of CN106407007A publication Critical patent/CN106407007A/en
Application granted granted Critical
Publication of CN106407007B publication Critical patent/CN106407007B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing

Abstract

The invention provides a cloud resource configuration optimization method for an elastic analysis process, which comprises the following steps. Step 1: perform performance modeling of the elastic analysis process using open queuing network theory, i.e., model the whole analysis process as an open queuing network in which each component of the process corresponds to a sub-queue of the queuing network system and the output of one component is the input of the next. Step 2: estimate the average response time of the whole open queuing network via queuing theory, and allocate cloud resources to each component according to the estimated average response time, so that the total number of resources is minimized on the premise of meeting the average response time required by the user. The invention can allocate resources to an analysis flow whose requests arrive continuously; it estimates the average response time of the system with queuing theory, which yields a more accurate estimate, derives the set of feasible server allocations for each component from queuing theory, and then obtains an approximately optimal solution with a heuristic algorithm.

Description

Cloud resource configuration optimization method for elastic analysis process
Technical Field
The invention relates to the technical field of cloud resource configuration optimization, in particular to a cloud resource configuration optimization method for an elastic analysis process.
Background
In recent years, the spread of cloud computing and the mobile internet has driven the generation of massive data in many fields, such as bioinformatics, social networks, and intelligent transportation systems. The analysis and processing of these massive data sets has become a research hotspot: analyzing such data efficiently enables decisions that are both more accurate and faster. Data analysis is generally a computation-intensive application. For continuously arriving data analysis tasks, processing every request in time requires allocating enough resources to handle the peak load; however, those resources then sit idle and are wasted when the request load is low. Cloud computing platforms share computing resources over the network and can supply resources on demand, which is why more and more applications are deployed on cloud platforms.
Generally speaking, a data analysis application is made up of multiple components, each of which can be deployed independently, and workflow is a good tool for coordinating these components. To scale the system dynamically, the conventional approach is to scale the whole process as a unit, adding or removing resource instances for the entire flow. But different components in the flow have different processing capabilities: under a continuously arriving request load, one resource instance may suffice for one component, while another component may need more instances or else become a system bottleneck.
With the advent of the big data age, more and more data analysis applications are emerging, and some of them can be modeled as workflows. Requests for such applications typically arrive continuously and carry strict response time requirements. When these analysis processes are deployed on a cloud platform, a key problem is how to allocate cloud resources so that the number of leased virtual machines is minimized while the response time requirement is met.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a cloud resource configuration optimization method for an elastic analysis process.
The cloud resource configuration optimization method for the elastic analysis process comprises the following steps:
step 1: performing performance modeling of the elastic analysis process, i.e., modeling the whole analysis process as an open queuing network, wherein each component of the process corresponds to an M/M/m sub-queue of the queuing network system and the output of one component is the input of the next;
step 2: estimating the average response time of the whole open queuing network, and allocating cloud resources to each component according to the estimated average response time, so that the total number of resources is minimized on the premise of meeting the average response time required by the user.
Preferably, the elastic analysis process in step 1 can be represented by a weighted directed acyclic graph G = (C, E), where C is the node set of G and represents the set of all components in the process, and E is the edge set of G and represents the dependency relationships between the components; the weight on a node represents the average service time of the component when executing a user request; an edge between two nodes represents the data flow relationship between the two components: e(c_i, c_j) ∈ E indicates that nodes c_i and c_j have a dependency relationship, with c_i a predecessor node of c_j and c_j a successor node of c_i; when processing a request, a component cannot begin processing until all of its predecessor nodes have completed; specifically:
the elastic analysis process is represented by EW, where EW = <G, f(C)> and <G, f(C)> denotes the optimization operation performed on graph G, with f(C) the function that determines the number of instances of each component.
Preferably, the step 2 includes:
step 2.1: determining a critical path in an open queuing network;
step 2.2: optimizing the resources of the components on the critical path;
step 2.3: performing resource optimization on the remaining components, i.e., the components not on the critical path.
Preferably, step 2.1 comprises: updating the critical path once every pricing period and allocating virtual machines to the components on the critical path, wherein the critical path is the path from the start node to the end node of the directed acyclic graph whose sum of node weights is maximal, and the sum of the node weights represents the total average service time of the components on the path.
Preferably, step 2.2 comprises: performing resource optimization on the components on the critical path.
Preferably, step 2.3 comprises: using the average response times of all critical-path components obtained in step 2.2, allocating virtual machines to the remaining components according to the parallel blocks of the weighted directed acyclic graph, such that the average response time of each non-critical parallel branch in a parallel block is less than or equal to the average response time of the critical-path branch components of that block; the number of virtual machines is calculated from the relationship between average response time and number of servers, and the virtual machines are allocated accordingly.
Compared with the prior art, the invention has the following beneficial effects:
the invention can allocate resources to the analysis process that the request arrives continuously, concretely, the whole system is deployed independently by taking the assembly as a unit to form an elastic process; and each component is regarded as a queuing system, the whole process is regarded as a queuing network, the average response time of the system is estimated by using a queuing theory, and the response time is estimated more accurately. The invention can also estimate the distributable server solution set of each component according to the queuing theory, and then obtains an approximate optimal solution by utilizing a heuristic algorithm.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of a component performance analysis model.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following examples will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
For the elastic analysis process, the invention adopts a cloud resource configuration optimization strategy based on queuing network theory. Generally, an analysis flow comprises multiple components, each with an independent function that can be deployed independently, with interdependencies among the components. Existing cloud resource configuration optimization methods for workflows treat the whole process as a unit and allocate cloud resources to the process as a whole. Thus, when the performance of one component degrades so that it can no longer keep up with the continuously arriving requests, resources must be added to the whole application, which leaves the other components over-provisioned and wastes resources. In other words, the components of an analysis process have different processing capabilities and different resource demands; if the process is provisioned as a whole, resources are sized for the bottleneck component (the component with the lowest processing capability) and every other component receives more resources than it needs. The invention instead allocates independent resources to each component on demand, which saves resources and cost. Such a problem is, however, complicated and difficult: the user cares about the average response time for completing each request, not the processing time of each component, whereas the resource allocation scheme must decide how much resource to assign to each individual component. This is an NP-hard problem.
First, open queuing network theory is used to performance-model the elastic analysis process: the whole analysis process is modeled as an open queuing network in which each component of the process is an M/M/m sub-queue and the output of one component is the input of the next. The average response time of the whole system is then estimated via queuing theory. Based on the estimated average response time, two heuristic algorithms are proposed to allocate cloud resources to each component so that the total number of resources is minimized while the average response time required by the user is met.
Elastic analysis process model:
typically, the analysis flow can be represented by a weighted Directed Acyclic Graph (DAG), G ═ C, E. Wherein the node represents the component, and the weight value on the node represents the average service time of the component for executing the user request; edge e (c)i,cj) Representing data flow relationships between components, e (c)i,cj) E denotes the vertex ciIs cjOf the preceding node, respectively, cjIs ciThe successor node of (1). When processing a request, a component cannot start processing until all its predecessor nodes have not completed. Each component in the elasticity analysis flow may deploy multiple instances according to load.
In the present invention, the elasticity analysis flow is represented by EW, < G, f (c) >, where f (c) represents a function for determining the number of instances of each component.
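For illustration only (the patent defines this model mathematically, not in code), the weighted DAG and the instance function f(C) can be sketched as a small data structure; all class and field names below are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        service_time: float   # node weight: mean service time per request
        instances: int = 1    # f(C): number of VM instances deployed

    @dataclass
    class ElasticWorkflow:
        # EW = <G, f(C)>: components carry both the graph nodes and f(C)
        components: dict = field(default_factory=dict)   # name -> Component
        edges: set = field(default_factory=set)          # (predecessor, successor)

        def add_edge(self, pred: str, succ: str) -> None:
            self.edges.add((pred, succ))

        def predecessors(self, node: str) -> list:
            # a component may start only after all of its predecessors finish
            return [p for (p, s) in self.edges if s == node]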
Resource model
Cloud resources are typically provided externally in the form of virtual machines of various specifications. To simplify the problem, the invention standardizes virtual machines into virtual machine units. Furthermore, the invention adopts an exclusive resource provisioning model, i.e., each virtual machine can be assigned to only one component at a time. The pricing model of the virtual machines is on-demand billing.
Performance analysis model
The performance of the analysis process is modeled with queuing theory: the whole elastic analysis process is modeled as an open queuing network, and each component is modeled as an M/M/m queue. Following queuing theory, let λ denote the average request arrival rate of the whole analysis process, μ the average service rate of each virtual machine on a component, and m the number of virtual machines allocated to the component. The traffic intensity is defined as ρ = λ/(mμ). The average response time of each component is then

R = 1/μ + P_Q / (mμ - λ)

where

P_Q = ((mρ)^m / (m! (1 - ρ))) · p_0

denotes the average probability that an arriving request has to queue at the component, and

p_0 = [ Σ_{n=0}^{m-1} (mρ)^n / n! + (mρ)^m / (m! (1 - ρ)) ]^(-1)

is the steady-state probability that the component is empty.
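The response-time expression above can be evaluated numerically. The following sketch implements the standard M/M/m (Erlang C) computation that the model relies on; it is illustrative code, not code from the patent, and the function name is hypothetical:

    import math

    def mmm_response_time(lam: float, mu: float, m: int) -> float:
        """Average response time R of an M/M/m queue: arrival rate lam,
        per-VM service rate mu, m VMs; requires lam < m * mu for stability."""
        rho = lam / (m * mu)
        if rho >= 1.0:
            raise ValueError("unstable queue: need lam < m * mu")
        a = m * rho  # offered load lam / mu
        # p0: steady-state probability that the component is empty
        p0 = 1.0 / (sum(a ** n / math.factorial(n) for n in range(m))
                    + a ** m / (math.factorial(m) * (1.0 - rho)))
        # Erlang C: probability that an arriving request must queue
        p_queue = (a ** m / (math.factorial(m) * (1.0 - rho))) * p0
        # R = mean service time + mean waiting time
        return 1.0 / mu + p_queue / (m * mu - lam)

For example, a component with arrival rate λ = 8 requests/s and four virtual machines of service rate μ = 3 requests/s each gives mmm_response_time(8.0, 3.0, 4) ≈ 0.43 s.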
the resource allocation optimization strategy of the invention is divided into three steps: the method comprises the steps of firstly, determining a key path, secondly, performing resource optimization on components on the key path, and thirdly, performing resource optimization on the rest components. Program 1 is pseudo code of the general flow of the policy. The first row is to calculate each update period based on the tariff charge duration. Because the virtual machines are charged according to the needs, in each charging duration, the virtual machines need to be charged once the virtual machines are rented, whether idle or busy, and all the virtual machines are actively updated once every other charging duration. Then, in each update cycle, a critical path is determined, and the algorithm of the calling program 2 or the program 3 allocates a virtual machine to the component on the critical path. And distributing virtual machines for the rest of the components according to the parallel blocks, wherein the strategy is that the average response time of other parallel branches on each parallel block is not more than that of the components of the critical path branches.
Program 1 (cloud resource allocation optimization algorithm overview)
[The pseudocode of Program 1 is available only as an image in the original publication.]
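Since Program 1 survives only as a figure, the following sketch reconstructs its first step from the prose description: computing the critical path of the weighted DAG, i.e., the start-to-end path with the maximum summed service time. It reuses the hypothetical ElasticWorkflow structure sketched earlier; in the full strategy this function would be re-run once per billing period before Program 2 or Program 3 is invoked:

    def find_critical_path(wf: "ElasticWorkflow") -> list:
        """Path from a start node to an end node with maximal total node weight."""
        # topological order (Kahn's algorithm)
        indeg = {n: 0 for n in wf.components}
        for (_, s) in wf.edges:
            indeg[s] += 1
        ready = [n for n, d in indeg.items() if d == 0]
        order = []
        while ready:
            n = ready.pop()
            order.append(n)
            for (p, s) in wf.edges:
                if p == n:
                    indeg[s] -= 1
                    if indeg[s] == 0:
                        ready.append(s)
        # longest-path dynamic programming over the topological order
        dist = {n: wf.components[n].service_time for n in wf.components}
        prev = {n: None for n in wf.components}
        for n in order:
            for (p, s) in wf.edges:
                if p == n and dist[n] + wf.components[s].service_time > dist[s]:
                    dist[s] = dist[n] + wf.components[s].service_time
                    prev[s] = n
        # backtrack from the end node with the largest accumulated weight
        node = max(dist, key=dist.get)
        path = []
        while node is not None:
            path.append(node)
            node = prev[node]
        return path[::-1]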
The components on the critical path form a sequential flow. According to queuing theory, the average response time of the whole critical path is the sum R_path of the average response times of all its components:

R_path = Σ_{i=1}^{k} R_i

where R_i is the average response time of the i-th component and k is the number of components on the path.
Two optimization algorithms are provided for resource allocation of the components on the critical path:
packet Knapsack based optimization Algorithm (Group Knapack-based Algorithm, GKA)
Program 2 presents the pseudocode of the grouped-knapsack optimization algorithm. Given the average arrival rate of requests and the average service time of a component, the relationship between the average response time and the number of virtual machines can be calculated from the component's performance analysis model. The more virtual machines a component is allocated, the smaller its average response time. But once the number of virtual machines reaches a saturation value, i.e., every newly arriving request is served without waiting in a queue, the average response time of the component no longer changes; the number of virtual machines in this saturated state is denoted m_max_{i,t}, and the corresponding response time is denoted r_max_{i,t}. From the steady-state condition of the system, m_min_{i,t} = ⌈λ_t / μ_{i,t}⌉, where m_min_{i,t} is the minimum number of virtual machines that keeps the system stable at time t, λ_t is the average arrival rate of requests at time t, and μ_{i,t} is the average service rate of a virtual machine on component i at time t. The feasible solution set of a component is defined as all integers between m_min_{i,t} and m_max_{i,t}. The critical path is treated as a knapsack whose capacity is the constraint given by the user: the average response time R_π. Each feasible solution is an item with two attributes: cost and average response time. All items are divided into k groups (one per component), and exactly one item must be chosen from each group. Items are then loaded into the knapsack so that the total cost is minimized while the total average response time stays within the capacity R_π.
In Program 2, the number of feasible solutions for each component is first calculated, and the grouped-knapsack problem is then solved with a dynamic-programming-based method.
Program 2 (grouped-knapsack optimization algorithm)
[The pseudocode of Program 2 is available only as an image in the original publication.]
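Program 2 likewise survives only as a figure, so the following sketch reconstructs the grouped-knapsack dynamic program from the description above. The response-time budget is discretized into steps of time_step seconds, the mmm_response_time helper sketched earlier is reused, and all names are illustrative; for brevity the sketch returns only the minimum total VM count (recovering the per-component choice would require standard dynamic-programming backtracking):

    import math

    def gka_allocate(lams: list, mus: list, r_pi: float, time_step: float = 0.01):
        """Grouped-knapsack allocation for the critical-path components:
        pick one VM count per component so the summed average response time
        stays within r_pi while the total number of VMs is minimized."""
        cap = int(r_pi / time_step)           # discretized knapsack capacity
        INF = float("inf")
        best = [0] + [INF] * cap              # best[t]: min VMs using budget t
        for lam, mu in zip(lams, mus):
            m = math.ceil(lam / mu)           # steady state needs lam < m * mu
            if m * mu <= lam:
                m += 1
            # feasible set: from m_min up to the saturation point m_max,
            # where extra VMs no longer reduce the response time
            items, last_r = [], None
            while True:
                r = mmm_response_time(lam, mu, m)
                items.append((m, math.ceil(r / time_step)))
                if last_r is not None and last_r - r < 1e-6:
                    break
                last_r, m = r, m + 1
            nxt = [INF] * (cap + 1)
            for t in range(cap + 1):          # exactly one item per group
                if best[t] == INF:
                    continue
                for m_i, w in items:
                    if t + w <= cap and best[t] + m_i < nxt[t + w]:
                        nxt[t + w] = best[t] + m_i
            best = nxt
        feasible = [c for c in best if c < INF]
        return min(feasible) if feasible else None   # None: r_pi infeasible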
Proportional Adaptive Heuristic Algorithm (PAHA)
Program 3 presents the pseudocode of the proportional adaptive heuristic algorithm. The idea of the algorithm is simple: allocate virtual machines in proportion to the service times of the components, so that components with long service times (i.e., slow service) receive more virtual machines. The first step of the algorithm is to compute the feasible solution set. Then R_π is divided over the components in proportion to their service times, such that the target time r_i assigned to component i satisfies

r_i = R_π · s_i / Σ_{j=1}^{k} s_j

where s_i is the average service time of component i, so that Σ_{i=1}^{k} r_i = R_π. If the target time allocated to a component is less than that component's lowest configuration time, the component is assigned its lowest configuration time and the other components are re-allocated proportionally, ensuring that every component can be served normally. Finally, the minimum number of virtual machines of each component that meets the required service time is calculated from the performance analysis model.
Program 3 (proportional adaptive heuristic algorithm)
[The pseudocode of Program 3 is available only as an image in the original publication.]
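Program 3 is also available only as a figure; the following sketch reconstructs the proportional splitting it describes. Target times below a component's lowest achievable configuration time are lifted to that floor and the remaining budget is re-split, after which each component receives the fewest virtual machines that meet its target. The names are again illustrative and mmm_response_time is the helper sketched earlier:

    import math

    def paha_allocate(lams: list, mus: list, r_pi: float) -> list:
        """Proportional adaptive heuristic for the critical-path components:
        split the budget r_pi in proportion to mean service times, then size
        each component with the minimum VM count meeting its share."""
        k = len(lams)
        service = [1.0 / mu for mu in mus]    # mean service time per component

        def min_vms(lam, mu):                 # smallest stable VM count
            m = math.ceil(lam / mu)
            return m + 1 if m * mu <= lam else m

        # floor[i]: best achievable response time (saturation: no queuing gain)
        floor = []
        for lam, mu in zip(lams, mus):
            m = min_vms(lam, mu)
            r, prev = mmm_response_time(lam, mu, m), None
            while prev is None or prev - r > 1e-6:
                prev, m = r, m + 1
                r = mmm_response_time(lam, mu, m)
            floor.append(r)

        # proportional split, lifting infeasible shares to their floor
        target = [r_pi * s / sum(service) for s in service]
        fixed = [False] * k
        while True:
            low = [i for i in range(k) if not fixed[i] and target[i] < floor[i]]
            if not low:
                break
            for i in low:
                fixed[i], target[i] = True, floor[i]
            budget = r_pi - sum(target[i] for i in range(k) if fixed[i])
            if budget < 0:
                raise ValueError("r_pi is too small for this workload")
            rest = sum(service[i] for i in range(k) if not fixed[i])
            if rest == 0:
                break                          # every share pinned at its floor
            for i in range(k):
                if not fixed[i]:
                    target[i] = budget * service[i] / rest

        # minimum VMs per component meeting its target response time
        alloc = []
        for lam, mu, t in zip(lams, mus, target):
            m = min_vms(lam, mu)
            while mmm_response_time(lam, mu, m) > t + 1e-9:
                m += 1
            alloc.append(m)
        return alloc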
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the specific embodiments described above, and that those skilled in the art may make various changes or modifications within the scope of the appended claims without departing from the spirit of the invention. The embodiments of the present application, and the features of the embodiments, may be combined with one another arbitrarily provided they do not conflict.

Claims (3)

1. A cloud resource configuration optimization method for an elastic analysis process is characterized by comprising the following steps:
step 1: performing performance modeling of the elastic analysis process, i.e., modeling the whole analysis process as an open queuing network, wherein each component of the process corresponds to an M/M/m sub-queue of the queuing network system and the output of one component is the input of the next;
step 2: estimating the average response time of the whole open queuing network, and allocating cloud resources to each component according to the estimated average response time, so that the total number of resources is minimized on the premise of meeting the average response time required by the user;
the elastic analysis process in step 1 can be represented by a weighted directed acyclic graph G = (C, E), where C is the node set of G and represents the set of all components in the process, and E is the edge set of G and represents the dependency relationships between the components; the weight on a node represents the average service time of the component when executing a user request; an edge between two nodes represents the data flow relationship between the two components: e(c_i, c_j) ∈ E indicates that nodes c_i and c_j have a dependency relationship, with c_i a predecessor node of c_j and c_j a successor node of c_i; when processing a request, a component cannot begin processing until all of its predecessor nodes have completed; specifically:
the elastic analysis process is represented by EW, where EW = <G, f(C)> and <G, f(C)> denotes the optimization operation performed on graph G, with f(C) the function that determines the number of instances of each component;
the step 2 comprises the following steps:
step 2.1: determining a critical path in an open queuing network;
step 2.2: optimizing the resources of the components on the critical path;
step 2.3: performing resource optimization on the remaining components, i.e., the components not on the critical path;
the critical path is the path from the start node to the end node of the directed acyclic graph whose sum of node weights is maximal; the sum of the node weights represents the total average service time of the components on the path.
2. The cloud resource configuration optimization method for an elastic analysis process according to claim 1, wherein step 2.1 comprises: updating the critical path once every pricing period and allocating virtual machines to the components on the critical path, wherein the critical path is the path from the start node to the end node of the directed acyclic graph whose sum of node weights is maximal; the sum of the node weights represents the total average service time of the components on the path.
3. The cloud resource configuration optimization method for an elastic analysis process according to claim 1, wherein step 2.3 comprises: using the average response times of all critical-path components obtained in step 2.2, allocating virtual machines to the remaining components according to the parallel blocks of the weighted directed acyclic graph, such that the average response time of each non-critical parallel branch in a parallel block is less than or equal to the average response time of the critical-path branch components of that block; and calculating the number of virtual machines from the relationship between average response time and number of servers, and allocating the virtual machines accordingly.
CN201610790447.2A 2016-08-31 2016-08-31 Cloud resource configuration optimization method for elastic analysis process Active CN106407007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610790447.2A CN106407007B (en) 2016-08-31 2016-08-31 Cloud resource configuration optimization method for elastic analysis process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610790447.2A CN106407007B (en) 2016-08-31 2016-08-31 Cloud resource configuration optimization method for elastic analysis process

Publications (2)

Publication Number Publication Date
CN106407007A CN106407007A (en) 2017-02-15
CN106407007B true CN106407007B (en) 2020-06-12

Family

ID=58001630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610790447.2A Active CN106407007B (en) 2016-08-31 2016-08-31 Cloud resource configuration optimization method for elastic analysis process

Country Status (1)

Country Link
CN (1) CN106407007B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633125B (en) * 2017-09-14 2021-08-31 北京仿真中心 Simulation system parallelism identification method based on weighted directed graph
CN108196948A * 2017-12-28 2018-06-22 东华大学 Cloud instance type combination optimal selection method based on dynamic programming
CN108521352B * 2018-03-26 2022-07-22 天津大学 Online cloud service tail delay prediction method based on stochastic reward networks
CN110278125B (en) * 2019-06-21 2022-03-29 山东省计算中心(国家超级计算济南中心) Cloud computing resource elasticity evaluation method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1610311A * 2003-10-20 2005-04-27 国际商业机器公司 Method and apparatus for automatic model building using inference for IT systems
CN102043674A (en) * 2009-10-16 2011-05-04 Sap股份公司 Estimating service resource consumption based on response time

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7054943B1 (en) * 2000-04-28 2006-05-30 International Business Machines Corporation Method and apparatus for dynamically adjusting resources assigned to plurality of customers, for meeting service level agreements (slas) with minimal resources, and allowing common pools of resources to be used across plural customers on a demand basis
US7146353B2 (en) * 2003-07-22 2006-12-05 Hewlett-Packard Development Company, L.P. Resource allocation for multiple applications
US8538740B2 (en) * 2009-10-14 2013-09-17 International Business Machines Corporation Real-time performance modeling of software systems with multi-class workload


Also Published As

Publication number Publication date
CN106407007A (en) 2017-02-15


Legal Events

Code Description
C06, PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant