CN113110914A - Internet of things platform construction method based on micro-service architecture

Internet of things platform construction method based on micro-service architecture

Info

Publication number
CN113110914A
Authority
CN
China
Prior art keywords
module
internet
micro
service
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110229566.1A
Other languages
Chinese (zh)
Inventor
沈玉龙
彭环
绳金涛
祝幸辉
张志为
滕跃
徐扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110229566.1A
Publication of CN113110914A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/301 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is a virtual computing platform, e.g. logically partitioned systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 30/00 IoT infrastructure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5021 Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of Internet of things platform construction, and discloses an Internet of things platform construction method based on a micro-service architecture. The corresponding Internet of things platform construction system comprises: a micro-service level division module, a Kubernetes cluster building module, a mirror image construction module, a capacity expansion and reduction module, a module adding module, a data collection module, a monitoring scheme selection module, a prediction model building module, a simulation module, an accuracy check module, a prediction module and a scheduling module. The invention decouples the complex functions of the Internet of things platform and builds the Internet of things application on a set of micro-services; in actual deployment, running multiple instances of each micro-service enhances the flexibility and robustness of the application. The scheduling algorithm in Kubernetes is improved so that resource scheduling distinguishes between multiple tenants, respects the scheduling decisions and resource constraints of individual users, and improves the fairness of the default overall scheduling mechanism.

Description

Internet of things platform construction method based on micro-service architecture
Technical Field
The invention belongs to the technical field of Internet of things platform construction, and particularly relates to a method for constructing an Internet of things platform based on a micro-service architecture.
Background
At present, most existing Internet of things platforms are monolithic applications deployed in virtual machines. With the continuous development of the Internet of things industry, the functions of these platforms have gradually become more complex, and redundancy backup and horizontal expansion consume more machine resources and maintenance time. Moreover, in a traditional monolithic Internet of things application the modules are tightly coupled and all functions run in the same process; when facing an increasingly complex Internet of things environment, the whole cluster can only be scaled as a unit, which greatly wastes resources.
The micro-service architecture has gained wide support in recent years: it decouples application services, reduces the coupling between services and improves the robustness of the system. Docker is an open-source application container engine that supports deploying containers to local environments and to the mainstream cloud platforms.
Task scheduling and resource allocation in a cloud infrastructure are well-known problems, and the conventional open-source framework Kubernetes provides container scheduling and load balancing. Its scheduling algorithm is non-exclusive: all incoming task requests are processed through one scheduling component, task priority is not realized in the component, and all tasks are processed on a first-come-first-served basis. The default scheduling algorithm also does not support a preemptive strategy, so it cannot handle task requests well in the complex environment of the Internet of things. The scheduler implemented by Kubernetes by default screens out unsuitable nodes through a pre-selection process, then scores all remaining nodes through a preference strategy and, after comparing the scores, runs the pod on the node with the highest score. However, when the scheduler schedules different pod replicas under the same resource controller, the pre-selection and preference operations are performed on the same nodes, so the same work is repeated every time.
Kubernetes can load-balance a service and expand or reduce its capacity according to preset rules, but its auto-scaling function requires the application provider to define a parameter set manually. Determining this parameter set is unpredictable and can only be done from experience, so resources are likely to be wasted or insufficient. These management parameters are also static while incoming requests change frequently; in this situation the scaling decision is passive in nature rather than proactive.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) With the continuous development of the Internet of things industry, the functions of the Internet of things platform have gradually become more complex, and redundancy backup and horizontal expansion consume more machine resources and maintenance time. Meanwhile, in a traditional monolithic Internet of things application the modules are tightly coupled and all functions run in the same process; when facing an increasingly complex Internet of things environment, the whole cluster can only be scaled as a unit, which greatly wastes resources.
(2) The scheduling algorithm of the conventional open-source framework Kubernetes is non-exclusive: task priority is not realized in the component and all tasks are processed on a first-come-first-served basis. Meanwhile, the default scheduling algorithm does not support a preemptive strategy and cannot handle task requests well in the complex environment of the Internet of things.
(3) When the scheduler of the conventional open-source framework Kubernetes schedules different pod replicas under the same resource controller, the pre-selection and preference operations are performed on the same nodes, so the same work is repeated each time.
(4) The auto-scaling function of the conventional open-source framework Kubernetes requires the application provider to define a parameter set; determining this parameter set is unpredictable and can only be done from experience, so resources are likely to be wasted or insufficient. These management parameters are also static while incoming requests change frequently; in this situation the scaling decision is passive in nature rather than proactive.
The difficulty in solving the above problems and defects is as follows: compared with the traditional architecture, the micro-service architecture reduces the granularity of services and, by building services with independent life cycles, shortens the development cycle, reduces the inherent complexity of large services and improves scalability. The cost, however, is significant performance overhead and complex dynamic resource usage: an application deployed in the cloud usually needs to meet certain performance requirements, such as response time, while the cost of the cloud resources must also be reduced as much as possible. How to reasonably allocate resources and balance the load of micro-service programs through the container scheduling system Kubernetes has become a difficulty for the industry.
The significance of solving the above problems and defects is as follows: with the growing adoption of the micro-service architecture, resource scheduling has gradually become a key technology of cloud platforms. Reasonable resource scheduling can provide sufficient resource guarantees for applications and reduce service response time, thereby improving service quality. For the micro-service management platform Kubernetes, the resource scheduling mechanism is equally important and is an indispensable component of cluster management. The invention provides a resource scheduling strategy based on load prediction on top of the original scheduling strategy: the load of the application at a future time is predicted according to the running situation observed by the monitoring program, and resources are then scheduled in advance according to the prediction result, thereby reducing service response time.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for constructing an Internet of things platform based on a micro-service architecture, and particularly relates to a technique for constructing an Internet of things platform on the Kubernetes architecture.
The invention is realized in such a way that an Internet of things platform construction method based on a micro-service architecture comprises the following steps:
step one, dividing the Internet of things platform into micro-service levels; the micro-service levels comprise a data access layer, a data processing layer, a service layer and an application layer, which decouple the functions of the platform, reduce coupling and improve extensibility;
step two, building a Kubernetes cluster with three master nodes and six worker nodes to achieve high availability of the cluster;
step three, building every split project of the platform into a Docker image file, so that the services can be deployed more quickly;
step four, using the container cluster management tool Kubernetes as the micro-service deployment module, and deploying the image files produced in step three onto the Kubernetes cluster, so that the split micro-service Internet of things platform can be managed more conveniently;
step five, when load balancing of the cluster services is realized, achieving intelligent capacity expansion and reduction through monitoring states, a grading stage, plan making and execution operations;
step six, adding a monitoring module, a prediction module and a dynamic resource scheduling module to the Kubernetes architecture, and expanding and reducing capacity automatically;
step seven, collecting the resource usage of all containers on each node through the monitoring module, wherein the monitoring module is a monitoring container running on each node;
step eight, selecting Prometheus + Grafana as the monitoring scheme so as to monitor the running state of the applications more accurately;
step nine, building a grey prediction model from the historical resource usage data provided by the monitoring module, predicting the resource usage over a future period of time, and scheduling resources in advance, thereby reducing service response time;
step ten, using cAdvisor to obtain the CPU utilization and memory utilization values of the node at the previous instants, and simulating them with the GM(1,1) prediction algorithm;
step eleven, performing an accuracy check on the prediction data of step ten, so that the prediction result is more accurate;
step twelve, collecting historical resource usage data of the applications running on the Kubernetes platform, predicting the resource usage over a future period of time with the grey prediction model, and then calculating the scaling time and the predicted workload in the analysis stage;
step thirteen, realizing cluster preemptive scheduling so that tasks with high priority can be handled first;
step fourteen, considering the CPU utilization in the scheduling algorithm while judging multiple indicators including the memory utilization and the network state of the application;
and step fifteen, performing specific scheduling of resources based on multiple tenants.
Further, in step eight, selecting Prometheus + Grafana as the monitoring scheme comprises the following (an illustrative query sketch follows this list):
(1) integrating Prometheus into the deployed Kubernetes cluster;
(2) optimizing the Prometheus deployment to realize hot reloading of its configuration;
(3) configuring Prometheus information collection (scrape) rules to collect information about the containers running in the Kubernetes cluster.
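As an illustration of steps (1) to (3), the Python sketch below shows one way the container metrics collected by the scrape rules could be read back through the Prometheus HTTP query API. It is a minimal sketch, not part of the patented method; the in-cluster service address, the namespace label value and the rate window are assumptions.

```python
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # assumed in-cluster service address


def query_pod_cpu(namespace="iot-platform", window="5m"):
    """Query per-pod CPU usage (cores) over the given window via the Prometheus HTTP API."""
    promql = (
        'sum(rate(container_cpu_usage_seconds_total'
        f'{{namespace="{namespace}"}}[{window}])) by (pod)'
    )
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql}, timeout=10
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # Each entry looks like {"metric": {"pod": ...}, "value": [timestamp, "cpu_cores"]}
    return {item["metric"].get("pod", "unknown"): float(item["value"][1]) for item in result}


if __name__ == "__main__":
    for pod, cpu in query_pod_cpu().items():
        print(f"{pod}: {cpu:.3f} cores")
```

The same query pattern can be reused for memory metrics (e.g. container_memory_working_set_bytes), which is why the monitoring scheme feeds directly into the prediction model of the following steps.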
Further, in step ten, obtaining the CPU utilization and memory utilization values of the node at the previous instants with cAdvisor and simulating them with the GM(1,1) prediction algorithm comprises:
assuming the CPU utilization of a node is Uc and the memory utilization is Um, obtaining from cAdvisor the values observed at the previous n instants, Uc = {Uc(1), Uc(2), …, Uc(n)} and Um = {Um(1), Um(2), …, Um(n)}, then simulating each sequence with the GM(1,1) prediction algorithm and predicting the values of Uc and Um at time n+1; the calculation steps are as follows:
(1) Assume a time series x^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(N)}, where the number of original values is N. A new sequence x^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(N)} is generated by one accumulation, namely:
x^(1)(k) = x^(0)(1) + x^(0)(2) + … + x^(0)(k), k = 1, 2, …, N (1)
According to the grey prediction method, the corresponding whitening differential equation of the GM(1,1) model can be obtained:
dx^(1)/dt + α·x^(1) = μ (2)
where α is called the development grey number and μ is called the endogenous control grey number, a constant input to the system. This equation satisfies the initial condition
x^(1) = x^(1)(t0) when t = t0 (3)
and its solution is
x^(1)(t) = [x^(1)(t0) − μ/α]·e^(−α(t − t0)) + μ/α (4)
For discrete values sampled at equal intervals (taking t0 = 1):
x^(1)(k + 1) = [x^(1)(1) − μ/α]·e^(−αk) + μ/α
The grey-model approach is to use the once-accumulated sequence (1) to estimate the constants α and μ by the least squares method.
(2) Because x^(1)(1) is kept as the initial value, x^(1)(2), x^(1)(3), …, x^(1)(N) are substituted into equation (2), and the derivative is replaced by a difference; sampling at equal intervals gives Δt = (t + 1) − t = 1, so the derivative is replaced by
dx^(1)/dt ≈ Δx^(1)(i)/Δt = x^(1)(i) − x^(1)(i − 1) = x^(0)(i)
Equation (2) then gives
x^(0)(i) + α·x^(1)(i) = μ, i = 2, 3, …, N (5)
Moving the α·x^(1)(i) term to the right-hand side and writing it as a product of vectors:
x^(0)(i) = (−x^(1)(i), 1)·(α, μ)^T
Because the difference quotient Δx^(1)(i)/Δt involves the accumulated sequence x^(1) at two adjacent instants, it is more reasonable to replace x^(1)(i) by the average of the two adjacent values, i.e. x^(1)(i) is replaced by
z^(1)(i) = [x^(1)(i) + x^(1)(i − 1)]/2
Writing equation (5) as a matrix expression:
Y = B·(α, μ)^T (6)
with
B = [ −[x^(1)(1) + x^(1)(2)]/2, 1; −[x^(1)(2) + x^(1)(3)]/2, 1; …; −[x^(1)(N − 1) + x^(1)(N)]/2, 1 ]
Y = (x^(0)(2), x^(0)(3), …, x^(0)(N))^T
(3) Let â = (α, μ)^T; the least squares estimate of the equation system (6) is then:
â = (B^T·B)^(−1)·B^T·Y (7)
(4) Substituting the estimated values α̂ and μ̂ into equation (4) yields the corresponding time response equation:
x̂^(1)(k + 1) = [x^(0)(1) − μ̂/α̂]·e^(−α̂k) + μ̂/α̂ (8)
When k = 1, 2, …, N − 1, equation (8) gives the fitted values x̂^(1)(k + 1); when k ≥ N, x̂^(1)(k + 1) are the predicted values. These values belong to the once-accumulated sequence x^(1) and are restored by an inverse accumulation (successive subtraction) operation:
x̂^(0)(k + 1) = x̂^(1)(k + 1) − x̂^(1)(k)
When k = 1, 2, …, N − 1 this gives the fitted values of the original sequence x^(0); when k ≥ N it gives the predicted values of x^(0).
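The following Python sketch implements the GM(1,1) calculation steps above with NumPy. It is illustrative only; the sample utilisation values in the usage example are hypothetical, not measured data from the patent.

```python
import numpy as np


def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) model to the sequence x0 and forecast `steps` future values.

    Follows the steps above: one accumulation, least-squares estimation of
    alpha and mu, the time response equation (8), and inverse accumulation.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                   # once-accumulated sequence, eq. (1)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # averages of adjacent accumulated values
    B = np.column_stack((-z1, np.ones(n - 1)))           # data matrix of eq. (6)
    Y = x0[1:]                                            # Y = (x0(2), ..., x0(N))^T
    alpha, mu = np.linalg.lstsq(B, Y, rcond=None)[0]      # least-squares estimate, eq. (7)

    k = np.arange(n + steps)                              # k = 0 corresponds to x0(1)
    x1_hat = (x0[0] - mu / alpha) * np.exp(-alpha * k) + mu / alpha   # time response, eq. (8)
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))           # inverse accumulation
    return x0_hat[:n], x0_hat[n:]                         # (fitted values, predicted values)


if __name__ == "__main__":
    uc = [0.42, 0.45, 0.47, 0.52, 0.55, 0.58]            # hypothetical Uc(1)..Uc(n)
    fitted, predicted = gm11_forecast(uc, steps=1)
    print("fitted:", np.round(fitted, 3))
    print("Uc(n+1) prediction:", round(predicted[0], 3))
```

The same function can be applied to the memory utilisation sequence Um to obtain its value at time n+1.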
Further, in step eleven, the accuracy check of the prediction data comprises:
(1) residual test, calculating respectively:
residual:
ε(k) = x^(0)(k) − x̂^(0)(k), k = 1, 2, …, N
relative residual:
e(k) = ε(k)/x^(0)(k)
(2) posterior difference test, calculating respectively:
mean of x^(0):
x̄ = (1/N)·Σ_{k=1}^{N} x^(0)(k)
variance of x^(0):
S1² = (1/N)·Σ_{k=1}^{N} [x^(0)(k) − x̄]²
mean of the residuals:
ε̄ = (1/N)·Σ_{k=1}^{N} ε(k)
variance of the residuals:
S2² = (1/N)·Σ_{k=1}^{N} [ε(k) − ε̄]²
posterior difference ratio:
C = S2/S1
small error probability:
P = P{ |ε(k) − ε̄| < 0.6745·S1 }
(3) constructing a prediction accuracy grade comparison table.
Further, in step (3), in the prediction accuracy grade comparison table, when P > 0.95 and C < 0.35 the prediction accuracy grade is good; when P > 0.80 and C < 0.45 the prediction accuracy grade is qualified; when P > 0.70 and C < 0.50 the prediction accuracy grade is marginal; and when P ≤ 0.70 and C ≥ 0.65 the prediction accuracy grade is unqualified.
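A minimal Python sketch of the residual test, posterior difference test and grade lookup described above (illustrative only; it assumes the fitted values x̂^(0)(k) have already been produced by the GM(1,1) model):

```python
import numpy as np


def gm11_accuracy(x0, x0_hat):
    """Posterior-difference test for a GM(1,1) fit, following the formulas above.

    Returns the posterior difference ratio C, the small error probability P,
    the relative residuals and the grade from the comparison table.
    """
    x0, x0_hat = np.asarray(x0, float), np.asarray(x0_hat, float)
    eps = x0 - x0_hat                                      # residuals epsilon(k)
    rel = eps / x0                                         # relative residuals
    s1 = np.std(x0)                                        # sqrt of variance of the original sequence
    s2 = np.std(eps)                                       # sqrt of variance of the residuals
    c = s2 / s1                                            # posterior difference ratio
    p = np.mean(np.abs(eps - eps.mean()) < 0.6745 * s1)    # small error probability

    if p > 0.95 and c < 0.35:
        grade = "good"
    elif p > 0.80 and c < 0.45:
        grade = "qualified"
    elif p > 0.70 and c < 0.50:
        grade = "marginal"
    else:
        grade = "unqualified"
    return c, p, rel, grade
```

Only forecasts whose grade is at least qualified would normally be passed on to the scheduling and scaling stages.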
Further, in step thirteen, the realization of cluster preemptive scheduling comprises:
first, Pods are divided into high priority and low priority, and a user-definable sub-priority is added within each priority; during scheduling, the Kubernetes cluster schedules high-priority Pods first, and when the cluster resources cannot support running a container, it also supports high-priority Pods preempting low-priority Pods.
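A simplified sketch of the preemption decision described above (illustrative only; the Pod and Node structures are plain placeholders rather than Kubernetes API objects, and the priority values are assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Pod:
    name: str
    priority: int          # higher number = higher priority (e.g. 1000 high, 100 low)
    cpu: float             # requested CPU cores
    mem: float             # requested memory (GiB)


@dataclass
class Node:
    name: str
    cpu_free: float
    mem_free: float
    pods: List[Pod] = field(default_factory=list)


def fits(node: Node, pod: Pod) -> bool:
    return node.cpu_free >= pod.cpu and node.mem_free >= pod.mem


def preempt_for(node: Node, pending: Pod) -> Optional[List[Pod]]:
    """Pick the lowest-priority victims whose eviction would let `pending` fit.

    Returns the victim list, or None if preemption on this node cannot help;
    only strictly lower-priority Pods may be evicted.
    """
    victims, cpu, mem = [], node.cpu_free, node.mem_free
    for victim in sorted(node.pods, key=lambda p: p.priority):
        if victim.priority >= pending.priority:
            break                              # never evict equal or higher priority Pods
        victims.append(victim)
        cpu += victim.cpu
        mem += victim.mem
        if cpu >= pending.cpu and mem >= pending.mem:
            return victims
    return None
```

The scheduler would first try nodes where fits() already holds and fall back to preempt_for() only when no node has enough free resources.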
Further, in step fifteen, the specific scheduling of resources based on multiple tenants comprises the following (see the sketch after this list):
(1) before starting, first auditing instances that have been deleted, stopped or have crashed abnormally;
(2) inner loop: selecting a user in the priority ordering, selecting a suspended task from that user's queue, and then determining whether any node can host the task; if no node can host the task, the user is deleted from the list and the inner loop continues with the next user in the list;
(3) outer loop: if a match is found between the resource requirements of the task and the available resources of a node, the user is removed from the inner loop; the scheduling priorities of all users are then recalculated to generate a new list, and another round of scheduling is performed through the inner loop.
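A simplified Python sketch of one round of the inner/outer loop described above (illustrative only; tasks and nodes are represented as plain dictionaries with assumed 'cpu'/'mem' request fields rather than real Kubernetes objects):

```python
from collections import deque
from typing import Dict, List, Optional, Tuple


def schedule_round(users: List[str],
                   queues: Dict[str, deque],
                   nodes: List[dict]) -> Optional[Tuple[str, dict, dict]]:
    """One round of the two-level, multi-tenant scheduling loop sketched above.

    `users` is already sorted by tenant scheduling priority, `queues` maps each
    user to a deque of suspended tasks (dicts with 'cpu'/'mem' requests) and
    `nodes` is a list of dicts with 'name', 'cpu_free' and 'mem_free'.
    Returns the (user, task, node) match that was bound, or None when no
    pending task fits on any node.
    """
    candidates = list(users)
    while candidates:                          # inner loop over tenants
        user = candidates[0]
        queue = queues.get(user) or deque()
        task = queue[0] if queue else None
        chosen = None
        if task is not None:
            for node in nodes:                 # can any node host this task?
                if node["cpu_free"] >= task["cpu"] and node["mem_free"] >= task["mem"]:
                    chosen = node
                    break
        if chosen is None:
            candidates.pop(0)                  # no node can host it: drop this user
            continue                           # and continue with the next user
        queue.popleft()                        # bind the task to the chosen node
        chosen["cpu_free"] -= task["cpu"]
        chosen["mem_free"] -= task["mem"]
        return user, task, chosen              # outer loop: caller re-ranks all users


# Outer loop: after every returned match the caller recomputes the scheduling
# priority of every user, rebuilds the sorted user list and calls
# schedule_round() again until it returns None.
```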
Another object of the present invention is to provide an Internet of things platform construction system using the above Internet of things platform construction method based on a micro-service architecture, the Internet of things platform construction system comprising:
a micro-service level division module, used for dividing the Internet of things platform into micro-service levels; the micro-service levels comprise a data access layer, a data processing layer, a service layer and an application layer;
a Kubernetes cluster building module, used for building a Kubernetes cluster with three master nodes and six worker nodes to achieve high availability of the cluster;
a mirror image construction module, used for building all split projects of the platform into images and deploying them correspondingly on the cluster; the micro-service deployment module uses the container cluster management tool Kubernetes;
a capacity expansion and reduction module, used for achieving intelligent capacity expansion and reduction through monitoring states, a grading stage, plan making and execution operations when load balancing of the cluster services is realized;
a module adding module, used for adding a monitoring module, a prediction module, a dynamic resource scheduling module and an automatic capacity expansion and reduction module to the Kubernetes architecture;
a data collection module, used for collecting the resource usage of all containers on each node through the monitoring module; the monitoring module runs a monitoring container on each node;
a monitoring scheme selection module, used for selecting Prometheus + Grafana as the monitoring scheme;
a prediction model building module, used for building a grey prediction model from the historical resource usage data provided by the monitoring module and predicting the resource usage over a future period of time;
a simulation module, used for obtaining the CPU utilization and memory utilization values of the node at the previous instants through cAdvisor and simulating them with the GM(1,1) prediction algorithm;
an accuracy check module, used for performing an accuracy check on the prediction data;
a prediction module, used for collecting historical resource usage data of the applications running on the Kubernetes platform, predicting the resource usage over a future period of time with the grey prediction model, and then calculating the scaling time and the predicted workload in the analysis stage (a simple decision sketch follows this list);
a scheduling module, used for realizing cluster preemptive scheduling, considering the CPU utilization in the scheduling algorithm while judging multiple indicators including the memory utilization and the network state of the application, and performing specific scheduling of resources based on multiple tenants.
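As an illustration of how the prediction module's output could drive the capacity expansion and reduction module, the following sketch applies a proportional scaling rule to the predicted CPU utilization. It is illustrative only; the target utilization and replica bounds are assumptions, not values taken from the patent.

```python
import math


def plan_replicas(predicted_cpu_util: float,
                  current_replicas: int,
                  target_util: float = 0.6,
                  min_replicas: int = 1,
                  max_replicas: int = 10) -> int:
    """Decide the replica count for the next interval from the GM(1,1) prediction.

    Scales so that the predicted utilisation per replica returns to the target,
    allowing the scaling plan to be executed before the load actually arrives.
    """
    desired = math.ceil(current_replicas * predicted_cpu_util / target_util)
    return max(min_replicas, min(max_replicas, desired))


if __name__ == "__main__":
    # Example: the prediction module forecasts 85% CPU for a 3-replica micro-service
    print(plan_replicas(0.85, current_replicas=3))   # -> 5 replicas planned in advance
```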
Another object of the present invention is to provide a computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface to implement the Internet of things platform construction method based on a micro-service architecture.
Another object of the present invention is to provide a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the Internet of things platform construction method based on a micro-service architecture.
By combining all the above technical schemes, the invention has the following advantages and positive effects: according to the Internet of things platform construction method based on the micro-service architecture, the application program of an existing Internet of things platform is split into several small services and reconstructed; communication between the services uses REST, and the interface calling method of the monolithic platform is retained to the maximum extent. Meanwhile, container technology is used to deploy the split micro-services in Docker. A traditional virtual machine virtualizes a complete set of hardware, runs a complete operating system on it and then runs the required application on that system, whereas a containerized application runs directly on the host kernel; the container has no kernel of its own and performs no hardware virtualization, so it is easier to migrate and more efficient.
The invention decouples the complex functions of the Internet of things platform and builds the Internet of things application on a set of micro-services; in actual deployment, running multiple instances of each micro-service enhances the flexibility and robustness of the application. The invention also provides a method of realizing resource scheduling by distinguishing multiple tenants: because the Kubernetes scheduler is not suitable for a multi-tenant environment with a group of different tasks and different resource requirements, two-level scheduling is proposed, in which another scheduling layer is integrated on top of the overall Kubernetes scheduler; this respects the scheduling decisions and resource constraints of individual users and improves the fairness of the default overall scheduling mechanism. Meanwhile, a Kubernetes scaling engine is provided: in this engine, the running state of the containers in the Kubernetes cluster collected by Prometheus is used and, with the grey prediction model, the collected data are analysed and predicted, so that load balancing in Kubernetes is faster and more accurate, the scheduling efficiency of the Kubernetes scheduler is improved, the load-balancing capability of the cluster is enhanced, and the demands of the complex and changeable Internet of things environment are met more quickly and accurately.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for constructing an internet of things platform based on a micro-service architecture according to an embodiment of the present invention.
Fig. 2 is a structural block diagram of a platform construction system of the internet of things provided by the embodiment of the invention;
In the figure: 1. micro-service level division module; 2. Kubernetes cluster building module; 3. mirror image construction module; 4. capacity expansion and reduction module; 5. module adding module; 6. data collection module; 7. monitoring scheme selection module; 8. prediction model building module; 9. simulation module; 10. accuracy check module; 11. prediction module; 12. scheduling module.
Fig. 3 is a layered architecture diagram of an internet of things platform according to an embodiment of the present invention.
Fig. 4 is a micro-service orchestration architecture diagram (Kubernetes architecture diagram) provided by an embodiment of the present invention.
Fig. 5 is a diagram of the overall architecture of scheduling provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a method for constructing an internet of things platform based on a micro-service architecture, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the method for constructing an internet of things platform based on a micro-service architecture provided by the embodiment of the invention includes the following steps:
S101, dividing the Internet of things platform into micro-service levels; the micro-service levels comprise a data access layer, a data processing layer, a service layer and an application layer;
S102, building a Kubernetes cluster with three master nodes and six worker nodes to achieve high availability of the cluster;
S103, building all split projects of the platform into images and deploying them correspondingly on the cluster;
S104, using the container cluster management tool Kubernetes as the micro-service deployment module;
S105, when load balancing of the cluster services is realized, achieving intelligent capacity expansion and reduction through monitoring states, a grading stage, plan making and execution operations;
S106, adding a monitoring module, a prediction module and a dynamic resource scheduling module to the Kubernetes architecture, and expanding and reducing capacity automatically;
S107, collecting the resource usage of all containers on each node through the monitoring module; the monitoring module runs a monitoring container on each node;
S108, selecting Prometheus + Grafana as the monitoring scheme;
S109, building a grey prediction model from the historical resource usage data provided by the monitoring module, and predicting the resource usage over a future period of time;
S110, using cAdvisor to obtain the CPU utilization and memory utilization values of the node at the previous instants, and simulating them with the GM(1,1) prediction algorithm;
S111, performing an accuracy check on the prediction data of S110;
S112, collecting historical resource usage data of the applications running on the Kubernetes platform, predicting the resource usage over a future period of time with the grey prediction model, and then calculating the scaling time and the predicted workload in the analysis stage;
S113, realizing cluster preemptive scheduling;
S114, considering the CPU utilization in the scheduling algorithm while judging multiple indicators including the memory utilization and the network state of the application;
and S115, performing specific scheduling of resources based on multiple tenants.
Those skilled in the art can also implement the Internet of things platform construction method based on a micro-service architecture according to the present invention using other steps; the method shown in fig. 1 is only one specific embodiment.
As shown in fig. 2, the internet of things platform construction system provided by the embodiment of the present invention includes:
the micro-service level division module 1, used for dividing the Internet of things platform into micro-service levels; the micro-service levels comprise a data access layer, a data processing layer, a service layer and an application layer;
the Kubernetes cluster building module 2, used for building a Kubernetes cluster with three master nodes and six worker nodes to achieve high availability of the cluster;
the mirror image construction module 3, used for building all split projects of the platform into images and deploying them correspondingly on the cluster; the micro-service deployment module uses the container cluster management tool Kubernetes;
the capacity expansion and reduction module 4, used for achieving intelligent capacity expansion and reduction through monitoring states, a grading stage, plan making and execution operations when load balancing of the cluster services is realized;
the module adding module 5, used for adding a monitoring module, a prediction module, a dynamic resource scheduling module and an automatic capacity expansion and reduction module to the Kubernetes architecture;
the data collection module 6, used for collecting the resource usage of all containers on each node through the monitoring module; the monitoring module runs a monitoring container on each node;
the monitoring scheme selection module 7, used for selecting Prometheus + Grafana as the monitoring scheme;
the prediction model building module 8, used for building a grey prediction model from the historical resource usage data provided by the monitoring module and predicting the resource usage over a future period of time;
the simulation module 9, used for obtaining the CPU utilization and memory utilization values of the node at the previous instants through cAdvisor and simulating them with the GM(1,1) prediction algorithm;
the accuracy check module 10, used for performing an accuracy check on the prediction data;
the prediction module 11, used for collecting historical resource usage data of the applications running on the Kubernetes platform, predicting the resource usage over a future period of time with the grey prediction model, and then calculating the scaling time and the predicted workload in the analysis stage;
the scheduling module 12, used for realizing cluster preemptive scheduling, considering the CPU utilization in the scheduling algorithm while judging multiple indicators including the memory utilization and the network state of the application, and performing specific scheduling of resources based on multiple tenants.
The technical solution of the present invention is further described with reference to the following examples.
1. According to the invention, the application program of an existing Internet of things platform is split into several small services and reconstructed; communication between the services uses REST, and the interface calling method of the monolithic platform is retained to the maximum extent. Meanwhile, container technology is used to deploy the split micro-services in Docker. A traditional virtual machine virtualizes a complete set of hardware, runs a complete operating system on it and then runs the required application on that system, whereas a containerized application runs directly on the host kernel; the container has no kernel of its own and performs no hardware virtualization, so it is easier to migrate and more efficient.
The invention decouples the complex functions of the Internet of things platform and builds the Internet of things application on a set of micro-services; in actual deployment, running multiple instances of each micro-service enhances the flexibility and robustness of the application. The invention also provides a method of realizing resource scheduling by distinguishing multiple tenants: because the Kubernetes scheduler is not suitable for a multi-tenant environment with a group of different tasks and different resource requirements, two-level scheduling is proposed, in which another scheduling layer is integrated on top of the overall Kubernetes scheduler; this respects the scheduling decisions and resource constraints of individual users and improves the fairness of the default overall scheduling mechanism. Meanwhile, a Kubernetes scaling engine is provided: in this engine, the running state of the containers in the Kubernetes cluster collected by Prometheus is used and, with the grey prediction model, the collected data are analysed and predicted, so that load balancing in Kubernetes is faster and more accurate, the scheduling efficiency of the Kubernetes scheduler is improved, the load-balancing capability of the cluster is enhanced, and the demands of the complex and changeable Internet of things environment are met more quickly and accurately.
2. Aiming at the problems in the prior art, the invention provides a method for constructing an Internet of things platform based on a micro-service architecture. The invention is realized in such a way that the Internet of things platform construction method based on a micro-service architecture comprises the following steps:
(1) Dividing the Internet of things platform into micro-service levels: a data access layer, a data processing layer, a service layer and an application layer.
(2) Building a Kubernetes cluster with three master nodes and six worker nodes to achieve high availability of the cluster.
(3) Building all split projects of the platform into images and deploying them correspondingly on the cluster.
(4) Using the container cluster management tool Kubernetes as the micro-service deployment module.
(5) When realizing load balancing of the cluster services, in order to expand and reduce capacity more intelligently, the process to be realized roughly comprises: monitoring the state, a grading stage, making a plan and executing it.
(6) Adding a monitoring module, a prediction module, a dynamic resource scheduling module and an automatic capacity expansion and reduction module to the Kubernetes architecture.
(7) The monitoring module runs a monitoring container on each node to collect the resource usage of all containers on the node.
(8) Selecting Prometheus + Grafana as the monitoring scheme; the specific implementation steps are as follows (an illustrative sketch follows these steps):
Step one: integrating Prometheus into the deployed Kubernetes cluster; owing to the characteristics of Kubernetes, the high availability of Prometheus can be guaranteed.
Step two: optimizing the Prometheus deployment to realize hot reloading of its configuration.
Step three: configuring Prometheus information collection (scrape) rules to collect information about the containers running in the Kubernetes cluster.
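As an illustration of step two, the short sketch below triggers a hot reload of the Prometheus configuration over its lifecycle endpoint. It is illustrative only; the service address is an assumption, and the /-/reload endpoint is only available when Prometheus is started with the --web.enable-lifecycle flag.

```python
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # assumed in-cluster address


def reload_prometheus_config() -> bool:
    """Ask Prometheus to hot-reload its configuration after the scrape rules change."""
    resp = requests.post(f"{PROMETHEUS_URL}/-/reload", timeout=10)
    return resp.status_code == 200
```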
(9) On each worker node the usage of resources changes over time. Therefore, the relationship between the change trend of the monitored node and time is studied, and a prediction model is built from the historical resource usage data provided by the monitoring module so as to predict the resource usage over a future period of time. The grey prediction model is chosen in this patent because it is suitable for modelling small data sets without considering their internal factors.
(10) Since the original scheduling policy of Kubernetes only takes the CPU and memory resources into account, assume the CPU utilization of a node is Uc and the memory utilization is Um; cAdvisor is used to obtain the values observed at the previous n instants, Uc = {Uc(1), Uc(2), …, Uc(n)} and Um = {Um(1), Um(2), …, Um(n)}, then each sequence is simulated with the GM(1,1) prediction algorithm and the values of Uc and Um at time n+1 are predicted; the main calculation steps are as follows:
Step one: assume a time series x^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(N)}, where the number of original values is N. A new sequence x^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(N)} is generated by one accumulation, namely:
x^(1)(k) = x^(0)(1) + x^(0)(2) + … + x^(0)(k), k = 1, 2, …, N (1)
According to the grey prediction method, the corresponding whitening differential equation of the GM(1,1) model can be obtained:
dx^(1)/dt + α·x^(1) = μ (2)
where α is called the development grey number and μ is called the endogenous control grey number, a constant input to the system. This equation satisfies the initial condition
x^(1) = x^(1)(t0) when t = t0 (3)
and its solution is
x^(1)(t) = [x^(1)(t0) − μ/α]·e^(−α(t − t0)) + μ/α (4)
For discrete values sampled at equal intervals (taking t0 = 1):
x^(1)(k + 1) = [x^(1)(1) − μ/α]·e^(−αk) + μ/α
The grey-model approach is to use the once-accumulated sequence (1) to estimate the constants α and μ by the least squares method.
Step two: because x^(1)(1) is kept as the initial value, x^(1)(2), x^(1)(3), …, x^(1)(N) are substituted into equation (2), and the derivative is replaced by a difference; sampling at equal intervals gives Δt = (t + 1) − t = 1, so the derivative is replaced by
dx^(1)/dt ≈ Δx^(1)(i)/Δt = x^(1)(i) − x^(1)(i − 1) = x^(0)(i)
Equation (2) then gives
x^(0)(i) + α·x^(1)(i) = μ, i = 2, 3, …, N (5)
Moving the α·x^(1)(i) term to the right-hand side and writing it as a product of vectors:
x^(0)(i) = (−x^(1)(i), 1)·(α, μ)^T
Because the difference quotient Δx^(1)(i)/Δt involves the accumulated sequence x^(1) at two adjacent instants, it is more reasonable to replace x^(1)(i) by the average of the two adjacent values, i.e. x^(1)(i) is replaced by
z^(1)(i) = [x^(1)(i) + x^(1)(i − 1)]/2
Writing equation (5) as a matrix expression:
Y = B·(α, μ)^T (6)
with
B = [ −[x^(1)(1) + x^(1)(2)]/2, 1; −[x^(1)(2) + x^(1)(3)]/2, 1; …; −[x^(1)(N − 1) + x^(1)(N)]/2, 1 ]
Y = (x^(0)(2), x^(0)(3), …, x^(0)(N))^T
Step three: let â = (α, μ)^T; the least squares estimate of the equation system (6) is then:
â = (B^T·B)^(−1)·B^T·Y (7)
Step four: substituting the estimated values α̂ and μ̂ into equation (4) yields the corresponding time response equation:
x̂^(1)(k + 1) = [x^(0)(1) − μ̂/α̂]·e^(−α̂k) + μ̂/α̂ (8)
When k = 1, 2, …, N − 1, equation (8) gives the fitted values x̂^(1)(k + 1); when k ≥ N, x̂^(1)(k + 1) are the predicted values. These values belong to the once-accumulated sequence x^(1) and are restored by an inverse accumulation (successive subtraction) operation:
x̂^(0)(k + 1) = x̂^(1)(k + 1) − x̂^(1)(k)
When k = 1, 2, …, N − 1 this gives the fitted values of the original sequence x^(0); when k ≥ N it gives the predicted values of x^(0).
(11) Performing an accuracy check on the prediction data of step (10); the implementation is as follows:
Step one: residual test, calculating respectively:
residual:
ε(k) = x^(0)(k) − x̂^(0)(k), k = 1, 2, …, N
relative residual:
e(k) = ε(k)/x^(0)(k)
Step two: posterior difference test, calculating respectively:
mean of x^(0):
x̄ = (1/N)·Σ_{k=1}^{N} x^(0)(k)
variance of x^(0):
S1² = (1/N)·Σ_{k=1}^{N} [x^(0)(k) − x̄]²
mean of the residuals:
ε̄ = (1/N)·Σ_{k=1}^{N} ε(k)
variance of the residuals:
S2² = (1/N)·Σ_{k=1}^{N} [ε(k) − ε̄]²
posterior difference ratio:
C = S2/S1
small error probability:
P = P{ |ε(k) − ε̄| < 0.6745·S1 }
Step three: prediction accuracy grade comparison table (see Table 1)
TABLE 1 Prediction accuracy grade comparison table
Grade         Small error probability P    Posterior difference ratio C
Good          P > 0.95                     C < 0.35
Qualified     P > 0.80                     C < 0.45
Marginal      P > 0.70                     C < 0.50
Unqualified   P ≤ 0.70                     C ≥ 0.65
(12) By collecting historical resource usage data of the applications running on the Kubernetes platform, the resource usage over a future period of time is predicted with the grey prediction model, and the scaling time and the predicted workload are then calculated in the analysis stage.
(13) Realizing cluster preemptive scheduling: first, Pods are divided into high priority and low priority, and a user-definable sub-priority is added within each priority; during scheduling, the Kubernetes cluster schedules high-priority Pods first, and when the cluster resources cannot support running a container, it also supports high-priority Pods preempting low-priority Pods.
(14) In the scheduling algorithm, not only the CPU utilization is considered but multiple indicators such as the memory utilization and the network state of the application are also judged, providing a better basis for task scheduling.
(15) The specific scheduling of resources is performed based on multiple tenants; the whole scheduling process is as follows:
1. before starting, first audit instances that have been deleted, stopped or have crashed abnormally;
2. inner loop: select a user in the priority ordering, select a suspended task from that user's queue, and then determine whether any node can host the task; if no node can host the task, delete the user from the list and let the inner loop continue with the next user in the list;
3. outer loop: if a match is found between the resource requirements of the task and the available resources of a node, the user is removed from the inner loop; the scheduling priorities of all users are then recalculated, a new list is generated, and another round of scheduling is performed through the inner loop.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware or any combination thereof. When software is used wholly or partially, the implementation may take the form of a computer program product comprising one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD) or a semiconductor medium (e.g., solid state disk (SSD)).
The above description is only for the purpose of illustrating the present invention and is not intended to limit the scope of the invention, which is intended to cover all modifications, equivalents and improvements within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for constructing an Internet of things platform based on a micro-service architecture, characterized by comprising the following steps:
dividing the Internet of things platform into micro-service levels; the micro-service levels comprise a data access layer, a data processing layer, a service layer and an application layer;
building a Kubernetes cluster with three master nodes and six worker nodes to achieve high availability of the cluster;
building all split projects of the platform into images and deploying them correspondingly on the cluster;
using the container cluster management tool Kubernetes as the micro-service deployment module;
when load balancing of the cluster services is realized, achieving intelligent capacity expansion and reduction through monitoring states, a grading stage, plan making and execution operations;
adding a monitoring module, a prediction module, a dynamic resource scheduling module and an automatic capacity expansion and reduction module to the Kubernetes architecture;
collecting the resource usage of all containers on each node through the monitoring module; the monitoring module runs a monitoring container on each node;
selecting Prometheus + Grafana as the monitoring scheme;
building a grey prediction model from the historical resource usage data provided by the monitoring module, and predicting the resource usage over a future period of time;
using cAdvisor to obtain the CPU utilization and memory utilization values of the node at the previous instants, and simulating them with the GM(1,1) prediction algorithm;
performing an accuracy check on the prediction data;
collecting historical resource usage data of the applications running on the Kubernetes platform, predicting the resource usage over a future period of time with the grey prediction model, and then calculating the scaling time and the predicted workload in the analysis stage;
realizing cluster preemptive scheduling;
considering the CPU utilization in the scheduling algorithm while judging multiple indicators including the memory utilization and the network state of the application;
and performing specific scheduling of resources based on multiple tenants.
2. The Internet of things platform construction method based on a micro-service architecture as claimed in claim 1, wherein selecting Prometheus + Grafana as the monitoring scheme comprises:
(1) integrating Prometheus into the deployed Kubernetes cluster;
(2) optimizing the Prometheus deployment to realize hot reloading of its configuration;
(3) configuring Prometheus information collection (scrape) rules to collect information about the containers running in the Kubernetes cluster.
3. The Internet of things platform construction method based on a micro-service architecture as claimed in claim 1, wherein obtaining the CPU utilization and memory utilization values of the node at the previous instants with cAdvisor and simulating them with the GM(1,1) prediction algorithm comprises:
assuming the CPU utilization of a node is Uc and the memory utilization is Um, obtaining from cAdvisor the values observed at the previous n instants, Uc = {Uc(1), Uc(2), …, Uc(n)} and Um = {Um(1), Um(2), …, Um(n)}, then simulating each sequence with the GM(1,1) prediction algorithm and predicting the values of Uc and Um at time n+1; the calculation steps are as follows:
(1) assume a time series, x(0)={x(0)(1),x(0)(2),…,x(0)(N), the number of original values is N, and then a new sequence, x, is generated by a single accumulation(1)={x(1)(1),x(1)(2),…,x(1)(N), the summary can be:
Figure FDA0002958526900000021
according to the grey prediction method, the corresponding whitening differential equation of the GM (1, 1) model can be obtained:
Figure FDA0002958526900000022
where alpha is called constant, mu is called developed gray number, and the gray number for endogenous control is a constant input to the system, and this equation satisfies the initial conditions,
when t is equal to t0Time x(1)=x(1)(t0) (3)
Is solved as
Figure FDA0002958526900000023
Discrete values, t, sampled at equal intervals01, then:
Figure FDA0002958526900000031
the approach to the gray model is to accumulate the sequence once (1) to estimate the constants α and μ by the least squares method;
(2) because of x(1)(1) Left as the initial value, so that x(1)(2),x(1)(3),...,x(1)(N) is substituted into equation (2) to substitute the differential, and Δ t ═ 1 (t +1) to t ═ 1 (t +1) are obtained by sampling at equal intervals, instead of the differential
Figure FDA0002958526900000032
Is like that
Figure FDA0002958526900000033
Then, the formula (2) has
Figure FDA0002958526900000034
Will ax(1)(i) The term moves to the right and is written as the product of the vector quantities:
Figure FDA0002958526900000035
due to the fact that
Figure FDA0002958526900000036
Involving an accumulation column x(1)Of two time instants, thus x(1)(i) It is more reasonable to take the average substitution of the two moments before and after, namely x(1)(i) Is replaced by
Figure FDA0002958526900000037
Writing equation (5) as a matrix expression:
Figure FDA0002958526900000038
y is (x)(0)(2),x(0)(3),…,x(0)(N))T
(3) Suppose that
Figure FDA0002958526900000041
The least squares estimate of equation set (6) is then:
Figure FDA0002958526900000042
(4) estimate the value
Figure FDA0002958526900000043
And
Figure FDA0002958526900000044
substituting the equation (4) to obtain a corresponding time equation:
Figure FDA0002958526900000045
when k is 1, 2, …, N-1, the result is obtained from equation (8)
Figure FDA0002958526900000046
Is the fitted value; when k is more than or equal to N,
Figure FDA0002958526900000047
for predicting values, this is relative to a once-accumulated sequence x(1)The fitting value of (a) is reduced by a post-subtraction operation, and when k is 1, 2, …, N-1, the original sequence x is obtained(0)Fitting value of
Figure FDA0002958526900000048
When k is more than or equal to N, the original sequence x can be obtained(0)And (6) forecasting values.
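A minimal runnable sketch of steps (1) to (4) in Python with NumPy, provided only to illustrate the calculation of equations (1) to (8); the sample utilization series is invented for the example and the function name is an assumption.

    import numpy as np

    def gm11_forecast(x0: np.ndarray, steps: int = 1):
        """Fit a GM(1,1) model to the original series x0 and forecast `steps`
        values ahead, following equations (1)-(8)."""
        n = len(x0)
        x1 = np.cumsum(x0)                          # equation (1): one accumulation
        z1 = 0.5 * (x1[1:] + x1[:-1])               # average of adjacent accumulated values
        B = np.column_stack((-z1, np.ones(n - 1)))  # matrix B of equation (6)
        Y = x0[1:].reshape(-1, 1)                   # vector Y of equation (6)
        alpha, mu = (np.linalg.inv(B.T @ B) @ B.T @ Y).flatten()          # equation (7)
        k = np.arange(n + steps)
        x1_hat = (x0[0] - mu / alpha) * np.exp(-alpha * k) + mu / alpha   # equation (8)
        x0_hat = np.empty(n + steps)                # inverse accumulation (post-subtraction)
        x0_hat[0] = x0[0]
        x0_hat[1:] = np.diff(x1_hat)
        return x0_hat[:n], x0_hat[n:], alpha, mu

    # Example: six CPU utilization samples Uc(1..6); the 7th value is the prediction Uc(n+1)
    uc = np.array([0.42, 0.45, 0.47, 0.51, 0.54, 0.58])
    fit, pred, alpha, mu = gm11_forecast(uc, steps=1)
    print("fitted:", np.round(fit, 3), "predicted Uc(n+1):", np.round(pred, 3))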
4. The Internet of things platform construction method based on the micro-service architecture as claimed in claim 1, wherein performing the precision test on the predicted data comprises:
(1) residual test: respectively calculating:
the residual:

    \varepsilon(k) = x^{(0)}(k) - \hat{x}^{(0)}(k)

the relative residual:

    e(k) = \frac{\varepsilon(k)}{x^{(0)}(k)} \times 100\%

(2) posterior difference test: respectively calculating:
the mean of x^{(0)}:

    \bar{x} = \frac{1}{N} \sum_{k=1}^{N} x^{(0)}(k)

the variance of x^{(0)}:

    S_1^2 = \frac{1}{N} \sum_{k=1}^{N} \left[ x^{(0)}(k) - \bar{x} \right]^2

the mean of the residuals:

    \bar{\varepsilon} = \frac{1}{N} \sum_{k=1}^{N} \varepsilon(k)

the variance of the residuals:

    S_2^2 = \frac{1}{N} \sum_{k=1}^{N} \left[ \varepsilon(k) - \bar{\varepsilon} \right]^2

the posterior difference ratio:

    C = \frac{S_2}{S_1}

the small error probability:

    P = P\{ |\varepsilon(k) - \bar{\varepsilon}| < 0.6745 S_1 \}

(3) constructing a prediction precision grade comparison table.
5. The Internet of things platform construction method based on the micro-service architecture as claimed in claim 4, wherein in step (3), the prediction precision grade comparison table is as follows: when P > 0.95 and C < 0.35, the prediction precision grade is good; when P > 0.80 and C < 0.45, the prediction precision grade is qualified; when P > 0.70 and C < 0.50, the prediction precision grade is barely qualified; and when P ≤ 0.70 and C ≥ 0.65, the prediction precision grade is unqualified.
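A sketch of the precision test of claims 4 and 5: the residuals, the posterior difference ratio C and the small error probability P are computed from the original series and its GM(1,1) fit and then mapped to a grade. The thresholds follow claim 5; the function name is an assumption, and cases not covered by the first three grades are treated as unqualified for simplicity.

    import numpy as np

    def gm11_precision(x0: np.ndarray, x0_hat: np.ndarray):
        """Posterior difference test for a GM(1,1) fit: returns (C, P, grade)."""
        eps = x0 - x0_hat                       # residuals epsilon(k)
        s1 = np.std(x0)                         # standard deviation of the original series
        s2 = np.std(eps)                        # standard deviation of the residuals
        c = s2 / s1                             # posterior difference ratio C = S2/S1
        p = np.mean(np.abs(eps - eps.mean()) < 0.6745 * s1)   # small error probability P
        if p > 0.95 and c < 0.35:
            grade = "good"
        elif p > 0.80 and c < 0.45:
            grade = "qualified"
        elif p > 0.70 and c < 0.50:
            grade = "barely qualified"
        else:
            grade = "unqualified"               # simplification of the remaining cases
        return c, p, grade

    # x0 is the observed series and x0_hat the fitted values returned by the GM(1,1) sketch above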
6. The Internet of things platform construction method based on the micro-service architecture as claimed in claim 1, wherein realizing the cluster preemptive scheduling comprises: firstly, Pods are divided into high priority and low priority, and a user-definable sub-priority is added within each priority level; during scheduling, the Kubernetes cluster schedules high-priority Pods first, and when the cluster resources cannot support running a container, a high-priority Pod is allowed to preempt a low-priority Pod.
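A minimal sketch of how the high/low priority split with preemption in claim 6 could be declared on Kubernetes: two PriorityClass objects and a Pod that references one of them by name. The class names, values and image are assumptions chosen for illustration, and PyYAML is assumed to be available; the emitted manifests could then be applied with kubectl.

    import yaml  # PyYAML, assumed installed

    def priority_class(name: str, value: int, default: bool = False) -> dict:
        """Build a PriorityClass manifest; a higher `value` means higher priority."""
        return {
            "apiVersion": "scheduling.k8s.io/v1",
            "kind": "PriorityClass",
            "metadata": {"name": name},
            "value": value,
            "globalDefault": default,
            "preemptionPolicy": "PreemptLowerPriority",
            "description": f"{name} for IoT platform workloads",
        }

    high = priority_class("iot-high-priority", 1000000)
    low = priority_class("iot-low-priority", 1000, default=True)

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "data-access-service"},
        "spec": {
            "priorityClassName": "iot-high-priority",   # scheduled first, may preempt low-priority Pods
            "containers": [{"name": "app", "image": "iot/data-access:latest"}],
        },
    }

    print(yaml.safe_dump_all([high, low, pod], sort_keys=False))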
7. The Internet of things platform construction method based on the micro-service architecture as claimed in claim 1, wherein the specific resource scheduling based on multi-tenancy comprises:
(1) before starting, first auditing the instances that have been deleted, stopped or crashed abnormally;
(2) inner loop: selecting a user according to the priority ordering, selecting a pending task from that user's queue, and then determining whether any node can host the task; if no node can host the task, deleting the user from the list, and continuing the inner loop with the next user in the list;
(3) outer loop: if a match is found between the resource requirements of the task and the available resources of a node, removing the user from the inner loop; then recalculating the scheduling priorities of all users to generate a new list, and performing another round of scheduling through the inner loop.
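The inner/outer loop of claim 7 can be sketched as follows; Tenant, Task and Node are simplified structures with CPU and memory only, and the priority function is an illustrative assumption rather than the platform's actual formula. Calling schedule_round repeatedly until it returns None corresponds to the outer loop, since the priority-ordered list is rebuilt on every call.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Task:
        name: str
        cpu: float
        mem: float

    @dataclass
    class Node:
        name: str
        free_cpu: float
        free_mem: float

        def can_host(self, task: Task) -> bool:
            return self.free_cpu >= task.cpu and self.free_mem >= task.mem

    @dataclass
    class Tenant:
        name: str
        share: float                               # weight used for the priority ordering
        used_cpu: float = 0.0
        queue: List[Task] = field(default_factory=list)

        def priority(self) -> float:
            # Illustrative rule: tenants that have used less of their share go first
            return self.used_cpu / max(self.share, 1e-9)

    def schedule_round(tenants: List[Tenant], nodes: List[Node]) -> Optional[Tuple[Tenant, Task, Node]]:
        """One outer-loop round: place a single task or return None if nothing fits."""
        candidates = sorted(tenants, key=lambda t: t.priority())   # recomputed priority list
        while candidates:                                          # inner loop over users
            tenant = candidates[0]
            task = tenant.queue[0] if tenant.queue else None
            node = next((n for n in nodes if task and n.can_host(task)), None)
            if node is None:
                candidates.pop(0)      # no node can host the task: drop the user, continue
                continue
            tenant.queue.pop(0)        # match found: leave the inner loop and commit
            tenant.used_cpu += task.cpu
            node.free_cpu -= task.cpu
            node.free_mem -= task.mem
            return tenant, task, node
        return None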
8. An Internet of things platform construction system applying the Internet of things platform construction method based on the micro-service architecture according to any one of claims 1 to 7, wherein the Internet of things platform construction system comprises:
the micro-service level dividing module, used for dividing the Internet of things platform into micro-service levels, wherein the micro-service levels comprise a data access layer, a data processing layer, a service layer and an application layer;
the Kubernetes cluster building module, used for building a Kubernetes cluster with three master nodes and six worker nodes, realizing high availability of the cluster;
the image construction module, used for building all split services of the platform into container images and deploying them correspondingly on the cluster, wherein the micro-service deployment module uses the container cluster management tool Kubernetes;
the capacity expansion and reduction module, used for realizing intelligent capacity expansion and reduction through the operations of monitoring states, grading stages, making plans and executing, while realizing load balancing of the cluster services;
the module adding module, used for adding the monitoring module, the prediction module, the dynamic resource scheduling module and the automatic capacity expansion and reduction module on the Kubernetes architecture;
the data collection module, used for collecting the resource usage of all containers on each node through the monitoring module, wherein the monitoring module runs a monitoring container on every node;
the monitoring scheme selection module, used for selecting Prometheus + Grafana as the monitoring scheme;
the prediction model establishing module, used for establishing a grey prediction model by using the historical resource usage data provided by the monitoring module and predicting the resource usage in a future period of time;
the simulation module, used for obtaining the CPU utilization rate and the memory utilization rate of the node at previous time instants through cAdvisor and fitting them with the GM(1,1) prediction algorithm;
the precision test module, used for performing the precision test on the predicted data;
the prediction module, used for collecting historical resource usage data of the applications running on the Kubernetes platform, predicting the resource usage in a future period of time by using the grey prediction model, and then calculating the scaling time and the workload forecast in the analysis stage;
the scheduling module, used for realizing cluster preemptive scheduling, evaluating multiple indicators through the scheduling algorithm, including the CPU utilization rate, the memory utilization rate and the network state of the application, and meanwhile performing specific resource scheduling based on multi-tenancy.
9. A computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface to implement the Internet of things platform construction method based on the micro-service architecture according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to execute the Internet of things platform construction method based on the micro-service architecture according to any one of claims 1 to 7.
CN202110229566.1A 2021-03-02 2021-03-02 Internet of things platform construction method based on micro-service architecture Pending CN113110914A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110229566.1A CN113110914A (en) 2021-03-02 2021-03-02 Internet of things platform construction method based on micro-service architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110229566.1A CN113110914A (en) 2021-03-02 2021-03-02 Internet of things platform construction method based on micro-service architecture

Publications (1)

Publication Number Publication Date
CN113110914A true CN113110914A (en) 2021-07-13

Family

ID=76709642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110229566.1A Pending CN113110914A (en) 2021-03-02 2021-03-02 Internet of things platform construction method based on micro-service architecture

Country Status (1)

Country Link
CN (1) CN113110914A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829494A (en) * 2018-06-25 2018-11-16 杭州谐云科技有限公司 Container cloud platform intelligence method for optimizing resources based on load estimation
US20200019444A1 (en) * 2018-07-11 2020-01-16 International Business Machines Corporation Cluster load balancing based on assessment of future loading
CN110149396A (en) * 2019-05-20 2019-08-20 华南理工大学 A kind of platform of internet of things construction method based on micro services framework
CN112199150A (en) * 2020-08-13 2021-01-08 北京航空航天大学 Online application dynamic capacity expansion and contraction method based on micro-service calling dependency perception

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHIA-CHEN CHANG: "A Kubernetes-Based Monitoring Platform for Dynamic Cloud Resource Provisioning", 2017 IEEE Global Communications Conference, 15 January 2018 (2018-01-15) *
SHEN YULONG: "Handover prediction algorithm in wireless heterogeneous networks", Journal on Communications (通信学报) *
SHEN YULONG: "Handover prediction algorithm in wireless heterogeneous networks", Journal on Communications (通信学报), 31 October 2009 (2009-10-31)
WANG TIANZE: "Research on dynamic scaling of cloud resources based on the grey model", Software Guide (软件导刊) *
WANG TIANZE: "Research on dynamic scaling of cloud resources based on the grey model", Software Guide (软件导刊), no. 04, 15 April 2018 (2018-04-15)
青鸟英谷教育科技股份有限公司: "Cloud Computing Framework and Application" (云计算框架与应用), Xidian University Press, pages 163-165 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113791863A (en) * 2021-08-10 2021-12-14 北京中电飞华通信有限公司 Virtual container-based power internet of things agent resource scheduling method and related equipment
CN113791863B (en) * 2021-08-10 2024-01-23 北京中电飞华通信有限公司 Virtual container-based power Internet of things proxy resource scheduling method and related equipment
CN114048021A (en) * 2021-09-30 2022-02-15 河北嘉朗科技有限公司 Internet of things multilayer multi-rule hybrid computing power automatic distribution technology
CN114500400A (en) * 2022-01-04 2022-05-13 西安电子科技大学 Large-scale network real-time simulation method based on container technology
CN114500400B (en) * 2022-01-04 2023-09-08 西安电子科技大学 Large-scale network real-time simulation method based on container technology
CN115237570A (en) * 2022-07-29 2022-10-25 陈魏炜 Strategy generation method based on cloud computing and cloud platform
CN115237570B (en) * 2022-07-29 2023-06-16 上海佑瞻智能科技有限公司 Policy generation method based on cloud computing and cloud platform
WO2024007849A1 (en) * 2023-04-26 2024-01-11 之江实验室 Distributed training container scheduling for intelligent computing
CN117453493A (en) * 2023-12-22 2024-01-26 山东爱特云翔信息技术有限公司 GPU computing power cluster monitoring method and system for large-scale multi-data center
CN117453493B (en) * 2023-12-22 2024-05-31 山东爱特云翔信息技术有限公司 GPU computing power cluster monitoring method and system for large-scale multi-data center
CN117791613A (en) * 2024-02-27 2024-03-29 浙电(宁波北仑)智慧能源有限公司 Decision method and system based on resource cluster regulation and control

Similar Documents

Publication Publication Date Title
CN113110914A (en) Internet of things platform construction method based on micro-service architecture
US11656911B2 (en) Systems, methods, and apparatuses for implementing a scheduler with preemptive termination of existing workloads to free resources for high priority items
Zhu et al. Task scheduling for multi-cloud computing subject to security and reliability constraints
CN108829494B (en) Container cloud platform intelligent resource optimization method based on load prediction
US10514951B2 (en) Systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery
US11294726B2 (en) Systems, methods, and apparatuses for implementing a scalable scheduler with heterogeneous resource allocation of large competing workloads types using QoS
CN113806018B (en) Kubernetes cluster resource mixed scheduling method based on neural network and distributed cache
US11579933B2 (en) Method for establishing system resource prediction and resource management model through multi-layer correlations
CN112685153A (en) Micro-service scheduling method and device and electronic equipment
CN115297112A (en) Dynamic resource quota and scheduling component based on Kubernetes
CN115220916B (en) Automatic calculation scheduling method, device and system of video intelligent analysis platform
CN113391913A (en) Distributed scheduling method and device based on prediction
JP5515889B2 (en) Virtual machine system, automatic migration method and automatic migration program
CN115543626A (en) Power defect image simulation method adopting heterogeneous computing resource load balancing scheduling
CN109614210B (en) Storm big data energy-saving scheduling method based on energy consumption perception
Lu et al. InSTechAH: Cost-effectively autoscaling smart computing hadoop cluster in private cloud
CN111367632B (en) Container cloud scheduling method based on periodic characteristics
CN112130927A (en) Reliability-enhanced mobile edge computing task unloading method
CN111124619A (en) Container scheduling method for secondary scheduling
Li et al. On scheduling of high-throughput scientific workflows under budget constraints in multi-cloud environments
CN115562841A (en) Cloud video service self-adaptive resource scheduling system and method
Yakubu et al. Priority based delay time scheduling for quality of service in cloud computing networks
CN115269140A (en) Container-based cloud computing workflow scheduling method, system and equipment
CN115061811A (en) Resource scheduling method, device, equipment and storage medium
Du et al. A combined priority scheduling method for distributed machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210713)