CN115052033B - Resource sharing-based micro-service effective containerized deployment method for intelligent factory - Google Patents
Classifications
- H04L67/10 — Protocols in which an application is distributed across nodes in the network
- H04L67/1095 — Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses a resource-sharing-based effective containerized deployment method for micro-services in an intelligent factory, relating to the field of intelligent manufacturing. Through the resource-sharing-based micro-service deployment scheme, the invention reduces image pull delay and communication overhead and improves micro-service response efficiency.
Description
Technical Field
The invention relates to the field of intelligent manufacturing, and in particular to a resource-sharing-based effective containerized deployment method for micro-services in an intelligent factory.
Background
With the rapid development of intelligent manufacturing and flexible production, the flexibility of industrial production has been greatly enhanced. Industrial software is therefore required to rapidly redistribute and adjust production procedures as orders change, which places higher demands on its flexibility and extensibility. Traditional industrial software adopts a monolithic service architecture; the high coupling and high resource occupancy within such a service make the whole system highly complex, and its extensibility, stability, and fault tolerance struggle to meet the requirements of intelligent manufacturing. Micro-service-based industrial software architectures have therefore attracted wide attention. Under a micro-service architecture, a complete service can be split into multiple loosely coupled micro-services according to business and functional requirements, and these micro-services cooperate to complete production tasks. Since the logic of different micro-services is independent, the architecture offers high flexibility, extensibility, and fault tolerance, adapts well to the needs of intelligent manufacturing, and supports production customization.
Intelligent manufacturing involves a large number of computation-intensive production tasks. To meet their demanding requirements on real-time performance and service efficiency, micro-service platforms oriented toward edge computing have emerged. Edge computing is a novel computing paradigm: by providing low-latency computing services through small edge servers deployed near the devices, it can fully utilize the computing resources of terminal devices, edge nodes, and cloud servers. Currently, container technology represented by Docker, together with the container orchestration tool Kubernetes developed by Google, is becoming the dominant solution for micro-service deployment and maintenance on edge platforms. With a container orchestration tool, each micro-service can be packaged into a Docker image and deployed to edge servers according to service requests and the deployment policy.
In containerized micro-service deployment, a critical problem is how to improve service efficiency so that deployed micro-services can be quickly started, run, and deliver computation results. Service efficiency is mainly affected by two factors: first, an overly long micro-service start-up time, which slows service response; second, the communication overhead among micro-services, since transmitting input data and computation results incurs transmission delay.
The start-up time of a micro-service mainly depends on the pull delay of its Docker image. Docker images generally include runtime tools, system tools, and system dependencies, stored in the cloud as separate image layers. When a service must be provided locally, the edge server first pulls the non-local container image, containing all necessary layers, from the cloud and deploys it. Because network bandwidth is limited, image pulling creates a downlink delay that depends on the size of the pulled image and the link bandwidth. A study of micro-service images showed that at 100 Mbps bandwidth, the average start-up time of individual images was about 20.7 seconds, of which the average image pull delay was about 15.8 seconds, accounting for 76.6% of the average start-up time. Image pull latency has thus become a non-negligible factor affecting container start-up time and, in turn, service response efficiency.
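The relationship just described can be sketched numerically: pull delay is roughly the size of the layers missing on the server divided by the link bandwidth. The following Python sketch is purely illustrative; the function name, layer names, and numbers are hypothetical and not taken from the patent.

```python
# Hypothetical sketch: image pull delay estimated as
# (total size of layers not yet on the server) / link bandwidth.
# Layer sizes in MB, bandwidth in MB/s; all numbers are illustrative.

def pull_delay(image_layers, local_layers, bandwidth_mbps):
    """Return the pull delay in seconds for the layers missing locally."""
    missing = {l: s for l, s in image_layers.items() if l not in local_layers}
    total_mb = sum(missing.values())
    return total_mb / bandwidth_mbps

image = {"python-base": 300.0, "deps": 150.0, "app": 50.0}
# If the base layer is already cached, only 200 MB must be pulled.
print(pull_delay(image, {"python-base"}, 12.5))  # 200 / 12.5 = 16.0 s
```

With no layers cached locally, the full 500 MB must be pulled, so the delay grows accordingly.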
On the other hand, the communication overhead of micro-services depends on the volume of data exchanged between them. Because production is continuous, an industrial application may be carried out cooperatively by multiple micro-services deployed on one or more edge servers. For example, the Equipment Failure Detection (EFD) function of semiconductor manufacturing may be implemented by six input-output-interconnected micro-services: source data access, feature extraction, a k-Nearest Neighbor (kNN) clustering-based learning service, XGBoost, a support vector machine, and result aggregation. Such micro-services form a micro-service chain, and frequent information interaction occurs between micro-services on the same chain. In equipment failure detection, for instance, the acquired source data must be transmitted to the feature-extraction micro-service for processing, and the resulting feature-vector file must then be passed to the next micro-service for clustering. The large volume of data transmitted between micro-services can lead to high transmission delays, which in turn degrade service response efficiency.
One effective way to optimize service efficiency is resource sharing. Although micro-services differ in function, they also share commonalities. By reusing common resources, the response efficiency and service quality of micro-services can be improved.
One resource-sharing policy is layer sharing. Although production facilities (robotic arms, AGVs, cameras, etc.) are diverse, micro-services of the same type can be built on the same base layers. For example, the fault-detection, face-recognition, and quality-inspection micro-service images can all be built on a common Python base layer, and micro-service images involving production data storage and querying can be built on a common MySQL base layer. Docker natively supports layer sharing: if micro-services deployed on the same edge server use the same base layer, that layer is pulled only once during image pulling and is then shared by all those micro-services. Sharing the same base layer effectively reduces image pull delay, thereby improving micro-service start-up speed and service response efficiency.
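The saving from layer sharing can be sketched as follows. This is a hypothetical illustration (function names, layer names, and sizes are invented), comparing the download volume when every image pulls all of its layers against the volume when each distinct layer on a server is pulled once.

```python
# Hypothetical sketch of layer sharing: when several images deployed on one
# edge server share a base layer, that layer is downloaded only once.

def download_size(images):
    """Total size without sharing: every image pulls all of its layers."""
    return sum(size for layers in images for size in layers.values())

def download_size_shared(images):
    """Total size with layer sharing: each distinct layer is pulled once."""
    shared = {}
    for layers in images:
        shared.update(layers)  # identical layer names collapse to one entry
    return sum(shared.values())

imgs = [
    {"python-base": 300, "fault-detect": 40},
    {"python-base": 300, "face-recog": 60},
    {"python-base": 300, "quality-check": 50},
]
print(download_size(imgs))         # 1050
print(download_size_shared(imgs))  # 450 -- the base layer counted once
```

Here co-locating the three images saves two redundant pulls of the 300 MB base layer.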
Another resource-sharing policy is chain sharing. In a micro-service chain, frequent information transfer occurs between adjacent micro-services. If the two micro-services are deployed on different edge servers, the information must be transmitted over multiple hops; if they are deployed on the same server, the data to be transferred can be accessed directly by the next micro-service through a shared memory address, avoiding multi-hop transmission and thus reducing the delay, packet loss, and other problems caused by data transmission.
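The effect of chain sharing on transmission cost can be sketched as traffic volume multiplied by the hop count between the hosting servers, with zero hops for co-located services. The sketch below is hypothetical; the chain, traffic volumes, and topology are invented for illustration.

```python
# Hypothetical sketch of chain sharing: adjacent micro-services in a chain
# that land on the same server exchange data via shared memory (0 hops);
# otherwise the data crosses the hop count between their servers.

def chain_comm_cost(chain_traffic, placement, hops):
    """Sum of traffic * hop-count over adjacent pairs of the chain."""
    cost = 0
    for (ms_a, ms_b), traffic in chain_traffic.items():
        n_a, n_b = placement[ms_a], placement[ms_b]
        cost += traffic * hops[n_a][n_b]
    return cost

hops = [[0, 1], [1, 0]]                      # two servers, one hop apart
traffic = {("src", "feat"): 10, ("feat", "knn"): 4}
split = {"src": 0, "feat": 1, "knn": 1}      # chain cut after "src"
together = {"src": 0, "feat": 0, "knn": 0}   # whole chain co-located
print(chain_comm_cost(traffic, split, hops))     # 10
print(chain_comm_cost(traffic, together, hops))  # 0
```

Co-locating the whole chain removes all inter-server transmission, at the cost of concentrating resource demand on one server.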
Of these two resource-sharing schemes, layer sharing tends to co-locate micro-services of the same type from different micro-service chains so that they can reuse the same base layers, while chain sharing tends to co-locate micro-services on the same chain so that shared memory reduces data transmission. However, because the resources of edge servers are limited, not all micro-services can be deployed on the same edge server.
With respect to resource sharing among micro-services, micro-service deployment mainly faces the following three difficulties: 1) how to model the hierarchical structure of micro-service images so as to accurately describe the relationship between micro-service images and container layers; 2) how to describe the chain structure of micro-services and the volume of communication data between them; 3) how to formulate an optimization problem that considers layer sharing and chain sharing simultaneously and, with a corresponding solution method, achieve the optimal resource-sharing balance, thereby improving service response efficiency.
Accordingly, those skilled in the art are working to develop an optimal micro-service deployment strategy that trades off layer sharing against chain sharing.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention addresses how to improve the service response rate in the production process. To this end, the invention provides a resource-sharing-based effective containerized deployment method for micro-services in an intelligent factory: based on the layered structure of the micro-services and the micro-service chain, an image pull delay model and a communication overhead model of the system are built, and the optimization problem is formulated as an integer quadratic programming problem and solved with a commercial solver.
Further, the method comprises the following steps:
step 1, system modeling: for the intelligent manufacturing system, perform detailed system modeling to describe the relationship among the image layers, the micro-service chain, and service response efficiency;
step 2, construct an integer quadratic programming problem to optimize image pull delay and communication overhead;
step 3, solve with a solver to obtain the optimal deployment strategy;
step 4, perform micro-service deployment using the obtained deployment strategy.
Further, step 1 further comprises the following steps:
step 1.1, modeling of the edge servers;
step 1.2, modeling of the micro-service images and image layers;
step 1.3, modeling of data transmission and the multi-hop model;
step 1.4, modeling of the micro-service chain.
Further, in step 1.1, the intelligent manufacturing system includes $N$ edge servers and a cloud server deployed at the remote end for storing the micro-service images; each edge server has limited computing and storage resources and can host a certain number of micro-services; the computing and storage resources of edge server $n$ are denoted $C_n^C$ and $C_n^S$, and the bandwidth between the cloud server and edge server $n$ is $B_n$.
Further, in step 1.2, all micro-service images are stored in the micro-service image registry of the cloud server and pulled by the edge servers according to the micro-services to be deployed; each complete micro-service image consists of shareable base layers and unshared layers; the set of all layers is denoted $\mathcal{L} = \{1, 2, \ldots, L\}$, and $S_l$ denotes the size of the $l$-th layer; thus each micro-service image $ms_{ki}$ (the $i$-th micro-service of application $k$) is composed of one or more layers of $\mathcal{L}$, and $e_{kil} \in \{0,1\}$ indicates whether $ms_{ki}$ includes the $l$-th layer.
Further, in step 1.3, each server receives different service requests and data; if the expected micro-service is deployed on the server, the request can be processed directly, and if not, the request and data must be transmitted over multiple hops to another edge server where the expected micro-service is deployed; because the servers are at different geographic locations, the number of hops between different server pairs also differs; $D_{nn'}$ is defined as the number of hops needed to transmit a request or data from server $n$ to server $n'$, and a matrix $D$ represents the multi-hop connections between all servers.
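One plausible way to populate the hop matrix $D$, assuming hop counts follow shortest communication paths between servers (as the embodiment later states), is a breadth-first search from each server. The topology and names below are illustrative, not part of the patent.

```python
# Hypothetical sketch: building the hop matrix D from a server topology.
# D[n][m] is the length of the shortest communication path between servers,
# matching D_nn' = D_n'n and D_nn = 0 in the model.

from collections import deque

def hop_matrix(adjacency):
    n = len(adjacency)
    D = [[0] * n for _ in range(n)]
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:                     # breadth-first search from src
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst, d in dist.items():
            D[src][dst] = d
    return D

# A line topology 0 - 1 - 2: server 0 reaches server 2 in two hops.
print(hop_matrix({0: [1], 1: [0, 2], 2: [1]}))
```

The resulting matrix is symmetric with a zero diagonal, as the model requires.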
further, in the step 1.4, modeling the micro service chain as a directed weighted acyclic graph; every two micro services ms ki and mskj All use interactive weightTo represent the magnitude of the traffic between two micro-services; the interaction graph is written in the form of a matrix, which for application k is defined as
Wherein only on the micro-service chainThere are specific values for +.>Based on the interaction graph, communication overhead can be calculated; if two adjacent micro services on one micro service chain are deployed on the same edge server, as the two micro services can share hardware resources, the data needing to be interacted can be directly accessed through a memory, and multi-hop data transmission among servers is not needed; if two micro services are deployed on different servers, then multi-hop transmission over the communication link between the servers is required.
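Given an interaction matrix and a placement, the communication overhead can be evaluated as a quadratic form in the deployment variables, mirroring the $x^T W x$ term used later. The sketch below is hypothetical; the matrices and sizes are invented for illustration.

```python
# Hypothetical sketch: communication overhead as a quadratic form in the
# deployment variables. x[i][n] = 1 iff micro-service i is on server n.

def comm_overhead(w, D, x):
    """Sum over service pairs of w_ij * hops between their host servers."""
    total = 0
    services, servers = len(x), len(x[0])
    for i in range(services):
        for j in range(services):
            for n in range(servers):
                for m in range(servers):
                    total += w[i][j] * D[n][m] * x[i][n] * x[j][m]
    return total

w = [[0, 5, 0],   # service 0 sends 5 units to service 1,
     [0, 0, 2],   # service 1 sends 2 units to service 2
     [0, 0, 0]]
D = [[0, 2], [2, 0]]
x = [[1, 0], [0, 1], [0, 1]]   # service 0 alone; 1 and 2 co-located
print(comm_overhead(w, D, x))  # 5*2 + 2*0 = 10
```

Only the pair cut by the placement (services 0 and 1) contributes; the co-located pair costs nothing.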
Further, the step 2 further includes the following steps:
step 2.1, define the optimization variables; to represent the deployment of micro-services, $x_{ki}^n \in \{0,1\}$ is defined to describe the deployment of $ms_{ki}$, with $x_{ki}^n = 1$ indicating that the micro-service is deployed on edge server $n$; because of the hierarchical structure of the micro-service images, once a micro-service is deployed on edge server $n$, all layers contained in its image must exist on server $n$; the variable $d_l^n \in \{0,1\}$ represents the download of micro-service layers, with $d_l^n = 1$ indicating that layer $l$ needs to be downloaded to edge server $n$;
step 2.2, construct the objective function $\min_{x,d}\; Md + \theta\, x^T W x$, where $d = [(d^1)^T, \ldots, (d^N)^T]^T$, $S = [S_1, \ldots, S_L]^T$, $M = [S^T/B_1, \ldots, S^T/B_N]$, $x = [(x^1)^T, \ldots, (x^K)^T]^T$, and $W$ is assembled from the Hadamard products of the interaction matrices $W^k$ and the hop matrix $D$; $Md$ is the image pull delay of the micro-services, $x^T W x$ is the communication overhead among the micro-services, and $\theta$ is a weight factor used to balance download delay against communication data volume;
step 2.3, construct the micro-service deployment constraint $Qx = b$, which ensures that each micro-service is deployed on exactly one server, where $Q$ is block-diagonal with blocks $q = [1, 1, \ldots, 1]_{1 \times N}$ and $b = [1, \ldots, 1]^T$;
step 2.4, construct the constraint between the two optimization variables; because of layer sharing, micro-services deployed on the same server can share the same base layer, which then needs to be downloaded only once; thus $x$ and $d$ must satisfy $Yx \le Z d$, where $Z$ is an arbitrarily large constant greater than 1 and $Y = [Y_1, \ldots, Y_N]^T$ collects the layer-membership coefficients $e_{kil}$, so that $d_l^n = 1$ whenever any micro-service containing layer $l$ is deployed on server $n$;
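The coupling between deployment and layer download — a layer must be present on a server exactly when some micro-service deployed there contains it, and a shared layer is downloaded once — can be sketched directly. The function name and matrices below are hypothetical illustrations of this relationship, not the patented constraint matrices.

```python
# Hypothetical sketch of the deployment/download coupling:
# d[n][l] = 1 iff some micro-service deployed on server n contains layer l,
# so a base layer shared by several co-located services is downloaded once.

def required_downloads(x, e):
    """x[i][n]: service i on server n; e[i][l]: service i uses layer l."""
    services, servers, layers = len(x), len(x[0]), len(e[0])
    d = [[0] * layers for _ in range(servers)]
    for n in range(servers):
        for i in range(services):
            if x[i][n]:
                for l in range(layers):
                    if e[i][l]:
                        d[n][l] = 1
    return d

e = [[1, 1, 0],        # service 0 uses layers 0 and 1
     [1, 0, 1]]        # service 1 uses layers 0 and 2
x = [[1, 0], [1, 0]]   # both services on server 0: layer 0 is shared
print(required_downloads(x, e))  # [[1, 1, 1], [0, 0, 0]]
```

Splitting the two services across the servers would instead force layer 0 to be downloaded twice, once per server.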
step 2.5, construct the resource constraints; the storage space occupied by all micro-service image layers cannot exceed the maximum storage of each server, giving the constraint $Fd \le C^S$, where $F$ stacks the layer sizes $S_l$ per server and $C^S = [C_1^S, \ldots, C_N^S]^T$;
the computing resources requested by all deployed micro-services cannot exceed the maximum computing capacity of each server, giving the constraint $Gx \le C^C$, where $G$ stacks the per-micro-service computing demands and $C^C = [C_1^C, \ldots, C_N^C]^T$;
step 2.6, construct the variable constraints; the optimization variables are binary, i.e., $x_{ki}^n \in \{0,1\}$ and $d_l^n \in \{0,1\}$.
Further, in step 3, the optimization problem obtained from step 2 is
$\min_{x,d}\; Md + \theta\, x^T W x$
s.t. $Qx = b$, $Fd \le C^S$, $Gx \le C^C$, $x_{ki}^n, d_l^n \in \{0,1\}$.
The problem is an integer quadratic programming problem and is input directly into the commercial solver Gurobi to obtain a numerical solution.
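For a toy instance, the joint trade-off can be illustrated without a commercial solver by brute-force enumeration of all placements, minimizing (image pull delay) + θ · (communication overhead). This is a hypothetical sketch of the optimization's structure, not the claimed solving method; all names and numbers are invented.

```python
# Hypothetical sketch: brute-force stand-in for the IQP solver on a tiny
# instance. e[i][l]: service i uses layer l; sizes[l]: layer size;
# bandwidth[n]: cloud-to-server link speed; w: traffic; D: hop matrix.

from itertools import product

def solve_toy(e, sizes, bandwidth, w, D, theta):
    services, servers = len(e), len(D)
    best = (float("inf"), None)
    for placement in product(range(servers), repeat=services):
        # Layer sharing: each server downloads a needed layer only once.
        layers_on = [set() for _ in range(servers)]
        for i, n in enumerate(placement):
            layers_on[n] |= {l for l, used in enumerate(e[i]) if used}
        pull = sum(sum(sizes[l] for l in layers_on[n]) / bandwidth[n]
                   for n in range(servers))
        # Chain sharing: co-located pairs cost zero hops.
        comm = sum(w[i][j] * D[placement[i]][placement[j]]
                   for i in range(services) for j in range(services))
        cost = pull + theta * comm
        if cost < best[0]:
            best = (cost, placement)
    return best

e = [[1, 1, 0], [1, 0, 1]]      # both services share base layer 0
sizes = [300, 40, 60]
bandwidth = [10, 10]
w = [[0, 8], [0, 0]]            # service 0 -> service 1 traffic
D = [[0, 3], [3, 0]]
cost, placement = solve_toy(e, sizes, bandwidth, w, D, theta=1.0)
print(placement)  # (0, 0) -- co-locating shares the base layer, zero hops
```

Enumeration is exponential in the number of services, which is precisely why the patent formulates the problem for a dedicated IQP solver instead.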
Further, in step 4, the numerical solutions of $x$ and $d$ obtained in step 3 yield the micro-service deployment strategy that optimally balances image pull delay and communication overhead, and engineering deployment is then carried out.
The invention has the following technical effects:
1. The invention simultaneously considers the layered structure of micro-service images and the chained structure among micro-services, and formulates an integer quadratic programming problem minimizing image pull delay and communication overhead, thereby obtaining the optimal micro-service deployment scheme to improve service response efficiency.
2. Comparable techniques rely on iterative approximation algorithms that solve multiple optimization problems, or on structurally complex algorithms such as neural networks and reinforcement learning; in contrast, the invention only needs to obtain the parameters of the system structure, construct the parameter matrices, and solve one integer quadratic programming problem to obtain the deployment scheme.
The conception, specific structure, and technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, features, and effects of the present invention.
Drawings
FIG. 1 is a flowchart of an algorithm of a preferred embodiment of the present invention.
Detailed Description
The following description of the preferred embodiments of the present invention refers to the accompanying drawings, which make the technical contents thereof more clear and easy to understand. The present invention may be embodied in many different forms of embodiments and the scope of the present invention is not limited to only the embodiments described herein.
In the drawings, like structural elements are referred to by like reference numerals and components having similar structure or function are referred to by like reference numerals. The dimensions and thickness of each component shown in the drawings are arbitrarily shown, and the present invention is not limited to the dimensions and thickness of each component. The thickness of the components is exaggerated in some places in the drawings for clarity of illustration.
As shown in FIG. 1, in order to optimize service response efficiency in the production process, the invention proposes a resource-sharing micro-service deployment scheme for the edge computing scenario. Based on the layered structure of the micro-services and the micro-service chain, an image pull delay model and a communication overhead model of the system are established; the optimization problem is converted into an integer quadratic programming problem through model reconstruction and solved with a commercial solver. The resulting micro-service deployment scheme reduces image pull delay and communication overhead, improving micro-service response efficiency.
The resource sharing micro-service deployment scheme for the edge computing scene comprises the following steps:
step one: system modeling, detailed system modeling is performed for intelligent manufacturing systems to describe the relationship between the mirror layer, the micro-service chain, and the service response efficiency. The method mainly comprises the following steps:
s1: modeling of edge servers. The intelligent manufacturing system comprisesThe system comprises a platform edge server and a cloud server deployed at a remote end for storing micro-service images. Each edge server has limited computing and storage resources, and a certain number of micro services can be deployed. The computing and storage resources of edge server n are denoted +.> and />The bandwidth between the cloud server and the edge server n is +.>
S2: modeling of the micro-service images and image layers. All micro-service images are stored in the micro-service image registry of the cloud server and pulled by the edge servers according to the micro-services to be deployed. Each complete micro-service image consists of some shareable base layers and some unshared layers. The layers differ in size: we denote the set of all layers by $\mathcal{L} = \{1, 2, \ldots, L\}$ and the size of the $l$-th layer by $S_l$. Thus each micro-service image $ms_{ki}$ is composed of one or more layers of $\mathcal{L}$, and $e_{kil} \in \{0,1\}$ indicates whether $ms_{ki}$ includes the $l$-th layer.
S3: modeling of data transmission and the multi-hop model. Each server receives different service requests and data; if the expected micro-service is deployed on the server, the request can be processed directly, and if not, the request and data must be transmitted over multiple hops to another edge server where the expected micro-service is deployed. Because the servers are at different geographic locations, the number of hops between different server pairs also differs. We define $D_{nn'}$ as the number of hops needed to transmit a request or data from server $n$ to server $n'$, obtained from the shortest communication path between the two servers. Clearly $D_{nn'} = D_{n'n}$ and $D_{nn} = 0$, so the multi-hop connections between all servers can be represented by a single matrix $D$.
S4: modeling of the micro-service chain. In general, a micro-service chain can be modeled as a directed acyclic graph. However, because of the communication interactions between micro-services, and to show more clearly the impact of communication data on micro-service deployment, the micro-service chain is modeled as a directed weighted acyclic graph. For every two micro-services $ms_{ki}$ and $ms_{kj}$, an interaction weight $w_{ij}^k$ represents the volume of traffic between them. For ease of understanding, the interaction graph can be written in matrix form; for application $k$ the interaction matrix is defined as $W^k = [w_{ij}^k]$,
where $w_{ij}^k$ takes a specific value only for adjacent micro-services on the micro-service chain and $w_{ij}^k = 0$ otherwise. Based on the interaction matrix, the communication overhead can be calculated. If two adjacent micro-services on a chain are deployed on the same edge server, they share hardware resources, so the data to be exchanged can be accessed directly through memory without multi-hop transmission between servers. If they are deployed on different servers, multi-hop transmission over the communication links between the servers is required.
Step two: construct an integer quadratic programming problem to optimize image pull delay and communication overhead. Based on the system modeling completed in step one, the invention constructs the integer quadratic programming problem and gives the corresponding constraints. It mainly comprises the following steps:
s1: optimization variables are defined. To represent the deployment of micro-services, we defineExpressed in ms ki Is (are) deployment case>Representing that the micro-service is deployed at edge server n. Because of the hierarchical structure of the micro-services, once a micro-service is deployed at edge server n, all layers contained by the micro-service imageNeeds to exist on the server n. Variable for usTo represent download of micro-service layer, < >>Indicating that the first layer needs to be downloaded to the edge server n.
S2: construct the objective function $\min_{x,d}\; Md + \theta\, x^T W x$, where $d = [(d^1)^T, \ldots, (d^N)^T]^T$, $S = [S_1, \ldots, S_L]^T$, $M = [S^T/B_1, \ldots, S^T/B_N]$, $x = [(x^1)^T, \ldots, (x^K)^T]^T$, and $W$ is assembled from the Hadamard products of the interaction matrices $W^k$ and the hop matrix $D$. $Md$ is the image pull delay of the micro-services, $x^T W x$ is the communication overhead between the micro-services, and $\theta$ is a weight factor used to trade off download delay against communication data volume.
S3: construct the micro-service deployment constraint $Qx = b$, which ensures that each micro-service is deployed on exactly one server, where $Q$ is block-diagonal with blocks $q = [1, 1, \ldots, 1]_{1 \times N}$ and $b = [1, \ldots, 1]^T$.
S4: construct the constraint between the two optimization variables. Because of layer sharing, micro-services deployed on the same server can share the same base layer, which then needs to be downloaded only once. Thus $x$ and $d$ must satisfy $Yx \le Z d$, where $Z$ is an arbitrarily large constant greater than 1 and $Y = [Y_1, \ldots, Y_N]^T$ collects the layer-membership coefficients $e_{kil}$, so that $d_l^n = 1$ whenever any micro-service containing layer $l$ is deployed on server $n$.
S5: construct the resource constraints. The storage space occupied by all micro-service image layers cannot exceed the maximum storage of each server, giving the constraint $Fd \le C^S$, where $F$ stacks the layer sizes $S_l$ per server and $C^S = [C_1^S, \ldots, C_N^S]^T$.
The computing resources requested by all deployed micro-services cannot exceed the maximum computing capacity of each server, giving the constraint $Gx \le C^C$, where $G = [G_1, \ldots, G_N]^T$ stacks the per-micro-service computing demands and $C^C = [C_1^C, \ldots, C_N^C]^T$.
S6: construct the variable constraints. The optimization variables are binary, i.e., $x_{ki}^n \in \{0,1\}$ and $d_l^n \in \{0,1\}$.
Step three: solve with a solver to obtain the optimal deployment strategy. From step two, the optimization problem is expressed as
$\min_{x,d}\; Md + \theta\, x^T W x$
s.t. $Qx = b$, $Fd \le C^S$, $Gx \le C^C$, $x_{ki}^n, d_l^n \in \{0,1\}$.
This is an integer quadratic programming problem and can be input directly into the commercial solver Gurobi to obtain a numerical solution.
Step four: perform micro-service deployment using the obtained deployment strategy. The numerical solutions of $x$ and $d$ obtained in step three yield the micro-service deployment strategy that optimally balances image pull delay and communication overhead, and engineering deployment is carried out.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention without requiring creative effort by one of ordinary skill in the art. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.
Claims (2)
1. A resource-sharing-based effective containerized deployment method for micro-services in an intelligent factory, characterized in that an image pull delay model and a communication overhead model of the system are established based on the layered structure of the micro-services and the micro-service chain, the optimization problem is modeled as an integer quadratic programming problem, and the integer quadratic programming problem is solved with a commercial solver;
the method comprises the following steps:
step 1, system modeling: for the intelligent manufacturing system, perform detailed system modeling to describe the relationship among the image layers, the micro-service chain, and service response efficiency;
step 2, construct an integer quadratic programming problem to optimize image pull delay and communication overhead;
step 3, solve with a solver to obtain the optimal deployment strategy;
step 4, perform micro-service deployment using the obtained deployment strategy;
wherein step 1 further comprises the following steps:
step 1.1, modeling of the edge servers;
step 1.2, modeling of the micro-service images and image layers;
step 1.3, modeling of data transmission and the multi-hop model;
step 1.4, modeling of the micro-service chain;
in step 1.1, the intelligent manufacturing system includes $N$ edge servers and a cloud server deployed at the remote end for storing the micro-service images; each edge server has limited computing and storage resources, and a certain number of micro-services are deployed on it; the computing and storage resources of edge server $n$ are denoted $C_n^C$ and $C_n^S$, and the bandwidth between the cloud server and edge server $n$ is $B_n$;
In the step 1.2, all micro-service images are stored in the micro-service image registry of the cloud server and are pulled by each edge server according to the micro-services deployed on it; each complete micro-service image consists of a shareable base layer and unshared layers; the set of all layers is denoted L, with S_l representing the size of the l-th layer, l ∈ L; thus each micro-service image is composed of one or more layers in L, and e_kil ∈ {0,1} indicates whether micro-service ms_ki contains layer l;
in step 1.3, each server receives different service requests and data; if the expected micro-service is deployed on the server, the request is processed directly; if not, the request and data must be forwarded over multiple hops to another edge server on which the expected micro-service is deployed; because the servers differ in geographic location, the number of hops needed for communication between servers differs; D_nn′ is defined as the number of hops to transmit a request or data from server n to server n′, and the matrix D represents the multi-hop connections between all servers;
in the step 1.4, each micro-service chain is modeled as a directed weighted acyclic graph; every pair of micro-services ms_ki and ms_kj carries an interaction weight representing the volume of traffic between the two micro-services; writing the interaction graph in matrix form yields the interaction matrix W;
Wherein only micro-services adjacent on a micro-service chain have specific weights; pairs not located on the same micro-service chain have weight 0; based on the interaction graph, the communication overhead is calculated: if two adjacent micro-services on a chain are deployed on the same edge server, the two micro-services share hardware resources, the data to be exchanged is accessed directly through memory, and no multi-hop data transmission between servers is needed; if the two micro-services are deployed on different servers, multi-hop transmission over the communication link between the servers is required;
the step 2 further comprises the following steps:
step 2.1, defining the optimization variables; to represent the deployment of micro-services, x_ki^n ∈ {0,1} is defined to denote the deployment of ms_ki, with x_ki^n = 1 indicating that the micro-service is deployed on edge server n; because of the layered structure of micro-services, once a micro-service is deployed on edge server n, all layers contained in its image must exist on server n; the variable d_l^n ∈ {0,1} represents the download of a micro-service layer, with d_l^n = 1 indicating that layer l needs to be downloaded to edge server n;
step 2.2, constructing the objective function min_{x,d} Md + θ·x^T W x, wherein d = [(d^1)^T, …, (d^N)^T]^T, S = [S_1, …, S_L]^T, x = [(x_1)^T, …, (x_K)^T]^T, and M is assembled from the layer sizes S and the cloud-to-edge bandwidths; the term Md, the sum of the Hadamard products of the two matrices, is the mirror pull delay of the micro-services; x^T W x is the communication overhead among the micro-services; θ is a weight factor used to balance the download delay and the communication data volume;
step 2.3, constructing the micro-service deployment constraint Qx = b, which ensures that each micro-service is deployed on exactly one server, i.e. the deployment variables of every micro-service ms_ki sum to 1 over all servers; Q is the corresponding assignment matrix and b is the all-ones vector;
Step 2.4, constructing the constraints between the two optimization variables; because of layer sharing, if micro-services deployed on the same server share the same base layer, the base layer only needs to be downloaded once; therefore x and d must satisfy d ≤ Y ≤ Z·d, where Z is an arbitrarily large constant greater than 1, Y = [Y_1, …, Y_N]^T, and Y_l^n = Σ_k Σ_i e_kil · x_ki^n counts the micro-services deployed on server n whose images contain layer l;
Step 2.5, constructing the resource constraints; the storage space occupied by all downloaded micro-service image layers on a server cannot exceed the server's maximum storage, giving the constraint Fd ≤ C_S;
The computing resources demanded by all micro-service requests on a server cannot exceed the server's maximum computing capacity, giving the constraint Gx ≤ C_C;
Step 2.6, constructing the variable constraints; the optimization variables are binary, i.e. x_ki^n ∈ {0,1} and d_l^n ∈ {0,1};
In the step 3, step 2 yields the optimization problem min_{x,d} Md + θ·x^T W x
s.t.Qx=b
Fd≤C S
Gx≤C C
together with the layer-sharing constraint of step 2.4 and the binary variable constraints; the problem is an integer quadratic programming problem and is input directly into the commercial solver Gurobi to obtain a numerical solution.
2. The resource-sharing-based micro-service effective containerized deployment method for an intelligent factory according to claim 1, wherein in the step 4, the numerical solutions of x and d obtained in the step 3 yield the optimal micro-service deployment strategy balancing mirror pull delay and communication overhead, which is then used for the actual engineering deployment.
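The model of claim 1 can be illustrated with a small, self-contained sketch. Every value in the toy instance below (server count, layer sizes, bandwidths, capacities, interaction weights, θ) is a hypothetical assumption, not taken from the patent, and exhaustive enumeration stands in for the commercial solver Gurobi used by the method; the sketch only shows how layer sharing, the mirror pull delay Md, and the communication overhead x^T W x interact in the objective.

```python
from itertools import product

# Toy instance (all values are illustrative assumptions, not from the patent).
N = 2                          # number of edge servers
S = [3, 2, 2]                  # S_l: size of image layer l (layer 0 = shared base)
B = [2.0, 1.0]                 # bandwidth from the cloud registry to server n
C_S = [8, 8]                   # storage capacity C_n^S of each server
C_C = [5, 5]                   # computing capacity C_n^C of each server
c = [2, 2, 2]                  # computing demand of each micro-service
e = [[1, 1, 0],                # e_kil: whether micro-service k contains layer l
     [1, 0, 1],
     [1, 1, 1]]
D = [[0, 1],                   # D_nn': hop count between servers n and n'
     [1, 0]]
w = {(0, 1): 3, (1, 2): 2}     # interaction weights along the micro-service chain
theta = 0.5                    # weight factor balancing delay vs. communication

def evaluate(placement):
    """placement[k] = server hosting micro-service k; cost, or None if infeasible."""
    L = len(S)
    # d_l^n: a layer shared by co-located services is downloaded only once.
    d = [[0] * L for _ in range(N)]
    for k, n in enumerate(placement):
        for l in range(L):
            if e[k][l]:
                d[n][l] = 1
    for n in range(N):
        if sum(d[n][l] * S[l] for l in range(L)) > C_S[n]:      # Fd <= C_S
            return None
        if sum(c[k] for k, m in enumerate(placement) if m == n) > C_C[n]:  # Gx <= C_C
            return None
    # Mirror pull delay Md: every downloaded layer crosses the cloud link once.
    pull = sum(d[n][l] * S[l] / B[n] for n in range(N) for l in range(L))
    # Communication overhead: traffic weight times hop distance between hosts
    # (zero when two adjacent micro-services share a server, as in step 1.4).
    comm = sum(wt * D[placement[i]][placement[j]] for (i, j), wt in w.items())
    return pull + theta * comm

# Enumerate all deployments (Qx = b: each service on exactly one server).
feasible = [p for p in product(range(N), repeat=3) if evaluate(p) is not None]
best = min(feasible, key=evaluate)
print(best, evaluate(best))    # -> (1, 0, 0) 10.0
```

In the method itself, x and d would instead be declared as binary decision variables of the integer quadratic program and handed to Gurobi; brute force is tractable here only because the toy instance has 2^3 candidate deployments.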
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210619201.4A CN115052033B (en) | 2022-06-01 | 2022-06-01 | Resource sharing-based micro-service effective containerized deployment method for intelligent factory |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115052033A CN115052033A (en) | 2022-09-13 |
CN115052033B true CN115052033B (en) | 2023-04-28 |
Family
ID=83159807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210619201.4A Active CN115052033B (en) | 2022-06-01 | 2022-06-01 | Resource sharing-based micro-service effective containerized deployment method for intelligent factory |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115052033B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024086344A1 (en) * | 2022-10-20 | 2024-04-25 | Fisher-Rosemount Systems, Inc. | Compute fabric functionalities for a process control or automation system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108388472A (en) * | 2018-03-01 | 2018-08-10 | 吉林大学 | A kind of elastic task scheduling system and method based on Docker clusters |
CN110058950A (en) * | 2019-04-17 | 2019-07-26 | 上海沄界信息科技有限公司 | Distributed cloud computing method and equipment based on serverless backup framework |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10893100B2 (en) * | 2015-03-12 | 2021-01-12 | International Business Machines Corporation | Providing agentless application performance monitoring (APM) to tenant applications by leveraging software-defined networking (SDN) |
US11765225B2 (en) * | 2019-03-18 | 2023-09-19 | Reliance Jio Infocomm Limited | Systems and methods for microservice execution load balancing in virtual distributed ledger networks |
CN113542503B (en) * | 2020-03-31 | 2022-07-15 | 华为技术有限公司 | Method, electronic device and system for creating application shortcut |
CN114338504B (en) * | 2022-03-15 | 2022-07-08 | 武汉烽火凯卓科技有限公司 | Micro-service deployment and routing method based on network edge system |
Non-Patent Citations (3)
Title |
---|
Almiani, Muder. Resilient Back Propagation Neural Network Security Model for Containerized Cloud Computing. Simulation Modelling Practice and Theory, 2022, full text. * |
Liu Wei. Analysis of microservice architecture and corresponding cloud platforms. The Guide of Science & Education (late-month issue), 2017(01), full text. * |
Lu Zhigang; Xu Jiwei; Huang Tao. A multi-version container image loading method based on slice reuse. Journal of Software, 2020(06), full text. * |
Also Published As
Publication number | Publication date |
---|---|
CN115052033A (en) | 2022-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115052033B (en) | Resource sharing-based micro-service effective containerized deployment method for intelligent factory | |
Mouradian et al. | Application component placement in NFV-based hybrid cloud/fog systems | |
CN113835899B (en) | Data fusion method and device for distributed graph learning | |
CN105049353A (en) | Method for configuring routing path of business and controller | |
Rikos et al. | Optimal CPU scheduling in data centers via a finite-time distributed quantized coordination mechanism | |
CN111813539A (en) | Edge computing resource allocation method based on priority and cooperation | |
Tran et al. | Dependable control systems with Internet of Things | |
CN114465900B (en) | Data sharing delay optimization method and device based on federal edge learning | |
Santos et al. | Reinforcement learning for service function chain allocation in fog computing | |
Henna et al. | Distributed and collaborative high-speed inference deep learning for mobile edge with topological dependencies | |
CN117118497A (en) | Controller and gateway joint deployment method suitable for satellite-ground integrated network | |
CN116367190A (en) | Digital twin function virtualization method for 6G mobile network | |
CN114024894B (en) | Dynamic calculation method and system in software-defined heaven-earth integrated network | |
CN114710200B (en) | Satellite network resource arrangement method and system based on reinforcement learning | |
CN116582407A (en) | Containerized micro-service arrangement system and method based on deep reinforcement learning | |
CN112217652A (en) | Network topology device and method based on central communication mode | |
Zhang et al. | Accelerate deep learning in IoT: Human-interaction co-inference networking system for edge | |
CN115713009A (en) | Dynamic aggregation federal learning method based on satellite and ground station connection density | |
CN114745386A (en) | Neural network segmentation and unloading method under multi-user edge intelligent scene | |
CN114826820A (en) | FC-AE-1553 network service scheduling method and equipment under centralized control | |
CN113949666A (en) | Flow control method, device, equipment and system | |
CN113708982A (en) | Service function chain deployment method and system based on group learning | |
CN112153147A (en) | Method for placing chained service entities based on entity sharing in mobile edge environment | |
CN114090306B (en) | Pluggable block chain layered consensus method, system, device and storage medium | |
CN113132435B (en) | Distributed training network system with separated storage and service network and communication method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||