CN115529316A - Micro-service deployment method based on cloud computing center network architecture - Google Patents


Info

Publication number: CN115529316A
Authority: CN (China)
Legal status: Pending
Application number: CN202211206589.1A
Other languages: Chinese (zh)
Inventor
彭凯
马芳玲
徐博
胡毅
胡梦兰
彭聪
Current Assignee: Hubei Chutianyun Co ltd; Huazhong University of Science and Technology
Original Assignee: Hubei Chutianyun Co ltd; Huazhong University of Science and Technology
Application filed by Hubei Chutianyun Co ltd and Huazhong University of Science and Technology
Priority application: CN202211206589.1A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network

Abstract

The invention provides a micro-service deployment method based on a cloud computing center network architecture, which adopts a joint optimization approach to solve the micro-service deployment problem and the user-request routing problem simultaneously. Under certain constraints, the system can support different types of services while responding to massive concurrent mobile user requests. In addition, the invention accounts for the interdependency among different micro-services: by fully considering the communication dependencies among micro-services, it effectively reduces the system's response latency to user requests and improves the user application experience.

Description

Micro-service deployment method based on cloud computing center network architecture
Technical Field
The invention relates to the field of cloud computing, in particular to a micro-service deployment method based on a cloud computing center network architecture.
Background
The mobile internet industry has developed rapidly, and the number of global users is approximately 4.4 billion. People use countless mobile applications such as WeChat, QQ, Alipay and Taobao every day. With the prosperity of mobile applications and the dramatic increase in user data volume, enterprise architectures in the mobile internet have become overwhelmed; as a result, the traditional monolithic, distributed, layered and service-oriented architectures (SOA) have shifted in recent years toward the popular micro-service architecture.
As business forms multiply, users' requirements on service performance grow ever higher, and optimizing the user experience while keeping service providers' revenue growing has become the problem each large business cares most about. The rapid spread of the mobile internet and the explosive growth of services bring enterprises development opportunities but also great challenges, in particular how to guarantee service quality under high-concurrency massive user requests; the micro-service architecture provides a good solution to these problems. Micro-services are modules with specific functions: one micro-service focuses on a single function, and micro-services cooperate interactively to complete user requests together. Their high cohesion and low coupling make micro-services very easy to extend and maintain. Clearly, the micro-service architecture has irreplaceable advantages in service scenarios with surging traffic.
At present, there is no related work that studies in detail the joint optimization of micro-service instance deployment and request routing in the cloud computing center. Much work has focused on micro-service deployment and load balancing separately, without fully exploiting their strong coupling to optimize latency. Some research efforts consider only coarse-grained resource allocation, for example taking a whole server as the smallest unit of computing resources, which can waste a significant amount of resources. Other research ignores the communication delay among micro-services and considers only the time consumed by computing resources; in fact, deploying micro-services with dependency relationships on the same server can greatly reduce the probability of network congestion and greatly improve the user service experience.
Disclosure of Invention
In order to handle the high-concurrency mass of user requests that can reach a cloud computing center, the invention provides a micro-service deployment method in the cloud computing center. It integrates the micro-service technical architecture into cloud computing, establishes a performance model constrained by resource leasing cost, jointly optimizes micro-service instance deployment and request routing, and obtains an optimized micro-service deployment and user-request routing scheme based on an improved greedy heuristic, a genetic algorithm, a local search algorithm and the like, with the goal of minimizing user-request response latency while satisfying the ASP (application service provider) cost constraint.
The cloud computing center network architecture comprises a plurality of switches and a plurality of servers, with each switch connected to several servers. Multiple micro-services are deployed on the servers of the cloud computing center; combinations of different micro-services form micro-service chains with different functions, each micro-service has several instances, and each type of user request corresponds to one micro-service chain.
An optimal micro-service deployment scheme is obtained by jointly optimizing the micro-service deployment strategy and the request routing strategy, where the micro-service deployment strategy comprises the specific number of instances of each micro-service deployed on the servers of the cloud computing center and the deployment position of each instance, and the request routing strategy comprises the specific routing path of each user request between the servers of the cloud computing center.
The invention provides a cloud computing center network architecture and, based on it, a method for micro-service instance deployment and request routing in the network. A cloud computing center network model consisting of switches and servers is designed, where the switches are responsible for data transmission and the servers for request processing; multiple micro-service instances are deployed on the servers, and different combinations of micro-services form micro-service chains with different functions. With the resource leasing cost as a constraint, a performance model of the total system latency is established according to a queuing network. Based on this model, a micro-service instance deployment algorithm supporting multiple micro-service chains is designed; the algorithm, improved on the basis of hybrid genetic and local search algorithms, dynamically deploys multiple instances of each micro-service in the network and adjusts the request routing strategy according to the deployment result. The request routing algorithm designed for user requests aims to find an effective path through the micro-services in the network for each request, so that the system can complete the processing of requests more efficiently and return the corresponding results. The invention adopts joint optimization to solve the micro-service deployment problem and the user-request routing problem simultaneously: the micro-service instance deployment scheme is taken as the precondition of request routing, and the response latency computed from the request routing result is taken as the evaluation criterion of the instance deployment scheme, thereby fully exploiting the strong coupling between the two.
Under certain constraints, the system can support different types of services, such as WeChat, Youku, Alipay and the like, and can simultaneously respond to massive mobile user requests. In addition, the invention accounts for the interdependency among different micro-services: by fully considering the communication dependencies among micro-services, it effectively reduces the system's response latency to user requests and improves the user application experience.
Drawings
Fig. 1 is a schematic structural diagram of a cloud computing center network architecture provided in the present invention;
FIG. 2 is a schematic diagram of a microservice and its deployment on a server;
FIG. 3 is a flowchart of a micro-service deployment and request routing joint optimization algorithm to obtain an optimal micro-service deployment scenario;
FIG. 4 is a schematic diagram of request routing;
fig. 5 is a flow chart of a request routing algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort, based on the embodiments of the present invention, belong to its protection scope. In addition, the technical features of the various embodiments provided by the present invention may be combined with each other to form feasible technical solutions, without being limited by the order of steps or the structural composition; when such a combination is contradictory or cannot be realized by a person skilled in the art, the combination should be considered not to exist and falls outside the protection scope of the present invention.
Referring to fig. 1, a cloud computing center network architecture is provided, composed of a large cloud computing center comprising a plurality of switches and a plurality of servers. The switches are responsible for data forwarding, and the servers for request processing and routing. The switches are connected to each other by optical fiber and form a connected graph; each switch is connected to several servers, and one switch together with its attached servers forms a small network. Servers in one small network communicate with servers in other small networks through the data forwarding of the switches, while the communication delay among servers within the same small network can be neglected. Micro-services differ from traditional services in that a micro-service architecture divides an application into several modules with specific functions, which cooperate with one another to complete a user request together. To guarantee service quality, each micro-service has several instances, and a user request can choose among them to fulfill its needs. Several micro-services are combined into a linear chain in a certain order, namely a micro-service chain, and one micro-service chain corresponds to one type of user request. After a user request reaches the system, the switches and servers cooperate to process and route the request sequentially according to the order of the micro-services on the corresponding service chain, thereby completing the user's requirement.
The specific process is as follows: when a user request reaches the admission module of the cloud computing center, the admission module obtains the deployment positions of the instances of the request's first micro-service through a table lookup, selects an instance with low predicted delay according to the current state of the instances, and routes the request to the selected instance; when the first micro-service completes, the server selects an instance of the second micro-service for the request according to the routing algorithm. And so on, until every micro-service in the chain has been executed and the result is finally returned to the user.
Based on the constructed network architecture of the cloud computing center, an optimal micro-service deployment scheme is obtained by jointly optimizing the micro-service deployment strategy and the request routing strategy, where the micro-service deployment strategy comprises the specific number of instances of each micro-service deployed on the servers of the cloud computing center and the deployment position of each instance, and the request routing strategy comprises the specific routing path of each user request between the servers.
It is understood that a delay model constrained by the application service provider (ASP) resource leasing cost is built on this network architecture. Each micro-service has several instances (images); the micro-service instance deployment strategy includes determining the specific number of instances of each micro-service and the deployment position of each instance on the servers of the cloud computing center (fig. 2 shows a schematic diagram of micro-services deployed on servers). The user-request routing strategy includes designing the specific routing path of a request between the servers. The joint optimization of instance deployment and request routing makes full use of the coupling between the two: the instance deployment serves as the precondition of the request routing strategy, and the latency obtained after request routing serves as the criterion for measuring the quality of the deployment strategy. The algorithm optimizes the two problems simultaneously through continuous iterative updating, finally obtaining an optimal micro-service deployment scheme and request routing scheme, so that the network architecture can process user requests in high-concurrency massive scenarios.
As an embodiment, the micro-service deployment policy includes: generating an initial deployment scheme of the micro-service instances based on a greedy algorithm, and obtaining the optimal micro-service deployment scheme through iterative optimization with a hybrid genetic and local search algorithm.
It is understood that fig. 3 is a flow chart of the micro-service deployment method. Referring to fig. 3, for the micro-service deployment problem, the invention adopts an improved hybrid genetic and local search algorithm to obtain the corresponding solution. In each iteration, each individual invokes the request routing algorithm to minimize the request response latency under the resource leasing cost constraint. The micro-service instance deployment algorithm flow is as follows:
Algorithm 1: micro-service instance deployment algorithm
Input: maximum iteration number $T_{max}$, micro-service set M, server set I, initial population size $P_0$, total core number C of the servers
Output: the individual S with the maximum fitness function value

Compute the minimum core number $c_m^{\min}$ of each micro-service m in M
for each of the $P_0$ individuals to be generated do
    Randomly select r micro-services from M and expand their minimum core numbers $c_m^{\min}$ to $2c_m^{\min}$
    for each micro-service m in M do
        Select a deployment location for each instance of m according to the inter-service dependencies $w_{m,m'}$ and the service-to-server dependencies $D_{m,i}$
    end for
end for    // the initial population of $P_0$ individuals has been generated
for $t = 1$ to $T_{max}$ do
    for $j = 1$ to $\lceil P_0 / 2 \rceil$ do    // round up
        Select two parent solutions P1 and P2 from the population
        Apply the crossover operator to obtain offspring solutions C1 and C2
        Perform local search on C1 and C2 to obtain optimized solutions S1 and S2
        Put S1 and S2 into the population
    end for
    Call the request routing algorithm and compute the fitness function of each individual
    Sort the $2P_0$ solutions in descending order of the fitness function
    Select the top $P_0$ individuals from the $2P_0$ solutions as the new population
end for
Return the solution S with the maximum fitness function value
The flow of the algorithm is as follows:
a. calculate the minimum core number required by each micro-service in the micro-service set, randomly select r micro-services from the set, and expand their required core numbers to twice the minimum;
b. based on the degree of dependency of each micro-service on the other micro-services and on each server, select a deployment position for each instance of the micro-service, thereby generating an initial population of micro-service instances with P0 individuals;
c. select two solutions P1 and P2 from the population, select one server from each of P1 and P2, and exchange the micro-service instances deployed on the two servers to obtain solutions C1 and C2; execute the local search algorithm on C1 and C2 to obtain solutions S1 and S2, and put S1 and S2 back into the population;
d. repeat step c until the number of individuals in the population is 2P0;
e. execute the request routing algorithm on the 2P0 individuals in the population and calculate the fitness function of each individual;
f. sort the 2P0 individuals in descending order of the fitness function, and let the top P0 individuals form the new population serving as the initial population of the next iteration;
g. repeat steps c to f until the number of iterations reaches the maximum iteration number;
h. from the result of the last iteration, select the individual S with the highest fitness that satisfies the leasing cost constraint as the final micro-service deployment scheme.
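Steps a to h above can be sketched as a small Python loop. This is only an illustrative skeleton: the helper names `crossover`, `local_search` and `fitness` are hypothetical stand-ins for the operators described in the text (crossover swaps instance sets between servers, local search perturbs core counts, and fitness calls the request routing algorithm and returns the inverse of the resulting delay).

```python
import random

def deploy_microservices(population, t_max, p0, crossover, local_search, fitness):
    """Hybrid genetic / local-search loop (steps c to h).

    population: list of deployment individuals (any hashable/sortable form).
    fitness:    higher is better; in the patent it is 1 / total_delay
                obtained from the request routing algorithm.
    All operator arguments are illustrative stand-ins, not the patent's exact ones.
    """
    for _ in range(t_max):
        children = []
        # step c/d: produce offspring until the population would reach 2 * p0
        while len(population) + len(children) < 2 * p0:
            p1, p2 = random.sample(population, 2)         # pick two parents
            c1, c2 = crossover(p1, p2)                    # exchange server instance sets
            children += [local_search(c1), local_search(c2)]  # refine offspring
        population = population + children
        # steps e/f: evaluate, sort by fitness, keep the best p0 individuals
        population.sort(key=fitness, reverse=True)
        population = population[:p0]
    return max(population, key=fitness)                   # step h
```

With toy integer "individuals", an identity local search and a crossover that increments both parents, the loop simply hill-climbs, which makes the control flow easy to check.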
It is understood that the number of cores occupied by a micro-service can be estimated from the queue stability condition. Let the request arrival rate of micro-service chain n be $\lambda_n$; then every micro-service on chain n also sees request arrival rate $\lambda_n$. If each micro-service on n has only one instance, then for any micro-service m on n the queue stability condition gives $\lambda_n < c_m \mu_m$, from which the minimum initial core number of micro-service m is obtained as $c_m^{\min} = \lceil \lambda_n / \mu_m \rceil$.
Calculating the minimum core number required by each micro-service in the micro-service set comprises the following: for each micro-service m, the minimum core number $c_m^{\min}$ is calculated according to the relation between the request arrival rates on the micro-service chains and the core processing rate of the server:

$$c_m^{\min} = \left\lceil \frac{\sum_{n:\, m \in L_n} \lambda_n}{\mu_m} \right\rceil$$

where $\lambda_n$ is the request arrival rate of micro-service chain n (and hence of every micro-service on chain n), the sum runs over all chains containing micro-service m, and $\mu_m$ is the processing rate of a server core for micro-service m.
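Numerically, the minimum core count is just a ceiling of a rate ratio derived from the stability condition. A minimal sketch (the function and variable names are illustrative, not from the patent):

```python
import math

def min_cores(chain_arrival_rates, mu_m):
    """Minimum cores for a micro-service m.

    chain_arrival_rates: arrival rates (requests/s) of the chains containing m.
    mu_m:                per-core processing rate of m (requests/s).
    Solves the stability condition  sum(lambda_n) < c * mu_m  for the
    smallest integer c.
    """
    total_lambda = sum(chain_arrival_rates)
    return math.ceil(total_lambda / mu_m)
```

For example, with two chains arriving at 30 and 25 requests/s and a core rate of 20 requests/s, the service needs at least ceil(55/20) = 3 cores.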
As an embodiment, selecting a deployment location for an instance of a micro-service based on its degree of dependency on the other micro-services and on each server includes: calculating the degree of dependency of the micro-service on each server and deploying the instance on the server with the highest degree of dependency. The degree of dependency between two micro-services is characterized by the data traffic between them, written $w_{m_1, m_2}$ for micro-services $m_1$ and $m_2$. The degree of dependency of micro-service $m_1$ on server i is defined as the sum of the data traffic between $m_1$ and all micro-services deployed on server i, that is:

$$D_{m_1, i} = \sum_{m'} w_{m_1, m'}$$

where $m'$ ranges over the micro-services on server i.
It can be understood that, when deploying a micro-service on the servers of the cloud computing center, the server that will host the micro-service is determined according to the degree of dependency between the micro-service to be deployed and the micro-services already deployed on each server. Specifically, instance deployment follows the dependency degree between micro-services: instances with a high mutual dependency are deployed on the same server, so as to reduce the request response delay.
The micro-service deployment scheme of the invention fully considers the dependency relationships between micro-services: the data traffic $w_{m_1, m_2}$ between micro-services $m_1$ and $m_2$ is used as the criterion for evaluating their degree of dependency, and the larger the data traffic, the higher the degree of dependency between the two. On this basis, the degree of dependency $D_{m, i}$ of micro-service m on server i is defined. When deploying a micro-service instance, the micro-service m to be deployed is preferentially placed on the server with the highest degree of dependency $D_{m, i}$, thereby reducing latency.
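This placement rule can be sketched directly, assuming a traffic table `w[m1][m2]` holding the data traffic between two micro-services (the data structure and names are assumptions for illustration):

```python
def pick_server(m, servers, deployed, w):
    """Place micro-service m on the server whose already-deployed
    micro-services exchange the most data with m.

    deployed[i]: list of micro-services currently on server i.
    w[a][b]:     data traffic between micro-services a and b (the
                 dependency measure D is the sum of these per server).
    """
    def dependency(i):
        return sum(w[m][m2] for m2 in deployed[i])
    return max(servers, key=dependency)
```

A capacity check (remaining cores on the candidate server) would sit in front of `max` in a full implementation; it is omitted here to keep the dependency rule visible.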
When producing the micro-service deployment scheme, an iterative optimization method is adopted. First, an initial population of micro-service instances is generated, with P0 individuals. On this population, a crossover method is adopted to expand the search space and avoid the search being trapped in a local optimum. The crossover operator has a certain randomness and jump character: two individuals with higher evaluation functions are selected and crossed, that is, the instance sets deployed on any two servers of the two individuals are exchanged. This operation may strongly affect an individual's evaluation function, making it better or worse, but it is precisely this randomness that expands the search range of the algorithm and lets it jump out of local optima. Specifically, two individuals P1 and P2 are selected from the generated initial population, and the micro-service instances deployed on them are exchanged to obtain C1 and C2.
The crossover method can prevent the algorithm from falling into local optima, but because its jump character is too strong, the global optimum may be missed. Therefore, after the crossover a local search algorithm is adopted to explore the solution space carefully, avoiding the slow convergence that the jump characteristic of crossover would otherwise cause. In the invention, the local search is defined as adding or subtracting one core from the number of cores of an instance in the original individual's deployment, and finally the operation that reduces the delay the most is selected as the result of the local search. The neighborhood of an individual is defined as follows:
Figure 48658DEST_PATH_IMAGE025
wherein the content of the first and second substances,
Figure 538546DEST_PATH_IMAGE026
is a mini-suitService M occupies the number of cores on server i, M is the micro-service set,
Figure 6567DEST_PATH_IMAGE027
is a collection of servers with microservice m deployed.
The specific operation of the local search algorithm is as follows: for the above C1 and C2, either add one micro-service instance to C1 or remove one to obtain S1, leaving C2 unchanged (S2 = C2); or add one micro-service instance to C2 or remove one to obtain S2, leaving C1 unchanged (S1 = C1). After C1 and C2 have been processed by the local search algorithm, S1 and S2 are obtained and added to the population. The crossover method and local search algorithm are applied repeatedly to expand the population until the number of individuals reaches 2P0.
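The plus-or-minus-one-core neighborhood defined above can be enumerated directly. A sketch under one stated assumption: each deployed instance keeps at least one core, so the minus operation is skipped when it would reach zero (the patent does not spell out this boundary case).

```python
def neighbourhood(cores):
    """All solutions reachable by adding or removing one core of one
    deployed micro-service instance.

    cores: dict mapping micro-service -> {server: core count}, i.e. the
           c_{m,i} values of one individual.
    """
    neighbours = []
    for m, placement in cores.items():
        for i, c in placement.items():
            for delta in (+1, -1):
                if c + delta >= 1:           # assumption: keep >= 1 core per instance
                    new = {mm: dict(p) for mm, p in cores.items()}  # copy individual
                    new[m][i] = c + delta
                    neighbours.append(new)
    return neighbours
```

The local search step would evaluate each neighbour with the request routing algorithm and keep the one that lowers the delay the most.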
Then, the request routing algorithm is called for the 2P0 individuals in the population, and the fitness function of each individual is calculated. During the iterations of the micro-service instance deployment algorithm, the request routing algorithm is invoked to obtain the fitness of the current individual, and the quality of the current deployment and routing scheme is evaluated according to this fitness. The flow of the request routing algorithm is as follows:
Algorithm 2: request routing algorithm
Input: micro-service instance deployment scheme P and user request n
Output: routing scheme $R_n$ of request n

for each micro-service m in $L_n$ do    // $L_n$ is the set of micro-services on request chain n
    if the successor service of micro-service m is null then
        break
    else
        Let the successor service of micro-service m be $m_s$
        Let $m_j$ be the instance of m selected for request n in the previous iteration
        for each instance $m_s^q$ in $J_{m_s}$ do    // $J_{m_s}$ is the instance set of $m_s$; $m_s^q$ is its q-th instance
            Compute the distance-based routing probability $p^d_{j,q}$ and the core-based routing probability $p^c_{j,q}$ from $m_j$ to $m_s^q$
            Compute the final routing probability $p_{j,q}$
        end for
        Generate a random number r in [0, 1]
        for each instance $m_s^q$ do
            if r falls into the cumulative-probability interval of $m_s^q$ then
                Select $m_s^q$ as the execution instance and add it, with its probability, to $R_n$
            end if
        end for
    end if
end for
The request routing algorithm comprises the following steps:
Step one: for the current micro-service m of micro-service chain n, judge whether m is the last service on request chain n; if so, exit the routing algorithm and output the routing result.
Step two: if micro-service m has a successor service $m_s$, perform the following operations for each instance $m_s^q$ of $m_s$. Record the execution instance selected for micro-service m in the previous iteration as $m_j$.
Step three: compute the routing probability $p^d_{j,q}$ from $m_j$ to $m_s^q$ based on the distance between the servers on which the two are located, and compute the routing probability $p^c_{j,q}$ from $m_j$ to $m_s^q$ based on the number of server cores occupied by the two.
Step four: according to the weighting factor $\alpha$, compute the final probability $p_{j,q} = \alpha\, p^d_{j,q} + (1-\alpha)\, p^c_{j,q}$ that $m_j$ routes to $m_s^q$.
Step five: generate a random number r in [0, 1].
Step six: according to the generated random number r and the computed final probabilities $p_{j,q}$, select the specific instance of the successor service $m_s$ to be executed, and add the selected instance and the corresponding probability to the routing result $R_n$.
For example, suppose the successor service $m_s$ of micro-service m has three instances. The final probabilities from $m_j$ to the three instances are computed, giving three corresponding probabilities; each is compared against the generated random number r, and the instance to route to is determined from the comparison. For instance, if the three probabilities are 0.2, 0.3 and 0.5 respectively, then if the generated random number satisfies r < 0.2, the request is routed to the first instance; if 0.2 ≤ r < 0.5, it is routed to the second instance; and if r ≥ 0.5, it is routed to the third instance.
Step seven: repeat steps one to six to obtain the routing paths of all the micro-services of micro-service chain n.
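The interval comparison in the example is standard roulette-wheel selection over cumulative probabilities. A sketch, with the random draw passed in as an argument so the choice is reproducible (the function name is illustrative):

```python
def select_instance(instances, probs, r):
    """Pick the instance whose cumulative-probability interval contains r.

    probs must sum to 1; r is a draw from [0, 1). With probs
    [0.2, 0.3, 0.5]: r < 0.2 picks the first instance, 0.2 <= r < 0.5
    the second, and r >= 0.5 the third, as in the example above.
    """
    cumulative = 0.0
    for inst, p in zip(instances, probs):
        cumulative += p
        if r < cumulative:
            return inst
    return instances[-1]   # guard against floating-point round-off
```

In the routing algorithm, `r` would come from `random.random()` at step five.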
It can be appreciated that, in finding a routing path for micro-service chain n, the request routing algorithm based on a weighted sum of server distance and core number routes user requests over a specific deployment scheme. Following the order of the micro-services on the request chain, the probability of routing from each micro-service instance to every instance of its successor service is computed in turn, yielding the possible routing paths of each service-chain request. The routing probability consists mainly of a probability based on the distance between servers and a probability based on the number of instance cores, with a weighting factor $\alpha$ indicating the degree of influence of each on the final probability.
For each individual, after the request routing algorithm has been called, the fitness function of the individual is calculated. The quality of the individuals in the population is evaluated through the fitness function: in the iterative optimization process, individuals with large fitness values are retained and individuals with small fitness values are discarded. The fitness function is expressed as the inverse of the optimization objective, i.e.:

$$F = \frac{1}{T}$$

where T is the total delay of the system.
The total latency of a micro-service chain n in the system comprises the linger delay $T_n^{stay}$ at the micro-services on the chain and the communication delay $T_n^{comm}$ of data in the network.

The linger delay $T_n^{stay}$ of the micro-service chain is the sum of the linger delays at each micro-service on the chain:

$$T_n^{stay} = \sum_{m \in M_n} \sum_{i \in I_n} \left( T_{i,m}^{queue} + T_{i,m}^{comp} \right)$$

where $T_{i,m}^{queue}$ is the queuing delay at the instance of micro-service m on server i, $T_{i,m}^{comp}$ is the computation delay at the instance of micro-service m on server i, $x_{n,m}$ indicates whether micro-service m on service chain n and its predecessor node are deployed on the same server ($x_{n,m} = 0$ if so, 1 otherwise), $M_n$ is the set of micro-services on service chain n, and $I_n$ is the set of servers on the routing path of request chain n.
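Under the definitions above, the linger delay of one chain is a plain sum over the instances serving it. A small illustrative helper (the dictionary keys and function name are assumptions, not the patent's notation):

```python
def chain_stay_delay(instances):
    """Linger delay of a micro-service chain: sum of queuing plus
    computation delay over the instances that serve the chain. Each
    entry holds the delays measured at the instance of micro-service m
    on server i."""
    return sum(inst["queue"] + inst["comp"] for inst in instances)

# Example: a chain of three micro-services (delays in seconds)
chain = [
    {"queue": 0.004, "comp": 0.010},
    {"queue": 0.001, "comp": 0.007},
    {"queue": 0.002, "comp": 0.005},
]
# chain_stay_delay(chain) -> 0.029 s
```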
The communication delay $T_n^{comm}$ of micro-service chain n in the network consists of the data forwarding delay at the switches and the data transmission delay between the switches, i.e.:

$$T_n^{comm} = \sum_{s \in S_n} \left( T_s^{fwd} + \frac{D_s}{v} \right)$$

where $T_s^{fwd}$ is the data forwarding delay at switch s, $D_s$ is the total amount of data to be transmitted on switch s, $v$ is the data transmission rate between the switches, and $S_n$ is the set of switches on the routing path of micro-service chain n.
The total system delay $T_{dc}$ is the sum of the delays of all micro-service chains within the cloud computing center over a period of time T, i.e.:

$$T_{dc} = \sum_{n \in N} \left( T_n^{stay} + T_n^{comm} \right)$$

where $N$ is the set of micro-service chains served during T.
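Putting the delay components together, a hedged sketch of the delay model follows; the function names, the per-switch forwarding delays, data volumes, and the inter-switch rate are illustrative inputs, not values from the patent:

```python
def comm_delay(fwd_delays, data_volumes, rate):
    """Communication delay of one chain: per-switch forwarding delay
    plus transmission delay (data volume divided by inter-switch rate)."""
    return sum(f + d / rate for f, d in zip(fwd_delays, data_volumes))

def total_system_delay(chains):
    """Total delay: sum of stay plus communication delay over all the
    chains served by the cloud computing center in the period T."""
    return sum(c["stay"] + c["comm"] for c in chains)

# Example: one chain crossing two switches at 125 MB/s (about 1 Gbit/s)
t_comm = comm_delay([0.0002, 0.0002], [1.25e6, 2.5e6], 125e6)
# 0.0004 s forwarding + 0.01 s + 0.02 s transmission = 0.0304 s
```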
After the fitness functions of the 2P0 individuals are computed, the 2P0 individuals are sorted in descending order of fitness, and the top P0 individuals form a new population that serves as the initial population for the next iteration. The iterative processing is repeated until the number of iterations reaches the maximum iteration count. From the result of the last iteration, the individual S with the highest fitness function that also satisfies the leasing cost constraint is selected as the final micro-service deployment scheme.
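The population loop described above can be sketched as follows. This is a simplified, self-contained illustration: the crossover (swapping instances between two servers), the local search, and the fitness evaluation are passed in as callables, and every name here is an assumption rather than the patent's implementation:

```python
import random

def evolve(population, fitness, crossover, local_search, max_iters):
    """Hybrid GA + local search: grow the population to 2*P0 via
    crossover and local search, then keep the fittest P0 individuals
    as the initial population of the next iteration."""
    p0 = len(population)
    for _ in range(max_iters):
        children = []
        while len(population) + len(children) < 2 * p0:
            p1, p2 = random.sample(population, 2)   # pick two parents
            c1, c2 = crossover(p1, p2)              # exchange deployments
            children += [local_search(c1), local_search(c2)]
        # sort 2*P0 individuals by fitness, descending; keep the top P0
        population = sorted(population + children, key=fitness, reverse=True)[:p0]
    return population  # population[0] is the fittest surviving individual
```

Because the survivors are always the top P0 of the merged pool, the best fitness in the population never decreases across iterations.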
It should be noted that the present invention comprises two main parts: micro-service instance deployment and request routing. First, an application is divided into multiple micro-services, each of which has multiple instances. Micro-services with larger mutual data traffic are deployed on the same server or within the same small network according to their dependency relationships, which greatly reduces communication delay and improves user experience. After the micro-service instances are deployed, requests arrive at the cloud computing center, and a path must be planned for each request so that it is completed with low delay. The micro-service deployment and request routing flows are shown in Fig. 2 and Fig. 3. The invention establishes a performance model that minimizes delay with the ASP resource leasing cost as a constraint, and evaluates the merits of deployment and routing schemes by computing the queuing delay, computation delay, data forwarding delay, and data transmission delay of requests in the system.
As an embodiment, selecting from the last iteration result the individual S with the highest fitness function that satisfies the leasing cost constraint as the final micro-service deployment scheme includes: computing, for each individual in the last iteration result, its fitness function and the number of server cores required by its micro-service deployment scheme, and determining the required servers and switches from the required core counts; then computing the resource leasing cost Y corresponding to each individual's micro-service deployment scheme from the required servers and switches:

$$Y = c_{srv} \cdot n_{srv} + c_{sw} \cdot n_{sw}$$

where $c_{srv}$ is the cost of leasing one server, $c_{sw}$ is the cost of leasing one switch, $n_{srv}$ is the number of leased servers, and $n_{sw}$ is the number of leased switches.

The resource leasing cost is bounded, which constitutes the leasing cost constraint:

$$Y \le Y_{max}$$

where $Y_{max}$ is the maximum resource leasing cost at which a positive profit is still obtained.

Based on the fitness function and resource leasing cost of each individual in the last iteration result, the individual S with the highest fitness function that satisfies the leasing cost constraint is selected as the final micro-service deployment scheme.
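The cost screening can be sketched as below. The cost coefficients, field names, and function names are illustrative assumptions:

```python
def lease_cost(n_servers, n_switches, server_cost, switch_cost):
    """Resource leasing cost Y of a deployment scheme."""
    return server_cost * n_servers + switch_cost * n_switches

def pick_final_scheme(individuals, y_max):
    """From the last GA iteration, keep the individuals whose leasing
    cost satisfies Y <= y_max and return the one with the highest
    fitness, or None if no individual is feasible."""
    feasible = [ind for ind in individuals if ind["cost"] <= y_max]
    return max(feasible, key=lambda ind: ind["fitness"]) if feasible else None

# Example: a fitter but over-budget scheme loses to a feasible one
pool = [
    {"fitness": 0.9, "cost": lease_cost(12, 3, 100, 50)},  # Y = 1350
    {"fitness": 0.8, "cost": lease_cost(8, 2, 100, 50)},   # Y = 900
]
# with y_max = 1000, the second scheme is selected despite lower fitness
```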
It can be understood that the last iteration result includes a plurality of individuals, the fitness function and the resource lease cost of each individual are calculated, and the individual with the highest fitness and the resource lease cost meeting the lease cost constraint condition is taken as the final micro-service deployment scheme.
The embodiment of the invention provides a micro-service deployment method based on a cloud computing center, which mainly comprises the following steps:
(1) A network architecture of a cloud computing center supporting micro-services is designed, and the micro-services are deployed on servers of the cloud computing center. Different micro services form a micro service chain with specific functions according to a certain sequence. When the user request reaches the data center, the system selects proper micro-service examples in the system to sequentially complete the user request according to the composition of the micro-service chain corresponding to the user request.
(2) The research content is divided into two parts: micro-service instance deployment and request routing. Micro-service instance deployment determines the number of micro-service instances and their deployment positions; request routing determines the specific routing strategy of each request in the network, that is, the transmission path of the user request. A total system delay model is established accordingly.
(3) Micro-service instance deployment and request routing are jointly optimized: in each algorithm iteration, the current deployment scheme is evaluated using the response delay computed after request routing as the evaluation criterion, since different deployment schemes yield different routing strategies and response delays.
(4) The method solves the problem of micro-service instance deployment based on an improved genetic algorithm and a local search combined algorithm, and obtains a deployment scheme of multiple instances of multiple micro-services on a server of a cloud computing center.
(5) The request routing problem is solved by an algorithm based on distance and computing resource weighted transfer probability, and a routing path of a user request on a server of the cloud computing center is obtained.
The invention adopts joint optimization to solve the micro-service deployment problem and the user request routing problem simultaneously. Specifically, taking a micro-service instance deployment scheme as the precondition, the response delay computed from the request routing result is used as the evaluation criterion for that deployment scheme, thereby fully exploiting the strong coupling between micro-service instance deployment and request routing. Under given constraints, the system can support different types of services and respond to massive numbers of mobile user requests simultaneously. In addition, the invention accounts for the interdependency among different micro-services; by fully considering the communication dependencies among micro-services, it effectively reduces the system's response delay to user requests and improves the user application experience.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A micro-service deployment method based on a cloud computing center network architecture is characterized in that the cloud computing center network architecture comprises a plurality of switches and a plurality of servers, each switch is connected with the plurality of servers, a plurality of micro-services are deployed on the servers of a cloud computing center, the combination of different micro-services forms a micro-service chain with different functions, each micro-service has a plurality of instances, and each type of user request corresponds to one micro-service chain;
and obtaining an optimal micro-service deployment scheme by jointly optimizing a micro-service deployment strategy and a request routing strategy, wherein the micro-service deployment strategy comprises the specific number of instances of each micro-service deployed on the servers of the cloud computing center and the deployment position of each micro-service instance on the servers of the cloud computing center, and the request routing strategy comprises the specific routing path of each user request between the servers of the cloud computing center.
2. The method of claim 1, wherein the microservice deployment policy comprises:
generating an initial deployment scheme of the micro-service instance based on a greedy algorithm;
and obtaining an optimal micro-service deployment scheme based on iterative optimization of a hybrid genetic algorithm and a local search algorithm.
3. The method of micro-service deployment as claimed in claim 2, wherein the greedy-based algorithm generating an initial deployment scenario for micro-service instances comprises:
calculating the minimum core number required by each micro service in the micro service set, and randomly selecting r micro services from the micro service set to expand the required core number to be twice of the minimum core number;
based on the degree of dependence of any micro service on other micro services and the degree of dependence of any micro service on each server, selecting a deployment position for the instance of any micro service, and generating an initial population P0 of the instance of the micro service, wherein P0 is the number of individuals in the initial population;
the optimal micro-service deployment scheme is obtained based on iterative optimization of the hybrid genetic algorithm and the local search algorithm, and comprises the following steps:
a. selecting two solutions P1 and P2 from an initial population P0, respectively selecting one server from P1 and P2, exchanging micro-service instances deployed on the two servers to obtain solutions C1 and C2, executing a local search algorithm on C1 and C2 to obtain solutions S1 and S2, and putting the solutions S1 and S2 back into the initial population P0;
b. repeating the step a until the number of individuals in the population is 2P0;
c. executing a request routing algorithm on 2P0 individuals in the population, and calculating a fitness function of each individual;
d. sorting the 2P0 individuals in descending order according to the fitness function, and taking the top P0 individuals to form a new population as the initial population of the next iteration;
e. repeatedly executing a to d until the number of iterations reaches the maximum number of iterations;
f. selecting, from the last iteration result, the individual S with the highest fitness function that satisfies the leasing cost constraint as the final micro-service deployment scheme.
4. The method of claim 3, wherein the calculating the minimum number of cores required for each micro-service in the set of micro-services comprises:

calculating the minimum number of cores $c_m^{min}$ required by each micro-service m from the relation between the request arrival rate on the micro-service chain and the processing rate of a server core:

$$c_m^{min} = \left\lceil \frac{\lambda_m}{\mu_m} \right\rceil$$

where $\lambda_m$ is the arrival rate of requests containing micro-service m: the request arrival rate of a micro-service chain n is $\lambda_n$, so the request arrival rate of every micro-service on chain n is $\lambda_n$, and $\lambda_m$ is the sum of $\lambda_n$ over all chains containing m; $\mu_m$ is the processing rate of a server core for micro-service m.
5. The method of claim 3, wherein the selecting a deployment position for an instance of any micro-service based on the degree of dependence of that micro-service on other micro-services and on each server comprises:

calculating, from the degrees of dependence between micro-services, the degree of dependence of the micro-service on each server, and deploying the instance of the micro-service on the server with the highest degree of dependence;

wherein the degree of dependence between two micro-services is characterized by the data traffic between them:

$$d_{m_1,m_2} = \phi_{m_1,m_2} + \phi_{m_2,m_1}$$

where $\phi_{m_1,m_2}$ denotes the data traffic from micro-service $m_1$ to micro-service $m_2$; and the degree of dependence of micro-service m1 on server i is defined as the sum of the data traffic between m1 and all micro-services deployed on server i, i.e.:

$$D_{m_1,i} = \sum_{m' \in M_i} d_{m_1,m'}$$

where m' ranges over the micro-services on server i.
6. The method of claim 3, wherein the local search algorithm defines the neighborhood of a solution S as the space obtained by adding one to or subtracting one from the number of cores of an instance of a micro-service deployed in S; the neighborhood N(S) of solution S can be expressed as:

$$N(S) = \left\{ S' \mid c'_{i,m} = c_{i,m} \pm 1,\; m \in M,\; i \in I_m \right\}$$

where $c_{i,m}$ is the number of cores on server i occupied by micro-service m, M is the set of micro-services, and $I_m$ is the set of servers on which micro-service m is deployed.
7. The method of claim 3, wherein the request routing algorithm is:

step one: for each micro-service on the micro-service chain corresponding to the user request, judging whether the current micro-service m is the last service on the request chain n; if so, exiting the routing algorithm and outputting the routing result;

step two: if the micro-service m has a successor service $m^{+}$, executing steps three to six for each instance $j$ of $m^{+}$, the execution instance selected for micro-service m in the previous iteration being denoted $i^{*}$;

step three: calculating the routing probability $p^{dist}_{i^{*},j}$ from $i^{*}$ to $j$ based on the distance between the servers where the two are located, and calculating the routing probability $p^{core}_{i^{*},j}$ from $i^{*}$ to $j$ based on the number of server cores occupied by the two;

step four: calculating, according to the weighting factor $\alpha$, the final probability of routing from $i^{*}$ to $j$:

$$p_{i^{*},j} = \alpha\, p^{dist}_{i^{*},j} + (1-\alpha)\, p^{core}_{i^{*},j}$$

step five: generating a random number $r$ in $[0,1]$;

step six: selecting, according to the generated random number $r$ and the computed final probabilities $p_{i^{*},j}$, a specific execution instance $j$ for the micro-service $m^{+}$, and adding the selected instance $j$ and its corresponding final probability to the routing result R;

step seven: repeating steps one to six until each micro-service on the micro-service chain corresponding to the user request has a selected routing instance, obtaining the final routing result of the user request.
8. The method of claim 3, wherein the fitness function is:

$$F = \frac{1}{T_{dc}}$$

where $T_{dc}$, the total system delay, is the sum of the delays of all micro-service chains in the cloud computing center over a period of time T.
9. The method of claim 8, wherein the total system delay $T_{dc}$ is calculated by:

obtaining the linger delay $T_n^{stay}$ of micro-service chain n as the sum of the linger delays of each micro-service on chain n:

$$T_n^{stay} = \sum_{m \in M_n} \sum_{i \in I_n} \left( T^{queue}_{i,m} + T^{comp}_{i,m} \right)$$

where $T^{queue}_{i,m}$ is the queuing delay at the instance of micro-service m on server i, $T^{comp}_{i,m}$ is the computation delay at the instance of micro-service m on server i, $x_{n,m}$ indicates whether micro-service m on service chain n is deployed on the same server as its predecessor on chain n ($x_{n,m} = 0$ if so, 1 otherwise), $M_n$ is the set of micro-services on service chain n, and $I_n$ is the set of servers on the routing path of request chain n;

the communication delay $T_n^{comm}$ of micro-service chain n in the network consists of the data forwarding delay at the switches and the data transmission delay between the switches, i.e.:

$$T_n^{comm} = \sum_{s \in S_n} \left( T^{fwd}_{s} + \frac{D_s}{v} \right)$$

where $T^{fwd}_{s}$ is the data forwarding delay at switch s, $D_s$ is the total amount of data to be transmitted on switch s, $v$ is the data transmission rate between the switches, and $S_n$ is the set of switches on the routing path of micro-service chain n;

the total system delay is the sum of the delays of all micro-service chains in the cloud computing center over a period of time T, i.e.:

$$T_{dc} = \sum_{n \in N} \left( T_n^{stay} + T_n^{comm} \right)$$
10. The method of claim 3, wherein step f, selecting from the last iteration result the individual S with the highest fitness function that satisfies the leasing cost constraint as the final micro-service deployment scheme, includes:

calculating, for each individual in the last iteration result, its fitness function and the number of server cores required by its micro-service deployment scheme, and determining the required servers and switches according to the required number of server cores;

calculating the resource leasing cost Y corresponding to the micro-service deployment scheme of each individual according to the required servers and switches:

$$Y = c_{srv} \cdot n_{srv} + c_{sw} \cdot n_{sw}$$

where $c_{srv}$ is the cost of leasing one server, $c_{sw}$ is the cost of leasing one switch, $n_{srv}$ is the number of leased servers, and $n_{sw}$ is the number of leased switches;

bounding the resource leasing cost, which constitutes the leasing cost constraint:

$$Y \le Y_{max}$$

where $Y_{max}$ is the maximum resource leasing cost at which a positive profit is obtained;

and selecting, based on the fitness function and resource leasing cost of each individual in the last iteration result, the individual S with the highest fitness function that satisfies the leasing cost constraint as the final micro-service deployment scheme.
CN202211206589.1A, filed 2022-09-30 (priority date 2022-09-30): Micro-service deployment method based on cloud computing center network architecture. Status: Pending. Published as CN115529316A.

Publications (1)

Publication Number CN115529316A, Publication Date 2022-12-27.



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination