CN114168234A - Method and device for processing micro service process, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114168234A
CN114168234A (application CN202111488727.5A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111488727.5A
Other languages
Chinese (zh)
Inventor
单荣杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111488727.5A
Publication of CN114168234A
Legal status: Pending

Classifications

    • G — PHYSICS › G06 — COMPUTING; CALCULATING OR COUNTING › G06F — ELECTRIC DIGITAL DATA PROCESSING
        • G06F 9/4482 — Execution paradigms: procedural (under G06F 9/00 Arrangements for program control; G06F 9/44 Arrangements for executing specific programs; G06F 9/448 Execution paradigms)
        • G06F 9/4843 — Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system (under G06F 9/46 Multiprogramming arrangements; G06F 9/48 Program initiating; program switching)
    • G — PHYSICS › G06 — COMPUTING › G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
        • G06N 3/044 — Neural networks: recurrent networks, e.g. Hopfield networks
        • G06N 3/045 — Neural networks: combinations of networks
        • G06N 3/08 — Neural networks: learning methods


Abstract

The present disclosure relates to a method and an apparatus for processing a microservice flow, an electronic device, and a storage medium. The method includes: acquiring historical operating parameters of each microservice node in a system application, where the system application includes a plurality of microservice nodes and the historical operating parameters are generated by calling each microservice node according to an initial call flow to process historical service data; obtaining target operating parameters of each microservice node at a future time according to the historical operating parameters of each microservice node; and adjusting the calling order of the microservice nodes according to the target operating parameters of each microservice node to obtain a target call flow. The scheme predicts future target operating parameters from the existing historical operating parameters and automatically adjusts the microservice flow according to those predictions, so the microservice flow can be updated and the calling order of the microservice nodes adjusted automatically, greatly simplifying the adjustment process of the microservice flow.

Description

Method and device for processing micro service process, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and an apparatus for processing a micro service flow, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of cloud computing, more and more business links are being informatized, digitized, and made intelligent, forming microservices with different functions suitable for various fields. For rapid service access, microservices can be partitioned along the boundaries of their functional domains, and based on those boundaries can be rapidly orchestrated into new applications.
In the related art, microservice orchestration generally relies on a workflow defined in advance, with large amounts of glue code combining the microservices according to that workflow. When the microservice calling order needs to change, operations such as redeployment are often required, so adjusting the microservice call flow is cumbersome.
Disclosure of Invention
The present disclosure provides a method and an apparatus for processing a microservice flow, an electronic device, a computer-readable storage medium, and a computer program product, so as to at least solve the problem in the related art that adjustment of a microservice call flow is complicated. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for processing a microservice process is provided, including:
acquiring historical operating parameters of each micro-service node in system application, wherein the system application comprises a plurality of micro-service nodes, and the historical operating parameters are generated by calling each micro-service node according to an initial calling process to process historical service data;
obtaining target operation parameters of each micro service node at future time according to the historical operation parameters of each micro service node, wherein the future time is later than the current time;
and adjusting the calling sequence of the micro service nodes according to the target operation parameters of the micro service nodes to obtain a target calling flow.
In one embodiment, the historical operating parameters include operating parameters collected from multiple dimensions;
the obtaining of the target operation parameters of each micro service node at the future time according to the historical operation parameters of each micro service node includes:
and inputting the historical operating parameters of each dimension into a deep learning network to obtain target operating parameters corresponding to each dimension at future time.
In one embodiment, the historical operating parameters in each dimension are time-series data collected within a preset time period, and the deep learning network comprises a plurality of gated recurrent units (GRUs) connected in series in sequence;
the step of inputting the historical operating parameters of each dimension into a deep learning network to obtain the target operating parameters corresponding to each dimension at future time includes:
inputting the historical operating parameters in each dimension into the deep learning network, and processing the historical operating parameters in each dimension through the plurality of gated recurrent units to obtain target operating parameters in each dimension, where the target operating parameters are time-series data spanning the preset time period in the future.
In one embodiment, the adjusting the call sequence of the multiple microservice nodes according to the target operating parameter of each microservice node to obtain a target call flow includes:
adjusting the calling sequence of the micro service nodes according to the numerical value of the target operation parameter of each micro service node to obtain the target calling flow;
the target operation parameters comprise a target passing rate and a target operation duration, and the smaller the numerical value of the target passing rate and/or the target operation duration is, the earlier the calling sequence of the micro service nodes is.
In one embodiment, the method further comprises:
acquiring a dependency relationship among a plurality of micro service nodes;
the adjusting the calling sequence of each micro service node according to the target operation parameter of each micro service node to obtain a target calling flow comprises:
and adjusting the calling sequence of the micro service nodes according to the dependency relationship and the target operation parameters of the micro service nodes to obtain a target calling flow.
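One way to combine the dependency relationship with the target operating parameters, sketched below under assumptions (this is a generic technique, not taken from the patent), is a topological sort that always picks, among the nodes whose prerequisites are all satisfied, the one with the smallest predicted duration.

```python
# Assumed sketch: dependency-constrained ordering of microservice nodes.
# durations: {node: predicted duration}; deps: {node: set of prerequisites}.
import heapq

def ordered_call_flow(durations, deps):
    indegree = {n: len(deps.get(n, set())) for n in durations}
    dependents = {n: [] for n in durations}
    for node, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(node)
    # min-heap of (predicted duration, node) over dependency-free nodes
    ready = [(d, n) for n, d in durations.items() if indegree[n] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, node = heapq.heappop(ready)
        order.append(node)
        for m in dependents[node]:
            indegree[m] -= 1
            if indegree[m] == 0:
                heapq.heappush(ready, (durations[m], m))
    return order

flow = ordered_call_flow(
    {"auth": 5.0, "audit": 20.0, "transcode": 8.0, "publish": 3.0},
    {"publish": {"audit", "transcode"}},
)
print(flow)  # "publish" is cheap but must wait for its prerequisites
```

The node names and durations are invented for illustration; the point is that "publish" cannot be moved ahead of "audit" and "transcode" even though its predicted duration is smallest.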
In one embodiment, the obtaining historical operating parameters generated by each micro service node in the system application processing historical service data includes:
when the current moment is determined to meet the process processing conditions, acquiring the historical operating parameters of each micro service node in the system application;
the flow processing conditions include any of the following cases:
the variable quantity of the historical operating parameters is greater than a preset quantity;
the current moment satisfies a predefined time condition.
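The two trigger conditions above can be checked with a few lines. The threshold of 1000 parameter changes and the daily 02:00 schedule below are assumed values, not from the patent:

```python
# Illustrative check of the flow processing conditions: either the amount
# of historical-parameter change exceeds a preset threshold, or the current
# time matches a predefined schedule (both values are assumptions).
from datetime import datetime

def should_process_flow(param_change_count, now,
                        change_threshold=1000, scheduled_hour=2):
    changed_enough = param_change_count > change_threshold
    scheduled = now.hour == scheduled_hour and now.minute == 0
    return changed_enough or scheduled

print(should_process_flow(1500, datetime(2021, 12, 7, 15, 30)))  # True
print(should_process_flow(10, datetime(2021, 12, 7, 2, 0)))      # True
print(should_process_flow(10, datetime(2021, 12, 7, 15, 30)))    # False
```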
In one embodiment, before obtaining the historical operating parameters of each micro service node in the system application, the method further includes:
in the process of processing the historical service data by the system application, acquiring original operation parameters generated by processing the historical service data by each micro service node;
asynchronously writing the original operation parameters of each micro service node into a distributed service cluster through a thread pool, generating parameter processing tasks at each micro service node, and sending the parameter processing tasks to a task message queue;
and executing the parameter processing task in the task message queue, processing the original operating parameters corresponding to the parameter processing task in the distributed service cluster, and generating and storing the history operating parameters of the micro service node corresponding to the parameter processing task.
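The collection pipeline above can be sketched in a single process. In this simplified stand-in (an assumption, not the patented system), a dictionary plays the role of the distributed service cluster and a `queue.Queue` plays the role of the task message queue; raw parameters are written asynchronously through a thread pool, and a consumer executes the queued parameter-processing tasks to aggregate raw values into historical operating parameters.

```python
# Simplified single-process sketch of: async write via thread pool ->
# parameter-processing task on a queue -> aggregation into history.
import queue
from concurrent.futures import ThreadPoolExecutor

raw_store = {}              # stands in for the distributed service cluster
task_queue = queue.Queue()  # stands in for the task message queue
pool = ThreadPoolExecutor(max_workers=4)

def write_raw(node_id, params):
    raw_store.setdefault(node_id, []).append(params)

def collect(node_id, params):
    pool.submit(write_raw, node_id, params)  # asynchronous write
    task_queue.put(node_id)                  # parameter-processing task

for duration in (10.0, 14.0, 12.0):
    collect("node-a", {"duration_ms": duration})
pool.shutdown(wait=True)                     # wait for async writes

history = {}
while not task_queue.empty():                # execute queued tasks
    node_id = task_queue.get()
    samples = raw_store[node_id]
    history[node_id] = sum(s["duration_ms"] for s in samples) / len(samples)
print(history)  # averaged historical operating parameter per node
```

Averaging is just one possible aggregation; the patent leaves the exact processing of raw parameters open.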
In one embodiment, after the obtaining the target call flow, the method further includes:
continuing to call the plurality of microservice nodes according to the initial call flow to process current service data;
and when new service data are received, calling the micro service nodes to process the new service data according to the target calling process.
According to a second aspect of the embodiments of the present disclosure, there is provided a processing apparatus of a microservice process, including:
a parameter acquisition module configured to acquire historical operating parameters of each microservice node in a system application, where the system application includes a plurality of microservice nodes and the historical operating parameters are generated by calling each microservice node according to an initial call flow to process historical service data;
a parameter generation module configured to obtain target operating parameters of each microservice node at a future time according to the historical operating parameters of each microservice node, where the future time is a time later than the current time; and
a flow adjusting module configured to adjust the calling order of the plurality of microservice nodes according to the target operating parameters of each microservice node to obtain a target call flow.
In one embodiment, the historical operating parameters include operating parameters collected from multiple dimensions;
the parameter generation module is configured to input the historical operating parameters in each dimension into a deep learning network to obtain target operating parameters corresponding to each dimension in future time.
In one embodiment, the historical operating parameters in each dimension are time-series data collected within a preset time period, and the deep learning network comprises a plurality of gated recurrent units (GRUs) connected in series in sequence;
the parameter generation module is configured to input the historical operating parameters in each dimension into the deep learning network and process them through the plurality of gated recurrent units to obtain target operating parameters in each dimension, where the target operating parameters are time-series data spanning the preset time period in the future.
In one embodiment, the flow adjusting module is configured to perform adjustment on a calling sequence of the micro service nodes according to a numerical value of a target operating parameter of each micro service node, so as to obtain the target calling flow;
the target operation parameters comprise a target passing rate and a target operation duration, and the smaller the numerical value of the target passing rate and/or the target operation duration is, the earlier the calling sequence of the micro service nodes is.
In one embodiment, the apparatus further comprises:
a relationship obtaining module configured to perform obtaining a dependency relationship between the plurality of microservice nodes;
and the flow adjusting module is configured to adjust the calling sequence of the micro service nodes according to the dependency relationship and the target operation parameters of the micro service nodes to obtain a target calling flow.
In one embodiment, the parameter obtaining module is configured to perform obtaining the historical operating parameters of each micro service node in the system application when it is determined that a current time meets a process processing condition;
the flow processing conditions include any of the following cases:
the variable quantity of the historical operating parameters is greater than a preset quantity;
the current moment satisfies a predefined time condition.
In one embodiment, the apparatus further comprises:
the parameter acquisition module is configured to acquire original operation parameters generated by processing the historical business data by each micro service node in the process of processing the historical business data by the system application;
the parameter writing module is configured to asynchronously write the original operation parameters of the micro service nodes into the distributed service cluster through the thread pool, generate parameter processing tasks at the micro service nodes, and send the parameter processing tasks to a task message queue;
and the task processing module is configured to execute the parameter processing task in the task message queue, process the original operating parameters corresponding to the parameter processing task in the distributed service cluster, generate and store the historical operating parameters of the micro-service node corresponding to the parameter processing task.
In one embodiment, the apparatus further comprises:
the first data processing module is configured to execute the calling of the plurality of the micro service nodes according to the initial calling process to process the current business data;
and the second data processing module is configured to call a plurality of micro service nodes to process new business data according to the target call flow when receiving the new business data.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for processing the micro service flow as described in any embodiment of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform a method of processing a microservice procedure as described in any one of the embodiments of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product including instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method for processing a microservice flow as described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of obtaining historical operation parameters generated by processing historical service data by each micro service node in system application, and obtaining target operation parameters of each micro service node in future time according to the historical operation parameters of each micro service node. And automatically adjusting the calling sequence of the micro service nodes according to the target operation parameters of the micro service nodes. The method predicts future target operation parameters based on the existing historical operation parameters, automatically adjusts the micro-service flow according to the target operation parameters, can fully automatically update the micro-service flow and automatically adjust the calling sequence of the micro-service nodes, and greatly simplifies the adjustment process of the micro-service flow. The micro-service process is processed in advance according to the target operation parameters of the future time, so that the system can operate under the optimal micro-service process, the operation efficiency of the system is greatly improved, and the operation pressure of the system is favorably reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow chart illustrating a method of processing a microservice process in accordance with an exemplary embodiment.
FIG. 2 is a related schematic diagram illustrating a gated loop unit according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a microservice flow before adjustment, according to an example embodiment.
FIG. 4 is a diagram illustrating another microservice flow before adjustment, according to an example embodiment.
FIG. 5 is a schematic diagram illustrating an adjusted microservice flow according to an example embodiment.
FIG. 6 is a schematic diagram illustrating another adjusted microservice flow according to an example embodiment.
FIG. 7 is a schematic diagram illustrating a process for generating historical operating parameters, according to an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating the collection of raw operating parameters in accordance with an exemplary embodiment.
FIG. 9 is a diagram illustrating generation of historical operating parameters, according to an exemplary embodiment.
FIG. 10 is a flow chart illustrating a method of processing a microservice process in accordance with an exemplary embodiment.
FIG. 11 is a block diagram illustrating a processing device of a microservice process in accordance with an exemplary embodiment.
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should also be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in this disclosure are both information and data that are authorized by the user or sufficiently authorized by the parties.
The method for processing a microservice flow provided by the present disclosure can be applied to a server. The server runs a system application based on a microservice architecture. The system application includes a plurality of microservice nodes, which work according to an initial call flow. When the server determines that the current flow processing conditions are met, it acquires the historical operating parameters generated by each microservice node in the system application when processing historical service data, and generates target operating parameters of each microservice node at a future time according to those historical operating parameters. It then adjusts the initial call flow of the microservice nodes according to the target operating parameters. The server can be implemented as an independent server or as a cluster composed of a plurality of servers.
Fig. 1 is a flowchart illustrating a method for processing a micro service flow according to an exemplary embodiment, and the method for processing a micro service flow is used in a server, as shown in fig. 1, and includes the following steps.
In step S110, historical operating parameters of each micro service node in a system application are obtained, where the system application includes a plurality of micro service nodes, and the historical operating parameters are generated by invoking each micro service node according to an initial invocation flow to process historical service data.
The historical operation parameters may include, but are not limited to, an operation time length required for processing the historical service data, a passing rate of the service data, and the like. The server can generate node identifiers corresponding to the micro service nodes in advance, and the corresponding relation between the micro service nodes and the operation parameters is established based on the node identifiers.
The initial call flow refers to the microservice flow currently in use, which defines the calling order of the plurality of microservice nodes, for example sequential (serial) calls or parallel calls. In one example, the initial call flow is orchestrated and deployed by a user. In another example, the server processes the microservice flow cyclically, in which case the initial call flow may also be the flow produced by the server's last automatic adjustment.
Specifically, during the historical period, while the system application processes each piece of historical service data, the server calls each microservice node according to the initial call flow, collects the historical operating parameters of each node, and generates a mapping between the historical operating parameters and the node identifiers. When the server needs to adjust the microservice flow, it retrieves the historical operating parameters of each microservice node.
In step S120, a target operation parameter of each micro service node at a future time is obtained according to the historical operation parameter of each micro service node. Wherein the future time is a time later than the current time.
In particular, parameter prediction logic is deployed in the server. The parameter prediction logic may be implemented based on techniques such as regression analysis, machine learning, and the like. After obtaining the historical operating parameters of each micro service node, the server processes the historical operating parameters of each micro service node through parameter prediction logic to generate the target operating parameters of each micro service node in the future time. For example, an average value of historical operating parameters of each micro service node in a certain time period is obtained, and target operating parameters of each micro service node are predicted according to the average value.
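The averaging example in the paragraph above can be made concrete with a minimal sketch; the window size and sample values below are assumptions for illustration, and real parameter prediction logic would typically use regression or a learned model as the text notes.

```python
# Minimal illustration of the averaging example: predict a node's target
# operating parameter at a future time as the mean of its most recent
# historical samples (window size is an assumed configuration value).
def predict_target(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

durations = [10.0, 12.0, 11.0, 20.0, 22.0, 21.0]  # ms, one per time slot
print(predict_target(durations))  # mean of last 3 samples -> 21.0
```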
In step S130, the calling sequence of the micro service nodes is adjusted according to the target operating parameter of each micro service node, so as to obtain a target calling flow.
Specifically, the server is preconfigured with an association between the value of a target operating parameter and the flow order. For example, when the target operating parameter is the operation duration, microservice nodes with shorter durations are ranked earlier in the call flow template so that the system calls them preferentially. Based on this association, the server adjusts the microservice flow according to the value of each node's target operating parameter to generate the target call flow, and then calls each microservice node to process service data in the calling order defined by the target call flow.
Further, the server can continue to collect the operation parameters of each micro service node under the target call flow. The target calling process is continuously optimized by repeating the above contents, so that the micro service process can be maintained in an optimal state for a long time.
In the processing method of the micro service process, historical operating parameters generated by processing historical service data by each micro service node in system application are obtained, and target operating parameters of each micro service node in the future time are obtained according to the historical operating parameters of each micro service node. And automatically adjusting the calling sequence of the micro service nodes according to the target operation parameters of the micro service nodes. The method has the advantages that the future target operation parameters are predicted based on the existing historical operation parameters, the micro-service process is automatically adjusted according to the target operation parameters, the micro-service process can be fully automatically updated, the calling sequence of the micro-service nodes can be automatically adjusted, and therefore the adjustment process of the micro-service process is greatly simplified. The micro-service process is processed in advance according to the target operation parameters of the future time, so that the system can operate under the optimal micro-service process, the operation efficiency of the system is greatly improved, and the operation pressure of the system is favorably reduced.
In an exemplary embodiment, the server may switch from the initial call flow to the target call flow in a hot-update manner. In step S130, after the target call flow is obtained, the method further includes: continuing to call the plurality of microservice nodes according to the initial call flow to process the current service data; and, when new service data is received, calling the plurality of microservice nodes according to the target call flow to process the new service data.
Specifically, after the target call flow is generated, service data currently being processed continues to be handled under the initial call flow; once that processing completes, the initial call flow may be deleted or archived. New service data received after the target call flow is generated is processed according to the target call flow, until a newer call flow is produced. Switching between the old and new call flows by hot update in this way improves both the efficiency and the success rate of flow updates.
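The hot-update switch can be sketched with an atomically replaced flow reference: a request snapshots the flow once when it starts, so in-flight requests keep the old order while later requests see the new one. This is an assumed minimal mechanism, not the patent's implementation; the names are illustrative.

```python
# Hedged sketch of hot-updating the call flow: in-flight work keeps the
# flow it started with; requests arriving after hot_update() use the new one.
import threading

class FlowHolder:
    def __init__(self, flow):
        self._flow = flow
        self._lock = threading.Lock()

    def current(self):
        with self._lock:
            return self._flow          # each request snapshots the flow once

    def hot_update(self, new_flow):
        with self._lock:
            self._flow = new_flow      # later requests see the new flow

holder = FlowHolder(["a", "b", "c"])   # initial call flow
in_flight = holder.current()           # request started before the update
holder.hot_update(["c", "a", "b"])     # target call flow installed
new_request = holder.current()
print(in_flight, new_request)
```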
In an exemplary embodiment, the historical operating parameters of each microservice node may be processed through a deep learning network to obtain the target operating parameters. The deep learning network may be any network capable of predicting data, such as a convolutional neural network or a recurrent neural network. In a specific implementation, after acquiring the historical operating parameters corresponding to each microservice node, the server inputs them into a pre-trained deep learning network to obtain the target operating parameters of each microservice node at a future time.
In one embodiment, the deep learning network may be implemented based on time-series models such as the LSTM (Long Short-Term Memory network) or the GRU (Gated Recurrent Unit). To avoid the excessive parameter count of a conventional LSTM, the deep learning network of this embodiment includes a plurality of GRUs connected in series in sequence, each outputting a vector of preset length (e.g., 512). The GRU is a type of recurrent neural network proposed to address long-term memory and the vanishing-gradient problem in backpropagation. In this case, the historical operating parameters are time-series data acquired within a preset time period.
Specifically, (a) in fig. 2 shows a structural schematic diagram of a single GRU; in fig. 2 (a), h denotes the memory cell. Fig. 2 (b) shows the GRU forward pass unrolled in time sequence. Referring to fig. 2, the GRU includes two gate structures: an update gate and a reset gate. Both gates receive the same input: the input signal at the current time and the context at the previous time. The activations of the two gates are defined by the following equations (1) and (2), respectively.
z_t = σ(W_z·x_t + U_z·h_{t-1})    (1)
r_t = σ(W_r·x_t + U_r·h_{t-1})    (2)
where x_t is the input signal at the current time and h_{t-1} is the context at the previous time.
The candidate hidden-layer activation of the GRU unit is defined by the following equation (3).
h̃_t = tanh(W·x_t + U·(r_t ⊙ h_{t-1}))    (3)
where ⊙ denotes element-wise multiplication, and U and W are learned weight matrices.
The final output of the GRU is determined by the previous-time context, the candidate activation, and the update-gate activation, as shown in the following equation (4). The output is saved in the memory cell h as the context of the current time.
h_t = z_t ⊙ h_{t-1} + (1 − z_t) ⊙ h̃_t    (4)
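The standard GRU forward step corresponding to equations (1)–(4) can be sketched in numpy. This is a hedged illustration only; the hidden/input sizes, the random weight initialization, and the scaling factor are assumptions, not the patent's trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, p):
    """One GRU forward step following equations (1)-(4)."""
    z_t = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev)            # update gate, eq. (1)
    r_t = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev)            # reset gate,  eq. (2)
    h_tilde = np.tanh(p["W"] @ x_t + p["U"] @ (r_t * h_prev))  # candidate,   eq. (3)
    return z_t * h_prev + (1.0 - z_t) * h_tilde                # new context, eq. (4)

# Toy sizes: hidden 4, input 3; U-matrices act on h (4x4), W-matrices on x (4x3).
rng = np.random.default_rng(0)
p = {k: rng.standard_normal((4, 4 if k.startswith("U") else 3)) * 0.1
     for k in ("Wz", "Uz", "Wr", "Ur", "W", "U")}

# Run five time steps of an input series through the cell.
h = np.zeros(4)
for x_t in rng.standard_normal((5, 3)):
    h = gru_cell(x_t, h, p)
```

Because equation (4) forms a convex combination of the previous context and the tanh-bounded candidate, the hidden state stays bounded in (−1, 1), which is what makes stacking several such units stable.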
In this embodiment, predicting future operating parameters on the basis of the prior knowledge learned by the deep learning network ensures the accuracy of the obtained target operating parameters, and the network's strong processing capability ensures the efficiency of processing the operating parameters.
In an exemplary embodiment, the historical operating parameters include operating parameters collected from a plurality of dimensions. In this case, the historical operating parameters in each dimension may be input to the deep learning network, resulting in the target operating parameters corresponding to each dimension at a future time. If the deep learning network comprises a plurality of GRUs connected in series in sequence, the historical operating parameters in each dimension are input into the network and processed through the plurality of GRUs to obtain the target operating parameters in each dimension, the target operating parameters being time-series data within a preset future time length.
The server may acquire the historical operating parameters of each micro service node in a time dimension and a passing-rate dimension. For example, when a piece of historical service data passes through a micro service node, the running time required by the node to process that data is collected, together with a passing rate characterizing whether the historical service data passes the node and enters the next node.
Specifically, when there are historical operating parameters of multiple dimensions, the calling sequence of the multiple micro service nodes may be adjusted according to the values of each node's target operating parameters in those dimensions, so as to obtain the target call flow. Taking target operating parameters including a target passing rate and a target running duration as an example, the smaller the target passing rate and target running duration, the earlier the node's position in the flow template. The passing rate represents the probability that service data flows into the next node; deploying nodes with a small passing rate ahead of other nodes improves the return efficiency of processing results and the release efficiency of cluster computing resources.
In one embodiment, a priority corresponding to each dimension may be preconfigured, and the target call flow is generated according to the priority of each dimension and the values of the multi-dimensional target operating parameters of each micro service node. Continuing the example in which the target operating parameters include a target passing rate and a target running duration, suppose the time dimension has a higher priority than the passing-rate dimension. The running duration of micro service node 1 is A1 and its passing rate is B1; the running duration of micro service node 2 is A2 and its passing rate is B2. If A1 < A2 and B1 > B2, micro service node 1 is determined to be called first, because the higher-priority time dimension favors it.
In another embodiment, a weight coefficient corresponding to each dimension may be preconfigured. A node evaluation value is determined for each micro service node from the weight coefficients and the target operating parameters (for example, as the weighted sum of the target operating parameters over the dimensions), and micro service nodes with smaller evaluation values are assigned earlier calling orders.
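The weighted-sum ordering just described can be sketched as follows. The node names, weight values, and parameter values are illustrative assumptions only:

```python
# Hypothetical node records: (name, target passing rate, target running time in ms).
nodes = [("node1", 0.95, 120.0), ("node2", 0.40, 80.0), ("node3", 0.70, 200.0)]

# Preconfigured weight coefficients per dimension (assumed values).
w_pass, w_time = 0.6, 0.4

def evaluation(node):
    _, pass_rate, run_ms = node
    # Weighted sum of the target operating parameters; the running time is
    # rescaled to seconds so both dimensions contribute on a similar scale.
    return w_pass * pass_rate + w_time * (run_ms / 1000.0)

# Smaller evaluation value => earlier calling order.
target_order = sorted(nodes, key=evaluation)
print([n[0] for n in target_order])
```

Here node2 (low passing rate, short runtime) is called first, matching the intuition that cheap, highly selective checks should run early.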
In this embodiment, by collecting historical operating parameters from multiple dimensions, the performance of each micro service node is considered from multiple angles, so that the micro service nodes can be ordered more flexibly as required, improving the adjustment flexibility of the micro service flow.
In an exemplary embodiment, the method further comprises: acquiring a dependency relationship among a plurality of micro service nodes; step S130, adjusting the calling sequence of each micro service node according to the target operation parameter of each micro service node to obtain a target calling flow, comprising: and adjusting the calling sequence of the micro service nodes according to the dependency relationship and the target operation parameters of the micro service nodes to obtain a target calling flow.
Specifically, the server determines micro service nodes having a dependency relationship from among the plurality of micro service nodes. On the basis of the target call flow determined from the values of the target operating parameters, the calling order of a depended-upon micro service node is adjusted to be ahead of the micro service nodes that depend on it, ensuring that data is passed among the micro service nodes in a direction conforming to the dependency relationship and is processed along the shortest path in the shortest time.
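One way to combine predicted scores with dependency constraints is a priority-driven topological sort. This is a sketch of my own choosing (Kahn's algorithm with a min-heap); the scores and the dependency map are illustrative assumptions:

```python
import heapq

def order_nodes(scores, deps):
    """Call order: lowest predicted score first, but a node is never placed
    before a node it depends on (Kahn's algorithm with a min-heap)."""
    indeg = {n: 0 for n in scores}
    children = {n: [] for n in scores}
    for node, needed in deps.items():
        for d in needed:
            indeg[node] += 1
            children[d].append(node)
    ready = [(scores[n], n) for n in scores if indeg[n] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, n = heapq.heappop(ready)      # cheapest currently-callable node
        order.append(n)
        for c in children[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                heapq.heappush(ready, (scores[c], c))
    return order

# Mirrors the figure example: node 5 depends on node 1, node 8 depends on node 5.
scores = {1: 0.2, 2: 0.25, 5: 0.1, 8: 0.3}
result = order_nodes(scores, {5: [1], 8: [5]})
print(result)
```

Node 5 has the best score but still runs after node 1, which it depends on, so the output respects both the predicted parameters and the dependency direction.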
Fig. 3-6 show process diagrams of a micro service flow. As shown in fig. 3 or fig. 4, the initial call flow may have a serial or a parallel structure, with micro service nodes 1, 2, 3, …, 8, where micro service node 5 depends on micro service node 1, and micro service node 8 depends on micro service node 5. Each micro service node is a micro service with its own responsibility boundary, and when the verification result of a micro service node does not pass, processing of the service data ends directly. From the target operating parameters of the nodes, the server determines that micro service nodes 1-4 share the same order and come first; micro service nodes 5-6 share the same order, later than nodes 1-4; and micro service nodes 7-8 share the same order and come last, so that the target call flow shown in fig. 5 can be generated.
Further, while the call flow shown in fig. 5 is running, it may be adjusted again according to its own operating parameters; for example, if the passing rate and running duration of micro service node 6 are lower than those of micro service nodes 5, 7 and 8, micro service node 6 may be moved forward, yielding the call flow shown in fig. 6.
In this embodiment, since the total operation time of the micro service flow generally depends on the slowest micro service node, the micro service node with the lowest passing rate and the relatively short operation time is moved forward, so that the processing speed of the micro service flow is increased while the computing resources are saved.
In an exemplary embodiment, obtaining the historical operating parameters generated by each micro service node in the system application processing historical service data comprises: when it is determined that the current time satisfies a flow processing condition, acquiring the historical operating parameters of each micro service node in the system application. The flow processing condition includes either of the following cases: the amount of change of the historical operating parameters is greater than a preset amount; or the current time satisfies a predefined time condition. By setting flow processing conditions, the server can autonomously trigger processing of the micro service flow, realizing dynamic adjustment of the micro service flow.
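A flow-processing condition of this kind can be sketched as a simple predicate. The threshold value and the scheduled trigger hour below are illustrative assumptions, not values from the patent:

```python
import datetime as dt

SCHEDULE_HOURS = frozenset({3})  # assumed nightly trigger hour (03:00)
MIN_DELTA = 1000                 # assumed change threshold on parameter count

def should_process(param_count, last_count, now):
    """True when either flow-processing condition holds: enough new
    parameters have accumulated, or the predefined time condition is met."""
    changed_enough = (param_count - last_count) > MIN_DELTA
    on_schedule = now.hour in SCHEDULE_HOURS and now.minute == 0
    return changed_enough or on_schedule

print(should_process(5200, 4000, dt.datetime(2022, 3, 11, 12, 30)))  # delta trigger
print(should_process(100, 50, dt.datetime(2022, 3, 11, 3, 0)))       # time trigger
```

Either branch alone suffices to start a new round of prediction and reordering, which is what lets the server trigger adjustment autonomously.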
In an exemplary embodiment, the manner in which the historical operating parameters are generated is described. As shown in fig. 7, before acquiring the historical operating parameters of each micro service node in the system application in step S110, the method further includes:
step S710, in the process of processing the historical service data by the system application, collecting the original operation parameters generated by each micro service node processing the historical service data.
Step S720, the original operation parameters of each micro service node are asynchronously written into the distributed service cluster through the thread pool, parameter processing tasks are generated at each micro service node, and the parameter processing tasks are sent to the task message queue.
Step S730, executing the parameter processing task in the task message queue, processing the original operation parameter corresponding to the parameter processing task in the distributed service cluster, generating and storing the historical operation parameter of the micro service node corresponding to the parameter processing task.
Specifically, while the system application processes each piece of historical service data, the original operating parameters generated by each micro service node processing that data are collected. The original operating data of each micro service node is asynchronously written, in key-value form, into a redis cluster (distributed service cluster) through a thread pool. At the same time, a parameter processing task is generated at each micro service node and sent to a task message queue (MQ), so that the server asynchronously executes the parameter processing tasks in the queue and processes the corresponding original operating parameters in the distributed service cluster, for example decomposing the original operating parameters by dimension to generate structured historical operating parameters, and stores the resulting historical operating parameters of the corresponding micro service node in a database or a non-relational store (for example, a NoSQL database such as Elasticsearch). By collecting the operating parameters of the micro service nodes in real time and processing the original operating parameters asynchronously with a thread pool, the operating efficiency of the system can be improved and its operating overhead reduced.
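The asynchronous pipeline of steps S710-S730 can be sketched with in-memory stand-ins; the dict standing in for the redis cluster, the `queue.Queue` standing in for the MQ, the dict standing in for the database, and the key format are all assumptions made to keep the sketch self-contained and runnable:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

kv_store = {}            # stand-in for the redis (distributed service) cluster
kv_lock = threading.Lock()
task_mq = queue.Queue()  # stand-in for the task message queue
history_db = {}          # stand-in for the historical-parameter database

def write_raw(data_id, node, raw):
    """Step S720: async key-value write plus a parameter-processing task."""
    with kv_lock:
        kv_store[f"{data_id}:{node}"] = raw
    task_mq.put((data_id, node))

def process_tasks():
    """Step S730: drain the queue, decompose raw params by dimension, store."""
    while not task_mq.empty():
        data_id, node = task_mq.get()
        duration_ms, passed = kv_store[f"{data_id}:{node}"]
        history_db.setdefault(node, []).append(
            {"duration_ms": duration_ms, "passed": passed})

# Step S710: collected raw params, written concurrently through a thread pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    for i, raw in enumerate([(120, True), (80, False), (200, True)]):
        pool.submit(write_raw, f"req-{i}", f"node-{i % 2}", raw)

process_tasks()  # the pool's context manager has already joined all writes
print(sorted(history_db))
```

Decoupling the write path (thread pool plus key-value store) from the structuring path (queue consumer) is what keeps parameter collection off the request-handling critical path.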
Fig. 8 is a flowchart illustrating a method for processing a micro service flow according to an exemplary embodiment, where the method for processing the micro service flow is implemented by a micro service flow processing engine deployed in a server. As shown in fig. 9, the microservice process engine includes a front-end component, a unique ID generator, a data collection system, and an intelligent prediction module. As shown in fig. 8, the processing method of the microservice procedure includes the following steps.
1. And (3) processing the historical service data:
in step S802, after the historical service data passes through the front-end component, a data identifier corresponding to the service data is generated by the unique ID generator, so as to facilitate tracking, searching, and processing the operating parameter corresponding to the service data. Referring to fig. 10, in the process of processing historical service data by the system application, the data acquisition system acquires original operating parameters generated by each micro service node processing the historical service data. The original operating parameters include an original operating time length and an original passing rate.
In step S804, the data acquisition system asynchronously writes the original operating parameters of each micro service node into the distributed service cluster through the thread pool, generates a parameter processing task at each micro service node, and sends the parameter processing task to the task message queue.
In step S806, the data acquisition system executes the parameter processing task in the task message queue, searches for the original operating parameters of each micro service node corresponding to the parameter processing task from the distributed service cluster according to the data identifier, generates the historical operating parameters of each micro service node corresponding to the parameter processing task, and stores the historical operating parameters in the database.
In step S808, when the process processing condition is satisfied, the intelligent prediction module obtains the historical operating duration and the historical throughput of each micro service node from the database, and respectively processes the historical operating duration and the historical throughput of each micro service node by using the deep learning network, so as to obtain the target operating duration and the target throughput of each micro service node. The network structure and processing manner of the deep learning network may refer to the above embodiments, which are not specifically described herein.
In step S810, if it is determined that the initial call flow needs to be adjusted according to the target running time and the target passing rate of each micro service node, a target call flow is generated, new service data is processed using the target call flow, and the initial call flow is used to continue processing the currently existing service data. And if the adjustment is not needed, continuing to use the initial flow template.
And repeating the steps S802 to S810 to dynamically adjust the micro-service flow.
It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, the steps are not subject to a strict ordering and may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which need not be performed at the same time and may be performed at different times; nor need they be performed sequentially, and they may instead be performed in turn with, or interleaved with, other steps or with sub-steps of other steps.
It is understood that the same/similar parts between the embodiments of the method described above in this specification can be referred to each other, and each embodiment focuses on the differences from the other embodiments, and it is sufficient that the relevant points are referred to the descriptions of the other method embodiments.
FIG. 11 is a block diagram illustrating a processing device 1100 of a microservice process in accordance with an exemplary embodiment. Referring to fig. 11, the apparatus includes a parameter obtaining module 1102, a parameter generating module 1104, and a flow adjusting module 1106.
A parameter obtaining module 1102, configured to obtain historical operating parameters of each micro service node in a system application, where the system application includes a plurality of micro service nodes, and the historical operating parameters are generated by calling each micro service node according to an initial call flow to process historical service data; a parameter generating module 1104, configured to obtain, from the historical operating parameters of each micro service node, the target operating parameters of each micro service node at a future time, the future time being later than the current time; and a flow adjusting module 1106, configured to adjust the calling sequence of the plurality of micro service nodes according to the target operating parameters of each micro service node, so as to obtain a target call flow.
In an exemplary embodiment, the historical operating parameters include operating parameters collected from a plurality of dimensions; and the parameter generation module 1104 is configured to perform input of the historical operating parameters in each dimension to the deep learning network, so as to obtain the target operating parameters corresponding to each dimension in the future time.
In an exemplary embodiment, the historical operating parameters in each dimension are time-series data collected within a preset time length; the deep learning network comprises a plurality of gated recurrent units connected in series in sequence; and the parameter generation module 1104 is configured to input the historical operating parameters in each dimension into the deep learning network and process them through the plurality of gated recurrent units to obtain the target operating parameters in each dimension, the target operating parameters being time-series data within a preset future time length.
In an exemplary embodiment, the flow adjusting module 1106 is configured to perform adjusting the calling sequence of the multiple micro service nodes according to the value of the target operating parameter of each micro service node, so as to obtain a target calling flow; the target operation parameters comprise a target passing rate and a target operation time length, and the smaller the numerical value of the target passing rate and/or the target operation time length is, the earlier the calling sequence of the micro service nodes is.
In an exemplary embodiment, the apparatus 1100 further comprises: the relation acquisition module is configured to acquire the dependency relation among the micro service nodes; and a flow adjusting module 1106 configured to adjust the calling sequence of the multiple micro service nodes according to the dependency relationship and the target operating parameters of the micro service nodes, so as to obtain a target calling flow.
In an exemplary embodiment, the parameter obtaining module 1102 is configured to perform, when it is determined that the current time meets the process processing condition, obtaining historical operating parameters of each micro service node in the system application; the flow processing conditions include any of the following cases: the variable quantity of the historical operating parameters is larger than the preset quantity; the current moment satisfies a predefined time condition.
In an exemplary embodiment, the apparatus 1100 further comprises: the parameter acquisition module is configured to acquire original operation parameters generated by processing historical service data by each micro service node in the process of processing the historical service data by the system application; the parameter writing module is configured to asynchronously write the original operation parameters of each micro-service node into the distributed service cluster through the thread pool, generate parameter processing tasks at each micro-service node and send the parameter processing tasks to the task message queue; and the task processing module is configured to execute the parameter processing tasks in the task message queue, process the original operation parameters corresponding to the parameter processing tasks in the distributed service cluster, generate and store the historical operation parameters of the micro service nodes corresponding to the parameter processing tasks.
In an exemplary embodiment, the apparatus 1100 further comprises: the first data processing module is configured to execute the step of continuously calling the plurality of micro service nodes according to the initial calling process to process the current service data; and the second data processing module is configured to call the plurality of micro service nodes to process the new business data according to the target call flow when receiving the new business data.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 12 is a block diagram illustrating an electronic device S00 for processing a microservice flow in accordance with an exemplary embodiment. For example, the electronic device S00 may be a server. Referring to FIG. 12, electronic device S00 includes a processing component S20 that further includes one or more processors and memory resources represented by memory S22 for storing instructions, such as applications, that are executable by processing component S20. The application program stored in the memory S22 may include one or more modules each corresponding to a set of instructions. Further, the processing component S20 is configured to execute instructions to perform the above-described method.
The electronic device S00 may further include: a power supply component S24 configured to perform power management of the electronic device S00; a wired or wireless network interface S26 configured to connect the electronic device S00 to a network; and an input/output (I/O) interface S28. The electronic device S00 may operate based on an operating system stored in the memory S22, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory S22 comprising instructions, executable by the processor of the electronic device S00 to perform the above method is also provided. The storage medium may be a computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising instructions executable by a processor of the electronic device S00 to perform the above method.
It should be noted that, the descriptions of the above-mentioned apparatus, the electronic device, the computer-readable storage medium, the computer program product, and the like according to the method embodiments may also include other embodiments, and specific implementations may refer to the descriptions of the related method embodiments, which are not described in detail herein.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for processing a micro service flow is characterized by comprising the following steps:
obtaining historical operation parameters of each micro service node in system application, wherein the system application comprises a plurality of micro service nodes, and the historical operation parameters are generated by calling each micro service node according to an initial calling process to process historical service data;
obtaining target operation parameters of each micro service node at future time according to the historical operation parameters of each micro service node, wherein the future time is later than the current time;
and adjusting the calling sequence of the micro service nodes according to the target operation parameters of the micro service nodes to obtain a target calling flow.
2. The method of claim 1, wherein the historical operating parameters include operating parameters collected from multiple dimensions;
the obtaining of the target operation parameters of each micro service node at the future time according to the historical operation parameters of each micro service node includes:
and inputting the historical operating parameters of each dimension into a deep learning network to obtain target operating parameters corresponding to each dimension at future time.
3. The method for processing the micro service flow according to claim 2, wherein the historical operating parameters in each dimension are time-series data collected within a preset time length; the deep learning network comprises a plurality of gated recurrent units connected in series in sequence;
the inputting the historical operating parameters of each dimension into a deep learning network to obtain the target operating parameters corresponding to each dimension at a future time comprises:
inputting the historical operating parameters in each dimension into the deep learning network, and processing the historical operating parameters in each dimension through the plurality of gated recurrent units to obtain target operating parameters in each dimension, wherein the target operating parameters are time-series data within the preset time length in the future.
4. The method for processing the micro service process according to any one of claims 1 to 3, wherein the adjusting the calling sequence of the plurality of micro service nodes according to the target operating parameter of each micro service node to obtain the target calling process comprises:
adjusting the calling sequence of the micro service nodes according to the numerical value of the target operation parameter of each micro service node to obtain the target calling flow;
the target operation parameters comprise a target passing rate and a target operation duration, and the smaller the numerical value of the target passing rate and/or the target operation duration is, the earlier the calling sequence of the micro service nodes is.
5. The method for processing the microservice process according to any one of claims 1 to 3, further comprising:
acquiring a dependency relationship among a plurality of micro service nodes;
the adjusting the calling sequence of each micro service node according to the target operation parameter of each micro service node to obtain a target calling flow comprises:
and adjusting the calling sequence of the micro service nodes according to the dependency relationship and the target operation parameters of the micro service nodes to obtain a target calling flow.
6. The method for processing the micro service process according to any one of claims 1 to 3, wherein the obtaining of the historical operating parameters generated by processing the historical service data by each micro service node in the system application comprises:
when the current moment is determined to meet the process processing conditions, acquiring the historical operating parameters of each micro service node in the system application;
the flow processing conditions include any of the following cases:
the amount of change of the historical operating parameters is greater than a preset amount;
the current moment satisfies a predefined time condition.
7. A device for processing a microservice process, comprising:
the parameter acquisition module is configured to execute acquisition of historical operating parameters of each micro service node in a system application, the system application comprises a plurality of micro service nodes, and the historical operating parameters are generated by calling each micro service node according to an initial calling process to process historical service data;
the parameter generation module is configured to execute the operation according to the historical operation parameters of each micro service node to obtain the target operation parameters of each micro service node at the future time, wherein the future time is a time later than the current time;
and the flow adjusting module is configured to execute adjustment on the calling sequence of the micro service nodes according to the target operation parameters of the micro service nodes to obtain a target calling flow.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the processing method of the microservice procedure of any of claims 1-6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of processing a microservice process of any of claims 1-6.
10. A computer program product comprising instructions which, when executed by a processor of an electronic device, enable the electronic device to carry out the method of processing a microservice procedure according to any one of claims 1 to 6.
CN202111488727.5A 2021-12-07 2021-12-07 Method and device for processing micro service process, electronic equipment and storage medium Pending CN114168234A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111488727.5A CN114168234A (en) 2021-12-07 2021-12-07 Method and device for processing micro service process, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111488727.5A CN114168234A (en) 2021-12-07 2021-12-07 Method and device for processing micro service process, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114168234A true CN114168234A (en) 2022-03-11

Family

ID=80484146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111488727.5A Pending CN114168234A (en) 2021-12-07 2021-12-07 Method and device for processing micro service process, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114168234A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926151A (en) * 2022-06-21 2022-08-19 中关村科学城城市大脑股份有限公司 RPA flow automatic generation method and device based on reinforcement learning


Lu et al. Gaussian process temporal-difference learning with scalability and worst-case performance guarantees
Mu et al. Automating the configuration of MapReduce: A reinforcement learning scheme
Hristov et al. Deriving explicit control policies for Markov decision processes using symbolic regression
KR20230089509A (en) Bidirectional Long Short-Term Memory based web application workload prediction method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination