CN117369962A - Workflow execution sequence generation method, device, computer equipment and storage medium - Google Patents

Workflow execution sequence generation method, device, computer equipment and storage medium

Info

Publication number
CN117369962A
Authority
CN
China
Prior art keywords
execution sequence
workflow
workflow execution
initial
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311149336.XA
Other languages
Chinese (zh)
Inventor
赵伟驰
周方
俞圣亮
唐雪飞
王晓江
方启明
周超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202311149336.XA
Publication of CN117369962A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/092Reinforcement learning


Abstract

The application relates to a workflow execution sequence generation method, a workflow execution sequence generation device, computer equipment and a storage medium. The method comprises the following steps: acquiring a workflow execution sequence data set, wherein the data set comprises training workflow descriptions and corresponding standard workflow execution sequences; training an initial model based on the workflow execution sequence data set to obtain an execution sequence generation model; inputting a target workflow demand description into the execution sequence generation model to obtain an initial workflow execution sequence; and, if the initial workflow execution sequence does not meet a preset requirement, inputting it into an execution sequence optimization model to determine a target workflow execution sequence, the execution sequence optimization model being obtained by training a reinforcement learning model. The method improves the efficiency of generating workflow execution sequences, saves a great deal of manpower and time, and improves resource utilization.

Description

Workflow execution sequence generation method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of supercomputer technologies, and in particular, to a workflow execution sequence generating method, a workflow execution sequence generating device, a computer device, and a storage medium.
Background
With the development of technology, the demand for processing large-scale data and computing tasks keeps growing, as does the demand on computer processing capacity. The supercomputer, with its extremely large data storage capacity and extremely fast data processing speed, provides users with a solution for such large-scale data and computing tasks. In a traditional supercomputing environment, tasks are typically run in batch fashion, which is relatively difficult to manage for complex computing tasks that require multiple steps to work in concert. The workflow, as a higher-level task management mode, solves this management problem: it can represent the execution order of a series of tasks with dependency relationships, simplify task management work, and improve resource utilization. However, how to design and optimize a workflow remains a difficult challenge.
In the prior art, a fixed mode or a predefined template is usually adopted, and a workflow sequence is automatically designed, generated and optimized by setting an objective function and constraint conditions. The generated workflow may deviate from the actual user requirement; the objective function and constraint conditions must be defined manually in advance, which is inefficient; and for large-scale complex workflows the expected effect often cannot be obtained, wasting a large amount of manpower and material resources.
Therefore, there is a need in the related art for a way to accurately capture the user requirements, improve the efficiency of generating the workflow execution sequence, and improve the resource utilization.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a workflow execution sequence generation method, apparatus, computer device, and computer-readable storage medium that can accurately capture user demands, improve workflow execution sequence generation efficiency, and improve resource utilization.
In a first aspect, the present application provides a workflow execution sequence generation method. The method comprises the following steps:
acquiring a workflow execution sequence data set, wherein the workflow execution sequence data set comprises a training workflow description and a corresponding standard workflow execution sequence;
training an initial model based on the workflow execution sequence data set to obtain an execution sequence generation model;
inputting the target workflow demand description into the execution sequence generation model to obtain an initial workflow execution sequence;
if the initial workflow execution sequence does not meet the preset requirement, inputting the initial workflow execution sequence into an execution sequence optimization model, and determining a target workflow execution sequence, wherein the execution sequence optimization model is obtained by training a reinforcement learning model.
Optionally, in an embodiment of the present application, the training the initial model based on the workflow execution sequence dataset includes:
inputting the training workflow description into the initial model to obtain a training workflow execution sequence;
determining a loss function based on the training workflow execution sequence and a standard workflow execution sequence;
model parameters of the initial model are adjusted based on the loss function.
Optionally, in an embodiment of the present application, the inputting the training workflow description into the initial model, obtaining a training workflow execution sequence includes:
extracting keywords of the training workflow description, and determining the training workflow description vector set based on the keywords;
compressing the training workflow description vector set to obtain semantic vectors with specified lengths, sequentially determining the state of each moment based on the semantic vectors and outputting the state to form a training workflow execution sequence.
Optionally, in an embodiment of the present application, the inputting the initial workflow execution sequence into an execution sequence optimization model, determining a target workflow execution sequence includes:
modifying the initial workflow execution sequence through an execution sequence optimization model, and determining a corresponding first reward function value and a new workflow execution sequence;
updating parameters of the reward function based on the first reward function value and the new workflow execution sequence, and determining a corresponding second reward function value;
determining a reward function change value based on the first reward function value and the second reward function value, and determining the new workflow execution sequence as a target workflow execution sequence if the reward function change value is smaller than a preset threshold value or the update iteration number reaches a preset number.
Optionally, in an embodiment of the present application, if the initial workflow execution sequence does not meet a preset requirement, inputting the initial workflow execution sequence into an execution sequence optimization model, and determining the target workflow execution sequence includes:
verifying whether the initial workflow execution sequence is feasible and determining its degree of matching with the target workflow requirement description;
and if the initial workflow execution sequence is not feasible and/or the matching degree with the target workflow demand description does not meet the preset requirement, re-inputting the target workflow demand description into the execution sequence generation model.
Optionally, in an embodiment of the present application, after the verifying whether the initial workflow execution sequence is feasible and determining its degree of matching with the target workflow requirement description, the method further includes:
If the initial workflow execution sequence is feasible and the matching degree with the target workflow demand description meets the preset requirement, displaying the initial workflow execution sequence to a user, and receiving a user instruction fed back by the user based on the initial workflow execution sequence;
and determining the user satisfaction degree of the initial workflow execution sequence based on the user instruction, and determining whether the user satisfaction degree meets the preset requirement.
Optionally, in one embodiment of the present application, the determining the target workflow execution sequence further includes:
and storing the characteristic attribute of the target workflow execution sequence, and executing the target workflow execution sequence.
In a second aspect, the present application further provides a workflow execution sequence generating device. The device comprises:
the workflow execution sequence data set acquisition module is used for acquiring a workflow execution sequence data set, wherein the workflow execution sequence data set comprises training workflow descriptions and corresponding standard workflow execution sequences;
the initial model training module is used for training an initial model based on the workflow execution sequence data set to obtain an execution sequence generation model;
The initial workflow execution sequence determining module is used for inputting the target workflow demand description into the execution sequence generating model to obtain an initial workflow execution sequence;
and the target workflow execution sequence determining module is used for inputting the initial workflow execution sequence into an execution sequence optimizing model to determine a target workflow execution sequence if the initial workflow execution sequence does not meet the preset requirement, and the execution sequence optimizing model is trained by a reinforcement learning model.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor executing the steps of the method according to the various embodiments described above.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method described in the above embodiments.
In the method for generating the workflow execution sequence, a workflow execution sequence data set is first obtained, the data set comprising training workflow descriptions and corresponding standard workflow execution sequences; an initial model is then trained on the data set to obtain an execution sequence generation model; a target workflow demand description is input into the execution sequence generation model to obtain an initial workflow execution sequence; and, if the initial workflow execution sequence does not meet the preset requirement, it is input into an execution sequence optimization model to determine a target workflow execution sequence, the optimization model being obtained by training a reinforcement learning model. That is, to generate a workflow execution sequence the user only provides a simple demand description, and the sequence is automatically generated and optimized by the deep-learning generation model and the reinforcement-learning optimization model, so that no complicated manual design and adjustment is needed. This improves the efficiency of generating workflow execution sequences, saves a large amount of manpower and time, and improves resource utilization.
Drawings
FIG. 1 is an application environment diagram of a workflow execution sequence generation method in one embodiment;
FIG. 2 is a flow diagram of a method of workflow execution sequence generation in one embodiment;
FIG. 3 is a flow diagram of workflow execution sequence generation in one embodiment;
FIG. 4 is a flow diagram of optimizing an initial workflow execution sequence using an execution sequence optimization model in one embodiment;
FIG. 5 is an interactive schematic diagram of a workflow execution sequence generation method applied to a super computing cloud platform in one embodiment;
FIG. 6 is a schematic diagram showing steps of a method for generating a workflow execution sequence in another embodiment;
FIG. 7 is a block diagram of a workflow execution sequence generation apparatus in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The workflow execution sequence generation method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a workflow execution sequence generating method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
s201: a workflow execution sequence data set is obtained, the workflow execution sequence data set comprising a training workflow description and a corresponding standard workflow execution sequence.
In the embodiment of the application, first, a workflow execution sequence data set is acquired. The workflow execution sequence data set is a large-scale labeled training set comprising a large number of training workflow descriptions and corresponding standard workflow execution sequences; all data are generated from historical workflow data and expert knowledge. A training workflow description is a workflow text description comprising a plurality of operations, and the corresponding standard workflow execution sequence is the real workflow execution sequence.
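As an illustrative sketch (not from the patent), such a labeled data set might pair free-text workflow descriptions with standard execution sequences; all entries, field names, and the hold-out helper below are our own assumptions.

```python
# Hypothetical miniature of the labeled workflow data set described above:
# each entry pairs a training workflow description (free text) with the
# standard (ground-truth) execution sequence of component names.
workflow_dataset = [
    {
        "description": "preprocess the input data, then train a model and export the results",
        "standard_sequence": ["preprocess", "train", "export", "<EOS>"],
    },
    {
        "description": "fetch remote data and validate it before archiving",
        "standard_sequence": ["fetch", "validate", "archive", "<EOS>"],
    },
]

def split_dataset(dataset, val_fraction=0.2):
    """Hold out a fraction of the data as a validation set, as the training embodiment suggests."""
    n_val = max(1, int(len(dataset) * val_fraction))
    return dataset[:-n_val], dataset[-n_val:]

train_set, val_set = split_dataset(workflow_dataset)
```

A real data set would of course contain far more entries; the "<EOS>" terminator matches the state-transition flag used later in the encoding/decoding description.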
S203: and training an initial model based on the workflow execution sequence data set to obtain an execution sequence generation model.
In the embodiment of the application, after the workflow execution sequence data set is acquired, an initial model is trained on it to obtain the execution sequence generation model, namely a model capable of generating a workflow execution sequence from a workflow description. Specifically, the initial model may be a deep learning model, such as a bi-directional recurrent neural network (Bidirectional Recurrent Neural Network, BiRNN) model or a sequence-to-sequence (Sequence to Sequence, Seq2Seq) model, which employs a multi-layer neural network and a specific optimization algorithm capable of extracting useful features and information from the workflow description. The training workflow descriptions in the data set are taken as input and the corresponding standard workflow execution sequences as standard output; the difference between the model output and the standard output is compared, and the parameters of the initial model are updated to gradually reduce the difference. When the difference falls below a set threshold, model training is finished and the current parameters of the initial model are retained, yielding the execution sequence generation model.
S205: and inputting the target workflow demand description into the execution sequence generation model to obtain an initial workflow execution sequence.
In the embodiment of the application, after the execution sequence generation model is obtained, the target workflow demand description is input into the execution sequence generation model to obtain the initial workflow execution sequence. Wherein the target workflow requirement description refers to a workflow text description comprising a plurality of operations, and specifically comprises a description of workflow functions, an illustration of action categories, one or more descriptions capable of helping to generate expected workflow characteristics, and the like, and the descriptions are input by a user through a terminal in general. By inputting the trained execution sequence generation model, the output initial workflow execution sequence can be obtained. Through the trained deep learning model, not only can an initial workflow execution sequence which accords with the target workflow requirement provided by a user be quickly generated, but also the execution sequence can be continuously optimized and adjusted with the help of the model so as to better meet the actual requirement of the user.
S207: if the initial workflow execution sequence does not meet the preset requirement, inputting the initial workflow execution sequence into an execution sequence optimization model, and determining a target workflow execution sequence, wherein the execution sequence optimization model is trained by a reinforcement learning model.
In this embodiment of the present application, after the initial workflow execution sequence is obtained, whether it meets a preset requirement is verified. The preset requirement includes the relevant indicators for quality verification of the initial workflow execution sequence, such as feasibility, degree of matching with the original description, and user satisfaction. When the preset requirement is not met, the initial workflow execution sequence is input into the execution sequence optimization model, which optimizes it by adjusting its order and adding, deleting or replacing components. In the optimization process, indicators such as execution efficiency and resource utilization during execution of the sequence are taken as measurement standards, and whether the optimized workflow execution sequence meets the preset requirement is verified at the same time; for example, user satisfaction may be collected and evaluated through an online questionnaire or a user feedback mechanism. When execution efficiency and resource utilization are high and the preset requirement is met, optimization is finished and the current workflow execution sequence is retained as the target workflow execution sequence. Here, execution efficiency is obtained by calculating the time each task needs from start to completion and summing the completion times of all tasks to obtain the total execution time; execution efficiency may be defined as the total number of tasks divided by the total execution time. Resource utilization is the utilization of computing resources (e.g., CPU, memory, etc.) during execution of the tasks, and may be defined as the amount of resources used divided by the total amount of resources.
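The two optimization metrics defined above can be sketched directly from their definitions; the function names and the example figures are ours, not the patent's.

```python
# Sketch of the two optimization metrics described above:
# execution efficiency = total number of tasks / total execution time, and
# resource utilization = amount of resources used / total amount of resources.

def execution_efficiency(task_times):
    """task_times: per-task completion times; the total execution time is their sum."""
    total_time = sum(task_times)
    return len(task_times) / total_time

def resource_utilization(used, total):
    """Fraction of available computing resources (CPU, memory, etc.) actually used."""
    return used / total

# e.g. four tasks finishing in 2.0, 1.0, 3.0 and 2.0 time units:
eff = execution_efficiency([2.0, 1.0, 3.0, 2.0])   # 4 tasks / 8.0 = 0.5
util = resource_utilization(used=6, total=8)        # 6 / 8 = 0.75
```

These scalars are the quantities the reward function of the optimization model would combine when scoring a candidate workflow execution sequence.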
It should be noted that the execution sequence optimization model is obtained by training a reinforcement learning model. During training, model parameters are adjusted so that execution efficiency and resource utilization become higher, and the performance of the execution sequence optimization model is verified on a validation data set through accuracy and loss, where accuracy refers to the proportion of correctly predicted samples to the total number of samples, and loss refers to the difference between the model prediction and the real result. Through continuous experimentation and learning, the reinforcement learning model can not only find a strategy that maximizes execution efficiency and resource utilization, but also adapt to different user demands and scenarios, realizing personalized workflow optimization with optimal accuracy and minimal loss.
In the above workflow execution sequence generation method, a workflow execution sequence data set is first obtained, the data set comprising training workflow descriptions and corresponding standard workflow execution sequences; an initial model is then trained on the data set to obtain an execution sequence generation model; a target workflow demand description is input into the execution sequence generation model to obtain an initial workflow execution sequence; and, if the initial workflow execution sequence does not meet the preset requirement, it is input into an execution sequence optimization model to determine a target workflow execution sequence, the optimization model being obtained by training a reinforcement learning model. That is, to generate a workflow execution sequence the user only provides a simple demand description, and the sequence is automatically generated and optimized by the deep-learning generation model and the reinforcement-learning optimization model, so that no complicated manual design and adjustment is needed. This improves the efficiency of generating workflow execution sequences, saves a large amount of manpower and time, and improves resource utilization.
In one embodiment of the present application, the training the initial model based on the workflow execution sequence dataset includes:
s301: and inputting the training workflow description into the initial model to obtain a training workflow execution sequence.
S303: a penalty function is determined based on the training workflow execution sequence and a standard workflow execution sequence.
S305: model parameters of the initial model are adjusted based on the loss function.
In one embodiment of the present application, the training workflow descriptions in the workflow execution sequence data set are first input into the initial model to obtain the correspondingly generated training workflow execution sequences. A loss function is then determined based on the training workflow execution sequences and the standard workflow execution sequences in the data set; specifically, a cross-entropy loss function can be selected to measure the difference between the execution sequence predicted by the initial model and the real execution sequence. The model parameters of the initial model are then updated by gradient descent to minimize the value of the loss function, iterating multiple times; when the loss function reaches its minimum, the performance of the model is verified, and when the performance reaches a satisfactory level, model training is completed and the execution sequence generation model is obtained. In a specific application, a part of the data is separated from the workflow execution sequence data set as a validation set, which is used to verify model performance during training; if the accuracy of the model on the validation set does not improve by more than 0.5% over 10 consecutive iterations, training is stopped and the best-performing model so far is stored.
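The early-stopping rule just described can be sketched as a small helper; this is our own illustration of the rule (class name, thresholds as defaults, and the toy accuracy curve are assumptions), not the patent's code.

```python
# Sketch of the early-stopping rule described above: stop training when
# validation accuracy has not improved by more than 0.5% (0.005) over the
# last 10 consecutive iterations, keeping the best parameters seen so far.

class EarlyStopper:
    def __init__(self, min_delta=0.005, patience=10):
        self.min_delta = min_delta    # minimum accuracy improvement that counts
        self.patience = patience      # allowed consecutive stale iterations
        self.best_acc = float("-inf")
        self.stale = 0
        self.best_params = None

    def update(self, val_acc, params):
        """Record one iteration's validation accuracy; return True to stop training."""
        if val_acc > self.best_acc + self.min_delta:
            self.best_acc = val_acc
            self.best_params = params
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience

stopper = EarlyStopper()
accuracies = [0.60, 0.70, 0.71] + [0.712] * 10  # plateaus after iteration 2
stopped_at = None
for i, acc in enumerate(accuracies):
    if stopper.update(acc, params={"iteration": i}):
        stopped_at = i
        break
```

In a real training loop `params` would be the model weights (or a checkpoint path) rather than a placeholder dict, and `val_acc` would come from evaluating the held-out validation set.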
In this embodiment, the training workflow description is input into the initial model to obtain the training workflow execution sequence, the loss function is determined based on the training workflow execution sequence and the standard workflow execution sequence, and the model parameters of the initial model are adjusted based on the loss function, so that the trained execution sequence generation model is more accurate, performs better, and does not overfit.
In one embodiment of the present application, the inputting the training workflow description into the initial model, obtaining a training workflow execution sequence includes:
s401: extracting keywords of the training workflow description, and determining the training workflow description vector set based on the keywords.
S403: compressing the training workflow description vector set to obtain semantic vectors with specified lengths, sequentially determining the state of each moment based on the semantic vectors and outputting the state to form a training workflow execution sequence.
In one embodiment of the present application, for a training workflow description input to the initial model, its keywords are first extracted and the training workflow description vector set is determined based on the keywords; that is, each character and its context are converted into vectors, and a set of vectors is output. In particular, a BiRNN model may be employed, which connects two hidden layers of opposite directions to the same output, i.e., one RNN scanning from left to right and one RNN scanning from right to left. Using the characteristics of the BiRNN model, a workflow description containing N characters is converted into high-dimensional (h-dimensional) vectors, where each vector represents a textual feature and varies with the semantic context of its original character. The training workflow description vector set is then compressed to obtain a semantic vector of specified length, and the state at each moment is determined in turn based on the semantic vector and output to form the training workflow execution sequence. Here a Seq2Seq model may be adopted, which consists of two Long Short-Term Memory (LSTM) networks: the vector set is input into the Seq2Seq model, the input vector feature sequence is mapped to a semantic vector of specified length by one LSTM, and the target sequence is then decoded from that vector by the other LSTM. The whole process comprises two steps: encoding (Encoder) and decoding (Decoder).
In a specific application, as shown in fig. 3, a workflow description is input, a vector set { St1 … Stn } is output through the BiRNN model, and an "< EOS >" flag is added at the end of the vector set to represent a state transition. The set is input into the Seq2Seq model; in the encoding phase, the state at the last moment [ cXT, hXT ] represents the semantic vector c, which becomes the initial state of the Decoder. In the decoding stage, the output at each moment is taken as the input of the next moment, until the Decoder predicts the special end symbol "< EOS >" at some moment. Each output value corresponds to one supercomputing cloud platform component, and the components are connected by their respective inputs and outputs.
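The encode/decode flow above can be illustrated with a toy sketch. A real implementation would use BiRNN and LSTM layers; here the "encoder" merely averages the description vectors into a fixed-length semantic vector, and the "decoder" is a hand-written lookup table, so all names, vectors, and components below are illustrative assumptions.

```python
# Toy sketch of the Encoder/Decoder flow described above (not a real BiRNN/LSTM).

def encode(vectors):
    """Compress a variable-length vector set into one fixed-length semantic vector
    (here: the elementwise mean, standing in for the LSTM's final state)."""
    dim = len(vectors[0])
    return tuple(sum(v[i] for v in vectors) / len(vectors) for i in range(dim))

# Hypothetical decoder policy: each output component deterministically yields
# the next one, since the output at each moment feeds the next moment's input.
NEXT_COMPONENT = {
    "<START>": "preprocess",
    "preprocess": "train",
    "train": "export",
    "export": "<EOS>",
}

def decode(semantic_vector, max_steps=10):
    """Emit components one per step until the special "<EOS>" symbol appears."""
    sequence, token = [], "<START>"
    for _ in range(max_steps):
        token = NEXT_COMPONENT[token]
        if token == "<EOS>":
            break
        sequence.append(token)
    return sequence

sem = encode([(1.0, 0.0), (0.0, 1.0)])   # (0.5, 0.5)
workflow = decode(sem)                   # ["preprocess", "train", "export"]
```

In the patent's design the decoded tokens are platform components connected by their inputs and outputs; the lookup table here stands in for the learned LSTM decoding step.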
In this embodiment, keywords of the training workflow description are extracted, a training workflow description vector set is determined based on the keywords, the vector set is compressed into a semantic vector of a specified length, and the state at each moment is determined in turn from the semantic vector and output to form a training workflow execution sequence. This ensures a high-quality representation of the data, makes the input-output relationship of the model easier to establish, and improves the accuracy of the trained model.
In one embodiment of the present application, the inputting the initial workflow execution sequence into an execution sequence optimization model, and determining the target workflow execution sequence includes:
s501: and modifying the initial workflow execution sequence through an execution sequence optimization model, and determining a corresponding first reward function value and a new workflow execution sequence.
S503: and updating parameters of the reward function based on the first reward function value and a new workflow execution sequence, and determining a corresponding second reward function value.
S505: determining a reward function change value based on the first reward function value and the second reward function value, and determining the new workflow execution sequence as a target workflow execution sequence if the reward function change value is smaller than a preset threshold value or the update iteration number reaches a preset number.
In one embodiment of the present application, as shown in fig. 4, take as an example an execution sequence optimization model that is a reinforcement learning model using the Q-Learning algorithm. In the execution sequence optimization model, a reward function Q(s, a) is defined, representing the expected reward obtained by executing action a in state s, where state s represents the current workflow execution sequence and action a represents a modification of the workflow execution sequence, such as adding, deleting, or replacing a component; the expected reward is determined by factors such as execution efficiency and resource utilization. First, an initial workflow execution sequence is obtained, and an action space is defined within which the initial workflow execution sequence can be modified. The reward function is then initialized, to zero or to random values. Next, the initial workflow execution sequence is modified, and a corresponding first reward function value and a new workflow execution sequence are determined. Specifically, an action a is selected using an epsilon-greedy strategy: the action with the largest current Q value is selected with probability 1-epsilon, and a random action is selected with probability epsilon. The choice of the epsilon-greedy strategy as the action selection mechanism rests mainly on three considerations. First, it effectively balances exploration and exploitation: by introducing a small probability epsilon of random action selection, the model retains a degree of freedom to explore during learning. Second, compared with more complex action selection strategies, the epsilon-greedy strategy is simple to implement and requires no complex calculations or parameter adjustments.
Finally, owing to its generality and simplicity, the epsilon-greedy strategy applies to many different types of reinforcement learning environments and problems.
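A minimal sketch of the epsilon-greedy selection just described; the action names and Q values are illustrative assumptions, not fixed by the method.

```python
# Epsilon-greedy action selection: exploit with probability 1-epsilon,
# explore (uniform random action) with probability epsilon.
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    if rng.random() < epsilon:
        return rng.choice(list(q_values))      # explore: random action
    return max(q_values, key=q_values.get)     # exploit: argmax over Q

# Illustrative Q values for three possible modifications of a sequence.
q = {"add_component": 0.7, "delete_component": 0.2, "replace_component": 0.4}
action = epsilon_greedy(q, epsilon=0.0)  # epsilon=0 → pure exploitation
```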
This action is then performed, producing a new workflow execution sequence (i.e., a new state s'). The parameters of the reward function are next updated based on the first reward function value and the new workflow execution sequence, and a corresponding second reward function value is determined; specifically, the function may be updated by the following formula.
Q(s, a) ← Q(s, a) + α[ r + γ · max_a′ Q(s′, a′) − Q(s, a) ]
Where α is the learning rate, γ is the discount factor, and max_a′ Q(s′, a′) is the maximum Q value over all possible actions in the new state s′. In a particular application, the choice of the learning rate α generally depends on the specific task and experimental setup. A larger learning rate makes learning faster but may destabilize the result, while a smaller learning rate makes learning more stable but slower. In practice, the learning rate is usually tuned experimentally to find the most suitable value. In this embodiment of the application, the learning rate α is set to 0.001.
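The update rule above translates directly into code. The state names, reward, and hyperparameter values below are illustrative; a larger α than the embodiment's 0.001 is used only so the effect of a single update is visible.

```python
# Tabular Q-Learning update, transcribing:
#   Q(s,a) ← Q(s,a) + α[ r + γ·max_a' Q(s',a') − Q(s,a) ]
def q_update(Q, s, a, r, s_next, actions, alpha=0.001, gamma=0.9):
    max_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * max_next - old)
    return Q[(s, a)]

actions = ["add", "delete", "replace"]   # illustrative action space
Q = {}                                   # table initialized to zero
# One update with an illustrative reward of 1.0.
new_q = q_update(Q, "seq_v1", "add", r=1.0, s_next="seq_v2",
                 actions=actions, alpha=0.5)
```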
A reward function change value is then determined based on the first reward function value and the second reward function value; if the change value is smaller than a preset threshold, or the number of update iterations reaches a preset count, the new workflow execution sequence is determined as the target workflow execution sequence. If the change value is not smaller than the preset threshold and the number of update iterations has not reached the preset count, the process returns to continue optimization: the workflow execution sequence is modified again, the reward function is updated, and the workflow execution sequence is continuously optimized.
In this embodiment, the initial workflow execution sequence is modified and a corresponding first reward function value and new workflow execution sequence are determined; the parameters of the reward function are updated based on the first reward function value and the new workflow execution sequence, and a corresponding second reward function value is determined; a reward function change value is then derived from the two, and if it is smaller than a preset threshold or the number of update iterations reaches a preset count, the new workflow execution sequence is determined as the target workflow execution sequence. By continuously optimizing the workflow execution sequence in this way, resource utilization can be improved and resource waste avoided.
In one embodiment of the present application, if the initial workflow execution sequence does not meet a preset requirement, inputting the initial workflow execution sequence into an execution sequence optimization model, and determining the target workflow execution sequence includes:
s601: and verifying whether the initial workflow execution sequence is feasible or not and matching degree with the target workflow requirement description.
S603: and if the initial workflow execution sequence is not feasible and/or the matching degree with the target workflow demand description does not meet the preset requirement, re-inputting the target workflow demand description into the execution sequence generation model.
In one embodiment of the present application, after the initial workflow execution sequence is generated and before it is optimized, a preliminary verification is required, covering both its feasibility and its matching with the target workflow requirement description. Feasibility verification mainly checks whether the generated workflow execution sequence is technically executable, for example, whether all components can run on the supercomputing cloud platform and whether all inputs and outputs are correctly connected together. Verifying the match with the target workflow requirement description means comparing the generated workflow execution sequence with the user-provided description to see whether they are consistent in function and category. If verification shows that the initial workflow execution sequence is not feasible, or that its degree of matching with the target workflow requirement description does not meet the preset requirement, or both, the target workflow requirement description is input into the execution sequence generation model again and a new execution sequence is regenerated.
In this embodiment, by verifying the feasibility of the initial workflow execution sequence and its degree of matching with the target workflow requirement description, and by discarding and regenerating any initial workflow execution sequence that fails verification, the generation quality of the workflow execution sequence can be ensured.
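The feasibility check described above (components available on the platform, inputs and outputs correctly connected) can be sketched as follows; the component catalogue and field names are assumptions for illustration only, not a schema defined by the application.

```python
# Hedged sketch: a sequence is feasible only if every component exists
# on the (illustrative) platform and every input is produced upstream.
AVAILABLE = {"preprocess", "solve", "visualize"}  # assumed catalogue

def is_feasible(sequence):
    produced = set()
    for step in sequence:
        if step["component"] not in AVAILABLE:
            return False                               # unknown component
        if not set(step["inputs"]) <= produced | {"user_input"}:
            return False                               # dangling input
        produced.update(step["outputs"])
    return True

seq = [
    {"component": "preprocess", "inputs": ["user_input"], "outputs": ["mesh"]},
    {"component": "solve", "inputs": ["mesh"], "outputs": ["field"]},
    {"component": "visualize", "inputs": ["field"], "outputs": ["image"]},
]
```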
In one embodiment of the present application, after verifying whether the initial workflow execution sequence is feasible and its degree of matching with the workflow requirement description, the method further includes:

S701: If the initial workflow execution sequence is feasible and its degree of matching with the target workflow requirement description meets the preset requirement, displaying the initial workflow execution sequence to a user, and receiving a user instruction fed back by the user based on the initial workflow execution sequence.

S703: Determining the user satisfaction with the initial workflow execution sequence based on the user instruction, and determining whether the user satisfaction meets the preset requirement.
In one embodiment of the present application, after the preliminary verification of the initial workflow execution sequence is completed, its quality requires further verification, mainly by collecting and evaluating user feedback. First, if the initial workflow execution sequence is feasible and its degree of matching with the target workflow requirement description meets the preset requirement, the sequence is displayed to the user, and the user instructions fed back on its basis are received. The user satisfaction with the initial workflow execution sequence is then determined from the received user instructions, and whether that satisfaction meets the preset requirement determines whether the initial workflow execution sequence needs to be optimized.
In this embodiment, by further evaluating the quality of the initial workflow execution sequence in combination with user feedback, the workflow execution sequence can be generated more flexibly, and the user requirements and the performance requirements of the system can be better satisfied.
In one embodiment of the present application, the determining the target workflow execution sequence further includes:
and storing the characteristic attribute of the target workflow execution sequence, and executing the target workflow execution sequence.
In one embodiment of the application, after the target workflow execution sequence is determined, its characteristic attributes are stored in a database of the supercomputing cloud platform; at the same time, a request is sent to the dispatch center according to the target workflow execution sequence, and the dispatch center schedules the computing resources in the cluster and executes the target workflow execution sequence. The characteristic attributes include input/output (I/O) attributes and semantic rules: the I/O attributes refer to the inputs and outputs of each component in the workflow execution sequence, including information such as data type and data size, while the semantic rules refer to the logical relationships and execution order of the components in the workflow execution sequence.
In this embodiment, by storing the characteristic attributes of the target workflow execution sequence and executing it, the threshold for using supercomputing resources is lowered, so that more users can perform efficient computation and analysis on the supercomputing cloud platform.
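One possible shape for the stored characteristic attributes — I/O attributes plus semantic rules — is sketched below; the class and field names are assumptions for illustration, not a schema defined by the application.

```python
# Illustrative record for the stored characteristic attributes:
# per-component I/O attributes plus semantic rules (ordered edges).
from dataclasses import dataclass, field

@dataclass
class ComponentIO:
    name: str
    input_types: dict    # e.g. {"mesh": "binary"} — assumed example
    output_types: dict   # e.g. {"field": "hdf5"}  — assumed example

@dataclass
class WorkflowRecord:
    components: list                                     # list of ComponentIO
    semantic_rules: list = field(default_factory=list)   # (src, dst) edges

record = WorkflowRecord(
    components=[ComponentIO("solve", {"mesh": "binary"}, {"field": "hdf5"})],
    semantic_rules=[("preprocess", "solve"), ("solve", "visualize")],
)
```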
The following describes, in a specific embodiment, the interaction of the workflow execution sequence generation method of the present application. As shown in fig. 5, includes an execution sequence generation model, an execution sequence optimization model, a dispatch center, and a storage system. The module where the execution sequence generation model is located is respectively connected with the dispatching center and the storage system. First, S801, a workflow execution sequence data set is acquired, the workflow execution sequence data set including a training workflow description and a corresponding standard workflow execution sequence. Then, S803, training an initial model based on the workflow execution sequence data set, and obtaining an execution sequence generation model. Specifically, S805-S811, extracting a keyword of the training workflow description, determining the training workflow description vector set based on the keyword, compressing the training workflow description vector set to obtain a semantic vector with a specified length, sequentially determining a state at each moment based on the semantic vector and outputting the state to form a training workflow execution sequence, determining a loss function based on the training workflow execution sequence and a standard workflow execution sequence, and adjusting model parameters of the initial model based on the loss function.
After that, S813, the user inputs the target workflow demand description through the terminal. And S815, inputting the target workflow demand description into the execution sequence generation model to obtain an initial workflow execution sequence. And then, S817, verifying whether the initial workflow execution sequence is feasible or not and matching degree with the target workflow demand description, and if the initial workflow execution sequence is not feasible and/or the matching degree with the target workflow demand description does not meet the preset requirement, re-inputting the target workflow demand description into the execution sequence generation model. And then, if the initial workflow execution sequence is feasible and the matching degree with the target workflow requirement description meets the preset requirement, displaying the initial workflow execution sequence to a user, receiving a user instruction fed back by the user based on the initial workflow execution sequence, determining the user satisfaction degree of the initial workflow execution sequence based on the user instruction, and determining whether the user satisfaction degree meets the preset requirement.
And S821, if the initial workflow execution sequence does not meet the preset requirement, the initial workflow execution sequence is input into the execution sequence optimization model and a target workflow execution sequence is determined, the execution sequence optimization model being trained by a reinforcement learning model. Specifically, S823-S827: the initial workflow execution sequence is modified through the execution sequence optimization model and a corresponding first reward function value and new workflow execution sequence are determined; the parameters of the reward function are updated based on the first reward function value and the new workflow execution sequence, and a corresponding second reward function value is determined; a reward function change value is determined based on the first and second reward function values, and if the change value is smaller than a preset threshold or the number of update iterations reaches a preset count, the new workflow execution sequence is determined as the target workflow execution sequence.
Finally, S829, the characteristic attribute of the target workflow execution sequence is stored, and the target workflow execution sequence is executed. The corresponding characteristic attribute is stored in a database of the super computing cloud platform, meanwhile, a request is sent to a dispatching center according to the target workflow execution sequence, and the dispatching center is responsible for dispatching computing resources in the cluster and executing the target workflow execution sequence.
In one embodiment of the present application, as shown in fig. 6, a front-end interface is first displayed and user input is detected: the front-end interface for workflow execution sequence generation is presented to the user, and the user inputs a target workflow requirement description. The description is input into the trained execution sequence generation model, which comprises two deep learning models, a BiRNN model and a Seq2Seq model. After the target workflow requirement description is input, the BiRNN model extracts high-dimensional (h-dimensional) vectors; the Seq2Seq model then infers the latent relationship features of the components, that is, one LSTM maps the input vector feature sequence to a semantic vector of a specified length and another LSTM decodes the target sequence from that vector. Finally, a DAG workflow execution sequence, i.e., an initial workflow execution sequence represented by a directed acyclic graph, is output. After preliminary verification, if it is judged that the initial workflow execution sequence needs to be optimized, it is input into the execution sequence optimization model, optimized with a reinforcement learning algorithm, and refined through repeated feedback and continuous learning until a target workflow execution sequence meeting the requirements is obtained. The Long Short-Term Memory (LSTM) network is chosen over other types of neural networks mainly because LSTM excels at processing time-dependent sequence data, which is critical for understanding and generating complex workflow execution sequences.
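Since the output is described as a directed acyclic graph, its validity can be checked with a standard topological sort (Kahn's algorithm); the component names below are illustrative.

```python
# Topological ordering of a DAG workflow; returns None if a cycle exists,
# i.e. if the sequence is not a valid DAG.
from collections import defaultdict, deque

def topological_order(edges):
    indeg, adj, nodes = defaultdict(int), defaultdict(list), set()
    for src, dst in edges:
        adj[src].append(dst)
        indeg[dst] += 1
        nodes |= {src, dst}
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order if len(order) == len(nodes) else None  # None → cycle

order = topological_order([("preprocess", "solve"), ("solve", "visualize")])
```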
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the steps are not strictly limited to a particular order of execution and may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a workflow execution sequence generation device for realizing the above related workflow execution sequence generation method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiments of the workflow execution sequence generating apparatus provided below may refer to the limitation of the workflow execution sequence generating method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 7, there is provided a workflow execution sequence generation apparatus 700, including: a workflow execution sequence dataset acquisition module 701, an initial model training module 703, an initial workflow execution sequence determination module 705, and a target workflow execution sequence determination module 707, wherein:
the workflow execution sequence data set obtaining module 701 is configured to obtain a workflow execution sequence data set, where the workflow execution sequence data set includes a training workflow description and a corresponding standard workflow execution sequence.
An initial model training module 703, configured to train an initial model based on the workflow execution sequence data set, and obtain an execution sequence generation model.
The initial workflow execution sequence determining module 705 is configured to input the target workflow requirement description into the execution sequence generating model, and obtain an initial workflow execution sequence.
And the target workflow execution sequence determining module 707 is configured to input the initial workflow execution sequence into an execution sequence optimization model to determine a target workflow execution sequence if the initial workflow execution sequence does not meet a preset requirement, where the execution sequence optimization model is trained by a reinforcement learning model.
In one embodiment of the present application, the initial model training module is further configured to:
inputting the training workflow description into the initial model to obtain a training workflow execution sequence;
determining a loss function based on the training workflow execution sequence and a standard workflow execution sequence;
model parameters of the initial model are adjusted based on the loss function.
In one embodiment of the present application, the initial model training module is further configured to:
extracting keywords of the training workflow description, and determining the training workflow description vector set based on the keywords;
compressing the training workflow description vector set to obtain semantic vectors with specified lengths, sequentially determining the state of each moment based on the semantic vectors and outputting the state to form a training workflow execution sequence.
In one embodiment of the present application, the target workflow execution sequence determination module is further configured to:
modifying the initial workflow execution sequence through an execution sequence optimization model, and determining a corresponding first reward function value and a new workflow execution sequence;

updating parameters of the reward function based on the first reward function value and the new workflow execution sequence, and determining a corresponding second reward function value;

determining a reward function change value based on the first reward function value and the second reward function value, and determining the new workflow execution sequence as a target workflow execution sequence if the reward function change value is smaller than a preset threshold value or the update iteration number reaches a preset number.
The workflow execution sequence generating device further comprises an initial workflow execution sequence verification module.
In one embodiment of the present application, the initial workflow execution sequence verification module is configured to:
verifying whether the initial workflow execution sequence is feasible or not and matching degree with the target workflow requirement description;
and if the initial workflow execution sequence is not feasible and/or the matching degree with the target workflow demand description does not meet the preset requirement, re-inputting the target workflow demand description into the execution sequence generation model.
In one embodiment of the present application, the initial workflow execution sequence verification module is further configured to:
if the initial workflow execution sequence is feasible and the matching degree with the target workflow demand description meets the preset requirement, displaying the initial workflow execution sequence to a user, and receiving a user instruction fed back by the user based on the initial workflow execution sequence;
And determining the user satisfaction degree of the initial workflow execution sequence based on the user instruction, and determining whether the user satisfaction degree meets the preset requirement.
The workflow execution sequence generating device further comprises an execution module.
In one embodiment of the present application, the execution module is configured to:
and storing the characteristic attribute of the target workflow execution sequence, and executing the target workflow execution sequence.
The respective modules in the above-described workflow execution sequence generation apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 8. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a workflow execution sequence generation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method for generating a workflow execution sequence, the method comprising:
acquiring a workflow execution sequence data set, wherein the workflow execution sequence data set comprises a training workflow description and a corresponding standard workflow execution sequence;
training an initial model based on the workflow execution sequence data set to obtain an execution sequence generation model;
inputting the target workflow demand description into the execution sequence generation model to obtain an initial workflow execution sequence;
If the initial workflow execution sequence does not meet the preset requirement, inputting the initial workflow execution sequence into an execution sequence optimization model, and determining a target workflow execution sequence, wherein the execution sequence optimization model is trained by a reinforcement learning model.
2. The method of claim 1, wherein the training an initial model based on the workflow execution sequence dataset comprises:
inputting the training workflow description into the initial model to obtain a training workflow execution sequence;
determining a loss function based on the training workflow execution sequence and a standard workflow execution sequence;
model parameters of the initial model are adjusted based on the loss function.
3. The method of claim 2, wherein inputting the training workflow description into the initial model to obtain a training workflow execution sequence comprises:
extracting keywords from the training workflow description, and determining a training workflow description vector set based on the keywords;
compressing the training workflow description vector set to obtain a semantic vector of a specified length, sequentially determining a state at each moment based on the semantic vector, and outputting the states to form the training workflow execution sequence.
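Claim 3 outlines an encoder-decoder pattern: keyword vectors are compressed into a fixed-length semantic vector, from which per-moment states are decoded in sequence. A hedged toy sketch, assuming mean pooling as the compression and a trivial state transition (the patent does not fix either choice):

```python
# Illustrative encoder-decoder shape for claim 3. The embedding table,
# mean-pooling compression, and halving state transition are all assumptions.
def encode(keywords, embed, dim=3):
    vectors = [embed[k] for k in keywords]         # training workflow description vector set
    # compress to a semantic vector of the specified length (here: elementwise mean)
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def decode(semantic, steps):
    state, out = list(semantic), []
    for _ in range(steps):                         # determine the state at each moment
        state = [x * 0.5 for x in state]           # toy state transition
        out.append(round(sum(state), 4))           # outputs form the execution sequence
    return out

embed = {"load": [1, 0, 0], "train": [0, 1, 0], "deploy": [0, 0, 1]}
sem = encode(["load", "train", "deploy"], embed)
print(sem, decode(sem, 3))
```

In a real system the encoder would typically be a recurrent or transformer network and the decoder would emit workflow steps rather than scalars, but the fixed-length bottleneck between the two stages is the same.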
4. The method of claim 1, wherein the inputting the initial workflow execution sequence into an execution sequence optimization model and determining a target workflow execution sequence comprises:
modifying the initial workflow execution sequence through the execution sequence optimization model, and determining a corresponding first reward function value and a new workflow execution sequence;
updating parameters of the reward function based on the first reward function value and the new workflow execution sequence, and determining a corresponding second reward function value;
determining a reward function change value based on the first reward function value and the second reward function value, and determining the new workflow execution sequence as the target workflow execution sequence if the reward function change value is smaller than a preset threshold or the number of update iterations reaches a preset number.
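Claim 4's stopping rule (iterate until the reward change falls below a threshold, or a preset iteration cap is hit) can be sketched as follows. The reward function and modification step here are toy placeholders, not the patent's reinforcement-learning model:

```python
# Illustrative loop for claim 4: modify the sequence, compare consecutive
# reward values, and stop on convergence or on the iteration cap.
def optimize(sequence, reward_fn, modify, threshold=1e-3, max_iters=100):
    prev_reward = reward_fn(sequence)              # first reward function value
    for i in range(max_iters):
        sequence = modify(sequence)                # new workflow execution sequence
        reward = reward_fn(sequence)               # second reward function value
        if abs(reward - prev_reward) < threshold:  # reward change below preset threshold
            return sequence, i + 1
        prev_reward = reward
    return sequence, max_iters                     # iteration count reached preset number

# Toy reward: fewer out-of-order pairs is better; toy modifier: one bubble pass.
def bubble_pass(seq):
    s = list(seq)
    for j in range(len(s) - 1):
        if s[j] > s[j + 1]:
            s[j], s[j + 1] = s[j + 1], s[j]
    return s

reward = lambda s: -sum(a > b for i, a in enumerate(s) for b in s[i + 1:])
result, iters = optimize([3, 1, 2], reward, bubble_pass)
print(result, iters)
```

The dual stopping condition matters in practice: the threshold catches convergence, while the iteration cap bounds the cost when the reward oscillates and never settles.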
5. The method according to claim 1, wherein if the initial workflow execution sequence does not meet the preset requirement, the inputting the initial workflow execution sequence into an execution sequence optimization model and determining the target workflow execution sequence comprises:
verifying whether the initial workflow execution sequence is feasible and determining a degree of matching with the target workflow requirement description;
and if the initial workflow execution sequence is not feasible and/or the degree of matching with the target workflow requirement description does not meet the preset requirement, re-inputting the target workflow requirement description into the execution sequence generation model.
6. The method of claim 5, wherein after the verifying whether the initial workflow execution sequence is feasible and determining the degree of matching with the target workflow requirement description, the method further comprises:
if the initial workflow execution sequence is feasible and the degree of matching with the target workflow requirement description meets the preset requirement, displaying the initial workflow execution sequence to a user, and receiving a user instruction fed back by the user based on the initial workflow execution sequence;
and determining a user satisfaction degree with the initial workflow execution sequence based on the user instruction, and determining whether the user satisfaction degree meets the preset requirement.
7. The method of claim 1, wherein after the determining the target workflow execution sequence, the method further comprises:
storing characteristic attributes of the target workflow execution sequence, and executing the target workflow execution sequence.
8. A workflow execution sequence generation apparatus, the apparatus comprising:
the workflow execution sequence data set acquisition module is used for acquiring a workflow execution sequence data set, wherein the workflow execution sequence data set comprises training workflow descriptions and corresponding standard workflow execution sequences;
the initial model training module is used for training an initial model based on the workflow execution sequence data set to obtain an execution sequence generation model;
the initial workflow execution sequence determining module is used for inputting a target workflow requirement description into the execution sequence generation model to obtain an initial workflow execution sequence;
and the target workflow execution sequence determining module is used for inputting the initial workflow execution sequence into an execution sequence optimization model to determine a target workflow execution sequence if the initial workflow execution sequence does not meet a preset requirement, wherein the execution sequence optimization model is obtained by training a reinforcement learning model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202311149336.XA 2023-09-07 2023-09-07 Workflow execution sequence generation method, device, computer equipment and storage medium Pending CN117369962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311149336.XA CN117369962A (en) 2023-09-07 2023-09-07 Workflow execution sequence generation method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117369962A true CN117369962A (en) 2024-01-09

Family

ID=89406718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311149336.XA Pending CN117369962A (en) 2023-09-07 2023-09-07 Workflow execution sequence generation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117369962A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination