CN115081609A - Acceleration method in intelligent decision, terminal equipment and storage medium - Google Patents

Acceleration method in intelligent decision, terminal equipment and storage medium

Info

Publication number
CN115081609A
CN115081609A (application CN202210766315.1A)
Authority
CN
China
Prior art keywords
parameter
training
state
parameters
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210766315.1A
Other languages
Chinese (zh)
Inventor
吴保元
李隆康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese University of Hong Kong Shenzhen
Shenzhen Research Institute of Big Data SRIBD
Original Assignee
Chinese University of Hong Kong Shenzhen
Shenzhen Research Institute of Big Data SRIBD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese University of Hong Kong Shenzhen, Shenzhen Research Institute of Big Data SRIBD filed Critical Chinese University of Hong Kong Shenzhen
Priority to CN202210766315.1A priority Critical patent/CN115081609A/en
Publication of CN115081609A publication Critical patent/CN115081609A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation using electronic means
    • G06N 20/00 Machine learning
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Neurology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an acceleration method in intelligent decision making, a terminal device and a storage medium. The method comprises: step 1, optimizing a plurality of parameters of a target object and acquiring the historical iteration values of each parameter in the current optimization time interval; step 2, sequentially inputting the historical iteration values of each parameter into a trained policy network model to obtain the posterior probability that each parameter converges to each discrete candidate state; and step 3, judging whether the posterior probability corresponding to each parameter is greater than a preset early-fixing threshold; if so, fixing the state of that parameter, acquiring the historical iteration values of the remaining unfixed parameters in the next optimization time interval, and repeating steps 2 and 3 until a preset termination condition is reached; otherwise, acquiring the historical iteration values of every parameter in the next optimization time interval and repeating steps 2 and 3 until the preset termination condition is reached. The method improves optimization speed while preserving decision accuracy.

Description

Acceleration method in intelligent decision, terminal equipment and storage medium
Technical Field
The invention discloses an acceleration method in intelligent decision making, a terminal device and a computer-readable storage medium, and belongs to the field of intelligent decision making.
Background
Intelligent decision making is the process of modeling and analyzing relevant data with various intelligent technologies and tools to reach a decision for a given objective. It addresses the increasingly complex production and everyday problems of the new era, and such decision problems can typically be formalized as integer programming problems.
Integer programming is a general modeling tool that is widely used for discrete and combinatorial optimization problems. In general, integer programming solution algorithms fall into two categories: exact methods and approximate methods. Exact methods can find an optimal solution, but they repeatedly solve relaxed linear problems and are time-consuming.
Among the approximate methods, the Lp-Box Alternating Direction Method of Multipliers (hereinafter Lp-Box ADMM) has been attracting growing attention. Lp-Box ADMM equivalently replaces the binary constraint with the intersection of a box constraint and an Lp-norm sphere constraint. In practice, however, the Lp-Box ADMM method converges at a sublinear rate: most variables fluctuate around their final convergence state over long runs of iterations. To further speed up this approximation algorithm, possible prior-art remedies are "early stopping" (Early Stopping) or "early exit" (Early Exit).
However, such "early stopping" or "early exit" involves a trade-off between efficiency and accuracy: stopping earlier saves more running time but degrades accuracy further.
Disclosure of Invention
The present application aims to provide an acceleration method in intelligent decision making, a terminal device and a computer-readable storage medium, so as to solve the technical problem that accuracy degrades when an early-stopping or early-exit acceleration approximation method is used in a decision process.
The first aspect of the present invention provides an acceleration method in intelligent decision making, including:
step 1, optimizing a plurality of parameters of a target object, and acquiring a historical iteration value of each parameter in a current optimization time interval;
step 2, sequentially inputting the historical iteration value of each parameter into a trained strategy network model to obtain the posterior probability of each parameter converging to each discrete candidate state;
and step 3, judging whether the posterior probability corresponding to each parameter is greater than a preset early-fixing threshold; if so, fixing the state of that parameter, acquiring the historical iteration values of the parameters other than those with fixed states in the next optimization time interval, and repeating steps 2 to 3 until a preset termination condition is reached; if not, acquiring the historical iteration values of each parameter in the next optimization time interval and repeating steps 2 to 3 until the preset termination condition is reached.
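As an illustration, steps 1 to 3 can be sketched as the following loop; `admm_step` and `policy_posterior` are hypothetical stand-ins for the approximation method's update rule and the trained policy network, not part of the disclosure:

```python
import numpy as np

def early_fixing_loop(params, admm_step, policy_posterior,
                      interval=10, threshold=0.9, max_iters=1000):
    """Iterate an approximation method, fixing each parameter whose
    posterior probability of some candidate state exceeds the threshold."""
    free = set(range(len(params)))          # indices still being optimized
    fixed = {}                              # index -> fixed discrete state
    history = {i: [] for i in free}         # per-parameter iterate history
    it = 0
    while free and it < max_iters:
        for _ in range(interval):           # one optimization time interval
            params = admm_step(params, free)
            for i in free:
                history[i].append(params[i])
            it += 1
        for i in list(free):
            # posteriors over the discrete candidate states {0, 1}
            probs = policy_posterior(history[i][-interval:])
            state = int(np.argmax(probs))
            if probs[state] > threshold:    # "early fix" this parameter
                fixed[i] = state
                params[i] = float(state)
                free.discard(i)
    return params, fixed
```

Parameters that are fixed drop out of `free`, so each later interval iterates over a smaller problem, which is the source of the speed-up.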
Preferably, fixing the state of the parameter specifically comprises:
fixing the state of the parameter to the discrete candidate state corresponding to the posterior probability that is greater than the preset early-fixing threshold.
Preferably, the sequentially inputting the historical iteration value of each parameter into the trained policy network model to obtain the posterior probability that each parameter converges to each discrete candidate state specifically includes:
sequentially inputting the historical iteration value of each parameter into a trained strategy network model;
reconstructing nodes based on the historical iteration values;
based on the reconstructed data of the nodes, an a posteriori probability that each of the parameters converges to each of the discrete candidate states is determined using an attention mechanism.
Preferably, the determining, based on the data after node reconstruction, the posterior probability of convergence of each parameter to each discrete candidate state by using an attention mechanism specifically includes:
determining output data of each node by using an attention mechanism based on the data after the node reconstruction;
and performing dimension reduction on the output data of each node to obtain the posterior probability of each parameter converging to each discrete candidate state.
Preferably, the policy network model comprises an attention layer and an output layer;
the training process of the policy network model includes:
acquiring a training data set, wherein the training data set comprises real actions of parameters and real states corresponding to the real actions; the real action refers to whether the state of the parameter is to be determined early;
inputting the real action to the attention layer to obtain a predicted state of the parameter;
and training the strategy network model according to the predicted state and the corresponding real state and by combining a loss function to obtain a trained strategy network model.
Preferably, the loss function is a weighted binary cross-entropy loss function.
Preferably, the loss function is determined according to a first formula, the first formula being:
$$J(\theta) = -\sum_{e=1}^{N}\sum_{r=1}^{\gamma}\sum_{i=1}^{n}\omega_{e,r,i}\,q_{e,r,i}$$
where J(θ) is the loss function, θ denotes the parameters of the policy network, ω<sub>e,r,i</sub> is the weight of the i-th parameter of the r-th training block of the e-th training instance in the training data set, q<sub>e,r,i</sub> is the binary cross-entropy of the i-th parameter of the r-th training block of the e-th training instance, N is the total number of training instances, γ is the total number of training blocks in each training instance, and n is the total number of parameters in each training block.
Preferably, the binary cross entropy is determined according to a second formula, which is:
$$q_{e,r,i} = a^{*}\log\pi_{\theta}(a\mid s) + (1-a^{*})\log\bigl(1-\pi_{\theta}(a\mid s)\bigr)$$
where π<sub>θ</sub>(a|s) is the probability that the policy network π<sub>θ</sub> with parameters θ assigns to action a in state s, and a* is the ground-truth action corresponding to action a.
A second aspect of the invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method when executing the computer program.
A third aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the above-mentioned method.
Compared with the prior art, the acceleration method, the terminal device and the storage medium in the intelligent decision have the following beneficial effects:
the invention provides an acceleration method for solving integer programming based on early determination (EarlyFixing) in intelligent decision, wherein parameters are independent, and for a single parameter, the invention predicts and judges whether the integer programming is early or not according to a series of iteration values of the parameter in the past. Compared with the prior art of 'early quitting' or 'early stopping', the invention provides the concept of 'early determination', not only pays attention to the depth of iteration, but also pays attention to the dimensionality of parameters in the iteration process, and achieves the effect of ensuring the accuracy while improving the optimization speed of the approximation method. The method has great significance for improving the efficiency of solving problems by integer programming in intelligent decision making, and especially has great effect in the fields of computer vision or machine learning and the like. Particularly, if the method is applied to the field of image segmentation, the image segmentation speed can be improved, and the accuracy of image segmentation is ensured.
The invention casts the process of solving an integer programming problem as a Markov decision process. The parameters are independent, and for each individual parameter the method predicts, from a series of its past iteration values, whether it should be fixed early. An attention mechanism (Attention Mechanism) is used in the policy network to decide whether to fix a parameter early: because attention does not depend on sequence order but mines information by computing pairwise similarities, no information is lost, and because attention can be computed in parallel, computational efficiency is greatly improved, accelerating the approximation method while preserving accuracy. When training the policy network model, behavior cloning (Behavior Cloning) is used as the imitation-learning method, which speeds up training. A weighted binary cross-entropy loss (Weighted Binary Cross-Entropy Loss) is used during training to further ensure the precision of the trained policy network model's outputs.
Drawings
FIG. 1 is a flow chart of an acceleration method in intelligent decision making according to an embodiment of the present invention;
FIG. 2 is a framework diagram of a policy network model provided by the present invention;
FIG. 3 is a graph of the advantage of early fixing over early stopping according to the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
At present, Chinese industry is at an important stage of development from digitalization to intelligentization. Every scenario of intelligent development faces a huge volume of decision demands, often on the order of millions or tens of millions, which must be completed in real time; the original manual decision-making mode cannot keep up, so intelligent decision making is becoming ever more important in today's society. For example, in the automobile industry, AI decision models predict sales conversion rates and prioritize follow-ups, improving the input-output ratio. In the tourism industry, intelligent algorithms analyze traveler data to extract the characteristics of customers intending to buy air tickets, yielding personalized ticket recommendation models and demand forecasts that improve marketing efficiency. In the financial industry, banks use AI decision technology and intelligent algorithms to achieve more accurate and comprehensive customer service, marketing management and risk management. In e-commerce, decision intelligence fused with machine learning and deep learning algorithms deeply analyzes users, items and scenarios for personalized product recommendation, improving users' purchase efficiency and customer experience.
The invention provides an acceleration method in intelligent decision making, as shown in fig. 1, comprising:
step 1, optimizing a plurality of parameters of a target object, and acquiring a historical iteration value of each parameter in a current optimization time interval.
In the embodiment of the invention, the parameters of the target object are optimized using an existing approximation method for integer programming. The approximation method may be the Alternating Direction Method of Multipliers (ADMM) or one of its improved variants, such as Bethe-ADMM, Bregman ADMM or Lp-Box ADMM, which is not limited here.
In the embodiment of the present invention, the optimization time intervals are a sequence of consecutive intervals; the intervals may have equal or different lengths and may be set as needed, which is not limited here. For example, the current optimization time interval may consist of iterations 1-100 and the next optimization time interval of iterations 101-300.
In the embodiment of the invention, the historical iteration values comprise every iteration value produced by each parameter during the optimization time interval.
And 2, sequentially inputting the historical iteration value of each parameter into the trained strategy network model to obtain the posterior probability of each parameter converging to each discrete candidate state.
The framework of the policy network model, shown in fig. 2, comprises at least an input layer, a node reconstruction layer, an attention layer, an MLP layer, and an output layer. The model determines whether a single parameter should be fixed early during the iteration. Unlike prior-art early stopping, which treats the collection of parameters as a whole, the early-fixing framework of the invention makes decisions and analyses for each parameter independently: it takes a parameter's iteration values as input and outputs probabilities. Specifically, given the iteration values of a parameter over the past multi-step iterations, the policy network evaluates the posterior probability that the parameter converges to each discrete candidate state (i.e., eventually to 0 or to 1).
The process of obtaining, with the policy network model, the posterior probability that each parameter converges to each discrete candidate state is as follows:
sequentially input the historical iteration values of each parameter into the trained policy network model through the input layer;
the node reconstruction layer performs node reconstruction based on the historical iteration values;
based on the reconstructed node data, the attention layer determines the posterior probability that each parameter converges to each discrete candidate state using an attention mechanism, specifically:
based on the reconstructed node data, the attention layer determines the output data of each node using the attention mechanism;
the MLP layer then reduces the dimensionality of each node's output data to obtain the posterior probability that each parameter converges to each discrete candidate state.
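The forward pass just described can be sketched as follows. This is a minimal illustration only: the layer sizes (`n_nodes`, `d_model`) are hypothetical, and the random weights stand in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def posterior_from_history(history, n_nodes=5, d_model=8):
    """Toy forward pass: reconstruct nodes from the iterate history,
    run one self-attention layer, reduce with an MLP, and softmax to
    a posterior over the two candidate states {0, 1}.
    The history length must be divisible by n_nodes."""
    h = np.asarray(history, dtype=float).reshape(n_nodes, -1)   # node reconstruction
    W_in = rng.standard_normal((h.shape[1], d_model))
    x = h @ W_in                                    # (n_nodes, d_model) node features
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model))      # scaled dot-product attention
    out = attn @ v                                  # per-node output data
    W_mlp = rng.standard_normal((d_model, 2))
    logits = (out @ W_mlp).mean(axis=0)             # dimension reduction over nodes
    return softmax(logits)                          # posterior over {0, 1}
```

Because the attention weights are computed from pairwise similarities between all nodes at once, this computation parallelizes naturally, which is the efficiency property the description relies on.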
Step 3: judge whether the posterior probability corresponding to each parameter is greater than the preset early-fixing threshold; if so, fix the state of that parameter, acquire the historical iteration values of the parameters other than those with fixed states in the next optimization time interval, and repeat steps 2 to 3 until the preset termination condition is reached; otherwise, acquire the historical iteration values of each parameter in the next optimization time interval and repeat steps 2 to 3 until the preset termination condition is reached.
The state of the fixed parameter is specifically as follows: and fixing the state of the parameter as a discrete candidate state corresponding to the posterior probability greater than a preset early-fixed threshold.
The predetermined termination condition is that the stopping criterion of the approximation method reaches convergence or that all parameters are fixed.
For example, with a preset early-fixing threshold of 0.8, if the policy network model gives a posterior probability of 0.9 that a parameter converges to 0 and of 0.1 that it converges to 1, the state of that parameter is fixed to 0.
The pseudo code of the policy network's working procedure is shown in Table 1. It takes the historical iteration values of a parameter in the current optimization time interval as input and outputs posterior probabilities. Given the iteration values of the variables over the past multi-step iterations, the policy network model evaluates the posterior probability that each parameter converges to each discrete candidate state (i.e., eventually to 0 or 1). If the posterior probability of some state exceeds the threshold, i.e., the early-fixing threshold, the action of fixing the parameter to that discrete state is performed and the parameter no longer participates in subsequent iterations; otherwise no fixing operation is performed, the iteration continues, and the parameter is updated further.
Table 1 pseudo code for policy network procedures
According to the policy network model, the states of the parameters can be updated, thereby converting some parameters from free parameters into fixed parameters. The iteration terminates when the stopping criterion reaches convergence or all variables are fixed. As a Markov decision process, this early-fixing procedure proceeds in stages, each stage fixing a portion of the variables early, so that the problem size shrinks after each round of early fixing, thereby accelerating the optimization process.
The advantage of the early fixing of the present invention over early stopping is shown in FIG. 3.
Further, the training process of the policy network model in step 2 of the present invention includes:
acquiring a training data set, wherein the training data set comprises a plurality of training instances, each training instance comprises a plurality of training blocks, each training block comprises a plurality of parameters, and each parameter has a real action and a real state corresponding to that real action; the real action indicates whether the parameter's state should be fixed early;
inputting the real action to an attention layer of an initially constructed strategy network model to obtain a prediction state of a training example;
and training the strategy network according to the predicted state and the corresponding real state by combining the loss function to obtain the trained strategy network.
The loss function is a weighted binary cross-entropy loss function, specifically:
$$J(\theta) = -\sum_{e=1}^{N}\sum_{r=1}^{\gamma}\sum_{i=1}^{n}\omega_{e,r,i}\,q_{e,r,i} \tag{1}$$
where J(θ) is the loss function, θ denotes the parameters of the policy network, ω<sub>e,r,i</sub> is the weight of the i-th parameter of the r-th training block of the e-th training instance in the training data set, q<sub>e,r,i</sub> is the binary cross-entropy of the i-th parameter of the r-th training block of the e-th training instance, N is the total number of training instances, γ is the total number of training blocks in each training instance, and n is the total number of parameters in each training block.
Wherein q<sub>e,r,i</sub> is given by equation (2):
$$q_{e,r,i} = a^{*}\log\pi_{\theta}(a\mid s) + (1-a^{*})\log\bigl(1-\pi_{\theta}(a\mid s)\bigr) \tag{2}$$
where π<sub>θ</sub>(a|s) is the probability that the policy network π<sub>θ</sub> with parameters θ assigns to action a in state s, and a* is the ground-truth action corresponding to action a.
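A minimal numerical sketch of this weighted binary cross-entropy loss; averaging over instances, blocks and parameters (rather than a plain sum) is an assumption of this sketch, as is the small epsilon added for numerical safety:

```python
import numpy as np

def weighted_bce_loss(pi, a_star, weights):
    """Weighted binary cross-entropy loss.
    pi:      predicted probabilities pi_theta(a|s), shape (N, gamma, n)
    a_star:  ground-truth binary actions a*,        shape (N, gamma, n)
    weights: per-parameter weights omega_{e,r,i},   shape (N, gamma, n)"""
    eps = 1e-12                                    # numerical safety, not in the formula
    q = a_star * np.log(pi + eps) + (1 - a_star) * np.log(1 - pi + eps)
    return -(weights * q).mean()                   # negative weighted log-likelihood
```

Note the sign: q is a log-likelihood (larger is better), so the loss to minimize is its negative weighted average.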
The invention trains the early-fixing policy network model with an imitation-learning scheme: rather than training the network from scratch, it learns early-fixing features and maps directly to the converged state, thereby achieving acceleration.
Specifically, the policy network of the invention is trained by behavior cloning (Behavior Cloning), an expert-driven imitation-learning method. The invention uses the approximation method to be accelerated itself as the expert rule: an expert policy is first run on a collection of training instances, and a data set of expert <state, action> pairs is collected. The policy is then learned by minimizing the weighted binary cross-entropy (Weighted Binary Cross-Entropy) loss.
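A sketch of collecting such expert <state, action> pairs. The labeling rule used here (a parameter is labeled "fixable" once its iterates have settled near their converged value) and all names are assumptions of this sketch, not the patent's exact procedure:

```python
import numpy as np

def collect_bc_dataset(instances, run_expert, interval=10, tol=1e-3):
    """For each training instance, run the approximation method to
    convergence (the 'expert'), then label every optimization interval:
    state  = the parameter's iterate history in that interval,
    action = 1 if the parameter could already have been fixed to its
             final converged value, else 0."""
    states, actions = [], []
    for inst in instances:
        trajectory, final = run_expert(inst)      # per-step iterates, converged values
        T = len(trajectory)
        for r in range(0, T - interval + 1, interval):
            block = np.asarray(trajectory[r:r + interval])   # (interval, n_params)
            for i in range(block.shape[1]):
                hist = block[:, i]
                settled = np.all(np.abs(hist - final[i]) < tol)
                states.append(hist)
                actions.append(1 if settled else 0)
    return np.asarray(states), np.asarray(actions)
</n```

The resulting <state, action> pairs are exactly the supervision that the weighted binary cross-entropy loss consumes.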
A second aspect of the invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the steps of the method being implemented when the computer program is executed by the processor.
A third aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the above-mentioned method.
The process of the present invention will be described in more detail below with reference to more specific examples.
Step one: optimize an image segmentation problem using the ADMM method and acquire the parameters involved in the optimization; in this example the image segmentation problem contains 10000 variable parameters. Preset the optimization time intervals, each of equal length (10 iterations), and acquire the historical iteration values of each parameter in the current optimization time interval.
Step two: sequentially input the historical iteration values of each parameter into the trained policy network model to obtain each parameter's early-fixing probabilities. Taking one parameter as an example, after the first round of early fixing its probability of converging to 1 is 0.95 and its probability of converging to 0 is 0.05; with a fixing threshold of 0.9, since 0.95 > 0.9 the parameter is fixed early to 1. Taking another parameter as an example, after the first round its probability of converging to 1 is 0.8 and of converging to 0 is 0.2; again with a threshold of 0.9, since 0.8 < 0.9 no early-fixing operation is performed on this parameter.
And step three, repeating the step two until all the parameters are fixed or the optimization algorithm converges.
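The threshold decision illustrated in steps two and three can be sketched as a small helper (names hypothetical):

```python
def fix_decision(posteriors, threshold=0.9):
    """Return the discrete state to fix the parameter to, or None to keep iterating.
    posteriors: (P(converge to 0), P(converge to 1))."""
    state = max(range(len(posteriors)), key=lambda s: posteriors[s])
    return state if posteriors[state] > threshold else None
```

With the numbers from the example above, `(0.05, 0.95)` yields a fix to state 1, while `(0.2, 0.8)` yields no fix.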
Although the present application has been described with reference to a few embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. An acceleration method in intelligent decision making, comprising:
step 1, optimizing a plurality of parameters of a target object, and acquiring a historical iteration value of each parameter in a current optimization time interval;
step 2, sequentially inputting the historical iteration value of each parameter into a trained strategy network model to obtain the posterior probability of each parameter converging to each discrete candidate state;
and 3, judging whether the posterior probability corresponding to each parameter is greater than a preset early-fixed threshold, if so, fixing the state of the parameter, acquiring historical iteration values of other parameters except the parameters with fixed states in the next optimization time interval, repeating the steps 2 to 3 until a preset termination condition is reached, if not, acquiring the historical iteration value of each parameter in the next optimization time interval, and repeating the steps 2 to 3 until the preset termination condition is reached.
2. The method of claim 1, wherein fixing the state of the parameter specifically comprises:
and fixing the state of the parameter into a discrete candidate state corresponding to the posterior probability which is larger than a preset early-fixed threshold.
3. The method for accelerating an intelligent decision according to claim 1, wherein the historical iteration value of each parameter is sequentially input into a trained policy network model to obtain a posterior probability that each parameter converges on each discrete candidate state, specifically comprising:
sequentially inputting the historical iteration value of each parameter into a trained strategy network model;
reconstructing nodes based on the historical iteration values;
based on the reconstructed data of the nodes, an a posteriori probability that each of the parameters converges to each of the discrete candidate states is determined using an attention mechanism.
4. The method for accelerating intelligent decision-making according to claim 3, wherein the step of determining the posterior probability of each parameter converging on each discrete candidate state by using an attention mechanism based on the reconstructed data of the nodes comprises:
determining output data of each node by using an attention mechanism based on the data after the node reconstruction;
and performing dimension reduction on the output data of each node to obtain the posterior probability of each parameter converging to each discrete candidate state.
5. The method of accelerating in intelligent decision making according to claim 1, wherein said policy network model comprises an attention layer and an output layer;
the training process of the policy network model includes:
acquiring a training data set, wherein the training data set comprises real actions of parameters and real states corresponding to the real actions; the real action refers to whether the state of the parameter is to be determined early;
inputting the real action to the attention layer to obtain a predicted state of the parameter;
and training the strategy network model according to the predicted state and the corresponding real state and by combining a loss function to obtain a trained strategy network model.
6. An acceleration method in intelligent decision making according to claim 5, characterized in that said loss function is a weighted binary cross entropy loss function.
7. An acceleration method in intelligent decision making according to claim 6, characterized in that the loss function is determined according to a first formula:
$$J(\theta) = -\sum_{e=1}^{N}\sum_{r=1}^{\gamma}\sum_{i=1}^{n}\omega_{e,r,i}\,q_{e,r,i}$$
wherein J(θ) is the loss function, θ denotes the parameters of the policy network, ω<sub>e,r,i</sub> is the weight of the i-th parameter of the r-th training block of the e-th training instance in the training data set, q<sub>e,r,i</sub> is the binary cross-entropy of the i-th parameter of the r-th training block of the e-th training instance, N is the total number of the training instances, γ is the total number of the training blocks in each of the training instances, and n is the total number of the parameters in each of the training blocks.
8. The acceleration method in intelligent decision-making according to claim 7, wherein the binary cross entropy is determined according to a second formula:

$$q_{e,r,i} = a^{*} \log \pi_{\theta}(a \mid s) + \left(1 - a^{*}\right) \log\bigl(1 - \pi_{\theta}(a \mid s)\bigr)$$

where $\pi_{\theta}(a \mid s)$ is the probability that the policy network $\pi_{\theta}$ with parameter $\theta$ selects the current action $a$ in state $s$, and $a^{*}$ is the real action corresponding to action $a$.
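The two formulas of claims 7 and 8 can be evaluated directly. The sketch below is an illustration under assumptions: the minus sign (minimizing the negative weighted log-likelihood) is a sign convention not stated in the claims, and the shapes and values are made up.

```python
import numpy as np

def bce_term(pi, a_star):
    """Second formula: q = a* log(pi) + (1 - a*) log(1 - pi)."""
    return a_star * np.log(pi) + (1 - a_star) * np.log(1 - pi)

def policy_loss(pi, a_star, omega):
    """First formula: the omega-weighted sum of q over all parameters
    (sign convention assumed so that the loss is minimized)."""
    return -(omega * bce_term(pi, a_star)).sum()

# One training instance, one block, three parameters (shapes illustrative).
pi = np.array([0.9, 0.2, 0.7])       # pi_theta(a|s) for each parameter
a_star = np.array([1.0, 0.0, 1.0])   # real actions
omega = np.array([1.0, 2.0, 1.0])    # per-parameter weights
j = policy_loss(pi, a_star, omega)
print(round(j, 4))                   # → 0.9083
```

With uniform weights this reduces to the standard binary cross entropy summed over parameters; the weights $\omega_{e,r,i}$ let some parameters count more than others.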
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method of any of claims 1-8 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202210766315.1A 2022-07-01 2022-07-01 Acceleration method in intelligent decision, terminal equipment and storage medium Pending CN115081609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210766315.1A CN115081609A (en) 2022-07-01 2022-07-01 Acceleration method in intelligent decision, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210766315.1A CN115081609A (en) 2022-07-01 2022-07-01 Acceleration method in intelligent decision, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115081609A true CN115081609A (en) 2022-09-20

Family

ID=83257398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210766315.1A Pending CN115081609A (en) 2022-07-01 2022-07-01 Acceleration method in intelligent decision, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115081609A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116027756A (en) * 2023-02-24 2023-04-28 季华实验室 Production parameter online optimization method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
WO2021109578A1 (en) Method and apparatus for alarm prediction during service operation and maintenance, and electronic device
Mousavi et al. Traffic light control using deep policy‐gradient and value‐function‐based reinforcement learning
CN111079931A (en) State space probabilistic multi-time-series prediction method based on graph neural network
CN113361680B (en) Neural network architecture searching method, device, equipment and medium
CN111861013B (en) Power load prediction method and device
CN112910690A (en) Network traffic prediction method, device and equipment based on neural network model
CN111369299A (en) Method, device and equipment for identification and computer readable storage medium
CN113537580B (en) Public transportation passenger flow prediction method and system based on self-adaptive graph learning
CN112765894B (en) K-LSTM-based aluminum electrolysis cell state prediction method
WO2021035412A1 (en) Automatic machine learning (automl) system, method and device
CN113051130A (en) Mobile cloud load prediction method and system of LSTM network combined with attention mechanism
CN115081609A (en) Acceleration method in intelligent decision, terminal equipment and storage medium
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN114650321A (en) Task scheduling method for edge computing and edge computing terminal
WO2022252694A1 (en) Neural network optimization method and apparatus
CN115907000A (en) Small sample learning method for optimal power flow prediction of power system
CN112667394B (en) Computer resource utilization rate optimization method
CN113111308B (en) Symbolic regression method and system based on data-driven genetic programming algorithm
CN115238775A (en) Model construction method
He et al. GA-based optimization of generative adversarial networks on stock price prediction
CN111382391A (en) Target correlation feature construction method for multi-target regression
CN116957166B (en) Tunnel traffic condition prediction method and system based on Hongmon system
Huang et al. Elastic dnn inference with unpredictable exit in edge computing
CN117010459B (en) Method for automatically generating neural network based on modularization and serialization
US20220405599A1 (en) Automated design of architectures of artificial neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination