CN113610226B - Online deep learning-based data set self-adaptive cutting method - Google Patents

Online deep learning-based data set self-adaptive cutting method

Info

Publication number
CN113610226B
Authority
CN
China
Prior art keywords
data set
neural network
action
learning
deep learning
Prior art date
Legal status
Active
Application number
CN202110810438.6A
Other languages
Chinese (zh)
Other versions
CN113610226A (en)
Inventor
杨峰
吴超
纪程
周明亮
Current Assignee
Nanjing Zhongke Inverse Entropy Technology Co ltd
Original Assignee
Nanjing Zhongke Inverse Entropy Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Zhongke Inverse Entropy Technology Co ltd filed Critical Nanjing Zhongke Inverse Entropy Technology Co ltd
Priority to CN202110810438.6A
Publication of CN113610226A
Application granted
Publication of CN113610226B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a data set adaptive cutting method based on online deep learning. An online deep learning neural network architecture is selected for an image data set, and the selected architecture is trained on a road image data set to obtain a converged network model. A machine learning classification algorithm then learns the characteristics of the input data set online and labels the image data set with a category according to the learning result. A reinforcement model identifies the current state from the data set category label, selects an action according to the reinforcement learning strategy, i.e. selects a structural cutting strategy for each channel and cuts one channel at a time, executes the action and obtains feedback on the action. Through online real-time training, the reinforcement model can finally select the optimal cutting strategy for the data set category, balancing precision and overhead.

Description

Online deep learning-based data set self-adaptive cutting method
Technical Field
The invention belongs to the field of deep learning technology, and particularly relates to a data set adaptive cutting method based on online deep learning.
Background
Currently, deep reinforcement learning has been applied in the field of unmanned vehicles. Deep learning can follow either an online learning strategy or a batch learning strategy. Batch learning cannot cope with streaming changes in the data set, consumes a large amount of memory, and may suffer from concept drift; because an unmanned vehicle experiences drastic scene changes, large variations in data set characteristics and limited embedded memory resources, the online learning strategy is widely used. However, online learning cannot perform model selection during training: carrying out model selection online is time-consuming and requires substantial manual effort, so in practice an over-complex model is usually chosen to guarantee sufficient learning capacity, which wastes computing and memory resources. Model cutting and compression is generally an effective way to address this problem, but existing cutting methods target a single data set on an experimental platform, ignore the different regularization penalty terms required by drastically changing data set characteristics in unmanned-vehicle scenarios, and therefore cannot fully adapt to different data set characteristics to balance precision, computation overhead and memory overhead. How to formulate a dynamic cutting strategy that adapts to the characteristics of the data set has become an urgent problem.
"Amc is automatic for model compression and access on mobile device proceedings of the national European Conference on Computer Vision (ECCV)," discloses a neural network automatic compression technology of a mobile platform, based on reinforcement learning, taking parameters such as the number of layers, the size of a convolution kernel, input dimension, cut-down and residual weight number as model states, taking a continuous function between 0 and 1 as the sparsity degree of a current layer to construct an action space, and taking prediction precision as feedback to construct a reinforcement model so as to realize automatic compression cutting-down of the model. However, the dynamic characteristics of the data set in the unmanned automobile scene are not fully considered in the state definition of the scheme, a single-structure cutting strategy is adopted in the action definition, the requirements of the dynamic characteristics changing in the data set on different cutting strategies are neglected, only prediction precision is considered in the feedback definition, and the calculation overhead and the memory overhead of the model are not considered, so that the balance between the precision and the overhead cannot be realized by the dynamic characteristics of the data set. In addition, the model adopts a layer-by-layer cutting mode, neglects the relevance among layers and easily leads to suboptimal solution.
Disclosure of Invention
The invention provides a data set self-adaptive cutting method based on online deep learning.
The technical solution for realizing the invention is as follows: a data set self-adaptive cutting method based on online deep learning comprises the following steps:
determining a neural network architecture according to the acquired data set;
classifying the data set by using a KNN classification algorithm to obtain a data set class label;
a reinforcement model identifies the current state of the neural network to be cut according to the data set category, the number of neural network layers, the convolution kernel size, the number of cut channels and the number of remaining channels;
selecting exploration or exploitation according to a greedy algorithm, where exploration means randomly selecting a cutting strategy and exploitation means selecting the action with the maximum Q value in the Q table for the current state;
selecting and executing the corresponding action according to the chosen exploration or exploitation;
and calculating a reward feedback value after the neural network performs prediction and back-propagation training, and updating the feedback value into the Q table.
Preferably, the neural network architecture comprises three hidden layers, namely a convolutional layer, a pooling layer and a fully-connected layer.
Preferably, the neural network architecture is determined as follows: the data set is input into the neural network, and the number and depth of the hidden layers are increased in turn until the neural network converges and the precision of the prediction result is greater than a given threshold.
Preferably, the Q table is maintained in memory and stores the reinforcement learning result; the vertical axis of the table lists all states, the horizontal axis lists all actions, and each state-action entry holds the Q value of that state-action pair.
Preferably, an action is a combination of several structural cutting methods: the cutting action for a convolutional layer is defined as selecting a kernel cutting shape for a convolution kernel or cutting the kernel completely, with one convolution kernel cut per action, and the action for the fully-connected layer is defined as selecting a structural cutting algorithm for a channel to cut.
Preferably, the neural network performs prediction and back-propagation training to obtain the prediction accuracy, neural network memory overhead, inference overhead and training overhead after the action is executed, and the reward feedback value is obtained as a weighted combination of these four quantities.
Preferably, the updated Q value is obtained by adding the weighted reward feedback value to the Q value in the corresponding state-action entry of the Q table.
Compared with the prior art, the invention has the following remarkable advantage: by selecting the optimal action at each action selection, it achieves a balanced optimization of prediction accuracy, memory overhead and computation overhead.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
Fig. 1 is a schematic diagram of the present invention.
Detailed Description
As shown in fig. 1, the data set adaptive cutting method based on online deep learning specifically comprises the following steps:
Step 1: collecting a data set of real-time road-condition images from the unmanned vehicle;
Step 2: selecting a neural network architecture according to the collected data set. The concrete structure comprises three types of hidden layer: convolutional layers, pooling layers and fully-connected layers. The specific selection process is to increase the number of layers and the depth of each hidden layer in turn until the neural network converges and the precision of the prediction result exceeds a given threshold;
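For illustration only, the following sketch shows one way such a grow-until-converged selection could be written in PyTorch; the names build_model and grow_until_converged, the width/depth schedule and the externally supplied evaluate callable (which trains a candidate and returns its validation accuracy) are all assumptions of this sketch, not the patented implementation.

```python
import torch.nn as nn

def build_model(num_blocks, width, num_classes=10, image_size=32):
    """Simple convolution / pooling / fully-connected network whose depth and
    width are controlled by num_blocks and width (all defaults are illustrative)."""
    layers, in_channels = [], 3
    for _ in range(num_blocks):
        layers += [nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        in_channels = width
    spatial = image_size // (2 ** num_blocks)            # spatial size after the pooling layers
    layers += [nn.Flatten(), nn.Linear(width * spatial * spatial, num_classes)]
    return nn.Sequential(*layers)

def grow_until_converged(evaluate, threshold=0.9, max_blocks=4):
    """Increase depth and width step by step until the supplied evaluate callable
    (which trains the candidate and returns validation accuracy) exceeds the threshold."""
    model = None
    for num_blocks in range(1, max_blocks + 1):
        for width in (16, 32, 64, 128):
            model = build_model(num_blocks, width)
            if evaluate(model) >= threshold:
                return model
    return model                                          # fall back to the largest candidate
```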
Step 3: classifying the data set with a KNN classification algorithm to obtain a data set category label. The specific labelling process is as follows: for each sample in the data set, the classification algorithm uses the image dimensions, the mean of all sample data, the variance of the deviation from that mean, and the Mahalanobis distance of the sample in the space spanned by its maximum and minimum values, and groups samples that are close to each other into the same category;
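A minimal sketch of this labelling step, assuming scikit-learn, a small hand-labelled seed set for the KNN classifier, and a simplified value-range feature standing in for the Mahalanobis-distance feature (the helper names image_features and label_dataset are likewise assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def image_features(img, global_mean):
    """Per-sample features: image dimensions, sample mean, variance of the deviation
    from the data-set mean, and the sample's value range."""
    h, w = img.shape[:2]
    return np.array([h, w, img.mean(), ((img - global_mean) ** 2).mean(),
                     float(img.max() - img.min())])

def label_dataset(images, seed_images, seed_labels, k=3):
    """Assign category labels to incoming images with a KNN classifier fitted on a
    small hand-labelled seed set (the seed set is an assumption of this sketch)."""
    global_mean = float(np.mean([im.mean() for im in images]))
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit([image_features(im, global_mean) for im in seed_images], seed_labels)
    return knn.predict([image_features(im, global_mean) for im in images])
```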
and step 3: and the reinforced model receives the data set category label and identifies the current state of the neural network to be cut according to the data set category, the number of the neural network layers, the convolution kernel size, the number of cut channels and the number of residual channels. For example, if the current data set has 5 classes, 5 convolution kernels, 5 channels in the fully-connected layer, 5 clipping kernel shapes in the convolution kernels, and 3 clipping algorithms in the fully-connected layer, the total number of states is represented by equation (1):
S = 5 × 6^5 × 3^5 (1)
where S is the number of states, 5 is the number of data set categories, 6^5 accounts for each of the 5 convolution kernels choosing one of the 5 cutting kernel shapes or complete removal of the kernel (6 options), and 3^5 accounts for each of the 5 fully-connected channels choosing one of the 3 cutting algorithms.
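The size of this example state space can be checked with a few lines of Python (the constant names are illustrative):

```python
# Size of the example state space in Equation (1).
NUM_CATEGORIES = 5                      # data set categories
NUM_KERNELS, KERNEL_CHOICES = 5, 6      # 5 cutting kernel shapes + complete removal per kernel
NUM_CHANNELS, CHANNEL_CHOICES = 5, 3    # 3 structural cutting algorithms per channel

num_states = NUM_CATEGORIES * KERNEL_CHOICES ** NUM_KERNELS * CHANNEL_CHOICES ** NUM_CHANNELS
print(num_states)                       # 5 * 6**5 * 3**5 = 9,447,840
```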
Step 5: selecting and executing exploration or exploitation according to a greedy algorithm, where exploration means randomly selecting a cutting strategy and exploitation means selecting the action with the maximum Q value in the Q table for the current state. The Q table is kept in internal or external memory and stores the reinforcement learning result; the vertical axis of the table lists all states, the horizontal axis lists all actions, and each state-action entry holds the Q value of that state-action pair. After each action is executed, the reinforcement model calculates the reward of the state-action pair and adds its weighted value to the Q value in the corresponding state-action entry of the Q table;
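An epsilon-greedy selection over such a Q table could be sketched as follows (the dict-of-dicts layout, the epsilon default and the name select_action are assumptions of the sketch):

```python
import random
from collections import defaultdict

q_table = defaultdict(lambda: defaultdict(float))   # q_table[state][action] -> Q value

def select_action(state, actions, epsilon=0.1):
    """Epsilon-greedy choice: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(actions)                # exploration: random cutting strategy
    row = q_table[state]
    return max(actions, key=lambda a: row[a])        # exploitation: max-Q action for this state
```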
and 5: selecting an action from the Q form by the model agent according to the exploration selected by the reinforcement learning model or by utilizing the model agent;
in a further embodiment, the actions are a combination of several structural clipping approaches, clipping the convolutional layer is defined as selecting a certain kernel clipping shape for each convolutional kernel or clipping the kernel completely, clipping one convolutional kernel at a time, and clipping the fully-connected layer is defined as selecting a certain structure for each channelCutting by a structural cutting algorithm, wherein the structural cutting algorithm comprises L1, L2 and polimization Document [3] Regularization penalty factors are used alone and in combination two by two.
Step 7: the model agent executes the cutting action: the selected convolution kernel is masked to the chosen kernel cutting shape or removed completely, and the selected channel is sparsified according to the chosen structural cutting strategy; each action cuts one convolution kernel or one channel.
For the convolutional layers, the invention builds a kernel cutting shape library with reference to "PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning" (ASPLOS '20: Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems); convolution kernels of dimension 3×3 or larger can be cut to kernel shapes that retain only a small number of kernel weights.
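A minimal sketch of how a chosen cutting shape might be applied as a binary mask to one 3×3 convolution kernel; the shape library below is purely illustrative and is not the library from PatDNN or from the patent:

```python
import numpy as np

# Hypothetical 3x3 kernel cutting shape library: binary masks that keep only a few
# weights, plus an all-zero entry meaning "remove the kernel completely".
KERNEL_SHAPES = [
    np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]),   # cross
    np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]]),   # diagonals
    np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),   # middle row
    np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),   # middle column
    np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]]),   # top-left block
    np.zeros((3, 3), dtype=int),                   # cut the whole kernel
]

def apply_kernel_action(conv_weights, kernel_idx, shape_idx):
    """Mask one 3x3 convolution kernel in place according to the chosen cutting shape.
    conv_weights is expected to have shape (num_kernels, 3, 3)."""
    conv_weights[kernel_idx] *= KERNEL_SHAPES[shape_idx]
    return conv_weights
```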
Step 8: after the next round of prediction and back-propagation training of the neural network, the prediction accuracy, neural network memory overhead, inference overhead and training overhead following the action are obtained, and the reward feedback value is calculated as their weighted combination. The prediction accuracy is obtained by comparing the prediction of the neural network with the collected ground truth, and the memory and computation overheads are measured in the operating system;
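The weighted combination could, for example, take the form below; the weight values are purely illustrative assumptions:

```python
def reward(accuracy, mem_cost, infer_cost, train_cost,
           w_acc=1.0, w_mem=0.1, w_infer=0.1, w_train=0.1):
    """Weighted combination of prediction accuracy and the three overheads: higher
    accuracy raises the reward, higher overheads lower it (weights are illustrative)."""
    return w_acc * accuracy - w_mem * mem_cost - w_infer * infer_cost - w_train * train_cost
```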
and 8: and the model agent updates the reward feedback value to a state-action column corresponding to the Q table.
The method captures the characteristics of the image data set in the unmanned system in real time and defines the state of the reinforcement model by jointly considering these indicators; by including different cutting strategies in the actions of the reinforcement model, it finds the optimal strategy for different streaming data characteristics; by including the computation and memory overhead of the model in the feedback definition, it converts the precision-overhead trade-off into a multi-objective learning problem solved by the reinforcement model; and by cutting channel by channel, it avoids the suboptimal solutions caused by layer-by-layer cutting. In this way, the optimal action is selected at each action selection to achieve a balanced optimization of prediction accuracy, memory overhead and computation overhead.
Examples
This embodiment mainly targets scenarios in which the dynamic characteristics of the unmanned-vehicle data set change drastically, such as pixel changes in image recognition or drastic changes in the distance readings acquired by a range sensor. First, an over-complex online deep learning neural network architecture is selected for the image data set: starting from a simple network, the number and depth of the convolutional, pooling and fully-connected layers are gradually increased until the prediction accuracy of the neural network reaches a given accuracy threshold.
Training the selected architecture by using a road image data set to obtain a converged network model;
online learning is carried out on the characteristics of the input data set by using machine learning classification algorithm learning, and the class of the image data set is labeled according to the learning result;
the reinforcement model identifies the current state according to the data set category label, selects an action according to the reinforcement learning strategy, namely respectively selects a structural cutting strategy for cutting one channel by one channel, executes the action and obtains the feedback of the action. Through online real-time training, the reinforced model can finally select an optimal cutting strategy according to the type of the data set so as to realize the balance of precision and expenditure.
In the online deep learning training process, the main improvement of this embodiment is an online, real-time, data-set-adaptive deep network cutting method based on reinforcement learning.
The method automatically adapts to changes in the streaming characteristics of the image data set in the unmanned-vehicle environment and finds the optimal combination of cutting strategies for each convolution kernel and each neural network channel, thereby balancing prediction accuracy, memory overhead, inference overhead and training overhead.
This embodiment classifies the data set by learning the streaming data features in real time with a feature classifier. By learning the image data set categories and the neural network architecture after real-time cutting, the reinforcement model can explore the optimal neural network architecture for each image data set category, including the combination of convolution kernel cutting shapes and the number of cut channels. Through the weighted combination of prediction accuracy, memory overhead, inference overhead and training overhead in the feedback definition, the reinforcement model can find the optimal cutting strategy for each image data category, balancing precision and overhead under real-time image data sets with different characteristics.
In this embodiment the Q table is large, which may exhaust the memory of the embedded system of the real-time unmanned-vehicle controller; the Q table is therefore stored in external memory, and frequently accessed state-action entries are kept in internal memory with an LRU (Least Recently Used) algorithm to balance memory overhead and performance.
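A sketch of such an LRU arrangement for Q-table rows, using Python's OrderedDict; the class name LRUQRows and the plain-dict stand-in for external storage are assumptions of the sketch:

```python
from collections import OrderedDict

class LRUQRows:
    """Keep only the most recently used Q-table rows in internal memory; evicted rows
    are written back to external storage (represented here by a plain dict)."""
    def __init__(self, capacity, external_store):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.external = external_store

    def get_row(self, state):
        if state in self.cache:
            self.cache.move_to_end(state)             # mark row as recently used
        else:
            self.cache[state] = self.external.get(state, {})
            if len(self.cache) > self.capacity:
                old_state, old_row = self.cache.popitem(last=False)
                self.external[old_state] = old_row    # evict least recently used row
        return self.cache[state]
```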

Claims (7)

1. A data set adaptive cutting method based on online deep learning, characterized by comprising the following steps:
determining a neural network architecture from the acquired image dataset;
classifying the image data set by using a KNN classification algorithm to obtain a data set class label;
a reinforcement model identifies the current state of the neural network to be cut according to the data set category, the number of neural network layers, the convolution kernel size, the number of cut channels and the number of remaining channels;
selecting exploration or exploitation according to a greedy algorithm, where exploration means randomly selecting a cutting strategy and exploitation means selecting the action with the maximum Q value in the Q table for the current state;
selecting and executing the corresponding action according to the chosen exploration or exploitation;
and calculating a reward feedback value after the neural network performs prediction and back-propagation training, and updating the feedback value into the Q table.
2. The data set adaptive cutting method based on online deep learning according to claim 1, wherein the neural network architecture comprises three types of hidden layer, namely a convolutional layer, a pooling layer and a fully-connected layer.
3. The data set adaptive cutting method based on online deep learning according to claim 2, wherein the neural network architecture is determined as follows: the data set is input into the neural network, and the number and depth of the hidden layers are increased in turn until the neural network converges and the precision of the prediction result is greater than a given threshold.
4. The data set adaptive cutting method based on online deep learning according to claim 1, wherein the Q table stores the reinforcement learning result, the vertical axis of the table lists all states, the horizontal axis lists all actions, and each state-action entry holds the Q value of that state-action pair.
5. The data set adaptive cutting method based on online deep learning according to claim 1, wherein an action is a combination of several structural cutting methods, the cutting action for a convolutional layer is defined as selecting a kernel cutting shape for a convolution kernel or cutting the kernel completely, each action cuts one convolution kernel, and the action for the fully-connected layer is defined as selecting a structural cutting algorithm for a channel to cut.
6. The data set adaptive cutting method based on online deep learning according to claim 1, wherein the neural network performs prediction and back-propagation training to obtain the prediction accuracy, neural network memory overhead, inference overhead and training overhead after the action is executed, and the reward feedback value is obtained as a weighted combination of these four quantities.
7. The data set adaptive cutting method based on online deep learning according to claim 6, wherein the updated Q value is obtained by adding the weighted reward feedback value to the Q value in the corresponding state-action entry of the Q table.
CN202110810438.6A 2021-07-19 2021-07-19 Online deep learning-based data set self-adaptive cutting method Active CN113610226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110810438.6A CN113610226B (en) 2021-07-19 2021-07-19 Online deep learning-based data set self-adaptive cutting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110810438.6A CN113610226B (en) 2021-07-19 2021-07-19 Online deep learning-based data set self-adaptive cutting method

Publications (2)

Publication Number Publication Date
CN113610226A CN113610226A (en) 2021-11-05
CN113610226B true CN113610226B (en) 2022-08-09

Family

ID=78337823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110810438.6A Active CN113610226B (en) 2021-07-19 2021-07-19 Online deep learning-based data set self-adaptive cutting method

Country Status (1)

Country Link
CN (1) CN113610226B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230362196A1 (en) * 2022-05-04 2023-11-09 National Tsing Hua University Master policy training method of hierarchical reinforcement learning with asymmetrical policy architecture


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112930541A (en) * 2018-10-29 2021-06-08 谷歌有限责任公司 Determining a control strategy by minimizing delusional effects
CN110210548A (en) * 2019-05-27 2019-09-06 清华大学深圳研究生院 A kind of picture dynamic self-adapting compression method based on intensified learning
CN112116089A (en) * 2020-09-07 2020-12-22 南京理工大学 Deep learning network clipping method for video processing of resource-limited equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning;Wei Niu et al.;《ASPLOS '20: Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems》;20200313;pp. 907-922 *
Pruning Deep Reinforcement Learning for Dual User Experience and Storage Lifetime Improvement on Mobile Devices;Chao Wu et al.;《IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems》;20201002;pp. 3993-4003 *
Simulation of Decision-making Method for Vehicle Longitudinal Automatic Driving Based on Deep Q Neural Network;Xu Cheng et al.;《ICAL 2020: Proceedings of the 2020 the 7th International Conference on Automation and Logistics (ICAL)》;20200731;pp. 12-17 *

Also Published As

Publication number Publication date
CN113610226A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN113221905B (en) Semantic segmentation unsupervised domain adaptation method, device and system based on uniform clustering and storage medium
CN107862864B (en) Driving condition intelligent prediction estimation method based on driving habits and traffic road conditions
EP4080416A1 (en) Adaptive search method and apparatus for neural network
CN108830196A (en) Pedestrian detection method based on feature pyramid network
CN113486764B (en) Pothole detection method based on improved YOLOv3
Rahimi et al. A parallel fuzzy c-mean algorithm for image segmentation
CN109671102A (en) A kind of composite type method for tracking target based on depth characteristic fusion convolutional neural networks
CN111783937A (en) Neural network construction method and system
CN111767860A (en) Method and terminal for realizing image recognition through convolutional neural network
CN113610226B (en) Online deep learning-based data set self-adaptive cutting method
CN114912195B (en) Aerodynamic sequence optimization method for commercial vehicle
Tan et al. A fuzzy adaptive gravitational search algorithm for two-dimensional multilevel thresholding image segmentation
CN111931904A (en) Neural network construction method and device
CN114162146A (en) Driving strategy model training method and automatic driving control method
CN115390565A (en) Unmanned ship dynamic path planning method and system based on improved D-star algorithm
EP4086818A1 (en) Method of optimizing neural network model that is pre-trained, method of providing a graphical user interface related to optimizing neural network model, and neural network model processing system performing the same
CN116432736A (en) Neural network model optimization method and device and computing equipment
US20230102866A1 (en) Neural deep equilibrium solver
CN111697570A (en) Power grid load prediction method
Wang et al. A Second-Order HMM Trajectory Prediction Method based on the Spark Platform.
Zehraoui et al. CASEP2: Hybrid case-based reasoning system for sequence processing
CN113988357B (en) Advanced learning-based high-rise building wind induced response prediction method and device
CN114859921B (en) Automatic driving optimization method based on reinforcement learning and related equipment
CN116384446A (en) Neural network architecture searching method and system based on mutation ware
CN118245901A (en) Driving behavior classifier parameter selection method, device, equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant