CN115035304A - Image description generation method and system based on course learning - Google Patents

Image description generation method and system based on course learning

Info

Publication number
CN115035304A
Authority
CN
China
Prior art keywords
image
model
training
image description
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210612591.2A
Other languages
Chinese (zh)
Inventor
叶剑
冯建勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.) 2022-05-31
Filing date 2022-05-31
Publication date 2022-09-09
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN202210612591.2A priority Critical patent/CN115035304A/en
Publication of CN115035304A publication Critical patent/CN115035304A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention provides an image description generation method and system based on curriculum learning, comprising: acquiring an image training set marked with expression information, and constructing an initial image description model as the current image description model; sequentially inputting the features of all images in the image training set into the current image description model to obtain a description of each image, and obtaining the difficulty of each image in the image training set from its description and the corresponding expression information; and judging whether the performance index of the current image description model reaches a preset value; if so, iteratively training the current image description model on the entire image training set, otherwise selecting an input proportion according to the performance index and selecting the features of the lowest-difficulty images in the image training set according to that proportion to train the current image description model. The image description generator trained by the invention has better generalization performance.

Description

Image description generation method and system based on course learning
Technical Field
The invention relates to the technical field of image description in image recognition and analysis, in particular to an image description generation method and system based on course learning.
Background
In recent years, the fields of natural language processing and computer vision have both developed continuously, greatly advancing the development and range of application of related technologies in artificial intelligence. To adapt to complex application scenarios, the image description generation task produces a sentence of natural language describing the content of a given image. Deep-learning-based image description is the current mainstream approach: an image description generation model is constructed and its implicit parameters are optimized on a large-scale image description dataset. The model first extracts image features with an encoder (e.g., a convolutional neural network) and then generates text with a decoder and an attention mechanism. The existing model training framework is mainly divided into two stages: training based on cross entropy, followed by further optimization of the model parameters using reinforcement learning.
Although deep learning methods have made progress, existing methods still have problems in model training and data utilization, mainly because:
On the one hand, the model requires multiple network modules and a large number of learnable parameters to master cross-modal knowledge, and the deeper the model architecture becomes, the more difficult it is to optimize.
On the other hand, the data are highly heterogeneous in quality, difficulty, and noise, which may cause performance degradation: although all samples participate in model training, different samples contribute differently to the model.
Disclosure of Invention
The invention aims to solve the prior-art problems of difficult model optimization and insufficient utilization of training data, and provides an image description generation method and system based on curriculum learning.
Aiming at the defects of the prior art, the invention provides an image description generation method based on course learning, which comprises the following steps:
step 1, acquiring an image training set marked with expression information, and constructing an initial image description model as a current image description model;
step 2, sequentially inputting the features of all images in the image training set into the current image description model to obtain a description of each image, and obtaining the difficulty of each image in the image training set from its description and the corresponding expression information; judging whether the performance index of the current image description model reaches a preset value; if so, executing step 4, otherwise executing step 3;
step 3, selecting an input proportion according to the performance index, selecting the features of the lowest-difficulty images in the image training set according to the input proportion to perform primary training on the current image description model, and executing step 2 after the primary training is completed;
step 4, performing iterative training on the current image description model with the entire image training set until a preset number of training iterations is reached or the loss of the current image description model converges, and saving the current image description model as the final image description model for generating an image description result for an image to be described.
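For readers used to code, steps 2 to 4 can be sketched as the following training skeleton. This is a minimal, hedged illustration in Python: the callables passed in for difficulty scoring, the performance index, and the optimization pass are hypothetical placeholders standing in for the components described above, and taking the input proportion equal to the performance index is a simplification, not the patent's own prescription.

```python
from typing import Callable, List, Sequence, Tuple

Sample = Tuple[object, str]  # (image features, reference description)

def curriculum_training(
    train_set: Sequence[Sample],
    difficulty: Callable[[Sample], float],           # per-sample difficulty (placeholder)
    performance_index: Callable[[], float],          # current model performance in (0, 1] (placeholder)
    train_pass: Callable[[Sequence[Sample]], None],  # one pass of model training (placeholder)
    preset_value: float,
    max_iterations: int,
) -> None:
    """Skeleton of steps 2-4; step 1 (building the training set and the initial
    model) happens before this function is called."""
    for _ in range(max_iterations):
        # Step 2: score every image against its reference description.
        scores: List[float] = [difficulty(s) for s in train_set]
        if performance_index() >= preset_value:
            # Step 4: the performance index reached the preset value, so train
            # on the whole image training set.
            train_pass(train_set)
        else:
            # Step 3: take the proportion given by the performance index (a
            # simplification) and train only on the lowest-difficulty images.
            proportion = performance_index()
            k = max(1, int(proportion * len(train_set)))
            order = sorted(range(len(train_set)), key=lambda i: scores[i])
            train_pass([train_set[i] for i in order[:k]])
```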
The image description generation method based on curriculum learning is characterized in that the difficulty is cross entropy loss or reinforcement learning reward.
The image description generation method based on curriculum learning, wherein the step 2 obtains the difficulty of each image through the following formula:
in the t-th iteration, the difficulty d of the current image sample s is:
[The difficulty formula is shown as an image in the original publication.]
where s represents the current image sample, θ represents the model parameters, |D_train| represents the number of all image samples in the image training set, e(s; θ_{t-1}) represents the cross-entropy loss or reinforcement learning reward value of the previous iteration of the image description model, e(s; θ_t) represents the cross-entropy loss or reinforcement learning reward value of the current iteration of the model, and ε is a regularization term.
The image description generation method based on curriculum learning is characterized in that the performance index c of the current image description model is obtained through the following formula:
[The two performance-index formulas are shown as images in the original publication.]
In iteration round t, c(t) ∈ (0, 1], p is a parameter controlling the progress of curriculum learning, and c_0 represents the initial capability value of the model; C_t is the CIDEr value of the current image description model, and β is a parameter controlling the curriculum learning speed; C_P and C_T are the best CIDEr values on the cross-entropy training and reinforcement learning phase datasets, respectively.
The invention also provides an image description generation system based on course learning, which comprises:
the initial module is used for acquiring an image training set marked with expression information, and constructing an initial image description model as a current image description model;
the difficulty measuring and calculating module is used for sequentially inputting the characteristics of all images in the image training set into the current image description model to obtain the description of each image, and obtaining the difficulty of each image in the image training set according to the description of the image and the corresponding expression information; judging whether the performance index of the current image description model reaches a preset value, if so, calling a second training module, otherwise, calling a first training module;
the first training module is used for selecting an input proportion according to the performance index, selecting the characteristics of the image with the lowest difficulty in the image training set according to the input proportion to perform primary training on the current image description model, and executing the difficulty measuring and calculating module after primary training is completed;
and the second training module performs iterative training on the current image description model by using all the image training sets until a preset training iteration number is reached or the loss of the current image description model is converged, and stores the current image description model as a final image description model to generate an image description result for an image to be described.
The image description generation system based on curriculum learning is characterized in that the difficulty is cross entropy loss or reinforcement learning reward.
The image description generation system based on course learning, wherein the difficulty measuring and calculating module is used for obtaining the difficulty of each image according to the following formula:
in the t-th iteration, the difficulty d of the current image sample s is:
[The difficulty formula is shown as an image in the original publication.]
where s represents the current image sample, θ represents the model parameters, |D_train| represents the number of all image samples in the image training set, e(s; θ_{t-1}) represents the cross-entropy loss or reinforcement learning reward value of the previous iteration of the image description model, e(s; θ_t) represents the cross-entropy loss or reinforcement learning reward value of the current iteration of the model, and ε is a regularization term.
The image description generation system based on curriculum learning obtains the performance index c of the current image description model according to the following formula:
[The two performance-index formulas are shown as images in the original publication.]
In iteration round t, c(t) ∈ (0, 1], p is a parameter controlling the progress of curriculum learning, and c_0 represents the initial capability value of the model; C_t is the CIDEr value of the current image description model, and β is a parameter controlling the curriculum learning speed; C_P and C_T are the best CIDEr values on the cross-entropy training and reinforcement learning phase datasets, respectively.
The invention also proposes a storage medium storing a program for any of the above curriculum-learning-based image description generation methods.
The invention also provides a client for use with the above curriculum-learning-based image description generation system.
According to the scheme, the invention has the advantages that:
as shown in fig. 3, the present invention performed experiments on the MS COCO dataset, which is the most commonly used dataset in the image description generation task, containing 123000 images, each with 5 descriptions. The results were evaluated using the indicators BLEU, METEOR, ROUGE, CIDER, SPICE.
The results of the validation set show that the method is superior to the previous training method in all 6 indexes based on the same model, and the results of the test set further prove the effectiveness of the method.
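The patent does not name a particular implementation of these metrics. The snippet below assumes the widely used pycocoevalcap package, the standard tooling for MS COCO captioning, purely to illustrate how CIDEr (the score that also drives the capability measure in this method) and BLEU can be computed; the package choice and the toy data are assumptions.

```python
# Hedged example: scoring generated captions with the pycocoevalcap package,
# which implements the MS COCO captioning metrics.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

references = {  # ground-truth descriptions, keyed by image id
    "img_001": ["a dog runs across the grass", "a brown dog is running outside"],
}
candidates = {  # one generated description per image id (same keys)
    "img_001": ["a dog is running on the grass"],
}

cider_score, _ = Cider().compute_score(references, candidates)
bleu_scores, _ = Bleu(4).compute_score(references, candidates)  # BLEU-1 .. BLEU-4
print(f"CIDEr: {cider_score:.3f}  BLEU-4: {bleu_scores[3]:.3f}")
```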
Drawings
FIG. 1 is a block diagram of the method of the present invention;
FIG. 2 is a block diagram of a curriculum learning-based image description generation method;
FIG. 3 is a comparison of the technical effects of the present invention and the prior art.
Detailed Description
The invention adopts a curriculum learning method so that training proceeds from simple to complex samples. This makes training smoother, reduces the interference of noisy data, improves generalization, and further improves performance on the image description generation task.
Specifically, to address the complexity of the image description model and the heterogeneity of the data, the invention evaluates sample difficulty at each training stage. The main idea is to consider the loss, or the decline in the reward value, and to look for samples with large room for learning. Meanwhile, as the capability of the model is continuously strengthened, the gap between the current model and a fully trained model is measured to determine how many samples to learn from; once the model capability reaches a certain level, all samples are learned.
In order to make the aforementioned features and effects of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
The method of the invention is shown in fig. 1, and comprises:
Processing the image description dataset and extracting feature data from the images. Image feature extraction takes a long time; if the features are not extracted in advance, they must be re-extracted in every iteration round, which slows training down. Once the features have been extracted, they only need to be stored and can be used directly in later training, reducing repeated computation. A minimal caching sketch is given below.
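The following is a hedged sketch of this one-off feature extraction and caching step. It assumes a PyTorch/torchvision CNN encoder (ResNet-101 with its classifier head removed) and .npy files on disk; the patent does not mandate any particular backbone, storage format, or file layout, so all of those are illustrative choices.

```python
# Hedged sketch: extract each image's feature once and cache it for later epochs.
import os
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet101(weights="DEFAULT")
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()  # drop classifier

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cache_features(image_paths, out_dir):
    """Extract each image's feature once and store it for reuse in every later epoch."""
    os.makedirs(out_dir, exist_ok=True)
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            feat = encoder(img).squeeze().cpu().numpy()  # pooled 2048-d feature
            name = os.path.splitext(os.path.basename(path))[0]
            np.save(os.path.join(out_dir, name + ".npy"), feat)
```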
Constructing an image description generation network model;
Starting the training stage: in each iteration, judging whether the maximum number of rounds has been reached; if the current round is greater than or equal to the maximum, saving the model state and finishing training, otherwise continuing training;
When the iteration count is below the maximum, first evaluating the difficulty of the samples: during cross-entropy training, computing the difficulty of each sample from the cross-entropy loss and ordering the samples from simple to complex; during reinforcement learning training, using the reward as the evaluation criterion;
Comparing the metrics of the current model with the metrics of a model that has completed all previous training, or with preset model metrics, to measure the capability level of the current model; if the model's capability reaches a threshold, it is considered up to standard, and all samples are then used for training. That is, the learning difficulty is computed for all samples and the sample difficulty is gradually increased until all samples participate in training;
Determining, from the current model capability, the proportion of the total samples that the current iteration inputs for training, and selecting samples that meet this requirement as the training input;
Continuing with the next iteration.
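The iteration-level flow just described adds two details to the earlier skeleton: training stops and the model state is saved once the maximum round is reached, and the difficulty criterion switches between cross-entropy loss and reward depending on the training phase. The sketch below is a hedged illustration; the helper callables are placeholders rather than the patent's own API, and the sign conventions (lower loss or higher reward counted as simpler) are assumptions made only for the example.

```python
from typing import Callable, Sequence

def training_loop(
    samples: Sequence[object],
    xent_loss: Callable[[object], float],   # per-sample cross-entropy loss (placeholder)
    reward: Callable[[object], float],      # per-sample reinforcement learning reward (placeholder)
    capability: Callable[[], float],        # current model capability in (0, 1] (placeholder)
    train_step: Callable[[Sequence[object]], None],
    save_state: Callable[[], None],         # checkpoint the model (placeholder)
    max_rounds: int,
    rl_phase: bool = False,
) -> None:
    for t in range(1, max_rounds + 1):
        if t >= max_rounds:                 # maximum round reached: save the state and stop
            save_state()
            break
        # Difficulty criterion depends on the phase: cross-entropy loss during
        # cross-entropy training, reward during reinforcement learning training.
        if rl_phase:
            scores = [-reward(s) for s in samples]    # higher reward taken as simpler (assumption)
        else:
            scores = [xent_loss(s) for s in samples]  # lower loss taken as simpler (assumption)
        order = sorted(range(len(samples)), key=lambda i: scores[i])  # simple -> complex
        k = max(1, int(capability() * len(samples)))  # proportion set by model capability
        train_step([samples[i] for i in order[:k]])
```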
FIG. 2 is a block diagram of the curriculum-learning-based image description generation method: the core technical point of the whole method is sample difficulty evaluation combined with the model capability measure.
(4) Sample difficulty evaluation. This part measures the difficulty of each image-description pair before each round of training and quantifies the difficulty value of the current sample. After the parameters of the image description generation model are updated, the sample difficulty is recalculated, ensuring that the system always selects training data of appropriate difficulty. The invention defines the difficulty of a sample by the decline in the cross-entropy loss or reward value, so as to account for the model change between two adjacent iterations. Let t (t ≥ 1) denote the iteration round, s the current sample, and θ the model parameters. In the t-th iteration, the difficulty of the current sample s is:
[The difficulty formula is shown as an image in the original publication.]
where |D_train| represents the number of all samples; the first iteration (t = 1) assumes that all samples have the same difficulty. When t > 1, the invention computes the decline of the loss or reward value: e(s; θ_{t-1}) represents the cross-entropy loss or reward value of the model's previous iteration, e(s; θ_t) represents the cross-entropy loss or reward value of the model's current iteration, and ε is a regularization term that prevents overflow caused by the denominator becoming too small as training proceeds; its value is generally 10^-5.
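Because the formula itself is only available as an image in this text, the following sketch implements one plausible reading of the description above: uniform difficulty at the first iteration, and thereafter the relative decline of the per-sample loss or reward between two adjacent iterations, with ε keeping the denominator away from zero. The exact algebraic form is an assumption, not taken from the patent.

```python
EPS = 1e-5  # regularization term; the text suggests a value around 10^-5

def sample_difficulty(e_prev: float, e_curr: float, t: int, num_samples: int) -> float:
    """Difficulty of one sample at iteration t.

    e_prev / e_curr: the sample's cross-entropy loss (or reward value) at the
    previous / current iteration. The relative-decline expression below is an
    assumed reconstruction of the formula shown only as an image in the patent.
    """
    if t == 1:
        # First iteration: every sample is assigned the same difficulty.
        return 1.0 / num_samples
    # Decline of the loss (or reward) between two adjacent iterations, with EPS
    # preventing overflow as the denominator shrinks during training.
    return (e_prev - e_curr) / (e_prev + EPS)
```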
(5) Model capability measurement. The invention uses CIDEr (Consensus-based Image Description Evaluation) as the index of model performance, and in each round of training selects the |D_train|·c(t) samples that are simplest for the current model; each iteration continues to add simple samples until the model is trained with all samples. In addition, the sample order is shuffled at each iteration to preserve local randomness. In iteration round t, the invention defines the model capability c(t) ∈ (0, 1], where p is a preset parameter controlling the progress of curriculum learning and c_0 represents the initial capability of the model, taken as 0.01.
[The formula for c(t) is shown as an image in the original publication.]
The above formula ensures that more and more samples participate as training proceeds. To take the current state of the model into account, the invention introduces M(t) as the capability of the model's current state. The best CIDEr values on the two training-phase datasets, C_P and C_T, are recorded as the basis for evaluation. These optimal CIDEr values are the values obtained in practice by a model from the prior art; that is, the invention first uses a mature prior-art model to record C_P and C_T on the training set for subsequent use.
[The formula for M(t) is shown as an image in the original publication.]
C_t is the model's current CIDEr score, and β is a parameter controlling the curriculum learning speed. When C_t ≥ C_P·β, or C_t - C_P ≥ (C_T - C_P)·β, the model is ready to be trained on the entire dataset, and it is then trained on the entire training set until convergence.
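The capability formulas are likewise only available as images here, but the readiness condition is stated explicitly in the text. The sketch below encodes that condition as written; the specific shapes chosen for M(t) and c(t) (a clamped normalisation of the current CIDEr score, and a power-law competence schedule with exponent p and initial value c0 = 0.01) are assumptions that are merely consistent with the surrounding description.

```python
def ready_for_full_dataset(c_t: float, c_p: float, c_big_t: float, beta: float) -> bool:
    """Readiness condition as stated in the text:
    C_t >= C_P * beta, or C_t - C_P >= (C_T - C_P) * beta."""
    return c_t >= c_p * beta or (c_t - c_p) >= (c_big_t - c_p) * beta

def capability_m(c_t: float, c_p: float, c_big_t: float, beta: float, rl_phase: bool) -> float:
    """Assumed M(t): normalise the current CIDEr score C_t into [0, 1] so that it
    reaches 1 exactly when the readiness condition above is met."""
    if not rl_phase:   # cross-entropy training phase
        ratio = c_t / (c_p * beta)
    else:              # reinforcement learning phase
        ratio = (c_t - c_p) / ((c_big_t - c_p) * beta)
    return max(0.0, min(1.0, ratio))

def competence(m_t: float, c0: float = 0.01, p: float = 2.0) -> float:
    """Assumed c(t) in (0, 1]: a competence schedule that starts at c0 and grows
    with M(t); p is the preset parameter controlling the curriculum pace."""
    return min(1.0, (m_t * (1.0 - c0 ** p) + c0 ** p) ** (1.0 / p))
```

With these helpers, an iteration would train on the competence(capability_m(...)) · |D_train| simplest samples, matching the |D_train|·c(t) selection described above.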
The design of the sample difficulty evaluation and model capability measure ensures that training samples suited to the current model are selected, preventing the model from falling into poor local optima. Under the assumptions of the invention, because the ability to generate descriptions is weak at model initialization, the training samples are determined at each iteration by the estimated sample difficulty and the capability of the model. The model keeps learning until it has sufficient capability to process the entire training set. The curriculum-learning-based method provided by the invention can easily be integrated into the training strategy of mainstream models.
The following are system examples corresponding to the above method examples, and this embodiment can be implemented in cooperation with the above embodiments. The related technical details mentioned in the above embodiments are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the above-described embodiments.
The invention also provides an image description generation system based on course learning, which comprises:
the initial module is used for acquiring an image training set marked with expression information, and constructing an initial image description model as a current image description model;
the difficulty measuring and calculating module is used for sequentially inputting the characteristics of all images in the image training set into the current image description model to obtain the description of each image, and obtaining the difficulty of each image in the image training set according to the description of the image and the corresponding expression information; judging whether the performance index of the current image description model reaches a preset value, if so, calling a second training module, and otherwise, calling a first training module;
the first training module is used for selecting an input proportion according to the performance index, selecting the characteristics of the image with the lowest difficulty in the image training set according to the input proportion to perform primary training on the current image description model, and executing the difficulty measuring and calculating module after primary training is completed;
and the second training module performs iterative training on the current image description model by using all the image training sets until a preset training iteration number is reached or the loss of the current image description model is converged, and stores the current image description model as a final image description model to generate an image description result for an image to be described.
The image description generation system based on curriculum learning is characterized in that the difficulty is cross entropy loss or reinforcement learning reward.
The image description generation system based on course learning, wherein the difficulty measuring and calculating module is used for obtaining the difficulty of each image according to the following formula:
in the t-th iteration, the difficulty d of the current image sample s is:
[The difficulty formula is shown as an image in the original publication.]
where s represents the current image sample, θ represents the model parameters, |D_train| represents the number of all image samples in the image training set, e(s; θ_{t-1}) represents the cross-entropy loss or reinforcement learning reward value of the previous iteration of the image description model, e(s; θ_t) represents the cross-entropy loss or reinforcement learning reward value of the current iteration of the model, and ε is a regularization term.
The image description generation system based on curriculum learning obtains the performance index c of the current image description model according to the following formula:
[The two performance-index formulas are shown as images in the original publication.]
In iteration round t, c(t) ∈ (0, 1], p is a parameter controlling the progress of curriculum learning, and c_0 represents the initial capability value of the model; C_t is the CIDEr value of the current image description model, and β is a parameter controlling the curriculum learning speed; C_P and C_T are the best CIDEr values on the cross-entropy training and reinforcement learning phase datasets, respectively.
The invention also proposes a storage medium storing a program for any of the above curriculum-learning-based image description generation methods.
The invention also provides a client for use with the above curriculum-learning-based image description generation system.

Claims (10)

1. An image description generation method based on course learning is characterized by comprising the following steps:
step 1, acquiring an image training set marked with expression information, and constructing an initial image description model as a current image description model;
step 2, inputting the characteristics of all images in the image training set into the current image description model in sequence to obtain the description of each image, and obtaining the difficulty of each image in the image training set according to the description of the image and the corresponding expression information; judging whether the performance index of the current image description model reaches a preset value, if so, executing a step 4, otherwise, executing a step 3;
step 3, selecting an input proportion according to the performance index, selecting the features of the image with the lowest difficulty in the image training set according to the input proportion to perform primary training on the current image description model, and executing the step 2 after primary training is completed;
and 4, performing iterative training on the current image description model by using all the image training sets until a preset training iteration number is reached or the loss of the current image description model is converged, and saving the current image description model as a final image description model to generate an image description result for an image to be described.
2. The method as claimed in claim 1, wherein the difficulty level is cross entropy loss or reinforcement learning reward.
3. The method as claimed in claim 1, wherein the step 2 obtains the difficulty of each image by the following formula:
in the t-th iteration, the difficulty d of the current image sample s is:
[The difficulty formula is shown as an image in the original publication.]
where s represents the current image sample, θ represents the model parameters, |D_train| represents the number of all image samples in the image training set, e(s; θ_{t-1}) represents the cross-entropy loss or reinforcement learning reward value of the previous iteration of the image description model, e(s; θ_t) represents the cross-entropy loss or reinforcement learning reward value of the current iteration of the model, and ε is a regularization term.
4. The curriculum-learning-based image description generation method of claim 1, wherein the performance index c of the current image description model is obtained by:
[The two performance-index formulas are shown as images in the original publication.]
In iteration round t, c(t) ∈ (0, 1], p is a parameter controlling the progress of curriculum learning, and c_0 represents the initial capability value of the model; C_t is the CIDEr value of the current image description model, and β is a parameter controlling the curriculum learning speed; C_P and C_T are the best CIDEr values on the cross-entropy training and reinforcement learning phase datasets, respectively.
5. An image description generation system based on curriculum learning, comprising:
the initial module is used for acquiring an image training set marked with expression information, and constructing an initial image description model as a current image description model;
the difficulty measuring and calculating module is used for sequentially inputting the characteristics of all images in the image training set into the current image description model to obtain the description of each image, and obtaining the difficulty of each image in the image training set according to the description of the image and the corresponding expression information; judging whether the performance index of the current image description model reaches a preset value, if so, calling a second training module, and otherwise, calling a first training module;
the first training module is used for selecting an input proportion according to the performance index, selecting the characteristics of the image with the lowest difficulty in the image training set according to the input proportion to perform primary training on the current image description model, and executing the difficulty measuring and calculating module after primary training is completed;
and the second training module performs iterative training on the current image description model by using all the image training sets until a preset training iteration number is reached or the loss of the current image description model is converged, and stores the current image description model as a final image description model to generate an image description result for an image to be described.
6. The lesson-learning-based image description generation system of claim 5, wherein the difficulty level is cross-entropy loss or reinforcement learning reward.
7. The system of claim 5, wherein the difficulty estimation module is configured to obtain the difficulty of each image according to the following formula:
in the t-th iteration, the difficulty d of the current image sample s is:
[The difficulty formula is shown as an image in the original publication.]
where s represents the current image sample, θ represents the model parameters, |D_train| represents the number of all image samples in the image training set, e(s; θ_{t-1}) represents the cross-entropy loss or reinforcement learning reward value of the previous iteration of the image description model, e(s; θ_t) represents the cross-entropy loss or reinforcement learning reward value of the current iteration of the model, and ε is a regularization term.
8. The curriculum-learning-based image description generation system of claim 5, wherein the performance metric c for the current image description model is obtained by:
[The two performance-index formulas are shown as images in the original publication.]
In iteration round t, c(t) ∈ (0, 1], p is a parameter controlling the progress of curriculum learning, and c_0 represents the initial capability value of the model; C_t is the CIDEr value of the current image description model, and β is a parameter controlling the curriculum learning speed; C_P and C_T are the best CIDEr values on the cross-entropy training and reinforcement learning phase datasets, respectively.
9. A storage medium storing a program for executing the image description generation method based on lesson learning according to any one of claims 1 to 4.
10. A client for use in the image description generation system based on course learning as claimed in any one of claims 5 to 8.
CN202210612591.2A 2022-05-31 2022-05-31 Image description generation method and system based on course learning Pending CN115035304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210612591.2A CN115035304A (en) 2022-05-31 2022-05-31 Image description generation method and system based on course learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210612591.2A CN115035304A (en) 2022-05-31 2022-05-31 Image description generation method and system based on course learning

Publications (1)

Publication Number Publication Date
CN115035304A (en) 2022-09-09

Family

ID=83123150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210612591.2A Pending CN115035304A (en) 2022-05-31 2022-05-31 Image description generation method and system based on course learning

Country Status (1)

Country Link
CN (1) CN115035304A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116910185A (en) * 2023-09-07 2023-10-20 北京中关村科金技术有限公司 Model training method, device, electronic equipment and readable storage medium
CN116910185B (en) * 2023-09-07 2023-11-28 北京中关村科金技术有限公司 Model training method, device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN108399428B (en) Triple loss function design method based on trace ratio criterion
CN110147450B (en) Knowledge complementing method and device for knowledge graph
CN110110862A (en) A kind of hyperparameter optimization method based on adaptability model
CN110969290B (en) Runoff probability prediction method and system based on deep learning
CN112257341B (en) Customized product performance prediction method based on heterogeneous data difference compensation fusion
CN111832627A (en) Image classification model training method, classification method and system for suppressing label noise
CN111650453B (en) Power equipment diagnosis method and system based on windowing characteristic Hilbert imaging
CN112000772A (en) Sentence-to-semantic matching method based on semantic feature cube and oriented to intelligent question and answer
CN113128671B (en) Service demand dynamic prediction method and system based on multi-mode machine learning
CN112365033B (en) Wind power interval prediction method, system and storage medium
CN116309571B (en) Three-dimensional cerebrovascular segmentation method and device based on semi-supervised learning
CN113326852A (en) Model training method, device, equipment, storage medium and program product
CN112686376A (en) Node representation method based on timing diagram neural network and incremental learning method
CN115035304A (en) Image description generation method and system based on course learning
CN117290429B (en) Method for calling data system interface through natural language
CN110569966A (en) Data processing method and device and electronic equipment
CN113313250B (en) Neural network training method and system adopting mixed precision quantization and knowledge distillation
CN112463643A (en) Software quality prediction method
CN114372618A (en) Student score prediction method and system, computer equipment and storage medium
CN113095328A (en) Self-training-based semantic segmentation method guided by Gini index
CN112884129B (en) Multi-step rule extraction method, device and storage medium based on teaching data
CN112132310A (en) Power equipment state estimation method and device based on improved LSTM
CN110837847A (en) User classification method and device, storage medium and server
CN109409226A (en) A kind of finger vena plot quality appraisal procedure and its device based on cascade optimization CNN
CN116407088B (en) Exercise heart rate prediction model training method and heart rate prediction method based on power vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination