CN112508091B - Low-quality image classification method based on convolutional neural network

Low-quality image classification method based on convolutional neural network

Info

Publication number
CN112508091B
CN112508091B
Authority
CN
China
Prior art keywords: images, model, image, data set, layer
Prior art date: 2020-12-03
Legal status: Active
Application number
CN202011411042.6A
Other languages
Chinese (zh)
Other versions
CN112508091A (en)
Inventor
张维石
周景春
要健
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date: 2020-12-03
Publication date: 2024-01-19
Application filed by Dalian Maritime University
Priority to CN202011411042.6A
Publication of CN112508091A
Application granted
Publication of CN112508091B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The invention provides a low-quality image classification method based on a convolutional neural network. First, the data set is preprocessed: all images in the data set are resized to a uniform size, the data set is divided into a training set and a test set, and each image is labeled according to its shooting environment. Second, a deep learning model is constructed: the model is a convolutional neural network containing 6 MSCB substructures, two auxiliary classifiers are adopted to enhance the robustness of the model, and image category information is output through a softmax layer. Then, the preprocessed data set is input into the model, and the model parameters are trained through a gradient descent algorithm. Finally, the image to be predicted is input into the trained model for forward propagation to obtain its type, which falls into one of 4 classes: foggy-day images, underwater images, low-illumination images, and normal-environment images.

Description

Low-quality image classification method based on convolutional neural network
Technical Field
The invention relates to a low-quality image classification method based on a convolutional neural network.
Background
Image classification distinguishes different types of images according to their characteristic information. It is a fundamental problem in computer vision and the basis of other high-level visual tasks such as image detection, image segmentation, object tracking, and behavior analysis. Image classification is widely applied in fields such as security, the internet, and transportation.
Image classification methods based on deep learning can learn hierarchical feature descriptions in a supervised or unsupervised manner, replacing the work of manually designing or selecting image features. The convolutional neural network (Convolutional Neural Network, CNN), a deep learning model, has achieved remarkable results in the image field in recent years. A CNN takes image pixel information directly as input, retaining the information of the input image to the greatest extent; feature extraction and high-level abstraction are performed through convolution operations, and the model output is directly the image recognition result. This end-to-end learning approach from input to output achieves very good results and is widely applied. However, the fully connected layers in a traditional CNN are overly redundant, which makes the whole network structure over-parameterized, reduces training efficiency, and makes gradient problems such as gradient explosion and gradient vanishing likely to occur.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a novel low-quality image classification method based on a convolutional neural network, which divides low-quality images into 4 types according to shooting environment: foggy-day images, underwater images, low-illumination images, and normal-environment images. The method comprises four processes: preprocessing the data set, constructing the model, training the model, and predicting/classifying. First, the data set is preprocessed: all images in the data set are resized to a uniform size, the data set is divided into a training set and a test set, and each image is labeled according to its shooting environment. Second, a deep learning model is built: the model is a convolutional neural network containing 6 MSCB (Multi-Scale Convolution Block) substructures, two auxiliary classifiers are adopted to enhance the robustness of the model, and image category information is output through a softmax layer. Then, the preprocessed data set is input into the model, and the model parameters are trained through a gradient descent algorithm. Finally, the image to be predicted is input into the trained model for forward propagation to obtain its type, which falls into one of 4 classes: (1) foggy-day images; (2) underwater images; (3) low-illumination images; (4) normal-environment images. The method can effectively classify low-quality images according to shooting-environment type without any prior information, and can be applied to the preprocessing of underwater images.
The invention adopts the technical scheme that: the low-quality image classification method based on the convolutional neural network is characterized by comprising the following steps of:
step S01: preprocessing a data set; unifying the sizes of all images in the data set, randomly dividing the data set into a training set and a test set at a ratio of 4:1, and then dividing the images in the training set and the test set into 4 types according to shooting environment: foggy-day images, underwater images, low-illumination images, and normal-environment images;
step S02: building a deep learning model; the deep learning model is a convolutional neural network, the deep learning model comprises 6 MSCB substructures, two auxiliary classifiers are adopted to enhance the robustness of the model, and finally, image category information is output through a softmax layer;
step S03: inputting the preprocessed data set into the deep learning model, and training the parameters of the deep learning model through a gradient descent algorithm;
step S04: inputting an image to be predicted into a trained model for forward propagation to obtain the type of the image to be predicted; the image types are classified into: foggy day images, underwater images, low light images, and normal environment images.
Further, the preprocessing of the data set in step S01 unifies the sizes of all images in the data set according to the theoretical formula:
I_reshape = resize(I);
wherein I represents the original image, I_reshape represents the image after size unification, and resize represents the image-resizing function.
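The preprocessing step could be sketched in Python as follows. This is an illustrative sketch rather than the patent's implementation: the directory layout, the 224×224 target size, and the use of Pillow are assumptions; only the resize operation, the 4:1 random split, and the four environment labels come from the text above.

```python
import os
import random
from PIL import Image

CLASSES = ["foggy", "underwater", "low_light", "normal"]  # one label per shooting environment
TARGET_SIZE = (224, 224)  # assumed unified size; the patent does not fix a value

def preprocess_dataset(root):
    """Resize every image to a common size and randomly split 4:1 into train/test."""
    samples = []
    for label, cls in enumerate(CLASSES):
        cls_dir = os.path.join(root, cls)
        for name in os.listdir(cls_dir):
            img = Image.open(os.path.join(cls_dir, name)).convert("RGB")
            samples.append((img.resize(TARGET_SIZE), label))  # I_reshape = resize(I)
    random.shuffle(samples)
    split = int(0.8 * len(samples))  # 4:1 split between training set and test set
    return samples[:split], samples[split:]

train_set, test_set = preprocess_dataset("data")  # "data/<class>/" is an assumed layout
```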
Further, the deep learning model in step S02 performs multi-scale feature extraction through 3 independent convolution layers and 6 MSCB substructures, with two auxiliary classifiers participating in classification; the model reduces the feature volume through pooling layers, flattens the feature matrix into one dimension through a fully connected layer, obtains the probability of each category through a softmax layer, and forms the final prediction by weighting this output with the outputs of the auxiliary classifiers; the loss function value is obtained by computing the cross entropy between the prediction and the true value, with the theoretical formulas as follows:
p_i = e^(y_i) / Σ_(j=1..m) e^(y_j);
L = −Σ_(i=1..m) p_i* · log(p_i);
wherein L represents the computed loss value, m represents the number of categories, p_i and p_i* respectively represent the i-th predicted probability and the i-th true label value, y_i is the i-th output value of the classifier, and e is the natural constant.
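A minimal NumPy sketch of this loss computation, assuming the formulas above; the 0.3 weights on the two auxiliary classifiers are an assumption for the example, since the patent does not state the weighting factors:

```python
import numpy as np

def softmax(y):
    # p_i = e^(y_i) / Σ_j e^(y_j), computed in a numerically stable way
    z = np.exp(y - y.max())
    return z / z.sum()

def cross_entropy(p, p_true):
    # L = -Σ_i p_i* · log(p_i); the small constant guards against log(0)
    return -np.sum(p_true * np.log(p + 1e-12))

def combined_prediction(main_logits, aux1_logits, aux2_logits, aux_w=0.3):
    # weighted combination of the main softmax output with the two auxiliary classifiers
    return softmax(main_logits) + aux_w * softmax(aux1_logits) + aux_w * softmax(aux2_logits)

p_true = np.array([0.0, 1.0, 0.0, 0.0])   # one-hot label, m = 4 categories
logits = np.array([0.2, 2.1, -0.5, 0.1])  # classifier outputs y_i
loss = cross_entropy(softmax(logits), p_true)
```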
Furthermore, in the model training process of step S03, the preprocessed data set is input into the model and the model parameters are trained through a gradient descent algorithm, with the theoretical formulas as follows:
S_t = S_(t-1) + g(w_t)·g(w_t);
w_(t+1) = w_t − rate·g(w_t)/√(S_t + λ);
wherein S_t represents the accumulated sum of squared loss gradients at time t, S_(t-1) represents that sum at time t−1, w_t represents the parameter value at time t, w_(t+1) represents the parameter value at time t+1, and g(w_t) represents the gradient of the loss with respect to the parameter at time t; rate is the learning rate, and λ is a constant of 10^−7 whose purpose is to prevent the denominator from being zero.
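One update step under these formulas can be sketched as follows; the learning rate value is an assumption for the example:

```python
import numpy as np

def adagrad_step(w, grad, S, rate=0.01, lam=1e-7):
    """One gradient-descent update: S_t = S_(t-1) + g·g and
    w_(t+1) = w_t - rate·g / sqrt(S_t + λ)."""
    S = S + grad * grad                      # accumulate squared loss gradients
    w = w - rate * grad / np.sqrt(S + lam)   # λ = 1e-7 keeps the denominator non-zero
    return w, S

w, S = np.zeros(4), np.zeros(4)
g = np.array([0.5, -0.1, 0.3, 0.0])          # g(w_t): loss gradient at time t
w, S = adagrad_step(w, g, S)
```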
Further, the forward propagation in step S04 proceeds as follows, where the theoretical formula for conduction from the neurons of layer n to a neuron of layer n+1 is:
X_(n+1) = LeakyRelu( Σ_(i=1..m) w_i·y_i + b_n );
wherein X_(n+1) represents the value obtained after conduction from the n-th layer neurons to the (n+1)-th layer, m represents the number of n-th layer neurons, y_i and w_i respectively represent the output value of the i-th neuron of the n-th layer and its corresponding weight, and b_n represents the bias parameter of the n-th layer; the LeakyRelu activation function is:
LeakyRelu(x) = x, if x > 0; x/a, if x ≤ 0;
wherein a is the slope parameter for inputs smaller than zero, set as a fixed parameter in the (1, +∞) interval.
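A sketch of this forward conduction for one layer; the concrete value a = 5.0 and the layer sizes are assumptions, since the patent only constrains a to the (1, +∞) interval:

```python
import numpy as np

def leaky_relu(x, a=5.0):
    # LeakyRelu(x) = x for x > 0, x / a for x <= 0, with a fixed in (1, +inf)
    return np.where(x > 0, x, x / a)

def forward_layer(y, W, b, a=5.0):
    """X_(n+1) = LeakyRelu( Σ_i w_i·y_i + b_n ) for one layer."""
    return leaky_relu(W @ y + b, a)

y = np.array([0.4, -1.2, 0.7])     # outputs y_i of the m = 3 neurons of layer n
W = 0.1 * np.random.randn(2, 3)    # weights w_i feeding the 2 neurons of layer n+1
b = np.zeros(2)                    # bias b_n of layer n
x_next = forward_layer(y, W, b)
```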
the invention overcomes the defects of the prior art, and provides a novel low-quality image classification method based on a convolutional neural network, which can divide low-quality images into 4 types according to different shooting environments, namely a foggy day image, an underwater image, a low-illumination image and a normal environment image. The method comprises the following four processes: first, preprocessing a data set, constructing a model, training the model, and predicting and classifying. Firstly, preprocessing a data set, unifying the sizes of all images in the data set, dividing the data set into a training set and a testing set, and labeling corresponding labels according to shooting environments; secondly, a deep learning model is built, the deep learning model is a convolutional neural network, the model comprises 6 MSCB (Multi-scale convolution block) substructures, meanwhile, two auxiliary classifiers are adopted to enhance the robustness of the model, and image category information is output through a softmax layer; then, inputting the preprocessed data set into a model, and training each parameter in the model through a gradient descent algorithm; finally, inputting the image to be predicted into a trained model for forward propagation to obtain the types of the image, wherein the types can be totally divided into 4 types: (1) foggy day images; (2) underwater images; (3) a low-light image; (4) normal environmental images. The method can effectively classify the low-quality images according to the shooting environment types without any prior information, and can be applied to preprocessing of underwater images.
Drawings
In order to clearly illustrate the invention and its technical solution, the drawings used in the description of the embodiments and the prior art are briefly introduced below.
FIG. 1 is a model overview flow chart of the present invention;
FIG. 2 is a schematic flow chart of an MSCB sub-structure in the model of the present invention;
FIG. 3 shows examples of foggy-day images; 3-1, 3-2 and 3-3 are foggy-day images of different scenes.
FIG. 4 shows examples of low-illumination images; 4-1, 4-2 and 4-3 are low-illumination images of different scenes.
FIG. 5 shows examples of underwater images; 5-1, 5-2 and 5-3 are underwater images of different scenes.
FIG. 6 shows examples of normal-shooting-environment images; 6-1, 6-2 and 6-3 are normal-environment images of different scenes.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to verify the effectiveness of the invention for image classification, images of different scenes are selected as the test data set, and the experimental results are compared with those of common network structures such as the BP neural network, AlexNet, and VGGNet for analysis and verification. The specific steps and principles are as follows:
as shown in fig. 1, the present invention provides a low-quality image classification method based on a convolutional neural network, comprising the following steps:
step S01: preprocessing a data set; unifying the sizes of all images in the data set, randomly dividing the data set into a training set and a test set at a ratio of 4:1, and then dividing the images in the training set and the test set into 4 types according to shooting environment: foggy-day images, underwater images, low-illumination images, and normal-environment images;
step S02: building a deep learning model; the deep learning model is a convolutional neural network, the deep learning model comprises 6 MSCB substructures, two auxiliary classifiers are adopted to enhance the robustness of the model, and finally, image category information is output through a softmax layer;
step S03: inputting the preprocessed data set into the deep learning model, and training the parameters of the deep learning model through a gradient descent algorithm;
step S04: inputting an image to be predicted into a trained model for forward propagation to obtain the type of the image to be predicted; the image types are classified into: foggy day images, underwater images, low light images, and normal environment images.
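As a minimal sketch of step S04, prediction reduces to one forward pass followed by an argmax; here `model` stands for the trained network and is assumed to return the weighted class probabilities, and the 224×224 size repeats the assumption made in the preprocessing sketch:

```python
import numpy as np
from PIL import Image

CLASSES = ["foggy day", "underwater", "low light", "normal environment"]

def predict(model, path, target_size=(224, 224)):
    """Resize one image, forward-propagate it, and return its environment type."""
    img = np.asarray(Image.open(path).convert("RGB").resize(target_size))
    probs = model(img)                     # forward propagation through the trained model
    return CLASSES[int(np.argmax(probs))]  # the most probable of the 4 categories
```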
As a preferred embodiment, in the present application, the preprocessing of the data set in step S01 unifies the sizes of all images in the data set according to the theoretical formula:
I_reshape = resize(I);
wherein I represents the original image, I_reshape represents the image after size unification, and resize represents the image-resizing function.
Preferably, the deep learning model in step S02 performs multi-scale feature extraction through 3 independent convolution layers and 6 MSCB substructures, with two auxiliary classifiers participating in classification; the model reduces the feature volume through pooling layers, flattens the feature matrix into one dimension through a fully connected layer, obtains the probability of each category through a softmax layer, and forms the final prediction by weighting this output with the outputs of the auxiliary classifiers; the loss function value is obtained by computing the cross entropy between the prediction and the true value, with the theoretical formulas as follows:
p_i = e^(y_i) / Σ_(j=1..m) e^(y_j);
L = −Σ_(i=1..m) p_i* · log(p_i);
wherein L represents the computed loss value, m represents the number of categories, p_i and p_i* respectively represent the i-th predicted probability and the i-th true label value, y_i is the i-th output value of the classifier, and e is the natural constant.
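The text does not spell out the MSCB internals (they are depicted in FIG. 2); the following PyTorch sketch therefore assumes an Inception-style block that concatenates parallel convolutions with different kernel sizes, which is one plausible reading of "multi-scale convolution block"; the branch widths and activation slope are also assumptions:

```python
import torch
import torch.nn as nn

class MSCB(nn.Module):
    """Assumed multi-scale convolution block: parallel 1x1 / 3x3 / 5x5 branches
    concatenated along the channel axis (Inception-style; cf. FIG. 2)."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.act = nn.LeakyReLU(0.2)  # negative slope 1/a; the value is assumed

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

out = MSCB(64, 32)(torch.randn(1, 64, 56, 56))  # -> shape (1, 96, 56, 56)
```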
Furthermore, in the model training process of step S03, the preprocessed data set is input into the model and the model parameters are trained through a gradient descent algorithm, with the theoretical formulas as follows:
S_t = S_(t-1) + g(w_t)·g(w_t);
w_(t+1) = w_t − rate·g(w_t)/√(S_t + λ);
wherein S_t represents the accumulated sum of squared loss gradients at time t, S_(t-1) represents that sum at time t−1, w_t represents the parameter value at time t, w_(t+1) represents the parameter value at time t+1, and g(w_t) represents the gradient of the loss with respect to the parameter at time t; rate is the learning rate, and λ is a constant of 10^−7 whose purpose is to prevent the denominator from being zero.
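A training loop matching this scheme could be sketched with PyTorch's built-in Adagrad optimizer, which implements the squared-gradient accumulation above; the epoch count and learning rate are assumed values:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=30, rate=0.01):
    """Train the model by gradient descent with per-parameter Adagrad scaling."""
    opt = torch.optim.Adagrad(model.parameters(), lr=rate, eps=1e-7)  # eps plays the role of λ
    loss_fn = nn.CrossEntropyLoss()  # cross entropy between prediction and true label
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
```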
Meanwhile, in the present application, the forward propagation process of step S04 conducts from the neurons of layer n to the neurons of layer n+1 according to the theoretical formula:
X_(n+1) = LeakyRelu( Σ_(i=1..m) w_i·y_i + b_n );
wherein X_(n+1) represents the value obtained after conduction from the n-th layer neurons to the (n+1)-th layer, m represents the number of n-th layer neurons, y_i and w_i respectively represent the output value of the i-th neuron of the n-th layer and its corresponding weight, and b_n represents the bias parameter of the n-th layer; the LeakyRelu activation function is:
LeakyRelu(x) = x, if x > 0; x/a, if x ≤ 0;
wherein a is the slope parameter for inputs smaller than zero, set as a fixed parameter in the (1, +∞) interval.
Examples
In order to verify the accuracy and robustness of the invention, the test-set accuracy index is used for a comparative analysis against common network structures such as the BP neural network, AlexNet, and VGGNet; the specific data are shown in Table 1.
Table 1: Test-set accuracy of the proposed network and other networks
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the technical solutions according to the embodiments of the present invention.

Claims (3)

1. The low-quality image classification method based on the convolutional neural network is characterized by comprising the following steps of:
step S01: preprocessing a data set; unifying the sizes of all images in the data set, randomly dividing the data set into a training set and a test set at a ratio of 4:1, and then dividing the images in the training set and the test set into 4 types according to shooting environment: foggy-day images, underwater images, low-illumination images, and normal-environment images;
step S02: building a deep learning model; the deep learning model is a convolutional neural network, the deep learning model comprises 6 MSCB substructures, two auxiliary classifiers are adopted to enhance the robustness of the model, and finally, image category information is output through a softmax layer;
step S03: inputting the preprocessed data set into the deep learning model, and training the parameters of the deep learning model through a gradient descent algorithm; in the model training process of step S03, the preprocessed data set is input into the model and the model parameters are trained through the gradient descent algorithm, with the theoretical formulas as follows:
S_t = S_(t-1) + g(w_t)·g(w_t);
w_(t+1) = w_t − rate·g(w_t)/√(S_t + λ);
wherein S_t represents the accumulated sum of squared loss gradients at time t, S_(t-1) represents that sum at time t−1, w_t represents the parameter value at time t, w_(t+1) represents the parameter value at time t+1, and g(w_t) represents the gradient of the loss with respect to the parameter at time t; rate is the learning rate, and λ is a constant of 10^−7 whose purpose is to prevent the denominator from being zero;
step S04: inputting an image to be predicted into the trained model for forward propagation to obtain the type of the image to be predicted; the image types are: foggy-day images, underwater images, low-illumination images, and normal-environment images; in the forward propagation process of step S04, the theoretical formula for conduction from the neurons of layer n to a neuron of layer n+1 is as follows:
X_(n+1) = LeakyRelu( Σ_(i=1..m) w_i·y_i + b_n );
wherein X_(n+1) represents the value obtained after conduction from the n-th layer neurons to the (n+1)-th layer, m represents the number of n-th layer neurons, y_i and w_i respectively represent the output value of the i-th neuron of the n-th layer and its corresponding weight, and b_n represents the bias parameter of the n-th layer; the LeakyRelu activation function is:
LeakyRelu(x) = x, if x > 0; x/a, if x ≤ 0;
wherein a is the slope parameter for inputs smaller than zero, set as a fixed parameter in the (1, +∞) interval.
2. The low-quality image classification method based on a convolutional neural network according to claim 1, wherein: the preprocessing of the data set in step S01 unifies the sizes of all images in the data set according to the theoretical formula:
I_reshape = resize(I);
wherein I represents the original image, I_reshape represents the image after size unification, and resize represents the image-resizing function.
3. The low-quality image classification method based on a convolutional neural network according to claim 1, wherein: the deep learning model in step S02 performs multi-scale feature extraction through 3 independent convolution layers and 6 MSCB substructures, with two auxiliary classifiers participating in classification; the deep learning model reduces the feature volume through pooling layers, flattens the feature matrix into one dimension through a fully connected layer, obtains the probability of each category through a softmax layer, and forms the prediction by weighting this output with the outputs of the auxiliary classifiers; the loss function value is obtained by computing the cross entropy between the prediction and the true value, with the theoretical formulas as follows:
p_i = e^(y_i) / Σ_(j=1..m) e^(y_j);
L = −Σ_(i=1..m) p_i* · log(p_i);
wherein L represents the computed loss value, m represents the number of categories, p_i and p_i* respectively represent the i-th predicted probability and the i-th true label value, y_i is the i-th output value of the classifier, and e is the natural constant.
CN202011411042.6A 2020-12-03 2020-12-03 Low-quality image classification method based on convolutional neural network Active CN112508091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011411042.6A CN112508091B (en) 2020-12-03 2020-12-03 Low-quality image classification method based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN112508091A (en) 2021-03-16
CN112508091B (en) 2024-01-19

Family

ID=74971857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011411042.6A Active CN112508091B (en) 2020-12-03 2020-12-03 Low-quality image classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112508091B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651830A (en) * 2016-09-28 2017-05-10 华南理工大学 Image quality test method based on parallel convolutional neural network
KR20180050832A (en) * 2016-11-07 2018-05-16 한국과학기술원 Method and system for dehazing image using convolutional neural network
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN111046967A (en) * 2019-12-18 2020-04-21 江苏科技大学 Underwater image classification method based on convolutional neural network and attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image classification method based on an improved convolutional neural network; 胡貌男 (Hu Maonan), 邱康 (Qiu Kang), 谢本亮 (Xie Benliang); 通信技术 (Communications Technology), No. 11; full text *

Also Published As

Publication number Publication date
CN112508091A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
Alani et al. Hand gesture recognition using an adapted convolutional neural network with data augmentation
CN111028217A (en) Image crack segmentation method based on full convolution neural network
CN109840560B (en) Image classification method based on clustering in capsule network
CN110619369A (en) Fine-grained image classification method based on feature pyramid and global average pooling
CN109993100B (en) Method for realizing facial expression recognition based on deep feature clustering
CN109273054B (en) Protein subcellular interval prediction method based on relational graph
CN111145145B (en) Image surface defect detection method based on MobileNet
CN110751195A (en) Fine-grained image classification method based on improved YOLOv3
CN111832580B (en) SAR target recognition method combining less sample learning and target attribute characteristics
CN110991247B (en) Electronic component identification method based on deep learning and NCA fusion
Yang et al. An improved algorithm for the detection of fastening targets based on machine vision
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
Kailkhura et al. Ensemble learning-based approach for crack detection using CNN
CN111709442A (en) Multilayer dictionary learning method for image classification task
CN112508091B (en) Low-quality image classification method based on convolutional neural network
CN115410047A (en) Infrared image electric bicycle target detection method based on improved YOLO v5s
CN114417938A (en) Electromagnetic target classification method using knowledge vector embedding
CN114612450A (en) Image detection segmentation method and system based on data augmentation machine vision and electronic equipment
CN112699898A (en) Image direction identification method based on multi-layer feature fusion
CN111461130A (en) High-precision image semantic segmentation algorithm model and segmentation method
HS et al. A novel method to recognize object in Images using Convolution Neural Networks
CN110717544A (en) Pedestrian attribute analysis method and system under vertical fisheye lens
CN109508742A (en) Handwritten Digit Recognition method based on ARM platform and independent loops neural network
CN116246128B (en) Training method and device of detection model crossing data sets and electronic equipment
Sharma et al. Classification of Image with Convolutional Neural Network and TensorFlow on CIFAR-10 Dataset

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant