CN113793328A - Light-weight egg shape recognition method based on SE-ResNet structure - Google Patents

Light-weight egg shape recognition method based on SE-ResNet structure

Info

Publication number
CN113793328A
CN113793328A (publication); CN202111113900.3A (application)
Authority
CN
China
Prior art keywords
egg
resnet
model
data
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111113900.3A
Other languages
Chinese (zh)
Inventor
李振波
郭玉阳
李萌
岳峻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Agricultural University
Original Assignee
China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Agricultural University filed Critical China Agricultural University
Priority to CN202111113900.3A priority Critical patent/CN113793328A/en
Publication of CN113793328A publication Critical patent/CN113793328A/en
Pending legal-status Critical Current

Classifications

    All classifications fall under G (PHYSICS), G06 (COMPUTING; CALCULATING OR COUNTING):
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/60: Image analysis; analysis of geometric attributes
    • G06F 18/2415: Pattern recognition; classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/048: Neural networks; architecture, e.g. interconnection topology; activation functions
    • G06N 3/08: Neural networks; learning methods
    • G06T 2207/10024: Indexing scheme for image analysis or image enhancement; image acquisition modality; color image
    • G06T 2207/20081: Indexing scheme for image analysis or image enhancement; special algorithmic details; training/learning

Abstract

The invention discloses a lightweight egg shape recognition method based on an SE-ResNet structure. The method comprises: collecting egg-shape sample pictures and establishing an egg-shape data set; expanding the hatching-egg data set by rotating, translating and flipping the sample pictures; labeling the sample pictures in the expanded data set with the Labelme tool; randomly dividing the labeled egg-shape data set into a training set and a test set; combining the SENet and ResNet34 convolutional neural networks to build a new lightweight convolutional neural network model, formed by stacking a multi-scale convolution module, a max-pooling layer, an SE-ResNet module, a SENet module, an average-pooling layer and a fully connected layer; and importing the training set for training, then testing with the test set. The model built by this method has a smaller model size, higher egg-shape recognition accuracy and faster training, which makes deployment of the model on mobile terminals technically feasible.

Description

Light-weight egg shape recognition method based on SE-ResNet structure
Technical Field
The invention belongs to the technical field of image classification, and particularly relates to a light-weight egg shape recognition method based on an SE-ResNet structure.
Background
Egg shapes include small eggs, malformed eggs, double-yolk eggs, standard eggs and the like, and are of considerable research value. Egg shape is an important indicator of the health of breeding hens; every day a large number of unqualified breeding eggs go undetected, the health of the breeding hens is overlooked, and substantial losses result. Accurate detection and recognition of egg shapes is therefore a key factor both for the health of breeding hens and for improving economic returns.
Traditional egg shape recognition methods mainly preprocess the egg image with image processing techniques, extract certain hand-crafted features, and then classify the extracted features with a classifier to realize egg-shape classification. Researchers at Inner Mongolia University proposed an egg-shape test that uses computer vision to segment the egg contour, extracts the major and minor axes of the egg on the basis of machine vision and moment techniques, rejects eggs whose egg-shape index is unqualified, and then constructs a genetic neural network model and a grading algorithm. The detection accuracies for round eggs, over-pointed eggs, deformed eggs and normal eggs reach 97.10%, 95.59%, 94.87% and 95.75% respectively, which can meet practical production requirements, but the method remains time-consuming and is only suitable for white-shell eggs. The egg-shape index is one of the important indicators for measuring the appearance characteristics of eggs.
ResNet was proposed in 2015 and won first place in the ImageNet classification task with a top-5 error rate of 3.57%. Because it is both simple and practical, the architecture makes good use of the computational resources in the network: its residual (shortcut) connections allow the depth of the network to be increased greatly without degrading training or sharply increasing the computational load. ResNet achieves good results on both classification and detection.
Batch Normalization (Ioffe and Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", 2015) was proposed to address the internal covariate shift problem. BN uses a normalization step to force the distribution of the inputs to every neuron of each layer back towards a standard normal distribution with mean 0 and variance 1, which avoids vanishing gradients and accelerates network convergence. Combining BN with deep convolutional networks improved the best ImageNet classification result at the time to a top-5 error rate of 4.9%, exceeding reported human-level performance.
Squeeze-and-Excitation Networks (SENet), presented in "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018", won the classification task of the final ImageNet 2017 competition. SENet focuses on the relationships between channels: a squeeze-and-excitation (SE) block explicitly models the dependencies between channels, i.e. the importance of each feature channel, and then, according to this importance, promotes useful features and suppresses features that are of little use to the current task. The SENet idea is simple and can easily be added to existing network structures.
The goal is to improve the accuracy and real-time performance of egg-shape recognition while keeping hardware requirements low. Mature object recognition methods based on deep learning perform well on public data sets, but agricultural data has its own characteristics and such models cannot be transferred directly. In addition, most of the better-performing recognition network models in current research are large and cannot meet the needs of practical mobile-terminal deployment.
Disclosure of Invention
To address these problems, the invention takes egg shape as the research object, builds an egg data set containing 4 egg-shape classes (including the normal class), and proposes a lightweight egg-shape recognition network based on ResNet, SENet and the batch normalization algorithm. A lightweight network architecture is designed using the ResNet structure combined with SENet modules and BN layers, and parameters are tuned continuously during training.
A lightweight egg shape recognition method based on an SE-ResNet structure comprises the following steps:
step 1, data establishment, namely acquiring egg-shape sample pictures and establishing an egg-shape data set;
step 2, data expansion, namely expanding the hatching-egg data set by rotating, translating and flipping the sample pictures in the data set;
step 3, data labeling, namely labeling the sample pictures in the data set expanded in step 2 with the Labelme tool;
step 4, data division, namely randomly dividing the egg-shape data set labeled in step 3 into a training set and a test set;
step 5, model construction, namely combining the SENet and ResNet34 convolutional neural networks to build a new lightweight convolutional neural network model, the SE-ResNet model, formed by stacking a multi-scale convolution module, a max-pooling layer, an SE-ResNet module, a SENet module, an average-pooling layer and a fully connected layer;
step 6, model training, namely importing the training set, training the SE-ResNet model built in step 5, and saving the trained SE-ResNet model;
step 7, testing, namely using the test set to run comparison tests between the SE-ResNet model trained in step 6 and other convolutional neural networks so as to compare their performance;
and step 8, identifying the hatching eggs using the tested SE-ResNet model.
Preferably, the hatching-egg data set expanded in step 2 comprises 3735 egg-shape pictures in 4 classes: small eggs, malformed eggs, double-yolk eggs and standard eggs.
Preferably, the ratio of the training set to the test set divided in step 4 is 8:2.
Preferably, the model established in step 5 is composed of 1 multi-scale convolution module, 1 convolution module, 2 maximum pooling layers, 4 SE-ResNet modules, 1 SE module, 1 average pooling layer and 1 full-connection layer.
Preferably, the convolution module includes a convolution layer and a batch normalization processing layer, and the batch normalization processing is performed after the convolution layer.
Preferably, the ResNet structure used is ResNetV2, whose residual branch consists of three convolution kernels of 1 × 1, 3 × 3 and 1 × 1.
Drawings
FIG. 1 is a flowchart of steps of a lightweight egg shape recognition method based on an SE-ResNet structure according to the present invention;
FIG. 2 is a diagram of the SE-ResNet model architecture used in the method of the present invention;
FIG. 3 is a block diagram of the multi-scale feature extraction module used in the method of the present invention;
FIG. 4 is a diagram of the ResNetV1 structure;
FIG. 5 is a diagram of the ResNetV2 structure;
FIG. 6 is a block diagram of the SE-ResNet module used in the method of the present invention.
Detailed Description
As shown in fig. 1, the present invention provides a light-weight egg shape recognition method based on SE-ResNet structure, comprising the following steps:
step 1, data establishment. Collecting egg-shaped sample pictures and establishing a common egg-shaped data set;
and 2, data expansion. The hatching egg data set is expanded by rotating, translating and overturning the sample picture. The data set of the invention comprises 3735 egg-shaped pictures of small eggs, malformed eggs, double-yellow eggs and standard eggs in 4 types.
Step 3, data annotation. The egg-shape pictures from step 2 are labeled with the Labelme tool.
Step 4, data division. The egg-shape data set labeled in step 3 is divided into a training set and a test set at a ratio of 8:2.
Step 5, model construction. The SENet and ResNet34 convolutional neural networks are combined to build a new lightweight convolutional neural network model; as shown in FIG. 2, the model is formed by stacking 1 multi-scale convolution module, 1 convolution module, 2 max-pooling layers, 4 SE-ResNet modules, 1 SENet module, 1 average-pooling layer and 1 fully connected layer.
The multi-scale convolution module is shown in FIG. 3. Because there are many egg-shape types, different egg shapes differ considerably in size, colour and texture, and the same egg shape differs noticeably at different stages. Convolution kernels of different scales are therefore applied to the input picture, so that local features at multiple scales are extracted simultaneously, which improves the robustness of the network.
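A minimal PyTorch sketch of such a multi-scale convolution module follows. The kernel sizes (3 × 3, 5 × 5, 7 × 7) and channel counts are assumptions for illustration; the exact layout is given by FIG. 3, which is not reproduced here.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Convolve the input with kernels of several sizes and concatenate the results."""
    def __init__(self, in_ch=3, out_ch_per_branch=16, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch_per_branch, k, padding=k // 2),
                nn.BatchNorm2d(out_ch_per_branch),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Every branch sees the same input; features with different receptive
        # fields are concatenated along the channel dimension.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a batch of two 224x224 RGB egg images -> 48-channel feature map
features = MultiScaleConv()(torch.randn(2, 3, 224, 224))
print(features.shape)  # torch.Size([2, 48, 224, 224])
```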
The convolution module consists of a convolution layer followed by a batch normalization (BN) layer. The main purpose of batch normalization is to force the distribution of the input to every neuron in each layer back to a standard normal distribution with mean 0 and variance 1 through a normalization step, which accelerates convergence and yields better prediction accuracy and model generalization.
The main steps of batch normalization are as follows:
Input: the values of x in one mini-batch, B = {x_1, ..., x_m}; the parameters γ, β to be learned.
Output: {y_i = BN_γ,β(x_i)}
Mean of the data in each training batch:
μ_B = (1/m) Σ_{i=1..m} x_i
Variance of the data in each training batch:
σ_B² = (1/m) Σ_{i=1..m} (x_i − μ_B)²
Normalize the data of the batch using the computed mean and variance:
x̂_i = (x_i − μ_B) / √(σ_B² + ε)
Scale and shift:
y_i = γ x̂_i + β
where x is the input to the function; B is the set consisting of the m values of x in the batch; m is the number of values of x contained in B; i denotes the position of x within B; and ε is a small positive number used to avoid division by zero.
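The steps above can be written compactly in plain NumPy. This is a minimal sketch of the forward pass only (the learned γ, β and the ε value shown are illustrative).

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over a mini-batch x of shape (m, features)."""
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance (1/m form)
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize to zero mean, unit variance
    return gamma * x_hat + beta             # scale and shift with learned gamma, beta

x = np.random.randn(32, 64)                       # a mini-batch of 32 samples, 64 features
y = batch_norm(x, gamma=np.ones(64), beta=np.zeros(64))
print(y.mean(axis=0).round(6)[:3], y.std(axis=0).round(3)[:3])  # roughly 0 and 1
```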
4 SE-ResNet modules. The SENet module works in three steps: squeeze, excitation and recalibration. The specific flow is as follows: assuming the original feature map is H × W × C, a 1 × 1 × C descriptor is first obtained by global pooling (squeeze); then a fully connected layer, a ReLU activation layer, a second fully connected layer and a softmax layer produce a 1 × 1 × C weight vector (excitation); finally the weights are applied to the original feature map, restoring its original size (recalibration).
To better fit the complex correlations among channels during the squeeze and recalibration steps while greatly reducing the number of parameters and the amount of computation, and to add more non-linearity, the SENet module reduces the number of neurons to C/r at the first fully connected layer, where r is the channel compression ratio. The second fully connected layer then raises the dimension again, giving a 1 × 1 × C weight vector. Furthermore, because of the correlation between channels, softmax rather than sigmoid is used after the second fully connected layer.
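A hedged PyTorch sketch of this SE block follows. It mirrors the flow just described, including the softmax after the second fully connected layer as stated here (the original SENet paper uses a sigmoid); the reduction ratio r = 16 is an assumption, since the patent does not give a value.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-excitation-recalibration as described above (softmax variant, r assumed)."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global pooling: H x W x C -> 1 x 1 x C
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // r),          # reduce to C/r
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),          # restore to C
            nn.Softmax(dim=1),                           # per the patent text; original SENet uses sigmoid
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # recalibrate the channels of the original map

out = SEBlock(64)(torch.randn(1, 64, 28, 28))
print(out.shape)  # torch.Size([1, 64, 28, 28])
```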
The SENet module introduces an attention mechanism into the excitation step, so that the network can concentrate on the local features of the target in the image, which improves detection precision. Starting from the relationships between feature channels, the module explicitly models their interdependence, obtains the importance of each feature channel through a feature recalibration strategy, and then, according to this importance, enhances useful features and suppresses features of little use to the current task, so that the whole network attends to both global and local information.
ResNet introduced the residual network structure, which borrows the cross-layer connection idea of highway networks. This residual skip structure breaks the convention of traditional neural networks that the output of layer n−1 can only be fed to layer n as input: the output of a layer can skip several layers and serve directly as the input of a later layer. Its significance is that it offers a new way around the problem that, for networks built by stacking many layers, the error rate of the whole model stops falling and may even rise.
The ResNet network structure includes both ResNetV1 and ResNetV2. As shown in FIG. 4, the ResNetV1 structure is the basic module in ResNet; it uses two 3 × 3 convolution kernels, which improves the utilization of network resources, increases the width and depth of the network without changing the amount of computation, reduces the computational bottleneck, increases the number of network layers and improves the expressive power of the network.
As shown in FIG. 5, the ResNetV2 structure replaces the two 3 × 3 convolution kernels of ResNetV1 with three convolution kernels of 1 × 1, 3 × 3 and 1 × 1, which has the following advantages over ResNetV1: first, the 1 × 1 convolution kernels save a large number of network parameters compared with 3 × 3 kernels; second, the number of layers increases, so richer spatial features can be processed and feature diversity grows.
As shown in FIG. 6, the present invention adopts the ResNetV2 structure and combines the SENet module with it to form the SE-ResNet module.
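A minimal sketch of one such SE-ResNet module is given below, reusing the SEBlock class from the earlier sketch. The channel widths, the post-activation ordering of BN/ReLU and the placement of the SE weighting on the residual branch before the shortcut addition are assumptions for illustration; the exact arrangement is shown in FIG. 6.

```python
import torch
import torch.nn as nn

class SEResNetBlock(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck with channel recalibration and a shortcut connection."""
    def __init__(self, channels, bottleneck=None, r=16):
        super().__init__()
        mid = bottleneck or channels // 4
        self.residual = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        self.se = SEBlock(channels, r)    # SEBlock from the previous sketch
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.se(self.residual(x))   # recalibrate the residual branch
        return self.relu(out + x)         # shortcut: add the unmodified input back

y = SEResNetBlock(64)(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 64, 56, 56])
```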
In summary, the basic model structure of the SE-ResNet model is shown in Table 1.
TABLE 1 basic model Structure
(Table 1 is reproduced as images in the original publication; its contents are not available as text.)
Step 6, model training. The training set is imported, the SE-ResNet model is trained, and the trained SE-ResNet model is saved.
During training, the training curve is observed for convergence. The loss function used here is the cross-entropy loss commonly used in classification. It characterizes the distance between the actual output distribution and the desired output distribution: the smaller the cross entropy, the closer the two probability distributions. The cross-entropy formula is:
C = −(1/n) Σ_x [ y ln a + (1 − y) ln(1 − a) ]
where x is the input of the function, y denotes the true label distribution and a is the label distribution predicted by the trained model; the cross-entropy loss measures the similarity of y and a. For multi-class tasks, softmax is used as the activation of the output layer and the loss is measured by:
L_i = − Σ_j t_ij ln(p_ij)
where p denotes the prediction of the model, t is the label value, and i and j index the sample and the class respectively.
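A small NumPy illustration of the multi-class softmax cross-entropy above follows; the logits, the class order and the true class shown are arbitrary example values.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy between softmax(logits) and one-hot labels (shape: batch x classes)."""
    z = logits - logits.max(axis=1, keepdims=True)        # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax probabilities p_ij
    return -(labels * np.log(p + 1e-12)).sum(axis=1).mean()

logits = np.array([[2.0, 0.5, 0.1, -1.0]])   # one sample, 4 egg-shape classes
labels = np.array([[1.0, 0.0, 0.0, 0.0]])    # true class at index 0 (arbitrary choice)
print(round(softmax_cross_entropy(logits, labels), 4))
```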
Step 7, testing. The SE-ResNet model is tested against other convolutional neural networks on the egg-shape test set so as to compare their performance.
Tested on the egg-shape data set, the model built by the invention achieves good accuracy with a small model size; the recognition results are shown in Table 2 and the per-class egg-shape recognition accuracies in Table 3. To verify the robustness of the network, tests were also run on the Mini-ImageNet public data set; the experimental results are shown in Table 4.
TABLE 2 Experimental results for different models
(Table 2 is reproduced as images in the original publication; its contents are not available as text.)
TABLE 3 identification accuracy of each egg shape under different models
(Table 3 is reproduced as an image in the original publication; its contents are not available as text.)
TABLE 4 Mini-ImageNet data set Experimental results
(Table 4 is reproduced as an image in the original publication; its contents are not available as text.)
The detailed experimental results show that the proposed model has high recognition accuracy and a small model size. As can be seen from Table 2, the overall prediction accuracy of the method reaches 98.51%, the highest among the compared lightweight networks such as MobileNetV1, MobileNetV2 and ShuffleNetV2. Table 3 shows that the method also achieves higher accuracy on each of the 4 egg-shape classes. Table 4 shows that the model also performs well on the Mini-ImageNet public data set, with a test-set recognition accuracy of 91.83%, higher than the control models. The model with BN layers converges faster, which saves computational resources, and combining ResNet with SENet improves the accuracy of the network. With the ResNet structure and BN layers, the whole model is lighter, laying a foundation for deployment on mobile terminals such as single-chip microcomputers or mobile phones.
The present invention is not limited to the above embodiments, and any changes or substitutions that can be easily made by those skilled in the art within the technical scope of the present invention are also within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A light-weight egg shape recognition method based on an SE-ResNet structure is characterized by comprising the following steps:
step 1, data establishment, namely acquiring egg-shaped sample pictures and establishing a common egg-shaped data set;
step 2, data expansion, namely expanding the hatching-egg data set by rotating, translating and flipping the sample pictures in the data set;
step 3, data labeling, namely performing data labeling on the sample picture in the data set expanded in the step 2 by adopting a Labelme tool;
step 4, data division, namely randomly dividing the egg-shaped data set marked in the step 3 into a training set and a test set;
step 5, model construction, namely combining the SENet and ResNet34 convolutional neural networks to construct a new lightweight convolutional neural network model, namely an SE-ResNet model, wherein the model is formed by stacking a multi-scale convolution module, a maximum pooling layer, an SE-ResNet module, a SENet module, an average pooling layer and a full connection layer;
step 6, model training, namely importing the training set, training the SE-ResNet model built in step 5, and saving the trained SE-ResNet model;
step 7, testing, namely using the test set to run comparison tests between the SE-ResNet model trained in step 6 and other convolutional neural networks so as to compare their performance;
and 8, identifying the hatching eggs by using the tested SE-ResNet model.
2. The method as claimed in claim 1, wherein the hatching-egg data set expanded in step 2 comprises 3735 egg-shape pictures in 4 classes: small eggs, malformed eggs, double-yolk eggs and standard eggs.
3. The lightweight egg shape recognition method based on the SE-ResNet structure as claimed in claim 1, wherein the ratio of the training set to the test set divided in step 4 is 8:2.
4. The lightweight egg shape recognition method based on SE-ResNet structure as claimed in claim 1, wherein the model established in step 5 is composed of 1 multi-scale convolution module, 1 convolution module, 2 maximum pooling layers, 4 SE-ResNet modules, 1 SE module, 1 average pooling layer and 1 full-connection layer.
5. The SE-ResNet structure-based lightweight egg shape identification method as claimed in claim 1 or 4, wherein the convolution module comprises a convolution layer and a batch normalization processing layer, and batch normalization processing is performed after the convolution layer.
6. The lightweight egg shape recognition method based on the SE-ResNet structure as claimed in claim 1, wherein the ResNet is ResNetV2, whose residual branch consists of three convolution kernels of 1 × 1, 3 × 3 and 1 × 1.
CN202111113900.3A 2021-09-23 2021-09-23 Light-weight egg shape recognition method based on SE-ResNet structure Pending CN113793328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111113900.3A CN113793328A (en) 2021-09-23 2021-09-23 Light-weight egg shape recognition method based on SE-ResNet structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111113900.3A CN113793328A (en) 2021-09-23 2021-09-23 Light-weight egg shape recognition method based on SE-ResNet structure

Publications (1)

Publication Number Publication Date
CN113793328A (en) 2021-12-14

Family

ID=78879151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111113900.3A Pending CN113793328A (en) 2021-09-23 2021-09-23 Light-weight egg shape recognition method based on SE-ResNet structure

Country Status (1)

Country Link
CN (1) CN113793328A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1804620A (en) * 2005-12-30 2006-07-19 南京农业大学 Method and apparatus for detecting surface quality of egg
CN109191461A (en) * 2018-10-22 2019-01-11 广东工业大学 A kind of Countryside Egg recognition methods and identification device based on machine vision technique
CN110927167A (en) * 2019-10-31 2020-03-27 北京海益同展信息科技有限公司 Egg detection method and device, electronic equipment and storage medium
CN110942454A (en) * 2019-11-26 2020-03-31 北京科技大学 Agricultural image semantic segmentation method
CN111696101A (en) * 2020-06-18 2020-09-22 中国农业大学 Light-weight solanaceae disease identification method based on SE-Inception

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jie Hu et al.: "Squeeze-and-Excitation Networks", arXiv:1709.01507v4, pages 1-13 *

Similar Documents

Publication Publication Date Title
CN111696101A (en) Light-weight solanaceae disease identification method based on SE-Inception
WO2021134871A1 (en) Forensics method for synthesized face image based on local binary pattern and deep learning
WO2021022521A1 (en) Method for processing data, and method and device for training neural network model
CN113516012B (en) Pedestrian re-identification method and system based on multi-level feature fusion
CN106709511A (en) Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN109241317A (en) Based on the pedestrian's Hash search method for measuring loss in deep learning network
CN106228185A (en) A kind of general image classifying and identifying system based on neutral net and method
CN110309868A (en) In conjunction with the hyperspectral image classification method of unsupervised learning
CN111400536B (en) Low-cost tomato leaf disease identification method based on lightweight deep neural network
CN106845525A (en) A kind of depth confidence network image bracket protocol based on bottom fusion feature
CN113887517B (en) Crop remote sensing image semantic segmentation method based on parallel attention mechanism
CN111898703B (en) Multi-label video classification method, model training method, device and medium
CN110321805B (en) Dynamic expression recognition method based on time sequence relation reasoning
CN109508675A (en) A kind of pedestrian detection method for complex scene
CN110287777A (en) A kind of golden monkey body partitioning algorithm under natural scene
WO2021051987A1 (en) Method and apparatus for training neural network model
Islam et al. InceptB: a CNN based classification approach for recognizing traditional bengali games
CN112232395B (en) Semi-supervised image classification method for generating countermeasure network based on joint training
CN113705641A (en) Hyperspectral image classification method based on rich context network
CN113344077A (en) Anti-noise solanaceae disease identification method based on convolution capsule network structure
CN112487938A (en) Method for realizing garbage classification by utilizing deep learning algorithm
CN114170657A (en) Facial emotion recognition method integrating attention mechanism and high-order feature representation
CN112633301A (en) Traditional Chinese medicine tongue image greasy feature classification method based on depth metric learning
CN111860601A (en) Method and device for predicting large fungus species
CN113793328A (en) Light-weight egg shape recognition method based on SE-ResNet structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination