CN112861869A - Sorghum lodging image segmentation method based on lightweight convolutional neural network - Google Patents
Sorghum lodging image segmentation method based on lightweight convolutional neural network
- Publication number: CN112861869A
- Application number: CN202110287975.7A
- Authority
- CN
- China
- Prior art keywords
- neural network
- convolutional neural
- sorghum
- lodging
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06V10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/04 — Neural networks; Architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; Learning methods
- G06V20/188 — Terrestrial scenes; Vegetation
- G06V20/68 — Type of objects; Food, e.g. fruit or vegetables
Abstract
The invention discloses a sorghum lodging image segmentation method based on a lightweight convolutional neural network, relating to the technical field of image processing. The method comprises the following steps: dividing the collected RGB image data into a training set, a verification set and a test set, and labeling the training set and the verification set; performing data expansion on the labeled training set, the labeled verification set and the unlabeled test set; inputting the expanded training set into an Incomplete-SegNet convolutional neural network model for training; inputting the expanded verification set into the Incomplete-SegNet convolutional neural network model for model verification, and optimizing the model parameters; and inputting the test set into the optimized model to complete the segmentation of the image. The method can automatically identify sorghum lodging regions and their boundaries, with an identification accuracy of up to 98.54%.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a sorghum lodging image segmentation method based on a lightweight convolutional neural network.
Background
Sorghum is one of the main crops in China and is planted mainly in the northeast. In summer, frequent rainstorms and strong winds make it prone to lodging, which degrades grain quality, complicates harvesting and ultimately reduces yield. Extracting images of sorghum lodging regions with a camera-equipped unmanned aerial vehicle combined with deep learning is fast, convenient and accurate, and is of great significance for agricultural insurance damage assessment, productivity prediction and agricultural production management.
Before computer graphics and remote sensing technologies matured, sorghum lodging was mostly judged by manual field surveys, which are extremely inefficient. With the rapid development of computer vision and remote sensing, these technologies have been proposed for judging plant lodging regions, but research applying computer vision, remote sensing and image segmentation to sorghum lodging identification remains scarce. Existing work on extracting lodging regions of other plants falls into two directions. The first identifies lodging regions with computer vision and remote sensing, but the region boundaries fit poorly, the modeling is complex and there is no self-learning capability. The second applies image segmentation to the lodging regions of plants other than sorghum; however, the SegNet model, which currently performs well in agricultural image segmentation, uses VGG16 as its backbone feature extraction network, so its parameter scale is large and its training and prediction times are long, which is unfavorable for rapid segmentation of sorghum lodging regions.
Disclosure of Invention
In order to solve the above problems, the present invention provides a sorghum lodging image segmentation method based on a lightweight convolutional neural network, comprising:
S1: dividing the collected RGB image data into a training set, a verification set and a test set, and labeling the training set and the verification set;
S2: performing data expansion on the training set and verification set labeled in step S1 and on the unlabeled test set;
S3: inputting the training set expanded in step S2 into an Incomplete-SegNet convolutional neural network model for training;
S4: inputting the verification set expanded in step S2 into the trained model for verification, and optimizing the model parameters;
S5: inputting the test set from step S1 into the optimized model to complete the image segmentation.
Further, the expansion mode is to flip each image horizontally, flip it vertically, and flip it both horizontally and vertically at the same time.
Further, the Incomplete-SegNet convolutional neural network model adopts a MobileNet network in the encoding stage, comprising five encoding units:
a first encoding unit: extracting image features with a Conv of stride 2 followed by a Conv dw + Conv of stride 1;
the second, third and fifth encoding units: first a Conv dw of stride 2 and a Conv of stride 1, then a Conv dw and a Conv of stride 1, to extract image features;
a fourth encoding unit: a Conv dw of stride 2 and a Conv of stride 1, repeated 5 times, to extract image features.
Further, a Dropout optimization algorithm is added in the encoding stage of the Incomplete-SegNet convolutional neural network model to prevent overfitting.
Further, the decoding stage of the Incomplete-SegNet convolutional neural network model comprises 4 decoding units: the first, second and third decoding units each process the image with UpSampling2D + ZeroPadding2D + Conv2D + BatchNormalization; the fourth decoding unit processes the image with UpSampling2D + ZeroPadding2D + Conv2D + BatchNormalization, and the resulting feature map is input into a Softmax function to judge the probability of the sorghum lodging region.
Further, the activation function of the Incomplete-SegNet convolutional neural network model is Relu6.
Further, the loss function of the Incomplete-SegNet convolutional neural network model is a cross entropy function:

L = -(1/N) Σ_x y(x) · log(ŷ(x))

where N represents the total number of pixels; x represents the feature vector of each input pixel; y(x) represents the pixel classification (label) vector; and ŷ(x) represents the predicted classification vector.
The invention has the beneficial effects that:
the method can automatically identify the lodging regions and boundaries of the sorghum, the accuracy rate of identifying the lodging regions of the sorghum can be up to 98.54%, the training time is 3h, the single-frame detection time is 0.6s, and the parameter scale is 5.56 multiplied by 106, and compared with the traditional SegNet network, the method has the advantages that the accuracy is improved by 0.51%, the training time is reduced by 3.77h, the single-frame detection time is reduced by 0.21s, and the parameter scale is reduced by 70.27% under the same test parameters and environment.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
Fig. 1 is a flow chart of a sorghum lodging image segmentation method based on a lightweight convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a diagram of the Incomplete-SegNet convolutional neural network model architecture according to an embodiment of the present invention;
fig. 3 illustrates a segmentation effect of a sorghum lodging image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the invention relates to a sorghum lodging image segmentation method based on a lightweight convolutional neural network. First, images of sorghum lodging areas are acquired with an unmanned aerial vehicle, then labeled and expanded. Next, in the SegNet encoding backbone, depthwise separable convolutions are used to reduce the amount of computation, normalization is added to the feature maps in the feature extraction stage, and the decoding stage of the SegNet network is changed to 4 up-sampling steps. Finally, a segmentation test on sorghum lodging images is carried out by tuning the hyper-parameters, randomly sampling training data and adopting an Adam + learning-rate-decay optimization algorithm. The method can accurately and automatically identify sorghum lodging regions and boundaries and can serve as a reference for automatic prediction of sorghum lodging.
Examples
Sorghum lodging pictures were acquired in October 2019 with a DJI Inspire 1 V2.0 unmanned aerial vehicle carrying a Zenmuse X3 camera; the acquisition site was Huapichang Village, Huapichang Town, Chuanying District, Jilin Province. The image resolution is 1280 × 720. The acquired sorghum lodging image data were screened manually, and 171 high-quality images were finally selected.
S1: dividing the collected RGB image data into a training set, a verification set and a test set at a ratio of 8:1:1, then manually labeling the sorghum lodging regions of the training set and the verification set with the LabelMe annotation tool;
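As a rough illustration of the 8:1:1 division in S1 (the file names and shuffling seed below are assumptions for the sketch, not details from the patent):

```python
# Sketch of an 8:1:1 train/verification/test split over the 171 screened images.
import random

def split_dataset(items, ratios=(8, 1, 1), seed=42):
    """Shuffle and split items into train/verification/test by the given ratios."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

images = [f"sorghum_{i:03d}.png" for i in range(171)]  # 171 screened images
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 136 17 18
```

With 171 images this yields 136 training, 17 verification and 18 test images.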
S2: performing data expansion on the training set and verification set labeled in step S1 and on the unlabeled test set;
the expansion method comprises the steps of sequentially turning the image horizontally, turning the image vertically and simultaneously turning the image horizontally and vertically.
S3: inputting the training set expanded in step S2 into an Incomplete-SegNet convolutional neural network model for training;
referring to fig. 2, an Incomplete-SegNet convolutional neural network model is constructed, which includes an encoder and a decoder.
In fig. 2, Conv denotes standard convolution, s1 denotes a convolution stride of 1, s2 a stride of 2, Conv dw denotes depthwise separable convolution, UpSampling denotes up-sampling, ZeroPadding denotes zero padding, BatchNormalization denotes batch normalization, and SoftMax denotes the softmax (logistic regression) function.
In the encoding stage, the Incomplete-SegNet convolutional neural network model of the present application adopts a MobileNet network in place of VGG16. MobileNet is a lightweight feature extraction network proposed by Google for mobile and embedded devices; it extracts features with depthwise separable convolutions, reducing the amount of computation both in the number of channels and in the feature-map size. The five-layer encoding process of the encoder is summarized as follows:
a first encoding unit: a Conv of stride 2 followed by a Conv dw + Conv of stride 1 extracts image features, yielding the F1 feature layer; the image height and width are reduced to 1/2 of the original;
a second encoding unit: first a Conv dw of stride 2 and a Conv of stride 1, then a Conv dw and a Conv of stride 1, extract image features, yielding the F2 feature layer; the image height and width are reduced to 1/4 of the original;
a third encoding unit: first a Conv dw of stride 2 and a Conv of stride 1, then a Conv dw and a Conv of stride 1, extract image features, yielding the F3 feature layer; the image height and width are reduced to 1/8 of the original;
a fourth encoding unit: a Conv dw of stride 2 and a Conv of stride 1, applied 4 times, extract image features, yielding the F4 feature layer; the image height and width are reduced to 1/16 of the original;
a fifth encoding unit: a Conv dw of stride 2 and a Conv of stride 1, applied 5 times, extract image features, yielding the F5 feature layer; the image height and width are reduced to 1/32 of the original.
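The saving that motivates the depthwise separable (Conv dw + Conv) pattern above can be checked with the standard MobileNet parameter-count formulas (the layer sizes chosen here are illustrative, not taken from the patent):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Weights in a depthwise separable convolution: a k x k depthwise
    conv (one filter per input channel) plus a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 256, 256           # an illustrative mid-network layer
std = conv_params(k, c_in, c_out)
sep = dw_separable_params(k, c_in, c_out)
print(std, sep, round(sep / std, 3))   # 589824 67840 0.115
```

The separable version needs roughly 1/k² + 1/c_out of the standard weights — the ~9× reduction MobileNet reports for 3 × 3 kernels.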
The structure of the MobileNet backbone feature extraction network is shown in Table 1.
Table 1: MobileNet backbone feature extraction network structure
In the Incomplete-SegNet convolutional neural network model, Relu6 replaces Relu as the activation function. The Relu activation function does not limit its activation range: its output runs from 0 to infinity, so if an activation value is particularly large the output distribution becomes very wide, and the low-precision float16 arithmetic of mobile devices cannot accurately describe values over so large a range, causing a loss of precision. Relu6 limits the maximum output value to 6, so float16 retains good resolution even at low precision; it also mitigates the gradient-disappearance phenomenon, computes quickly, and converges faster than the Sigmoid and Tanh functions. Using Relu6 as the activation function of the feature extraction network therefore lays a foundation for porting the model to small devices. The function expression is:

Relu6(x) = min(max(0, x), 6)
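A minimal NumPy rendering of the Relu6 expression (illustrative only):

```python
import numpy as np

def relu6(x):
    """Relu6: clip activations to the range [0, 6] (MobileNet's activation)."""
    return np.minimum(np.maximum(x, 0.0), 6.0)

x = np.array([-3.0, 0.5, 4.0, 10.0])
y = relu6(x)   # negatives become 0, values above 6 are capped at 6
```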
A Dropout optimization algorithm is also added in the encoding stage of the Incomplete-SegNet convolutional neural network model to randomly deactivate some neurons and so prevent overfitting.
Decoding stage:
the Incomplite-SegNet convolutional neural network model decoding stage comprises 4 decoding units: the first decoding unit, the second decoding unit and the third decoding unit are Upsampling, 2D + ZeroPadding, 2D + Conv2D + BatchNormalization, the fourth decoding unit is Upsampling 2D + ZeroPadding 2D + Conv2D + BatchNormalization, and the feature map obtained by the fourth decoding unit is input into a Softmax function to judge the sorghum lodging region probability.
The expression of the softmax function is as follows:

S_i = exp(z_i) / Σ_{j=1}^{C} exp(z_j)

where C represents the total number of categories; i and j are category indices; z_i represents the output of the preceding unit of the classifier for category i; and S_i represents the ratio of the exponential of the current element to the sum of the exponentials of all elements, i.e. the probability of category i.
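A numerically stable NumPy version of this expression (a standard formulation, not code from the patent):

```python
import numpy as np

def softmax(z):
    """Softmax over the last axis; subtracting the max avoids exp overflow."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two-class case for one pixel (lodged vs. not lodged):
p = softmax(np.array([2.0, 0.0]))   # p sums to 1; p[0] is the lodging probability
```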
The Incomplete-SegNet convolutional neural network model is trained with the expanded training set.
in order to enable the Incomplite-SegNet network to achieve a good effect in sorghum lodging region segmentation and enable the output of the model to have high segmentation precision, a cross entropy function is used for representing the difference between the segmentation result of the model and a standard result. In the segmentation of the image of the sorghum lodging region, the cross entropy function represents the difference between the manual annotation and the segmentation result of the sorghum lodging region. In the invention, the cross entropy of each pixel is averaged to be used as a final loss function result, and the expression is as follows:
wherein N represents the total number of pixel points; x represents a feature vector of each input pixel;a representative pixel classification vector;representing the predictor classification vector.
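The averaged per-pixel cross entropy can be sketched as follows (one-hot label vectors assumed; an illustration, not the patent's implementation):

```python
import numpy as np

def mean_pixel_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average of -sum(y * log(y_hat)) over all pixels.
    y_true, y_pred: arrays of shape (n_pixels, n_classes)."""
    y_pred = np.clip(y_pred, eps, 1.0)   # guard against log(0)
    return float(-(y_true * np.log(y_pred)).sum(axis=1).mean())

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])   # two pixels, one-hot labels
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])   # predicted class probabilities
loss = mean_pixel_cross_entropy(y_true, y_pred)
print(round(loss, 4))  # 0.1643
```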
The optimizer of the model adopts a learning-rate-decay optimization algorithm based on Adam. Adam can replace the first-order process of traditional stochastic gradient descent, iteratively updating the neural network's weights according to the data; in non-convex optimization problems it offers high computational efficiency, small memory consumption, and suitability for non-stationary objectives and for large-scale data and parameter optimization. The Adam parameters adopt the defaults. Learning rate decay (ReduceLROnPlateau) reduces the learning rate when network performance no longer improves, giving a better training effect.
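The plateau-based decay rule can be mimicked in a few lines (the factor and patience values are assumptions for the sketch; Keras' ReduceLROnPlateau callback behaves analogously):

```python
class PlateauDecay:
    """Halve the learning rate when the monitored loss has not
    improved for `patience` consecutive epochs."""
    def __init__(self, lr=1e-3, factor=0.5, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr

sched = PlateauDecay()
losses = [0.9, 0.7, 0.7, 0.7, 0.6]   # stalls for two epochs, then improves
lrs = [sched.step(l) for l in losses]
print(lrs)  # [0.001, 0.001, 0.001, 0.0005, 0.0005]
```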
After the model is built, the network enters the training stage; once training is completed, the system generates a weight file used to judge the lodging region in sorghum images.
S4: inputting the expanded verification set into the trained Incomplete-SegNet convolutional neural network model for model verification, and optimizing the model parameters.
S5: inputting the test set into the optimized Incomplete-SegNet convolutional neural network model to complete the segmentation of the image.
The results show that the average accuracy on the test set is 98.54%, the training time is 3 h, the single-frame detection time is 0.6 s, and the parameter scale is 5.56 × 10^6. Under the same test parameters and environment, compared with the traditional SegNet network model, the accuracy is improved by 0.51%, the training time is reduced by 3.77 h, the single-frame detection time is reduced by 0.21 s, and the parameter scale is reduced by 70.27%. The algorithm also segments the sorghum lodging region more accurately, with more precise boundaries than the traditional SegNet algorithm; some of the results are shown in fig. 3.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (7)
1. A sorghum lodging image segmentation method based on a lightweight convolutional neural network, characterized by comprising the following steps:
S1: dividing the collected RGB image data into a training set, a verification set and a test set, and labeling the training set and the verification set;
S2: performing data expansion on the training set and verification set labeled in step S1 and on the unlabeled test set;
S3: inputting the training set expanded in step S2 into an Incomplete-SegNet convolutional neural network model for training;
S4: inputting the verification set expanded in step S2 into the trained model for verification, and optimizing the model parameters;
S5: inputting the test set from step S1 into the optimized model to complete the image segmentation.
2. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the expansion mode is horizontal flipping, vertical flipping, and simultaneous flipping in both the horizontal and vertical directions.
3. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the Incomplete-SegNet convolutional neural network model adopts a MobileNet network in the encoding stage, comprising five encoding units:
a first encoding unit: extracting image features with a Conv of stride 2 followed by a Conv dw + Conv of stride 1;
the second, third and fifth encoding units: first a Conv dw of stride 2 and a Conv of stride 1, then a Conv dw and a Conv of stride 1, to extract image features;
a fourth encoding unit: a Conv dw of stride 2 and a Conv of stride 1, repeated 5 times, to extract image features.
4. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 3, wherein a Dropout optimization algorithm is further added in the encoding stage of the Incomplete-SegNet convolutional neural network model to prevent overfitting.
5. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the decoding stage of the Incomplete-SegNet convolutional neural network model comprises 4 decoding units: the first, second and third decoding units each process the image with UpSampling2D + ZeroPadding2D + Conv2D + BatchNormalization; the fourth decoding unit processes the image with UpSampling2D + ZeroPadding2D + Conv2D + BatchNormalization, and the resulting feature map is input into a Softmax function to judge the probability of the sorghum lodging region.
6. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the activation function of the Incomplete-SegNet convolutional neural network model is Relu6.
7. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the loss function of the Incomplete-SegNet convolutional neural network model is a cross entropy function:

L = -(1/N) Σ_x y(x) · log(ŷ(x))
Priority Applications (1)
- CN202110287975.7A — priority/filing date 2021-03-17 — CN112861869A, Sorghum lodging image segmentation method based on lightweight convolutional neural network
Publications (1)
- CN112861869A — published 2021-05-28
Family ID: 75995171
Family Applications (1)
- CN202110287975.7A — filed 2021-03-17 — CN112861869A (not active, withdrawn)
Cited By (2)
- CN114257783A (priority 2021-11-12, published 2022-03-29) — System and method for monitoring growth condition of sweet sorghum in field
- WO2023035766A1 (priority 2021-09-07, published 2023-03-16) — Feature extraction method and apparatus, encoder, and communication system
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- WW01: Invention patent application withdrawn after publication (application publication date: 2021-05-28)