CN112861869A - Sorghum lodging image segmentation method based on lightweight convolutional neural network - Google Patents

Sorghum lodging image segmentation method based on lightweight convolutional neural network

Info

Publication number
CN112861869A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
sorghum
lodging
image
Prior art date
Legal status
Withdrawn
Application number
CN202110287975.7A
Other languages
Chinese (zh)
Inventor
齐明洋
唐友
Current Assignee
Jilin Agricultural Science and Technology College
Original Assignee
Jilin Agricultural Science and Technology College
Priority date
Filing date
Publication date
Application filed by Jilin Agricultural Science and Technology College filed Critical Jilin Agricultural Science and Technology College
Priority to CN202110287975.7A priority Critical patent/CN112861869A/en
Publication of CN112861869A publication Critical patent/CN112861869A/en
Withdrawn legal-status Critical Current


Classifications

    • G06V 10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2415 - Classification techniques based on parametric or probabilistic models
    • G06N 3/04 - Neural networks: architecture, e.g. interconnection topology
    • G06N 3/08 - Neural networks: learning methods
    • G06V 20/188 - Terrestrial scenes: vegetation
    • G06V 20/68 - Type of objects: food, e.g. fruit or vegetables


Abstract

The invention discloses a sorghum lodging image segmentation method based on a lightweight convolutional neural network, in the technical field of image processing. The method comprises the following steps: dividing the collected RGB image data into a training set, a verification set and a test set, and labeling the training set and the verification set; performing data expansion on the labeled training set, the labeled verification set and the unlabeled test set; inputting the expanded training set into an Incomplete-SegNet convolutional neural network model for training; inputting the expanded verification set into the Incomplete-SegNet model for verification and optimizing the model parameters; and inputting the test set into the Incomplete-SegNet model to complete the segmentation of the images. The method automatically identifies sorghum lodging regions and their boundaries, with an identification accuracy of up to 98.54%.

Description

Sorghum lodging image segmentation method based on lightweight convolutional neural network
Technical Field
The invention relates to the technical field of image processing, in particular to a sorghum lodging image segmentation method based on a lightweight convolutional neural network.
Background
Sorghum is one of the main crops in China, planted mainly in the northeast, where frequent rainstorms and strong winds in summer make it prone to lodging; lodging degrades grain quality, complicates harvesting, and ultimately reduces yield. Extracting images of sorghum lodging regions with a camera-equipped unmanned aerial vehicle combined with deep learning is fast, convenient and accurate, and is of great significance for agricultural insurance loss assessment, productivity prediction and agricultural production management.
Before computer graphics and remote sensing technology matured, judging sorghum lodging mostly depended on manual surveys inside the sorghum field, which are extremely inefficient. With the rapid development of computer vision and remote sensing, these technologies have been proposed for judging plant lodging regions, but research applying computer vision, remote sensing and image segmentation to sorghum lodging identification remains scarce. Existing work on extracting lodging-region images of other plants falls into two directions. The first identifies lodging regions with computer vision and remote sensing, but fits region boundaries poorly, requires complex modeling and lacks self-learning capability. The second applies image segmentation to lodging regions of plants other than sorghum; however, the SegNet model currently applied with good results in agricultural image segmentation uses VGG16 as its backbone feature extraction network, so its parameter scale is large and its training and prediction times are long, which hinders rapid segmentation of sorghum lodging regions.
Disclosure of Invention
In order to solve the above problems, the present invention provides a sorghum lodging image segmentation method based on a lightweight convolutional neural network, comprising:
S1: dividing the collected RGB image data into a training set, a verification set and a test set, and labeling the training set and the verification set;
S2: performing data expansion on the training set and verification set labeled in step S1 and on the unlabeled test set;
S3: inputting the training set expanded in step S2 into an Incomplete-SegNet convolutional neural network model for training;
S4: inputting the verification set expanded in step S2 into the trained model for verification, and optimizing the model parameters;
S5: inputting the test set from step S1 into the optimized model to complete the image segmentation.
Further, the data expansion applies, in turn, a horizontal flip, a vertical flip, and a combined horizontal-and-vertical flip.
Further, the Incomplete-SegNet convolutional neural network model adopts a MobileNet network in the encoding stage, comprising five encoding units:
a first encoding unit: extracting image features with Conv at stride 2 and Conv dw + Conv at stride 1;
a second, third and fifth encoding unit: first applying Conv dw at stride 2 and Conv at stride 1, then Conv dw and Conv at stride 1, to extract image features;
a fourth encoding unit: extracting image features with Conv dw at stride 2 and Conv at stride 1, applied five times.
Further, a Dropout optimization algorithm is added in the encoding stage of the Incomplete-SegNet convolutional neural network model to prevent over-fitting.
Further, the decoding stage of the Incomplete-SegNet convolutional neural network model includes 4 decoding units, each processing the image with UpSampling2D + ZeroPadding2D + Conv2D + BatchNormalization; the feature map obtained from the fourth decoding unit is input into a Softmax function to judge the sorghum lodging region probability.
Further, the activation function of the Incomplete-SegNet convolutional neural network model is ReLU6.
Further, the loss function of the Incomplete-SegNet convolutional neural network model is a cross-entropy function:

Loss = -(1/N) Σ_x p(x) log q(x)

where N is the total number of pixels, x is the feature vector of each input pixel, p(x) is the ground-truth pixel classification vector, and q(x) is the predicted classification vector.
The invention has the beneficial effects that:
the method can automatically identify the lodging regions and boundaries of the sorghum, the accuracy rate of identifying the lodging regions of the sorghum can be up to 98.54%, the training time is 3h, the single-frame detection time is 0.6s, and the parameter scale is 5.56 multiplied by 106, and compared with the traditional SegNet network, the method has the advantages that the accuracy is improved by 0.51%, the training time is reduced by 3.77h, the single-frame detection time is reduced by 0.21s, and the parameter scale is reduced by 70.27% under the same test parameters and environment.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
Fig. 1 is a flow chart of a sorghum lodging image segmentation method based on a lightweight convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a diagram of the Incomplete-SegNet convolutional neural network model architecture according to an embodiment of the present invention;
fig. 3 illustrates a segmentation effect of a sorghum lodging image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the invention relates to a sorghum lodging image segmentation method based on a lightweight convolutional neural network. First, images of the sorghum lodging area are acquired with an unmanned aerial vehicle, labeled and expanded. Then, in the SegNet encoding backbone, depthwise separable convolution is used to reduce the amount of computation, normalization is added to the feature maps in the feature extraction stage, and the SegNet decoding stage is changed to 4 upsampling steps. Finally, segmentation tests on sorghum lodging images are carried out by tuning the hyper-parameters, randomly sampling the training data and adopting an Adam optimizer with learning-rate decay. The method accurately and automatically identifies sorghum lodging regions and boundaries, and can serve as a reference for automatic prediction of sorghum lodging.
Examples
Sorghum lodging pictures were acquired in October 2019 using a DJI Inspire 1 V2.0 unmanned aerial vehicle carrying a Zenmuse X3 camera, at Huapichang Village, Huapichang Town, Changyi District, Jilin Province. The image resolution is 1280 × 720; the acquired sorghum lodging image data were screened manually, and 171 high-quality images were finally retained.
S1: dividing the collected RGB image data into a training set, a verification set and a test set at a ratio of 8:1:1, and then manually labeling the sorghum lodging regions of the training set and verification set with the LabelMe annotation tool;
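As an illustrative sketch (not from the patent; file names and the random seed are assumptions), the 8:1:1 split of the 171 screened images can be written as:

```python
import random

def split_dataset(image_paths, seed=42):
    """Shuffle a list of image paths and split it 8:1:1 into
    training, verification and test subsets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

# 171 images, as in the embodiment
train, val, test = split_dataset([f"img_{i:03d}.png" for i in range(171)])
print(len(train), len(val), len(test))  # 136 17 18
```

Only the training and verification subsets would then be passed to the annotation tool; the test subset stays unlabeled.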
S2: performing data expansion on the training set and verification set labeled in step S1 and on the unlabeled test set;
The expansion applies, in turn, a horizontal flip, a vertical flip, and a combined horizontal-and-vertical flip of each image.
S3: inputting the training set expanded in step S2 into the Incomplete-SegNet convolutional neural network model for training;
Referring to fig. 2, the Incomplete-SegNet convolutional neural network model is constructed with an encoder and a decoder.
In fig. 2, Conv denotes standard convolution, s1 denotes a convolution stride of 1, s2 a stride of 2, Conv dw denotes depthwise convolution, UpSampling denotes upsampling, ZeroPadding denotes zero padding, BatchNormalization denotes batch normalization, and SoftMax denotes the softmax function.
In the encoding stage, the Incomplete-SegNet convolutional neural network model of the application adopts a MobileNet network in place of VGG16. MobileNet is a lightweight feature extraction network proposed by Google for mobile and embedded devices; it extracts features with depthwise separable convolutions, reducing the amount of computation in terms of both channel count and feature map size. The five-stage encoding process of the encoder is as follows:
a first encoding unit: Conv at stride 2 and Conv dw + Conv at stride 1 extract image features, yielding the F1 feature map, whose side lengths are 1/2 those of the original image;
a second encoding unit: Conv dw at stride 2 and Conv at stride 1, then Conv dw and Conv at stride 1, extract image features, yielding the F2 feature map at 1/4 of the original side lengths;
a third encoding unit: Conv dw at stride 2 and Conv at stride 1, then Conv dw and Conv at stride 1, extract image features, yielding the F3 feature map at 1/8 of the original side lengths;
a fourth encoding unit: Conv dw at stride 2 and Conv at stride 1, applied 4 times, extract image features, yielding the F4 feature map at 1/16 of the original side lengths;
a fifth encoding unit: Conv dw at stride 2 and Conv at stride 1, applied 5 times, extract image features, yielding the F5 feature map at 1/32 of the original side lengths.
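The computational saving that motivates the depthwise separable (Conv dw + 1x1 Conv) blocks can be illustrated with a quick multiply-add count (an illustrative sketch; the layer sizes below are arbitrary examples, not taken from the patent):

```python
def conv_cost(k, m, n, f):
    """Multiply-adds of a standard k x k convolution:
    k*k * M in-channels * N out-channels * F*F spatial positions."""
    return k * k * m * n * f * f

def dw_separable_cost(k, m, n, f):
    """Depthwise cost (k*k * M * F*F) plus pointwise 1x1 cost (M * N * F*F)."""
    return k * k * m * f * f + m * n * f * f

# example layer: 3x3 kernel, 64 -> 128 channels, 56 x 56 feature map
std = conv_cost(3, 64, 128, 56)
sep = dw_separable_cost(3, 64, 128, 56)
print(round(std / sep, 1))  # 8.4
```

For a 3x3 kernel the theoretical reduction factor is 1 / (1/N + 1/9), so a standard convolution costs roughly 8-9x more than its depthwise separable replacement, which is why the MobileNet encoder shrinks the parameter scale and training time relative to VGG16.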
The structure of the MobileNet trunk feature extraction network is shown in table 1:
table 1 MobileNet backbone feature extraction network structure
[Table 1 is provided as an image in the original publication.]
In the Incomplete-SegNet convolutional neural network model, ReLU6 replaces ReLU as the activation function. ReLU6 caps the maximum output value at 6, whereas ReLU is unbounded, with an output range from 0 to infinity; when activations grow very large, the output distribution becomes too wide for the low-precision float16 arithmetic of mobile devices to describe accurately, causing precision loss, while with ReLU6 float16 retains good resolution even at low precision. Like ReLU, ReLU6 avoids the vanishing-gradient phenomenon, computes quickly, and converges faster than the Sigmoid and Tanh functions. Using ReLU6 as the activation function of the feature extraction network therefore lays a foundation for porting the model to small devices. The function expression is:

f(x) = min(max(0, x), 6)
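The clamped behavior of ReLU6 can be sketched in a few lines (illustrative only):

```python
import numpy as np

def relu6(x):
    """ReLU6: min(max(0, x), 6). Capping activations at 6 keeps the
    output range narrow enough for low-precision float16 deployment."""
    return np.minimum(np.maximum(0.0, x), 6.0)

print(relu6(np.array([-2.0, 3.0, 10.0])).tolist())  # [0.0, 3.0, 6.0]
```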
a Droupout optimization algorithm is also added in the coding stage of the Incomplite-SegNet convolutional neural network model to randomly inactivate some neurons so as to prevent an over-fitting condition from occurring.
And a decoding stage:
the Incomplite-SegNet convolutional neural network model decoding stage comprises 4 decoding units: the first decoding unit, the second decoding unit and the third decoding unit are Upsampling, 2D + ZeroPadding, 2D + Conv2D + BatchNormalization, the fourth decoding unit is Upsampling 2D + ZeroPadding 2D + Conv2D + BatchNormalization, and the feature map obtained by the fourth decoding unit is input into a Softmax function to judge the sorghum lodging region probability.
The expression of the softmax function is:

S_i = e^{V_i} / Σ_{j=1}^{C} e^{V_j}

where C is the total number of categories, i is the index of a category, V_i and V_j are outputs of the preceding unit of the classifier (the raw score of a class), and S_i is the ratio of the exponential of the current element to the sum of the exponentials of all elements.
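As an illustrative sketch (not from the patent), the softmax computation can be written as:

```python
import numpy as np

def softmax(v):
    """Softmax over class scores: S_i = exp(V_i) / sum_j exp(V_j).
    Subtracting the maximum first keeps the exponentials numerically stable."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical class scores
probs = softmax(scores)
print(round(float(probs.sum()), 6))  # 1.0
```

The outputs form a probability distribution over the classes, so the pixel can be assigned to the lodging region when its lodging-class probability is highest.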
The Incomplete-SegNet convolutional neural network model is trained with the expanded training set.
So that the Incomplete-SegNet network achieves a good effect in sorghum lodging region segmentation and its output has high segmentation precision, a cross-entropy function is used to express the difference between the model's segmentation result and the reference result; in sorghum lodging image segmentation, it expresses the difference between the manual annotation and the segmented lodging region. In the invention, the cross-entropy of all pixels is averaged as the final loss:

Loss = -(1/N) Σ_x p(x) log q(x)

where N is the total number of pixels, x is the feature vector of each input pixel, p(x) is the ground-truth pixel classification vector, and q(x) is the predicted classification vector.
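The averaged per-pixel cross-entropy can be sketched as follows (an illustrative example with made-up label and prediction values, not the patent's code):

```python
import numpy as np

def pixel_cross_entropy(p, q, eps=1e-12):
    """Mean per-pixel cross-entropy between one-hot labels p(x) and
    predicted class probabilities q(x), both of shape (N, C).
    eps guards against log(0)."""
    return float(-np.mean(np.sum(p * np.log(q + eps), axis=1)))

# two pixels, two classes (lodged vs. not lodged)
labels = np.array([[1.0, 0.0], [0.0, 1.0]])
preds  = np.array([[0.9, 0.1], [0.2, 0.8]])
loss = pixel_cross_entropy(labels, preds)
print(round(loss, 4))  # 0.1643
```

Because p(x) is one-hot, each pixel contributes -log of the probability assigned to its true class, and the loss falls toward 0 as the predicted segmentation approaches the manual annotation.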
The optimizer of the model combines Adam with a learning-rate decay optimization algorithm. Adam replaces the classic first-order stochastic gradient descent procedure and iteratively updates the neural network weights from the data; in non-convex optimization problems it offers high computational efficiency, low memory consumption, and suitability for non-stationary objectives and large-scale data and parameter optimization. Adam's default parameters are used. Learning-rate decay (ReduceLROnPlateau) reduces the learning rate when the network's performance stops improving, giving a better training effect.
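A minimal sketch of the learning-rate decay behavior, modeled loosely on the Keras ReduceLROnPlateau callback (the class, thresholds and hyper-parameter values here are illustrative assumptions, not the patent's settings):

```python
class ReduceLROnPlateau:
    """When the monitored validation loss has not improved for
    `patience` epochs, multiply the learning rate by `factor`,
    never going below `min_lr`."""
    def __init__(self, lr=1e-3, factor=0.5, patience=2, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:        # improvement: reset the counter
            self.best = val_loss
            self.wait = 0
        else:                           # plateau: count epochs
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = ReduceLROnPlateau()
for loss in [0.5, 0.4, 0.41, 0.42, 0.39]:  # plateau at epochs 3-4
    lr = sched.step(loss)
print(lr)  # 0.0005
```

The optimizer itself (Adam with default parameters) would consume this learning rate at each epoch.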
After the model is built, the network enters the training stage; upon completion of training, the system generates a weight file for judging the lodging region in sorghum images.
S4: inputting the expanded verification set into the trained Incomplete-SegNet convolutional neural network model for verification, and optimizing the model parameters.
S5: inputting the test set into the optimized Incomplete-SegNet convolutional neural network model to complete the segmentation of the images.
The results show that the average accuracy on the test set is 98.54%, the training time is 3 h, the single-frame detection time is 0.6 s, and the parameter scale is 5.56 × 10^6. Under the same test parameters and environment, compared with the traditional SegNet network model, the accuracy is improved by 0.51%, the training time is reduced by 3.77 h, the single-frame detection time is reduced by 0.21 s, and the parameter scale is reduced by 70.27%. The algorithm also segments the sorghum lodging region more accurately, with boundaries more precise than those of the traditional SegNet algorithm; partial results are shown in fig. 3.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A sorghum lodging image segmentation method based on a lightweight convolutional neural network, characterized by comprising the following steps:
S1: dividing the collected RGB image data into a training set, a verification set and a test set, and labeling the training set and the verification set;
S2: performing data expansion on the training set and verification set labeled in step S1 and on the unlabeled test set;
S3: inputting the training set expanded in step S2 into an Incomplete-SegNet convolutional neural network model for training;
S4: inputting the verification set expanded in step S2 into the trained model for verification, and optimizing the model parameters;
S5: inputting the test set from step S1 into the optimized model to complete the image segmentation.
2. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the data expansion applies a horizontal flip, a vertical flip, and a combined horizontal-and-vertical flip.
3. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the Incomplete-SegNet convolutional neural network model adopts a MobileNet network in the encoding stage, comprising five encoding units:
a first encoding unit: extracting image features with Conv at stride 2 and Conv dw + Conv at stride 1;
a second, third and fifth encoding unit: first applying Conv dw at stride 2 and Conv at stride 1, then Conv dw and Conv at stride 1, to extract image features;
a fourth encoding unit: extracting image features with Conv dw at stride 2 and Conv at stride 1, applied five times.
4. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 3, wherein a Dropout optimization algorithm is further added in the encoding stage of the Incomplete-SegNet convolutional neural network model to prevent over-fitting.
5. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the decoding stage of the Incomplete-SegNet convolutional neural network model comprises 4 decoding units, each processing the image with UpSampling2D + ZeroPadding2D + Conv2D + BatchNormalization; the feature map obtained from the fourth decoding unit is input into a Softmax function to judge the sorghum lodging region probability.
6. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the activation function of the Incomplete-SegNet convolutional neural network model is ReLU6.
7. The sorghum lodging image segmentation method based on the lightweight convolutional neural network as claimed in claim 1, wherein the loss function of the Incomplete-SegNet convolutional neural network model is a cross-entropy function:

Loss = -(1/N) Σ_x p(x) log q(x)

where N is the total number of pixels, x is the feature vector of each input pixel, p(x) is the ground-truth pixel classification vector, and q(x) is the predicted classification vector.
CN202110287975.7A 2021-03-17 2021-03-17 Sorghum lodging image segmentation method based on lightweight convolutional neural network Withdrawn CN112861869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287975.7A CN112861869A (en) 2021-03-17 2021-03-17 Sorghum lodging image segmentation method based on lightweight convolutional neural network


Publications (1)

Publication Number Publication Date
CN112861869A true CN112861869A (en) 2021-05-28

Family

ID=75995171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287975.7A Withdrawn CN112861869A (en) 2021-03-17 2021-03-17 Sorghum lodging image segmentation method based on lightweight convolutional neural network

Country Status (1)

Country Link
CN (1) CN112861869A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257783A (en) * 2021-11-12 2022-03-29 山东省农业科学院作物研究所 System and method for monitoring growth condition of sweet sorghum in field
WO2023035766A1 (en) * 2021-09-07 2023-03-16 中国电信股份有限公司 Feature extraction method and apparatus, encoder, and communication system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210528