CN112906627A - Green pricklyash peel identification method based on semantic segmentation - Google Patents

Green pricklyash peel identification method based on semantic segmentation

Info

Publication number
CN112906627A
CN112906627A
Authority
CN
China
Prior art keywords
remote sensing
image
model
sub
green
Prior art date
Legal status
Granted
Application number
CN202110274867.6A
Other languages
Chinese (zh)
Other versions
CN112906627B (en)
Inventor
张�浩
冉进业
王帅
杨余
Current Assignee
Southwest University
Original Assignee
Southwest University
Priority date
Filing date
Publication date
Application filed by Southwest University filed Critical Southwest University
Priority to CN202110274867.6A priority Critical patent/CN112906627B/en
Publication of CN112906627A publication Critical patent/CN112906627A/en
Application granted granted Critical
Publication of CN112906627B publication Critical patent/CN112906627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/13: Satellite images (Scenes; Scene-specific elements; Terrestrial scenes)
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting (Pattern recognition)
    • G06N 3/045: Combinations of networks (Neural networks; Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (Neural networks)
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI] (Image preprocessing)
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The invention discloses a green pepper (green pricklyash peel) identification method based on semantic segmentation, which comprises: obtaining a first remote sensing image and a second remote sensing image of the green pepper in a target area; cutting the first remote sensing image and the second remote sensing image respectively, and performing pixel-level, high-precision labeling on the resulting second image and third image to obtain a first binary mask file and a second binary mask file; performing a logical AND operation on the first binary mask file and the second binary mask file, and performing data enhancement on the obtained fourth data; constructing a green pepper recognition model based on semantic segmentation to obtain an optimized model; inputting the test data set into the optimized model for testing to obtain an optimal semantic segmentation model; and constructing a second 3D model of the target area and extracting agricultural information. The method achieves higher identification accuracy for the green pepper planting area, provides strong data support for the monitoring of large-area agriculture and the formulation of agricultural policies, and has broad development and application prospects.

Description

Green pricklyash peel identification method based on semantic segmentation
Technical Field
The invention relates to the technical field of agricultural satellite remote sensing image recognition and deep learning, in particular to a green pepper recognition method based on semantic segmentation.
Background
The high-resolution satellite remote sensing image interpretation technology based on deep learning has the advantages of high interpretation speed, high interpretation accuracy and low interpretation cost, and can better meet the monitoring and management needs of modern large-scale planting agriculture than the traditional, time-consuming and labor-intensive manual interpretation. However, limited by the lack of labeled high-resolution satellite remote sensing image data sets for nine-leaf green pepper and by the large differences in the morphological features of nine-leaf green pepper across high-resolution satellite remote sensing images from different periods, techniques that can accurately and efficiently obtain the planting information of nine-leaf green pepper from high-resolution satellite remote sensing images are still relatively lacking. With the large-area, large-scale planting of nine-leaf green pepper in different regions in recent years, neither field surveys by personnel nor agricultural information statistics based on manual interpretation of high-resolution satellite remote sensing images can meet the rapidly growing monitoring and management needs of green pepper.
Disclosure of Invention
The invention aims to provide a green pepper identification method based on semantic segmentation, in which a deep neural network model is used to construct, on the database provided by the invention, a green pepper recognition model based on satellite image sequences from different periods, thereby improving the accuracy of green pepper identification in high-resolution satellite remote sensing images, reducing the monitoring and management cost of large-scale green pepper planting, and promoting the green and healthy development of green pepper agriculture.
The invention is realized by the following technical scheme:
a green pepper identification method based on semantic segmentation comprises the following steps:
s1: obtaining a first remote sensing image and a second remote sensing image of the green pepper in a target area, dividing the target area into a plurality of sub-areas, and selecting a representative sub-area as the target sub-area, wherein the first remote sensing image is a remote sensing image of the green pepper before pruning, the second remote sensing image is a remote sensing image of the green pepper within one month after pruning, the target area is a green pepper planting area, and the representative sub-area is a sub-area in which green pepper planting is relatively concentrated;
s2: cutting the first remote sensing image to obtain a plurality of first remote sensing sub-images, selecting the first remote sensing sub-images in a target sub-area to form a second image, and performing pixel-level high-precision marking work on the second image to obtain a first binary mask file;
s3: cutting the second remote sensing image to obtain a plurality of second remote sensing sub-images, selecting the second remote sensing sub-images in the target sub-region to form a third image, and performing pixel-level high-precision marking work on the third image to obtain a second binary mask file;
s4: performing logical AND operation on the first binary mask file and the second binary mask file to obtain fourth data;
s5: performing data enhancement on the obtained fourth data, and dividing the enhanced data into a test data set, a training data set and a validation data set in a certain proportion;
s6: building a green pepper recognition model based on semantic segmentation, training the built model with the training data set to obtain a trained model, and inputting the validation data set into the trained model to optimize the model and obtain an optimized model;
s7: inputting the test data set into the optimized model for testing to obtain an optimal semantic segmentation model;
s8: constructing a first 3D model in the target area based on the first remote sensing sub-image and the optimal semantic segmentation model, and extracting first agricultural information based on the first 3D model;
s9: constructing a second 3D model in the target area based on the second remote sensing sub-image and the optimal semantic segmentation model, and extracting second agricultural information based on the second 3D model;
s10: and evaluating the yield of the planted green peppers based on the first agricultural information and the second agricultural information.
Traditionally, the agricultural information related to green pepper has been acquired by manual interpretation, which is time-consuming and labor-intensive, and its identification accuracy is not high enough. The invention provides a green pepper identification method based on semantic segmentation that combines green pepper data obtained in different periods before and after pruning and adopts deep neural network learning together with high-resolution satellite remote sensing image technology, so that the agricultural information related to green pepper can be obtained more accurately and the yield of green pepper in the corresponding area can be estimated more accurately from the obtained agricultural information.
The target region is divided into a plurality of sub-regions according to lower-level administrative boundaries; the selected sub-region is determined by comparing the divided sub-regions and intuitively reflects the distribution of the green pepper.
Preferably, in step S2, the specific labeling method for performing high-precision labeling work on the second image at the pixel level includes:
acquiring a first characteristic of the green peppers in a remote sensing image, wherein the first characteristic is a green region with a regular spatial distribution, the shape and contour of the green pepper region are clear, and pepper fields are distributed continuously;
based on the obtained first characteristic, one first remote sensing sub-image is selected randomly, whether a green pepper region exists in the first remote sensing sub-image is judged, if yes, the corresponding pixel value region in the first remote sensing sub-image is marked as 1, otherwise, the corresponding pixel value region in the first remote sensing sub-image is marked as 0 until all the first remote sensing sub-images in the second image are traversed, and a first binary mask file is obtained.
The first characteristic is the data characteristic of the green pepper in the satellite remote sensing image before pruning: the shape and contour of the green pepper are clear, the color is green, partial shadows exist, the spatial arrangement of the pepper trees is regular, and the whole pepper field is continuously distributed.
Preferably, in step S3, the specific labeling method for performing high-precision labeling work at a pixel level on the third image is as follows:
acquiring a second characteristic of the green pricklyash peel in the remote sensing image, wherein the second characteristic is a black spot area with a regular spatial distribution, and the pricklyash peel field is earthy brown;
and based on the obtained second characteristic, randomly selecting one second remote sensing sub-image, judging whether the second remote sensing sub-image has a zanthoxylum area, if so, marking the corresponding pixel value area in the second remote sensing sub-image as 1, otherwise, marking the corresponding pixel value area in the second remote sensing sub-image as 0, and obtaining a second binary mask file until all second remote sensing sub-images in the third image are traversed.
The second characteristic is the data characteristic of the green pricklyash peel in the remote sensing satellite image after pruning.
Preferably, the specific method step of step S8 is:
randomly selecting one first remote sensing sub-image, and predicting it with the optimal semantic segmentation model to obtain a first remote sensing image map, until all the first remote sensing sub-images are traversed and a plurality of first remote sensing image maps are obtained;
combining the first remote sensing image maps into a third remote sensing image map;
establishing a first 3D model based on the third remote sensing image map and the elevation data corresponding to the third remote sensing image map;
and extracting agricultural information of the green pricklyash peel before pruning based on the first 3D model.
Preferably, the specific method steps of step S9 are:
randomly selecting one second remote sensing sub-image, and predicting it with the optimal semantic segmentation model to obtain a second remote sensing image map, until all second remote sensing sub-images are traversed and a plurality of second remote sensing image maps are obtained;
combining the second remote sensing image maps into a fourth remote sensing image map;
establishing a second 3D model based on the fourth remote sensing image map and elevation data corresponding to the fourth remote sensing image map;
and extracting agricultural information of the green pepper after pruning based on the second 3D model.
Preferably, the data enhancement includes horizontal flipping, vertical flipping, 90-degree rotation, 180-degree rotation, 270-degree rotation, color jittering, and the addition of Gaussian noise to the fourth data.
The first agricultural information is the planting area and the distribution condition of the green peppers before pruning, and the second agricultural information is the planting area and the distribution condition of the green peppers after pruning.
In step S5, the data are divided in a certain proportion: the ratio of the test data set, the training data set and the validation data set is 8:1:1.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. By adopting the green pepper identification method based on semantic segmentation provided by the invention, the satellite remote sensing images of the green pepper before and after pruning are combined to build the model and extract the agricultural information related to the green pepper, so the labeled information is more accurate and the identification accuracy of the green pepper planting area is higher;
2. The green pepper identification method based on semantic segmentation provided by the invention can provide strong data support for the monitoring of large-area agriculture and the formulation of agricultural policies, and has broad development and application prospects.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic diagram of a green pepper identification method
FIG. 2 is a schematic diagram of a constructed semantic segmentation model based on deep learning
FIG. 3 is a graph of predicted results
FIG. 4 is a schematic of 3D modeling
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Example one
This embodiment discloses a green pepper recognition method based on semantic segmentation, which, as shown in FIG. 1, comprises the following steps:
S1: obtaining a first remote sensing image and a second remote sensing image of the green pepper in a target area, dividing the target area into a plurality of sub-areas, and selecting a representative sub-area as the target sub-area, wherein the first remote sensing image is a remote sensing image of the green pepper before pruning, the second remote sensing image is a remote sensing image of the green pepper within one month after pruning, the target area is a green pepper planting area, and the representative sub-area is a sub-area in which green pepper planting is relatively concentrated.
Green pepper is pruned in June each year; pruning ensures high and stable crop yields in successive years. In high-resolution satellite remote sensing images, the data of green pepper before pruning and after pruning differ markedly: after pruning, the green pepper appears mainly as black spots, while before pruning it appears as regularly arranged green fruit trees. Therefore, the obtained first remote sensing image is a remote sensing image of the green pepper before pruning and the selected second remote sensing image is a remote sensing image of the green pepper after pruning; both are RGB three-channel images. The selected target area is divided into a plurality of sub-areas according to lower-level administrative boundaries, the sub-areas are compared with one another, and the sub-area with a large planting area and uniform distribution is selected as the target sub-area.
S2: cutting the first remote sensing image to obtain a plurality of first remote sensing sub-images, selecting the first remote sensing sub-images in a target sub-area to form a second image, and performing pixel-level high-precision marking work on the second image to obtain a first binary mask file;
the method comprises the steps of cutting a first remote sensing image into a plurality of tile data with the resolution size of 256 × 256, cutting the tile data in a direct cutting mode, carrying out high-precision labeling work on all first remote sensing sub-influences in a target sub-region, assuming that the resolution size of a high-resolution satellite remote sensing image to be cut is H × W, firstly generating W/256 folders with sequentially increasing numbers, wherein each folder corresponds to a region with the resolution size of H × 256 in an image to be cut respectively, then sequentially cutting each region with the resolution size of H × 256 into pictures with the resolution size of 256 × 256 from top to bottom, sequentially increasing the naming number of each picture from 0, and finally placing the pictures with the resolution size of 256 × 256 obtained by cutting each region into the corresponding folders.
The specific labeling method for performing high-precision labeling work at pixel level on the second image comprises the following steps:
acquiring a first characteristic of the green pepper in the remote sensing image; the first characteristic is a green area with a regular spatial distribution, the shape and contour of the green pepper area are clear, and the pepper fields are distributed continuously;
the first characteristic is obtained through field visit investigation as the image characteristic data of the green pepper before pruning in the high-resolution satellite remote sensing image, including the color, morphological and spatial distribution characteristics of the green pepper in different periods;
based on the acquired first characteristic, randomly selecting one first remote sensing sub-image and judging whether it contains a green pepper region; if so, the corresponding pixel value region in the first remote sensing sub-image is marked as 1, otherwise it is marked as 0, until all first remote sensing sub-images in the second image are traversed, thereby obtaining the first binary mask file;
when judging whether a first remote sensing sub-image acquired in the target sub-region contains green pepper, the sub-image is compared with historically acquired green pepper remote sensing satellite images; the pixel regions judged to be green pepper regions are marked as 1, and the others are marked as 0.
S3: cutting the second remote sensing image to obtain a plurality of second remote sensing sub-images, selecting the second remote sensing sub-images in the target sub-region to form a third image, and performing pixel-level high-precision marking work on the third image to obtain a second binary mask file;
the specific labeling method for performing high-precision labeling work at pixel level on the third image comprises the following steps:
acquiring a second characteristic of the green pepper in the remote sensing image; the second characteristic is a black-spot area with a regular spatial distribution, and the pepper field is earthy brown;
the second characteristic is obtained through field visit investigation as the image characteristic data of the green pepper after pruning in the high-resolution satellite remote sensing image, including the color, morphological and spatial distribution characteristics of the green pepper in different periods;
based on the obtained second characteristic, randomly selecting one second remote sensing sub-image and judging whether it contains a green pepper region; if so, the corresponding pixel value region in the second remote sensing sub-image is marked as 1, otherwise it is marked as 0, until all second remote sensing sub-images in the third image are traversed, thereby obtaining the second binary mask file;
when judging whether a second remote sensing sub-image contains a green pepper region, the differences between the historical high-resolution satellite remote sensing image data of the area to be studied and the green pepper high-resolution satellite remote sensing image data acquired before and after pruning are analyzed and compared to determine whether a green pepper region exists in the high-resolution satellite remote sensing sub-image to be labeled.
S4: performing logical AND operation on the first binary mask file and the second binary mask file to obtain fourth data;
the two binary mask files obtained at different periods are subjected to logic and operation, data which belong to the first binary mask file and data which belong to the second binary mask file are extracted to serve as fourth data, the data which are shared by the two binary mask files are extracted to be subjected to next operation, the deviation of labeling at different periods can be eliminated, the reliability of labeled data is improved, and a data basis is laid for data-driven semantic segmentation model modeling.
S5: performing data enhancement on the obtained fourth data, and dividing the enhanced data into a test data set, a training data set and a validation data set; the data enhancement includes horizontal flipping, vertical flipping, 90-degree rotation, 180-degree rotation, 270-degree rotation, color jittering and the addition of Gaussian noise to the fourth data.
The data enhancement is mainly used to enlarge the scale of the data set, enhance the stability of the model and meet the demand of updating the large number of parameters in the model; the enhanced data are divided into the training data set, validation data set and test data set by stratified random sampling in a proportion of 8:1:1.
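A minimal sketch of these augmentations and of the 8:1:1 split, assuming each sample is a (tile, mask) pair of NumPy arrays; the jitter and noise magnitudes are placeholder values, and a plain random split is shown rather than the stratified sampling mentioned above:

```python
import random
import numpy as np

def augment(img: np.ndarray, mask: np.ndarray):
    """Yield the augmented copies listed in the embodiment: flips,
    90/180/270 degree rotations, color jitter and Gaussian noise
    (geometric changes are applied to the mask too, photometric
    changes to the image only)."""
    yield np.fliplr(img), np.fliplr(mask)
    yield np.flipud(img), np.flipud(mask)
    for k in (1, 2, 3):                               # 90, 180, 270 degrees
        yield np.rot90(img, k), np.rot90(mask, k)
    jitter = np.clip(img * random.uniform(0.8, 1.2), 0, 255).astype(img.dtype)
    yield jitter, mask                                # simple color jitter
    noisy = np.clip(img + np.random.normal(0, 10, img.shape), 0, 255).astype(img.dtype)
    yield noisy, mask                                 # Gaussian noise

def split_811(samples, seed: int = 0):
    """Random 8:1:1 split into training, validation and test sets."""
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n = len(samples)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```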
S6: building a green pepper recognition model based on semantic segmentation, training the built model with the training data set to obtain a trained model, and inputting the validation data set into the trained model to optimize the model and obtain an optimized model;
the Seg-Net network of the green pepper recognition model based on semantic segmentation consists of an encoder and a decoder. The encoder comprises a convolution layer, a batch normalization layer, an activation layer and a pooling layer; the decoder mainly comprises an upsampling layer, a deconvolution layer and finally a Softmax layer, as shown in fig. 2. Input high-resolution satellite remote sensing image data blocks and corresponding mask file data enter an encoder through operations such as zooming and shearing, the data repeatedly passes through a convolution layer, a batch normalization layer, an activation layer and a pooling layer for multiple times to extract high-dimensional features in the data, then the obtained features are sent into a decoder to repeatedly pass through an upsampling layer and a deconvolution layer for multiple times to perform dimension reduction from high dimension to low dimension, finally the data after dimension reduction is sent into a Softmax layer to divide and classify each pixel point in the data, and whether the data is a zanthoxylum area or a background area is judged.
In the encoder, the convolution layers obtain feature maps of the image through convolution operations; the batch normalization layer eliminates the offset of the data distribution after the convolution operation, reduces the dependence of the training result on initialization and accelerates convergence; the activation layer adds a nonlinear factor to the extracted feature map through an activation function; and the pooling layer enlarges the receptive field, i.e. maps each pixel in the feature map to a corresponding region of the input image, reduces the size of the feature map and reduces the amount of computation.
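A minimal PyTorch sketch of a SegNet-style encoder-decoder of the kind described above, with two output classes (background and green pepper area); the number of stages and channel widths are illustrative assumptions, not the exact configuration of the embodiment, and the Softmax step is left to the loss function:

```python
import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    """SegNet-style encoder-decoder: convolution + batch normalization +
    ReLU + pooling in the encoder, unpooling + convolution in the decoder,
    and a per-pixel 2-class output (background vs. green pepper)."""

    def __init__(self, in_ch: int = 3, num_classes: int = 2):
        super().__init__()
        self.enc1 = self._block(in_ch, 64)
        self.enc2 = self._block(64, 128)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec2 = self._block(128, 64)
        self.dec1 = self._block(64, 64)
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.enc1(x); x, idx1 = self.pool(x)    # 256 -> 128
        x = self.enc2(x); x, idx2 = self.pool(x)    # 128 -> 64
        x = self.unpool(x, idx2); x = self.dec2(x)  # 64 -> 128
        x = self.unpool(x, idx1); x = self.dec1(x)  # 128 -> 256
        return self.classifier(x)                   # per-pixel class logits
```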
The model loss function adopts a Softmax-based cross-entropy loss function, and the Adam optimizer, which has withstood extensive practical testing, is adopted for model training. The initial learning rate is set to 0.0001, the subsequent learning rate is adjusted by dynamic decay, the number of training iterations is 50, and the batch sample size is 8.
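A minimal training-loop sketch under these settings (Softmax-based cross-entropy loss, Adam, initial learning rate 0.0001, 50 iterations, batch size 8); the exponential form of the learning-rate decay and the data-loading details are assumptions:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def train(model: nn.Module, train_set, device: str = "cuda") -> nn.Module:
    """Train the segmentation model with the settings given above."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()                    # Softmax-based cross-entropy loss
    optimizer = optim.Adam(model.parameters(), lr=1e-4)  # initial learning rate 0.0001
    scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)  # dynamic decay (assumed form)
    loader = DataLoader(train_set, batch_size=8, shuffle=True)           # batch sample size 8

    for epoch in range(50):                              # 50 training iterations (epochs)
        model.train()
        for images, masks in loader:                     # masks: per-pixel class indices (0/1)
            images = images.to(device)
            masks = masks.to(device).long()
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
        scheduler.step()
        # after each epoch the validation-set IoU would be computed here
        # and the checkpoint with the highest IoU kept as the optimal model
    return model
```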
The intersection-over-union (IoU) is used as the performance evaluation index of the model, i.e. the ratio of the area of overlap between the green pepper area predicted by the model and the real green pepper area to the area of their union; the higher the ratio, the better the model. Finally, the model with the highest IoU evaluation result is selected as the optimal model.
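A minimal sketch of this IoU evaluation for binary masks; the small epsilon guarding against an empty union is an added assumption:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union of two binary (0/1) green pepper masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / (union + eps))
```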
S7: inputting the test data set into an optimization model for testing to obtain an optimal semantic segmentation model;
and sending the test set data into a trained semantic segmentation model for testing, recording the numerical value of each model intersection ratio evaluation index on the test set data, and selecting the model with the highest evaluation index on the test data set, namely the model with the optimal generalization performance as an output model.
Compared with traditional high-resolution satellite remote sensing image interpretation, whose accuracy is only 60%-70%, the nine-leaf green pepper high-resolution satellite remote sensing image database and the semantic segmentation model established by the method reach an accuracy of 93% on the test data set. On-site survey verification of the area to be studied shows that the model can accurately identify the nine-leaf green pepper present in the high-resolution satellite remote sensing image; as shown in FIG. 3, the whitish regions are the nine-leaf green pepper fields identified by the model. In field random sampling inspection of 50 inspection points, there was 1 missed identification (a pepper field shaded by a large tree) and 1 false identification (a flower field identified as a pepper planting area); the accuracy of the random sampling inspection is 96%.
S8: constructing a first 3D model of the target area based on the first remote sensing sub-images and the optimal semantic segmentation model, and extracting first agricultural information based on the first 3D model; the 3D modeling result is shown in FIG. 4, and the first agricultural information is the planting area and distribution of the green pepper before pruning.
Randomly selecting one first remote sensing sub-image, predicting the first remote sensing sub-image through an optimal semantic segmentation model to obtain a first remote sensing image data map, and obtaining a plurality of first remote sensing image maps until all the first remote sensing sub-images are traversed;
combining the first remote sensing image maps into a third remote sensing image map;
establishing a first 3D model based on the third remote sensing image map and the elevation data corresponding to the third remote sensing image map;
and extracting agricultural information of the green pricklyash peel before pruning based on the first 3D model.
The 3D model established based on the first remote sensing sub-image is used for predicting first agricultural information of the green pricklyash peel in a target area before pruning;
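A minimal sketch of the per-tile prediction and mosaicking described in this step, i.e. predicting every 256 × 256 sub-image with the model and stitching the results back into a full-scene map; the tile ordering and array layout are assumptions that mirror the cutting scheme above, and the same routine applies to the second remote sensing sub-images in step S9:

```python
import numpy as np
import torch

@torch.no_grad()
def predict_mosaic(model, tiles: np.ndarray, rows: int, cols: int,
                   device: str = "cuda") -> np.ndarray:
    """Predict each 256 x 256 tile and stitch the per-tile masks into one map.

    `tiles` is assumed to be shaped (rows * cols, 256, 256, 3), ordered
    strip by strip (top to bottom within each strip), matching the
    cutting scheme used earlier.
    """
    model = model.to(device).eval()
    masks = []
    for tile in tiles:
        x = torch.from_numpy(tile).float().permute(2, 0, 1).unsqueeze(0).to(device) / 255.0
        pred = model(x).argmax(dim=1).squeeze(0).cpu().numpy()   # 256 x 256 class map
        masks.append(pred)
    mosaic = np.zeros((rows * 256, cols * 256), dtype=np.uint8)
    for i, mask in enumerate(masks):
        col, row = divmod(i, rows)        # strip-major ordering assumed
        mosaic[row * 256:(row + 1) * 256, col * 256:(col + 1) * 256] = mask
    return mosaic
```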
adding geographical position information to the remote sensing image in GIS software, simultaneously obtaining elevation data corresponding to the high-resolution satellite remote sensing image, adding the elevation data into the remote sensing image with the geographical position information in the GIS software to obtain the high-resolution satellite remote sensing image with slope, then establishing a base of a 3D model, stretching the boundary of the high-resolution satellite remote sensing image to obtain a final 3D model of the area, counting data such as the area of the green pepper pattern spots in the area by using a field calculator in the GIS software to obtain relevant agricultural information of the green pepper planted in the area
S9: and constructing a second 3D model in the target area based on the second remote sensing sub-image and the optimal semantic segmentation model, and extracting second agricultural information based on the second 3D model, wherein the second agricultural information is the planting area and the distribution condition of the green peppers after pruning.
Randomly selecting one second remote sensing sub-image, predicting the second remote sensing sub-image through an optimal semantic segmentation model to obtain a second remote sensing image until all second remote sensing sub-images are traversed to obtain a plurality of second remote sensing image maps;
combining the second remote sensing image maps into a fourth remote sensing image map;
establishing a second 3D model based on the fourth remote sensing image map and elevation data corresponding to the fourth remote sensing image map;
based on the second 3D model, second agricultural information is extracted.
In GIS software, geographical position information is added to the remote sensing image, and the elevation data corresponding to the high-resolution satellite remote sensing image are obtained and added to the geo-referenced image to obtain a high-resolution satellite remote sensing image with slope. The base of the 3D model is then established and the boundary of the high-resolution satellite remote sensing image is stretched to obtain the final 3D model of the area. Data such as the area of the green pepper patches in the region are counted with the field calculator in the GIS software to obtain the relevant agricultural information of the green pepper planted in the region.
S10: and evaluating the yield of the planted green peppers based on the first agricultural information and the second agricultural information.
Through the constructed 3D models, the planting area of the green pepper in the target area and its distribution can be estimated; the planting area and distribution are the obtained relevant agricultural information, and the yield of the green pepper can be estimated based on the planting area and distribution of the green pepper in the target area.
In this embodiment, high-resolution satellite remote sensing images from different periods are collected, the data set is enlarged by image cutting and image enhancement, pixel-level data annotation is performed, and a Seg-Net deep neural network model is trained and used for prediction to identify the green pepper.
Application prospects of this embodiment: a data-driven deep learning semantic segmentation technique is used as the recognition model for nine-leaf green pepper. By analyzing the differences between images of nine-leaf green pepper before and after pruning in high-resolution satellite remote sensing imagery, a high-accuracy, high-reliability nine-leaf green pepper high-resolution satellite remote sensing image data set is constructed, and a deep-learning-based nine-leaf green pepper semantic segmentation model is trained on the constructed data, greatly improving the identification accuracy of nine-leaf green pepper in satellite remote sensing images and replacing the dependence on manual field measurement. The method has very high identification accuracy, can be widely applied to identifying different crops in other areas, provides strong data support for the monitoring of large-area agriculture and the formulation of agricultural policies, and has broad development and application prospects.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A green pepper identification method based on semantic segmentation is characterized by comprising the following steps:
s1: the method comprises the steps of obtaining a first remote sensing image and a second remote sensing image of the green pricklyash peel in a target area, dividing the target area into a plurality of sub-areas, selecting the sub-areas with representative areas as the target sub-areas, wherein the first remote sensing image is the remote sensing image of the green pricklyash peel before pruning, the second remote sensing image is the remote sensing image of the green pricklyash peel in one month after pruning, the target area is a green pricklyash peel planting area, and the sub-areas with the representative areas are relatively concentrated sub-areas for planting the green pricklyash peel.
S2: cutting the first remote sensing image to obtain a plurality of first remote sensing sub-images, selecting the first remote sensing sub-images in a target sub-area to form a second image, and performing pixel-level high-precision marking work on the second image to obtain a first binary mask file;
s3: cutting the second remote sensing image to obtain a plurality of second remote sensing sub-images, selecting the second remote sensing sub-images in the target sub-region to form a third image, and performing pixel-level high-precision marking work on the third image to obtain a second binary mask file;
s4: performing logical AND operation on the first binary mask file and the second binary mask file to obtain fourth data;
s5: performing data enhancement on the obtained fourth data, and dividing the data obtained after the data enhancement into a test data set, a training data set and a verification data set according to a certain proportion;
s6: building a zanthoxylum schinifolium recognition model based on semantic segmentation, using a training data set to train the built model to obtain a training model, inputting a verification data set into the training model, and optimizing the model to obtain an optimized model;
s7: inputting the test data set into an optimization model for testing to obtain an optimal semantic segmentation model;
s8: constructing a first 3D model in the target area based on the first remote sensing sub-image and the optimal semantic segmentation model, and extracting first agricultural information based on the first 3D model;
s9: constructing a second 3D model in the target area based on the second remote sensing sub-image and the optimal semantic segmentation model, and extracting second agricultural information based on the second 3D model;
s10: and evaluating the yield of the planted green peppers based on the first agricultural information and the second agricultural information.
2. The green pepper recognition method based on semantic segmentation as claimed in claim 1, characterized in that: in step S2, the specific labeling method for performing high-precision labeling work at a pixel level on the second image is as follows:
acquiring a first characteristic of the green peppers in a remote sensing image, wherein the first characteristic is a green region with a regular spatial distribution, the shape and contour of the green pepper region are clear, and pepper fields are distributed continuously;
based on the obtained first characteristic, one first remote sensing sub-image is selected randomly, whether a green pepper region exists in the first remote sensing sub-image is judged, if yes, the corresponding pixel value region in the first remote sensing sub-image is marked as 1, otherwise, the corresponding pixel value region in the first remote sensing sub-image is marked as 0 until all the first remote sensing sub-images in the second image are traversed, and a first binary mask file is obtained.
3. The green pepper recognition method based on semantic segmentation as claimed in claim 1, characterized in that: in step S3, the specific labeling method for performing high-precision labeling work at a pixel level on the third image is as follows:
acquiring a second characteristic of the green pricklyash peel in the remote sensing image, wherein the second characteristic is a black spot area with a regular spatial distribution, and the pricklyash peel field is earthy brown;
and based on the obtained second characteristic, randomly selecting one second remote sensing sub-image, judging whether the second remote sensing sub-image has a zanthoxylum area, if so, marking the corresponding pixel value area in the second remote sensing sub-image as 1, otherwise, marking the pixel value of the second remote sensing sub-image as 0 until all second remote sensing sub-images in the third image are traversed, and obtaining a second binary mask file.
4. The method for recognizing zanthoxylum bungeanum based on semantic segmentation according to claim 3, wherein the specific method of the step S8 comprises the following steps:
randomly selecting one first remote sensing sub-image, predicting the first remote sensing sub-image through an optimal semantic segmentation model to obtain a first remote sensing image data map, and obtaining a plurality of first remote sensing image maps until all the first remote sensing sub-images are traversed;
combining the first remote sensing image maps into a third remote sensing image map;
establishing a first 3D model based on the third remote sensing image map and the elevation data corresponding to the third remote sensing image map;
based on the first 3D model, first agricultural information is extracted.
5. The method for recognizing zanthoxylum schinifolium based on semantic segmentation according to claim 1, wherein the specific method of the step S9 comprises the following steps:
randomly selecting one second remote sensing sub-image, predicting the second remote sensing sub-image through an optimal semantic segmentation model to obtain a second remote sensing image until all second remote sensing sub-images are traversed to obtain a plurality of second remote sensing image maps;
combining the second remote sensing image maps into a fourth remote sensing image map;
establishing a second 3D model based on the fourth remote sensing image map and elevation data corresponding to the fourth remote sensing image map;
based on the second 3D model, second agricultural information is extracted.
6. The zanthoxylum bungeanum maxim recognition method based on semantic segmentation according to claim 5, wherein the data enhancement comprises horizontal flipping, vertical flipping, 90-degree rotation, 180-degree rotation, 270-degree rotation, color dithering and Gaussian noise addition to the fourth data.
7. The method for identifying green peppers based on semantic segmentation as claimed in claim 1, wherein the first agricultural information is planting area and distribution condition of green peppers before pruning, and the second agricultural information is planting area and distribution condition of green peppers after pruning.
8. The method for recognizing green pepper based on semantic segmentation according to claim 7, wherein in the step S5, the data are divided according to a certain proportion: the ratio of the test data set, the training data set and the validation data set is 8:1:1.
CN202110274867.6A 2021-03-15 2021-03-15 Green pricklyash peel identification method based on semantic segmentation Active CN112906627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110274867.6A CN112906627B (en) 2021-03-15 2021-03-15 Green pricklyash peel identification method based on semantic segmentation

Publications (2)

Publication Number Publication Date
CN112906627A true CN112906627A (en) 2021-06-04
CN112906627B CN112906627B (en) 2022-11-15

Family

ID=76105156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110274867.6A Active CN112906627B (en) 2021-03-15 2021-03-15 Green pricklyash peel identification method based on semantic segmentation

Country Status (1)

Country Link
CN (1) CN112906627B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241344A (en) * 2021-12-20 2022-03-25 电子科技大学 Plant leaf disease and insect pest severity assessment method based on deep learning

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106815847A (en) * 2017-01-12 2017-06-09 非凡智慧(宁夏)科技有限公司 Trees dividing method and single tree extracting method based on laser radar point cloud
CN107316289A (en) * 2017-06-08 2017-11-03 华中农业大学 Crop field spike of rice dividing method based on deep learning and super-pixel segmentation
CN108764255A (en) * 2018-05-21 2018-11-06 二十世纪空间技术应用股份有限公司 A kind of extracting method of winter wheat planting information
CN109784320A (en) * 2019-03-25 2019-05-21 中国科学院地理科学与资源研究所 Ginseng industrialized agriculture domain determines method
CN110490081A (en) * 2019-07-22 2019-11-22 武汉理工大学 A kind of remote sensing object decomposition method based on focusing weight matrix and mutative scale semantic segmentation neural network
CN112418473A (en) * 2019-08-20 2021-02-26 阿里巴巴集团控股有限公司 Crop information processing method, device, equipment and computer storage medium
CN111259898A (en) * 2020-01-08 2020-06-09 西安电子科技大学 Crop segmentation method based on unmanned aerial vehicle aerial image
CN111815014A (en) * 2020-05-18 2020-10-23 浙江大学 Crop yield prediction method and system based on unmanned aerial vehicle low-altitude remote sensing information
CN112183428A (en) * 2020-10-09 2021-01-05 浙江大学中原研究院 Wheat planting area segmentation and yield prediction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王帅: "Research on hyperspectral feature extraction methods for wheat Fusarium head blight", China Master's Theses Full-text Database (Agricultural Science and Technology) *
齐锐丽 et al.: "Sichuan pepper image segmentation based on the HSV model and an improved OTSU algorithm", Journal of Chinese Agricultural Mechanization *

Also Published As

Publication number Publication date
CN112906627B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN108846832B (en) Multi-temporal remote sensing image and GIS data based change detection method and system
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN110136170B (en) Remote sensing image building change detection method based on convolutional neural network
CN108830870B (en) Satellite image high-precision farmland boundary extraction method based on multi-scale structure learning
Turker et al. Building‐based damage detection due to earthquake using the watershed segmentation of the post‐event aerial images
CN110263717B (en) Method for determining land utilization category of street view image
CN111626947B (en) Map vectorization sample enhancement method and system based on generation of countermeasure network
Gobeawan et al. Modeling trees for virtual Singapore: From data acquisition to CityGML models
CN111191628B (en) Remote sensing image earthquake damage building identification method based on decision tree and feature optimization
CN111028255A (en) Farmland area pre-screening method and device based on prior information and deep learning
CN113449594A (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN109063660B (en) Crop identification method based on multispectral satellite image
Peeters et al. Automated recognition of urban objects for morphological urban analysis
Ghanea et al. Automatic building extraction in dense urban areas through GeoEye multispectral imagery
CN113657324A (en) Urban functional area identification method based on remote sensing image ground object classification
CN104952070A (en) Near-rectangle guide based remote-sensing cornfield image segmentation method
CN115223054A (en) Remote sensing image change detection method based on partition clustering and convolution
CN114241321A (en) Rapid and accurate identification method for high-resolution remote sensing image flat-topped building
Oka et al. Vectorization of contour lines from scanned topographic maps
CN112906627B (en) Green pricklyash peel identification method based on semantic segmentation
CN115468917A (en) Method and system for extracting crop information of farmland plot based on high-resolution remote sensing
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN109657728B (en) Sample production method and model training method
CN113378642B (en) Method for detecting illegal occupation buildings in rural areas
Oehmcke et al. Deep point cloud regression for above-ground forest biomass estimation from airborne LiDAR

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant