CN114463637B - Winter wheat remote sensing identification analysis method and system based on deep learning - Google Patents


Info

Publication number
CN114463637B
Authority
CN
China
Prior art keywords
area
winter wheat
file
data set
semantic segmentation
Prior art date
Legal status
Active
Application number
CN202210117044.7A
Other languages
Chinese (zh)
Other versions
CN114463637A (en)
Inventor
张兵
彭代亮
刘胜威
陈俊杰
潘玉豪
郑诗军
胡锦康
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS
Priority to CN202210117044.7A
Publication of CN114463637A
Application granted
Publication of CN114463637B
Legal status: Active

Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/30188 Vegetation; Agriculture


Abstract

The invention provides a winter wheat remote sensing identification analysis method and system based on deep learning. The method comprises the following steps: create a label vector file of polygonal areas, convert it into a raster file, generate square vector data, batch-clip the median composite images of the five growth periods and the raster file of the polygonal areas using the square vector data, and resize them to obtain training, validation, and test data sets; train the semantic segmentation model with the training and validation data sets of each growth period as input, and classify the test set of each growth period; generate a winter wheat spatial distribution map for each growth period, and perform spatial mapping and area extraction of winter wheat. The scheme provided by the invention is based on a semantic segmentation classification method with high overall accuracy and good classification performance; the remote sensing identification precision of winter wheat is highest at the jointing-heading stage, and the deep learning method extracts the winter wheat area of the study region at that stage with high precision.

Description

Winter wheat remote sensing identification analysis method and system based on deep learning
Technical Field
The invention belongs to the field of winter wheat remote sensing identification, and particularly relates to a winter wheat remote sensing identification analysis method and system based on deep learning.
Background
Existing studies have calculated the separability between winter wheat at different growth stages and other land-use/land-cover types with the Jeffries-Matusita (J-M) distance method, concluding that Sentinel-2 imagery at the heading stage is the optimal period for extracting winter wheat area in the northern and central regions of Anhui Province.
Other work has used a temporal aggregation technique, combining Landsat-8 OLI and Sentinel-2 data, to explore remote sensing identification of winter wheat across the growth periods of Shandong Province, finding that data from the maturity and green-up stages are more effective and give better winter wheat identification results.
The prior art has the following defects:
these studies adopt only the random forest classification method to investigate regional winter wheat remote sensing identification, and do not further evaluate how different methods, such as deep learning, affect winter wheat remote sensing identification in each growth period.
Disclosure of Invention
To solve the above technical problems, the invention provides a winter wheat remote sensing identification analysis method and system based on deep learning.
The invention discloses a winter wheat remote sensing identification analysis method based on deep learning, which comprises the following steps:
s1, training, verifying and testing data set manufacturing: selecting a polygonal area required by a part of a research area, creating a label vector file of the polygonal area, converting the label vector file into a raster file, reclassifying to generate square vector data, cutting the sendienl-2 median synthetic images of five growth periods of the polygonal area and the raster file in batch by using the square vector data, and adjusting the size of the cut sendienl-2 median synthetic images and the raster file to obtain a training data set, a verification data set and a test data set; the obtained sentinel-2 median synthetic images of the five growth periods share one label;
s2, cutting and processing the sentinel-2 median synthetic images of five growth periods in the whole region of the research area to obtain a spatial distribution data set;
s3, building a U-Net semantic segmentation model and setting parameters;
s4, training a U-Net semantic segmentation model: training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
s5, calling a U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image;
s6, calling a U-Net semantic segmentation model under the optimal weight of each growth period, classifying the spatial distribution data sets of each growth period, and splicing classification results to generate a winter wheat spatial distribution map of each growth period;
s7, evaluating the classification precision of winter wheat of the U-Net semantic segmentation model;
s8, spatial mapping and area extraction of winter wheat: selecting a winter wheat spatial distribution map with the highest classification precision in a growth period, counting the area of the winter wheat to obtain an extraction area, and applying the extraction area and a ground true value to perform precision evaluation on the extraction area.
According to the method of the first aspect of the present invention, in step S1, the specific method for creating the label vector file of the polygonal area includes:
and cutting a synthetic image of the whole growth cycle of the winter wheat by using the polygonal surface elements of the polygonal area, and establishing a label vector file of the polygonal area by referring to the synthetic images of five growth periods and wild real measuring points of the winter wheat.
According to the method of the first aspect of the present invention, in step S1, the specific method for converting the label vector file into a raster file and reclassifying it to generate square vector data includes:
and converting the label vector file into a raster file, reclassifying the raster file into classes 0,1, randomly creating point elements in the polygonal area, then establishing a graphic buffer area for the point elements, and generating square vector data with the size of 1280m by 1280m, wherein the boundary of the square vector data is ensured to be completely in a raster.
According to the method of the first aspect of the present invention, in step S1, the Sentinel-2 median composite image includes:
four bands: red, green, blue, and near-infrared;
the clipped Sentinel-2 median composite images and raster files are resized to 128 pixels by 128 pixels.
According to the method of the first aspect of the present invention, in step S2, the specific method for obtaining the spatial distribution data set by clipping and processing the Sentinel-2 median composite images of the five growth periods over the whole study region includes:
clip the Sentinel-2 median composite images of the five growth periods over the whole study region into image blocks of 512 pixels by 512 pixels, and remove blocks consisting entirely of background values to obtain the spatial distribution data set.
According to the method of the first aspect of the present invention, in the step S7, a specific method for assessing the classification accuracy of winter wheat by using the U-Net semantic segmentation model includes:
and comparing the classified result images with the self-made labels, and quantitatively evaluating the winter wheat semantic segmentation accuracy of the five growth period test set images by using the accuracy rate, the recall rate, the F1-score, the cross-over ratio and the accuracy rate.
According to the method of the first aspect of the present invention, in step S8, the specific formula for evaluating the precision of the extracted area against the ground truth value is:
P = (1 - |S - S'| / S') × 100%
in the formula, P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
A second aspect of the invention discloses a winter wheat remote sensing identification and analysis system based on deep learning, comprising:
the first processing module is configured to select the required polygonal areas from part of the study region, create a label vector file for the polygonal areas, convert the label vector file into a raster file and reclassify it, generate square vector data, batch-clip the Sentinel-2 median composite images of the five growth periods and the raster file of the polygonal areas using the square vector data, and resize the clipped Sentinel-2 median composite images and raster files to obtain a training data set, a validation data set, and a test data set; the resulting Sentinel-2 median composite images of the five growth periods share one label;
the second processing module is configured to clip and process the Sentinel-2 median composite images of the five growth periods over the whole study region to obtain a spatial distribution data set;
the third processing module is configured to build a U-Net semantic segmentation model and set parameters;
the fourth processing module is configured to train the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input, so as to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
the fifth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the test set of each growth period and obtain a classified result image;
the sixth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice classification results to generate a winter wheat spatial distribution map of each growth period;
the seventh processing module is configured to evaluate the winter wheat classification precision of the U-Net semantic segmentation model;
and the eighth processing module is configured to select the winter wheat spatial distribution map of the growth period with the highest classification precision, count the winter wheat area to obtain the extracted area, and evaluate the precision of the extracted area against the ground truth value.
A third aspect of the invention discloses an electronic device. The electronic equipment comprises a memory and a processor, the memory stores a computer program, and when the processor executes the computer program, the steps of the winter wheat remote sensing identification and analysis method based on deep learning in the first aspect of the disclosure are realized.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium stores thereon a computer program, and when the computer program is executed by a processor, the steps in a deep learning-based winter wheat remote sensing identification analysis method according to any one of the first aspect of the disclosure are implemented.
The scheme provided by the invention is based on a U-Net semantic segmentation classification method, with high overall accuracy and good classification performance; the remote sensing identification precision of winter wheat is highest at the jointing-heading stage. The deep learning method extracts the winter wheat area of the study region at the jointing-heading stage with high precision.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a winter wheat remote sensing identification analysis method based on deep learning according to an embodiment of the invention;
fig. 2 is the 2019-2020 winter wheat spatial distribution map of the main northern Anhui study region generated by the deep learning method according to an embodiment of the invention;
FIG. 3 is a structural diagram of a winter wheat remote sensing identification analysis system based on deep learning according to an embodiment of the invention;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a winter wheat remote sensing identification and analysis method based on deep learning. Fig. 1 is a flowchart of a remote sensing recognition analysis method for winter wheat based on deep learning according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s1, training, verifying and manufacturing a test data set: selecting a polygonal area required by a part of a research area, creating a label vector file of the polygonal area, converting the label vector file into a raster file, reclassifying to generate square vector data, cutting the sendienl-2 median synthetic images of five growth periods of the polygonal area and the raster file in batch by using the square vector data, and adjusting the size of the cut sendienl-2 median synthetic images and the raster file to obtain a training data set, a verification data set and a test data set; the obtained sentinel-2 median synthetic images of the five growth periods share one label;
specifically, in the above steps, the obtained sentinel-2 median composite images of five growth periods share a grid file obtained by cutting, namely a label;
s2, cutting and processing the sentinel-2 median synthetic image of five growth periods in the whole area of the research area to obtain a spatial distribution data set;
s3, building a U-Net semantic segmentation model and setting parameters;
s4, training a U-Net semantic segmentation model: training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
s5, calling a U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image;
s6, calling a U-Net semantic segmentation model under the optimal weight of each growth period, classifying the spatial distribution data sets of each growth period, and splicing classification results to generate a winter wheat spatial distribution map of each growth period;
s7, evaluating winter wheat classification precision of the U-Net semantic segmentation model;
s8, spatial mapping and area extraction of winter wheat: selecting a winter wheat spatial distribution map with the highest classification precision in a growth period, counting the area of the winter wheat to obtain an extraction area, and applying the extraction area and a ground true value to perform precision evaluation on the extraction area.
In step S1, training, validation, and test data sets are produced: select the required polygonal areas from part of the study region, create a label vector file for the polygonal areas, convert the label vector file into a raster file and reclassify it, generate square vector data, batch-clip the Sentinel-2 median composite images of the five growth periods and the raster file of the polygonal areas using the square vector data, and resize the clipped Sentinel-2 median composite images and raster files to obtain training, validation, and test data sets; the Sentinel-2 median composite images of the five growth periods share one raster label.
In some embodiments, in step S1, the specific method for creating the label vector file of the polygonal area includes:
clip a composite image covering the whole winter wheat growth period using the polygon surface elements of the polygonal area, and create the label vector file of the polygonal area with reference to the composite images of the five growth periods and field-measured winter wheat ground-truth points;
the specific method for converting the label vector file into a raster file and reclassifying to generate square vector data comprises the following steps:
convert the label vector file into a raster file, reclassify the raster into classes 0 and 1, randomly create point elements within the polygonal area, build a graphic buffer around each point element, and generate square vector data of size 1280 m by 1280 m, ensuring that each square lies entirely within the raster extent;
the Sentinel-2 median composite image comprises:
four bands: red, green, blue, and near-infrared;
the clipped Sentinel-2 median composite images and raster files are resized to 128 pixels by 128 pixels.
Specifically, since the study region lacks winter wheat label data and the training process of U-Net requires an image data set with corresponding labels, the invention performs deep learning classification and extraction of winter wheat by producing its own data set as follows.
Because the image data of the study region are large, producing labels for all of them would be time- and labor-intensive, so a subset of required polygonal areas is selected from the study region. The selection principles are: (1) select areas whose ground-object image features are strong and whose contrast makes them easy to distinguish; (2) the selected areas should cover all ground feature types; (3) the selected areas should come from various cities within the study region.
Then, in ArcMap, clip a composite image of the whole winter wheat growth cycle using the polygon surface elements of the polygonal areas, create the label vector file of the polygonal areas with reference to the composite images of the five growth periods and field-measured winter wheat points, modify the value of the id field of the label vector file, and set its spatial reference.
Convert the label vector file into a raster file, reclassify the raster into classes 0 and 1, randomly create point elements within the polygonal areas, build a graphic buffer around each point element, and generate square vector data of size 1280 m by 1280 m, ensuring that each square lies entirely within the raster extent.
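The square sampling described above can be sketched in plain Python: random centre points are buffered into 1280 m axis-aligned squares, with the sampling range shrunk so every square stays inside the raster extent. The `make_squares` helper, bounds, and count are illustrative assumptions, not the patent's actual tooling (which uses ArcGIS point buffers).

```python
import random

def make_squares(bounds, n, size=1280.0, seed=0):
    """Generate n axis-aligned square tiles (size x size metres) whose
    centres are random points inside `bounds` and whose edges stay
    entirely within bounds = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bounds
    half = size / 2.0
    rng = random.Random(seed)
    squares = []
    for _ in range(n):
        # Sample the centre so the buffered square cannot cross the boundary.
        cx = rng.uniform(xmin + half, xmax - half)
        cy = rng.uniform(ymin + half, ymax - half)
        squares.append((cx - half, cy - half, cx + half, cy + half))
    return squares

tiles = make_squares((0.0, 0.0, 10000.0, 10000.0), n=5)
```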
Use the square vector data to batch-clip the Sentinel-2 median composite images of the five growth periods of the polygonal areas (each comprising red, green, blue, and near-infrared bands) and the raster file, and use a Python program to resize the clipped Sentinel-2 median composite images and raster files to 128 pixels by 128 pixels.
This completes data set production. The data set was randomly divided into a training set, a validation set, and a test set in a proportion of 7. Before the training set images are used as input to the training algorithm, they are normalized, and the training data set is augmented: new images are generated by adjusting the colors of the training set images and applying rotation and symmetry (flip) operations.
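The normalization and augmentation step can be illustrated with a small NumPy sketch. The `normalize` and `augment` helpers are hypothetical names, and per-band min-max scaling is only one plausible reading of "normalized" (the patent does not specify the exact scheme); the rotations and flips correspond to the rotation and symmetry operations mentioned above.

```python
import numpy as np

def normalize(img):
    """Scale each band of a (bands, H, W) image chip to [0, 1]."""
    img = img.astype(np.float32)
    mins = img.min(axis=(1, 2), keepdims=True)
    maxs = img.max(axis=(1, 2), keepdims=True)
    return (img - mins) / np.maximum(maxs - mins, 1e-8)

def augment(img, label):
    """Return rotated and mirrored copies of an image/label pair."""
    out = []
    for k in range(4):                                   # 0/90/180/270 degree rotations
        r_img = np.rot90(img, k, axes=(1, 2))
        r_lab = np.rot90(label, k)
        out.append((r_img, r_lab))
        out.append((r_img[:, :, ::-1], r_lab[:, ::-1]))  # horizontal flip
    return out

chip = np.random.rand(4, 128, 128)           # 4 bands, 128 x 128 pixels
mask = np.random.randint(0, 2, (128, 128))   # binary winter wheat label
pairs = augment(normalize(chip), mask)       # 8 augmented copies
```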
In step S2, the Sentinel-2 median composite images of the five growth periods over the whole study region are clipped and processed to obtain a spatial distribution data set.
In some embodiments, in step S2, the specific method for obtaining the spatial distribution data set by clipping and processing the Sentinel-2 median composite images of the five growth periods over the whole study region includes:
clip the Sentinel-2 median composite images of the five growth periods over the whole study region into image blocks of 512 pixels by 512 pixels, and remove blocks consisting entirely of background values to obtain the spatial distribution data set.
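The tiling of a whole-region image into 512 × 512 blocks, with all-background blocks discarded, might look like the following NumPy sketch; the function name and the use of a dict keyed by block indices are illustrative assumptions (indexing the blocks makes later reassembly straightforward).

```python
import numpy as np

def tile_image(img, tile=512, background=0):
    """Split a (bands, H, W) scene into tile x tile blocks, dropping
    blocks that contain only the background value. Returns a dict
    mapping (row, col) block indices to arrays so the mosaic can be
    reassembled later."""
    bands, h, w = img.shape
    blocks = {}
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            block = img[:, i:i + tile, j:j + tile]
            if np.all(block == background):
                continue                      # skip all-background blocks
            blocks[(i // tile, j // tile)] = block
    return blocks

scene = np.zeros((4, 1024, 1024))
scene[:, :512, :512] = 1.0                    # only one quadrant has data
blocks = tile_image(scene)                    # 1 block kept, 3 dropped
```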
In step S3, the U-Net semantic segmentation model is built and its parameters are set.
Specifically, the U-Net network structure consists of a downsampling part and an upsampling part, the whole network resembling the letter 'U'. The first part performs feature extraction on the input image with convolutional layers and max-pooling layers; each 3 × 3 convolutional layer is followed by a ReLU activation function, and each stage ends with a 2 × 2 max-pooling operation. The second part restores resolution through deconvolution operations whose results are concatenated with the corresponding feature maps, and the final output layer uses a 1 × 1 convolution kernel. Based on an encoder-decoder structure, U-Net achieves feature fusion through concatenation, and its structure is simple and stable.
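A reduced-depth PyTorch sketch of this structure is shown below, assuming the four input bands and two classes used elsewhere in the document. The real model has more encoder/decoder levels and different channel counts; this only illustrates the conv + ReLU + max-pool encoder, transposed-convolution upsampling, skip concatenation, and 1 × 1 output layer.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3x3 convolutions, each followed by ReLU (padding keeps size).
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """One-level-deep U-Net: encoder (conv + 2x2 max pool), decoder
    (transposed conv + skip concatenation), 1x1 output convolution."""
    def __init__(self, bands=4, classes=2):
        super().__init__()
        self.enc1 = conv_block(bands, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)         # 64 = 32 skip + 32 upsampled
        self.out = nn.Conv2d(32, classes, 1)  # 1x1 output convolution
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([e1, self.up(e2)], dim=1))
        return self.out(d)

net = MiniUNet()
logits = net(torch.randn(1, 4, 128, 128))     # per-pixel class scores
```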
The hardware environment for training, validation, and testing is: an Intel(R) Xeon(R) Gold 6226R 2.9 GHz 16-core processor, an NVIDIA GeForce RTX 3090 24 GB graphics card, and 256 GB of Haisha DDR4 memory. The software environment is Python 3.7 and PyTorch 1.7.1.
The learning rate is set to 1 × 10⁻⁷ and adaptively adjusted by calling ReduceLROnPlateau in PyTorch: when the validation loss has not decreased for 20 epochs, the learning rate is reduced to one tenth of its value. The number of classes is 2, the batch size is 128, the number of bands is 4, and the number of training epochs is set to 400; the input image size is 128 × 128 pixels. The Adam algorithm is selected as the network optimizer, and the cross entropy loss function is selected as the loss function. In addition, to prevent overfitting and vanishing gradients, an L2 regularization term with coefficient 5 × 10⁻⁴ is added during training.
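Under those settings, the optimizer, scheduler, and loss could be wired up in PyTorch roughly as follows. The stand-in model and short loop are placeholders, and using `weight_decay` as the L2 regularization coefficient is one common way to realize the term described above, not necessarily the patent's exact implementation.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(4, 2, 1)                   # stand-in for the U-Net model
# Adam optimizer; weight_decay supplies the L2 regularization term.
opt = torch.optim.Adam(model.parameters(), lr=1e-7, weight_decay=5e-4)
# Reduce the learning rate to one tenth once the monitored validation
# loss has not improved for 20 epochs.
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode="min", factor=0.1, patience=20)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                       # illustrative short loop
    val_loss = 1.0                           # placeholder validation loss
    sched.step(val_loss)

lr = opt.param_groups[0]["lr"]               # unchanged: patience not exceeded
```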
In step S4, training a U-Net semantic segmentation model: and training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period.
Specifically, after all parameters are set, the same U-Net network model is used for training on the training and validation data sets of each growth period of the study region. During training for the five growth periods, the validation set accuracy is set as the monitored quantity; after the model stabilizes, the weights achieving the maximum validation accuracy are automatically saved as the optimal weights, and the model under these optimal weights is stored.
In step S5, calling the U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image.
In step S6, the U-Net semantic segmentation model under the optimal weights of each growth period is called, the spatial distribution data sets of each growth period are classified, and the classification results are stitched to generate the winter wheat spatial distribution map of each growth period.
Specifically, call the U-Net semantic segmentation model under the optimal weights of each growth period, classify the spatial distribution data set of each growth period, and stitch the classification results using the gdal library in Python to generate the winter wheat spatial distribution map of each growth period.
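The stitching step can be illustrated without GDAL by reassembling classified blocks from their (row, col) indices with NumPy. This is a simplified stand-in for the gdal-based mosaicking actually used (georeferencing is omitted), with never-classified all-background tiles filled with the background class.

```python
import numpy as np

def stitch(blocks, nrows, ncols, tile=512, fill=0):
    """Reassemble classified tile results into one map. `blocks` maps
    (row, col) indices to (tile, tile) class arrays; missing blocks
    (all-background tiles that were never classified) are filled."""
    mosaic = np.full((nrows * tile, ncols * tile), fill, dtype=np.uint8)
    for (r, c), block in blocks.items():
        mosaic[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = block
    return mosaic

parts = {(0, 0): np.ones((512, 512), np.uint8),
         (1, 1): np.ones((512, 512), np.uint8)}
full = stitch(parts, nrows=2, ncols=2)        # 1024 x 1024 class map
```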
And S7, evaluating the classification precision of the winter wheat by the U-Net semantic segmentation model.
In some embodiments, in the step S7, the specific method for assessing the classification accuracy of the winter wheat by the U-Net semantic segmentation model includes:
and comparing the classified result images with the self-made labels, and quantitatively evaluating the winter wheat semantic segmentation accuracy of the five growth period test set images by using the accuracy rate, the recall rate, the F1-score, the cross-over ratio and the accuracy rate.
Specifically,
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
IoU = TP / (TP + FP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)
in the formula: recall represents the Recall ratio, precision represents the precision ratio, accuracy represents the Accuracy ratio, ioU represents the cross-over ratio, TP represents the number of true positives, TN represents the true negatives, FP represents the false positives, FN represents the false negatives, and F1-score is the harmonic mean of the precision ratio and the Recall ratio.
As shown in fig. 2, the 2019-2020 winter wheat spatial distribution map of the main northern Anhui study region is generated by the deep learning method of the invention. With the U-Net semantic segmentation classification method, the IoU values obtained on the test sets of the different growth periods are 0.78, 0.84, 0.86, 0.88, and 0.82, respectively; the jointing-heading stage is highest, with the best model performance. In addition, the precision, recall, F1-score, and accuracy at the jointing-heading stage are 0.94, 0.93, 0.94, and 0.94, respectively.
In step S8, spatial mapping and area extraction of winter wheat: select the winter wheat spatial distribution map of the growth period with the highest classification precision, count the winter wheat area to obtain the extracted area, and evaluate the precision of the extracted area against the ground truth value.
In some embodiments, in the step S8, the specific formula for applying the extracted area and the ground truth value to evaluate the precision of the extracted area is:
P = S / S' × 100%
in the formula, P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
Specifically, the winter wheat spatial distribution map of the jointing-heading stage is imported into GEE, and the ee.Image.pixelArea() function is called to count the winter wheat area. The winter wheat areas of all pixels in the period are summed to obtain the extracted winter wheat area of the whole study area. The winter wheat planting-area precision is the ratio of the estimated extracted area of the study area to the ground truth value; combined with the official agricultural statistical yearbook, the precision of the extracted area is evaluated with the following formula:
P = S / S' × 100%
in the formula, P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
With the deep learning method, the extracted winter wheat area of the study region at the jointing-heading stage is 895.84 thousand hectares, and the area extraction precision is 88.44%.
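The GEE pixel-area statistics described above can be sketched locally as follows. This is a toy stand-in: the actual workflow calls ee.Image.pixelArea() on the classified image inside GEE; the function name and the assumption of a constant 10 m pixel (100 m² per pixel, matching Sentinel-2's visible/NIR bands) are illustrative:

```python
def wheat_area_hectares(classified, pixel_area_m2=100.0):
    """Sum the area of pixels classified as winter wheat (value 1)
    in a 2-D class grid, then convert square metres to hectares.
    pixel_area_m2=100 corresponds to a 10 m Sentinel-2 pixel."""
    wheat_pixels = sum(v for row in classified for v in row)
    return wheat_pixels * pixel_area_m2 / 10_000.0  # m^2 -> ha

grid = [[1, 0, 1],
        [0, 1, 0]]           # 3 winter wheat pixels
area_ha = wheat_area_hectares(grid)  # 3 * 100 m^2 = 0.03 ha

def area_precision(extracted_ha, ground_truth_ha):
    """Area extraction precision P as the ratio of extracted area
    to the ground truth value, per the formula above."""
    return extracted_ha / ground_truth_ha * 100.0
```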
In conclusion, the IoU values obtained by the scheme of the invention on the test sets of the different growth periods are 0.78, 0.84, 0.86, 0.88 and 0.82 respectively; the jointing-heading stage is the highest, and the model performs best there. In addition, the precision, recall, F1-score and accuracy at the jointing-heading stage are 0.94, 0.93, 0.94 and 0.94 respectively. The extracted winter wheat area of the study region at the jointing-heading stage is 895.84 thousand hectares, and the area extraction precision is 88.44%.
The present study also used the FastFCN and DeepLabV3+ semantic segmentation networks in deep learning to evaluate the recognition accuracy of winter wheat at the jointing-heading stage, as shown in Table 1.
TABLE 1 Accuracy evaluation indexes of winter wheat at the jointing-heading stage for different semantic segmentation networks

Network      Precision  Recall  F1-score  Accuracy  IoU
U-Net        0.94       0.93    0.94      0.94      0.88
FastFCN      0.91       0.91    0.91      0.92      0.86
DeeplabV3+   0.92       0.92    0.92      0.93      0.88
Compared with these two networks, the U-Net network achieves better winter wheat semantic segmentation performance under small-sample data, and its hardware requirements are modest, which makes the U-Net semantic segmentation method easy to popularize for winter wheat planting-area extraction.
A second aspect of the invention discloses a winter wheat remote sensing identification and analysis system based on deep learning. FIG. 3 is a structural diagram of such a system according to an embodiment of the invention; as shown in fig. 3, the system 100 includes:
the first processing module 101 is configured to select a required polygonal area from part of the study area, create a label vector file of the polygonal area, convert the label vector file into a raster file and reclassify it to generate square vector data, batch-cut the Sentinel-2 median composite images of five growth periods of the polygonal area and the raster file using the square vector data, and adjust the size of the cut Sentinel-2 median composite images and raster file to obtain the training data set, the verification data set and the test data set; the obtained Sentinel-2 median composite images of the five growth periods share one label;
a second processing module 102, configured to cut and process the Sentinel-2 median composite images of five growth periods of the whole study area to obtain a spatial distribution data set;
the third processing module 103 is configured to construct a U-Net semantic segmentation model and set parameters;
a fourth processing module 104, configured to train the U-Net semantic segmentation model with the training data set and the verification data set of each growth period as inputs, to obtain a U-Net semantic segmentation model under the optimal weight of each growth period;
the fifth processing module 105 is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the test set of each growth period, and obtain a classified result image;
the sixth processing module 106 is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice classification results to generate a winter wheat spatial distribution map of each growth period;
a seventh processing module 107, configured to evaluate the classification accuracy of winter wheat by using a U-Net semantic segmentation model;
the eighth processing module 108 is configured to select the winter wheat spatial distribution map of the growing season with the highest classification precision, count the area of the winter wheat to obtain an extraction area, and perform precision evaluation on the extraction area by using the extraction area and the ground real value.
According to the system of the second aspect of the present invention, the first processing module 101 is specifically configured such that the specific method for creating the label vector file of the polygonal area includes:
cutting a synthetic image of the winter wheat in the whole growth period by using the polygonal surface elements of the polygonal area, and establishing a label vector file of the polygonal area by referring to the synthetic images of five growth periods and field real measuring points of the winter wheat;
the specific method for converting the label vector file into a raster file and reclassifying to generate square vector data comprises the following steps:
converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements in the polygonal area, then establishing a graphic buffer for the point elements to generate square vector data with a size of 1280 m × 1280 m, and ensuring that the boundaries of the square vector data lie entirely within the raster;
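The random-point-plus-square-buffer step can be sketched as follows. In the patent this is done with ArcMap buffer tools; the function below is a hypothetical pure-Python stand-in whose names, extent and rejection strategy are illustrative only:

```python
import random

def square_buffers(extent, size=1280.0, n=100, seed=0):
    """Randomly create point elements inside a raster extent
    (xmin, ymin, xmax, ymax) and buffer each into a size x size
    square, keeping only squares whose boundaries lie entirely
    within the raster, as the text requires."""
    xmin, ymin, xmax, ymax = extent
    half = size / 2.0
    rng = random.Random(seed)
    squares = []
    while len(squares) < n:
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        # reject squares that would cross the raster boundary
        if (xmin <= x - half and x + half <= xmax and
                ymin <= y - half and y + half <= ymax):
            squares.append((x - half, y - half, x + half, y + half))
    return squares

tiles = square_buffers((0.0, 0.0, 100_000.0, 100_000.0), n=10)
```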
the Sentinel-2 median composite image comprises:

four bands: red, green, blue and near-infrared;

the sizes of the Sentinel-2 median composite image and the raster file obtained by cutting are adjusted to 128 × 128 pixels.
Specifically, since the study area lacks winter wheat label data, and the training process of U-Net requires an image data set and corresponding labels, the invention performs deep learning classification and extraction of winter wheat by making the data set with the following method.
Since the image data of the study area are large, making labels for all of them would be time-consuming and labor-intensive; therefore, a required polygonal area is selected from part of the study area. The principles for selecting the area are: (1) select areas whose ground-object image features are distinct and whose contrast makes them easy to distinguish; (2) the selected areas must cover all ground-feature types; (3) the selected areas should come from the various cities within the study area.
Then, in ArcMap, a composite image of the whole winter wheat growth cycle is cut using the polygonal surface elements of the polygonal area, a label vector file of the polygonal area is created with reference to the composite images of the five growth periods and the field measurement points of winter wheat, the id field value of the label vector file is modified, and a spatial reference is set for the label vector file.
The label vector file is converted into a raster file and reclassified into classes 0 and 1, point elements are randomly created in the polygonal area, and a graphic buffer is then established for the point elements to generate square vector data with a size of 1280 m × 1280 m, ensuring that the boundaries of the square vector data lie entirely within the raster.
The square vector data are used to batch-cut the Sentinel-2 median composite images (comprising the red, green, blue and near-infrared bands) of the five growth periods of the polygonal area and the raster file, and a Python program is used to adjust the size of the cut Sentinel-2 median composite images and raster file to 128 × 128 pixels.
This completes the data set production. The data set was randomly divided into a training set, a validation set and a test set in a proportion of 7. Before the training set images are used as input to the training algorithm, they are normalized, and the training data set is expanded: new images are generated by adjusting the colors of the training set images and applying rotation and symmetry operations.
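The split and rotation-symmetry expansion can be sketched in plain Python. Note that the exact split ratio is garbled in the source text ("a proportion of 7"), so the 7:2:1 ratio below is an assumption for illustration only, as are the function names:

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle and split samples into train/val/test subsets.
    The 7:2:1 ratio is assumed for illustration; the source
    text does not state it unambiguously."""
    rng = random.Random(seed)
    s = list(samples)
    rng.shuffle(s)
    n = len(s)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return s[:n_train], s[n_train:n_train + n_val], s[n_train + n_val:]

def augment(tile):
    """Data expansion by rotational symmetry: return the four 90-degree
    rotations of a square image tile (a minimal stand-in for the
    rotation/symmetry operations described above)."""
    def rot90(t):
        return [list(r) for r in zip(*t[::-1])]
    out, cur = [tile], tile
    for _ in range(3):
        cur = rot90(cur)
        out.append(cur)
    return out

train, val, test = split_dataset(range(100))
views = augment([[1, 2], [3, 4]])
```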
In the system according to the second aspect of the present invention, the second processing module 102 is specifically configured such that cutting and processing the Sentinel-2 median composite images of the five growth periods of the whole study area to obtain the spatial distribution data set specifically includes:
cutting the Sentinel-2 median composite images of the five growth periods of the whole study area into image blocks of 512 × 512 pixels, and removing the image blocks whose values are all background to obtain the spatial distribution data set.
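The tiling-and-filtering step can be sketched as follows (a toy 4 × 4 tile is used in place of 512 × 512 so the example stays small; the function name is illustrative):

```python
def tile_and_filter(image, tile=4, background=0):
    """Cut a 2-D array into tile x tile blocks and drop blocks that
    are entirely background, mirroring the 512 x 512 tiling with
    removal of all-background blocks described above."""
    h, w = len(image), len(image[0])
    blocks = []
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            block = [row[j:j + tile] for row in image[i:i + tile]]
            # keep the block only if it contains a non-background value
            if any(v != background for row in block for v in row):
                blocks.append(block)
    return blocks

img = [[0] * 8 for _ in range(4)]
img[1][6] = 1  # one non-background pixel in the right half
kept = tile_and_filter(img)  # only the right-hand block survives
```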
According to the system of the second aspect of the present invention, the third processing module 103 is specifically configured such that the U-Net network structure consists of a down-sampling part and an up-sampling part, the whole network being shaped like a 'U'. The first part performs feature extraction on the input image through convolutional layers and max-pooling layers, each 3 × 3 convolutional layer being followed by a ReLU activation function and a 2 × 2 max-pooling operation. The second part restores the resolution by performing deconvolution operations and concatenating the results with the corresponding feature maps, and a 1 × 1 convolution kernel is adopted in the final output layer. U-Net is based on an encoder-decoder structure and achieves feature fusion through concatenation; the structure is concise and stable.
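As a rough illustration of the 'U' shape, the feature-map side length can be tracked through the two paths. This sketch assumes padded 3 × 3 convolutions (so convolution preserves size) and four pooling stages; both assumptions are illustrative, since the patent does not fix the network depth:

```python
def unet_spatial_sizes(input_size=128, depth=4):
    """Track the feature-map side length through the contracting path
    (each 2x2 max pooling halves it) and the expanding path (each
    deconvolution doubles it), illustrating the symmetric 'U'."""
    down = [input_size]
    for _ in range(depth):          # encoder: repeated halving
        down.append(down[-1] // 2)
    up = [down[-1]]
    for _ in range(depth):          # decoder: repeated doubling
        up.append(up[-1] * 2)
    return down, up

down, up = unet_spatial_sizes()  # 128-pixel input, as configured above
```

With the 128 × 128 input used in this work, the encoder bottoms out at an 8 × 8 feature map before the decoder restores the original resolution, and each decoder stage concatenates with the encoder feature map of matching size.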
The hardware environment for the training, verification and testing processes is: an Intel(R) Xeon(R) Gold 6226R 2.9 GHz 16-core processor, an NVIDIA GeForce RTX 3090 24 GB graphics card, and 256 GB of DDR4 memory. The software environment is Python 3.7 with PyTorch 1.7.1.
The initial learning rate is set to 1 × 10⁻⁷, and the learning rate is adjusted adaptively by calling ReduceLROnPlateau in PyTorch: when the validation-set loss function has stopped decreasing for 20 epochs, the learning rate is reduced to one tenth of its previous value. The number of classes is set to 2 (winter wheat and background, following the 0/1 reclassification), the batch size to 128, the number of bands to 4, the number of training epochs to 400, and the input image size to 128 × 128 pixels. The Adam algorithm is selected for the network optimizer, and the cross-entropy loss function is chosen as the loss function. In addition, to prevent overfitting and vanishing gradients, an L2 regularization term with a coefficient of 5 × 10⁻⁴ is added during training.
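The adaptive schedule just described can be sketched in plain Python. This is a simplification of PyTorch's ReduceLROnPlateau, not its implementation; the function name and the toy loss curve are illustrative:

```python
def simulate_plateau_schedule(val_losses, lr=1e-7, patience=20, factor=0.1):
    """Mimic the schedule described above: when the validation loss
    has not improved for `patience` consecutive epochs, cut the
    learning rate to one tenth. Returns the LR used at each epoch."""
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait > patience:   # plateau of more than `patience` epochs
                lr *= factor
                wait = 0
        history.append(lr)
    return history

# one improving epoch followed by 25 flat epochs -> exactly one LR cut
hist = simulate_plateau_schedule([1.0] + [1.0] * 25)
```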
According to the system of the second aspect of the present invention, the fourth processing module 104 is specifically configured such that, after all parameters are set, the same U-Net network model is used to train on the training and verification data sets of each growth period of the study area. In the model training process for the 5 growth periods, the monitored quantity is set to the validation-set accuracy; after the model stabilizes, the weights at the maximum validation-set accuracy are automatically saved as the optimal weights, and the model under the optimal weights is stored.
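The best-weight selection described above amounts to keeping the checkpoint with the maximum validation accuracy, which can be sketched as (a hypothetical helper; the real workflow saves model weights, not just the index):

```python
def best_checkpoint(val_accuracies):
    """Return the epoch index and value of the maximum validation-set
    accuracy, i.e. the point at which the optimal weights would be
    saved under the monitoring strategy described above."""
    best_epoch = max(range(len(val_accuracies)),
                     key=val_accuracies.__getitem__)
    return best_epoch, val_accuracies[best_epoch]

# toy validation-accuracy trace over five epochs
epoch, acc = best_checkpoint([0.80, 0.91, 0.89, 0.93, 0.92])
```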
According to the system of the second aspect of the present invention, the sixth processing module 106 is specifically configured to invoke the U-Net semantic segmentation model under the optimal weights of each growth period, classify the spatial distribution data sets of each growth period, and then splice the classification results with the GDAL library in Python to generate the winter wheat spatial distribution map of each growth period.
According to the system of the second aspect of the present invention, the seventh processing module 107 is specifically configured such that the specific method for assessing the classification accuracy of winter wheat by using the U-Net semantic segmentation model includes:
The classified result images are compared with the self-made labels, and the winter wheat semantic segmentation accuracy on the test-set images of the five growth periods is quantitatively evaluated using precision, recall, F1-score, intersection-over-union ratio (IoU), and accuracy.
Specifically, the five metrics are defined as:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

Accuracy = (TP + TN) / (TP + TN + FP + FN)

IoU = TP / (TP + FP + FN)

F1-score = 2 × Precision × Recall / (Precision + Recall)
in the formula: recall represents the Recall ratio, precision represents the precision ratio, accuracy represents the Accuracy ratio, ioU represents the cross-over ratio, TP represents the number of true positives, TN represents the true negatives, FP represents the false positives, FN represents the false negatives, and the F1-score is the harmonic mean of the precision ratio and the Recall ratio.
According to the system of the second aspect of the present invention, the eighth processing module 108 is specifically configured such that the specific formula for applying the extracted area and the ground truth value to evaluate the precision of the extracted area is:
P = S / S' × 100%
in the formula, P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
Specifically, the winter wheat spatial distribution map of the jointing-heading stage is imported into GEE, and the ee.Image.pixelArea() function is called to count the winter wheat area. The winter wheat areas of all pixels in the period are summed to obtain the extracted winter wheat area of the whole study area. The winter wheat planting-area precision is the ratio of the estimated extracted area of the study area to the ground truth value; combined with the official agricultural statistical yearbook, the precision of the extracted area is evaluated with the following formula:
P = S / S' × 100%
in the formula, P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
A third aspect of the invention discloses an electronic device. The electronic equipment comprises a memory and a processor, the memory stores a computer program, and when the processor executes the computer program, the steps of the winter wheat remote sensing identification analysis method based on deep learning in any one of the first aspect of the disclosure are realized.
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the electronic device includes a processor, a memory, a communication interface, a display screen and an input device connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and computer programs in the nonvolatile storage medium. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through Wi-Fi, an operator network, Near Field Communication (NFC) or other technologies. The display screen of the electronic device may be a liquid crystal display or an electronic ink display, and the input device of the electronic device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the electronic device, or an external keyboard, touchpad or mouse.
It will be understood by those skilled in the art that the structure shown in fig. 4 is only a partial block diagram related to the technical solution of the present disclosure, and does not constitute a limitation of the electronic device to which the solution of the present application is applied, and a specific electronic device may include more or less components than those shown in the drawings, or combine some components, or have a different arrangement of components.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the method for identifying and analyzing winter wheat based on deep learning remote sensing are realized.
It should be noted that the technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description. The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (7)

1. A winter wheat remote sensing identification analysis method based on deep learning is characterized by comprising the following steps:
s1, making the training, verification and test data sets: selecting a required polygonal area from part of a study area, creating a label vector file of the polygonal area, converting the label vector file into a raster file and reclassifying it to generate square vector data, batch-cutting the Sentinel-2 median composite images of five growth periods of the polygonal area and the raster file using the square vector data, and adjusting the size of the cut Sentinel-2 median composite images and raster file to obtain a training data set, a verification data set and a test data set; the obtained Sentinel-2 median composite images of the five growth periods share one label;
s2, cutting and processing the Sentinel-2 median composite images of five growth periods of the whole study area to obtain a spatial distribution data set;
s3, building a U-Net semantic segmentation model and setting parameters;
s4, training a U-Net semantic segmentation model: training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
s5, calling a U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image;
s6, calling a U-Net semantic segmentation model under the optimal weight of each growth period, classifying the spatial distribution data sets of each growth period, and splicing classification results to generate a winter wheat spatial distribution map of each growth period;
s7, evaluating winter wheat classification precision of the U-Net semantic segmentation model;
s8, spatial mapping and area extraction of winter wheat: selecting the winter wheat spatial distribution map of the growth period with the highest classification precision, counting the winter wheat area to obtain an extracted area, and applying the extracted area and the ground truth value to evaluate the precision of the extracted area; in step S1, the specific method for creating the label vector file of the polygonal area includes:
cutting a synthetic image of the winter wheat in the whole growth period by using the polygonal surface elements of the polygonal area, and establishing a label vector file of the polygonal area by referring to the synthetic images of five growth periods and field real measuring points of the winter wheat;
in the step S1, the specific method for converting the tag vector file into a raster file, reclassifying the raster file, and generating square vector data includes:
converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements in the polygonal area, then establishing a graphic buffer for the point elements to generate square vector data with a size of 1280 m × 1280 m, and ensuring that the boundaries of the square vector data lie entirely within the raster;
in the step S7, the specific method for evaluating the classification accuracy of the winter wheat by using the U-Net semantic segmentation model includes:
comparing the classified result images with the self-made labels, and quantitatively evaluating the winter wheat semantic segmentation accuracy on the test-set images of the five growth periods using precision, recall, F1-score, intersection-over-union ratio (IoU), and accuracy.
2. The winter wheat remote sensing identification and analysis method based on deep learning of claim 1, wherein in the step S1, the Sentinel-2 median composite image comprises:
four bands: red, green, blue and near-infrared;

the sizes of the Sentinel-2 median composite image and the raster file obtained by cutting are adjusted to: 128 × 128 pixels.
3. The winter wheat remote sensing identification and analysis method based on deep learning of claim 1, wherein in the step S2, the specific method for cutting and processing the Sentinel-2 median composite images of five growth periods of the whole study area to obtain the spatial distribution data set comprises:
cutting the Sentinel-2 median composite images of five growth periods of the whole study area into image blocks of 512 × 512 pixels, and removing the image blocks whose values are all background to obtain a spatial distribution data set.
4. The winter wheat remote sensing identification analysis method based on deep learning of claim 1, wherein in the step S8, the specific formula for applying the extracted area and the ground true value to perform precision evaluation on the extracted area comprises:
P = S / S' × 100%
in the formula, P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
5. A winter wheat remote sensing identification and analysis system based on deep learning, characterized in that the system comprises:
the first processing module is configured to select a required polygonal area from part of a study area, create a label vector file of the polygonal area, convert the label vector file into a raster file and reclassify it to generate square vector data, batch-cut the Sentinel-2 median composite images of five growth periods of the polygonal area and the raster file using the square vector data, and adjust the size of the cut Sentinel-2 median composite images and raster file to obtain a training data set, a verification data set and a test data set; the obtained Sentinel-2 median composite images of the five growth periods share one label;
cutting a synthetic image of the winter wheat in the whole growth period by using the polygonal surface elements of the polygonal area, and establishing a label vector file of the polygonal area by referring to the synthetic images of five growth periods and field real measuring points of the winter wheat;
converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements in the polygonal area, then establishing a graphic buffer for the point elements to generate square vector data with a size of 1280 m × 1280 m, and ensuring that the boundaries of the square vector data lie entirely within the raster;
the second processing module is configured to cut and process the Sentinel-2 median composite images of five growth periods of the whole study area to obtain a spatial distribution data set;
the third processing module is configured to build a U-Net semantic segmentation model and set parameters;
the fourth processing module is configured to train the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
the fifth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the test set of each growth period and obtain a classified result image;
the sixth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice classification results to generate a winter wheat spatial distribution map of each growth period;
the seventh processing module is configured to evaluate the classification precision of winter wheat of the U-Net semantic segmentation model;
comparing the classified result images with the self-made labels, and quantitatively evaluating the winter wheat semantic segmentation accuracy on the test-set images of the five growth periods using precision, recall, F1-score, intersection-over-union ratio (IoU), and accuracy;
and the eighth processing module is configured to select the winter wheat spatial distribution map of the growing period with the highest classification precision, count the area of the winter wheat to obtain an extraction area, and perform precision evaluation on the extraction area by applying the extraction area and the ground real value.
6. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the remote sensing identification analysis method for winter wheat based on deep learning according to any one of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the method for remote recognition and analysis of winter wheat based on deep learning according to any one of claims 1 to 4 are implemented.
CN202210117044.7A 2022-02-07 2022-02-07 Winter wheat remote sensing identification analysis method and system based on deep learning Active CN114463637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210117044.7A CN114463637B (en) 2022-02-07 2022-02-07 Winter wheat remote sensing identification analysis method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210117044.7A CN114463637B (en) 2022-02-07 2022-02-07 Winter wheat remote sensing identification analysis method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN114463637A CN114463637A (en) 2022-05-10
CN114463637B true CN114463637B (en) 2023-04-07

Family

ID=81411499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210117044.7A Active CN114463637B (en) 2022-02-07 2022-02-07 Winter wheat remote sensing identification analysis method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114463637B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082808B (en) * 2022-06-17 2023-05-09 安徽大学 Soybean planting area extraction method based on high-resolution first data and U-Net model
CN115578637B (en) * 2022-10-17 2023-05-30 中国科学院空天信息创新研究院 Winter wheat estimated yield analysis method and system based on long-term and short-term memory network
CN115690585B (en) * 2022-11-11 2023-06-06 中国科学院空天信息创新研究院 Method and system for extracting wheat tillering number based on digital photo
CN116052141B (en) * 2023-03-30 2023-06-27 北京市农林科学院智能装备技术研究中心 Crop growth period identification method, device, equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460936A (en) * 2020-03-18 2020-07-28 中国地质大学(武汉) Remote sensing image building extraction method, system and electronic equipment based on U-Net network
CN112183428A (en) * 2020-10-09 2021-01-05 浙江大学中原研究院 Wheat planting area segmentation and yield prediction method
CN112669325B (en) * 2021-01-06 2022-10-14 大连理工大学 Video semantic segmentation method based on active learning
CN113487638A (en) * 2021-07-06 2021-10-08 南通创越时空数据科技有限公司 Ground feature edge detection method of high-precision semantic segmentation algorithm U2-net

Also Published As

Publication number Publication date
CN114463637A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN114463637B (en) Winter wheat remote sensing identification analysis method and system based on deep learning
CN107909039B (en) High-resolution remote sensing image earth surface coverage classification method based on parallel algorithm
CN114092833B (en) Remote sensing image classification method and device, computer equipment and storage medium
CN112949738B (en) Multi-class unbalanced hyperspectral image classification method based on EECNN algorithm
CN111222545B (en) Image classification method based on linear programming incremental learning
CN111160114B (en) Gesture recognition method, gesture recognition device, gesture recognition equipment and computer-readable storage medium
CN113096080B (en) Image analysis method and system
CN113901900A (en) Unsupervised change detection method and system for homologous or heterologous remote sensing image
CN116863345A (en) High-resolution image farmland recognition method based on dual attention and scale fusion
CN115995005B (en) Crop extraction method and device based on single-period high-resolution remote sensing image
CN113221731B (en) Multi-scale remote sensing image target detection method and system
CN117409330B (en) Aquatic vegetation identification method, aquatic vegetation identification device, computer equipment and storage medium
CN115953612A (en) ConvNeXt-based remote sensing image vegetation classification method and device
CN117474863A (en) Chip surface defect detection method for compressed multi-head self-attention neural network
CN117197462A (en) Lightweight foundation cloud segmentation method and system based on multi-scale feature fusion and alignment
CN113096079B (en) Image analysis system and construction method thereof
CN111079807A (en) Ground object classification method and device
CN113705538A (en) High-resolution remote sensing image road change detection device and method based on deep learning
CN112149518A (en) Pine cone detection method based on BEGAN and YOLOV3 models
CN117237599A (en) Image target detection method and device
CN109190451B (en) Remote sensing image vehicle detection method based on LFP characteristics
CN114998672B (en) Small sample target detection method and device based on meta learning
CN116385820A (en) Method and device for predicting chlorophyll concentration of water body based on multispectral image
CN115019044A (en) Individual plant segmentation method and device, terminal device and readable storage medium
CN116824419A (en) Dressing feature recognition method, recognition model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant