CN111814563B - Method and device for classifying planting structures - Google Patents

Method and device for classifying planting structures

Info

Publication number
CN111814563B
CN111814563B (application CN202010526298.5A)
Authority
CN
China
Prior art keywords
remote sensing
index
classifying
network model
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010526298.5A
Other languages
Chinese (zh)
Other versions
CN111814563A (en)
Inventor
李卫东
孟凡谦
白林燕
赵晨曦
李磊
王亚兵
刘甲
张定文
刘钦灏
吴峥嵘
时春波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN202010526298.5A priority Critical patent/CN111814563B/en
Publication of CN111814563A publication Critical patent/CN111814563A/en
Application granted granted Critical
Publication of CN111814563B publication Critical patent/CN111814563B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a device for classifying a planting structure. Remote sensing data are first preprocessed: planting structure indices are extracted from the remote sensing data, the determined indices are added to the original remote sensing image data, and the images of a plurality of bands are fused. After labeling, the fused images serve as a training set for training a constructed network model; the remote sensing images to be classified are then preprocessed and input into the trained network model to classify the planting structure. By increasing the spatial resolution and spectral richness of the training images, the invention determines the parameters in the subsequent network model training and accurately classifies the remote sensing images under test.

Description

Method and device for classifying planting structures
Technical Field
The invention relates to a classification method and a classification device for a planting structure, and belongs to the technical field of agricultural information.
Background
The crop planting structure is an important component of the spatial pattern of crops, and its spatio-temporal variation is an important expression of the crop planting mode of a region or a production unit. Yield estimation, planting structure adjustment, and the optimization of regional crops all depend on determining changes in the agricultural planting structure accurately and in a timely manner; extracting the crop planting structure by combining remote sensing with emerging technologies therefore has good application prospects and great development potential.
Meanwhile, with the development of machine learning, algorithms such as neural networks (NN) and support vector machines (SVM) have been applied to the classification of high-resolution remote sensing images. However, these are shallow learning algorithms that cannot express complex functions well, so such models cannot handle semantic segmentation problems with large sample sizes and high complexity.
Researchers have since combined the convolutional neural network (CNN), with its advantages of automatic feature extraction and automatic classification, with the remote sensing field to classify planting structures. However, current methods rely on a single data source, the classification quality for land use within villages is low, and land use information within villages cannot be acquired accurately.
Disclosure of Invention
The invention aims to provide a method and a device for classifying planting structures, so as to solve the prior-art problem that land use information within villages cannot be acquired accurately.
In order to achieve the purpose, the technical scheme of the classification method of the planting structure comprises the following steps:
1) acquiring original remote sensing image data;
2) preprocessing original remote sensing image data:
extracting at least one planting structure index among NDVI, AWEI and SAVI from the original remote sensing image data, where NDVI is the normalized difference vegetation index, AWEI is the automatic water body extraction index, and SAVI is the soil-adjusted vegetation index;
adding the extracted planting structure index into original remote sensing image data to obtain an image;
fusing images of a plurality of wave bands;
labeling the fused images to obtain a training set;
3) constructing a network model;
4) inputting the training set into the constructed network model for training to obtain a trained network model;
5) preprocessing the remote sensing images to be classified and inputting them into the trained network model to classify the planting structure.
The beneficial effects of the invention are:
According to the method, the remote sensing data are preprocessed: planting structure indices are extracted from the remote sensing data, the determined indices are added to the original remote sensing image data, and the images of a plurality of bands are fused. This increases the spatial resolution and spectral richness of the training images, so that the parameters in the subsequent network model training are well determined and the remote sensing images under test are classified accurately.
Further, the fusion adopts the Gram-Schmidt (G-S) image fusion method.
Further, the normalized difference vegetation index is expressed by the following formula:
NDVI=(NIR-RED)/(NIR+RED)
in the formula, NIR is the reflectivity of a near infrared band, and RED is the reflectivity of a RED band.
Further, the formula of the automatic water body extraction index is as follows:
AWEI=BLUE+2.5*GREEN-1.5*(NIR+SWIR1)-0.25*SWIR2
in the formula, BLUE is a BLUE light waveband, GREEN is a GREEN waveband, and SWIR1 and SWIR2 are short-wave infrared wavebands.
Further, the formula of the soil-adjusted vegetation index is:
SAVI = (NIR - RED) * (1 + L) / (NIR + RED + L)
in the formula, L is the soil-adjustment factor reflecting the vegetation coverage condition, and L = 0.5.
Further, the constructed network model is a U-Net network model, and the U-Net network model comprises a depthwise separable convolution module and an activation function.
The invention also provides a technical scheme of the planting structure classifying device, the device comprises a processor and a memory, and the processor executes the technical scheme of the planting structure classifying method stored in the memory.
Drawings
FIG. 1 is a flow chart of an embodiment of a method of classifying a planting structure of the present invention;
FIG. 2 is a schematic diagram of the structure of the U-Net network model of the present invention;
FIG. 3-a is a conventional U-Net training accuracy curve;
FIG. 3-b is a Separable-UNet training accuracy curve of the present invention;
FIG. 3-c is a Mish-Separable-UNet training accuracy curve of the present invention;
FIG. 4-a is the classification result for the seven bands of the original remote sensing image data;
FIG. 4-b is the classification result with NDVI added to the seven bands of the original remote sensing image data;
FIG. 4-c is the classification result with AWEI added to the seven bands of the original remote sensing image data;
FIG. 4-d is the classification result with SAVI added to the seven bands of the original remote sensing image data;
FIG. 4-e is the classification result with NDVI and AWEI added to the seven bands of the original remote sensing image data;
FIG. 4-f is the classification result with NDVI and SAVI added to the seven bands of the original remote sensing image data;
FIG. 4-g is the classification result with AWEI and SAVI added to the seven bands of the original remote sensing image data;
FIG. 4-h is the classification result with all three planting structure indices added to the seven bands of the original remote sensing image data;
FIG. 5 is a schematic structural diagram of the planting structure classification device of the present invention.
Detailed Description
The scheme of the invention is concretely explained by combining the drawings and the specific embodiment.
The embodiment of the planting structure classification method comprises the following steps:
Taking an irrigation area in a certain city of a certain province as an example, the planting structure classification method is introduced in detail using Landsat 8 remote sensing image data of the area, acquired on a day in May and stored in a database.
The planting structure classification method disclosed by the invention comprises the following steps, as shown in FIG. 1:
1) acquiring original remote sensing image data;
in this embodiment, the original remote sensing image data is an acquired Landsat 8 multispectral satellite dataset.
2) Preprocessing original remote sensing image data:
extracting at least one planting structure index among NDVI, AWEI and SAVI from the original remote sensing image data, where NDVI is the normalized difference vegetation index, AWEI is the automatic water body extraction index, and SAVI is the soil-adjusted vegetation index;
adding the extracted planting structure index into original remote sensing image data to obtain an image;
fusing images of a plurality of wave bands;
labeling the fused images to obtain a training set;
the most representative Normalized Difference Vegetation Index is constructed according to the characteristics that the reflectance of chlorophyll in Vegetation is relatively high in a green band with low reflectance in a red band of a visible light band and gradually decreases in reflectance in a range from a near infrared band to a middle infrared band, and the formula is as follows:
NDVI=(NIR-RED)/(NIR+RED)
in the formula, NIR is the reflectivity of a near infrared band, and RED is the reflectivity of a RED band.
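As an illustrative sketch only (not part of the patent disclosure), the NDVI computation can be written in a few lines of Python with NumPy; the band-array inputs and the epsilon guard against division by zero are assumptions of this sketch:

import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index, NDVI = (NIR - RED) / (NIR + RED).

    nir and red are reflectance arrays of identical shape; eps avoids
    division by zero over dark pixels (an implementation choice, not
    specified in the patent).
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)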
The automatic water extraction index (AWEI) addresses the low classification accuracy and unstable threshold selection that affect conventional water extraction; its formula is:
AWEI_sh = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2
in the formula, BLUE is the blue band, GREEN is the green band, and SWIR1 and SWIR2 are short-wave infrared bands.
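A matching NumPy sketch of the AWEI_sh computation, under the same assumptions (all inputs are co-registered reflectance arrays; function and variable names are illustrative):

import numpy as np

def awei_sh(blue, green, nir, swir1, swir2):
    """Automatic water extraction index (shadow variant):
    AWEI_sh = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2.
    """
    blue, green, nir, swir1, swir2 = (np.asarray(b, dtype=np.float64)
                                      for b in (blue, green, nir, swir1, swir2))
    return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2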
The soil-adjusted vegetation index (SAVI) introduces a soil-adjustment factor to improve the recognition capability of the normalized index over different soil backgrounds; its formula is:
SAVI = (NIR - RED) * (1 + L) / (NIR + RED + L)
in the formula, L is the soil-adjustment factor reflecting the vegetation coverage condition: L = 0 corresponds to no vegetation and L = 1 to complete vegetation coverage. According to field conditions in the study area, the optimal value L = 0.5 is selected.
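A corresponding NumPy sketch of SAVI with the embodiment's choice of L = 0.5 (the eps guard is again an assumption of this sketch):

import numpy as np

def savi(nir, red, L=0.5, eps=1e-12):
    """Soil-adjusted vegetation index:
    SAVI = (NIR - RED) * (1 + L) / (NIR + RED + L),
    with L = 0.5 as chosen for the study area in the embodiment.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) * (1.0 + L) / (nir + red + L + eps)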
It should be noted that in this embodiment the three planting indices NDVI, AWEI and SAVI are selected for their complementary characteristics and added to the band 1-7 remote sensing image to enrich the spectral information: NDVI and SAVI effectively distinguish spectral differences among vegetation types and between vegetation and soil, while AWEI efficiently identifies water bodies. Selecting all three planting structure indices together therefore characterizes the land use of a region, which is significant for the subsequent rational planning of land.
As another embodiment, one or two of the above indices may be selected to obtain the index image map. Note that a single index only captures its own target well: AWEI, for example, efficiently identifies and distinguishes water bodies but does not distinguish vegetation clearly.
The images of the plurality of bands are then fused using the Gram-Schmidt (G-S) image fusion method.
It should be noted that the synthesis of the indices with the remote sensing image map in this embodiment proceeds as follows: the data dimension of the image matrix is increased to three, and the band images and the index image data matrices are stacked along the second dimension of the 3-D matrix, generating three-dimensional multi-channel data and thereby synthesizing the indices with the remote sensing image, as sketched below.
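A minimal sketch of this stacking step, assuming the band images and index images are 2-D NumPy arrays of equal height and width (the channel axis is placed last here for convenience; the patent stacks along the second dimension of the 3-D matrix):

import numpy as np

def stack_bands_and_indices(bands, indices):
    # bands: list of seven 2-D arrays (e.g. Landsat 8 bands 1-7)
    # indices: list of 2-D index images (e.g. NDVI, AWEI, SAVI)
    # returns an (H, W, C) multi-channel array, C = len(bands) + len(indices)
    return np.stack(list(bands) + list(indices), axis=-1)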
Then, on the basis of this synthesis, the synthesized data and the panchromatic band image are fused using the G-S image fusion method, improving both the spatial resolution and the spectral richness of the data. The fused data are sliced and input into the network, and training is carried out in combination with the label data.
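The patent does not detail its G-S implementation; the following is a simplified Gram-Schmidt-style pan-sharpening sketch of the usual approach (simulate a low-resolution panchromatic band, histogram-match the real panchromatic band to it, then inject the spatial detail into each channel with a per-band gain). All names and the gain formulation are assumptions of this sketch:

import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified Gram-Schmidt-style pan-sharpening.

    ms  : (H, W, C) multispectral/index stack, resampled to the pan grid.
    pan : (H, W) panchromatic band.
    Returns an (H, W, C) sharpened stack.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    sim_pan = ms.mean(axis=-1)                     # simulated low-res pan band
    # Histogram-match the real pan band to the simulated one.
    pan_m = (pan - pan.mean()) * (sim_pan.std() / (pan.std() + 1e-12)) + sim_pan.mean()
    detail = pan_m - sim_pan                       # spatial detail to inject
    sharpened = np.empty_like(ms)
    for c in range(ms.shape[-1]):
        band = ms[..., c]
        # Per-band gain: covariance of the band with the simulated pan.
        g = np.cov(band.ravel(), sim_pan.ravel())[0, 1] / (sim_pan.var() + 1e-12)
        sharpened[..., c] = band + g * detail
    return sharpened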
In the present embodiment, the labeling of the image is performed based on known data or after a field investigation.
In this embodiment, 80% of the labeled data are randomly selected as the training set for training the network, and the remaining 20% serve as the test set for result evaluation; the training images and the corresponding label images are sliced into 128 × 128-pixel tiles in one-to-one correspondence.
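A sketch of the tiling and the 80/20 split under stated assumptions (NumPy arrays, non-overlapping 128 × 128 tiles, a fixed random seed for reproducibility; none of these details are specified in the patent):

import numpy as np

def make_tiles(image, label, tile=128):
    """Cut a fused image and its label map into matching 128 x 128 tiles."""
    h, w = label.shape[:2]
    xs, ys = [], []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            xs.append(image[r:r + tile, c:c + tile])
            ys.append(label[r:r + tile, c:c + tile])
    return np.stack(xs), np.stack(ys)

def split_train_test(x, y, train_frac=0.8, seed=0):
    """Randomly reserve 80% of tiles for training and 20% for evaluation."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(x))
    n_train = int(train_frac * len(x))
    tr, te = order[:n_train], order[n_train:]
    return x[tr], y[tr], x[te], y[te]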
3) Constructing a network model;
the network model in this embodiment is a U-Net network model, which includes a depthwise separable convolution module and an activation function, as shown in FIG. 2;
the depthwise separable convolution is formed by combining a depthwise (DW) convolution and a pointwise (PW) convolution, and is used to extract features.
Adopting the depthwise separable convolution effectively reduces the amount of parameter computation: the parameter count is the sum of the two parts and is about one third of that of a conventional convolution. Moreover, the depthwise separable convolution handles spatial and channel information separately (DW over regions, PW over channels), whereas a conventional convolution processes regions and channels jointly.
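A PyTorch sketch of such a depthwise separable convolution block (channel counts and layer sizes are illustrative, not taken from the patent):

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) 3x3 convolution followed by a pointwise
    1x1 convolution, replacing a conventional 3x3 convolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        # The 1x1 pointwise convolution mixes the channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison for 64 -> 128 channels with a 3x3 kernel:
#   conventional: 64 * 128 * 3 * 3      = 73,728
#   separable:    64 * 3 * 3 + 64 * 128 =  8,768
# The exact ratio depends on the channel counts; the patent reports
# roughly one third over the whole network.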
The formula of the Mish activation function is: Mish(x) = x * tanh(ln(1 + e^x)), in which the tanh function is:
tanh x = (e^x - e^(-x)) / (e^x + e^(-x))
tanh x takes values in the range (-1, 1); the function is a strictly monotonically increasing curve that passes through the origin and lies in quadrants I and III.
The Mish activation function in this embodiment avoids the saturation caused by a hard cap, and its smoothness allows information to propagate better through the neural network, yielding better accuracy and generalization and further improving operation efficiency.
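A one-line NumPy sketch of the Mish activation (recent PyTorch versions also ship it as torch.nn.Mish); the overflow-safe formulation is an implementation choice of this sketch:

import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x));
    # np.logaddexp(0, x) computes ln(1 + e^x) without overflow for large x.
    return x * np.tanh(np.logaddexp(0.0, x))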
4) Inputting the training set into the constructed network model for training to obtain a trained network model;
5) preprocessing the remote sensing images to be classified and inputting them into the trained network model to classify the planting structure.
The training data set is input into the network model for training, and the trained weight file is then used to predict the test set and obtain the classification result.
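A minimal PyTorch training-and-prediction sketch under stated assumptions: model is a segmentation network such as the improved U-Net above, the tensors come from the tiling sketch, and the batch size, optimizer, and learning rate are placeholders the patent does not specify:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_and_predict(model, x_train, y_train, x_test, epochs=200):
    """Train on the tile set, then predict the test tiles.

    x_* : float tensors of shape (N, C, 128, 128)
    y_* : long tensors of shape (N, 128, 128) holding class indices.
    200 epochs matches the training length used for the curves in FIG. 3.
    """
    loader = DataLoader(TensorDataset(x_train, y_train),
                        batch_size=16, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "weights.pt")    # the trained weight file
    model.eval()
    with torch.no_grad():
        return model(x_test).argmax(dim=1)          # per-pixel class map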
Further, in order to evaluate the accuracy of the method of the present application, the method also includes a step of evaluating the test results. In this embodiment, precision, recall, their harmonic mean F1, and the kappa coefficient are used as evaluation indices. These indices are calculated from the confusion matrix, where the precision is formulated as:
Precision = C_ii / Σ_j C_ji    (1)
where C_ii denotes the number of correctly classified samples of class i, and C_ij denotes the number of class-i samples misclassified as class j.
Recall represents the average proportion of pixels of a class that are correctly classified, and its formula is:
Recall = C_ii / Σ_j C_ij    (2)
In addition, the model can be further evaluated by the harmonic mean F1 of precision and recall, whose formula is:
F1 = 2*(Precision*Recall)/(Precision+Recall)    (3)
The kappa coefficient measures the consistency of the predicted classes with the manual labels, and is formulated as:
kappa = (p_o - p_e) / (1 - p_e)    (4)
where p_o is the observed agreement (the overall accuracy) and p_e is the expected chance agreement.
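All four indices follow directly from the confusion matrix; a NumPy sketch using the patent's convention that C[i, j] counts class-i samples predicted as class j:

import numpy as np

def evaluate(conf):
    """Per-class precision, recall, F1, and the kappa coefficient from a
    confusion matrix whose entry C[i, j] counts class-i samples predicted
    as class j."""
    eps = 1e-12
    conf = np.asarray(conf, dtype=np.float64)
    diag = np.diag(conf)
    precision = diag / (conf.sum(axis=0) + eps)   # column sums: predicted as i
    recall = diag / (conf.sum(axis=1) + eps)      # row sums: truly class i
    f1 = 2 * precision * recall / (precision + recall + eps)
    total = conf.sum()
    p_o = diag.sum() / total                      # observed agreement
    p_e = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / (total ** 2 + eps)
    kappa = (p_o - p_e) / (1 - p_e + eps)
    return precision, recall, f1, kappa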
in order to further verify the effectiveness of the method, the invention only carries out comparative analysis on the U-Net and the improved model thereof. The invention adds a U-Net model of a deep separable convolution module to a conventional U-Net model and changes an activation function network model to obtain the models in the same test set data training 200 batches, such as curves shown in figures 3-a to 3-c.
Compared with the conventional U-Net model, adding the depthwise separable convolution module reduces the overfitting epoch from 100 to 30, greatly reducing computation and accelerating fitting. Adopting the Mish activation function brings a further slight improvement in accuracy and computational rate owing to its smoothness. The improved network model is also lightweight in parameters: the weight file shrinks from 303 MB to 53 MB, a reduction of 82.5%, greatly saving production cost.
Meanwhile, the seven bands 1-7 of the original remote sensing image are synthesized with one, two, or all three indices respectively, the panchromatic band is then fused in using the G-S method, and the prediction results are verified; they are shown in FIGS. 4-a to 4-h.
Table 1 below gives accuracy statistics of the prediction results of each method against the ground-truth label data. Since most forest-land areas in the study region grow economic crops such as jujube trees, wheat, forest land, and cotton are grouped into the crop category.
TABLE 1
[Table 1 is published as an image in the original document; its per-class and overall accuracies are not recoverable beyond the values quoted in the surrounding text.]
As shown in Table 1, the overall accuracy of the index-free 7-band fused image reaches 90.3%, and every fusion that adds indices exceeds the index-free accuracy, showing that adding the indices has a positive effect on classifying crop planting structures with the improved U-Net network.
With a single added index, all three indices improve the overall accuracy. AWEI has the largest influence on planting structure classification, raising the crop identification accuracy by 2.4%; owing to its water-identification characteristics, its village-pixel and water-body accuracies surpass the other two indices, reaching 81.4% and 93.9% respectively, a marked effect, and its overall accuracy of 92.7% is the best.
With two-index 9-band fused images, the AWEI index still plays the key role: the classification effect of training images containing AWEI is superior to that of the NDVI-SAVI 9-band fused image. The NDVI index further improves AWEI's crop identification capability and the mutual discrimination among crops, raising the accuracies of wheat, cotton, and forest land to 85.2%, 74.4%, and 87.5% respectively. Meanwhile, the water body and village identification accuracies remain high at 81.8% and 93.5% respectively; only the water body identification accuracy is lower than that of the AWEI-SAVI method, by 0.3%, and the overall accuracy reaches 93.7%.
Compared with the NDVI-AWEI 9-band fusion method, the 10-band fused image improves the overall accuracy by only 0.1% while its crop classification accuracy decreases, so SAVI's contribution to the fused image is limited and, considering production cost, it has lower priority. In conclusion, the NDVI-AWEI 9-band fusion method combined with the improved U-Net network model is the superior planting structure classification scheme.
The reliability and accuracy of the planting structure classification results of each index scheme combined with the improved U-Net model are evaluated by calculating the kappa coefficient, precision, recall, and their harmonic mean F1, as shown in Table 2 below. Compared with the index-free image, the classification precision of the index-containing fused images improves markedly: the kappa coefficient rises from 0.868 to above 0.874, further improving the consistency between the predicted classes and the manual labels, and precision, recall, and F1 all improve by more than 0.02. Among the new schemes, the NDVI-AWEI fusion method and the three-index method share the same precision of 0.873, while the NDVI-AWEI scheme's kappa coefficient and F1, 0.886 and 0.872 respectively, exceed those of the three-index method, providing accurate and stable crop planting structure classification results.
TABLE 2
[Table 2 is published as an image in the original document; its contents are not recoverable beyond the values quoted in the surrounding text.]
Planting structure classification device embodiment:
As shown in FIG. 5, the planting structure classification apparatus provided in this embodiment includes a processor and a memory. The memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the method of the above planting structure classification method embodiment.
That is, the method of the above embodiment should be understood as a flow that can be implemented by computer program instructions: these instructions may be provided to a processor, such that executing them on the processor implements the functions specified in the method flow described above.
The processor referred to in this embodiment refers to a processing device such as a microprocessor MCU or a programmable logic device FPGA.
The memory referred to in this embodiment includes a physical device for storing information; typically, the information is digitized and then stored in an electric, magnetic, or optical medium. For example: memories that store information electrically, such as RAM and ROM; memories that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic-core memories, bubble memories, and USB drives; and memories that store information optically, such as CDs or DVDs. Of course, other kinds of memory exist, such as quantum memory and graphene memory.
The apparatus comprising the memory, the processor, and the computer program operates by the processor executing the corresponding program instructions; the processor may run various operating systems, such as Windows, Linux, Android, or iOS.
In other embodiments, the device may further comprise a display for showing the classification results for the operators' reference.
The foregoing is merely a preferred embodiment of the present invention, which has been described in detail by way of general illustration and specific description, but is not intended to be limiting of the invention, which is susceptible to various modifications and adaptations by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (5)

1. A method of classifying a planting structure, comprising the steps of:
1) acquiring original remote sensing image data;
2) preprocessing original remote sensing image data:
extracting the NDVI, AWEI and SAVI planting structure indices from the original remote sensing image data; wherein NDVI is the normalized difference vegetation index, SAVI is the soil-adjusted vegetation index, and AWEI is the automatic water body extraction index, whose formula is: AWEI_sh = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2, where NIR is the near-infrared band reflectance, BLUE is the blue band reflectance, GREEN is the green band reflectance, and SWIR1 and SWIR2 are the short-wave infrared band reflectances;
increasing the data dimension of the image matrix to three and stacking the plurality of band images and the index image data matrices along the second dimension of the 3-D matrix, generating three-dimensional multi-channel data and realizing the synthesis of the indices with the remote sensing image;
fusing the synthesized data with the panchromatic band image;
labeling the fused images to obtain a training set;
3) constructing a U-Net network model comprising a depthwise separable convolution module and an activation function, wherein the activation function is the Mish activation function, whose formula is: Mish(x) = x * tanh(ln(1 + e^x)), in which the tanh function is:
tanh x = (e^x - e^(-x)) / (e^x + e^(-x));
4) inputting the training set into the constructed network model for training to obtain a trained network model;
5) preprocessing the remote sensing images to be classified and inputting them into the trained network model to classify the planting structure.
2. The method for classifying a plant structure according to claim 1, wherein said fusing employs a G-S image fusion method.
3. The method for classifying a planting structure according to claim 1,
the normalized difference vegetation index is formulated as:
NDVI=(NIR-RED)/(NIR+RED)
in the formula, NIR is the reflectivity of a near infrared band, and RED is the reflectivity of a RED band.
4. The method for classifying a planting structure according to claim 1, wherein the soil adjusted vegetation index is formulated as:
SAVI = (NIR - RED) * (1 + L) / (NIR + RED + L)
in the formula, L is the soil-adjustment factor reflecting the vegetation coverage condition, and L = 0.5.
5. An apparatus for classifying a planting structure, comprising a processor and a memory, wherein the processor executes a program, stored in the memory, of the method for classifying a planting structure according to any one of claims 1-4.
CN202010526298.5A 2020-06-09 2020-06-09 Method and device for classifying planting structures Active CN111814563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010526298.5A CN111814563B (en) 2020-06-09 2020-06-09 Method and device for classifying planting structures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010526298.5A CN111814563B (en) 2020-06-09 2020-06-09 Method and device for classifying planting structures

Publications (2)

Publication Number Publication Date
CN111814563A CN111814563A (en) 2020-10-23
CN111814563B (en) 2022-05-17

Family

ID=72846503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010526298.5A Active CN111814563B (en) 2020-06-09 2020-06-09 Method and device for classifying planting structures

Country Status (1)

Country Link
CN (1) CN111814563B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991351B (en) * 2021-02-23 2022-05-27 新华三大数据技术有限公司 Remote sensing image semantic segmentation method and device and storage medium
CN112883892A (en) * 2021-03-03 2021-06-01 青岛农业大学 Soil type remote sensing classification identification method, device, equipment and storage medium
CN113298086A (en) * 2021-04-26 2021-08-24 自然资源部第一海洋研究所 Red tide multispectral detection method based on U-Net network
CN117496281B (en) * 2024-01-03 2024-03-19 环天智慧科技股份有限公司 Crop remote sensing image classification method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287869A (en) * 2019-06-25 2019-09-27 吉林大学 High-resolution remote sensing image Crop classification method based on deep learning
CN110647932A (en) * 2019-09-20 2020-01-03 河南工业大学 Planting crop structure remote sensing image classification method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10586105B2 (en) * 2016-12-30 2020-03-10 International Business Machines Corporation Method and system for crop type identification using satellite observation and weather data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287869A (en) * 2019-06-25 2019-09-27 吉林大学 High-resolution remote sensing image Crop classification method based on deep learning
CN110647932A (en) * 2019-09-20 2020-01-03 河南工业大学 Planting crop structure remote sensing image classification method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Comparison of fusion methods for domestic high-resolution remote sensing satellites; Zhao Wenchi et al.; Geomatics & Spatial Information Technology; 2019-11-30; Vol. 42, No. 11; pp. 154-158, 163 *
Remote sensing image segmentation methods based on deep learning models; Xu Yue et al.; Journal of Computer Applications; 2019-10-10; Vol. 39, No. 10; pp. 2905-2914 *
Deep U-net optimization method for forest type classification of high-resolution multispectral remote sensing images; Wang Yahui et al.; Forest Research; 2020-01-16; Vol. 33, No. 1; pp. 11-18 *

Also Published As

Publication number Publication date
CN111814563A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN111814563B (en) Method and device for classifying planting structures
Gomes et al. Land use and land cover scenarios: An interdisciplinary approach integrating local conditions and the global shared socioeconomic pathways
CN111709379A (en) Remote sensing image-based hilly area citrus planting land plot monitoring method and system
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
CN112749627A (en) Method and device for dynamically monitoring tobacco based on multi-source remote sensing image
CN109829425B (en) Farmland landscape small-scale ground feature classification method and system
CN108982377A (en) Corn growth stage spectrum picture and chlorophyll content correlation and period division methods
CN110287882A (en) A kind of big chrysanthemum kind image-recognizing method based on deep learning
CN109977802A (en) Crops Classification recognition methods under strong background noise
CN109299673A (en) The green degree spatial extraction method of group of cities and medium
CN115965812B (en) Evaluation method for classification of unmanned aerial vehicle images on wetland vegetation species and land features
CN113657158A (en) Google Earth Engine-based large-scale soybean planting region extraction algorithm
CN115223063A (en) Unmanned aerial vehicle remote sensing wheat new variety lodging area extraction method and system based on deep learning
CN108038499A (en) A kind of seeds sorting technique and system based on deep learning
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
Yu et al. A recognition method of soybean leaf diseases based on an improved deep learning model
Yan et al. Identification and picking point positioning of tender tea shoots based on MR3P-TS model
CN115641412A (en) Hyperspectral data-based three-dimensional semantic map generation method
Liu et al. Automated classification of stems and leaves of potted plants based on point cloud data
Zhang et al. Appearance quality classification method of Huangguan pear under complex background based on instance segmentation and semantic segmentation
CN109164444A (en) A kind of natural landscape reconstructing method based on remotely-sensed data
Engstrom et al. Evaluating the Relationship between Contextual Features Derived from Very High Spatial Resolution Imagery and Urban Attributes: A Case Study in Sri Lanka
CN114511850B (en) Method for identifying size particle image of sunlight rose grape fruit
Wang et al. Overlapped tobacco shred image segmentation and area computation using an improved Mask RCNN network and COT algorithm
CN116310338A (en) Single litchi red leaf tip segmentation method based on examples and semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant