WO2023029373A1 - A High-Precision Farmland Vegetation Information Extraction Method - Google Patents

A High-Precision Farmland Vegetation Information Extraction Method

Info

Publication number
WO2023029373A1
WO2023029373A1 (PCT/CN2022/074147)
Authority
WO
WIPO (PCT)
Prior art keywords
image
farmland
class
pixels
model
Prior art date
Application number
PCT/CN2022/074147
Other languages
English (en)
French (fr)
Inventor
刘大召
刘秋斌
郭碧峰
李卓
Original Assignee
Guangdong Ocean University (广东海洋大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Ocean University (广东海洋大学)
Publication of WO2023029373A1 publication Critical patent/WO2023029373A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Definitions

  • the invention relates to the technical field of agricultural remote sensing, more specifically, it relates to a method for extracting farmland vegetation information with high precision.
  • the existing technology mainly focuses on improving the model structure of the neural network, and pays less attention to the quality of the image.
  • the neural network model is trained based on a large-scale image dataset, and the quality of the training set has an important impact on the results.
  • the UAV may output an image with uneven intensity, which is not conducive to the extraction and monitoring of target features in subsequent image processing, making the accuracy of farmland vegetation information extraction low.
  • the present invention aims to design and provide a high-precision farmland vegetation information extraction method to solve the above-mentioned problems.
  • the purpose of the present invention is to provide a high-precision method for extracting farmland vegetation information.
  • the method of the present invention can learn more expressive semantic features from remote sensing images of farmland crops, and obtain more accurate crop information extraction results.
  • a high-precision method for extracting farmland vegetation information specifically comprising the following steps:
  • step S1: collect data, using a UAV to collect original farmland images;
  • step S2: perform flat-field correction on the original farmland images collected in step S1;
  • step S3: import the flat-field-corrected image data from step S2 into Pix4Dmapper or ENVI software, and mosaic and crop the images according to the study area;
  • the specific method of performing flat-field correction on the original farmland images in step S2 is described below.
  • the mean pixel accuracy (MPA) and the mean intersection-over-union (MIoU) are used as the classification evaluation indicators; the mean pixel accuracy is the ratio of the number of correctly classified pixels of each target class in the label to the number of pixels of that class in the label, averaged over all classes: MPA = (1 / (k + 1)) Σ_i (p_ii / Σ_j p_ij)
  • p_ii represents the number of pixels of class i that are correctly classified as class i;
  • p_ij represents the number of pixels of class i that are classified as class j;
  • p_ji represents the number of pixels of class j that are classified as class i.
  • the present invention has the following beneficial effects:
  • the method of the present invention reduces the non-uniformity of image pixels by flat-field correcting the collected original farmland images, making the images clearer and more uniform; the flat-field image also removes the influence of vignetting, dust, and other optical variations in the imaging system, which markedly improves image quality, so that the model can train more appropriate parameters and the accuracy of farmland vegetation information extraction is improved;
  • the method of the present invention can learn more expressive semantic features from remote sensing images of farmland crops, and obtain more accurate crop information extraction results.
  • Fig. 1 is the flowchart in the embodiment of the present invention.
  • Fig. 2 is a schematic diagram of the experimental field in the research area in the embodiment of the present invention.
  • Fig. 3 is a comparison of the plain Unet model and the Unet model trained with the method of the present invention in the embodiment of the present invention.
  • Embodiment: a high-precision farmland vegetation information extraction method, as shown in Figure 1, specifically comprises the following steps:
  • step S2: perform flat-field correction on the original farmland images collected in step S1.
  • step S3: import the flat-field-corrected image data from step S2 into Pix4Dmapper or ENVI software, and mosaic and crop the images according to the study area.
  • the deep-learning hardware environment in the present invention is a Lenovo Y7000P with an NVIDIA GTX2060s graphics card; the operating system is Ubuntu 16.04, and the Unet network is built with the Keras deep learning framework; the model uses Adam as the optimization algorithm to control the learning rate, with 6250 training iterations; the loss function is binary cross-entropy (Binary_Crossentropy).
  • the flat-field-corrected image is obtained as above; pixels at the four corners of the image that deviate from the normal range are also corrected, which removes adverse effects such as dark edges with a bright center and helps restore image quality.
  • the mean pixel accuracy and the mean intersection-over-union are used as the classification evaluation indicators; the mean pixel accuracy is the ratio of the number of correctly classified pixels of each target class in the label to the number of pixels of that class in the label, averaged over all classes.
  • p_ii represents the number of pixels of class i that are correctly classified as class i;
  • p_ij represents the number of pixels of class i that are classified as class j;
  • p_ji represents the number of pixels of class j that are classified as class i.
  • the experimental site of the study area is Hongxing Farm in Xuwen County, Zhanjiang City, Guangdong Province (20°26′-20°29′ N, 110°17′-110°18′ E).
  • the southeastern part of the county is located at the southeastern tip of Leizhou Peninsula.
  • the study area includes farmland with different crop types. Major crops include pineapples and sugar cane.
  • a DJI Phantom 4 RTK UAV was used in early July to collect remote sensing images of the test area in 4 bands: three visible bands at 490 nm (B), 550 nm (G), and 680 nm (R), and one near-infrared band at 720 nm.
  • the flying height of the UAV is 150m
  • the heading overlap rate is 80%
  • the side overlap rate is 70%.
  • the original image was first processed by flat-field correction, and then spliced into an orthophoto image by Pix4Dmapper software.
  • the average resolution of the stitched farmland remote sensing image is about 13000 pixels × 21000 pixels.
  • the remote sensing images from July 4th are used as the training and validation sets of the classification model, and the July 5th images as the test set;
  • the training set is used to train the model; the validation set is fed into the model alongside the training set but does not participate in training, and is used to tune the hyperparameters and evaluate the model; the test set is used to check the generalization performance of the model; first, combining field investigation with visual interpretation, the orthorectified remote sensing images were manually labelled with the LabelMe tool to obtain the ground truth (Groundtruth, GT) of the two test fields.
  • sample images of 256 pixels × 256 pixels were randomly cropped from the July 4th imagery and then expanded by adding noise, rotation, scaling, mirroring, blurring, and illumination adjustment; the expanded images were divided into training and validation sets at a 3:1 ratio, with 7500 and 2500 samples respectively.
  • the method of this embodiment is compared with a Unet model trained without flat-field correction:
  • the dataset is first preprocessed and converted into a standard dataset format, and training is then performed; during training, the hyperparameters are adjusted promptly according to the model loss to find good initial values; finally, the trained model predicts the test-set images and the accuracy is calculated.
  • in the method of the present invention, the original images are first flat-field corrected, then preprocessed and trained; the training and prediction procedures are otherwise the same as above.
  • the prediction results of the models are shown in Figure 3. Table 1 and Figure 3 show that the method of the present invention achieves a higher mean accuracy and mean intersection-over-union, and that its prediction map is closer to the label map, mainly in that it accurately identifies and distinguishes farmland, harvested cropland, water bodies, and so on. Although the original model can broadly identify farmland vegetation, it produces many detail errors, stripes, and uneven pixels, and its generalization performance is poor.
  • the method of the present invention is more accurate because the texture features of farmland crops are the main basis for information extraction; the method restores the original image well, exposing more detail to the model for feature extraction, which makes the pixel representation more precise, improves the signal-to-noise ratio to some extent, and ultimately yields a better-trained model.
  • the above verification results show that the method of the present invention outperforms the conventional Unet network processing method and is feasible. Because UAV remote sensing images have high resolution, the texture features of farmland crops are the main basis for classification; the method of the present invention improves the quality of the image training set, and high-quality crop images help the neural network extract more accurate and richer features, yielding better extraction results. The method also verifies the necessity of flat-field correcting the image training set when UAV imagery is used for neural-network training.
  • the method of the present invention reduces the non-uniformity of image pixels by flat-field correcting the collected original farmland images, making the images clearer and more uniform; the flat-field image also removes the influence of vignetting, dust, and other optical variations in the imaging system, which markedly improves image quality, so that the model can train more appropriate parameters and the accuracy of farmland vegetation information extraction is improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a high-precision farmland vegetation information extraction method, relating to the technical field of agricultural remote sensing. The key points of the technical scheme are: collect original farmland images; perform flat-field correction on the original farmland images; import the flat-field-corrected image data into Pix4Dmapper or ENVI software for image mosaicking and cropping; label the image data and build a neural-network dataset through data-augmentation and image-cropping preprocessing; construct and train a Unet neural network model; save the model, input the farmland images to be recognized into the saved model, and obtain the extracted farmland vegetation texture and spatial-distribution information. The present invention can quickly and accurately obtain crop texture and spatial-distribution information from remote sensing images of farmland, solving the problems of complex manual feature selection and low recognition accuracy that arise when satellite-oriented remote sensing interpretation methods are applied to farmland images.

Description

A High-Precision Farmland Vegetation Information Extraction Method
Technical Field
The present invention relates to the technical field of agricultural remote sensing, and more specifically to a high-precision farmland vegetation information extraction method.
Background Art
With the development of information technology and industry, remote sensing information is widely used in agriculture, for example in crop classification, disaster prediction, cultivated-land monitoring, and yield prediction. Traditional approaches to acquiring and studying farmland crop information rely mainly on hand-selected features and machine-learning methods such as support vector machines, which have certain limitations. In recent years, deep-learning semantic segmentation has made major breakthroughs in image classification and has clear advantages over hand-crafted feature classification. Collecting farmland data with unmanned aerial vehicles (UAVs) for scientific analysis and decision-making in agricultural production has become a common technical means. However, processing UAV aerial imagery with interpretation methods designed for satellite remote sensing imagery suffers from complex manual feature selection and low recognition accuracy. To address these problems, the present invention proposes a UAV-image farmland vegetation information extraction method based on flat-field correction and a neural network.
At present, the existing technology focuses mainly on improving the model structure of neural networks and pays less attention to image quality. However, neural-network models are trained on large-scale image datasets, and the quality of the training set has an important influence on the results. When collecting original farmland images, lighting, shadows, and equipment must be taken into account to weaken the influence of irrelevant factors and obtain higher-quality farmland images, so as to build a better training set and improve the training results. In practice, owing to non-uniform illumination, inconsistent pixel response, dark-current bias, and similar factors, or to elements in the optical path that affect imaging (such as dust adhering to the detector surface), the UAV camera may output an image of non-uniform intensity for a target of uniform gray level. This hinders the extraction and monitoring of target features in subsequent image processing and lowers the accuracy of farmland vegetation information extraction.
The present invention therefore aims to provide a high-precision farmland vegetation information extraction method that solves the above problems.
Summary of the Invention
The purpose of the present invention is to provide a high-precision farmland vegetation information extraction method that can learn more expressive semantic features from remote sensing images of farmland crops and obtain more accurate crop information extraction results.
The above technical purpose of the present invention is achieved by the following technical scheme: a high-precision farmland vegetation information extraction method, specifically comprising the following steps:
S1. Collect data: use a UAV to collect original farmland images;
S2. Perform flat-field correction on the original farmland images collected in step S1;
S3. Import the flat-field-corrected image data from step S2 into Pix4Dmapper or ENVI software, and mosaic and crop the images according to the study area;
S4. Label the image data by visual interpretation, and build a neural-network dataset through data-augmentation and image-cropping preprocessing;
S5. Construct and train a Unet neural network model: feed the neural-network dataset into the Unet model for training, adjust the hyperparameters according to the training results, and perform accuracy evaluation to obtain the model with the best extraction performance;
S6. Save the model, and input the UAV-collected farmland images to be recognized into the saved model to obtain the extracted farmland vegetation texture and spatial-distribution information.
Further, the specific method of flat-field correcting the original farmland images in step S2 is:
A. Acquire the dark-background image B(x, y) of the system: set the camera shutter time to 1/8000 s, take 10 images, and take the per-pixel median of these 10 images; the output image is the dark-background image B(x, y). The mean gray level over all pixels of B(x, y) is then obtained by formula (1):
B_avg = (1 / (W × H)) Σ_x Σ_y B(x, y)    (1)
where W and H are the width and height of the image;
B. Obtain the flat-field image F(x, y) by imaging a uniform light field: set the shutter exposure time to 3 s and direct a near-daylight saturated light source into an integrating sphere; the inner wall of the sphere diffuses the light uniformly, so that highly uniform illumination exits the port onto the UAV's CMOS sensor. Take 10 images and take the per-pixel median to obtain the flat-field image F(x, y). The illumination level at each pixel of F(x, y) is computed by formula (2):
F′(x, y) = F(x, y) − B(x, y)    (2)
Then the average illumination level over all pixels is computed by formula (3):
F_avg = (1 / (W × H)) Σ_x Σ_y F′(x, y)    (3)
C. A relative-calibration method is used to correct the images; the corrected image is given by formula (4):
I_c(x, y) = (I(x, y) − B(x, y)) × F_avg / F′(x, y)    (4)
where I(x, y) denotes the original image being corrected.
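The three steps above amount to standard relative flat-field calibration and can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's own code: the function names (`median_stack`, `flat_field_correct`) and the small `eps` guard against division by zero are assumptions of this example.

```python
import numpy as np

def median_stack(frames):
    """Median-combine a stack of exposures (e.g. the 10 shots per image)."""
    return np.median(np.stack(frames), axis=0)

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """Relative-calibration flat-field correction.

    raw  -- original image I(x, y) to correct
    dark -- dark-background image B(x, y)
    flat -- flat-field image F(x, y) from uniform-light imaging
    """
    illum = flat - dark                         # per-pixel illumination, formula (2)
    mean_illum = illum.mean()                   # average illumination, formula (3)
    gain = mean_illum / np.maximum(illum, eps)  # relative gain map
    return (raw - dark) * gain                  # corrected image, formula (4)
```

On a synthetic scene with a known gain field, this correction recovers the scene up to the mean gain, which is the behavior the relative calibration aims for.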
Further, during the training of the Unet neural network model in step S5, the mean pixel accuracy (MPA) and the mean intersection-over-union (MIoU) are used as the classification evaluation indicators. The mean pixel accuracy is the ratio of the number of correctly classified pixels of each target class in the label to the number of pixels of that class in the label, averaged over all classes:
PA_i = p_ii / Σ_j p_ij
MPA = (1 / (k + 1)) Σ_i (p_ii / Σ_j p_ij)
The mean intersection-over-union formula is:
MIoU = (1 / (k + 1)) Σ_i p_ii / (Σ_j p_ij + Σ_j p_ji − p_ii)
Suppose there are k + 1 classes in total, including an empty class or background, where p_ii denotes the number of pixels of class i correctly classified as class i; p_ij denotes the number of pixels of class i classified as class j; and p_ji denotes the number of pixels of class j classified as class i.
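Both indicators can be computed directly from a confusion matrix over the k + 1 classes. The sketch below mirrors the MPA and MIoU formulas, assuming a NumPy environment; the helper names are this example's own, and classes absent from both label and prediction would need masking to avoid division by zero.

```python
import numpy as np

def confusion_matrix(label, pred, num_classes):
    """p[i, j]: number of pixels of true class i predicted as class j."""
    idx = label.astype(int) * num_classes + pred.astype(int)
    return np.bincount(idx.ravel(), minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_pixel_accuracy(p):
    """MPA: per-class p_ii / sum_j p_ij, averaged over all classes."""
    return float(np.mean(np.diag(p) / p.sum(axis=1)))

def mean_iou(p):
    """MIoU: per-class p_ii / (sum_j p_ij + sum_j p_ji - p_ii), averaged."""
    diag = np.diag(p)
    return float(np.mean(diag / (p.sum(axis=1) + p.sum(axis=0) - diag)))
```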
In summary, the present invention has the following beneficial effects:
1. By flat-field correcting the collected original farmland images, the method of the present invention reduces the non-uniformity of image pixels, making the images clearer and more uniform; the flat-field image also removes the influence of vignetting, dust, and other optical variations in the imaging system, which markedly improves image quality, so that the model can train more appropriate parameters and the accuracy of farmland vegetation information extraction is improved;
2. The method of the present invention can learn more expressive semantic features from remote sensing images of farmland crops and obtain more accurate crop information extraction results.
Brief Description of the Drawings
Fig. 1 is the flowchart of the embodiment of the present invention;
Fig. 2 is a schematic diagram of the test fields in the study area in the embodiment of the present invention;
Fig. 3 compares the plain Unet model and the Unet model trained with the method of the present invention in the embodiment.
Detailed Description
The present invention is described in further detail below with reference to Figures 1-3.
Embodiment: a high-precision farmland vegetation information extraction method, as shown in Fig. 1, specifically comprises the following steps:
S1. Collect data: use a UAV to collect original farmland images.
S2. Perform flat-field correction on the original farmland images collected in step S1.
S3. Import the flat-field-corrected image data from step S2 into Pix4Dmapper or ENVI software, and mosaic and crop the images according to the study area.
S4. Label the image data by visual interpretation, and build a neural-network dataset through data-augmentation and image-cropping preprocessing.
S5. Construct and train a Unet neural network model: feed the neural-network dataset into the Unet model for training, adjust the hyperparameters according to the training results, and perform accuracy evaluation to obtain the model with the best extraction performance.
S6. Save the model, and input the UAV-collected farmland images to be recognized into the saved model to obtain the extracted farmland vegetation texture and spatial-distribution information.
In this embodiment, the deep-learning hardware environment is a Lenovo Y7000P with an NVIDIA GTX2060s graphics card; the operating system is Ubuntu 16.04, and the Unet network is built with the Keras deep learning framework. The model uses Adam as the optimization algorithm to control the learning rate, with 6250 training iterations; the loss function is binary cross-entropy (Binary_Crossentropy).
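For reference, the Binary_Crossentropy loss named here averages the per-pixel log loss over the predicted probability map. Below is a plain NumPy rendering of that formula; the clipping constant `eps`, added to avoid log(0), is this sketch's assumption rather than part of the patent.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a ground-truth mask
    y_true (0/1) and a predicted probability map y_pred in [0, 1]."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1.0 - y_true) * np.log(1.0 - y_pred))))
```

Compiling a Keras model with `loss='binary_crossentropy'` applies the same per-pixel formula.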
The specific method of flat-field correcting the original farmland images is:
A. Acquire the dark-background image B(x, y) of the system: set the camera shutter time to 1/8000 s, take 10 images, and take the per-pixel median of these 10 images; the output image is the dark-background image B(x, y). The mean gray level over all pixels of B(x, y) is then obtained by formula (1):
B_avg = (1 / (W × H)) Σ_x Σ_y B(x, y)    (1)
where W and H are the width and height of the image.
B. Obtain the flat-field image F(x, y) by imaging a uniform light field: set the shutter exposure time to 3 s and direct a near-daylight saturated light source into an integrating sphere; the inner wall of the sphere diffuses the light uniformly, so that highly uniform illumination exits the port onto the UAV's CMOS sensor. Take 10 images and take the per-pixel median to obtain the flat-field image F(x, y). The illumination level at each pixel of F(x, y) is computed by formula (2):
F′(x, y) = F(x, y) − B(x, y)    (2)
Then the average illumination level over all pixels is computed by formula (3):
F_avg = (1 / (W × H)) Σ_x Σ_y F′(x, y)    (3)
C. A relative-calibration method is used to correct the images; the corrected image is given by formula (4):
I_c(x, y) = (I(x, y) − B(x, y)) × F_avg / F′(x, y)    (4)
where I(x, y) denotes the original image being corrected.
The flat-field-corrected image is thus obtained. Pixels at the four corners of the image that deviate from the normal range are also corrected, which removes adverse effects such as dark edges with a bright center and helps restore image quality.
During the training of the Unet neural network model, the mean pixel accuracy (MPA) and the mean intersection-over-union (MIoU) are used as the classification evaluation indicators. The mean pixel accuracy is the ratio of the number of correctly classified pixels of each target class in the label to the number of pixels of that class in the label, averaged over all classes:
PA_i = p_ii / Σ_j p_ij
MPA = (1 / (k + 1)) Σ_i (p_ii / Σ_j p_ij)
The mean intersection-over-union formula is:
MIoU = (1 / (k + 1)) Σ_i p_ii / (Σ_j p_ij + Σ_j p_ji − p_ii)
Suppose there are k + 1 classes in total, including an empty class or background, where p_ii denotes the number of pixels of class i correctly classified as class i; p_ij denotes the number of pixels of class i classified as class j; and p_ji denotes the number of pixels of class j classified as class i.
In this embodiment, the experimental site of the study area is Hongxing Farm (20°26′-20°29′ N, 110°17′-110°18′ E) in Xuwen County, Zhanjiang City, Guangdong Province; Hongxing Farm lies in the southeast of Xuwen County, at the southeastern tip of the Leizhou Peninsula. As shown in Fig. 2, the study area contains farmland with different crop types; the main crops are pineapple and sugarcane.
Data collection in step S1: a DJI Phantom 4 RTK UAV collected remote sensing images of the test area in early July in 4 bands: three visible bands at 490 nm (B), 550 nm (G), and 680 nm (R), and one near-infrared band at 720 nm. The UAV flew at a height of 150 m with a forward overlap of 80% and a side overlap of 70%, shooting along predetermined flight-track points. The original images were first flat-field corrected and then stitched into an orthophoto with Pix4Dmapper; the average resolution of the stitched farmland remote sensing image is about 13000 × 21000 pixels.
Dataset construction: the remote sensing images from July 4 are used as the training and validation sets of the classification model, and the July 5 images as the test set. The training set trains the model; the validation set is fed in alongside the training set but does not participate in training, and is used to tune the hyperparameters and evaluate the model; the test set measures the generalization performance of the model. First, combining field investigation with visual interpretation, the orthorectified remote sensing images were manually labelled with the LabelMe tool to obtain the ground truth (Groundtruth, GT) of the two test fields. Then, 256 × 256-pixel sample images were randomly cropped from the July 4 imagery and expanded by adding noise, rotation, scaling, mirroring, blurring, and illumination adjustment; the expanded images were divided into training and validation sets at a 3:1 ratio, giving 7500 and 2500 samples respectively.
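The expansion-and-split step can be sketched as follows. This is an illustration only: the transform parameters, the fixed random seed, and the restriction to the noise/rotation/mirroring transforms from the list above are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (example's choice)

def augment(img):
    """Expand one sample image with a few of the transforms named in the text:
    additive noise, rotations, and mirroring."""
    out = [img]
    out.append(img + rng.normal(0.0, 5.0, img.shape))  # add noise
    out.extend(np.rot90(img, k) for k in (1, 2, 3))    # 90/180/270-degree rotations
    out.append(np.fliplr(img))                         # mirror
    return out

def split_3_to_1(samples):
    """Shuffle and divide an augmented sample list into training and
    validation sets at a 3:1 ratio."""
    n_train = len(samples) * 3 // 4
    idx = rng.permutation(len(samples))
    return [samples[i] for i in idx[:n_train]], [samples[i] for i in idx[n_train:]]
```

With 10000 expanded samples, the 3:1 split yields the 7500/2500 counts reported above.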
To verify the effectiveness of the method of the present invention, the method of this embodiment is compared with a Unet model trained without flat-field correction:
For the plain Unet neural network model, the dataset is first simply preprocessed and converted into a standard dataset format, and training is then performed. During training, the hyperparameters are adjusted promptly according to the model loss to find good initial values. Finally, the trained model predicts the test-set images and the accuracy is calculated. In the method of the present invention, the original images are first flat-field corrected, then preprocessed and trained; the training and prediction procedures are otherwise the same as above.
With the July 5 UAV remote sensing images of this embodiment as the prediction set, the experimental result indicators of conventional Unet processing versus flat-field correction combined with the Unet neural network are shown in Table 1:
Table 1. Comparison of crop extraction results from UAV remote sensing imagery
[Table 1 is an image in the source; its figures are not reproduced here.]
The model prediction results are shown in Fig. 3. Table 1 and Fig. 3 show that the method of the present invention achieves a higher mean accuracy and mean intersection-over-union, and that its prediction map is closer to the label map, mainly in that it accurately identifies and distinguishes farmland, harvested cropland, water bodies, and so on. Although the original model can broadly identify farmland vegetation, it produces many detail errors, stripes, and uneven pixels, and its generalization performance is poor.
The method of the present invention is more accurate because the texture features of farmland crops are the main basis for information extraction; the method restores the original image well and exposes more detail to the model for feature extraction, making the pixel representation more precise, improving the signal-to-noise ratio to some extent, and ultimately yielding a better-trained model.
The above verification results show that the method of the present invention outperforms the conventional Unet network processing method and is feasible. Because UAV remote sensing images have high resolution, the texture features of farmland crops are the main basis for classification; the method of the present invention improves the quality of the image training set, and high-quality crop images help the neural network extract more accurate and richer features, yielding better extraction results. The method also verifies the necessity of flat-field correcting the image training set when UAV imagery is used for neural-network training.
In the above embodiments of the present invention, by flat-field correcting the collected original farmland images, the method reduces the non-uniformity of image pixels, making the images clearer and more uniform; the flat-field image also removes the influence of vignetting, dust, and other optical variations in the imaging system, which markedly improves image quality, so that the model can train more appropriate parameters and the accuracy of farmland vegetation information extraction is improved.
This specific embodiment is merely an explanation of the present invention and does not limit it; after reading this specification, those skilled in the art may make modifications to this embodiment without creative contribution as needed, and all such modifications falling within the scope of the claims of the present invention are protected by patent law.

Claims (3)

  1. A high-precision farmland vegetation information extraction method, characterized in that it specifically comprises the following steps:
    S1. Collect data: use a UAV to collect original farmland images;
    S2. Perform flat-field correction on the original farmland images collected in step S1;
    S3. Import the flat-field-corrected image data from step S2 into Pix4Dmapper or ENVI software, and mosaic and crop the images according to the study area;
    S4. Label the image data by visual interpretation, and build a neural-network dataset through data-augmentation and image-cropping preprocessing;
    S5. Construct and train a Unet neural network model: feed the neural-network dataset into the Unet model for training, adjust the hyperparameters according to the training results, and perform accuracy evaluation to obtain the model with the best extraction performance;
    S6. Save the model, and input the UAV-collected farmland images to be recognized into the saved model to obtain the extracted farmland vegetation texture and spatial-distribution information.
  2. The high-precision farmland vegetation information extraction method according to claim 1, characterized in that the specific method of flat-field correcting the original farmland images in step S2 is:
    A. Acquire the dark-background image B(x, y) of the system: set the camera shutter time to 1/8000 s, take 10 images, and take the per-pixel median of these 10 images; the output image is the dark-background image B(x, y). The mean gray level over all pixels of B(x, y) is then obtained by formula (1):
    B_avg = (1 / (W × H)) Σ_x Σ_y B(x, y)    (1)
    where W and H are the width and height of the image;
    B. Obtain the flat-field image F(x, y) by imaging a uniform light field: set the shutter exposure time to 3 s and direct a near-daylight saturated light source into an integrating sphere; the inner wall of the sphere diffuses the light uniformly, so that highly uniform illumination exits the port onto the UAV's CMOS sensor. Take 10 images and take the per-pixel median to obtain the flat-field image F(x, y). The illumination level at each pixel of F(x, y) is computed by formula (2):
    F′(x, y) = F(x, y) − B(x, y)    (2)
    Then the average illumination level over all pixels is computed by formula (3):
    F_avg = (1 / (W × H)) Σ_x Σ_y F′(x, y)    (3)
    C. A relative-calibration method is used to correct the images; the corrected image is given by formula (4):
    I_c(x, y) = (I(x, y) − B(x, y)) × F_avg / F′(x, y)    (4)
    where I(x, y) denotes the original image being corrected.
  3. The high-precision farmland vegetation information extraction method according to claim 1, characterized in that during the training of the Unet neural network model in step S5, the mean pixel accuracy and the mean intersection-over-union are used as the classification evaluation indicators; the mean pixel accuracy is the ratio of the number of correctly classified pixels of each target class in the label to the number of pixels of that class in the label, averaged over all classes:
    PA_i = p_ii / Σ_j p_ij
    MPA = (1 / (k + 1)) Σ_i (p_ii / Σ_j p_ij)
    The mean intersection-over-union formula is:
    MIoU = (1 / (k + 1)) Σ_i p_ii / (Σ_j p_ij + Σ_j p_ji − p_ii)
    Suppose there are k + 1 classes in total, including an empty class or background, where p_ii denotes the number of pixels of class i correctly classified as class i; p_ij denotes the number of pixels of class i classified as class j; and p_ji denotes the number of pixels of class j classified as class i.
PCT/CN2022/074147 2021-08-30 2022-01-27 A high-precision farmland vegetation information extraction method WO2023029373A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111001811.XA CN113920441A (zh) 2021-08-30 2021-08-30 一种高精度的农田植被信息提取方法
CN202111001811.X 2021-08-30

Publications (1)

Publication Number Publication Date
WO2023029373A1 true WO2023029373A1 (zh) 2023-03-09

Family

ID=79233478

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074147 WO2023029373A1 (zh) 2021-08-30 2022-01-27 一种高精度的农田植被信息提取方法

Country Status (3)

Country Link
CN (1) CN113920441A (zh)
WO (1) WO2023029373A1 (zh)
ZA (1) ZA202204532B (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116453003A (zh) * 2023-06-14 2023-07-18 之江实验室 一种基于无人机监测智能识别水稻生长势的方法和系统
CN116805396A (zh) * 2023-08-24 2023-09-26 杭州稻道农业科技有限公司 一种基于卫星遥感的农田杂草精准识别方法及装置
CN116993583A (zh) * 2023-06-09 2023-11-03 昆明理工大学 一种基于高分辨率遥感影像高原湿地智能精细提取方法
CN117094430A (zh) * 2023-07-19 2023-11-21 青海师范大学 一种农作物分布预测方法、系统、设备及介质
CN117132902A (zh) * 2023-10-24 2023-11-28 四川省水利科学研究院 基于自监督学习算法的卫星遥感影像水体识别方法及系统
CN117274844A (zh) * 2023-11-16 2023-12-22 山东科技大学 利用无人机遥感影像的大田花生出苗情况快速提取方法
CN117607063A (zh) * 2024-01-24 2024-02-27 中国科学院地理科学与资源研究所 一种基于无人机的森林垂直结构参数测量系统和方法
CN117788351A (zh) * 2024-02-27 2024-03-29 杨凌职业技术学院 一种农业遥感图像校正方法及系统

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN113920441A (zh) * 2021-08-30 2022-01-11 广东海洋大学 一种高精度的农田植被信息提取方法
CN115035422A (zh) * 2022-08-15 2022-09-09 杭州航天星寰空间技术有限公司 一种面向遥感影像区域土壤种植结构的数据增广方法及分割方法
CN115861859A (zh) * 2023-02-20 2023-03-28 中国科学院东北地理与农业生态研究所 一种坡耕地环境监测方法及系统
CN115995005B (zh) * 2023-03-22 2023-08-01 航天宏图信息技术股份有限公司 基于单期高分辨率遥感影像的农作物的提取方法和装置
CN116503741B (zh) * 2023-06-25 2023-08-25 山东仟邦建筑工程有限公司 一种农作物成熟期智能预测系统

Citations (3)

Publication number Priority date Publication date Assignee Title
CN110889394A (zh) * 2019-12-11 2020-03-17 安徽大学 基于深度学习UNet网络的水稻倒伏识别方法
CN113221997A (zh) * 2021-05-06 2021-08-06 湖南中科星图信息技术股份有限公司 一种基于深度学习算法的高分影像油菜提取方法
CN113920441A (zh) * 2021-08-30 2022-01-11 广东海洋大学 一种高精度的农田植被信息提取方法

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109448127B (zh) * 2018-09-21 2022-11-18 洛阳中科龙网创新科技有限公司 一种基于无人机遥感的农田高精度导航地图生成方法

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN110889394A (zh) * 2019-12-11 2020-03-17 安徽大学 基于深度学习UNet网络的水稻倒伏识别方法
CN113221997A (zh) * 2021-05-06 2021-08-06 湖南中科星图信息技术股份有限公司 一种基于深度学习算法的高分影像油菜提取方法
CN113920441A (zh) * 2021-08-30 2022-01-11 广东海洋大学 一种高精度的农田植被信息提取方法

Non-Patent Citations (2)

Title
ZENG, KAIHUA ET AL.: "Notes on High Precision Aperture Photometry of Stars", Master's Thesis, Jinan University, CN, 20 May 2010, pages 1-55, XP009544107 *
ALBERTO GARCIA-GARCIA; SERGIO ORTS-ESCOLANO; SERGIU OPREA; VICTOR VILLENA-MARTINEZ; JOSE GARCIA-RODRIGUEZ: "A Review on Deep Learning Techniques Applied to Semantic Segmentation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 23 April 2017 (2017-04-23), 201 Olin Library Cornell University Ithaca, NY 14853 , XP080764780 *

Cited By (15)

Publication number Priority date Publication date Assignee Title
CN116993583A (zh) * 2023-06-09 2023-11-03 昆明理工大学 一种基于高分辨率遥感影像高原湿地智能精细提取方法
CN116453003B (zh) * 2023-06-14 2023-09-01 之江实验室 一种基于无人机监测智能识别水稻生长势的方法和系统
CN116453003A (zh) * 2023-06-14 2023-07-18 之江实验室 一种基于无人机监测智能识别水稻生长势的方法和系统
CN117094430B (zh) * 2023-07-19 2024-04-26 青海师范大学 一种农作物分布预测方法、系统、设备及介质
CN117094430A (zh) * 2023-07-19 2023-11-21 青海师范大学 一种农作物分布预测方法、系统、设备及介质
CN116805396A (zh) * 2023-08-24 2023-09-26 杭州稻道农业科技有限公司 一种基于卫星遥感的农田杂草精准识别方法及装置
CN116805396B (zh) * 2023-08-24 2023-12-29 杭州稻道农业科技有限公司 一种基于卫星遥感的农田杂草精准识别方法及装置
CN117132902A (zh) * 2023-10-24 2023-11-28 四川省水利科学研究院 基于自监督学习算法的卫星遥感影像水体识别方法及系统
CN117132902B (zh) * 2023-10-24 2024-02-02 四川省水利科学研究院 基于自监督学习算法的卫星遥感影像水体识别方法及系统
CN117274844A (zh) * 2023-11-16 2023-12-22 山东科技大学 利用无人机遥感影像的大田花生出苗情况快速提取方法
CN117274844B (zh) * 2023-11-16 2024-02-06 山东科技大学 利用无人机遥感影像的大田花生出苗情况快速提取方法
CN117607063A (zh) * 2024-01-24 2024-02-27 中国科学院地理科学与资源研究所 一种基于无人机的森林垂直结构参数测量系统和方法
CN117607063B (zh) * 2024-01-24 2024-04-19 中国科学院地理科学与资源研究所 一种基于无人机的森林垂直结构参数测量系统和方法
CN117788351A (zh) * 2024-02-27 2024-03-29 杨凌职业技术学院 一种农业遥感图像校正方法及系统
CN117788351B (zh) * 2024-02-27 2024-05-03 杨凌职业技术学院 一种农业遥感图像校正方法及系统

Also Published As

Publication number Publication date
ZA202204532B (en) 2022-11-30
CN113920441A (zh) 2022-01-11

Similar Documents

Publication Publication Date Title
WO2023029373A1 (zh) A high-precision farmland vegetation information extraction method
CN111461052B (zh) 基于迁移学习的多个生育期小麦倒伏区域识别方法
CN111461053B (zh) 基于迁移学习的多个生育期小麦倒伏区域识别系统
Lu et al. Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system
CN109086826B (zh) 基于图像深度学习的小麦干旱识别方法
CN109389049A (zh) 基于多时相sar数据与多光谱数据的作物遥感分类方法
Zhao et al. Object-oriented vegetation classification method based on UAV and satellite image fusion
CN115481368B (zh) 一种基于全遥感机器学习的植被覆盖度估算方法
CN111144250A (zh) 融合雷达和光学遥感数据的土地覆盖分类方法
Sun et al. Wheat head counting in the wild by an augmented feature pyramid networks-based convolutional neural network
Jiang et al. HISTIF: A new spatiotemporal image fusion method for high-resolution monitoring of crops at the subfield level
Skovsen et al. Robust species distribution mapping of crop mixtures using color images and convolutional neural networks
CN115861858A (zh) 基于背景过滤的小样本学习农作物冠层覆盖度计算方法
Yang et al. Fraction vegetation cover extraction of winter wheat based on RGB image obtained by UAV
Zhang et al. Multispectral drone imagery and SRGAN for rapid phenotypic mapping of individual chinese cabbage plants
Farooque et al. Red-green-blue to normalized difference vegetation index translation: a robust and inexpensive approach for vegetation monitoring using machine vision and generative adversarial networks
Chen et al. 3D model construction and ecological environment investigation on a regional scale using UAV remote sensing
Zou et al. The fusion of satellite and unmanned aerial vehicle (UAV) imagery for improving classification performance
CN112924967A (zh) 一种基于雷达和光学数据组合特征的农作物倒伏遥感监测方法与应用
CN112581301A (zh) 一种基于深度学习的农田残膜残留量的检测预警方法及系统
CN116205879A (zh) 一种基于无人机图像及深度学习的小麦倒伏面积估算方法
CN115147730A (zh) 一种联合全卷积神经网络和集成学习的遥感分类方法
Yue et al. Mapping cropland rice residue cover using a radiative transfer model and deep learning
Yang et al. Simple, Low-Cost Estimation of Potato Above-Ground Biomass Using Improved Canopy Leaf Detection Method
Yang et al. Feature extraction of cotton plant height based on DSM difference method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22862524

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE