WO2020000271A1 - Unmanned aerial vehicle-based data processing method and apparatus - Google Patents

Unmanned aerial vehicle-based data processing method and apparatus

Info

Publication number
WO2020000271A1
WO2020000271A1 · PCT/CN2018/093173 · CN2018093173W
Authority
WO
WIPO (PCT)
Prior art keywords
feature
dimensional gabor
gabor
dimensional
expression
Prior art date
Application number
PCT/CN2018/093173
Other languages
English (en)
French (fr)
Inventor
贾森
张萌
朱家松
邬国锋
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Priority to PCT/CN2018/093173 priority Critical patent/WO2020000271A1/zh
Publication of WO2020000271A1 publication Critical patent/WO2020000271A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Definitions

  • the present invention relates to the field of computing, and in particular to a method and apparatus for unmanned aerial vehicle (UAV)-based data processing.
  • a hyperspectral image, obtained by imaging ground objects over hundreds of bands with a hyperspectral sensor, contains radiometric, spatial, and spectral information about those objects, making their identification and classification more effective; it is a current research hotspot in remote sensing imaging. However, hyperspectral sensors are susceptible to cloud cover, and the phenomena of the same object exhibiting different spectra and different objects sharing the same spectrum are widespread in hyperspectral images, so classification performed directly on the original hyperspectral image has low accuracy.
  • the embodiment of the invention discloses a method for data processing based on unmanned aerial vehicles.
  • feature-level fusion of the hyperspectral image with laser detection and ranging (LiDAR) data, which contains the elevation geometry of the ground objects, is performed to improve ground-object classification accuracy.
  • a first aspect of the embodiments of the present invention discloses a method for processing data based on an unmanned aerial vehicle.
  • the method includes:
  • Supervised classification is performed according to the fused expression feature with a support vector machine based on a radial basis function (RBF) kernel.
  • a second aspect of the invention discloses an apparatus, the apparatus including:
  • an acquisition unit, configured to synchronously acquire a hyperspectral image and laser detection and ranging (LiDAR) data;
  • an extraction unit, configured to extract amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
  • the extraction unit being further configured to extract amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
  • a concatenation unit, configured to concatenate the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
  • a dimensionality reduction unit, configured to perform dimensionality reduction on the target Gabor feature expression and the hyperspectral image with the kernel principal component analysis (KPCA) algorithm;
  • an obtaining unit, configured to obtain a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image;
  • a classification unit, configured to perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
  • a third aspect of the present invention discloses a server, where the server includes: a memory storing executable program code, and
  • a processor coupled to the memory;
  • the processor calls the executable program code stored in the memory to execute the method according to any one of the first aspect of the present invention.
  • a fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a terminal to execute the method according to any one of the first aspect of the present invention.
  • in the embodiments of the present invention, the hyperspectral image and the LiDAR data are acquired and stored; a two-dimensional Gabor filter and a three-dimensional Gabor filter are used to extract amplitude features from the LiDAR data and the hyperspectral image, respectively, yielding texture features that are similar in form and complementary in content.
  • The two types of extracted texture features are concatenated, the KPCA algorithm is used to extract features from the concatenated texture features, and the extracted features are concatenated with the dimensionality-reduced original hyperspectral data to obtain the final fused features, which are classified in a supervised manner with a support vector machine.
  • The advantage of this method is that Gabor features are used to extract the texture features of heterogeneous data, so that the original heterogeneous data can be fused in the texture feature space; the feature expression of the effective spectral information of the original hyperspectral image is also incorporated, so that spectral, texture, and elevation features are ultimately fused, improving the recognition accuracy of ground objects.
  • FIG. 1 is a schematic flowchart of a method for processing data based on a drone according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another UAV-based data processing method disclosed by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a UAV-based data processing device disclosed by an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another unmanned aerial vehicle-based data processing device disclosed by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a physical structure of a UAV-based data processing device disclosed in an embodiment of the present invention.
  • the embodiments of the invention disclose a data processing method and apparatus that improve ground-object classification accuracy by performing feature-level fusion of a hyperspectral image with laser detection and ranging data containing the elevation geometry of the ground objects. Each is described in detail below.
  • the invention is a feature extraction and fusion classification technique and system that jointly uses UAV hyperspectral images and laser detection and ranging (Light Detection and Ranging, LiDAR) data.
  • Hyperspectral images, obtained by imaging ground objects over hundreds of bands with hyperspectral sensors, contain radiometric, spatial, and spectral information about the objects, making their identification and classification more effective.
  • However, hyperspectral sensors are susceptible to cloud cover.
  • At the same time, the phenomena of the same object exhibiting different spectra and different objects sharing the same spectrum are widespread in hyperspectral images, resulting in low accuracy when classification is performed directly on the original hyperspectral image.
  • for LiDAR data, the elevation information it contains is spatially correlated, so two-dimensional spatial feature extraction methods are mainly used; these methods mainly apply filters of different orientations to express the characteristics of the LiDAR data. Specifically, spatial features are first extracted in each orientation, and the spatial features of the different orientations are then stacked together.
  • The two-dimensional Gabor filter and the two-dimensional local binary pattern (LBP) are two typical spatial feature extraction methods.
  • LBP Local Binary Pattern
  • for hyperspectral images, traditional two-dimensional spatial feature extraction methods cannot fully exploit their joint spatial-spectral information; three-dimensional spatial-spectral feature extraction methods examine the spatial-spectral structural relationships between different pixels and can express hyperspectral images jointly in terms of spatial and spectral features.
  • Joint spatial-spectral feature extraction makes full use of the spatial, radiometric, and spectral information about ground objects in the hyperspectral image and can obtain discriminative information reflecting multiple characteristics of the objects, improving the ability to distinguish them.
  • The 3D Gabor filter is a typical joint spatial-spectral feature extraction method: by selecting and fusing a series of 3D Gabor features, representative features reflecting the joint spatial-spectral structure of hyperspectral images can be obtained.
  • KPCA Kernel Principal Component Analysis
  • FIG. 1 is a schematic flowchart of a data processing method based on a drone according to an embodiment of the present invention.
  • the data processing method may include the following steps.
  • the original hyperspectral image and LiDAR data of the ground scene and targets are acquired synchronously, and the data are stored in real time. It can be understood that the acquired data can be stored locally or in a distributed manner.
  • A spectral image with a spectral resolution on the order of 10l is called a hyperspectral image.
  • Hyperspectral sensors, namely imaging spectrometers, mounted on different space platforms image the target area simultaneously in dozens to hundreds of continuous, finely divided spectral bands in the ultraviolet, visible, near-infrared, and mid-infrared regions of the electromagnetic spectrum. While surface image information is obtained, its spectral information is obtained at the same time, so that spectrum and image are combined. The most prominent characteristic is the combination of imaging technology with spectral detection technology: while the spatial characteristics of the target are imaged, each spatial pixel is dispersed into dozens or even hundreds of narrow bands for continuous spectral coverage.
  • the data formed in this way can be described figuratively as a "three-dimensional data cube".
  • x and y denote the coordinate axes of the two-dimensional plane pixel information,
  • and the third dimension (the λ axis) is the wavelength coordinate axis.
  • The hyperspectral image combines the image information and the spectral information of the sample.
  • The image information can reflect external quality characteristics such as the size, shape, and defects of the sample; because different components absorb the spectrum differently, the image reflects a given defect more distinctly at a particular wavelength, while the spectral information can fully reflect differences in the internal physical structure and chemical composition of the sample.
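  • As a toy illustration (an addition for readability, not part of the original disclosure), such a cube is simply a three-dimensional array indexed as (x, y, λ); the dimensions below are made up.

```python
import numpy as np

# toy hyperspectral cube: two spatial axes (x, y) plus one spectral axis (lambda)
X, Y, B = 100, 120, 224
cube = np.random.rand(X, Y, B)

pixel_spectrum = cube[40, 55, :]   # full spectrum of one spatial pixel, length B
band_image = cube[:, :, 17]        # single-band grey image, shape (X, Y)
```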
  • the two-dimensional Gabor feature extraction of the LiDAR data includes: let the original LiDAR data image be I_LiDAR ∈ R^(X×Y), where X and Y are the spatial dimensions of the image. The 2D Gabor filter bank generated in step (1) is convolved with the image I_LiDAR, and the absolute value of the result is taken to obtain the 24 two-dimensional Gabor amplitude features.
  • the joint spatial-spectral coordinates of a pixel in a multi-band image are (x, y, b), where b denotes a band of the image.
  • Four different frequency magnitudes {f_s, s = 1, 2, ..., 4} and 13 different orientations are designed.
  • the three-dimensional Gabor feature extraction of the hyperspectral image includes: let the original hyperspectral image be I_HSI ∈ R^(X×Y×B), where B is the spectral dimension of the hyperspectral image. The 3D Gabor filter bank generated in step (2) is convolved with the image I_HSI, and the absolute value of the result is taken to obtain the 52 three-dimensional Gabor amplitude features.
  • the Gabor feature based on KPCA dimensionality reduction is fused with the original hyperspectral image.
  • considering that the original hyperspectral image and the extracted Gabor features have a high spectral dimension, large inter-band redundancy, and heterogeneity,
  • the dimensionality is compressed to K (K < B) with the KPCA algorithm to obtain I_KPCA ∈ R^(X×Y×K) and N_KPCA ∈ R^(X×Y×K).
  • x_i is the i-th feature vector,
  • y_i is the class label of x_i,
  • and α_i, b are the model parameters to be determined.
  • the hyperspectral image and LiDAR data are acquired and stored; the two-dimensional Gabor filter and the three-dimensional Gabor filter are used to extract amplitude features from the LiDAR data and the hyperspectral image, respectively, to obtain texture features that are similar in form and complementary in content.
  • The two types of extracted texture features are concatenated, the KPCA algorithm is used to extract features from the concatenation, and the extracted features are concatenated with the dimensionality-reduced original hyperspectral data to obtain the final fused features, which are classified in a supervised manner with a support vector machine.
  • The advantage of this method is that Gabor features are used to extract the texture features of heterogeneous data, so that the original heterogeneous data can be fused in the texture feature space; the feature expression of the effective spectral information of the original hyperspectral image is also incorporated, so that spectral, texture, and elevation features are ultimately fused, improving the recognition accuracy of ground objects.
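  • As an illustrative aside (not part of the original disclosure), the fusion-and-classification half of this pipeline can be sketched as follows, assuming the Gabor amplitude stacks and the hyperspectral cube have already been computed; random arrays stand in for real data, and the KPCA kernel, the number of components K, and the SVM hyperparameters are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, Y, B, K = 32, 32, 50, 10           # spatial size, bands, reduced dimension (toy values)

hsi = rng.random((X, Y, B))           # stand-in for the original hyperspectral cube I_HSI
gabor2d = rng.random((X, Y, 24))      # stand-in for the 2D Gabor amplitudes of LiDAR (L_H = 24)
gabor3d = rng.random((X, Y, 52 * B))  # stand-in for the 3D Gabor amplitudes of the HSI (L_G = 52*B)

# 1) concatenate 2D and 3D Gabor amplitudes into the target Gabor expression
gabor = np.concatenate([gabor2d, gabor3d], axis=-1)

# 2) KPCA on the Gabor expression and on the original cube (pixels as samples)
def kpca(cube, k):
    flat = cube.reshape(-1, cube.shape[-1])
    return KernelPCA(n_components=k, kernel="rbf").fit_transform(flat)

i_kpca = kpca(hsi, K)                 # plays the role of I_KPCA, shape (X*Y, K)
n_kpca = kpca(gabor, K)               # plays the role of the reduced Gabor features, shape (X*Y, K)

# 3) fuse: F = {I_KPCA; F_KPCA} in R^(X*Y x 2K)
fused = np.concatenate([i_kpca, n_kpca], axis=1)

# 4) supervised classification with an RBF-kernel SVM on labeled pixels
labels = rng.integers(0, 3, size=X * Y)        # toy ground-truth labels
train = rng.random(X * Y) < 0.2                # random train/test split
clf = SVC(kernel="rbf").fit(fused[train], labels[train])
pred = clf.predict(fused[~train])
```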
  • FIG. 2 is a schematic flowchart of a data processing method based on a drone according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps.
  • the fusion process includes: connecting the two-dimensional Gabor amplitude feature and the three-dimensional Gabor amplitude feature to obtain a target Gabor feature expression (that is, a texture feature expression);
  • S205: Supervised classification is performed according to the fused expression feature with a support vector machine based on a radial basis function (RBF) kernel.
  • In this way, Gabor features can be used to extract the texture features of the heterogeneous data, so that the original heterogeneous data can be fused in the texture feature space.
  • The feature expression of the effective spectral information of the original hyperspectral image is also incorporated.
  • FIG. 3 is a schematic structural diagram of a UAV-based data processing device disclosed in an embodiment of the present invention.
  • the structure described in FIG. 3 may include:
  • an acquisition unit 301, configured to synchronously acquire a hyperspectral image and laser detection and ranging (LiDAR) data;
  • an extraction unit 302, configured to extract amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
  • the extraction unit 302 being further configured to extract amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
  • a concatenation unit 303, configured to concatenate the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
  • a dimensionality reduction unit 304, configured to perform dimensionality reduction on the target Gabor feature expression and the hyperspectral image with the kernel principal component analysis (KPCA) algorithm;
  • an obtaining unit 305, configured to obtain a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image;
  • a classification unit 306, configured to perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
  • the device shown in FIG. 3 can be used to execute the method described in S101 to S107.
  • FIG. 4 is a schematic structural diagram of another UAV-based data processing device disclosed by an embodiment of the present invention.
  • the apparatus shown in FIG. 4 includes:
  • an acquisition unit 401, configured to achieve synchronous acquisition and storage of the hyperspectral image and LiDAR data by means of a drone;
  • a generation unit 402, configured to generate a two-dimensional Gabor filter and a three-dimensional Gabor filter;
  • an extraction unit 403, configured to perform Gabor feature extraction on the LiDAR data and the hyperspectral image with the generated two-dimensional and three-dimensional Gabor filters, respectively;
  • a fusion unit 404, configured to fuse the extracted Gabor features to obtain the texture feature expression of the ground objects;
  • a dimensionality reduction unit 405, configured to perform dimensionality reduction on the target Gabor feature expression and the hyperspectral image with the kernel principal component analysis (KPCA) algorithm, and to obtain the fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image;
  • a classification unit 406, configured to perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
  • the terminal described in FIG. 4 can execute the method described in S201 to S205.
  • FIG. 5 is a schematic structural diagram of another unmanned aerial vehicle-based data processing device disclosed in an embodiment of the present invention.
  • the device may include: at least one processor 510, such as a CPU, a memory 520, at least one communication bus 530, an input device 540, and an output device 550.
  • the communication bus 530 is used to implement a communication connection between these components.
  • the memory 520 may be a high-speed RAM memory, or may be a non-volatile memory, for example at least one magnetic disk memory.
  • the memory 520 may optionally be at least one storage device located far from the foregoing processor 510.
  • the memory 520 stores a set of program code, and the processor 510 calls the program code stored in the memory 520 to execute the method shown in S101 to S107, and may also execute the method shown in steps S201 to S205.
  • a computer-readable storage medium stores a computer program.
  • when the computer program is executed, the processor executes the method shown in steps S101 to S107.
  • the method shown in steps S201 to S205 may also be performed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A UAV-based data processing method and apparatus. The method includes: synchronously acquiring and storing hyperspectral images and LiDAR data by means of a UAV (S201); extracting amplitude features from the LiDAR data and the hyperspectral image with a two-dimensional Gabor filter and a three-dimensional Gabor filter, respectively, to obtain texture features that are similar in form and complementary in content; concatenating the two types of extracted texture features, performing feature extraction with the KPCA algorithm, and further concatenating the result with the dimensionality-reduced original hyperspectral data to obtain the final fused features; and performing supervised classification with a support vector machine. The advantage of the method is that Gabor features are used to extract the texture features of heterogeneous data, so that the original heterogeneous data become fusible in the texture feature space; the feature expression of the effective spectral information of the original hyperspectral image is also incorporated, so that the three major kinds of features (spectrum, texture, and elevation) are ultimately fused, improving the recognition accuracy of ground objects.

Description

Unmanned aerial vehicle-based data processing method and apparatus
Technical Field
The present invention relates to the field of computing, and in particular to an unmanned aerial vehicle (UAV)-based data processing method and apparatus.
Background
At present, with the development of science and technology, obtaining thematic ground information from remote sensing imagery has become a research and application hotspot in spatial information science and related industries. The foundation and key of information extraction from remote sensing imagery is image classification, i.e., partitioning the image targets that correspond to the same class of ground objects.
Specifically, a hyperspectral image obtained by imaging ground objects over hundreds of bands with a hyperspectral sensor contains radiometric, spatial, and spectral information about the objects, which makes their identification and classification more effective; it is a current research hotspot in remote sensing imaging technology. However, hyperspectral sensors are susceptible to cloud cover, and the phenomena of the same object exhibiting different spectra and different objects sharing the same spectrum are widespread in hyperspectral images, so that classification performed directly on the original hyperspectral image has low accuracy.
Summary of the Invention
An embodiment of the present invention discloses a UAV-based data processing method that improves ground-object classification accuracy by performing feature-level fusion of a hyperspectral image with laser detection and ranging (LiDAR) data, which contains the elevation geometry of the ground objects.
A first aspect of the embodiments of the present invention discloses a UAV-based data processing method, the method including:
synchronously acquiring a hyperspectral image and laser detection and ranging (LiDAR) data;
extracting amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
extracting amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
concatenating the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
performing dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm;
obtaining a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image; and
performing supervised classification according to the fused expression feature with a support vector machine based on a radial basis function (RBF) kernel.
A second aspect of the present invention discloses an apparatus, the apparatus including:
an acquisition unit, configured to synchronously acquire a hyperspectral image and LiDAR data;
an extraction unit, configured to extract amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
the extraction unit being further configured to extract amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
a concatenation unit, configured to concatenate the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
a dimensionality reduction unit, configured to perform dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm;
an obtaining unit, configured to obtain a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image; and
a classification unit, configured to perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
A third aspect of the present invention discloses a server, the server including:
a memory storing executable program code; and
a processor coupled to the memory;
the processor invoking the executable program code stored in the memory to execute the method according to any one of the first aspect of the present invention.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a terminal to execute the method according to any one of the first aspect of the present invention.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
In the embodiments of the present invention, the hyperspectral image and the LiDAR data are acquired and stored; amplitude features are extracted from the LiDAR data and the hyperspectral image with a two-dimensional Gabor filter and a three-dimensional Gabor filter, respectively, yielding texture features that are similar in form and complementary in content; the two types of extracted texture features are concatenated, features are extracted from the concatenated texture features with the KPCA algorithm, and the extracted features are concatenated with the dimensionality-reduced original hyperspectral data to obtain the final fused features, which are then classified in a supervised manner with a support vector machine. The advantage of this method is that Gabor features are used to extract the texture features of heterogeneous data, so that the original heterogeneous data become fusible in the texture feature space; the feature expression of the effective spectral information of the original hyperspectral image is also incorporated, so that spectral, texture, and elevation features are ultimately fused, improving the recognition accuracy of ground objects.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a UAV-based data processing method disclosed in an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another UAV-based data processing method disclosed in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a UAV-based data processing apparatus disclosed in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another UAV-based data processing apparatus disclosed in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the physical structure of a UAV-based data processing apparatus disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiments of the present invention disclose a data processing method and apparatus that improve ground-object classification accuracy by performing feature-level fusion of a hyperspectral image with laser detection and ranging data containing the elevation geometry of the ground objects. These are described in detail below.
The present invention is a feature extraction and fusion classification technique and system that jointly uses UAV hyperspectral images and laser detection and ranging (Light Detection and Ranging, LiDAR) data. A hyperspectral image obtained by imaging ground objects over hundreds of bands with a hyperspectral sensor contains radiometric, spatial, and spectral information about the objects, which makes their identification and classification more effective. However, hyperspectral sensors are susceptible to cloud cover, and the phenomena of the same object exhibiting different spectra and different objects sharing the same spectrum are widespread in hyperspectral images, so that classification performed directly on the original hyperspectral image has low accuracy. With the continuous progress of remote sensing imaging technology, combining the features of hyperspectral images with LiDAR data containing the elevation geometry of ground objects is a feasible way to improve ground-object classification accuracy.
In general, before classification is carried out, features are first extracted from the hyperspectral image and the LiDAR data separately; the extracted features are then reduced in dimensionality and fused; finally, the fused features are used for ground-object classification. According to the different dimensionality of hyperspectral images and LiDAR data, three-dimensional spatial-spectral feature extraction and two-dimensional spatial elevation feature extraction can be used, respectively.
For LiDAR data, the ground-object elevation information it contains is spatially correlated, so two-dimensional spatial feature extraction methods are mainly used. Two-dimensional spatial feature extraction mainly applies filters of different orientations to express the characteristics of the LiDAR data: spatial features are first extracted in each orientation, and the spatial features of the different orientations are then stacked together. For example, the two-dimensional Gabor filter and the two-dimensional local binary pattern (Local Binary Pattern, LBP) are two typical spatial feature extraction methods. Two-dimensional Gabor features are robust to illumination changes in an image, while two-dimensional LBP can make full use of local spatial dependencies in an image.
For hyperspectral images, because of their three-dimensional joint spatial-spectral structure, traditional two-dimensional spatial feature extraction methods cannot fully exploit the joint spatial-spectral information. Three-dimensional spatial-spectral feature extraction methods examine the spatial-spectral structural relationships between different pixels and can express a hyperspectral image jointly in terms of spatial and spectral features. Joint spatial-spectral feature extraction makes full use of the spatial, radiometric, and spectral information about ground objects in the hyperspectral image and can obtain discriminative information reflecting multiple characteristics of the objects, improving the discriminability of the features. The three-dimensional Gabor filter is a typical joint spatial-spectral feature extraction method: by selecting and fusing a series of three-dimensional Gabor features, representative features reflecting the joint spatial-spectral structure of the hyperspectral image can be obtained.
On the other hand, the high computational complexity of classification algorithms caused by high feature dimensionality can be addressed with the kernel principal component analysis (Kernel Principal Component Analysis, KPCA) algorithm. KPCA is an effective nonlinear feature extraction and dimensionality reduction method and can improve the effect of feature fusion. In the present invention, the Gabor feature extraction method and the KPCA algorithm are combined to propose a feature fusion algorithm based on UAV hyperspectral images and LiDAR data. The algorithm integrates the elevation geometry contained in the LiDAR data with the spectral and texture information contained in the UAV hyperspectral image and establishes a unified feature extraction and fusion framework, thereby effectively improving ground-object classification accuracy.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a UAV-based data processing method disclosed in an embodiment of the present invention. The data processing method may include the following steps.
S101: synchronously acquire a hyperspectral image and laser detection and ranging (LiDAR) data.
Specifically, as the UAV platform moves, the original hyperspectral image and LiDAR data of the ground scene and targets are acquired synchronously, and the data are stored in real time. It can be understood that the acquired data can be stored locally or in a distributed manner.
In addition, it should be pointed out that a spectral image with a spectral resolution on the order of 10l is called a hyperspectral image. Hyperspectral sensors, i.e., imaging spectrometers, mounted on different space platforms image the target area simultaneously in dozens to hundreds of continuous, finely divided spectral bands in the ultraviolet, visible, near-infrared, and mid-infrared regions of the electromagnetic spectrum. While surface image information is obtained, its spectral information is obtained at the same time, achieving the combination of spectrum and image. The most prominent characteristic is the combination of imaging technology with spectral detection technology: while the spatial characteristics of the target are imaged, each spatial pixel is dispersed into dozens or even hundreds of narrow bands for continuous spectral coverage. The data formed in this way can be described figuratively as a "three-dimensional data cube"; for example, x and y denote the coordinate axes of the two-dimensional plane pixel information, and the third dimension (the λ axis) is the wavelength coordinate axis. A hyperspectral image combines the image information and the spectral information of a sample: the image information can reflect external quality characteristics such as the size, shape, and defects of the sample, and since different components absorb the spectrum differently, the image shows certain defects more distinctly at particular wavelengths, while the spectral information can fully reflect differences in the internal physical structure and chemical composition of the sample.
S102: extract amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression.
Here, let the spatial-domain coordinates of a pixel (a sample) of a single-band image be (x, y). For two-dimensional Gabor feature extraction, a bank of 24 Gabor filters is designed according to formula (1) from 4 different frequencies {u_m, m = 1, 2, ..., 4} and 6 different orientations {θ_n, n = 1, 2, ..., 6}, numbered {ψ_{m,n}, m = 1, 2, ..., 4, n = 1, 2, ..., 6}:
Figure PCTCN2018093173-appb-000001
where z = x cos θ_n + y sin θ_n.
Two-dimensional Gabor feature extraction on the LiDAR data includes: let the original LiDAR data image be I_LiDAR ∈ R^(X×Y), where X and Y are the spatial dimensions of the image. The two-dimensional Gabor filter bank generated in step (1) is convolved with the image I_LiDAR, and the absolute value of the result is taken, that is,
Figure PCTCN2018093173-appb-000002
This yields 24 two-dimensional Gabor amplitude features; concatenating these features gives the two-dimensional Gabor feature expression of the LiDAR data, whose feature dimensionality is L_H = 24.
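A minimal Python sketch of this step is given below. Because formula (1) is reproduced only as an image in this text, a common textbook 2D Gabor kernel is used instead; the frequency values, the Gaussian width sigma, and the kernel size are placeholder assumptions, and only the overall pattern (filter bank, convolution, absolute value, concatenation into 24 amplitude maps) follows the description above.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_2d(frequency, theta, sigma=2.0, size=15):
    """Textbook 2D Gabor kernel (the patent's exact formula (1) is only given
    as an image, so this form is an assumption)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    z = x * np.cos(theta) + y * np.sin(theta)          # z = x cos(theta) + y sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * frequency * z)
    return envelope * carrier

def gabor2d_amplitudes(lidar_img, freqs, thetas):
    """Convolve the LiDAR image with every filter and keep the amplitude |.|."""
    feats = []
    for f in freqs:
        for th in thetas:
            resp = fftconvolve(lidar_img, gabor_2d(f, th), mode="same")
            feats.append(np.abs(resp))                  # absolute-value operation
    return np.stack(feats, axis=-1)                     # shape (X, Y, L_H)

# toy usage: 4 frequencies x 6 orientations = 24 amplitude features (L_H = 24)
lidar = np.random.rand(64, 64)
freqs = [0.05, 0.1, 0.2, 0.4]                           # placeholder values
thetas = [n * np.pi / 6 for n in range(6)]
F2d = gabor2d_amplitudes(lidar, freqs, thetas)          # (64, 64, 24)
```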
S103: extract amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression.
Here, the joint spatial-spectral coordinates of a pixel of a multi-band image are (x, y, b), where b denotes a band of the image. For three-dimensional Gabor feature extraction, according to formula (2), 4 different frequency magnitudes {f_s, s = 1, 2, ..., 4} and 13 different orientations
Figure PCTCN2018093173-appb-000003
are used to design a bank of 52 Gabor filters in total, numbered
Figure PCTCN2018093173-appb-000004
Figure PCTCN2018093173-appb-000005
where u = f_s sin φ_t cos θ_t, v = f_s sin φ_t sin θ_t, and w = f_s cos φ_t.
Three-dimensional Gabor feature extraction on the hyperspectral image includes: let the original hyperspectral image be I_HSI ∈ R^(X×Y×B), where B is the spectral dimension of the hyperspectral image. The three-dimensional Gabor filter bank generated in step (2) is convolved with the image I_HSI, and the absolute value of the result is taken, that is,
Figure PCTCN2018093173-appb-000006
This yields 52 three-dimensional Gabor amplitude features; concatenating these features gives the three-dimensional Gabor feature expression of the hyperspectral image, whose feature dimensionality is L_G = 52·B.
Figure PCTCN2018093173-appb-000007
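Analogously, the 3D extraction can be sketched as below. Formula (2) and the 13 orientation pairs are likewise only available as images here, so a standard 3D Gabor form built from u = f_s sin φ_t cos θ_t, v = f_s sin φ_t sin θ_t, w = f_s cos φ_t (as stated above) is assumed, and the toy frequencies and orientations are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_3d(f_s, theta_t, phi_t, sigma=2.0, size=9):
    """Standard 3D Gabor kernel over (x, y, b); the patent's formula (2) is only
    given as an image, so this textbook form is an assumption."""
    half = size // 2
    x, y, b = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
    u = f_s * np.sin(phi_t) * np.cos(theta_t)
    v = f_s * np.sin(phi_t) * np.sin(theta_t)
    w = f_s * np.cos(phi_t)
    envelope = np.exp(-(x**2 + y**2 + b**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * (u * x + v * y + w * b))

def gabor3d_amplitudes(hsi_cube, freqs, orientations):
    """|I_HSI * psi| for every (frequency, orientation) pair, concatenated along
    the band axis; 4 frequencies x 13 orientations would give L_G = 52 * B."""
    feats = []
    for f in freqs:
        for theta, phi in orientations:
            resp = fftconvolve(hsi_cube, gabor_3d(f, theta, phi), mode="same")
            feats.append(np.abs(resp))
    return np.concatenate(feats, axis=-1)

# toy usage with placeholder parameters
hsi = np.random.rand(32, 32, 20)
feats = gabor3d_amplitudes(hsi, [0.1, 0.2], [(0.0, np.pi / 4), (np.pi / 2, np.pi / 4)])
```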
S104: concatenate the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression.
It should be pointed out that concatenating the extracted two-dimensional and three-dimensional Gabor amplitude features gives the overall Gabor feature expression (i.e., the target Gabor feature expression), with L_F = L_H + L_G bands.
S105: perform dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm.
Here, it can be understood that the KPCA-reduced Gabor features are fused with the original hyperspectral image. Considering that the original hyperspectral image I_HSI ∈ R^(X×Y×B) and the extracted Gabor features
Figure PCTCN2018093173-appb-000008
have a high spectral dimension, large inter-band redundancy, and heterogeneity, the KPCA algorithm is used to compress the dimensionality to K (K < B), yielding I_KPCA ∈ R^(X×Y×K) and N_KPCA ∈ R^(X×Y×K), respectively.
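A minimal sketch of this compression step with scikit-learn's KernelPCA is shown below; the kernel choice and the value of K are assumptions, since the text only states that KPCA reduces the dimensionality to K < B. Applying the same helper to the hyperspectral cube and to the target Gabor expression gives I_KPCA and N_KPCA, and the concatenation in step S106 then gives F.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_reduce(cube, k, kernel="rbf"):
    """Compress the last axis of an (X, Y, D) cube to k components with KPCA,
    treating every pixel as a sample; for large images, a subsample of pixels
    could be used to fit the kernel model first."""
    X, Y, D = cube.shape
    reduced = KernelPCA(n_components=k, kernel=kernel).fit_transform(cube.reshape(-1, D))
    return reduced.reshape(X, Y, k)

# toy usage
i_kpca = kpca_reduce(np.random.rand(16, 16, 30), 5)      # stands in for I_KPCA
n_kpca = kpca_reduce(np.random.rand(16, 16, 60), 5)      # stands in for N_KPCA
fused = np.concatenate([i_kpca, n_kpca], axis=-1)        # F in R^(X x Y x 2K), as in S106
```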
S106: obtain a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image.
It can be understood that further concatenating the two gives the final fused expression feature F = {I_KPCA; F_KPCA} ∈ R^(X×Y×2K).
S107: perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
Specifically, performing supervised classification with the RBF-kernel support vector machine includes: given a training data set T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} in a feature space, an SVM classifier with a radial basis function (Radial Basis Function, RBF) kernel can be expressed as:
Figure PCTCN2018093173-appb-000009
Figure PCTCN2018093173-appb-000010
where x_i is the i-th feature vector, y_i is the class label of x_i, and α_i and b are the model parameters to be determined.
For the feature samples F ∈ R^(Z×2K) extracted in step S106, Z = X×Y denotes the total number of samples. The samples are partitioned into a training data set F_train and a test data set F_test.
Let f_tr ∈ F_train be a training sample and {f_tr^k, k = 1, 2, ..., 2K} its corresponding 2K feature values; the support vector machine method of formula (3) is used for model training, giving the output model Model = {α_i, b}.
Let f_te ∈ F_test be a test sample and {f_te^k, k = 1, 2, ..., 2K} its corresponding 2K feature values; the class of the sample is predicted as:
Class(f_te) = w(f_te)_Model
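The training and prediction step can be sketched with scikit-learn's RBF-kernel SVM as follows; the regularization parameter C, the value of gamma, and the way the labeled pixels are split into F_train and F_test are assumptions not fixed by the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_and_classify(fused, labels, test_size=0.8, C=1.0, gamma="scale"):
    """fused: (Z, 2K) feature matrix; labels: (Z,) class labels of the Z = X*Y pixels.
    An RBF-kernel SVM plays the role of the classifier in formula (3); its dual
    coefficients and intercept correspond to the model parameters {alpha_i, b}."""
    f_train, f_test, y_train, y_test = train_test_split(
        fused, labels, test_size=test_size, stratify=labels, random_state=0)
    model = SVC(kernel="rbf", C=C, gamma=gamma).fit(f_train, y_train)
    return model, model.predict(f_test), y_test

# toy usage
Z, K = 500, 10
model, pred, truth = train_and_classify(np.random.rand(Z, 2 * K),
                                        np.random.randint(0, 3, Z))
```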
It can be seen from the above that, by implementing the technical solution provided by the embodiments of the present invention, the hyperspectral image and LiDAR data are acquired and stored; amplitude features are extracted from the LiDAR data and the hyperspectral image with the two-dimensional and three-dimensional Gabor filters, respectively, yielding texture features that are similar in form and complementary in content; the two types of extracted texture features are concatenated, features are extracted from the concatenated texture features with the KPCA algorithm, and the extracted features are concatenated with the dimensionality-reduced original hyperspectral data to obtain the final fused features, which are then classified in a supervised manner with a support vector machine. The advantage of this method is that Gabor features are used to extract the texture features of heterogeneous data, so that the original heterogeneous data become fusible in the texture feature space; the feature expression of the effective spectral information of the original hyperspectral image is also incorporated, so that spectral, texture, and elevation features are ultimately fused, improving the recognition accuracy of ground objects.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of another UAV-based data processing method disclosed in an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps.
S201: achieve synchronous acquisition and storage of the hyperspectral image and LiDAR data by means of the UAV.
S202: generate a two-dimensional Gabor filter and a three-dimensional Gabor filter.
S203: perform Gabor feature extraction on the LiDAR data and the hyperspectral image with the generated two-dimensional and three-dimensional Gabor filters, respectively, and then fuse the extracted Gabor features to obtain the texture feature expression of the ground objects.
Here, Gabor feature extraction is performed on the LiDAR data with the generated two-dimensional Gabor filter to obtain two-dimensional Gabor amplitude features; likewise, Gabor feature extraction is performed on the hyperspectral image with the generated three-dimensional Gabor filter to obtain three-dimensional Gabor amplitude features.
The fusion process includes: concatenating the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain the target Gabor feature expression (i.e., the texture feature expression).
S204: perform dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm, and obtain the fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image.
S205: perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
It can be understood that for the specific implementation of the relevant steps in S201 to S205, reference may be made to the description of Embodiment 1.
In the method described in FIG. 2, Gabor features can be used to extract the texture features of heterogeneous data, so that the original heterogeneous data become fusible in the texture feature space; the feature expression of the effective spectral information of the original hyperspectral image is also incorporated, so that spectral, texture, and elevation features are ultimately fused, improving the recognition accuracy of ground objects.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a UAV-based data processing apparatus disclosed in an embodiment of the present invention. The structure described in FIG. 3 may include:
an acquisition unit 301, configured to synchronously acquire a hyperspectral image and LiDAR data;
an extraction unit 302, configured to extract amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
the extraction unit 302 being further configured to extract amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
a concatenation unit 303, configured to concatenate the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
a dimensionality reduction unit 304, configured to perform dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm;
an obtaining unit 305, configured to obtain a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image; and
a classification unit 306, configured to perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
It should be pointed out that the structure shown in FIG. 3 can be used to execute the method described in S101 to S107.
Referring also to FIG. 4, FIG. 4 is a schematic structural diagram of another UAV-based data processing apparatus disclosed in an embodiment of the present invention. The apparatus shown in FIG. 4 includes:
an acquisition unit 401, configured to achieve synchronous acquisition and storage of the hyperspectral image and LiDAR data by means of the UAV;
a generation unit 402, configured to generate a two-dimensional Gabor filter and a three-dimensional Gabor filter;
an extraction unit 403, configured to perform Gabor feature extraction on the LiDAR data and the hyperspectral image with the generated two-dimensional and three-dimensional Gabor filters, respectively;
a fusion unit 404, configured to fuse the extracted Gabor features to obtain the texture feature expression of the ground objects;
a dimensionality reduction unit 405, configured to perform dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm, and to obtain the fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image; and
a classification unit 406, configured to perform supervised classification according to the fused expression feature with a support vector machine based on an RBF kernel.
It can be understood that the terminal described in FIG. 4 can execute the method described in S201 to S205.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of yet another UAV-based data processing apparatus disclosed in an embodiment of the present invention. As shown in FIG. 5, the apparatus may include: at least one processor 510, such as a CPU, a memory 520, at least one communication bus 530, an input device 540, and an output device 550. The communication bus 530 is used to implement communication connections between these components. The memory 520 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory. Optionally, the memory 520 may also be at least one storage device located far away from the aforementioned processor 510. The memory 520 stores a set of program code, and the processor 510 calls the program code stored in the memory 520 to execute the method shown in S101 to S107, and may also execute the method shown in steps S201 to S205.
In addition, an embodiment of the present invention discloses a computer-readable storage medium storing a computer program; when the computer program is executed, a processor executes the method shown in S101 to S107, and may also execute the method shown in steps S201 to S205.
The UAV-based data processing method and apparatus disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

  1. An unmanned aerial vehicle-based data processing method, characterized in that the method comprises:
    synchronously acquiring a hyperspectral image and laser detection and ranging (LiDAR) data;
    extracting amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
    extracting amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
    concatenating the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
    performing dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm;
    obtaining a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image; and
    performing supervised classification according to the fused expression feature with a support vector machine based on a radial basis function (RBF) kernel.
  2. The method according to claim 1, characterized in that:
    the LiDAR data image is I_LiDAR ∈ R^(X×Y), where X and Y are the spatial dimensions of the image;
    the two-dimensional Gabor filter bank is obtained from 4 different frequencies {u_m, m = 1, 2, ..., 4}, 6 different orientations {θ_n, n = 1, 2, ..., 6}, and a first preset formula;
    wherein the first formula is:
    Figure PCTCN2018093173-appb-100001
    where z = x cos θ_n + y sin θ_n;
    said extracting amplitude features from the LiDAR data with the two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression comprises:
    convolving the two-dimensional Gabor filter bank with the image I_LiDAR and taking the absolute value of the result to obtain 24 two-dimensional Gabor amplitude features; and
    concatenating the 24 two-dimensional Gabor amplitude features to obtain the two-dimensional Gabor feature expression of the LiDAR data.
  3. The method according to claim 2, characterized in that the hyperspectral image is I_HSI ∈ R^(X×Y×B), where B is the spectral dimension of the hyperspectral image and X and Y are the spatial dimensions of the image;
    the three-dimensional Gabor filter bank is obtained from 4 different frequency magnitudes {f_s, s = 1, 2, ..., 4}, 13 different orientations
    Figure PCTCN2018093173-appb-100002
    and a second preset formula;
    wherein the second formula is:
    Figure PCTCN2018093173-appb-100003
    where u = f_s sin φ_t cos θ_t, v = f_s sin φ_t sin θ_t, and w = f_s cos φ_t;
    said extracting amplitude features from the hyperspectral image with the three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression comprises:
    convolving the three-dimensional Gabor filter bank with the image I_HSI and taking the absolute value of the result to obtain 52 three-dimensional Gabor amplitude features; and
    concatenating the 52 three-dimensional Gabor amplitude features to obtain the three-dimensional Gabor feature expression of the hyperspectral image.
  4. The method according to claim 3, characterized in that said concatenating the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression comprises:
    Figure PCTCN2018093173-appb-100004
    where N is the target Gabor feature expression;
    Figure PCTCN2018093173-appb-100005
    Figure PCTCN2018093173-appb-100006
    where
    Figure PCTCN2018093173-appb-100007
    Figure PCTCN2018093173-appb-100008
    and where L_F = L_H + L_G, L_H = 24, and L_G = 52·B.
  5. The method according to any one of claims 1 to 4, characterized in that the fused expression feature is F = {I_KPCA; F_KPCA} ∈ R^(X×Y×2K), where the hyperspectral image I_HSI ∈ R^(X×Y×B) and the target Gabor feature
    Figure PCTCN2018093173-appb-100009
    after KPCA dimensionality reduction are I_KPCA ∈ R^(X×Y×K) and N_KPCA ∈ R^(X×Y×K), respectively;
    said performing supervised classification according to the fused expression feature with the support vector machine based on the radial basis function (RBF) kernel comprises:
    determining the total number of samples from the fused expression feature, and partitioning the samples into a training data set F_train and a test data set F_test;
    obtaining a model Model = {α_i, b} from the training data set F_train and the support vector machine; and
    determining the sample classes from the test data set F_test and the model.
  6. An unmanned aerial vehicle-based data processing apparatus, characterized in that the apparatus comprises:
    an acquisition unit, configured to synchronously acquire a hyperspectral image and laser detection and ranging (LiDAR) data;
    an extraction unit, configured to extract amplitude features from the LiDAR data with a two-dimensional Gabor filter bank to obtain a two-dimensional Gabor feature expression;
    the extraction unit being further configured to extract amplitude features from the hyperspectral image with a three-dimensional Gabor filter bank to obtain a three-dimensional Gabor feature expression;
    a concatenation unit, configured to concatenate the two-dimensional Gabor amplitude features with the three-dimensional Gabor amplitude features to obtain a target Gabor feature expression;
    a dimensionality reduction unit, configured to perform dimensionality reduction on the target Gabor feature expression and on the hyperspectral image, respectively, with the kernel principal component analysis (KPCA) algorithm;
    an obtaining unit, configured to obtain a fused expression feature from the dimensionality-reduced target Gabor feature expression and hyperspectral image; and
    a classification unit, configured to perform supervised classification according to the fused expression feature with a support vector machine based on a radial basis function (RBF) kernel.
  7. The apparatus according to claim 6, characterized in that:
    the LiDAR data image is I_LiDAR ∈ R^(X×Y), where X and Y are the spatial dimensions of the image;
    the two-dimensional Gabor filter bank is obtained from 4 different frequencies {u_m, m = 1, 2, ..., 4}, 6 different orientations {θ_n, n = 1, 2, ..., 6}, and a first preset formula;
    wherein the first formula is:
    Figure PCTCN2018093173-appb-100010
    where z = x cos θ_n + y sin θ_n;
    the extraction unit is specifically configured to:
    convolve the two-dimensional Gabor filter bank with the image I_LiDAR and take the absolute value of the result to obtain 24 two-dimensional Gabor amplitude features; and
    concatenate the 24 two-dimensional Gabor amplitude features to obtain the two-dimensional Gabor feature expression of the LiDAR data.
  8. The apparatus according to claim 7, characterized in that the hyperspectral image is I_HSI ∈ R^(X×Y×B), where B is the spectral dimension of the hyperspectral image and X and Y are the spatial dimensions of the image;
    the three-dimensional Gabor filter bank is obtained from 4 different frequency magnitudes {f_s, s = 1, 2, ..., 4}, 13 different orientations
    Figure PCTCN2018093173-appb-100011
    and a second preset formula;
    wherein the second formula is:
    Figure PCTCN2018093173-appb-100012
    where u = f_s sin φ_t cos θ_t, v = f_s sin φ_t sin θ_t, and w = f_s cos φ_t;
    the extraction unit is specifically configured to:
    convolve the three-dimensional Gabor filter bank with the image I_HSI and take the absolute value of the result to obtain 52 three-dimensional Gabor amplitude features; and
    concatenate the 52 three-dimensional Gabor amplitude features to obtain the three-dimensional Gabor feature expression of the hyperspectral image.
  9. The apparatus according to claim 8, characterized in that the concatenation unit is specifically configured to perform the concatenation according to the following formula:
    Figure PCTCN2018093173-appb-100013
    where N is the target Gabor feature expression;
    Figure PCTCN2018093173-appb-100014
    Figure PCTCN2018093173-appb-100015
    where
    Figure PCTCN2018093173-appb-100016
    Figure PCTCN2018093173-appb-100017
    and where L_F = L_H + L_G, L_H = 24, and L_G = 52·B.
  10. The apparatus according to any one of claims 6 to 9, characterized in that the fused expression feature is F = {I_KPCA; F_KPCA} ∈ R^(X×Y×2K), where the hyperspectral image I_HSI ∈ R^(X×Y×B) and the target Gabor feature
    Figure PCTCN2018093173-appb-100018
    after KPCA dimensionality reduction are I_KPCA ∈ R^(X×Y×K) and N_KPCA ∈ R^(X×Y×K), respectively;
    the classification unit is specifically configured to:
    determine the total number of samples from the fused expression feature, and partition the samples into a training data set F_train and a test data set F_test;
    obtain a model Model = {α_i, b} from the training data set F_train and the support vector machine; and
    determine the sample classes from the test data set F_test and the model.
PCT/CN2018/093173 2018-06-27 2018-06-27 Unmanned aerial vehicle-based data processing method and apparatus WO2020000271A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/093173 WO2020000271A1 (zh) 2018-06-27 2018-06-27 Unmanned aerial vehicle-based data processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/093173 WO2020000271A1 (zh) 2018-06-27 2018-06-27 Unmanned aerial vehicle-based data processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2020000271A1 true WO2020000271A1 (zh) 2020-01-02

Family

ID=68985546

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/093173 WO2020000271A1 (zh) 2018-06-27 2018-06-27 Unmanned aerial vehicle-based data processing method and apparatus

Country Status (1)

Country Link
WO (1) WO2020000271A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036289A (zh) * 2014-06-05 2014-09-10 哈尔滨工程大学 Hyperspectral image classification method based on spatial-spectral features and sparse representation
CN106022391A (zh) * 2016-05-31 2016-10-12 哈尔滨工业大学深圳研究生院 Parallel extraction and classification method for hyperspectral image features
CN106529484A (zh) * 2016-11-16 2017-03-22 哈尔滨工业大学 Joint classification method for spectral and LiDAR data based on class-specific multiple kernel learning
CN107451614A (zh) * 2017-08-01 2017-12-08 西安电子科技大学 Hyperspectral classification method based on fusion of spatial coordinates and spatial-spectral features
CN107480620A (zh) * 2017-08-04 2017-12-15 河海大学 Automatic target recognition method for remote sensing images based on heterogeneous feature fusion

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658063A (zh) * 2021-07-28 2021-11-16 中国科学院西安光学精密机械研究所 Automatic data correction method and system for an AOTF-type spectral imager
CN113658063B (zh) * 2021-07-28 2023-05-26 中国科学院西安光学精密机械研究所 Automatic data correction method and system for an AOTF-type spectral imager
CN113850572A (zh) * 2021-11-29 2021-12-28 泰德网聚(北京)科技股份有限公司 Method for intensive data management and redistribution
CN114529503A (zh) * 2021-12-17 2022-05-24 南京邮电大学 Plant leaf recognition method using adaptively weighted multi-feature fusion with improved Gabor and HOG features
CN114187479A (zh) * 2021-12-28 2022-03-15 河南大学 Hyperspectral image classification method based on joint spatial-spectral features
CN114511781A (zh) * 2022-01-28 2022-05-17 中国人民解放军空军军医大学 Method, apparatus and medium for recognizing camouflaged personnel with a UAV-mounted multispectral camera
CN114637876A (zh) * 2022-05-19 2022-06-17 中国电子科技集团公司第五十四研究所 Fast localization method for large-scene UAV images based on vector map feature representation
CN114972294A (zh) * 2022-06-13 2022-08-30 南京大学 Gabor filter-based method for recognizing stripe features in lung ultrasound images
CN117809193A (zh) * 2024-03-01 2024-04-02 江西省林业科学院 Method for fusing UAV hyperspectral imagery with ground-object hyperspectral data
CN117809193B (zh) * 2024-03-01 2024-05-17 江西省林业科学院 Method for fusing UAV hyperspectral imagery with ground-object hyperspectral data

Similar Documents

Publication Publication Date Title
WO2020000271A1 (zh) Unmanned aerial vehicle-based data processing method and apparatus
Chen et al. An end-to-end shape modeling framework for vectorized building outline generation from aerial images
US11244197B2 (en) Fast and robust multimodal remote sensing image matching method and system
Paisitkriangkrai et al. Semantic labeling of aerial and satellite imagery
US20170294027A1 (en) Remote determination of quantity stored in containers in geographical region
Liu et al. Data analysis in visual power line inspection: An in-depth review of deep learning for component detection and fault diagnosis
Zhang et al. RCNN-based foreign object detection for securing power transmission lines (RCNN4SPTL)
WO2019047248A1 (zh) Feature extraction method and device for hyperspectral remote sensing images
CN109101977B (zh) Unmanned aerial vehicle-based data processing method and apparatus
Tatar et al. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies
EP3751514B1 (en) Method and system for impurity detection using multi-modal imaging
Polewski et al. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds
Chawda et al. Extracting building footprints from satellite images using convolutional neural networks
Blomley et al. 3D semantic labeling of ALS point clouds by exploiting multi-scale, multi-type neighborhoods for feature extraction
Aryal et al. Mobile hyperspectral imaging for material surface damage detection
Parmehr et al. Mapping urban tree canopy cover using fused airborne lidar and satellite imagery data
Yang et al. Infrared and visible image fusion based on infrared background suppression
Jing et al. Island road centerline extraction based on a multiscale united feature
Vakalopoulou et al. Simultaneous registration, segmentation and change detection from multisensor, multitemporal satellite image pairs
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
Teo et al. Object-based land cover classification using airborne lidar and different spectral images
Gupta A survey of techniques and applications for real time image processing
Baiocchi et al. Artificial neural networks exploiting point cloud data for fragmented solid objects classification
Ouerghemmi et al. Urban vegetation mapping by airborne hyperspetral imagery; feasibility and limitations
Chen et al. Spectral Query Spatial: Revisiting the Role of Center Pixel in Transformer for Hyperspectral Image Classification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18923885

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18923885

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/04/2021)
