WO2020062360A1 - Image fusion classification method and device - Google Patents

Image fusion classification method and device

Info

Publication number
WO2020062360A1
WO2020062360A1 · PCT/CN2018/110916 · CN2018110916W
Authority
WO
WIPO (PCT)
Prior art keywords
superpixel
matrix
segmentation
classification
hyperspectral image
Prior art date
Application number
PCT/CN2018/110916
Other languages
English (en)
French (fr)
Inventor
贾森
邓彬
朱家松
邓琳
李清泉
Original Assignee
深圳大学
Priority date
Filing date
Publication date
Application filed by 深圳大学
Publication of WO2020062360A1
Priority to US17/209,120 (published as US11586863B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Definitions

  • the present invention relates to the field of computers, and in particular, to a method and device for image fusion classification.
  • superpixel extraction can provide a spatially homogeneous representation of the observed object, and thus offers an important means of describing ground-object information for hyperspectral image analysis.
  • superpixels are homogeneous regions that contain multiple pixels. Since the distribution of ground objects often shows a certain regularity, superpixel-based segmentation of hyperspectral images can better exploit the interrelationships among neighboring pixels.
  • the methods used to classify hyperspectral images using superpixels are mainly divided into two categories: pre-processing and post-processing.
  • the super-pixel information is mainly used to obtain the spatial-spectral features to further guide the subsequent classification.
  • the super-pixel segmentation map is mainly used as a method to aid decision-making to fuse the classification results.
  • however, existing methods that use superpixels share a major flaw: the number of superpixels to extract is difficult to estimate accurately.
  • Embodiments of the present invention provide a method and a device for intelligent image fusion classification.
  • the problem of estimating the number of superpixels can be avoided.
  • the cascaded method is used to fuse the spatial structure information of ground objects contained in different superpixel maps, which significantly improves the discriminative power of the features.
  • the first aspect of the present invention discloses a method for intelligent image fusion classification, the method includes:
  • a category to which a sample belongs is determined according to the regular matrix.
  • the obtaining a three-dimensional weight matrix of a hyperspectral image by using a support vector machine classifier includes:
  • Step 21 For the training sample set A, use the support vector machine method to perform model training to obtain a probability output model Model;
  • Step 22 Use the probability output model Model to perform category probability output on any test sample g to obtain a weighted probability that g belongs to each class;
  • Step 23 Repeat step 22 for all samples in the hyperspectral image to obtain a three-dimensional weight matrix of all samples in the hyperspectral image.
  • performing the superpixel segmentation on the hyperspectral image to obtain K superpixel images includes:
  • Entropy rate superpixel segmentation is used to perform superpixel segmentation on the hyperspectral image to obtain K superpixel images.
  • performing the regularization on the three-dimensional weight matrix using the superpixel segmentation maps to obtain a regular matrix includes:
  • step 41, examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, the mean of U_k over each row is computed to obtain the vector u, which is divided by the number of pixels in the superpixel, and every column vector of U_k is again set equal to u;
  • step 42, after performing step 41 on all superpixels, a regular matrix U is obtained.
  • determining the category to which the sample belongs according to the regular matrix includes:
  • the category corresponding to the maximum value for each sample in the classification matrix is the category to which that sample belongs.
  • a second aspect of the present invention discloses a device for image fusion classification, and the device includes:
  • a segmentation unit configured to perform superpixel segmentation on the hyperspectral image to obtain K superpixel images, where K is a positive integer;
  • a processing unit configured to regularize the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix;
  • a determining unit configured to determine a category to which a sample belongs according to the regular matrix.
  • the obtaining unit is specifically configured to perform the steps described in 21-23:
  • Step 21 For the training sample set A, use the support vector machine method to perform model training to obtain a probability output model Model;
  • Step 22 Use the probability output model Model to perform category probability output on any test sample g to obtain a weighted probability that g belongs to each class;
  • Step 23 Repeat step 22 for all samples in the hyperspectral image to obtain a three-dimensional weight matrix of all samples in the hyperspectral image.
  • the segmentation unit is specifically configured to perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel images.
  • the processing unit is specifically configured to perform the steps described in 41-42:
  • step 42 after performing step 41 on all superpixels, a regular matrix U is obtained.
  • the determining unit is specifically configured to integrate the regular matrix into a classification matrix; and determine a maximum value corresponding to each sample in the classification matrix as a classification to which the sample belongs.
  • a third aspect of the present invention discloses a storage medium.
  • the storage medium stores program code.
  • the program code is executed, the method of the first aspect is executed.
  • a fourth aspect of the present invention discloses an image fusion classification device, the device includes a processor and a transceiver, wherein the transceiving functions described in the second aspect can be implemented by the transceiver, and the logical functions described in the second aspect (that is, the specific functions of the logic units) can be implemented by the processor;
  • a fifth aspect of the present invention discloses a computer program product.
  • the computer program product includes program code; when the program code is executed, the method of the first aspect is executed.
  • the embodiments provided by the present invention disclose a cascade fusion hyperspectral image classification method based on superpixel mapping (Superpixel Regularized Cascade Fusion, SRCF). In these embodiments, a support vector machine classifier (Support Vector Machine Classifier, SVM) is used to obtain an initial classification result for the hyperspectral image, and the weight of each sample belonging to every class is computed, yielding a weight matrix that contains the class membership of all samples; an entropy rate superpixel segmentation (ERS) method is then used to obtain superpixel maps of the hyperspectral image with different numbers of superpixels, and the weight matrix is regularized successively from the over-segmentation maps to the under-segmentation maps; finally, the category to which each sample belongs is determined according to the maximum weight, and ground-object classification is realized.
  • FIG. 1 is a schematic flowchart of an image fusion classification method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of another image fusion classification according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of another image fusion classification method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another image fusion classification method according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an image fusion classification device according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a physical structure of another image fusion classification device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method and device for image fusion classification.
  • the method includes: using a support vector machine classifier to obtain a three-dimensional weight matrix of a hyperspectral image; performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer; regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and determining the category to which the sample belongs according to the regular matrix.
  • the present invention is a hyperspectral remote sensing image classification technique and system based on superpixel mapping. Because a hyperspectral remote sensing image formed by a hyperspectral sensor over hundreds of bands contains rich radiometric, spatial, and spectral information about ground objects, the identification and classification of ground objects becomes more effective. To improve the classification accuracy of hyperspectral remote sensing images, the spatial distribution features and spectral features of the ground objects are fused, and then the ground objects are classified.
  • FIG. 1 is a schematic flowchart of an image fusion classification method according to an embodiment of the present invention.
  • an image fusion classification method provided by an embodiment of the present invention includes the following contents:
  • the support vector machine (Support Vector Machine, SVM) was proposed for binary classification problems and has also been successfully applied to function regression and one-class classification. The commonly used multi-class strategies include: (1) the one-versus-rest method, whose idea is to treat the samples of one class as one category and the samples of all remaining classes as another category, turning the problem into a two-class problem, and then to repeat the above step for the remaining classes; (2) the one-versus-one method, which considers only two classes of samples at a time, i.e., designs one SVM model for every pair of classes, so that a total of k(k-1)/2 SVM models need to be designed; (3) the SVM decision tree method, which is usually combined with a binary decision tree to form a multi-class recognizer.
  • the hyperspectral image is acquired by an imaging spectrometer, which provides tens to hundreds of narrow spectral bands for each pixel and generates a complete and continuous spectral curve. This allows substances that cannot be detected in wide-band remote sensing to be detected in hyperspectral imagery.
  • Hyperspectral data can be represented as a hyperspectral data cube, which is a three-dimensional data structure.
  • Hyperspectral data can be regarded as a three-dimensional image, and one-dimensional spectral information is added in addition to the ordinary two-dimensional image.
  • Its spatial image describes the two-dimensional spatial characteristics of the earth's surface, and its spectral dimension reveals the characteristics of the spectral curve of each pixel of the image, thereby achieving the organic fusion of the remote sensing data image dimension and the spectral dimension information.
  • the obtaining a three-dimensional weight matrix of a hyperspectral image by using a support vector machine classifier includes:
  • Step 21 For the training sample set A, use the support vector machine method to perform model training to obtain a probability output model Model;
  • Step 22 Use the probability output model Model to perform category probability output on any test sample g to obtain a weighted probability that g belongs to each class;
  • Step 23 Repeat step 22 for all samples in the hyperspectral image to obtain a three-dimensional weight matrix of all samples in the hyperspectral image.
  • the hyperspectral image is H ∈ ℝ^{X×Y×Z}, where ℝ denotes the real numbers, X and Y are the spatial dimensions, and Z is the number of spectral bands of the hyperspectral image.
  • A ∈ ℝ^{Z×n} denotes the n samples in the training set.
  • the classification process is as follows:
  • repeating step 2) for all the samples in the hyperspectral image obtains the three-dimensional weight matrix of all samples, denoted W ∈ ℝ^{X×Y×C}.
  • performing the superpixel segmentation on the hyperspectral image to obtain K superpixel maps includes: using an entropy rate superpixel segmentation method (Entropy Rate Superpixel Segmentation, ERS) to perform superpixel segmentation on the hyperspectral image to obtain K superpixel maps.
  • common superpixel segmentation methods also include: TurboPixel, SLIC, NCut, Graph-based, Watershed (Marker-based Watershed), Meanshift, and so on.
  • step 41, examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, the mean of U_k over each row is computed to obtain the vector u, which is divided by the number of pixels in the superpixel, and every column vector of U_k is again set equal to u; step 42, after performing step 41 on all superpixels, a regular matrix U is obtained.
  • after the hyperspectral image has been segmented using the ERS superpixel segmentation method, a segmentation map with K superpixels {S_k, k = 1, 2, ..., K} is obtained. All the elements in the regular matrix U ∈ ℝ^{X×Y×C} are initialized to zero, and the classification matrix Z ∈ ℝ^{X×Y×C} is likewise initialized with all of its elements set to zero. U_k denotes the weight information corresponding to the k-th superpixel.
  • determining the category to which a sample belongs according to the regular matrix includes: incorporating the regular matrix into a classification matrix; and determining, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
  • the final classification result can be obtained.
  • the method includes: obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier; performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer; regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and determining a category to which a sample belongs according to the regular matrix.
  • FIG. 2 of the present invention provides a schematic diagram of a cascade fusion hyperspectral image classification method (Superpixel Regularized Cascade Fusion, SRCF) based on superpixel mapping.
  • the specific process is shown in Figure 3.
  • the method includes:
  • FIG. 4 is a schematic flowchart of another image fusion classification method according to another embodiment of the present invention.
  • another image fusion classification method provided by another embodiment of the present invention may include the following content:
  • for the training sample set A, use the support vector machine method to perform model training to obtain a probability output model Model;
  • the step size X can be equal to 40, 50, or 60; the possible values are not enumerated one by one here, nor are they limited.
  • the final classification result can be obtained. Examine the calculated classification matrix Z. For each sample, the category corresponding to the maximum value is the predicted category.
  • FIG. 5 is a schematic structural diagram of an image fusion classification according to an embodiment of the present invention.
  • an image fusion classification device 400 according to an embodiment of the present invention is provided.
  • the device 400 includes an acquisition unit 401, a segmentation unit 402, a processing unit 403, and a determination unit 404.
  • An obtaining unit 401 configured to obtain a three-dimensional weight matrix of a hyperspectral image by using a support vector machine classifier
  • a segmentation unit 402 configured to perform superpixel segmentation on the hyperspectral image to obtain K superpixel images, where K is a positive integer;
  • a processing unit 403, configured to regularize the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix;
  • a determining unit 404 is configured to determine a category to which a sample belongs according to the regular matrix.
  • the obtaining unit 401 is specifically configured to perform the steps described in 21-23: step 21, using the support vector machine method for model training on the training sample set A, to obtain a probability output model Model; step 22, using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class; step 23, repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all the samples in the hyperspectral image.
  • the segmentation unit 402 is specifically configured to perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel images.
  • step 42 after performing step 41 on all superpixels, a regular matrix U is obtained.
  • the determining unit 404 is specifically configured to integrate the regular matrix into a classification matrix; and determine that a maximum value corresponding to each sample in the classification matrix is a classification to which the sample belongs.
  • the obtaining unit 401, the dividing unit 402, the processing unit 403, and the determining unit 404 may be configured to execute the method described in any one of the foregoing embodiments.
  • an image fusion classification device 500 is provided.
  • the device 500 includes hardware such as a CPU 501, a memory 502, a bus 503, and a display 504.
  • the CPU 501 executes a server program stored in the memory 502 in advance, and the execution process specifically includes:
  • a category to which a sample belongs is determined according to the regular matrix.
  • the obtaining a three-dimensional weight matrix of a hyperspectral image by using a support vector machine classifier includes:
  • Step 21 For the training sample set A, use the support vector machine method to perform model training to obtain a probability output model Model;
  • Step 22 Use the probability output model Model to perform category probability output on any test sample g to obtain a weighted probability that g belongs to each class;
  • Step 23 Repeat step 22 for all samples in the hyperspectral image to obtain a three-dimensional weight matrix of all samples in the hyperspectral image.
  • performing superpixel segmentation on the hyperspectral image to obtain K superpixel images includes:
  • Entropy rate superpixel segmentation is used to perform superpixel segmentation on the hyperspectral image to obtain K superpixel images.
  • performing regularization on the three-dimensional weight matrix using the segmentation method of the superpixel map to obtain a regular matrix includes:
  • step 42 after performing step 41 on all superpixels, a regular matrix U is obtained.
  • determining the category to which the sample belongs according to the regular matrix includes:
  • the maximum value corresponding to each sample in the classification matrix is the classification to which the sample belongs.
  • in another embodiment, a storage medium is disclosed; the storage medium stores program code, and when the program code is executed, the method in the foregoing method embodiment is executed.
  • in another embodiment, a computer program product is disclosed; the computer program product includes program code, and when the program code is executed, the method in the foregoing method embodiment is executed.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the software product is stored in a storage medium and includes a plurality of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present invention.
  • the foregoing storage media include: USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and device for image fusion classification. The method includes: obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier (101); performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer (102); regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix (103); and determining, according to the regular matrix, the category to which a sample belongs (104). The method avoids the problem of estimating the number of superpixels and, by fusing in a cascaded manner the spatial structure information of ground objects contained in different superpixel maps, significantly improves the discriminative power of the features.

Description

Image fusion classification method and device
Technical Field
The present invention relates to the field of computers, and in particular to a method and device for image fusion classification.
Background Art
With the progress of image segmentation technology in the field of computer vision, superpixel extraction can provide a spatially homogeneous representation of the observed object, and thus offers an important means of describing ground-object information for hyperspectral image analysis.
Specifically, a superpixel is a homogeneous region containing multiple pixels. Since the distribution of ground objects often shows a certain regularity, superpixel-based segmentation of a hyperspectral image can better exploit the interrelationships among neighboring pixels. At present, the methods that apply superpixels to hyperspectral image classification fall into two main categories: pre-processing and post-processing. In pre-processing, superpixel information is mainly used to obtain spatial-spectral features that further guide the subsequent classification; in post-processing, the superpixel segmentation map is mainly used as an auxiliary decision-making means to fuse the classification results. However, all existing methods that use superpixels share a major flaw: the number of superpixels to extract is difficult to estimate accurately.
Summary of the Invention
Embodiments of the present invention provide a method and device for intelligent image fusion classification. By using the method provided by the present invention, the problem of estimating the number of superpixels can be avoided; further, the spatial structure information of ground objects contained in different superpixel maps is fused in a cascaded manner, which significantly improves the discriminative power of the features.
A first aspect of the present invention discloses a method for intelligent image fusion classification, the method including:
obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer;
regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and
determining, according to the regular matrix, the category to which a sample belongs.
Optionally, the obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier includes:
step 21: for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model;
step 22: using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class;
step 23: repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
Optionally, the performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps includes:
performing superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps.
Optionally, the regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix includes:
step 41: examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, computing the mean of U_k over each row to obtain the vector u, dividing it by the number of pixels in the superpixel, and again setting every column vector of U_k equal to u;
step 42: after step 41 has been performed on all superpixels, obtaining the regular matrix U.
Optionally, the determining, according to the regular matrix, the category to which a sample belongs includes:
incorporating the regular matrix into a classification matrix; and
determining, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
A second aspect of the present invention discloses a device for image fusion classification, the device including:
an obtaining unit configured to obtain a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
a segmentation unit configured to perform superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer;
a processing unit configured to regularize the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and
a determining unit configured to determine, according to the regular matrix, the category to which a sample belongs.
Optionally, the obtaining unit is specifically configured to perform the steps described in 21-23:
step 21: for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model;
step 22: using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class;
step 23: repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
Optionally,
the segmentation unit is specifically configured to perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps.
The processing unit is specifically configured to perform the steps described in 41-42:
step 41: examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, computing the mean of U_k over each row to obtain the vector u, dividing it by the number of pixels in the superpixel, and again setting every column vector of U_k equal to u;
step 42: after step 41 has been performed on all superpixels, obtaining the regular matrix U.
Optionally, the determining unit is specifically configured to incorporate the regular matrix into a classification matrix, and to determine, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
A third aspect of the present invention discloses a storage medium storing program code; when the program code is executed, the method of the first aspect is executed.
A fourth aspect of the present invention discloses a device for image fusion classification, the device including a processor and a transceiver, where the transceiving functions described in the second aspect can be implemented by the transceiver, and the logical functions described in the second aspect (that is, the specific functions of the logic units) can be implemented by the processor.
A fifth aspect of the present invention discloses a computer program product containing program code; when the program code is executed, the method of the first aspect is executed.
It can be seen that the embodiments provided by the present invention disclose a superpixel-mapping-based cascade fusion hyperspectral image classification method (Superpixel Regularized Cascade Fusion, SRCF). In the embodiments provided by the present invention, a support vector machine classifier (Support Vector Machine Classifier, SVM) is used to obtain an initial classification result for the hyperspectral image, and the weight of each sample belonging to every class is computed, thereby obtaining a weight matrix that contains the class membership of all samples; an entropy rate superpixel segmentation (Entropy Rate Superpixel Segmentation, ERS) method is then used to obtain superpixel maps of the hyperspectral image with different numbers of superpixels, and the weight matrix is regularized successively from the over-segmentation maps to the under-segmentation maps; finally, the category to which each sample belongs is determined according to the maximum weight, realizing ground-object classification. By using the method provided by the present invention, the problem of estimating the number of superpixels can be avoided; further, the spatial structure information of ground objects contained in different superpixel maps is fused in a cascaded manner, which significantly improves the discriminative power of the features.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image fusion classification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another image fusion classification according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of another image fusion classification method according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another image fusion classification method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image fusion classification device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the physical structure of another image fusion classification device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide a method and device for image fusion classification. The method includes: obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier; performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer; regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and determining, according to the regular matrix, the category to which a sample belongs. By using the method provided by the present invention, the problem of estimating the number of superpixels can be avoided; further, the spatial structure information of ground objects contained in different superpixel maps is fused in a cascaded manner, which significantly improves the discriminative power of the features.
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", and the like appearing in the specification, claims, and drawings of the present invention are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
First, it should be pointed out that the present invention is a hyperspectral remote sensing image classification technique and system based on superpixel mapping. Because a hyperspectral remote sensing image formed by a hyperspectral sensor over hundreds of bands contains rich radiometric, spatial, and spectral information about ground objects, the identification and classification of ground objects becomes more effective. To improve the classification accuracy of hyperspectral remote sensing images, the spatial distribution features and the spectral features of the ground objects are fused, and the ground objects are then classified.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image fusion classification method according to an embodiment of the present invention. As shown in FIG. 1, the image fusion classification method provided by an embodiment of the present invention includes the following content:
101. Obtain a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier.
It should be pointed out that the support vector machine (Support Vector Machine Classifier, SVM) was proposed for binary classification problems and has been successfully applied to function regression and one-class classification problems. The commonly used multi-class strategies include: (1) the one-versus-rest method, whose idea is to treat the samples of one class as one category and the samples of all remaining classes as another category, turning the problem into a two-class problem, and then to repeat the above step for the remaining classes; (2) the one-versus-one method, which considers only two classes of samples at a time in multi-class classification, i.e., designs one SVM model for every pair of classes, so that k(k-1)/2 SVM models are needed in total; (3) the SVM decision tree method, which is usually combined with a binary decision tree to form a multi-class recognizer.
In addition, it should be further pointed out that a hyperspectral image is acquired by an imaging spectrometer, which provides tens to hundreds of narrow spectral bands for each pixel and generates a complete, continuous spectral curve. This allows substances that cannot be detected in wide-band remote sensing to be detected in hyperspectral imagery.
Hyperspectral data can be represented as a hyperspectral data cube, which is a three-dimensional data structure. Hyperspectral data can be regarded as a three-dimensional image: in addition to the ordinary two-dimensional image, one spectral dimension is added. The spatial image describes the two-dimensional spatial characteristics of the Earth's surface, while the spectral dimension reveals the spectral curve characteristics of each pixel of the image, thereby achieving an organic fusion of the image dimension and the spectral dimension of the remote sensing data.
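For intuition, the data cube described above can be pictured as a three-dimensional array; the sketch below uses random data, and the size (X, Y, Z) = (145, 145, 200) is only an assumed example.

```python
import numpy as np

X, Y, Z = 145, 145, 200                # assumed spatial size and band count
H = np.random.rand(X, Y, Z)            # hyperspectral cube H ∈ R^{X×Y×Z}

spectral_curve = H[10, 20, :]          # continuous spectral curve of pixel (10, 20)
band_image = H[:, :, 50]               # two-dimensional spatial image of band 50
print(spectral_curve.shape, band_image.shape)   # (200,) (145, 145)
```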
It should be pointed out that the obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier includes:
step 21: for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model;
step 22: using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class;
step 23: repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
For example, suppose the hyperspectral image is H ∈ ℝ^{X×Y×Z}, where ℝ denotes the real numbers, X and Y are the spatial dimensions, and Z is the number of spectral bands of the hyperspectral image. Let A ∈ ℝ^{Z×n} denote the n samples in the training set, and let the image contain C ground-object classes. Then, for any test sample g ∈ ℝ^{Z×1}, the classification process is as follows:
1) for the training sample set A, perform model training using the probability-output support vector machine method to obtain a probability output model Model;
2) use the model Model to perform class probability output on any test sample g to obtain the weighted probabilities {P_c, c = 1, 2, ..., C} that g belongs to each class;
3) repeat step 2) for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples, denoted W ∈ ℝ^{X×Y×C}.
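A minimal sketch of steps 1)-3) is given below, assuming scikit-learn's probability-output SVC as the support vector machine and assuming the training samples are supplied as spectra with class labels; the function and variable names are illustrative only, not part of the claimed method.

```python
import numpy as np
from sklearn.svm import SVC

def build_weight_matrix(H, train_spectra, train_labels):
    """H: hyperspectral cube of shape (X, Y, Z); train_spectra: (n, Z) spectra of
    the training set A; train_labels: (n,) class indices in {0, ..., C-1}."""
    X_dim, Y_dim, Z_dim = H.shape
    # Step 1): train a probability-output SVM (Model) on the training set A.
    model = SVC(kernel="rbf", probability=True).fit(train_spectra, train_labels)
    # Steps 2)-3): class probability output for every pixel g of the image.
    pixels = H.reshape(-1, Z_dim)
    probs = model.predict_proba(pixels)           # weighted probabilities {P_c}
    C = probs.shape[1]
    return probs.reshape(X_dim, Y_dim, C)         # W ∈ R^{X×Y×C}
```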
102. Perform superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer.
It should be pointed out that the performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps includes: performing superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method (Entropy Rate Superpixel Segmentation, ERS) to obtain K superpixel maps.
Further, it should be pointed out that common superpixel segmentation methods also include TurboPixel, SLIC, NCut, graph-based segmentation, watershed (marker-based watershed), Meanshift, and so on.
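The sketch below generates the superpixel maps needed later in the cascade. Entropy rate superpixel segmentation (ERS) has no implementation in scikit-image, so SLIC is used here only as a stand-in, and the hyperspectral cube is first reduced to three principal components before segmentation; both choices are assumptions for illustration, not part of the method as claimed.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.decomposition import PCA

def superpixel_maps(H, K_values):
    """H: hyperspectral cube (X, Y, Z); K_values: iterable of superpixel counts.
    Returns one (X, Y) label map per requested number of superpixels."""
    X_dim, Y_dim, Z_dim = H.shape
    # Reduce the spectral dimension to 3 pseudo-bands before segmenting (assumption).
    pcs = PCA(n_components=3).fit_transform(H.reshape(-1, Z_dim))
    base = pcs.reshape(X_dim, Y_dim, 3)
    return {K: slic(base, n_segments=K, compactness=10.0,
                    start_label=0, convert2lab=False)
            for K in K_values}
```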
103. Regularize the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix.
It should be pointed out that the regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix includes: step 41, examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, the mean of U_k over each row is computed to obtain the vector u, which is divided by the number of pixels in the superpixel, and every column vector of U_k is again set equal to u; step 42, after step 41 has been performed on all superpixels, the regular matrix U is obtained.
For example, after the hyperspectral image has been segmented using the ERS superpixel segmentation method, a segmentation map with K superpixels {S_k, k = 1, 2, ..., K} is obtained. At the same time, all elements of the regular matrix U ∈ ℝ^{X×Y×C} are initialized to zero, and a classification matrix Z ∈ ℝ^{X×Y×C} is also initialized with all of its elements set to zero. For each superpixel S_k, U_k denotes the weight information corresponding to the k-th superpixel.
At this point, the superpixel S_k is examined:
(1) if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u;
(2) otherwise, the mean of U_k over each row is computed to obtain the vector u, which is divided by the number of pixels in the superpixel, and every column vector of U_k is again set equal to u;
(3) after step (2) has been performed on all superpixels, the regular matrix U is obtained; further, the regular matrix is incorporated into the classification matrix Z, i.e., Z = Z + U.
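A sketch of the regularization (1)-(3) for a single superpixel map follows. Two points are assumptions on my part: the per-superpixel weight block U_k is taken from the SVM weight matrix W restricted to the pixels of S_k, and the extra division by the number of pixels described in the text is folded into the row-wise mean; the training samples are supplied as a label map with -1 marking unlabeled pixels.

```python
import numpy as np

def regularize(W, seg, train_map):
    """W: (X, Y, C) weight matrix; seg: (X, Y) superpixel label map {S_k};
    train_map: (X, Y) class index of training pixels, -1 elsewhere.
    Returns the regular matrix U of shape (X, Y, C)."""
    C = W.shape[2]
    U = np.zeros_like(W)                      # all elements initialized to zero
    for k in np.unique(seg):
        mask = seg == k                       # pixels belonging to superpixel S_k
        classes = train_map[mask]
        classes = classes[classes >= 0]
        if classes.size == 1:                 # exactly one training sample, class c
            u = np.zeros(C)
            u[classes[0]] = 1.0               # u_c = 1, all other elements zero
        else:                                 # otherwise average the weights over S_k
            u = W[mask].mean(axis=0)
        U[mask] = u                           # every column vector of U_k set to u
    return U
```

Fusing one such matrix into the classification matrix is then simply Z = Z + U.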
104. Determine, according to the regular matrix, the category to which a sample belongs.
It should be pointed out that the determining, according to the regular matrix, the category to which a sample belongs includes: incorporating the regular matrix into a classification matrix; and determining, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
For example, the number of superpixels K may be reduced with a step size of p = 50, i.e., K = K - p, and the above steps are repeated until the number of superpixels K reaches 50. When the number of superpixels K reaches 50, the final classification result can be obtained: examining the computed classification matrix Z, for each sample the category corresponding to the maximum value is the predicted category.
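Putting the pieces together, the following sketch runs the whole cascade of steps 101-104, reusing the helper functions sketched above (build_weight_matrix, superpixel_maps, regularize). The step size p = 50 and the stopping value K = 50 follow the text; the initial number of superpixels K_init = 1000 is an assumption, since the embodiment does not state it.

```python
import numpy as np

def srcf_classify(H, train_spectra, train_labels, train_map,
                  K_init=1000, p=50, K_min=50):
    W = build_weight_matrix(H, train_spectra, train_labels)    # step 101
    K_values = list(range(K_init, K_min - 1, -p))              # over- to under-segmentation
    seg_maps = superpixel_maps(H, K_values)                    # step 102
    Z = np.zeros_like(W)                                       # classification matrix
    for K in K_values:                                         # cascade fusion
        U = regularize(W, seg_maps[K], train_map)              # step 103
        Z = Z + U                                              # Z = Z + U
    return Z.argmax(axis=2)                                    # step 104: predicted category
```

The returned map holds, for every pixel, the class index of the maximum value in Z, i.e., the predicted category.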
It can be seen that the solution of this embodiment discloses a method and device for image fusion classification. The method includes: obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier; performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer; regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and determining, according to the regular matrix, the category to which a sample belongs. By using the method provided by the present invention, the problem of estimating the number of superpixels can be avoided; further, the spatial structure information of ground objects contained in different superpixel maps is fused in a cascaded manner, which significantly improves the discriminative power of the features.
FIG. 2 of the present invention provides a schematic diagram of a superpixel-mapping-based cascade fusion hyperspectral image classification method (Superpixel Regularized Cascade Fusion, SRCF). The specific process is shown in FIG. 3, and the method includes:
201. Obtain a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
202. Perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps, where K is a positive integer;
203. Apply cascaded superpixel correction to the three-dimensional weight matrix using the superpixel maps to obtain a classification matrix;
204. Determine, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
It should be pointed out that, for explanations of the relevant steps of the method described in FIG. 3, reference may be made to the embodiment corresponding to FIG. 1.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of another image fusion classification method according to another embodiment of the present invention. As shown in FIG. 4, the image fusion classification method provided by another embodiment of the present invention may include the following content:
301. For the training sample set A, perform model training using the support vector machine method to obtain a probability output model Model;
302. Use the probability output model Model to perform class probability output on all samples in the hyperspectral image to obtain the weighted probability that each sample belongs to each class, and obtain the three-dimensional weight matrix of all samples in the hyperspectral image from these weighted probabilities;
303. Perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps, where K is a positive integer;
304. Examine each superpixel, determining whether it contains only one sample of the training set A belonging to class c;
if the superpixel S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, the mean of U_k over each row is computed to obtain the vector u, which is divided by the number of pixels in the superpixel, and every column vector of U_k is again set equal to u.
305. After step 304 has been performed on all superpixels, the regular matrix U is obtained; the regular matrix is incorporated into the classification matrix Z, i.e., Z = Z + U;
306. Reduce the number of superpixels K with a step size of p = X, i.e., K = K - p, and repeat the steps in step 305 above until the number of superpixels K reaches 50;
here, X may be equal to 40, 50, or 60; the possible values are not enumerated one by one here, nor are they limited.
307. When the number of superpixels K reaches 50, the final classification result can be obtained: examining the computed classification matrix Z, for each sample the category corresponding to the maximum value is the predicted category.
It can be seen that, in the solution of this embodiment, by using the method provided by the present invention, the problem of estimating the number of superpixels can be avoided; further, the spatial structure information of ground objects contained in different superpixel maps is fused in a cascaded manner, which significantly improves the discriminative power of the features.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an image fusion classification device according to an embodiment of the present invention. As shown in FIG. 5, an image fusion classification device 400 according to an embodiment of the present invention includes an obtaining unit 401, a segmentation unit 402, a processing unit 403, and a determining unit 404;
the obtaining unit 401 is configured to obtain a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
the segmentation unit 402 is configured to perform superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer;
the processing unit 403 is configured to regularize the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix;
the determining unit 404 is configured to determine, according to the regular matrix, the category to which a sample belongs.
The obtaining unit 401 is specifically configured to perform the steps described in 21-23: step 21, for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model; step 22, using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class; step 23, repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
The segmentation unit 402 is specifically configured to perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps.
The processing unit 403 is specifically configured to perform the steps described in 41-42: step 41, examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, the mean of U_k over each row is computed to obtain the vector u, which is divided by the number of pixels in the superpixel, and every column vector of U_k is again set equal to u; step 42, after step 41 has been performed on all superpixels, the regular matrix U is obtained.
The determining unit 404 is specifically configured to incorporate the regular matrix into a classification matrix, and to determine, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
The obtaining unit 401, the segmentation unit 402, the processing unit 403, and the determining unit 404 may be configured to execute the method described in any one of the foregoing embodiments; for a detailed description, see the description of the method in Embodiment 1, which is not repeated here.
Referring to FIG. 6, in another embodiment of the present invention, an image fusion classification device 500 is provided. The device 500 includes hardware such as a CPU 501, a memory 502, a bus 503, and a display 504.
The CPU 501 executes a server program stored in advance in the memory 502, and the execution process specifically includes:
obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, where K is a positive integer;
regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and
determining, according to the regular matrix, the category to which a sample belongs.
Optionally, the obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier includes:
step 21: for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model;
step 22: using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class;
step 23: repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
Optionally, the performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps includes:
performing superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps.
Optionally, the regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix includes:
step 41: examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, computing the mean of U_k over each row to obtain the vector u, dividing it by the number of pixels in the superpixel, and again setting every column vector of U_k equal to u;
step 42: after step 41 has been performed on all superpixels, obtaining the regular matrix U.
Optionally, the determining, according to the regular matrix, the category to which a sample belongs includes:
incorporating the regular matrix into a classification matrix; and
determining, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
In another embodiment of the present invention, a storage medium is disclosed; the storage medium stores program code, and when the program code is executed, the method in the foregoing method embodiment is executed.
In another embodiment of the present invention, a computer program product is disclosed; the computer program product contains program code, and when the program code is executed, the method in the foregoing method embodiment is executed.
In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are only illustrative; for instance, the division of the units is only a logical function division, and in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method described in each embodiment of the present invention. The foregoing storage media include various media that can store program code, such as USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, and optical disks.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A method for intelligent image fusion classification, characterized in that the method comprises:
    obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
    performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps, wherein K is a positive integer;
    regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and
    determining, according to the regular matrix, a category to which a sample belongs.
  2. The method according to claim 1, characterized in that the obtaining a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier comprises:
    step 21: for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model;
    step 22: using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class;
    step 23: repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
  3. The method according to claim 2, characterized in that the performing superpixel segmentation on the hyperspectral image to obtain K superpixel maps comprises:
    performing superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps.
  4. The method according to claim 3, characterized in that the regularizing the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix comprises:
    step 41: examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, computing the mean of U_k over each row to obtain the vector u, dividing it by the number of pixels in the superpixel, and again setting every column vector of U_k equal to u;
    step 42: after step 41 has been performed on all superpixels, obtaining the regular matrix U.
  5. The method according to claim 4, characterized in that the determining, according to the regular matrix, the category to which the sample belongs comprises:
    incorporating the regular matrix into a classification matrix; and
    determining, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
  6. A device for image fusion classification, characterized in that the device comprises:
    an obtaining unit configured to obtain a three-dimensional weight matrix of a hyperspectral image using a support vector machine classifier;
    a segmentation unit configured to perform superpixel segmentation on the hyperspectral image to obtain K superpixel maps, wherein K is a positive integer;
    a processing unit configured to regularize the three-dimensional weight matrix using the segmentation of the superpixel maps to obtain a regular matrix; and
    a determining unit configured to determine, according to the regular matrix, a category to which a sample belongs.
  7. The device according to claim 6, characterized in that the obtaining unit is specifically configured to perform the steps described in 21-23:
    step 21: for the training sample set A, performing model training using the support vector machine method to obtain a probability output model Model;
    step 22: using the probability output model Model to perform class probability output on any test sample g to obtain the weighted probability that g belongs to each class;
    step 23: repeating step 22 for all samples in the hyperspectral image to obtain the three-dimensional weight matrix of all samples in the hyperspectral image.
  8. The device according to claim 7, characterized in that
    the segmentation unit is specifically configured to perform superpixel segmentation on the hyperspectral image using an entropy rate superpixel segmentation method to obtain K superpixel maps.
  9. The device according to claim 8, characterized in that the processing unit is specifically configured to perform the steps described in 41-42:
    step 41: examining each superpixel S_k; if S_k contains only one sample of the training set A, belonging to class c, then in the vector u ∈ ℝ^{C×1}, u_c = 1 and all other elements are zero, and every column vector of U_k is set equal to u; otherwise, computing the mean of U_k over each row to obtain the vector u, dividing it by the number of pixels in the superpixel, and again setting every column vector of U_k equal to u;
    step 42: after step 41 has been performed on all superpixels, obtaining the regular matrix U.
  10. The device according to claim 9, characterized in that the determining unit is specifically configured to incorporate the regular matrix into a classification matrix, and to determine, for each sample, the category corresponding to the maximum value in the classification matrix as the category to which the sample belongs.
PCT/CN2018/110916 2018-09-29 2018-10-19 Image fusion classification method and device WO2020062360A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/209,120 US11586863B2 (en) 2018-09-29 2021-03-22 Image classification method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811146961.8 2018-09-29
CN201811146961.8A CN109472199B (zh) 2018-09-29 2018-09-29 Image fusion classification method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/209,120 Continuation US11586863B2 (en) 2018-09-29 2021-03-22 Image classification method and device

Publications (1)

Publication Number Publication Date
WO2020062360A1 true WO2020062360A1 (zh) 2020-04-02

Family

ID=65663160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110916 WO2020062360A1 (zh) 2018-09-29 2018-10-19 Image fusion classification method and device

Country Status (3)

Country Link
US (1) US11586863B2 (zh)
CN (1) CN109472199B (zh)
WO (1) WO2020062360A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766335A (zh) * 2021-01-08 2021-05-07 四川九洲北斗导航与位置服务有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113255698A (zh) * 2021-06-03 2021-08-13 青岛星科瑞升信息科技有限公司 Superpixel-level adaptive SSA method for spatial feature extraction of hyperspectral images
CN113963207A (zh) * 2021-10-21 2022-01-21 江南大学 Hyperspectral image classification method based on a spatial-spectral feature-guided fusion network

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11341607B2 (en) * 2019-06-07 2022-05-24 Texas Instruments Incorporated Enhanced rendering of surround view images
CN110276408B (zh) * 2019-06-27 2022-11-22 腾讯科技(深圳)有限公司 3D image classification method, apparatus, device, and storage medium
CN113469011A (zh) * 2021-07-31 2021-10-01 国网上海市电力公司 Planned-land ground-object recognition method and device based on a remote sensing image classification algorithm
CN115830424B (zh) * 2023-02-09 2023-04-28 深圳酷源数联科技有限公司 Mine waste recognition method, apparatus, device, and storage medium based on fused images
CN116167955A (zh) * 2023-02-24 2023-05-26 苏州大学 Hyperspectral and lidar image fusion method and system for the remote sensing field
CN116188879B (zh) * 2023-04-27 2023-11-28 广州医思信息科技有限公司 Image classification and image classification model training method, apparatus, device, and medium
CN117314813B (zh) * 2023-11-30 2024-02-13 奥谱天成(湖南)信息科技有限公司 Hyperspectral image band fusion method, system, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110170783A1 (en) * 2010-01-08 2011-07-14 Sumitomo Electric Industries, Ltd. Equipment and method for analyzing image data
CN105023239A (zh) * 2015-08-18 2015-11-04 西安电子科技大学 Hyperspectral data dimensionality reduction method based on superpixels and maximum margin distribution
CN106503739A (zh) * 2016-10-31 2017-03-15 中国地质大学(武汉) SVM classification method and system for hyperspectral remote sensing images combining spectral and texture features
CN107844751A (zh) * 2017-10-19 2018-03-27 陕西师范大学 Classification method for hyperspectral remote sensing images using guided filtering and long short-term memory neural networks
CN108460326A (zh) * 2018-01-10 2018-08-28 华中科技大学 Semi-supervised classification method for hyperspectral images based on sparse representation graphs

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080008372A1 (en) * 2006-07-07 2008-01-10 General Electric Company A method and system for reducing artifacts in a tomosynthesis imaging system
US8842937B2 (en) * 2011-11-22 2014-09-23 Raytheon Company Spectral image dimensionality reduction system and method
CN106469316B (zh) * 2016-09-07 2020-02-21 深圳大学 Hyperspectral image classification method and system based on superpixel-level information fusion
CN108009559B (zh) * 2016-11-02 2021-12-24 哈尔滨工业大学 Hyperspectral data classification method based on joint spatial-spectral information
US10635927B2 (en) * 2017-03-06 2020-04-28 Honda Motor Co., Ltd. Systems for performing semantic segmentation and methods thereof
CN107590515B (zh) * 2017-09-14 2020-08-14 西安电子科技大学 Hyperspectral image classification method using an autoencoder based on entropy rate superpixel segmentation
CN108734211B (zh) * 2018-05-17 2019-12-24 腾讯科技(深圳)有限公司 Image processing method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110170783A1 (en) * 2010-01-08 2011-07-14 Sumitomo Electric Industries, Ltd. Equipment and method for analyzing image data
CN105023239A (zh) * 2015-08-18 2015-11-04 西安电子科技大学 Hyperspectral data dimensionality reduction method based on superpixels and maximum margin distribution
CN106503739A (zh) * 2016-10-31 2017-03-15 中国地质大学(武汉) SVM classification method and system for hyperspectral remote sensing images combining spectral and texture features
CN107844751A (zh) * 2017-10-19 2018-03-27 陕西师范大学 Classification method for hyperspectral remote sensing images using guided filtering and long short-term memory neural networks
CN108460326A (zh) * 2018-01-10 2018-08-28 华中科技大学 Semi-supervised classification method for hyperspectral images based on sparse representation graphs

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766335A (zh) * 2021-01-08 2021-05-07 四川九洲北斗导航与位置服务有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112766335B (zh) * 2021-01-08 2023-12-01 四川九洲北斗导航与位置服务有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113255698A (zh) * 2021-06-03 2021-08-13 青岛星科瑞升信息科技有限公司 Superpixel-level adaptive SSA method for spatial feature extraction of hyperspectral images
CN113963207A (zh) * 2021-10-21 2022-01-21 江南大学 Hyperspectral image classification method based on a spatial-spectral feature-guided fusion network
CN113963207B (zh) * 2021-10-21 2024-03-29 江南大学 Hyperspectral image classification method based on a spatial-spectral feature-guided fusion network

Also Published As

Publication number Publication date
US20210209426A1 (en) 2021-07-08
CN109472199A (zh) 2019-03-15
CN109472199B (zh) 2022-02-22
US11586863B2 (en) 2023-02-21

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18934531

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 09.07.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18934531

Country of ref document: EP

Kind code of ref document: A1