CN110766655A - Hyperspectral image significance analysis method based on abundance - Google Patents

Hyperspectral image significance analysis method based on abundance

Info

Publication number
CN110766655A
Authority
CN
China
Prior art keywords
pixel
abundance
hyperspectral image
model
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910884113.5A
Other languages
Chinese (zh)
Inventor
罗晓燕
申智琪
薛瑞
尹继豪
李磊
吴立民
龙亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing University of Aeronautics and Astronautics
Beijing Institute of Space Research Mechanical and Electricity
Original Assignee
Beijing University of Aeronautics and Astronautics
Beijing Institute of Space Research Mechanical and Electricity
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Aeronautics and Astronautics and Beijing Institute of Space Research Mechanical and Electricity
Priority to CN201910884113.5A
Publication of CN110766655A
Legal status (current): Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image significance analysis method based on abundance, and belongs to the technical field of remote sensing image processing. The implementation comprises the following steps: 1) constructing an autoencoder model comprising an encoding part and a decoding part, in which the last layer of the encoding part outputs the extracted abundance vector and the weights of the decoding part represent the extracted endmembers; 2) estimating the abundance of each pixel using the encoding part of the trained model; 3) computing the purity of each pixel from its abundance, ranking the pixels by purity, and selecting the top t percent of pixels with the highest purity as salient pixels. Targeting the low spatial resolution of hyperspectral images and the resulting severe mixed-pixel phenomenon, the invention provides an abundance-based hyperspectral image saliency analysis algorithm that can accurately detect the salient regions of a hyperspectral image.

Description

Hyperspectral image significance analysis method based on abundance
Technical Field
The invention relates to a hyperspectral image significance analysis method based on abundance, belongs to the field of remote sensing image processing, and is particularly directed at hyperspectral image saliency analysis and target detection applications.
Background
A hyperspectral image contains hundreds of contiguous narrow bands spanning the visible to infrared spectrum. This rich spectral information can accurately characterize materials and objects, but it also increases the difficulty of information processing. At the same time, a hyperspectral image carries substantial spatial information; for example, the Indian Pines data set frequently used in experiments has a spatial size of 145 × 145 pixels and 220 spectral bands, so for the same coverage area and ground resolution its data volume is far larger than that of a traditional spectral image. To reduce redundant information, many downstream hyperspectral applications operate on extracted regions of interest (ROIs), such as ROI-based hyperspectral compression algorithms and ROI-based automatic target recognition and classification methods. Therefore, effectively extracting regions of interest from a hyperspectral image is of great significance for subsequent hyperspectral applications.
In recent years, many researchers have extracted ROIs from images using saliency detection. In computer vision, saliency is generally defined as the region of primary interest to the human eye, or the region that stands out relative to its surroundings. Early work on image saliency was mainly inspired by the neural structure of the primate visual system; Itti et al. realized shifts of visual attention over an image by building a saliency map, which remains the most representative work in the field. At present, saliency detection for hyperspectral images is still mostly inspired by early natural-image saliency work and measures saliency by contrast; no saliency selection method designed specifically for hyperspectral images has yet been established.
Therefore, how to select the most effective and most salient regions in a way that exploits the characteristics of hyperspectral images remains a challenging problem. Targeting the low spatial resolution and severe mixed-pixel phenomenon of hyperspectral images, the invention provides an abundance-based hyperspectral image saliency analysis algorithm.
Disclosure of Invention
Technical problem to be solved
In view of the fact that no saliency extraction method designed specifically for hyperspectral images has yet been established, the invention provides a hyperspectral image saliency analysis algorithm based on pixel abundance. By exploiting the intrinsic characteristics of hyperspectral images, the method can extract the salient regions of a hyperspectral image relatively accurately, which is of great practical value.
(II) technical scheme
A hyperspectral image significance analysis method based on abundance specifically comprises the following steps:
Step 1: construct an autoencoder model for unmixing. The model comprises an encoding part and a decoding part; the encoding part is a multi-layer network with nonlinear activation layers, and the decoding part is a single linear layer. The last layer of the encoding part outputs the abundance vector, and the weights of the decoding part represent the extracted endmember spectra;
Step 2: estimate the abundance of each pixel using the encoding part of the trained model by feeding all pixels of the hyperspectral image into the encoding part, thereby obtaining the abundance fractions of each pixel;
Step 3: compute a purity index for each pixel from the obtained abundance information, and select the pixels with high purity as the salient pixels.
(III) advantageous effects
Most existing hyperspectral image saliency analysis algorithms are inspired by natural-image saliency detection and measure saliency by contrast. However, hyperspectral images differ greatly from natural images, and contrast-based methods are largely unsuitable for them. Targeting the inherent characteristics of hyperspectral images, the invention designs a saliency analysis method based on pixel abundance that is better suited to hyperspectral imagery.
Drawings
FIG. 1: a flow chart of a hyperspectral image significance analysis method based on abundance;
FIG. 2: recall at different values of t
FIG. 3: visualizing saliency detection results
Detailed Description
For a better understanding of the method of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings. The overall implementation flow of the invention is shown in FIG. 1, and the implementation details of each part are as follows:
Step 1: construct an autoencoder model for unmixing. The model comprises an encoding part and a decoding part; the encoding part is a multi-layer network with nonlinear activation functions, and the decoding part is a single linear layer. The last layer of the encoding part outputs the extracted abundance vector, and the weights of the decoding part represent the extracted endmembers;
Step 1-1: construct the encoding part of the model. Excluding the input layer, the encoding part has six layers; the first four are hidden layers, each followed by the nonlinear ReLU activation function given in equation (1).
g(z)=max{0,z} (1)
To train the constructed network effectively, the sixth layer is set as a batch normalization (BN) layer. Specifically, it pushes the distribution of the layer's activations toward a standard normal distribution with mean 0 and variance 1, as shown in equation (2).
z_{i+1} = BN(z_i) = γ·z_i + β   (2)
where z_i and z_{i+1} denote the input and output of the BN layer, and γ and β are learnable parameters. The seventh layer uses a soft-threshold ReLU activation function, as shown in equation (3), where the dynamic threshold α is a learnable parameter.
g(z)=max(0,z-α) (3)
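As an illustration of step 1-1, the sketch below shows one way the encoding part could be realized in PyTorch. The framework, the hidden-layer widths, and the extra linear layer projecting down to the number of endmembers are assumptions made here for illustration; only the four ReLU hidden layers, the BN layer, and the soft-threshold ReLU of equations (1)-(3) are specified by the text.

```python
import torch
import torch.nn as nn

class SoftThresholdReLU(nn.Module):
    """Equation (3): g(z) = max(0, z - alpha), with a learnable dynamic threshold alpha."""
    def __init__(self, num_features):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_features))

    def forward(self, z):
        return torch.clamp(z - self.alpha, min=0.0)

class Encoder(nn.Module):
    """Encoding part of the unmixing autoencoder: hidden layers with ReLU (eq. (1)),
    a batch normalization layer (eq. (2)), and a soft-threshold ReLU (eq. (3))
    whose output is taken as the abundance vector of the input pixel spectrum."""
    def __init__(self, num_bands, num_endmembers, hidden=(256, 128, 64, 32)):
        super().__init__()
        layers, in_dim = [], num_bands
        for width in hidden:                                 # four hidden layers, each followed by ReLU
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers += [nn.Linear(in_dim, num_endmembers)]        # projection to the endmember dimension (assumed)
        layers += [nn.BatchNorm1d(num_endmembers)]           # BN layer; gamma and beta are learnable
        layers += [SoftThresholdReLU(num_endmembers)]        # soft-threshold ReLU with learnable alpha
        self.net = nn.Sequential(*layers)

    def forward(self, x):        # x: (batch, num_bands) pixel spectra
        return self.net(x)       # (batch, num_endmembers) abundance vectors
```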
Step 1-2: the decoding portion of the model is constructed. The decoding part consists of a layer of linear functions, and the weights of this layer represent the extracted end-members.
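Continuing the sketch above, the decoding part can be written as a single bias-free linear layer so that reconstruction is a pure linear mixture and the weight matrix can be read directly as the endmember spectra (one endmember per column). The absence of a bias term is an assumption; the text only states that the decoder is one layer of linear functions.

```python
class Decoder(nn.Module):
    """Decoding part: a single linear layer whose weights hold the extracted endmembers."""
    def __init__(self, num_bands, num_endmembers):
        super().__init__()
        # bias-free so that x_hat = sum_j a_j * endmember_j (linear mixing model)
        self.linear = nn.Linear(num_endmembers, num_bands, bias=False)

    def forward(self, abundances):
        return self.linear(abundances)

    @property
    def endmembers(self):
        # weight shape: (num_bands, num_endmembers); column j is the spectrum of endmember j
        return self.linear.weight.detach()

class UnmixingAutoencoder(nn.Module):
    """Encoder and decoder assembled into the unmixing autoencoder."""
    def __init__(self, num_bands, num_endmembers):
        super().__init__()
        self.encoder = Encoder(num_bands, num_endmembers)
        self.decoder = Decoder(num_bands, num_endmembers)

    def forward(self, x):
        abundances = self.encoder(x)
        return self.decoder(abundances), abundances
```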
Step 1-3: feed all pixel spectra of the hyperspectral image into the designed network to optimize the network parameters.
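A minimal training loop for step 1-3 might look as follows; the reconstruction loss (mean squared error), the Adam optimizer, and the epoch and batch settings are not specified in the text and are assumptions used only to make the sketch runnable.

```python
import numpy as np
from torch.utils.data import DataLoader, TensorDataset

def train_unmixing_autoencoder(cube, num_endmembers, epochs=100, lr=1e-3, batch_size=256):
    """cube: numpy array of shape (H, W, L) holding the hyperspectral image."""
    H, W, L = cube.shape
    pixels = torch.from_numpy(cube.reshape(-1, L).astype(np.float32))
    # drop_last avoids a size-1 batch, which BatchNorm1d cannot handle in training mode
    loader = DataLoader(TensorDataset(pixels), batch_size=batch_size, shuffle=True, drop_last=True)

    model = UnmixingAutoencoder(num_bands=L, num_endmembers=num_endmembers)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()   # reconstruction loss (assumed)

    model.train()
    for _ in range(epochs):
        for (batch,) in loader:
            optimizer.zero_grad()
            reconstruction, _ = model(batch)
            loss = loss_fn(reconstruction, batch)
            loss.backward()
            optimizer.step()
    return model
```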
Step 2: estimate the abundance of each pixel using the encoding part of the trained autoencoder. Specifically, all pixels of the hyperspectral image are fed into the encoding part, and the output of its last layer gives the abundance fractions of each pixel;
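Under the same assumptions, step 2 reduces to a single forward pass of all pixel spectra through the trained encoder:

```python
@torch.no_grad()
def estimate_abundances(model, cube):
    """Return an (H, W, R) abundance map for an (H, W, L) hyperspectral cube."""
    H, W, L = cube.shape
    pixels = torch.from_numpy(cube.reshape(-1, L).astype(np.float32))
    model.eval()                         # BN uses its running statistics at inference time
    abundances = model.encoder(pixels)   # output of the last layer of the encoding part
    return abundances.reshape(H, W, -1).cpu().numpy()
```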
Step 3: compute a purity index for each pixel from the obtained abundance information, and select the pixels with high purity as the salient pixels. Because a hyperspectral image has low spatial resolution, it contains a large number of mixed pixels. These mixed pixels, however, can be represented as linear or nonlinear combinations of pure pixels. The pure pixels can therefore be regarded as carrying the main information of the hyperspectral image, i.e. as the salient pixels.
Step 3-1: compute the purity of each pixel. To find salient pixels in the hyperspectral image, and since the abundance values can be normalized to lie between zero and one, the purity of each pixel can be measured with a p-norm, as shown in equation (4).
S_i = ||a_i||_p = (Σ_j a_{ij}^p)^{1/p},  p > 1   (4)
where a_i denotes the abundance vector of pixel i. Since the abundances of each pixel in the hyperspectral image sum to 1, a larger S_i indicates a purer pixel.
Step 3-2: sort the pixels by purity and select the top t percent as salient pixels.
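Steps 3-1 and 3-2 can be sketched as below using the p-norm purity of equation (4). The choice p = 2 and the re-normalization of each abundance vector to sum to one are assumptions made for illustration; because the abundances of a pixel sum to 1, any p-norm with p > 1 is largest when the abundance is concentrated in a single endmember, so larger values indicate purer pixels.

```python
import numpy as np

def select_salient_pixels(abundance_map, t=10, p=2):
    """Rank pixels by p-norm purity (equation (4)) and return a boolean (H, W) mask
    marking the top t percent of pixels as salient."""
    H, W, R = abundance_map.shape
    a = abundance_map.reshape(-1, R)
    a = a / (a.sum(axis=1, keepdims=True) + 1e-12)   # enforce sum-to-one abundances
    purity = np.linalg.norm(a, ord=p, axis=1)        # S_i = ||a_i||_p

    k = max(1, int(len(purity) * t / 100.0))
    threshold = np.sort(purity)[-k]                  # purity of the k-th purest pixel
    return (purity >= threshold).reshape(H, W)

# Example: with p = 2, a pure pixel (1, 0, 0) has purity 1.0 while a fully mixed
# pixel (1/3, 1/3, 1/3) has purity ~0.577, so pure pixels are ranked first.
```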
Examples
This example uses two widely used hyperspectral data sets, Indian Pines and Pavia University. Indian Pines consists of 145 × 145 pixels and 224 spectral bands; in the experiments, a 200-band version with the water-absorption and noisy bands removed was used. Pavia University consists of 103 spectral bands and 610 × 340 pixels.
To demonstrate the saliency detection results of the invention on real hyperspectral data, three comparison methods are selected: the classical image saliency detection methods CAS and NLV, and the spectral saliency detection method SS. Since there is no hyperspectral data set dedicated to saliency detection, the classification ground-truth map is treated as the saliency target map; as shown in FIG. 3(a), colored regions represent target regions and white regions represent the background. For a quantitative comparison of the saliency detection results of the different methods, FIG. 2 shows the recall (the proportion of target pixels covered by the detected salient pixels) at different values of t. To display the saliency detection results more intuitively, FIG. 3 visualizes the results obtained by the different methods, taking Indian Pines as an example: the first row shows the grayscale saliency maps, and the second row shows the binarized saliency maps when t is 50 and 70. The results show that the method can effectively capture the target pixels in a hyperspectral image while preserving the structural information of the salient regions.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A hyperspectral image significance analysis method based on abundance, characterized by: constructing an autoencoder model for unmixing, wherein the model comprises an encoding part and a decoding part, the last layer of the encoding part outputs the extracted abundance vector, and the weights of the decoding part represent the extracted endmembers; calculating the abundance of each pixel using the encoding part of the trained model; and calculating the purity of each pixel from the obtained abundances and selecting the pixels with high purity as the salient pixels.
2. A method of constructing the autoencoder model for unmixing, the model comprising an encoding part and a decoding part, the last layer of the encoding part outputting the extracted abundance vector and the weights of the decoding part representing the extracted endmembers, the method comprising the following steps:
21) constructing the encoding part of the model: excluding the input layer, the encoding part has six layers; the first four are hidden layers, each followed by the nonlinear ReLU activation function shown in equation (1).
g(z)=max{0,z} (1)
To train the constructed network effectively, the sixth layer is set as a batch normalization (BN) layer, which pushes the distribution of the layer's activations toward a standard normal distribution with mean 0 and variance 1, as shown in equation (2).
z_{i+1} = BN(z_i) = γ·z_i + β   (2)
where z_i and z_{i+1} denote the input and output of the BN layer, and γ and β are learnable parameters; the seventh layer uses a soft-threshold ReLU activation function with a learnable dynamic threshold α, as shown in equation (3).
g(z)=max(0,z-α) (3)
22) constructing the decoding part of the model: the decoding part consists of a single linear layer, and the weights of this layer represent the extracted endmembers;
23) feeding all spectral curves of the hyperspectral image into the designed network to optimize the network parameters.
3. The method of claim 1, wherein the abundance fraction of each pixel is calculated by inputting all spectral curves in the hyperspectral image into the coding portion of the model.
4. The method according to claim 1, wherein calculating the purity index of each pixel from the acquired abundance information and selecting the pixels with higher purity as the salient pixels comprises the following steps:
41) calculating the purity of each pixel: to find salient pixels in the hyperspectral image, and since the abundance values can be normalized to lie between zero and one, the purity of each pixel is measured with a p-norm, as shown in equation (4).
S_i = ||a_i||_p = (Σ_j a_{ij}^p)^{1/p},  p > 1   (4)
where a_i denotes the abundance vector of pixel i; since the abundances of each pixel in the hyperspectral image sum to 1, a larger S_i indicates a purer pixel;
42) sorting the pixels by the purity index and selecting the top t percent as salient pixels.
CN201910884113.5A 2019-09-19 2019-09-19 Hyperspectral image significance analysis method based on abundance Pending CN110766655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910884113.5A CN110766655A (en) 2019-09-19 2019-09-19 Hyperspectral image significance analysis method based on abundance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910884113.5A CN110766655A (en) 2019-09-19 2019-09-19 Hyperspectral image significance analysis method based on abundance

Publications (1)

Publication Number Publication Date
CN110766655A true CN110766655A (en) 2020-02-07

Family

ID=69330078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910884113.5A Pending CN110766655A (en) 2019-09-19 2019-09-19 Hyperspectral image significance analysis method based on abundance

Country Status (1)

Country Link
CN (1) CN110766655A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560960A (en) * 2020-12-16 2021-03-26 北京影谱科技股份有限公司 Hyperspectral image classification method and device and computing equipment
CN112699838A (en) * 2021-01-13 2021-04-23 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692125A (en) * 2009-09-10 2010-04-07 复旦大学 Fisher judged null space based method for decomposing mixed pixels of high-spectrum remote sensing image
CN104268856A (en) * 2014-09-15 2015-01-07 西安电子科技大学 Method for extracting pixel purity index based on end member of image processor
CN104463224A (en) * 2014-12-24 2015-03-25 武汉大学 Hyperspectral image demixing method and system based on abundance significance analysis

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692125A (en) * 2009-09-10 2010-04-07 复旦大学 Fisher judged null space based method for decomposing mixed pixels of high-spectrum remote sensing image
CN104268856A (en) * 2014-09-15 2015-01-07 西安电子科技大学 Method for extracting pixel purity index based on end member of image processor
CN104463224A (en) * 2014-12-24 2015-03-25 武汉大学 Hyperspectral image demixing method and system based on abundance significance analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIQI SHEN et al.: "LOOK FOR SALIENCY IN HYPERSPECTRAL IMAGES", IGARSS 2019 *
王晔琳: "Hyperspectral Unmixing Based on Deep Learning" (基于深度学习的高光谱解混), China Excellent Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology Series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560960A (en) * 2020-12-16 2021-03-26 北京影谱科技股份有限公司 Hyperspectral image classification method and device and computing equipment
CN112699838A (en) * 2021-01-13 2021-04-23 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting
CN112699838B (en) * 2021-01-13 2022-06-07 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting

Similar Documents

Publication Publication Date Title
Liang et al. Material based salient object detection from hyperspectral images
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
Kekre et al. Improved texture feature based image retrieval using Kekre’s fast codebook generation algorithm
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN105528595A (en) Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
WO2018076138A1 (en) Target detection method and apparatus based on large-scale high-resolution hyper-spectral image
CN108829711B (en) Image retrieval method based on multi-feature fusion
Le Moan et al. Saliency for spectral image analysis
CN110766708B (en) Image comparison method based on contour similarity
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
CN106373146A (en) Target tracking method based on fuzzy learning
Castrodad et al. Discriminative sparse representations in hyperspectral imagery
CN106529472B (en) Object detection method and device based on large scale high-resolution high spectrum image
CN110766655A (en) Hyperspectral image significance analysis method based on abundance
Kiadtikornthaweeyot et al. Region of interest detection based on histogram segmentation for satellite image
CN114926826A (en) Scene text detection system
CN117456376A (en) Remote sensing satellite image target detection method based on deep learning
CN109241932A (en) A kind of thermal infrared human motion recognition method based on movement variogram phase property
CN106022226B (en) A kind of pedestrian based on multi-direction multichannel strip structure discrimination method again
CN108491888B (en) Environmental monitoring hyperspectral data spectrum section selection method based on morphological analysis
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image
CN112800968B (en) HOG blocking-based feature histogram fusion method for identifying identity of pigs in drinking area
CN114463379A (en) Dynamic capturing method and device for video key points
Shen et al. Look for saliency in hyperspectral images
CN110647844A (en) Shooting and identifying method for articles for children

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200207