CN111160300B - Deep learning hyperspectral image saliency detection algorithm combined with global prior - Google Patents

Deep learning hyperspectral image saliency detection algorithm combined with global prior

Info

Publication number
CN111160300B
CN111160300B (application CN201911419512.0A)
Authority
CN
China
Prior art keywords
image
pixel
spectral
hyperspectral image
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911419512.0A
Other languages
Chinese (zh)
Other versions
CN111160300A (en)
Inventor
许廷发
郝建华
徐畅
余越
黄晨
潘晨光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Chongqing Innovation Center of Beijing University of Technology
Original Assignee
Beijing Institute of Technology BIT
Chongqing Innovation Center of Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT, Chongqing Innovation Center of Beijing University of Technology filed Critical Beijing Institute of Technology BIT
Priority to CN201911419512.0A priority Critical patent/CN111160300B/en
Publication of CN111160300A publication Critical patent/CN111160300A/en
Application granted granted Critical
Publication of CN111160300B publication Critical patent/CN111160300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of hyperspectral image salient object detection and discloses a deep learning hyperspectral image saliency detection algorithm combined with a global prior. First, a spectral gradient image is computed from the hyperspectral image, superpixel segmentation is performed on the spectral gradient image, and a spectral angular distance feature map is calculated for each superpixel to serve as a global prior map. VGG16 is adopted as the basic network structure, the global prior map and the segmented image are combined as the network input, and the features output by the last fully connected layer of VGG16 are reordered into a two-dimensional image to obtain the saliency result map. The network parameters are trained to obtain the final hyperspectral image salient object detection model. The invention fully mines the high-level semantic information contained in the image to improve the detection accuracy of the model.

Description

Deep learning hyperspectral image saliency detection algorithm combined with global prior
Technical Field
The invention relates to the field of hyperspectral image salient object detection algorithms, and in particular to a deep learning hyperspectral image saliency detection algorithm combined with a global prior.
Background
Salient object detection locates the image regions that attract the human visual cognitive system, and is the basis of many computer vision tasks such as image cropping, image classification, and object recognition. A hyperspectral image records the reflectance spectra of scene objects at nanometer-scale spectral resolution, and is therefore widely applied in fields such as the food industry, remote sensing, and medical care. Hyperspectral images acquired in the visible spectrum contain information exploitable by the human visual system that ordinary images do not represent well. Solving the salient object detection problem with hyperspectral images is therefore of great significance.
At present, most salient object detection methods are oriented to natural images and are rarely applied to hyperspectral images. Existing methods mostly adopt a bottom-up model: taking the pixel as the basic unit, they extract low-level visual features of the image such as intensity, texture, and orientation, and compute center-surround differences to obtain a per-pixel saliency value. For example, the document "Jie Liang, Jun Zhou, Xiao Bai, and Yuntao Qian, 'Salient object detection in hyperspectral imagery,' Image Processing (ICIP), 2013 20th IEEE International Conference on, Sept 2013, pp. 2393-2397" uses the conventional Itti model, in which the intensity and orientation saliency maps are computed as usual, color features are replaced with spectral features, and the spectral differences between pixels and their neighbors are measured with Euclidean distances and angular distances of spectral vectors. However, the bottom-up model only exploits low-level image features, misses the rich semantic information contained in the image, and yields low detection accuracy on images with low contrast and complex backgrounds.
In recent years, with the rise of artificial intelligence and continuous upgrades in computer hardware, training deep network models has become simple and convenient, and deep convolutional neural networks have been applied with great success to tasks such as image semantic segmentation and image recognition. A deep neural network can progressively learn semantic features from low level to high level, has strong learning and generalization capabilities, and can greatly improve detection accuracy when applied to salient object detection. Because hyperspectral image data are difficult to acquire, convolutional neural networks are currently used almost exclusively on natural images and have not been popularized on hyperspectral images; combining deep learning with hyperspectral image salient object detection therefore remains a major challenge.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the existing problems, a deep learning hyperspectral image saliency detection algorithm combined with a global prior is provided, to address the low detection accuracy of traditional hyperspectral image salient object detection methods in low-contrast and complex scenes.
The technical scheme adopted by the invention is as follows: a deep learning hyperspectral image saliency detection algorithm combined with global prior comprises the following steps:
S1: performing data expansion on the hyperspectral image to increase the number of training samples;
S2: calculating the spectral gradient of each pixel of the hyperspectral image to generate a spectral gradient image;
S3: performing superpixel segmentation on the spectral gradient image with a simple linear iterative clustering method to generate a superpixel segmentation map;
S4: calculating the spectral angular distance of each superpixel and generating a spectral angular distance feature map for each superpixel;
S5: merging the spectral angular distance feature map of each superpixel with the superpixel segmentation image, inputting the merged image into a convolutional neural network for processing, and generating the final saliency result map.
Further, in S1, the hyperspectral image is data-augmented by mirroring and rotation.
Further, in S2, the spectral gradient feature is first used to eliminate the influence of uneven brightness on the data, and then the spectral gradient is calculated for each pixel to generate a spectral gradient image.
Further, the S3 specifically includes:
S31: setting the number of superpixels to K, and uniformly initializing the same number of cluster centers {C_k}, k = 1, …, K, on the spectral gradient image, with an interval s between adjacent cluster centers;
S32: calculating the gradient values of all pixel points in a 3 × 3 neighborhood of each cluster center, and moving the cluster center to the position with the minimum gradient in the neighborhood;
S33: in the neighborhood of side length 2s around each cluster center, assigning each pixel point the same class label as the cluster center, and then iteratively updating the pixel labels and the cluster centers according to the distance measure;
the distance measure between pixel point j and cluster center C_k is

D(j, k) = d_g(j, k) + α·d_s(j, k)

wherein d_g(j, k) is the Euclidean distance between the spectral gradients of pixel point j and the cluster center, d_s(j, k) is the Euclidean distance between their spatial positions, and α is the weight coefficient between the two distances;
S34: if the distance measure between the pixel point and the current cluster center is smaller than its distance to the previously assigned cluster center, marking the pixel point as belonging to the current cluster center C_k; otherwise keeping it unchanged;
S35: repeating the steps S33 and S34 until the change of each cluster center between two iterations is smaller than the set threshold.
Further, in S4, the spectral angular distance is calculated as:

SAD(p_i, p_j) = arccos( (ḡ_i · ḡ_j) / (‖ḡ_i‖ ‖ḡ_j‖) )

wherein p_i and p_j represent two distinct superpixels, and ḡ_i and ḡ_j are the average spectral gradient vectors of all pixel points in p_i and p_j, respectively.
Further, in S5, the VGG16 convolutional neural network is used as the basic network structure; the spectral angular distance feature map of each superpixel is merged with the superpixel segmentation image and input to the network; the softmax layer of the VGG16 network is removed, and the one-dimensional vector output by the last fully connected layer is reordered into a two-dimensional image as the saliency result map.
Compared with the prior art, the beneficial effects of adopting the technical scheme are as follows: according to the method, the deep neural network is combined with the hyperspectral image, and high-level semantic information contained in the hyperspectral image is fully excavated, so that the detection precision of the model is improved.
Drawings
FIG. 1 is a general flow diagram of the present invention.
Fig. 2 is a block diagram of the VGG16 convolutional neural network employed in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the invention provides a deep learning hyperspectral image saliency detection method combined with a global prior, comprising the following steps:
1. image preprocessing: due to the fact that hyperspectral image data used for significance detection are insufficient and cannot support training of a depth network, the hyperspectral images are subjected to data expansion by methods of mirroring, rotation and the like, and the number of training samples is increased.
2. Generating a spectral gradient image: the spectral gradient feature is adopted to eliminate the influence of uneven brightness on the data, and the spectral gradient of each pixel is calculated to generate a spectral gradient image. The spectral gradient vector of pixel i is

g_i = (g_i^1, g_i^2, …, g_i^{B−1}),  with  g_i^j = (s_i^{j+1} − s_i^j) / Δλ

where g_i^j is the jth component of the spectral gradient vector of pixel i, s_i^j is the jth component of the original spectral vector, B is the number of bands, and Δλ is the spacing of adjacent bands.
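A minimal sketch of this computation, assuming the gradient is the finite difference of adjacent bands divided by the band spacing (the function name and a scalar Δλ are assumptions of this sketch):

```python
import numpy as np

def spectral_gradient(cube, delta_lambda=1.0):
    """Per-pixel spectral gradient of a hyperspectral cube of shape (H, W, B).

    Each gradient component is the difference between adjacent bands divided
    by the band spacing, so the output has shape (H, W, B - 1)."""
    return np.diff(cube, axis=-1) / delta_lambda
```

Because each component is a difference of adjacent bands, a multiplicative brightness change that is flat across wavelength largely cancels, which is the stated motivation for working on the gradient image.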
3. Superpixel segmentation of the hyperspectral image: superpixel segmentation is performed on the spectral gradient image using the simple linear iterative clustering (SLIC) method, which preserves the edge information of targets in the image and reduces the amount of computation in the detection process.
The segmentation steps are as follows:
(1) Based on the set number of superpixels K, uniformly initialize the same number of cluster centers {C_k}, k = 1, …, K, on the spectral gradient image, with spacing s between adjacent cluster centers.
(2) Calculate the gradient values of all pixel points in the 3 × 3 neighborhood of each cluster center, and move the cluster center to the position with the minimum gradient in the neighborhood.
(3) In the neighborhood of side length 2s around each cluster center, assign each pixel point the same class label as the cluster center, then iteratively update the pixel labels and the cluster centers according to the distance measure. The distance measure between pixel point j and cluster center C_k is

D(j, k) = d_g(j, k) + α·d_s(j, k)

where d_g(j, k) is the Euclidean distance between the spectral gradients of pixel point j and the cluster center, d_s(j, k) is the Euclidean distance between their spatial positions, and α is the weight coefficient between the two distances.
(4) If D(j, k) between pixel point j and the current cluster center is smaller than its distance to the previously assigned cluster center, relabel pixel point j as belonging to the current cluster center C_k; otherwise it remains unchanged.
(5) Repeat steps (3) and (4) until the change of each cluster center between two iterations is smaller than a set threshold.
4. In order to make full use of the spectral information and accelerate the subsequent network's detection, a spectral angular distance feature map is calculated for each superpixel and used as its global prior map. The spectral angular distance between two superpixels p_i and p_j is

SAD(p_i, p_j) = arccos( (ḡ_i · ḡ_j) / (‖ḡ_i‖ ‖ḡ_j‖) )

where ḡ_i and ḡ_j are the average spectral gradient vectors of all pixel points in p_i and p_j, respectively. In the global prior map of superpixel p_i, the weighted sum of the spectral angular distances between p_i and the other superpixels is recorded as the feature value f(p_i) of p_i:

f(p_i) = Σ_{j=1, j≠i}^{K} n_j · w(d(p_i, p_j)) · SAD(p_i, p_j)

where K is the number of superpixels, n_j is the number of pixels in superpixel p_j, w(d(p_i, p_j)) is the spatial distance weight, and d(p_i, p_j) is the spatial distance between superpixels p_i and p_j. Except for superpixel p_i itself, the feature value of each remaining superpixel is presented uniformly over the pixels of the corresponding superpixel.
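A sketch of the spectral angular distance and the global prior feature value f(p_i). The Gaussian form of the spatial weight and the normalisation of centroid coordinates by the image size are assumptions of this sketch; the text only names a "spatial distance weight".

```python
import numpy as np

def spectral_angle(g1, g2):
    """Spectral angular distance between two mean spectral-gradient vectors."""
    c = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def global_prior(grad, labels, sigma=0.5):
    """Paint f(p_i) over every superpixel of an (H, W, C) gradient cube.

    For each superpixel, f is the pixel-count- and distance-weighted sum of
    spectral angles to all other superpixels; w = exp(-d^2 / sigma^2) is an
    assumed Gaussian spatial weight."""
    H, W, _ = grad.shape
    yy, xx = np.mgrid[0:H, 0:W]
    ids = np.unique(labels)
    feats = np.array([grad[labels == k].mean(axis=0) for k in ids])
    cents = np.array([[yy[labels == k].mean() / H,
                       xx[labels == k].mean() / W] for k in ids])
    counts = np.array([(labels == k).sum() for k in ids])
    prior = np.zeros((H, W))
    for a, i in enumerate(ids):
        f = 0.0
        for b, _ in enumerate(ids):
            if a == b:
                continue
            d = np.linalg.norm(cents[a] - cents[b])   # spatial distance
            w = np.exp(-d ** 2 / sigma ** 2)          # assumed weight form
            f += counts[b] * w * spectral_angle(feats[a], feats[b])
        prior[labels == i] = f                        # fill whole superpixel
    return prior
```

Superpixels whose mean gradient differs strongly from the rest of the image receive large values, so the map serves as the global contrast prior fed to the network.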
5. VGG16 is used as the basic network structure; its structure diagram is shown in Fig. 2. The global prior map and segmented image of each superpixel are merged as the network input, the final softmax layer is removed, and the one-dimensional vector output by the fully connected layer is reordered into a two-dimensional image as the saliency result map. The network parameters are trained to obtain the final hyperspectral image salient object detection model.
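The reordering of the fully connected output into a two-dimensional saliency map can be sketched as follows; the target map size and the min-max normalisation are assumptions of this sketch — the text only specifies reordering the one-dimensional vector into a two-dimensional image:

```python
import numpy as np

def vector_to_map(fc_out, height, width):
    """Reorder the 1-D feature vector from the last fully connected layer
    into a 2-D saliency map (row-major), then min-max normalise to [0, 1].

    Assumes the final FC layer was resized to height * width outputs."""
    sal = np.asarray(fc_out, dtype=float).reshape(height, width)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / (rng + 1e-8)
```

The same reshape applies regardless of framework: the network's last layer is sized to the desired map resolution, and each output unit becomes one pixel of the saliency result.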
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or any novel combination of features disclosed in this specification and any novel method or process steps or any novel combination of features disclosed. Those skilled in the art to which the invention pertains will appreciate that insubstantial changes or modifications can be made without departing from the spirit of the invention as defined by the appended claims.

Claims (6)

1. A deep learning hyperspectral image saliency detection algorithm combined with global prior is characterized by comprising the following steps:
S1: performing data expansion on the hyperspectral image to increase the number of training samples;
S2: calculating the spectral gradient of each pixel of the hyperspectral image to generate a spectral gradient image;
S3: performing superpixel segmentation on the spectral gradient image with a simple linear iterative clustering method to generate a superpixel segmentation map;
S4: calculating the spectral angular distance of each superpixel and generating a spectral angular distance feature map for each superpixel;
S5: merging the spectral angular distance feature map of each superpixel with the superpixel segmentation image, inputting the merged image into a convolutional neural network for processing, and generating the final saliency result map.
2. The algorithm for detecting the significance of the deep learning hyperspectral image combined with the global prior as claimed in claim 1, wherein in S1, the hyperspectral image is data-augmented by mirroring and rotation.
3. The algorithm for detecting the significance of the deep learning hyperspectral image combined with the global prior as claimed in claim 1, wherein in S2, the spectral gradient feature is first used to eliminate the influence of non-uniform brightness on the data, and then the spectral gradient is calculated for each pixel to generate the spectral gradient image.
4. The deep learning hyperspectral image saliency detection algorithm combined with global prior according to any one of claims 1 to 3, wherein the S3 specifically comprises:
S31: setting the number of superpixels to K, and uniformly initializing the same number of cluster centers {C_k}, k = 1, …, K, on the spectral gradient image, with an interval s between adjacent cluster centers;
S32: calculating the gradient values of all pixel points in a 3 × 3 neighborhood of each cluster center, and moving the cluster center to the position with the minimum gradient in the neighborhood;
S33: in the neighborhood of side length 2s around each cluster center, assigning each pixel point the same class label as the cluster center, and then iteratively updating the pixel labels and the cluster centers according to the distance measure;
the distance measure between pixel point j and cluster center C_k is

D(j, k) = d_g(j, k) + α·d_s(j, k)

wherein d_g(j, k) is the Euclidean distance between the spectral gradients of pixel point j and the cluster center, d_s(j, k) is the Euclidean distance between their spatial positions, and α is the weight coefficient between the two distances;
S34: if the distance measure between the pixel point and the current cluster center is smaller than its distance to the previously assigned cluster center, marking the pixel point as belonging to the current cluster center C_k; otherwise keeping it unchanged;
S35: repeating the steps S33 and S34 until the change of each cluster center between two iterations is smaller than the set threshold.
5. The deep learning hyperspectral image saliency detection algorithm combined with global prior according to any one of claims 1 to 3, wherein in S4, the spectral angular distance is calculated as:

SAD(p_i, p_j) = arccos( (ḡ_i · ḡ_j) / (‖ḡ_i‖ ‖ḡ_j‖) )

wherein p_i and p_j represent two distinct superpixels, and ḡ_i and ḡ_j are the average spectral gradient vectors of all pixel points in p_i and p_j, respectively.
6. The deep learning hyperspectral image saliency detection algorithm combined with global prior according to any one of claims 1 to 3, wherein in S5, the VGG16 convolutional neural network is used as the basic network structure, the spectral angular distance feature map of each superpixel is merged with the superpixel segmentation image and then input into the network, the softmax layer of the VGG16 convolutional neural network is removed, and the one-dimensional vector output by the fully connected layer of the VGG16 convolutional neural network is reordered into a two-dimensional image as the saliency result map.
CN201911419512.0A 2019-12-31 2019-12-31 Deep learning hyperspectral image saliency detection algorithm combined with global prior Active CN111160300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911419512.0A CN111160300B (en) 2019-12-31 2019-12-31 Deep learning hyperspectral image saliency detection algorithm combined with global prior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911419512.0A CN111160300B (en) 2019-12-31 2019-12-31 Deep learning hyperspectral image saliency detection algorithm combined with global prior

Publications (2)

Publication Number Publication Date
CN111160300A CN111160300A (en) 2020-05-15
CN111160300B true CN111160300B (en) 2022-06-28

Family

ID=70560427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911419512.0A Active CN111160300B (en) 2019-12-31 2019-12-31 Deep learning hyperspectral image saliency detection algorithm combined with global prior

Country Status (1)

Country Link
CN (1) CN111160300B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801881B (en) * 2021-04-13 2021-06-22 湖南大学 High-resolution hyperspectral calculation imaging method, system and medium

Citations (11)

Publication number Priority date Publication date Assignee Title
CN102903116A (en) * 2012-10-20 2013-01-30 复旦大学 Manifold dimension reduction method of hyperspectral images based on image block distance
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
CN104463203A (en) * 2014-12-03 2015-03-25 复旦大学 Hyper-spectral remote sensing image semi-supervised classification method based on ground object class membership grading
CN106097252A (en) * 2016-06-23 2016-11-09 哈尔滨工业大学 High spectrum image superpixel segmentation method based on figure Graph model
CN106570874A (en) * 2016-11-10 2017-04-19 宁波大学 Image marking method combining local image constraint and overall target constraint
CN107274419A (en) * 2017-07-10 2017-10-20 北京工业大学 A kind of deep learning conspicuousness detection method based on global priori and local context
CN107274416A (en) * 2017-06-13 2017-10-20 西北工业大学 High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure
CN107316309A (en) * 2017-06-29 2017-11-03 西北工业大学 High spectrum image conspicuousness object detection method based on matrix decomposition
CN107609552A (en) * 2017-08-23 2018-01-19 西安电子科技大学 Salient region detection method based on markov absorbing model
CN109191482A (en) * 2018-10-18 2019-01-11 北京理工大学 A kind of image combination and segmentation method based on region adaptivity spectral modeling threshold value
CN109446894A (en) * 2018-09-18 2019-03-08 西安电子科技大学 The multispectral image change detecting method clustered based on probabilistic segmentation and Gaussian Mixture

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107301651A (en) * 2016-04-13 2017-10-27 索尼公司 Object tracking apparatus and method

Non-Patent Citations (4)

Title
"Region Merging Method for Remote Sensing Spectral Image Aided by Inter-Segment and Boundary Homogeneities";Yuhan Zhang等;《remote sensing》;20190614;第1-22页 *
"Spatial Group Sparsity Regularized Nonnegative Matrix Factorization for Hyperspectral Unmixing";Xinyu Wang等;《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》;20170728;第1-18页 *
"基于特征度量的高光谱遥感影像波段选择方法研究";谭雨蕾;《中国优秀硕士学位论文全文数据库 基础科学辑》;20171015;第A011-65页 *
优化加权核K-means聚类初始中心点的SLIC算法;杨艳等;《计算机科学与探索》;20170823(第03期);第494-501页 *

Also Published As

Publication number Publication date
CN111160300A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
US9042648B2 (en) Salient object segmentation
Wei et al. Tensor voting guided mesh denoising
Jain et al. Deformable template models: A review
Miao et al. A semi-automatic method for road centerline extraction from VHR images
Ochs et al. Segmentation of moving objects by long term video analysis
Babenko et al. Robust object tracking with online multiple instance learning
Mukhopadhyay et al. Fusion of 2D grayscale images using multiscale morphology
CN110610505A (en) Image segmentation method fusing depth and color information
Kong et al. Intrinsic depth: Improving depth transfer with intrinsic images
Liu et al. Interactive geospatial object extraction in high resolution remote sensing images using shape-based global minimization active contour model
Meng et al. Image fusion with saliency map and interest points
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN113033432A (en) Remote sensing image residential area extraction method based on progressive supervision
CN112686952A (en) Image optical flow computing system, method and application
CN111160300B (en) Deep learning hyperspectral image saliency detection algorithm combined with global prior
Arulananth et al. Edge detection using fast pixel based matching and contours mapping algorithms
Babu et al. Robust tracking with interest points: A sparse representation approach
Parmehr et al. Automatic parameter selection for intensity-based registration of imagery to LiDAR data
Reso et al. Occlusion-aware method for temporally consistent superpixels
Quast et al. Shape adaptive mean shift object tracking using gaussian mixture models
Schulz et al. Object-class segmentation using deep convolutional neural networks
Schmidt et al. Real-time rotated convolutional descriptor for surgical environments
Ghosh et al. Robust simultaneous registration and segmentation with sparse error reconstruction
Poornima et al. A method to align images using image segmentation
De La Vega et al. Object segmentation in hyperspectral images using active contours and graph cuts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant