CN105243154B - Remote sensing image retrieval method and system based on salient point features and sparse autoencoding - Google Patents

Remote sensing image retrieval method and system based on salient point features and sparse autoencoding Download PDF

Info

Publication number
CN105243154B
CN105243154B CN201510708598.4A CN201510708598A CN105243154B CN 105243154 B CN105243154 B CN 105243154B CN 201510708598 A CN201510708598 A CN 201510708598A CN 105243154 B CN105243154 B CN 105243154B
Authority
CN
China
Prior art keywords
image
salient
feature
matrix
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510708598.4A
Other languages
Chinese (zh)
Other versions
CN105243154A (en
Inventor
邵振峰
周维勋
李从敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201510708598.4A priority Critical patent/CN105243154B/en
Publication of CN105243154A publication Critical patent/CN105243154A/en
Application granted granted Critical
Publication of CN105243154B publication Critical patent/CN105243154B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A remote sensing image retrieval method and system based on salient point features and sparse autoencoding. The feature points of each image in an image library are extracted to obtain a feature point matrix, and the saliency map of each image is computed with a visual attention model. The saliency map is binarized with an adaptive threshold method, and a mask operation between the binarized map and the feature point matrix yields the filtered salient feature points. A number of salient feature points are selected from each training image to construct training samples, and a sparse autoencoder network is trained on the whitened training sample set to obtain a feature extractor. The feature extractor is used for feature extraction, and a threshold function sparsifies the extracted image features to produce the final feature vectors used for retrieval. Image retrieval is then performed on the extracted feature vectors according to a preset similarity measure. The trained sparse autoencoder network extracts image features automatically, the extracted features are highly discriminative, and retrieval precision is guaranteed.

Description

Remote sensing image retrieval method and system based on salient point features and sparse autoencoding
Technical field
The invention belongs to the technical field of image processing and relates to a remote sensing image retrieval method and system based on salient point features and sparse autoencoding.
Background technology
With the improvement of remote sensing Earth-observation capability, the available remote sensing data are becoming increasingly diverse and massive. While massive remote sensing data provide a rich data source for major applications, current data processing and analysis capabilities are insufficient, and the problem of remote sensing big data being "rich in data but poor in information" is becoming increasingly prominent. How to use emerging scientific computing techniques to quickly locate and intelligently retrieve targets or regions of interest in remote sensing images is a challenge for remote sensing big data processing and analysis and an urgent scientific problem in the field of remote sensing image processing. Remote sensing image retrieval is an effective way to resolve this bottleneck, so studying efficient image retrieval techniques is of great importance.
Current remote sensing image retrieval techniques mainly measure similarity between low-level image features and then return similar images. Compared with traditional keyword-based retrieval, content-based retrieval is more efficient and accurate, but designing a feature descriptor that can effectively describe the varied and complex scenes of remote sensing images is very difficult. In recent years, deep learning has become a research hotspot in image recognition because of its good feature-learning ability. Compared with hand-crafted features, deep-learning-based methods can obtain a feature extractor from training samples to extract image features automatically, which suits remote sensing image retrieval involving complex scenes. Because its network design and training are relatively simple, the sparse autoencoder has become a common deep learning method and is widely used in image processing.
When training a sparse autoencoder network, existing methods typically construct training samples by randomly selecting image patches of a certain number and size from the training images. This sample-construction method has the following defects. First, from the viewpoint of human visual perception, what attracts people's attention are the specific targets in a remote sensing image, and randomly selected image patches may not contain the targets of interest. Second, since the size of a training image is fixed, constructing training samples from randomly selected patches may lead to an insufficient number of samples. Third, since the training samples are image patches, the features obtained with the trained network describe image patches rather than whole images and therefore cannot be used directly for image retrieval; to obtain features of the whole image, convolution is usually required, which is not only computationally inefficient but also introduces extra parameters. As for the choice of activation function, existing methods generally use the sigmoid function for the hidden-layer neurons, but the sigmoid function suffers from severe gradient vanishing during back-propagation, which is unfavorable for network training. As for feature extraction with a sparse autoencoder, existing methods usually take the hidden-layer activations directly as the extracted features without sparsification, whereas experiments show that sparse features perform better.
Invention content
In view of the deficiencies of the prior art, the present invention provides a remote sensing image retrieval scheme based on salient point features and sparse autoencoding. The invention extracts salient point features of remote sensing images as the input for training a sparse autoencoder network, extracts image features with the trained feature extractor, and finally performs remote sensing image retrieval.
The technical solution adopted by the present invention is a remote sensing image retrieval method based on salient point features and sparse autoencoding, comprising the following steps:
Step 1, the feature points of each image in the image library are extracted to obtain a feature point matrix, and the saliency map of each image is computed with a visual attention model;
Step 2, for the saliency map of each image in the image library, the saliency map is binarized with an adaptive threshold method, and a mask operation is performed between the binarized map and the feature point matrix corresponding to the image to obtain the filtered salient feature points; this is implemented as follows:
when binarizing the saliency map with the adaptive threshold method, the binarization threshold T of the saliency map is determined from the saliency values of its pixels, where w and h denote the width and height of the saliency map, and I(x, y) denotes the saliency value of the saliency map at pixel (x, y);
the saliency map is binarized according to the threshold T to obtain a binarized saliency map with corresponding matrix Ibinary; letting P denote the feature point matrix of the image and PI the filtered salient feature point matrix, the salient feature point matrix is computed as PI = P ⊙ Ibinary, where ⊙ denotes element-wise multiplication;
Step 3, several images are taken from the image library as training images, a number of salient feature points are chosen from each training image to construct training samples and obtain a training sample set X, and a sparse autoencoder network is trained on the whitened training sample set X′ to obtain a feature extractor;
the sparse autoencoder network comprises an input layer, a hidden layer and an output layer, where the hidden-layer neurons use the ReLU function as the activation function and the output-layer neurons use the softplus function as the activation function; the cost function of the sparse autoencoder network is defined as the sum of a mean-square-error term and a regularization term, where HW,b denotes the network output for the training sample set X′, W = [W1, W2] and b = [b1, b2] denote the weight and bias matrices formed by the input-to-hidden weights W1 and biases b1 and the hidden-to-output weights W2 and biases b2, and λ denotes the regularization coefficient;
Step 4, for all images in the image library, feature extraction is performed with the feature extractor trained in step 3, and the extracted image features are sparsified with a threshold function to obtain the final feature vectors used for retrieval; this is implemented as follows:
the extracted image feature Y is expressed as
Y = f1(W1·PI′ + b1)
where the salient feature point matrix PI′ is the result of whitening the filtered salient feature point matrix PI obtained in step 2;
for the extracted image feature Y, the following sparsification yields the sparse feature matrix Z,
Z = [Z+, Z−] = [max(0, Y − α), max(0, α − Y)]
where α denotes the threshold of the threshold function, Z+ = max(0, Y − α) and Z− = max(0, α − Y);
if the number of SIFT points detected in an image is n, the sparse feature matrix Z is further aggregated to obtain the feature vector F, where zi+ and zi− denote the i-th column vectors of Z+ and Z−, respectively.
Step 5, based on the feature vectors extracted in step 4, image retrieval is performed according to a preset similarity measure.
Moreover, in step 1, the feature points of each image in the image library are extracted with the SIFT operator to obtain the feature point matrix.
Moreover, in step 5, the preset similarity measure is the city-block distance.
The present invention correspondingly provides a remote sensing image retrieval system based on salient point features and sparse autoencoding, comprising the following modules:
a feature point extraction module, for extracting the feature points of each image in the image library to obtain a feature point matrix, and computing the saliency map of each image with a visual attention model;
a salient feature point extraction module, for binarizing the saliency map of each image in the image library with an adaptive threshold method, and performing a mask operation between the binarized map and the feature point matrix corresponding to the image to obtain the filtered salient feature points; implemented as follows:
when binarizing the saliency map with the adaptive threshold method, the binarization threshold T of the saliency map is determined from the saliency values of its pixels, where w and h denote the width and height of the saliency map, and I(x, y) denotes the saliency value of the saliency map at pixel (x, y);
the saliency map is binarized according to the threshold T to obtain a binarized saliency map with corresponding matrix Ibinary; letting P denote the feature point matrix of the image and PI the filtered salient feature point matrix, the salient feature point matrix is computed as PI = P ⊙ Ibinary, where ⊙ denotes element-wise multiplication;
a training module, for taking several images from the image library as training images, selecting a number of salient feature points from each training image to construct training samples and obtain a training sample set X, and training a sparse autoencoder network on the whitened training sample set X′ to obtain a feature extractor;
the sparse autoencoder network comprises an input layer, a hidden layer and an output layer, where the hidden-layer neurons use the ReLU function as the activation function and the output-layer neurons use the softplus function as the activation function; the cost function of the sparse autoencoder network is defined as the sum of a mean-square-error term and a regularization term, where HW,b denotes the network output for the training sample set X′, W = [W1, W2] and b = [b1, b2] denote the weight and bias matrices formed by the input-to-hidden weights W1 and biases b1 and the hidden-to-output weights W2 and biases b2, and λ denotes the regularization coefficient;
a feature extraction module, for performing feature extraction on all images in the image library with the feature extractor trained in step 3, and sparsifying the extracted image features with a threshold function to obtain the final feature vectors used for retrieval; implemented as follows:
the extracted image feature Y is expressed as
Y = f1(W1·PI′ + b1)
where the salient feature point matrix PI′ is the result of whitening the filtered salient feature point matrix PI obtained in step 2;
for the extracted image feature Y, the following sparsification yields the sparse feature matrix Z,
Z = [Z+, Z−] = [max(0, Y − α), max(0, α − Y)]
where α denotes the threshold of the threshold function, Z+ = max(0, Y − α) and Z− = max(0, α − Y);
if the number of SIFT points detected in an image is n, the sparse feature matrix Z is further aggregated to obtain the feature vector F, where zi+ and zi− denote the i-th column vectors of Z+ and Z−, respectively.
a retrieval module, for performing image retrieval according to a preset similarity measure based on the feature vectors extracted by the feature extraction module.
Moreover, in the feature point extraction module, the feature points of each image in the image library are extracted with the SIFT operator to obtain the feature point matrix.
Moreover, in the retrieval module, the preset similarity measure is the city-block distance.
Compared with the prior art, the present invention has the following features and beneficial effects:
1. The saliency map of an image is computed with a visual attention model, and the binarized saliency map is used to filter the SIFT feature points to obtain the salient feature points of the image, which conforms to the visual attention characteristics of the human eye and better reflects the user's retrieval requirements.
2. The salient feature points of the images are chosen to construct the training samples, which overcomes the defects of the traditional construction of training samples by random sampling of image patches.
3. The feature extractor trained with the sparse autoencoder network extracts image features automatically, removing the need to hand-design features for complex remote sensing images.
4. Good extensibility: the training samples include but are not limited to salient feature points.
Description of the drawings
Fig. 1 is the flow chart of the embodiment of the present invention.
Specific implementation mode
The remote sensing image retrieval method based on salient point features and sparse autoencoding proposed by the present invention first extracts the feature points of an image to obtain a feature point matrix and computes the saliency map of the image; then binarizes the saliency map with an adaptive threshold and performs a "mask" operation with the feature point matrix to obtain the salient feature points; then chooses a certain number of salient feature points to construct training samples, trains the sparse autoencoder network, and uses the trained feature extractor to extract image features automatically and obtain the feature vectors used for retrieval; finally, image retrieval is performed according to a preset similarity measure and similar images are returned.
To describe the present invention in detail, the flow of an embodiment of the technical solution is described with reference to Fig. 1 as follows:
Step 1, the feature points of each image in the image library are extracted to obtain a feature point matrix, and the saliency map of each image is computed with a visual attention model.
In specific implementation, an existing image library or an image library built by those skilled in the art may be used. For example, a high-resolution remote sensing image covering multiple land-cover categories is selected and cut with the tile partition method to construct a retrieval image library containing multiple categories. For each image in the image library, the embodiment first extracts the feature points (keypoints) of the image with the SIFT (Scale Invariant Feature Transform) operator to obtain the feature point matrix, and then computes the saliency map of the image with the GBVS (Graph-Based Visual Saliency) model. The tile partition method, the SIFT operator and the GBVS model are prior art and are not described in detail here.
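As an illustration of step 1, the following sketch extracts SIFT keypoints and a per-image saliency map with OpenCV. It is only a sketch: the patent computes saliency with the GBVS model, for which OpenCV has no implementation, so a spectral-residual saliency detector is used here as a stand-in, and the function and variable names are illustrative only.

```python
import cv2
import numpy as np

def extract_keypoints_and_saliency(image_path):
    """Step 1 sketch: SIFT keypoints (feature point matrix) plus a saliency map."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # SIFT keypoints and their 128-dimensional descriptors; descriptors: (n, 128).
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    # Saliency map I(x, y); spectral residual stands in for the GBVS model here.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(gray)

    return keypoints, np.asarray(descriptors), saliency_map.astype(np.float32)
```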
Step 2, for the saliency map of each image in the image library, the saliency map is binarized with an adaptive threshold method, and a "mask" operation is performed between the binarized map and the feature point matrix corresponding to the image to obtain the filtered salient feature points.

In the embodiment, the binarization threshold of the saliency map is determined from the saliency values of its pixels, the saliency map is binarized, and a "mask" operation between the binarized saliency map and the feature point matrix yields the salient feature points, implemented as follows:
According to the saliency values of the saliency map's pixels, the binarization threshold T of the saliency map is determined by formula (1), where w and h denote the width and height of the saliency map, and I(x, y) denotes the saliency value of the saliency map at pixel (x, y).
According to the binarization threshold T, the saliency map is binarized to obtain a binarized saliency map with corresponding matrix Ibinary. The binarized saliency map is used to filter the feature point matrix of the image and obtain the salient feature points. Let P denote the feature point matrix of the image and PI the filtered salient feature point matrix; then the salient feature point matrix is computed by formula (2),

PI = P ⊙ Ibinary    (2)

Each element of the matrix P is the feature vector of one SIFT keypoint; the feature vector of a SIFT keypoint is usually 128-dimensional, and the embodiment of the present invention accordingly uses 128 dimensions. P128(x, y) denotes the feature vector of the feature point at pixel (x, y); if there is no feature point at (x, y), then P128(x, y) = 0. Each element of Ibinary is 0 or 1, and Ibinary(x, y) denotes the value of the binarized saliency map at (x, y). The symbol ⊙ denotes element-wise multiplication.
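A minimal sketch of step 2 follows, under stated assumptions: the adaptive threshold T is taken as twice the mean saliency of the map (the exact expression of formula (1) is not reproduced in this text), and the "mask" operation is realized by keeping a SIFT descriptor only when the binarized map equals 1 at the keypoint's location.

```python
import numpy as np

def filter_salient_keypoints(keypoints, descriptors, saliency_map):
    """Step 2 sketch: binarize the saliency map and keep only salient keypoints."""
    h, w = saliency_map.shape
    T = 2.0 * saliency_map.sum() / (w * h)               # assumed adaptive threshold
    binary_map = (saliency_map >= T).astype(np.uint8)    # I_binary

    salient_descriptors = []
    for kp, desc in zip(keypoints, descriptors):
        x = min(int(round(kp.pt[0])), w - 1)
        y = min(int(round(kp.pt[1])), h - 1)
        if binary_map[y, x] == 1:                        # the "mask" operation
            salient_descriptors.append(desc)

    return np.array(salient_descriptors)                 # P_I, shape (k, 128)
```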
Step 3, several images are chosen from the image library as training images, a number of salient feature points are chosen from each training image to construct training samples, and the sparse autoencoder network is trained to obtain a feature extractor.

In the embodiment, step 3 constructs the training samples from the salient feature points of a certain number of training images rather than from image patches, and during training the ReLU (Rectified Linear Units) function rather than the sigmoid function is chosen as the activation function of the hidden-layer neurons of the sparse autoencoder network. For example, in step 3 each salient feature point is a 4 × 4 × 8 = 128-dimensional feature vector, and one feature point constitutes one training sample. In specific implementation, the number of training images and the number of salient feature points per training image can be specified by those skilled in the art.
This is implemented as follows.

First, the salient feature points of the images are chosen and the training sample set is constructed.

The embodiment first randomly selects a certain number of images from the image library as training images, and then randomly selects a certain number of salient feature points of the training images to construct the training sample set. The training sample set can be expressed by formula (3), where m denotes the number of training samples and each column of X is one salient feature point, i.e. one training sample. For example, [x1,1, x2,1, …, x128,1] is the 1st training sample and [x1,2, x2,2, …, x128,2] is the 2nd training sample.
Then, the sparse autoencoder network is trained to obtain the feature extractor.

Because the salient feature points extracted from the same training image are correlated to some extent, the training sample set X cannot be fed directly into the sparse autoencoder network for training. Before training, ZCA (Zero Component Analysis) whitening is applied to the training samples to obtain the whitened training sample set X′, and the parameters of the ZCA whitening are saved. The implementation of ZCA whitening is prior art and is not described in detail by the present invention.
The embodiment defines a three-layer sparse autoencoder network comprising an input layer, a hidden layer and an output layer, where the hidden-layer neurons use the ReLU function f1 = max(0, x) as the activation function and the output-layer neurons use the softplus function f2 = ln(1 + e^x) as the activation function. Compared with the traditional sigmoid function, the ReLU function alleviates the gradient-vanishing problem to some extent and is more conducive to network training. Given the training sample set X′, the cost function of the sparse autoencoder network may be defined as formula (4).

The first term in the formula is a mean-square-error term and the second term is a regularization term; HW,b denotes the network output for the training sample set X′, W = [W1, W2] and b = [b1, b2] denote the weight and bias matrices formed by the input-to-hidden weights W1 and biases b1 and the hidden-to-output weights W2 and biases b2, and λ denotes the regularization coefficient. In specific implementation, methods such as gradient descent can be used during training to optimize the cost function in formula (4) and obtain the weight and bias matrices W and b.
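A minimal sketch of step 3 follows, assuming a Keras implementation: the 128-dimensional salient-point samples are ZCA-whitened, and a three-layer autoencoder with a ReLU hidden layer (f1) and a softplus output layer (f2) is trained with a mean-square-error loss plus an L2 weight penalty in place of the cost function of formula (4). The hidden-layer size, the regularization coefficient and the use of Adam instead of plain gradient descent are assumptions.

```python
import numpy as np
import tensorflow as tf

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten training samples X of shape (m, 128); keep mean and transform."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W_zca = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W_zca, mean, W_zca   # save mean and W_zca for use at retrieval time

def build_sparse_autoencoder(input_dim=128, hidden_dim=196, lam=3e-3):
    """Input layer, ReLU hidden layer (f1) and softplus output layer (f2)."""
    reg = tf.keras.regularizers.l2(lam)
    inputs = tf.keras.Input(shape=(input_dim,))
    hidden = tf.keras.layers.Dense(hidden_dim, activation="relu",
                                   kernel_regularizer=reg)(inputs)
    outputs = tf.keras.layers.Dense(input_dim, activation="softplus",
                                    kernel_regularizer=reg)(hidden)
    model = tf.keras.Model(inputs, outputs)
    # MSE reconstruction term plus the L2 penalties approximates cost function (4).
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage sketch: X is an (m, 128) matrix of salient SIFT descriptors.
# X_white, zca_mean, zca_matrix = zca_whiten(X)
# model = build_sparse_autoencoder()
# model.fit(X_white, X_white, epochs=50, batch_size=256)
```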
Step 4, for all images in the image library, feature extraction is performed with the feature extractor trained in step 3, and the extracted features are sparsified with a threshold function to obtain the final feature vectors used for retrieval.

In step 4 of the embodiment, the salient feature points of an image are input into the feature extractor and mapped to the corresponding image features, and the extracted features are then sparsified with a threshold function to obtain the final feature vector used for retrieval.
The extracted image feature Y can be expressed by formula (5),

Y = f1(W1·PI′ + b1)    (5)

where W1·PI′ + b1 is substituted as the variable x into the ReLU function f1 = max(0, x), and the salient feature point matrix PI′ used here is the filtered salient feature point matrix obtained in step 2, preprocessed with the same ZCA whitening parameters that were used when whitening the training sample set X. For the extracted image feature Y, sparsification is performed with formula (6) to obtain the sparse feature matrix Z,

Z = [Z+, Z−] = [max(0, Y − α), max(0, α − Y)]    (6)

where α denotes the threshold of the threshold functions f = max(0, x − α) and f = max(0, α − x), and Z+ = max(0, Y − α), Z− = max(0, α − Y).
To obtain the final feature vector F used for retrieval, if the number of SIFT points detected in an image is n, the sparse feature matrix Z is further processed with formula (7), where zi+ and zi− denote the i-th column vectors of Z+ and Z−, respectively.
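A sketch of step 4 under stated assumptions: an image's salient descriptors are whitened with the ZCA parameters saved during training, passed through the trained hidden layer to obtain Y = f1(W1·PI′ + b1), sparsified with the threshold function of formula (6), and pooled over the n salient points into a single vector. Mean pooling over the columns of Z+ and Z−, and the threshold value α = 0.1, are assumptions, since formula (7) and the value of α are not reproduced in this text.

```python
import numpy as np

def image_feature_vector(model, descriptors, zca_mean, zca_matrix, alpha=0.1):
    """Step 4 sketch: descriptors is the (n, 128) salient-point matrix of one image."""
    # Whiten with the same ZCA parameters that were saved when training the network.
    P_I_whitened = (descriptors - zca_mean) @ zca_matrix

    # Hidden-layer activations Y = ReLU(W1 x + b1), one row per salient point.
    W1, b1 = model.layers[1].get_weights()
    Y = np.maximum(0.0, P_I_whitened @ W1 + b1)

    # Threshold sparsification: Z+ = max(0, Y - alpha), Z- = max(0, alpha - Y).
    Z_pos = np.maximum(0.0, Y - alpha)
    Z_neg = np.maximum(0.0, alpha - Y)

    # Aggregate the n salient points into one vector F (assumed mean pooling).
    return np.concatenate([Z_pos.mean(axis=0), Z_neg.mean(axis=0)])
```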
Step 5, based on the feature vectors extracted in step 4, image retrieval is performed according to a preset similarity measure. In specific implementation, the similarity measure can be preset by those skilled in the art. The embodiment computes the similarity between the query image and the other images with the city-block distance (L1 norm) and returns relevant images ranked by similarity. In specific implementation, any image in the image library can be used as the query image to obtain the relevant images ranked by similarity; for an image outside the image library, a feature vector can be extracted in the same way and retrieval performed against the image library.
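A sketch of step 5, assuming the database feature vectors are stored row-wise in a NumPy array alongside their image identifiers: images are ranked by the city-block (L1) distance to the query vector.

```python
import numpy as np

def retrieve(query_vector, db_vectors, db_ids, top_k=20):
    """Step 5 sketch: db_vectors is (N, d); db_ids lists the N image identifiers."""
    distances = np.abs(db_vectors - query_vector).sum(axis=1)   # city-block (L1)
    order = np.argsort(distances)                               # most similar first
    return [(db_ids[i], float(distances[i])) for i in order[:top_k]]
```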
In specific implementation, the above flow can be implemented as software that runs automatically, or the corresponding system can be provided in modular form. The present invention correspondingly provides a remote sensing image retrieval system based on salient point features and sparse autoencoding, comprising the following modules:
a feature point extraction module, for extracting the feature points of each image in the image library to obtain a feature point matrix, and computing the saliency map of each image with a visual attention model;
a salient feature point extraction module, for binarizing the saliency map of each image in the image library with an adaptive threshold method, and performing a mask operation between the binarized map and the feature point matrix corresponding to the image to obtain the filtered salient feature points; implemented as follows:
when binarizing the saliency map with the adaptive threshold method, the binarization threshold T of the saliency map is determined from the saliency values of its pixels, where w and h denote the width and height of the saliency map, and I(x, y) denotes the saliency value of the saliency map at pixel (x, y);
the saliency map is binarized according to the threshold T to obtain a binarized saliency map with corresponding matrix Ibinary; letting P denote the feature point matrix of the image and PI the filtered salient feature point matrix, the salient feature point matrix is computed as PI = P ⊙ Ibinary, where ⊙ denotes element-wise multiplication;
a training module, for taking several images from the image library as training images, selecting a number of salient feature points from each training image to construct training samples and obtain a training sample set X, and training a sparse autoencoder network on the whitened training sample set X′ to obtain a feature extractor;
the sparse autoencoder network comprises an input layer, a hidden layer and an output layer, where the hidden-layer neurons use the ReLU function as the activation function and the output-layer neurons use the softplus function as the activation function; the cost function of the sparse autoencoder network is defined as the sum of a mean-square-error term and a regularization term, where HW,b denotes the network output for the training sample set X′, W = [W1, W2] and b = [b1, b2] denote the weight and bias matrices formed by the input-to-hidden weights W1 and biases b1 and the hidden-to-output weights W2 and biases b2, and λ denotes the regularization coefficient;
a query feature extraction module, for performing feature extraction on the image to be retrieved with the feature extractor trained in step 3, and sparsifying the extracted image features with a threshold function to obtain the final feature vector used for retrieval; implemented as follows:
the extracted image feature Y is expressed as
Y = f1(W1·PI′ + b1)
where the salient feature point matrix PI′ is the result of whitening the filtered salient feature point matrix PI obtained in step 2;
for the extracted image feature Y, the following sparsification yields the sparse feature matrix Z,
Z = [Z+, Z−] = [max(0, Y − α), max(0, α − Y)]
where α denotes the threshold of the threshold function, Z+ = max(0, Y − α) and Z− = max(0, α − Y);
if the number of SIFT points detected in an image is n, the sparse feature matrix Z is further aggregated to obtain the feature vector F, where zi+ and zi− denote the i-th column vectors of Z+ and Z−, respectively.
a retrieval module, for performing image retrieval according to a preset similarity measure based on the feature vector extracted by the query feature extraction module.
The specific embodiments described herein merely illustrate the present invention. Those skilled in the art can make various modifications or supplements to the described embodiments or substitute them in similar ways without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (6)

1. A remote sensing image retrieval method based on salient point features and sparse autoencoding, characterized by comprising the following steps:
step 1, extracting the feature points of each image in an image library to obtain a feature point matrix, and computing the saliency map of each image with a visual attention model;
step 2, for the saliency map of each image in the image library, binarizing the saliency map with an adaptive threshold method, and performing a mask operation between the binarized map and the feature point matrix corresponding to the image to obtain filtered salient feature points; implemented as follows:
when binarizing the saliency map with the adaptive threshold method, the binarization threshold T of the saliency map is determined from the saliency values of its pixels, wherein w and h denote the width and height of the saliency map, and I(x, y) denotes the saliency value of the saliency map at pixel (x, y);
the saliency map is binarized according to the threshold T to obtain a binarized saliency map with corresponding matrix Ibinary; letting P denote the feature point matrix of the image and PI the filtered salient feature point matrix, the salient feature point matrix is computed as PI = P ⊙ Ibinary, where ⊙ denotes element-wise multiplication;
step 3, taking several images from the image library as training images, selecting a number of salient feature points from each training image to construct training samples and obtain a training sample set X, and training a sparse autoencoder network on the whitened training sample set X′ to obtain a feature extractor;
the sparse autoencoder network comprises an input layer, a hidden layer and an output layer, wherein the hidden-layer neurons use the ReLU function as the activation function and the output-layer neurons use the softplus function as the activation function; the cost function of the sparse autoencoder network is defined as the sum of a mean-square-error term and a regularization term, wherein HW,b denotes the network output for the training sample set X′, W = [W1, W2] and b = [b1, b2] denote the weight and bias matrices formed by the input-to-hidden weights W1 and biases b1 and the hidden-to-output weights W2 and biases b2, and λ denotes the regularization coefficient;
step 4, for all images in the image library, performing feature extraction with the feature extractor trained in step 3, and sparsifying the extracted image features with a threshold function to obtain the final feature vectors used for retrieval; implemented as follows:
the extracted image feature Y is expressed as
Y = f1(W1·PI′ + b1)
wherein f1(·) is the ReLU function, and the salient feature point matrix PI′ is the result of whitening the filtered salient feature point matrix PI obtained in step 2;
the extracted image feature Y is sparsified as follows to obtain a sparse feature matrix Z,
Z = [Z+, Z−] = [max(0, Y − α), max(0, α − Y)]
wherein α denotes the threshold of the threshold function, Z+ = max(0, Y − α) and Z− = max(0, α − Y);
if the number of SIFT points detected in an image is n, the sparse feature matrix Z is further aggregated to obtain the feature vector F, wherein zi+ and zi− denote the i-th column vectors of Z+ and Z−, respectively;
step 5, performing image retrieval according to a preset similarity measure based on the feature vectors extracted in step 4.
2. The remote sensing image retrieval method based on salient point features and sparse autoencoding according to claim 1, characterized in that: in step 1, the feature points of each image in the image library are extracted with the SIFT operator to obtain the feature point matrix.
3. The remote sensing image retrieval method based on salient point features and sparse autoencoding according to claim 1 or 2, characterized in that: in step 5, the preset similarity measure is the city-block distance.
4. A remote sensing image retrieval system based on salient point features and sparse autoencoding, characterized by comprising the following modules:
a feature point extraction module, for extracting the feature points of each image in an image library to obtain a feature point matrix, and computing the saliency map of each image with a visual attention model;
a salient feature point extraction module, for binarizing the saliency map of each image in the image library with an adaptive threshold method, and performing a mask operation between the binarized map and the feature point matrix corresponding to the image to obtain filtered salient feature points; implemented as follows:
when binarizing the saliency map with the adaptive threshold method, the binarization threshold T of the saliency map is determined from the saliency values of its pixels, wherein w and h denote the width and height of the saliency map, and I(x, y) denotes the saliency value of the saliency map at pixel (x, y);
the saliency map is binarized according to the threshold T to obtain a binarized saliency map with corresponding matrix Ibinary; letting P denote the feature point matrix of the image and PI the filtered salient feature point matrix, the salient feature point matrix is computed as PI = P ⊙ Ibinary, where ⊙ denotes element-wise multiplication;
a training module, for taking several images from the image library as training images, selecting a number of salient feature points from each training image to construct training samples and obtain a training sample set X, and training a sparse autoencoder network on the whitened training sample set X′ to obtain a feature extractor;
the sparse autoencoder network comprises an input layer, a hidden layer and an output layer, wherein the hidden-layer neurons use the ReLU function as the activation function and the output-layer neurons use the softplus function as the activation function; the cost function of the sparse autoencoder network is defined as the sum of a mean-square-error term and a regularization term, wherein HW,b denotes the network output for the training sample set X′, W = [W1, W2] and b = [b1, b2] denote the weight and bias matrices formed by the input-to-hidden weights W1 and biases b1 and the hidden-to-output weights W2 and biases b2, and λ denotes the regularization coefficient;
a feature extraction module, for performing feature extraction on all images in the image library with the feature extractor obtained by the training module, and sparsifying the extracted image features with a threshold function to obtain the final feature vectors used for retrieval; implemented as follows:
the extracted image feature Y is expressed as
Y = f1(W1·PI′ + b1)
wherein f1(·) is the ReLU function, and the salient feature point matrix PI′ is the result of whitening the filtered salient feature point matrix PI;
the extracted image feature Y is sparsified as follows to obtain a sparse feature matrix Z,
Z = [Z+, Z−] = [max(0, Y − α), max(0, α − Y)]
wherein α denotes the threshold of the threshold function, Z+ = max(0, Y − α) and Z− = max(0, α − Y);
if the number of SIFT points detected in an image is n, the sparse feature matrix Z is further aggregated to obtain the feature vector F, wherein zi+ and zi− denote the i-th column vectors of Z+ and Z−, respectively;
a retrieval module, for performing image retrieval according to a preset similarity measure based on the feature vectors extracted by the feature extraction module.
5. The remote sensing image retrieval system based on salient point features and sparse autoencoding according to claim 4, characterized in that: in the feature point extraction module, the feature points of each image in the image library are extracted with the SIFT operator to obtain the feature point matrix.
6. The remote sensing image retrieval system based on salient point features and sparse autoencoding according to claim 4 or 5, characterized in that: in the retrieval module, the preset similarity measure is the city-block distance.
CN201510708598.4A 2015-10-27 2015-10-27 Remote sensing image retrieval method and system based on salient point features and sparse autoencoding Active CN105243154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510708598.4A CN105243154B (en) 2015-10-27 2015-10-27 Remote sensing image retrieval method and system based on salient point features and sparse autoencoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510708598.4A CN105243154B (en) 2015-10-27 2015-10-27 Remote sensing image retrieval method and system based on salient point features and sparse autoencoding

Publications (2)

Publication Number Publication Date
CN105243154A CN105243154A (en) 2016-01-13
CN105243154B true CN105243154B (en) 2018-08-21

Family

ID=55040802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510708598.4A Active CN105243154B (en) 2015-10-27 2015-10-27 Remote sensing image retrieval method and system based on salient point features and sparse autoencoding

Country Status (1)

Country Link
CN (1) CN105243154B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718531B (en) * 2016-01-14 2019-12-17 广州市万联信息科技有限公司 Image database establishing method and image identification method
CN106228130B (en) * 2016-07-19 2019-09-10 武汉大学 Remote sensing image cloud detection method of optic based on fuzzy autoencoder network
CN106295613A (en) * 2016-08-23 2017-01-04 哈尔滨理工大学 A kind of unmanned plane target localization method and system
CN106909924B (en) * 2017-02-18 2020-08-28 北京工业大学 Remote sensing image rapid retrieval method based on depth significance
CN107122809B (en) * 2017-04-24 2020-04-28 北京工业大学 Neural network feature learning method based on image self-coding
CN107515895B (en) * 2017-07-14 2020-06-05 中国科学院计算技术研究所 Visual target retrieval method and system based on target detection
CN108830172A (en) * 2018-05-24 2018-11-16 天津大学 Aircraft remote sensing images detection method based on depth residual error network and SV coding
CN109259733A (en) * 2018-10-25 2019-01-25 深圳和而泰智能控制股份有限公司 Apnea detection method, apparatus and detection device in a kind of sleep
CN111144483B (en) * 2019-12-26 2023-10-17 歌尔股份有限公司 Image feature point filtering method and terminal
CN112731410B (en) * 2020-12-25 2021-11-05 上海大学 Underwater target sonar detection method based on CNN

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073748A (en) * 2011-03-08 2011-05-25 武汉大学 Visual keyword based remote sensing image semantic searching method
CN102867196A (en) * 2012-09-13 2013-01-09 武汉大学 Method for detecting complex sea-surface remote sensing image ships based on Gist characteristic study
CN103309982A (en) * 2013-06-17 2013-09-18 武汉大学 Remote sensing image retrieval method based on vision saliency point characteristics
CN104462494A (en) * 2014-12-22 2015-03-25 武汉大学 Remote sensing image retrieval method and system based on non-supervision characteristic learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Remote sensing image retrieval using a visual attention model and local features; 周维勋 et al.; Geomatics and Information Science of Wuhan University; 2015-01-05; Vol. 40, No. 1; pp. 46-52 *
Remote sensing image retrieval method based on visually salient point features; 王星 et al.; Science of Surveying and Mapping; 2014-04-20; Vol. 39, No. 4; pp. 34-38 *

Also Published As

Publication number Publication date
CN105243154A (en) 2016-01-13

Similar Documents

Publication Publication Date Title
CN105243154B (en) Remote sensing image retrieval method and system based on salient point features and sparse autoencoding
CN108804530B (en) Subtitling areas of an image
CN110598029B (en) Fine-grained image classification method based on attention transfer mechanism
Yuan et al. Exploring a fine-grained multiscale method for cross-modal remote sensing image retrieval
Yi et al. ASSD: Attentive single shot multibox detector
CN112750140B (en) Information mining-based disguised target image segmentation method
CN107818314B (en) Face image processing method, device and server
CN110222140A (en) A kind of cross-module state search method based on confrontation study and asymmetric Hash
CN110309856A (en) Image classification method, the training method of neural network and device
CN109344821A (en) Small target detecting method based on Fusion Features and deep learning
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN106909924A (en) A kind of remote sensing image method for quickly retrieving based on depth conspicuousness
CN109559300A (en) Image processing method, electronic equipment and computer readable storage medium
CN107256246A (en) PRINTED FABRIC image search method based on convolutional neural networks
CN106227851A (en) Based on the image search method searched for by depth of seam division that degree of depth convolutional neural networks is end-to-end
CN109508675B (en) Pedestrian detection method for complex scene
CN110929080B (en) Optical remote sensing image retrieval method based on attention and generation countermeasure network
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
CN105868706A (en) Method for identifying 3D model based on sparse coding
CN114758362B (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding
CN107958067A (en) It is a kind of based on without mark Automatic Feature Extraction extensive electric business picture retrieval system
CN111666919A (en) Object identification method and device, computer equipment and storage medium
CN112597324A (en) Image hash index construction method, system and equipment based on correlation filtering
CN114842208A (en) Power grid harmful bird species target detection method based on deep learning
Termritthikun et al. NU-LiteNet: Mobile landmark recognition using convolutional neural networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant