CN109460773A - Cross-domain sparse image matching method based on a deep convolutional network - Google Patents

Cross-domain sparse image matching method based on a deep convolutional network

Info

Publication number
CN109460773A
CN109460773A
Authority
CN
China
Prior art keywords
matching
image
tensor
nearest neighbor
sub-tensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810846985.8A
Other languages
Chinese (zh)
Inventor
科菲尔·阿博曼
陈宝权
史明镒
达尼·李其思
达尼·科恩尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING FILM ACADEMY
Original Assignee
BEIJING FILM ACADEMY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING FILM ACADEMY filed Critical BEIJING FILM ACADEMY
Priority to CN201810846985.8A priority Critical patent/CN109460773A/en
Publication of CN109460773A publication Critical patent/CN109460773A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cross-domain sparse image matching method based on a deep convolutional network. Using the tensor information of the convolutional, pooling, or activation layers at different levels of a deep convolutional network, the method performs progressively refined feature-point matching from the bottom level upward in an inverted-pyramid fashion, so it is suitable for images whose appearance features differ greatly, and it is robust. In addition, the nearest-neighbor matching pairs are screened by their tensor responses, rejecting pairs with weak semantic information, so that the retained nearest-neighbor matching pairs are semantically relevant.

Description

Cross-domain sparse image matching method based on a deep convolutional network
Technical field
The present invention relates to the technical field of computer vision, and in particular to a cross-domain sparse image matching method based on a deep convolutional network.
Background art
Image matching is not only a key problem in computer vision but also the basis of many graphics applications such as image deformation (morphing).
In traditional algorithms, the process of finding image matching points is divided into two steps: the feature points in each image are first extracted, and the feature points are then compared and matched. Many effective methods have been proposed at home and abroad, which fall broadly into three categories: corner feature detection, blob feature detection, and region feature detection. The SIFT (Scale-Invariant Feature Transform) algorithm proposed by Lowe et al. in 1999 uses points of interest based on local image appearance that are independent of the image's scale and rotation, and it is invariant to translation, rotation, scaling, and partial changes of viewpoint (Lowe, David G. "Object recognition from local scale-invariant features." Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on. Vol. 2. IEEE, 1999.). The SURF (Speeded Up Robust Features) algorithm proposed by Herbert Bay in 2006 is an improvement of the SIFT algorithm that greatly reduces its time complexity (Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. "SURF: Speeded Up Robust Features." European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2006.).
However, because they rely only on features of the image itself, traditional algorithms cannot obtain good matching results for images whose appearance features differ greatly. Furthermore, since semantic information is lacking, the resulting matching points have no semantic relevance and cannot guide more meaningful follow-up work. Christopher B. Choy et al. proposed a deep-network-based learning method to learn the matching between images; however, their supervised method is only applicable to data from the corresponding data sets, and its robustness is poor (Chandraker M, Choy C B, Savarese S. UNIVERSAL CORRESPONDENCE NETWORK, US20170124711 [P]. 2017.).
Summary of the invention
In view of this, the present invention provides a cross-domain sparse image matching method based on a deep convolutional network that is suitable for images whose appearance features differ greatly and that is robust.
The cross-domain sparse image matching method based on a deep convolutional network of the invention comprises the following steps:
Step 1: construct a deep convolutional network and train it to obtain a trained deep convolutional network capable of extracting image features;
Step 2: input the two images A and B to be matched into the trained deep convolutional network of step 1 respectively, and extract the output of a convolutional, pooling, or activation layer at each level of the deep convolutional network; each level contributes the output of one of its layers, and the levels may all use the same kind of layer (for example, the activation layer) or use different kinds of layers;
Step 3: for the output extracted at each layer, set the search sub-regions of that layer; starting from the bottom layer and moving upward, perform nearest-neighbor matching within the search sub-regions of each layer; the nearest-neighbor matching process is as follows:
For the n-th search sub-region of layer l: for each sub-tensor of image A inside the n-th search sub-region of layer l, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in layer l of image B, where the matching area is smaller than the search sub-region; likewise, for each sub-tensor of image B inside the n-th search sub-region of layer l, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in layer l of image A; if two sub-tensors in the n-th search sub-region of layer l of A and B are each the other's nearest, these two sub-tensors are called a nearest-neighbor matching pair;
The n-th search sub-region of layer l is the mapping into layer l of the n-th matching pair of the next layer down (layer l+1); the search sub-region of the bottom layer l = L (where L is the total number of extracted layers) is the whole region of the bottom layer;
Proceeding in this way layer by layer, the nearest-neighbor matching pairs of the top layer l = 1 are obtained;
Step 4: match images A and B using the nearest-neighbor matching pairs of the top layer l = 1 (an illustrative code sketch of this coarse-to-fine procedure follows).
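The coarse-to-fine flow of steps 2 to 4 can be pictured with the following illustrative Python sketch. It is not the claimed method itself: the helper names extract_levels, mutual_matches and expand_to_upper_level are hypothetical (plausible sketches of each are given in the detailed description below), and, for brevity, the search sub-regions are grown only from the image-A position of each match, whereas the method described above expands a sub-region pair from both images.

```python
# Illustrative sketch of the coarse-to-fine loop of steps 2-4; the helpers
# extract_levels, mutual_matches and expand_to_upper_level are hypothetical
# names whose plausible sketches appear in the detailed description below.
def sparse_match(img_A, img_B, model, radius=2):
    feats_A = extract_levels(img_A, model)   # index 0 = top level (l = 1), last = bottom level (l = L)
    feats_B = extract_levels(img_B, model)
    L = len(feats_A)

    # Bottom level: the single search sub-region is the whole spatial plane.
    H, W = feats_A[L - 1].shape[1:]
    regions = [[(y, x) for y in range(H) for x in range(W)]]

    pairs = []
    for l in range(L - 1, -1, -1):           # from the bottom level up to the top
        pairs = []
        for region in regions:
            pairs += mutual_matches(feats_A[l], feats_B[l], region, radius)
        if l > 0:                            # map each match into the next level up (l - 1)
            upper_shape = feats_A[l - 1].shape[1:]
            regions = [expand_to_upper_level(p_A, upper_shape) for p_A, _ in pairs]
    return pairs                             # top-level matching pairs used in step 4
```

In a fuller version of this sketch, the response screening and style conversion described below would sit inside mutual_matches.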
Further, in step 1, the deep convolutional network is a network capable of extracting image features, such as an image classification network.
Further, in step 1, the network is trained with a public data set such as ImageNet, or an existing publicly available pre-trained deep convolutional network is used directly.
Further, in step 2, only the outputs of the convolutional, pooling, or activation layers of the first 4 or first 5 levels are extracted for the subsequent steps.
Further, in step 3, the size of the matching area is determined according to the detection range and the network structure.
Further, in step 3, for the n-th search sub-region l_n of layer l, the nearest-neighbor matching process is as follows:
For each sub-tensor of image A in the search sub-region l_n, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in image B; then check whether the sub-tensor of image A nearest to that found sub-tensor, searched within the matching area centered on the sub-tensor of image A at that same position, is the original sub-tensor. If so, the two sub-tensors are considered a nearest-neighbor matching pair.
Further, in step 3, for each layer, the tensor response of each nearest-neighbor matching pair of that layer is calculated, and the pairs whose response is greater than or equal to a set threshold are taken as the final nearest-neighbor matching pairs of that layer; the subsequent steps are performed with these final pairs.
Further, in step 3, if the styles of the source image and the target image differ greatly, the corresponding search sub-regions of images A and B are converted to a unified common style before the distance calculation, and the distances are then computed to obtain the nearest-neighbor matching pairs.
Further, in step 4, before matching images A and B, the nearest-neighbor matching pairs of the top layer l = 1 obtained in step 3 are first clustered with an unsupervised clustering method, and images A and B are then matched using the cluster centers.
Further, the unsupervised clustering method is the K-Means, DBSCAN, or Mean-Shift clustering method.
Beneficial effects:
Using the tensor information of the convolutional, pooling, or activation layers at different levels of a deep convolutional network, the present invention performs progressively refined feature-point matching from the bottom level upward in an inverted-pyramid fashion; it is therefore suitable for images whose appearance features differ greatly, and it is robust.
Before nearest-neighbor matching is performed in a search sub-region, the search sub-regions are brought to a unified style, which removes the influence of stylistic appearance differences such as color.
The nearest-neighbor matching pairs are screened by their tensor responses, rejecting pairs with weak semantic information, so that the retained nearest-neighbor matching pairs are semantically relevant.
The nearest-neighbor matching pairs of the top activation layer are clustered with an unsupervised clustering method and the images are matched using the cluster centers, which improves matching efficiency; moreover, the number of matching points can be changed flexibly according to the application's needs.
Brief description of the drawings
Fig. 1 is an exemplary diagram of the progressively refined matching of the present invention, proceeding upward layer by layer from the bottom activation layer.
Fig. 2 is an exemplary diagram of expanding a layer-l match into the corresponding region of layer l-1 in the present invention.
Fig. 3 is an exemplary diagram of the style conversion in the present invention.
Fig. 4 shows the result of one example of the present invention.
Specific embodiment
The present invention will now be described in detail with reference to the accompanying drawings and examples.
The present invention provides a cross-domain sparse image matching method based on a deep convolutional network for matching two images A and B. The outputs of the levels of a deep convolutional network are used to realize progressively refined matching from the bottom up, as shown in Fig. 1, so the method is applicable to images whose appearance features differ greatly, and it is robust. The method comprises the following steps:
Step 1: construct a deep convolutional network and train it to obtain a trained deep convolutional network capable of extracting image features.
A deep convolutional network that inherently has an image feature extraction function, such as an image classification network, can be selected in order to reduce the training burden and the network complexity. For training, a large data set such as ImageNet can be used to pre-train the constructed network; alternatively, an existing publicly available pre-trained image classification network such as VGG19 can be used directly.
Step 2: take the two images A and B to be matched as the input of the network, feed them respectively into the trained deep convolutional network of step 1, and record the output of a convolutional, pooling, or activation layer of each level of the deep convolutional network, one output per level for l = 1, ..., L, where L is the total number of recorded levels. Each level contributes the output of one of its layers; the levels may all use the same kind of layer (for example, the activation layer) or use different kinds of layers. The following description uses the activation layer of each level as an example.
Images A and B are each rescaled to the same size using the image processing library of the programming language, which guarantees that the outputs of each layer of the network have the same shape for both images.
To improve computational efficiency, only the outputs of the first N levels may be extracted; for example, extracting the activation outputs of the first 4 or 5 levels keeps the amount of computation small while still guaranteeing matching accuracy.
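As a purely illustrative sketch of step 2 (not a required implementation), the following Python code records the activations of the first five levels of a pre-trained VGG19 via torchvision. The chosen layer indices (the last ReLU of each convolution block), the 224 × 224 input size, and the normalization constants are assumptions of this sketch.

```python
# Sketch of step 2: record one activation output per level of a pretrained VGG19
# for both images. The slice indices below (relu1_2, relu2_2, relu3_4, relu4_4,
# relu5_4 in torchvision's vgg19().features) are an assumption of this sketch.
import torch
import torchvision.transforms as T
from torchvision.models import vgg19
from PIL import Image

LEVEL_END = [3, 8, 17, 26, 35]   # last ReLU of each of the five conv blocks

def load_as_tensor(path, size=224):
    img = Image.open(path).convert("RGB")
    tfm = T.Compose([T.Resize((size, size)), T.ToTensor(),
                     T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
    return tfm(img).unsqueeze(0)             # shape (1, 3, size, size)

@torch.no_grad()
def extract_levels(x, model):
    feats, h = [], x
    for i, layer in enumerate(model.features):
        h = layer(h)
        if i in LEVEL_END:
            feats.append(h.squeeze(0))       # (channels, height, width) tensor of this level
    return feats                             # feats[0] = shallowest (top) level, feats[-1] = deepest (bottom)

model = vgg19(weights="IMAGENET1K_V1").eval()   # torchvision >= 0.13; use pretrained=True on older versions
feats_A = extract_levels(load_as_tensor("A.jpg"), model)
feats_B = extract_levels(load_as_tensor("B.jpg"), model)
```

With a 224 × 224 input, these five levels have spatial sizes 224, 112, 56, 28 and 14, so the deepest (bottom) level is the 14 × 14 × 512 tensor mentioned in the example further below.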
Step 3: set the search sub-regions of each activation layer and, starting from the bottom activation layer and moving upward, perform nearest-neighbor matching within each layer to obtain the nearest-neighbor matching pairs of each layer.
The output of each activation layer is a tensor of shape (height × width × channels). Starting from the bottom activation layer, the search sub-regions of each activation layer are set in its (height × width) plane; for the bottom activation layer l = L, the search sub-region is the whole (height × width) plane of that layer.
Nearest-neighbor matching is first performed for images A and B in the bottom activation layer l = L: a) construct the matching area, where the matching area is smaller than the search sub-region and its size is chosen flexibly according to the detection range and the network structure; the matching areas of different activation layers may have the same or different sizes, but the matching areas within one layer must have the same size; b) for each sub-tensor at each position of the tensor plane of image A's bottom activation layer output, find the sub-tensor with the smallest Euclidean distance to it within the matching area of image B's bottom activation layer centered on the sub-tensor at the same position; likewise, for each sub-tensor at each position of the tensor plane of image B's bottom activation layer, find the sub-tensor with the smallest Euclidean distance to it within the corresponding matching area of image A. If a pair of sub-tensors are each the other's nearest, the two sub-tensors are called a nearest-neighbor matching pair.
The above realizes nearest-neighbor matching by traversing the search sub-region. Alternatively, the following more efficient nearest-neighbor matching can be used:
For the n-th search sub-region l_n of activation layer l, the nearest-neighbor matching process is as follows:
For each sub-tensor of image A in the search sub-region l_n, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in image B; then check whether the sub-tensor of image A nearest to that found sub-tensor, searched within the matching area centered on the sub-tensor of image A at that same position, is the original sub-tensor. If so, the two sub-tensors are considered a nearest-neighbor matching pair.
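The check-based nearest-neighbor search just described is sketched below for one search sub-region of one level. The names feat_A, feat_B (the (channels, height, width) outputs of the level), region (the list of positions making up the search sub-region) and radius (half the side of the matching area) are introduced here purely for illustration.

```python
# Sketch of the check-based mutual nearest-neighbour search within one search
# sub-region; distances are Euclidean distances between sub-tensors.
import torch

def nearest_in_area(query, feat, cy, cx, radius):
    """Position in `feat` whose sub-tensor is closest to `query`, searched only
    inside the matching area centred on (cy, cx)."""
    C, H, W = feat.shape
    y0, y1 = max(0, cy - radius), min(H, cy + radius + 1)
    x0, x1 = max(0, cx - radius), min(W, cx + radius + 1)
    patch = feat[:, y0:y1, x0:x1].reshape(C, -1)        # all sub-tensors in the matching area
    d = torch.norm(patch - query[:, None], dim=0)       # Euclidean distance to the query
    idx = int(torch.argmin(d))
    return y0 + idx // (x1 - x0), x0 + idx % (x1 - x0)

def mutual_matches(feat_A, feat_B, region, radius):
    pairs = []
    for (y, x) in region:
        qy, qx = nearest_in_area(feat_A[:, y, x], feat_B, y, x, radius)      # A -> B
        py, px = nearest_in_area(feat_B[:, qy, qx], feat_A, qy, qx, radius)  # check B -> A
        if (py, px) == (y, x):                           # each other's nearest: keep the pair
            pairs.append(((y, x), (qy, qx)))
    return pairs
```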
When computing the Euclidean distances between sub-tensors, appearance differences such as color are taken into account: if the styles of image A and image B differ greatly, it is preferable to first convert the corresponding search sub-regions of images A and B to a common style, and only then compute the Euclidean distances to obtain the nearest-neighbor matching pairs.
The style conversion normalizes the sub-tensor values within the region P_l of each search sub-region: μ_A, μ_B, σ_A and σ_B denote the feature statistics of the designated regions of images A and B, and μ_m and σ_m denote the common statistics of the corresponding regions, computed from them. The distance between two sub-tensors is then computed on the style-converted search sub-regions.
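The formulas for the style conversion and for the distance are given as figures in the original filing; the sketch below therefore shows only one plausible reading, in which the common statistics μ_m and σ_m are assumed to be the averages of the per-region statistics and the distance is assumed to be the Euclidean distance between the style-converted sub-tensors, in line with the mean-and-variance normalization attributed to Johnson (2016) in the example further below.

```python
# Hedged sketch of the style conversion: mu_m and sigma_m are ASSUMED to be the
# averages of the per-region statistics, and the distance is ASSUMED to be the
# Euclidean distance between the style-converted sub-tensors.
import torch

def to_common_style(region_A, region_B, eps=1e-5):
    """region_A, region_B: (channels, height, width) crops of the corresponding search sub-regions."""
    mu_A, sigma_A = region_A.mean(dim=(1, 2), keepdim=True), region_A.std(dim=(1, 2), keepdim=True)
    mu_B, sigma_B = region_B.mean(dim=(1, 2), keepdim=True), region_B.std(dim=(1, 2), keepdim=True)
    mu_m, sigma_m = (mu_A + mu_B) / 2, (sigma_A + sigma_B) / 2    # assumed common statistics
    conv_A = (region_A - mu_A) / (sigma_A + eps) * sigma_m + mu_m
    conv_B = (region_B - mu_B) / (sigma_B + eps) * sigma_m + mu_m
    return conv_A, conv_B

def sub_tensor_distance(conv_A, conv_B, p, q):
    """Distance between the style-converted sub-tensors at position p of A and q of B."""
    return torch.norm(conv_A[:, p[0], p[1]] - conv_B[:, q[0], q[1]]).item()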
In addition, the response values of the tensors output by the activation layer can be used to screen out the nearest-neighbor matching pairs whose tensor response is lower than a set threshold, so that the nearest-neighbor matching pairs retained for the activation layer are rich in semantic information.
The response of each sub-tensor is computed from the layer outputs at its position p and at the other positions i of the layer, where "|| ||" denotes the norm; a large response indicates that the position carries rich semantic information.
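The response formula itself is likewise given as a figure in the original filing. In the sketch below, the response of a position p is assumed to be the norm of its sub-tensor normalized by the largest such norm over the positions i of the layer, and the threshold value is an arbitrary placeholder.

```python
# Hedged sketch of the response-based screening; the response definition and the
# threshold value are assumptions of this sketch.
import torch

def responses(feat):
    """feat: (channels, height, width) level output -> (height, width) map of normalised sub-tensor norms."""
    norms = torch.norm(feat, dim=0)
    return norms / (norms.max() + 1e-8)

def screen_by_response(pairs, feat_A, feat_B, threshold=0.5):
    r_A, r_B = responses(feat_A), responses(feat_B)
    return [(pa, pb) for (pa, pb) in pairs
            if r_A[pa] >= threshold and r_B[pb] >= threshold]
```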
After all the nearest-neighbor matching pairs of the bottom activation layer have been obtained, each matching pair is mapped, according to the network structure, into the activation layer one level up; the mapped region of a nearest-neighbor matching pair in the layer-(L-1) activation layer becomes a search sub-region of that layer. The same method is then used to perform nearest-neighbor matching within each search sub-region of the layer-(L-1) activation layer, yielding the nearest-neighbor matching pairs of that layer. Proceeding in this way, the nearest-neighbor matching pairs of the top activation layer l = 1 are finally obtained.
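How a match maps into the activation layer one level up depends on the concrete network structure. The sketch below assumes adjacent levels are related by a stride-2 pooling, as in VGG-style networks, so a matched position (y, x) of layer l expands to a small window around (2y, 2x) of layer l-1; the window size is an arbitrary placeholder.

```python
# Hedged sketch of mapping a layer-l match into a search sub-region of layer l-1,
# assuming a stride-2 relation between adjacent levels.
def expand_to_upper_level(pos, upper_shape, window=4):
    y, x = pos
    H, W = upper_shape                        # spatial size of layer l-1
    cy, cx = 2 * y, 2 * x                     # assumed stride-2 relation between levels
    y0, y1 = max(0, cy - window // 2), min(H, cy + window // 2 + 1)
    x0, x1 = max(0, cx - window // 2), min(W, cx + window // 2 + 1)
    return [(yy, xx) for yy in range(y0, y1) for xx in range(x0, x1)]
```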
Step 4: match images A and B using the nearest-neighbor matching pairs of the top activation layer.
If the number of nearest-neighbor matching pairs of the top activation layer is large, an unsupervised clustering method can be used to cluster them: all the matching pairs in image A and image B are divided into 5, 10, 15, 20, or some other number of classes, and the cluster centers of each class are obtained. Images A and B are then matched using the cluster-center pairs. The clustering can be performed with the K-Means, DBSCAN, or Mean-Shift clustering method, among others.
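The optional clustering of step 4 can be sketched with scikit-learn as follows. Concatenating the (x, y) coordinates of both images of each pair as the clustering feature and the default of 10 clusters are assumptions of this sketch.

```python
# Sketch of step 4's optional clustering: cluster the top-level matches on their
# coordinates with K-Means and keep, for each cluster, the pair closest to its centre.
import numpy as np
from sklearn.cluster import KMeans

def cluster_matches(pairs, n_clusters=10):
    """pairs: list of ((yA, xA), (yB, xB)) matches from the top level."""
    coords = np.array([[pa[1], pa[0], pb[1], pb[0]] for pa, pb in pairs], dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(coords)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        centre = km.cluster_centers_[c]
        best = members[np.argmin(np.linalg.norm(coords[members] - centre, axis=1))]
        reps.append(pairs[best])
    return reps
```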
A specific example is described in detail below.
Step 1: Images A and B are each rescaled to 224 × 224 using the image processing library of the programming language and then fed separately into a trained VGG19 image classification network, and the activation outputs of the first 5 levels of the network are extracted.
Step 2: The matching search starts from the fifth level. The output of the fifth level is a 14 × 14 × 512 tensor, and its first and second dimensions form the search plane; the fifth level has only one search sub-region, namely its whole search plane. For each sub-tensor of image A, the sub-tensor with the smallest Euclidean distance is found within the matching area of image B centered on the sub-tensor at the same position; likewise, for each sub-tensor of image B, the sub-tensor with the smallest Euclidean distance is found within the matching area of image A centered on the sub-tensor at the same position. If a pair of sub-tensors are each the other's closest, they form a nearest-neighbor matching pair.
During the search, because appearance features such as the colors of the original images differ greatly, the search sub-regions are style-converted before the distances are computed.
The style conversion of the search sub-regions follows the approach proposed in the 2016 article of Johnson, performing a style conversion that uses the mean and variance of deep features as the normalization parameters. The objects of the style conversion are the corresponding search sub-regions of images A and B, which are converted to a common style.
Because the target of the style conversion is a specific search sub-region, the distance between two sub-tensors at positions p and q is likewise defined with respect to that search sub-region.
An example of the style conversion is shown in Fig. 3. The search is carried out on the style-converted search sub-regions, yielding the nearest-neighbor matching pairs.
Then, by constructing a response function, the responsiveness of the feature vector at each position is obtained, which depicts how rich the semantic information at that position is.
After the responses of all nearest-neighbor matching pairs have been obtained, the pairs whose response is lower than a set threshold are screened out, since these points carry no strong semantic information. The nearest-neighbor matching pairs located at the edge of the tensor are also screened out, because such points have difficulty finding corresponding search regions in the upper activation layers.
Step 3: As shown in Fig. 2, after the positions of the nearest-neighbor matching pairs of a lower layer are obtained, the network structure is used to expand the lower-layer matches into the spatial coordinates of the layer above, yielding pairs of search sub-regions in the upper layer. Step 2 is repeated within each pair of search sub-regions, generating the nearest-neighbor matching pairs of that layer.
Step 3 is repeated, continuing to extend upward layer by layer, until the nearest-neighbor matching pairs of the top (original-image-resolution) layer are obtained.
Step 4: Typically more than a hundred nearest-neighbor matching pairs are found at the top layer. To let the nearest-neighbor pairs express as many different semantic meanings as possible, the K-Means clustering algorithm is applied, using the x and y coordinates of the nearest-neighbor matching pairs as the point features; the pairs are clustered into 5, 10, 30, or some other number of classes, and for each class the point closest to the cluster center is taken as that class's representative and kept in the list of nearest-neighbor matching pairs. This yields the experimental result shown in Fig. 4.
This completes the process of finding sparse matching points in the two images.
In conclusion, the above are merely preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A cross-domain sparse image matching method based on a deep convolutional network, characterized by comprising the following steps:
Step 1: construct a deep convolutional network and train it to obtain a trained deep convolutional network capable of extracting image features;
Step 2: input the two images A and B to be matched into the trained deep convolutional network of step 1 respectively, and extract the output of a convolutional, pooling, or activation layer at each level of the deep convolutional network;
Step 3: for the output extracted at each layer, set the search sub-regions of that layer; starting from the bottom layer and moving upward, perform nearest-neighbor matching within the search sub-regions of each layer; wherein the nearest-neighbor matching process is as follows:
for the n-th search sub-region of layer l, for each sub-tensor of image A inside the n-th search sub-region of layer l, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in layer l of image B, wherein the matching area is smaller than the search sub-region; likewise, for each sub-tensor of image B inside the n-th search sub-region of layer l, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in layer l of image A; if two sub-tensors in the n-th search sub-region of layer l of A and B are each the other's nearest, the two sub-tensors are called a nearest-neighbor matching pair;
wherein the n-th search sub-region of layer l is the mapping into layer l of the n-th matching pair of layer l+1, and the search sub-region of the bottom layer is the whole region of the bottom layer;
proceeding in this way, the nearest-neighbor matching pairs of the top layer are obtained;
Step 4: match images A and B using the nearest-neighbor matching pairs of the top layer.
2. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 1, the deep convolutional network is an image classification network.
3. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1 or 2, characterized in that in step 1, the network is trained with a public data set, or an existing publicly available pre-trained deep convolutional network is used directly.
4. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 2, only the outputs of the convolutional, pooling, or activation layers of the first 4 or first 5 levels are extracted for the subsequent steps.
5. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 3, the size of the matching area is determined according to the detection range and the network structure.
6. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 3, for the n-th search sub-region l_n of layer l, the nearest-neighbor matching process is as follows:
for each sub-tensor of image A in the search sub-region l_n, find the sub-tensor nearest to it in distance within the matching area centered on the sub-tensor at the same position in image B; then check whether the sub-tensor of image A nearest to that found sub-tensor, searched within the matching area centered on the sub-tensor of image A at that same position, is the original sub-tensor; if so, the two sub-tensors are considered a nearest-neighbor matching pair.
7. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 3, for each layer, the tensor response of each nearest-neighbor matching pair of that layer is calculated, the nearest-neighbor pairs whose response is greater than or equal to a set threshold are taken as the final nearest-neighbor matching pairs of that layer, and the subsequent steps are performed with these final pairs.
8. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 3, before the distance calculation, the corresponding search sub-regions of images A and B are converted to a unified common style, and the distances are then computed to obtain the nearest-neighbor matching pairs.
9. The cross-domain sparse image matching method based on a deep convolutional network according to claim 1, characterized in that in step 4, before matching images A and B, the nearest-neighbor matching pairs of the top layer obtained in step 3 are first clustered with an unsupervised clustering method, and images A and B are then matched using the cluster centers.
10. The cross-domain sparse image matching method based on a deep convolutional network according to claim 9, characterized in that the unsupervised clustering method is the K-Means, DBSCAN, or Mean-Shift clustering method.
CN201810846985.8A 2018-07-27 2018-07-27 Cross-domain sparse image matching method based on a deep convolutional network Pending CN109460773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810846985.8A CN109460773A (en) 2018-07-27 2018-07-27 Cross-domain sparse image matching method based on a deep convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810846985.8A CN109460773A (en) 2018-07-27 2018-07-27 Cross-domain sparse image matching method based on a deep convolutional network

Publications (1)

Publication Number Publication Date
CN109460773A true CN109460773A (en) 2019-03-12

Family

ID=65606321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810846985.8A Pending CN109460773A (en) 2018-07-27 2018-07-27 Cross-domain sparse image matching method based on a deep convolutional network

Country Status (1)

Country Link
CN (1) CN109460773A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456022A (en) * 2013-09-24 2013-12-18 中国科学院自动化研究所 High-resolution remote sensing image feature matching method
US20170083792A1 (en) * 2015-09-22 2017-03-23 Xerox Corporation Similarity-based detection of prominent objects using deep cnn pooling layers as features
CN106203533A * 2016-07-26 2016-12-07 厦门大学 Deep face verification method based on joint training learning
CN106227851A * 2016-07-29 2016-12-14 汤平 End-to-end image retrieval method based on deep convolutional neural networks with hierarchical deep search

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周文罡 et al., "图像检索技术研究进展" [Research Progress in Image Retrieval Technology], 《南京信息工程大学学报》 [Journal of Nanjing University of Information Science & Technology] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909778A (en) * 2019-11-12 2020-03-24 北京航空航天大学 Image semantic feature matching method based on geometric consistency

Similar Documents

Publication Publication Date Title
CN108764065B (en) Pedestrian re-recognition feature fusion aided learning method
Lin et al. RSCM: Region selection and concurrency model for multi-class weather recognition
CN106649487B (en) Image retrieval method based on interest target
CN105869173B A stereoscopic vision saliency detection method
Peng et al. Automatic image segmentation by dynamic region merging
CN102314614B (en) Image semantics classification method based on class-shared multiple kernel learning (MKL)
CN108537121B (en) Self-adaptive remote sensing scene classification method based on meteorological environment parameter and image information fusion
CN107992850B (en) Outdoor scene three-dimensional color point cloud classification method
CN111339343A (en) Image retrieval method, device, storage medium and equipment
CN110516533B (en) Pedestrian re-identification method based on depth measurement
Li et al. Real-time crop recognition in transplanted fields with prominent weed growth: a visual-attention-based approach
CN109840518B Visual tracking method combining classification and domain adaptation
Quek et al. Structural image classification with graph neural networks
CN109461115A Automatic image registration method based on a deep convolutional network
Zhang et al. Category modeling from just a single labeling: Use depth information to guide the learning of 2d models
Karaman et al. Multi-layer local graph words for object recognition
CN109460773A (en) A kind of cross-domain image sparse matching process based on depth convolutional network
JP6486084B2 (en) Image processing method, image processing apparatus, and program
Lu et al. Closing the loop for edge detection and object proposals
JP2017117025A (en) Pattern identification method, device thereof, and program thereof
Zhan et al. Gaussian mixture model on tensor field for visual tracking
Cui et al. Weighted particle swarm clustering algorithm for self-organizing maps
Xia et al. Lazy texture selection based on active learning
CN114494819A (en) Anti-interference infrared target identification method based on dynamic Bayesian network
CN108509838B (en) Method for analyzing group dressing under joint condition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190312)