CN104298758A - Multi-perspective target retrieval method - Google Patents

Multi-perspective target retrieval method

Info

Publication number
CN104298758A
CN104298758A (Application CN201410566595.7A)
Authority
CN
China
Prior art keywords
view
width
representational
representational view
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410566595.7A
Other languages
Chinese (zh)
Inventor
刘安安
苏育挺
曹群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201410566595.7A priority Critical patent/CN104298758A/en
Publication of CN104298758A publication Critical patent/CN104298758A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Abstract

The invention discloses a multi-perspective target retrieval method. The method includes the steps of: acquiring a retrieval target input by a user and the view sets of the objects in a database; using an image feature extraction algorithm to extract features of the retrieval target and of the objects in the database; clustering the view sets after feature extraction and extracting a representative view of each class; determining a corresponding initial weight for each representative view according to the size of its class, updating the weights by means of the relations between the representative views, and generating the final weights; establishing a weighted bipartite graph from the representative views of the two view sets and their weights; and finding the optimal matching of the weighted bipartite graph with a bipartite graph matching algorithm, acquiring the similarity between the retrieval target and each object in the database, ranking by this similarity, and outputting the ranked result as the retrieval result. The method improves the accuracy of multi-perspective target retrieval.

Description

A multi-view target retrieval method
Technical field
The present invention relates to the field of image retrieval, and in particular to a multi-view target retrieval method.
Background technology
In real life, objects are spatial, and the human eye perceives them stereoscopically in three dimensions. Traditional camera technology captures only a two-dimensional planar image of an object, whereas an RGB-D (color plus depth) camera, such as the Kinect somatosensory camera, acquires not only two-dimensional information but also the corresponding depth information, compensating for this shortcoming of traditional cameras. Compared with images, three-dimensional models express richer perceptual detail and come closer to the stereoscopic realism perceived by the human eye, so they better match human cognition.
Acquiring three-dimensional models is a major problem that must be considered. If a new set of models had to be built for every task, the workload would be enormous, a great deal of time and energy would inevitably be consumed, and the work could not be completed by an ordinary person; this is clearly unrealistic. In the past, three-dimensional models were either built by hand or captured with a 3D scanner, both of which are difficult and inconvenient. This situation has improved dramatically: models can now be found and downloaded easily over the network, and the number of shareable three-dimensional models is growing explosively. It is therefore essential to make full use of existing model resources over the network [1]. The rapid development of network technology and the emergence of numerous search engine systems have made the sharing and dissemination of three-dimensional model resources far more convenient. Helping users retrieve the desired model quickly and accurately from massive databases, i.e. the study of three-dimensional model retrieval technology, has thus become an urgent problem and a research hotspot.
Multi-view target retrieval algorithms fall into two main classes: text-based retrieval and content-based retrieval [2]. Text-based retrieval is algorithmically simple, very mature, and widely applied, but it has an inherent drawback: text carries very little information and cannot accurately and effectively describe the rich information of a three-dimensional object, such as its geometric dimensions, topological structure, and material texture, so it is not well suited to three-dimensional model retrieval. By contrast, content-based retrieval features little human intervention, lifelike visual effect, and high retrieval accuracy. A machine automatically computes and extracts the internal features of the three-dimensional models, and a specific algorithm measures the similarity between the query model and the models in the database, building a feature retrieval index that realizes the desired browsing and retrieval functions.
When computing the similarity between two objects, most current multi-view target retrieval algorithms only calculate the Euclidean distances between the corresponding views of the two objects, ignoring the correlation among the views of the same object and their relative importance, so retrieval accuracy leaves much room for improvement.
Summary of the invention
The invention provides a multi-view target retrieval method, described below:
A multi-view target retrieval method, comprising the following steps:
(1) obtaining the view set of a retrieval target input by the user and the view sets of the objects in a database;
(2) using an image feature extraction algorithm to extract features from the view sets of the retrieval target and of the objects in the database;
(3) clustering the view sets after feature extraction with a clustering method, and extracting a representative view for each class;
(4) determining an initial weight for each representative view according to the size of its class, updating the weights using the relations between the representative views, and generating the final weights;
(5) building a weighted bipartite graph from the representative views of two view sets and their weights;
(6) using a bipartite graph matching algorithm to find the optimal matching of the weighted bipartite graph, obtaining the similarity between the retrieval target and each object in the database, ranking by this similarity, and outputting the ranked result as the retrieval result.
The beneficial effect of the technical scheme provided by the invention is: by clustering the acquired view sets of the three-dimensional objects, extracting representative views, assigning them weights, and combining these with optimal bipartite graph matching, the invention obtains the similarity between the retrieval target and each database object and improves the accuracy of multi-view target retrieval. The updated weights encode information such as the relations between representative views and the sizes of the clusters. The similarity obtained via bipartite graph matching captures the correlation between the representative views of the two models and performs better than merely computing Euclidean distances.
Brief description of the drawings
Fig. 1 is a flow chart of the multi-view target retrieval method.
Fig. 2 compares the precision-recall curves of the three algorithms on the ETH database.
Fig. 3 compares the NN, FT, and ST scores of the three algorithms on the ETH database.
Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
An embodiment of the present invention provides a multi-view target retrieval method; see Fig. 1. The method comprises:
101: obtaining the view set of the retrieval target input by the user and the view sets of the objects in the database;
Here, each view set of the retrieval target and of the objects in the database is a group of two-dimensional views representing the corresponding three-dimensional object. These two-dimensional views can be obtained by photographing a real three-dimensional object with a real camera, or by photographing a virtual three-dimensional object with the virtual camera of 3D modeling software (such as 3ds Max).
102: using an image feature extraction algorithm to extract features from the view sets of the retrieval target and of the objects in the database;
Any popular image visual feature extraction algorithm can be used to extract and characterize the features of the view sets. Without loss of generality, this embodiment adopts the Histogram of Oriented Gradients (HOG) operator [3], which efficiently characterizes the shape and structure of an image.
The HOG operator is computed as follows: for each view, calculate the gradients of its local regions, build gradient orientation histograms from them statistically, and combine these histograms into the HOG feature that describes the original view.
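The per-cell gradient-histogram computation above can be sketched as follows. This is a minimal illustrative version only, assuming fixed 8x8 cells, 9 unsigned-orientation bins, and no block normalization; it is not the exact Dalal-Triggs implementation of [3], and the function and parameter names are hypothetical.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of gradient
    orientation, weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r+cell, c:c+cell].ravel()
            a = ang[r:r+cell, c:c+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    n = np.linalg.norm(f)
    return f / n if n > 0 else f

view = np.zeros((32, 32)); view[:, 16:] = 1.0   # toy view: a vertical edge
f = hog_descriptor(view)
print(f.shape)   # (144,) = 4x4 cells x 9 bins
```

In a real pipeline each 2D view of each object would be passed through such a descriptor to obtain the feature vectors f_i used in the following steps.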
103: clustering the view sets after feature extraction with a clustering method, and extracting a representative view for each class;
Any popular view clustering algorithm can be applied to the view sets after feature extraction. Without loss of generality, the classical K-means clustering method [4] is adopted.
The K-means clustering method proceeds as follows: first determine the exact number K of desired clusters and select K initial views as cluster centers; then assign each remaining view to the nearest class according to its distance to each cluster center. Recompute the mean of the views in each class to form new cluster centers. Repeat this process until the clustering converges. The view set of a three-dimensional model M is denoted V = {v_1^M, v_2^M, ..., v_{n_m}^M}, where v_i^M is the i-th two-dimensional view in V, i is the view index, and n_m is the number of views. Each view is represented by its HOG feature from step 102, and the Euclidean distance between two views v_i^M and v_j^M is

d(v_i^M, v_j^M) = sqrt( (f_i − f_j)^T (f_i − f_j) )

where i, j are view indices, f_i and f_j are the HOG feature vectors of v_i^M and v_j^M respectively, and T denotes the transpose.
After K-means clustering, the view set V is partitioned into K view subsets, i.e. V = {V_1, V_2, ..., V_K}, where the views within each subset are visually similar. For each view in each class, compute the sum of its Euclidean distances to the other views in that class, and choose the view with the minimum sum as the representative view of the class, yielding the representative view set {rv_1, rv_2, ..., rv_i, ..., rv_K}, where rv_i is the i-th representative view and i is the representative view index. In practice the value of K is usually chosen subjectively, mainly with reference to the number of views in the view sets; K = 15 is selected in this experiment.
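The clustering and representative-view selection of step 103 can be sketched in plain NumPy as follows. This is an illustrative toy (K = 3 on synthetic features, with a simple deterministic initialization), not the embodiment's exact procedure with K = 15; all function names are hypothetical.

```python
import numpy as np

def kmeans(X, K, iters=50):
    """Plain K-means: choose K initial centers, assign each view to the
    nearest center, recompute class means, repeat until convergence."""
    centers = X[::max(1, len(X) // K)][:K].copy()   # evenly spaced init
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def representative_views(X, labels, K):
    """Per cluster, return the index of the view whose summed Euclidean
    distance to the other views of that cluster is minimal."""
    reps = []
    for k in range(K):
        idx = np.flatnonzero(labels == k)
        D = np.linalg.norm(X[idx][:, None] - X[idx][None], axis=2)
        reps.append(int(idx[D.sum(axis=1).argmin()]))
    return reps

# toy data: 3 well-separated blobs of "view features" (the patent uses K = 15)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(10, 4)) for c in (0.0, 5.0, 10.0)])
labels = kmeans(X, K=3)
reps = representative_views(X, labels, K=3)
print(len(reps))   # 3
```

Each entry of `reps` is the index of the view that stands in for its whole cluster in the later bipartite matching.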
104: determining an initial weight for each representative view according to the size of its class, updating the weights using the relations between the representative views, and generating the final weights;
The concrete method is as follows:
1) generate the initial weights;
According to the formula

p_{rv_i}^0 = |N(i)| / |A|

the initial weight of each representative view is obtained, where |N(i)| is the number of views in the i-th class and |A| is the number of views in model M. This yields the initial weight vector p^0 = (p_{rv_1}^0, p_{rv_2}^0, ..., p_{rv_k}^0).
2) generate the final weights;
Making the weight of each representative view depend only on the size of its class is not accurate enough; the problem is especially evident when one representative view is very close to another. Therefore, the relations between the selected representative views must be taken into account when updating the weights.
First, construct an affinity graph to describe the relations between the representative views. Each node represents a representative view, and the edge between two nodes represents the correlation r(rv_1, rv_2) between the two representative views rv_1 and rv_2.
According to the formula

r(rv_1, rv_2) = exp( −d(rv_1, rv_2) / (2σ²) )

the correlation between two representative views rv_1 and rv_2 is obtained, where d(rv_1, rv_2) is the Euclidean distance between rv_1 and rv_2, and the value of σ is usually set empirically; in this embodiment the variance over all representative views is used as the parameter.
Next, according to the formula

t(rv_1, rv_2) = r(rv_1, rv_2) / Σ_i r(rv_1, rv_i)

the transition probability from representative view rv_1 to representative view rv_2 is obtained, where r(rv_1, rv_2) is the correlation between the two representative views rv_1 and rv_2.
Finally, according to the formulas

p_{rv_1}^{n+1} = γ p_{rv_1}^0 + (1 − γ) Σ_{i≠1} t(rv_i, rv_1) p_i^n
p_{rv_2}^{n+1} = γ p_{rv_2}^0 + (1 − γ) Σ_{i≠2} t(rv_i, rv_2) p_i^n
...
p_{rv_k}^{n+1} = γ p_{rv_k}^0 + (1 − γ) Σ_{i≠k} t(rv_i, rv_k) p_i^n

the final weight of each representative view is obtained. Here p_{rv_1}^{n+1}, p_{rv_2}^{n+1}, ..., p_{rv_k}^{n+1} are the weights of the 1st, 2nd, ..., k-th representative views after the (n+1)-th iteration; p_{rv_1}^0, p_{rv_2}^0, ..., p_{rv_k}^0 are their initial weights; γ is a parameter controlling how much the initial weights matter, set to γ = 0.8 in this embodiment; t(rv_i, rv_k) is the transition probability from the i-th representative view to the k-th representative view; p_i^n is the weight of the i-th representative view after the n-th iteration; k is the number of clusters, 1 ≤ i ≤ k.
Experience shows that this process converges after a few iterations; the number of iterations is set to 5 in this embodiment. The final weight vector p^f = (p_{rv_1}^f, p_{rv_2}^f, ..., p_{rv_k}^f) is thereby obtained.
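The initial-weight formula and the iterative update of step 104 can be sketched as follows. This is a hedged sketch with two explicit assumptions: self-transitions are excluded by zeroing the diagonal of the affinity matrix (matching the i ≠ j sums above), and σ² is taken as the variance of the pairwise distances; the function name and toy inputs are hypothetical.

```python
import numpy as np

def final_weights(rep_feats, cluster_sizes, gamma=0.8, iters=5):
    """Initial weights p0_i = |N(i)|/|A| from cluster sizes, then the
    iterative update p^{n+1} = gamma*p0 + (1-gamma) * T^T p^n, where
    T holds the transition probabilities t(rv_i, rv_j)."""
    p0 = np.asarray(cluster_sizes, dtype=float)
    p0 /= p0.sum()                                   # |N(i)| / |A|
    D = np.linalg.norm(rep_feats[:, None] - rep_feats[None], axis=2)
    sigma2 = D.var() if D.var() > 0 else 1.0         # empirical sigma^2 (assumption)
    R = np.exp(-D / (2.0 * sigma2))                  # correlation r(rv_i, rv_j)
    np.fill_diagonal(R, 0.0)                         # exclude self-transitions (i != j)
    T = R / R.sum(axis=1, keepdims=True)             # each row sums to 1
    p = p0.copy()
    for _ in range(iters):                           # 5 iterations in the embodiment
        p = gamma * p0 + (1.0 - gamma) * (T.T @ p)
    return p

rv = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])  # toy representative views
p = final_weights(rv, cluster_sizes=[6, 3, 1])
print(p.shape)   # (3,)
```

Because γ = 0.8, the final weights stay close to the cluster-size prior while nearby representative views redistribute some weight to each other.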
105: building a weighted bipartite graph from the representative views of the two view sets and their corresponding weights;
Let Q = {rv_1^a, rv_2^a, ..., rv_{n_a}^a} be the representative view set of the retrieval target A, where rv_1^a, rv_2^a, ..., rv_{n_a}^a are its 1st, 2nd, ..., n_a-th representative views and n_a is the number of representative views of A; let R = {rv_1^b, rv_2^b, ..., rv_{n_b}^b} be the representative view set of an object B in the database, where rv_1^b, rv_2^b, ..., rv_{n_b}^b are its 1st, 2nd, ..., n_b-th representative views and n_b is the number of representative views of B; and let P_a and P_b be the weight sets of the retrieval target A and of the database object B respectively. A weighted bipartite graph is built in turn between the retrieval target A and every object in the database. The concrete method is as follows:
1) build a new set R';
Because the numbers of representative views in Q and R are not necessarily equal, the dimensions must first be unified. Assume n_a ≥ n_b in this embodiment, and add n_a − n_b new elements to R: for j = 1, 2, ..., n_a, if j > n_b then rv_j^b is an empty view and its weight is 0. Both view sets then contain the same number of representative views, which simplifies the subsequent calculation and comparison. This yields the new set R'.
2) calculate the edge weights g_{i,j};
Each edge g_{i,j} (i, j = 1, 2, ..., n_a) in the weighted bipartite graph represents the connection between a representative view rv_i^a of the retrieval target A and a representative view rv_j^b of an object in the database.
According to the formula

g_{i,j} = (1/2) (p_{rv_i^a}^f + p_{rv_j^b}^f) × d(rv_i^a, rv_j^b)  if j ≤ n_b,  otherwise 0,

the weight of every edge g_{i,j} is obtained, where p_{rv_i^a}^f and p_{rv_j^b}^f are the final weights of the representative view rv_i^a of the retrieval target A and of the representative view rv_j^b of the database object B, and d(rv_i^a, rv_j^b) is the Euclidean distance between rv_i^a and rv_j^b.
3) build the weighted bipartite graph;
In this embodiment, the weighted bipartite graph G = {Q, R', U} is built jointly from the representative view set Q of the retrieval target A and the representative view set R' of the database object B. Each node in the node set Q represents a representative view in Q; each node in the node set R' represents a representative view in R'; and the edge set U = {g_{i,j}} holds the weighted connections between all representative views of the retrieval target A and all representative views of the database object B.
A weighted bipartite graph is built in this way between the retrieval target A and every object in the database in turn.
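The padded edge-weight matrix of step 105 can be sketched as follows; the padding columns play the role of the empty views in R'. The function name and the toy representative views and weights are hypothetical.

```python
import numpy as np

def edge_weight_matrix(QF, pA, RF, pB):
    """Edge weights g_{i,j} = (1/2)(p^f_{rv_i^a} + p^f_{rv_j^b}) * d(rv_i^a, rv_j^b)
    for j <= n_b, and 0 for the dummy views padded into R' (assumes n_a >= n_b)."""
    na, nb = len(QF), len(RF)
    assert na >= nb, "pad the smaller side, as in step 105"
    g = np.zeros((na, na))                 # columns j >= nb stay 0 (padding)
    for i in range(na):
        for j in range(nb):
            g[i, j] = 0.5 * (pA[i] + pB[j]) * np.linalg.norm(QF[i] - RF[j])
    return g

QF = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # 3 reps of target A
RF = np.array([[0.0, 0.0], [1.0, 1.0]])               # 2 reps of object B
g = edge_weight_matrix(QF, np.array([0.5, 0.3, 0.2]), RF, np.array([0.6, 0.4]))
print(g.shape)   # (3, 3)
```

The resulting square matrix is exactly what the matching algorithm of step 106 consumes.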
106: using a bipartite graph matching algorithm to find the optimal matching of the weighted bipartite graph, obtaining the similarity between the retrieval target and each object in the database, ranking by this similarity, and outputting the ranked result as the retrieval result.
Any popular bipartite graph matching algorithm can be used to find the optimal matching of the weighted bipartite graph; without loss of generality, the Kuhn-Munkres algorithm [5] is adopted.
1) find the optimal matching
For the weighted bipartite graph G = {Q, R', U}, the Kuhn-Munkres algorithm, under the one-to-one matching constraint, yields the subgraph Λ_M with minimum total edge weight as the optimal matching of the bipartite graph, and summing the transformed edge weights gives the similarity between the retrieval target A and the database object B.
According to the objective function of weighted bipartite matching

Λ_M = argmax_{Λ_k ∈ Λ} Σ_{1≤i≤n} c_{a_k(i), b_k(i)} = argmax_{Λ_k ∈ Λ} Σ_{1≤i≤n} (G − g_{a_k(i), b_k(i)})

and the similarity formula

S_Match = max_{Λ_k ∈ Λ} Σ_{1≤i≤n} (G − g_{a_k(i), b_k(i)})

the optimal matching Λ_M and the corresponding similarity S_Match are obtained. Here Λ_k denotes one bipartite matching and Λ is the set of all possible bipartite matchings; c_{a_k(i), b_k(i)} is an element of the n × n edge-efficiency matrix C [6]; g_{a_k(i), b_k(i)} is the weight of the edge formed by the two matched nodes a_k(i) and b_k(i) of the bipartite graph; G is a constant slightly larger than max(g_{i,j}); argmax returns the argument with the maximum value, and max returns the maximum value.
2) rank by similarity
Sort the similarity values S_Match between the retrieval target and each object in the database in descending order; a larger S_Match indicates a higher similarity between the two. The ranked result is output as the retrieval result.
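The matching objective above can be sketched as follows. For clarity this sketch exhaustively searches all permutations instead of running Kuhn-Munkres (exact but only practical for small n; KM solves the same problem in O(n³)); the function name is hypothetical.

```python
import numpy as np
from itertools import permutations

def match_similarity(g):
    """Similarity S_Match = max over one-to-one matchings of sum(G - g_{i, pi(i)}).
    Maximizing sum(G - g) is equivalent to minimizing the summed edge weights."""
    n = g.shape[0]
    G = g.max() + 1.0                      # constant slightly above max(g_ij)
    best_cost = min(sum(g[i, p[i]] for i in range(n))
                    for p in permutations(range(n)))
    return n * G - best_cost

g = np.array([[0.0, 5.0], [5.0, 0.0]])
print(match_similarity(g))   # 12.0: G = 6, best matching cost 0
```

A production system would swap the permutation loop for a Kuhn-Munkres (Hungarian) solver operating on the same g matrix; the returned S_Match values are then sorted in descending order to rank the database.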
Experiments
1. Experimental database
The database used in the experiments is the publicly shared ETH database, which contains 80 three-dimensional models in 8 classes of 10 objects each: apple, car, cow, cup, dog, horse, pear, and tomato.
2. Evaluation criteria
Four evaluation criteria are applied in the experiments [7], as follows:
(1) Nearest neighbor (NN): the percentage of queries whose nearest neighbor belongs to the query's category.
(2) First tier (FT): the recall within the K nearest matches, where K is the cardinality of the query's category. In this experiment, K = 10.
(3) Second tier (ST): the recall within the 2K nearest matches, where K is the cardinality of the query's category.
(4) Precision-recall curve: the average recall (AR) and average precision (AP) used to evaluate the performance of three-dimensional object retrieval.
AR and AP are computed from the following formulas, from which the precision-recall curve is drawn:

Recall = N_z / N_r

where Recall is the recall value, N_z is the number of correctly retrieved objects, and N_r is the number of all relevant objects.

Precision = N_z / N_all

where Precision is the precision value and N_all is the number of all retrieved objects.

AR = (1 / N_m) Σ_{i=1}^{N_m} Recall(i)

where N_m is the number of three-dimensional model classes and Recall(i) is the recall of the i-th class.

AP = (1 / N_m) Σ_{i=1}^{N_m} Precision(i)

where Precision(i) is the precision of the i-th class.
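The per-query precision and recall formulas above can be sketched directly; AR and AP then average these values over the classes. The function name and the toy retrieval result are hypothetical.

```python
def precision_recall(retrieved, relevant):
    """Per-query Precision = N_z / N_all and Recall = N_z / N_r, where N_z is
    the number of correctly retrieved objects, N_all the number of retrieved
    objects, and N_r the number of relevant objects."""
    nz = len(set(retrieved) & set(relevant))
    return nz / len(retrieved), nz / len(relevant)

# toy query: 4 objects retrieved, 3 objects actually relevant, 2 overlap
prec, rec = precision_recall(retrieved=[1, 2, 3, 4], relevant=[1, 2, 5])
print(prec)   # 0.5
```

Sweeping the number of retrieved objects and plotting the resulting (Recall, Precision) pairs produces the precision-recall curves of Fig. 2.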
3. Compared algorithms
In the experiments, this method is compared with the following two methods:
ED [8] (A 3D Model Retrieval Approach Based on the Elevation Descriptor), a 3D retrieval algorithm based on elevation descriptors.
CCFV [9] (Camera Constraint-Free View-Based 3D Object Retrieval), a view-based 3D retrieval algorithm free of camera constraints.
4. Experimental results
Fig. 2 compares the precision-recall curves of the three algorithms on the ETH database, where the ordinate is precision (Precision) and the abscissa is recall (Recall). The larger the area enclosed by a precision-recall curve and the coordinate axes, the better the retrieval performance.
Fig. 3 compares the NN, FT, and ST scores of the three algorithms on the ETH database; larger NN, FT, and ST values indicate better retrieval performance.
In the precision-recall comparison, the curve of this method encloses the largest area with the coordinate axes, clearly outperforming ED and CCFV. On the ETH database, this method exceeds the CCFV algorithm by 16.25%, 6%, and 4.25% on the NN, FT, and ST indices respectively, and exceeds the ED algorithm by 17.5%, 13.88%, and 13% respectively. The experimental results show that this method achieves better retrieval performance than ED and CCFV.
References
[1] Jia Hui, Liu Jianyuan, Zhang Jiangang. Research on semantic-web construction and retrieval methods for 3D model libraries [J]. Journal of Xi'an University of Posts and Telecommunications, 2012, 17(3): 53-57.
[2] Zheng Baichuan. Research on content-based 3D model retrieval technology [D]. Zhejiang University, 2004.
[3] Dalal N, Triggs B. Histograms of oriented gradients for human detection [C]. // Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. IEEE, 2005: 886-893.
[4] Wang Qian, Wang Cheng, Feng Zhenyuan, et al. Research on K-means clustering algorithms [J]. Electronic Design Engineering, 2012, 20(7). DOI: 10.3969/j.issn.1674-6236.2012.07.008.
[5] Hua Jianxin. Research on semantics-based web service discovery and algorithms [D]. Changsha University of Science and Technology, 2010.
[6] Gao Y, Dai Q, Wang M, et al. 3D model retrieval using weighted bipartite graph matching [J]. Signal Processing: Image Communication, 2011, 26(1): 39-47.
[7] Gao Y, Dai Q, Zhang N Y. 3D model comparison using spatial structure circular descriptor [J]. Pattern Recognition, 2010, 43(3): 1142-1151.
[8] Shih J L, Lee C H, Wang J T. A new 3D model retrieval approach based on the elevation descriptor [J]. Pattern Recognition, 2007, 40(1): 283-295.
[9] Gao Y, Tang J, Hong R, et al. Camera constraint-free view-based 3-D object retrieval [J]. Image Processing, IEEE Transactions on, 2012, 21(4): 2269-2281.
Those skilled in the art will appreciate that the accompanying drawings are schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the invention are for description only and do not indicate the relative merits of the embodiments.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (6)

1. A multi-view target retrieval method, characterized in that the method comprises the following steps:
(1) obtaining the view set of a retrieval target input by the user and the view sets of the objects in a database;
(2) using an image feature extraction algorithm to extract features from the view sets of the retrieval target and of the objects in the database;
(3) clustering the view sets after feature extraction with a clustering method, and extracting a representative view for each class;
(4) determining an initial weight for each representative view according to the size of its class, updating the weights using the relations between the representative views, and generating the final weights;
(5) building a weighted bipartite graph from the representative views of two view sets and their weights;
(6) using a bipartite graph matching algorithm to find the optimal matching of the weighted bipartite graph, obtaining the similarity between the retrieval target and each object in the database, ranking by this similarity, and outputting the ranked result as the retrieval result.
2. The multi-view target retrieval method according to claim 1, characterized in that the operation of using an image feature extraction algorithm to extract features from the view sets of the retrieval target and of the objects in the database is specifically:
for each view, calculating the gradients of its local regions, building gradient orientation histograms from them statistically, and combining these histograms into the HOG feature that describes the original view.
3. The multi-view target retrieval method according to claim 1, characterized in that the operation of clustering the view sets after feature extraction with a clustering method is specifically:
first determining the exact number K of desired clusters and selecting K initial views as cluster centers; assigning each remaining view to the nearest class according to its distance to each cluster center; recomputing the mean of the views in each class to form new cluster centers; and repeating this process until the clustering converges.
4. The multi-view target retrieval method according to claim 1, characterized in that the representative view is specifically:
for each view in each class, computing the sum of its Euclidean distances to the other views in the class, and choosing the view with the minimum sum of distances as the representative view.
5. The multi-view target retrieval method according to claim 1, characterized in that the initial weight is specifically:

p_{rv_i}^0 = |N(i)| / |A|

where |N(i)| is the number of views in the i-th cluster and |A| is the number of views in model M.
6. The multi-view target retrieval method according to claim 1, characterized in that the final weight is specifically:

p_{rv_1}^{n+1} = γ p_{rv_1}^0 + (1 − γ) Σ_{i≠1} t(rv_i, rv_1) p_i^n
p_{rv_2}^{n+1} = γ p_{rv_2}^0 + (1 − γ) Σ_{i≠2} t(rv_i, rv_2) p_i^n
...
p_{rv_k}^{n+1} = γ p_{rv_k}^0 + (1 − γ) Σ_{i≠k} t(rv_i, rv_k) p_i^n

where p_{rv_1}^{n+1}, p_{rv_2}^{n+1}, ..., p_{rv_k}^{n+1} are the weights of the 1st, 2nd, ..., k-th representative views after the (n+1)-th iteration; p_{rv_1}^0, p_{rv_2}^0, ..., p_{rv_k}^0 are the initial weights of the 1st, 2nd, ..., k-th representative views; γ is a parameter controlling the importance of the initial weights; t(rv_i, rv_k) is the transition probability from the i-th representative view to the k-th representative view; p_i^n is the weight of the i-th representative view after the n-th iteration; k is the number of clusters, 1 ≤ i ≤ k.
CN201410566595.7A 2014-10-22 2014-10-22 Multi-perspective target retrieval method Pending CN104298758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410566595.7A CN104298758A (en) 2014-10-22 2014-10-22 Multi-perspective target retrieval method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410566595.7A CN104298758A (en) 2014-10-22 2014-10-22 Multi-perspective target retrieval method

Publications (1)

Publication Number Publication Date
CN104298758A true CN104298758A (en) 2015-01-21

Family

ID=52318483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410566595.7A Pending CN104298758A (en) 2014-10-22 2014-10-22 Multi-perspective target retrieval method

Country Status (1)

Country Link
CN (1) CN104298758A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868324A (en) * 2016-03-28 2016-08-17 天津大学 Multi-view target retrieving method based on implicit state model
CN106503270A (en) * 2016-12-09 2017-03-15 厦门大学 3D target retrieval method based on multiple views and bipartite graph matching
CN106557533A (en) * 2015-09-24 2017-04-05 杭州海康威视数字技术股份有限公司 Single-target multi-image joint retrieval method and apparatus
WO2017124697A1 (en) * 2016-01-20 2017-07-27 北京百度网讯科技有限公司 Information searching method and apparatus based on picture
GB2569979A (en) * 2018-01-05 2019-07-10 Sony Interactive Entertainment Inc Image generating device and method of generating an image
CN110263196A (en) * 2019-05-10 2019-09-20 南京旷云科技有限公司 Image search method, device, electronic equipment and storage medium
CN112818451A (en) * 2021-02-02 2021-05-18 盈嘉互联(北京)科技有限公司 VGG-based BIM model optimal visual angle construction method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040249809A1 (en) * 2003-01-25 2004-12-09 Purdue Research Foundation Methods, systems, and data structures for performing searches on three dimensional objects
CN101398854A (en) * 2008-10-24 2009-04-01 清华大学 Video fragment searching method and system
CN101599077A (en) * 2009-06-29 2009-12-09 清华大学 Method for retrieving three-dimensional objects

Non-Patent Citations (1)

Title
YUE GAO ET AL: "3D model retrieval using weighted bipartite graph matching", Signal Processing: Image Communication *

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN106557533A (en) * 2015-09-24 2017-04-05 杭州海康威视数字技术股份有限公司 Single-target multi-image joint retrieval method and device
CN106557533B (en) * 2015-09-24 2020-03-06 杭州海康威视数字技术股份有限公司 Single-target multi-image joint retrieval method and device
WO2017124697A1 (en) * 2016-01-20 2017-07-27 北京百度网讯科技有限公司 Information searching method and apparatus based on picture
CN105868324A (en) * 2016-03-28 2016-08-17 天津大学 Multi-view target retrieval method based on an implicit state model
CN106503270A (en) * 2016-12-09 2017-03-15 厦门大学 3D target retrieval method based on multi-view and bipartite graph matching
CN106503270B (en) * 2016-12-09 2020-02-14 厦门大学 3D target retrieval method based on multi-view and bipartite graph matching
GB2569979A (en) * 2018-01-05 2019-07-10 Sony Interactive Entertainment Inc Image generating device and method of generating an image
US10848733B2 (en) 2018-01-05 2020-11-24 Sony Interactive Entertainment Inc. Image generating device and method of generating an image
GB2569979B (en) * 2018-01-05 2021-05-19 Sony Interactive Entertainment Inc Rendering a mixed reality scene using a combination of multiple reference viewing points
CN110263196A (en) * 2019-05-10 2019-09-20 南京旷云科技有限公司 Image search method, device, electronic equipment and storage medium
CN110263196B (en) * 2019-05-10 2022-05-06 南京旷云科技有限公司 Image retrieval method, image retrieval device, electronic equipment and storage medium
CN112818451A (en) * 2021-02-02 2021-05-18 盈嘉互联(北京)科技有限公司 VGG-based BIM model optimal visual angle construction method

Similar Documents

Publication Publication Date Title
CN104298758A (en) Multi-perspective target retrieval method
Gao et al. 3D model retrieval using weighted bipartite graph matching
CN105243139B Three-dimensional model retrieval method and retrieval device based on deep learning
Sun et al. Dagc: Employing dual attention and graph convolution for point cloud based place recognition
CN111027140B (en) Airplane standard part model rapid reconstruction method based on multi-view point cloud data
CN109034035A Pedestrian re-identification method based on saliency detection and feature fusion
CN104317838A (en) Cross-media Hash index method based on coupling differential dictionary
CN103530649A Visual search method applicable to mobile terminals
Nie et al. Convolutional deep learning for 3D object retrieval
Zhang et al. 3D object retrieval with multi-feature collaboration and bipartite graph matching
CN104462365A Multi-view target retrieval method based on a probability model
Xu et al. Discriminative analysis for symmetric positive definite matrices on lie groups
CN111078916A (en) Cross-domain three-dimensional model retrieval method based on multi-level feature alignment network
CN104317946A (en) Multi-key image-based image content retrieval method
CN112085072A (en) Cross-modal retrieval method of sketch retrieval three-dimensional model based on space-time characteristic information
CN104361135A (en) Image search method
CN111797269A (en) Multi-view three-dimensional model retrieval method based on multi-level view associated convolutional network
CN102930291B Automatic K-nearest-neighbor local search genetic clustering method for graphic images
Li et al. Combining topological and view-based features for 3D model retrieval
CN104143088A (en) Face identification method based on image retrieval and feature weight learning
CN111597367B (en) Three-dimensional model retrieval method based on view and hash algorithm
CN110334226B (en) Depth image retrieval method fusing feature distribution entropy
CN102289661A (en) Method for matching three-dimensional grid models based on spectrum matching
CN104765764A (en) Indexing method based on large-scale image
CN109857886A Three-dimensional model retrieval method based on minimax game-theory view approximation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2015-01-21)