CN103530649A - Visual searching method applicable to a mobile terminal - Google Patents

Visual searching method applicable to a mobile terminal Download PDF

Info

Publication number
CN103530649A
Authority
CN
China
Prior art keywords
image
mobile terminal
gps information
identified
sample image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310483155.0A
Other languages
Chinese (zh)
Inventor
桂振文
刘越
王涌天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201310483155.0A priority Critical patent/CN103530649A/en
Publication of CN103530649A publication Critical patent/CN103530649A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a visual search method applicable to a mobile terminal. The method comprises the following steps: 1, the mobile terminal collects an image to be identified of the current scene, and the gravity direction of the mobile terminal at the moment of capture and the GPS (Global Positioning System) information of the current scene are obtained; 2, the binary local feature vectors of the image to be identified are computed; 3, the GPS information and the binary local feature vectors are packed into a descriptor file, and the descriptor file is sent out; 4, the images whose GPS information is closest to the extracted GPS information are retrieved from a sample image base and defined as query images; 5, the binary local feature vectors of the image to be identified are matched one by one against those of the query images, the query image closest to the image to be identified is found, and the corresponding information is transmitted to the mobile terminal, realizing the visual search. The method offers users of mobile terminals a more convenient means of obtaining information relevant to the current scene.

Description

A visual search method applicable to a mobile terminal
Technical field
The invention belongs to the technical field of mobile augmented reality, and specifically relates to a visual search method applicable to a mobile terminal.
Background technology
The goal of visual search research is to use computers in place of humans to automatically process massive amounts of physical-world information: to identify targets and objects of various modalities, to partially replace human mental labor, and to extend capabilities in fields where human sensory organs are inefficient. It has wide applications in remote sensing image processing, medical image processing, augmented reality, and other fields.
At present, with the development of the Internet, humanity is entering an information society, and the Internet has become an important platform on which people publish, obtain, and exchange information. The exponential growth in the amount of information on the Internet makes it an important problem to let users find the information they need quickly and accurately within massive data. In recent years, with the popularity of cameras, smartphones, PADs, video cameras, and other electronic products, we can take pictures of the scenery, animals, and food we like anytime and anywhere. As of January 2010, Facebook claimed that the number of pictures on its website exceeded 25 billion. Faced with such massive picture resources, how to find the pictures of interest quickly and accurately is a problem that must be solved by industry and an important research direction of academia. However, with the enormous growth in picture scale, guaranteeing real-time picture search requires that the corresponding image coding, image retrieval, and database indexing technologies also be adjusted or accelerated accordingly.
At the same time, the rapid development of computer hardware and software technology has laid a solid foundation for augmented reality to move out of indoor applications and to support complex analysis, decision-making, and management. The functions of mobile terminal devices (such as PDAs and smartphones) are becoming richer and richer: they have embedded operating systems, touch screens, GPS positioning, and cameras, and also possess considerable computing and processing power. The integration of these functions lays a good foundation for developing augmented reality systems based on mobile terminals. According to relevant statistics, by the end of 2010 the number of mobile phone users in China reached 740 million, a considerable proportion of whom own smartphones, so the smartphone has great application potential as a platform for augmented reality. The gradual opening of 3G networks marks the beginning of a brand-new era of mobile value-added services; combining augmented reality with LBS can realize real-time interaction and dynamic three-dimensional display of information, making the human-machine interface friendlier and more intelligent.
Based on the above analysis, combining a terminal equipped with a camera, a GPS sensor, and a wireless network sensor with image recognition and matching techniques at the server end makes online recognition of large-scale scene objects possible.
However, much past image recognition research did not consider image retrieval with very large sample sizes; many methods cannot be generalized to larger-scale image recognition problems, and system performance and the processable data scale are limited. When the data reaches city-level scale, on the order of millions of images, an image recognition system needs massive storage space and the ability to compute quickly over massive data. Images themselves require a large amount of storage, and the various feature description vectors extracted from the images also require a large amount of storage. Meanwhile, indexing and matching the descriptors during image recognition and matching require powerful computing ability.
Summary of the invention
In view of this, the invention provides a visual search method applicable to a mobile terminal, with which large-scale online image recognition, and thus online visual search, can be realized; at the same time, the method greatly reduces the amount of stored data and improves the image recognition rate and the speed of visual search.
The technical scheme of the present invention is realized as follows:
A visual search method applicable to a mobile terminal, wherein the sample image base suitable for the method meets two conditions: 1. each sample image in the sample image base carries GPS information, and 2. each sample image in the sample image base is represented by binary local feature vectors. The detailed process of the method is:
Step 1: use the mobile terminal to collect an image to be identified of the current scene, and obtain the gravity direction of the mobile terminal at the moment of capture and the GPS information of the current scene;
Step 2: use the binary local feature detection algorithm BRISK to perform feature point detection on the image to be identified, obtaining its feature points; according to the gravity direction, describe the feature points with the feature descriptor FREAK, obtaining the binary local feature vectors of the image to be identified;
Step 3: pack the GPS information and the binary local feature vectors into a descriptor file, and send it;
Step 4: after receiving the descriptor file, the server extracts the GPS information from it, searches the sample image base for the images whose GPS information is closest to the extracted GPS information, and defines them as query images;
Step 5: match the binary local feature vectors of the image to be identified one by one against those of the query images, find the query image closest to the image to be identified, and transmit its corresponding information to the mobile terminal, realizing the visual search.
Further, before feature point detection on the image to be identified, the present invention also down-samples the image.
Further, the descriptor file generated in Step 3 also includes the number of binary local feature vectors, and the GPS information and this number are placed at the beginning of the descriptor file.
Further, the matching of the present invention is: computing the Hamming distance between the binary feature vectors of the image to be identified and those of the query images, and finding the image closest to the image to be identified based on the Hamming distance.
Further, when the distance between the nearest GPS information found in Step 4 and the GPS information corresponding to the image to be identified exceeds a set threshold, the server generates a feedback signal indicating that no relevant information can be found and returns it to the mobile terminal.
Further, the sample image base suitable for the method is established with the following steps:
S01: obtain sample images with GPS information, where the GPS information is that of the scene shown in each sample image;
S02: extract the binary local feature vectors of each sample image, and generate an inverted file index table;
S03: build a higher-level index with the GPS information of the sample images as cluster centres, and store the inverted file index tables in the linked list corresponding to the sample images that belong to the same cluster centre.
Beneficial effects:
First, the present invention uses a mobile terminal to collect an image to be identified of the current scene and, on the server, uses image matching to identify the collected outdoor scene and provide the various pieces of information corresponding to the image to be identified, thereby offering mobile terminal users a more convenient means of obtaining information about the current scene.
Second, the present invention uses binary local feature vectors to describe image features: only a few bytes are needed to represent a high-dimensional description vector, saving storage space and making large-scale data storage feasible for mobile terminals.
Third, when searching for images similar to the image to be identified, the present invention first judges from the GPS information whether there are sample images with nearby locations; if so, it carries out the subsequent similarity comparison of binary local features, and if not, it directly notifies the mobile terminal that no similar sample image can be found. Thus, by using the position information of the GPS sensor, the present invention narrows the range of matching samples and saves matching time.
Fourth, the present invention matches with a binary Hamming distance comparison algorithm, so the matching computation can be done with a single computer instruction; this improves the ability to rapidly match massive numbers of descriptors and provides more favourable conditions for faster recognition of massive outdoor pictures.
Fifth, when recording the sample binary feature codes, the present invention clusters them by GPS value, chooses a cluster centre to represent the position of each class, and places the sample feature vectors of each class in one large linked list for convenient lookup.
Sixth, the present invention can extend the interactive applications of intelligent terminals, supporting extended applications on intelligent terminals such as tourism, navigation, traffic, and hotel services, so that network operators and content providers can use their abundant server resources and superior server performance to develop their businesses.
Description of the drawings
Fig. 1 is the architecture diagram of the outdoor massive-object recognition system fusing sensor information;
Fig. 2 is the flow chart of the recognition algorithm;
Fig. 3 shows the AGAST_9-16 template;
Fig. 4 illustrates how the FREAK descriptor is inspired by the structure of the human retina: (a) the density distribution of rod cells on the retina; (b) the three subregions of the retina;
Fig. 5 shows the sampling point pairs used by the FREAK descriptor to determine the feature direction;
Fig. 6 shows the inverted index structure of the binary feature vectors of the outdoor scene training samples.
Embodiments
The present invention is described below with reference to the accompanying drawings and concrete examples.
The visual search method of the present invention applicable to a mobile terminal requires a sample image base meeting two conditions: 1. each sample image in the sample image base carries GPS information, and 2. each sample image in the sample image base is represented by binary local feature vectors. As shown in Fig. 1, the detailed process of the method is:
Step 1: the user opens the capture apparatus of the mobile terminal and collects an image to be identified of the current scene; the terminal then calls its GPS sensor interface and gravity sensor interface to obtain the gravity direction of the mobile terminal at the moment of capture and the GPS information of the current scene.
Step 2: to reduce the amount of computation, the terminal down-samples the collected image to be identified (reducing the image resolution), uniformly reducing the resolution to 320 × 240; it then uses the binary local feature detection algorithm BRISK (Binary Robust Invariant Scalable Keypoints) to perform feature point detection on the image, obtaining its feature points; according to the gravity direction, the feature points are described with the feature descriptor FREAK (Fast Retina Keypoint), so that the information contained in the image to be identified is converted into binary local feature vectors. Because an image may contain a large amount of information, one image may have up to a hundred binary local feature vectors.
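The down-sampling above can be sketched as follows. This is a minimal pure-Python nearest-neighbour resize to the fixed 320 × 240 target; a real terminal would use an optimized library routine, and the function and variable names are illustrative, not from the patent.

```python
# Nearest-neighbour down-sampling to a fixed 320 x 240 target,
# as a self-contained stand-in for an optimized image resize.
def downsample(img, out_w=320, out_h=240):
    """img: list of rows of pixel values; returns an out_h x out_w image."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A synthetic 1280 x 960 grey image reduced before feature detection.
big = [[(x + y) % 256 for x in range(1280)] for y in range(960)]
small = downsample(big)
print(len(small), len(small[0]))
```
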
The detailed process of this step is described below:
(1) Corner (feature point) detection in the discrete scale space
The binary local feature detection algorithm BRISK uses corners as feature points. It first searches the image for points (corners) that are salient within their neighbourhood as candidate feature points, obtaining a candidate feature point set. Then, for every point in the candidate set, non-maximum suppression (NMS) is performed against the two adjacent layers of its scale space, rejecting the candidates whose saliency is not a local maximum; the remaining points are exactly the feature points that are unique within their scale-space neighbourhood. The detailed process is:
Use the AGAST algorithm to search for corners in every layer of the discrete scale space of the image, using pixel brightness as the comparison index to measure the saliency of the tested point. The measure of saliency is the FAST score, computed as:
V = max( Σ_{x∈S_bright} (|I_{p→x} − I_p| − τ), Σ_{x∈S_dark} (|I_p − I_{p→x}| − τ) )    (1)
where V is the FAST response score; S_bright and S_dark are the sets of neighbourhood points that are significantly brighter and darker than the centre; I is the grey value; and τ is the grey-difference threshold for judging significance: if the brightness difference between the centre point p and a neighbourhood point is less than τ, the centre is considered close to the neighbourhood brightness and does not meet the condition of being significantly bright or dark. The choice of neighbourhood is determined by the application platform and requirements; commonly used templates are 5-8, 8-12, and 9-16. As shown in Fig. 3, under the AGAST_9-16 template the neighbourhood consists of 16 points on the same layer as the tested point; when the FAST response score condition is met with at least 9 points (i.e., at least 9 neighbourhood points are all brighter or all darker than the centre point under test), the centre point is considered salient in its neighbourhood and is included among the candidate feature points.
After all corners in the scale space have been found as candidate feature points, non-maximum suppression is applied to all candidate feature points across adjacent scale layers. A pixel P_d finally confirmed as a feature point must satisfy: its corresponding points P_{d+1} and P_{d−1} in the upper and lower adjacent layers of the scale space are also salient on their own scale layers, and the FAST response score of P_d is higher than those of both P_{d−1} and P_{d+1}. This guarantees the uniqueness of the feature point in scale space.
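The saliency test and formula (1) can be sketched as below, assuming the 9-16 template: 16 neighbourhood intensities sampled on a ring around the centre pixel p. The sketch checks only the count of significantly bright/dark points (it omits the contiguity requirement of the full AGAST test), and the names are illustrative.

```python
# Sketch of the FAST saliency score of formula (1) for one centre pixel.
def fast_score(center, ring, tau):
    """center: grey value of p; ring: 16 neighbourhood grey values.
    Returns (is_corner, V) where V is the FAST response score."""
    brighter = [x for x in ring if x - center > tau]   # S_bright
    darker = [x for x in ring if center - x > tau]     # S_dark
    # 9-16 template: at least 9 points entirely brighter or darker
    # (contiguity of the arc is omitted in this sketch).
    is_corner = len(brighter) >= 9 or len(darker) >= 9
    v_bright = sum(x - center - tau for x in brighter)
    v_dark = sum(center - x - tau for x in darker)
    return is_corner, max(v_bright, v_dark)

ring = [90] * 10 + [52, 50, 48, 47, 46, 45]  # 10 clearly brighter points
print(fast_score(50, ring, tau=20))
```

During non-maximum suppression, the score V of a candidate is then compared against the scores of its corresponding points in the adjacent scale layers.
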
(2) Feature point description
The sampling pattern of the FREAK descriptor simulates the distribution of photoreceptor cells on the retina. Anatomical evidence shows that rod cells are sensitive to light intensity and that, from the macula toward the edge of the retina, their size gradually increases while their density correspondingly decreases. As shown in Fig. 4, the density of rod cells on the retina decreases exponentially from the centre toward the edge. In the region near the macula, the smaller photosensitive area of the cells and the higher cell density make this region perceive light intensity more finely, so the amount of information is also larger. This accords with everyday experience: to see an object clearly, we adjust our eyeballs so that the object is at the centre of the visual field, and its image thereby falls near the macula at the centre.
As shown in Fig. 4(a), the sampling pattern of FREAK places a higher density of sampling points in the neighbourhood close to the feature point (simulating the position of the macula), where the Gaussian kernel parameter of each sampling point (shown intuitively in the figure as the radius of the red circle centred on the sampling point, which simulates the radius of the photosensitive region of a photoreceptor cell; the larger the radius, the lower the precision) is smaller; in the edge region of the pattern, far from the feature point, the density of sampling points is low and the Gaussian kernel parameter is larger.
The sampling pattern used by FREAK contains 43 sampling points, so a descriptor of 43 × 42/2 = 903 bits could be generated, but the final FREAK descriptor is only 512 bits long, so the sampling point pairs must be screened. Different sampling point pairs contribute differently to the uniqueness of the descriptor; the purpose of the screening is to make the feature description more distinctive, so the criterion is to select the sampling point pairs whose comparison results vary the most (largest population variance).
(3) Determining the feature principal direction and constructing the feature descriptor
Similarly to BRISK, FREAK represents the feature principal direction with local gradients, but differs in the choice of sampling point pairs for computing the gradient: the former uses long-distance sampling point pairs, while FREAK selects a few simple groups of sampling point pairs with respect to the centre point, as shown in Fig. 5.
The local gradient O representing the feature principal direction is computed as follows:
O = (1/M) Σ_{P_o∈G} ( I(P_o^{r1}) − I(P_o^{r2}) ) · (P_o^{r1} − P_o^{r2}) / ‖P_o^{r1} − P_o^{r2}‖    (2)
After the principal direction is obtained, the sampling point set is rotated around the feature point k by θ = arctan2(g_y, g_x), where [g_x, g_y, g_z] are the accelerations along the three coordinate directions obtained by the gravity acceleration sensor of the mobile terminal, i.e., the gravity direction obtained in Step 1. The descriptor F is then constructed:

F = Σ_{0≤a<N} 2^a T(P_a)    (3)

where P_a is a pair of sampling points and N is the descriptor length, and T(P_a) satisfies:

T(P_a) = 1 if I(P_a^{r1}) > I(P_a^{r2}), and 0 otherwise,

where I(P_a^{r1}) and I(P_a^{r2}) are the smoothed brightnesses of the two sampling points of the pair; G is the set of sampling point pairs used for computing the gradient; M is the number of sampling point pairs in G; and P_o^{r1} and P_o^{r2} are the spatial coordinate vectors of a pair of sampling points.
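Formula (3) reduces to packing the results of pairwise intensity comparisons into a bit string. A minimal sketch, with an illustrative stand-in pair list and intensity values (the real FREAK uses 512 screened pairs of smoothed samples):

```python
# Build the binary descriptor F of formula (3) from smoothed sample
# intensities and a list of sampling-point pairs (index pairs).
def freak_descriptor(intensity, pairs):
    """intensity: smoothed intensities; pairs: (i, j) index pairs P_a."""
    f = 0
    for a, (i, j) in enumerate(pairs):
        t = 1 if intensity[i] > intensity[j] else 0  # T(P_a)
        f |= t << a                                   # F = sum 2^a * T(P_a)
    return f

intensity = [120, 80, 95, 95, 200]
pairs = [(0, 1), (2, 3), (4, 0), (1, 4)]
print(format(freak_descriptor(intensity, pairs), '04b'))
```
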
Step 3: pack the GPS information and the binary local feature vectors into a descriptor file, and send it.
So that the receiving end can quickly judge during reception whether a descriptor file has been fully received, in this step the number of binary local feature vectors is also made part of the descriptor file, and the GPS information and this number are placed at the beginning of the descriptor file before it is sent.
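One possible byte layout for such a descriptor file is sketched below. The patent fixes only the ordering (GPS information and vector count first), not a binary format, so the little-endian doubles for latitude/longitude, the 32-bit count, and the 64-byte (512-bit) descriptor size are assumptions for illustration.

```python
import struct

def pack_descriptor_file(lat, lon, descriptors):
    """descriptors: list of 64-byte binary local feature vectors.
    GPS info and vector count go at the start, as in Step 3."""
    header = struct.pack('<ddI', lat, lon, len(descriptors))
    return header + b''.join(descriptors)

def unpack_header(blob):
    """The receiver can read the count without waiting for the body."""
    lat, lon, count = struct.unpack_from('<ddI', blob, 0)
    return lat, lon, count

blob = pack_descriptor_file(39.96, 116.30, [b'\x00' * 64, b'\xff' * 64])
print(unpack_header(blob))
```

With the count in the header, the server knows how many 64-byte records to expect and can detect a truncated transfer.
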
Step 4: after receiving the descriptor file, the server extracts the GPS information from it, searches the sample image base for the images whose GPS information is closest to the extracted GPS information, and defines the retrieved images as query images.
When the distance between the nearest GPS information found in Step 4 and the GPS information corresponding to the image to be identified exceeds a set threshold, this indicates that no matching sample image exists in the sample image base, and the server generates a feedback signal that no relevant information can be found and returns it to the mobile terminal.
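The GPS pre-filter of Step 4 can be sketched as follows. The patent does not specify a distance measure or threshold value, so the haversine great-circle distance and the 500-metre cut-off here are illustrative choices.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_cluster(query, centres, max_dist_m):
    """Closest GPS cluster centre, or None when the threshold is exceeded
    (the server then replies that no relevant information was found)."""
    best = min(centres, key=lambda c: haversine_m(*query, *c))
    if haversine_m(*query, *best) > max_dist_m:
        return None
    return best

centres = [(39.96, 116.30), (39.90, 116.40)]
print(nearest_cluster((39.959, 116.301), centres, max_dist_m=500))
```
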
Step 5: match the binary local feature vectors of the image to be identified one by one against those of the query images, find the query image closest to the image to be identified, and transmit its corresponding information to the mobile terminal, realizing the mobile visual search.
In the field of image processing there are many image matching methods; the present embodiment preferably uses the following method to find the closest query image:
Step 501: the server extracts the binary feature vectors of the image to be identified from the received descriptor file.
Step 502: the binary feature vectors of the image to be identified are compared one by one with those of each query image by computing Hamming distances; pairs whose Hamming distance is greater than a set threshold are judged to be unmatched, and pairs whose Hamming distance is less than or equal to the threshold are judged to be matched. In the present embodiment, the threshold is generally 30.
The computation of the Hamming distance is elaborated below with an example:
For the 512-bit feature descriptors (represented as binary feature vectors) generated at two feature points, compute the Hamming distance of the two; when the Hamming distance is less than a threshold R_th, the two points are considered matched.
Suppose the descriptor sets of two images A and B are D_{A1}, D_{A2}, …, D_{Am} and D_{B1}, D_{B2}, …, D_{Bn}. For each D_{Ai}, i ∈ [1, m], in image A, find among D_{B1}, D_{B2}, …, D_{Bn} the nearest neighbour D_{Bj} with the minimum Hamming distance, obtaining the minimum distance r_min. If r_min < R_th, D_{Ai} and D_{Bj} are considered matched, i.e., D_{Ai} and D_{Bj} form a matching point pair; otherwise D_{Ai} is judged to have no matching point in image B.
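The nearest-neighbour Hamming matching above can be sketched in a few lines, representing each descriptor as a Python integer holding its bit pattern (the real system uses 512-bit codes; the short codes and R_th = 30 default here follow the embodiment for illustration):

```python
# Hamming matching of Step 502: XOR then population count.
def hamming(a, b):
    """Bit distance between two binary descriptors stored as ints;
    on modern CPUs this maps to an XOR plus a popcount instruction."""
    return bin(a ^ b).count('1')

def match_count(desc_a, desc_b, r_th=30):
    """Count descriptors of A whose nearest neighbour in B is within r_th."""
    matches = 0
    for da in desc_a:
        r_min = min(hamming(da, db) for db in desc_b)
        if r_min < r_th:
            matches += 1
    return matches

a = [0b1111_0000, 0b1010_1010]
b = [0b1111_0001, 0b0000_0000]
print(match_count(a, b))
```
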
Step 503: count, for each query image, the number of its binary feature vectors matched with those of the image to be identified; take the query image with the largest number of matches as the result, and return the image ID and relevant information stored in advance in correspondence with the result image (this relevant information can, for example, describe the hotels, shopping malls, and stations around the scene) to the mobile terminal. The terminal can display the recognition result, and the user can click it to view the details.
At this point, the flow ends.
The sample image base suitable for the visual search method of the present invention can be established with the following steps:
S01: obtain sample images with GPS information, where the GPS information is that of the scene shown in each sample image.
In general, sample images are obtained per scene, for example by downloading from the network or shooting on the spot; several sample images are obtained for each scene from different angles, and the GPS information of the scene is the GPS information of its sample images.
S02: extract the binary local feature vectors of each sample image, and generate an inverted file index table;
S03: build a higher-level index with the GPS information of the sample images as cluster centres, and store the inverted file index tables in the linked list corresponding to the sample images belonging to the same cluster centre; the sample image base thus established is shown in Fig. 6. In the linked list of Fig. 6, each Index note corresponds to one sample image: its first column stores the descriptor feature vectors of the sample image, its second column stores the ID and GPS information of the sample image, and its third column can store scene information related to the sample image, and so on.
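The two-level structure of S03, GPS cluster centres on top and per-centre record lists below, can be sketched as follows. A Python list stands in for the linked list of Fig. 6, and the record fields mirror its three columns; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class SampleRecord:
    descriptors: list      # column 1: binary local feature vectors
    image_id: str          # column 2: sample image ID ...
    gps: tuple             # ... and its (lat, lon) GPS information
    scene_info: str = ''   # column 3: related scene information

@dataclass
class ClusterIndex:
    centres: dict = field(default_factory=dict)  # (lat, lon) -> records

    def add(self, centre, record):
        self.centres.setdefault(centre, []).append(record)

    def candidates(self, centre):
        """All sample records attached to one cluster centre."""
        return self.centres.get(centre, [])

idx = ClusterIndex()
idx.add((39.96, 116.30), SampleRecord([0b1010], 'img001', (39.959, 116.301)))
idx.add((39.96, 116.30), SampleRecord([0b1100], 'img002', (39.961, 116.299)))
print([r.image_id for r in idx.candidates((39.96, 116.30))])
```

At query time the server locates the nearest centre first and then matches descriptors only within that centre's record list.
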
In the field of image processing there are many methods for extracting the binary local feature vectors of an image; the present embodiment preferably extracts them as follows: use the binary local feature detection algorithm BRISK (Binary Robust Invariant Scalable Keypoints) to perform feature point detection on the sample image, obtaining its feature points; then describe the feature points with the feature descriptor FREAK (Fast Retina Keypoint), so that the information contained in the image is converted into binary local feature vectors.
When matching is performed against the sample image base established as above, in Step 4 the server directly searches the sample image base for the cluster-centre linked list whose GPS information is closest to the received GPS information, so that the query images can be found quickly.
In summary, the above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc., made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (6)

1. A visual search method applicable to a mobile terminal, wherein the sample image base suitable for the method meets two conditions: 1. each sample image in the sample image base carries GPS information, and 2. each sample image in the sample image base is represented by binary local feature vectors; characterized in that the detailed process of the method is:
Step 1: use the mobile terminal to collect an image to be identified of the current scene, and obtain the gravity direction of the mobile terminal at the moment of capture and the GPS information of the current scene;
Step 2: use the binary local feature detection algorithm BRISK to perform feature point detection on the image to be identified, obtaining its feature points; according to the gravity direction, describe the feature points with the feature descriptor FREAK, obtaining the binary local feature vectors of the image to be identified;
Step 3: pack the GPS information and the binary local feature vectors into a descriptor file, and send it to the server;
Step 4: after receiving the descriptor file, the server extracts the GPS information from it, searches the sample image base for the images whose GPS information is closest to the extracted GPS information, and defines them as query images;
Step 5: match the binary local feature vectors of the image to be identified one by one against those of the query images, find the query image closest to the image to be identified, and transmit its corresponding information to the mobile terminal, realizing the visual search.
2. The visual search method applicable to a mobile terminal according to claim 1, characterized in that, before feature point detection, the image to be identified is down-sampled.
3. The visual search method applicable to a mobile terminal according to claim 1, characterized in that the descriptor file generated in Step 3 also includes the number of binary local feature vectors, and the GPS information and this number are placed at the beginning of the descriptor file.
4. The visual search method applicable to a mobile terminal according to claim 1, characterized in that the matching is: computing the Hamming distance between the binary feature vectors of the image to be identified and those of the query images, and finding the query image closest to the image to be identified based on the Hamming distance.
5. The visual search method applicable to a mobile terminal according to claim 1, characterized in that, when the distance between the nearest GPS information found in Step 4 and the GPS information corresponding to the image to be identified exceeds a set threshold, the server generates a feedback signal that no relevant information can be found and returns it to the mobile terminal.
6. The visual search method applicable to a mobile terminal according to any one of claims 1 to 5, characterized in that the sample image base suitable for the method is established with the following steps:
S01: obtain sample images with GPS information, where the GPS information is that of the scene shown in each sample image;
S02: extract the binary local feature vectors of each sample image, and generate an inverted file index table;
S03: build a higher-level index with the GPS information of the sample images as cluster centres, and store the inverted file index tables in the linked list corresponding to the sample images belonging to the same cluster centre.
CN201310483155.0A 2013-10-16 2013-10-16 Visual searching method applicable to a mobile terminal Pending CN103530649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310483155.0A CN103530649A (en) 2013-10-16 2013-10-16 Visual searching method applicable mobile terminal


Publications (1)

Publication Number Publication Date
CN103530649A true CN103530649A (en) 2014-01-22

Family

ID=49932645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310483155.0A Pending CN103530649A (en) 2013-10-16 2013-10-16 Visual searching method applicable mobile terminal

Country Status (1)

Country Link
CN (1) CN103530649A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060251339A1 (en) * 2005-05-09 2006-11-09 Gokturk Salih B System and method for enabling the use of captured images through recognition
CN101802824A (en) * 2007-09-20 2010-08-11 诺基亚公司 Method, apparatus and computer program product for providing a visual search interface
CN102216941A (en) * 2008-08-19 2011-10-12 数字标记公司 Methods and systems for content processing
CN102395966A (en) * 2009-04-14 2012-03-28 高通股份有限公司 Systems and methods for image recognition using mobile devices

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHENWEN GUI ET AL.: "Outdoor scenes identification on mobile device by integrating vision and inertial sensors", 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC) *
DUAN LINGYU ET AL.: "Research and standardization progress of mobile visual search technology", Information and Communications Technologies *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616012B (en) * 2014-04-30 2018-03-02 北京大学 The method for obtaining compact global characteristics description
CN104616012A (en) * 2014-04-30 2015-05-13 北京大学 Method for acquiring compact global characteristics descriptor
US11216496B2 (en) 2014-05-15 2022-01-04 Evolv Technology Solutions, Inc. Visual interactive search
CN107209762A (en) * 2014-05-15 2017-09-26 思腾科技(巴巴多斯)有限公司 Visual interactive formula is searched for
CN104010206B (en) * 2014-06-17 2016-03-02 合一网络技术(北京)有限公司 Based on the method and system of the virtual reality video playback in geographical position
CN104010206A (en) * 2014-06-17 2014-08-27 合一网络技术(北京)有限公司 Virtual reality video playing method and system based on geographic position
CN104268519A (en) * 2014-09-19 2015-01-07 袁荣辉 Image recognition terminal based on mode matching and recognition method of image recognition terminal
CN104268519B (en) * 2014-09-19 2018-03-30 袁荣辉 Image recognition terminal and its recognition methods based on pattern match
CN104268602A (en) * 2014-10-14 2015-01-07 大连理工大学 Shielded workpiece identifying method and device based on binary system feature matching
CN105989628A (en) * 2015-02-06 2016-10-05 北京网梯科技发展有限公司 Method and system device for obtaining information through mobile terminal
CN105843828A (en) * 2015-06-30 2016-08-10 维沃移动通信有限公司 Search method for picture information applied to mobile terminal and mobile terminal
CN106250906A (en) * 2016-07-08 2016-12-21 大连大学 Extensive medical image clustering method based on over-sampling correction
CN108073854A (en) * 2016-11-14 2018-05-25 中移(苏州)软件技术有限公司 A kind of detection method and device of scene inspection
CN107122979A (en) * 2017-05-23 2017-09-01 珠海市魅族科技有限公司 Information processing method and device, computer installation and computer-readable recording medium
CN110019874A (en) * 2017-12-29 2019-07-16 上海全土豆文化传播有限公司 The generation method of index file, apparatus and system
US11995559B2 (en) 2018-02-06 2024-05-28 Cognizant Technology Solutions U.S. Corporation Enhancing evolutionary optimization in uncertain environments by allocating evaluations via multi-armed bandit algorithms
CN110517435A (en) * 2019-09-08 2019-11-29 天津大学 The portable instant fire prevention early warning of one kind and Information Collecting & Processing early warning system and method
CN111126304A (en) * 2019-12-25 2020-05-08 鲁东大学 Augmented reality navigation method based on indoor natural scene image deep learning
CN111126304B (en) * 2019-12-25 2023-07-07 鲁东大学 Augmented reality navigation method based on indoor natural scene image deep learning
CN117216308A (en) * 2023-11-09 2023-12-12 天津华来科技股份有限公司 Searching method, system, equipment and medium based on large model
CN117216308B (en) * 2023-11-09 2024-04-26 天津华来科技股份有限公司 Searching method, system, equipment and medium based on large model

Similar Documents

Publication Publication Date Title
CN103530649A (en) Visual searching method applicable mobile terminal
CN109947975B (en) Image search device, image search method, and setting screen used therein
US8977055B2 (en) Information processing device, object recognition method, program, and terminal device
Sun et al. Dagc: Employing dual attention and graph convolution for point cloud based place recognition
CN103761539B (en) Indoor locating method based on environment characteristic objects
CN110866079A (en) Intelligent scenic spot real scene semantic map generating and auxiliary positioning method
CN111639968B (en) Track data processing method, track data processing device, computer equipment and storage medium
CN103514446A (en) Outdoor scene recognition method fused with sensor information
TWI745818B (en) Method and electronic equipment for visual positioning and computer readable storage medium thereof
CN111323024B (en) Positioning method and device, equipment and storage medium
CN103530377B (en) A kind of scene information searching method based on binary features code
CN104484814B (en) A kind of advertising method and system based on video map
CN102880854A (en) Distributed processing and Hash mapping-based outdoor massive object identification method and system
CN110555408A (en) Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation
CN102880879A (en) Distributed processing and support vector machine (SVM) classifier-based outdoor massive object recognition method and system
CN103105924A (en) Man-machine interaction method and device
CN114241464A (en) Cross-view image real-time matching geographic positioning method and system based on deep learning
CN102819752B (en) System and method for outdoor large-scale object recognition based on distributed inverted files
Kanji Unsupervised part-based scene modeling for visual robot localization
CN106250396A (en) A kind of image tag automatic creation system and method
Liu et al. Indoor Visual Positioning Method Based on Image Features.
Zhao et al. CrowdOLR: Toward object location recognition with crowdsourced fingerprints using smartphones
Liu et al. Robust and accurate mobile visual localization and its applications
CN116824686A (en) Action recognition method and related device
CN116664812B (en) Visual positioning method, visual positioning system and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140122