CN102667764A - User interface for presenting search results for multiple regions of a visual query - Google Patents


Info

Publication number
CN102667764A
Authority
CN
China
Prior art keywords
visual
search
query
sub-portion
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010800451970A
Other languages
Chinese (zh)
Inventor
David Petrou
Theodore Power
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/850,513 (US 9,087,059 B2)
Application filed by Google LLC
Publication of CN102667764A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53: Querying
    • G06F 16/538: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/5838: Retrieval characterised by using metadata automatically derived from the content, using colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/438: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53: Querying
    • G06F 16/532: Query formulation, e.g. graphical querying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A visual query such as a photograph, screen shot, scanned image, or video frame is submitted to a visual query search system from a client system. The search system processes the visual query by sending it to a plurality of parallel search systems, each implementing a distinct visual query search process. A plurality of results is received from the parallel search systems. Utilizing the search results, an interactive results document is created and sent to the client system. The interactive results document has at least one visual identifier for a sub-portion of the visual query with a selectable link to at least one search result for that sub-portion. The visual identifier may be a bounding box around the respective sub-portion, or a semi-transparent label over the respective sub-portion. Optionally, the bounding box or label is color coded by type of result.

Description

User interface for presenting search results for multiple regions of a visual query
Technical field
The disclosed embodiments relate generally to presenting the search results of a plurality of parallel search systems used to process a visual query.
Background
A text-based or term-based search, wherein a user inputs a word or phrase into a search engine and receives a variety of results, is a useful tool for searching. However, term-based searches require that the user be able to input relevant terms. Sometimes a user may wish to know information about an image. For example, the user might want to know the name of a person in a photograph, or the name of a flower or bird in a picture. Accordingly, a system that can receive a visual query and provide search results would be desirable.
Summary of the invention
According to some embodiments, a computer-implemented method of processing a visual query includes performing the following steps on a server system having one or more processors and memory storing one or more programs for execution by the one or more processors. A visual query is received from a client system. The visual query is processed by sending it to a plurality of parallel search systems for simultaneous processing. Each of the plurality of search systems implements a distinct visual query search process of a plurality of visual query search processes. The server system then receives a plurality of search results from one or more of the plurality of parallel search systems. It creates an interactive results document comprising one or more visual identifiers of respective sub-portions of the visual query. Each visual identifier has at least one user-selectable link to at least one of the search results. Finally, the server system sends the interactive results document to the client system. In some embodiments, a search result includes data related to the corresponding sub-portion of the visual query.
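The data model implied by this summary, an interactive results document whose visual identifiers each carry at least one selectable link to a search result, can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; every class and field name here is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    source: str    # e.g. "ocr", "facial-recognition", "image-to-terms"
    score: float
    payload: dict  # system-specific result data

@dataclass
class VisualIdentifier:
    box: tuple     # sub-portion bounds: (left, top, right, bottom)
    results: list  # selectable links: one or more SearchResult objects

@dataclass
class InteractiveResultsDocument:
    query_image_id: str
    identifiers: list = field(default_factory=list)

def build_results_document(query_image_id, per_region_results):
    """Keep only regions that have at least one search result, since each
    visual identifier must link to at least one result."""
    doc = InteractiveResultsDocument(query_image_id)
    for box, results in per_region_results:
        if results:
            doc.identifiers.append(VisualIdentifier(box, results))
    return doc
```

A region with no results simply produces no visual identifier, which matches the requirement that every identifier be selectable.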
In some embodiments, sending the interactive results document further comprises sending a subset of the plurality of search results, in a search-results-list format, for presentation along with the interactive results document. Optionally, the method further comprises receiving a user selection of at least one of the user-selectable links, and identifying the search result in the search results list corresponding to the selected link.
In some embodiments, the visual identifiers comprise one or more bounding boxes around respective sub-portions of the visual query. A bounding box may be square, or may outline the respective sub-portion of the visual query. Optionally, some bounding boxes include smaller bounding boxes inside of them.
In some embodiments, each of the bounding boxes includes a user-selectable link to one or more search results, and the user-selectable link has an activation region corresponding to the sub-portion of the visual query surrounded by the bounding box. Even in embodiments where the visual identifiers are not bounding boxes, a respective user-selectable link has an activation region corresponding to the sub-portion of the visual query associated with the corresponding visual identifier.
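The activation-region behavior described above, including the case of bounding boxes nested inside larger ones, amounts to a hit test that prefers the innermost box containing the user's click. A minimal sketch, with invented names and boxes represented as (left, top, right, bottom) tuples:

```python
def hit_test(identifiers, x, y):
    """identifiers: list of (box, link) pairs, box = (left, top, right, bottom).
    Returns the link of the innermost (smallest-area) box whose activation
    region contains (x, y), so a nested bounding box takes precedence over
    its enclosing box; returns None if the click misses every box."""
    def area(box):
        left, top, right, bottom = box
        return (right - left) * (bottom - top)

    hits = [(box, link) for box, link in identifiers
            if box[0] <= x <= box[2] and box[1] <= y <= box[3]]
    if not hits:
        return None
    return min(hits, key=lambda h: area(h[0]))[1]
```

Choosing the smallest containing box is one plausible disambiguation rule for nested boxes; the patent text does not prescribe a specific one.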
In some embodiments, when a selectable sub-portion contains text, the method further comprises sending the text of the selectable sub-portion to a text-based query processing system. In other embodiments, when the sub-portion of the visual query corresponding to a respective visual identifier contains text, the search results corresponding to that visual identifier include results of a term query search on at least one of the terms in the text.
In some embodiments, when the sub-portion of the visual query corresponding to a respective visual identifier contains a person's face, the search results corresponding to that visual identifier include one or more of: name, handle, contact information, account information, address information, the current location of a related mobile device associated with the person whose face is contained in the selectable sub-portion, other images of the person whose face is contained in the selectable sub-portion, and potential image matches for the person's face.
In some embodiments, when the sub-portion of the visual query corresponding to a respective visual identifier contains a product, the search results corresponding to that visual identifier include one or more of: product information, a product review, an option to initiate purchase of the product, an option to initiate a bid on the product, a list of similar products, and a list of related products.
In some embodiments, a respective visual identifier of the one or more visual identifiers is formatted for presentation in a visually distinctive manner in accordance with a type of entity recognized in the respective sub-portion of the visual query. The respective visual identifier may be formatted for presentation in a visually distinctive manner such as by overlay color, overlay pattern, label background color, label background pattern, label font color, or border color.
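One plausible way to realize this type-dependent formatting is a lookup table from recognized-entity type to display style. The entity types, colors, and field names below are invented for illustration; the patent only specifies that identifiers be visually distinctive by result type:

```python
# Hypothetical style table: each recognized-entity type gets a distinct
# border color and label background so identifiers of different types
# are visually distinguishable in the interactive results document.
STYLE_BY_TYPE = {
    "face":    {"border_color": "#e74c3c", "label_bg": "#fdecea"},
    "text":    {"border_color": "#3498db", "label_bg": "#eaf4fd"},
    "product": {"border_color": "#27ae60", "label_bg": "#eafaf0"},
}
DEFAULT_STYLE = {"border_color": "#7f8c8d", "label_bg": "#f2f3f4"}

def style_for(entity_type):
    """Return the display style for a recognized entity type, falling
    back to a neutral default for unrecognized types."""
    return STYLE_BY_TYPE.get(entity_type, DEFAULT_STYLE)
```

The same table could carry overlay patterns or font colors; a dictionary keyed by entity type keeps all the visually distinctive attributes in one place.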
In some embodiments, a respective visual identifier of the one or more visual identifiers comprises a label consisting of at least one term associated with the image in the respective sub-portion of the visual query. The label is formatted for presentation in the interactive results document on or near the respective sub-portion.
Brief description of the drawings
Fig. 1 is a block diagram illustrating a computer network that includes a visual query server system.
Fig. 2 is a flow diagram illustrating a process for responding to a visual query, in accordance with some embodiments.
Fig. 3 is a flow diagram illustrating a process for responding to a visual query with an interactive results document, in accordance with some embodiments.
Fig. 4 is a flow diagram illustrating communications between a client and a visual query server system, in accordance with some embodiments.
Fig. 5 is a block diagram illustrating a client system, in accordance with some embodiments.
Fig. 6 is a block diagram illustrating a front-end visual query processing server system, in accordance with some embodiments.
Fig. 7 is a block diagram illustrating a generic one of the parallel search systems utilized to process a visual query, in accordance with some embodiments.
Fig. 8 is a block diagram illustrating an OCR search system utilized to process a visual query, in accordance with some embodiments.
Fig. 9 is a block diagram illustrating a facial recognition search system utilized to process a visual query, in accordance with some embodiments.
Fig. 10 is a block diagram illustrating an image-to-terms search system utilized to process a visual query, in accordance with some embodiments.
Fig. 11 illustrates a client system with a screen shot of an exemplary visual query, in accordance with some embodiments.
Figs. 12A and 12B each illustrate a client system with a screen shot of an interactive results document with bounding boxes, in accordance with some embodiments.
Fig. 13 illustrates a client system with a screen shot of an interactive results document that is coded by type, in accordance with some embodiments.
Fig. 14 illustrates a client system with a screen shot of an interactive results document with labels, in accordance with some embodiments.
Fig. 15 illustrates a screen shot of an interactive results document and visual query displayed concurrently with a results list, in accordance with some embodiments.
Like reference numerals refer to corresponding parts throughout the drawings.
Detailed description
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first", "second", etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting (the stated condition or event)" or "in response to detecting (the stated condition or event)", depending on the context.
Fig. 1 is a block diagram illustrating a computer network that includes a visual query server system according to some embodiments. The computer network 100 includes one or more client systems 102 and a visual query server system 106. One or more communication networks 104 interconnect these components. The communication network 104 may be any of a variety of networks, including local area networks (LAN), wide area networks (WAN), wireless networks, wireline networks, the Internet, or a combination of such networks.
The client system 102 includes a client application 108, which is executed by the client system, for receiving a visual query (e.g., visual query 1102 of Fig. 11). A visual query is an image that is submitted as a query to a search engine or search system. Examples of visual queries include, without limitation, photographs, scanned documents and images, and drawings. In some embodiments, the client application 108 is selected from the set consisting of a search application, a search engine plug-in for a browser application, and a search engine extension for a browser application. In some embodiments, the client application 108 is an "omnivorous" search box, which allows a user to drag and drop any format of image into the search box to be used as the visual query.
The client system 102 sends queries to and receives data from the visual query server system 106. The client system 102 may be any computer or other device that is capable of communicating with the visual query server system 106. Examples include, without limitation, desktop and notebook computers, mainframe computers, server computers, mobile devices such as mobile phones and personal digital assistants, network terminals, and set-top boxes.
The visual query server system 106 includes a front-end visual query processing server 110. The front-end server 110 receives a visual query from the client 102, and sends the visual query to a plurality of parallel search systems 112 for simultaneous processing. The search systems 112 each implement a distinct visual query search process, and access their corresponding databases 114 as necessary to process the visual query by their distinct search process. For example, a facial recognition search system 112-A will access a facial image database 114-A to look for facial matches to the image query. As will be explained in more detail with regard to Fig. 9, if the visual query contains a face, the facial recognition search system 112-A will return one or more search results (e.g., names, matching faces, etc.) from the facial image database 114-A. In another example, the optical character recognition (OCR) search system 112-B converts any recognizable text in the visual query into text for return as one or more search results. In the OCR search system 112-B, an OCR database 114-B may be accessed to recognize particular fonts or text patterns, as will be explained in more detail with regard to Fig. 8.
Any number of parallel search systems 112 may be used. Some examples include a facial recognition search system 112-A; an OCR search system 112-B; an image-to-terms search system 112-C (which may recognize an object or an object category); a product recognition search system (which may be configured to recognize 2-D images such as book covers and CDs, and may also be configured to recognize 3-D images such as furniture); a bar code recognition search system (which recognizes 1D and 2D style bar codes); a named entity recognition search system; landmark recognition (which may be configured to recognize particular famous landmarks like the Eiffel Tower, and may also be configured to recognize a corpus of specific images such as billboards); place recognition aided by geo-location information provided by a GPS receiver in the client system 102 or by a mobile phone network; a color recognition search system; and a similar image search system (which searches for and identifies images similar to the visual query). Further search systems can be added as additional parallel search systems, represented in Fig. 1 by system 112-N. All of the search systems, except the OCR search system, are collectively defined herein as search systems performing an image-match process. All of the search systems, including the OCR search system, are collectively referred to as query-by-image search systems. In some embodiments, the visual query server system 106 includes a facial recognition search system 112-A, an OCR search system 112-B, and at least one other query-by-image search system 112.
The parallel search systems 112 each individually process the visual search query and return their results to the front-end server system 110. In some embodiments, the front-end server 110 may perform one or more analyses on the search results, such as one or more of: aggregating the results into a compound document, choosing a subset of results to display, and ranking the results, as will be explained in more detail with regard to Fig. 6. The front-end server 110 communicates the search results to the client system 102.
The client system 102 presents the one or more search results to the user. The results may be presented on a display, by an audio speaker, or by any other means used to communicate information to a user. The user may interact with the search results in a variety of ways. In some embodiments, the user's selections, annotations, and other interactions with the search results are transmitted to the visual query server system 106 and recorded, along with the visual query, in a query and annotation database 116. Information in the query and annotation database can be used to improve visual query results. In some embodiments, the information from the query and annotation database 116 is periodically pushed to the parallel search systems 112, which incorporate any relevant portions of the information into their respective individual databases 114.
The computer network 100 optionally includes a term query server system 118, for performing searches in response to term queries. A term query is a query containing one or more terms, as opposed to a visual query which contains an image. The term query server system 118 may be used to generate search results that supplement information produced by the various search engines in the visual query server system 106. The results returned from the term query server system 118 may be in any format, and may include textual documents, images, video, etc. While the term query server system 118 is shown as a separate system in Fig. 1, optionally the visual query server system 106 may include the term query server system 118.
Additional information about the operation of the visual query server system 106 is provided below with respect to the flowcharts in Figs. 2-4.
Fig. 2 is a flow diagram illustrating a visual query server system method for responding to a visual query, according to certain embodiments of the invention. Each of the operations shown in Fig. 2 may correspond to instructions stored in a computer memory or computer-readable storage medium.
The visual query server system receives a visual query from a client system (202). The client system, for example, may be a desktop computing device, a mobile device, or another similar device (204), as explained with reference to Fig. 1. An example visual query on an example client system is shown in Fig. 11.
The visual query is an image document of any suitable format. For example, the visual query can be a photograph, a screen shot, a scanned image, or a frame or sequence of multiple frames of a video (206). In some embodiments, the visual query is a drawing produced by a content authoring program (736, Fig. 5). As such, in some embodiments the user "draws" the visual query, while in other embodiments the user scans or photographs the visual query. Some visual queries are created using an image generation application, photo editing program, drawing program, or image editing program such as Acrobat. For example, a visual query could come from a user taking a photograph of his friend on his mobile phone and then submitting the photograph as the visual query to the server system. The visual query could also come from a user scanning a page of a magazine, or taking a screen shot of a webpage on a desktop computer, and then submitting the scan or screen shot as the visual query to the server system. In some embodiments, the visual query is submitted to the server system 106 through a search engine extension of a browser application, through a plug-in for a browser application, or by a search application executed by the client system 102. Visual queries may also be submitted by other application programs (executed by a client system) that support or generate images which can be transmitted to a remotely located server by the client system.
The visual query can be a combination of text and non-text elements (208). For example, a query could be a scan of a magazine page that contains images and text, such as a person standing next to a road sign. A visual query can include an image of a person's face, whether obtained by a camera embedded in the client system or from a document scanned by or otherwise received by the client system. A visual query can also be a scan of a document containing only text. A visual query can also be an image of numerous distinct subjects, such as several birds in a forest, a person and an object (e.g., car, park bench, etc.), or a person and an animal (e.g., pet, farm animal, butterfly, etc.). Visual queries may have two or more distinct elements. For example, a visual query could include a bar code and an image of a product, or a product name, on a product package. For example, the visual query could be a picture of a book cover that includes the title of the book, cover art, and a bar code. In some cases, one visual query will produce two or more distinct search results corresponding to different portions of the visual query, as discussed in more detail below.
The server system processes the visual query as follows. The front-end server system sends the visual query to a plurality of parallel search systems for simultaneous processing (210). Each search system implements a distinct visual query search process; that is, each individual search system processes the visual query by its own processing scheme.
In some embodiments, one of the search systems to which the visual query is sent for processing is an optical character recognition (OCR) search system. In some embodiments, one of the search systems to which the visual query is sent for processing is a facial recognition search system. In some embodiments, the plurality of search systems running distinct visual query search processes includes at least: optical character recognition (OCR), facial recognition, and another query-by-image process other than OCR and facial recognition (212). The other query-by-image process is selected from a set of processes that includes, but is not limited to: product recognition, bar code recognition, object-or-object-category recognition, named entity recognition, and color recognition (212).
In some embodiments, named entity recognition occurs as a post process of the OCR search system: the text result of the OCR is analyzed for famous people, places, objects, and the like, and the terms identified as named entities are then searched in the term query server system (118, Fig. 1). In other embodiments, images of famous landmarks, logos, people, album covers, trademarks, etc. are recognized by an image-to-terms search system. In other embodiments, a distinct named-entity query-by-image process, separate from the image-to-terms search system, is utilized. The object-or-object-category recognition system recognizes generic result types like "car". In some embodiments, this system also recognizes product brands, particular product models, and the like, and provides more specific descriptions, like "Porsche". Some of the search systems could be special user-specific search systems. For example, particular versions of color recognition and facial recognition could be special search systems used by the blind.
The front-end server system receives results from the parallel search systems (214). In some embodiments, the results are accompanied by a search score. For some visual queries, some of the search systems will find no relevant results. For example, if the visual query was a picture of a flower, the facial recognition search system and the bar code search system will not find any relevant results. In some embodiments, a null or zero search score is received from such a search system when no relevant results are found (216). In some embodiments, if the front-end server does not receive a result from a search system after a predefined period of time (e.g., 0.2, 0.5, 1, 2, or 5 seconds), it treats that timed-out server as if it had produced a null search score, and proceeds to process the results received from the other search systems.
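The fan-out-with-timeout behavior described here (send the query to all search systems at once; treat a system that misses the deadline as having returned a null score) can be sketched with a thread pool. The function names and the (score, payload) result shape are assumptions made for the example:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

NULL_SCORE = 0.0

def query_parallel_systems(visual_query, systems, timeout=2.0):
    """Submit the visual query to every search system simultaneously.
    systems: dict mapping a system name to a callable taking the query
    and returning (score, payload). A system that does not answer within
    `timeout` seconds is recorded as a null search score, and results
    from the remaining systems are still processed."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(systems)) as pool:
        futures = {name: pool.submit(fn, visual_query)
                   for name, fn in systems.items()}
        for name, future in futures.items():
            try:
                results[name] = future.result(timeout=timeout)
            except TimeoutError:
                results[name] = (NULL_SCORE, None)
    return results
```

Note that each `result(timeout=...)` call waits independently, so in the worst case the total wait is the sum of the timeouts; a production system would likely use a single shared deadline.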
Optionally, when at least two of the received search results meet predefined criteria, they are ranked (218). In some embodiments, one of the predefined criteria excludes void results; that is, a predefined criterion is that the result is not void. In some embodiments, one of the predefined criteria excludes results whose numerical score (e.g., for a relevance factor) falls below a predefined minimum score. Optionally, the plurality of search results are filtered (220). In some embodiments, the results are only filtered if the total number of results exceeds a predefined threshold. In some embodiments, all the results are ranked, and the results falling below the predefined minimum score are then excluded. For some visual queries, the content of the results is filtered. For example, if some of the results contain private information or personally protected information, those results are filtered out.
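The ranking and filtering criteria above (drop void results, drop results scoring below a predefined minimum, truncate only past a threshold) might look like this in outline; the threshold values are placeholders, not figures from the patent:

```python
MIN_SCORE = 0.2    # hypothetical predefined minimum relevance score
MAX_RESULTS = 10   # hypothetical threshold above which results are cut

def rank_and_filter(results):
    """results: list of (score, payload) pairs; a void result has a
    score of None or 0. Void and below-minimum results are excluded,
    the rest are ranked by descending score, and the list is truncated
    to MAX_RESULTS (a no-op unless the count exceeds the threshold)."""
    kept = [(score, payload) for score, payload in results
            if score and score >= MIN_SCORE]
    kept.sort(key=lambda r: r[0], reverse=True)
    return kept[:MAX_RESULTS]
```

Content-based filtering (e.g., removing results containing private information) would be an additional predicate applied to each payload before ranking.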
Alternatively, the compound Search Results of vision querying server system creation (222).An one of which embodiment is: as explaining with reference to figure 3, when being embedded in the interactive result document more than a search system result.Word querying server system (Fig. 1 118) can use the result from word search to expand from one result in the parallel search system, and wherein the result is linked to document or information source or comprises the text and/or the image of other information that maybe be relevant with the vision inquiry in addition.Therefore, for example, compound Search Results can comprise the link (224) of OCR result and the named entity in the OCR document.
In some embodiments, the OCR search system (112-B, Figure 1) or the front-end visual query processing server (110, Figure 1) recognizes likely relevant terms in the text. For example, it may recognize named entities such as famous people or places. The named entities are submitted as query terms to the term query server system (118, Figure 1). In some embodiments, the term query results produced by the term query server system are embedded in the visual query results as "links." In some embodiments, the term query results are returned as separate links. For example, if a picture of a book cover were the visual query, it is likely that an object recognition search system would produce a high scoring hit for the book. As such, a term query on the title of the book would be run on the term query server system 118, and the term query results would be returned along with the visual query results. In some embodiments, the term query results are presented in a labeled group to distinguish them from the visual query results. The results may be searched individually, or a search may be performed using all the recognized named entities in the search query to produce particularly relevant additional search results. For example, if the visual query is a scanned travel brochure about Paris, the returned results may include a link for initiating a search on the term query server system 118 for the term "Notre Dame de Paris." Similarly, compound search results include results from text searches on recognized famous images. For example, for the same travel brochure, live links to term query results for famous destinations shown as pictures in the brochure, such as "Eiffel Tower" and "Louvre," may also be shown (even if the terms "Eiffel Tower" and "Louvre" did not appear in the brochure itself).
The visual query server system then sends at least one result to the client system (226). Typically, if the visual query processing server receives a plurality of search results from at least some of the plurality of search systems, it then sends at least one of the plurality of search results to the client system. For some visual queries, only one search system will return relevant results. For example, in a visual query containing only an image of text, only the OCR server's results may be relevant. For some visual queries, only one result from one search system may be relevant. For example, only the product related to a scanned barcode may be relevant. In these instances, the front-end visual processing server will return only the relevant search result(s). For some visual queries, a plurality of search results are sent to the client system, and the plurality of search results include search results from more than one of the parallel search systems (228). This may occur when more than one distinct image is in the visual query. For example, if the visual query were a picture of a person riding a horse, results for facial recognition of the person could be displayed along with object identification results for the horse. In some embodiments, all of the results for a particular query-by-image search system are grouped and presented together. For example, the top N facial recognition results are displayed under a heading "Facial Recognition Results" and the top N object identification results are displayed together under a heading "Object Identification Results." Alternatively, as discussed below, the search results from a particular image search system may be grouped by image region. For example, if the visual query includes two faces, both of which produce facial recognition results, the results for each face would be presented as distinct groups. For some visual queries (e.g., a visual query including an image of both text and one or more objects), the search results may include both OCR results and one or more image-match results (230).
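Grouping the results of each query-by-image search system under its own heading, as described above, might be sketched like this; the triple-based result structure and the headings are illustrative assumptions:

```python
from collections import defaultdict

def group_results(results, top_n=3):
    # results: (system_name, score, payload) triples from the parallel
    # search systems; present the top N per system under its own heading.
    groups = defaultdict(list)
    for system, score, payload in results:
        groups[system].append((score, payload))
    return {f"{system.title()} Results": sorted(hits, reverse=True)[:top_n]
            for system, hits in groups.items()}

mixed = [("facial recognition", 0.9, "Alice"),
         ("object identification", 0.7, "horse"),
         ("facial recognition", 0.4, "Bob")]
print(group_results(mixed))
# -> {'Facial Recognition Results': [(0.9, 'Alice'), (0.4, 'Bob')],
#     'Object Identification Results': [(0.7, 'horse')]}
```

Grouping by image region instead would only change the key used to bucket the results (e.g., a face index rather than the system name).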
In some embodiments, the user may wish to learn more about a particular search result. For example, if the visual query was a picture of a dolphin and the "image-to-terms" search system returns the terms "water," "dolphin," "blue," and "flipper," the user may wish to run a text-based query term search on "flipper." When the user wishes to run a search on a term query (e.g., as indicated by the user clicking on or otherwise selecting a corresponding link in the search results), the query term server system (118, Figure 1) is accessed, and the search on the selected term(s) is run. The corresponding search term results are displayed on the client system, either separately or in conjunction with the visual query results (232). In some embodiments, the front-end visual query processing server (110, Figure 1) automatically (i.e., without receiving any user command other than the initial visual query) chooses one or more top potential text results for the visual query, runs those text results on the term query server system 118, and then returns those term query results along with the visual query results to the client system as part of sending at least one search result to the client system (232). In the example above, if "flipper" was the first term result for the visual query picture of a dolphin, the front-end server would run a term query on "flipper" and return those term query results along with the visual query results to the client system. This embodiment, in which a term result considered likely to be selected by the user is automatically executed prior to sending the search results for the visual query to the user, saves the user time. In some embodiments, these results are displayed as a compound search result (222), as explained above. In other embodiments, the results are part of a search result list instead of, or in addition to, a compound search result.
Figure 3 is a flow diagram illustrating the process for responding to a visual query with an interactive result document. The first three operations (202, 210, 214) were described above with reference to Figure 2. From the search results received from the parallel search systems (214), an interactive result document is created (302).
Creating the interactive result document (302) will now be described in detail. For some visual queries, the interactive result document includes one or more visual identifiers of respective sub-portions of the visual query. Each visual identifier has at least one user selectable link to at least one of the search results. A visual identifier identifies the respective sub-portion of the visual query. For some visual queries, the interactive result document has only one visual identifier with one user selectable link to one or more results. In some embodiments, a respective user selectable link to one or more of the search results has an activation region, and the activation region corresponds to the sub-portion of the visual query that is associated with the corresponding visual identifier.
In some embodiments, the visual identifier is a bounding box (304). In some embodiments, the bounding box encloses a sub-portion of the visual query, as shown in Figure 12A. The bounding box need not be a square or rectangular box shape but can be any sort of shape, including circular, oval, conformal (e.g., to an object, entity, or region of the visual query), irregular, or any other shape, as shown in Figure 12B. For some visual queries, the bounding box outlines the boundary of an identifiable entity in a sub-portion of the visual query (306). In some embodiments, each bounding box includes a user selectable link to one or more of the search results, where the user selectable link has an activation region corresponding to the sub-portion of the visual query surrounded by the bounding box. When the space inside the bounding box (the activation region of the user selectable link) is selected by the user, search results that correspond to the image in the outlined sub-portion are returned.
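A minimal sketch of how such a bounding-box visual identifier and its activation region might be represented is shown below; the rectangular shape, field names, and URL are illustrative assumptions (the disclosure allows arbitrary shapes):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Rectangular activation region over a sub-portion of the visual query,
    # linked to the search results for that sub-portion.
    x: int
    y: int
    width: int
    height: int
    result_url: str  # user selectable link to the search results

    def contains(self, px, py):
        # True when a tap or click lands inside the activation region,
        # which triggers return of the corresponding search results.
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)

box = BoundingBox(10, 20, 100, 50, "https://example.com/results/42")
print(box.contains(50, 40))   # -> True
print(box.contains(5, 40))    # -> False
```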
In some embodiments, the visual identifier is a label (307), as shown in Figure 14. In some embodiments, the label includes at least one term associated with the image in the corresponding sub-portion of the visual query. Each label is formatted for presentation in the interactive result document on or near the corresponding sub-portion. In some embodiments, the labels are color coded.
In some embodiments, each respective visual identifier is formatted for presentation in a visually distinctive manner in accordance with the type of entity recognized in the corresponding sub-portion of the visual query. For example, as shown in Figure 13, bounding boxes around a product, a person, a trademark, and two text regions are each presented with distinct cross-hatching patterns, representing differently colored transparent bounding boxes. In some embodiments, the visual identifiers are formatted for presentation in visually distinctive manners such as overlay color, overlay pattern, label background color, label background pattern, label font color, and border color.
In some embodiments, a user selectable link in the interactive result document is a link to a document or object that contains one or more results related to the corresponding sub-portion of the visual query (308). In some embodiments, at least one search result includes data related to the corresponding sub-portion of the visual query. As such, when the user selects the selectable link associated with the respective sub-portion, the user is directed to the search results corresponding to the entity recognized in that sub-portion of the visual query.
For example, if the visual query was a photograph of a barcode, there may be portions of the photograph that are irrelevant parts of the packaging upon which the barcode is affixed. The interactive result document may include a bounding box around only the barcode. When the user selects inside the outlined barcode bounding box, the barcode search result is displayed. The barcode search result may include one result, the name of the product corresponding to that barcode, or the barcode results may include several results, such as a variety of places in which that product can be purchased, reviewed, etc.
In some embodiments, when the sub-portion of the visual query corresponding to a respective visual identifier contains text comprising one or more terms, the search results corresponding to the respective visual identifier include results from a term query search on at least one of the terms in the text. In some embodiments, when the sub-portion of the visual query corresponding to a respective visual identifier contains a person's face for which at least one match (i.e., search result) is found that meets predefined reliability (or other) criteria, the search results corresponding to the respective visual identifier include one or more of: a name, contact information, account information, address information, the current location of a related mobile device associated with the person whose face is contained in the selectable sub-portion, other images of the person whose face is contained in the selectable sub-portion, and potential image matches for the person's face. In some embodiments, when the sub-portion of the visual query corresponding to a respective visual identifier contains a product for which at least one match (i.e., search result) is found that meets predefined reliability (or other) criteria, the search results corresponding to the respective visual identifier include one or more of: product information, a product review, an option to initiate purchase of the product, an option to initiate a bid on the product, a list of similar products, and a list of related products.
Optionally, a respective user selectable link in the interactive result document includes anchor text, which is displayed in the document without having to activate the link. The anchor text provides information, such as a key word or term, related to the information that would be obtained if the link were activated. The anchor text may be displayed as part of a label (307), in a portion of a bounding box (304), or as additional information shown when a user hovers a cursor over a user selectable link for a predetermined period of time, such as 1 second.
Optionally, a respective user selectable link in the interactive result document is a link to a search engine for searching for information or documents corresponding to a text-based query (sometimes herein called a term query). Activation of the link causes execution of a search by the search engine, where both the query and the search engine are specified by the link (e.g., the search engine is specified by a URL in the link, and the text-based search query is specified by a URL parameter of the link), with results returned to the client system. Optionally, the link in this example may include anchor text specifying the text or terms in the search query.
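Such a link might be constructed as follows; the host name, the `q` parameter, and the HTML shape are illustrative assumptions only:

```python
from urllib.parse import urlencode

def make_term_query_link(terms, anchor_text=None):
    # The search engine is specified by the URL; the text-based query is
    # carried in a URL parameter (assumed here to be "q"). Optional anchor
    # text shows the term without the link having to be activated.
    url = "https://search.example.com/search?" + urlencode({"q": terms})
    return f'<a href="{url}">{anchor_text or terms}</a>'

print(make_term_query_link("Eiffel Tower"))
# -> <a href="https://search.example.com/search?q=Eiffel+Tower">Eiffel Tower</a>
```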
In some embodiments, the interactive result document produced in response to a visual query can include a plurality of links that correspond to results from the same search system. For example, the visual query may be an image or picture of a group of people. The interactive result document may include bounding boxes around each person, which, when activated, return results from the facial recognition search system for each face in the group. For some visual queries, a plurality of links in the interactive result document correspond to search results from more than one search system (310). For example, if a picture of a person and a dog was submitted as the visual query, bounding boxes in the interactive result document may outline the person and the dog separately. When the person (in the interactive result document) is selected, search results from the facial recognition search system are returned, and when the dog (in the interactive result document) is selected, results from the image-to-terms search system are returned. For some visual queries, the interactive result document contains an OCR result and an image match result (312). For example, if a picture of a person standing next to a sign was submitted as a visual query, the interactive result document may include visual identifiers for the person and for the text of the sign. Similarly, if a scan of a magazine was used as the visual query, the interactive result document may include a visual identifier for a photograph or trademark in an advertisement on the page as well as a visual identifier for the text of an article also on that page.
After the interactive result document has been created, it is sent to the client system (314). In some embodiments, the interactive result document (e.g., document 1200, Figure 15) is sent in conjunction with a list of search results from one or more of the parallel search systems, as discussed above with reference to Figure 2. In some embodiments, the interactive result document is displayed at the client system above or otherwise adjacent to the list of search results from the one or more parallel search systems (315), as shown in Figure 15.
Optionally, the user will interact with the result document by selecting a visual identifier in the result document. The server system receives from the client system information regarding the user's selection of a visual identifier in the interactive result document (316). As discussed above, in some embodiments, the link is activated by selecting an activation region inside a bounding box. In other embodiments, the link is activated by a user selection of a visual identifier of a sub-portion of the visual query that is not a bounding box. In some embodiments, the linked visual identifier is a hot button, a label located near the sub-portion, an underlined word in text, or another representation of an object or subject in the visual query.
In embodiments where the search result list is presented with the interactive result document (315), when the user selects a user selectable link (316), the search result in the search result list corresponding to the selected link is identified. In some embodiments, the cursor jumps or is automatically moved to the first result corresponding to the selected link. In some embodiments in which the display of the client 102 is too small to show both the interactive result document and the entire search result list, selecting a link in the interactive result document causes the search result list to scroll or jump so as to display at least the first result corresponding to the selected link. In some other embodiments, in response to user selection of a link in the interactive result document, the results list is reordered such that the first result corresponding to that link is displayed at the top of the results list.
In some embodiments, when the user selects a user selectable link (316), the visual query server system sends at least a subset of the results related to the corresponding sub-portion of the visual query to the client for display to the user (318). In some embodiments, the user can select multiple visual identifiers concurrently and will receive a subset of results for all of the selected visual identifiers at the same time. In other embodiments, search results corresponding to the user selectable links are preloaded onto the client prior to user selection of any of the user selectable links, so as to provide search results to the user virtually instantaneously in response to user selection of one or more links in the interactive result document.
Figure 4 is a flow diagram illustrating the communications between a client and the visual query server system. The client 102 receives a visual query from a user (402). In some embodiments, visual queries can only be accepted from users who have registered for or "opted in" to the visual query system. In some embodiments, searches for facial recognition matches are performed only for users who have registered for the facial recognition visual query system, while other types of visual queries are performed for anyone, regardless of whether they have "opted in" to the facial recognition portion.
As explained above, the format of the visual query can take many forms. The visual query will likely contain one or more subjects located in sub-portions of the visual query document. For some visual queries, the client system 102 performs type recognition pre-processing on the visual query (404). In some embodiments, the client system 102 searches for particular recognizable patterns in this pre-processing stage. For example, for some visual queries the client may recognize colors. For some visual queries the client may recognize that a particular sub-portion is likely to contain text (because that area is made up of small dark characters surrounded by light space, etc.). The client may contain any number of pre-processing type recognizers, or type recognition modules. In some embodiments, the client will have a type recognition module for recognizing barcodes (barcode recognition 406). It may do so by recognizing the distinctive striped pattern in a rectangular area. In some embodiments, the client will have a type recognition module for recognizing that a particular subject or sub-portion of the visual query is likely to contain a face (face detection 408).
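A toy illustration of the striped-pattern heuristic mentioned above: assuming the candidate rectangular region is a 2-D grid of 0 (light) / 1 (dark) pixels, a barcode-like region shows the same run pattern of vertical stripes on every row. The heuristic and the minimum stripe count are invented for illustration, not taken from the disclosure:

```python
def looks_like_barcode(region):
    # region: list of rows, each a list of 0 (light) / 1 (dark) pixels.
    # Heuristic: every row has the same alternating run-length pattern,
    # as the vertical stripes of a barcode would produce.
    def runs(row):
        out, prev = [], None
        for px in row:
            if px == prev:
                out[-1] += 1
            else:
                out.append(1)
                prev = px
        return out
    first = runs(region[0])
    return len(first) >= 4 and all(runs(r) == first for r in region[1:])

stripes = [[1, 1, 0, 1, 0, 0, 1, 0]] * 5   # identical striped rows
noise = [[1, 0, 1, 0], [0, 0, 1, 1]]       # rows disagree -> not a barcode
print(looks_like_barcode(stripes), looks_like_barcode(noise))  # -> True False
```

A production recognizer would of course tolerate noise, skew, and scale; this sketch only conveys the idea of matching a distinctive pattern before the query leaves the client.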
In some embodiments, the recognized "type" is returned to the user for verification. For example, the client system 102 may return a message stating "a barcode has been found in your visual query, are you interested in receiving barcode query results?" In some embodiments, the message may even indicate the sub-portion of the visual query where the type was found. In some embodiments, this presentation is similar to the interactive result document discussed with reference to Figure 3. For example, it may outline a sub-portion of the visual query, indicate that the sub-portion is likely to contain a face, and ask the user if they are interested in receiving facial recognition results.
After the client 102 performs the optional pre-processing of the visual query, the client sends the visual query to the visual query server system 106, specifically to the front-end visual query processing server 110. In some embodiments, if the pre-processing produced relevant results, i.e., if one of the type recognition modules produced a result above a certain threshold indicating that the query or a sub-portion of the query is likely to be of a particular type (face, text, barcode, etc.), the client will pass information regarding the results of the pre-processing forward. For example, the client may indicate that the face recognition module is 75% sure that a particular sub-portion of the visual query contains a face. More generally, the pre-processing results, if any, include one or more subject type values (e.g., barcode, face, text, etc.). Optionally, the pre-processing results sent to the visual query server system include one or more of: for each subject type value in the pre-processing results, information identifying the sub-portion of the visual query corresponding to that subject type value; and, for each subject type value in the pre-processing results, a confidence value indicating a level of confidence in that subject type value and/or in the identification of the corresponding sub-portion of the visual query.
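The pre-processing results passed forward with the visual query might look like the following payload; every field name and value here is an illustrative assumption:

```python
import json

# Hypothetical pre-processing payload sent along with the visual query: one
# entry per detected subject type, giving the sub-portion it covers and a
# confidence value for the detection.
preprocessing_results = [
    {"subject_type": "face",
     "sub_portion": {"x": 120, "y": 40, "width": 80, "height": 96},
     "confidence": 0.75},
    {"subject_type": "barcode",
     "sub_portion": {"x": 10, "y": 300, "width": 140, "height": 60},
     "confidence": 0.92},
]

payload = json.dumps({"preprocessing_results": preprocessing_results})
print(json.loads(payload)["preprocessing_results"][0]["confidence"])  # -> 0.75
```

The front-end server could use such a payload either to route sub-portions to the matching search systems first or to bias its ranking of the returned results, as described in the following paragraphs.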
The front-end server 110 receives the visual query from the client system (202). The visual query received may contain the pre-processing information discussed above. As described above, the front-end server sends the visual query to a plurality of parallel search systems (210). If the front-end server 110 received pre-processing information regarding the likelihood that a sub-portion contains a subject of a certain type, the front-end server may pass this information forward to one or more of the parallel search systems. For example, it may pass on the information that a particular sub-portion is likely to be a face, so that the facial recognition search system 112-A can process that portion of the visual query first. Similarly, the same information (that a particular sub-portion is likely to be a face) may be used by the other parallel search systems to ignore that sub-portion or to analyze other sub-portions first. In some embodiments, the front-end server will not pass the pre-processing information on to the parallel search systems, but will instead use this information to augment the way in which it processes the results received from the parallel search systems.
As explained with reference to Figure 2, for some visual queries the front-end server 110 receives a plurality of search results from the parallel search systems (214). The front-end server may then perform a variety of ranking and filtering, and may create an interactive search result document, as explained with reference to Figures 2 and 3. If the front-end server 110 received pre-processing information regarding the likelihood that a sub-portion contains a subject of a certain type, it may filter and order the results by giving preference to those results that match the pre-processed recognized subject type. If the user indicated that a particular type of result was requested, the front-end server will take the user's request into account when processing the results. For example, if the user only requested barcode information, the front-end server may filter out all other results, or it may list all results relevant to the requested type prior to listing the other results. If an interactive visual query document is returned, the server may pre-search the links associated with the type of result the user indicated interest in, while only providing links for performing related searches for the other subjects indicated in the interactive result document. Then, the front-end server 110 sends the search results to the client system (226).
The client 102 receives the results from the server system (412). When applicable, these results will include results matching the type of result found in the pre-processing stage. For example, in some embodiments, they will include one or more barcode results (414) or one or more facial recognition results (416). If the client's pre-processing modules indicated that a particular type of result was likely, and a result of that type was found, the found results of that type will be listed prominently.
Optionally, the user will select or annotate one or more of the results (418). The user may select one search result, may select a particular type of search result, and/or may select a portion of an interactive result document (420). Selection of a result is implicit feedback that the returned result was relevant to the query. Such feedback information can be utilized in future query processing operations. Annotations provide explicit feedback about the returned result that can also be utilized in future query processing operations. Annotations take the form of corrections of portions of the returned result (such as a correction to a mis-OCR'ed word) or separate annotations (either free form or structured).
The user's selection of one search result, generally selecting the "correct" result from several results of the same type (e.g., choosing the correct result from a facial recognition server), is a process referred to as a selection among interpretations. The user's selection of a particular type of search result, generally selecting the result "type" of interest from among several different types of returned results (e.g., choosing the OCR'ed text of an article in a magazine rather than the visual results for the advertisements on the same page), is a process referred to as disambiguation of intent. A user may similarly select particular linked words (such as recognized named entities) in an OCR'ed document, as explained in detail with reference to Figure 8.
Alternatively or additionally, the user may wish to annotate a particular search result. This annotation may be done in a freeform style or in a structured format (422). The annotations may be descriptions of the result or may be reviews of the result. For example, they may indicate the name of the subject(s) in the result, or they could indicate "this is a good book" or "this product broke within a year of purchase." Another example of an annotation is a user-drawn bounding box around a sub-portion of the visual query together with user-provided text identifying the object or subject inside the bounding box. User annotations are explained in more detail with reference to Figure 5.
The user's selections of search results and other annotations are sent to the server system (424). The front-end server 110 receives the selections and annotations and processes them further (426). If the information was a selection of an object, sub-region, or term in an interactive result document, further information regarding that selection may be requested, as appropriate. For example, if the selection was of one visual result, more information about that visual result would be requested. If the selection was a word (either from the OCR server or from the image-to-terms server), a text search on that word would be sent to the term query server system 118. If the selection was of a person from a facial image recognition search system, that person's profile would be requested. If the selection was of a particular portion of an interactive search result document, the underlying visual query results would be requested.
If the server system receives an annotation, the annotation is stored in a query and annotation database 116, as explained with reference to Figure 5. Then, information from the annotation database 116 is periodically copied to individual annotation databases for one or more of the parallel server systems, as discussed below with reference to Figures 7-10.
Figure 5 is a block diagram illustrating a client system 102 consistent with one embodiment of the present invention. The client system 102 typically includes one or more processing units (CPUs) 702, one or more network or other communications interfaces 704, memory 712, and one or more communication buses 714 for interconnecting these components. The client system 102 includes a user interface 705. The user interface 705 includes a display device 706 and optionally includes an input means such as a keyboard, mouse, or other input buttons 708. Alternatively or in addition, the display device 706 includes a touch sensitive surface 709, in which case the display 706/709 is a touch sensitive display. In client systems that have a touch sensitive display 706/709, a physical keyboard is optional (e.g., a soft keyboard may be displayed when keyboard entry is needed). Furthermore, some client systems use a microphone and voice recognition to supplement or replace the keyboard. Optionally, the client 102 includes a GPS (global positioning satellite) receiver, or other location detection apparatus 707, for determining the location of the client system 102. In some embodiments, visual query search services are provided that require the client system 102 to provide the visual query server system with location information indicating the location of the client system 102.
The client system 102 also includes an image capture device 710, such as a camera or scanner. Memory 712 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 712 may optionally include one or more storage devices remotely located from the CPU(s) 702. Memory 712, or alternately the non-volatile memory device(s) within memory 712, comprises a non-transitory computer readable storage medium. In some embodiments, memory 712, or the computer readable storage medium of memory 712, stores the following programs, modules, and data structures, or a subset thereof:
an operating system 716 that includes procedures for handling various basic system services and for performing hardware dependent tasks;
a network communication module 718 that is used for connecting the client computer 102 to other computers via the one or more communication network interfaces 704 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
an image capture module 720 that processes a respective image captured by the image capture device/camera 710, where the respective image may be sent as a visual query (e.g., by a client application module) to the visual query server system;
one or more client application modules 722 for handling various aspects of querying by image, including but not limited to: a query-by-image submission module 724 for submitting visual queries to the visual query server system; optionally, a region of interest selection module 725 that detects a selection (such as a gesture on the touch sensitive display 706/709) of a region of interest in an image and prepares that region of interest as a visual query; a results browser 726 for displaying the results of the visual query; and, optionally, an annotation module 728 with: an optional module 730 for structured annotation text entry, such as filling in a form, or an optional module 732 for freeform annotation text entry, which can accept annotations in a variety of formats; and an image region selection module 734 (sometimes referred to herein as a result selection module), which allows a user to select a particular sub-portion of an image for annotation;
an optional content authoring application 736 that allows a user to author a visual query by creating or editing an image, rather than just capturing one via the image capture device 710; optionally, one such application 736 may include instructions that enable a user to select a sub-portion of an image for use as a visual query;
an optional local image analysis module 738 that pre-processes the visual query before it is sent to the visual query server system. The local image analysis may recognize particular types of images, or sub-regions within an image. Examples of image types that may be recognized by such modules 738 include one or more of: facial type (a facial image recognized within the visual query), barcode type (a barcode recognized within the visual query), and text type (text recognized within the visual query); and
additional optional client applications 740, such as an email application, a phone application, a browser application, a map application, an instant messaging application, a social networking application, etc. In some embodiments, when an appropriate actionable search result is selected, an application corresponding to that actionable search result can be launched or accessed.
Optionally, the image region selection module 734, which allows the user to select a particular sub-portion of an image for annotation, also allows the user to choose a search result as a "correct" hit without further annotation. For example, the user may be presented with the top N facial recognition matches and may choose the correct person from that results list. For some search queries, results of more than one type will be presented, and the user will choose a type of result. For example, the image query may include a person standing next to a tree, but only the results regarding the person are of interest to the user. Therefore, the image selection module 734 allows the user to indicate which type of image is the "correct" type, i.e., the type he is interested in receiving. The user may also wish to annotate the search results by adding personal comments or descriptive words using either the annotation text entry module 730 (for filling in a form) or the freeform annotation text entry module 732.
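As a rough illustration of the region of interest selection described above, the sketch below crops a user-selected sub-portion out of an image so that the sub-region alone can serve as the visual query. The function name and the toy row-major pixel representation are invented for illustration; an actual module (such as module 725 in the text) would operate on device bitmap data.

```python
# Hypothetical sketch: crop a user-selected region of interest out of a
# captured image so the sub-region alone can be sent as the visual query.
# The image is modeled as a row-major list of pixel rows.

def crop_region_of_interest(image, left, top, right, bottom):
    """Return the sub-image inside the (left, top)-(right, bottom) box."""
    return [row[left:right] for row in image[top:bottom]]

# A 4x4 "image" of pixel values.
image = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]

query_region = crop_region_of_interest(image, 1, 1, 3, 3)
# query_region is the 2x2 center region: [[5, 6], [9, 10]]
```

In a real client, the box coordinates would come from the gesture detected on the touch sensitive display rather than being hard-coded.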
In some embodiments, the optional local image analysis module 738 is a portion of the client application (108, Fig. 1). Furthermore, in some embodiments the optional local image analysis module 738 includes one or more programs to perform local image analysis to pre-process or categorize the visual query or a portion thereof. For example, the client application 722 may recognize that the image contains a bar code, a face, or text prior to submitting the visual query to a search engine. In some embodiments, when the local image analysis module 738 detects that the visual query contains a particular type of image, the module asks the user whether he is interested in a corresponding type of search result. For example, the local image analysis module 738 may detect a face based on its general characteristics (i.e., without determining which person's face it is) and provide immediate feedback to the user prior to sending the query to the visual query server system. It may return a result like, "A face has been detected, are you interested in getting facial recognition matches for this face?" This may save time for the visual query server system (106, Fig. 1). For some visual queries, the front end visual query processing server (110, Fig. 1) only sends the visual query to the search system 112 corresponding to the type of image recognized by the local image analysis module 738. In other embodiments, the visual query is sent to all of the search systems 112A-N, but results from the search system 112 corresponding to the image type recognized by the local image analysis module 738 are ranked highest. In some embodiments, the manner in which local image analysis impacts the operation of the visual query server system depends on the configuration of the client system, or on configuration or processing parameters associated with either the user or the client system. Furthermore, the actual content of any particular visual query, and the results produced by the local image analysis, may cause different visual queries to be handled differently at either or both of the client system and the visual query server system.
In some embodiments, bar code recognition is performed in two steps, with the analysis of whether the visual query contains a bar code performed on the client system by the local image analysis module 738. The visual query is then passed to a bar code search system only if the client determines the visual query is likely to include a bar code. In other embodiments, the bar code search system processes every visual query.
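The two-step flow just described can be sketched as follows. The "detector" here is a crude stand-in heuristic (counting dark/light transitions in a row of pixels), not a real bar code decoder, and all function names are invented; the point is only the routing logic: the query reaches the bar code search system only when the cheap client-side check says it might contain one.

```python
# Hypothetical sketch of the two-step bar code flow: a cheap client-side
# check (step 1) decides whether the query is worth sending to the bar
# code search system (step 2). The detector is a toy heuristic.

def might_contain_barcode(pixels):
    """Crude stand-in check: look for a row with many dark/light transitions."""
    for row in pixels:
        transitions = sum(1 for a, b in zip(row, row[1:]) if a != b)
        if transitions >= 4:
            return True
    return False

def route_query(pixels, barcode_search, other_search):
    """Only forward the query to the bar code system when step 1 says 'maybe'."""
    results = [other_search(pixels)]
    if might_contain_barcode(pixels):
        results.append(barcode_search(pixels))
    return results

stripes = [[0, 1, 0, 1, 0, 1]]   # bar-code-like row
flat = [[0, 0, 0, 0, 0, 0]]      # uniform row

routed = route_query(stripes, lambda p: "barcode", lambda p: "general")
# → ["general", "barcode"]; the flat image would yield only ["general"]
```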
Optionally, the client system 102 includes additional client applications 740.
Fig. 6 is a block diagram illustrating a front end visual query processing server system 110 in accordance with one embodiment of the present invention. The front end server 110 typically includes one or more processing units (CPUs) 802, one or more network or other communications interfaces 804, memory 812, and one or more communication buses 814 for interconnecting these components. Memory 812 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 812 may optionally include one or more storage devices remotely located from the CPU(s) 802. Memory 812, or alternatively the non-volatile memory device(s) within memory 812, comprises a non-transitory computer readable storage medium. In some embodiments, memory 812 or the computer readable storage medium of memory 812 stores the following programs, modules and data structures, or a subset thereof:
an operating system 816, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
a network communication module 818, which is used for connecting the front end server system 110 to other computers via the one or more communication network interfaces 804 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
a query manager 820 for handling incoming visual queries from the client system 102 and sending them to two or more parallel search systems; as described elsewhere in this document, in some special situations a visual query may be directed to just one of the search systems, such as when the visual query includes a client-generated instruction (e.g., "facial recognition search only");
a results filtering module 822 for optionally filtering the results from the one or more parallel search systems and sending the top or "relevant" results to the client system 102 for presentation;
a results ranking and formatting module 824 for optionally ranking the results from the one or more parallel search systems and for formatting the results for presentation;
a results document creation module 826, used when appropriate to create an interactive search results document; module 826 may include sub-modules, including but not limited to a bounding box creation module 828 and a link creation module 830;
a label creation module 831 for creating labels that are visual identifiers of respective sub-portions of a visual query;
an annotation module 832 for receiving annotations from a user and sending them to an annotation database 116;
an actionable search results module 838 for generating, in response to a visual query, one or more actionable search result elements, each configured to launch a client-side action; examples of actionable search result elements are buttons to initiate a telephone call, initiate an email message, map an address, make a restaurant reservation, and an option to purchase a product; and
a query and annotation database 116, which comprises the database itself 834 and an index 836 to the database.
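A minimal sketch of the query manager's fan-out to the parallel search systems might look like the following. The three back-end functions and their placeholder return values are invented for illustration; the point is that one visual query is submitted concurrently to every search system and the responses are gathered into a single structure.

```python
# Hypothetical sketch of a query manager (like 820) fanning a visual query
# out to several search back ends in parallel and collecting each response.
from concurrent.futures import ThreadPoolExecutor

def face_search(query):
    return ("face", [])

def ocr_search(query):
    return ("ocr", ["Active Drink"])

def product_search(query):
    return ("product", ["beverage bottle"])

SEARCH_SYSTEMS = [face_search, ocr_search, product_search]

def dispatch_visual_query(query):
    """Submit the query to every parallel system and gather the responses."""
    with ThreadPoolExecutor(max_workers=len(SEARCH_SYSTEMS)) as pool:
        futures = [pool.submit(system, query) for system in SEARCH_SYSTEMS]
        return dict(f.result() for f in futures)

responses = dispatch_visual_query(b"...image bytes...")
# responses maps each system name to its (placeholder) result list
```

A production front end would additionally handle per-system timeouts and the special case where a client-generated instruction restricts the query to a single system.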
The results ranking and formatting module 824 ranks the results returned from the one or more parallel search systems (112-A through 112-N, Fig. 1). As noted above, for some visual queries only the results from one search system may be relevant. In such an instance, only the relevant search results from that one search system are ranked. For some visual queries, several types of search results may be relevant. In these instances, in some embodiments, the results ranking and formatting module 824 ranks all of the results from the search system having the most relevant result (e.g., the result with the highest relevance score) above the results from the less relevant search systems. In other embodiments, the results ranking and formatting module 824 ranks a top result from each relevant search system above the remaining results. In some embodiments, the results ranking and formatting module 824 ranks the results in accordance with a relevance score computed for each of the search results. For some visual queries, augmented textual queries are performed in addition to the searching on the parallel visual search systems. In some embodiments, when textual queries are also performed, their results are presented in a manner visually distinctive from the visual search system results.
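One of the ranking policies just described, merging all systems' results into a single list ordered by per-result relevance score, can be sketched as follows. The system names, result strings, and score values are all invented for illustration.

```python
# Hypothetical sketch of score-based merging: every result carries a
# relevance score, and results from all systems are flattened into one
# list ordered by that score, highest first.

def merge_by_relevance(results_by_system):
    """Flatten (system, result, score) triples and sort by score, descending."""
    merged = [
        (score, system, result)
        for system, results in results_by_system.items()
        for result, score in results
    ]
    merged.sort(key=lambda t: t[0], reverse=True)
    return [(system, result) for _, system, result in merged]

results_by_system = {
    "face":    [("Jane Doe", 0.92)],
    "ocr":     [("Active Drink", 0.71), ("United States", 0.40)],
    "product": [("beverage bottle", 0.85)],
}

ranked = merge_by_relevance(results_by_system)
# → [("face", "Jane Doe"), ("product", "beverage bottle"),
#    ("ocr", "Active Drink"), ("ocr", "United States")]
```

The alternative policies in the text (all of the best system's results first, or one top result per system first) would differ only in the sort key.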
The results ranking and formatting module 824 also formats the results. In some embodiments, the results are presented in a list format. In some embodiments, the results are presented by means of an interactive results document. In some embodiments, both an interactive results document and a list of results are presented. In some embodiments, the type of query dictates how the results are presented. For example, if more than one searchable subject is detected in the visual query, then an interactive results document is produced, while if only one searchable subject is detected, the results are displayed in list format only.
The results document creation module 826 is used to create an interactive search results document. The interactive search results document may have one or more detected and searched subjects. The bounding box creation module 828 creates a bounding box around one or more of the searched subjects. The bounding boxes may be rectangular boxes, or may outline the shape(s) of the subject(s). The link creation module 830 creates links to the search results associated with their respective subjects in the interactive search results document. In some embodiments, clicking within the bounding box area activates the corresponding link inserted by the link creation module.
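As a rough sketch of what the results document creation module might emit, the structure below pairs each recognized sub-region with a bounding box and a link that activates that region's search results. All field names and the URL scheme are invented; an actual implementation might emit HTML or another markup instead of a plain dictionary.

```python
# Hypothetical sketch of an interactive results document: one record per
# recognized sub-portion, pairing a bounding box with a results link.

def create_interactive_result_document(query_id, regions):
    """regions: list of (label, (x, y, w, h)) for each recognized subject."""
    return {
        "query": query_id,
        "identifiers": [
            {
                "label": label,
                "bounding_box": {"x": x, "y": y, "w": w, "h": h},
                "results_link": f"/results/{query_id}/{i}",
            }
            for i, (label, (x, y, w, h)) in enumerate(regions)
        ],
    }

doc = create_interactive_result_document(
    "q42",
    [("person", (10, 5, 40, 80)), ("trademark", (60, 12, 20, 20))],
)
# doc["identifiers"][1]["results_link"] == "/results/q42/1"
```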
The query and annotation database 116 contains information that can be used to improve visual query results. In some embodiments, the user may annotate the image after the visual query results have been presented. Furthermore, in some embodiments the user may annotate the image before sending it to the visual query search system. Pre-annotating may assist the visual query processing by focusing the result set, or by running text based searches on the annotated words in parallel with the visual query searches. In some embodiments, annotated versions of a picture can be made public (e.g., when the user has given permission for publication, for example by designating the image and annotation as not private), so as to be returned as a potential image match hit. For example, if a user has taken a picture of a flower and annotated the image by giving detailed genus and species information about that flower, the user may want that image to be presented to anyone who performs a visual query searching for that flower. In some embodiments, information from the query and annotation database 116 is periodically pushed to the parallel search systems 112, which incorporate relevant portions of the information (if any) into their respective individual databases 114.
Fig. 7 is a block diagram illustrating one of the parallel search systems utilized to process a visual query; specifically, Fig. 7 illustrates a "generic" search system 112-N in accordance with one embodiment of the present invention. This server system is generic only in that it represents any one of the visual query search servers 112-N. The generic server system 112-N typically includes one or more processing units (CPUs) 502, one or more network or other communications interfaces 504, memory 512, and one or more communication buses 514 for interconnecting these components. Memory 512 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 512 may optionally include one or more storage devices remotely located from the CPU(s) 502. Memory 512, or alternatively the non-volatile memory device(s) within memory 512, comprises a non-transitory computer readable storage medium. In some embodiments, memory 512 or the computer readable storage medium of memory 512 stores the following programs, modules and data structures, or a subset thereof:
an operating system 516, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
a network communication module 518, which is used for connecting the generic server system 112-N to other computers via the one or more communication network interfaces 504 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
a search application 520 specific to the particular server system, which may for example be a bar code search application, a color recognition search application, a product recognition search application, an object-or-object-category search application, or the like;
an optional index 522, if the particular search application utilizes an index;
an optional image database 524 for storing images relevant to the particular search application, where the image data stored, if any, depends on the search process type;
an optional results ranking module 526 (sometimes called a relevance scoring module) for ranking the results from the search application; the ranking module may assign a relevance score to each result from the search application, and if no results reach a pre-defined minimum score, may return a null or zero value score to the front end visual query processing server, indicating that the results from this server system are not relevant; and
an annotation module 528 for receiving annotation information from an annotation database (116, Fig. 1), determining whether any of the annotation information is relevant to the particular search application, and incorporating any determined relevant portions of the annotation information into a respective annotation database 530.
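The null-score behavior of the optional results ranking module can be sketched as follows. The threshold value and the scoring inputs are invented; the point is that a system whose best result falls below the minimum reports itself irrelevant rather than returning noise.

```python
# Hypothetical sketch of a relevance scoring module (like 526): results
# are ranked by score, and when nothing clears a pre-defined minimum the
# module returns None (a "null score") so the front end can ignore this
# search system entirely.

MINIMUM_SCORE = 0.5   # invented threshold

def rank_or_null(scored_results, minimum=MINIMUM_SCORE):
    """Return results sorted by score, or None when nothing clears the bar."""
    relevant = [(score, r) for r, score in scored_results if score >= minimum]
    if not relevant:
        return None   # signals the front end that this system had nothing useful
    relevant.sort(reverse=True)
    return [r for _, r in relevant]

good_results = [("match A", 0.9), ("match B", 0.6), ("noise", 0.1)]
bad_results = [("noise 1", 0.2), ("noise 2", 0.05)]

ranked = rank_or_null(good_results)   # → ["match A", "match B"]
empty = rank_or_null(bad_results)     # → None
```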
Fig. 8 is a block diagram illustrating an OCR search system 112-B utilized to process a visual query in accordance with one embodiment of the present invention. The OCR search system 112-B typically includes one or more processing units (CPUs) 602, one or more network or other communications interfaces 604, memory 612, and one or more communication buses 614 for interconnecting these components. Memory 612 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 612 may optionally include one or more storage devices remotely located from the CPU(s) 602. Memory 612, or alternatively the non-volatile memory device(s) within memory 612, comprises a non-transitory computer readable storage medium. In some embodiments, memory 612 or the computer readable storage medium of memory 612 stores the following programs, modules and data structures, or a subset thereof:
an operating system 616, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
a network communication module 618, which is used for connecting the OCR search system 112-B to other computers via the one or more communication network interfaces 604 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
an optical character recognition (OCR) module 620, which tries to recognize text in the visual query and converts the images of letters into characters;
an optional OCR database 114-B, which is utilized by the OCR module 620 to recognize particular fonts, text patterns, and other characteristics unique to letter recognition;
an optional spell check module 622, which improves the conversion of letter images into characters by checking the converted words against a dictionary and replacing potentially mis-converted letters in words that otherwise match a dictionary word;
an optional named entity recognition module 624, which searches for named entities within the converted text, sends the recognized named entities as terms in a term query to the term query server system (118, Fig. 1), and provides the results from the term query server system as links embedded in the OCRed text in association with the recognized named entities;
an optional text match application 632, which improves the conversion of letter images into characters by checking converted segments (such as converted sentences and paragraphs) against a database of text segments and replacing potentially mis-converted letters in OCRed text segments that otherwise match a text segment from the database; in some embodiments the text segment found by the text match application is provided to the user as a link (for example, if the user scanned one page of the New York Times, the text match application may provide a link to the entire article posted on the New York Times website);
a results ranking and formatting module 626 for formatting the OCRed results for presentation, formatting the optional links to named entities, and optionally ranking any related results from the text match application; and
an optional annotation module 628 for receiving annotation information from an annotation database (116, Fig. 1), determining whether any of the annotation information is relevant to the OCR search system, and incorporating any determined relevant portions of the annotation information into a respective annotation database 630.
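A toy version of the dictionary-based correction performed by the spell check module might look like the following. Here `difflib` stands in for the letter-confusion model a real OCR engine would use, and the dictionary, cutoff value, and sample words are invented: a word that closely matches a dictionary word is snapped to it, while unrecognizable words pass through unchanged.

```python
# Hypothetical sketch of an OCR spell check step (like module 622):
# converted words are checked against a dictionary, and near-misses are
# replaced by the closest dictionary word.
import difflib

DICTIONARY = {"active", "drink", "united", "states"}

def correct_ocr_word(word):
    """Keep dictionary words; otherwise snap to the closest dictionary word."""
    if word in DICTIONARY:
        return word
    close = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.75)
    return close[0] if close else word

ocr_output = ["activ", "drink", "un1ted", "xyzzy"]
corrected = [correct_ocr_word(w) for w in ocr_output]
# → ["active", "drink", "united", "xyzzy"]
```

A real spell check module would weight substitutions by which letter shapes the OCR engine tends to confuse (e.g., "1" vs "i"), rather than by generic string similarity.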
Fig. 9 is a block diagram illustrating a facial recognition search system 112-A utilized to process a visual query in accordance with one embodiment of the present invention. The facial recognition search system 112-A typically includes one or more processing units (CPUs) 902, one or more network or other communications interfaces 904, memory 912, and one or more communication buses 914 for interconnecting these components. Memory 912 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 912 may optionally include one or more storage devices remotely located from the CPU(s) 902. Memory 912, or alternatively the non-volatile memory device(s) within memory 912, comprises a non-transitory computer readable storage medium. In some embodiments, memory 912 or the computer readable storage medium of memory 912 stores the following programs, modules and data structures, or a subset thereof:
an operating system 916, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
a network communication module 918, which is used for connecting the facial recognition search system 112-A to other computers via the one or more communication network interfaces 904 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
a facial recognition search application 920 for searching a facial image database 114-A for facial images that match the face(s) presented in the visual query, and searching a social network database 922 for information relevant to each match found in the facial image database 114-A;
a facial image database 114-A for storing one or more facial images for a plurality of users; optionally, the facial image database includes facial images for people other than users, such as family members and others known by users who have been identified as appearing in images included in the facial image database 114-A; optionally, the facial image database includes facial images obtained from external sources, such as vendors of facial images that are legally in the public domain;
optionally, a social network database 922, which contains information regarding users of the social network, such as name, address, occupation, group memberships, social network connections, current GPS location of mobile devices, sharing preferences, interests, age, hometown, personal statistics, work information, etc., as discussed in more detail with reference to Fig. 12A;
a results ranking and formatting module 924 for ranking (e.g., assigning a relevance and/or match quality score to) the potential facial matches from the facial image database 114-A and formatting the results for presentation; in some embodiments, the ranking or scoring of results utilizes relevant information retrieved from the aforementioned social network database; in some embodiments, the formatted search results include the potential image matches as well as a subset of information from the social network database; and
an annotation module 926 for receiving annotation information from an annotation database (116, Fig. 1), determining whether any of the annotation information is relevant to the facial recognition search system, and storing any determined relevant portions of the annotation information into a respective annotation database 928.
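The idea of using social network information in the ranking of facial matches can be sketched as follows. The connection data, score values, and boost weight are all invented for illustration: a candidate's raw visual match score is boosted when the social network database indicates a connection to the querying user, so socially connected matches can overtake visually stronger but unconnected ones.

```python
# Hypothetical sketch of socially-informed face match ranking: a raw
# visual match score gets a fixed boost when the candidate is connected
# to the querying user in the social network database.

SOCIAL_CONNECTIONS = {"alice": {"bob", "carol"}}   # invented data

def rank_face_matches(user, matches, boost=0.2):
    """matches: list of (candidate, visual_score). Connected people rank higher."""
    def adjusted(item):
        candidate, score = item
        if candidate in SOCIAL_CONNECTIONS.get(user, set()):
            score += boost
        return score
    return [c for c, _ in sorted(matches, key=adjusted, reverse=True)]

matches = [("dave", 0.80), ("bob", 0.70), ("erin", 0.75)]
ranked = rank_face_matches("alice", matches)
# "bob" (0.70 + 0.20 boost) overtakes "dave" (0.80) and "erin" (0.75)
```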
Fig. 10 is a block diagram illustrating an image-to-terms search system 112-C utilized to process a visual query in accordance with one embodiment of the present invention. In some embodiments, the image-to-terms search system recognizes objects (instance recognition) in the visual query. In other embodiments, the image-to-terms search system recognizes object categories (type recognition) in the visual query. In some embodiments, the image-to-terms system recognizes both objects and object categories. The image-to-terms search system returns potential term matches for images in the visual query. The image-to-terms search system 112-C typically includes one or more processing units (CPUs) 1002, one or more network or other communications interfaces 1004, memory 1012, and one or more communication buses 1014 for interconnecting these components. Memory 1012 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 1012 may optionally include one or more storage devices remotely located from the CPU(s) 1002. Memory 1012, or alternatively the non-volatile memory device(s) within memory 1012, comprises a non-transitory computer readable storage medium. In some embodiments, memory 1012 or the computer readable storage medium of memory 1012 stores the following programs, modules and data structures, or a subset thereof:
an operating system 1016, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
a network communication module 1018, which is used for connecting the image-to-terms search system 112-C to other computers via the one or more communication network interfaces 1004 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
an image-to-terms search application 1020, which searches the image search database 114-C for images matching the subject(s) in the visual query;
an image search database 114-C, which can be searched by the search application 1020 to find images similar to the subject(s) of the visual query;
a terms-to-image inverse index 1022, which stores the textual terms used by users when searching for images using a text based query search engine 1006;
a results ranking and formatting module 1024 for ranking the potential image matches and/or ranking the terms associated with the potential image matches identified in the terms-to-image inverse index 1022; and
an annotation module 1026 for receiving annotation information from an annotation database (116, Fig. 1), determining whether any of the annotation information is relevant to the image-to-terms search system 112-C, and storing any determined relevant portions of the annotation information into a respective annotation database 1028.
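The role of the terms-to-image inverse index can be sketched in the other direction: given images that visually match the query, look up which text terms users previously typed when those images were retrieved, and surface the most frequent terms as candidate words for the query. The index contents and function names below are invented for illustration.

```python
# Hypothetical sketch of using a terms-to-image inverse index (like 1022):
# count how many of the visually matched images each text term points at,
# and return the most frequent terms as potential term matches.
from collections import Counter

# term -> images previously retrieved under that term (invented data)
TERM_TO_IMAGES = {
    "bottle": {"img1", "img2"},
    "drink": {"img2", "img3"},
    "logo": {"img4"},
}

def terms_for_matches(matching_images, top_n=2):
    """Rank terms by how many of the matched images they are associated with."""
    counts = Counter()
    for term, images in TERM_TO_IMAGES.items():
        counts[term] += len(images & matching_images)
    return [term for term, n in counts.most_common(top_n) if n > 0]

terms = terms_for_matches({"img2", "img3"})
# → ["drink", "bottle"]: "drink" covers both matched images, "bottle" one
```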
Figures 5 through 10 are intended more as functional descriptions of the various features which may be present in a set of computer systems than as structural schematics of the embodiments described herein. In practice, and as recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some items shown separately in these figures could be implemented on single servers, and single items could be implemented by one or more servers. The actual number of systems used to implement visual query processing, and how features are allocated among them, will vary from one implementation to another.
Each of the methods described herein may be governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of one or more servers or clients. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. Each of the operations shown in Figures 5 through 10 may correspond to instructions stored in a computer memory or non-transitory computer readable storage medium.
Fig. 11 illustrates a client system 102 with a screen shot of an exemplary visual query 1102. The client system 102 shown in Fig. 11 is a mobile device such as a cellular telephone, portable music player, or portable emailing device. The client system 102 includes a display 706 and one or more input means 708, such as the buttons shown in this figure. In some embodiments, the display 706 is a touch sensitive display 709. In embodiments having a touch sensitive display 709, soft buttons displayed on the display 709 may optionally replace some or all of the electromechanical buttons 708. Touch sensitive displays are also helpful in interacting with the visual query results, as explained in more detail below. The client system 102 also includes an image capture mechanism such as a camera 710.
Fig. 11 illustrates a visual query 1102 which is a photograph or video frame of a package of goods on a shelf of a store. In the embodiments described here, the visual query is a two dimensional image having a resolution corresponding to the size of the visual query in pixels in each of two dimensions. The visual query 1102 in this example is a two dimensional image of three dimensional objects. The visual query 1102 includes background elements, a product package 1104, and a variety of types of entities on the package including an image of a person 1106, an image of a trademark 1108, an image of a product 1110, and a variety of textual elements 1112.
As explained with reference to Fig. 3, the visual query 1102 is sent to the front end server 110, which sends the visual query 1102 to a plurality of parallel search systems (112A-N), receives the results, and creates an interactive results document.
Figures 12A and 12B each illustrate a client system 102 with a screen shot of an embodiment of an interactive results document 1200. The interactive results document 1200 includes one or more visual identifiers 1202 of respective sub-portions of the visual query 1102, each of which includes a user selectable link to a subset of search results. Figures 12A and 12B illustrate an interactive results document 1200 with visual identifiers that are bounding boxes 1202 (e.g., bounding boxes 1202-1, 1202-2, 1202-3). In the embodiments shown in Figures 12A and 12B, the user activates the display of the search results corresponding to a particular sub-portion by tapping on the activation region inside the space outlined by its bounding box 1202. For example, the user would activate the search results corresponding to the image of the person by tapping on the bounding box 1306 (Fig. 13) surrounding the image of the person. In other embodiments, the selectable link is selected using a mouse or keyboard rather than a touch sensitive display. In some embodiments, the first corresponding search result is displayed when a user previews a bounding box 1202 (i.e., when the user single clicks, taps once, or hovers a pointer over the bounding box). The user activates the display of a plurality of corresponding search results when the user selects the bounding box (i.e., when the user double clicks, taps twice, or uses another mechanism to indicate selection).
In Figures 12A and 12B, the visual identifiers are bounding boxes 1202 surrounding sub-portions of the visual query. Fig. 12A illustrates bounding boxes 1202 that are square or rectangular. Fig. 12B illustrates a bounding box 1202 that outlines the boundary of an identifiable entity in the sub-portion of the visual query, such as the bounding box 1202-3 for the drink bottle. In some embodiments, a respective bounding box 1202 includes smaller bounding boxes 1202 within it. For example, in Figures 12A and 12B, the bounding box identifying the package 1202-1 surrounds the bounding box identifying the trademark 1202-2 and all of the other bounding boxes 1202. In some embodiments which include text, active hot links 1204 for some of the textual terms are also included. Fig. 12B shows an example where "Active Drink" and "United States" are displayed as hot links 1204. The search results corresponding to these terms are results received from the term query server system 118, whereas the results corresponding to the bounding boxes are results from the query-by-image search systems.
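Because bounding boxes can nest (the package box surrounds the trademark box), a client handling a tap has to decide which box's results to activate. One plausible resolution, not prescribed by the text, is to activate the smallest box containing the touch point, on the idea that the most specific sub-region is intended. The sketch below illustrates that policy with invented coordinates and labels.

```python
# Hypothetical sketch of hit-testing a tap against nested bounding boxes:
# when boxes overlap, the smallest box containing the touch point wins.

def contains(box, x, y):
    bx, by, bw, bh = box["rect"]
    return bx <= x < bx + bw and by <= y < by + bh

def innermost_hit(boxes, x, y):
    """Return the label of the smallest bounding box containing (x, y)."""
    hits = [b for b in boxes if contains(b, x, y)]
    if not hits:
        return None
    return min(hits, key=lambda b: b["rect"][2] * b["rect"][3])["label"]

boxes = [
    {"label": "package", "rect": (0, 0, 100, 100)},
    {"label": "trademark", "rect": (10, 10, 20, 20)},
]

inner_label = innermost_hit(boxes, 15, 15)   # inside both → "trademark"
outer_label = innermost_hit(boxes, 80, 80)   # only the package → "package"
```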
Figure 13 illustrates a client system 102 with a screen shot of an interactive results document 1200 that is coded by type of recognized entity in the visual query. The visual query of Figure 11 contains an image of a person 1106, an image of a trademark 1108, an image of a product 1110, and a variety of textual elements 1112. As such, the interactive results document 1200 displayed in Figure 13 includes bounding boxes 1202 around a person 1306, a trademark 1308, a product 1310, and the two textual areas 1312. The bounding boxes of Figure 13 are each displayed with different cross-hatching, which represents differently colored transparent bounding boxes 1202. In some embodiments, the visual identifiers of the bounding boxes (and/or labels or other visual identifiers in the interactive results document 1200) are formatted for presentation in visually distinctive manners such as overlay color, overlay pattern, label background color, label background pattern, label font color, and bounding box border color. The type coding for particular recognized entities is shown with respect to the bounding boxes in Figure 13, but coding by type can also be applied to visual identifiers that are labels.
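The type coding described above amounts to a lookup from recognized-entity type to a presentation style. The sketch below is hypothetical: the style attributes follow the ones this paragraph lists (overlay pattern, border color, and so on), but the concrete values and names are invented.

```python
# Hypothetical mapping from recognized-entity type to a visually
# distinctive format; the concrete colors and patterns are invented.
TYPE_STYLES = {
    "person":    {"border_color": "#e53935", "overlay_pattern": "diagonal"},
    "trademark": {"border_color": "#1e88e5", "overlay_pattern": "cross"},
    "product":   {"border_color": "#43a047", "overlay_pattern": "dots"},
    "text":      {"border_color": "#fb8c00", "overlay_pattern": "none"},
}

DEFAULT_STYLE = {"border_color": "#757575", "overlay_pattern": "none"}

def style_for(entity_type: str) -> dict:
    """Format a visual identifier according to the type of entity
    recognized in its sub-portion of the visual query."""
    return TYPE_STYLES.get(entity_type, DEFAULT_STYLE)
```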
Figure 14 illustrates a client device 102 with a screen shot of an interactive results document 1200 with labels 1402, which are the visual identifiers of respective sub-portions of the visual query 1102 of Figure 11. The label visual identifiers 1402 each include a user-selectable link to a subset of corresponding search results. In some embodiments, the selectable link is identified by descriptive text displayed within the area of the label 1402. Some embodiments include a plurality of links within one label 1402. For example, in Figure 14, the label hovering over the image of the woman drinking includes a link to facial recognition results for the woman and a link to image recognition results for that particular picture (e.g., images of other products or advertisements using the same picture).
In Figure 14, the labels 1402 are displayed as partially transparent areas with text that are located over their respective sub-portions of the interactive results document. In other embodiments, a respective label is positioned near but not located over its respective sub-portion of the interactive results document. In some embodiments, the labels are coded by type in the same manner as discussed with reference to Figure 13. In some embodiments, the user activates the display of the search results corresponding to the particular sub-portion associated with a label 1302 by touching the activation region inside the space outlined by the edges or periphery of the label 1302. The same previewing and selection functions discussed above with reference to the bounding boxes of Figures 12A and 12B also apply to visual identifiers that are labels 1402.
Figure 15 illustrates a screen shot of an interactive results document 1200 and the original visual query 1102 displayed concurrently with a results list 1500. In some embodiments, the interactive results document 1200 is displayed by itself, as shown in Figures 12-14. In other embodiments, the interactive results document 1200 is displayed concurrently with the original visual query, as shown in Figure 15. In some embodiments, the list of visual query results 1500 is concurrently displayed along with the original visual query 1102 and/or the interactive results document 1200. The type of client system and the amount of room on the display 706 may determine whether the results list 1500 is displayed concurrently with the interactive results document 1200. In some embodiments, the client system 102 receives both the results list 1500 and the interactive results document 1200 (in response to a visual query submitted to the visual query server system), but only displays the results list 1500 when the user scrolls below the interactive results document 1200. In some of these embodiments, the client system 102 displays the results corresponding to a user-selected visual identifier 1202/1402 without querying the server again, because the results list 1500 is received by the client system 102 in response to the visual query and is then stored locally at the client system 102.
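The local-storage behavior described here (the results list arrives once with the interactive results document, so selecting a visual identifier needs no second server round trip) can be sketched as a small client-side cache. The class and method names are invented for illustration.

```python
class ResultsCache:
    """Client-side store for the results list that arrives together
    with the interactive results document."""

    def __init__(self):
        self._by_identifier = {}

    def store(self, response):
        # response maps visual identifier ids (bounding boxes or labels)
        # to the search results for the corresponding sub-portion
        self._by_identifier.update(response)

    def results_for(self, identifier_id):
        # served locally: no second round trip to the visual query server
        return self._by_identifier.get(identifier_id, [])
```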
In some embodiments, the results list 1500 is organized into categories 1502. Each category contains at least one result 1503. In some embodiments, the category titles are highlighted to distinguish them from the results 1503. The categories 1502 are ordered according to their calculated category weight. In some embodiments, the category weight is a combination of the weights of the highest N results in that category. As such, the category that has likely produced more relevant results is displayed first. In embodiments where more than one category 1502 is returned for the same recognized entity (such as the facial image recognition match and the image match shown in Figure 15), the category displayed first has the higher category weight.
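The category ordering just described can be sketched as follows. The paragraph says the category weight is a combination of the weights of the highest N results; summing the top N is one plausible combination, and N=3 is an assumed value, so treat both choices as illustrative.

```python
def category_weight(result_weights, n=3):
    # one plausible "combination of the highest N result weights": their sum
    return sum(sorted(result_weights, reverse=True)[:n])

def order_categories(categories, n=3):
    """categories maps a category title to the weights of its results.
    Categories that likely produced more relevant results come first."""
    return sorted(categories,
                  key=lambda title: category_weight(categories[title], n),
                  reverse=True)
```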
As explained with reference to Figure 3, in some embodiments, when a selectable link in the interactive results document 1200 is selected by the user of the client system 102, the cursor automatically moves to the appropriate category 1502 or to the first result 1503 in that category. Alternatively, when a selectable link in the interactive results document is selected by the user of the client system 102, the results list 1500 is re-ordered so that the category relevant to the selected link is displayed first. This is accomplished, for example, either by coding the selectable links with information identifying the corresponding search results, or by coding the search results to indicate the corresponding selectable links or the corresponding result categories.
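The alternative behavior, re-ordering the results list so that the category relevant to the selected link comes first, can be sketched as below. The explicit link-to-category mapping stands in for the coding of selectable links described in this paragraph; the function and argument names are invented.

```python
def reorder_for_selection(categories, selected_link, link_to_category):
    """Move the category relevant to the selected link to the front of
    the results list, keeping the other categories in their old order."""
    target = link_to_category.get(selected_link)
    if target not in categories:
        return list(categories)
    return [target] + [c for c in categories if c != target]
```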
In some embodiments, the categories of the search results correspond to the search by image query systems that produced those search results. For example, in Figure 15, some of the categories are product match 1506, logo match 1508, facial recognition match 1510, and image match 1512. The original visual query 1102 and/or the interactive results document 1200 may be similarly displayed with a category title such as query 1504. Similarly, results from any term search performed by the term query server may also be displayed as a separate category, such as web results 1514. In other embodiments, more than one entity in a visual query will produce results from the same search by image query system. For example, the visual query could include images of two different faces that would return separate results from the facial recognition search system. As such, in some embodiments, the categories 1502 are divided by recognized entity rather than by search system. In some embodiments, an image of the recognized entity is displayed in the recognized entity category header 1502, so that the results for that recognized entity are distinguishable from the results for another recognized entity, even though both sets of results were produced by the same search by image query system. For example, in Figure 15, the product match category 1506 includes two entities and, as such, two entity categories 1502: a boxed product 1516 and a bottled product 1518, each of which has a plurality of corresponding search results 1503. In some embodiments, the categories may be divided by recognized entity and by type of search by image query system; for example, in Figure 15, two separate entities returned relevant results under the product match category.
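Dividing categories by recognized entity rather than, or in addition to, by search system is essentially a grouping step. In the sketch below, grouping on a (search system, entity) key keeps the two product entities of Figure 15 in separate categories even though the same search-by-image system produced both; the tuple layout is an assumption made for illustration.

```python
from collections import OrderedDict

def group_by_entity(results):
    """results is a list of (search_system, entity, result) tuples.
    Grouping on (search_system, entity) gives each recognized entity
    its own category, even when one system recognized several entities."""
    groups = OrderedDict()
    for system, entity, result in results:
        groups.setdefault((system, entity), []).append(result)
    return groups
```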
In some embodiments, the results 1503 include thumbnail images. For example, as shown for the facial recognition match results in Figure 15, smaller versions (also called thumbnail images) of the pictures matching the face of "Actress X" and "Social Network Friend Y" are displayed along with some textual description, such as the name of the person in the image.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (28)

1. A computer-implemented method of processing a visual query, comprising:
at a server system having one or more processors and memory storing one or more programs for execution by the one or more processors:
receiving a visual query from a client system;
processing the visual query by sending the visual query to a plurality of parallel search systems for simultaneous processing, wherein each search system of the plurality of search systems implements a respective visual query search process of a plurality of visual query search processes;
receiving a plurality of search results from one or more of the plurality of parallel search systems;
creating an interactive results document, the interactive results document comprising one or more visual identifiers of respective sub-portions of the visual query and, for each visual identifier, at least one user-selectable link to at least one of the search results; and
sending the interactive results document to the client system.
2. The computer-implemented method of claim 1, wherein at least one search result comprises data related to the respective sub-portion of the visual query.
3. The computer-implemented method of any one of claims 1-2, further comprising: sending the text of a respective sub-portion to a text-based query processing system.
4. The computer-implemented method of any one of claims 1-3, wherein, when the sub-portion of the visual query corresponding to a respective visual identifier contains text comprising one or more terms, the search results corresponding to the respective visual identifier include results of a term query search on at least one of the terms in the text.
5. The computer-implemented method of any one of claims 1-3, wherein, when the sub-portion of the visual query corresponding to a respective visual identifier contains a person's face, the search results corresponding to the respective visual identifier include one or more of: a name, contact information, account information, address information, a current location of a related mobile device associated with the person whose face is contained in the selectable sub-portion, other images of the person whose face is contained in the selectable sub-portion, and potential image matches for the person's face.
6. The computer-implemented method of any one of claims 1-3, wherein, when the sub-portion of the visual query corresponding to a respective visual identifier contains a product, the search results corresponding to the respective visual identifier include one or more of: product information, a product review, an option to initiate purchase of the product, an option to initiate a bid on the product, a list of similar products, and a list of related products.
7. The computer-implemented method of any one of claims 1-6, wherein a respective visual identifier of the one or more visual identifiers is formatted for presentation in a visually distinctive manner in accordance with a type of recognized entity in the respective sub-portion of the visual query.
8. The computer-implemented method of claim 7, wherein the respective visual identifier is formatted for presentation in a visually distinctive manner selected from the group consisting of: overlay color, overlay pattern, label background color, label background pattern, label font color, and border color.
9. The computer-implemented method of any one of claims 1-8, wherein a respective visual identifier of the one or more visual identifiers comprises a label comprising at least one term associated with the respective sub-portion of the visual query, wherein the label is formatted for presentation in the interactive results document on or near the respective sub-portion.
10. The computer-implemented method of any one of claims 1-9, wherein the sending further comprises: sending a subset of the plurality of search results in a search results list format for presentation with the interactive results document.
11. The computer-implemented method of claim 10, further comprising:
receiving a user selection of the at least one user-selectable link; and
identifying a search result in the search results list corresponding to the selected link.
12. The computer-implemented method of any one of claims 1-11, wherein the one or more visual identifiers comprise one or more bounding boxes around respective sub-portions of the visual query.
13. The computer-implemented method of claim 12, wherein each of the bounding boxes outlines the respective sub-portion of the visual query.
14. The computer-implemented method of any one of claims 12-13, wherein at least one bounding box includes one or more smaller bounding boxes.
15. The computer-implemented method of any one of claims 12-14, wherein each of the bounding boxes includes a user-selectable link to one or more search results, wherein the user-selectable link has an activation region corresponding to the sub-portion of the visual query surrounded by the bounding box.
16. The computer-implemented method of any one of claims 1-11, wherein a respective user-selectable link to one or more of the search results has an activation region corresponding to the sub-portion of the visual query that is associated with a respective visual identifier.
17. A server system for processing a visual query, comprising:
one or more central processing units for executing programs;
memory storing one or more programs to be executed by the one or more central processing units;
the one or more programs comprising instructions for:
receiving a visual query from a client system;
processing the visual query by sending the visual query to a plurality of parallel search systems for simultaneous processing, wherein each search system of the plurality of search systems implements a respective visual query search process of a plurality of visual query search processes;
receiving a plurality of search results from one or more of the plurality of parallel search systems;
creating an interactive results document, the interactive results document comprising one or more visual identifiers of respective sub-portions of the visual query and, for each visual identifier, at least one user-selectable link to at least one of the search results; and
sending the interactive results document to the client system.
18. The system of claim 17, wherein the one or more visual identifiers comprise one or more bounding boxes around respective sub-portions of the visual query.
19. The system of claim 18, wherein each of the bounding boxes outlines the respective sub-portion of the visual query.
20. The system of any one of claims 18-19, wherein each of the bounding boxes includes a user-selectable link to one or more search results, wherein the user-selectable link has an activation region corresponding to the sub-portion of the visual query surrounded by the bounding box.
21. The system of any one of claims 17-20, wherein a respective visual identifier of the one or more visual identifiers comprises a label comprising at least one term associated with the respective sub-portion of the visual query, wherein the label is formatted for presentation in the interactive results document on or near the respective sub-portion.
22. A non-transitory computer-readable storage medium storing one or more programs configured for execution by a computer, the one or more programs comprising instructions for:
receiving a visual query from a client system;
processing the visual query by sending the visual query to a plurality of parallel search systems for simultaneous processing, wherein each search system of the plurality of search systems implements a respective visual query search process of a plurality of visual query search processes;
receiving a plurality of search results from one or more of the plurality of parallel search systems;
creating an interactive results document, the interactive results document comprising one or more visual identifiers of respective sub-portions of the visual query and, for each visual identifier, at least one user-selectable link to at least one of the search results; and
sending the interactive results document to the client system.
23. The computer-readable storage medium of claim 22, wherein the one or more visual identifiers comprise one or more bounding boxes around respective sub-portions of the visual query.
24. The computer-readable storage medium of claim 23, wherein each of the bounding boxes outlines the respective sub-portion of the visual query.
25. The computer-readable storage medium of any one of claims 23-24, wherein each of the bounding boxes includes a user-selectable link to one or more search results, wherein the user-selectable link has an activation region corresponding to the sub-portion of the visual query surrounded by the bounding box.
26. The computer-readable storage medium of any one of claims 22-25, wherein a respective visual identifier of the one or more visual identifiers comprises a label comprising at least one term associated with an image in the respective sub-portion of the visual query, wherein the label is formatted for presentation in the interactive results document on or near the respective sub-portion.
27. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of a computer system, the one or more programs comprising instructions to be executed by the one or more processors so as to perform the method of any one of claims 1-16.
28. A server system, comprising:
one or more processors; and
memory storing one or more programs for execution by the one or more processors, the one or more programs comprising instructions to be executed by the one or more processors so as to perform the method of any one of claims 1-16.
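The server-side flow recited in claim 1 (and restated in claims 17 and 22) can be sketched as follows: the server receives a visual query, sends it to a plurality of parallel search systems for simultaneous processing, and collects their search results. This is an illustrative sketch, not the patented implementation; each search system is modeled as a plain callable, and the thread-pool approach is one of many ways to run the searches in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def process_visual_query(visual_query, search_systems):
    """Send the visual query to every search system in parallel and
    gather the search results each one returns. Each entry in
    search_systems is a callable implementing one visual query search
    process (e.g. OCR, facial recognition, product recognition)."""
    with ThreadPoolExecutor(max_workers=max(1, len(search_systems))) as pool:
        futures = [pool.submit(system, visual_query) for system in search_systems]
        per_system_results = [f.result() for f in futures]
    # flatten the per-system lists into one collection of search results
    return [result for results in per_system_results for result in results]
```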
CN2010800451970A 2009-08-07 2010-08-05 User interface for presenting search results for multiple regions of a visual query Pending CN102667764A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US23239709P 2009-08-07 2009-08-07
US61/232,397 2009-08-07
US26612209P 2009-12-02 2009-12-02
US61/266,122 2009-12-02
US12/850,513 2010-08-04
US12/850,513 US9087059B2 (en) 2009-08-07 2010-08-04 User interface for presenting search results for multiple regions of a visual query
PCT/US2010/044604 WO2011017558A1 (en) 2009-08-07 2010-08-05 User interface for presenting search results for multiple regions of a visual query

Publications (1)

Publication Number Publication Date
CN102667764A true CN102667764A (en) 2012-09-12

Family

ID=43544672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010800451970A Pending CN102667764A (en) 2009-08-07 2010-08-05 User interface for presenting search results for multiple regions of a visual query

Country Status (8)

Country Link
EP (1) EP2462518A1 (en)
JP (2) JP2013501976A (en)
KR (1) KR101670956B1 (en)
CN (1) CN102667764A (en)
AU (1) AU2010279334A1 (en)
BR (1) BR112012002803A2 (en)
CA (1) CA2770186C (en)
WO (1) WO2011017558A1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103488769A (en) * 2013-09-27 2014-01-01 中国科学院自动化研究所 Search method of landmark information mined based on multimedia data
CN104536995A (en) * 2014-12-12 2015-04-22 北京奇虎科技有限公司 Method and system both for searching based on terminal interface touch operation
CN105373552A (en) * 2014-08-25 2016-03-02 中兴通讯股份有限公司 Display terminal based data processing method
CN105718500A (en) * 2014-12-18 2016-06-29 三星电子株式会社 Text-based content management method and apparatus of electronic device
WO2016101768A1 (en) * 2014-12-26 2016-06-30 北京奇虎科技有限公司 Terminal and touch operation-based search method and device
CN105765577A (en) * 2014-09-29 2016-07-13 微软技术许可有限责任公司 Customizable data services
CN106489156A (en) * 2015-02-04 2017-03-08 瓦特博克有限公司 System and method for extracting file and picture from the image for characterizing multiple documents
CN106484817A (en) * 2016-09-26 2017-03-08 广州致远电子股份有限公司 A kind of data search method and system
CN107004007A (en) * 2014-11-12 2017-08-01 微软技术许可有限责任公司 Multitask in many Meta Search Engines and search
CN107624177A (en) * 2015-05-13 2018-01-23 微软技术许可有限责任公司 Automatic vision for the option for the audible presentation for improving user's efficiency and interactive performance is shown
CN108021601A (en) * 2016-10-28 2018-05-11 奥多比公司 Searched for using digital painting canvas to carry out the Spatial Semantics of digital-visual media
CN108369594A (en) * 2015-11-23 2018-08-03 超威半导体公司 Method and apparatus for executing parallel search operation
CN108431828A (en) * 2015-10-25 2018-08-21 阿尔瓦阿尔塔有限公司 It is recognizable to encapsulate, for preparing the system and process of edible product based on the recognizable encapsulation
CN108431829A (en) * 2015-08-03 2018-08-21 奥兰德股份公司 System and method for searching for product in catalogue
CN108475335A (en) * 2016-01-27 2018-08-31 霍尼韦尔国际公司 The Method and kit for of the postmortem analysis of tripping field device in process industrial for using optical character identification & intelligent character recognitions
CN109168069A (en) * 2018-09-03 2019-01-08 聚好看科技股份有限公司 A kind of recognition result subregion display methods, device and smart television
CN109189289A (en) * 2018-09-03 2019-01-11 聚好看科技股份有限公司 A kind of method and device generating icon based on screenshotss image
CN109313824A (en) * 2016-07-26 2019-02-05 谷歌有限责任公司 Interactive geography non-contextual navigation tool
CN111801680A (en) * 2018-03-05 2020-10-20 A9.com股份有限公司 Visual feedback of process state
CN112417192A (en) * 2019-08-21 2021-02-26 上银科技股份有限公司 Image judging system of linear transmission device and image judging method thereof
TWI768232B (en) * 2019-08-07 2022-06-21 上銀科技股份有限公司 Image decision system of linear transmission device and its image decision method

Families Citing this family (13)

Publication number Priority date Publication date Assignee Title
JP6153086B2 2011-12-14 2017-06-28 NEC Corporation Video processing system, video processing method, video processing apparatus for portable terminal or server, and control method and control program therefor
CN102594896B * 2012-02-23 2015-02-11 Guangzhou Shangjing Network Technology Co., Ltd. Electronic photo sharing method and system
JP6046393B2 * 2012-06-25 2016-12-14 Saturn Licensing LLC Information processing apparatus, information processing system, information processing method, and recording medium
CN104583983B * 2012-08-31 2018-04-24 Hewlett-Packard Development Company, L.P. Active regions of an image with accessible links
RU2580431C2 2014-03-27 2016-04-10 Yandex LLC Method and server for processing search queries and computer readable medium
KR101588950B1 * 2014-03-28 2016-01-26 S-1 Corporation System for distinction of market share, method for distinction of market share and computer readable recording medium storing program executing the system
US10102565B2 * 2014-11-21 2018-10-16 Paypal, Inc. System and method for content integrated product purchasing
CN104462423A * 2014-12-15 2015-03-25 Baidu Online Network Technology (Beijing) Co., Ltd. Search method, search device and mobile terminal
RU2015111360A 2015-03-30 2016-10-20 Yandex LLC Method (options) and system (options) for processing a search query
DE102016201373A1 * 2016-01-29 2017-08-03 Robert Bosch Gmbh Method for recognizing objects, in particular of three-dimensional objects
JP7379059B2 * 2019-10-02 2023-11-14 Canon Inc. Intermediate server device, information processing device, communication method
CN114581360B * 2021-04-01 2024-03-12 Chint Group R&D Center (Shanghai) Co., Ltd. Photovoltaic module label detection method, device, equipment and computer storage medium
CN113901257B 2021-10-28 2023-10-27 Beijing Baidu Netcom Science and Technology Co., Ltd. Map information processing method, device, equipment and storage medium

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
JPH09330336A (en) * 1996-06-11 1997-12-22 Sony Corp Information processor
US7016532B2 (en) * 2000-11-06 2006-03-21 Evryx Technologies Image capture and identification system and process
JP2003150617A (en) * 2001-11-12 2003-05-23 Olympus Optical Co Ltd Image processor and program
JP2005165461A (en) * 2003-11-28 2005-06-23 Nifty Corp Information providing device and program
JP4413633B2 * 2004-01-29 2010-02-10 Zeta Bridge Co., Ltd. Information search system, information search method, information search device, information search program, image recognition device, image recognition method and image recognition program, and sales system
US7751805B2 (en) * 2004-02-20 2010-07-06 Google Inc. Mobile image-based information retrieval system
WO2006043319A1 (en) * 2004-10-20 2006-04-27 Fujitsu Limited Terminal and server
US7809722B2 (en) * 2005-05-09 2010-10-05 Like.Com System and method for enabling search and retrieval from image files based on recognized information
US7809192B2 (en) * 2005-05-09 2010-10-05 Like.Com System and method for recognizing objects from images and identifying relevancy amongst images and information
JP2007018166A (en) * 2005-07-06 2007-01-25 Nec Corp Information search device, information search system, information search method, and information search program
JP2007018456A (en) * 2005-07-11 2007-01-25 Nikon Corp Information display device and information display method
JP2007026316A (en) * 2005-07-20 2007-02-01 Yamaha Motor Co Ltd Image management device, image-managing computer program and recording medium recording the same
US8849821B2 (en) * 2005-11-04 2014-09-30 Nokia Corporation Scalable visual search system simplifying access to network and device functionality
US20080267504A1 (en) * 2007-04-24 2008-10-30 Nokia Corporation Method, device and computer program product for integrating code-based and optical character recognition technologies into a mobile visual search

Cited By (29)

Publication number Priority date Publication date Assignee Title
CN103488769A (en) * 2013-09-27 2014-01-01 中国科学院自动化研究所 Search method of landmark information mined based on multimedia data
CN105373552A (en) * 2014-08-25 2016-03-02 中兴通讯股份有限公司 Display terminal based data processing method
CN105765577A (en) * 2014-09-29 2016-07-13 微软技术许可有限责任公司 Customizable data services
CN107004007A (en) * 2014-11-12 2017-08-01 微软技术许可有限责任公司 Multitask in many Meta Search Engines and search
CN104536995A (en) * 2014-12-12 2015-04-22 北京奇虎科技有限公司 Method and system both for searching based on terminal interface touch operation
CN104536995B (en) * 2014-12-12 2016-05-11 北京奇虎科技有限公司 The method and system of searching for based on terminal interface touch control operation
CN105718500A (en) * 2014-12-18 2016-06-29 三星电子株式会社 Text-based content management method and apparatus of electronic device
CN105718500B (en) * 2014-12-18 2021-10-08 三星电子株式会社 Text-based content management method and device for electronic equipment
WO2016101768A1 (en) * 2014-12-26 2016-06-30 北京奇虎科技有限公司 Terminal and touch operation-based search method and device
CN106489156A (en) * 2015-02-04 2017-03-08 瓦特博克有限公司 System and method for extracting file and picture from the image for characterizing multiple documents
CN107624177A (en) * 2015-05-13 2018-01-23 微软技术许可有限责任公司 Automatic vision for the option for the audible presentation for improving user's efficiency and interactive performance is shown
CN107624177B (en) * 2015-05-13 2021-02-12 微软技术许可有限责任公司 Automatic visual display of options for audible presentation for improved user efficiency and interaction performance
CN108431829A (en) * 2015-08-03 2018-08-21 奥兰德股份公司 System and method for searching for product in catalogue
CN108431828A (en) * 2015-10-25 2018-08-21 阿尔瓦阿尔塔有限公司 It is recognizable to encapsulate, for preparing the system and process of edible product based on the recognizable encapsulation
CN108369594A (en) * 2015-11-23 2018-08-03 超威半导体公司 Method and apparatus for executing parallel search operation
CN108369594B (en) * 2015-11-23 2023-11-10 超威半导体公司 Method and apparatus for performing parallel search operations
CN108475335B (en) * 2016-01-27 2022-10-14 霍尼韦尔国际公司 Method for post-inspection analysis of tripped field devices in process industry using optical character recognition, smart character recognition
CN108475335A (en) * 2016-01-27 2018-08-31 霍尼韦尔国际公司 The Method and kit for of the postmortem analysis of tripping field device in process industrial for using optical character identification & intelligent character recognitions
CN109313824A (en) * 2016-07-26 2019-02-05 谷歌有限责任公司 Interactive geography non-contextual navigation tool
CN109313824B (en) * 2016-07-26 2023-10-03 谷歌有限责任公司 Method, system and user equipment for providing interactive geographic context interface
CN106484817A (en) * 2016-09-26 2017-03-08 广州致远电子股份有限公司 A kind of data search method and system
CN108021601B (en) * 2016-10-28 2023-12-05 奥多比公司 Spatial semantic search of digital visual media using digital canvas
CN108021601A (en) * 2016-10-28 2018-05-11 奥多比公司 Searched for using digital painting canvas to carry out the Spatial Semantics of digital-visual media
CN111801680A (en) * 2018-03-05 2020-10-20 A9.com股份有限公司 Visual feedback of process state
CN109189289A (en) * 2018-09-03 2019-01-11 聚好看科技股份有限公司 A kind of method and device generating icon based on screenshotss image
CN109189289B (en) * 2018-09-03 2021-12-24 聚好看科技股份有限公司 Method and device for generating icon based on screen capture image
CN109168069A (en) * 2018-09-03 2019-01-08 聚好看科技股份有限公司 A kind of recognition result subregion display methods, device and smart television
TWI768232B (en) * 2019-08-07 2022-06-21 上銀科技股份有限公司 Image decision system of linear transmission device and its image decision method
CN112417192A (en) * 2019-08-21 2021-02-26 上银科技股份有限公司 Image judging system of linear transmission device and image judging method thereof

Also Published As

Publication number Publication date
KR101670956B1 (en) 2016-10-31
JP6025812B2 (en) 2016-11-16
KR20120055627A (en) 2012-05-31
AU2010279334A1 (en) 2012-03-15
EP2462518A1 (en) 2012-06-13
WO2011017558A1 (en) 2011-02-10
BR112012002803A2 (en) 2019-09-24
CA2770186A1 (en) 2011-02-10
JP2013501976A (en) 2013-01-17
CA2770186C (en) 2018-05-22
JP2015062141A (en) 2015-04-02

Similar Documents

Publication Publication Date Title
CN102625937B Architecture for responding to a visual query
CN102667764A (en) User interface for presenting search results for multiple regions of a visual query
CN102822817B Actionable search results for visual queries
CN102667763A (en) Facial recognition with social network aiding
US9087059B2 (en) User interface for presenting search results for multiple regions of a visual query
CN108959586B (en) Identifying textual terms in response to a visual query
CN103493069A (en) Identifying matching canonical documents in response to a visual query
CN102770862A (en) Hybrid use of location sensor data and visual query to return local listings for visual query
AU2016200659B2 (en) Architecture for responding to a visual query
AU2016201546B2 (en) Facial recognition with social network aiding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120912