US20200334906A9 - Sharing links in an augmented reality environment - Google Patents

Sharing links in an augmented reality environment

Info

Publication number
US20200334906A9
US20200334906A9 (application US16/218,015, US201816218015A)
Authority
US
United States
Prior art keywords
point
interest
content
image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/218,015
Other versions
US10839605B2 (en)
US20190114839A1 (en)
Inventor
David Creighton Mott
Arnab Sanat Kumar Dhua
Colin Jon Taylor
Scott Paul Robertson
William Brendel
Nityananda Jayadevaprakash
Kathy Wing Lam Ma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
A9.com, Inc.
Original Assignee
A9.com, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by A9 com Inc filed Critical A9 com Inc
Priority to US16/218,015 (now US10839605B2)
Publication of US20190114839A1
Publication of US20200334906A9
Application granted
Publication of US10839605B2
Status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 - Services making use of location information
    • H04W4/025 - Services making use of location information using location based information parameters
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C17/00 - Compasses; Devices for ascertaining true or magnetic north for navigation or surveying purposes
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 - Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13 - Receivers

Definitions

  • Portable computing devices are increasingly powerful and affordable, and users are relying upon them to handle various types of tasks. For example, a user can use a portable computing device to search for information about a restaurant, store, or other place of interest before deciding whether to visit. However, in such a situation, the user may have to launch a map, a search engine, or another similar application to look up information such as the location of the place, even if the place is close-by or in view of the user. Further, entering the search query and reviewing the results could take more time than simply checking out the place in the real world.
  • FIG. 1A illustrates an example of a point of interest in the real world in accordance with various embodiments
  • FIGS. 1B and 1C illustrate examples of sharing content and links related to a point of interest in an augmented reality environment in accordance with various embodiments
  • FIGS. 2A and 2B illustrate example processes for sharing content and links to visual elements of a point of interest that can be utilized in accordance with various embodiments
  • FIG. 3A illustrates an example of points of interest as a user is moving through the real world in accordance with various embodiments
  • FIGS. 3B and 3C illustrate examples of unique images of a point of interest in accordance with various embodiments
  • FIG. 4 illustrates an example process for choosing a unique image of a point of interest that can be utilized in accordance with various embodiments
  • FIG. 5 illustrates an example of an augmented reality system for recognizing and tracking a point of interest in the real world in accordance with various embodiments
  • FIGS. 6A and 6B illustrate an example computing device that can be used to implement aspects of the various embodiments
  • FIG. 7 illustrates example components of a computing device such as that illustrated in FIG. 6 , in accordance with various embodiments.
  • FIG. 8 illustrates an environment in which various embodiments can be implemented in accordance with various embodiments.
  • Systems and methods in accordance with various embodiments of the present disclosure overcome deficiencies in conventional approaches to sharing content.
  • various embodiments enable users and business owners to attach content and/or links to visual elements of a place at a physical location, and, in response to a user's portable device pointing at a tagged place, cause the content and/or links to the visual elements of the place to be presented on the portable device.
  • the content and/or links can include various types of information such as, but not limited to, promotional coupons, menus, advertisements, reservation systems, floor plans, videos, audio, wait times, customer reviews, music, chat walls, attractions of the place, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the place on third-party review sites, or other alternative places.
  • content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms.
  • once the content and links are attached to the specific objects of the place, they can be discovered by a user pointing a portable device at those objects in the real world.
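  • As a non-limiting illustration (not from the disclosure), the sketch below shows one way such an anchor could be modeled, tying content or a link to a GPS location, device orientation, compass heading, and visual features; the names and fields (e.g., AnchoredContent, feature_descriptors) are assumptions introduced here for clarity.

```python
# Hypothetical data model; field names and values are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class AnchoredContent:
    poi_id: str                          # identifier of the point of interest
    gps: Tuple[float, float]             # (latitude, longitude) of the place
    compass_heading_deg: Optional[float] # heading from which the anchor was captured
    imu_orientation: Optional[Tuple[float, float, float]]  # (pitch, roll, yaw)
    feature_descriptors: List[bytes]     # visual features of the tagged element (e.g., ORB/SIFT)
    content_type: str                    # "coupon", "menu", "review_link", "video", ...
    payload: str                         # URL or inline content to present
    submitted_by: str                    # user or owner who attached the content

# Example: a coupon an owner might attach to the front window of a restaurant.
coupon = AnchoredContent(
    poi_id="abc-restaurant-120",
    gps=(37.7936, -122.3965),
    compass_heading_deg=270.0,
    imu_orientation=(0.0, 0.0, 268.5),
    feature_descriptors=[],
    content_type="coupon",
    payload="Instant coupon: 10% off lunch today",
    submitted_by="owner",
)
```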
  • At least some embodiments cause content and/or links to a physical location to be presented on a user's device based at least upon one of the proximity of the user to the physical location, a point of view of the user, a user profile of the user (e.g., user demographic and preferences), or a profile of an owner of the physical location.
  • images submitted by users and/or an owner of a point of interest are used as fiducials to assist recognition and tracking of the point of interest.
  • Multiple images of a point of interest taken from different points of view (e.g., crowd-sourced images) can be dynamically used as fiducials for recognition and tracking of the point of interest. As the user's point of view changes, a different unique image (i.e., a different fiducial) can be selected.
  • At least some embodiments provide various methods to control the types of content and links that can be attached to a physical place in an augmented reality environment, and/or how the attached content and links can be presented. For example, through one or more types of authentication processes, an owner of a physical place may get access to an augmented reality environment associated with his or her place. Upon authentication, the owner can provide some inputs on the types of content and links attached to the place, and/or how the attached content and links are presented (e.g., a blank wall or whole business-front, layout, or visual elements to be attached).
  • a suitable communication means (e.g., a canvas or chat blog) can be provided for users to submit content (e.g., text, images, or videos) related to the point of interest.
  • when a user or owner takes an image at a point of interest with a user device, the point of interest can be determined based at least in part upon a point of view of the user, or one or more visual features of the image.
  • the point of view of the user can be determined based at least in part upon GPS locations, IMU orientations, or compass data of the user device.
  • an indication may be provided to the user or owner about the quality of the captured image so that suitable images can be submitted for image matching. For example, an image with unique visual features works better in image matching than one that is featureless.
  • the indication can be scaled (e.g., a scale of 0 to 10, or strong/medium/bad); if the captured image falls below a quality threshold, the image is not allowed to be submitted.
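  • A minimal sketch of such a quality indication, assuming an OpenCV feature detector is used; the 0-10 scale and the 500-keypoint normalization below are arbitrary illustrative choices, not values from the disclosure.

```python
# Illustrative only: score how feature-rich a submitted image is for matching.
import cv2

def image_quality_score(image_path: str) -> int:
    """Return a 0-10 indication of how suitable an image is for image matching."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        return 0
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(image, None)
    # Scale the keypoint count to 0-10, saturating at 500 keypoints (arbitrary).
    return min(10, (len(keypoints) * 10) // 500)

score = image_quality_score("storefront.jpg")          # placeholder path
label = "strong" if score >= 7 else "medium" if score >= 4 else "bad"
print(score, label)  # a featureless image (e.g., a blank wall) scores near 0
```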
  • Some embodiments allow a user to take a self-guided tour of a point of interest in an augmented reality environment by pointing a user device with a camera at the point of interest in the real world and then receiving different links, files and/or content related to the point of interest for each image on the camera view of the user device.
  • Various other functions and advantages are described and suggested below as may be provided in accordance with the various embodiments.
  • FIG. 1A illustrates an example of a point of interest in the real world in accordance with various embodiments.
  • a user 101 with a computing device 103 can be seen moving along the Market Street 140.
  • although the client device is not shown in detail in FIG. 1A, it should be understood that various types of electronic or computing devices that are capable of receiving and/or processing images in accordance with various embodiments, as discussed herein, can be used.
  • These client devices can include, for example desktop PCs, laptop computers, tablet computers, personal data assistants (PDAs), smart phones, portable media file players, e-book readers, portable computers, head-mounted displays, interactive kiosks, mobile phones, net books, single-board computers (SBCs), embedded computer systems, wearable computers (e.g., watches or glasses), gaming consoles, home-theater PCs (HTPCs), TVs, DVD players, digital cable boxes, digital video recorders (DVRs), computer systems capable of running a web-browser, or a combination of any two or more of these.
  • the computing device may use operating systems that include, but are not limited to, Android, Berkeley Software Distribution (BSD), iPhone OS (iOS), Linux, OS X, Unix-like real-time operating systems (e.g., QNX), Microsoft Windows, Windows Phone, and IBM z/OS.
  • the computing device 103 may have one or more image capture elements (not shown), such as one or more cameras or camera sensors, to capture images and/or videos.
  • the one or more image capture elements may include a charge-coupled device (CCD), an active pixel sensor in complementary metal-oxide-semiconductor (CMOS) or N-type metal-oxide-semiconductor (NMOS), an infrared or ultrasonic image sensor, or an image sensor utilizing other type of image capturing technologies.
  • the computing device 103 may have one or more audio capture devices (not shown) capable of capturing audio data (e.g., word commands from the user 101 ).
  • the user 101 desires to obtain relevant information about the XYZ Bank 110 and the ABC Restaurant 120 using the computing device 103 to determine whether to cross the Market Street 140 to stop by the places.
  • the user 101 can aim one or more image capture elements located on the computing device 103 to capture a live view of at least a portion of the XYZ Bank 110 and the ABC Restaurant 120.
  • the XYZ Bank 110 and the ABC Restaurant 120 may be recognized by analyzing and comparing the captured image(s) or feature(s) with stored images related to the places in a database.
  • image recognition may require a still image rather than a live view.
  • the user 101 may be required to capture a still image (e.g., press a shutter button) for the purpose of image recognition.
  • image processing processes may include sub-processes such as, for example, thresholding (converting a grayscale image to black and white, or using separation based on a grayscale value), segmentation, blob extraction, pattern recognition, barcode and data matrix code reading, gauging (measuring object dimensions), positioning, edge detection, color analysis, filtering (e.g. morphological filtering) and template matching (finding, matching, and/or counting specific patterns).
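  • The sketch below illustrates a few of the sub-processes named above using OpenCV; the library, file names, and thresholds are assumptions for illustration only, not part of the disclosure.

```python
# Illustrative only; the disclosure does not prescribe OpenCV or these parameters.
import cv2

frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder path
template = cv2.imread("storefront_logo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

if frame is not None and template is not None:
    # Thresholding: separate foreground from background by grayscale value.
    _, binary = cv2.threshold(frame, 128, 255, cv2.THRESH_BINARY)

    # Edge detection: find outlines useful for positioning and gauging.
    edges = cv2.Canny(frame, 100, 200)

    # Template matching: locate a known pattern (e.g., a stored storefront crop).
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    print("best template-match score:", max_val, "at", max_loc)
```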
  • various techniques (e.g., OCR and other text recognition processes) can also be used in the recognition process.
  • Some techniques are described in co-pending U.S. patent application Ser. No. 14/094,655, filed Dec. 2, 2013, entitled “Visual Search in a Controlled Shopping Environment,” which is hereby incorporated herein by reference in its entirety.
  • the content and/or links presented on the user device may include an address, a phone number, business hours, a way to make reservations, and/or customer review of the point of interest.
  • the content listed in the billboard 112 includes an address, phone number, URL, price rating, food critic review, menu/product inventory, reservation service, and user review for the XYZ Bank 110, while the content listed in the billboard 129 includes an address, phone number, URL, and customer rating for the ABC Restaurant 120.
  • the content elements in the billboards 112 and 129 can be interactive.
  • the user 101 may select the URL address, www.xyzbank.com, to open up a webpage of the XYZ Bank 110 , or dial the phone number listed in the content 112 by tapping the number.
  • different levels of detail information (e.g., content and links) can be presented to the user. In some situations, only certain content and/or links (e.g., an instant discount) may be shown on the user device, while in other situations a different set of content and/or links may be presented to the user.
  • certain content and links are shown to the user as small fonts or icons. The user 101 may get more detail by selecting the small fonts or icons, or by magnifying a display area corresponding to them.
  • content and links presented on a user device can be determined based at least in part upon a user profile or the location of the user device.
  • an icon or symbol of a landmark (e.g., the San Francisco-Oakland Bay Bridge 160) can be presented on the user device together with a billboard 162.
  • the billboard 162 includes some tourist information regarding the landmark (e.g., the distance and URL of the Bay Bridge 160 ).
  • points of interest or landmarks in the direction of a user device have to meet a predetermined set of conditions to be presented on a user device.
  • the predetermined set of conditions can include, but is not limited to, whether the point of interest is within a predetermined number of miles, has a threshold review rating, or is within a predetermined degree of orientation of the user device.
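  • A minimal sketch of checking such conditions, assuming a distance limit, a minimum review rating, and an angular window around the device's compass heading; the threshold values are illustrative only.

```python
# Illustrative filtering of points of interest; thresholds are assumptions.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def visible_pois(device_lat, device_lon, device_heading_deg, pois,
                 max_miles=1.0, min_rating=3.5, half_window_deg=30.0):
    """pois: list of {"name", "lat", "lon", "rating"} dictionaries."""
    selected = []
    for poi in pois:
        dist = haversine_miles(device_lat, device_lon, poi["lat"], poi["lon"])
        brg = bearing_deg(device_lat, device_lon, poi["lat"], poi["lon"])
        off_axis = abs((brg - device_heading_deg + 180.0) % 360.0 - 180.0)
        if dist <= max_miles and poi["rating"] >= min_rating and off_axis <= half_window_deg:
            selected.append(poi)
    return selected
```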
  • the information of a point of interest presented to a user can be customized based at least upon the user profile or GPS locations, weather conditions, compass, or a degree of relevancy to the point of interest.
  • the customization of the information may include choosing what types of information being presented and/or how the information is presented on the user device.
  • the information pertinent to a restaurant may include subject matters, such as the type of food served, menu, price, user reviews, professional critic reviews, etc.
  • information deemed more relevant to the user may be displayed more prominently than information deemed less relevant. If a user desires more information about a point of interest, the user may magnify or zoom in on the point of interest on a user device.
  • FIG. 1C illustrates an example of sharing content and links related to a point of interest in an augmented reality environment in accordance with various embodiments.
  • a user interface is provided to present an overlay on a live view of the ABC Restaurant 120.
  • the overlay can be transparent or have different levels of transparency.
  • a blank canvas or a webpage may be provided for a user to share content and links related to visual elements of a point of interest.
  • the canvas or webpage may also present content and/or links related to the visual elements that were submitted by other users.
  • the user may be allowed to edit or delete the content, comments or links that were submitted earlier by himself or herself.
  • various types of markers can be used to anchor content and/or links submitted by users.
  • the user or owner of the restaurant can designate certain areas with various sizes (e.g., rectangular 151 , 152 , 126 and 128 in dashed lines) to anchor different types of content and links related to the features of the ABC Restaurant 120 .
  • the size, style, fill ratio or transparency percentage of a marker can be customized by the users or owner.
  • a user may outline a particular display area on the user device, which corresponds to a visual element of a point of interest, to submit comments, content, or links related to the point of interest.
  • the user can use the particular display area on the user device to emphasize the feature(s) that the user would like to attach content and/or links to.
  • an indication (e.g., different colors or strong/medium/weak) can be provided regarding how suitable the outlined display area is for image matching.
  • At least some embodiments enable an owner of a point of interest to control, at least in part, what types of content and links can be attached to a physical place in an augmented reality environment, or how the content and links can be presented.
  • an owner of a point of interest in the real world may customize or personalize a platform (e.g., a canvas or overlay) in the augmented reality environment for users to submit files, content, and/or links related to the point of interest.
  • one or more types of privacy policies may be implemented to guide or flag content and/or links that are submitted. For example, “User's registration is not required. Any user can submit content or links without logging in with a username, in which case they will be identified by network IP address.”
  • an owner of a point of interest may prohibit users from submitting content and/or links to the point of interest in an augmented reality environment, or may require a user to have a minimum privilege level to do so.
  • alternatively, an owner of a point of interest may allow a user to freely attach whatever content and/or links the user would like, in whatever way the user likes.
  • the content and links presented on a canvas or overlay of a point of interest can include subject matter of the point of interest such as, but not limited to, attractions of the point of interest, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the point of interest on third-party review sites, or alternative points of interest (e.g., based on proximity or reviews) selected according to a user's profile and preferences.
  • a user interface may be provided for the user to log into his or her account without launching another application or a web browser.
  • for example, if the point of interest is a bank, the user can log in on the user interface to check balances and make transactions. If the user desires, the user's login information can be saved.
  • the user interface (e.g., a canvas or overlay) may automatically log the user into his or her account in subsequent accesses.
  • the canvas or overlay of the ABC Restaurant 120 comprises the designated area 151 , 152 , 128 , and 126 for users or owner to submit content or links related to the ABC Restaurant 120 .
  • the designated area 151 includes the billboard 124 to display an address, phone number, URL, and user reviews (e.g., reviews from Friends A and B) for the ABC Restaurant 120 .
  • the designated area 152 includes the billboard 121 of videos.
  • the designated area 128 includes the billboard 123 to display a menu 122 of the ABC Restaurant 120 .
  • the designated area 126 includes the billboard 123 to display an instant coupon in bold to draw the user 101's attention.
  • the canvas or overlay of the ABC Restaurant 120 may also have a picture of the smiling owner 125 inviting the user 101 to visit the Restaurant 120 , “Welcome! Come on in!” 153 .
  • the picture of smiling owner 125 may be a 2D hologram image with the owner facing the direction of a user as the user walks by the front door.
  • content and/or links presented on a user device are determined based at least in part upon the profile (e.g., demographics or preferences) of the user. For example, a close-by store may present an instant discount link for a necktie if a user is male, or a discount link for baby formula if a user is determined to be a new mom.
  • one or more machine learning algorithms can be used to analyze user profiles and/or behavior from a group of users (e.g., crowd-sourced data). The group of users may share at least one common trait with the user. For example, a discount coupon or sale item that is particularly attractive to that group of users may be presented on the user device.
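  • As one hedged illustration of using crowd-sourced profiles, the sketch below ranks cohorts of similar users by cosine similarity of profile vectors and surfaces the offers popular with the closest cohorts; the representation is an assumption, and a production system would likely use a trained recommender instead.

```python
# Illustrative cohort-similarity selection; not the disclosed algorithm.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def offers_for_user(user_vec, cohorts, top_k=2):
    """cohorts: list of {"profile_vec": [...], "popular_offers": [...]} dictionaries."""
    ranked = sorted(cohorts, key=lambda c: cosine(user_vec, c["profile_vec"]), reverse=True)
    offers = []
    for cohort in ranked[:top_k]:
        offers.extend(cohort["popular_offers"])
    return offers
```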
  • content and/or links presented on a user device are determined based at least in part upon the viewing perspective of the user (e.g., weather conditions, outdoor or indoor, day or night).
  • a user may find the content and/or links presented on a user device helpful or not helpful for him or her to determine whether to stop by the point of interest.
  • the user is enabled to comment on the canvas whether content and links presented are helpful.
  • the user may explicitly comment on the content and/or links.
  • such feedback from the user can also be determined implicitly. For example, the user may show interest by clicking on the links, magnifying a display area corresponding to certain content presented, or focusing on certain content for more than a threshold period of time (e.g., as determined by head tracking, gaze tracking, or eye tracking techniques).
  • user reviews related to a point of interest can be collected and extracted from comments that users have left on a canvas or overlay of the point of interest, or from various types of sources such as, but not limited to, social networking sites, newspapers and magazines, search engines, local directory services, and/or third-party service providers.
  • the average review rating can be calculated as a weighted average of collected explicit and implicit comments from users. Comments from different groups of users may be assigned different weights in the calculation. For example, reviews from a user's friends (e.g., contacts in the user's address book on a user device or a social networking site) may have a higher weight than reviews from other users.
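  • A minimal sketch of such a weighted average, assuming friends' reviews count twice as much as other reviews; the weights are illustrative, not values from the disclosure.

```python
# Illustrative weighted review rating; the weights are assumptions.
def weighted_rating(reviews):
    """reviews: list of {"stars": float, "source": "friend" | "other"}."""
    weights = {"friend": 2.0, "other": 1.0}
    total = sum(weights[r["source"]] * r["stars"] for r in reviews)
    norm = sum(weights[r["source"]] for r in reviews)
    return total / norm if norm else 0.0

print(weighted_rating([
    {"stars": 5.0, "source": "friend"},
    {"stars": 3.0, "source": "other"},
    {"stars": 4.0, "source": "other"},
]))  # 4.25: the friend's 5-star review pulls the result above the unweighted 4.0
```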
  • FIG. 2A illustrates an example process 200 for sharing content and links to visual features of a point of interest that can be utilized in accordance with various embodiments. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated.
  • the example method embodiment 200 starts with capturing an image of a point of interest in the real world by a user device, at step 204 .
  • the image corresponds to a point of view from a user of the user device.
  • the point of view of the user can be determined based at least in part upon one of GPS locations, IMU orientations, or compass data of the user device.
  • the image may have one or more features for image matching and recognition.
  • the one or more features may be used as one or more anchor points in an augmented reality environment for users to attach comment and/or links associated with the point of interest.
  • the point of interest can be recognized by matching the one or more features of the point of interest against each of the saved images of a plurality of points of interest in the database, at step 206.
  • the plurality of images used in the image recognition and matching process are selected based at least in part upon the proximity of the points of interest to the location of the user device or the point of view of the user.
  • Directional cues may be provided on a user interface layer of the user device for points of interests that are not in the point of view of the user.
  • content, files, and/or links related to the point of interest are retrieved from the database.
  • the content, files, and/or links can be retrieved directly from various types of sources, such as, the database, social networking sites, newspapers and magazines, search engines, local directory services, and/or third party service providers.
  • the content, files, and/or links can relate to subject matter of the point of interest, such as attractions of the point of interest, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the point of interest on third-party review sites, or alternative points of interest (e.g., based on proximity or reviews) selected according to the user's profile and preferences.
  • the retrieved content, files, and/or links can be presented on an interface layer of the user device based at least in part upon the user's proximity to the point of interest, the point of view of the user, or the user's profile and preferences.
  • the user can submit one or more additional links, content, and/or files on the interface layer, or edit any part of the links, content, and/or files that were submitted by the user, at step 212 .
  • FIG. 2B illustrates an example process 220 for sharing content and links to a point of interest that can be utilized in accordance with various embodiments. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated.
  • the example method embodiment 220 starts with logging in as an owner of a point of interest through an authentication process, at step 221 .
  • the owner may have to answer correctly a series of questions relating to the history of the point of interest or information listed in an owner record for the point of interest.
  • once the owner is authenticated, the owner can provide content or a link relating to the point of interest in an augmented reality environment, at step 223.
  • the owner can customize a layout of the content or link on a user interface layer corresponding to the point of interest, at step 225 .
  • a user can select the customized layout or a default layout of the content or link on the user interface layer corresponding to the point of interest on a user device, at step 227 .
  • the user can submit one or more links and/or content relating to the point of interest on the user interface layer, or edit any of the content and/or link that were submitted by the user, at step 229 .
  • the submitted one or more links and/or content are associated with at least one of: one or more images acquired by the user device, or the GPS location, IMU orientation, or compass data of the user device, at step 231.
  • FIG. 3A illustrates an example of points of interest as a user is moving through the real world in accordance with various embodiments.
  • a user 301 is operating a computing device 303 incorporating one or more image capture elements (not shown) and walking along the Market Street 340 from the position 350 to another position 360.
  • the computing device 303 has a viewing angle 302 that is able to capture an image of at least a substantial portion of the ABC Restaurant 320 such that the ABC Restaurant 320 is in the center of the viewing angle 302 .
  • the backend server can choose a candidate set of stored images for image recognition and matching.
  • the candidate set of images may share at least one common feature with the images taken by the computing device 303, or may be similar enough given the location and pointing direction of the computing device 303.
  • the backend server can then perform image recognition and matching between the images taken by the computing device 303 and each of the candidate set of images, and also calculate a confidence score for each of the candidate set of images.
  • a candidate image, along with content and links associated with the image, which has the highest confidence score can be provided as a fiducial (e.g., a unique image, or a set of feature points describing the unique image) for recognition and tracking.
  • the computing device 303 can recognize the fiducial in the live camera view and dynamically track the changes to the camera view due to the movement of the computing device 303 .
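  • One way such tracking could be sketched, assuming OpenCV sparse optical flow over the matched fiducial's keypoints; the functions and parameters below are illustrative assumptions, not the disclosed method.

```python
# Illustrative fiducial tracking between camera frames; not the disclosed implementation.
import cv2

def track_fiducial(prev_gray, next_gray, prev_points):
    """prev_points: Nx1x2 float32 array of fiducial keypoint locations in prev_gray."""
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_points, None)
    ok = status.flatten() == 1
    good_prev, good_next = prev_points[ok], next_points[ok]
    if len(good_next) < 4:
        return None  # too few points survived; fall back to full recognition
    # A homography maps overlay coordinates from the previous frame to the new one,
    # keeping attached content and links registered to the tracked storefront.
    homography, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 5.0)
    return homography
```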
  • FIGS. 3B and 3C illustrate examples of different unique images of a point of interest that are presented on the computing device 303 as the user 301 is walking from the position 350 to the position 360 in accordance with various embodiments.
  • a unique image 328 of the ABC Restaurant 320 is presented on the computing device 303 .
  • the unique image 328 may be selected from a plurality of images that were taken by different users under various conditions (e.g., different weather and light conditions, or different points of view).
  • FIG. 4 illustrates an example process for choosing a unique image of a point of interest that can be utilized in accordance with various embodiments. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated.
  • the example method embodiment 400 starts with receiving an image of a point of interest in the real world from a user device, at step 402 .
  • the image corresponds to a point of view from a user of the user device and may have one or more features for image matching and recognition.
  • the point of view of the user can be determined based at least in part upon GPS locations, IMU orientations, or compass data of the user device.
  • a set of candidate images can be chosen based at least upon the location of the user device, or the point of view of the user.
  • the received image can be compared with each of the candidate images according to one or more image matching algorithms, at step 406 .
  • the candidate images may be taken at different times of day and/or under different weather conditions.
  • a confidence score is calculated for each candidate image, at step 408 .
  • a unique image, which has the highest confidence score, can be chosen, at step 410 .
  • Content or links attached to the unique image can be presented on a user interface layer of the user device, at step 412 .
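  • A compact sketch of steps 406-410 under the assumption that ORB feature matching is used and that the confidence score is the fraction of good matches; both are illustrative choices, as the disclosure does not fix a particular matching algorithm or score.

```python
# Illustrative candidate scoring and selection; not the disclosed algorithm.
import cv2

def confidence_score(query_gray, candidate_gray, ratio=0.75):
    """Fraction of the query's ORB features with a good match in the candidate image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(query_gray, None)
    kp2, des2 = orb.detectAndCompute(candidate_gray, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp1), 1)

def choose_unique_image(query_gray, candidates):
    """candidates: list of (image_id, grayscale image); returns (best_id, best_score)."""
    best_id, best_score = None, 0.0
    for image_id, candidate_gray in candidates:
        score = confidence_score(query_gray, candidate_gray)
        if score > best_score:
            best_id, best_score = image_id, score
    return best_id, best_score
```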
  • a new unique image can be utilized by the user device for fiducial recognition and tracking, and this new image might have the same or different links and content as the previous unique image.
  • a new unique image with the same content or links might represent a different point of view of the same point of interest.
  • the new unique image is selected from a new set of candidate images that corresponds to the location of the user device, and the new point of view of the user.
  • new content or new links attached to the new unique image can be presented on the user interface of the user device.
  • FIG. 5 illustrates an example 500 of an augmented reality system 510 for recognizing and tracking a point of interest in the real world in accordance with various embodiments
  • the augmented reality platform 530 communicates with the client computing devices 502 via the network 504 .
  • although client computing devices 502 are shown in FIG. 5, it should be understood that various other types of electronic or computing devices that are capable of receiving or rendering a web application in accordance with various embodiments, as discussed herein, can be used.
  • These client devices can include, for example desktop PCs, laptop computers, tablet computers, personal data assistants (PDAs), smart phones, portable media file players, e-book readers, portable computers, head-mounted displays, interactive kiosks, mobile phones, net books, single-board computers (SBCs), embedded computer systems, wearable computers (e.g., watches or glasses), gaming consoles, home-theater PCs (HTPCs), TVs, DVD players, digital cable boxes, digital video recorders (DVRs), computer systems capable of running a web-browser, or a combination of any two or more of these.
  • the augmented reality platform 530 provides a web service allowing users to search and discover links and other content (e.g., reviews, menus, video, chat walls, contact information, URLs) that are tied to unique visual features at a point of interest in the real world.
  • the client computing devices 502 can display those links and content as augmented reality content on the display screen or the camera preview screen.
  • the augmented reality system 530 enables users or an owner of point of interest to submit or upload links and/or contents related to the point of interest to the database 520 .
  • the links and/or contents are tied to at least one of the point of view (e.g., GPS location, IMU orientation, and compass) from the client computing device 502 , or image features of the point of interest.
  • users in the real world can discover the links and/or content related to the point of interest by pointing the client computing device 502 at the point of interest.
  • the links and/or content related to the point of interest can be presented as content augmenting a camera preview of the real world.
  • the augmented reality platform 530 enables users to submit multiple and varied points of view of the same point of interest. In some instances, submitted points of view are taken under different weather conditions. As a user moves along in the real world, the augmented reality system 530 can recognize and match features in the real-time image against images that are attached to the points of interest in the vicinity of the client computing device 502 or within the point of view of the user. Candidate images, together with attached content and/or links, can be dynamically selected even as the user's point of view moves and the real-time image differs from the saved images.
  • the augmented reality platform 530 can calculate a confidence score for each candidate image by matching the candidate image against the real-time image, and can provide the stored image with the highest confidence score for the user device to use as a fiducial for recognition and tracking, along with the content and/or links associated with the point of interest.
  • FIGS. 6A and 6B illustrate front and back views, respectively, of an example electronic computing device 600 that can be used in accordance with various embodiments.
  • although a portable computing device (e.g., a smartphone, an electronic book reader, or tablet computer) is shown, any device capable of receiving and processing input can be used in accordance with various embodiments discussed herein.
  • the devices are capable of receiving, displaying or playing streaming media files in accordance with various embodiments discussed herein.
  • the devices can include, for example, desktop PCs, laptop computers, tablet computers, personal data assistants (PDAs), smart phones, portable media players, e-book readers, portable computers, head-mounted displays, interactive kiosks, mobile phones, net books, single-board computers (SBCs), embedded computer systems, wearable computers (e.g., watches or glasses), gaming consoles, home-theater PCs (HTPCs), TVs, DVD players, digital cable boxes, digital video recorders (DVRs), computer systems capable of running a web-browser, among others.
  • the computing device 600 has a display screen 602 (e.g., an LCD element) operable to display information or image content to one or more users or viewers of the device.
  • the display screen of some embodiments displays information (e.g., streaming media file) to the viewer facing the display screen (e.g., on the same side of the computing device as the display screen).
  • the computing device in this example can include one or more imaging elements, in this example including two image capture elements 604 on the front of the device and at least one image capture element 610 on the back of the device. It should be understood, however, that image capture elements could also, or alternatively, be placed on the sides or corners of the device, and that there can be any appropriate number of capture elements of similar or different types.
  • Each image capture element 604 and 610 may be, for example, a camera, a charge-coupled device (CCD), a motion detection sensor or an infrared sensor, or other image capturing technology.
  • the device can use the images (e.g., still or video) captured from the imaging elements 604 and 610 to generate a three-dimensional simulation of the surrounding environment (e.g., a virtual reality of the surrounding environment for display on the display element of the device). Further, the device can utilize outputs from at least one of the image capture elements 604 and 610 to assist in determining the location and/or orientation of a user and in recognizing nearby persons, objects, or locations. For example, if the user is holding the device, the captured image information can be analyzed (e.g., using mapping information about a particular area) to determine the approximate location and/or orientation of the user. The captured image information may also be analyzed to recognize nearby persons, objects, or locations (e.g., by matching parameters or elements from the mapping information).
  • the computing device can also include at least one microphone or other audio capture elements capable of capturing audio data, such as words spoken by a user of the device, music being hummed by a person near the device, or audio being generated by a nearby speaker or other such component, although audio elements are not required in at least some devices.
  • there may be only one microphone while in other devices there might be at least one microphone on each side and/or corner of the device, or in other appropriate locations.
  • the device 600 in this example also includes one or more orientation or position-determining elements 618 operable to provide information such as a position, direction, motion, or orientation of the device.
  • These elements can include, for example, accelerometers, inertial sensors, electronic gyroscopes, and electronic compasses.
  • the example device also includes at least one communication mechanism 614, such as at least one wired or wireless component operable to communicate with one or more electronic devices.
  • the device also includes a power system 616, such as may include a battery operable to be recharged through conventional plug-in approaches, or through other approaches such as capacitive charging through proximity with a power mat or other such device.
  • FIG. 7 illustrates a set of basic components of an electronic computing device 700 such as the device 600 described with respect to FIG. 6 .
  • the device includes at least one processing unit 702 for executing instructions that can be stored in a memory device or element 704 .
  • the device can include many types of memory, data storage, or computer-readable media, such as a first data storage for program instructions for execution by the processing unit(s) 702; the same or separate storage can be used for images or data, a removable memory can be available for sharing information with other devices, and any number of communication approaches can be available for sharing with other devices.
  • the device typically will include some type of display element 706 , such as a touch screen, electronic ink (e-ink), organic light emitting diode (OLED) or liquid crystal display (LCD), although devices such as portable media players might convey information via other means, such as through audio speakers.
  • display element 706 is capable of displaying streaming media files or other information to viewers facing the display element 706 .
  • the device in many embodiments will include at least one imaging/audio element 708 , such as one or more cameras that are able to capture images of the surrounding environment and that are able to image a user, people, or objects in the vicinity of the device.
  • the image capture element can include any appropriate technology, such as a CCD image capture element having a sufficient resolution, focal range, and viewable area to capture an image of the user when the user is operating the device. Methods for capturing images using a camera element with a computing device are well known in the art and will not be discussed herein in detail. It should be understood that image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc. Further, a device can include the ability to start and/or stop image capture, such as when receiving a command from a user, application, or other device.
  • the example computing device 700 also includes at least one orientation/motion determining element 710 able to determine and/or detect orientation and/or movement of the device.
  • an element can include, for example, an accelerometer or gyroscope operable to detect movement (e.g., rotational movement, angular displacement, tilt, position, orientation, motion along a non-linear path, etc.) of the device 700 .
  • An orientation determining element can also include an electronic or digital compass, which can indicate a direction (e.g., north or south) in which the device is determined to be pointing (e.g., with respect to a primary axis or other such aspect).
  • the device in many embodiments will include at least a positioning element 712 for determining a location of the device (or the user of the device).
  • a positioning element can include or comprise a GPS or similar location-determining elements operable to determine relative coordinates for a position of the device.
  • positioning elements may include wireless access points, base stations, etc., that may either broadcast location information or enable triangulation of signals to determine the location of the device.
  • Other positioning elements may include QR codes, barcodes, RFID tags, NFC tags, etc. that enable the device to detect and receive location information or identifiers that enable the device to obtain the location information (e.g., by mapping the identifiers to a corresponding location).
  • Various embodiments can include one or more such elements in any appropriate combination.
  • some embodiments use the element(s) to track the location of a device.
  • the device of some embodiments may keep track of the location of the device by using the element(s), or in some instances, by using the orientation determining element(s) as mentioned above, or a combination thereof.
  • the algorithms or mechanisms used for determining a position and/or orientation can depend at least in part upon the selection of elements available to the device.
  • the example computing device 700 may also include a low power, low resolution imaging element to capture image data.
  • the low resolution imaging element can transmit the captured image data over a low bandwidth bus, such as an I2C bus, to a low power processor, such as a PIC-class processor.
  • the PIC processor may also communicate with other components of the computing device 700 , such as Orientation Motion Element 710 , etc.
  • the PIC processor can analyze the image data from the low resolution imaging element and other components of the computing device 700 to determine whether a head motion likely corresponds to a recognized head gesture. If the PIC processor determines that the head motion likely corresponds to a recognized head gesture, the PIC processor can enable another imaging element to activate high resolution image capture and/or the main processor to analyze the captured high resolution image data.
  • the example device also includes one or more wireless components 714 operable to communicate with one or more electronic devices within a communication range of the particular wireless channel.
  • the wireless channel can be any appropriate channel used to enable devices to communicate wirelessly, such as Bluetooth, cellular, NFC, or Wi-Fi channels. It should be understood that the device can have one or more conventional wired communications connections as known in the art.
  • the device also includes a power system 716, such as may include a battery operable to be recharged through conventional plug-in approaches, or through other approaches such as capacitive charging through proximity with a power mat or other such device.
  • Various other elements and/or combinations are possible as well within the scope of various embodiments.
  • the device can include at least one additional input device 718 able to receive conventional input from a user.
  • this conventional input can include, for example, a push button, touch pad, touch screen, wheel, joystick, keyboard, mouse, keypad, or any other such device or element whereby a user can input a command or a request for additional product information to the device.
  • I/O devices could even be connected by a wireless infrared or Bluetooth or other link as well in some embodiments.
  • Some devices also can include a microphone or other audio capture element that accepts voice or other audio commands.
  • a device might not include any buttons at all, but might be controlled only through a combination of visual and audio commands, such that a user can control the device without having to be in contact with the device.
  • FIG. 8 illustrates an example of an environment 800 for implementing aspects in accordance with various embodiments.
  • the system includes an electronic computing device 802 , which can include any appropriate device operable to send and receive requests, messages or information over an appropriate network 804 and convey information back to a user of the device. Examples of such computing devices include personal computers, cell phones, handheld messaging devices, laptop computers, set-top boxes, personal data assistants, electronic book readers and the like.
  • the network can include any appropriate network, including an intranet, the Internet, a cellular network, a local area network or any other such network or combination thereof.
  • the network could be a “push” network, a “pull” network, or a combination thereof.
  • a “push” network one or more of the servers push out data to the computing device.
  • a “pull” network one or more of the servers send data to the computing device upon request for the data by the computing device.
  • Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail.
  • Communication over the network can be enabled via wired or wireless connections and combinations thereof.
  • the network includes the Internet, as the environment includes a Web server 806 for receiving requests and serving content in response thereto, although for other networks, an alternative device serving a similar purpose could be used, as would be apparent to one of ordinary skill in the art.
  • the illustrative environment includes at least one application server 808 and a data store 810 .
  • application server 808 can include any appropriate hardware and software for integrating with the data store 810 as needed to execute aspects of one or more applications for the computing device and handling a majority of the data access and business logic for an application.
  • the application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server 806 in the form of HTML, XML or another appropriate structured language in this example.
  • the handling of all requests and responses, as well as the delivery of content between the computing device 802 and the application server 808 can be handled by the Web server 806 . It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.
  • the data store 810 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect.
  • the data store illustrated includes mechanisms for storing content (e.g., production data) 812 and user information 816 , which can be used to serve content for the production side.
  • the user information 816 may include user preference, historical data, user demographic data, and audio system of the user devices associated with users.
  • Demographic data of users may include user age, user gender, user educational background, user marital status, user income level, user ethnicity, user postal code, user primary language, or user spending habit.
  • the audio system may include headphone (e.g., earphone, ear bud, and the like), speaker (e.g., tablet speaker, blue tooth speaker, computer speaker, bookshelf speaker, center-channel speaker, floor speaker, in-wall and in-ceiling speaker, outdoor speaker, sound bar, portable speaker, and woofer/sub-woofer speaker), or various types of audio amplifiers.
  • the data store is also shown to include a mechanism for storing log or session data 814 . It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 810 .
  • the data store 810 is operable, through logic associated therewith, to receive instructions from the application server 808 and obtain, update or otherwise process data in response thereto.
  • a user might submit a search request for a certain type of item.
  • the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about items of that type.
  • the information can then be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device 802 .
  • Information for a particular item of interest can be viewed in a dedicated page or window of the browser.
  • Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include computer-readable medium storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions.
  • Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
  • the environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections.
  • although only a limited number of components are illustrated in FIG. 8, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in FIG. 8.
  • the depiction of the system 800 in FIG. 8 should be taken as being illustrative in nature and not limiting to the scope of the disclosure.
  • the various embodiments can be further implemented in a wide variety of operating environments, which in some cases can include one or more user computers or computing devices which can be used to operate any of a number of applications.
  • User or computing devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols.
  • Such a system can also include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management.
  • These devices can also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.
  • Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, CIFS and AppleTalk.
  • the network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof.
  • the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers and business application servers.
  • the server(s) may also be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++ or any scripting language, such as Perl, Python or TCL, as well as combinations thereof.
  • the server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase® and IBM®.
  • the environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate.
  • each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch-sensitive display element or keypad) and at least one output device (e.g., a display device, printer or speaker).
  • Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc.
  • Such devices can also include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device) and working memory as described above.
  • the computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting and retrieving computer-readable information.
  • the system and various devices also typically will include a number of software applications, modules, services or other elements located within at least one working memory device, including an operating system and application programs such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.
  • Storage media and computer readable media for containing code, or portions of code can include any appropriate media known or used in the art, including storage media and computing media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Multimedia (AREA)

Abstract

Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.

Description

  • This application is a continuation of allowed U.S. application Ser. No. 15/248,944 entitled “SHARING LINKS IN AN AUGMENTED REALITY ENVIRONMENT,” filed Aug. 26, 2016, and issued U.S. Pat. No. 9,432,421, issued Aug. 30, 2016, which are incorporated herein by reference for all purposes.
  • BACKGROUND
  • Portable computing devices are increasingly powerful and affordable. Users are relying upon portable computing devices to handle various types of tasks. For example, a user can use a portable computing device to search for information about a restaurant, store, or other place of interest before deciding whether to visit. However, in such a situation, the user may have to launch a map, a search engine, or other similar application to look up information such as the location of the place even if the place is close-by or in the view of the user. Further, inputting the search query and reviewing the results could take more time than simply checking out the place in the real world.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which:
  • FIG. 1A illustrates an example of a point of interest in the real world in accordance with various embodiments;
  • FIGS. 1B and 1C illustrate examples of sharing content and links related to a point of interest in an augmented reality environment in accordance with various embodiments;
  • FIGS. 2A and 2B illustrate example processes for sharing content and links to visual elements of a point of interest that can be utilized in accordance with various embodiments;
  • FIG. 3A illustrates an example of points of interest as a user is moving through the real world in accordance with various embodiments;
  • FIGS. 3B and 3C illustrate examples of unique images of a point of interest in accordance with various embodiments;
  • FIG. 4 illustrates an example process for choosing a unique image of a point of interest that can be utilized in accordance with various embodiments;
  • FIG. 5 illustrates an example of an augmented reality system for recognizing and tracking a point of interest in the real world in accordance with various embodiments;
  • FIGS. 6A and 6B illustrate an example computing device that can be used to implement aspects of the various embodiments;
  • FIG. 7 illustrates example components of a computing device such as that illustrated in FIG. 6, in accordance with various embodiments; and
  • FIG. 8 illustrates an environment in which various embodiments can be implemented in accordance with various embodiments.
  • DETAILED DESCRIPTION
  • Systems and methods in accordance with various embodiments of the present disclosure overcome deficiencies in conventional approaches to sharing content. In particular, various embodiments enable users and business owners to attach content and/or links to visual elements of a place at a physical location, and, in response to a user's portable device pointing at a tagged place, cause the content and/or links to the visual elements of the place to be presented on the portable device. The content and/or links can include various types of information such as, but not limited to, promotional coupons, menus, advertisements, reservation systems, floor plans, videos, audio, wait times, customer reviews, music, chat walls, attractions of the place, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the place on third party review sites, or other alternative places, etc. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world. At least some embodiments cause content and/or links to a physical location to be presented on a user's device based at least upon one of the proximity of the user to the physical location, a point of view of the user, a user profile of the user (e.g., user demographics and preferences), or a profile of an owner of the physical location.
  • In some embodiments, images submitted by users and/or an owner of a point of interest (e.g., a place, a scene, an object, etc.) are used as fiducials to assist recognition and tracking of the point of interest. Multiple images of a point of interest taken from different points of view (e.g., crowd-sourced) can be dynamically used as fiducials for recognition and tracking of the point of interest. As a user with a user device moves through a point of interest in the real world, a different unique image (i.e., fiducials) can be chosen from a set of stored candidate images of the point of interest based at least upon GPS locations, IMU orientations, or compass data of the user device.
  • At least some embodiments provide various methods to control the types of content and links that can be attached to a physical place in an augmented reality environment, and/or how the attached content and links can be presented. For example, through one or more types of authentication processes, an owner of a physical place may get access to an augmented reality environment associated with his or her place. Upon authentication, the owner can provide some inputs on the types of content and links attached to the place, and/or how the attached content and links are presented (e.g., a blank wall or whole business-front, layout, or visual elements to be attached). In some embodiments, a suitable communication means (e.g., a canvas or chat blog) is provided for a user to attach content (e.g., texts, images, or videos) to a visual element at a point of interest, and interact with other users who have left messages there, which can provide a form of sharing beyond what regular social network sites can offer.
  • In some embodiments, when a user or owner takes an image at a point of interest with a user device, the point of interest can be determined based at least in part upon a point of view of the user, or one or more visual features of the image. The point of view of the user can be determined based at least in part upon GPS locations, IMU orientations, or compass data of the user device. As a part of image processing, an indication may be provided to the user or owner about the quality of the captured image so that suitable images can be submitted for image matching. For example, an image with unique visual features works better in image matching than one that is featureless. In some instances, a scaled indication (e.g., on a scale of 0 to 10, or strong/medium/weak) can be provided to the user. Unless the quality of an image crosses a minimum threshold, the image is not allowed to be submitted.
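  • As a non-limiting sketch of such a quality indication, the routine below scores an image with a simple gradient-energy heuristic and maps it to a 0-10 scale and a strong/medium/weak label; the heuristic, the scaling constant, and the minimum threshold are assumptions chosen only for illustration and do not represent the matching algorithm of any embodiment.
      import numpy as np

      def image_quality_score(gray):
          # Map a grayscale image (2-D array, values 0-255) to a 0-10 richness score.
          gy, gx = np.gradient(gray.astype(float))
          edge_energy = np.sqrt(gx ** 2 + gy ** 2).mean()   # featureless walls score low
          return float(min(10.0, edge_energy / 5.0))         # illustrative scaling only

      def submission_feedback(gray, minimum=3.0):
          score = image_quality_score(gray)
          label = "strong" if score >= 7 else "medium" if score >= minimum else "weak"
          # Images below the minimum threshold are not allowed to be submitted.
          return {"score": round(score, 1), "label": label, "accepted": score >= minimum}

      noisy = np.random.randint(0, 256, (240, 320))   # feature-rich test image
      flat = np.full((240, 320), 128)                 # featureless test image
      print(submission_feedback(noisy), submission_feedback(flat))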
  • Some embodiments allow a user to take a self-guided tour of a point of interest in an augmented reality environment by pointing a user device with a camera at the point of interest in the real world and then receiving different links, files and/or content related to the point of interest for each image on the camera view of the user device. Various other functions and advantages are described and suggested below as may be provided in accordance with the various embodiments.
  • FIG. 1A illustrates an example of a point of interest in the real world in accordance with various embodiments. In this example, a user 101 with a computing device 103 can be seen moving through the Market Street 140. Although the client device is not shown in FIG. 1A, it should be understood that various types of electronic or computing devices that are capable of receiving and/or processing images in accordance with various embodiments are discussed herein. These client devices can include, for example, desktop PCs, laptop computers, tablet computers, personal data assistants (PDAs), smart phones, portable media file players, e-book readers, portable computers, head-mounted displays, interactive kiosks, mobile phones, net books, single-board computers (SBCs), embedded computer systems, wearable computers (e.g., watches or glasses), gaming consoles, home-theater PCs (HTPCs), TVs, DVD players, digital cable boxes, digital video recorders (DVRs), computer systems capable of running a web-browser, or a combination of any two or more of these. The computing device may use operating systems that include, but are not limited to, Android, Berkeley Software Distribution (BSD), iPhone OS (iOS), Linux, OS X, Unix-like Real-time Operating System (e.g., QNX), Microsoft Windows, Windows Phone, and IBM z/OS. The computing device 103 may have one or more image capture elements (not shown), such as one or more cameras or camera sensors, to capture images and/or videos. The one or more image capture elements may include a charge-coupled device (CCD), an active pixel sensor in complementary metal-oxide-semiconductor (CMOS) or N-type metal-oxide-semiconductor (NMOS), an infrared or ultrasonic image sensor, or an image sensor utilizing other types of image capturing technologies. The computing device 103 may have one or more audio capture devices (not shown) capable of capturing audio data (e.g., word commands from the user 101).
  • In this example, the user 101 desires to obtain relevant information about the XYZ Bank 110 and the ABC Restaurant 120 using the computing device 103 to determine whether to cross the Market Street 140 to stop by the places. The user 101 can aim one or more image capture elements located on the computing device 103 to capture a live view of at least a portion of the XYZ Bank 110 and the ABC Restaurant 120. The XYZ Bank 110 and the ABC Restaurant 120 may be recognized by analyzing and comparing the captured image(s) or feature(s) with stored images related to the place in a database. In some embodiments, image recognition may require a still image rather than a live view. The user 101 may be required to capture a still image (e.g., press a shutter button) for the purpose of image recognition.
  • Many embodiments provide image processing algorithms and recognition techniques to recognize a point of interest by matching the feature(s) or image of the point of interest against saved images in a database. For example, optical character recognition (OCR) can be used as a primary image analysis technique or to enhance other processes. Features (e.g., shape, size, color and text) of the point of interest can be extracted and matched against points of interest determined in the vicinity of the user 101's location. In some embodiments, image processing processes may include sub-processes such as, for example, thresholding (converting a grayscale image to black and white, or using separation based on a grayscale value), segmentation, blob extraction, pattern recognition, barcode and data matrix code reading, gauging (measuring object dimensions), positioning, edge detection, color analysis, filtering (e.g. morphological filtering) and template matching (finding, matching, and/or counting specific patterns). Various techniques (e.g., OCR and other text recognition processes) can be used as the primary image analysis technique or to enhance other processes. Some techniques are described in co-pending U.S. patent application Ser. No. 14/094,655, filed Dec. 2, 2013, entitled “Visual Search in a Controlled Shopping Environment,” which is hereby incorporated herein by reference in its entirety.
  • Once the XYZ Bank 110 and the ABC Restaurant 120 are recognized, stored content and links that are associated with the places can be provided based at least upon the captured image data in real time, which is illustrated in FIG. 1B. The content and/or links presented on the user device may include an address, a phone number, business hours, a way to make reservations, and/or customer reviews of the point of interest. In this example, the content listed in the billboard 112 includes an address, phone number, URL, price rating, food critic review, menu/product inventory, reservation service, and user reviews for the XYZ Bank 110 while the content listed in the billboard 129 includes an address, phone number, URL, and customer rating for the ABC Restaurant 120. In some instances, the content elements in the billboards 112 and 129 can be interactive. For example, the user 101 may select the URL address, www.xyzbank.com, to open up a webpage of the XYZ Bank 110, or dial the phone number listed in the content 112 by tapping the number.
  • Depending on the distance between a user and a point of interest, different levels of detail (e.g., content and links) related to the point of interest may be presented to the user, or the user may be allowed to submit such information into a database. As the user is getting closer to the point of interest, certain content and/or links (e.g., an instant discount) may be shown on the user device. In some embodiments, depending on the fiducials that the user device is pointing at, a different set of content and/or links may be presented to the user. In some embodiments, based upon a point of view of a user, certain content and links are shown to the user in small fonts or icons. The user 101 may get more details of these fonts or icons by selecting the small fonts or icons, or magnifying a display area corresponding to the small fonts or icons.
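  • A minimal sketch of such proximity-based levels of detail is given below; the haversine distance is standard, but the distance bands and the content chosen for each band are illustrative assumptions only, not values taken from the disclosure.
      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          # Great-circle distance in meters between two GPS coordinates.
          r = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      def detail_level(user_loc, poi_loc):
          # Distance thresholds and tiers are assumptions for illustration.
          d = haversine_m(*user_loc, *poi_loc)
          if d < 50:
              return ["address", "menu", "reviews", "instant discount"]
          if d < 500:
              return ["address", "phone", "rating"]
          return ["name only"]

      print(detail_level((37.7936, -122.3965), (37.7940, -122.3962)))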
  • In some embodiments, content and links presented on a user device can be determined based at least in part upon a user profile or the location of the user device. In this example, if the user 101 is determined to be a first time visitor to San Francisco, an icon or symbol of a landmark (e.g., the San Francisco-Oakland Bay Bridge 160) that is in the direction or in the vicinity of a user device may be presented on the user device, together with an explanatory billboard 162. The billboard 162 includes some tourist information regarding the landmark (e.g., the distance and URL of the Bay Bridge 160). In some embodiments, points of interest or landmarks in the direction of a user device have to meet a predetermined set of conditions to be presented on a user device. The predetermined set of conditions may include, but is not limited to, whether the point of interest is within a predetermined number of miles, has a threshold review rating, or is within a predetermined degree of orientation of the user device.
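  • The eligibility check described above can be sketched as follows; the mileage, rating, and orientation thresholds, as well as the bearing-versus-heading test, are illustrative assumptions rather than values specified by the disclosure.
      import math

      def bearing_deg(lat1, lon1, lat2, lon2):
          # Initial compass bearing from the device location to the landmark.
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dl = math.radians(lon2 - lon1)
          y = math.sin(dl) * math.cos(p2)
          x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
          return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

      def landmark_eligible(device, landmark, max_miles=10.0, min_rating=4.0, max_off_axis=30.0):
          # All three thresholds are assumptions chosen only for illustration.
          distance_ok = landmark["miles_away"] <= max_miles
          rating_ok = landmark["rating"] >= min_rating
          off_axis = abs((bearing_deg(*device["loc"], *landmark["loc"]) - device["heading"] + 180) % 360 - 180)
          return distance_ok and rating_ok and off_axis <= max_off_axis

      device = {"loc": (37.7936, -122.3965), "heading": 80.0}
      bay_bridge = {"loc": (37.7983, -122.3778), "miles_away": 1.2, "rating": 4.8}
      print(landmark_eligible(device, bay_bridge))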
  • In some embodiments, the information of a point of interest presented to a user can be customized based at least upon the user profile, GPS location, weather conditions, compass data, or a degree of relevancy to the point of interest. The customization of the information may include choosing what types of information are presented and/or how the information is presented on the user device. For example, the information pertinent to a restaurant may include subject matters, such as the type of food served, menu, price, user reviews, professional critic reviews, etc. In some embodiments, information deemed more relevant to the user may be displayed more prominently than information deemed less relevant. If a user desires more information about a point of interest, the user may magnify or zoom the point of interest on a user device.
  • Various embodiments enable a user to share a variety of information related to a point of interest within a point of view of the user. FIG. 1C illustrates an example of sharing content and links related to a point of interest in an augmented reality environment in accordance with various embodiments. In this example, a user interface is provided to present an overlay of a live view of the ABC Restaurant 120. The overlay can be transparent or have different levels of transparency. In some embodiments, a blank canvas or a webpage may be provided for a user to share content and links related to visual elements of a point of interest. The canvas or webpage may also present content and/or links related to the visual elements that were submitted by other users. The user may be allowed to edit or delete the content, comments or links that were submitted earlier by himself or herself.
  • In some embodiments, various types of markers can be used to anchor content and/or links submitted by users. In this example, the user or owner of the restaurant can designate certain areas with various sizes (e.g., rectangles 151, 152, 126 and 128 shown in dashed lines) to anchor different types of content and links related to the features of the ABC Restaurant 120. The size, style, fill ratio or transparency percentage of a marker can be customized by the users or owner.
  • In some embodiments, a user may outline a particular display area on the user device, which corresponds to a visual element of a point of interest, to submit comments, content, or links related to the point of interest. The user can use the particular display area on the user device to emphasize the feature(s) that the user would like to attach content and/or links to. In some embodiments, an indication (e.g., different colors or strong/medium/weak) can be provided to a user on whether a particular display area of a point of interest is good enough to be designated as a marker. The user may be prompted to move the user device around the point of interest to determine how well the features corresponding to the particular display area can be matched against those from different views.
  • At least some embodiments enable an owner of a point of interest to control, at least in part, what types of content and links can be attached to a physical place in an augmented reality environment, or how the content and links can be presented. For example, an owner of a point of interest in the real world may customize or personalize a platform (e.g., a canvas or overlay) in the augmented reality environment for users to submit files, content, and/or links related to the point of interest. In some embodiments, one or more types of privacy policies may be implemented to guide or flag content and/or links that are submitted. For example, “User's registration is not required. Any user can submit content or links without logging in with a username, in which case they will be identified by network IP address.”
  • Some embodiments provide one or more methods for an owner of a point of interest to authenticate himself or herself in an augmented reality environment. In some instances, the owner may have to answer correctly a series of questions related to at least one of the history of the point of interest, or a profile of the owner on the record. In some embodiments, a GPS based authentication system can be used to verify whether the claimed owner of a point of interest has been to the place with a minimum threshold of frequency. Once an owner of a point of interest gets authenticated, the owner can be enabled to control, at least in part, what types of content and links can be attached to the point of interest in an augmented reality environment, or how the content and links can be presented (e.g., the layout of the markers, or how many links are attached to each marker). In some embodiments, an owner of a point of interest may prohibit users from submitting content and/or links to the point of interest in an augmented reality environment, or may require a user to have a minimum privilege level to do so. In some other embodiments, an owner of a point of interest may allow a user to attach whatever content and/or links the user would like, or to freely attach content and/or links the way the user likes.
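  • A minimal sketch of the GPS-based authentication check is given below, assuming visit frequency is measured as recorded visits per week over a recent window; the window length and frequency threshold are illustrative assumptions only.
      from datetime import datetime, timedelta

      def frequent_visitor(visit_timestamps, now=None, window_days=90, min_visits_per_week=3):
          # The window and frequency threshold are assumptions for illustration.
          now = now or datetime.utcnow()
          cutoff = now - timedelta(days=window_days)
          recent = [t for t in visit_timestamps if t >= cutoff]
          weeks = window_days / 7.0
          return len(recent) / weeks >= min_visits_per_week

      # Example: a claimed owner whose device was recorded at the place every other day.
      history = [datetime(2016, 6, 1) + timedelta(days=i) for i in range(0, 80, 2)]
      print(frequent_visitor(history, now=datetime(2016, 8, 20)))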
  • The content and links presented on a canvas or overlay of a point of interest can include subject matters of the point of interest such as, but not limited to, attractions of the point of interest, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the point of interest on third party review sites, or other alternative points of interest (e.g., proximity or reviews) based on a user's profile and preferences. In some embodiments, if the user has registered an account with a point of interest, a user interface may be provided for the user to log into his or her account without launching another application or a web browser. For example, if the point of interest is a bank, the user can log in on the user interface to check balances and make transactions. If the user desires, the user's login information can be saved. The user interface (e.g., a canvas or overlay) may automatically log the user into his or her account in subsequent accesses.
  • Some embodiments present content and/or links related to a point of interest in various ways (e.g., glowing effect, bold effect, billboard effect, or a visual 3D element). In FIG. 1C, the canvas or overlay of the ABC Restaurant 120 comprises the designated areas 151, 152, 128, and 126 for users or the owner to submit content or links related to the ABC Restaurant 120. The designated area 151 includes the billboard 124 to display an address, phone number, URL, and user reviews (e.g., reviews from Friends A and B) for the ABC Restaurant 120. The designated area 152 includes the billboard 121 of videos. The designated area 128 includes the billboard 123 to display a menu 122 of the ABC Restaurant 120. The designated area 126 includes the billboard 123 to display an instant coupon in bold to draw the user 101's attention. The canvas or overlay of the ABC Restaurant 120 may also have a picture of the smiling owner 125 inviting the user 101 to visit the Restaurant 120, “Welcome! Come on in!” 153. In some instances, the picture of the smiling owner 125 may be a 2D hologram image with the owner facing the direction of a user as the user walks by the front door.
  • In some embodiments, content and/or links presented on a user device of a user are determined based at least in part upon a profile (e.g., demographics or preferences) of the user. For example, a close-by store may present an instant discount link for a necktie if a user is male, and present a discount link for formula if a user is determined to be a new mom. In some embodiments, one or more machine learning algorithms can be used to analyze user profiles and/or behavior from a group of users (e.g., crowd-sourced data). The group of users may share at least one common trait with the user. For example, a discount coupon or sale item that is particularly attractive to the same group of users as the user may be presented on the user device. In some embodiments, content and/or links presented on a user device of a user are determined based at least in part upon various viewing perspectives (e.g., weather conditions, outdoor or indoor, day or night time) of the user.
  • A user may find the content and/or links presented on a user device helpful or not helpful for him or her to determine whether to stop by the point of interest. The user is enabled to comment on the canvas on whether the content and links presented are helpful. The user may explicitly comment on the content and/or links. In some embodiments, such feedback from the user can be determined implicitly. For example, the user may show interest by clicking on the links, magnifying a display area corresponding to certain content presented, or focusing on certain content over a threshold period of time (e.g., determined by head tracking, gaze tracking, or eye tracking techniques).
  • In some embodiments, user reviews related to a point of interest can be collected and extracted from comments that users have left on a canvas or overlay of the point of interest, or from various types of sources such as, but not limited to, social networking sites, newspapers and magazines, search engines, local directory services, and/or third party service providers. In some embodiments, the average review rating can be calculated based on a weighted average of collected explicit and implicit comments from users. Comments from different groups of users may be assigned different weights in the calculation. For example, reviews from a user's friends (e.g., contacts in the user's address book on a user device or a social networking site) may have a higher weight than reviews from other users.
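  • The weighted average described above can be sketched as follows; the group labels and weights (friends counting twice as much as other users) are illustrative assumptions only.
      def weighted_review_rating(reviews, weights=None):
          # Reviews from a user's friends count more than reviews from other users;
          # the specific weights below are assumptions for illustration.
          weights = weights or {"friend": 2.0, "other": 1.0}
          total = sum(weights.get(r["group"], 1.0) * r["rating"] for r in reviews)
          norm = sum(weights.get(r["group"], 1.0) for r in reviews)
          return total / norm if norm else None

      reviews = [
          {"group": "friend", "rating": 5.0},   # contact from the user's address book
          {"group": "friend", "rating": 4.0},
          {"group": "other", "rating": 3.0},    # review pulled from a third-party site
      ]
      print(round(weighted_review_rating(reviews), 2))   # (2*5 + 2*4 + 1*3) / 5 = 4.2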
  • FIG. 2A illustrates an example process 200 for sharing content and links to visual features of a point of interest that can be utilized in accordance with various embodiments. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated. The example method embodiment 200 starts with capturing an image of a point of interest in the real world by a user device, at step 204. The image corresponds to a point of view from a user of the user device. The point of view of the user can be determined based at least in part upon one of GPS locations, IMU orientations, or compass data of the user device. The image may have one or more features for image matching and recognition. In some embodiments, the one or more features may be used as one or more anchor points in an augmented reality environment for users to attach comments and/or links associated with the point of interest.
  • The point of interest can be recognized by matching the one or more features of the point of interest against each of the saved images of a plurality of points of interest in the database, at step 206. In some embodiments, the plurality of images used in the image recognition and matching process are selected based at least in part upon the proximity of the points of interest to the location of the user device or the point of view of the user. Directional cues may be provided on a user interface layer of the user device for points of interest that are not in the point of view of the user. At step 208, content, files, and/or links related to the point of interest are retrieved from the database. In some embodiments, the content, files, and/or links can be retrieved directly from various types of sources, such as the database, social networking sites, newspapers and magazines, search engines, local directory services, and/or third party service providers. The content, files, and/or links can relate to subject matter of the point of interest such as attractions of the point of interest, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the point of interest on third party review sites, or other alternative points of interest (e.g., proximity, or reviews) based on the user's profile and preferences.
  • At step 210, the retrieved content, files, and/or links can be presented on an interface layer of the user device based at least in part upon the user's proximity to the point of interest, the point of view of the user, or the user's profile and preferences. The user can submit one or more additional links, content, and/or files on the interface layer, or edit any part of the links, content, and/or files that were submitted by the user, at step 212.
  • Various other types of methods to share or present content and links to visual elements of a point of interest based at least in part upon a user's proximity to the point of interest, a view of the user, or the user's profile and preferences are also possible, some of which are discussed in further detail elsewhere herein.
  • FIG. 2B illustrates an example process 220 for sharing content and links to a point of interest that can be utilized in accordance with various embodiments. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated. The example method embodiment 220 starts with logging in as an owner of a point of interest through an authentication process, at step 221. The owner may have to answer correctly a series of questions relating to the history of the point of interest or information listed in an owner record for the point of interest. Once the owner gets authenticated, the owner can provide content or a link relating to the point of interest in an augmented reality environment, at step 223. The owner can customize a layout of the content or link on a user interface layer corresponding to the point of interest, at step 225. A user can select the customized layout or a default layout of the content or link on the user interface layer corresponding to the point of interest on a user device, at step 227. The user can submit one or more links and/or content relating to the point of interest on the user interface layer, or edit any of the content and/or links that were submitted by the user, at step 229. The submitted one or more links and/or content are associated with at least one of: one or more images acquired by the user device, or the GPS location, IMU orientation, or compass data of the user device, at step 231. In some embodiments, an owner of a point of interest can control at least in part how the content or links can be presented (e.g., a canvas or overlay, a layout of markers, or how many links are attached to each marker) to users in an augmented reality environment, or what types of content and links can be attached to the point of interest.
  • FIG. 3A illustrates an example of points of interest as a user is moving through the real world in accordance with various embodiments. In this example, a user 301 is operating a computing device 303 incorporating one or more image capture elements (not shown) and walking along the Market Street 340 from the position 350 to another position 360. At the position 350, the computing device 303 has a viewing angle 302 that is able to capture an image of at least a substantial portion of the ABC Restaurant 320 such that the ABC Restaurant 320 is in the center of the viewing angle 302. At the position 360, the computing device 303 has a viewing angle 305 that is able to capture an image of at least a substantial portion of the XYZ Bank 310 such that the XYZ Bank 310 is in the center of the viewing angle 305.
  • A backend server can be configured to store a plurality of images related to various points of interest, along with location and point of view data (e.g., GPS location, IMU orientations and compass data) describing the point of view when each image was taken. In some embodiments, the images and point of view data can be collected through crowd-sourcing or by a fleet of vehicles. In some embodiments, each stored image has attached content and/or links that were previously submitted and associated with specific features in the image. In this example, when the user 301 visits the point of interest (e.g., the ABC Restaurant 320 and the XYZ Bank 310) and points the computing device 303 at the point of interest, the computing device 303 can upload a camera image along with location and point of view data to the backend server.
  • Based at least upon the location and pointing direction of the computing device 303, the backend server can choose a candidate set of stored images for image recognition and matching. In some embodiments, the candidate set of images may share at least a common feature with the images taken by the computing device 303, or may have been captured at a location and pointing direction similar enough to those of the computing device 303. The backend server can then perform image recognition and matching between the images taken by the computing device 303 and each of the candidate set of images, and also calculate a confidence score for each of the candidate set of images. The candidate image with the highest confidence score, along with the content and links associated with that image, can be provided as a fiducial (e.g., a unique image, or a set of feature points describing the unique image) for recognition and tracking. In some embodiments, the computing device 303 can recognize the fiducial in the live camera view and dynamically track the changes to the camera view due to the movement of the computing device 303.
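  • The disclosure does not mandate a particular matching algorithm; purely as an illustrative sketch, the routine below computes a per-candidate confidence score from ORB feature matches using OpenCV (the opencv-python package is assumed) and selects the highest-scoring candidate as the fiducial.
      import cv2

      orb = cv2.ORB_create(nfeatures=500)
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

      def confidence(query_gray, candidate_gray):
          # Score a candidate image by the fraction of query features it matches well.
          _, q_des = orb.detectAndCompute(query_gray, None)
          _, c_des = orb.detectAndCompute(candidate_gray, None)
          if q_des is None or c_des is None or len(c_des) < 2:
              return 0.0
          good = 0
          for pair in matcher.knnMatch(q_des, c_des, k=2):
              if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                  good += 1
          return good / float(len(q_des))

      def choose_fiducial(query_gray, candidates):
          # candidates: (image, attached content/links) pairs already filtered by
          # GPS location, IMU orientation, and compass data on the backend.
          return max(candidates, key=lambda c: confidence(query_gray, c[0]))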
  • FIGS. 3B and 3C illustrate examples of different unique images of a point of interest that are presented on the computing device 303 as the user 301 is walking from the position 350 to the position 360 in accordance with various embodiments. As illustrated in FIG. 3B, a unique image 328 of the ABC Restaurant 320, along with content and links associated with the image, is presented on the computing device 303. The unique image 328 may be selected from a plurality of images that were taken by different users under various conditions (e.g., different weather and light conditions, or different points of view). As illustrated in FIG. 3C, when the user 301 moves along the street from the position 350 to the position 360, a different unique image 338, along with content (e.g., displayed by the billboards 315 and 332), can be selected from a different set of images that corresponds to the new location 360 and the new pointing direction of the computing device 303.
  • In some embodiments, a client computing device or an augmented reality system is configured to detect a significant change (e.g., a user walks along a street or moves to a different side of a point of interest, or tracking failure) to the point of view from the client computing device. In response to the change, a different set of candidate images can be retrieved from the backend server for image recognition and matching. The candidate image with the highest confidence score can be dynamically chosen as a new fiducial and used for subsequent tracking.
  • In some instances, two or more points of interest (e.g., adjacent stores) may be present in the camera image of a client computing device. In such instances, the client computing device or the augmented reality system is configured to simultaneously retrieve candidate images for image recognition and matching of the two or more points of interest. Two or more fiducials, along with attached content and links, may be presented together on the camera view of the client computing device.
  • FIG. 4 illustrates an example process for choosing a unique image of a point of interest that can be utilized in accordance with various embodiments. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated. The example method embodiment 400 starts with receiving an image of a point of interest in the real world from a user device, at step 402. The image corresponds to a point of view from a user of the user device and may have one or more features for image matching and recognition. The point of view of the user can be determined based at least in part upon GPS locations, IMU orientations, or compass data of the user device. At step 404, a set of candidate images can be chosen based at least upon the location of the user device, or the point of view of the user.
  • The received image can be compared with each of the candidate images according to one or more image matching algorithms, at step 406. In many instances, the candidate images may be taken at different times of day and/or under different weather conditions. A confidence score is calculated for each candidate image, at step 408. A unique image, which has the highest confidence score, can be chosen, at step 410. Content or links attached to the unique image can be presented on a user interface layer of the user device, at step 412. At step 414, in response to the point of view of the user having changed over a threshold value, a new unique image can be utilized by the user device for fiducial recognition and tracking, and this new image might have the same or different links and content as the previous unique image. A new unique image with the same content or links might represent a different point of view of the same point of interest.
  • The new unique image is selected from a new set of candidate images that corresponds to the location of the user device, and the new point of view of the user. At step 416, new content or new links attached to the new unique image can be presented on the user interface of the user device.
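  • A minimal sketch of the view-change test in step 414 follows; the 25-meter and 45-degree thresholds, and the flat-earth approximation used for short distances, are illustrative assumptions rather than values from the disclosure.
      import math

      def view_changed(prev, curr, max_move_m=25.0, max_turn_deg=45.0):
          # prev/curr: dicts with GPS "lat"/"lon" (degrees) and compass "heading" (degrees).
          # Thresholds are assumptions; the disclosure only requires a change over a threshold value.
          dlat = (curr["lat"] - prev["lat"]) * 111_000.0          # meters per degree of latitude
          dlon = (curr["lon"] - prev["lon"]) * 111_000.0 * math.cos(math.radians(prev["lat"]))
          moved = math.hypot(dlat, dlon)
          turned = abs((curr["heading"] - prev["heading"] + 180) % 360 - 180)
          return moved > max_move_m or turned > max_turn_deg

      at_350 = {"lat": 37.7936, "lon": -122.3965, "heading": 75.0}
      at_360 = {"lat": 37.7942, "lon": -122.3950, "heading": 20.0}
      print(view_changed(at_350, at_360))   # True: time to request a new candidate set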
  • FIG. 5 illustrates an example 500 of an augmented reality system 510 for recognizing and tracking a point of interest in the real world in accordance with various embodiments. The augmented reality platform 530 communicates with the client computing devices 502 via the network 504. Although only some client computing devices 502 are shown in FIG. 5, it should be understood that various other types of electronic or computing devices that are capable of receiving or rendering a web application in accordance with various embodiments are discussed herein. These client devices can include, for example, desktop PCs, laptop computers, tablet computers, personal data assistants (PDAs), smart phones, portable media file players, e-book readers, portable computers, head-mounted displays, interactive kiosks, mobile phones, net books, single-board computers (SBCs), embedded computer systems, wearable computers (e.g., watches or glasses), gaming consoles, home-theater PCs (HTPCs), TVs, DVD players, digital cable boxes, digital video recorders (DVRs), computer systems capable of running a web-browser, or a combination of any two or more of these.
  • In some embodiments, the augmented reality platform 530 provides a web service allowing users to search and discover links and other content (e.g., reviews, menus, video, chat walls, contact information, URLs) that are tied to unique visual features at a point of interest in the real world. The client computing devices 502 can display those links and content as augmented reality content on the display screen or the camera preview screen. On the production side, the augmented reality system 530 enables users or an owner of a point of interest to submit or upload links and/or content related to the point of interest to the database 520. The links and/or content are tied to at least one of the point of view (e.g., GPS location, IMU orientation, and compass) from the client computing device 502, or image features of the point of interest. On the consumption side, users in the real world can discover the links and/or content related to the point of interest by pointing the client computing device 502 at the point of interest. The links and/or content related to the point of interest can be presented as content augmenting a camera preview of the real world.
  • In some embodiments, the augmented reality platform 530 enables users to submit multiple and varied points of view of the same point of interest. In some instances, submitted points of view are taken under different weather conditions. As a user moves along in the real world, the augmented reality system 530 can recognize and match features in the real-time image against images that are attached to the points of interest in the vicinity of the client computing device 502 or within the point of view of the user. Candidate images together with attached content and/or links can be dynamically selected, even when the point of view of the user is changing and the real-time image is different from the saved images. The augmented reality platform 530 can calculate a confidence score for each candidate image by matching the candidate image against the real-time image and provide a stored image with the highest confidence score for the user to use as a fiducial for recognition and tracking, along with the content and/or links associated with the point of interest.
  • FIGS. 6A and 6B illustrate front and back views, respectively, of an example electronic computing device 600 that can be used in accordance with various embodiments. Although a portable computing device (e.g., a smartphone, an electronic book reader, or tablet computer) is shown, it should be understood that any device capable of receiving and processing input can be used in accordance with various embodiments discussed herein. The devices are capable of receiving, displaying or playing streaming media files in accordance with various embodiments discussed herein. The devices can include, for example, desktop PCs, laptop computers, tablet computers, personal data assistants (PDAs), smart phones, portable media players, e-book readers, portable computers, head-mounted displays, interactive kiosks, mobile phones, net books, single-board computers (SBCs), embedded computer systems, wearable computers (e.g., watches or glasses), gaming consoles, home-theater PCs (HTPCs), TVs, DVD players, digital cable boxes, digital video recorders (DVRs), computer systems capable of running a web-browser, among others.
  • In this example, the computing device 600 has a display screen 602 (e.g., an LCD element) operable to display information or image content to one or more users or viewers of the device. The display screen of some embodiments displays information (e.g., streaming media file) to the viewer facing the display screen (e.g., on the same side of the computing device as the display screen). The computing device in this example can include one or more imaging elements, in this example including two image capture elements 604 on the front of the device and at least one image capture element 610 on the back of the device. It should be understood, however, that image capture elements could also, or alternatively, be placed on the sides or corners of the device, and that there can be any appropriate number of capture elements of similar or different types. Each image capture element 604 and 610 may be, for example, a camera, a charge-coupled device (CCD), a motion detection sensor or an infrared sensor, or other image capturing technology.
  • As discussed, the device can use the images (e.g., still or video) captured from the imaging elements 604 and 610 to generate a three-dimensional simulation of the surrounding environment (e.g., a virtual reality of the surrounding environment for display on the display element of the device). Further, the device can utilize outputs from at least one of the image capture elements 604 and 610 to assist in determining the location and/or orientation of a user and in recognizing nearby persons, objects, or locations. For example, if the user is holding the device, the captured image information can be analyzed (e.g., using mapping information about a particular area) to determine the approximate location and/or orientation of the user. The captured image information may also be analyzed to recognize nearby persons, objects, or locations (e.g., by matching parameters or elements from the mapping information).
  • The computing device can also include at least one microphone or other audio capture elements capable of capturing audio data, such as words spoken by a user of the device, music being hummed by a person near the device, or audio being generated by a nearby speaker or other such component, although audio elements are not required in at least some devices. In this example there are three microphones, one microphone 608 on the front side, one microphone 612 on the back, and one microphone 606 on or near a top or side of the device. In some devices there may be only one microphone, while in other devices there might be at least one microphone on each side and/or corner of the device, or in other appropriate locations.
  • The device 600 in this example also includes one or more orientation or position-determining elements 618 operable to provide information such as a position, direction, motion, or orientation of the device. These elements can include, for example, accelerometers, inertial sensors, electronic gyroscopes, and electronic compasses.
  • The example device also includes at least one communication mechanism 614, such as at least one wired or wireless component operable to communicate with one or more electronic devices. The device also includes a power system 616, such as may include a battery operable to be recharged through conventional plug-in approaches, or through other approaches such as capacitive charging through proximity with a power mat or other such device. Various other elements and/or combinations are possible as well within the scope of various embodiments.
  • FIG. 7 illustrates a set of basic components of an electronic computing device 700 such as the device 600 described with respect to FIG. 6. In this example, the device includes at least one processing unit 702 for executing instructions that can be stored in a memory device or element 704. As would be apparent to one of ordinary skill in the art, the device can include many types of memory, data storage, or computer-readable media, such as a first data storage for program instructions for execution by the processing unit(s) 702; the same or separate storage can be used for images or data, a removable memory can be available for sharing information with other devices, and any number of communication approaches can be available for sharing with other devices.
  • The device typically will include some type of display element 706, such as a touch screen, electronic ink (e-ink), organic light emitting diode (OLED) or liquid crystal display (LCD), although devices such as portable media players might convey information via other means, such as through audio speakers. The display element 706 is capable of displaying streaming media files or other information to viewers facing the display element 706.
  • As discussed, the device in many embodiments will include at least one imaging/audio element 708, such as one or more cameras that are able to capture images of the surrounding environment and that are able to image a user, people, or objects in the vicinity of the device. The image capture element can include any appropriate technology, such as a CCD image capture element having a sufficient resolution, focal range, and viewable area to capture an image of the user when the user is operating the device. Methods for capturing images using a camera element with a computing device are well known in the art and will not be discussed herein in detail. It should be understood that image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc. Further, a device can include the ability to start and/or stop image capture, such as when receiving a command from a user, application, or other device.
  • The example computing device 700 also includes at least one orientation/motion determining element 710 able to determine and/or detect orientation and/or movement of the device. Such an element can include, for example, an accelerometer or gyroscope operable to detect movement (e.g., rotational movement, angular displacement, tilt, position, orientation, motion along a non-linear path, etc.) of the device 700. An orientation determining element can also include an electronic or digital compass, which can indicate a direction (e.g., north or south) in which the device is determined to be pointing (e.g., with respect to a primary axis or other such aspect).
  • As discussed, the device in many embodiments will include at least a positioning element 712 for determining a location of the device (or the user of the device). A positioning element can include or comprise a GPS or similar location-determining elements operable to determine relative coordinates for a position of the device. As mentioned above, positioning elements may include wireless access points, base stations, etc., that may either broadcast location information or enable triangulation of signals to determine the location of the device. Other positioning elements may include QR codes, barcodes, RFID tags, NFC tags, etc. that enable the device to detect and receive location information or identifiers that enable the device to obtain the location information (e.g., by mapping the identifiers to a corresponding location). Various embodiments can include one or more such elements in any appropriate combination.
  • As mentioned above, some embodiments use the element(s) to track the location of a device. Upon determining an initial position of a device (e.g., using GPS), the device of some embodiments may keep track of the location of the device by using the element(s), or in some instances, by using the orientation determining element(s) as mentioned above, or a combination thereof. As should be understood, the algorithms or mechanisms used for determining a position and/or orientation can depend at least in part upon the selection of elements available to the device. In some embodiments, the example computing device 700 may also include a low power, low resolution imaging element to capture image data. The low resolution imaging element can transmit the captured image data over a low bandwidth bus, such as an I2C bus, to a low power processor, such as a PIC-class processor. The PIC processor may also communicate with other components of the computing device 700, such as the orientation/motion determining element 710, etc. The PIC processor can analyze the image data from the low resolution imaging element and other components of the computing device 700 to determine whether a head motion likely corresponds to a recognized head gesture. If the PIC processor determines that the head motion likely corresponds to a recognized head gesture, the PIC processor can enable another imaging element to activate high resolution image capture and/or a main processor to analyze the captured high resolution image data.
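  • As a rough, non-limiting sketch of that low-power gating, the loop below inspects cheap low-resolution frames and only invokes the high-resolution capture path when inter-frame motion energy exceeds a threshold; the heuristic and threshold are assumptions and do not represent the PIC processor's actual gesture analysis.
      import numpy as np

      def likely_gesture(prev_frame, frame, threshold=12.0):
          # Cheap motion-energy check suitable for a low-resolution, low-power path;
          # both the heuristic and the threshold are illustrative only.
          return float(np.abs(frame.astype(int) - prev_frame.astype(int)).mean()) > threshold

      def gated_capture(low_res_stream, capture_high_res):
          prev = next(low_res_stream)
          for frame in low_res_stream:
              if likely_gesture(prev, frame):
                  # Wake the high-resolution imaging element / main processor only now.
                  return capture_high_res()
              prev = frame
          return None

      frames = iter([np.zeros((24, 32)), np.zeros((24, 32)), np.full((24, 32), 200)])
      print(gated_capture(frames, capture_high_res=lambda: "high-res frame captured"))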
  • The example device also includes one or more wireless components 714 operable to communicate with one or more electronic devices within a communication range of the particular wireless channel. The wireless channel can be any appropriate channel used to enable devices to communicate wirelessly, such as Bluetooth, cellular, NFC, or Wi-Fi channels. It should be understood that the device can have one or more conventional wired communications connections as known in the art.
  • The device also includes a power system 716, such as may include a battery operable to be recharged through conventional plug-in approaches, or through other approaches such as capacitive charging through proximity with a power mat or other such device. Various other elements and/or combinations are possible as well within the scope of various embodiments.
  • In some embodiments the device can include at least one additional input device 718 able to receive conventional input from a user. This conventional input can include, for example, a push button, touch pad, touch screen, wheel, joystick, keyboard, mouse, keypad, or any other such device or element whereby a user can input a command or a request for additional product information to the device. These I/O devices could even be connected by a wireless infrared or Bluetooth or other link as well in some embodiments. Some devices also can include a microphone or other audio capture element that accepts voice or other audio commands. For example, a device might not include any buttons at all, but might be controlled only through a combination of visual and audio commands, such that a user can control the device without having to be in contact with the device.
  • As discussed, different approaches can be implemented in various environments in accordance with the described embodiments. For example, FIG. 8 illustrates an example of an environment 800 for implementing aspects in accordance with various embodiments. As will be appreciated, although a Web-based environment is used for purposes of explanation, different environments may be used, as appropriate, to implement various embodiments. The system includes an electronic computing device 802, which can include any appropriate device operable to send and receive requests, messages or information over an appropriate network 804 and convey information back to a user of the device. Examples of such computing devices include personal computers, cell phones, handheld messaging devices, laptop computers, set-top boxes, personal data assistants, electronic book readers and the like. The network can include any appropriate network, including an intranet, the Internet, a cellular network, a local area network or any other such network or combination thereof. The network could be a “push” network, a “pull” network, or a combination thereof. In a “push” network, one or more of the servers push out data to the computing device. In a “pull” network, one or more of the servers send data to the computing device upon request for the data by the computing device. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. Communication over the network can be enabled via wired or wireless connections and combinations thereof. In this example, the network includes the Internet, as the environment includes a Web server 806 for receiving requests and serving content in response thereto, although for other networks, an alternative device serving a similar purpose could be used, as would be apparent to one of ordinary skill in the art.
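  • The distinction between “push” and “pull” delivery can be illustrated with a minimal sketch in which a client fetches data on demand (pull) or registers a callback that a server invokes when new data is available (push). The class names and payloads below are assumptions used only for illustration.

```python
# Illustrative sketch only; no real servers or network transport are involved.
class PullClient:
    def __init__(self, server):
        self.server = server

    def fetch(self, key):
        """Client-initiated request: ask the server for data when needed (pull)."""
        return self.server.get(key)

class PushServer:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        """Device registers interest; the server pushes updates later."""
        self.subscribers.append(callback)

    def publish(self, payload):
        """Server-initiated delivery: send the payload to every subscriber (push)."""
        for callback in self.subscribers:
            callback(payload)

if __name__ == "__main__":
    server_data = {"poi/123": "menu.html"}
    print(PullClient(server_data).fetch("poi/123"))   # pull: device asks

    push = PushServer()
    push.subscribe(lambda p: print("pushed:", p))     # push: server sends
    push.publish({"poi/123": "updated menu"})
```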
  • The illustrative environment includes at least one application server 808 and a data store 810. It should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein, the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed or clustered environment. The application server 808 can include any appropriate hardware and software for integrating with the data store 810 as needed to execute aspects of one or more applications for the computing device and handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server 806 in the form of HTML, XML or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the computing device 802 and the application server 808, can be handled by the Web server 806. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.
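  • As a simplified, non-limiting sketch of the request handling described above, the following handler resolves a hypothetical point-of-interest path against an in-memory data store and returns generated HTML of the kind a front-end Web server such as Web server 806 might relay to the device. The routing scheme, record fields, and markup are assumptions.

```python
# Illustrative sketch only; the data store contents and routing are hypothetical.
DATA_STORE = {"poi/42": {"name": "Corner Cafe", "link": "https://example.com/menu"}}

def handle_request(path):
    """Generate an HTML fragment for a point-of-interest detail request."""
    record = DATA_STORE.get(path)
    if record is None:
        return 404, "<p>Not found</p>"
    body = "<h1>{name}</h1><a href='{link}'>More info</a>".format(**record)
    return 200, body

if __name__ == "__main__":
    status, html = handle_request("poi/42")
    print(status, html)
```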
  • The data store 810 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing content (e.g., production data) 812 and user information 816, which can be used to serve content for the production side. The user information 816 may include user preferences, historical data, user demographic data, and the audio systems of the user devices associated with users. Demographic data of users may include user age, user gender, user educational background, user marital status, user income level, user ethnicity, user postal code, user primary language, or user spending habits. The audio systems may include headphones (e.g., earphones, ear buds, and the like), speakers (e.g., tablet speakers, Bluetooth speakers, computer speakers, bookshelf speakers, center-channel speakers, floor speakers, in-wall and in-ceiling speakers, outdoor speakers, sound bars, portable speakers, and woofer/sub-woofer speakers), or various types of audio amplifiers. The data store is also shown to include a mechanism for storing log or session data 814. It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 810. The data store 810 is operable, through logic associated therewith, to receive instructions from the application server 808 and obtain, update or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about items of that type. The information can then be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device 802. Information for a particular item of interest can be viewed in a dedicated page or window of the browser.
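  • The search flow described above can be sketched as follows, with hypothetical user and catalog tables standing in for the user information 816 and content 812; the data store verifies the requesting user and returns only catalog items of the requested type. All field names and records are assumptions.

```python
# Illustrative sketch only; tables, fields, and records are hypothetical.
USERS = {"u1": {"name": "Alice", "verified": True}}
CATALOG = [
    {"id": "p1", "type": "speaker",   "title": "Bookshelf speaker"},
    {"id": "p2", "type": "headphone", "title": "Ear buds"},
]

def search_items(user_id, item_type):
    """Return catalog items of item_type for a verified user, else an empty list."""
    user = USERS.get(user_id)
    if not user or not user.get("verified"):
        return []                                       # unknown or unverified user
    return [item for item in CATALOG if item["type"] == item_type]

if __name__ == "__main__":
    for item in search_items("u1", "speaker"):
        print(item["id"], item["title"])
```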
  • Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include computer-readable medium storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
  • The environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in FIG. 8. Thus, the depiction of the system 800 in FIG. 8 should be taken as being illustrative in nature and not limiting to the scope of the disclosure.
  • The various embodiments can be further implemented in a wide variety of operating environments, which in some cases can include one or more user computers or computing devices which can be used to operate any of a number of applications. User or computing devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system can also include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices can also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.
  • Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, CIFS and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof.
  • In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers and business application servers. The server(s) may also be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++ or any scripting language, such as Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase® and IBM®.
  • The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch-sensitive display element or keypad) and at least one output device (e.g., a display device, printer or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc.
  • Such devices can also include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device) and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services or other elements located within at least one working memory device, including an operating system and application programs such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.
  • Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
  • The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.

Claims (20)

What is claimed is:
1. A computer-implemented method, comprising:
acquiring, by a computing device, an image of at least a partial view of a point of interest, the image corresponding to a point of view of an image capture element of the computing device and including one or more visual features;
identifying the point of interest based at least in part on a comparison of the one or more visual features to a plurality of candidate images and a respective confidence score of at least one candidate image;
causing at least one of a link or content to be presented on an interface layer of the computing device, the interface layer including at least one overlay level over a live video or still image view and at least one level of transparency; and
enabling customization of the interface layer corresponding to the point of interest by association of one or more additional links or additional content to the point of interest and presentation of the one or more additional links or additional content on the interface layer.
2. The computer-implemented method of claim 1, wherein the causing at least one of a link or content to be presented on the interface layer of the computing device is based at least in part upon a distance between the computing device and the point of interest, the point of view of the image capture element, a user profile, or a profile of an owner of the point of interest.
3. The computer-implemented method of claim 1, wherein the customized presentation of the one or more additional links or additional content on the interface layer includes at least one of glowing effect, bold effect, billboard effect, or a visual three-dimensional element.
4. The computer-implemented method of claim 1, further comprising:
receiving a selection of at least one display area on the interface layer; and
associating at least one additional link or additional content to the point of interest at a location corresponding to the at least one display area.
5. The computer-implemented method of claim 1, further comprising:
determining the point of view of the image capture element based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms.
6. The computer-implemented method of claim 1, wherein each image from the plurality of candidate images is associated with at least one location that is in a proximity of a location of the computing device.
7. The computer-implemented method of claim 1, wherein the at least one of a link or content includes at least one of promotional coupons, menus, advertisements, reservation systems, floor plans, videos, customer reviews, music, chat walls, audio, wait time, attractions of the point of interest, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the place on third party review sites, or alternative points of interest.
8. A non-transitory computer-readable storage medium including instructions that, when executed by at least one processor of a computing system, cause the computing system to:
acquire an image of at least a partial view of a point of interest, the image corresponding to a point of view of an image capture element of a computing device and including one or more visual features;
identify the point of interest based at least in part on a comparison of the one or more visual features to a plurality of candidate images and a respective confidence score of at least one candidate image;
cause at least one of a link or content to be presented on an interface layer of the computing device, the interface layer including at least one overlay level over a live video or still image view and at least one level of transparency; and
enable customization of the interface layer corresponding to the point of interest by association of one or more additional links or additional content to the point of interest and presentation of the one or more additional links or additional content on the interface layer.
9. The non-transitory computer-readable storage medium of claim 8, wherein the causing at least one of a link or content to be presented on the interface layer of the computing device is based at least in part upon a distance between the computing device and the point of interest, the point of view of the image capture element, a user profile, or a profile of an owner of the point of interest.
10. The non-transitory computer-readable storage medium of claim 8, wherein the customized presentation of the one or more additional links or additional content on the interface layer includes at least one of glowing effect, bold effect, billboard effect, or a visual three-dimensional element.
11. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the at least one processor of the computing system, further cause the computing system to:
receive a selection of at least one display area on the interface layer;
associate at least one additional link or additional content to the point of interest at a location corresponding to the at least one display area.
12. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the at least one processor of the computing system, further cause the computing system to:
determine the point of view of the image capture element based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms.
13. The non-transitory computer-readable storage medium of claim 8, wherein each image from the plurality of candidate images is associated with at least one location that is in a proximity of a location of the computing device.
14. The non-transitory computer-readable storage medium of claim 8, wherein the at least one of a link or content includes at least one of promotional coupons, menus, advertisements, reservation systems, floor plans, videos, customer reviews, music, chat walls, audio, wait time, attractions of the point of interest, instant or daily specials, recommendations on specific items, hyperlinks to reviews of the place on third party review sites, or alternative points of interest.
15. A system, comprising:
an image capture element;
a display;
at least one processor; and
memory including instructions that, when executed by the at least one processor, cause the system to:
acquire an image of at least a partial view of a point of interest, the image corresponding to a point of view of the image capture element and including one or more visual features;
identify the point of interest based at least in part on a comparison of the one or more visual features to a plurality of candidate images and a respective confidence score of at least one candidate image;
cause at least one of a link or content to be presented on an interface layer of the display, the interface layer including at least one overlay level over a live video or still image view and at least one level of transparency; and
enable customization of the interface layer corresponding to the point of interest by association of one or more additional links or additional content to the point of interest and presentation of the one or more additional links or additional content on the interface layer.
16. The system of claim 15, wherein the causing at least one of a link or content to be presented on the interface layer of the display is based at least in part upon a distance between the system and the point of interest, the point of view of the image capture element, a user profile, or a profile of an owner of the point of interest.
17. The system of claim 15, wherein the customized presentation of the one or more additional links or additional content on the interface layer includes at least one of glowing effect, bold effect, billboard effect, or a visual three-dimensional element.
18. The system of claim 15, wherein the instructions, when executed by the at least one processor, further cause the system to:
receive a selection of at least one display area on the interface layer;
associate at least one additional link or additional content to the point of interest at a location corresponding to the at least one display area.
19. The system of claim 15, wherein the instructions, when executed by the at least one processor, further cause the system to:
determine the point of view of the image capture element based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms.
20. The system of claim 15, wherein each image from the plurality of candidate images is associated with at least one location that is in a proximity of a location of the system.
US16/218,015 2014-03-28 2018-12-12 Sharing links in an augmented reality environment Active US10839605B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/218,015 US10839605B2 (en) 2014-03-28 2018-12-12 Sharing links in an augmented reality environment

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/229,678 US9432421B1 (en) 2014-03-28 2014-03-28 Sharing links in an augmented reality environment
US15/248,944 US10163267B2 (en) 2014-03-28 2016-08-26 Sharing links in an augmented reality environment
US16/218,015 US10839605B2 (en) 2014-03-28 2018-12-12 Sharing links in an augmented reality environment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/248,944 Continuation US10163267B2 (en) 2014-03-28 2016-08-26 Sharing links in an augmented reality environment

Publications (3)

Publication Number Publication Date
US20190114839A1 US20190114839A1 (en) 2019-04-18
US20200334906A9 true US20200334906A9 (en) 2020-10-22
US10839605B2 US10839605B2 (en) 2020-11-17

Family

ID=56739622

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/229,678 Active 2034-12-11 US9432421B1 (en) 2014-03-28 2014-03-28 Sharing links in an augmented reality environment
US15/248,944 Active US10163267B2 (en) 2014-03-28 2016-08-26 Sharing links in an augmented reality environment
US16/218,015 Active US10839605B2 (en) 2014-03-28 2018-12-12 Sharing links in an augmented reality environment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/229,678 Active 2034-12-11 US9432421B1 (en) 2014-03-28 2014-03-28 Sharing links in an augmented reality environment
US15/248,944 Active US10163267B2 (en) 2014-03-28 2016-08-26 Sharing links in an augmented reality environment

Country Status (1)

Country Link
US (3) US9432421B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11798274B2 (en) 2017-12-18 2023-10-24 Naver Labs Corporation Method and system for crowdsourcing geofencing-based content

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2835120C (en) * 2011-05-06 2019-05-28 Magic Leap, Inc. Massive simultaneous remote digital presence world
US9520002B1 (en) * 2015-06-24 2016-12-13 Microsoft Technology Licensing, Llc Virtual place-located anchor
US10011228B2 (en) * 2015-12-17 2018-07-03 Ford Global Technologies, Llc Hitch angle detection for trailer backup assist system using multiple imaging devices
CN106982240B (en) * 2016-01-18 2021-01-15 腾讯科技(北京)有限公司 Information display method and device
US9905244B2 (en) * 2016-02-02 2018-02-27 Ebay Inc. Personalized, real-time audio processing
WO2017201569A1 (en) * 2016-05-23 2017-11-30 tagSpace Pty Ltd Fine-grain placement and viewing of virtual objects in wide-area augmented reality environments
US11159631B2 (en) * 2016-08-12 2021-10-26 International Business Machines Corporation Integration of social interactions into media sharing
US10348663B2 (en) 2016-08-12 2019-07-09 International Business Machines Corporation Integration of social interactions into media sharing
CN107944895A (en) * 2016-10-12 2018-04-20 朱恩辛 A kind of interactive marketing system and method transboundary
CN106981000B (en) 2016-10-13 2020-06-09 阿里巴巴集团控股有限公司 Multi-person offline interaction and ordering method and system based on augmented reality
US10019831B2 (en) * 2016-10-20 2018-07-10 Zspace, Inc. Integrating real world conditions into virtual imagery
US10482340B2 (en) * 2016-12-06 2019-11-19 Samsung Electronics Co., Ltd. System and method for object recognition and ranging by deformation of projected shapes in a multimodal vision and sensing system for autonomous devices
CN106980690A (en) * 2017-03-31 2017-07-25 联想(北京)有限公司 A kind of data processing method and electronic equipment
IT201700058961A1 (en) 2017-05-30 2018-11-30 Artglass S R L METHOD AND SYSTEM OF FRUITION OF AN EDITORIAL CONTENT IN A PREFERABLY CULTURAL, ARTISTIC OR LANDSCAPE OR NATURALISTIC OR EXHIBITION OR EXHIBITION SITE
US11417246B2 (en) * 2017-10-19 2022-08-16 The Quantum Group, Inc. Personal augmented reality
US10542371B2 (en) * 2017-10-26 2020-01-21 Verizon Patent And Licensing Inc. System and method for providing customized point-of-interest information
US11126846B2 (en) * 2018-01-18 2021-09-21 Ebay Inc. Augmented reality, computer vision, and digital ticketing systems
CN110335351B (en) * 2019-07-02 2023-03-24 北京百度网讯科技有限公司 Multi-modal AR processing method, device, system, equipment and readable storage medium
JP7332197B2 (en) * 2019-10-25 2023-08-23 Necソリューションイノベータ株式会社 INFORMATION SHARING DEVICE, EVENT SUPPORT SYSTEM, INFORMATION SHARING METHOD, AND EVENT SUPPORT SYSTEM PRODUCTION METHOD
WO2021091552A1 (en) * 2019-11-06 2021-05-14 Google Llc Use of image sensors to query real world for geo-reference information
US11263818B2 (en) * 2020-02-24 2022-03-01 Palo Alto Research Center Incorporated Augmented reality system using visual object recognition and stored geometry to create and render virtual objects
US11733959B2 (en) * 2020-04-17 2023-08-22 Apple Inc. Physical companion devices for use with extended reality systems
KR20220132863A (en) 2021-03-24 2022-10-04 현대자동차주식회사 Mobile device and Vehicle
US11967147B2 (en) * 2021-10-01 2024-04-23 At&T Intellectual Proerty I, L.P. Augmented reality visualization of enclosed spaces

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7460953B2 (en) * 2004-06-30 2008-12-02 Navteq North America, Llc Method of operating a navigation system using images
WO2008133237A1 (en) * 2007-04-23 2008-11-06 Sharp Kabushiki Kaisha Image picking-up device, computer readable recording medium including recorded program for control of the device, and control method
US8315423B1 (en) * 2007-12-28 2012-11-20 Google Inc. Providing information in an image-based information retrieval system
US8311556B2 (en) * 2009-01-22 2012-11-13 Htc Corporation Method and system for managing images and geographic location data in a mobile device
US8429173B1 (en) 2009-04-20 2013-04-23 Google Inc. Method, system, and computer readable medium for identifying result images based on an image query
US8385591B1 (en) * 2009-04-28 2013-02-26 Google Inc. System and method of using images to determine correspondence between locations
US20100332571A1 (en) 2009-06-30 2010-12-30 Jennifer Healey Device augmented food identification
US8239130B1 (en) * 2009-11-12 2012-08-07 Google Inc. Enhanced identification of interesting points-of-interest
US8566029B1 (en) * 2009-11-12 2013-10-22 Google Inc. Enhanced identification of interesting points-of-interest
US9488488B2 (en) 2010-02-12 2016-11-08 Apple Inc. Augmented reality maps
US8332429B2 (en) * 2010-06-22 2012-12-11 Xerox Corporation Photography assistant and method for assisting a user in photographing landmarks and scenes
KR101347518B1 (en) 2010-08-12 2014-01-07 주식회사 팬택 Apparatus, Method and Server for Selecting Filter
JP5194149B2 (en) * 2010-08-23 2013-05-08 東芝テック株式会社 Store system and program
US8768071B2 (en) * 2011-08-02 2014-07-01 Toyota Motor Engineering & Manufacturing North America, Inc. Object category recognition methods and robots utilizing the same
JP5821526B2 (en) 2011-10-27 2015-11-24 ソニー株式会社 Image processing apparatus, image processing method, and program
US20130110620A1 (en) 2011-10-31 2013-05-02 Yongtai Zhu Selecting images based on textual description
US9223902B1 (en) * 2011-11-29 2015-12-29 Amazon Technologies, Inc. Architectures for content identification
US20130328926A1 (en) * 2012-06-08 2013-12-12 Samsung Electronics Co., Ltd Augmented reality arrangement of nearby location information
US20140015858A1 (en) 2012-07-13 2014-01-16 ClearWorld Media Augmented reality system
US20140058825A1 (en) 2012-08-24 2014-02-27 Verizon Patent And Licensing Inc. Augmented-reality-based offer management system
US8990194B2 (en) * 2012-11-02 2015-03-24 Google Inc. Adjusting content delivery based on user submissions of photographs
US9550419B2 (en) * 2014-01-21 2017-01-24 Honda Motor Co., Ltd. System and method for providing an augmented reality vehicle interface
US10147399B1 (en) 2014-09-02 2018-12-04 A9.Com, Inc. Adaptive fiducials for image match recognition and tracking

Also Published As

Publication number Publication date
US20170053451A1 (en) 2017-02-23
US10163267B2 (en) 2018-12-25
US10839605B2 (en) 2020-11-17
US9432421B1 (en) 2016-08-30
US20190114839A1 (en) 2019-04-18

Similar Documents

Publication Publication Date Title
US10839605B2 (en) Sharing links in an augmented reality environment
US11227326B2 (en) Augmented reality recommendations
US20190333478A1 (en) Adaptive fiducials for image match recognition and tracking
US9870633B2 (en) Automated highlighting of identified text
US9269011B1 (en) Graphical refinement for points of interest
US9661214B2 (en) Depth determination using camera focus
US9342930B1 (en) Information aggregation for recognized locations
US9160993B1 (en) Using projection for visual recognition
US20190086214A1 (en) Image processing device, image processing method, and program
US9536161B1 (en) Visual and audio recognition for scene change events
US9881084B1 (en) Image match based video search
WO2013051180A1 (en) Image processing apparatus, image processing method, and program
US10606824B1 (en) Update service in a distributed environment
Anagnostopoulos et al. Gaze-Informed location-based services
US9467660B1 (en) Map generation using map features from user captured images
US9857177B1 (en) Personalized points of interest for mapping applications
US9600720B1 (en) Using available data to assist in object recognition
US20130121528A1 (en) Information presentation device, information presentation method, information presentation system, information registration device, information registration method, information registration system, and program
WO2016005799A1 (en) Social networking system and method
US9262689B1 (en) Optimizing pre-processing times for faster response
US10600060B1 (en) Predictive analytics from visual data
US10733491B2 (en) Fingerprint-based experience generation
JP7531473B2 (en) Information processing device, information processing method, and information processing program
JP7476163B2 (en) Information processing device, information processing method, and information processing program
JP7459038B2 (en) Information processing device, information processing method, and information processing program

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4