US20180293771A1 - Systems and methods for creating, sharing, and performing augmented reality - Google Patents

Systems and methods for creating, sharing, and performing augmented reality

Info

Publication number
US20180293771A1
Authority
US
United States
Prior art keywords
augmented reality
information
anchor
communications device
real world
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/696,157
Inventor
Patrick S. Piemonte
Ryan P. Staake
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Staake Ryan P
Original Assignee
Staake Ryan P
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Staake Ryan P
Priority to US15/696,157
Assigned to Mirage Worlds, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STAAKE, RYAN P.; PIEMONTE, PATRICK S.
Assigned to STAAKE, RYAN P. and PIEMONTE, PATRICK S. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Mirage Worlds, Inc.
Publication of US20180293771A1

Classifications

    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F3/005 Input arrangements through a video camera
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G06K9/6202
    • G06K9/6215
    • G06Q20/123 Shopping for digital content
    • G06Q20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06T11/60 Editing figures and text; combining figures or text
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V20/20 Scenes; scene-specific elements in augmented reality scenes
    • H04L67/38
    • H04W4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • G06Q30/0261 Targeted advertisements based on user location
    • H04W84/12 WLAN [Wireless Local Area Networks]

Definitions

  • Augmented reality applications may use a simple overlay of graphical/animated subject matter on live or recorded video or still images.
  • a user or application may position a static graphic, text, or other visual element superimposed on the underlying video or image.
  • Other augmented reality applications may blend augmented reality subject matter with underlying visual data, or at least use the underlying visual data to position the augmented reality subject matter.
  • a human face may be recognized in a video feed or image still, and the augmented reality application may apply coloration, designs, distortions, etc. that track only the face in the video or image, so as to further the augmented reality effect that the face actually has such characteristics.
  • Aside from human faces, other objects and/or image information may be used for tracking and/or formatting augmented reality subject matter.
  • a QR code may be recognized as a target for augmented reality overlay, both in subject matter and positioning of the augmented reality subject matter.
  • a zoom level of a video feed may determine sizing and/or resolution of augmented reality subject matter on the zoomed video.
  • augmented reality subject matter may be added and/or formatted after video or image capture, following further processing of the video and/or image data.
  • Augmented reality subject matter is typically provided by the application receiving the visual data.
  • an application may offer a set of stickers, labels, drawn text, cartoons, etc. that can be applied to live or captured visual information and then saved together with the visual data as an augmented reality visual.
  • a set of facial overlays, game images and objectives, graphical heads-up displays or GUIs, etc. can be offered by augmented reality applications for overlay/intermixing with visual data to create augmented reality visuals for users.
  • Augmented reality subject matter may be geofenced or chronofenced, where various labels, game characters, filters, distortions, and/or any other augmented reality application may only be available at particular locations and times.
  • a picture overlay of “San Antonio Tex.” or “Spurs 24—Pacers 32” may be available only when a user is determined to be in the San Antonio basketball arena through location services on a mobile device, and/or only during a Spurs-Pacers game when the score is 24 to 32.
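  • As a rough illustration of such geofencing and chronofencing, the sketch below checks a device's reported location against a circular fence and a time window before offering an overlay. It assumes a simple haversine radius test; the function names, coordinates, and window are hypothetical and not taken from the patent.

```python
import math
from datetime import datetime, timezone

def within_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """Haversine distance check against a circular geofence (hypothetical helper)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat), math.radians(fence_lat)
    dp = math.radians(fence_lat - lat)
    dl = math.radians(fence_lon - lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m

def overlay_available(device_lat, device_lon, now, fence, window):
    """An overlay is offered only inside the fence and inside the time window."""
    in_place = within_geofence(device_lat, device_lon, *fence)
    in_time = window[0] <= now <= window[1]
    return in_place and in_time

# Example: a hypothetical arena fence (500 m radius) during a two-hour event window.
fence = (29.4270, -98.4375, 500.0)
window = (datetime(2017, 4, 1, 19, 0, tzinfo=timezone.utc),
          datetime(2017, 4, 1, 21, 0, tzinfo=timezone.utc))
print(overlay_available(29.4268, -98.4380,
                        datetime(2017, 4, 1, 19, 30, tzinfo=timezone.utc),
                        fence, window))
```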
  • Example embodiments and methods create, transmit, and/or perform augmented reality in context with underlying real world subject matter.
  • Example embodiments include communications devices that can create and transmit augmented reality and anchor information for reproduction in similar media at a separate instance by a user in receipt of the information, as well as application hosts for facilitating the same. Through a computer processor in the communications device configured with example methods, augmented reality created on many types of real world subject matter may be selectively performed by others.
  • Example methods use a computer processor to create augmented reality with augmented reality information having perceivable elements added to underlying real world subject matter and anchor information to properly place or time or otherwise configure the augmented reality information in the underlying media.
  • the augmented reality information and anchor information may be transmitted at one instance, as combined media representing the augmented reality or as distinct pieces of information.
  • the real world subject matter may be scrutinized for distinctiveness and/or quality anchor data. If there is sufficient distinctiveness, an augmented reality build graphical user interface can be presented to a user of the communications device. If not, the user may be alerted of the lack of distinctiveness and prompted to find new real world subject matter for augmentation.
  • Submitted augmented reality may be searchable with a string or other information provided to the user.
  • Example methods handle several forms of information with a computer processor to ultimately perform augmented reality, including augmented reality information having perceivable elements to be added to underlying media, anchor information to properly place or time or otherwise configure the augmented reality information in the underlying media, origin and limitation information to control when and how information is exchanged and compared, if at all, and the actual media information.
  • the augmented reality information and anchor information may be received at one instance, as combined media representing the augmented reality or as distinct pieces of information.
  • the actual media may be received at another instance, potentially from a wholly distinct user and/or time.
  • the computer processor compares the anchor information with the media to determine if the anchor information matches, is found in, or otherwise triggered by the media.
  • the augmented reality information is performed in the media, in the manner dictated by the anchor information, so as to recreate the augmented reality in the context of the media, even in real time with the capture of the media.
  • the comparison and/or receipt of the actual media may be conditioned upon a user or device satisfying other origin limits, such as being within a designated geographical area that matches the anchor, being controlled by a particular authenticated user, being executed at a particular time, being executed at and while a particular event is occurring, having been paid for, etc. In this way, potentially burdensome media and augmented reality information transfer, comparison, and performance together can be reserved for particular context-matching circumstances, preserving example embodiment network and communication devices resources.
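  • One way to picture the information handled by these example methods is a single share payload carrying the augmented reality elements, the anchor data, and any origin limits, serializable for transmission together or as separate pieces. The sketch below is a minimal, assumed schema in Python; the field names are illustrative and not defined by the patent.

```python
from dataclasses import dataclass, field, asdict
import json
from typing import Optional

@dataclass
class AugmentedElement:
    kind: str            # "text", "sticker", "image", "sound", "haptic", ...
    payload: str         # text content, asset URL/ID, etc.
    placement: dict      # position/scale/rotation relative to the anchor

@dataclass
class AnchorInfo:
    anchor_type: str     # "image-features", "qr", "audio-fingerprint", ...
    descriptor: bytes    # compact fingerprint, not the full media

@dataclass
class OriginInfo:
    geofence: Optional[tuple] = None      # (lat, lon, radius_m)
    valid_until: Optional[str] = None     # ISO-8601 expiration
    allowed_users: Optional[list] = None  # user IDs; None = public
    price_cents: int = 0                  # 0 = free to perform

@dataclass
class ARShare:
    elements: list = field(default_factory=list)
    anchor: Optional[AnchorInfo] = None
    origin: Optional[OriginInfo] = None

    def to_json(self) -> str:
        # Anchor descriptors are bytes, so hex-encode them before JSON serialization.
        d = asdict(self)
        if self.anchor is not None:
            d["anchor"]["descriptor"] = self.anchor.descriptor.hex()
        return json.dumps(d)
```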
  • FIG. 1 is an illustration of an example embodiment network configured to share and perform augmented reality.
  • FIG. 2 is an illustration of an example embodiment communications device configured to share and perform augmented reality.
  • FIG. 3 is an illustration of an example embodiment GUI for creating augmented reality.
  • FIG. 4 is a flow chart of an example method of creating and sharing augmented reality.
  • FIG. 5 is an illustration of an operations sequence in an example method of creating and sharing augmented reality.
  • FIG. 6 is a flow chart illustrating an example method of sharing and performing augmented reality.
  • FIG. 7 is an illustration of an operations sequence in an example method of sharing and performing augmented reality.
  • Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited to any order by these terms. These terms are used only to distinguish one element from another; where there are “second” or higher ordinals, there merely must be at least that many elements, without necessarily any difference or other relationship.
  • a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments or methods.
  • the term “and/or” includes all combinations of one or more of the associated listed items. The use of “etc.” is defined as “et cetera” and indicates the inclusion of all other elements belonging to the same group of the preceding items, in any “and/or” combination(s).
  • augmented reality offers a useful way of communicating additional information about media typically encountered on communication devices.
  • augmented reality is conventionally available only in connection with very specific objects, such as QR codes or predefined files from a single source, that define how augmented reality elements should be displayed, often without any contextual connection with the underlying object.
  • the inventors have newly recognized a problem where augmented reality does not contextually describe or associate with other commonly-encountered media, where it is more useful.
  • the inventors have recognized that it is extremely difficult for individuals to create and share augmented reality in a non-pre-set context. That is, users are unable to augment arbitrary media with contextual subject matter and share the same in a form where others can independently experience the augmented reality. Similarly, the inventors have recognized that it is extremely burdensome in computing environments to transmit and compare the amount of data required to offer augmented reality in connection with specific media, especially in a potentially unlimited network such as the Internet, because of the size of such media and the computational requirements of triggering and presenting a particular augmented reality, out of millions or more that could be offered, in connection with appropriate objects. To overcome these newly-recognized problems, as well as others, the inventors have developed the example embodiments and methods described below.
  • the present invention is devices, software as stored or executed on tangible computer-readable media, and methods for creating, sharing, and/or performing contextual augmented reality.
  • the few example embodiments and example methods discussed below illustrate just a subset of the variety of different configurations that can be used as and/or in connection with the present invention.
  • FIG. 1 is an illustration of an example embodiment network useable to create and share augmented reality content among and to multiple users.
  • a network 10 provides communicative connection among several different communications devices 20 .
  • network 10 could be the Internet or another TCP/IP protocol network such as a WAN or LAN or intranet, or network 10 could be a wireless cell network operating on CDMA, WiFi, Bluetooth, GPS, near field communications, etc.
  • Network 10 may thus be any structure or protocol that allows meaningful communicative connections between communications devices 20 and/or other information sources.
  • Communications devices 20 may be directly communicatively connected among one another to sideload or directly transmit data between devices 20 , such as through NFC, WiFi, Infrared, etc.
  • Although communications devices 20 are shown in connection with a network 10 , it is understood that devices 20 and ultimately users 1 may be directly connected to each other, and potentially only each other, through such sideloading or direct communications and/or only directly connected to content providers 50 without use of network 10 .
  • One or more content providers 50 connect to one or more user devices 20 , either directly or via network 10 or another network.
  • Providers 50 can be any content, media, functionality, software, and/or operations providers for communication devices 20 .
  • providers 50 may include mobile software developers with server backends, application hosts, and/or access portals for downloading and running software and/or streaming media on devices 20 .
  • providers 50 may include a network operator, such as a cellphone and mobile data carrier operating network 10 and controlling access rights of users 20 as well as general operation of network 10 .
  • providers 50 may be application storefronts providing search, download, operations connectivity, updating, etc. for apps on communication devices 20 .
  • providers 50 may be a website or ftp server offering downloadable files or other content that may be displayed or otherwise consumed through devices 20 . Although providers 50 are mostly shown clustered around network 10 for connectivity to devices 20 , it is understood that any direct or indirect connection between any provider 50 and any device 20 is useable in example embodiments.
  • Example embodiment application host 100 provides storage and delivery of augmented reality content and/or potentially other networking functionality among devices 20 and ultimately users 1 , optionally through providers 50 and/or network 10 .
  • host 100 may be connected to several different devices 20 through a network 10 .
  • application host 100 may be connected directly to, and controlled by, a content provider 50 , to provide augmented reality information and/or functionality among devices 20 .
  • host 100 may connect directly to a device 20 . This flexibility in networking can achieve a variety of different augmented reality functionalities, content control, and commercial transactions among potentially independent hosts 100 , providers 50 , network 10 , and/or devices 20 .
  • example embodiment application host 100 may be connected to or include computer hardware processors, server functionality, and/or one or more databases 105 , which may store augmented reality information, functionality, and/or user or network profile or operational data for successful interaction among various networked components. In this way, host 100 may accept, persist, and analyze data from user communications devices 20 , network 10 , and/or providers 50 .
  • Although shown as separate elements in FIG. 1 , host 100 may be integrated with content provider 50 , databases 105 , and/or network 10 , such as an application portal accessed through a mobile app or program on devices 20 that provides application updating, augmented reality content, other application functionality, login, registration, ecommerce transactions, instructions, technical support, etc., like a full-service application portal available from a computerized user device 20 in order to execute all aspects of example methods discussed below.
  • “Communications device(s)”, including user communications devices 20 of FIG. 2 , is defined as processor-based electronic devices configured to receive, transmit, create, and/or perform augmented reality content. Information exchange, and any communicative connection, between communications devices must include non-human communications, such as digital information transfer between computers.
  • “Augmented reality”, including augmented reality 101 of FIG. 2 , is defined as subject matter including a mixture of both real-life audio, visual, tactile, and/or other sensory media and added audio, visual, tactile, and/or other sensory subject matter that is explicitly based on the underlying real-life media.
  • augmented reality could include a real-time video feed 1 with audio captured without intentional modification by a camera and microphone in combination with an extraneous graphic, mask, text, animation, filter, noise, vibration, etc. that positionally tracks the underlying real-life subject matter.
  • FIG. 2 is a schematic of an example embodiment user communications device 20 illustrating components thereof that may permit creation and sharing of augmented reality 101 as described in example methods below.
  • communications device 20 may include a camera package 110 including a lens and image sensor, microphone 115 , a computer processor 120 , persistent and/or transient storage 130 , external communications 140 , display screen 180 , and/or input device and input sensors 185 .
  • Processor 120 may include one or more computer processors connected to and programmed or otherwise configured to control the various elements of example embodiment device 20 . Processor 120 may further be configured to execute example methods, including creating, transmitting, and performing augmented reality in accordance with user input, and controlling display 180 and sensor 185 /camera 110 /microphone 115 , for example.
  • Processor 120 can be any computer processor, potentially with associated processor cache, transient memory, video buffer, etc., configured or programmed to process augmented reality content.
  • Processor 120 may further process any input to device 20 , including visual, tactile, and/or audio information received from microphone 115 , camera 110 , a vibration-sensitive transducer, etc., for augmented reality creation, transmission, and/or performance.
  • Processor 120 may also receive sensor information from sensors 185 , e.g., touch or cursor information, and process the same as user interaction or input.
  • Processor 120 may further execute software or include configured hardware that allows for execution of example methods discussed below.
  • Storage 130 may be a dedicated data storage drive or may be a partition of a general data store in which augmented reality information, origin or limitation information, application information, and/or device operations and raw data can be saved.
  • Storage 130 may be, for example, random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a processor cache, optical media, and/or other computer readable media.
  • Camera 110 may include one or more lenses and/or apertures that may be controlled by actuators that move the lenses and apertures among different positions to focus captured optical data. Similarly, camera 110 may adjust focus digitally or in response to user input defining focus locations in the scene being captured. Camera 110 may include image sensor elements such as a charge coupled device (CCD) array, a photodiode array, or any other image sensing device that receives light, potentially via the lens, and generates image data in response to the received light. Camera 110 may include a light to aid in reflection and/or focusing laser. Camera 110 may be further configured to obtain or adjust image information such as focus, zoom, white balance, exposure, saturation, and/or other image functions. Camera 110 and/or processor 120 may be further configured with one or more video codecs or other image processing software or drivers to capture, process, and store external independent media such as actual video from the environment 1 as well as augmented reality.
  • Microphone 115 may be any auditory transmission and/or reception device capable of audio pickup and/or playback.
  • microphone 115 may include an embedded speaker and/or an embedded induction microphone in a mobile device.
  • Display 180 may be a screen, viewfinder, monitor, or any other device capable of visually displaying visual augmented reality 101 .
  • display 180 may be a touchscreen on a smartphone like an iPhone or Android device or on a tablet like an iPad or Surface, or display 180 may be an LCD monitor or projector, for example.
  • Sensors 185 provide input information.
  • sensors may be embedded multi- or single-touch capacitive sensors capable of detecting finger or stylus touch, pressure, movement, etc., with respect to display 180 , during operation of device 20 .
  • sensors 185 may be an accelerometer or magnetized compass with associated hardware or software capable of determining device orientation and/or movement, potentially with respect to display 180 during operation of device 20 .
  • sensors 185 may be a button or an external mouse or joystick and associated hardware or software capable of controlling and determining cursor position and/or activation with respect to display 180 during operation of device 20 .
  • Sensors 185 are connected to processor 120 and can deliver sensed input information to processor 120 with respect to display 180 , including cursor or contact position, duration, numerosity, pressure, movement speed, etc.
  • Example embodiment communications device 20 may further include a communications port 140 for external wired or wireless communication.
  • communications port 140 may be an antenna configured to transmit and receive on CDMA bands, a Wi-Fi antenna, a near field communications transmitter/receiver, a GPS receiver, an external serial port or external disk drive, etc.
  • Processor 120 may provide data from storage 130 , input data from camera 110 , sensors 185 , microphone 115 etc., to external devices through communications port 140 , as well as receive application and/or augmented reality and other information from providers through port 140 .
  • communications port 140 may function as another input source for sensors 185 .
  • Although networked elements and functionalities of example embodiment device 20 are shown in FIG. 2 as individual components with specific groupings and subcomponents, it is understood that these elements may be co-located in a single device having adequately differentiated data storage and/or file systems and processing configurations. Alternatively, the elements shown in FIG. 2 may be remote and plural, with functionality shared across several pieces of hardware, each communicatively connected at adequate speeds to provide necessary data transfer and analysis, if, for example, more resources or better logistics are available in distinct locations.
  • FIG. 3 is an illustration of an example embodiment graphical user interface 300 that can be presented on a communications device, such as touchscreen 180 of example embodiment device 20 .
  • GUI 300 permits creation of augmented reality in context with captured real-world media 1 .
  • GUI 300 may present captured reality 1 offset, at an angle, and/or against a high-contrast background to highlight its uniqueness and/or permit 3-dimensional building on the same.
  • GUI 300 may present various tools, such as a text tool 301 , object or icon tool 302 , picture tool 303 , and/or drawing tool 304 for selection by a user.
  • GUI 300 may present further options for adding a corresponding type of augmented reality to real-world media 1 .
  • other platform- and service-provided tools and even APIs may be provided in GUI 300 .
  • a tool may permit overlay of a tweet, direct message, sms content, Instagram photo, Giphy animation, image search result, etc. on captured reality 1 along with further stylization.
  • GUI 300 may permit zooming, rotation, translation, or other movement of captured reality 1 through tactile input, including long-touching, tapping, pinching, spinning, dragging, etc. of finger(s) or stylus(es) across a touchscreen presenting the same, potentially in combination with the above-discussed tools.
  • added augmented elements may be moved, anchored, repositioned, animated, reshaped, deleted, recolored, etc. through such tactile input as well as through tool stylization.
  • a user may long-touch a particular image or graphic to fix it at a locked position in the underlying captured reality 1 , and the image or graphic may track the underlying reality in size, orientation, position, etc. so as to appear as a part of the underlying reality.
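  • A minimal sketch of how such an added graphic could follow the underlying captured reality between frames is shown below, using plain template matching as an assumed stand-in; a production implementation might instead use a learned tracker such as the Held et al. regression network cited later in this document.

```python
import cv2

def track_anchor_patch(prev_frame, next_frame, anchor_box):
    """Re-locate the image patch a user long-touched, so an attached graphic
    can follow it between frames. anchor_box = (x, y, w, h) in prev_frame."""
    x, y, w, h = anchor_box
    template = cv2.cvtColor(prev_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    search = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # best normalized correlation
    return (top_left[0], top_left[1], w, h), score

# The attached graphic is redrawn at the returned box each frame;
# a low score can trigger a fall-back to re-detection of the anchor.
```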
  • example embodiment communications device 20 may permit tactile and auditory feedback and input in creation of augmented reality through example embodiment GUI 300 , such as through microphones, transducers, etc. that capture such tactile, auditory, or any other sensory feedback and add the same to underlying real-world media in the GUI.
  • example embodiment devices may be structured in a variety of ways to provide desired functionality.
  • Other divisions and/or omissions of structures and functionalities among any number of separate modules, processors, and/or servers are useable with example embodiment devices, including execution on a single machine or among distant, exclusive servers and processors.
  • Although the example embodiment device 20 of FIG. 2 and the example embodiment GUI 300 of FIG. 3 are capable of executing and can be configured with example methods, it is understood that example methods are useable with other network configurations and devices, and device 20 is useable with other methods.
  • FIG. 4 is an illustration of an example method of creating and/or transmitting augmented reality information that can then be independently reproduced by others accessing the same underlying media and anchor of the augmented reality.
  • In S 401 , real-world media and a candidate anchor describing the same are received.
  • the real-world subject matter is any perceivable media as faithfully captured by a communications device, including still images captured by a camera, video, audio, haptic feedback, locally-stored photographs and video, websites, social media profiles, pictures, posts, etc. that can be used as an underlying basis on which to build augmented reality, and, in other methods, identified by a receiving user for its anchors to replicate the augmented reality.
  • anchor information in S 401 may be or identify a still image, video, audio-visual presentation, etc. having elements on which augmented reality information can be overlaid, positioned with, synchronized with, moved with, and/or otherwise presented in context with underlying real-world media as dictated by the anchor data.
  • anchor data in S 401 may be extracted from a media feed captured by a user's communications device onto which augmented reality may be displayed.
  • Anchor data may also be an external file, QR code, NFC signal, WiFi packet, or other information that contains the augmented reality information in context of separate real-world subject matter.
  • In S 410 , the anchor data, and potentially the real-world media containing the same, is examined for distinctiveness, particularly the ease or ability for the anchor to be identified and used to replicate augmented reality in independently captured media.
  • a processor may perform an edge-detection on a captured visual image, a Fourier transform and/or frequency domain analysis on captured sound waves, etc. to determine how unique or easily aspects of the image or sound may be identified again and augmented by anchor data when independently re-captured.
  • a visual image may be examined for blurriness and/or lack of light as factors discounting distinctiveness in any anchor data therein.
  • video or image data may be subject to several types of criteria for satisfactorily-unique aspects through a best bin classifier or K-D search tree and/or tested for workability in an image tracker, such as that described in “Learning to Track at 100 FPS with Deep Regression Networks” by Held et al., Stanford University, incorporated herein by reference in its entirety.
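  • A minimal sketch of such a distinctiveness check, assuming OpenCV and combining edge density, a Laplacian-variance sharpness measure, and mean brightness, is shown below; the thresholds are arbitrary placeholders rather than values from the patent.

```python
import cv2
import numpy as np

def anchor_distinctiveness(frame_bgr,
                           min_edge_ratio=0.02,
                           min_sharpness=100.0,
                           min_brightness=40.0):
    """Rough stand-in for the S 410/S 420 check: reject dark, blurry, or
    featureless frames before offering the build GUI."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edge_ratio = np.count_nonzero(cv2.Canny(gray, 100, 200)) / gray.size
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance ~ blur
    brightness = gray.mean()
    ok = (edge_ratio >= min_edge_ratio and
          sharpness >= min_sharpness and
          brightness >= min_brightness)
    return ok, {"edge_ratio": edge_ratio, "sharpness": sharpness, "brightness": brightness}

# Usage: gate the build GUI on the result (the "Y"/"N" branch of S 420).
# ok, metrics = anchor_distinctiveness(cv2.imread("candidate.jpg"))
```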
  • the type of distinctiveness analysis in S 410 may vary based on the type of anchor captured in S 401 and/or type of anchor or anchor fingerprint used to replicate augmented reality in captured media.
  • the example method of FIGS. 5 and 6 may use a number of different computer-processor-driven anchor-matching techniques, and the same type of anchors used there may be examined for distinctiveness in S 410 .
  • anchor data may be visual or audio information inherent within captured real-life media, such as edges, distinctive pixels, audio frequency transforms, etc.
  • anchor data may also be descriptive information of the same, such as a particular location and/or time, orientation, etc. where the media is found and would be perceivable to another communications device, transmitted in a digital file and/or QR code for example.
  • the learning tracking method of Held, et al., incorporated above may be used to anchor augmented reality information in underlying real-world media, and information captured in S 401 may be tested under that method for successful tracking, which would indicate anchor satisfaction “Y” in S 420 .
  • the user may be notified of the unsuitability in S 421 .
  • a user may be prompted to submit a new anchor in S 401 .
  • a user may proceed with example methods, with a warning that any augmented reality may have poor reproducibility or other sharing problems.
  • a user may be notified in S 405 of other or better alternatives for candidate anchors, including objects, video, and/or other media that other users have successfully used for anchors having desired levels of distinctiveness.
  • the suggestion in S 405 may be automatic from the start of example methods, or prompted after failure in S 420 and S 421 .
  • the suggestion in S 405 may list successful nearby anchors to the user or may identify purchasable subject matter prompted by a third-party advertiser, for example. Use of a previous anchor may automatically result in a “Y” at S 420 , because the previously-used anchor was specifically included in S 405 for its successful distinctiveness.
  • the underlying media having or associated with the anchor is opened for creation of augmented reality in S 430 .
  • For example, GUI 300 ( FIG. 3 ) may be displayed in S 430 with the captured media 1 having satisfied the anchor requirement (“Y” in S 420 ).
  • Using the build GUI or other input in S 440 , a user creates the augmented reality on the underlying real-world media. For example, a user may position text, synchronize sounds, add timed vibrations, add animation, draw graffiti, stick emojis or banners, associate a weblink, filter the media, overlay a picture or image, and/or add any additional perceivable elements to the underlying media, which may be a video, a still image, a song or sound sequence, etc.
  • the added elements may track position, timing, size, distance, volume and/or other elements of the underlying real-world media.
  • The underlying real-world media and the added contextual elements, combined to be perceivable together by a user, create the augmented reality, and the anchor information dictates whether and how the augmented elements are to be performed in association with the underlying real-world subject matter.
  • a user may also associate origin information with the augmented reality that limits or conditions its sharing and/or performance beyond the contextual anchor information. For example, a user may provide a location, or a location may be automatically determined through a communications device, or an expiration date may be set, or a restriction to particular users may be given, or the content may be locked behind payment requirements, or the augmented reality may be associated with a particular check-in or event. The user may willfully provide such origin information or it may be automatically collected by a communications device and associated with the augmented reality.
  • the created augmented reality, embedded or accompanying anchor data, and/or any additional origin information are transmitted by a communications device.
  • the communications device may create the anchor data itself, such as positional information indicating placement of augmented reality information based on underlying identified real world media, or the anchor data may be created by a receiving host from a received composite augmented reality.
  • the anchor data may be inherently in the real world subject matter or an external piece of data that dictates how the augmented reality information is to be combined with the underlying real world subject matter.
  • the transmission in S 301 may be the counterpart to the receipt in S 301 of the example method of FIGS. 6-7 , discussed below.
  • the information created and used in example methods herein may be the same as, and/or used in, methods of performing the created augmented reality on other devices in the example method of FIGS. 6-7 .
  • the information transmitted in S 301 may be a minimum level of data required to reproduce augmented reality aspects in separately-captured real-world media, allowing sharing and performance of the augmented reality created in S 440 without requiring the large amounts of network resources that full transmission of the entire augmented reality would consume.
  • the creating user may be notified of the completed transmission of the augmented reality information.
  • Ecommerce options, such as payment for the transfer, may be completed.
  • a user may receive payment in S 450 for a number of views or performance of the augmented reality by other users.
  • a user may be provided feedback, such as a rating, achievements, unlockable content, comments, replies, updated account information, notification of additional augmented reality added to a same anchor, and/or removal notice for augmented reality violating terms of use in S 450 .
  • the user may also be provided a unique search term, hashtag, or other ID that allows the user to share the augmented reality.
  • a unique string may be associated with the information transmitted in S 301 , and even with the type of information and tools used to create the augmented reality in S 440 , and the user may share the string.
  • When other users search for the string, they may be pointed to the anchor and/or any other information, such as a specific geofenced location, for discovering the anchor and performing the associated augmented reality.
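  • The sketch below shows one assumed way a host could mint such a searchable identifier from the submitted payload; the “#id-&lt;region&gt;-&lt;digits&gt;” shape mirrors the example tag shown with FIG. 5, but the hashing scheme is purely illustrative.

```python
import hashlib
import secrets

def make_share_tag(region_code: str, payload_json: str) -> str:
    """Build a short, searchable identifier for a submitted augmented reality.
    The hash-plus-random-salt construction is illustrative, not from the patent."""
    digest = hashlib.sha256(payload_json.encode() + secrets.token_bytes(8)).hexdigest()
    return f"#id-{region_code}-{int(digest[:8], 16) % 1_000_000:06d}"

# e.g. make_share_tag("sf17", ar_share_json) -> a "#id-sf17-018924"-style string
```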
  • an example method creates augmented reality as added graphics and other feedback to visual or video data at a particular location.
  • Underlying real-world media 1 (here a café or restaurant) is received by a communications device through its display, such as by a user operating a mobile device like a smartphone or tablet or wearable device with a camera.
  • An application or program installed on the communications device may facilitate all features shown in the flow diagram of FIG. 5 .
  • the anchor data for the real-world media 1 may be distinctive pixel patterns or shapes inherent within real-world media 1 in this example, such that receipt of the visual information on the communications device is receipt of anchor data in S 401 , shown by the first two screens in the flow diagram of FIG. 5 .
  • anchor data may be separate information that defines whether and how added augmented elements are performed in the context of underlying real-world subject matter.
  • the communications device analyzes the candidate image for useable anchor data in S 410 and S 420 in the example of FIG. 5 .
  • the program determines that the area is insufficiently distinctive, such as lacking unique edges, shapes, pixel contrasts, etc. (“N” in S 420 ).
  • the user is then notified of the unsuitability in S 421 , as shown in FIG. 5 , by a message prompting the user to seek a more recognizable or distinctive anchor.
  • New anchor data is identified in the fourth screen in S 401 , including several frames with well-recognizable edges and shapes.
  • the program determines that the area is sufficiently distinctive (“Y” in S 420 ).
  • example embodiment GUI 300 may include several tools for creating augmented reality on the underlying captured real-world visual scene, as set by anchor data therein.
  • the user creates the augmented reality.
  • a text tool is selected, allowing the user to type words, such as “Mirage” in any font, shape, size, color, animation, shadow, or other text effect through GUI 300 .
  • the user may move, align, adjust, and/or set movement of the wording based on the underlying real-world frame image, such presentation being stored as anchor data. For example, as shown in FIG. 5 , the user has aligned the term “Mirage” to display at an upper edge of the frames.
  • an image or sticker icon may be selected, allowing the user to place a pre-set icon, image, or sticker on the underlying media.
  • As shown in FIG. 5 , for example, the user has selected a car sticker and placed it over the bottom middle frame.
  • Stickers available through an example embodiment method and/or GUI may be pre-set, standard emojis, symbols, and/or location-specific or time-specific captions, banners, icons, images, etc.
  • the user may select a camera capture tool to add custom-captured visual data to the real-world media.
  • the user has taken a picture of a coffee cup and extracted only the coffee cup, such as through an edge-detection or object detection lasso, and placed the coffee cup below the car.
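  • The cutout step could be approximated as below, assuming OpenCV's GrabCut segmentation as the “lasso”; the patent only refers generally to edge or object detection, so this particular technique is an assumption.

```python
import cv2
import numpy as np

def lasso_cutout(image_bgr, rect):
    """Keep only the foreground object inside a user-drawn rectangle,
    returning a BGRA image with a transparent background."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    cutout = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2BGRA)
    cutout[:, :, 3] = fg                 # alpha channel: 0 outside the object
    return cutout

# e.g. sticker = lasso_cutout(photo, (x, y, w, h)) for the coffee-cup overlay in FIG. 5
```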
  • a drawing icon may be selected, allowing free-form drawing in any desired color, thickness, line break, etc.
  • In FIG. 5 , for example, the user has drawn an additional spiked outline around the underlying frames.
  • Each individual augmented reality aspect, including font, drawing, sticker, picture, etc., may be selectively undone, repositioned, reformatted, shrunk, etc. through proper selection.
  • Although the added visual aspects in FIG. 5 all appear static, they may be animated, potentially in a particular sequence with the underlying real-world media, stored as anchor information.
  • pre-set, pre-recorded, and/or newly-recorded sounds may be added as augmented reality.
  • tactile feedback such as vibration, alert buzzes, shocks, etc. may be added as augmented reality information, as well as any other sensory addition to the underlying real-world media to create augmented reality in S 440 .
  • anchor data may include image-recognition data of underlying real-world media 1 along with data of positional, animation, behavior, and/or other performance parameters of the augmented reality elements added with the tools in S 440 in the context of the underlying real-world media (such as positioning at particular image points, tracking type or behavior, etc.).
  • This anchor data may be, for example, a full augmented reality image for full reproduction, or considerably smaller ID and vector data for reproduction of the augmented reality aspects in independently-captured real-world media.
  • example methods may conserve bandwidth and/or storage limitations in computer networks by transmitting in S 301 only the minimum augmented reality and anchor data necessary to recreate the augmented reality created in S 440 at a separate instance with receipt of matching real-world media.
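  • A sketch of such a compact anchor, assuming ORB features as a stand-in for whatever fingerprint an implementation actually uses, is shown below; a few hundred keypoints and descriptors plus element placements occupy tens of kilobytes, versus megabytes for the raw frame.

```python
import cv2
import numpy as np

def build_compact_anchor(image_bgr, max_features=256):
    """Ship keypoints/descriptors instead of the full image: an assumed form of
    the 'considerably smaller ID and vector data' idea described above."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return {
        "points": np.float32([kp.pt for kp in keypoints]),  # (N, 2) pixel coords
        "descriptors": descriptors,                         # (N, 32) uint8
        "size": gray.shape[::-1],                           # (width, height)
    }
```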
  • the user may also set origin information such as GPS coordinates, event check-ins, geofenced areas, specific users, time limitations, etc. that control the distribution and performance of the augmented reality created in S 440 .
  • This information is transmitted with the augmented reality and anchor information in S 301 or at another time in association with the augmented reality information.
  • the user is shown a transmission confirmation, potentially with tactile and/or auditory feedback.
  • the user is also provided with a unique identifier for the augmented reality, for example, a string like #id-sf17-018924 that can be searched by the user or other users to locate or even unlock the augmented reality.
  • the user may also be debited an amount from a preset or associated account or funding source for the share.
  • the user may also receive feedback of performing third-parties, comments, tips, replies, viewership or reach statistics, add-on augmented realities from other users on the same anchor, expiration, removal, etc. of the augmented reality.
  • the user may unlock an achievement for the location of the created augmented reality (such as a “café badge”), or be provided with additional functionality or account status based on the augmented reality (such as always having a coffee cup image available in their tool palette).
  • Although the example of FIG. 5 uses visual underlying media, anchor data, and augmented reality information, several different types of non-visual information may be used instead or in addition, including those types discussed in the example method of FIGS. 6-7 .
  • a user of example methods may create any type of augmented reality perceivable to users with various sharing and/or performance conditions that are entirely compatible with the various example methods of selective performance in the example method of FIGS. 6-7 .
  • FIG. 6 is an illustration of an example method of sharing and performing augmented reality.
  • In S 301 , augmented reality information, including or alongside anchoring information and/or origin information, is received.
  • the receipt in S 301 is by any of a user's communications device, network operator, application host, and/or other computer-processor-based device capable of electronic receipt, processing, and storage of the information in S 301 .
  • the information received in S 301 is created by a party selecting desired additional media that is combined with underlying real information to create augmented reality.
  • the augmented reality GUI, system, and method from FIGS. 3-5 may provide a computer-processor-driven system for creating and transmitting the augmented reality information in S 301 .
  • the augmented reality could be graphical, animated, audio-visual, auditory, haptic, etc., including graphics overlaid on public posters or personal photographs and videos, GUIs responsive to and overlaid on streaming audio-visuals, textual commentary on specific labels like license plates, UPC bar codes, or QR codes, tactile sensations such as haptic vibration or shock added to particular film scenes, or any other sensory media added to a real-life experience and reproducible with independent media.
  • the augmented reality information received in S 301 may be complete augmented reality, that is, additional media combined with underlying real media, or may be only the additional media to be added to underlying independent media to create augmented reality.
  • origin information may be received, including user information, geographical information, encryption information, distribution information, routing information, timing/expiration information, event information, ecommerce or promoted status information, restrictions based on any of the foregoing, and metadata of the augmented reality and anchoring information.
  • Anchoring information received in S 301 is data useable to trigger and position the display of augmented reality information in context with independently-captured media to perform augmented reality in a form similar to that input by the creating user of the augmented reality information.
  • anchoring information may be image, video, and/or sound information, for comparison with the independently-captured media to determine augmented reality triggering.
  • anchoring information may be mapping or orientation information for placement, sizing, and/or configuration of an augmented reality element on independently-captured media.
  • In S 302 , independent media is received, and the independent media is distinct from the augmented reality information.
  • the receipt in S 302 may be by a same processor or connected device to that receiving other information in example methods, configured for receipt, processing, and display of the independent media.
  • Independent media received in S 302 may be image, video, audio, vibratory and/or any other information captured and received at a communications device that can be analyzed and compared with anchor information.
  • independent media in S 302 may be a live, unaltered audio-video stream 1 ( FIG. 2 ) of surroundings of a mobile device recording the same with a camera 110 ( FIG. 2 ).
  • independent media in S 302 may be a modified or enhanced photograph retrieved from memory 130 ( FIG. 2 ).
  • In S 303 , additional limitation information may be received by a same or different connected communications device, network, etc.
  • Additional limitation information may be location information of a device capturing the independent media, user information such as name, account number, user ID, etc., credentials such as passphrases, promotional codes, OAUTH codes, RealID information, etc., a local or absolute time and/or date of the user and/or when the independent media was captured, as well as any other limitation information such as subject matter screening or filters, age-based limitations, ecommerce authorizations, etc.
  • Additional limitation information in S 303 may be gathered automatically by a communications device, such as through GPS- or wifi-based location services, and/or manually input by a human user, such as a password input by a user.
  • Receipt of information in S 301 , S 302 , and S 303 may occur in any order and between different or same users, communications devices, and/or other network actors.
  • any communications device 20 may receive any of the information in S 301 , S 302 , and S 303 , and any of network 10 , application host 100 , and content providers 50 may receive the same or different information in S 301 , S 302 , and S 303 .
  • receipt of information in S 301 , S 302 , and S 303 may be individually repeated, and/or receipt of limitation information in S 303 and origin information in S 301 may be omitted entirely.
  • information received in S 301 , S 302 , and S 303 may be received in a single transmission, as a single piece of information, file, or media.
  • anchor information, augmented reality information, and user location information compatible with that received in S 301 and S 303 may all be present in, or determinable from, a single image or video stream, with any other mixture, combination, and/or derivability of information among that received in S 301 -S 303 possible.
  • Users may be alerted of the existence (or non-existence) of anchor and/or origin information for augmented reality in S 305 , to encourage or aid seeking out of independent media and other conditions that match the anchor and origin information.
  • a user may be guided with a map to particular geofenced locations, or may be given a prompt for a password, or may be alerted to a cost, etc. required for presentation of augmented reality.
  • a user may search by a keyword, a hashtag posted by the creating user, in a map or by inputting a zipcode, and/or by other metadata of the augmented reality, origin, and/or anchor information and see matching and available augmented reality for such a search in S 305 .
  • users may be aware of specific conditions required for performance of augmented reality and may act to best comply with those conditions, potentially saving computational resources in transmitting and analyzing information among S 301 , S 302 , and S 303 .
  • Results in S 305 may be transaction-based, with a user having to make a payment or satisfy another ecommerce condition, such as having a premium or paying account status, credit card on file, a number of finding credits, etc., to be shown available augmented reality in S 305 .
  • results in S 305 may be screened or available based on satisfaction of origin conditions in S 320 , such as having a particular user account name, location, phone type, relationship with a creating user, etc., that satisfies limitation information with any augmented reality that may be suggested or alertable in S 305 .
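  • A minimal sketch of such a lookup is shown below: it filters a catalog of shared augmented reality by keyword, hashtag, or zipcode metadata, with the record fields assumed for illustration; the origin screening of S 310 /S 320 (see the sketch further below) can be layered on top.

```python
def discover_augmented_reality(catalog, query):
    """Return tags of shared augmented reality whose metadata matches a search
    string; catalog record fields ("tag", "zipcode", "keywords") are illustrative."""
    q = query.lower()
    return [r["tag"] for r in catalog
            if q in " ".join([r["tag"], r.get("zipcode", "")] + r.get("keywords", [])).lower()]

# e.g. discover_augmented_reality(catalog, "#id-sf17") or discover_augmented_reality(catalog, "94110")
```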
  • one or more comparisons of various received information from S 301 , S 302 , and/or S 303 ultimately determine if the augmented reality is performed.
  • an origin match in S 310 and S 320 may be performed to screen or limit any anchor analysis in S 330 , and potentially transmission of information from S 301 and/or S 302 , to only eligible users or independent media providers. Origin match in S 310 and S 320 may also be omitted or performed after any anchor analysis.
  • In S 310 , the received user limitation information from S 303 is compared against the origin information received in S 301 for a match or indication of acceptability.
  • origin information in S 301 may be a defined, geofenced area or location check-in, user ID or social network connection, account type, time of day, password, payment status, subscription information, etc. that limits what circumstances or to whom the augmented reality is performable.
  • Limitation information in S 303 may be comparable or corresponding to origin information, such as a detected or entered user location, confirmed check-in, provided user ID, account verification, detected time, entered password, payment verification, etc., matching origin conditions potentially set by the creator of the augmented reality subject matter received in S 301. Where the origin and limitation information are comparable, a match may be determined in S 310.
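  • As one way to picture the origin/limitation comparison of S 310 and S 320, the sketch below treats the match as a set of independent checks that must all pass; the geofence-as-center-plus-radius representation and the field names are assumptions made only for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def origin_matches(origin, limitation):
    """Return True when the limitation information satisfies every origin condition.
    Conditions not present in the origin information are simply not checked."""
    geo = origin.get("geofence")            # e.g. {"lat": ..., "lon": ..., "radius_m": ...}
    if geo:
        lat, lon = limitation.get("location", (None, None))
        if lat is None or haversine_m(lat, lon, geo["lat"], geo["lon"]) > geo["radius_m"]:
            return False
    allowed = origin.get("allowed_users")   # e.g. a set of permitted user IDs
    if allowed and limitation.get("user_id") not in allowed:
        return False
    if origin.get("password") and limitation.get("password") != origin["password"]:
        return False
    return True
```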
  • the example method may proceed to performance of the augmented reality in S 350 .
  • Where the limitation information is absent or does not match the origin information, no modification with augmented reality or other performance of augmented reality may be performed in S 321.
  • a user may be informed in S 321 as to a non-matching condition and/or re-prompted in S 305 to seek out or enter such information to facilitate a match in S 310 and S 320 .
  • Comparison in S 310 may be performed at other points, with or without receipt of all or partial information from S 301 , S 302 , and S 303 .
  • comparison S 310 may be performed upon receipt of only origin information in S 301 and only limitation information in S 303 alone. Where a match is determined in S 320 on that information alone, additional information, such as augmented reality information in S 301 , independent media in S 302 , and/or other limitation information in S 303 may then be received, so as to limit transmission requirements to situations more likely leading to performance in S 350 .
  • comparison in S 310 may be performed iteratively, for potentially incremental or real-time receipts of information in S 301 , S 302 , and/or S 303 , so as to continuously monitor when user limitation information and origin information will match to proceed with further, more resource-intensive actions in example methods.
  • In S 330, the independent media received in S 302 is compared with the anchor information received in S 301 to determine if and how the augmented reality information can be applied to the independent media to perform augmented reality.
  • the matching in S 330 may be a direct comparison between the independent media and anchor data such as underlying real-life media components of the augmented reality information.
  • the anchor information may include an underlying poster, QR code, street view, artwork, product or label, logo, tonal sequence, and/or song to which additional media is added to create the augmented reality, and this underlying information may be compared against the independent media to determine a match and, further, where and how the additional media is added to recreate the augmented reality in S 350 .
  • the comparison in S 330 may use image processing and recognition techniques, including the algorithms identified in the Held reference, US Patent Publication 2012/0026354 to Hamada, published Feb. 2, 2012 and US Patent Publication 2016/0004934 to Ebata et al., published Jan. 7, 2016, these documents being incorporated herein by reference in their entireties.
  • the matching in S 330 may use anchor data independent from the augmented reality or independent media.
  • the anchor data may be a separately-received file, or NFC tag, or facial profile, or sound, etc., that identifies or describes independent media eligible for augmented reality and/or describes how augmented reality information should be added to the same.
  • the anchor data may still be comparable to the independent media to determine eligibility for augmented reality and, if so, in S 350 , the exact parameters for such display of the augmented reality information in combination with the independent media.
  • Anchor data used in S 330 may be fingerprint-type data, that is, smaller or less-resource-intensive information that is characteristic of and comparable to independent media to determine a match.
  • comparison methods may use simplified processes to readily identify matches among potentially large amounts of anchors and independent media, potentially using this fingerprint data.
  • the anchor data may be a smaller grayscale or edge-detected representation of the eligible independent media.
  • the received independent media may also be reduced to a comparable grayscale or edge-detected representation.
  • Such smaller and simplified images may be readily compared, such as using, for example, sum of squared differences, sum of absolute differences, and/or zero mean normalized cross-correlation between the pixels in the images, to determine a match, or a level of agreement within a matching threshold in S 330 .
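  • For instance, assuming both the anchor and the independent media have already been reduced to equal-size grayscale arrays, the fingerprint comparison described above might look like the NumPy sketch below; a fuller implementation would also search across positions and scales, and the threshold value is purely illustrative.

```python
import numpy as np

def zncc(a, b):
    """Zero mean normalized cross-correlation between two equal-size grayscale images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def ssd(a, b):
    """Sum of squared differences between two equal-size images (lower is more similar)."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float((d * d).sum())

def anchor_matches(anchor_fp, media_fp, zncc_threshold=0.8):
    """Declare a match, as in S330, when correlation exceeds a tunable threshold."""
    return zncc(anchor_fp, media_fp) >= zncc_threshold
```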
  • Other useable zero mean normalized cross-correlation techniques among images are described in the incorporated '934 and '354 patent publications.
  • independent media and anchor data may be transformed into comparable finger-print type data through a Fourier transform of a waveform signal of the anchor and independent media, highlights from the media and anchor frequency domain, detected time/space domain, other type of correlation, cepstrum or wavelet transform, and/or other detectable and comparable characteristics that may be created through image, audio, tactile, or other processing.
  • Appropriate matching thresholds between the transformed anchor information and the transformed independent media can then be used to identify matches in S 330.
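  • Continuing the fingerprint idea for waveform signals, one possible (purely illustrative) reduction is a coarse, normalized magnitude spectrum compared by cosine similarity against a matching threshold, as sketched below.

```python
import numpy as np

def spectral_fingerprint(signal, n_bins=64):
    """Reduce a 1-D waveform to a coarse, normalized magnitude-spectrum fingerprint."""
    spectrum = np.abs(np.fft.rfft(np.asarray(signal, dtype=np.float64)))
    edges = np.linspace(0, len(spectrum), n_bins + 1, dtype=int)
    fp = np.array([spectrum[s:e].mean() if e > s else 0.0
                   for s, e in zip(edges[:-1], edges[1:])])
    norm = np.linalg.norm(fp)
    return fp / norm if norm else fp

def fingerprints_match(fp_a, fp_b, threshold=0.9):
    """Compare two fingerprints by cosine similarity against a matching threshold."""
    return float(np.dot(fp_a, fp_b)) >= threshold
```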
  • a very high volume of anchor data and independent media can be compared, even by a simpler communications device or over a slower remote communicative connection, without requiring large computing resources or time to determine an acceptable match in S 330 .
  • Where the anchor matches in S 340, augmented reality is performed in S 350. If the anchor does not match in S 340, the independent media may be performed as is, or no action may be taken, but no augmented reality using the received information is performed in S 341. A user may be notified of the non-match in S 341 and/or prompted with information as to how to elicit a match, such as through S 305.
  • the augmented reality is performed using the received independent media, anchor information, and augmented reality information.
  • the augmented reality information may be only additional subject matter that is added to the independent media at position/orientation/timing/sizing dictated by the anchor information to replicate the augmented reality received in S 301 .
  • the augmented reality information may be full augmented reality having mixed underlying reality and added-in media performed to appear as the independent media with additional elements.
  • the performance in S 350 may be, for example, in real-time with capture of independent media in S 302 so as to be perceived as a real-life altered reality in accordance with the augmented reality or may be performed at a later time, such as if the augmented reality is saved and/or transmitted elsewhere in response to a match in S 340 and performed at that other instance.
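  • Because the anchor information dictates where and how large the added element appears, much of the performance in S 350 reduces to a coordinate mapping once the anchor has been located in the independent media. The placement fields below (offsets and scales expressed relative to the anchor's own size) are assumptions made only to illustrate the idea.

```python
def place_overlay(anchor_box_in_media, placement):
    """Compute where the added element should be drawn in the independent media.

    anchor_box_in_media: (x, y, w, h) of the detected anchor in the live media.
    placement: offsets/scale recorded at creation time, relative to the anchor's
    own width and height (assumed fields: dx, dy, scale_w, scale_h).
    Returns an (x, y, w, h) rectangle for the overlay element.
    """
    ax, ay, aw, ah = anchor_box_in_media
    return (ax + placement["dx"] * aw,
            ay + placement["dy"] * ah,
            placement["scale_w"] * aw,
            placement["scale_h"] * ah)

# Hypothetical example: a dragon anchored above and to the right of a stop sign.
placement = {"dx": 0.6, "dy": -1.2, "scale_w": 2.0, "scale_h": 1.5}
print(place_overlay((120, 80, 60, 60), placement))
```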
  • the performance in S 350 may be executed by any output device connected to a processor creating, formatting, or transmitting the augmented reality created from the received independent media, anchor information, and/or augmented reality information.
  • Such an output device must include at least one output capable of being sensed by human senses so as to perceive the augmented reality, such as a screen 180 of example embodiment communications device 20 of FIG. 2 outputting visual imagery as augmented reality 101, a speaker, a tactile device, a buzzer, or even a taste or smell element.
  • a registration of the performance may be generated in S 351 , such as an alert to a creator or tracking system of a number of views or other performances.
  • a user may comment, reply, rate, report as violating terms of use or laws, request technical support for, etc. the performed augmented reality.
  • Where payment, registration, subscription status, or other ecommerce origin information was required for performance, those obligations may be satisfied in S 351 following performance.
  • augmented reality information and any other received information may be locally and/or remotely deleted for privacy concerns. Example methods may then be repeated or looped with the same received information or one or more new pieces of information in S 301 , S 302 , and S 303 .
  • an example method performs augmented reality as an added graphic to visual or video data at particular locations.
  • augmented reality 101 is created on a first communications device 20 ′ through its touchscreen 180 , such as by a user adding a dragon image over a stop sign in the accurately-captured street scene 1 underlying augmented reality 101 .
  • First communications device 20 ′ has the underlying image location from a location service such as GPS native to the device or input by a user.
  • First communications device 20 ′ further has anchor data of the stop sign for positioning, sizing, and/or orienting the added dragon of augmented reality 101 with respect to the stop sign.
  • This creation of augmented reality, anchor, and origin information may be performed in the example method of FIGS. 3-5 . All this data is transmitted and received over a network as augmented reality information, anchor information, and geofencing origin information in S 301 .
  • the street scene augmented reality 101 may be received by a processor in a network 10 , application host 100 , communications device processor 120 , and/or any other processor.
  • Although a wireless network icon is shown to one side of processor element 10/100/120, it is understood that a processor anywhere may perform the receipt in S 301.
  • the processor may be in first communications device 20 ′ or second communication device 20 ′′, with all data transmitted to the hosting or connected device for use in example methods.
  • Second communications device 20″ feeds video picked up through its camera onto screen 180 in real time.
  • Second communications device 20 ′′ and first communications device 20 ′ may have separate owners, operators, network affiliations, be operated at different times and dates, etc.
  • prompting or alerting in S 305 may show required origin information, in this example geofenced areas where second communications device 20 ′′ must be present for performance of augmented reality 101 .
  • Second communications device 20″ captures independent media 100, including underlying street scenes similar to those in augmented reality 101.
  • The independent media, here the live video captured and displayed on second communications device 20″, and limitation information, here the second user's location as determined by device 20″, are received in S 302 and S 303 by the processor in network 10, application host 100, or even processor 120 (FIG. 2) in device 20″.
  • Such receipt in S 302 and S 303 may occur continuously, at discrete intervals, at times instructed by a user, etc.
  • Receipt in S 302, and receipt of augmented reality and anchor information in S 301, may occur after the successful comparison of the location of second user device 20″ with a matching geofenced area in the origin information.
  • In S 310, the processor compares the received location information of second communications device 20″ with the received origin information of the first communications device 20′. Upon determination that the location matches, or at least that second device 20″ is within a geofenced area received as origin information with augmented reality 101 from first device 20′ (first “Y”), the processor then compares the received independent media, here live video information 100, with the anchor data, here the stop sign, to determine if the anchor is present in S 330. Upon determination that the anchoring stop sign is present in the underlying video 100 (second “Y”), information of augmented reality 101 (the dragon) is performed on screen 180 of second communications device 20″, in the manner indicated by the anchor data (extending from the stop sign). As seen in FIG. 7, augmented reality 101 is performed on second communications device 20″ in a visually similar style, with the added graphical dragon extending from the stop sign in the live captured street scene.
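  • Putting the pieces of this FIG. 7 example together, one possible ordering of the checks for each live frame is sketched below. The geofence is simplified to a latitude/longitude bounding box, and find_anchor and compose are caller-supplied stand-ins for the image recognition of S 330 and the compositing of S 350; all of this is an assumption-laden illustration, not the disclosed implementation.

```python
def perform_if_triggered(frame, device_loc, geofence, find_anchor, compose):
    """Apply the FIG. 7 ordering of checks to a single live video frame.

    geofence: assumed (lat_min, lat_max, lon_min, lon_max) bounding box.
    find_anchor(frame) -> anchor box or None; compose(frame, box) -> augmented frame.
    """
    lat, lon = device_loc
    lat_min, lat_max, lon_min, lon_max = geofence
    if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
        return frame                      # S320 "N": no augmented reality (S321)
    anchor_box = find_anchor(frame)       # S330: look for the stop-sign anchor
    if anchor_box is None:
        return frame                      # S340 "N": perform the media as-is (S341)
    return compose(frame, anchor_box)     # S350: add the dragon per the anchor data

# Hypothetical usage with trivial stand-ins:
out = perform_if_triggered(
    frame="live-frame", device_loc=(37.78, -122.41),
    geofence=(37.70, 37.80, -122.50, -122.35),
    find_anchor=lambda f: (120, 80, 60, 60),
    compose=lambda f, box: {"frame": f, "dragon_at": box},
)
print(out)
```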
  • The performing in S 350 may be executed by the same processor receiving and comparing data in this example method. However, it is also understood that each receipt, comparison, authorization, and performance may be executed by discrete processors under potentially separate ownership and control, with access to the data required for each action in example methods. For example, the processor in an application host 100 may merely authorize the performance in S 350, and a processor in second communications device 20″ may actually perform augmented reality 101 on second communications device 20″.
  • Information of augmented reality 101 , anchor data, origin information, as well as information of independent media 100 and user location may be encrypted, compressed, or reduced to a fingerprint for easier transmission and comparison.
  • Where the processor uses a zero mean normalized cross-correlation to identify an anchor in independent media 100, simplified, black-and-white-only information of the anchor stop sign and independent media 100 may be transmitted and compared in S 320 and S 330.
  • Where the processor uses a Fourier transform, frequency domain, cepstrum, or similar analysis, appropriately reduced fingerprints of the anchor stop sign and independent media 100 may be generated, received, and compared in S 320 and S 330.
  • a high tolerance or match threshold may be used between significantly compressed or simplified data for comparison, as even a rougher image match between anchor and underlying media is a reliable match given the location matching in S 310 and S 320 .
  • While a stop sign is used as a visual piece of anchor information for comparison against video containing the stop sign to determine a match and placement of the augmented reality information, other anchor information could be used.
  • lines on the road, edges of building tops, the light post, etc. could be used alone or together as anchor information checked against the independent media.
  • Anchor information that is particularly distinctive such as high-edge value or high-contrast objects, or anchor information that is particularly suited for the comparison in S 330 , such as objects with easy image processing signals for zero mean normalized cross-correlation or object with unique Fourier transforms, may be selected and transmitted in S 301 for use in the comparison in S 330 .
  • a user, communications device 20 ′, network processor, and/or other actor may select this best anchor information, and second communications device 20 ′′ may display such differently-selected anchors as heads-up images in S 305 .
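  • One crude, illustrative way to score candidate anchors for this kind of distinctiveness is to combine gradient (edge) energy with pixel contrast, as in the sketch below; real selection logic could of course weigh many other factors.

```python
import numpy as np

def distinctiveness(patch):
    """Score a candidate anchor patch by edge energy plus contrast.
    Higher scores suggest the patch should match more reliably in S330."""
    p = np.asarray(patch, dtype=np.float64)
    gy, gx = np.gradient(p)
    edge_energy = np.mean(np.hypot(gx, gy))
    contrast = p.std()
    return edge_energy + contrast

def pick_best_anchor(patches):
    """Choose the most distinctive candidate patch to transmit as anchor data in S301."""
    return max(patches, key=distinctiveness)
```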
  • In the example of FIG. 7, augmented reality information from S 301 includes just a dragon image to be added to underlying actually-captured independent media 100 from S 302 (a street scene) to create augmented reality 101 in the performance of S 350.
  • Alternatively, augmented reality 101 may include several more elements, up to the entire street scene with the dragon. That is, a user may actually be viewing only received augmented reality information from S 301 (the entire dragon plus street scene as a video or still image) and no elements of independent media from S 302, which may be used only for triggering and/or positioning of augmented reality 101, in the performance of S 350.
  • Although the independent media 100 of a street scene from second communications device 20″ appears very similar to the underlying scene in augmented reality 101, some tolerance may be permitted in example methods, such as different angles, device orientations, distances from the stop sign, vantages of the stop sign, etc., that may still trigger the performance of augmented reality 101.
  • the different vantage, positioning, and/or appearance of the anchor stop sign may further dictate the sizing, positioning, orientation, and other parameters of the augmented dragon element so as to preserve a similar augmented reality 101 .
  • For example, where the anchoring stop sign appears larger in the independent media, augmented reality 101 may be performed with a proportionally larger dragon.
  • Such changes may be made in real time with modification of the independent media.
  • For example, as the captured stop sign grows larger on the screen of device 20″, augmented reality 101 may increase the size of the dragon proportionally, while keeping the dragon in a same relative position to the stop sign.
  • The first communications device 20′ may be alerted of the performance, a number of performances, or users viewing the performance, and/or may receive payment for the performance, etc.
  • a user of the second communications device 20 ′′ may reply, comment, rate, report, or otherwise give feedback on the augmented reality 101 .
  • Several augmented realities may be applicable to a same anchor, and following performance of one in S 350, a next augmented reality that still satisfies the origin and anchor requirements may be performed.
  • a viewing order, or a selection of augmented realities may be received before performance in S 350 to control the same.
  • the various augmented realities otherwise matching a same piece of independent media in S 310 -S 340 could be performed together or simultaneously.
  • Ecommerce options may be completed upon the performance, such as debiting a user's account an agreed-upon amount or an amount specified in origin information, deducting a number of permitted views from a user's subscription, or invoicing the user, all in association with second communications device 20″ and/or any user account associated therewith.
  • The example of FIG. 7 can be iterated millions of times, or really any number of times, with receipt in S 301 occurring from numerous distinct users across the globe. Because independent media from S 302 is compared in S 330 against the information received in S 301, potentially from a remote or distant user or at a vastly different point in time, this may be resource-intensive on any network, processor, or communications device executing all or part of example methods.
  • the example of FIG. 7 improves resource usage by limiting the comparison in S 330 —the search for the stop sign anchor in the live video—to users already in the geofenced area specified by the origin information as determined by S 310 .
  • example methods may save actual comparison in S 330 , and even receipt/transmission of augmented reality and anchor information in S 301 and independent media in S 302 , for situations where independent media is most likely to result in a match in S 330 .
  • additional origin limitations may be used to further screen methods.
  • a passcode or user ID may be entered into second communications device 20 ′′ in S 303 as additional user limitation information and compared against received origin information from S 301 as another check before receiving or comparing independent media and anchor data.
  • In another example, a leader or offering member of a book club or study group takes a picture of a page of a textbook or novel on their communications device and adds notes to the margins, as well as highlighting of the text, as augmentation of the image of the page.
  • the augmentation may be executed through an application on the leader's communications device configured to execute at least parts of example methods.
  • the creating leader sets a list of other users, here, the other members of the group or club, as users with permission to perform the augmented reality, here, the notes added to the text page.
  • This list of users may be identified by user ID in the application, phone number, social media profile, etc.; alternatively, the list may be identified by a sharing option whereby the augmented reality is shared through a social media platform or listserv to a specific group, such as followers, Facebook group members, an email chain, etc.
  • the application then transmits the augmented reality information along with anchor information and origin information over a network to an application host.
  • the application sends the highlight and note visual information as the augmented reality information, the underlying page as the anchor information, and the invited/permitted users from the book club or study group as additional origin information.
  • the anchor information may be an image of the underlying page itself, or fingerprint-type data of the page image that is smaller, along with positional information for the notes and highlighting with respect to the underlying page.
  • The augmented reality, that is, the image of the page with the highlighting and notes, may be used as both augmented reality information and anchor data, with the application or other processor extracting identifying features for performance of the augmented data from the page image.
  • the highlight and note visual information as augmented reality information and the underlying page as the anchor information may be sent directly to the study group member's emails as a file(s) to be performed on their devices.
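  • To make the data flow of this book-club example concrete, the creating application's transmission in S 301 might be organized along the lines sketched below; every field name and value is an illustrative assumption rather than a format defined by the disclosure.

```python
# Sketch of what the creating leader's application might transmit in S301.
submission = {
    "augmented_reality": {                 # the perceivable additions
        "highlights": [{"page_region": [0.10, 0.32, 0.85, 0.38], "color": "yellow"}],
        "notes": [{"text": "key theme introduced here", "margin": "right", "y": 0.35}],
    },
    "anchor": {                            # what the member's camera must match
        "type": "image_fingerprint",
        "fingerprint": "<reduced grayscale/edge representation of the page>",
    },
    "origin": {                            # who may perform the augmented reality
        "allowed_users": ["member_ana", "member_raj", "member_kim"],
        "expires": "2018-05-01T00:00:00Z",
    },
}
```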
  • the application host may receive all the data from the creator/leader of the book club in S 301 .
  • The data may further be transmitted to communications devices of only the identified users, which receive it in S 301, depending on how data is to be distributed in example methods.
  • The other identified user members of the book club may receive a notification in S 305, such as through a same application installed on their communications devices, of available augmented reality, because their user name or other identifying information, such as phone number, user ID, group membership, etc., matches that specified by the creating leader.
  • a notification may be displayed or otherwise communicated to these other members, such as through an image of the anchor data—the underlying page—or description of the same, such as “notes available for augmented reality of page 87,” through communications devices or otherwise in S 305 .
  • the other members may then, through their communications device, take a picture of, record, live stream, etc. the page in question through a camera/screen functionality, potentially through the application and/or a native function of the communications device.
  • This independent media may be sent to the application host, which receives it in S 302, potentially along with a user ID or other matching information to identify the user as limitation information in S 303. It is possible that the receipt of the image on the user's communications device itself is the receipt in S 302, and all other information and comparisons of example methods may be sent to, and performed locally on, that device.
  • The independent media may not be received in S 302 until after the capturing user has been verified as an identified member of the book club in S 310 and S 320 (“Y”), in order to conserve the amount of data needing to be received and compared.
  • the user member's captured page is then compared with the anchor data from the creating leader in S 330 , at the application host, on the user's communications device, or at some other computer processor properly configured, by using image recognition between the anchor and independent media.
  • Upon a match in S 340, the augmented reality is performed for the group member in S 350.
  • the screen of the group member's device may display the underlying page with the notes and highlights added thereto in the same positioning, proportion, color, etc. as created by the leader.
  • If the group member's device no longer captures the page, the augmented reality may then be discontinued or removed in S 341, as the anchor no longer matches the independent media in S 340.
  • example methods may thus be used to share commentary, description, art, access, tagging, etc. in the same sensed context of underlying subject matter.
  • the underlying subject matter may be unique and/or only at a specific location, or may be mass-replicated at several different locations, with each potentially triggering augmented reality.
  • Creating and receiving users may limit augmented reality performance to specific users, locations, times of day, dates, group members, number of views, payment status, etc. in order to selectively share and control such content.
  • users may rate, reply to, report, share, tip, add view indicators, and/or comment on performed augmented reality to potentially guide creators toward better practices, avoid or remove harmful or illegal content, make others aware of particularly useful or interesting augmented realities, show support, etc.
  • a final specific implementation of example methods may use sound as well as visuals.
  • a product supplier may publish a commercial advertisement for a particular product, and the commercial may be an audio-visual performance broadcast on TV, as web ads, etc.
  • the supplier may provide to an application host augmented reality information and anchor information associated with the commercial advertisement in S 301 .
  • The anchor information may be a unique sound sequence of the advertisement.
  • the augmented reality may be additional audio feedback in sync with the advertisement, such as vocally providing additional information about the product featured, making jokes about the advertisement in a self-deprecating manner, adding an audio track to the advertisement, providing humorous sound effects, etc.
  • the augmented reality may also include visual information, such as further textual descriptions, web links, humorous imagery or cartoons of the product, etc., that are synched in time with the advertisement.
  • the product supplier may also furnish origin information, such as user demographic information and/or event limitation.
  • the supplier may desire the augmented reality to be performed only for users in a particular age range or home market.
  • the supplier may also desire the augmented reality to be performed only in contemporaneous or authentic settings, such as a first run of a commercial or at a particular convention, and not during later reruns or after the event.
  • Origin information supplied in S 301 may include these restrictions, which can be compared against user type, location, time, an event being active, a user being checked-in to an event, etc.
  • the application host and/or content provider controlling the same and potentially a communications device application may receive this augmented reality, anchor, and/or origin information from the product supplier in S 301 and/or may push it to individual communications devices in S 301 .
  • the providing of augmented reality information and anchor information to communications devices from the application host may occur after a satisfactory comparison of origin and limitation information in S 310 and S 320 .
  • the application host might only provide the augmented reality and/or anchor information after determining that a communications device is being operated at a time or at an active event when a triggering commercial is known to be live and in its first run.
  • the augmented reality and anchor information may be pushed to communications devices regardless, and this determination of limitation satisfying origin information in S 310 and S 320 may be executed at any time prior to actually performing the augmented reality.
  • the product supplier may pay the application host and/or developer to provide the augmented reality information to users experiencing the commercial; for example, payment may be arranged for each end user performing the augmented reality.
  • Users of communications devices may then be prompted to activate their application or otherwise enable receipt of independent media during the advertisement in S 305 , such as by a pop-up or text alert that the advertisement features augmented reality.
  • the advertisement itself may make users aware that augmented reality of the advertisement is available for performance.
  • the user may activate or present their communications device to receive the audio of the advertisement in S 302 , which is compared against the anchor data of the unique sounds of the advertisement in S 330 . Such comparison may be made using a comparison of sound waves received and present in anchor data, within a matching threshold, by comparison of frequency domains of the audio signals, and/or any other type of audio identification.
  • the user's communications device may then play back the augmented reality information—additional audio commentary and/or visual sequences at times and speeds in sync with the advertisement—on its speaker and/or screen in S 350 .
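  • One illustrative way to keep that playback aligned is to locate the captured clip within the anchor sound sequence by cross-correlation and use the resulting offset as the playback position; the sketch below assumes both inputs are mono sample arrays at the same sample rate, and the normalized correlation peak could double as the matching threshold test of S 330.

```python
import numpy as np

def ad_position_seconds(anchor_audio, captured_clip, sample_rate):
    """Estimate where, within the anchor sound sequence, a short captured clip occurs.

    The peak of the full cross-correlation gives the sample at which the clip
    best aligns with the anchor; dividing by the sample rate yields seconds,
    which can be used to start the added commentary in sync with the advertisement.
    """
    anchor_audio = np.asarray(anchor_audio, dtype=np.float64)
    captured_clip = np.asarray(captured_clip, dtype=np.float64)
    corr = np.correlate(anchor_audio, captured_clip, mode="full")
    start_sample = int(np.argmax(corr)) - (len(captured_clip) - 1)
    return start_sample / float(sample_rate)
```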
  • the combined advertisement with additional audio and/or visual is an augmented reality that may provide additional information or user experience.
  • the product developer may then be debited or charged an amount for each performance of the augmented reality in S 351 .
  • example methods may thus be used to create commentary, description, art, access, tagging, instruction, etc. in the same sensed context of underlying subject matter.
  • the underlying subject matter may be unique and/or only at a specific location, or may be mass-replicated at several different locations, with each having associated augmented reality.
  • Creating users may limit augmented reality performance to specific users, locations, times of day, dates, group members, number of views, payment status, etc. in order to selectively share and control such content.
  • users may rate, reply to, report, share, tip, add view indicators, and/or comment on performed augmented reality to potentially guide creators toward better practices, avoid or remove harmful or illegal content, make others aware of particularly useful or interesting augmented realities, show support, etc.
  • Actions throughout example methods may include user authentication, data verification, privacy controls, and/or content screening.
  • users may never be provided with identifying information of another, such that a party creating augmented reality content and/or a party consuming the same may remain anonymous to the other.
  • data may be encrypted and not retained at one or all points in example methods, such that there may be no discoverable record of augmented reality, independent media, origin and/or limitation information in regard to such content, existence, performance, etc.
  • a third party or application host may sample or review some or all augmented reality information for potentially harmful, wasteful, or illegal content and remove the same, as well as monitor user feedback to identify such content.
  • example methods may take advantage of a user login model requiring user authentication with a password over a secured connection and/or using operating-system-native security control and verification on communications devices, to ensure only verified, permitted human users access example methods and potentially user accounts.
  • Example methods may also require payment verification, such as credit card or bank account authentication, to verify identity and/or ability to pay before allowing users to participate in creating, transmitting, and/or receiving augmented reality in example methods.
  • Example methods may further use location and input verification available through operating system controls or other network functionalities, potentially in combination with user feedback, to prevent or punish location spoofing, user account compromising, bot access, and/or harassment or waste in example methods.
  • Example methods may be used in combination and/or repetitively to produce multiple options and functionalities for users of communications devices.
  • Example methods may be performed through proper computer programming or hardware configuring of networks and communications devices to receive augmented reality, origin, and limitation information and act in accordance with example methods, at any number of different processor-based devices that are communicatively connected.
  • example methods may be embodied on non-transitory computer-readable media that directly instruct computer processors to execute example methods and/or, through installation in memory operable in conjunction with a processor and user interface, configure general-purpose computers having the same into specific communications machines that execute example methods.
  • example embodiments may be varied through routine experimentation and without further inventive activity.
  • For example, a direct image analysis may be used to determine useable anchors in visual real-world media to be augmented.
  • vastly more complex analysis and input may be used to determine anchors in or alongside auditory, video, or other perceivable media.
  • Variations are not to be regarded as departure from the spirit and scope of the exemplary embodiments, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Finance (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Methods, hardware, and software create and transmit augmented reality in context with captured real world media, so as to replicate a similar augmented reality at a different instance. A computer processor in a communications device handles a combination of augmented reality information, anchor information that provides the context-matching, and captured real world media information. The computer processor determines if the real world subject matter has suitable anchor information to control how augmented reality elements should appear contextually with such media. A graphical user interface on the communications device may provide a user with several options for creation of augmented reality. The augmented reality, anchor, and any additional information is transmitted to a different device to identify triggering or context-matching media. The augmented reality is performed based on the triggering media as perceived on the different device. Media matching, geographical matching, and user characteristics may each be used to determine a trigger.

Description

    RELATED APPLICATIONS
  • This application claims priority under 35 U.S.C. § 120 to, and is a continuation of, co-pending U.S. patent application Ser. No. 15/482,670 and Ser. No. 15/482,644, both filed Apr. 7, 2017, by Piemonte and Staake; the '670 and '644 applications are incorporated by reference herein in their entireties.
  • BACKGROUND
  • Augmented reality applications may use a simple overlay of graphical/animated subject matter on live or recorded video or still images. In these simple systems, a user or application may position a static graphic, text, or other visual element superimposed on the underlying video or image. Other augmented reality applications may blend augmented reality subject matter with underlying visual data, or at least use the underlying visual data to position the augmented reality subject matter. For example, a human face may be recognized in a video feed or image still, and the augmented reality application may apply coloration, designs, distortions, etc. that track only the face in the video or image, so as to further the augmented reality effect that the face actually has such characteristics.
  • Aside from human faces, other objects and/or image information for tracking and/or formatting augmented reality subject matter may be used. For example, a QR code may be recognized as a target for augmented reality overlay, both in subject matter and positioning of the augmented reality subject matter. Similarly, a zoom level of a video feed may determine sizing and/or resolution of augmented reality subject matter on the zoomed video. Still further, augmented reality subject matter may be added and/or formatted after video or image capture, following further processing of the video and/or image data.
  • Augmented reality subject matter is typically provided by the application receiving the visual data. For example, an application may offer a set of stickers, labels, drawn text, cartoons, etc. that can be applied to live or captured visual information and then saved together with the visual data as an augmented reality visual. Or, for example, a set of facial overlays, game images and objectives, graphical heads-up displays or GUIs, etc. can be offered by augmented reality applications for overlay/intermixing with visual data to create augmented reality visuals for users. Augmented reality subject matter may be geofenced or chronofenced, where various labels, game characters, filters, distortions, and/or any other augmented reality application may only be available at particular locations and times. For example, a picture overlay of “San Antonio Tex.” or “Spurs 24—Pacers 32” may be available only when a user is determined to be in the San Antonio basketball arena through location services on a mobile device, and/or only during a Spurs-Pacers game when the score is 24 to 32.
  • SUMMARY
  • Example embodiments and methods create, transmit, and/or perform augmented reality in context with underlying real world subject matter. Example embodiments include communications devices that can create and transmit augmented reality and anchor information for reproduction in similar media at a separate instance by a user in receipt of the information, as well as application hosts for facilitating the same. Through a computer processor in the communications device configured with example methods, augmented reality created on many types of real world subject matter may be selectively performed by others.
  • Example methods use a computer processor to create augmented reality with augmented reality information having perceivable elements added to underlying real world subject matter and anchor information to properly place or time or otherwise configure the augmented reality information in the underlying media. The augmented reality information and anchor information may be transmitted at one instance, as combined media representing the augmented reality or as distinct pieces of information. The real world subject matter may be scrutinized for distinctiveness and/or quality anchor data. If there is sufficient distinctiveness an augmented reality build graphical user interface can be presented to a user of the communications device. If not, the user may be alerted of the lack of distinctiveness and prompted to find new real world subject matter for augmentation. Submitted augmented reality may be searchable with a string or other information provided to the user.
  • Example methods handle several forms of information with a computer processor to ultimately perform augmented reality, including augmented reality information having perceivable elements to be added to underlying media, anchor information to properly place or time or otherwise configure the augmented reality information in the underlying media, origin and limitation information to control when and how information is exchanged and compared, if at all, and the actual media information. The augmented reality information and anchor information may be received at one instance, as combined media representing the augmented reality or as distinct pieces of information. The actual media may be received at another instance, potentially from a wholly distinct user and/or time. The computer processor compares the anchor information with the media to determine if the anchor information matches, is found in, or is otherwise triggered by the media. If the comparison is favorable, the augmented reality information is performed in the media, in the manner dictated by the anchor information, so as to recreate the augmented reality in the context of the media, even in real time with the capture of the media. The comparison and/or receipt of the actual media may be conditioned upon a user or device satisfying other origin limits, such as being within a designated geographical area that matches the anchor, being controlled by a particular authenticated user, being executed at a particular time, being executed at and while a particular event is occurring, having been paid for, etc. In this way, potentially burdensome media and augmented reality information transfer, comparison, and performance together can be reserved for particular context-matching circumstances, preserving example embodiment network and communications device resources.
  • BRIEF DESCRIPTIONS OF THE DRAWINGS
  • Example embodiments will become more apparent by describing, in detail, the attached drawings, wherein like elements are represented by like reference numerals, which are given by way of illustration only and thus do not limit the example embodiments herein.
  • FIG. 1 is an illustration of an example embodiment network configured to share and perform augmented reality.
  • FIG. 2 is an illustration of an example embodiment communications device configured to share and perform augmented reality.
  • FIG. 3 is an illustration of an example embodiment GUI for creating augmented reality.
  • FIG. 4 is a flow chart of an example method of creating and sharing augmented reality.
  • FIG. 5 is an illustration of an operations sequence in an example method of creating and sharing augmented reality.
  • FIG. 6 is a flow chart illustrating an example method of sharing and performing augmented reality.
  • FIG. 7 is an illustration of an operations sequence in an example method of sharing and performing augmented reality.
  • DETAILED DESCRIPTION
  • Because this is a patent document, general broad rules of construction should be applied when reading it. Everything described and shown in this document is an example of subject matter falling within the scope of the claims, appended below. Any specific structural and functional details disclosed herein are merely for purposes of describing how to make and use examples. Several different embodiments and methods not specifically disclosed herein may fall within the claim scope; as such, the claims may be embodied in many alternate forms and should not be construed as limited to only examples set forth herein.
  • It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited to any order by these terms. These terms are used only to distinguish one element from another; where there are “second” or higher ordinals, there merely must be that many number of elements, without necessarily any difference or other relationship. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments or methods. As used herein, the term “and/or” includes all combinations of one or more of the associated listed items. The use of “etc.” is defined as “et cetera” and indicates the inclusion of all other elements belonging to the same group of the preceding items, in any “and/or” combination(s).
  • It will be understood that when an element is referred to as being “connected,” “coupled,” “mated,” “attached,” “fixed,” etc. to another element, it can be directly connected to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly connected,” “directly coupled,” etc. to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). Similarly, a term such as “communicatively connected” includes all variations of information exchange and routing between two electronic devices, including intermediary devices, networks, etc., connected wirelessly or not.
  • As used herein, the singular forms “a,” “an,” and “the” are intended to include both the singular and plural forms, unless the language explicitly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, characteristics, steps, operations, elements, and/or components, but do not themselves preclude the presence or addition of one or more other features, characteristics, steps, operations, elements, components, and/or groups thereof.
  • The structures and operations discussed below may occur out of the order described and/or noted in the figures. For example, two operations and/or figures shown in succession may in fact be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Similarly, individual operations within example methods described below may be executed repetitively, individually or sequentially, so as to provide looping or other series of operations aside from single operations described below. It should be presumed that any embodiment or method having features and functionality described below, in any workable combination, falls within the scope of example embodiments.
  • The inventors have recognized that augmented reality offers a useful way of communicating additional information about media typically encountered on communication devices. However, augmented reality is conventionally available only in connection with very specific objects, such as QR codes or predefined files from a single source, that define how augmented reality elements should be displayed, often without any contextual connection with the underlying object. Thus, the inventors have newly recognized a problem where augmented reality does not contextually describe or associate with other commonly-encountered media, where it is more useful.
  • The inventors have recognized that it is extremely difficult for individuals to create and share augmented reality in a non-pre-set context. That is, users are unable to augment arbitrary media with contextual subject matter, and share the same in a form where others can independently experience the augmented reality. Similarly, the inventors have recognized that it is extremely burdensome in computing environments to transmit and compare the amount of data required to offer augmented reality in connection with specific media, especially in a potentially unlimited network such as the Internet, because of the size of such media and the computational requirements in triggering and presenting a particular augmented reality, out of millions or more that could be offered, in connection with appropriate objects. To overcome these and other newly-recognized problems, the inventors have developed the example embodiments and methods described below, with unique solutions enabled by example embodiments.
  • The present invention is devices, software as stored or executed on tangible computer-readable media, and methods for creating, sharing, and/or performing contextual augmented reality. In contrast to the present invention, the few example embodiments and example methods discussed below illustrate just a subset of the variety of different configurations that can be used as and/or in connection with the present invention.
  • FIG. 1 is an illustration of an example embodiment network useable to create and share augmented reality content among and to multiple users. As shown in FIG. 1, a network 10 provides communicative connection among several different communications devices 20. For example, network 10 could be the Internet or another TCP/IP protocol network such as a WAN or LAN or intranet, or network 10 could be a wireless cell network operating on CDMA, WiFi, Bluetooth, GPS, near field communications, etc. Network 10 may thus be any structure or protocol that allows meaningful communicative connections between communications devices 20 and/or other information sources. Communications devices 20 may be directly communicatively connected among one another to sideload or directly transmit data between devices 20, such as through NFC, WiFi, Infrared, etc. Although communications devices 20 are shown in connection with a network 10, it is understood that devices 20 and ultimately users 1 may be directly connected to each other, and potentially only each other, through such sideloading or direct communications and/or only directly connected to content providers 50 without use of network 10.
  • One or more content providers 50 connect to one or more user devices 20, either directly or via network 10 or another network. Providers 50 can be any content, media, functionality, software, and/or operations providers for communication devices 20. For example, providers 50 may include mobile software developers with server backends, application hosts, and/or access portals for downloading and running software and/or streaming media on devices 20. Or providers 50 may include a network operator, such as a cellphone and mobile data carrier operating network 10 and controlling access rights of users 20 as well as general operation of network 10. Or providers 50 may be application storefronts providing search, download, operations connectivity, updating, etc. for apps on communication devices 20. Or providers 50 may be a website or ftp server offering downloadable files or other content that may be displayed or otherwise consumed through devices 20. Although providers 50 are mostly shown clustered around network 10 for connectivity to devices 20, it is understood that any direct or indirect connection between any provider 50 and any device 20 is useable in example embodiments.
  • Example embodiment application host 100 provides storage and delivery of augmented reality content and/or potentially other networking functionality among devices 20 and ultimately users 1, optionally through providers 50 and/or network 10. For example, host 100 may be connected to several different devices 20 through a network 10. Or application host 100 may be connected directly to, and controlled by, a content provider 50, to provide augmented reality information and/or functionality among devices 20. Still further, host 100 may connect directly to a device 20. This flexibility in networking can achieve a variety of different augmented reality functionalities, content control, and commercial transactions among potentially independent hosts 100, providers 50, network 10, and/or devices 20.
  • As shown in FIG. 1, example embodiment application host 100 may be connected to or include computer hardware processors, server functionality, and/or one or more databases 105, which may store augmented reality information, functionality, and/or user or network profile or operational data for successful interaction among various networked components. In this way, host 100 may accept, persist, and analyze data from user communications devices 20, network 10, and/or providers 50. Although shown as separate elements in FIG. 1, it is understood that host 100 may be integrated with content provider 50, databases 105, and/or network 10, such as an application portal accessed through a mobile app or program on devices 20 that provides application updating, augmented reality content, other application functionality, login, registration, ecommerce transactions, instructions, technical support, etc., like a full-service application portal available from a computerized user device 20 in order to execute all aspects of example methods discussed below.
  • As used herein, “communications device(s)”—including user communications devices 20 of FIG. 2—is defined as processor-based electronic devices configured to receive, transmit, create, and/or perform augmented reality content. Information exchange, and any communicative connection, between communications devices must include non-human communications, such as digital information transfer between computers. As used herein, “augmented reality”—including augmented reality 101 of FIG. 2—is defined as subject matter including a mixture of both real-life audio, visual, tactile, and/or other sensory media and added audio, visual, tactile, and/or other sensory subject matter that is explicitly based on the underlying real-life media. For example, augmented reality could include a real-time video feed 1 with audio captured without intentional modification by a camera and microphone in combination with an extraneous graphic, mask, text, animation, filter, noise, vibration, etc. that positionally tracks the underlying real-life subject matter.
  • FIG. 2 is a schematic of an example embodiment user communications device 20 illustrating components thereof that may permit creation and sharing of augmented reality 101 as described in example methods below. For example, communications device 20 may include a camera package 110 including a lens and image sensor, microphone 115, a computer processor 120, persistent and/or transient storage 130, external communications 140, display screen 180, and/or input device and input sensors 185. Although elements are shown within a single device 20, it is understood that any element may be separate and connected through appropriate communications such as an external bus for a peripheral or wired or wireless connection.
  • Processor 120 may include one or more computer processors connected to and programmed or otherwise configured to control the various elements of example embodiment device 20. Processor 120 may further be configured to execute example methods, including creating, transmitting, and performing augmented reality in accordance with user input, and controlling display 180 and sensor 185/camera 110/microphone 115, for example. Processor 120 can be any computer processor, potentially with associated processor cache, transient memory, video buffer, etc., configured or programmed to process augmented reality content. Processor 120 may further process any input to device 20, including visual, tactile, and/or audio information received from microphone 115, camera 110, a vibration-sensitive transducer, etc., for augmented reality creation, transmission, and/or performance. Processor 120 may also receive sensor information from sensors 185, e.g., touch or cursor information, and process the same as user interaction or input. Processor 120 may further execute software or include configured hardware that allows for execution of example methods discussed below.
  • Storage 130 may be a dedicated data storage drive or may be a partition of a general data store in which augmented reality information, origin or limitation information, application information, and/or device operations and raw data can be saved. Storage 130 may be, for example, random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a processor cache, optical media, and/or other computer readable media.
  • Camera 110 may include one or more lenses and/or apertures that may be controlled by actuators that move the lenses and apertures among different positions to focus captured optical data. Similarly, camera 110 may adjust focus digitally or in response to user input defining focus locations in the scene being captured. Camera 110 may include image sensor elements such as a charge coupled device (CCD) array, a photodiode array, or any other image sensing device that receives light, potentially via the lens, and generates image data in response to the received light. Camera 110 may include a light to aid in reflection and/or focusing laser. Camera 110 may be further configured to obtain or adjust image information such as focus, zoom, white balance, exposure, saturation, and/or other image functions. Camera 110 and/or processor 120 may be further configured with one or more video codecs or other image processing software or drivers to capture, process, and store external independent media such as actual video from the environment 1 as well as augmented reality.
• Microphone 115 may be any auditory transmission and/or reception device capable of audio pickup and/or playback. For example, microphone 115 may include an embedded speaker and/or an embedded induction microphone in a mobile device. Display 180 may be a screen, viewfinder, monitor, or any other device capable of visually displaying visual augmented reality 101. For example, display 180 may be a touchscreen on a smartphone like an iPhone or Android device or on a tablet like an iPad or Surface, or display 180 may be an LCD monitor or projector, for example.
• Sensors 185 provide input information. For example, if display 180 is a touchscreen, sensors 185 may be embedded multi- or single-touch capacitive sensors capable of detecting finger or stylus touch, pressure, movement, etc., with respect to display 180, during operation of device 20. Or, for example, sensors 185 may be an accelerometer or magnetized compass with associated hardware or software capable of determining device orientation and/or movement, potentially with respect to display 180 during operation of device 20. Or, for example, sensors 185 may be a button or an external mouse or joystick and associated hardware or software capable of controlling and determining cursor position and/or activation with respect to display 180 during operation of device 20. Sensors 185 are connected to processor 120 and can deliver sensed input information to processor 120 with respect to display 180, including cursor or contact position, duration, number of contacts, pressure, movement speed, etc.
• Example embodiment communications device 20 may further include a communications port 140 for external wired or wireless communication. For example, communications port 140 may be an antenna configured to transmit and receive on CDMA bands, a Wi-Fi antenna, a near field communications transmitter/receiver, a GPS receiver, an external serial port or external disk drive, etc. Processor 120 may provide data from storage 130, input data from camera 110, sensors 185, microphone 115, etc., to external devices through communications port 140, as well as receive application and/or augmented reality and other information from providers through port 140. Further, communications port 140 may function as another input source for sensors 185.
  • Although networked elements and functionalities of example embodiment device 20 are shown in FIG. 2 as individual components with specific groupings and subcomponents, it is understood that these elements may be co-located in a single device having adequately differentiated data storage and/or file systems and processing configurations. Alternatively, the elements shown in FIG. 2 may be remote and plural, with functionality shared across several pieces of hardware, each communicatively connected at adequate speeds to provide necessary data transfer and analysis, if, for example, more resources or better logistics are available in distinct locations.
• FIG. 3 is an illustration of an example embodiment graphical user interface 300 that can be presented on a communications device, such as touchscreen 180 of example embodiment device 20. As seen in FIG. 3, GUI 300 permits creation of augmented reality in context with captured real-world media 1. GUI 300 may present captured reality 1 offset, at an angle, and/or against a high-contrast background to highlight its uniqueness and/or permit 3-dimensional building on the same. GUI 300 may present various tools, such as a text tool 301, object or icon tool 302, picture tool 303, and/or drawing tool 304 for selection by a user. For example, a user may touch any of tools 301, 302, 303, and/or 304, and GUI 300 may present further options for adding a corresponding type of augmented reality to real-world media 1. Similarly, other platform- and service-provided tools and even APIs may be provided in GUI 300. For example, a tool may permit overlay of a tweet, direct message, SMS content, Instagram photo, Giphy animation, image search result, etc. on captured reality 1 along with further stylization.
• GUI 300 may permit zooming, rotation, translation, or other movement of captured reality 1 through tactile input, including long-touching, tapping, pinching, spinning, dragging, etc. of finger(s) or stylus(es) across a touchscreen presenting the same, potentially in combination with the above-discussed tools. Similarly, added augmented elements may be moved, anchored, repositioned, animated, reshaped, deleted, recolored, etc. through such tactile input as well as through tool stylization. For example, a user may long-touch a particular image or graphic to fix it at a locked position in the underlying captured reality 1, and the image or graphic may track the underlying reality in size, orientation, position, etc. so as to appear as a part of the underlying reality. Of course, other input, such as through a keyboard, mouse, programming, etc., can manipulate objects on GUI 300. In addition to visual confirmation and display, example embodiment communications device 20 may permit tactile and auditory feedback and input in creation of augmented reality through example embodiment GUI 300, such as through microphones, transducers, etc. that capture such tactile, auditory, or any other sensory feedback and add the same to underlying real-world media in the GUI.
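• For illustration only, the tactile manipulation just described can be thought of as mapping gesture deltas onto an element's transform relative to the underlying captured media. The following minimal Python sketch is not part of the disclosure; the OverlayElement class and its fields are hypothetical names chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class OverlayElement:
    """Hypothetical augmented element anchored to underlying captured media."""
    x: float = 0.0       # position relative to the anchor, in anchor-image pixels
    y: float = 0.0
    scale: float = 1.0
    rotation_deg: float = 0.0

    def drag(self, dx: float, dy: float) -> None:
        # A one-finger drag translates the element over the captured media.
        self.x += dx
        self.y += dy

    def pinch(self, factor: float) -> None:
        # A two-finger pinch scales the element; a spin gesture could adjust rotation_deg similarly.
        self.scale *= factor

# Example: a sticker dragged 40 px right and 10 px down, then enlarged by 25%.
sticker = OverlayElement(x=100, y=200)
sticker.drag(40, 10)
sticker.pinch(1.25)
print(sticker)
```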
  • Given the variety of example functions described herein, example embodiment devices may be structured in a variety of ways to provide desired functionality. Other divisions and/or omissions of structures and functionalities among any number of separate modules, processors, and/or servers are useable with example embodiment devices, including execution on a single machine or among distant, exclusive servers and processors. Similarly, although the example embodiment device 20 of FIG. 2 and the example embodiment GUI 300 of FIG. 3 are capable of executing and can be configured with example methods, it is understood that example methods are useable with other network configurations and devices, and device 20 is useable with other methods.
  • Example Methods
• FIG. 4 is an illustration of an example method of creating and/or transmitting augmented reality information that can then be independently reproduced by others accessing the same underlying media and anchor of the augmented reality. As seen in FIG. 4, in S401, real-world media and a candidate anchor describing the same are received. The real-world subject matter is any perceivable media as faithfully captured by a communications device, including still images captured by a camera, video, audio, haptic feedback, locally-stored photographs and video, websites, social media profiles, pictures, posts, etc. that can be used as an underlying basis on which to build augmented reality, and, in other methods, identified by a receiving user for its anchors to replicate the augmented reality.
  • An anchor is information in or alongside real-life subject matter that dictates whether and/or how augmented reality is to be performed in connection with the real-life subject matter. For example, anchor information in S401 may be or identify a still image, video, audio-visual presentation, etc. having elements on which augmented reality information can be overlaid, positioned with, synchronized with, moved with, and/or otherwise presented in context with underlying real-world media as dictated by the anchor data. For example, anchor data in S401 may be extracted from a media feed captured by a user's communications device onto which augmented reality may be displayed. Anchor data may also be an external file, QR code, NFC signal, WiFi packet, or other information that contains the augmented reality information in context of separate real-world subject matter.
• In S410, the anchor data, and potentially real-world media containing the same, is examined for distinctiveness, particularly, the ease or ability for the anchor to be identified and used to replicate augmented reality in independently captured media. For example, in S410, a processor may perform an edge-detection on a captured visual image, a Fourier transform and/or frequency domain analysis on captured sound waves, etc. to determine how uniquely or easily aspects of the image or sound may be identified again and augmented by anchor data when independently re-captured. Or, for example, in S410, a visual image may be examined for blurriness and/or lack of light as factors discounting distinctiveness in any anchor data therein. Still further, video or image data may be subject to several types of criteria for satisfactorily-unique aspects through a best bin classifier or K-D search tree and/or tested for workability in an image tracker, such as that described in "Learning to Track at 100 FPS with Deep Regression Networks" by Held et al., Stanford University, incorporated herein by reference in its entirety.
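• The disclosure does not fix a single distinctiveness test for S410; the sketch below shows one plausible heuristic of the kind described, scoring a candidate visual anchor by edge density and penalizing blur. It assumes OpenCV (cv2) and NumPy, and the function names and threshold values are illustrative only, not specified by the disclosure.

```python
import cv2
import numpy as np

def anchor_distinctiveness(image_bgr: np.ndarray) -> float:
    """Rough distinctiveness score in [0, 1] for a candidate visual anchor.

    Combines edge density (how many re-identifiable edges exist) with a
    blur penalty (variance of the Laplacian), along the lines of the
    checks described for S410. Thresholds here are illustrative only.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    edges = cv2.Canny(gray, 100, 200)
    edge_density = float(np.count_nonzero(edges)) / edges.size  # 0..1

    blur_measure = cv2.Laplacian(gray, cv2.CV_64F).var()
    blur_penalty = min(blur_measure / 150.0, 1.0)  # approaches 0 when badly blurred

    return min(edge_density * 10.0, 1.0) * blur_penalty

def anchor_is_suitable(image_bgr: np.ndarray, threshold: float = 0.3) -> bool:
    # The "Y"/"N" decision of S420 expressed as a simple threshold on the score.
    return anchor_distinctiveness(image_bgr) >= threshold
```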
  • The type of distinctiveness analysis in S410 may vary based on the type of anchor captured in S401 and/or type of anchor or anchor fingerprint used to replicate augmented reality in captured media. For example, the example method of FIGS. 5 and 6 may use a number of different computer-processor-driven anchor-matching techniques, and the same type of anchors used there may be examined for distinctiveness in S410. As mentioned above, while anchor data may be visual or audio information inherent within captured real-life media, such as edges, distinctive pixels, audio frequency transforms, etc., anchor data may also be descriptive information of the same, such as a particular location and/or time, orientation, etc. where the media is found and would be perceivable to another communications device, transmitted in a digital file and/or QR code for example. For example, the learning tracking method of Held, et al., incorporated above, may be used to anchor augmented reality information in underlying real-world media, and information captured in S401 may be tested under that method for successful tracking, which would indicate anchor satisfaction “Y” in S420.
  • In S420, if the candidate anchor is unsuitable “N”, such as lacking a threshold amount of distinctiveness or uniquely-identifiable features, the user may be notified of the unsuitability in S421. In such instance, a user may be prompted to submit a new anchor in S401. Or, for example, a user may proceed with example methods, with a warning that any augmented reality may have poor reproducibility or other sharing problems.
• When submitting and/or resubmitting candidate anchors in S401, a user may be notified in S405 of other or better alternatives for candidate anchors, including objects, video, and/or other media that other users have successfully used for anchors having desired levels of distinctiveness. The suggestion in S405 may be automatic from the start of example methods, or prompted after failure in S420 and S421. The suggestion in S405 may list successful nearby anchors to the user or may identify purchasable subject matter prompted by a third-party advertiser, for example. Use of a previous anchor may automatically result in a "Y" at S420, because the previously-used anchor was specifically included in S405 for its successful distinctiveness.
  • In S420, if the candidate anchor is suitable “Y”, such as having a threshold amount of distinctiveness or uniquely-identifiable features, the underlying media having or associated with the anchor is opened for creation of augmented reality in S430. For example, GUI 300 (FIG. 3) may be displayed in S430 with the captured media 1 having satisfied the anchor requirement from S420 “Y”.
  • In S440, using the build GUI or other input, a user creates the augmented reality using underlying real-world media. For example, a user may position text, synchronize sounds, add timed vibrations, add animation, draw graffiti, stick emojis or banners, associate a weblink, filter the media, overlay a picture or image, and/or add any additional perceivable elements to the underlying media, which may be a video, a still image, a song or sound sequence, etc. The added elements may track position, timing, size, distance, volume and/or other elements of the underlying real-world media. The underlying real-world media and added contextual elements, combined to be perceivable together by a user, create the augmented reality, and the anchor information dictates whether and how the augmented elements are to be performed in association with the underlying real-world subject matter.
  • In S440, a user may also associate origin information with the augmented reality that limits or conditions its sharing and/or performance beyond the contextual anchor information. For example, a user may provide a location, or a location may be automatically determined through a communications device, or an expiration date may be set, or a restriction to particular users may be given, or the content may be locked behind payment requirements, or the augmented reality may be associated with a particular check-in or event. The user may willfully provide such origin information or it may be automatically collected by a communications device and associated with the augmented reality.
  • In S301, the created augmented reality, embedded or accompanying anchor data, and/or any additional origin information are transmitted by a communications device. The communications device may create the anchor data itself, such as positional information indicating placement of augmented reality information based on underlying identified real world media, or the anchor data may be created by a receiving host from a received composite augmented reality. As discussed, the anchor data may be inherently in the real world subject matter or an external piece of data that dictates how the augmented reality information is to be combined with the underlying real world subject matter. The transmission in S301 may be the counterpart to the receipt in S301 of the example method of FIGS. 6-7, and the information created and used in example methods herein may be the same as, and/or used in, methods of performing the created augmented reality on other devices in the example method of FIGS. 6-7. As such, the information transmitted in S301 may be a minimum level of data required to reproduce augmented reality aspects in separately-captured real-world media, allowing sharing and performance of the augmented reality in S440 without requiring large amounts of network resources by full transmission of the entire augmented reality.
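• As a rough illustration of the "minimum level of data" point above, a transmission in S301 might bundle a compact anchor fingerprint, the added elements and their placement, and optional origin restrictions into one small message, such as the hypothetical structure below. The field names and values are illustrative only and are not a format defined by the disclosure.

```python
import json

# Hypothetical minimal payload for S301: a compact anchor fingerprint plus
# placement of added elements, rather than the full composited augmented
# reality image or video.
payload = {
    "anchor": {
        "type": "image-fingerprint",
        "data": "base64-encoded 64x64 grayscale thumbnail or edge map",
    },
    "elements": [
        {"kind": "text", "value": "Mirage", "x": 0.5, "y": 0.08, "scale": 1.0},
        {"kind": "sticker", "id": "car-01", "x": 0.45, "y": 0.80, "scale": 0.6},
    ],
    "origin": {
        "geofence": {"lat": 37.7749, "lon": -122.4194, "radius_m": 150},
        "expires": "2017-12-31T23:59:59Z",
        "allowed_users": None,  # None = no user restriction
    },
}

message = json.dumps(payload)
print(f"{len(message.encode('utf-8'))} bytes to transmit")  # far smaller than a full video
```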
• In S450, the creating user may be notified of the completed transmission of the augmented reality information. Ecommerce options, such as payment for the transfer, may be completed. Similarly, a user may receive payment in S450 for a number of views or performances of the augmented reality by other users. Similarly, a user may be provided feedback, such as a rating, achievements, unlockable content, comments, replies, updated account information, notification of additional augmented reality added to a same anchor, and/or removal notice for augmented reality violating terms of use in S450. The user may also be provided a unique search term, hashtag, or other ID that allows the user to share the augmented reality. For example, a unique string may be associated with the information transmitted in S301, and even with the type of information and tools used to create the augmented reality in S440, and the user may share the string. When other users search for the string, they may be pointed to the anchor and/or any other information, such as a specific geofenced location, for discovering the anchor and performance of the associated augmented reality.
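• One hedged illustration of how such a unique search string could be derived is to hash the transmitted payload, as sketched below; the format mirrors the "#id-sf17-018924"-style example given later, but the function, the region tag, and the hashing choice are assumptions, not a scheme defined by the disclosure.

```python
import hashlib

def share_id(message: str, region: str = "sf17") -> str:
    """Illustrative shareable ID in the '#id-<region>-<digits>' style,
    derived from a hash of the payload transmitted in S301."""
    digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
    return f"#id-{region}-{int(digest[:8], 16) % 1_000_000:06d}"

print(share_id('{"anchor": "..."}'))  # e.g. "#id-sf17-482913"
```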
• A more specific example of implementation of the example method of FIG. 4 is discussed below. In an example operations sequence, here a screen flow diagram, of FIG. 5, an example method creates augmented reality as added graphics and other feedback to visual or video data at a particular location. For example, underlying real-world media 1, here a café or restaurant, is captured and/or viewed on a communications device through its display, such as by a user operating a mobile device like a smartphone, tablet, or wearable device with a camera. An application or program installed on the communications device may facilitate all features shown in the flow diagram of FIG. 5. The anchor data for the real-world media 1 may be distinctive pixel patterns or shapes made up of the same inherently within real-world media 1 of this example, such that receipt of the visual information on the communications device is receipt of anchor data in S401, shown by the first two screens in the flow diagram of FIG. 5. As discussed above, it is understood that anchor data may be separate information that defines whether and how added augmented elements are performed in the context of underlying real-world subject matter.
  • The communications device, or potentially a separate application host and/or remote server, analyzes the candidate image for useable anchor data in S410 and S420 in the example of FIG. 5. In identifying chair legs as the anchor data, the program determines that the area is insufficiently distinctive, such as lacking unique edges, shapes, pixel contrasts, etc. (“N” in S420). The user is then notified of the unsuitability in S421, as shown in FIG. 5, by a message prompting the user to seek a more recognizable or distinctive anchor. New anchor data is identified in the fourth screen in S401, including several frames with well-recognizable edges and shapes. Upon repeating the analysis in S410 and S420, the program determines that the area is sufficiently distinctive (“Y” in S420).
• In S430, a build GUI is opened, such as example embodiment GUI 300. As described above, example embodiment GUI 300 may include several tools for creating augmented reality on the underlying captured real-world visual scene, as set by anchor data therein. For example, as shown by the next three frames of S440, the user creates the augmented reality. First, a text tool is selected, allowing the user to type words, such as "Mirage," in any font, shape, size, color, animation, shadow, or other text effect through GUI 300. The user may move, align, adjust, and/or set movement of the wording based on the underlying real-world frame image, such presentation being stored as anchor data. For example, as shown in FIG. 5, the user has aligned the term "Mirage" to display at an upper edge of the frames.
• Second, an image or sticker icon may be selected, allowing the user to place a pre-set icon, image, or sticker on the underlying media. As shown in FIG. 5, for example, the user has selected a car sticker and placed it over the bottom, middle frame. Stickers available through an example embodiment method and/or GUI may be pre-set, standard emojis, symbols, and/or location-specific or time-specific captions, banners, icons, images, etc. Further, in the bottom middle screen, the user may select a camera capture tool to add custom-captured visual data to the real-world media. In the example of FIG. 5, the user has taken a picture of a coffee cup and extracted only the coffee cup, such as through an edge-detection or object-detection lasso, and placed the coffee cup below the car.
• Third, a drawing icon may be selected, allowing free-form drawing in any desired color, thickness, line break, etc. As shown in FIG. 5, for example, the user has drawn an additional spiked outline around the underlying frames. Each individual augmented reality aspect, including font, drawing, sticker, picture, etc., may be selectively undone, repositioned, reformatted, shrunk, etc. through proper selection. Similarly, although the added visual aspects in FIG. 5 all appear static, they may be animated, potentially in particular sequence with the underlying real-world media, stored as anchor information. Still further, pre-set, pre-recorded, and/or newly-recorded sounds may be added as augmented reality. Still further, tactile feedback, such as vibration, alert buzzes, shocks, etc., may be added as augmented reality information, as well as any other sensory addition to the underlying real-world media to create augmented reality in S440.
• Once the user is satisfied with the created augmented reality, it, along with separate or inherent anchor information, is transmitted to another party or user in S301, such as by tapping an accept or send button that causes the communications device to transmit the information, potentially over a network. The communications device hosting the GUI, or another device such as an application host, other users' communications devices, etc., may prepare and transmit all anchor data. Here, anchor data may include all image recognition of underlying real-world media 1 along with data of positional, animation, behavior, and/or other performance parameters of the augmented reality elements added with the tools in S440 in the context of the underlying real-world media (such as positioning at particular image points, tracking type or behavior, etc.). This anchor data may be, for example, a full, augmented reality image for full reproduction, or considerably smaller ID and vector data for reproduction of the augmented reality aspects in independently-captured real-world media. Thus, example methods may conserve bandwidth and/or storage limitations in computer networks by transmitting in S301 only the minimum augmented reality and anchor data necessary to recreate the augmented reality created in S440 at a separate instance with receipt of matching real-world media.
  • Although not shown in FIG. 5, the user may also set origin information such as GPS coordinates, event check-ins, geofenced areas, specific users, time limitations, etc. that control the distribution and performance of the augmented reality created in S440. This information is transmitted with the augmented reality and anchor information in S301 or at another time in association with the augmented reality information.
  • In S450, the user is shown a transmission confirmation, potentially with tactile and/or auditory feedback. The user is also provided with a unique identifier for the augmented reality, for example, a string like #id-sf17-018924 that can be searched by the user or other users to locate or even unlock the augmented reality. In S450, the user may also be debited an amount from a preset or associated account or funding source for the share. The user may also receive feedback of performing third-parties, comments, tips, replies, viewership or reach statistics, add-on augmented realities from other users on the same anchor, expiration, removal, etc. of the augmented reality. As a specific example, the user may unlock an achievement for the location of the created augmented reality (such as a “café badge”), or be provided with additional functionality or account status based on the augmented reality (such as always having a coffee cup image available in their tool palette).
  • Although the example of FIG. 5 uses visual underlying media, anchor data, and augmented reality information, it is understood that several different types of non-visual information may be used instead or in addition, including those types discussed in the example method of FIGS. 6-7. In this way, a user of example methods may create any type of augmented reality perceivable to users with various sharing and/or performance conditions that are entirely compatible with the various example methods of selective performance in the example method of FIGS. 6-7.
• FIG. 6 is an illustration of an example method of receiving and/or performing augmented reality information. As seen in FIG. 6, in S301, augmented reality information, including or alongside anchoring information and/or origin information, is received. The receipt in S301 is by any of a user's communications device, network operator, application host, and/or other computer-processor-based device capable of electronic receipt, processing, and storage of the information in S301. The information received in S301 is created by a party selecting desired additional media that is combined with underlying real information to create augmented reality. For example, the augmented reality GUI, system, and method from FIGS. 3-5 may provide a computer-processor-driven system for creating and transmitting the augmented reality information in S301. The augmented reality could be graphical, animated, audio-visual, auditory, haptic, etc., including graphics overlaid on public posters or personal photographs and videos, GUIs responsive to and overlaid on streaming audio-visuals, textual commentary on specific labels like license plates, UPC bar codes, or QR codes, tactile sensations such as haptic vibration or shock added to particular film scenes, or any other sensory media added to a real-life experience and reproducible with independent media. The augmented reality information received in S301 may be complete augmented reality, that is, additional media combined with underlying real media, or may be only the additional media to be added to underlying independent media to create augmented reality.
• In S301, origin information may be received, including user information, geographical information, encryption information, distribution information, routing information, timing/expiration information, event information, ecommerce or promoted status information, restrictions based on any of the foregoing, and metadata of the augmented reality and anchoring information.
  • Anchoring information received in S301 is data useable to trigger and position the display of augmented reality information in context with independently-captured media to perform augmented reality in a form similar to that input by the creating user of the augmented reality information. For example, anchoring information may be image, video, and/or sound information, for comparison with the independently-captured media to determine augmented reality triggering. As a further example, anchoring information may be mapping or orientation information for placement, sizing, and/or configuration of an augmented reality element on independently-captured media.
  • In S302, independent media is received, and the independent media is distinct from the augmented reality information. The receipt in S302 may be by a same processor or connected device to that receiving other information in example methods, configured for receipt, processing, and display of the independent media. Independent media received in S302 may be image, video, audio, vibratory and/or any other information captured and received at a communications device that can be analyzed and compared with anchor information. For example, independent media in S302 may be a live, unaltered audio-video stream 1 (FIG. 2) of surroundings of a mobile device recording the same with a camera 110 (FIG. 2). Or, for example, independent media in S302 may be a modified or enhanced photograph retrieved from memory 130 (FIG. 2).
  • In S303, additional limitation information may be received by a same or different connected communications device, network, etc. Additional limitation information may be location information of a device capturing the independent media, user information such as name, account number, user ID, etc., credentials such as passphrases, promotional codes, OAUTH codes, RealID information, etc., a local or absolute time and/or date of the user and/or when the independent media was captured, as well as any other limitation information such as subject matter screening or filters, age-based limitations, ecommerce authorizations, etc. Additional limitation information in S303 may be gathered automatically by a communications device, such as through GPS- or wifi-based location services, and/or manually input by a human user, such as a password input by a user.
  • Receipt of information in S301, S302, and S303 may occur in any order and between different or same users, communications devices, and/or other network actors. For example, using FIG. 1, any communications device 20 may receive any of the information in S301, S302, and S303, and any of network 10, application host 100, content providers 50 may receive the same or different information in S301, S302, and S303. Similarly, receipt of information in S301, S302, and S303 may be individually repeated, and/or receipt of limitation information in S303 and origin information in S301 may be omitted entirely. Moreover, information received in S301, S302, and S303 may be received in a single transmission, as a single piece of information, file, or media. For example, anchor information, augmented reality information, and user location information compatible with that received in S301 and S303 may all be present in, or determinable from, a single image or video stream, with any other mixture, combination, and/or derivability of information among that received in S301-S303 possible.
• Users may be alerted of the existence (or non-existence) of anchor and/or origin information for augmented reality in S305, to encourage or aid seeking out of independent media and other conditions that match the anchor and origin information. For example, a user may be guided with a map to particular geofenced locations, or may be given a prompt for a password, or may be alerted to a cost, etc. required for presentation of augmented reality. Or, for example, a user may search by a keyword, by a hashtag posted by the creating user, in a map, by inputting a zipcode, and/or by other metadata of the augmented reality, origin, and/or anchor information, and see matching and available augmented reality for such a search in S305. In this way, users may be aware of specific conditions required for performance of augmented reality and may act to best comply with those conditions, potentially saving computational resources in transmitting and analyzing information among S301, S302, and S303.
  • Results in S305 may be transaction-based, with a user having to make a payment or satisfy another ecommerce condition, such as having a premium or paying account status, credit card on file, a number of finding credits, etc., to be shown available augmented reality in S305. Similarly, results in S305 may be screened or available based on satisfaction of origin conditions in S320, such as having a particular user account name, location, phone type, relationship with a creating user, etc., that satisfies limitation information with any augmented reality that may be suggested or alertable in S305.
• As shown in S320-S351, one or more comparisons of various received information from S301, S302, and/or S303 ultimately determine if the augmented reality is performed. Because the recognition of an anchor in S340 may be resource intensive, especially among millions or more pieces of received augmented reality and independent media information, an origin match in S310 and S320 may be performed to screen or limit any anchor analysis in S330, and potentially transmission of information from S301 and/or S302, to only eligible users or independent media providers. Origin match in S310 and S320 may also be omitted or performed after any anchor analysis.
• In S310, the received user limitation information from S303 is compared for a match or indication of acceptability from the origin information received in S301. For example, origin information in S301 may be a defined, geofenced area or location check-in, user ID or social network connection, account type, time of day, password, payment status, subscription information, etc. that limits under what circumstances or to whom the augmented reality is performable. Limitation information in S303 may be comparable or corresponding to origin information, such as a detected or entered user location, confirmed check-in, provided user ID, account verification, detected time, entered password, payment verification, etc., potentially matching that set by the creator of the augmented reality subject matter received in S301. Where the origin and limitation information are comparable, a match may be determined in S310. In S320, if a match or other satisfying condition between the origin and limitation information is determined, the example method may proceed to performance of the augmented reality in S350. Where the limitation information is absent or does not match the origin information, no modification with augmented reality or other performance of augmented reality may be performed in S321. A user may be informed in S321 as to a non-matching condition and/or re-prompted in S305 to seek out or enter such information to facilitate a match in S310 and S320.
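• A minimal sketch of the S310/S320 screening for the geofence case follows, comparing a device location (limitation information from S303) against a geofence center and radius (origin information from S301) before any costlier anchor analysis in S330. The function and field names are hypothetical, chosen only for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def origin_matches(origin: dict, limitation: dict) -> bool:
    """S310/S320 screen: run before any expensive anchor matching in S330."""
    fence = origin.get("geofence")
    if fence is not None:
        loc = limitation.get("location")
        if loc is None:
            return False  # "N": no location supplied, proceed to S321
        if haversine_m(loc["lat"], loc["lon"], fence["lat"], fence["lon"]) > fence["radius_m"]:
            return False  # "N": outside the geofenced area
    allowed = origin.get("allowed_users")
    if allowed is not None and limitation.get("user_id") not in allowed:
        return False      # "N": user not permitted
    return True           # "Y": proceed toward S330 and S350
```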
  • Comparison in S310 may be performed at other points, with or without receipt of all or partial information from S301, S302, and S303. For example, comparison S310 may be performed upon receipt of only origin information in S301 and only limitation information in S303 alone. Where a match is determined in S320 on that information alone, additional information, such as augmented reality information in S301, independent media in S302, and/or other limitation information in S303 may then be received, so as to limit transmission requirements to situations more likely leading to performance in S350. Of course, as discussed at the outset, comparison in S310 may be performed iteratively, for potentially incremental or real-time receipts of information in S301, S302, and/or S303, so as to continuously monitor when user limitation information and origin information will match to proceed with further, more resource-intensive actions in example methods.
• In S330, the independent media received in S302 is compared with the anchor information received in S301 to determine if and how the augmented reality information can be applied to the independent media to perform augmented reality. The matching in S330 may be a direct comparison between the independent media and anchor data such as underlying real-life media components of the augmented reality information. For example, the anchor information may include an underlying poster, QR code, street view, artwork, product or label, logo, tonal sequence, and/or song to which additional media is added to create the augmented reality, and this underlying information may be compared against the independent media to determine a match and, further, where and how the additional media is added to recreate the augmented reality in S350. The comparison in S330 may use image processing and recognition techniques, including the algorithms identified in the Held reference, US Patent Publication 2012/0026354 to Hamada, published Feb. 2, 2012, and US Patent Publication 2016/0004934 to Ebata et al., published Jan. 7, 2016, these documents being incorporated herein by reference in their entireties.
  • The matching in S330 may use anchor data independent from the augmented reality or independent media. For example, the anchor data may be a separately-received file, or NFC tag, or facial profile, or sound, etc., that identifies or describes independent media eligible for augmented reality and/or describes how augmented reality information should be added to the same. In this example, the anchor data may still be comparable to the independent media to determine eligibility for augmented reality and, if so, in S350, the exact parameters for such display of the augmented reality information in combination with the independent media.
• Anchor data used in S330 may be fingerprint-type data, that is, smaller or less-resource-intensive information that is characteristic of and comparable to independent media to determine a match. Similarly, comparison methods may use simplified processes to readily identify matches among potentially large amounts of anchors and independent media, potentially using this fingerprint data. For example, in the case of image-based independent media and anchor data, the anchor data may be a smaller grayscale or edge-detected representation of the eligible independent media. The received independent media may also be reduced to a comparable grayscale or edge-detected representation. Such smaller and simplified images may be readily compared, such as by using, for example, sum of squared differences, sum of absolute differences, and/or zero mean normalized cross-correlation between the pixels in the images, to determine a match, or a level of agreement within a matching threshold, in S330. Other useable zero mean normalized cross-correlation techniques among images are described in the incorporated '934 and '354 patent publications.
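• For concreteness, the zero mean normalized cross-correlation mentioned above could be computed over small grayscale fingerprints as in the NumPy sketch below; the 0.8 matching threshold is an illustrative assumption, not a value taken from the disclosure or the incorporated references.

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero mean normalized cross-correlation between two equally-sized
    grayscale fingerprints (e.g. 64x64 downsampled anchor vs. captured frame)."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0
    return float(np.dot(a, b) / denom)

def anchor_matches(anchor_fp: np.ndarray, media_fp: np.ndarray, threshold: float = 0.8) -> bool:
    # S330/S340: a correlation above the (illustrative) threshold counts as a match.
    return zncc(anchor_fp, media_fp) >= threshold
```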
• Or, for example, independent media and anchor data may be transformed into comparable fingerprint-type data through a Fourier transform of a waveform signal of the anchor and independent media, highlights from the media and anchor frequency domain, detected time/space domain, other type of correlation, cepstrum or wavelet transform, and/or other detectable and comparable characteristics that may be created through image, audio, tactile, or other processing. Appropriate matching thresholds between the transformed anchor information and the transformed independent media can then be used to identify matches in S330. In this way, a very high volume of anchor data and independent media (potentially continuously captured and analyzed) can be compared, even by a simpler communications device or over a slower remote communicative connection, without requiring large computing resources or time to determine an acceptable match in S330.
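• For audio anchors, one way (among many) to produce the comparable fingerprint-type data described here is to keep only the dominant frequency bins of a Fourier transform, as in the hedged sketch below; the bin count and overlap threshold are arbitrary illustrations, not parameters from the disclosure.

```python
import numpy as np

def audio_fingerprint(samples: np.ndarray, top_k: int = 32) -> np.ndarray:
    """Reduce a waveform to the indices of its strongest frequency bins,
    a tiny stand-in for the Fourier-transform fingerprints described above."""
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
    return np.sort(np.argsort(spectrum)[-top_k:])

def audio_match(fp_a: np.ndarray, fp_b: np.ndarray, min_overlap: float = 0.7) -> bool:
    # Fraction of shared dominant bins serves as the matching threshold in S330.
    overlap = len(np.intersect1d(fp_a, fp_b)) / float(len(fp_a))
    return overlap >= min_overlap
```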
  • In S340, if the anchor does match, indicate, or can be found in the independent media, augmented reality is performed in S350. If the anchor does not match in S340, the independent media may be performed as is, or no action may be taken, but no augmented reality using the received information is performed in S341. A user may be notified of the non-match in S341 and/or prompted with information as to how to elicit a match such as through S305.
  • In S350, the augmented reality is performed using the received independent media, anchor information, and augmented reality information. For example, the augmented reality information may be only additional subject matter that is added to the independent media at position/orientation/timing/sizing dictated by the anchor information to replicate the augmented reality received in S301. Or, the augmented reality information may be full augmented reality having mixed underlying reality and added-in media performed to appear as the independent media with additional elements. The performance in S350 may be, for example, in real-time with capture of independent media in S302 so as to be perceived as a real-life altered reality in accordance with the augmented reality or may be performed at a later time, such as if the augmented reality is saved and/or transmitted elsewhere in response to a match in S340 and performed at that other instance.
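• A minimal sketch of the visual compositing case of S350 follows: the added element is pasted onto the independently captured frame at a position and size dictated by where the anchor was located in S330/S340. It assumes Pillow (PIL); the function signature and the "place just above the anchor" rule are illustrative assumptions.

```python
from PIL import Image

def perform_augmented_reality(frame: Image.Image,
                              element: Image.Image,
                              anchor_box: tuple[int, int, int, int]) -> Image.Image:
    """S350 for the visual case: overlay the added element onto the
    independently captured frame, positioned and sized by the anchor
    bounding box (left, top, right, bottom) located in S330/S340."""
    left, top, right, bottom = anchor_box
    width = max(right - left, 1)
    scaled = element.resize((width, int(element.height * width / element.width)))
    out = frame.copy()
    # Place the element just above the anchor (e.g. a dragon atop a stop sign),
    # using the element's alpha channel so the underlying frame shows through.
    out.paste(scaled, (left, max(top - scaled.height, 0)), scaled.convert("RGBA"))
    return out
```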
  • The performance in S350 may be executed by any output device connected to a processor creating, formatting, or transmitting the augmented reality created from the received independent media, anchor information, and/or augmented reality information. Such an output device must include at least one output capable of being sensed by human senses so as to perceive the augmented reality, such as a screen 180 of example embodiment communication device 20 of FIG. 2 outputting visual imagery as augmented reality 101, speaker, tactile device, buzzer, or even a taste or smell element.
  • Upon performance of the augmented reality in S350, a registration of the performance may be generated in S351, such as an alert to a creator or tracking system of a number of views or other performances. Similarly, in S351, a user may comment, reply, rate, report as violating terms of use or laws, request technical support for, etc. the performed augmented reality. Still further, in S351, if payment, registration, subscription status, or other ecommerce origin information was required for performance, those options may be achieved in S351 following performance. Still further in S351, augmented reality information and any other received information may be locally and/or remotely deleted for privacy concerns. Example methods may then be repeated or looped with the same received information or one or more new pieces of information in S301, S302, and S303.
  • Some specific examples of implementation of the example method of FIG. 6 are discussed below. In a first example of FIG. 7, an example method performs augmented reality as an added graphic to visual or video data at particular locations. For example, augmented reality 101 is created on a first communications device 20′ through its touchscreen 180, such as by a user adding a dragon image over a stop sign in the accurately-captured street scene 1 underlying augmented reality 101. First communications device 20′ has the underlying image location from a location service such as GPS native to the device or input by a user. First communications device 20′ further has anchor data of the stop sign for positioning, sizing, and/or orienting the added dragon of augmented reality 101 with respect to the stop sign. This creation of augmented reality, anchor, and origin information may be performed in the example method of FIGS. 3-5. All this data is transmitted and received over a network as augmented reality information, anchor information, and geofencing origin information in S301.
  • The street scene augmented reality 101 may be received by a processor in a network 10, application host 100, communications device processor 120, and/or any other processor. Although a wireless network icon is shown to one side of processor element 10/100/120, it is understood that a processor anywhere may perform the receipt in S301. For example, the processor may be in first communications device 20′ or second communication device 20″, with all data transmitted to the hosting or connected device for use in example methods.
  • As shown in FIG. 7, second communications device 20″ feeds video in real time picked up through a camera on screen 180. Second communications device 20″ and first communications device 20′ may have separate owners, operators, network affiliations, be operated at different times and dates, etc. Through an application installed on second user device 20″ or native video display functionality, prompting or alerting in S305 may show required origin information, in this example geofenced areas where second communications device 20″ must be present for performance of augmented reality 101.
• Second communications device 20″ captures independent media 100, including similar underlying street scenes as augmented reality 101. The independent media, here the live video captured and displayed on second communications device 20″, and limitation information, here the second user's location as determined by device 20″, are received by the processor in network 10 or application host 100 or even processor 120 (FIG. 2) in device 20″. Such receipt in S302 and S303 may occur continuously, at discrete intervals, at times instructed by a user, etc. Similarly, receipt in S302 and receipt of augmented reality and anchor information in S301, and really any handling of the same, may occur after the successful comparison of the location of the second user device 20″ with the matching geofenced area in the origin information.
• The processor compares the received location information of second communications device 20″ with the received origin information of the first communications device 20′ in S310. Upon determination that the location matches, or at least that second device 20″ is within a geofenced area received as origin information with augmented reality 101 (first "✓") from first device 20′, the processor then compares the received independent media, here live video information 100, with the anchor data, here the stop sign, to determine if the anchor is present in S330. Upon determination that the anchoring stop sign is present in the underlying video 100 (second "✓"), information of augmented reality 101—the dragon—is performed on screen 180 of second communications device 20″, in the manner indicated by the anchor data—extending from the stop sign. As seen in FIG. 7, augmented reality 101 is performed on second communications device 20″ in a visually similar style, with the added graphical dragon extending from the stop sign in the live captured street scene.
• The performing in S350 may be executed by the same processor receiving and comparing data in this example method. However, it is also understood that each receipt, comparison, authorization, and performance may be executed by discrete processors under potentially separate ownership and control, with access to the data required for each action in example methods. For example, the processor in an application host 100 may merely authorize the performance in S350, and a processor in second communications device 20″ may actually perform augmented reality 101 on second communications device 20″.
  • Information of augmented reality 101, anchor data, origin information, as well as information of independent media 100 and user location may be encrypted, compressed, or reduced to a fingerprint for easier transmission and comparison. For example, if the processor uses a zero mean normalized cross-correlation to identify an anchor in independent media 100, simplified, black-and-white only information of the anchor stop sign and independent media 100 may be transmitted and compared in S320 and S330. Similarly, if the processor uses a Fourier transform, frequency domain, cepstrum, etc. analysis, appropriately reduced fingerprints of anchor stop sign and independent media 100 may be generated, received, and compared in S320 and S330. Particularly in this example with geofenced origin and limitation information, a high tolerance or match threshold may be used between significantly compressed or simplified data for comparison, as even a rougher image match between anchor and underlying media is a reliable match given the location matching in S310 and S320.
  • Although in the example of FIG. 7, a stop sign is used as a visual piece of anchor information for comparison against video containing the stop sign to determine a match and placement of the augmented reality information, it is understood that other anchor information could be used. For example, lines on the road, edges of building tops, the light post, etc. could be used alone or together as anchor information checked against the independent media. Anchor information that is particularly distinctive, such as high-edge value or high-contrast objects, or anchor information that is particularly suited for the comparison in S330, such as objects with easy image processing signals for zero mean normalized cross-correlation or object with unique Fourier transforms, may be selected and transmitted in S301 for use in the comparison in S330. A user, communications device 20′, network processor, and/or other actor may select this best anchor information, and second communications device 20″ may display such differently-selected anchors as heads-up images in S305.
• Similarly, although in the example of FIG. 7 augmented reality information from S301 includes just a dragon image to be added to underlying actually-captured independent media 100 from S302—a street scene—to create augmented reality 101 in the performance of S350, it is understood that augmented reality 101 may include several more elements, up to the entire street scene with dragon. That is, a user may actually be viewing only received augmented reality information from S301 (the entire dragon plus street scene as a video or still image) and no elements of independent media from S302, which may be used only for triggering and/or positioning of augmented reality 101, in the performance of S350.
  • Although the independent media 100 of a street scene from second communications device 20″ appears very similar to the underlying scene in augmented reality 101, some tolerance may be permitted in example methods, such as different angles, device orientation, distances from the stop sign, vantage of the stop sign, etc. that may still trigger the performance of augmented reality 101. The different vantage, positioning, and/or appearance of the anchor stop sign may further dictate the sizing, positioning, orientation, and other parameters of the augmented dragon element so as to preserve a similar augmented reality 101. For example, if the stop sign is larger on second communications device 20″ due to a closer position, augmented reality 101 may be performed with a proportionally larger dragon. Such changes may be made in real time with modification of the independent media. For example, as second communications device 20″ approaches the stop sign and it becomes larger, the augmented reality 101 on a screen of device 20″ may increase the size of the dragon proportionally, while keeping the dragon positioned in a same relative position to the stop sign.
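• The proportional re-sizing described here reduces to a ratio between the anchor's apparent size when the augmented reality was authored and its apparent size in the live frame, as in the illustrative sketch below; the function and parameter names are hypothetical.

```python
def scaled_overlay_params(anchor_width_created: float,
                          anchor_width_live: float,
                          overlay_scale_created: float,
                          overlay_offset_created: tuple[float, float]):
    """Keep the dragon in the same relative position and proportion to the
    stop sign as the viewer moves: scale by the ratio of the anchor's
    apparent width in the live frame to its width at creation time."""
    ratio = anchor_width_live / anchor_width_created
    scale = overlay_scale_created * ratio
    offset = (overlay_offset_created[0] * ratio, overlay_offset_created[1] * ratio)
    return scale, offset

# Example: the stop sign appears twice as wide as when the AR was authored,
# so the dragon doubles in size while keeping the same relative offset.
print(scaled_overlay_params(120.0, 240.0, 1.0, (30.0, -80.0)))
```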
  • As examples of S351 in FIG. 7, upon performance in S350, the first communications device 20′ may be alerted of the performance, a number of performances, users' viewing the performance, receive payment for the performance, etc. Similarly, upon performance in S350, a user of the second communications device 20″ may reply, comment, rate, report, or otherwise give feedback on the augmented reality 101.
• Multiple pieces of augmented reality, potentially created from several different sources, may be applicable to a same anchor, and following performance of one in S350, a next augmented reality that still satisfies the origin and anchor requirements may be performed. A viewing order, or a selection of augmented realities, may be received before performance in S350 to control the same. Still further, the various augmented realities otherwise matching a same piece of independent media in S310-S340 could be performed together or simultaneously. Still further, ecommerce options may be completed upon the performance, such as debiting a user's account an agreed-upon amount or an amount specified in origin information, deducting a number of permitted views from a user's subscription, or invoicing the user, all in association with second communications device 20″ and/or any user account associated therewith.
  • The example of FIG. 7 can be iterated millions, or really any, times, with receipt in S301 occurring from numerous distinct users across the globe. Because independent media from S302 is compared in S330 against the information received in S301, from a potentially remote or distant user or at a vastly different point in time, this may be resource-intensive on any network, processor, or communications device executing all or part of example methods. The example of FIG. 7 improves resource usage by limiting the comparison in S330—the search for the stop sign anchor in the live video—to users already in the geofenced area specified by the origin information as determined by S310. In this way, in a universe of potentially many, many different pieces of augmented reality, example methods may save actual comparison in S330, and even receipt/transmission of augmented reality and anchor information in S301 and independent media in S302, for situations where independent media is most likely to result in a match in S330. Of course, additional origin limitations may be used to further screen methods. For example, a passcode or user ID may be entered into second communications device 20″ in S303 as additional user limitation information and compared against received origin information from S301 as another check before receiving or comparing independent media and anchor data.
• Another example uses a study group or book club. In this example, a leader or offering member takes a picture of a page of a textbook or novel on their communications device and adds notes to the margins, as well as highlighting the text, as augmentation of the image of the page. The augmentation may be executed through an application on the leader's communications device configured to execute at least parts of example methods. The creating leader then sets a list of other users, here, the other members of the group or club, as users with permission to perform the augmented reality, here, the notes added to the text page. This list of users may be identified by user ID in the application, phone number, social media profile, etc.; the list may also be identified by a sharing option whereby the augmented reality is shared through a social media platform or listserv to a specific group, such as followers, Facebook group members, an email chain, etc.
  • The application then transmits the augmented reality information along with anchor information and origin information over a network to an application host. In this example, the application sends the highlight and note visual information as the augmented reality information, the underlying page as the anchor information, and the invited/permitted users from the book club or study group as additional origin information. The anchor information may be an image of the underlying page itself, or fingerprint-type data of the page image that is smaller, along with positional information for the notes and highlighting with respect to the underlying page. Of course, the augmented reality, that is, the image of the page with the highlighting and notes may be used as both augmented reality information and anchor data, with the application or other processor extracting identifying features for performance of the augmented data from the page image. In another example using email or direct communications, the highlight and note visual information as augmented reality information and the underlying page as the anchor information may be sent directly to the study group member's emails as a file(s) to be performed on their devices.
• The application host may receive all the data from the creator/leader of the book club in S301. The data may further be transmitted to communications devices of only the identified users, which receive it as S301, depending on how data is to be distributed in example methods. The other identified user members of the book club may receive a notification in S305, such as through a same application installed on their communications devices, of available augmented reality, because their user name or other identifying information, such as phone number, user ID, group membership, etc., matches that specified by the creating leader. A notification may be displayed or otherwise communicated to these other members, such as through an image of the anchor data—the underlying page—or a description of the same, such as "notes available for augmented reality of page 87," through communications devices or otherwise in S305.
• The other members may then, through their communications devices, take a picture of, record, live stream, etc. the page in question through a camera/screen functionality, potentially through the application and/or a native function of the communications device. This independent media may be sent to the application host, which receives it in S302, potentially along with a user ID or other matching information to identify the user as limitation information in S303. It is possible that the receipt of the image on the user's communications device itself is the receipt in S302, and all other information and comparisons of example methods may be sent to, and performed locally on, that device. Similarly, the independent media, the page picture, may not be received in S302 until after the capturing user has been verified as an identified member of the book club in S310 and S320 "Y"—in order to conserve the amount of data needing to be received and compared.
  • The user member's captured page is then compared with the anchor data from the creating leader in S330, at the application host, on the user's communications device, or at some other computer processor properly configured, by using image recognition between the anchor and independent media. In S330 and S340, recognizing the same page as that augmented by the creating leader, the augmented reality is performed for the group member in S350. For example, the screen of the group member's device may display the underlying page with the notes and highlights added thereto in the same positioning, proportion, color, etc. as created by the leader. When the user turns the page or captures different media, the augmented reality may then be discontinued or removed in S341, as the anchor no longer matches the independent media in S340.
  • As seen, example methods may thus be used to share commentary, description, art, access, tagging, etc. in the same sensed context of underlying subject matter. The underlying subject matter may be unique and/or only at a specific location, or may be mass-replicated at several different locations, with each potentially triggering augmented reality. Creating and receiving users may limit augmented reality performance to specific users, locations, times of day, dates, group members, number of views, payment status, etc. in order to selectively share and control such content. Similarly, users may rate, reply to, report, share, tip, add view indicators, and/or comment on performed augmented reality to potentially guide creators toward better practices, avoid or remove harmful or illegal content, make others aware of particularly useful or interesting augmented realities, show support, etc.
  • A final specific implementation of example methods may use sound as well as visuals. A product supplier may publish a commercial advertisement for a particular product, and the commercial may be an audio-visual performance broadcast on TV, as web ads, etc. The supplier may provide to an application host augmented reality information and anchor information associated with the commercial advertisement in S301. In this example, the anchor information may be a unique sound sequence of the advertisement, and the augmented reality may be additional audio feedback in sync with the advertisement, such as vocally providing additional information about the product featured, making jokes about the advertisement in a self-deprecating manner, adding an audio track to the advertisement, providing humorous sound effects, etc. The augmented reality may also include visual information, such as further textual descriptions, web links, humorous imagery or cartoons of the product, etc., that are synched in time with the advertisement.
  • The product supplier may also furnish origin information, such as user demographic information and/or event limitation. For example, the supplier may desire the augmented reality to be performed only for users in a particular age range or home market. The supplier may also desire the augmented reality to be performed only in contemporaneous or authentic settings, such as a first run of a commercial or at a particular convention, and not during later reruns or after the event. Origin information supplied in S301 may include these restrictions, which can be compared against user type, location, time, an event being active, a user being checked-in to an event, etc.
  • The application host and/or a content provider controlling the same, and potentially a communications device application, may receive this augmented reality, anchor, and/or origin information from the product supplier in S301, and/or the application host may push it to individual communications devices in S301. The providing of augmented reality information and anchor information to communications devices from the application host may occur after a satisfactory comparison of origin and limitation information in S310 and S320. For example, the application host might only provide the augmented reality and/or anchor information after determining that a communications device is being operated at a time or at an active event when a triggering commercial is known to be live and in its first run. Similarly, the augmented reality and anchor information may be pushed to communications devices regardless, and this determination of limitation satisfying origin information in S310 and S320 may be executed at any time prior to actually performing the augmented reality.
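  • One way the S310/S320 comparison of limitation information against origin information could be expressed is sketched below; the particular origin fields (time window, market, active events) are assumed examples of the restrictions described above, not a fixed schema.

```python
# Check device-reported limitation information against supplier-provided
# origin information before performing the augmented reality (S310/S320).
from datetime import datetime, timezone

def origin_satisfied(origin, limitation):
    now = limitation.get("time") or datetime.now(timezone.utc)
    if "starts" in origin and now < origin["starts"]:
        return False                                     # before the first run
    if "ends" in origin and now > origin["ends"]:
        return False                                     # rerun or after the event
    if "markets" in origin and limitation.get("market") not in origin["markets"]:
        return False                                     # outside the home market
    if "event" in origin and origin["event"] not in limitation.get("active_events", []):
        return False                                     # user not checked in to the event
    return True
```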
  • The product supplier may pay the application host and/or developer to provide the augmented reality information to users experiencing the commercial; for example, payment may be arranged for each end user performing the augmented reality. Users of communications devices may then be prompted to activate their application or otherwise enable receipt of independent media during the advertisement in S305, such as by a pop-up or text alert that the advertisement features augmented reality. The advertisement itself may make users aware that augmented reality of the advertisement is available for performance.
  • As the commercial plays, such as on their TV, radio, or communications device, the user may activate or present their communications device to receive the audio of the advertisement in S302, which is compared against the anchor data of the unique sounds of the advertisement in S330. Such comparison may be made by comparing the received sound waves with those present in the anchor data within a matching threshold, by comparing the frequency domains of the audio signals, and/or by any other type of audio identification. Upon detecting the audio of the commercial in S340, the user's communications device may then play back the augmented reality information (additional audio commentary and/or visual sequences at times and speeds in sync with the advertisement) on its speaker and/or screen in S350. The combined advertisement with the additional audio and/or visual is an augmented reality that may provide additional information or user experience. The product developer may then be debited or charged an amount for each performance of the augmented reality in S351. As such, it is possible to use example methods in the context of any perceivable augmented reality, not just visual information.
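  • A frequency-domain comparison of the kind mentioned above might be sketched as follows; a deployed system would more likely use a robust audio-fingerprinting scheme, and the 0.6 similarity threshold is an assumed tuning value.

```python
# Score captured audio against the anchor's unique sound sequence by comparing
# magnitude spectra (S330); exceeding the threshold triggers performance (S340/S350).
import numpy as np

def audio_anchor_score(anchor_pcm, captured_pcm):
    n = min(len(anchor_pcm), len(captured_pcm))
    a = np.abs(np.fft.rfft(np.asarray(anchor_pcm[:n], dtype=float)))
    b = np.abs(np.fft.rfft(np.asarray(captured_pcm[:n], dtype=float)))
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.dot(a, b) / len(a))   # normalized spectral similarity

def should_perform(anchor_pcm, captured_pcm, threshold=0.6):
    return audio_anchor_score(anchor_pcm, captured_pcm) >= threshold
```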
  • As seen, example methods may thus be used to create commentary, description, art, access, tagging, instruction, etc. in the same sensed context of underlying subject matter. The underlying subject matter may be unique and/or only at a specific location, or may be mass-replicated at several different locations, with each having associated augmented reality. Creating users may limit augmented reality performance to specific users, locations, times of day, dates, group members, number of views, payment status, etc. in order to selectively share and control such content. Similarly, users may rate, reply to, report, share, tip, add view indicators, and/or comment on performed augmented reality to potentially guide creators toward better practices, avoid or remove harmful or illegal content, make others aware of particularly useful or interesting augmented realities, show support, etc.
  • Actions throughout example methods may include user authentication, data verification, privacy controls, and/or content screening. For example, in example methods, users may never be provided with identifying information of another, such that a party creating augmented reality content and/or a party consuming the same may remain anonymous to the other. For example, data may be encrypted and not retained at one or all points in example methods, such that there may be no discoverable record of augmented reality, independent media, origin and/or limitation information in regard to such content, existence, performance, etc. For example, a third party or application host may sample or review some or all augmented reality information for potentially harmful, wasteful, or illegal content and remove the same, as well as monitor user feedback to identify such content. For example, a monitoring or feedback functionality may process augmented reality information to identify AR crimes and other problems identified in Lemley, “Law, Virtual Reality, and Augmented Reality,” Mar. 15, 2017 (available at ssrn.com/abstract=2933867), incorporated herein by reference in its entirety, and tag or remove augmented reality information containing the same.
  • As to verification, example methods may take advantage of a user login model requiring user authentication with a password over a secured connection and/or using operating-system-native security control and verification on communications devices, to ensure only verified, permitted human users access example methods and potentially user accounts. Example methods may also require payment verification, such as credit card or bank account authentication, to verify identity and/or ability to pay before allowing users to participate in creating, transmitting, and/or receiving augmented reality in example methods. Example methods may further use location and input verification available through operating system controls or other network functionalities, potentially in combination with user feedback, to prevent or punish location spoofing, user account compromising, bot access, and/or harassment or waste in example methods.
  • Some example methods being described here, it is understood that one or more example methods may be used in combination and/or repetitively to produce multiple options and functionalities for users of communications devices. Example methods may be performed through proper computer programming or hardware configuring of networks and communications devices to receive augmented reality, origin, and limitation information and act in accordance with example methods, at any number of different processor-based devices that are communicatively connected. Similarly, example methods may be embodied on non-transitory computer-readable media that directly instruct computer processors to execute example methods and/or, through installation in memory operable in conjunction with a processor and user interface, configure general-purpose computers having the same into specific communications machines that execute example methods.
  • Example methods and embodiments thus being described, it will be appreciated by one skilled in the art that example embodiments may be varied through routine experimentation and without further inventive activity. For example, although a direct image analysis may be used to determine useable anchors in visual real-world media to be augmented, it is understood that vastly more complex analysis and input may be used to determine anchors in or alongside auditory, video, or other perceivable media. Variations are not to be regarded as a departure from the spirit and scope of the exemplary embodiments, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (20)

What is claimed is:
1. An application host communicatively connected with a first communications device and a second communications device separately-operated from the first communications device, to create, transmit, and perform augmented reality, the host comprising:
an application server including a database storing user information for the first and the second communications devices;
a computer processor configured to,
authenticate the first and the second communications devices with the user information,
receive anchor data and augmented reality information from the first communications device, wherein the anchor data determines whether and how augmented reality information is performed in combination with real world subject matter received by the first communications device to create augmented reality, and
transmit the anchor data and the augmented reality information to the second communications device such that the augmented reality can be recreated by the second communications device separately receiving the real world subject matter.
2. The host of claim 1, wherein the real world subject matter and the augmented reality information include visual information, wherein the anchor data identifies aspects of the real world subject matter that trigger performance of the augmented reality information as a visual component of the real world subject matter to recreate the augmented reality.
3. The host of claim 1, wherein the computer processor is further configured to,
receive the real world subject matter,
determine whether the real world subject matter has a threshold distinctiveness to be uniquely identified by the anchor data, and
if the real world subject matter does not have the threshold distinctiveness, at least one of, prompt receipt of different real world subject matter by the first communications device, and suggest other real world subject matter that has the threshold distinctiveness to the first communications device.
4. The host of claim 3, wherein the other real world subject matter was received from another communications device and has previously been determined to have the threshold distinctiveness.
5. The host of claim 1, wherein the real world subject matter is a first visual scene captured by a camera of the first communications device, the anchor information is a position and orientation in the first visual scene, and the augmented reality information is three-dimensional in the real world subject matter at the position and the orientation.
6. The host of claim 1, wherein the computer processor is further configured to,
receive origin information from the first communications device that limits at least one of a time and a location where the augmented reality may be performed.
7. The host of claim 6, wherein the computer processor is further configured to,
determine whether the second communications device satisfies the origin information, wherein the transmitting the anchor data and the augmented reality information is performed only if the determining determines that the second communication device satisfies the origin information.
8. The host of claim 7, wherein the computer processor is further configured to,
receive a location from the second communications device, wherein the location is generated natively by the second communications device without user input, and wherein the origin information includes a geographic area, and wherein the second communications device satisfies the origin information only if the location is within the geographic area.
9. The host of claim 1, wherein the computer processor is further configured to,
receive a registration from the second communications device confirming the augmented reality was performed on the second communications device,
delete the augmented reality information, and
instruct the second communications device to delete the augmented reality information.
10. A user communications device comprising:
a display;
a camera;
a communications port; and
a computer processor configured to,
receive first augmented reality information and first anchor information for the augmented reality information from a separately-operated source via the communications port,
capture first real world media from at least the camera that is independent of the first augmented reality information and the first anchor information at a different time from creation of the first augmented reality information,
compare the first anchor information with the first real world media, and
if the first real world media is identified by the first anchor information based on the comparing, perform, on the display, the first augmented reality information in the first real world media in a manner defined by the first anchor information.
11. The device of claim 10, wherein the first real world subject matter and the first augmented reality information include visual information, wherein the first anchor data identifies aspects of the first real world subject matter that trigger the performing on the display, and wherein the performing is not executed if the first anchor information does not identify aspects of the first real world subject matter.
12. The device of claim 10, wherein the computer processor is further configured to,
receive origin information via the communications port that limits at least one of a time and a location where the performing may be executed.
13. The device of claim 12, further comprising:
a GPS antenna, wherein the computer processor is further configured to determine whether the device satisfies the location with the GPS antenna, wherein the transmitting the anchor data and the augmented reality information is performed only if the determining determines that the device satisfies the location.
14. The user communications device of claim 10, wherein the computer processor is further configured to,
capture second real world subject matter from at least the camera,
receive second augmented reality information from a human user in combination with the second real world subject matter as augmented reality,
create second anchor data that determines whether and how the second augmented reality information is performed with the second real world subject matter to recreate the augmented reality on a different device receiving similar real world subject matter, and
transmit the second anchor data and the second augmented reality information such that the augmented reality can be recreated by the different device using the second anchor data and the second augmented reality information upon receipt of the similar real world subject matter.
15. The user communications device of claim 14, wherein the computer processor is further configured to,
transmit origin information via the communications port that limits at least one of a time and a location where the augmented reality can be recreated.
16. The user communications device of claim 10, wherein the computer processor is further configured to,
transmit credentials for authenticating a user of the user communications device.
17. The user communications device of claim 16, wherein the separately-operated source is an application host configured to receive and transmit the first augmented reality information and first anchor information from a different communications device authenticated with a different user.
18. The user communications device of claim 10, wherein the computer processor is further configured to,
transmit a registration confirming the performing on the device.
19. The user communications device of claim 10, wherein the device is at least one of a mobile phone and a wearable communications device.
20. A non-transitory computer readable medium storing functional data structures that when executed by a computer processor in a user communications device cause the computer processor to:
receive first augmented reality information and first anchor information for the augmented reality information from a separately-operated source via a communications port;
capture first real world media from at least a camera that is independent of the first augmented reality information and the first anchor information at a different time from creation of the first augmented reality information;
compare the first anchor information with the first real world media; and
if the first real world media is identified by the first anchor information based on the comparing, perform, on the display, the first augmented reality information in the first real world media in a manner defined by the first anchor information.
US15/696,157 2017-04-07 2017-09-05 Systems and methods for creating, sharing, and performing augmented reality Abandoned US20180293771A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/696,157 US20180293771A1 (en) 2017-04-07 2017-09-05 Systems and methods for creating, sharing, and performing augmented reality

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201715482670A 2017-04-07 2017-04-07
US15/482,644 US9754397B1 (en) 2017-04-07 2017-04-07 Systems and methods for contextual augmented reality sharing and performance
US15/696,157 US20180293771A1 (en) 2017-04-07 2017-09-05 Systems and methods for creating, sharing, and performing augmented reality

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/482,644 Continuation US9754397B1 (en) 2017-04-07 2017-04-07 Systems and methods for contextual augmented reality sharing and performance

Publications (1)

Publication Number Publication Date
US20180293771A1 true US20180293771A1 (en) 2018-10-11

Family

ID=59702241

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/482,644 Expired - Fee Related US9754397B1 (en) 2017-04-07 2017-04-07 Systems and methods for contextual augmented reality sharing and performance
US15/696,157 Abandoned US20180293771A1 (en) 2017-04-07 2017-09-05 Systems and methods for creating, sharing, and performing augmented reality

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/482,644 Expired - Fee Related US9754397B1 (en) 2017-04-07 2017-04-07 Systems and methods for contextual augmented reality sharing and performance

Country Status (1)

Country Link
US (2) US9754397B1 (en)

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180130244A1 (en) * 2016-01-18 2018-05-10 Tencent Technology (Shenzhen) Company Limited Reality-augmented information display method and apparatus
CN110418185A (en) * 2019-07-22 2019-11-05 广州市天正科技有限公司 The localization method and its system of anchor point in a kind of augmented reality video pictures
WO2019217443A1 (en) * 2018-05-07 2019-11-14 Google Llc Systems and methods for anchoring virtual objects to physical locations
US20190354235A1 (en) * 2018-05-21 2019-11-21 Motorola Mobility Llc Methods and Systems for Augmenting Images in an Electronic Device
US10593084B2 (en) * 2016-08-01 2020-03-17 Facebook, Inc. Systems and methods for content interaction
US20200143024A1 (en) * 2018-11-02 2020-05-07 Jpmorgan Chase Bank, N.A. Methods for augmented reality data decryption and devices thereof
US11057682B2 (en) 2019-03-24 2021-07-06 Apple Inc. User interfaces including selectable representations of content items
US11069091B2 (en) 2019-08-19 2021-07-20 Patrick S. Piemonte Systems and methods for presentation of and interaction with immersive content
US11070889B2 (en) 2012-12-10 2021-07-20 Apple Inc. Channel bar user interface
US11194546B2 (en) 2012-12-31 2021-12-07 Apple Inc. Multi-user TV user interface
US11222478B1 (en) 2020-04-10 2022-01-11 Design Interactive, Inc. System and method for automated transformation of multimedia content into a unitary augmented reality module
US11232635B2 (en) * 2018-10-05 2022-01-25 Magic Leap, Inc. Rendering location specific virtual content in any location
US11245967B2 (en) 2012-12-13 2022-02-08 Apple Inc. TV side bar user interface
US11257294B2 (en) 2019-10-15 2022-02-22 Magic Leap, Inc. Cross reality system supporting multiple device types
US11290762B2 (en) 2012-11-27 2022-03-29 Apple Inc. Agnostic media delivery system
US11297392B2 (en) 2012-12-18 2022-04-05 Apple Inc. Devices and method for providing remote control hints on a display
US11386629B2 (en) 2018-08-13 2022-07-12 Magic Leap, Inc. Cross reality system
US11386627B2 (en) 2019-11-12 2022-07-12 Magic Leap, Inc. Cross reality system with localization service and shared location-based content
US11410395B2 (en) 2020-02-13 2022-08-09 Magic Leap, Inc. Cross reality system with accurate shared maps
US11430216B2 (en) * 2018-10-22 2022-08-30 Hewlett-Packard Development Company, L.P. Displaying data related to objects in images
US20220279048A1 (en) * 2021-03-01 2022-09-01 Daniel Goddard Augmented Reality Positioning and Matching System
US11461397B2 (en) 2014-06-24 2022-10-04 Apple Inc. Column interface for navigating in a user interface
US11467726B2 (en) 2019-03-24 2022-10-11 Apple Inc. User interfaces for viewing and accessing content on an electronic device
US11520858B2 (en) 2016-06-12 2022-12-06 Apple Inc. Device-level authorization for viewing content
US11520467B2 (en) 2014-06-24 2022-12-06 Apple Inc. Input device and user interface interactions
US11543938B2 (en) 2016-06-12 2023-01-03 Apple Inc. Identifying applications on which content is available
US11551430B2 (en) 2020-02-26 2023-01-10 Magic Leap, Inc. Cross reality system with fast localization
US11562542B2 (en) 2019-12-09 2023-01-24 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11562525B2 (en) 2020-02-13 2023-01-24 Magic Leap, Inc. Cross reality system with map processing using multi-resolution frame descriptors
US11568605B2 (en) 2019-10-15 2023-01-31 Magic Leap, Inc. Cross reality system with localization service
US11582517B2 (en) 2018-06-03 2023-02-14 Apple Inc. Setup procedures for an electronic device
US11609678B2 (en) 2016-10-26 2023-03-21 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device
US11632679B2 (en) 2019-10-15 2023-04-18 Magic Leap, Inc. Cross reality system with wireless fingerprints
US20230186573A1 (en) * 2020-03-06 2023-06-15 Sandvik Ltd Computer enhanced maintenance system
US11683565B2 (en) 2019-03-24 2023-06-20 Apple Inc. User interfaces for interacting with channels that provide content that plays in a media browsing application
US11720229B2 (en) 2020-12-07 2023-08-08 Apple Inc. User interfaces for browsing and presenting content
US11797606B2 (en) 2019-05-31 2023-10-24 Apple Inc. User interfaces for a podcast browsing and playback application
US11808941B2 (en) * 2018-11-30 2023-11-07 Google Llc Augmented image generation using virtual content from wearable heads up display
US11830149B2 (en) 2020-02-13 2023-11-28 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization
US11843838B2 (en) 2020-03-24 2023-12-12 Apple Inc. User interfaces for accessing episodes of a content series
US11863837B2 (en) * 2019-05-31 2024-01-02 Apple Inc. Notification of augmented reality content on an electronic device
US11899895B2 (en) 2020-06-21 2024-02-13 Apple Inc. User interfaces for setting up an electronic device
US11900547B2 (en) 2020-04-29 2024-02-13 Magic Leap, Inc. Cross reality system for large scale environments
US11934640B2 (en) 2021-01-29 2024-03-19 Apple Inc. User interfaces for record labels
USD1019684S1 (en) * 2021-09-14 2024-03-26 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with a graphical user interface
USD1019678S1 (en) * 2021-07-30 2024-03-26 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with a graphical user interface
USD1021926S1 (en) * 2021-08-12 2024-04-09 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with a graphical user interface
USD1021935S1 (en) * 2021-05-14 2024-04-09 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with an animated graphical user interface
USD1021927S1 (en) * 2021-09-13 2024-04-09 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with an animated graphical user interface
US11962836B2 (en) 2019-03-24 2024-04-16 Apple Inc. User interfaces for a media browsing application
US11978159B2 (en) 2018-08-13 2024-05-07 Magic Leap, Inc. Cross reality system
US12067772B2 (en) 2019-05-05 2024-08-20 Google Llc Methods and apparatus for venue based augmented reality
US12100108B2 (en) 2020-10-29 2024-09-24 Magic Leap, Inc. Cross reality system with quality information about persistent coordinate frames

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180131856A (en) * 2017-06-01 2018-12-11 에스케이플래닛 주식회사 Method for providing of information about delivering products and apparatus terefor
US10504288B2 (en) 2018-04-17 2019-12-10 Patrick Piemonte & Ryan Staake Systems and methods for shared creation of augmented reality
CN112106103B (en) 2018-05-11 2024-09-06 莱斯特有限公司 System and method for determining approximate transformations between coordinate systems
CN112513941A (en) 2018-08-02 2021-03-16 莱斯特有限公司 System and method for generating a set of approximately coordinated regional maps
US11212331B1 (en) * 2018-10-26 2021-12-28 Snap Inc. Triggering changes to real-time special effects included in a live streaming video
US20200184737A1 (en) * 2018-12-05 2020-06-11 Xerox Corporation Environment blended packaging
US11090561B2 (en) 2019-02-15 2021-08-17 Microsoft Technology Licensing, Llc Aligning location for a shared augmented reality experience
US11763503B2 (en) * 2019-02-25 2023-09-19 Life Impact Solutions Media alteration based on variable geolocation metadata
US11097194B2 (en) 2019-05-16 2021-08-24 Microsoft Technology Licensing, Llc Shared augmented reality game within a shared coordinate space
US11500226B1 (en) * 2019-09-26 2022-11-15 Scott Phillip Muske Viewing area management for smart glasses
CN113467603B (en) * 2020-03-31 2024-03-08 抖音视界有限公司 Audio processing method and device, readable medium and electronic equipment
CN111722871B (en) * 2020-06-17 2023-06-20 抖音视界有限公司 Information stream anchor point processing method and device, electronic equipment and computer storage medium
CN115803779A (en) * 2020-06-29 2023-03-14 斯纳普公司 Analyzing augmented reality content usage data
US11587316B2 (en) 2021-06-11 2023-02-21 Kyndryl, Inc. Segmenting visual surrounding to create template for user experience
US11886767B2 (en) 2022-06-17 2024-01-30 T-Mobile Usa, Inc. Enable interaction between a user and an agent of a 5G wireless telecommunication network using augmented reality glasses

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8810599B1 (en) * 2010-11-02 2014-08-19 Google Inc. Image recognition in an augmented reality application
US20170046878A1 (en) * 2015-08-13 2017-02-16 Revistor LLC Augmented reality mobile application
US20170124713A1 (en) * 2015-10-30 2017-05-04 Snapchat, Inc. Image based tracking in augmented reality systems

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7391424B2 (en) * 2003-08-15 2008-06-24 Werner Gerhard Lonsing Method and apparatus for producing composite images which contain virtual objects
WO2010116885A1 (en) 2009-04-06 2010-10-14 日本電気株式会社 Data processing device, image matching method, program, and image matching system
US8839121B2 (en) * 2009-05-06 2014-09-16 Joseph Bertolami Systems and methods for unifying coordinate systems in augmented reality applications
US8922718B2 (en) * 2009-10-21 2014-12-30 Disney Enterprises, Inc. Key generation through spatial detection of dynamic objects
TWI482108B (en) * 2011-12-29 2015-04-21 Univ Nat Taiwan To bring virtual social networks into real-life social systems and methods
US20130178257A1 (en) * 2012-01-06 2013-07-11 Augaroo, Inc. System and method for interacting with virtual objects in augmented realities
IN2015KN00682A (en) * 2012-09-03 2015-07-17 Sensomotoric Instr Ges Für Innovative Sensorik Mbh
JP6063315B2 (en) 2013-03-26 2017-01-18 富士フイルム株式会社 Authenticity determination system, feature point registration apparatus and operation control method thereof, and collation determination apparatus and operation control method thereof
US9129430B2 (en) * 2013-06-25 2015-09-08 Microsoft Technology Licensing, Llc Indicating out-of-view augmented reality images
US20150046284A1 (en) * 2013-08-12 2015-02-12 Airvirtise Method of Using an Augmented Reality Device
JP6192483B2 (en) * 2013-10-18 2017-09-06 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
KR102206060B1 (en) * 2013-11-19 2021-01-21 삼성전자주식회사 Effect display method of electronic apparatus and electronic appparatus thereof
US20150302650A1 (en) * 2014-04-16 2015-10-22 Hazem M. Abdelmoati Methods and Systems for Providing Procedures in Real-Time
US10339701B2 (en) 2014-05-13 2019-07-02 Pcp Vr Inc. Method, system and apparatus for generation and playback of virtual reality multimedia
US10056054B2 (en) * 2014-07-03 2018-08-21 Federico Fraccaroli Method, system, and apparatus for optimising the augmentation of radio emissions
US20160112479A1 (en) * 2014-10-16 2016-04-21 Wipro Limited System and method for distributed augmented reality
TWI540522B (en) * 2015-02-26 2016-07-01 宅妝股份有限公司 Virtual shopping system and method utilizing virtual reality and augmented reality technology
JP6421670B2 (en) * 2015-03-26 2018-11-14 富士通株式会社 Display control method, display control program, and information processing apparatus
US20170084082A1 (en) * 2015-09-17 2017-03-23 HuntAR Corporation Systems and methods for providing an augmented reality experience

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8810599B1 (en) * 2010-11-02 2014-08-19 Google Inc. Image recognition in an augmented reality application
US20170046878A1 (en) * 2015-08-13 2017-02-16 Revistor LLC Augmented reality mobile application
US20170124713A1 (en) * 2015-10-30 2017-05-04 Snapchat, Inc. Image based tracking in augmented reality systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HELD, "Learning to Track at 100 FPS with Deep Regression Networks," 2016, Stanford University (available at arxiv.org/abs/1604.01802) *

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11290762B2 (en) 2012-11-27 2022-03-29 Apple Inc. Agnostic media delivery system
US11070889B2 (en) 2012-12-10 2021-07-20 Apple Inc. Channel bar user interface
US11317161B2 (en) 2012-12-13 2022-04-26 Apple Inc. TV side bar user interface
US11245967B2 (en) 2012-12-13 2022-02-08 Apple Inc. TV side bar user interface
US11297392B2 (en) 2012-12-18 2022-04-05 Apple Inc. Devices and method for providing remote control hints on a display
US11822858B2 (en) 2012-12-31 2023-11-21 Apple Inc. Multi-user TV user interface
US11194546B2 (en) 2012-12-31 2021-12-07 Apple Inc. Multi-user TV user interface
US11520467B2 (en) 2014-06-24 2022-12-06 Apple Inc. Input device and user interface interactions
US12086186B2 (en) 2014-06-24 2024-09-10 Apple Inc. Interactive interface for navigating in a user interface associated with a series of content
US11461397B2 (en) 2014-06-24 2022-10-04 Apple Inc. Column interface for navigating in a user interface
US10475224B2 (en) * 2016-01-18 2019-11-12 Tencent Technology (Shenzhen) Company Limited Reality-augmented information display method and apparatus
US20180130244A1 (en) * 2016-01-18 2018-05-10 Tencent Technology (Shenzhen) Company Limited Reality-augmented information display method and apparatus
US11520858B2 (en) 2016-06-12 2022-12-06 Apple Inc. Device-level authorization for viewing content
US11543938B2 (en) 2016-06-12 2023-01-03 Apple Inc. Identifying applications on which content is available
US10593084B2 (en) * 2016-08-01 2020-03-17 Facebook, Inc. Systems and methods for content interaction
US10600220B2 (en) 2016-08-01 2020-03-24 Facebook, Inc. Systems and methods for content interaction
US11609678B2 (en) 2016-10-26 2023-03-21 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device
US11966560B2 (en) 2016-10-26 2024-04-23 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device
WO2019217443A1 (en) * 2018-05-07 2019-11-14 Google Llc Systems and methods for anchoring virtual objects to physical locations
US10937249B2 (en) * 2018-05-07 2021-03-02 Google Llc Systems and methods for anchoring virtual objects to physical locations
US20190354235A1 (en) * 2018-05-21 2019-11-21 Motorola Mobility Llc Methods and Systems for Augmenting Images in an Electronic Device
US10845921B2 (en) * 2018-05-21 2020-11-24 Motorola Mobility Llc Methods and systems for augmenting images in an electronic device
US11582517B2 (en) 2018-06-03 2023-02-14 Apple Inc. Setup procedures for an electronic device
US11386629B2 (en) 2018-08-13 2022-07-12 Magic Leap, Inc. Cross reality system
US11978159B2 (en) 2018-08-13 2024-05-07 Magic Leap, Inc. Cross reality system
US11232635B2 (en) * 2018-10-05 2022-01-25 Magic Leap, Inc. Rendering location specific virtual content in any location
US11789524B2 (en) 2018-10-05 2023-10-17 Magic Leap, Inc. Rendering location specific virtual content in any location
US11430216B2 (en) * 2018-10-22 2022-08-30 Hewlett-Packard Development Company, L.P. Displaying data related to objects in images
US11704395B2 (en) * 2018-11-02 2023-07-18 Jpmorgan Chase Bank, N.A. Methods for augmented reality data decryption and devices thereof
US20200143024A1 (en) * 2018-11-02 2020-05-07 Jpmorgan Chase Bank, N.A. Methods for augmented reality data decryption and devices thereof
US11808941B2 (en) * 2018-11-30 2023-11-07 Google Llc Augmented image generation using virtual content from wearable heads up display
US12008232B2 (en) 2019-03-24 2024-06-11 Apple Inc. User interfaces for viewing and accessing content on an electronic device
US11962836B2 (en) 2019-03-24 2024-04-16 Apple Inc. User interfaces for a media browsing application
US11467726B2 (en) 2019-03-24 2022-10-11 Apple Inc. User interfaces for viewing and accessing content on an electronic device
US11750888B2 (en) 2019-03-24 2023-09-05 Apple Inc. User interfaces including selectable representations of content items
US11057682B2 (en) 2019-03-24 2021-07-06 Apple Inc. User interfaces including selectable representations of content items
US11683565B2 (en) 2019-03-24 2023-06-20 Apple Inc. User interfaces for interacting with channels that provide content that plays in a media browsing application
US11445263B2 (en) 2019-03-24 2022-09-13 Apple Inc. User interfaces including selectable representations of content items
US12067772B2 (en) 2019-05-05 2024-08-20 Google Llc Methods and apparatus for venue based augmented reality
US11863837B2 (en) * 2019-05-31 2024-01-02 Apple Inc. Notification of augmented reality content on an electronic device
US11797606B2 (en) 2019-05-31 2023-10-24 Apple Inc. User interfaces for a podcast browsing and playback application
CN110418185A (en) * 2019-07-22 2019-11-05 广州市天正科技有限公司 The localization method and its system of anchor point in a kind of augmented reality video pictures
US11069091B2 (en) 2019-08-19 2021-07-20 Patrick S. Piemonte Systems and methods for presentation of and interaction with immersive content
US11568605B2 (en) 2019-10-15 2023-01-31 Magic Leap, Inc. Cross reality system with localization service
US11257294B2 (en) 2019-10-15 2022-02-22 Magic Leap, Inc. Cross reality system supporting multiple device types
US11632679B2 (en) 2019-10-15 2023-04-18 Magic Leap, Inc. Cross reality system with wireless fingerprints
US11995782B2 (en) 2019-10-15 2024-05-28 Magic Leap, Inc. Cross reality system with localization service
US11869158B2 (en) 2019-11-12 2024-01-09 Magic Leap, Inc. Cross reality system with localization service and shared location-based content
US11386627B2 (en) 2019-11-12 2022-07-12 Magic Leap, Inc. Cross reality system with localization service and shared location-based content
US12067687B2 (en) 2019-12-09 2024-08-20 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11562542B2 (en) 2019-12-09 2023-01-24 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11748963B2 (en) 2019-12-09 2023-09-05 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11967020B2 (en) 2020-02-13 2024-04-23 Magic Leap, Inc. Cross reality system with map processing using multi-resolution frame descriptors
US11830149B2 (en) 2020-02-13 2023-11-28 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization
US11790619B2 (en) 2020-02-13 2023-10-17 Magic Leap, Inc. Cross reality system with accurate shared maps
US11562525B2 (en) 2020-02-13 2023-01-24 Magic Leap, Inc. Cross reality system with map processing using multi-resolution frame descriptors
US11410395B2 (en) 2020-02-13 2022-08-09 Magic Leap, Inc. Cross reality system with accurate shared maps
US11551430B2 (en) 2020-02-26 2023-01-10 Magic Leap, Inc. Cross reality system with fast localization
US20230186573A1 (en) * 2020-03-06 2023-06-15 Sandvik Ltd Computer enhanced maintenance system
US11843838B2 (en) 2020-03-24 2023-12-12 Apple Inc. User interfaces for accessing episodes of a content series
US11222478B1 (en) 2020-04-10 2022-01-11 Design Interactive, Inc. System and method for automated transformation of multimedia content into a unitary augmented reality module
US11900547B2 (en) 2020-04-29 2024-02-13 Magic Leap, Inc. Cross reality system for large scale environments
US11899895B2 (en) 2020-06-21 2024-02-13 Apple Inc. User interfaces for setting up an electronic device
US12100108B2 (en) 2020-10-29 2024-09-24 Magic Leap, Inc. Cross reality system with quality information about persistent coordinate frames
US11720229B2 (en) 2020-12-07 2023-08-08 Apple Inc. User interfaces for browsing and presenting content
US11934640B2 (en) 2021-01-29 2024-03-19 Apple Inc. User interfaces for record labels
US20220279048A1 (en) * 2021-03-01 2022-09-01 Daniel Goddard Augmented Reality Positioning and Matching System
US11558472B2 (en) * 2021-03-01 2023-01-17 Daniel Goddard Augmented reality positioning and matching system
USD1021935S1 (en) * 2021-05-14 2024-04-09 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with an animated graphical user interface
USD1019678S1 (en) * 2021-07-30 2024-03-26 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with a graphical user interface
USD1021926S1 (en) * 2021-08-12 2024-04-09 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with a graphical user interface
USD1021927S1 (en) * 2021-09-13 2024-04-09 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with an animated graphical user interface
USD1019684S1 (en) * 2021-09-14 2024-03-26 Beijing Kuaimajiabian Technology Co., Ltd. Display screen or portion thereof with a graphical user interface

Also Published As

Publication number Publication date
US9754397B1 (en) 2017-09-05

Similar Documents

Publication Publication Date Title
US9754397B1 (en) Systems and methods for contextual augmented reality sharing and performance
KR102344482B1 (en) Geo-fence rating system
US11670058B2 (en) Visual display systems and method for manipulating images of a real scene using augmented reality
US9256806B2 (en) Methods and systems for determining image processing operations relevant to particular imagery
US20200294293A1 (en) Persistent augmented reality objects
KR102451508B1 (en) Media item attachment system
US20140079281A1 (en) Augmented reality creation and consumption
KR102320723B1 (en) Method and system for verifying users
KR20120075487A (en) Sensor-based mobile search, related methods and systems
CN105323066B (en) Identity verification method and device
US20140078174A1 (en) Augmented reality creation and consumption
KR20210107139A (en) Deriving audiences through filter activity
US11410396B2 (en) Passing augmented reality content between devices
KR20200094025A (en) Augmented Reality system for advertising platform based image analysis
KR20240016271A (en) Systems and methods for management of non-fungible tokens and corresponding digital assets
US9269114B2 (en) Dynamic negotiation and authorization system to record rights-managed content
KR20240128967A (en) API that provides product cards
KR20240131412A (en) Product Cards by Augmented Reality Content Creators
US20180181596A1 (en) Method and system for remote management of virtual message for a moving object
KR20240016273A (en) Systems and methods for management of non-fungible tokens and corresponding digital assets
US10101885B1 (en) Interact with TV using phone camera and touch
US11030638B2 (en) System and method for time and space based digital authentication for in-person and online events
US11386719B2 (en) Electronic device and operating method therefor
US20190266461A1 (en) Fingerprint-based experience generation
KR20240016272A (en) Systems and methods for management of non-fungible tokens and corresponding digital assets

Legal Events

Date Code Title Description
AS Assignment

Owner name: MIRAGE WORLDS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PIEMONTE, PATRICK S.;STAAKE, RYAN P.;SIGNING DATES FROM 20170405 TO 20170407;REEL/FRAME:043492/0684

AS Assignment

Owner name: STAAKE, RYAN P., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MIRAGE WORLDS, INC.;REEL/FRAME:046473/0971

Effective date: 20180725

Owner name: PIEMONTE, PATRICK S., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MIRAGE WORLDS, INC.;REEL/FRAME:046473/0971

Effective date: 20180725

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION