US20120200667A1 - Systems and methods to facilitate interactions with virtual content - Google Patents
- Publication number
- US20120200667A1 (U.S. application Ser. No. 13/292,560)
- Authority
- US
- United States
- Prior art keywords
- person
- signal
- virtual object
- supplemental
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the present invention relates to systems and methods to provide video signals that include both a person and a virtual object. Some embodiments relate to systems and methods to efficiently and dynamically generate a supplemental video signal to be displayed for the person.
- An audio, visual or audio-visual program may include virtual content (e.g., computer generated, holographic, etc.).
- a sports anchorperson might be seen (from the vantage point of the ‘audience’) evaluating the batting stance of a computer generated baseball player that is not physically present in the studio.
- the person may interact with virtual content (e.g., by walking around and pointing to various portions of the baseball player's body). It can be difficult, however, for the person to accurately and naturally interact with the virtual content that he or she cannot actually see.
- a monitor in the studio might display the blended broadcast image (that is, including both the person and the virtual content).
- the person may keep glancing at the monitor to determine if he or she is standing in the right area and/or is looking in the right direction.
- An anchorperson's difficulty in determining where or how to interact with the virtual image can be distracting to viewers of the broadcast and can detract from the quality of the anchorperson's overall interaction, making the entire scene, including the virtual content, look less believable, as well as more difficult to produce.
- FIG. 1 is an illustration of a video system.
- FIG. 2 provides examples of images associated with a scene.
- FIG. 3 is an illustration of a video system in accordance with some embodiments.
- FIG. 4 provides examples of images associated with a scene according to some embodiments.
- FIGS. 5A and 5B are flow charts of methods in accordance with some embodiments of the present invention.
- FIG. 6 is a block diagram of a system that may be provided in accordance with some embodiments.
- FIG. 7 is a block diagram of a graphics platform in accordance with some embodiments of the present invention.
- FIG. 8 is a tabular representation of a portion of data representing a virtual object and 3D information about a person, such as his or her position and/or orientation in accordance with some embodiments of the present invention.
- FIG. 1 illustrates a system 100 wherein a set or scene 110 includes a person 120 and a virtual object 130 . That is, the virtual object 130 is not actually physically present within the scene 110 , but the image of the virtual object will be added either simultaneously or later (e.g., by a graphics rendering engine).
- a video camera 140 may be pointed at the scene 110 to generate a video signal provided to a graphics platform 150 .
- FIG. 2 illustrates an image 210 associated with such a video signal.
- the image 210 generated by the camera 140 includes an image of a person 220 (e.g., a news anchorperson) but not a virtual object.
- the person and the virtual image do not necessarily both need to be in the final scene as presented to the audience.
- the invention solves the problem of allowing the person to relate to the virtual image, irrespective of whether both the person and the image are ultimately presented to the viewing audience.
- the graphics platform 150 may receive information about the virtual object 130 , such as the object's location, pose, motion, appearance, audio, color, etc.
- the graphics platform 150 may use this information to create a viewer signal such as a broadcast signal or other signal to be output to a viewer, whether recorded or not, that includes images of both the person 120 and the virtual object 130 , or only one or the other of the images.
- FIG. 2 illustrates an image 212 associated with such a viewer signal.
- the image 212 output by the graphics platform 150 includes images of both a person 222 and a virtual object 232 (e.g., a dragon).
- it may be desirable for the person 120 to appear to interact with the virtual object 130 .
- the person 120 might want to appear to maintain eye contact with the virtual object 130 (e.g., along a line of sight 225 illustrated in FIG. 2 ). This can be difficult, however, because the person 120 cannot see the virtual object 130 .
- a monitor 160 might be provided with a display 162 so that the person 120 can view the broadcast or viewer signal. In this way, the person 120 can periodically glance at the display to determine if he or she is in the relatively correct position and/or orientation with respect to the virtual object 130 . Such an approach, however, can be distracting for both the person 120 and viewers (who may wonder why the person keeps looking away).
- FIG. 3 illustrates a system 300 according to some embodiments.
- a set or scene 310 includes a person 320 and a virtual object 330 , and a video camera 340 may be pointed at the scene 310 to generate a video and audio signal provided to a graphics platform 350 .
- other types of sensory signals such as an audio, thermal, or haptic signal (e.g., created by the graphics platform) could be used to signal the position or location of a virtual image in relation to the person (e.g., a “beep” might indicate when an anchorperson's hand is touching the “bat” of a virtual batter).
- audio may replace the “supplemental video” but note that the video camera may generate the “viewer video” that, besides being served to the audience, is also used to model the person's pose, appearance and/or location in the studio. Further note that some or all of this modeling may be done by other sensors.
- the graphics platform 350 may, according to some embodiments, execute a rendering application, such as the Brainstorm eStudio® three dimensional real-time graphics software package.
- the graphics platform 350 could be implemented using a Personal Computer (PC) running a Windows® Operating System (“OS”) or an Apple® computing platform, or a cloud-based program (e.g., Google® Chrome®).
- the graphics platform 350 may use information about the virtual object 330 (e.g., the object's location, motion, appearance, etc.) to create a broadcast or viewer signal that includes images of both the person 320 and the virtual object 330 .
- FIG. 4 illustrates an image 412 that includes images of both a person 422 and a virtual object 432 .
- 3D information about the person may be provided to the graphics platform 350 through the processing of data captured by the video camera 340 and/or various other sensors.
- the phrase “3D information” might include location or position information, body pose, line of sight direction, etc.
- the graphics platform 350 may then use this information to generate a supplemental video signal to be provided to a display 360 associated with the person 320 .
- a Head Mounted Video Display (HMVD) may be used to display the supplemental video signal.
- the supplemental video signal may include an image generated by the graphics platform 350 that includes a view of the virtual object 330 as it would be seen from the person's perspective.
- the graphics platform 350 may render a supplemental video feed in substantially real-time based on a spatial relationship between the person 320 and a virtual object 330 .
- the phrase “graphics platform” may refer to any device (or set of devices) that can perform the functions of the various embodiments described herein.
- the graphics platform 350 may send an audio signal that indicates when the person is within a certain distance of a virtual object, or that indicates the relative direction of the virtual object to the person, for example. Note that a similar effect might be created using an audio or pressure signal or other type of signal (e.g., thermal) to indicate positioning.
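As a rough sketch of the proximity cue described above (the function name, coordinate convention, and 0.5 m threshold are illustrative assumptions, not part of the disclosure), a platform might compare the person's tracked position with the virtual object's position and trigger a "beep" within a set distance:

```python
import math

def proximity_cue(person_pos, object_pos, threshold=0.5):
    """Return an audio-cue decision based on the person/virtual-object distance.

    person_pos, object_pos: (x, y, z) studio coordinates in meters.
    threshold: distance (m) at which a "beep" should be triggered.
    """
    distance = math.dist(person_pos, object_pos)
    # Relative direction (unit vector from person toward the object) could
    # drive a spatialized cue rather than a simple beep.
    direction = tuple(
        (o - p) / distance if distance else 0.0
        for p, o in zip(person_pos, object_pos)
    )
    return {"beep": distance <= threshold, "distance": distance, "direction": direction}

# e.g., an anchorperson's hand near the "bat" of a virtual batter
cue = proximity_cue((1.0, 1.0, 1.0), (1.0, 1.0, 1.3), threshold=0.5)
```

A pressure or thermal cue would use the same distance test; only the output device changes.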
- FIG. 4 illustrates that the person 422 may be wearing a display 462 wherein an image of the supplemental video signal is projected onto lenses worn by the person 422 .
- the supplemental video signal may include an image 474 of the virtual object as it would appear from the person's point of view.
- the image 474 of the virtual object may comprise a skeleton or Computer-Aided Design ("CAD") vector representation view of the virtual object.
- a high definition version might be provided instead. Note that a background behind the image 474 may or may not be seen in the supplemental image 414 .
- viewing any representation of the virtual object from the performing person's perspective may allow the person 422 to more realistically interact with the virtual object 432 (e.g., the person 422 may know how to position himself or herself or gesture relative to the virtual object, or where to look in order to appear to maintain eye contact along a line of sight 425 ).
- the actual line of sight of the person 422 may be determined, such as by using retina detectors incorporated into the display 462 (to take into account that the person 422 can look around without turning his or her head).
- a view could be controlled using a joystick. For example, when the joystick is in a default or home position, the viewpoint might represent the normal line of sight, and when the joystick is moved or engaged, the viewpoint might be adjusted away from the normal position.
- FIG. 5A illustrates a method that might be performed, for example, by some or all of the elements described herein.
- the flow charts described herein do not imply a fixed order to the steps, and embodiments of the present invention may be practiced in any order that is practicable. Note that any of the methods described herein may be performed by hardware, software, or any combination of these approaches.
- a computer-readable storage medium may store thereon instructions that when executed by a machine result in performance according to any of the embodiments described herein.
- 3D information about a virtual object is received at a graphics platform. For example, a location and dimensions of the virtual object may be determined by the graphics platform.
- 3D information associated with a person in a scene may be determined. The 3D information associated with the person might include the person's location, orientation, line of sight, pose, etc., and may be received, for example, from a video camera and/or one or more RTLS sensors using technologies such as RFID, infrared, and Ultra-wideband.
- the graphics platform may create: (i) a viewer signal (possibly a video and/or audio signal) of the scene in relation to the person (whether or not actually including the person); for example, a viewer signal may include the virtual element and an animated figure of the person; and (ii) a supplemental signal of the scene (e.g., a video and/or audio signal), wherein the viewer signal and the supplemental signal are from different perspectives based at least in part on the 3D information.
- the viewer signal might represent the scene from the point of view of a video camera filming the scene while the supplemental video signal represents the scene from the person's point of view.
- the supplemental video signal is displayed (or transmitted, e.g., via audio) to the person to help him or her interact with the virtual object.
- the performing person may be a robot.
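The FIG. 5A flow might be sketched as follows. This is a minimal illustration under assumed names and data structures, with "rendering" reduced to recording each signal's viewpoint and contents; it is not the patent's implementation:

```python
import math

def create_signals(object_pos, person_pos, camera_pos):
    """Produce two renditions of the same scene from different perspectives.

    The viewer signal is rendered from the camera's viewpoint and includes
    both the person and the virtual object; the supplemental signal is
    rendered from the person's viewpoint and shows where the virtual
    object would appear in his or her field of view.
    """
    def look_dir(src, dst):
        # Unit vector from src toward dst.
        d = math.dist(src, dst)
        return tuple((b - a) / d for a, b in zip(src, dst))

    viewer_signal = {
        "viewpoint": camera_pos,
        "renders": ["person", "virtual object"],
    }
    supplemental_signal = {
        "viewpoint": person_pos,
        "renders": ["virtual object"],  # the person need not see themselves
        # direction the person should look to "maintain eye contact"
        "gaze_hint": look_dir(person_pos, object_pos),
    }
    return viewer_signal, supplemental_signal

viewer, supplemental = create_signals(
    object_pos=(2.0, 0.0, 1.7),
    person_pos=(0.0, 0.0, 1.7),
    camera_pos=(-3.0, 0.0, 1.5),
)
```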
- FIG. 5B is a flow chart of a method that may be performed in accordance with some embodiments described herein.
- a video signal including an image of a person may be received, and a virtual object may be added to the video signal to create a viewer signal at 514 .
- the virtual object may be any type of virtual image, such as a Computer Generated Image ("CGI") object, including a virtual human and/or a video game character, object, or sound.
- location information associated with a spatial relationship between the person and the virtual object may be determined.
- the location information may be determined by sensors or by analyzing the video signal from the camera.
- a plurality of video signals might be received and analyzed by a graphics platform to model the person's appearance and to determine a three dimensional location of the person.
- Other types of location information may include a distance between the person and the virtual object, one or more angles associated with the person and the virtual object, and/or an orientation of the person (e.g., where he or she is currently looking).
- an RTLS sensor may be used (e.g., using sound waves or any other way of measuring distance).
- a supplemental signal may be created based on the location information.
- the supplemental signal may include a view of the virtual object or a perspective of the virtual object as would be seen or perceived from the person's perspective.
- the perception of the virtual object might comprise a marker (e.g., a dot or "x" indicating where a person should look, or a sound when a person looks in the right direction), a lower resolution image as compared with the viewer signal, an image updated at a lower frame rate as compared with the viewer signal, and/or a dynamically generated occlusion zone.
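A reduced-fidelity supplemental feed of the kind described above could be derived from the full-quality render by dropping pixels and frames; the decimation factors below are arbitrary assumptions for illustration:

```python
def reduce_fidelity(frames, res_factor=2, rate_factor=3):
    """Downsample a sequence of frames for the supplemental signal.

    frames: list of 2D pixel grids (lists of lists of pixel values).
    res_factor: keep every res_factor-th row and column (lower resolution).
    rate_factor: keep every rate_factor-th frame (lower frame rate).
    """
    kept = frames[::rate_factor]
    return [
        [row[::res_factor] for row in frame[::res_factor]]
        for frame in kept
    ]

# Six 4x4 frames with distinguishable pixel values, for illustration.
full = [[[f * 10 + r * 4 + c for c in range(4)] for r in range(4)] for f in range(6)]
low = reduce_fidelity(full)  # two 2x2 frames
```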
- the supplemental signal is further based on an orientation of the person's line of sight (e.g., the supplemental video signal may be updated when a person turns his or her head).
- multiple people and/or virtual objects may be involved in the scene and/or included in the supplemental signal.
- a supplemental signal may be created for each person, and each supplemental signal would include a view or perception of the virtual objects as would be seen or perceived from that person's perspective.
- the supplemental signal may then be transmitted to a secondary device (e.g., a display device).
- the display device may be worn by the person, such as an eyeglasses display, a retinal display, and/or a contact lens display, or a hearing aid (for rendering sound information).
- the supplemental signal is wirelessly transmitted to the secondary device; hence, the supplemental signal and its display to the performing person may be almost transparent to a viewer of the final broadcast.
- a command from the person may be detected and, responsive to said detection, the virtual object may be adjusted.
- a command might comprise, for example, an audible command and/or a body gesture command.
- a graphics platform might detect that the person has “grabbed” a virtual object and then move the image of the virtual object as the person moves his or her hands.
- a person may gesture or verbally order that the motion of a virtual object be paused and/or modified.
- a guest or another third person may gesture or verbally order motion of the virtual object, causing the virtual object to move (and for such movement to be perceived from the perspective of the original person wearing the detection device). For example, when an audience claps or laughs, the sound might cause the virtual object to take a bow, which the person may then be able to perceive via information provided in the supplemental feed.
- "video feed" and "image" may refer to any signal conveying information about a moving or still image, including audio signals and including a High Definition-Serial Data Interface ("HD-SDI") signal transmitted in accordance with the Society of Motion Picture and Television Engineers 292M standard.
- although HD signals may be described in some examples presented herein, note that embodiments may be associated with any other type of video feed, including a standard broadcast feed and/or a three dimensional image feed.
- video feeds and/or received images might comprise, for example, an HD-SDI signal exchanged through a fiber cable and/or a satellite transmission.
- the video cameras described herein may be any device capable of generating a video feed, such as a Sony® studio (or outside) broadcast camera.
- systems and methods may be provided to improve the production of video presentations involving augmented reality technology.
- some embodiments may produce an improved immersive video mixing subjects and a virtual environment. This might be achieved, for example, by reconstructing the subject's video and/or presenting the subject with a “subject-view” of the virtual environment. This may facilitate interactions between subjects and the virtual elements and, according to some embodiments, let a subject alter a progression and/or appearance of virtual imagery through gestures or audible sounds.
- Augmented reality may fuse real scene video with computer generated imagery.
- the virtual environment may be rendered from the perspective of a camera or other device that is used to capture the real scene video (or audio).
- knowledge of the camera's parameters may be required along with distances of real and virtual objects relative to the camera to resolve occlusion.
- the image of part of a virtual element may be occluded by the image of a physical element in the scene or vice versa.
- Another aspect of enhancing video presentation through augmented reality is handling the interaction between the real and the virtual elements.
- a sports anchorperson may analyze maneuvers during a game or play segment.
- a producer might request a graphical presentation of a certain play in a game that the anchor wants to analyze.
- This virtual playbook might comprise a code module that, when executed on a three dimensional rendering engine, may generate a three dimensional rendering of the play.
- the synthesized play may then be projected from the perspective of the studio camera.
- the anchor's video image may be rendered so that he or she appears standing on the court (while actually remaining in a studio) among the virtual players. He or she may then deliver the analysis while virtually engaging with the players.
- To position himself or herself relative to the virtual players, the anchor typically looks at a camera screen and rehearses the movements beforehand. Even then, it may be a challenge to make the interaction between a real person and a virtual person look natural.
- FIG. 6 is a block diagram of a system 600 that may be provided in accordance with some embodiments.
- the system 600 creates an augmented reality environment using a camera 640 to capture a video sequence of a real-world scene 610 including a person 620 , and generates a graphical sequence that includes a virtual object 630 .
- the broadcast camera 640 may record the person 620 in a studio (possibly using a "green screen" in the background) or on location "in the field."
- a “virtual camera” (the perspective used by a graphic engine to render a virtual environment) may be aligned with the broadcast camera 640 so that the rendered environment matches the person's scale, movements, etc.
- the broadcast camera's perspective (including position, roll, pan, tilt, and focal-length) is extracted using sensors mounted on the broadcast camera 640 or by analyzing the video frames received from the broadcast camera 640 .
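The extracted broadcast-camera perspective can be represented as a small parameter set. The sketch below assumes one particular axis convention for pan, tilt, and roll (conventions vary in practice) and converts the angles into a rotation matrix that a "virtual camera" could share with the broadcast camera:

```python
import math

def rotation(pan, tilt, roll):
    """Compose a 3x3 rotation matrix from pan (yaw), tilt (pitch), and
    roll angles in radians. The axis convention is an assumption:
    pan about Y (up), tilt about X, roll about Z (optical axis)."""
    cy, sy = math.cos(pan), math.sin(pan)
    cx, sx = math.cos(tilt), math.sin(tilt)
    cz, sz = math.cos(roll), math.sin(roll)
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(ry, rx), rz)

# Assumed camera model fields mirroring the parameters named in the text.
camera_model = {
    "position": (0.0, 1.8, -4.0),  # meters, studio coordinates
    "rotation": rotation(math.radians(10), math.radians(-5), 0.0),
    "focal_length_mm": 35.0,
}
```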
- capabilities to improve the accuracy and realism of the mixed real and virtual production may be provided by the system 600 .
- the system 600 disclosed herein may improve and extend the interaction between the person 620 and the virtual object 630 and allow the person 620 to spatially and temporally affect the rendering of the virtual object 630 during the production.
- the video from the camera 640 of the person 620 may be altered to refine his or her pose (and/or possibly appearance) before mixing it with the virtual environment. This may be done by determining the person's three dimensional model, including obtaining a three dimensional surface and skeleton representation (for example, based on an analysis of videos from multiple views), so that the image of the person 620 at a certain location and pose in the scene may be altered in relation to the virtual object.
- the person 620 may be equipped with a HMVD 660 (e.g., three dimensional glasses, virtual retinal displays, etc.) through which he or she can view the virtual environment, including the virtual object 630 from his or her perspective. That is, the virtual object 630 may be displayed to the person from his or her own perspective in a way that enhances the person's ability to navigate through the virtual world and to interact with the content without overly complicating the production workflow.
- a 3D model of the person 620 may be obtained through an analysis of the broadcast camera 640 and potentially auxiliary cameras and/or sensors 642 (attached to the person or external to the person). Once a 3D model of the person 620 is obtained, the image of the person 620 may be reconstructed into a new image that shows the person with a new pose and/or appearance relative to the virtual elements. According to some embodiments, the “viewer video” may be served to the audience.
- the person's location and pose may be submitted to a three dimensional graphic engine 680 to render a supplemental virtual environment view (e.g., a second virtual camera view) from the perspective of the person 620 .
- This second virtual camera view is presented to the person 620 through the HMVD 660 or any other display device.
- the second virtual camera view may be presented in a semi-transparent manner, so the person 620 can still see his or her surrounding real-world environment (e.g., studio, cameras, another person 622 , etc.).
- the second camera view might be presented to the person on a separate screen at the studio.
- Such an approach may eliminate the need to wear a visible display such as HMVD 660 , but will require the person 620 to look at a monitor instead of directly at the virtual object 630 .
- some embodiments use computer-vision techniques to recognize and track the people 620 , 622 in the scene 610 .
- a three dimensional model of an anchor may be estimated and used to reconstruct his or her image at different orientations and poses relative to the virtual players or objects 630 , 632 .
- the anchor (or any object relative to the anchor) may be reconstructed, according to some embodiments, at different relative sizes, locations, or appearances.
- Three dimensional reconstruction of objects may be done, for example, based on an analysis of video sequences from which three dimensional information of static and dynamic objects was extracted.
- an object's or person's structure and characteristics might be modeled. For example, based on stereoscopic matching of corresponding pixels from two or more views of a physical object, the cameras' parameters (pose) may be estimated. Note that knowledge of the cameras' poses in turn may provide for each object's pixel the corresponding real-world-coordinates. As a result, when fusing the image of a physical object with a virtual content (e.g., computer-generated imagery) the physical object's position in the real-world-coordinates may be considered relative to the virtual content to resolve problems of overlap and order (e.g., occlusion issues).
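As a simplified illustration of how depth plus camera parameters yield real-world coordinates for a pixel (using an assumed single-camera pinhole model rather than the multi-view estimation described above):

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth (meters, along the
    optical axis) into camera-frame 3D coordinates, assuming a pinhole
    camera with focal lengths fx, fy and principal point (cx, cy),
    all in pixel units."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point lies on the optical axis.
p = pixel_to_world(960, 540, 2.0, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
```

With two or more calibrated views, the same relation is inverted: matched pixels constrain the point's world position, which is how depth itself can be estimated.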
- an order among physical and graphical elements may be facilitated using a depth map.
- a depth map of a video image may provide the distance between a point in the scene 610 (projected in the image) and the camera 640 .
- a depth map may be used to determine what part of the image of a physical element should be rendered into the computer generated image, for example, and what part is occluded by a virtual element (and therefore should not be rendered).
- this information may be encoded in a binary occlusion mask. For example, a mask pixel set to "1" might indicate that a physical element's image pixel should be keyed-in (i.e., rendered) while "0" indicates that it should not be keyed-in.
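The binary occlusion mask described above can be computed by comparing, per pixel, the depth of the physical scene with the depth of the rendered virtual elements. A minimal sketch, with lists of lists standing in for real depth buffers:

```python
def occlusion_mask(physical_depth, virtual_depth):
    """Per-pixel binary mask: 1 keys in the physical element's pixel
    (it is closer to the camera than the virtual element); 0 leaves the
    virtual element visible. Depths are distances from the camera."""
    return [
        [1 if p < v else 0 for p, v in zip(p_row, v_row)]
        for p_row, v_row in zip(physical_depth, virtual_depth)
    ]

# An anchorperson ~2 m away stands in front of a virtual player at 3 m,
# except where the virtual player's arm (1.5 m) crosses in front.
physical = [[2.0, 2.0], [2.0, 2.0]]
virtual = [[3.0, 3.0], [1.5, 3.0]]
mask = occlusion_mask(physical, virtual)
```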
- a depth map may be generated, according to some embodiments, either by processing the video sequences of multiple views of the scene or by a three dimensional camera such as a Light Detection And Ranging ("LIDAR") camera.
- a LIDAR camera may be associated with an optical remote sensing technology that measures the distance to, or other properties of, a target by illuminating the target with light (e.g., using laser pulses).
- a LIDAR camera may use ultraviolet, visible, or near infrared light to locate and image objects based on the reflected time of flight. This information may then be used in connection with any of the embodiments described herein.
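Time-of-flight ranging reduces to a one-line computation (the factor of two accounts for the pulse's round trip); the example timing value is illustrative:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds):
    """Distance to a target from the round-trip time of a light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 20 ns indicates a target about 3 m away.
d = tof_distance(20e-9)
```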
- Other technologies utilizing RF, infrared, and Ultra-wideband signals may be used to measure relative distances of objects in the scene. Note that a similar effect might be achieved using sound waves to determine an anchorperson's location.
- a covered scene might include one or more persons (physical objects) 620 , 622 that perform relative to virtual objects or elements 630 , 632 .
- the person 620 may have a general idea as to the whereabouts and motion of these virtual elements 630 , 632 , although he or she cannot “see” them in real life.
- The scene may be captured by a broadcast (main) camera 640 .
- a control-system 660 may drive the broadcast camera 640 automatically or via an operator.
- the control-system 660 operated either automatically or by an operator, may manage the production process. For instance, a game (e.g., a sequence of computer-generated imagery data) including a court/field with playing athletes (e.g., the virtual objects 630 , 632 ) may be selected from a CGI database 670 . A camera perspective may then be determined and submitted to the broadcast camera 640 as well as to a three dimensional graphic engine 680 .
- the graphic engine 680 may, according to some embodiments, receive the broadcast camera's model directly from camera-mounted sensors. According to other embodiments, vision-based methods may be utilized to estimate the broadcast camera's model.
- the three dimensional graphic engine 680 may render the game from the same camera perspective as the broadcast camera 640 .
- the virtual render of the game may be fused with video capture of the people 620 , 622 in the scene 610 to show all of the elements in an immersive fashion.
- a “person” may be able to know where he or she is in relation to the virtual object without having to disengage from the scene itself.
- the person may never have to see the virtual object in order to react logically to its eerie presence, the totality of which is being transmitted to the viewing audience. Note that this interaction may be conveyed to the audience, such as by merging the virtual and physical into the “viewer video.”
- various information details may be derived, such as: (i) the image foreground region of the physical elements (or persons), (ii) three dimensional modeling and characteristics, and/or (iii) real world locations. Relative to the virtual elements' presence in the scene 610 , as defined by the game (e.g., in the CGI database 670 ), the physical elements may be reconstructed. Moreover, the pose and appearance of each physical element may be reconstructed, resulting in a new video or rendition (replacing the main camera 640 video) in which the new pose and appearance are in a more appropriate relation to the virtual objects 630 , 632 .
- the video processor 650 may also generate an occlusion mask that, together with the video, may be fed into a mixer 690 , where fusion of the video and the computer-generated-imagery takes place.
- the person 620 interacting with one or more virtual objects 630 , 632 uses, for example, an HMVD 660 to “see” or perceive these virtual elements from his or her vantage point.
- the person 620 (or other persons) may be able, using gestures or voice, to affect the virtual object's motion.
- a head-mounted device for tracking the person's gaze may be used as a means for interaction.
- the video processor 650 may send the calculated person's perspective or an altered person's perspective to the three dimensional graphic engine 680 .
- the graphic engine 680 may render the virtual elements and/or environment from the received person's perspective (vantage point) and send this computer generated imagery, wirelessly, to the person's HMVD 660 .
- the person's gesture and/or voice may be measured by the plurality of cameras and sensors 642 , and may be recognized and translated by the video processor 650 to be interpreted as a command. These commands may be used to alter the progression and appearance of the virtual play (e.g., pause, slow-down, replay, any special effect, etc.).
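As a rough sketch of how recognized commands might alter the progression of the virtual play, consider the following. The class and command names are illustrative assumptions; the patent does not specify an implementation, and the actual recognition of gestures or voice is stubbed out entirely.

```python
class VirtualPlay:
    """Minimal stand-in for the playback state of a rendered virtual play."""

    def __init__(self):
        self.speed = 1.0      # playback rate multiplier
        self.paused = False
        self.position = 0.0   # seconds into the play

    def apply_command(self, command: str) -> None:
        """Alter the play's progression based on a recognized command."""
        if command == "pause":
            self.paused = True
        elif command == "resume":
            self.paused = False
        elif command == "slow-down":
            self.speed = max(0.1, self.speed * 0.5)
        elif command == "replay":
            self.position = 0.0
            self.paused = False
        # Unrecognized commands are ignored so that spurious gesture
        # detections do not disturb the broadcast.

play = VirtualPlay()
play.apply_command("slow-down")
play.apply_command("pause")
```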
- a version of the virtual image may be transmitted separately to the person 620 .
- this version can be simple, such as CAD-like drawings or an audio “beep,” or more complex, such as the entire look and feel of the virtual image used for the broadcast program.
- the person 620 may see the virtual image from his or her own perspective, and the virtual image presented as part of the programming may change.
- the person 620 may interact with the virtual objects 630 , 632 (to make them appear, disappear, move, change, multiply, shrink, grow, etc.) through gestures, voice, or any other means.
- This image may then be transmitted to the person's “normal” eyeglasses (or contact lenses) through which the image is beamed to the person's retina (e.g., a virtual retinal display) or projected on the eyeglasses' lenses.

- Other embodiments may use hearing devices (e.g., where a certain sound is transmitted as the person interacts with the virtual object).
- some embodiments may be applied to facilitate interaction between two or more persons captured by two or more different cameras from different locations (and, possibly, different times). For example, an interviewer at the studio, with the help of an HMVD may “see” the video reconstruction of an interviewee. This video reconstruction may be from the perspective of the interviewer. Similarly, the interviewee may be able to “see” the interviewer from his or her perspective. Such a capability may facilitate a more realistic interaction between the two people.
- FIG. 7 is a block diagram of a graphics platform 700 that might be associated with, for example, the system 300 of FIG. 3 and/or the system 600 of FIG. 6 in accordance with some embodiments of the present invention.
- the graphics platform 700 comprises a processor 710 , such as one or more INTEL® Pentium® processors, coupled to communication devices 720 configured to communicate with remote devices (not shown in FIG. 7 ).
- the communication devices 720 may be used, for example, to receive a video feed, information about a virtual object, and/or location information about a person.
- the processor 710 is also in communication with an input device 740 .
- the input device 740 may comprise, for example, a keyboard, a mouse, computer media reader, or even a system such as that described by this invention. Such an input device 740 may be used, for example, to enter information about a virtual object, a background, or remote and/or studio camera set-ups.
- the processor 710 is also in communication with an output device 750 .
- the output device 750 may comprise, for example, a display screen or printer or audio speaker. Such an output device 750 may be used, for example, to provide information about a camera set-up to an operator.
- the processor 710 is also in communication with a storage device 730 .
- the storage device 730 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., hard disk drives), optical storage devices, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices.
- the storage device 730 stores a graphics platform application 735 for controlling the processor 710 .
- the processor 710 performs instructions of the application 735, and thereby operates in accordance with any embodiments of the present invention described herein.
- the processor 710 may receive a scene signal, whether or not including an image of a person, from a video camera.
- the processor 710 may insert a virtual object into the scene signal to create a viewer signal, such that the viewer signal includes a view of the virtual object as it would be seen from the video camera's perspective.
- the processor 710 may also create a supplemental signal, such that the supplemental signal includes information related to the view of the virtual object as it would be seen from the person's perspective.
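The two signals differ only in the vantage point from which the same virtual object is rendered. A minimal sketch of that idea follows; all names and values are illustrative assumptions, and a real system would use full camera models rather than a yaw-only rotation.

```python
import math

def object_in_view_frame(object_pos, viewer_pos, viewer_yaw):
    """Express a virtual object's world position in a viewer's local frame.

    viewer_yaw is the viewer's heading in radians about the vertical axis.
    Rendering the object at this local position yields the correct
    perspective for that viewer, whether the viewer is the studio camera
    (viewer signal) or the person (supplemental signal)."""
    dx = object_pos[0] - viewer_pos[0]
    dy = object_pos[1] - viewer_pos[1]
    dz = object_pos[2] - viewer_pos[2]
    c, s = math.cos(viewer_yaw), math.sin(viewer_yaw)
    # Rotate the world offset into the viewer's frame (yaw only, for brevity).
    return (c * dx + s * dy, -s * dx + c * dy, dz)

dragon = (5.0, 0.0, 1.0)                                       # object in world coordinates
camera_view = object_in_view_frame(dragon, (0.0, 0.0, 1.5), 0.0)            # viewer signal
person_view = object_in_view_frame(dragon, (3.0, 2.0, 1.7), -math.pi / 2)   # supplemental signal
```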
- information may be “received” by or “transmitted” to, for example: (i) the graphics platform 700 from other devices; or (ii) a software application or module within graphics platform 700 from another software application, module, or any other source.
- the storage device 730 also stores a rendering engine application 735 and virtual object and location data 800.
- a database 800 that may be used in connection with the graphics platform 700 will now be described in detail with respect to FIG. 8 .
- the illustration and accompanying descriptions of the database presented herein are exemplary, and any number of other database arrangements could be employed besides those suggested by the figures.
- FIG. 8 is a tabular representation of a portion of a virtual object and location data table 800 in accordance with some embodiments of the present invention.
- the table 800 includes entries associated with virtual objects and location information about people in a scene.
- the table 800 also defines fields for each of the entries.
- the fields might specify a virtual object or person identifier, a three dimensional location of an object or person, angular orientation information, a distance between a person and an object, occlusion information, field of view data, etc.
- the information in the database 800 may be periodically created and updated based on information received from a location sensor worn by a person.
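The table might be sketched as follows. Field names and values here are purely illustrative assumptions based on the fields enumerated above; the patent does not prescribe a schema.

```python
# Hypothetical entries in the virtual object and location table 800.
table_800 = [
    {"id": "PERSON_620", "kind": "person",
     "position": (2.0, 1.5, 0.0),      # three dimensional location (studio coordinates)
     "orientation_deg": 45.0,          # angular orientation (heading)
     "occluded_by": None,
     "field_of_view_deg": 110.0},
    {"id": "OBJECT_630", "kind": "virtual",
     "position": (4.0, 3.0, 0.0),
     "orientation_deg": 225.0,
     "occluded_by": "PERSON_620",      # occlusion information
     "field_of_view_deg": None},
]

def distance(a, b):
    """Distance between a person and an object, one of the table's fields."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

d = distance(table_800[0]["position"], table_800[1]["position"])
```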
- embodiments described herein may use three dimensional information to adjust and/or tune the rendering of a person or object in a scene, and thereby simplify preparation of a program segment. It may let the person focus on the content delivery, knowing that his or her performance may be refined by reconstructing his or her pose, location, and, according to some embodiments, appearance relative to the virtual environment. Moreover, the person may receive a different image and/or perspective from what is provided to the viewer. This may significantly improve the person's ability to interact with the virtual content (e.g., reducing the learning curve for the person and allowing production to happen with fewer takes).
- a person may avoid interacting with an actual monitor (like a touch screen) or pre-coordinate his or her movements so as to appear as if an interaction is happening. That is, according to some embodiments described herein, a person's movements (or spoken words, etc.) can cause the virtual images to change, move, etc. Further, embodiments may reduce the use of bulky monitors, which may free up studio space and increase the portability of the operation (freeing a person to work in a variety of studio environments, including indoor and outdoor environments).
Abstract
According to some embodiments, a graphics platform may receive a video signal, including an image of a person, from a video camera. The graphics platform may then add a virtual object to the video signal to create a viewer or broadcast signal. 3D information associated with a spatial relationship between the person and the virtual object is determined. The graphics platform may then create a supplemental signal based on the 3D information, wherein the supplemental signal includes sufficient information to enable the person to interact with the virtual object as if such object were ‘seen’ or sensed from the person's perspective. The supplemental signal may comprise video, audio, and/or pressure information, as necessary to enable the person to interact with the virtual object as if he or she were physically present with the virtual object.
Description
- This patent application claims the benefit of U.S. Provisional Patent Application No. 61/440,675 entitled “Interaction with Content Through Human Computer Interface” and filed on Feb. 8, 2011. The entire contents of that application are hereby incorporated by reference.
- The present invention relates to systems and methods to provide video signals that include both a person and a virtual object. Some embodiments relate to systems and methods to efficiently and dynamically generate a supplemental video signal to be displayed for the person.
- An audio, visual or audio-visual program (e.g., a television broadcast) may include virtual content (e.g., computer generated, holographic, etc.). For example, a sports anchorperson might be seen (from the vantage point of the ‘audience’) evaluating the batting stance of a computer generated baseball player that is not physically present in the studio. Moreover, in some cases, the person may interact with virtual content (e.g., by walking around and pointing to various portions of the baseball player's body). It can be difficult, however, for the person to accurately and naturally interact with virtual content that he or she cannot actually see. This may occur whether or not the studio anchorperson is actually in the final cut of the scene as broadcast to the ‘audience.’ In some cases, a monitor in the studio might display the blended broadcast image (that is, including both the person and the virtual content). With this approach, however, the person may keep glancing at the monitor to determine if he or she is standing in the right area and/or is looking in the right direction. An anchorperson's difficulty in determining where or how to interact with the virtual image can distract viewers of the broadcast and detract from the quality of the anchorperson's overall interaction, making the entire scene, including the virtual content, look less believable and more difficult to produce.
-
FIG. 1 is an illustration of a video system. -
FIG. 2 provides examples of images associated with a scene. -
FIG. 3 is an illustration of a video system in accordance with some embodiments. -
FIG. 4 provides examples of images associated with a scene according to some embodiments. -
FIGS. 5A and 5B are flow charts of methods in accordance with some embodiments of the present invention. -
FIG. 6 is a block diagram of a system that may be provided in accordance with some embodiments. -
FIG. 7 is a block diagram of a graphics platform in accordance with some embodiments of the present invention. -
FIG. 8 is a tabular representation of a portion of data representing a virtual object and 3D information about a person, such as his or her position and/or orientation in accordance with some embodiments of the present invention. - Applicants have recognized that there is a need for methods, systems, apparatus, means and computer program products to efficiently and dynamically facilitate interactions between a person and virtual content. For example,
FIG. 1 illustrates a system 100 wherein a set or scene 110 includes a person 120 and a virtual object 130. That is, the virtual object 130 is not actually physically present within the scene 110, but the image of the virtual object will be added either simultaneously or later (e.g., by a graphics rendering engine). A video camera 140 may be pointed at the scene 110 to generate a video signal provided to a graphics platform 150. By way of example, FIG. 2 illustrates an image 210 associated with such a video signal. Note that the image 210 generated by the camera 140 includes an image of a person 220 (e.g., a news anchorperson) but not a virtual object. Neither the person nor the virtual image necessarily needs to appear in the final scene as presented to the audience. The invention solves the problem of allowing the person to relate to the virtual image, irrespective of whether both are ultimately presented to the viewing audience. - Referring again to
FIG. 1 , the graphics platform 150 may receive information about the virtual object 130, such as the object's location, pose, motion, appearance, audio, color, etc. The graphics platform 150 may use this information to create a viewer signal such as a broadcast signal or other signal to be output to a viewer, whether recorded or not, that includes images of both the person 120 and the virtual object 130, or only one or the other of the images. For example, FIG. 2 illustrates an image 212 associated with such a viewer signal. By way of example, but without limitation, the image 212 output by the graphics platform 150 includes images of both a person 222 and a virtual object 232 (e.g., a dragon). - Referring again to
FIG. 1 , it may be desirable for the person 120 to appear to interact with the virtual object 130. For example, the person 120 might want to appear to maintain eye contact with the virtual object 130 (e.g., along a line of sight 225 illustrated in FIG. 2 ). This can be difficult, however, because the person 120 cannot see the virtual object 130. In some cases, a monitor 160 might be provided with a display 162 so that the person 120 can view the broadcast or viewer signal. In this way, the person 120 can periodically glance at the display to determine if he or she is in the relatively correct position and/or orientation with respect to the virtual object 130. Such an approach, however, can be distracting for both the person 120 and viewers (who may wonder why the person keeps looking away). - To efficiently and dynamically facilitate interactions between a person and virtual content,
FIG. 3 illustrates a system 300 according to some embodiments. As before, a set or scene 310 includes a person 320 and a virtual object 330, and a video camera 340 may be pointed at the scene 310 to generate a video and audio signal provided to a graphics platform 350. According to some embodiments, other types of sensory signals, such as an audio, thermal, or haptic signal (e.g., created by the graphics platform) could be used to signal the position or location of a virtual image in relation to the person (e.g., a “beep” might indicate when an anchorperson's hand is touching the “bat” of a virtual batter). According to some embodiments, audio may replace the “supplemental video” but note that the video camera may generate the “viewer video” that, besides being served to the audience, is also used to model the person's pose, appearance and/or location in the studio. Further note that some or all of this modeling may be done by other sensors. - The
graphics platform 350 may, according to some embodiments, execute a rendering application, such as the Brainstorm eStudio® three dimensional real-time graphics software package. Note that the graphics platform 350 could be implemented using a Personal Computer (PC) running a Windows® Operating System (“OS”) or an Apple® computing platform, or a cloud-based program (e.g., Google® Chrome®). The graphics platform 350 may use information about the virtual object 330 (e.g., the object's location, motion, appearance, etc.) to create a broadcast or viewer signal that includes images of both the person 320 and the virtual object 330. For example, FIG. 4 illustrates an image 412 that includes images of both a person 422 and a virtual object 432. - Referring again to
FIG. 3 , to facilitate an appearance of interactions between the person 320 and the virtual object 330, 3D information associated with the person 320 may be determined by the graphics platform 350 through the processing of data captured by the video camera 340 and/or various other sensors. As used herein, the phrase “3D information” might include location or position information, body pose, line of sight direction, etc. The graphics platform 350 may then use this information to generate a supplemental video signal to be provided to a display 360 associated with the person 320. For example, a Head Mounted Video Display (HMVD) may be used to display the supplemental video signal. In particular, the supplemental video signal may include an image generated by the graphics platform 350 that includes a view of the virtual object 330 as it would be seen from the person's perspective. That is, the graphics platform 350 may render a supplemental video feed in substantially real-time based on a spatial relationship between the person 320 and the virtual object 330. Note that, as used herein, the phrase “graphics platform” may refer to any device (or set of devices) that can perform the functions of the various embodiments described herein. Alternatively, or in addition, the graphics platform 350 may send an audio signal that indicates a situation where the person is within a certain distance of a virtual object or the relative direction of the virtual object to the person, for example. Note that a similar effect might be created using an audio or pressure signal or other type of signal (e.g., thermal) to indicate positioning. -
FIG. 4 illustrates that the person 422 may be wearing a display 462 wherein an image of the supplemental video signal is projected onto lenses worn by the person 422. Moreover, as illustrated by the supplemental image 414, the supplemental video signal may include an image 474 of the virtual object as it would appear from the person's point of view. As illustrated in FIG. 4 , the image 474 of the virtual object may comprise a skeleton or Computer Aided-Design (“CAD”) vector representation view of the virtual object. According to other embodiments, a high definition version might be provided instead. Note that a background behind the image 474 may or may not be seen in the supplemental image 414. In either case, viewing any representation of the virtual object from the performing person's perspective may allow the person 422 to more realistically interact with the virtual object 432 (e.g., the person 422 may know how to position himself or herself or gesture relative to the virtual object, or where to look in order to appear to maintain eye contact along a line of sight 425). According to some embodiments, the actual line of sight of the person 422 may be determined, such as by using retina detectors incorporated into the display 462 (to take into account that the person 422 can look around without turning his or her head). According to other embodiments, a view could be controlled using a joystick. For example, when the joystick is in a default or home position the viewpoint might represent the normal line of sight, and when the joystick is moved or engaged the viewpoint might be adjusted away from the normal position. -
FIG. 5A illustrates a method that might be performed, for example, by some or all of the elements described herein. The flow charts described herein do not imply a fixed order to the steps, and embodiments of the present invention may be practiced in any order that is practicable. Note that any of the methods described herein may be performed by hardware, software, or any combination of these approaches. For example, a computer-readable storage medium may store thereon instructions that when executed by a machine result in performance according to any of the embodiments described herein. - At 502, 3D information about a virtual object is received at a graphics platform. For example, a location and dimensions of the virtual object may be determined by the graphics platform. At 504, 3D information associated with a person in a scene may be determined. The 3D information associated with the person might include the person's location, orientation, line of sight, pose, etc. and may be received, for example, from a video camera and/or one or more RTLS sensors using technologies such as RFID, infrared, and Ultra-wideband. At 506, the graphics platform may create: (i) a “viewer signal” (possibly a video and/or audio signal) of the scene in relation to the person (whether or not actually including the person); for example, a viewer signal may include the virtual element and an animated figure of the person; and (ii) a supplemental signal of the scene (e.g., a video and/or audio signal), wherein the viewer signal and the supplemental signal are from different perspectives based at least in part on the 3D information. For example, the viewer signal might represent the scene from the point of view of a video camera filming the scene while the supplemental video signal represents the scene from the person's point of view.
According to some embodiments, the supplemental video signal is displayed (or transmitted, e.g., via audio) to the person to help him or her interact with the virtual object. In an embodiment of this invention, the performing person may be a robot.
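The flow of FIG. 5A can be summarized in a short data-flow sketch. Rendering itself is stubbed out, and all structure and field names are assumptions for illustration only.

```python
def produce_signals(virtual_object, person_3d, camera_pose):
    """Sketch of steps 502-506: given 3D information about a virtual object
    and a person, emit a viewer signal and a supplemental signal that
    describe the same scene from two different perspectives."""
    viewer_signal = {
        "perspective": camera_pose,        # scene as seen by the studio camera
        "contents": [virtual_object, person_3d["id"]],
    }
    supplemental_signal = {
        "perspective": person_3d["pose"],  # scene as seen by the person
        "contents": [virtual_object],      # the person need not see themselves
    }
    return viewer_signal, supplemental_signal

viewer, supplemental = produce_signals(
    "OBJECT_630",
    {"id": "PERSON_620", "pose": (2.0, 1.5, 0.0, 45.0)},  # x, y, z, heading
    (0.0, 0.0, 1.5, 0.0),
)
```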
-
FIG. 5B is a flow chart of a method that may be performed in accordance with some embodiments described herein. At 512, a video signal including an image of a person may be received, and a virtual object may be added to the video signal to create a viewer signal at 514. The virtual object may be any type of virtual image, including for example, a Computer Generated Image (“CGI”) object, including a virtual human and/or a video game character, object or sound. - At 516, location information associated with a spatial relationship between the person and the virtual object may be determined. According to some embodiments, the location information may be determined by sensors or by analyzing the video signal from the camera. Moreover, a plurality of video signals might be received and analyzed by a graphics platform to model the person's appearance and to determine a three dimensional location of the person. Other types of location information may include a distance between the person and virtual object, one or more angles associated with the person and virtual object, and/or an orientation of the person (e.g., where he or she is currently looking). Note that other types of RTLS sensors (e.g., using sound waves or any other way of measuring distance) may also be used.
- At 518, a supplemental signal may be created based on the location information. In particular, the supplemental signal may include a view of the virtual object or a perspective of the virtual object as would be seen or perceived from the person's perspective. The perception of the virtual object might comprise a marker (e.g., a dot or “x” indicating where a person should look, or a sound when a person looks in the right direction), a lower resolution image as compared with the viewer signal, an image updated at a lower frame rate as compared with the viewer signal, and/or include a dynamically generated occlusion zone. According to some embodiments, the supplemental signal is further based on an orientation of the person's line of sight (e.g., the supplemental video signal may be updated when a person turns his or her head). Moreover, multiple people and/or virtual objects may be involved in the scene and/or included in the supplemental signal. In this case, a supplemental signal may be created for each person, and each supplemental signal would include a view or perception of the virtual objects as would be seen or perceived from that person's perspective.
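The marker-or-sound idea at 518 might be sketched as a simple proximity cue. The threshold, function name, and "beep" string are assumptions; a real system would derive positions from the table of 3D information rather than take them as literals.

```python
def proximity_cue(person_pos, object_pos, threshold_m=0.5):
    """Return an audio cue when the person is within threshold_m meters of
    the virtual object, mirroring the 'beep' example described earlier."""
    d = sum((a - b) ** 2 for a, b in zip(person_pos, object_pos)) ** 0.5
    return "beep" if d <= threshold_m else None

near = proximity_cue((0.0, 0.0, 0.0), (0.3, 0.0, 0.0))  # within reach
far = proximity_cue((0.0, 0.0, 0.0), (2.0, 0.0, 0.0))   # out of reach
```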
- The supplemental signal may then be transmitted to a secondary device (e.g., a display device). According to some embodiments, the display device may be worn by the person, such as an eyeglasses display, a retinal display, and/or a contact lens display, or a hearing aid (for rendering sound information). Moreover, according to some embodiments, the supplemental signal is wirelessly transmitted to the secondary device; hence, the supplemental signal and its display to the performing person may be almost transparent to a viewer of the final broadcast.
- Moreover, according to some embodiments, a command from the person may be detected and, responsive to said detection, the virtual object may be adjusted. Such a command might comprise, for example, an audible command and/or a body gesture command. For example, a graphics platform might detect that the person has “grabbed” a virtual object and then move the image of the virtual object as the person moves his or her hands. As another example, a person may gesture or verbally order that the motion of a virtual object be paused and/or modified. As another example, a guest or another third person (or group of persons), without access to the devices enabling perception of the virtual image, may gesture or verbally order motion of the virtual object, causing the virtual object to move (and for such movement to be perceived from the perspective of the original person wearing the detection device). For example, when an audience claps or laughs, the sound might cause the virtual object to take a bow, which the person may then be able to perceive via information provided in the supplemental feed.
- As used herein, the phrases “video feed” and “image” may refer to any signal conveying information about a moving or still image, including audio signals and including a High Definition-Serial Data Interface (“HD-SDI”) signal transmitted in accordance with the Society of Motion Picture and Television Engineers 292M standard. Although HD signals may be described in some examples presented herein, note that embodiments may be associated with any other type of video feed, including a standard broadcast feed and/or a three dimensional image feed. Moreover, video feeds and/or received images might comprise, for example, an HD-SDI signal exchanged through a fiber cable and/or a satellite transmission. Moreover, the video cameras described herein may be any device capable of generating a video feed, such as a Sony® studio (or outside) broadcast camera.
- Thus, systems and methods may be provided to improve the production of video presentations involving augmented reality technology. Specifically, some embodiments may produce an improved immersive video mixing subjects and a virtual environment. This might be achieved, for example, by reconstructing the subject's video and/or presenting the subject with a “subject-view” of the virtual environment. This may facilitate interactions between subjects and the virtual elements and, according to some embodiments, let a subject alter a progression and/or appearance of virtual imagery through gestures or audible sounds.
- Augmented reality may fuse real scene video with computer generated imagery. In such a fusion, the virtual environment may be rendered from the perspective of a camera or other device that is used to capture the real scene video (or audio). Hence, knowledge of the camera's parameters may be required along with distances of real and virtual objects relative to the camera to resolve occlusion. For example, the image of part of a virtual element may be occluded by the image of a physical element in the scene or vice versa. Another aspect of enhancing video presentation through augmented reality is handling the interaction between the real and the virtual elements.
- For example, a sports anchorperson may analyze maneuvers during a game or play segment. In preparation for a show, a producer might request a graphical presentation of a certain play in a game that the anchor wants to analyze. This virtual playbook might comprise a code module that, when executed on a three dimensional rendering engine, may generate a three dimensional rendering of the play. The synthesized play may then be projected from the perspective of the studio camera. To analyze the play, the anchor's video image may be rendered so that he or she appears standing on the court (while actually remaining in a studio) among the virtual players. He or she may then deliver the analysis while virtually engaging with the players. To position himself or herself relative to the virtual players, the anchor typically looks at a camera screen and rehearses the movements beforehand. Even then, it may be a challenge to make the interaction between a real person and a virtual person look natural.
- Thus, when one or more persons interact with virtual content they may occasionally shift their focus to a video feed of the broadcast signal. This may create two problems. First, a person may appear to program viewers as unfocused because his or her gaze is directed slightly off from the camera shooting the program. Second, the person might not easily interact with the virtual elements, or move through or around a group of virtual elements (whether such virtual elements are static or dynamic). A person who appears somewhat disconnected from the virtual content may undermine the immersive effect of the show. Also note that interactions may be laborious from a production standpoint (requiring several re-takes and re-shoots when the person looks away from the camera, misses a line due to interacting incorrectly with virtual elements, etc.).
- To improve interactions with virtual content,
FIG. 6 is a block diagram of a system 600 that may be provided in accordance with some embodiments. The system 600 creates an augmented reality environment using a camera 640 to capture a video sequence of a real-world scene 610 including a person 620 and generates a graphical sequence that includes a virtual object 630. For example, in a studio, the broadcast camera 640 may record the person 620 (possibly using a “green screen” in the background) or on location “in the field.” A “virtual camera” (the perspective used by a graphic engine to render a virtual environment) may be aligned with the broadcast camera 640 so that the rendered environment matches the person's scale, movements, etc. Typically, the broadcast camera's perspective (including position, roll, pan, tilt, and focal-length) is extracted using sensors mounted on the broadcast camera 640 or by analyzing the video frames received from the broadcast camera 640. According to some embodiments described herein, capabilities to improve the accuracy and realism of the mixed real and virtual production may be provided by the system 600. Note that a person 620 who cannot “see” the virtual object 630 he or she interacts with will have less natural and accurate interactions with the virtual content. The system 600 disclosed herein may improve and extend the interaction between the person 620 and the virtual object 630 and allow the person 620 to spatially and temporally affect the rendering of the virtual object 630 during the production. - According to some embodiments, the
video of the person 620 (from the camera 640) may be altered to refine his or her pose (and/or possibly appearance) before mixing it with the virtual environment. This may be done by determining the person's three dimensional model, including obtaining a three dimensional surface and skeleton representation (for example, based on an analysis of videos from multiple views), so that the image of the person 620 at a certain location and pose in the scene may be altered in relation to the virtual object. According to some embodiments, the person 620 may be equipped with an HMVD 660 (e.g., three dimensional glasses, virtual retinal displays, etc.) through which he or she can view the virtual environment, including the virtual object 630, from his or her perspective. That is, the virtual object 630 may be displayed to the person from his or her own perspective in a way that enhances the person's ability to navigate through the virtual world and to interact with the content without overly complicating the production workflow. - According to some embodiments, a 3D model of the person 620 (including his or her location, pose, surface, and texture and color characteristics) may be obtained through an analysis of the
broadcast camera 640 video and potentially auxiliary cameras and/or sensors 642 (attached to the person or external to the person). Once a 3D model of the person 620 is obtained, the image of the person 620 may be reconstructed into a new image that shows the person with a new pose and/or appearance relative to the virtual elements. According to some embodiments, the “viewer video” may be served to the audience. In addition, according to some embodiments, the person's location and pose may be submitted to a three dimensional graphic engine 680 to render a supplemental virtual environment view (e.g., a second virtual camera view) from the perspective of the person 620. This second virtual camera view is presented to the person 620 through the HMVD 660 or any other display device. In one embodiment, the second virtual camera view may be presented in a semi-transparent manner, so the person 620 can still see his or her surrounding real-world environment (e.g., studio, cameras, another person 622, etc.). In yet another embodiment, the second camera view might be presented to the person on a separate screen at the studio. Such an approach may eliminate the need to wear a visible display such as the HMVD 660, but will require the person 620 to look at a monitor instead of directly at the virtual object 630. - To improve the interaction among real and virtual objects, some embodiments use computer-vision techniques to recognize and track the
people in the scene 610. A three dimensional model of an anchor may be estimated and used to reconstruct his or her image at different orientations and poses relative to the virtual players or objects 630, 632. The anchor (or any object relative to the anchor) may be reconstructed, according to some embodiments, at different relative sizes, locations, or appearances. Three dimensional reconstruction of objects may be done, for example, based on an analysis of video sequences from which three dimensional information of static and dynamic objects was extracted. - With two or more camera views, according to some embodiments, an object's or person's structure and characteristics might be modeled. For example, based on stereoscopic matching of corresponding pixels from two or more views of a physical object, the cameras' parameters (pose) may be estimated. Note that knowledge of the cameras' poses may, in turn, provide the corresponding real-world coordinates for each of an object's pixels. As a result, when fusing the image of a physical object with virtual content (e.g., computer-generated imagery), the physical object's position in real-world coordinates may be considered relative to the virtual content to resolve problems of overlap and order (e.g., occlusion issues).
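For illustration, the stereoscopic-matching step described above, in which a matched pixel pair from two calibrated views yields a real-world coordinate, can be sketched with standard linear (DLT) triangulation. This is a minimal NumPy example under assumed inputs (the function name, camera set-up, and variable names are illustrative and not taken from the described embodiments):

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Estimate a 3D world point from two pixel observations.

    P1, P2 : 3x4 camera projection matrices (estimated camera poses).
    x1, x2 : (u, v) pixel coordinates of the matched point in each view.
    Each view contributes two rows to a homogeneous system A X = 0,
    which is solved by SVD (the classic linear triangulation method).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to real-world coordinates
```

With the two cameras' poses known, every matched pixel pair maps to a point in real-world coordinates, which is what allows the overlap-and-order (occlusion) problems mentioned above to be resolved against the virtual content.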
- According to some embodiments, an order among physical and graphical elements may be facilitated using a depth map. A depth map of a video image may provide the distance between a point in the scene 610 (projected in the image) and the
camera 640. Hence, a depth map may be used to determine what part of the image of a physical element should be rendered into the computer generated image, for example, and what part is occluded by a virtual element (and therefore should not be rendered). According to some embodiments, this information may be encoded in a binary occlusion mask. For example, a mask pixel set to "1" might indicate that a physical element's image pixel should be keyed-in (i.e., rendered), while "0" indicates that it should not be keyed-in. A depth map may be generated, according to some embodiments, either by processing the video sequences of multiple views of the scene or by a three dimensional camera such as a Light Detection And Ranging ("LIDAR") camera. A LIDAR camera may be associated with an optical remote sensing technology that measures the distance to, or other properties of, a target by illuminating the target with light (e.g., using laser pulses). A LIDAR camera may use ultraviolet, visible, or near infrared light to locate and image objects based on the reflected time of flight. This information may then be used in connection with any of the embodiments described herein. Other technologies utilizing RF, infrared, and ultra-wideband signals may be used to measure relative distances of objects in the scene. Note that a similar effect might be achieved using sound waves to determine an anchorperson's location. - Note that a covered scene might include one or more persons (physical objects) 620, 622 that perform relative to virtual objects or
elements 630, 632. The person 620 may have a general idea as to the whereabouts and motion of these virtual elements 630, 632, although he or she cannot "see" them in real life. The scene may be captured by a broadcast (main) camera 640. A control-system 660 may drive the broadcast camera 640 automatically or via an operator. In addition to the main camera 640, there may be any number of additional cameras and/or sensors 642 positioned at the scene 610 to capture video or any other telemetry (e.g., RF, UWB, audio, etc.) measuring the appearance and structure of the scene 610. - The control-
system 660, operated either automatically or by an operator, may manage the production process. For instance, a game (e.g., a sequence of computer-generated imagery data) including a court/field with playing athletes (e.g., the virtual objects 630, 632) may be selected from a CGI database 670. A camera perspective may then be determined and submitted to the broadcast camera 640 as well as to a three dimensional graphic engine 680. The graphic engine 680 may, according to some embodiments, receive the broadcast camera's model directly from camera-mounted sensors. According to other embodiments, vision-based methods may be utilized to estimate the broadcast camera's model. The three dimensional graphic engine 680 may render the game from the same camera perspective as the broadcast camera 640. Next, the virtual render of the game may be fused with video capture of the people in the scene 610 to show all of the elements in an immersive fashion. According to some embodiments, a "person" may be able to know where he or she is in relation to the virtual object without having to disengage from the scene itself. In the event of a "horror" movie, for instance, the person may never have to see the virtual object in order to react logically to its eerie presence, the totality of which is being transmitted to the viewing audience. Note that this interaction may be conveyed to the audience, such as by merging the virtual and physical into the "viewer video." - Based on analyses of the video, data, and/or audio streams and the telemetry signals fed to the
video processor unit 650, various information details may be derived, such as: (i) the image foreground region of the physical elements (or persons), (ii) three dimensional modeling and characteristics, and/or (iii) real world locations. Relative to the virtual elements' presence in the scene 610, as defined by the game (e.g., in the CGI database 670), the physical elements may be reconstructed. Moreover, the pose and appearance of each physical element may be reconstructed, resulting in a new video or rendition (replacing the main camera 640 video) in which the new pose and appearance is in a more appropriate relation to the virtual objects 630, 632. The video processor 650 may also generate an occlusion mask that, together with the video, may be fed into a mixer 690, where fusion of the video and the computer-generated imagery takes place. - In some embodiments, the
person 620 interacting with one or more virtual objects 630, 632 uses, for example, an HMVD 660 to "see" or perceive these virtual elements from his or her vantage point. Moreover, the person 620 (or other persons) may be able, using gestures or voice, to affect the virtual object's motion. In some embodiments, a head-mounted device for tracking the person's gaze may be used as a means for interaction. - The
video processor 650, where the person's location and pose in real world coordinates are computed, may send the calculated person's perspective, or an altered person's perspective, to the three dimensional graphic engine 680. The graphic engine 680, in turn, may render the virtual elements and/or environment from the received person's perspective (vantage point) and send this computer generated imagery, wirelessly, to the person's HMVD 660. The person's gestures and/or voice may be measured by the plurality of cameras and sensors 642, and may be recognized and translated by the video processor 650 to be interpreted as a command. These commands may be used to alter the progression and appearance of the virtual play (e.g., pause, slow-down, replay, any special effect, etc.). - Thus, based on the person's location and movements, a version of the virtual image may be transmitted separately to the
person 620. Note that this version can be simple, such as CAD-like drawings or an audio "beep," or more complex, such as the entire look and feel of the virtual image used for the broadcast program. As the person 620 moves, or as the virtual objects 630, 632 move around the person 620, the person 620 may see the virtual image from his or her own perspective, and the virtual image presented as part of the programming may change. The person 620 may interact with the virtual objects 630, 632 (to make them appear, disappear, move, change, multiply, shrink, grow, etc.) through gestures, voice, or any other means. This image may then be transmitted to the person's "normal" eyeglasses (or contact lenses) through which the image is beamed to the person's retina (e.g., a virtual retinal display) or projected on the eyeglasses' lenses. A similar effect could be obtained using hearing devices (e.g., where a certain sound is transmitted as the person interacts with the virtual object). - Note that some embodiments may be applied to facilitate interaction between two or more persons captured by two or more different cameras from different locations (and, possibly, at different times). For example, an interviewer at the studio, with the help of an HMVD, may "see" the video reconstruction of an interviewee. This video reconstruction may be from the perspective of the interviewer. Similarly, the interviewee may be able to "see" the interviewer from his or her perspective. Such a capability may facilitate a more realistic interaction between the two people.
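Rendering the virtual elements from the person's vantage point, as described above, amounts to supplying the graphic engine with a view transform derived from the person's tracked head position and the point he or she faces. The sketch below is a standard "look-at" construction in hypothetical NumPy code (the function and parameter names are illustrative assumptions, not part of the described embodiments):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """World-to-view matrix for rendering from a tracked vantage point.

    eye    : the person's estimated head position in world coordinates.
    target : a point the person is looking toward (e.g., a virtual object).
    Returns a 4x4 matrix mapping world coordinates into the person's view
    space, usable as the second virtual camera's view matrix.
    """
    eye = np.asarray(eye, float)
    target = np.asarray(target, float)
    up = np.asarray(up, float)
    f = target - eye
    f /= np.linalg.norm(f)                 # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                 # right axis
    u = np.cross(s, f)                     # corrected up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye      # translate world origin into view space
    return view
```

As the tracked `eye` position changes frame by frame, recomputing this matrix and re-rendering yields the per-person perspective that the HMVD presents, independent of the broadcast camera's view.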
-
FIG. 7 is a block diagram of a graphics platform 700 that might be associated with, for example, the system 300 of FIG. 3 and/or the system 600 of FIG. 6 in accordance with some embodiments of the present invention. The graphics platform 700 comprises a processor 710, such as one or more INTEL® Pentium® processors, coupled to communication devices 720 configured to communicate with remote devices (not shown in FIG. 7). The communication devices 720 may be used, for example, to receive a video feed, information about a virtual object, and/or location information about a person. - The
processor 710 is also in communication with an input device 740. The input device 740 may comprise, for example, a keyboard, a mouse, a computer media reader, or even a system such as that described by this invention. Such an input device 740 may be used, for example, to enter information about a virtual object, a background, or remote and/or studio camera set-ups. The processor 710 is also in communication with an output device 750. The output device 750 may comprise, for example, a display screen, printer, or audio speaker. Such an output device 750 may be used, for example, to provide information about a camera set-up to an operator. - The
processor 710 is also in communication with a storage device 730. The storage device 730 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., hard disk drives), optical storage devices, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices. - The
storage device 730 stores a graphics platform application 735 for controlling the processor 710. The processor 710 performs instructions of the application 735, and thereby operates in accordance with any of the embodiments of the present invention described herein. For example, the processor 710 may receive a scene signal, whether or not including an image of a person, from a video camera. The processor 710 may insert a virtual object into the scene signal to create a viewer signal, such that the viewer signal includes a view of the virtual object as would be seen from the video camera's perspective. The processor 710 may also create a supplemental signal, such that the supplemental signal includes information related to the view of the virtual object as would be seen from the person's perspective. - As used herein, information may be "received" by or "transmitted" to, for example: (i) the
graphics platform 700 from other devices; or (ii) a software application or module within the graphics platform 700 from another software application, module, or any other source. - As shown in
FIG. 7, the storage device 730 also stores a rendering engine application 735 and virtual object and location data 800. One example of such a database 800 that may be used in connection with the graphics platform 700 will now be described in detail with respect to FIG. 8. The illustration and accompanying descriptions of the database presented herein are exemplary, and any number of other database arrangements could be employed besides those suggested by the figures. -
FIG. 8 is a tabular representation of a portion of a virtual object and location data table 800 in accordance with some embodiments of the present invention. The table 800 includes entries associated with virtual objects and location information about people in a scene. The table 800 also defines fields for each of the entries. The fields might specify a virtual object or person identifier, a three dimensional location of an object or person, angular orientation information, a distance between a person and an object, occlusion information, field of view data, etc. The information in the database 800 may be periodically created and updated based on information received from a location sensor worn by a person. - Thus, embodiments described herein may use three dimensional information to adjust and/or tune the rendering of a person or object in a scene, and thereby simplify preparation of a program segment. It may let the person focus on content delivery, knowing that his or her performance may be refined by reconstructing his or her pose, location, and, according to some embodiments, appearance relative to the virtual environment. Moreover, the person may receive a different image and/or perspective from what is provided to the viewer. This may significantly improve the person's ability to interact with the virtual content (e.g., reducing the learning curve for the person and allowing production to happen with fewer takes). In addition, to change or move a virtual element, a person may avoid interacting with an actual monitor (like a touch screen) or pre-coordinating his or her movements so as to appear as if an interaction is happening. That is, according to some embodiments described herein, a person's movements (or spoken words, etc.) can cause the virtual images to change, move, etc.
Further, embodiments may reduce the use of bulky monitors, which may free up studio space and increase the portability of the operation (freeing a person to work in a variety of studio environments, including indoor and outdoor environments).
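For illustration only, the kinds of fields described for the virtual object and location data table 800 above might be modeled as a simple record; the field names below are assumptions chosen to mirror the listed fields, not names taken from the patent figures:

```python
from dataclasses import dataclass

@dataclass
class SceneEntry:
    """One illustrative row of a virtual object and location data table."""
    entity_id: str                              # virtual object or person identifier
    location: tuple                             # three dimensional (x, y, z) position
    orientation_deg: tuple = (0.0, 0.0, 0.0)    # angular orientation information
    distance_to_object: float = 0.0             # person-to-virtual-object distance
    occluded: bool = False                      # occlusion information
    fov_deg: float = 60.0                       # field of view data
```

Entries of this shape could be created and periodically updated from a worn location sensor, as the description suggests.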
- The following illustrates various additional embodiments of the invention. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.
- Although three dimensional effects have been described in some of the examples presented herein, note that other effects might be incorporated in addition to (or instead of) three dimensional effects in accordance with the present invention. Moreover, although specific hardware and data configurations have been described herein, note that any number of other configurations may be provided in accordance with embodiments of the present invention (e.g., some of the information associated with the databases and engines described herein may be split, combined, and/or handled by external systems). Further note that embodiments may be associated with any number of different types of broadcast programs (e.g., sports, news, and weather programs).
- The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described, but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims.
Claims (26)
1. A method comprising:
receiving, at a graphics platform, 3D information about a virtual object;
determining 3D information associated with a person in a scene; and
creating, by the graphics platform: (i) a viewer signal of the scene and (ii) a supplemental signal, wherein the viewer signal and the supplemental signal are from different perspectives.
2. The method of claim 1 , wherein the viewer signal is a viewer video signal generated from a perspective of a video camera filming the scene and wherein the viewer video signal includes the virtual object and a representation of the person.
3. The method of claim 1 , wherein the supplemental signal is a view of the virtual object from the perspective of the person.
4. The method of claim 1 , wherein the supplemental signal is a sensory signal indicative of the location of the virtual object relative to the person.
5. The method of claim 4 , wherein the sensory signal is associated with at least one of video information, audio information, pressure information, or thermal information.
6. The method of claim 1 , wherein the 3D information associated with the person is received from one or more sensors.
7. The method of claim 1 , wherein the 3D information associated with the person is determined by analyzing at least one of (i) video from at least one video camera or (ii) telemetry data from one or more sensors.
8. The method of claim 7 , wherein a plurality of video signals are received and analyzed by the graphics platform to determine at least one of: (i) a three dimensional location of the person, (ii) a distance between the person and virtual object, (iii) one or more angles associated with the person and virtual object, or (iv) an orientation of the person.
9. The method of claim 1 , wherein the view of the virtual object within the supplemental signal is associated with at least one of: (i) a marker, (ii) a lower or different resolution image as compared with the viewer signal, (iii) a lower or different frame rate image as compared with the viewer signal, or (iv) a dynamically generated occlusion zone.
10. The method of claim 9 , further comprising:
transmitting the supplemental signal to a display device worn by the person.
11. The method of claim 10 , wherein the display device comprises at least one of: (i) an eyeglasses display, (ii) a retinal display, (iii) a contact lens display, or (iv) a hearing aid or other in-ear device.
12. The method of claim 10 , wherein the supplemental signal is wirelessly transmitted to the display device.
13. The method of claim 1 , wherein the supplemental signal is further based on an orientation of the person's line of sight and/or body position.
14. The method of claim 1 , wherein multiple people and virtual objects are able to interact, whether or not any or all such images are included in the final scene presented to the viewing audience, and further comprising:
creating a supplemental signal for each person, wherein each supplemental signal includes a view or a means for perceiving the location of the virtual objects as would be seen or perceived from that person's perspective.
15. The method of claim 1 , further comprising:
detecting a command from an entity; and
responsive to said detection, adjusting the virtual object.
16. The method of claim 15 , wherein the command comprises at least one of: (i) an audible command, (ii) a gesture command, or (iii) a third party or second virtual object command.
17. The method of claim 1 , wherein the virtual object is associated with at least one of: (i) a virtual human, (ii) a video game, or (iii) an avatar.
18. A system, comprising:
a device to capture a feed including an image or location data of a person;
a platform to receive the feed from the device and to render a supplemental feed in substantially real-time, based on 3D information relating to the person and 3D information relating to a virtual object, wherein the supplemental feed includes sufficient information to provide the person with a means of perceiving a location of the virtual object as would be perceived from the person's perspective; and
a device worn by the person to receive and present to the person the supplemental feed.
19. The system of claim 18 , wherein the platform includes at least one of: (i) a computer generated image database, (ii) a control system, (iii) a video processor, (iv) an audio processor, (v) a three dimensional graphics engine, (vi) a mixer, or (vii) a location sensor to detect a location of the person.
20. The system of claim 18 , wherein the device comprises a light detection and ranging camera.
21. The system of claim 18 , wherein the platform is further to render a viewer feed including information of the virtual object as would be necessary to perceive the virtual object from a camera's perspective.
22. The system of claim 18 , wherein the platform receives two feeds that include images of the person and performs stereoscopic matching of pixels within the feeds to determine a three dimensional location of the person.
23. The system of claim 18 , wherein the platform uses a depth map to create a binary occlusion mask for the supplemental feed.
24. A non-transitory, computer-readable medium storing instructions adapted to be executed by a processor to perform a method, the method comprising:
receiving a signal from a camera, the signal including an image of a person;
inserting a virtual object into the signal to create a viewer signal, wherein the viewer signal includes a view of the virtual object as would be seen from the camera's perspective; and
creating a supplemental video signal, wherein the supplemental video signal includes a view of the virtual object as would be seen from the person's perspective.
25. The medium of claim 24 , wherein the method further comprises:
outputting the supplemental video signal to a device to be perceived by the person.
26. The medium of claim 24 , wherein the method further comprises:
modeling the person based on the received video signal; and
using the model to adjust image information associated with the person in the viewer signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/292,560 US20120200667A1 (en) | 2011-02-08 | 2011-11-09 | Systems and methods to facilitate interactions with virtual content |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161440675P | 2011-02-08 | 2011-02-08 | |
US13/292,560 US20120200667A1 (en) | 2011-02-08 | 2011-11-09 | Systems and methods to facilitate interactions with virtual content |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120200667A1 true US20120200667A1 (en) | 2012-08-09 |
Family
ID=46600388
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/292,560 Abandoned US20120200667A1 (en) | 2011-02-08 | 2011-11-09 | Systems and methods to facilitate interactions with virtual content |
US13/368,510 Active US9242177B2 (en) | 2011-02-08 | 2012-02-08 | Simulated sports events utilizing authentic event information |
US13/368,895 Active 2032-04-13 US8990842B2 (en) | 2011-02-08 | 2012-02-08 | Presenting content and augmenting a broadcast |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/368,510 Active US9242177B2 (en) | 2011-02-08 | 2012-02-08 | Simulated sports events utilizing authentic event information |
US13/368,895 Active 2032-04-13 US8990842B2 (en) | 2011-02-08 | 2012-02-08 | Presenting content and augmenting a broadcast |
Country Status (1)
Country | Link |
---|---|
US (3) | US20120200667A1 (en) |
US9875665B2 (en) | 2014-08-18 | 2018-01-23 | Illinois Tool Works Inc. | Weld training system and method |
US10264175B2 (en) | 2014-09-09 | 2019-04-16 | ProSports Technologies, LLC | Facial recognition for event venue cameras |
US11247289B2 (en) | 2014-10-16 | 2022-02-15 | Illinois Tool Works Inc. | Remote power supply parameter adjustment |
US10239147B2 (en) | 2014-10-16 | 2019-03-26 | Illinois Tool Works Inc. | Sensor-based power controls for a welding system |
US10417934B2 (en) | 2014-11-05 | 2019-09-17 | Illinois Tool Works Inc. | System and method of reviewing weld data |
US10373304B2 (en) | 2014-11-05 | 2019-08-06 | Illinois Tool Works Inc. | System and method of arranging welding device markers |
US10402959B2 (en) | 2014-11-05 | 2019-09-03 | Illinois Tool Works Inc. | System and method of active torch marker control |
US10204406B2 (en) | 2014-11-05 | 2019-02-12 | Illinois Tool Works Inc. | System and method of controlling welding system camera exposure and marker illumination |
US10210773B2 (en) | 2014-11-05 | 2019-02-19 | Illinois Tool Works Inc. | System and method for welding torch display |
US10490098B2 (en) | 2014-11-05 | 2019-11-26 | Illinois Tool Works Inc. | System and method of recording multi-run data |
US10427239B2 (en) | 2015-04-02 | 2019-10-01 | Illinois Tool Works Inc. | Systems and methods for tracking weld training arc parameters |
EP3316980A1 (en) * | 2015-06-30 | 2018-05-09 | Amazon Technologies Inc. | Integrating games systems with a spectating system |
US10657839B2 (en) | 2015-08-12 | 2020-05-19 | Illinois Tool Works Inc. | Stick welding electrode holders with real-time feedback features |
US10438505B2 (en) | 2015-08-12 | 2019-10-08 | Illinois Tool Works Inc. | Welding training system interface |
US10373517B2 (en) | 2015-08-12 | 2019-08-06 | Illinois Tool Works Inc. | Simulation stick welding electrode holder systems and methods |
US10593230B2 (en) | 2015-08-12 | 2020-03-17 | Illinois Tool Works Inc. | Stick welding electrode holder systems and methods |
US9866575B2 (en) * | 2015-10-02 | 2018-01-09 | General Electric Company | Management and distribution of virtual cyber sensors |
US11115720B2 (en) * | 2016-12-06 | 2021-09-07 | Facebook, Inc. | Providing a live poll within a video presentation |
US10636449B2 (en) * | 2017-11-06 | 2020-04-28 | International Business Machines Corporation | Dynamic generation of videos based on emotion and sentiment recognition |
CN108434698B (en) * | 2018-03-05 | 2020-02-07 | 西安财经学院 | Sports ball game teaching system |
US10621983B2 (en) * | 2018-04-20 | 2020-04-14 | Spotify Ab | Systems and methods for enhancing responsiveness to utterances having detectable emotion |
EP3870320A4 (en) | 2018-10-22 | 2022-06-22 | Sony Interactive Entertainment LLC | Remote networked services for providing contextual game guidance |
EP3647909A1 (en) * | 2018-10-30 | 2020-05-06 | Nokia Technologies Oy | Multi-user environment |
US11568713B2 (en) * | 2019-01-21 | 2023-01-31 | Tempus Ex Machina, Inc. | Systems and methods for making use of telemetry tracking devices to enable event based analysis at a live game |
US11311808B2 (en) | 2019-01-21 | 2022-04-26 | Tempus Ex Machina, Inc. | Systems and methods to predict a future outcome at a live sport event |
US11381739B2 (en) * | 2019-01-23 | 2022-07-05 | Intel Corporation | Panoramic virtual reality framework providing a dynamic user experience |
JP6722316B1 (en) * | 2019-03-05 | 2020-07-15 | 株式会社コロプラ | Distribution program, distribution method, computer, and viewing terminal |
US11776423B2 (en) | 2019-07-22 | 2023-10-03 | Illinois Tool Works Inc. | Connection boxes for gas tungsten arc welding training systems |
US11288978B2 (en) | 2019-07-22 | 2022-03-29 | Illinois Tool Works Inc. | Gas tungsten arc welding training systems |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
KR20220133249A (en) | 2020-01-30 | 2022-10-04 | 스냅 인코포레이티드 | A system for creating media content items on demand |
US11284144B2 (en) * | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11561610B2 (en) | 2020-03-11 | 2023-01-24 | Moea Technologies, Inc. | Augmented audio conditioning system |
US11305195B2 (en) * | 2020-05-08 | 2022-04-19 | T-Mobile Usa, Inc. | Extended environmental using real-world environment data |
US11452940B2 (en) | 2020-06-09 | 2022-09-27 | International Business Machines Corporation | Real-world activity simulation augmentation with real-world data of the activity |
CN112138407A (en) * | 2020-08-31 | 2020-12-29 | 杭州威佩网络科技有限公司 | Information display method and device |
US11904244B1 (en) * | 2021-02-16 | 2024-02-20 | Carrick J. Pierce | Multidimensional sports system |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030210259A1 (en) * | 2001-11-14 | 2003-11-13 | Liu Alan V. | Multi-tactile display haptic interface device |
US20040032410A1 (en) * | 2002-05-09 | 2004-02-19 | John Ryan | System and method for generating a structured two-dimensional virtual presentation from less than all of a three-dimensional virtual reality model |
US20040105573A1 (en) * | 2002-10-15 | 2004-06-03 | Ulrich Neumann | Augmented virtual environments |
US20050255434A1 (en) * | 2004-02-27 | 2005-11-17 | University Of Florida Research Foundation, Inc. | Interactive virtual characters for training including medical diagnosis training |
US7007236B2 (en) * | 2001-09-14 | 2006-02-28 | Accenture Global Services Gmbh | Lab window collaboration |
US20060170652A1 (en) * | 2005-01-31 | 2006-08-03 | Canon Kabushiki Kaisha | System, image processing apparatus, and information processing method |
US20070271301A1 (en) * | 2006-05-03 | 2007-11-22 | Affinity Media Uk Limited | Method and system for presenting virtual world environment |
US20070285506A1 (en) * | 2006-04-28 | 2007-12-13 | Roland Schneider | Court video teleconferencing system & method |
US20080147475A1 (en) * | 2006-12-15 | 2008-06-19 | Matthew Gruttadauria | State of the shelf analysis with virtual reality tools |
US7427996B2 (en) * | 2002-10-16 | 2008-09-23 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20080291277A1 (en) * | 2007-01-12 | 2008-11-27 | Jacobsen Jeffrey J | Monocular display device |
US20090033737A1 (en) * | 2007-08-02 | 2009-02-05 | Stuart Goose | Method and System for Video Conferencing in a Virtual Environment |
US20100091112A1 (en) * | 2006-11-10 | 2010-04-15 | Stefan Veeser | Object position and orientation detection system |
US20100103196A1 (en) * | 2008-10-27 | 2010-04-29 | Rakesh Kumar | System and method for generating a mixed reality environment |
US20100220932A1 (en) * | 2007-06-20 | 2010-09-02 | Dong-Qing Zhang | System and method for stereo matching of images |
US20100271983A1 (en) * | 2009-04-22 | 2010-10-28 | Bryant Joshua R | Wireless headset communication system |
US7986803B1 (en) * | 2007-05-10 | 2011-07-26 | Plantronics, Inc. | Ear bud speaker earphone with retainer tab |
US20120093320A1 (en) * | 2010-10-13 | 2012-04-19 | Microsoft Corporation | System and method for high-precision 3-dimensional audio for augmented reality |
US9266017B1 (en) * | 2008-12-03 | 2016-02-23 | Electronic Arts Inc. | Virtual playbook with user controls |
Family Cites Families (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6080063A (en) * | 1997-01-06 | 2000-06-27 | Khosla; Vinod | Simulated real time game play with live event |
GB9708061D0 (en) * | 1997-04-22 | 1997-06-11 | Two Way Tv Ltd | Interactive, predictive game control system |
US6292706B1 (en) * | 1998-04-17 | 2001-09-18 | William E. Welch | Simulated baseball game |
TW463503B (en) * | 1998-08-26 | 2001-11-11 | United Video Properties Inc | Television chat system |
EP1003313B1 (en) * | 1998-09-11 | 2004-11-17 | Two Way Media Limited | Delivering interactive applications |
US7211000B2 (en) * | 1998-12-22 | 2007-05-01 | Intel Corporation | Gaming utilizing actual telemetry data |
US7120880B1 (en) * | 1999-02-25 | 2006-10-10 | International Business Machines Corporation | Method and system for real-time determination of a subject's interest level to media content |
US6259486B1 (en) * | 1999-10-20 | 2001-07-10 | A. Pascal Mahvi | Sensor unit for controlling television set operation |
WO2001045004A1 (en) * | 1999-12-17 | 2001-06-21 | Promo Vu | Interactive promotional information communicating system |
WO2002009833A1 (en) * | 2000-08-02 | 2002-02-07 | Timothy James Ball | Simulation system |
JP4765182B2 (en) * | 2001-01-19 | 2011-09-07 | ソニー株式会社 | Interactive television communication method and interactive television communication client device |
JP2002224441A (en) * | 2001-02-01 | 2002-08-13 | Konami Computer Entertainment Osaka:Kk | Game progress control program, game server and game progress control method |
US7113916B1 (en) * | 2001-09-07 | 2006-09-26 | Hill Daniel A | Method of facial coding monitoring for the purpose of gauging the impact and appeal of commercially-related stimuli |
US20030063222A1 (en) * | 2001-10-03 | 2003-04-03 | Sony Corporation | System and method for establishing TV setting based on viewer mood |
JP4028708B2 (en) * | 2001-10-19 | 2007-12-26 | 株式会社コナミデジタルエンタテインメント | GAME DEVICE AND GAME SYSTEM |
US20060177109A1 (en) * | 2001-12-21 | 2006-08-10 | Leonard Storch | Combination casino table game imaging system for automatically recognizing the faces of players--as well as terrorists and other undesirables-- and for recognizing wagered gaming chips |
AU2002368117A1 (en) * | 2002-07-26 | 2004-02-16 | National Institute Of Information And Communications Technology Incorporated Administrative Agency | Image recognition apparatus and image recognition program |
US8176518B1 (en) * | 2002-08-30 | 2012-05-08 | Rovi Technologies Corporation | Systems and methods for providing fantasy sports contests based on subevents |
US8540575B2 (en) * | 2002-10-08 | 2013-09-24 | White Knuckle Gaming, Llc | Method and system for increased realism in video games |
US8012003B2 (en) * | 2003-04-10 | 2011-09-06 | Nintendo Co., Ltd. | Baseball videogame having pitching meter, hero mode and user customization features |
WO2005113099A2 (en) * | 2003-05-30 | 2005-12-01 | America Online, Inc. | Personalizing content |
JP4238678B2 (en) * | 2003-09-08 | 2009-03-18 | ソニー株式会社 | Receiving apparatus and receiving method, recording medium, and program |
US8323106B2 (en) * | 2008-05-30 | 2012-12-04 | Sony Computer Entertainment America Llc | Determination of controller three-dimensional location using image analysis and ultrasonic communication |
US20050130725A1 (en) * | 2003-12-15 | 2005-06-16 | International Business Machines Corporation | Combined virtual and video game |
US8190907B2 (en) * | 2004-08-11 | 2012-05-29 | Sony Computer Entertainment Inc. | Process and apparatus for automatically identifying user of consumer electronics |
US8083589B1 (en) * | 2005-04-15 | 2011-12-27 | Reference, LLC | Capture and utilization of real-world data for use in gaming systems such as video games |
US8094928B2 (en) * | 2005-11-14 | 2012-01-10 | Microsoft Corporation | Stereo video for gaming |
US7991770B2 (en) * | 2005-11-29 | 2011-08-02 | Google Inc. | Detecting repeating content in broadcast media |
US20070150916A1 (en) * | 2005-12-28 | 2007-06-28 | James Begole | Using sensors to provide feedback on the access of digital content |
KR101111913B1 (en) * | 2006-01-05 | 2012-02-15 | 삼성전자주식회사 | Display Apparatus And Power Control Method Thereof |
US20070203911A1 (en) * | 2006-02-07 | 2007-08-30 | Fu-Sheng Chiu | Video weblog |
US8973083B2 (en) * | 2006-05-05 | 2015-03-03 | Thomas A. Belton | Phantom gaming in broadcast media system and method |
US20070296723A1 (en) * | 2006-06-26 | 2007-12-27 | Electronic Arts Inc. | Electronic simulation of events via computer-based gaming technologies |
JP2008054085A (en) * | 2006-08-25 | 2008-03-06 | Hitachi Ltd | Broadcast receiving apparatus and starting method thereof |
US8932124B2 (en) * | 2006-08-31 | 2015-01-13 | Cfph, Llc | Game of chance systems and methods |
US8758109B2 (en) * | 2008-08-20 | 2014-06-24 | Cfph, Llc | Game of chance systems and methods |
US20080169930A1 (en) * | 2007-01-17 | 2008-07-17 | Sony Computer Entertainment Inc. | Method and system for measuring a user's level of attention to content |
US20090029754A1 (en) * | 2007-07-23 | 2009-01-29 | Cybersports, Inc | Tracking and Interactive Simulation of Real Sports Equipment |
US20090258685A1 (en) * | 2007-10-10 | 2009-10-15 | Gabriel Gaidos | Method for merging live sports game data with Internet based computer games |
US8419545B2 (en) * | 2007-11-28 | 2013-04-16 | Ailive, Inc. | Method and system for controlling movements of objects in a videogame |
US8734214B2 (en) * | 2007-11-29 | 2014-05-27 | International Business Machines Corporation | Simulation of sporting events in a virtual environment |
US20090158309A1 (en) * | 2007-12-12 | 2009-06-18 | Hankyu Moon | Method and system for media audience measurement and spatial extrapolation based on site, display, crowd, and viewership characterization |
US20090164917A1 (en) * | 2007-12-19 | 2009-06-25 | Kelly Kevin M | System and method for remote delivery of healthcare and treatment services |
US7889073B2 (en) * | 2008-01-31 | 2011-02-15 | Sony Computer Entertainment America Llc | Laugh detector and system and method for tracking an emotional response to a media presentation |
JP5089453B2 (en) * | 2008-03-24 | 2012-12-05 | 株式会社コナミデジタルエンタテインメント | Image processing apparatus, image processing apparatus control method, and program |
US8430750B2 (en) * | 2008-05-22 | 2013-04-30 | Broadcom Corporation | Video gaming device with image identification |
JP4536134B2 (en) * | 2008-06-02 | 2010-09-01 | 株式会社コナミデジタルエンタテインメント | GAME SYSTEM USING NETWORK, GAME PROGRAM, GAME DEVICE, AND GAME CONTROL METHOD USING NETWORK |
US8213689B2 (en) * | 2008-07-14 | 2012-07-03 | Google Inc. | Method and system for automated annotation of persons in video content |
US8267781B2 (en) | 2009-01-30 | 2012-09-18 | Microsoft Corporation | Visual target tracking |
US8291328B2 (en) | 2009-03-24 | 2012-10-16 | Disney Enterprises, Inc. | System and method for synchronizing a real-time performance with a virtual object |
US8375311B2 (en) | 2009-03-24 | 2013-02-12 | Disney Enterprises, Inc. | System and method for determining placement of a virtual object according to a real-time performance |
US20100271367A1 (en) * | 2009-04-22 | 2010-10-28 | Sony Computer Entertainment America Inc. | Method and apparatus for combining a real world event and a computer simulation |
US8388429B2 (en) * | 2009-05-29 | 2013-03-05 | Universal Entertainment Corporation | Player tracking apparatus and gaming machine and control method thereof |
US8821256B2 (en) * | 2009-05-29 | 2014-09-02 | Universal Entertainment Corporation | Game system |
US9129644B2 (en) | 2009-06-23 | 2015-09-08 | Disney Enterprises, Inc. | System and method for rendering in accordance with location of virtual objects in real-time |
US8712110B2 (en) * | 2009-12-23 | 2014-04-29 | The Invention Science Fund I, LC | Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual |
US9089775B1 (en) * | 2010-06-24 | 2015-07-28 | Isaac S. Daniel | Interactive game system and methods for a television audience member to mimic physical movements occurring in television broadcast content |
US20120142421A1 (en) * | 2010-12-03 | 2012-06-07 | Kennedy Jr Thomas William | Device for interactive entertainment |
JP2012174237A (en) * | 2011-02-24 | 2012-09-10 | Nintendo Co Ltd | Display control program, display control device, display control system and display control method |
US8401343B2 (en) * | 2011-03-27 | 2013-03-19 | Edwin Braun | System and method for defining an augmented reality character in computer generated virtual reality using coded stickers |
US8860805B2 (en) * | 2011-04-12 | 2014-10-14 | Lg Electronics Inc. | Electronic device and method of controlling the same |
US8769556B2 (en) * | 2011-10-28 | 2014-07-01 | Motorola Solutions, Inc. | Targeted advertisement based on face clustering for time-varying video |
US8819738B2 (en) * | 2012-05-16 | 2014-08-26 | Yottio, Inc. | System and method for real-time composite broadcast with moderation mechanism for multiple media feeds |
US8689250B2 (en) * | 2012-06-29 | 2014-04-01 | International Business Machines Corporation | Crowd sourced, content aware smarter television systems |
- 2011-11-09: US application 13/292,560, published as US20120200667A1 (status: Abandoned)
- 2012-02-08: US application 13/368,510, published as US9242177B2 (status: Active)
- 2012-02-08: US application 13/368,895, published as US8990842B2 (status: Active)
Non-Patent Citations (2)
Title |
---|
David E. Breen et al., "Interactive Occlusion and Collision of Real and Virtual Objects in Augmented Reality," European Computer-Industry Research Centre (ECRC) GmbH (Forschungszentrum), Technical Report ECRC-95-02, Munich, Germany, 1995 * |
Zarda, "ESPN's Virtual Playbook," Popular Science, December 5, 2008, http://www.popsci.com/entertainment-amp-gaming/article/2008-12/espns-virtual-playbook, accessed August 16, 2016. *
Cited By (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8599403B2 (en) * | 2003-01-17 | 2013-12-03 | Koninklijke Philips N.V. | Full depth map acquisition |
US20060056679A1 (en) * | 2003-01-17 | 2006-03-16 | Koninklijke Philips Electronics, N.V. | Full depth map acquisition |
US20120050465A1 (en) * | 2010-08-30 | 2012-03-01 | Samsung Electronics Co., Ltd. | Image processing apparatus and method using 3D image format |
US20140292642A1 (en) * | 2011-06-15 | 2014-10-02 | Ifakt Gmbh | Method and device for determining and reproducing virtual, location-based information for a region of space |
US9268410B2 (en) * | 2012-02-10 | 2016-02-23 | Sony Corporation | Image processing device, image processing method, and program |
US20140320404A1 (en) * | 2012-02-10 | 2014-10-30 | Sony Corporation | Image processing device, image processing method, and program |
US20130257877A1 (en) * | 2012-03-30 | 2013-10-03 | Videx, Inc. | Systems and Methods for Generating an Interactive Avatar Model |
US8823742B2 (en) * | 2012-04-27 | 2014-09-02 | Viewitech Co., Ltd. | Method of simulating lens using augmented reality |
US20130286045A1 (en) * | 2012-04-27 | 2013-10-31 | Viewitech Co., Ltd. | Method of simulating lens using augmented reality |
US20130335405A1 (en) * | 2012-06-18 | 2013-12-19 | Michael J. Scavezze | Virtual object generation within a virtual environment |
US20150223017A1 (en) * | 2012-08-16 | 2015-08-06 | Alcatel Lucent | Method for provisioning a person with information associated with an event |
US8633970B1 (en) * | 2012-08-30 | 2014-01-21 | Google Inc. | Augmented reality with earth data |
US8963999B1 (en) | 2012-08-30 | 2015-02-24 | Google Inc. | Augmented reality with earth data |
US20140118716A1 (en) * | 2012-10-31 | 2014-05-01 | Raytheon Company | Video and lidar target detection and tracking system and method for segmenting moving targets |
US9111444B2 (en) * | 2012-10-31 | 2015-08-18 | Raytheon Company | Video and lidar target detection and tracking system and method for segmenting moving targets |
EP3687164A1 (en) * | 2013-02-20 | 2020-07-29 | Microsoft Technology Licensing, LLC | Providing a tele-immersive experience using a mirror metaphor |
US9626887B2 (en) | 2013-04-26 | 2017-04-18 | Samsung Electronics Co., Ltd. | Image display device and method and apparatus for implementing augmented reality using unidirectional beam |
WO2015142732A1 (en) * | 2014-03-21 | 2015-09-24 | Audience Entertainment, Llc | Adaptive group interactive motion control system and method for 2d and 3d video |
US10846930B2 (en) | 2014-04-18 | 2020-11-24 | Magic Leap, Inc. | Using passable world model for augmented or virtual reality |
US10198864B2 (en) | 2014-04-18 | 2019-02-05 | Magic Leap, Inc. | Running object recognizers in a passable world model for augmented or virtual reality |
US11205304B2 (en) | 2014-04-18 | 2021-12-21 | Magic Leap, Inc. | Systems and methods for rendering user interfaces for augmented or virtual reality |
US9761055B2 (en) | 2014-04-18 | 2017-09-12 | Magic Leap, Inc. | Using object recognizers in an augmented or virtual reality system |
US9767616B2 (en) | 2014-04-18 | 2017-09-19 | Magic Leap, Inc. | Recognizing objects in a passable world model in an augmented or virtual reality system |
US9766703B2 (en) | 2014-04-18 | 2017-09-19 | Magic Leap, Inc. | Triangulation of points using known points in augmented or virtual reality systems |
US10909760B2 (en) | 2014-04-18 | 2021-02-02 | Magic Leap, Inc. | Creating a topological map for localization in augmented or virtual reality systems |
US9852548B2 (en) | 2014-04-18 | 2017-12-26 | Magic Leap, Inc. | Systems and methods for generating sound wavefronts in augmented or virtual reality systems |
US9881420B2 (en) | 2014-04-18 | 2018-01-30 | Magic Leap, Inc. | Inferential avatar rendering techniques in augmented or virtual reality systems |
US20150302625A1 (en) * | 2014-04-18 | 2015-10-22 | Magic Leap, Inc. | Generating a sound wavefront in augmented or virtual reality systems |
US9911233B2 (en) | 2014-04-18 | 2018-03-06 | Magic Leap, Inc. | Systems and methods for using image based light solutions for augmented or virtual reality |
US9911234B2 (en) | 2014-04-18 | 2018-03-06 | Magic Leap, Inc. | User interface rendering in augmented or virtual reality systems |
US10825248B2 (en) * | 2014-04-18 | 2020-11-03 | Magic Leap, Inc. | Eye tracking systems and method for augmented or virtual reality |
US9922462B2 (en) | 2014-04-18 | 2018-03-20 | Magic Leap, Inc. | Interacting with totems in augmented or virtual reality systems |
US9928654B2 (en) | 2014-04-18 | 2018-03-27 | Magic Leap, Inc. | Utilizing pseudo-random patterns for eye tracking in augmented or virtual reality systems |
US10665018B2 (en) | 2014-04-18 | 2020-05-26 | Magic Leap, Inc. | Reducing stresses in the passable world model in augmented or virtual reality systems |
US9972132B2 (en) | 2014-04-18 | 2018-05-15 | Magic Leap, Inc. | Utilizing image based light solutions for augmented or virtual reality |
US9984506B2 (en) | 2014-04-18 | 2018-05-29 | Magic Leap, Inc. | Stress reduction in geometric maps of passable world model in augmented or virtual reality systems |
US9996977B2 (en) | 2014-04-18 | 2018-06-12 | Magic Leap, Inc. | Compensating for ambient light in augmented or virtual reality systems |
US10008038B2 (en) | 2014-04-18 | 2018-06-26 | Magic Leap, Inc. | Utilizing totems for augmented or virtual reality systems |
US10013806B2 (en) | 2014-04-18 | 2018-07-03 | Magic Leap, Inc. | Ambient light compensation for augmented or virtual reality |
US10262462B2 (en) | 2014-04-18 | 2019-04-16 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality |
US10043312B2 (en) | 2014-04-18 | 2018-08-07 | Magic Leap, Inc. | Rendering techniques to find new map points in augmented or virtual reality systems |
US10109108B2 (en) | 2014-04-18 | 2018-10-23 | Magic Leap, Inc. | Finding new points by render rather than search in augmented or virtual reality systems |
US10115232B2 (en) | 2014-04-18 | 2018-10-30 | Magic Leap, Inc. | Using a map of the world for augmented or virtual reality systems |
US10115233B2 (en) | 2014-04-18 | 2018-10-30 | Magic Leap, Inc. | Methods and systems for mapping virtual objects in an augmented or virtual reality system |
US10127723B2 (en) | 2014-04-18 | 2018-11-13 | Magic Leap, Inc. | Room based sensors in an augmented reality system |
US10186085B2 (en) * | 2014-04-18 | 2019-01-22 | Magic Leap, Inc. | Generating a sound wavefront in augmented or virtual reality systems |
US9934613B2 (en) * | 2014-04-29 | 2018-04-03 | The Florida International University Board Of Trustees | Systems for controlling a movable object |
US11508125B1 (en) * | 2014-05-28 | 2022-11-22 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
US11403797B2 (en) | 2014-06-10 | 2022-08-02 | Ripple, Inc. Of Delaware | Dynamic location based digital element |
US11532140B2 (en) | 2014-06-10 | 2022-12-20 | Ripple, Inc. Of Delaware | Audio content of a digital object associated with a geographical location |
US11069138B2 (en) * | 2014-06-10 | 2021-07-20 | Ripple, Inc. Of Delaware | Audio content of a digital object associated with a geographical location |
US9232173B1 (en) * | 2014-07-18 | 2016-01-05 | Adobe Systems Incorporated | Method and apparatus for providing engaging experience in an asset |
US20160105633A1 (en) * | 2014-07-18 | 2016-04-14 | Adobe Systems Incorporated | Method and apparatus for providing engaging experience in an asset |
US10044973B2 (en) * | 2014-07-18 | 2018-08-07 | Adobe Systems Incorporated | Method and apparatus for providing engaging experience in an asset |
US10449445B2 (en) | 2014-12-11 | 2019-10-22 | Elwha Llc | Feedback for enhanced situational awareness |
US9922518B2 (en) | 2014-12-11 | 2018-03-20 | Elwha Llc | Notification of incoming projectiles |
US9795877B2 (en) | 2014-12-11 | 2017-10-24 | Elwha Llc | Centralized system proving notification of incoming projectiles |
US10166466B2 (en) | 2014-12-11 | 2019-01-01 | Elwha Llc | Feedback for enhanced situational awareness |
US9741215B2 (en) | 2014-12-11 | 2017-08-22 | Elwha Llc | Wearable haptic feedback devices and methods of fabricating wearable haptic feedback devices |
US11496696B2 (en) | 2015-11-24 | 2022-11-08 | Samsung Electronics Co., Ltd. | Digital photographing apparatus including a plurality of optical systems for acquiring images under different conditions and method of operating the same |
US11693242B2 (en) | 2016-02-18 | 2023-07-04 | Apple Inc. | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking |
US10838206B2 (en) * | 2016-02-18 | 2020-11-17 | Apple Inc. | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking |
US20190258058A1 (en) * | 2016-02-18 | 2019-08-22 | Apple Inc. | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking |
CN114895471A (en) * | 2016-02-18 | 2022-08-12 | 苹果公司 | Head mounted display for virtual reality and mixed reality with inside-outside position tracking, user body tracking, and environment tracking |
US11199706B2 (en) | 2016-02-18 | 2021-12-14 | Apple Inc. | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking |
US20180036636A1 (en) * | 2016-08-04 | 2018-02-08 | Creative Technology Ltd | Companion display module to a main display screen for displaying auxiliary information not displayed by the main display screen and a processing method therefor |
US11571621B2 (en) | 2016-08-04 | 2023-02-07 | Creative Technology Ltd | Companion display module to a main display screen for displaying auxiliary information not displayed by the main display screen and a processing method therefor |
US10521662B2 (en) * | 2018-01-12 | 2019-12-31 | Microsoft Technology Licensing, Llc | Unguided passive biometric enrollment |
US10984600B2 (en) | 2018-05-25 | 2021-04-20 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
US11494994B2 (en) | 2018-05-25 | 2022-11-08 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
US10818093B2 (en) | 2018-05-25 | 2020-10-27 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
US11605205B2 (en) | 2018-05-25 | 2023-03-14 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
US11620787B2 (en) * | 2018-06-22 | 2023-04-04 | Roblox Corporation | Systems and methods for asset generation in immersive cognition assessments |
US11606546B1 (en) * | 2018-11-08 | 2023-03-14 | Tanzle, Inc. | Perspective based green screening |
US11936840B1 (en) | 2018-11-08 | 2024-03-19 | Tanzle, Inc. | Perspective based green screening |
CN110177286A (en) * | 2019-05-30 | 2019-08-27 | Shanghai Yunfu Intelligent Technology Co., Ltd. | Live broadcasting method and system, and smart glasses |
US11367251B2 (en) * | 2019-06-24 | 2022-06-21 | Imec Vzw | Device using local depth information to generate an augmented reality image |
Also Published As
Publication number | Publication date |
---|---|
US20120202594A1 (en) | 2012-08-09 |
US20120204202A1 (en) | 2012-08-09 |
US8990842B2 (en) | 2015-03-24 |
US9242177B2 (en) | 2016-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20120200667A1 (en) | | Systems and methods to facilitate interactions with virtual content |
US9842433B2 (en) | | Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality |
US10078917B1 (en) | | Augmented reality simulation |
US11257233B2 (en) | | Volumetric depth video recording and playback |
US9710973B2 (en) | | Low-latency fusing of virtual and real content |
US7817104B2 (en) | | Augmented reality apparatus and method |
KR101925658B1 (en) | | Volumetric video presentation |
JP2019092170A (en) | | System and method for generating 3-d plenoptic video images |
US20130342572A1 (en) | | Control of displayed content in virtual environments |
US20150312561A1 (en) | | Virtual 3d monitor |
US20130328925A1 (en) | | Object focus in a mixed reality environment |
US20140176591A1 (en) | | Low-latency fusing of color image data |
US11128984B1 (en) | | Content presentation and layering across multiple devices |
TWI669635B (en) | | Method and device for displaying barrage and non-volatile computer readable storage medium |
KR101892735B1 (en) | | Apparatus and Method for Intuitive Interaction |
US11119567B2 (en) | | Method and apparatus for providing immersive reality content |
CN107810634A (en) | | Display for three-dimensional augmented reality |
US20230215079A1 (en) | | Method and Device for Tailoring a Synthesized Reality Experience to a Physical Setting |
KR100917100B1 (en) | | Apparatus for displaying three-dimensional image and method for controlling location of display in the apparatus |
US11187895B2 (en) | | Content generation apparatus and method |
US20170064296A1 (en) | | Device and method of creating an augmented interactive virtual reality system |
JP7403256B2 (en) | | Video presentation device and program |
CN117452637A (en) | | Head mounted display and image display method |
Hough | | Towards achieving convincing live interaction in a mixed reality environment for television studios |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GAY, MICHAEL F.;GOLDING, FRANK;GEFEN, SMADAR;SIGNING DATES FROM 20111017 TO 20111109;REEL/FRAME:027201/0068 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |