US20210217247A1 - Body pose message system - Google Patents
Body pose message system
- Publication number
- US20210217247A1 (application US 17/214,811)
- Authority
- US
- United States
- Prior art keywords
- individual
- body pose
- user
- display
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06T19/006 — Mixed reality (manipulating 3D models or images for computer graphics)
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G02B27/017 — Head-up displays, head mounted
- G06F3/013 — Eye tracking input arrangements
- G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/0346 — Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF pointers using gyroscopes, accelerometers or tilt-sensors
- G02B2027/0138 — Head-up displays comprising image capture systems, e.g. camera
- G02B2027/0141 — Head-up displays characterised by the informative content of the display
- G02B2027/0187 — Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
- G06T2200/04 — Indexing scheme for image data processing or generation involving 3D image data
- G06T2200/24 — Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
Definitions
- the present subject matter relates to using a Virtual Reality (VR) or Augmented Reality (AR) system to allow other personnel to provide messages to a device without using a network communications link.
- Textual information can be presented to the emergency responder through the display and the information can be updated in real-time through the digital wireless interface from a command center or other information sources.
- FIG. 1A shows a scene on the display of an embodiment of a head-mounted display depicting a first body pose detected using body parts;
- FIG. 1B shows a scene on the display of an embodiment of a head-mounted display depicting a second body pose detected using body parts;
- FIG. 1C shows a scene on the display of an embodiment of a head-mounted display depicting a third body pose detected using body parts;
- FIG. 1D shows a scene on the display of an embodiment of a head-mounted display depicting a fourth body pose detected using body parts;
- FIG. 2A shows a scene on the display of an embodiment of a head-mounted display depicting a first body pose detected using body targets;
- FIG. 2B shows a scene on the display of an embodiment of a head-mounted display depicting a second body pose detected using body targets;
- FIG. 2C shows a scene on the display of an embodiment of a head-mounted display depicting a third body pose detected using body targets;
- FIG. 2D shows a scene on the display of an embodiment of a head-mounted display depicting a fourth body pose detected using body targets;
- FIG. 3 shows an embodiment of an HR system used to create network connections where there are none;
- FIG. 4 shows a block diagram of an embodiment of an HR system;
- FIG. 5 is a flowchart of an embodiment of a method for communicating using body poses; and
- FIG. 6 is a flowchart of an embodiment of a method for communicating using targets.
- Hybrid-reality (HR) refers to an image that merges real-world imagery with imagery created in a computer, which is sometimes called virtual imagery. While an HR image can be a still image, it can also be a moving image, such as imagery created using a video stream. HR can be displayed by a traditional two-dimensional display device, such as a computer monitor, one or more projectors, or a smartphone screen. An HR system can be based on a device such as a microscope, binoculars, or a telescope, with virtual imagery superimposed over the image captured by the device. In such HR systems, an eyepiece of a device may be considered the display of the system. HR imagery can also be displayed by a head-mounted display (HMD).
- a virtual reality (VR) HMD system may receive images of a real-world object, objects, or scene, and composite those images with a virtual object, objects, or scene to create an HR image.
- An augmented reality (AR) HMD system may present a virtual object, objects, or scene on a transparent screen which then naturally mixes the virtual imagery with a view of a scene in the real-world.
- a display which mixes live video with virtual objects is sometimes denoted AR, but for the purposes of this disclosure, an AR HMD includes at least a portion of the display area that is transparent to allow at least some of the user's view of the real-world to be directly viewed through the transparent portion of the AR HMD.
- the display used by an HR system represents a scene which is a visible portion of the whole environment.
- the terms “scene” and “field of view” (FOV) are used to indicate what is visible to a user.
- the word “occlude” is used herein to mean that a pixel of a virtual element is mixed with an image of another object to change the way the object is perceived by a viewer.
- this can be done through use of a compositing process to mix the two images, a Z-buffer technique to remove elements of the image that are hidden from view, a painter's algorithm to render closer objects later in the rendering process, or any other technique that can replace a pixel of the image of the real-world object with a different pixel value generated from any blend of real-world object pixel value and an HR system determined pixel value.
- the virtual object occludes the real-world object if the virtual object is rendered, transparently or opaquely, in the line of sight of the user as they view the real-world object.
- the terms “occlude”, “transparency”, “rendering” and “overlay” are used to denote the mixing or blending of new pixel values with existing object pixel values in an HR display.
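The per-pixel mixing described above can be illustrated with a minimal alpha-blend sketch. This is a generic compositing illustration, not the patent's implementation; the function and variable names are invented.

```python
def blend_pixel(real_rgb, virtual_rgb, alpha):
    """Mix a virtual pixel with a real-world pixel.

    alpha = 1.0 fully occludes the real pixel (opaque virtual object);
    alpha = 0.0 leaves the real pixel untouched (fully transparent).
    """
    return tuple(
        round(alpha * v + (1.0 - alpha) * r)
        for r, v in zip(real_rgb, virtual_rgb)
    )

# An opaque virtual pixel replaces the real one entirely.
opaque = blend_pixel((10, 20, 30), (200, 100, 0), 1.0)   # (200, 100, 0)

# A half-transparent virtual pixel mixes the two values.
half = blend_pixel((10, 20, 30), (200, 100, 0), 0.5)     # (105, 60, 15)
```

A Z-buffer or painter's-algorithm pass would decide *which* virtual pixel reaches this blend step; the blend itself is the final "occlude" operation.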
- a sensor may be mounted on or near the display, on the viewer's body, or be remote from the user.
- Remote sensors may include, but are not limited to, fixed sensors attached in an environment, sensors attached to robotic extensions, sensors attached to autonomous or semi-autonomous drones, or sensors attached to other persons.
- Data from the sensors may be raw or filtered.
- Data from the sensors may be transmitted wirelessly or using a wired connection.
- Sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images in the visible spectrum, an infrared depth camera, a microphone, a sound locator, a Hall effect sensor, an air-flow meter, a fuel level sensor, an oxygen sensor, an electronic nose, a gas detector, an anemometer, a mass flow sensor, a Geiger counter, a gyroscope, an infrared temperature sensor, a flame detector, a barometer, a pressure sensor, a pyrometer, a time-of-flight camera, radar, or lidar.
- Sensors in some HR system embodiments that may be attached to the user include, but are not limited to, a biosensor, a biochip, a heartbeat sensor, a pedometer, a skin resistance detector, or skin temperature detector.
- the display technology used by an HR system embodiment may include any method of projecting an image to an eye.
- Conventional technologies include, but are not limited to, cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED), plasma, or organic LED (OLED) screens, or projectors based on those technologies or digital micromirror devices (DMD).
- virtual retina displays such as direct drawing on the eye's retina using a holographic grating, may be used.
- direct machine to brain interfaces may be used in the future.
- the display of an HR system may also be an HMD or a separate device, such as, but not limited to, a hand-held mobile phone, a tablet, a fixed monitor or a TV screen.
- connection technology used by an HR system may include any physical link and associated protocols, such as, but not limited to, wires, transmission lines, solder bumps, near-field connections, infra-red connections, or radio frequency (RF) connections such as cellular, satellite or Wi-Fi® (a registered trademark of the Wi-Fi Alliance).
- Virtual connections, such as software links, may also be used to connect to external networks and/or external compute.
- aural stimuli and information may be provided by a sound system.
- the sound technology may include monaural, binaural, or multi-channel systems.
- a binaural system may include a headset or another two-speaker system but may also include systems with more than two speakers directed to the ears.
- the sounds may be presented as 3D audio, where each sound has a perceived position in space, achieved by using reverberation and head-related transfer functions to mimic how sounds change as they move in a particular space.
- objects in the display may move.
- the movement may be due to the user moving within the environment, for example walking, crouching, turning, or tilting the head.
- the movement may be due to an object moving, for example a dog running away, a car coming towards the user, or a person entering the FOV.
- the movement may also be due to an artificial movement, for example the user moving an object on a display or changing the size of the FOV.
- the motion may be due to the user deliberately distorting all or part of the FOV, for example adding a virtual fish-eye lens.
- all motion is considered relative; any motion may be resolved to a motion from a single frame of reference, for example the user's viewpoint.
- the perspective of any generated object overlay may be corrected so that it changes with the shape and position of the associated real-world object. This may be done with any conventional point-of-view transformation based on the angle of the object from the viewer; note that the transformation is not limited to simple linear or rotational functions, with some embodiments using non-Abelian transformations. It is contemplated that motion effects, for example blur or deliberate edge distortion, may also be added to a generated object overlay.
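As a sketch of the point-of-view correction described above, a simple pinhole projection plus a yaw rotation can serve as a minimal stand-in for the transformations the patent leaves open (the patent does not prescribe this model; names and the focal-length parameter are illustrative):

```python
import math

def project_point(x, y, z, focal_length=1.0):
    """Project a 3D point in camera space onto the 2D image plane
    using a simple pinhole model; z is depth in front of the viewer."""
    if z <= 0:
        raise ValueError("point must be in front of the viewer")
    return (focal_length * x / z, focal_length * y / z)

def rotate_yaw(x, y, z, angle_rad):
    """Rotate a point around the vertical (y) axis, e.g. when the
    viewer turns their head, before re-projecting the overlay."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x + s * z, y, -s * x + c * z)
```

Composing rotations like this one is where non-commutativity (the "non-Abelian" behavior noted above) appears: rotating about two different axes in opposite orders yields different results.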
- images from cameras may be processed before algorithms are executed.
- Algorithms used after image processing for embodiments disclosed herein may include, but are not limited to, object recognition, motion detection, camera motion and zoom detection, light detection, facial recognition, text recognition, or mapping an unknown environment.
- the image processing may also use conventional filtering techniques, such as, but not limited to, static, adaptive, linear, non-linear, and Kalman filters. Deep-learning neural networks may be trained in some embodiments to mimic functions which are hard to create algorithmically. Image processing may also be used to prepare the image, for example by reducing noise, restoring the image, edge enhancement, or smoothing.
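A minimal example of the kind of linear filtering mentioned above, assuming a simple exponential low-pass filter (one common smoothing choice; the patent does not specify a particular filter):

```python
def exponential_smooth(samples, alpha=0.5):
    """A minimal linear low-pass filter: each output is a weighted mix
    of the new sample and the previous output, reducing sensor noise."""
    out = []
    prev = None
    for s in samples:
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        out.append(prev)
    return out

smoothed = exponential_smooth([0, 2, 2], alpha=0.5)  # [0, 1.0, 1.5]
```

Adaptive or Kalman filters replace the fixed `alpha` with a gain derived from estimated noise statistics.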
- objects may be detected in the FOV of one or more cameras.
- Objects may be detected by using conventional algorithms, such as, but not limited to, edge detection, feature detection (for example surface patches, corners and edges), greyscale matching, gradient matching, pose consistency, or database look-up using geometric hashing.
- Genetic algorithms and trained neural networks using unsupervised learning techniques may also be used in embodiments to detect types of objects, for example people, dogs, or trees.
- detection of an object may be performed on a single frame of a video stream, although techniques using multiple frames are also envisioned.
- Advanced techniques such as, but not limited to, Optical Flow, camera motion, and object motion detection may be used between frames to enhance object recognition in each frame.
- rendering the object may be done by the HR system embodiment using databases of similar objects, the geometry of the detected object, or how the object is lit, for example specular reflections or bumps.
- the locations of objects may be generated from maps and object recognition from sensor data.
- Mapping data may be generated on the fly using conventional techniques, for example the Simultaneous Location and Mapping (SLAM) algorithm used to estimate locations using Bayesian methods, or extended Kalman filtering which linearizes a non-linear Kalman filter to optimally estimate the mean or covariance of a state (map), or particle filters which use Monte Carlo methods to estimate hidden states (map).
- the locations of objects may also be determined a priori, using techniques such as, but not limited to, reading blueprints, reading maps, receiving GPS locations, receiving relative positions to a known point (such as a cell tower, access point, or other person) determined using depth sensors, WiFi time-of-flight, or triangulation to at least three other points.
- Gyroscope sensors on or near the HMD may be used in some embodiments to determine head position and to generate relative motion vectors which can be used to estimate location.
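The Kalman-style estimation mentioned above can be sketched as a single scalar update step that fuses a prior location estimate with a noisy measurement. This is a textbook illustration, not the patent's algorithm:

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """One scalar Kalman filter update: fuse a prior position estimate
    with a noisy measurement, weighting each by its uncertainty."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Prior at 0.0 with variance 1.0; a measurement of 10.0 that is
# equally uncertain pulls the estimate halfway, to 5.0.
est, var = kalman_update(0.0, 1.0, 10.0, 1.0)
```

A full SLAM system runs this kind of update over a whole state vector (pose plus map landmarks), linearizing the motion and sensor models at each step.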
- sound data from one or more microphones may be processed to detect specific sounds. Sounds that might be identified include, but are not limited to, human voices, glass breaking, human screams, gunshots, explosions, door slams, or a sound pattern a particular machine makes when defective.
- Gaussian Mixture Models and Hidden Markov Models may be used to generate statistical classifiers that are combined and looked up in a database of sound models.
- One advantage of using statistical classifiers is that sounds can be detected more consistently in noisy environments.
- Eye tracking of one or both of the viewer's eyes may be performed. Eye tracking may be used to measure the point of the viewer's gaze.
- the position of each eye is known, and so there is a reference frame for determining head-to-eye angles, and so the position and rotation of each eye can be used to estimate the gaze point.
- Eye position determination may be done using any suitable technique and/or device, including, but not limited to, devices attached to an eye, tracking the eye position using infra-red reflections, for example Purkinje images, or using the electric potential of the eye detected by electrodes placed near the eye, which relies on the electrical field generated by the eye and works whether the eye is open or closed.
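A hedged sketch of turning eye rotation angles into a gaze point, assuming a simple flat-plane model (the patent does not prescribe this geometry; parameter names are invented):

```python
import math

def gaze_point(eye_pos, yaw_rad, pitch_rad, plane_distance):
    """Estimate where a gaze ray hits a vertical plane plane_distance
    ahead of the eye, given the eye's yaw (left/right) and pitch
    (up/down) rotation angles."""
    dx = math.tan(yaw_rad)    # horizontal offset per unit depth
    dy = math.tan(pitch_rad)  # vertical offset per unit depth
    ex, ey, ez = eye_pos
    return (ex + dx * plane_distance,
            ey + dy * plane_distance,
            ez + plane_distance)

# Looking straight ahead from the origin hits the plane dead-centre.
centre = gaze_point((0.0, 0.0, 0.0), 0.0, 0.0, 2.0)  # (0.0, 0.0, 2.0)
```

With both eyes tracked, the two gaze rays can instead be intersected to estimate gaze depth (vergence) rather than assuming a fixed plane.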
- input is used to control the HR system, either from the user of the HR system or from external actors.
- the methods of input used vary by embodiment, and each input type may control any or a subset of an HR system's functions.
- gestures are used as control input.
- a gesture may be detected by using other systems coupled to the HR system, such as, but not limited to, a camera, a stereo camera, a depth camera, a wired glove, or a controller.
- the video stream is analyzed to detect the position and movement of an object, for example a hand, a finger, or a body pose.
- the position and motion can be used to generate a 3D or 2D path and, by using stochastic or pattern matching techniques, determine the most likely gesture used.
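The pattern-matching step described above might be sketched as a nearest-template classifier over 2D paths. The gesture names and templates are invented for illustration; a real system would also resample and normalize the paths:

```python
def path_distance(path_a, path_b):
    """Mean point-to-point distance between two equal-length 2D paths."""
    assert len(path_a) == len(path_b)
    total = sum(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        for (ax, ay), (bx, by) in zip(path_a, path_b)
    )
    return total / len(path_a)

# Hypothetical stored gesture templates, one path per gesture name.
GESTURE_TEMPLATES = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up": [(0, 0), (0, 1), (0, 2)],
}

def classify_gesture(path, templates=GESTURE_TEMPLATES):
    """Return the name of the template path closest to the observed
    path -- the 'most likely gesture'."""
    return min(templates, key=lambda name: path_distance(path, templates[name]))
```

Stochastic approaches (e.g. an HMM per gesture) replace the distance with a likelihood, which copes better with variable-speed input.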
- the user's head position and movement may be used as a gesture or direct control.
- the head position and movement may be determined by gyroscopes mounted into an HMD.
- a fixed source emitting an electromagnetic beam may be affixed to a user or mounted in an HMD; coupled sensors can then track the electromagnetic beam as the user's head is moved.
- the user may have a touch-pad or a plurality of touch sensors affixed to the body, for example built-in to a glove, a suit, or an HMD, coupled to the HR system.
- By touching a specific point, different input data can be generated.
- the time of a touch or the pattern of touches may also generate different input types.
- touchless sensors using a proximity to the sensor can be used.
- a physical input device is coupled to the HR system.
- the physical input device may be a mouse, a pen, a keyboard, or a wand. If a wand controller is used, the HR system tracks the position and location of the wand as well as presses of any buttons on the wand; the wand may be tracked using a camera, for example using object boundary recognition, using target tracking where a specific shape or target is detected in each video frame, or by wired/wireless data from the wand received by the HR system.
- a physical input device may be virtual, where a device is rendered on the head-mounted display and the user interacts with the virtual controller using other HR systems, such as, but not limited to, gaze direction, hand tracking, finger tracking, or gesture detection.
- gaze direction interaction with virtual menus rendered on the display may be used.
- a backwards-facing camera mounted in an HMD may be used to detect blinking or facial muscle movement. By tracking blink patterns or facial muscle motion, input gestures can be determined.
- a “situation” of an object may refer to any aspect of the object's position and/or orientation in three-dimensional space.
- the situation may refer to any value of an object's six degrees of freedom, such as up/down, forward/back, left/right, roll, pitch, and yaw.
- the situation of an object may refer to only the position of an object with respect to a 3-dimensional axis without referring to its orientation (e.g. roll, pitch, and yaw), or the situation may refer only to the object's orientation without referring to its position.
- the situation of the object may refer to one or more position and/or orientation values.
- the situation of the object may also include an aspect of its velocity or acceleration vector such as the speed or direction of movement of the object.
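One possible, purely illustrative data structure for such a "situation", covering the six degrees of freedom and a velocity vector (the patent does not prescribe field names or units):

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """Position, orientation, and motion of a tracked object."""
    x: float = 0.0       # left/right
    y: float = 0.0       # up/down
    z: float = 0.0       # forward/back
    roll: float = 0.0    # orientation, in radians
    pitch: float = 0.0
    yaw: float = 0.0
    vx: float = 0.0      # velocity vector components
    vy: float = 0.0
    vz: float = 0.0

    def speed(self):
        """Magnitude of the velocity vector."""
        return (self.vx ** 2 + self.vy ** 2 + self.vz ** 2) ** 0.5
```

An embodiment tracking only orientation would simply leave the positional fields unused, matching the "orientation without position" reading above.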
- breathing patterns may be detected using a pressure sensor mounted in a breathing system coupled to the HR system to detect changes in pressure. Breath patterns such as, but not limited to, blowing softly, exhaling hard, or inhaling suddenly may be used as input data for an HR control system.
- sounds may be detected by one or more microphones coupled to the HR system.
- Specific sounds, such as, but not limited to, vocalizations (e.g. scream, shout, lip buzz, snort, whistle), stamping, or clapping, may be detected using stochastic or pattern matching techniques on received audio data.
- more than one microphone may be used to place a sound in a location, allowing the position of a sound, for example a clap, to provide additional input control data.
- voice control using natural language is used; speech recognition techniques such as trained neural networks or hidden Markov model algorithms are used by an HR system to determine what has been said.
- direct neural interfaces may be used in some embodiments to control an HR system.
- HR imagery is becoming increasingly common and is making its way from entertainment and gaming into industrial and commercial applications.
- Examples of systems that may find HR imagery useful include aiding a person doing a task in a hazardous environment, for example repairing machinery, neutralizing danger, or responding to an emergency.
- An HR system may be used to track the motion of other personnel in the environment using sensors.
- the body shape of a team member may be determined using sensors to provide a method of relaying information.
- the tracking of personnel using line-of-sight cameras is disclosed herein, along with techniques for tracking personnel who are out of direct sight lines.
- An HR system may also be used in conjunction with other systems to relay information where network connectivity is not available at each hop.
- a body may be detected by an embodiment of an HR system using object recognition from a video feed. Detecting human bodies in a video may be done using a trained neural network, as is done in many smartphones today.
- the camera may be pointing forward from the user wearing the headset (corresponding to the user's current field of view), pointing to the side of or behind the user, or may be a camera with a wide-angle view or an array of cameras generating a wide-angle view up to and including a 360° view.
- the video feed is transmitted to the HR system using a network from other cameras in the environment.
- a sensor detecting non-visible light may be used to generate the video feed.
- an infra-red camera may be used to “see through” obscuring atmosphere, such as smoke or water vapor.
- ultra-high-frequency sensors may be used to “see through” solid barriers such as walls or ceilings.
- the sensor is not being carried by the user of an HR system.
- An external sensor may be carried by another team member, mounted on a drone or robot, or fixed in the environment, to give three non-limiting examples.
- the body detection processing may be done using computing resources within the HR system, using an external computing resource with a result of body detection computation transmitted to the HR system using a network connection, or some combination thereof.
- although a network connection may be used to transmit data from an external sensor to an HR system, there is no requirement that the person signaling be connected to the HR system.
- a body pose command does not interrupt or block the signaler's input systems and is quick and easy to perform, and so is advantageous.
- an electronic network connection from the signaler may be available but may be very low bandwidth or extremely intermittent.
- an example HR system may transmit a single, small packet with error correction parity that indicates that a sequence of body pose commands is to be performed. The packet may then be received by other HR systems in the vicinity to start body pose detection algorithms.
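- the single small packet described above can be sketched as follows. This is a minimal illustration only, assuming a hypothetical one-byte opcode and a simple even-parity byte; the actual packet layout, opcode value, and error-correction scheme are not specified by the system.

```python
# Hypothetical sketch: a tiny "body pose sequence follows" packet with a
# parity byte so receiving HR systems can reject corrupted transmissions.
# The opcode value 0x5A and two-byte layout are assumptions for illustration.

POSE_SEQUENCE_OPCODE = 0x5A  # assumed opcode meaning "start pose detection"

def _parity(byte: int) -> int:
    # XOR of all eight bits of the byte (even parity).
    p = 0
    for bit in range(8):
        p ^= (byte >> bit) & 1
    return p

def make_pose_packet(opcode: int = POSE_SEQUENCE_OPCODE) -> bytes:
    return bytes([opcode, _parity(opcode)])

def is_valid_pose_packet(packet: bytes) -> bool:
    if len(packet) != 2:
        return False
    opcode, parity = packet
    return _parity(opcode) == parity and opcode == POSE_SEQUENCE_OPCODE
```

A receiving HR system in the vicinity would start its body pose detection algorithms only when `is_valid_pose_packet` returns true.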
- one or more factors may be used to determine whether a pose is present, such as, but not limited to, keeping one or more body parts fixed in space for a minimum period of time, maintaining a relationship between two or more body parts (e.g. a subtended angle from center of body between two hands), or tracing a known path using one or more body parts (e.g. a gesture).
- a single body pose may not have 100% accurate detection because of adverse environmental conditions.
- some embodiments may require a sequence or combination of separate body poses to be valid.
- the detection of the body pose may generate a system message, such as, but not limited to, alerts, actions or commands, by looking up an associated action in a database using the pose as an index; in some embodiments, the database may be loaded beforehand to define the actions specific to a particular mission.
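- the database lookup described above, using the pose as an index to find an associated action, can be sketched as follows. The pose identifiers and messages are hypothetical; as noted, the table could be preloaded before a particular mission.

```python
# Illustrative sketch of the pose-to-action lookup: the detected pose is used
# as an index into a preloaded, mission-specific table. All pose names and
# messages here are invented examples, not values defined by the system.

MISSION_POSE_ACTIONS = {
    "arms_raised":  {"type": "alert",   "text": "Need assistance"},
    "hand_on_head": {"type": "command", "text": "All clear"},
    "arm_circle":   {"type": "alarm",   "text": "Evacuate area"},
}

def action_for_pose(pose_id: str, table: dict = MISSION_POSE_ACTIONS):
    """Return the associated system message, or None so the HR system user
    can interpret an unmapped pose themselves."""
    return table.get(pose_id)
```

Returning `None` for an unknown pose corresponds to the embodiments below in which detection does not generate a system action directly.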
- the detection of the body pose does not generate a system action directly, allowing the HR system user to interpret the meaning of the pose.
- the HR system renders an identifier on the display, for example an icon or text describing the detected pose.
- the HR system may highlight the body parts used during a pose to aid pose recognition by the HR system user, such as, but not limited to, adding color, increasing brightness, increasing the size, or adding virtual elements (e.g. a beam).
- the detection of a body pose may be performed by determining the positions and motions of targets attached to a signaler, such as, but not limited to, registration marks, infra-red sensitive paint, or low-power fixed transmitters.
- a more limited sensor may be used in some HR embodiments.
- the registration mark may be an unnatural shape not encountered in an environment, for example a cross, a bullseye, or a barcode.
- the registration mark may be made using infra-red reflective paint, allowing the marks to be detected easily in a dark environment or an environment full of obscuring particulate matter or water vapor.
- the ability of some HR embodiments to relay a message using a body pose, as described herein, creates the potential to combine that facility with other devices to bridge connectivity gaps in a network, whether caused by bandwidth overload, no connection, or an intermittent connection.
- a lack of a network connection between two team members can be bridged by using simple body pose messages.
- a team member can communicate a body pose to a sensor on a device connected to the network.
- a network gap can be bridged at the start of a communication, at the end of a communication, or at any point or points in between.
- a network gap can be bridged by sending a “do body pose” message to a connected team member, so creating a connection to a device not connected to the previous network.
- FIG. 1A shows a scene on the display of an embodiment of a head-mounted display showing a body 100 in a first situation.
- an HR system may detect the presence of body 100 and the presence of hands 110 , 115 using object detection. Once detected, an HR system may generate key positions 122 , 124 , 126 corresponding to a left hand, a right hand and a torso, which may also be referred to as a body center, trunk, stomach, or chest. The HR system may deduce a first pose using the distance between the key points and/or the angles in the formed triangle.
- FIG. 1B shows a scene on the display of an embodiment of a head-mounted display showing the body 100 in a second situation.
- an HR system may detect the presence of the body 100 and the presence of the hands 110 , 115 using object detection. Once detected, an HR system may generate key positions 142 , 144 , 146 corresponding to the left hand, the right hand and the torso. The HR system may deduce a second pose using the distance between the key points and/or the angles in the formed triangle. Note that the second pose detected at the time of FIG. 1B is different from the first pose depicted at the time of FIG. 1A .
- FIG. 1C shows a scene on the display of an embodiment of a head-mounted display showing the body 100 in a third situation.
- an HR system may detect the presence of the body 100 and the presence of the hands 110 , 115 using object detection. Once detected, an HR system may generate key positions 162 , 164 , 166 corresponding to the left hand, the right hand and the torso. The HR system may deduce a third pose using the distance between the key points and/or the angles in the formed triangle. In the third pose shown in FIG. 1C , key position 164 is not static, but rotates in a clockwise direction as indicated by arrow 190 . Note that the third pose detected at the time of FIG. 1C is interpreted differently from the first pose depicted at the time of FIG. 1A because of the motion 190 even though the triangle formed by key points 162 , 164 , 166 is similar to triangle 122 , 124 , 126 .
- FIG. 1D shows a scene on the display of an embodiment of a head-mounted display showing the body 100 in a fourth situation.
- an HR system may detect the presence of the body 100 and the presence of the hands 110 , 115 using object detection. Once detected, an HR system may generate key positions 182 , 184 , 186 corresponding to the left hand, the right hand and the torso. The HR system may deduce the fourth pose using the distance between the key points and/or the angles in the formed triangle. In the fourth pose shown in FIG. 1D , key position 184 is held for a fixed time as indicated by stopwatch 192 . Note that the fourth pose detected at the time of FIG. 1D is interpreted differently from the second pose depicted at the time of FIG. 1B , even though the triangle formed by key points 182 , 184 , 186 is similar to triangle 142 , 144 , 146 , because the pose is held for a specific time 192 .
- the time 192 may be a maximum time or a minimum time.
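- the triangle-based deduction of FIGS. 1A-1D can be sketched as follows: a pose is keyed on the side lengths of the triangle formed by the left-hand, right-hand, and torso key positions, and two poses with similar triangles are still distinguished by motion of a key point (FIG. 1C) or by how long the pose is held (FIG. 1D). The similarity tolerance, hold time, and pose names are assumptions for illustration.

```python
import math

# Sketch, assuming 2D key positions for left hand, right hand, and torso.
# Triangle similarity is tested on the ratios of sorted side lengths; the
# 0.1 tolerance and 2-second minimum hold are illustrative values only.

def side_lengths(p1, p2, p3):
    return sorted([math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1)])

def similar_triangles(tri_a, tri_b, tol=0.1):
    """Triangles are similar when corresponding side ratios agree within tol."""
    ratios = [a / b for a, b in zip(side_lengths(*tri_a), side_lengths(*tri_b))]
    return max(ratios) - min(ratios) < tol

def classify(tri, reference_tri, moving=False, held_seconds=0.0, min_hold=2.0):
    if not similar_triangles(tri, reference_tri):
        return "unknown"
    if moving:
        return "third_pose"   # FIG. 1C: similar triangle, but a key point rotates
    if held_seconds >= min_hold:
        return "fourth_pose"  # FIG. 1D: similar triangle held for time 192
    return "first_pose"       # FIG. 1A: static triangle
```

Because classification uses side ratios rather than absolute distances, the same pose is recognized whether the signaler is near to or far from the sensor.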
- FIG. 2A shows a scene on the display of an embodiment of a head-mounted display showing a body 200 in a first situation.
- targets 222 A, 224 A, 226 A corresponding to a left hand, a right hand and torso position are positioned on the body and visible to at least one sensor.
- An HR system may detect the presence of targets 222 A, 224 A, 226 A using object detection. Once detected, an HR system may generate key positions associated with the targets 222 A, 224 A, 226 A. The HR system may deduce a first pose using the distance between the key points and/or the angles in the formed triangle.
- FIG. 2B shows a scene on the display of an embodiment of a head-mounted display showing the body 200 in a second situation.
- targets 222 B, 224 B, 226 B corresponding to the left hand, the right hand and the torso position are positioned on the body and visible to at least one sensor.
- An HR system may detect the presence of targets 222 B, 224 B, 226 B using object detection. Once detected, an HR system may generate key positions associated with targets 222 B, 224 B, 226 B. The HR system may deduce a second pose using the distance between the key points and/or the angles in the formed triangle. Note that the second pose detected at the time of FIG. 2B is different from the first pose depicted at the time of FIG. 2A .
- FIG. 2C shows a scene on the display of an embodiment of a head-mounted display showing a body 200 in a third situation.
- targets 222 C, 224 C, 226 C corresponding to the left hand, the right hand and the torso position are positioned on the body and visible to at least one sensor.
- An HR system may detect the presence of targets 222 C, 224 C, 226 C using object detection. Once detected, an HR system may generate key positions associated with targets 222 C, 224 C, 226 C. The HR system may deduce a third pose using the distance between the key points and/or the angles in the formed triangle. In the third pose shown in FIG. 2C , key position 264 is not static, but rotates in a clockwise direction as indicated by arrow 290 .
- the third pose detected at the time of FIG. 2C is interpreted differently from the first pose depicted at the time of FIG. 2A because of the motion 290 even though the triangle formed by key points associated with targets 222 C, 224 C, 226 C is similar to triangle 222 A, 224 A, 226 A.
- FIG. 2D shows a scene on the display of an embodiment of a head-mounted display showing a body 200 in a fourth situation.
- targets 222 D, 224 D, 226 D corresponding to the left hand, the right hand and the torso position are positioned on the body and visible to at least one sensor.
- An HR system may detect the presence of targets 222 D, 224 D, 226 D using object detection. Once detected, an HR system may generate key positions associated with targets 222 D, 224 D, 226 D corresponding to a left hand, a right hand and torso. The HR system may deduce a pose using the distance between the key points and/or the angles in the formed triangle. In the fourth pose shown in FIG. 2D , key position 284 is held for a fixed time as indicated by stopwatch 292 .
- the pose detected at the time of FIG. 2D is interpreted differently from the pose depicted at the time of FIG. 2B , even though the triangle formed by key points associated with targets 222 D, 224 D, 226 D is similar to the triangle formed by key points associated with targets 222 B, 224 B, 226 B, because the pose is held for a specific time 292 .
- the time 292 may be a maximum time or a minimum time.
- a first target on the left hand is shown using reference 222 A in FIG. 2A, 222B in FIG. 2B, 222C in FIG. 2C, and 222D in FIG. 2D to show the different positions of the first target.
- a second target on the right hand is shown using 224 A in FIG. 2A, 224B in FIG. 2B, 224C in FIG. 2C, and 224D in FIG. 2D to show the different positions of the second target.
- a third target on the torso is shown using 226 A in FIG. 2A, 226B in FIG. 2B, 226C in FIG. 2C, and 226D in FIG. 2D to show the different positions of the third target.
- the body poses may be constructed so that only two of three key positions are required to uniquely identify some poses, so creating some redundancy and thus error tolerance.
- a sequence of body poses that combine to create a message may be constructed so that reception errors may be detected and/or corrected.
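- the redundancy in a pose sequence described above can be sketched with a simple checksum scheme: the signaler appends one extra "check" pose derived from the preceding poses, so a single misread pose in the sequence is detectable. The pose alphabet and modular checksum are illustrative assumptions, not a scheme defined by the system.

```python
# Hypothetical sketch: each pose in a message is a symbol from a small
# alphabet; a final check pose (sum of symbol indices modulo alphabet size)
# lets the receiver detect a reception error in the sequence.

POSE_SYMBOLS = ["pose_a", "pose_b", "pose_c", "pose_d"]

def append_check_pose(sequence):
    total = sum(POSE_SYMBOLS.index(p) for p in sequence)
    return sequence + [POSE_SYMBOLS[total % len(POSE_SYMBOLS)]]

def sequence_is_consistent(sequence):
    *body, check = sequence
    total = sum(POSE_SYMBOLS.index(p) for p in body)
    return POSE_SYMBOLS[total % len(POSE_SYMBOLS)] == check
```

A stronger code (e.g. repeating poses, or a Reed-Solomon-style symbol code) could also correct errors rather than merely detect them, at the cost of a longer pose sequence.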
- FIG. 3 shows a scenario where partially connected networks are present.
- the first network comprising wireless connections 340 , 342 and wired connection 350 ; and the second network comprising wireless connection 344 and wired connection 352 .
- the connected devices in the first network include the HR system worn by a first team member 302 , the HR system worn by a second team member 304 , a first network device 310 , and a second network device 312 .
- the connected devices in the second network include a camera 320 and third network device 314 ; note that the third network device 314 may provide a connection to the external network via wireless link 344 .
- the HR system worn by team member 300 is not connected to either the first or the second network, and the first network and the second network are not connected to each other.
- a message 330 associated with a body pose signal from the third team member 300 to the HR system worn by the first team member 302 who is proximal may be relayed.
- the message may then be routed through the first network to the HR system worn by the second team member 304 , for example instructing the second team member 304 to repeat a pose.
- the pose made by the second team member 304 may create an associated message 332 relayed to proximal camera 320 .
- the message is routed to the external world via the second network.
- FIG. 4 is a block diagram of an embodiment of an HR system 400 which may have some components implemented as part of a head-mounted assembly.
- the HR system 400 may be considered a computer system that can be adapted to be worn on the head, carried by hand, or otherwise attached to a user.
- a structure 405 is included which is adapted to be worn on the head of a user.
- the structure 405 may include straps, a helmet, a hat, or any other type of mechanism to hold the HR system on the head of the user as an HMD.
- the HR system 400 also includes a display 450 coupled to position the display 450 in a field-of-view (FOV) of the user.
- the structure 405 may position the display 450 in a field of view of the user.
- the display 450 may be a stereoscopic display with two separate views of the FOV, such as view 452 for the user's left eye, and view 454 for the user's right eye.
- the two views 452 , 454 may be shown as two images on a single display device or may be shown using separate display devices that are included in the display 450 .
- the display 450 may be transparent, such as in an augmented reality (AR) HMD.
- the view of the FOV of the real-world as seen through the display 450 by the user is composited with virtual objects that are shown on the display 450 .
- the virtual objects may occlude real objects in the FOV as overlay elements and may themselves be transparent or opaque, depending on the technology used for the display 450 and the rendering of the virtual object.
- a virtual object, such as an overlay element may be positioned in a virtual space, which could be two-dimensional or three-dimensional, depending on the embodiment, to be in the same position as an associated real object in real space.
- the display 450 is a stereoscopic display
- two different views of the overlay element may be rendered and shown in two different relative positions on the two views 452 , 454 , depending on the disparity as defined by the inter-ocular distance of a viewer.
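- the disparity-based placement described above can be sketched using the standard pinhole relation, disparity = focal length × baseline / depth, where the baseline corresponds to the viewer's inter-ocular distance. The numeric parameters below (63 mm inter-ocular distance, 800-pixel focal length) are illustrative assumptions, not values from the system.

```python
# Sketch of rendering an overlay element at a virtual depth on a
# stereoscopic display: the element is offset oppositely in the left and
# right views by half the pixel disparity for that depth.

def pixel_disparity(depth_m: float, baseline_m: float = 0.063,
                    focal_px: float = 800.0) -> float:
    """Pinhole-model disparity in pixels for an object at depth_m meters."""
    return focal_px * baseline_m / depth_m

def left_right_positions(x_center_px: float, depth_m: float):
    """Horizontal positions of the overlay element in the two views."""
    half = pixel_disparity(depth_m) / 2.0
    return x_center_px + half, x_center_px - half
```

Nearer virtual objects produce larger disparities, so an overlay element anchored to a close real object separates more between the two views 452, 454 than one anchored to a distant object.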
- the HR system 400 includes one or more sensors in a sensing block 440 to sense at least a portion of the FOV of the user by gathering the appropriate information for that sensor, for example visible light from a visible light camera, from the FOV of the user. Any number of any type of sensor, including sensors described previously herein, may be included in the sensor block 440 , depending on the embodiment.
- the HR system 400 may also include an I/O block 420 to allow communication with external devices.
- the I/O block 420 may include one or both of a wireless network adapter 422 coupled to an antenna 424 and a network adapter 426 coupled to a wired connection 428 .
- the wired connection 428 may be plugged into a portable device, for example a mobile phone, or may be a component of an umbilical system such as used in extreme environments.
- the HR system 400 includes a sound processor 460 which takes input from one or more microphones 462 .
- the microphones 462 may be attached to the user.
- External microphones, for example attached to an autonomous drone, may send sound data samples through wireless or wired connections to I/O block 420 instead of, or in addition to, the sound data received from the microphones 462 .
- the sound processor 460 may generate sound data which is transferred to one or more speakers 464 , which are a type of sound reproduction device.
- the generated sound data may be analog samples or digital values. If more than one speaker 464 is used, the sound processor may generate or simulate 2D or 3D sound placement.
- a first speaker may be positioned to provide sound to the left ear of the user and a second speaker may be positioned to provide sound to the right ear of the user. Together, the first speaker and the second speaker may provide binaural sound to the user.
- the HR system 400 includes a stimulus block 470 .
- the stimulus block 470 is used to provide other stimuli to expand the HR system user experience.
- Embodiments may include numerous haptic pads attached to the user that provide a touch stimulus.
- Embodiments may also include other stimuli, such as, but not limited to, changing the temperature of a glove, changing the moisture level or breathability of a suit, or adding smells to a breathing system.
- the HR system 400 may include a processor 410 and one or more memory devices 430 , which may also be referred to as a tangible medium or a computer readable medium.
- the processor 410 is coupled to the display 450 , the sensing block 440 , the memory 430 , I/O block 420 , sound block 460 , and stimulus block 470 , and is configured to execute the instructions 432 encoded on (i.e. stored in) the memory 430 .
- the HR system 400 may include an article of manufacture comprising a tangible medium 430 , that is not a transitory propagating signal, encoding computer-readable instructions 432 that, when applied to a computer system 400 , instruct the computer system 400 to perform one or more methods described herein, thereby configuring the processor 410 .
- while the processor 410 included in the HR system 400 may be able to perform methods described herein autonomously, in some embodiments, processing facilities outside of those provided by the processor 410 included inside of the HR system 400 may be used to perform one or more elements of methods described herein.
- the processor 410 may receive information from one or more of the sensors 440 and send that information through the wireless network adapter 422 to an external processor, such as a cloud processing system or an external server.
- the external processor may then process the sensor information to identify a pose by another individual and then send information about the pose to the processor 410 through the wireless network adapter 422 .
- the processor 410 may then use that information to initiate an action, an alarm, a notification of the user, or provide any other response to that information.
- the instructions 432 may instruct the HR system 400 to interpret a body-pose message.
- the instructions 432 may instruct the HR system 400 to receive sensor data either transmitted through the wireless network adapter 422 or received from the sensing block 440 from, for example a camera.
- the instructions 432 may instruct the HR system 400 to detect a first body in the sensor data and determine one or more body parts, for example hands or arms, using object recognition.
- the instructions 432 may further instruct the HR system 400 to compute a first point associated with the body part at a first time, and sometime later, receive updated sensor data and compute a second point associated with the body part.
- the instructions 432 may instruct the HR system 400 to use the first and second points to associate an apparent body pose which may be looked up in a table to generate a message.
- the instructions 432 may instruct the HR system 400 to transmit the message, for example presenting an alert on the display 450 .
- the instructions 432 may instruct the HR system 400 to interpret a body-pose message.
- the instructions 432 may instruct the HR system 400 to receive sensor data either transmitted through the wireless network adapter 422 or received from the sensing block 440 from, for example an infra-red camera.
- the instructions 432 may instruct the HR system 400 to detect a first shape in the sensor data, for example a cross using object detection.
- the instructions 432 may further instruct the HR system 400 to compute a first point associated with the shape at a first time, and sometime later, receive updated sensor data and compute a second point associated with the new shape position.
- the instructions 432 may instruct the HR system 400 to use the first and second points to associate an apparent body pose which may be looked up in a table to generate a message.
- the instructions 432 may instruct the HR system 400 to transmit the message, for example presenting text on the display 450 .
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- FIG. 5 is a flowchart 500 of an embodiment of a method for communicating using body poses. This may include communication without using an electronic network link.
- One or more parts of this method are performed by an HR system, which may include a head-mounted display (HMD) worn by a user.
- the method includes receiving data from a sensor 503 .
- the sensor may be a still camera or a video camera sensitive to visible light, infrared light, or any other spectrum of light.
- the sensor may be an ultrasound sensor, a sonar sensor, or a radar sensor. Any type of sensor may be used, depending on the embodiment.
- An individual is detected in the sensor data 505 .
- the individual is not the same person as the user of the HR system.
- the sensor may be remote from the user and the HR system, such as sensors fixedly mounted in the environment or a sensor mounted on a drone which may be controlled by the user of the HR system, the individual detected by the sensor, or by a third party.
- the individual detected by the sensor may be outside of a field of view of the user and the HR system.
- the sensor is conveyed by the user of the HR system and the individual may be within a field of view of the HR system user.
- the sensor may be integrated into the HR system, worn as a separate device by the user, or carried by the user.
- the sensor has a 360° field of view.
- a first situation of at least one body part of the individual in 3D space is ascertained 510 at a first time.
- the ascertaining of the first situation of the at least one body part is performed by a computer remote from the HR system, but in other embodiments, the ascertaining is performed by a processor in the HR system.
- the at least one body part may include one, two, three, or more body parts.
- the body parts may be any part of the individual's body, including, but not limited to, a head, an arm, a hand, a torso, a leg, or a foot.
- the at least one body part includes a first hand and at least one of a second hand, a head, or a torso.
- the method continues by determining a body pose 520 based on the first situation of the at least one body part.
- the situation of the at least one body part may include a position and/or orientation of one or more of the individual body parts included in the at least one body part.
- a triangle may be created 524 with a first vertex based on a first position of a first body part, a second vertex based on a second position of a second body part, and a third vertex based on a third position of a third body part.
- the body pose may then be determined based on characteristics of the triangle.
- the body pose may be determined by an angle of the individual's elbow if their upper arm is near horizontal, or a distance between the individual's feet while they have both hands raised above their head.
- traditional hand signals such as touching the top of one's head with the fingers pointed down to signify “OK” or waving both arms up and down to signify “I need help!”, may be recognized (i.e. determined).
- additional situations of the at least one body parts may be ascertained 522 at later times which may then be used with the first situation to determine the body pose.
- the method may include ascertaining a second situation of the at least one body part in 3D space at a second time and determining the body pose based on both the first situation of the at least one body part and the second situation of the at least one body part.
- the body pose may be determined based on a motion between the first situation of the at least one body part and the second situation of the at least one body part.
- the body pose may be determined, at least in part, by calculating a velocity (distance divided by time) of a body part based on two situations of the body part.
- the velocity may be used in combination with a position at the start or end of a movement to determine the body pose.
- a body pose is only determined after it is held for a predetermined interval, such as one second, two seconds, five seconds, ten seconds or some other interval, depending on the embodiment.
- the determination of the body pose may include computing a first difference between the first situation of the at least one body part at a first time and the second situation of the at least one body part at a second time, with the body pose determined in response to the first difference being less than a threshold.
- the first time and the second time are separated by a predetermined interval, which may be at least 1 second long in some embodiments, and the threshold is an amount of movement that is allowed for movement by the individual during the time that they are holding the body pose.
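- the two timing-based criteria above can be sketched as follows: a velocity computed from two situations of a body part, and a "held pose" test that the movement between samples separated by the predetermined interval stays below the allowed threshold. The interval and threshold values are illustrative assumptions.

```python
import math

# Sketch, assuming 2D positions and timestamps in seconds. The 1-second
# minimum interval and 0.05 movement threshold are example values only.

def velocity(p1, p2, t1: float, t2: float) -> float:
    """Average speed of a body part between two situations (units/second)."""
    return math.dist(p1, p2) / (t2 - t1)

def pose_is_held(p1, p2, t1: float, t2: float,
                 min_interval: float = 1.0, max_movement: float = 0.05) -> bool:
    """True when the samples span the predetermined interval and the body
    part moved less than the threshold allowed while holding the pose."""
    return (t2 - t1) >= min_interval and math.dist(p1, p2) < max_movement
```

The velocity could also be combined with the position at the start or end of the movement, as noted above, to distinguish a fast gesture from a slow drift.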
- Some embodiments may include recognizing a gesture 532 . This may be accomplished by ascertaining a second situation of the at least one body part in 3D space at a second time and ascertaining a third situation of the at least one body part in 3D space at a third time. The gesture may then be recognized based on the body pose ascertained from the first situation of the at least one body part at a first time, the second situation of the at least one body part and the third situation of the at least one body part.
- the method also includes deciding on an action 530 based on the body pose.
- the action may be based on the gesture.
- the action may be decided on based on a sequence of two or more body poses.
- the body pose may be looked up 534 in a table of pre-determined body poses and/or body pose motions to determine the action in some embodiments.
- Deciding on the action may include determining a message to deliver to the user.
- the message may be provided as data during table lookup 534 .
- the table may be a pre-loaded local database, whose contents may be specific to a mission, a particular circumstance, a particular location, a type of individual (e.g. a trained soldier, a civilian, a first responder, or the like), or a specific individual.
- the type of individual may also be determined using sensor data, such as by recognizing a uniform or some characteristic of the individual.
- the message may be an alert, a command, or an alarm.
- the message may include a command to the user to replicate the body pose. This may be done to relay the message from the individual to another entity through the user using the HR system. This may be accomplished, for example, by presenting a virtual image of the relay body pose to the user on a display of the HR system.
- a message may be sent 542 to the user of the HR system as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, a smell, or by any other way of communicating to the user.
- a virtual image may be displayed associated with the at least one body part on the display of the HR system. Non-limiting examples of this include a stopwatch as shown in FIG. 1D , an icon showing the type of individual, or an indicator of the urgency of the message.
- the action may include sending a confirmation 544 to the individual that the body pose was determined.
- the confirmation may be sent in a way to be understandable by the individual using the individual's unaided natural senses as the individual may not have an electronic device, binoculars, or another device to aid them in receiving the confirmation.
- the confirmation may be sent to the individual by use of a bright green light, a loud sound, or a body pose performed by the user of the HR system in response to a prompt.
- sending the confirmation may include presenting a prompt to the user to perform a response body pose.
- FIG. 6 is a flowchart 600 of an embodiment of a method for communicating using targets.
- the method includes receiving data from a sensor 603 .
- the sensor may be a still camera or a video camera sensitive to visible light, infrared light, or any other spectrum of light.
- the sensor may be a time-of-flight camera or some other type of depth-sensing camera or sensor.
- the sensor may be an ultrasound sensor, a sonar sensor, or a radar sensor. Any type of sensor may be used, depending on the embodiment.
- At least one target is detected in the sensor data 605 .
- a situation of the at least one target is controlled by an individual who is not the same person as the user of the HR system.
- the sensor may be remote from the user and the HR system, such as sensors fixedly mounted in the environment or a sensor mounted on a drone which may be controlled by the user of the HR system, the individual, or by a third party.
- the at least one target detected by the sensor may be outside of a field of view of the HR system.
- the sensor is conveyed by the user of the HR system and the at least one target may be within a field of view of the HR system user.
- the sensor may be integrated into the HR system, worn as a separate device by the user, or carried by the user.
- the sensor has a 360° field of view.
- a target may be any recognizable marking, device, or object.
- the target may have a shape that is recognizable, a specific color pattern that is recognizable, or other recognizable properties such as reflectivity of a particular spectrum of electromagnetic radiation, such as infrared light. Multiple characteristics may be used together to recognize a target, such as a specific shape with particular color or color pattern.
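- the use of multiple characteristics together to recognize a target can be sketched as a simple conjunctive match: a detection counts as the target only when every required characteristic (e.g. shape and color pattern) agrees. The candidate records and characteristic values below are hypothetical.

```python
# Illustrative sketch: a target is recognized only when all of its required
# characteristics match, reducing false detections from objects that share
# just one property. The reference characteristics here are invented examples.

KNOWN_TARGET = {"shape": "cross", "reflectance": "infra_red"}

def is_target(candidate: dict, reference: dict = KNOWN_TARGET) -> bool:
    """True when every characteristic required by the reference target is
    present in the candidate detection with a matching value."""
    return all(candidate.get(key) == value for key, value in reference.items())
```

Requiring all characteristics to match is why an unnatural shape such as a cross or bullseye, combined with infra-red reflective paint, is unlikely to be confused with objects occurring naturally in the environment.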
- a target may be a passive device detected by impinging electromagnetic signals, or it may be an active device that emits electromagnetic radiation such as radio-frequency messages or light.
- one or more targets may be printed on or woven into an article or articles of clothing worn by the individual, such as a jacket or a hat.
- a target can also be a stand-alone object that is affixed to the user by using straps, pins, clips, adhesive, or any other mechanism.
- a first situation of at least one target in 3D space is ascertained 610 at a first time.
- the ascertaining of the first situation of the at least one target is performed by a computer remote from the HR system, but in other embodiments, the ascertaining is performed by a processor in the HR system.
- the at least one target may include one, two, three, or more targets.
- each target of the at least one target is positioned on a respective body part of the individual.
- a first target of the at least one target is positioned on a body part of the individual, such as a head, an arm, a hand, a torso, a leg, or a foot, and a second target of the at least one target is positioned on an object separate from the individual.
- the at least one target includes a first target positioned on a first hand of the individual and a second target positioned on a second hand of the individual, a head of the individual, or a torso of the individual.
- the method continues by determining a target configuration 620 based on the first situation of the at least one target.
- the situation of the at least one target may include a position and/or orientation of one or more of the individual targets included in the at least one target.
- a triangle may be created 624 with a first vertex based on a first position of a first target, a second vertex based on a second position of a second target, and a third vertex based on a third position of a third target.
- the target configuration may then be determined based on characteristics of the triangle.
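The triangle-based determination described above can be sketched in code. This is a minimal illustration, not the patented implementation; the function names, thresholds, and configuration labels are all invented for the example, and positions are assumed to be (x, y, z) tuples in meters.

```python
import math

def triangle_features(p1, p2, p3):
    """Compute side lengths and area of the triangle whose vertices are
    three target positions in 3D space (hypothetical helper)."""
    def dist(a, b):
        return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    s = (a + b + c) / 2  # semi-perimeter, for Heron's formula
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return {"sides": (a, b, c), "area": area}

def classify_configuration(features):
    """Map triangle characteristics onto a named target configuration.
    The thresholds and labels here are illustrative assumptions."""
    a, b, c = sorted(features["sides"])
    if features["area"] < 0.01:   # targets nearly collinear
        return "line"
    if c - a < 0.1:               # all sides roughly equal
        return "equilateral"
    return "irregular"
```

Any characteristic of the triangle (side lengths, area, orientation of its plane) could feed the classification; the two checks above merely show the pattern.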
- additional situations of the at least one target may be ascertained 622 at later times which may then be used with the first situation to determine the target configuration.
- the method may include ascertaining a second situation of the at least one target in 3D space at a second time and determining the target configuration based on both the first situation of the at least one target and the second situation of the at least one target.
- the target configuration may be determined based on a motion between the first situation of the at least one target and the second situation of the at least one target.
- the target configuration may be determined, at least in part, by calculating a velocity (distance divided by time) of a target based on two situations of the target.
- the velocity may be used in combination with a position at the start or end of a movement to determine the target configuration.
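The velocity calculation described above reduces to displacement divided by elapsed time between two timed situations. A minimal sketch (the function names and tuple-based 3D positions are assumptions for illustration):

```python
def target_velocity(pos1, t1, pos2, t2):
    """Estimate target velocity as displacement over elapsed time
    between two timed situations of the target."""
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("second situation must be later than the first")
    return tuple((b - a) / dt for a, b in zip(pos1, pos2))

def speed(velocity):
    """Scalar speed: magnitude of the velocity vector."""
    return sum(v * v for v in velocity) ** 0.5
```

The resulting vector (or its magnitude) could then be combined with the start or end position, as described above, to determine the target configuration.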
- a target configuration is only determined after it is held for a predetermined interval, such as one second, two seconds, five seconds, ten seconds or some other interval, depending on the embodiment.
- the determination of the target configuration may include computing a first difference between the first situation of the at least one target at a first time and the second situation of the at least one target at a second time, with the target configuration determined in response to the first difference being less than a threshold.
- the first time and the second time are separated by a predetermined interval, which may be at least 1 second long in some embodiments, and the threshold is the amount of target movement permitted while the individual holds the target configuration, since the individual may not be able to hold perfectly still.
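The hold-still test described above — determining a target configuration only when movement between timed situations stays under a threshold for a predetermined interval — might be sketched like this; the sample format, default interval, and threshold value are illustrative assumptions:

```python
def configuration_held(situations, hold_interval=1.0, movement_threshold=0.05):
    """Return True if the target has stayed within movement_threshold
    (e.g. meters) of its latest position for at least hold_interval
    seconds. `situations` is a list of (timestamp, (x, y, z)) samples,
    oldest first."""
    if len(situations) < 2:
        return False
    t_end, p_end = situations[-1]
    for t, p in reversed(situations[:-1]):
        moved = sum((a - b) ** 2 for a, b in zip(p, p_end)) ** 0.5
        if moved > movement_threshold:
            return False   # target moved too much inside the window
        if t_end - t >= hold_interval:
            return True    # held still for the full interval
    return False           # not enough history to cover the interval
```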
- Some embodiments may include recognizing a gesture 632 . This may be accomplished by ascertaining a second situation of the at least one target in 3D space at a second time and ascertaining a third situation of the at least one target in 3D space at a third time. The gesture may then be recognized based on the target configuration ascertained from the first situation of the at least one target at a first time, the second situation of the at least one target and the third situation of the at least one target.
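One way to sketch the gesture recognition described above is to match the sequence of determined configurations against a table of known gesture patterns. The gesture names and patterns below are invented for illustration only:

```python
# Hypothetical gesture table: each gesture is a short sequence of
# target configurations observed over successive situations.
GESTURES = {
    ("arms_raised", "arms_crossed", "arms_raised"): "need_assistance",
    ("arms_raised", "arms_down"): "all_clear",
}

def recognize_gesture(configurations):
    """Match an observed sequence of configurations (oldest first)
    against the gesture table; return the gesture name or None."""
    seq = tuple(configurations)
    for pattern, gesture in GESTURES.items():
        # a gesture matches when its pattern ends the observed sequence
        if seq[-len(pattern):] == pattern:
            return gesture
    return None
```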
- the method also includes deciding on an action 630 based on the target configuration.
- the action may be based on the gesture.
- the action may be decided on based on a sequence of two or more target configurations.
- the target configuration may be looked up 634 in a table of pre-determined target configurations and/or target motions to determine the action in some embodiments.
- Deciding on the action may include determining a message to deliver to the user.
- the message may be provided as data during table lookup 634 .
- the table may be a pre-loaded local database whose contents may be specific to a mission, a particular circumstance, a particular location, a type of individual (e.g. a trained soldier, a civilian, a first responder, or the like) controlling the at least one target, or a specific individual controlling the at least one target.
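The table lookup described above might be sketched as a mapping keyed by the type of individual and the determined target configuration. All entries below are invented examples; a real table would be pre-loaded and specific to a mission, circumstance, or location:

```python
# Hypothetical pre-loaded lookup table mapping (individual type,
# target configuration) to a (message kind, message text) pair.
MESSAGE_TABLE = {
    ("first_responder", "arms_crossed"): ("alert", "Victim located"),
    ("first_responder", "arms_raised"): ("command", "Send medical team"),
    ("civilian", "arms_raised"): ("alarm", "Civilian requesting help"),
}

def decide_action(individual_type, configuration):
    """Look up the message for a determined target configuration,
    falling back to a generic alert for unrecognized entries."""
    return MESSAGE_TABLE.get(
        (individual_type, configuration),
        ("alert", f"Unrecognized configuration: {configuration}"),
    )
```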
- the type of individual may also be determined using sensor data, such as by recognizing a uniform or some characteristic of the individual.
- the message may be an alert, a command, or an alarm.
- the message may include a command to the user to perform a body pose consistent with the target configuration. This may be done to relay the message from the individual to another entity through the user of the HR system. This may be accomplished by presenting a virtual image of the body pose consistent with the target configuration to the user on a display of the HR system.
- the method includes performing the action 640 on an HR system used by a user different than the individual. This may include sending 642 a message to a computer remote from the HR system, for example through a wireless or wired network connection.
- a message may be sent 642 to the user of the HR system as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, a smell, or by any other way of communicating to the user.
- a virtual image may be displayed associated with the at least one target on the display. Non-limiting examples of this include a stopwatch as shown in FIG. 2D , an icon showing the type of individual, or an indicator of the urgency of the message.
- the action may include sending a confirmation 644 to the individual controlling the at least one target that the target configuration was determined.
- the confirmation may be sent in a way to be understandable by the individual using the individual's unaided natural senses as the individual may not have an electronic device, binoculars, or another device to aid them in receiving the confirmation.
- the confirmation may be sent to the individual by use of a bright green light, a loud sound, or a body pose performed by the user of the HR system in response to a prompt.
- sending the confirmation may include presenting a prompt to the user to perform a response body pose.
- aspects of the various embodiments may be embodied as a system, device, method, or computer program product apparatus. Accordingly, elements of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, or the like) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “server,” “circuit,” “module,” “client,” “computer,” “logic,” or “system,” or other terms. Furthermore, aspects of the various embodiments may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer program code stored thereon.
- a computer-readable storage medium may be embodied as, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or other like storage devices known to those of ordinary skill in the art, or any suitable combination of computer-readable storage mediums described herein.
- a computer-readable storage medium may be any tangible medium that can contain, or store a program and/or data for use by or in connection with an instruction execution system, apparatus, or device.
- a computer data transmission medium such as a transmission line, a coaxial cable, a radio-frequency carrier, and the like, may also be able to store data, although any data storage in a data transmission medium can be said to be transitory storage. Nonetheless, a computer-readable storage medium, as the term is used herein, does not include a computer data transmission medium.
- Computer program code for carrying out operations for aspects of various embodiments may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Python, C++, or the like, conventional procedural programming languages, such as the “C” programming language or similar programming languages, or low-level computer languages, such as assembly language or microcode.
- the computer program code, when loaded onto a computer or other programmable apparatus, produces a computer-implemented method.
- the instructions which execute on the computer or other programmable apparatus may provide the mechanism for implementing some or all of the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server, such as a cloud-based server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- the computer program code stored in/on (i.e. embodied therewith) the non-transitory computer-readable medium produces an article of manufacture.
- the computer program code if executed by a processor causes physical changes in the electronic devices of the processor which change the physical flow of electrons through the devices. This alters the connections between devices which changes the functionality of the circuit. For example, if two transistors in a processor are wired to perform a multiplexing operation under control of the computer program code, if a first computer instruction is executed, electrons from a first source flow through the first transistor to a destination, but if a different computer instruction is executed, electrons from the first source are blocked from reaching the destination, but electrons from a second source are allowed to flow through the second transistor to the destination. So a processor programmed to perform a task is transformed from what the processor was before being programmed to perform that task, much like a physical plumbing system with different valves can be controlled to change the physical flow of a fluid.
- the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise.
- the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
- the term “coupled” includes direct and indirect connections. Moreover, where first and second devices are coupled, intervening devices including active devices may be located there between.
- Embodiment 1 A method comprising: receiving data from a sensor; detecting an individual in the sensor data; ascertaining a first situation of at least one body part of the individual in 3D space at a first time; determining a body pose based on the first situation of the at least one body part; deciding on an action based on the body pose; and performing the action on a hybrid reality (HR) system used by a user different than the individual.
- Embodiment 2 The method of embodiment 1, the HR system comprising a head-mounted display.
- Embodiment 3 The method of embodiment 1, wherein said ascertaining the first situation of the at least one body part is performed by a computer remote from the HR system.
- Embodiment 4 The method of embodiment 1, the at least one body part comprising a head, an arm, a hand, a torso, a leg, or a foot.
- Embodiment 5 The method of embodiment 1, the at least one body part comprising a first hand and at least one of a second hand, a head, or a torso.
- Embodiment 6 The method of embodiment 1, the at least one body part comprising a first body part, a second body part, and a third body part; the method further comprising: creating a triangle with a first vertex based on a first position of the first body part, a second vertex based on a second position of the second body part, and a third vertex based on a third position of the third body part; and determining the body pose based on characteristics of the triangle.
- Embodiment 7 The method of embodiment 1, the action comprising sending a confirmation to the individual that the body pose was determined.
- Embodiment 8 The method of embodiment 7, the confirmation sent in a way to be understandable by the individual using the individual's unaided natural senses.
- Embodiment 9 The method of embodiment 7, said sending the confirmation comprising presenting a prompt to the user to perform a response body pose.
- Embodiment 10 The method of embodiment 1, said deciding on the action comprising selecting a particular action from a set of predetermined actions based on the body pose.
- Embodiment 11 The method of embodiment 1, said deciding on the action comprising determining a message to deliver to the user based on the body pose.
- Embodiment 12 The method of embodiment 11, wherein the message is an alert, a command, or an alarm.
- Embodiment 13 The method of embodiment 11, wherein the message is delivered to the user as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on a display of the HR system, audible speech, sound, a haptic indication, or a smell.
- Embodiment 14 The method of embodiment 11, the message comprising a command to the user to replicate the body pose.
- Embodiment 15 The method of embodiment 14, further comprising presenting a virtual image of the body pose to the user on a display of the HR system.
- Embodiment 16 The method of embodiment 1, wherein the sensor is remote from the user and the HR system.
- Embodiment 17 The method of embodiment 16, wherein the sensor is mounted on a drone controlled by the user.
- Embodiment 18 The method of embodiment 16, wherein the individual is outside of a field of view of the HR system.
- Embodiment 19 The method of embodiment 1, the action comprising displaying a virtual image associated with the at least one body part on the display of the HR system.
- Embodiment 20 The method of embodiment 1, wherein the sensor is conveyed by the user.
- Embodiment 21 The method of embodiment 20, wherein the sensor is integrated into the HR system.
- Embodiment 22 The method of embodiment 20, wherein the sensor has a 360° field of view.
- Embodiment 23 The method of embodiment 20, wherein the individual is within a field of view of the user.
- Embodiment 24 The method of embodiment 1, the action comprising sending a message to a computer remote from the HR system, wherein the message is based on the body pose.
- Embodiment 25 The method of embodiment 1, further comprising: ascertaining a second situation of the at least one body part in 3D space at a second time; determining the body pose based on both the first situation of the at least one body part and the second situation of the at least one body part.
- Embodiment 26 The method of embodiment 25, further comprising determining the body pose based on a motion between the first situation of the at least one body part and the second situation of the at least one body part.
- Embodiment 27 The method of embodiment 25, further comprising: computing a first difference between the first situation of the at least one body part and the second situation of the at least one body part; and determining the body pose in response to the first difference being less than a threshold; wherein the first time and the second time are separated by a predetermined interval.
- Embodiment 28 The method of embodiment 27, the predetermined interval being at least 1 second long.
- Embodiment 29 The method of embodiment 1, further comprising: ascertaining a second situation of the at least one body part in 3D space at a second time; ascertaining a third situation of the at least one body part in 3D space at a third time; recognizing a gesture based on the body pose, the second situation of the at least one body part and the third situation of the at least one body part; and deciding on the action based on the gesture.
- Embodiment 30 The method of embodiment 1, further comprising: ascertaining a second situation of the at least one body part in 3D space at a second time; determining a different body pose based on the second situation of the at least one body part; and deciding on the action based on a sequence of the body pose followed by the different body pose.
- Embodiment 31 An article of manufacture comprising a tangible medium, that is not a transitory propagating signal, encoding computer-readable instructions that, when applied to a computer system, instruct the computer system to perform a method comprising: receiving data from a sensor; detecting an individual in the sensor data; ascertaining a first situation of at least one body part of the individual in 3D space at a first time; determining a body pose based on the first situation of the at least one body part; deciding on an action based on the body pose; and performing the action on a hybrid reality (HR) system worn by a user different than the individual.
- Embodiment 32 A head-mounted display (HMD) comprising: a display; a structure, coupled to the display and adapted to position the display in a field-of-view (FOV) of the user; and a processor, coupled to the display, the processor configured to: receive data from a sensor; detect an individual not wearing the HMD in the sensor data; ascertain a first situation of at least one body part of the individual in 3D space at a first time; determine a body pose based on the first situation of the at least one body part; decide on an action based on the body pose; and perform the action.
- Embodiment 33 A method comprising: receiving data from a sensor; detecting at least one target in the sensor data, a situation of the at least one target controlled by an individual; ascertaining a first situation of the at least one target in 3D space at a first time; determining a target configuration based on the first situation of the at least one target; deciding on an action based on the target configuration; and performing the action on a hybrid reality (HR) system used by a user different than the individual.
- Embodiment 34 The method of embodiment 33, the HR system comprising a head-mounted display.
- Embodiment 35 The method of embodiment 33, wherein said ascertaining the first situation of the at least one target is performed by a computer remote from the HR system.
- Embodiment 36 The method of embodiment 33, wherein each target of the at least one target is positioned on the individual.
- Embodiment 37 The method of embodiment 33, wherein a first target of the at least one target is positioned on a body part of the individual.
- Embodiment 38 The method of embodiment 37, the body part being a head, an arm, a hand, a torso, a leg, or a foot.
- Embodiment 39 The method of embodiment 33, wherein a second target of the at least one target is positioned on an object separate from the individual.
- Embodiment 40 The method of embodiment 33, the at least one target comprising a first target positioned on a first hand of the individual and a second target positioned on a second hand of the individual, a head of the individual, or a torso of the individual.
- Embodiment 41 The method of embodiment 33, the at least one target comprising a first target, a second target, and a third target; the method further comprising: creating a triangle with a first vertex based on a first position of the first target, a second vertex based on a second position of the second target, and a third vertex based on a third position of the third target; and determining the target configuration based on characteristics of the triangle.
- Embodiment 42 The method of embodiment 33, the action comprising sending a confirmation to the individual that the target configuration was determined.
- Embodiment 43 The method of embodiment 42, the confirmation sent in a way to be understandable by the individual using the individual's unaided natural senses.
- Embodiment 44 The method of embodiment 42, said sending the confirmation comprising presenting a prompt to the user to perform a response body pose.
- Embodiment 45 The method of embodiment 33, said deciding on the action comprising selecting a particular action from a set of predetermined actions based on the target configuration.
- Embodiment 46 The method of embodiment 33, said deciding on the action comprising determining a message to deliver to the user based on the target configuration.
- Embodiment 47 The method of embodiment 46, wherein the message is one of an alert, a command, or an alarm.
- Embodiment 48 The method of embodiment 46, wherein the message is delivered to the user as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, or a smell.
- Embodiment 49 The method of embodiment 46, the message comprising a command to the user to perform a body pose consistent with the target configuration.
- Embodiment 50 The method of embodiment 49, further comprising presenting a virtual image of the body pose to the user on a display of the HR system.
- Embodiment 51 The method of embodiment 33, wherein the sensor is remote from the user and the HR system.
- Embodiment 52 The method of embodiment 51, wherein the sensor is mounted on a drone controlled by the user.
- Embodiment 53 The method of embodiment 51, wherein the at least one target is outside of a field of view of the HR system.
- Embodiment 54 The method of embodiment 33, the action comprising displaying a virtual image associated with the at least one target on a display of the HR system.
- Embodiment 55 The method of embodiment 33, wherein the sensor is conveyed by the user.
- Embodiment 56 The method of embodiment 55, wherein the sensor is integrated into the HR system.
- Embodiment 57 The method of embodiment 55, wherein the sensor has a 360° field of view.
- Embodiment 58 The method of embodiment 55, wherein the at least one target is within a field of view of the user.
- Embodiment 59 The method of embodiment 33, the action comprising sending a message to a computer remote from the HR system, wherein the message is based on the target configuration.
- Embodiment 60 The method of embodiment 33, further comprising: ascertaining a second situation of the at least one target in 3D space at a second time; determining the target configuration based on both the first situation of the at least one target and the second situation of the at least one target.
- Embodiment 61 The method of embodiment 60, further comprising determining the target configuration based on a motion between the first situation of the at least one target and the second situation of the at least one target.
- Embodiment 62 The method of embodiment 60, further comprising: computing a first difference between the first situation of the at least one target and the second situation of the at least one target; and determining the target configuration in response to the first difference being less than a threshold; wherein the first time and the second time are separated by a predetermined interval.
- Embodiment 63 The method of embodiment 62, the predetermined interval being at least 1 second long.
- Embodiment 64 The method of embodiment 33, further comprising: ascertaining a second situation of the at least one target in 3D space at a second time; ascertaining a third situation of the at least one target in 3D space at a third time; recognizing a gesture based on the target configuration, the second situation of the at least one target and the third situation of the at least one target; and deciding on the action based on the gesture.
- Embodiment 65 The method of embodiment 33, further comprising: ascertaining a second situation of the at least one target in 3D space at a second time; determining a different target configuration based on the second situation of the at least one target; and deciding on the action based on a sequence of the target configuration followed by the different target configuration.
- Embodiment 66 An article of manufacture comprising a tangible medium, that is not a transitory propagating signal, encoding computer-readable instructions that, when applied to a computer system, instruct the computer system to perform a method comprising: receiving data from a sensor; detecting at least one target in the sensor data, a situation of the at least one target controlled by an individual; ascertaining a first situation of the at least one target in 3D space at a first time; determining a target configuration based on the first situation of the at least one target; deciding on an action based on the target configuration; and performing the action on a hybrid reality (HR) system worn by a user different than the individual.
- A head-mounted display (HMD) comprising: a display; a structure, coupled to the display and adapted to position the display in a field-of-view (FOV) of the user; and a processor, coupled to the display, the processor configured to: receive data from a sensor; detect at least one target in the sensor data, a situation of the at least one target controlled by an individual not wearing the HMD; ascertain a first situation of the at least one target in 3D space at a first time; determine a target configuration based on the first situation of the at least one target; decide on an action based on the target configuration; and perform the action.
Description
- This application is a continuation of U.S. patent application Ser. No. 16/230,278, entitled Body Pose Message System, filed on Dec. 21, 2018, the contents of which are incorporated by reference herein for any and all purposes.
- The present subject matter relates to using a Virtual Reality (VR) or Augmented Reality (AR) system to allow other personnel to provide messages to a device without using a network communications link.
- Many situations require the presentation of information to a user in a way that ensures the user receives the information when it is needed and acts on it accordingly. One of the many professions where this is important is emergency response, where the ability to receive the right information at the right time can be a matter of life or death. Traditionally, emergency responders have relied on audio transmissions over a radio for the majority of their information, but that is changing with the advent of widespread wireless digital communication.
- Another new technology that is making its way into the world of emergency responders is digital displays. These displays may be on a handheld device, such as a mobile phone, or on a head-mounted display (HMD), such as a virtual reality (VR) display or an augmented reality (AR) display, which may be integrated into their emergency equipment, such as their helmet. Textual information can be presented to the emergency responder through the display and the information can be updated in real-time through the digital wireless interface from a command center or other information sources.
- The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate various embodiments. Together with the general description, the drawings serve to explain various principles. In the drawings:
- FIG. 1A shows a scene on the display of an embodiment of a head-mounted display depicting a first body pose detected using body parts;
- FIG. 1B shows a scene on the display of an embodiment of a head-mounted display depicting a second body pose detected using body parts;
- FIG. 1C shows a scene on the display of an embodiment of a head-mounted display depicting a third body pose detected using body parts;
- FIG. 1D shows a scene on the display of an embodiment of a head-mounted display depicting a fourth body pose detected using body parts;
- FIG. 2A shows a scene on the display of an embodiment of a head-mounted display depicting a first body pose detected using body targets;
- FIG. 2B shows a scene on the display of an embodiment of a head-mounted display depicting a second body pose detected using body targets;
- FIG. 2C shows a scene on the display of an embodiment of a head-mounted display depicting a third body pose detected using body targets;
- FIG. 2D shows a scene on the display of an embodiment of a head-mounted display depicting a fourth body pose detected using body targets;
- FIG. 3 shows an embodiment of an HR system used to create network connections where there are none;
- FIG. 4 shows a block diagram of an embodiment of an HR system;
- FIG. 5 is a flowchart of an embodiment of a method for communicating using body poses; and
- FIG. 6 is a flowchart of an embodiment of a method for communicating using targets.
- In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well known methods, procedures and components have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present concepts. A number of descriptive terms and phrases are used in describing the various embodiments of this disclosure. These descriptive terms and phrases are used to convey a generally agreed upon meaning to those skilled in the art unless a different definition is given in this specification. Some descriptive terms and phrases are presented in the following paragraphs for clarity.
- Hybrid-reality (HR), as the phrase is used herein, refers to an image that merges real-world imagery with imagery created in a computer, which is sometimes called virtual imagery. While an HR image can be a still image, it can also be a moving image, such as imagery created using a video stream. HR can be displayed by a traditional two-dimensional display device, such as a computer monitor, one or more projectors, or a smartphone screen. An HR system can be based on a device such as a microscope, binoculars, or a telescope, with virtual imagery superimposed over the image captured by the device. In such HR systems, an eyepiece of a device may be considered the display of the system. HR imagery can also be displayed by a head-mounted display (HMD). Many different technologies can be used in an HMD to display HR imagery. A virtual reality (VR) HMD system may receive images of a real-world object, objects, or scene, and composite those images with a virtual object, objects, or scene to create an HR image. An augmented reality (AR) HMD system may present a virtual object, objects, or scene on a transparent screen which then naturally mixes the virtual imagery with a view of a scene in the real-world. A display which mixes live video with virtual objects is sometimes denoted AR, but for the purposes of this disclosure, an AR HMD includes at least a portion of the display area that is transparent to allow at least some of the user's view of the real-world to be directly viewed through the transparent portion of the AR HMD. The display used by an HR system represents a scene which is a visible portion of the whole environment. As used herein, the terms “scene” and “field of view” (FOV) are used to indicate what is visible to a user.
- The word “occlude” is used herein to mean that a pixel of a virtual element is mixed with an image of another object to change the way the object is perceived by a viewer. In a VR HMD, this can be done through use of a compositing process to mix the two images, a Z-buffer technique to remove elements of the image that are hidden from view, a painter's algorithm to render closer objects later in the rendering process, or any other technique that can replace a pixel of the image of the real-world object with a different pixel value generated from any blend of real-world object pixel value and an HR system determined pixel value. In an AR HMD, the virtual object occludes the real-world object if the virtual object is rendered, transparently or opaquely, in the line of sight of the user as they view the real-world object. In the following description, the terms “occlude”, “transparency”, “rendering” and “overlay” are used to denote the mixing or blending of new pixel values with existing object pixel values in an HR display.
- In some embodiments of HR systems, there are sensors which provide the information used to render the HR imagery. A sensor may be mounted on or near the display, on the viewer's body, or be remote from the user. Remote sensors may include, but are not limited to, fixed sensors attached in an environment, sensors attached to robotic extensions, sensors attached to autonomous or semi-autonomous drones, or sensors attached to other persons. Data from the sensors may be raw or filtered. Data from the sensors may be transmitted wirelessly or using a wired connection.
- Sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images in the visible spectrum, an infrared depth camera, a microphone, a sound locator, a Hall effect sensor, an air-flow meter, a fuel level sensor, an oxygen sensor, an electronic nose, a gas detector, an anemometer, a mass flow sensor, a Geiger counter, a gyroscope, an infrared temperature sensor, a flame detector, a barometer, a pressure sensor, a pyrometer, a time-of-flight camera, radar, or lidar. Sensors in some HR system embodiments that may be attached to the user include, but are not limited to, a biosensor, a biochip, a heartbeat sensor, a pedometer, a skin resistance detector, or skin temperature detector.
- The display technology used by an HR system embodiment may include any method of projecting an image to an eye. Conventional technologies include, but are not limited to, cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED), plasma, or organic LED (OLED) screens, or projectors based on those technologies or digital micromirror devices (DMD). It is also contemplated that virtual retina displays, such as direct drawing on the eye's retina using a holographic grating, may be used. It is also contemplated that direct machine to brain interfaces may be used in the future.
- The display of an HR system may also be an HMD or a separate device, such as, but not limited to, a hand-held mobile phone, a tablet, a fixed monitor or a TV screen.
- The connection technology used by an HR system may include any physical link and associated protocols, such as, but not limited to, wires, transmission lines, solder bumps, near-field connections, infra-red connections, or radio frequency (RF) connections such as cellular, satellite or Wi-Fi® (a registered trademark of the Wi-Fi Alliance). Virtual connections, such as software links, may also be used to connect to external networks and/or external computing resources.
- In many HR embodiments, aural stimuli and information may be provided by a sound system. The sound technology may include monaural, binaural, or multi-channel systems. A binaural system may include a headset or another two-speaker system but may also include systems with more than two speakers directed to the ears. The sounds may be presented as 3D audio, where each sound has a perceived position in space, achieved by using reverberation and head-related transfer functions to mimic how sounds change as they move in a particular space.
- In many HR system embodiments, objects in the display may move. The movement may be due to the user moving within the environment, for example walking, crouching, turning, or tilting the head. The movement may be due to an object moving, for example a dog running away, a car coming towards the user, or a person entering the FOV. The movement may also be due to an artificial movement, for example the user moving an object on a display or changing the size of the FOV. In one embodiment, the motion may be due to the user deliberately distorting all or part of the FOV, for example adding a virtual fish-eye lens. In the following description, all motion is considered relative; any motion may be resolved to a motion from a single frame of reference, for example the user's viewpoint.
- When there is motion in an HR system, the perspective of any generated object overlay may be corrected so that it changes with the shape and position of the associated real-world object. This may be done with any conventional point-of-view transformation based on the angle of the object from the viewer; note that the transformation is not limited to simple linear or rotational functions, with some embodiments using non-Abelian transformations. It is contemplated that motion effects, for example blur or deliberate edge distortion, may also be added to a generated object overlay.
- In some HR embodiments, images from cameras, whether sensitive to one or more of visible, infra-red, or microwave spectra, may be processed before algorithms are executed. Algorithms used after image processing for embodiments disclosed herein may include, but are not limited to, object recognition, motion detection, camera motion and zoom detection, light detection, facial recognition, text recognition, or mapping an unknown environment. The image processing may also use conventional filtering techniques, such as, but not limited to, static, adaptive, linear, non-linear, and Kalman filters. Deep-learning neural networks may be trained in some embodiments to mimic functions which are hard to create algorithmically. Image processing may also be used to prepare the image, for example by reducing noise, restoring the image, edge enhancement, or smoothing.
- In some HR embodiments, objects may be detected in the FOV of one or more cameras. Objects may be detected by using conventional algorithms, such as, but not limited to, edge detection, feature detection (for example surface patches, corners and edges), greyscale matching, gradient matching, pose consistency, or database look-up using geometric hashing. Genetic algorithms and trained neural networks using unsupervised learning techniques may also be used in embodiments to detect types of objects, for example people, dogs, or trees.
- In embodiments of an HR system, object recognition may be performed on a single frame of a video stream, although techniques using multiple frames are also envisioned. Advanced techniques, such as, but not limited to, Optical Flow, camera motion, and object motion detection may be used between frames to enhance object recognition in each frame.
- After object recognition, rendering the object may be done by the HR system embodiment using databases of similar objects, the geometry of the detected object, or how the object is lit, for example specular reflections or bumps.
- In some embodiments of an HR system, the locations of objects may be generated from maps and object recognition from sensor data. Mapping data may be generated on the fly using conventional techniques, for example the Simultaneous Localization and Mapping (SLAM) algorithm used to estimate locations using Bayesian methods, or extended Kalman filtering, which linearizes a non-linear model to estimate the mean and covariance of a state (map), or particle filters, which use Monte Carlo methods to estimate hidden states (map). The locations of objects may also be determined a priori, using techniques such as, but not limited to, reading blueprints, reading maps, receiving GPS locations, or receiving relative positions to a known point (such as a cell tower, access point, or other person) determined using depth sensors, WiFi time-of-flight, or triangulation to at least three other points.
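As an illustrative sketch only (not part of the disclosed system), the triangulation to at least three known points mentioned above can be reduced to a small linear solve; the anchor coordinates and measured ranges below are hypothetical values:

```python
import math

def trilaterate_2d(p1, p2, p3, d1, d2, d3):
    """Estimate an (x, y) location from distances to three known points.

    Subtracting pairs of circle equations removes the quadratic terms,
    leaving a 2x2 linear system solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three anchors (e.g. access points) at hypothetical known positions,
# with ranges measured to an object actually located at (3, 4):
x, y = trilaterate_2d((0, 0), (10, 0), (0, 10),
                      5.0, math.sqrt(65), math.sqrt(45))
```

Real range measurements are noisy, so a deployed system would typically use more than three anchors and a least-squares or Kalman-filtered estimate rather than an exact solve.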
- Gyroscope sensors on or near the HMD may be used in some embodiments to determine head position and to generate relative motion vectors which can be used to estimate location.
- In embodiments of an HR system, sound data from one or more microphones may be processed to detect specific sounds. Sounds that might be identified include, but are not limited to, human voices, glass breaking, human screams, gunshots, explosions, door slams, or a sound pattern a particular machine makes when defective. Gaussian Mixture Models and Hidden Markov Models may be used to generate statistical classifiers that are combined and looked up in a database of sound models. One advantage of using statistical classifiers is that sounds can be detected more consistently in noisy environments.
- In some embodiments of an HR system, eye tracking of one or both of the viewer's eyes may be performed. Eye tracking may be used to measure the point of the viewer's gaze. In an HMD, the position of each eye is known, and so there is a reference frame for determining head-to-eye angles, and so the position and rotation of each eye can be used to estimate the gaze point. Eye position determination may be done using any suitable technique and/or device, including, but not limited to, devices attached to an eye, tracking the eye position using infra-red reflections, for example Purkinje images, or using the electric potential of the eye detected by electrodes placed near the eye, which uses the electrical field generated by an eye whether or not the eye is closed.
- In some HR embodiments, input is used to control the HR system, either from the user of the HR system or from external actors. The methods of input used vary by embodiment, and each input type may control any function or a subset of an HR system's functions. For example, in some embodiments gestures are used as control input. A gesture may be detected by using other systems coupled to the HR system, such as, but not limited to, a camera, a stereo camera, a depth camera, a wired glove, or a controller. In some embodiments using a camera for gesture detection, the video stream is analyzed to detect the position and movement of an object, for example a hand, a finger, or a body pose. The position and motion can be used to generate a 3D or 2D path and, by using stochastic or pattern matching techniques, determine the most likely gesture used.
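The path-based gesture matching described above can be sketched, under simplifying assumptions, as resampling a captured 2D path and comparing it against stored templates by average point distance. The template names and coordinates here are hypothetical; a production recognizer would add rotation and scale normalization and a stochastic score:

```python
import math

def resample(path, n=16):
    """Resample a polyline to n evenly spaced points along its length."""
    dists = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        # advance to the segment containing the target arc length
        while j < len(path) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j]
        t = 0.0 if seg == 0 else (target - dists[j]) / seg
        out.append((path[j][0] + t * (path[j + 1][0] - path[j][0]),
                    path[j][1] + t * (path[j + 1][1] - path[j][1])))
    return out

def match_gesture(path, templates):
    """Return the template name whose resampled path is closest on average."""
    p = resample(path)
    def score(tmpl):
        q = resample(tmpl)
        return sum(math.hypot(a[0] - b[0], a[1] - b[1])
                   for a, b in zip(p, q)) / len(p)
    return min(templates, key=lambda name: score(templates[name]))
```

Resampling makes the comparison independent of how fast the hand moved, since both paths are reduced to the same number of points along their lengths.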
- In another example embodiment, the user's head position and movement may be used as a gesture or direct control. The head position and movement may be determined by gyroscopes mounted into an HMD. In another example, a fixed source such as an electromagnetic beam may be affixed to a user or mounted in an HMD; coupled sensors can then track the electromagnetic beam as the user's head is moved.
- In yet other example embodiments, the user may have a touch-pad or a plurality of touch sensors affixed to the body, for example built-in to a glove, a suit, or an HMD, coupled to the HR system. By touching a specific point, different input data can be generated. Note that the time of a touch or the pattern of touches may also generate different input types. In some technologies, touchless sensors that detect proximity to the sensor can be used.
- In some embodiments a physical input device is coupled to the HR system. The physical input device may be a mouse, a pen, a keyboard, or a wand. If a wand controller is used, the HR system tracks the position and location of the wand as well as presses of any buttons on the wand; the wand may be tracked using a camera, for example using object boundary recognition, using target tracking where a specific shape or target is detected in each video frame, or by wired/wireless data from the wand received by the HR system. In other example embodiments, a physical input device may be virtual, where a device is rendered on the head-mounted display and the user interacts with the virtual controller using other HR systems, such as, but not limited to, gaze direction, hand tracking, finger tracking, or gesture detection. In embodiments which use gaze direction as input, interaction with virtual menus rendered on the display may be used.
- Further, in another example embodiment, a backwards-facing camera mounted in an HMD may be used to detect blinking or facial muscle movement. By tracking blink patterns or facial muscle motion, input gestures can be determined.
- As used herein, a "situation" of an object may refer to any aspect of the object's position and/or orientation in three-dimensional space. The situation may refer to any value of an object's six degrees of freedom, such as up/down, forward/back, left/right, roll, pitch, and yaw. In some cases, the situation of an object may refer to only the position of an object with respect to a three-dimensional axis without referring to its orientation (e.g. roll, pitch, and yaw), or the situation may refer only to the object's orientation without referring to its position. In other cases, the situation of the object may refer to one or more position and/or orientation values. In some cases, the situation of the object may also include an aspect of its velocity or acceleration vector, such as the speed or direction of movement of the object.
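A minimal sketch of how a "situation" might be represented in software; the field names and layout are assumptions for illustration, as the disclosure does not prescribe a data structure:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Situation:
    """Position and/or orientation of an object (up to six degrees of
    freedom). Fields left as None are simply unspecified, so a Situation
    may carry only a position, only an orientation, or both, optionally
    with a velocity vector."""
    position: Optional[Vec3] = None      # up/down, forward/back, left/right
    orientation: Optional[Vec3] = None   # roll, pitch, yaw
    velocity: Optional[Vec3] = None      # speed and direction of movement

# An object whose position is known but whose orientation is not tracked:
s = Situation(position=(1.0, 0.0, 2.5))
```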
- In some embodiments, breathing patterns may be detected using a pressure sensor mounted in a breathing system coupled to the HR system to detect changes in pressure. Breath patterns such as, but not limited to, blowing softly, exhaling hard, or inhaling suddenly may be used as input data for an HR control system.
- In yet other example embodiments, sounds may be detected by one or more microphones coupled to the HR system. Specific sounds, such as, but not limited to, vocalizations (e.g. scream, shout, lip buzz, snort, whistle), stamping, or clapping, may be detected using stochastic or pattern matching techniques on received audio data. In some embodiments, more than one microphone may be used to place a sound in a location, allowing the position of a sound, for example a clap, to provide additional input control data. In some embodiments, voice control using natural language is used; speech recognition techniques such as trained neural networks or hidden Markov model algorithms are used by an HR system to determine what has been said.
- It is anticipated that direct neural interfaces may be used in some embodiments to control an HR system.
- Turning now to the current disclosure, systems that display HR imagery are becoming increasingly common and are making their way from entertainment and gaming into industrial and commercial applications. Examples of systems that may find HR imagery useful include aiding a person doing a task in a hazardous environment, for example repairing machinery, neutralizing danger, or responding to an emergency.
- Many of the same environments where HR imagery might be used may have intermittent or no network connectivity. In these environments safety is a priority, and so simple signaling between team members is imperative. Accordingly, systems and methods for allowing signals to be transmitted without requiring an active network communications link may be useful.
- An HR system may be used to track the motion of other personnel in the environment using sensors. The body shape of a team member may be determined using sensors to provide a method of relaying information. The tracking of personnel using line-of-sight cameras is disclosed herein, along with techniques for tracking personnel who are out of direct sight lines. An HR system may also be used in conjunction with other systems to relay information where network connectivity is not available at each hop.
- A body may be detected by an embodiment of an HR system using object recognition from a video feed. Detecting human bodies in a video may be done using a trained neural network, as is done in many smartphones today. Note that in some embodiments the camera may be pointing forward from the user wearing the headset (corresponding to the user's current field of view), pointing to the side of or behind the user, or may be a camera with a wide-angle view or an array of cameras generating a wide-angle view up to and including a 360° view. In some embodiments, the video feed is transmitted to the HR system using a network from other cameras in the environment.
- In some embodiments, a sensor detecting non-visible light may be used to generate the video feed. For example, an infra-red camera may be used to “see through” obscuring atmosphere, such as smoke or water vapor. In another example, ultra-high-frequency sensors may be used to “see through” solid barriers such as walls or ceilings.
- In some embodiments the sensor is not being carried by the user of an HR system. An external sensor may be carried by another team member, mounted on a drone or robot, or fixed in the environment, to give three non-limiting examples. The body detection processing may be done using computing resources within the HR system, using an external computing resource with a result of body detection computation transmitted to the HR system using a network connection, or some combination thereof.
- Note that if a network connection is used to transmit data from an external sensor to an HR system, there is no requirement that the person signaling is connected to the HR system. In some scenarios, there may be a network connection between team members, but the signaler may be occupied interacting with an HR system, for example using voice commands, eye gaze, and muscle twitch. In these scenarios, a body pose command does not interrupt or block the signaler's input systems and is quick and easy to perform, and so is advantageous.
- In some potential scenarios, an electronic network connection from the signaler may be available but may be very low bandwidth or extremely intermittent. In these scenarios, an example HR system may transmit a single, small packet with error correction parity that indicates that a sequence of body pose commands is to be performed. The packet may then be received by other HR systems in the vicinity to start body pose detection algorithms.
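A minimal sketch of such a small packet, assuming a hypothetical one-byte marker value, a one-byte sequence identifier, and a single XOR parity byte. Note that simple parity only detects errors; a system that must also correct errors, as the paragraph above contemplates, would use a forward-error-correcting code instead:

```python
def encode_packet(pose_sequence_id: int) -> bytes:
    """Build a minimal 'body pose commands follow' packet: a one-byte
    magic marker, a one-byte sequence id, and an XOR parity byte."""
    magic = 0xB7                      # hypothetical marker value
    body = bytes([magic, pose_sequence_id & 0xFF])
    parity = 0
    for b in body:
        parity ^= b
    return body + bytes([parity])

def decode_packet(pkt: bytes):
    """Return the sequence id if the marker and parity check out, else None."""
    if len(pkt) != 3:
        return None
    parity = pkt[0] ^ pkt[1]
    return pkt[1] if parity == pkt[2] and pkt[0] == 0xB7 else None
```

A receiving HR system would start its body pose detection algorithms only when `decode_packet` returns a valid sequence id, so a corrupted packet is silently ignored rather than triggering spurious detection.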
- When creating and detecting a body pose using an example embodiment, one or more factors may be used to determine whether a pose is present, such as, but not limited to, keeping one or more body parts fixed in space for a minimum period of time, maintaining a relationship between two or more body parts (e.g. a subtended angle from center of body between two hands), or tracing a known path using one or more body parts (e.g. a gesture). In some scenarios, a single body pose may not have 100% accurate detection because of adverse environmental conditions. To increase the success rate of detecting a body pose, some embodiments may require a sequence or combination of separate body poses to be valid.
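Two of the factors above, a subtended angle between the hands and a minimum hold time, can be sketched as follows. The sample format, tolerance, and hold duration are assumptions for illustration:

```python
import math

def subtended_angle(center, left_hand, right_hand):
    """Angle at the body center between the two hands, in radians."""
    def unit(v):
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n)
    a = unit((left_hand[0] - center[0], left_hand[1] - center[1]))
    b = unit((right_hand[0] - center[0], right_hand[1] - center[1]))
    dot = max(-1.0, min(1.0, a[0] * b[0] + a[1] * b[1]))  # clamp for acos
    return math.acos(dot)

def pose_held(samples, target_angle, tol=0.2, min_hold_s=1.0):
    """True if the hands subtend target_angle (within tol radians)
    continuously for at least min_hold_s seconds.

    samples: list of (timestamp, center, left_hand, right_hand) tuples.
    """
    start = None
    for t, c, lh, rh in samples:
        if abs(subtended_angle(c, lh, rh) - target_angle) <= tol:
            start = t if start is None else start
            if t - start >= min_hold_s:
                return True
        else:
            start = None  # relationship broken; restart the hold timer
    return False
```

Requiring the relationship to hold continuously over time is one simple way to reject transient arm movements that momentarily resemble a pose.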
- In example embodiments, the detection of the body pose may generate a system message, such as, but not limited to, alerts, actions or commands, by looking up an associated action in a database using the pose as an index; in some embodiments, the database may be loaded beforehand to define the actions specific to a particular mission.
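The database lookup using the pose as an index can be sketched as a simple table; the pose identifiers, message kinds, and texts below are purely hypothetical mission-specific entries of the sort that might be loaded beforehand:

```python
# Hypothetical mission-specific pose-to-action table, loaded before a mission.
POSE_ACTIONS = {
    "arms_raised":      ("alert",   "evacuate area"),
    "hands_on_head":    ("command", "hold position"),
    "right_arm_circle": ("action",  "request pickup"),
}

def message_for_pose(pose_id: str):
    """Look up the system message associated with a detected pose;
    unknown poses produce no message."""
    kind_text = POSE_ACTIONS.get(pose_id)
    if kind_text is None:
        return None
    kind, text = kind_text
    return {"type": kind, "text": text}
```

Swapping in a different table reconfigures the meaning of every pose without touching the detection code, which is what makes per-mission loading practical.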
- In some embodiments, the detection of the body pose does not generate a system action directly, allowing the HR system user to interpret the meaning of the pose. In some of these embodiments, the HR system renders an identifier on the display, for example an icon or text describing the detected pose. In other embodiments, the HR system may highlight the body parts used during a pose to aid pose recognition by the HR system user, such as, but not limited to, adding color, increasing brightness, increasing the size, or adding virtual elements (e.g. a beam).
- In some embodiments, the detection of a body pose may be performed by determining the positions and motions of targets attached to a signaler, such as, but not limited to, registration marks, infra-red sensitive paint, or low-power fixed transmitters. When a low-power transmitter is affixed to a moving body, a more limited sensor may be used in some HR embodiments. In some example embodiments the registration mark may be an unnatural shape not encountered in an environment, for example a cross, a bullseye, or a barcode. In some embodiments, the registration mark may be made using infra-red reflective paint, allowing the marks to be easily detected in a dark environment or an environment full of obscuring particulate matter or water vapor.
- The facility of some HR embodiments to use a body pose to relay a message as described herein creates the potential to combine the facility with other devices to bridge connectivity gaps in a network, either because of bandwidth overload, no connection or intermittent connection. For example, a lack of a network connection between two team members can be bridged by using simple body pose messages. In a further example, a team member can communicate a body pose to a sensor on a device connected to the network. Note that a network gap can be bridged at the start of a communication, at the end of a communication, or at any point or points in between. In some scenarios, a network gap can be bridged by sending a “do body pose” message to a connected team member, so creating a connection to a device not connected to the previous network.
-
FIG. 1A shows a scene on the display of an embodiment of a head-mounted display showing a body 100 in a first situation. In some embodiments, an HR system may detect the presence of the body 100 and the presence of the hands at key positions. -
FIG. 1B shows a scene on the display of an embodiment of a head-mounted display showing the body 100 in a second situation. In some embodiments, an HR system may detect the presence of the body 100 and the presence of the hands at key positions. Note that the second pose detected at the time of FIG. 1B is different from the first pose depicted at the time of FIG. 1A. -
FIG. 1C shows a scene on the display of an embodiment of a head-mounted display showing the body 100 in a third situation. In some embodiments, an HR system may detect the presence of the body 100 and the presence of the hands at key positions. As depicted in FIG. 1C, key position 164 is not static, but rotates in a clockwise direction as indicated by arrow 190. Note that the third pose detected at the time of FIG. 1C is interpreted differently from the first pose depicted at the time of FIG. 1A because of the motion 190, even though the triangle formed by the key points is the same as the triangle depicted at the time of FIG. 1A. -
FIG. 1D shows a scene on the display of an embodiment of a head-mounted display showing the body 100 in a fourth situation. In some embodiments, an HR system may detect the presence of the body 100 and the presence of the hands at key positions. As depicted in FIG. 1D, key position 184 is held for a fixed time as indicated by stopwatch 192. Note that the fourth pose detected at the time of FIG. 1D is interpreted differently from the second pose depicted at the time of FIG. 1B even though the triangle formed by the key points is the same as the triangle depicted at the time of FIG. 1B, because key position 184 is held for the specific time 192. In some embodiments, the time 192 may be a maximum time or a minimum time. -
FIG. 2A shows a scene on the display of an embodiment of a head-mounted display showing a body 200 in a first situation. In some embodiments, targets 222A, 224A, 226A corresponding to a left hand, a right hand and a torso position are positioned on the body and visible to at least one sensor. An HR system may detect the presence of the targets 222A, 224A, 226A and determine the positions of the targets. -
FIG. 2B shows a scene on the display of an embodiment of a head-mounted display showing the body 200 in a second situation. In some embodiments, targets 222B, 224B, 226B corresponding to the left hand, the right hand and the torso position are positioned on the body and visible to at least one sensor. An HR system may detect the presence of the targets and determine their positions. Note that the second pose detected at the time of FIG. 2B is different from the first pose depicted at the time of FIG. 2A. -
FIG. 2C shows a scene on the display of an embodiment of a head-mounted display showing a body 200 in a third situation. In some embodiments, targets 222C, 224C, 226C corresponding to the left hand, the right hand and the torso position are positioned on the body and visible to at least one sensor. An HR system may detect the presence of the targets and determine their positions. As depicted in FIG. 2C, key position 264 is not static, but rotates in a clockwise direction as indicated by arrow 290. Note that the third pose detected at the time of FIG. 2C is interpreted differently from the first pose depicted at the time of FIG. 2A because of the motion 290, even though the triangle formed by the key points associated with the targets is the same as the triangle depicted at the time of FIG. 2A. -
FIG. 2D shows a scene on the display of an embodiment of a head-mounted display showing a body 200 in a fourth situation. In some embodiments, targets 222D, 224D, 226D corresponding to the left hand, the right hand and the torso position are positioned on the body and visible to at least one sensor. An HR system may detect the presence of the targets. As depicted in FIG. 2D, key position 284 is held for a fixed time as indicated by stopwatch 292. Note that the pose detected at the time of FIG. 2D is interpreted differently from the pose depicted at the time of FIG. 2B even though the triangle formed by the key points associated with the targets is the same as the triangle depicted at the time of FIG. 2B, because key position 284 is held for the specific time 292. In some embodiments, the time 292 may be a maximum time or a minimum time. - Please note that a first target on the left hand is shown using
reference 222A in FIG. 2A, 222B in FIG. 2B, 222C in FIG. 2C, and 222D in FIG. 2D to show the different positions of the first target. A second target on the right hand is shown using 224A in FIG. 2A, 224B in FIG. 2B, 224C in FIG. 2C, and 224D in FIG. 2D to show the different positions of the second target, and a third target on the torso is shown using 226A in FIG. 2A, 226B in FIG. 2B, 226C in FIG. 2C, and 226D in FIG. 2D to show the different positions of the third target. - In some HR embodiments, the body poses may be constructed so that only two of three key positions are required to uniquely identify some poses, so creating some redundancy and thus error tolerance. In some example HR embodiments, a sequence of body poses that combine to create a message may be constructed so that reception errors may be detected and/or corrected.
-
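The two-of-three redundancy described above can be checked mechanically: a pose codebook has the property if no two poses share a pair of (quantized) key positions, so any single missed key position still leaves an unambiguous pair. The codebook contents below are hypothetical:

```python
from itertools import combinations

def two_of_three_unique(codebook):
    """Verify that every pose in the codebook is uniquely identified
    by any two of its three (quantized) key-position codes, giving
    one key position of error tolerance.

    codebook: dict mapping pose name -> tuple of three hashable codes.
    """
    seen = {}
    for name, keys in codebook.items():
        for pair in combinations(sorted(keys), 2):
            if pair in seen and seen[pair] != name:
                return False  # two different poses share a key-position pair
            seen[pair] = name
    return True
```

A pose vocabulary could be validated with this check once, before a mission, so that the runtime detector may safely report a pose from any two confidently detected key positions.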
FIG. 3 shows a scenario where partially connected networks are present. At the start of the vignette of FIG. 3, there are two partially connected networks: the first network comprising wireless connections and wired connection 350; and the second network comprising wireless connection 344 and wired connection 352. The connected devices in the first network include the HR system worn by a first team member 302, the HR system worn by a second team member 304, a first network device 310, and a second network device 312. The connected devices in the second network include a camera 320 and a third network device 314; note that the third network device 314 may provide a connection to the external network via wireless link 344. At the start of the vignette of FIG. 3, the HR system worn by team member 300 is not connected to either the first or the second network, and the first network and second network are not connected. - To relay a message from the HR system worn by a
third team member 300 to the outside world, two gaps in the network must be bridged. First, a message 330 associated with a body pose signal may be relayed from the third team member 300 to the HR system worn by the first team member 302, who is proximal. The message may then be routed through the first network to the HR system worn by the second team member 304, for example instructing the second team member 304 to repeat a pose. The pose made by the second team member 304 may create an associated message 332 relayed to the proximal camera 320. Finally, the message is routed to the external world via the second network. -
FIG. 4 is a block diagram of an embodiment of an HR system 400 which may have some components implemented as part of a head-mounted assembly. The HR system 400 may be considered a computer system that can be adapted to be worn on the head, carried by hand, or otherwise attached to a user. In the embodiment of the HR system 400 shown, a structure 405 is included which is adapted to be worn on the head of a user. The structure 405 may include straps, a helmet, a hat, or any other type of mechanism to hold the HR system on the head of the user as an HMD. - The
HR system 400 also includes a display 450. The structure 405 may position the display 450 in a field of view (FOV) of the user. In some embodiments, the display 450 may be a stereoscopic display with two separate views of the FOV, such as view 452 for the user's left eye, and view 454 for the user's right eye. The two views 452, 454 may be shown on the display 450. In some embodiments, the display 450 may be transparent, such as in an augmented reality (AR) HMD. In systems where the display 450 is transparent, the view of the FOV of the real-world as seen through the display 450 by the user is composited with virtual objects that are shown on the display 450. The virtual objects may occlude real objects in the FOV as overlay elements and may themselves be transparent or opaque, depending on the technology used for the display 450 and the rendering of the virtual object. A virtual object, such as an overlay element, may be positioned in a virtual space, which could be two-dimensional or three-dimensional, depending on the embodiment, to be in the same position as an associated real object in real space. Note that if the display 450 is a stereoscopic display, two different views of the overlay element may be rendered and shown in two different relative positions on the two views 452, 454. - In some embodiments, the
HR system 400 includes one or more sensors in a sensing block 440 to sense at least a portion of the FOV of the user by gathering the appropriate information for that sensor, for example visible light from a visible light camera, from the FOV of the user. Any number of any type of sensor, including sensors described previously herein, may be included in the sensing block 440, depending on the embodiment. - The
HR system 400 may also include an I/O block 420 to allow communication with external devices. The I/O block 420 may include one or both of a wireless network adapter 422 coupled to an antenna 424 and a network adapter 426 coupled to a wired connection 428. The wired connection 428 may be plugged into a portable device, for example a mobile phone, or may be a component of an umbilical system such as used in extreme environments. - In some embodiments, the
HR system 400 includes a sound processor 460 which takes input from one or more microphones 462. In some HR systems 400, the microphones 462 may be attached to the user. External microphones, for example attached to an autonomous drone, may send sound data samples through wireless or wired connections to I/O block 420 instead of, or in addition to, the sound data received from the microphones 462. The sound processor 460 may generate sound data which is transferred to one or more speakers 464, which are a type of sound reproduction device. The generated sound data may be analog samples or digital values. If more than one speaker 464 is used, the sound processor may generate or simulate 2D or 3D sound placement. In some HR systems 400, a first speaker may be positioned to provide sound to the left ear of the user and a second speaker may be positioned to provide sound to the right ear of the user. Together, the first speaker and the second speaker may provide binaural sound to the user. - In some embodiments, the
HR system 400 includes a stimulus block 470. The stimulus block 470 is used to provide other stimuli to expand the HR system user experience. Embodiments may include numerous haptic pads attached to the user that provide a touch stimulus. Embodiments may also include other stimuli, such as, but not limited to, changing the temperature of a glove, changing the moisture level or breathability of a suit, or adding smells to a breathing system. - The
HR system 400 may include a processor 410 and one or more memory devices 430, which may also be referred to as a tangible medium or a computer readable medium. The processor 410 is coupled to the display 450, the sensing block 440, the memory 430, the I/O block 420, the sound processor 460, and the stimulus block 470, and is configured to execute the instructions 432 encoded on (i.e. stored in) the memory 430. Thus, the HR system 400 may include an article of manufacture comprising a tangible medium 430, that is not a transitory propagating signal, encoding computer-readable instructions 432 that, when applied to a computer system 400, instruct the computer system 400 to perform one or more methods described herein, thereby configuring the processor 410. - While the
processor 410 included in the HR system 400 may be able to perform methods described herein autonomously, in some embodiments, processing facilities outside of those provided by the processor 410 included inside of the HR system 400 may be used to perform one or more elements of methods described herein. In one non-limiting example, the processor 410 may receive information from one or more of the sensors 440 and send that information through the wireless network adapter 422 to an external processor, such as a cloud processing system or an external server. The external processor may then process the sensor information to identify a pose by another individual and then send information about the pose to the processor 410 through the wireless network adapter 422. The processor 410 may then use that information to initiate an action, an alarm, or a notification of the user, or provide any other response to that information. - In some embodiments, the
instructions 432 may instruct the HR system 400 to interpret a body-pose message. The instructions 432 may instruct the HR system 400 to receive sensor data either transmitted through the wireless network adapter 422 or received from the sensing block 440, for example from a camera. The instructions 432 may instruct the HR system 400 to detect a first body in the sensor data and determine one or more body parts, for example hands or arms, using object recognition. The instructions 432 may further instruct the HR system 400 to compute a first point associated with the body part at a first time and, sometime later, receive updated sensor data and compute a second point associated with the body part. The instructions 432 may instruct the HR system 400 to use the first and second points to determine an apparent body pose, which may be looked up in a table to generate a message. The instructions 432 may instruct the HR system 400 to transmit the message, for example by presenting an alert on the display 450. - In some embodiments, the
instructions 432 may instruct the HR system 400 to interpret a body-pose message. The instructions 432 may instruct the HR system 400 to receive sensor data either transmitted through the wireless network adapter 422 or received from the sensing block 440, for example from an infra-red camera. The instructions 432 may instruct the HR system 400 to detect a first shape in the sensor data, for example a cross, using object detection. The instructions 432 may further instruct the HR system 400 to compute a first point associated with the shape at a first time and, sometime later, receive updated sensor data and compute a second point associated with the new shape position. The instructions 432 may instruct the HR system 400 to use the first and second points to determine an apparent body pose, which may be looked up in a table to generate a message. The instructions 432 may instruct the HR system 400 to transmit the message, for example by presenting text on the display 450. - Aspects of various embodiments are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, systems, and computer program products according to various embodiments disclosed herein. It will be understood that various blocks of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
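The two-point body-pose interpretation described above (sample a point associated with a body part at two times, associate an apparent pose, and look the pose up in a table to generate a message) might be sketched as follows. The pose names, messages, and threshold are illustrative assumptions, not values from this specification, and the sketch assumes 2D coordinates in which y increases upward:

```python
# Hypothetical lookup table mapping an apparent body pose to a message.
POSE_MESSAGES = {
    "arm_raised": "Individual requests attention",
    "arm_lowered": "Individual signals all-clear",
    "arm_waving": "I need help!",
}

def classify_pose(first_point, second_point, wave_threshold=0.5):
    """Associate an apparent body pose from two sampled (x, y) points of
    the same body part (e.g. a hand) at a first and a later time."""
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    if abs(dx) > wave_threshold:        # large lateral motion: treat as waving
        return "arm_waving"
    return "arm_raised" if dy > 0 else "arm_lowered"

def interpret_body_pose_message(first_point, second_point):
    """Generate the message to transmit, e.g. as an alert on the display."""
    return POSE_MESSAGES.get(classify_pose(first_point, second_point))
```

A real system would derive the points from object recognition on the sensor data rather than receive them directly; the sketch only shows the pose-to-message lookup step.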
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The flowchart and/or block diagrams in the figures help to illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products of various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
-
FIG. 5 is a flowchart 500 of an embodiment of a method for communicating using body poses. This may include communication without using an electronic network link. One or more parts of this method are performed by an HR system which may include a head-mounted display (HMD) worn by a user. The method includes receiving data from a sensor 503. The sensor may be a still camera or a video camera sensitive to visible light, infrared light, or any other spectrum of light. The sensor may be an ultrasound sensor, a sonar sensor, or a radar sensor. Any type of sensor may be used, depending on the embodiment. - An individual is detected in the
sensor data 505. The individual is not the same person as the user of the HR system. Depending on the embodiment, the sensor may be remote from the user and the HR system, such as sensors fixedly mounted in the environment or a sensor mounted on a drone which may be controlled by the user of the HR system, the individual detected by the sensor, or by a third party. In some embodiments, the individual detected by the sensor may be outside of a field of view of the user and the HR system. In some embodiments, the sensor is conveyed by the user of the HR system and the individual may be within a field of view of the HR system user. The sensor may be integrated into the HR system, worn as a separate device by the user, or carried by the user. In some embodiments, the sensor has a 360° field of view. - A first situation of at least one body part of the individual in 3D space is ascertained 510 at a first time. In some embodiments, the ascertaining of the first situation of the at least one body part is performed by a computer remote from the HR system, but in other embodiments, the ascertaining is performed by a processor in the HR system. In embodiments, the at least one body part may include one, two, three, or more body parts. The body parts may be any part of the individual's body, including, but not limited to, a head, an arm, a hand, a torso, a leg, or a foot. In some embodiments, the at least one body part includes a first hand and at least one of a second hand, a head, or a torso.
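A situation of a body part, as the term is used in this method, can be represented with a minimal data model such as the following sketch; the class name, field names, and units are illustrative assumptions, not from this specification:

```python
from dataclasses import dataclass

@dataclass
class BodyPartSituation:
    """Position of one body part of the detected individual in 3D space
    at a given time. A fuller model might also carry orientation."""
    name: str   # e.g. "left_hand", "head", "torso"
    x: float    # position in 3D space (arbitrary illustrative units)
    y: float
    z: float
    t: float    # timestamp in seconds when the situation was ascertained

def situation_of(parts, name):
    """Select the situation of a named body part from a detection result."""
    for p in parts:
        if p.name == name:
            return p
    return None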
- The method continues by determining a
body pose 520 based on the first situation of the at least one body part. The situation of the at least one body part may include a position and/or orientation of one or more of the individual body parts included in the at least one body part. As one non-limiting example, a triangle may be created 524 with a first vertex based on a first position of a first body part, a second vertex based on a second position of a second body part, and a third vertex based on a third position of a third body part. The body pose may then be determined based on characteristics of the triangle. In other embodiments, the body pose may be determined by an angle of the individual's elbow if their upper arm is near horizontal, or a distance between the individual's feet while they have both hands raised above their head. In some embodiments, traditional hand signals, such as touching the top of one's head with the fingers pointed down to signify “OK” or waving both arms up and down to signify “I need help!”, may be recognized (i.e. determined). - In some embodiments, additional situations of the at least one body part may be ascertained 522 at later times, which may then be used with the first situation to determine the body pose. For example, the method may include ascertaining a second situation of the at least one body part in 3D space at a second time and determining the body pose based on both the first situation of the at least one body part and the second situation of the at least one body part. In some embodiments the body pose may be determined based on a motion between the first situation of the at least one body part and the second situation of the at least one body part. In another example embodiment, the body pose may be determined, at least in part, by calculating a velocity (distance divided by time) of a body part based on two situations of the body part. 
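The triangle construction described above can be sketched as follows; the characteristics computed (side lengths and area), the pose names, and the thresholds are illustrative assumptions:

```python
import math

def triangle_characteristics(p1, p2, p3):
    """Sorted side lengths and area of the triangle formed by three
    body-part positions, each given as an (x, y) tuple."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    sides = sorted([dist(p1, p2), dist(p2, p3), dist(p3, p1)])
    # Shoelace formula for the triangle's area.
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    return sides, area

def pose_from_triangle(head, left_hand, right_hand):
    """Classify a coarse body pose from the head/hand triangle."""
    sides, area = triangle_characteristics(head, left_hand, right_hand)
    if area < 0.05:                  # nearly collinear: hands in line with head
        return "arms_outstretched"
    if sides[2] / sides[0] > 3.0:    # one side much longer than the shortest
        return "asymmetric_pose"
    return "neutral"
```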
The velocity may be used in combination with a position at the start or end of a movement to determine the body pose. In some embodiments, a body pose is only determined after it is held for a predetermined interval, such as one second, two seconds, five seconds, ten seconds, or some other interval, depending on the embodiment. Thus, the determination of the body pose may include computing a first difference between the first situation of the at least one body part at a first time and the second situation of the at least one body part at a second time, with the body pose determined in response to the first difference being less than a threshold. In such embodiments the first time and the second time are separated by a predetermined interval, which may be at least 1 second long in some embodiments, and the threshold is the amount of movement allowed by the individual during the time that they are holding the body pose.
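The hold-for-an-interval determination described above might be sketched as follows; the default interval, the movement threshold, and the use of 2D positions are illustrative assumptions:

```python
def pose_held(first_situation, second_situation, interval_s,
              min_interval_s=1.0, movement_threshold=0.1):
    """Return True when a pose is considered held: the two situations are
    separated by at least min_interval_s and the body part moved less than
    movement_threshold between them (units are illustrative).

    Situations are (x, y) positions of the body part; a real system might
    compare full position-and-orientation per body part.
    """
    if interval_s < min_interval_s:
        return False                  # not held long enough yet
    dx = second_situation[0] - first_situation[0]
    dy = second_situation[1] - first_situation[1]
    difference = (dx * dx + dy * dy) ** 0.5
    return difference < movement_threshold
```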
- Some embodiments may include recognizing a
gesture 532. This may be accomplished by ascertaining a second situation of the at least one body part in 3D space at a second time and ascertaining a third situation of the at least one body part in 3D space at a third time. The gesture may then be recognized based on the body pose ascertained from the first situation of the at least one body part at a first time, the second situation of the at least one body part and the third situation of the at least one body part. - The method also includes deciding on an
action 530 based on the body pose. In embodiments where a gesture is recognized, the action may be based on the gesture. In some embodiments, the action may be decided on based on a sequence of two or more body poses. The body pose may be looked up 534 in a table of pre-determined body poses and/or body pose motions to determine the action in some embodiments. - Deciding on the action may include determining a message to deliver to the user. In at least one embodiment, the message may be provided as data during
table lookup 534. The table may be a pre-loaded local database, whose contents may be specific to a mission, a particular circumstance, a particular location, a type of individual (e.g. a trained soldier, a civilian, a first responder, or the like), or a specific individual. In some embodiments, the type of individual may also be determined using sensor data, such as by recognizing a uniform or some characteristic of the individual. The message may be an alert, a command, or an alarm. In some embodiments, the message may include a command to the user to replicate the body pose. This may be done to relay the message from the individual to another entity through the user using the HR system. This may be accomplished, for example, by presenting a virtual image of the relay body pose to the user on a display of the HR system. - The method includes performing the
action 540 on an HR system used by a user different than the individual. This may include sending 542 a message to a computer remote from the HR system, for example through a wireless or wired network connection. In many embodiments, a message may be sent 542 to the user of the HR system as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, a smell, or by any other way of communicating to the user. In at least one embodiment a virtual image may be displayed associated with the at least one body part on the display of the HR system. Non-limiting examples of this include a stopwatch as shown in FIG. 1D, an icon showing the type of individual, or an indicator of the urgency of the message. - In some embodiments, the action may include sending a
confirmation 544 to the individual that the body pose was determined. The confirmation may be sent in a way that is understandable by the individual using the individual's unaided natural senses, as the individual may not have an electronic device, binoculars, or another device to aid them in receiving the confirmation. The confirmation may be sent to the individual by use of a bright green light, a loud sound, or a body pose performed by the user of the HR system in response to a prompt. Thus, sending the confirmation may include presenting a prompt to the user to perform a response body pose. -
FIG. 6 is a flowchart 600 of an embodiment of a method for communicating using targets.
- This may include communication without using an electronic network link. One or more parts of this method are performed by an HR system which may include a head-mounted display (HMD) worn by a user. The method includes receiving data from a
sensor 603. The sensor may be a still camera or a video camera sensitive to visible light, infrared light, or any other spectrum of light. The sensor may be a time-of-flight camera or some other type of depth-sensing camera or sensor. The sensor may be an ultrasound sensor, a sonar sensor, or a radar sensor. Any type of sensor may be used, depending on the embodiment. - At least one target is detected in the
sensor data 605. A situation of the at least one target is controlled by an individual who is not the same person as the user of the HR system. Depending on the embodiment, the sensor may be remote from the user and the HR system, such as sensors fixedly mounted in the environment or a sensor mounted on a drone which may be controlled by the user of the HR system, the individual, or by a third party. In some embodiments, the at least one target detected by the sensor may be outside of a field of view of the HR system. In some embodiments, the sensor is conveyed by the user of the HR system and the at least one target may be within a field of view of the HR system user. The sensor may be integrated into the HR system, worn as a separate device by the user, or carried by the user. In some embodiments, the sensor has a 360° field of view. - A target may be any recognizable marking, device, or object. The target may have a shape that is recognizable, a specific color pattern that is recognizable, or other recognizable properties such as reflectivity of a particular spectrum of electromagnetic radiation, such as infrared light. Multiple characteristics may be used together to recognize a target, such as a specific shape with a particular color or color pattern. A target may be a passive device detected by impinging electromagnetic signals, or it may be an active device that emits electromagnetic radiation such as radio-frequency messages or light. In some embodiments, one or more targets may be printed on or woven into an article or articles of clothing worn by the individual, such as a jacket or a hat. A target can also be a stand-alone object that is affixed to the individual by using straps, pins, clips, adhesive, or any other mechanism.
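Detection of a color-based target in the sensor data might be sketched as follows, assuming the data arrives as rows of RGB pixel tuples; the color predicate and the centroid approach are illustrative assumptions, and a real system could also match shape, pattern, or infrared reflectivity as described above:

```python
def detect_target(pixels, is_target_color):
    """Find the centroid (x, y) of pixels matching a target color in an
    image given as a list of rows of (r, g, b) tuples, or None if the
    target is not present in the sensor data."""
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            if is_target_color(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example predicate: a predominantly red target marker (illustrative bounds).
def is_red(r, g, b):
    return r > 200 and g < 80 and b < 80
```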
- A first situation of at least one target in 3D space is ascertained 610 at a first time. In some embodiments, the ascertaining of the first situation of the at least one target is performed by a computer remote from the HR system, but in other embodiments, the ascertaining is performed by a processor in the HR system. In embodiments, the at least one target may include one, two, three, or more targets. In at least one embodiment, each target of the at least one target is positioned on a respective body part of the individual. In some embodiments a first target of the at least one target is positioned on a body part of the individual, such as a head, an arm, a hand, a torso, a leg, or a foot, and a second target of the at least one target is positioned on an object separate from the individual. In at least one embodiment, the at least one target includes a first target positioned on a first hand of the individual and a second target positioned on a second hand of the individual, a head of the individual, or a torso of the individual.
- The method continues by determining a
target configuration 620 based on the first situation of the at least one target. The situation of the at least one target may include a position and/or orientation of one or more of the individual targets included in the at least one target. As one non-limiting example, a triangle may be created 624 with a first vertex based on a first position of a first target, a second vertex based on a second position of a second target, and a third vertex based on a third position of a third target. The target configuration may then be determined based on characteristics of the triangle. - In some embodiments, additional situations of the at least one target may be ascertained 622 at later times, which may then be used with the first situation to determine the target configuration. For example, the method may include ascertaining a second situation of the at least one target in 3D space at a second time and determining the target configuration based on both the first situation of the at least one target and the second situation of the at least one target. In some embodiments the target configuration may be determined based on a motion between the first situation of the at least one target and the second situation of the at least one target. In another example embodiment, the target configuration may be determined, at least in part, by calculating a velocity (distance divided by time) of a target based on two situations of the target. The velocity may be used in combination with a position at the start or end of a movement to determine the target configuration. In some embodiments, a target configuration is only determined after it is held for a predetermined interval, such as one second, two seconds, five seconds, ten seconds, or some other interval, depending on the embodiment. 
Thus, the determination of the target configuration may include computing a first difference between the first situation of the at least one target at a first time and the second situation of the at least one target at a second time, with the target configuration determined in response to the first difference being less than a threshold. In such embodiments the first time and the second time are separated by a predetermined interval, which may be at least 1 second long in some embodiments, and the threshold is the amount of movement allowed for the target during the time that the individual is holding the target configuration, as the individual may not be able to hold perfectly still.
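The velocity-based determination of a target configuration described above might be sketched as follows; the configuration names, units, and speed threshold are illustrative assumptions:

```python
def target_velocity(first_pos, second_pos, first_time_s, second_time_s):
    """Velocity (distance divided by time) of a target between two
    situations; positions are (x, y) tuples and times are in seconds."""
    dt = second_time_s - first_time_s
    if dt <= 0:
        raise ValueError("second situation must be later than the first")
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    return ((dx * dx + dy * dy) ** 0.5) / dt

def configuration_from_motion(first_pos, second_pos, first_time_s,
                              second_time_s, fast_threshold=1.0):
    """Classify a target configuration from the speed and the end position
    of the movement (assumes y increases upward)."""
    speed = target_velocity(first_pos, second_pos, first_time_s, second_time_s)
    if speed > fast_threshold:
        return "target_swept"        # fast motion, e.g. the target being waved
    return "target_raised" if second_pos[1] > first_pos[1] else "target_held"
```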
- Some embodiments may include recognizing a
gesture 632. This may be accomplished by ascertaining a second situation of the at least one target in 3D space at a second time and ascertaining a third situation of the at least one target in 3D space at a third time. The gesture may then be recognized based on the target configuration ascertained from the first situation of the at least one target at a first time, the second situation of the at least one target and the third situation of the at least one target. - The method also includes deciding on an
action 630 based on the target configuration. In embodiments where a gesture is recognized, the action may be based on the gesture. In some embodiments, the action may be decided on based on a sequence of two or more target configurations. The target configuration may be looked up 634 in a table of pre-determined target configurations and/or target motions to determine the action in some embodiments. - Deciding on the action may include determining a message to deliver to the user. In at least one embodiment, the message may be provided as data during
table lookup 634. The table may be a pre-loaded local database whose contents may be specific to a mission, a particular circumstance, a particular location, a type of individual (e.g. a trained soldier, a civilian, a first responder, or the like) controlling the at least one target, or a specific individual controlling the at least one target. In some embodiments, the type of individual may also be determined using sensor data, such as by recognizing a uniform or some characteristic of the individual. The message may be an alert, a command, or an alarm. In some embodiments, the message may include a command to the user to perform a body pose consistent with the target configuration. This may be done to relay the message from the individual to another entity through the user of the HR system. This may be accomplished by presenting a virtual image of the body pose consistent with the target configuration to the user on a display of the HR system. - The method includes performing the
action 640 on an HR system used by a user different than the individual. This may include sending 642 a message to a computer remote from the HR system, for example through a wireless or wired network connection. In many embodiments, a message may be sent 642 to the user of the HR system as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, a smell, or by any other way of communicating to the user. In at least one embodiment a virtual image may be displayed associated with the at least one target on the display. Non-limiting examples of this include a stopwatch as shown in FIG. 2D, an icon showing the type of individual, or an indicator of the urgency of the message. - In some embodiments, the action may include sending a
confirmation 644 to the individual controlling the at least one target that the target configuration was determined. The confirmation may be sent in a way that is understandable by the individual using the individual's unaided natural senses, as the individual may not have an electronic device, binoculars, or another device to aid them in receiving the confirmation. The confirmation may be sent to the individual by use of a bright green light, a loud sound, or a body pose performed by the user of the HR system in response to a prompt. Thus, sending the confirmation may include presenting a prompt to the user to perform a response body pose. - As will be appreciated by those of ordinary skill in the art, aspects of the various embodiments may be embodied as a system, device, method, or computer program product apparatus. Accordingly, elements of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, or the like) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “server,” “circuit,” “module,” “client,” “computer,” “logic,” or “system,” or other terms. Furthermore, aspects of the various embodiments may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer program code stored thereon.
- Any combination of one or more computer-readable storage medium(s) may be utilized. A computer-readable storage medium may be embodied as, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or other like storage devices known to those of ordinary skill in the art, or any suitable combination of computer-readable storage mediums described herein. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain, or store a program and/or data for use by or in connection with an instruction execution system, apparatus, or device. Even if the data in the computer-readable storage medium requires action to maintain the storage of data, such as in a traditional semiconductor-based dynamic random access memory, the data storage in a computer-readable storage medium can be considered to be non-transitory. A computer data transmission medium, such as a transmission line, a coaxial cable, a radio-frequency carrier, and the like, may also be able to store data, although any data storage in a data transmission medium can be said to be transitory storage. Nonetheless, a computer-readable storage medium, as the term is used herein, does not include a computer data transmission medium.
- Computer program code for carrying out operations for aspects of various embodiments may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Python, C++, or the like, conventional procedural programming languages, such as the “C” programming language or similar programming languages, or low-level computer languages, such as assembly language or microcode. The computer program code if loaded onto a computer, or other programmable apparatus, produces a computer implemented method. The instructions which execute on the computer or other programmable apparatus may provide the mechanism for implementing some or all of the functions/acts specified in the flowchart and/or block diagram block or blocks. In accordance with various implementations, the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server, such as a cloud-based server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). The computer program code stored in/on (i.e. embodied therewith) the non-transitory computer-readable medium produces an article of manufacture.
- The computer program code, if executed by a processor, causes physical changes in the electronic devices of the processor which change the physical flow of electrons through the devices. This alters the connections between devices, which changes the functionality of the circuit. For example, if two transistors in a processor are wired to perform a multiplexing operation under control of the computer program code, if a first computer instruction is executed, electrons from a first source flow through the first transistor to a destination, but if a different computer instruction is executed, electrons from the first source are blocked from reaching the destination, but electrons from a second source are allowed to flow through the second transistor to the destination. So a processor programmed to perform a task is transformed from what the processor was before being programmed to perform that task, much like a physical plumbing system with different valves can be controlled to change the physical flow of a fluid.
- Unless otherwise indicated, all numbers expressing quantities, properties, measurements, and so forth, used in the specification and claims are to be understood as being modified in all instances by the term “about.” The recitation of numerical ranges by endpoints includes all numbers subsumed within that range, including the endpoints (e.g. 1 to 5 includes 1, 2.78, π, 3.33, 4, and 5). - As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. Furthermore, as used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. As used herein, the term “coupled” includes direct and indirect connections. Moreover, where first and second devices are coupled, intervening devices including active devices may be located there between.
- Various embodiments include the following:
- Embodiment 1. A method comprising: receiving data from a sensor; detecting an individual in the sensor data; ascertaining a first situation of at least one body part of the individual in 3D space at a first time; determining a body pose based on the first situation of the at least one body part; deciding on an action based on the body pose; and performing the action on a hybrid reality (HR) system used by a user different than the individual.
- Embodiment 2. The method of embodiment 1, the HR system comprising a head-mounted display.
- Embodiment 3. The method of embodiment 1, wherein said ascertaining the first situation of the at least one body part is performed by a computer remote from the HR system.
- Embodiment 4. The method of embodiment 1, the at least one body part comprising a head, an arm, a hand, a torso, a leg, or a foot.
- Embodiment 5. The method of embodiment 1, the at least one body part comprising a first hand and at least one of a second hand, a head, or a torso.
- Embodiment 6. The method of embodiment 1, the at least one body part comprising a first body part, a second body part, and a third body part; the method further comprising: creating a triangle with a first vertex based on a first position of the first body part, a second vertex based on a second position of the second body part, and a third vertex based on a third position of the third body part; and determining the body pose based on characteristics of the triangle.
- Embodiment 7. The method of embodiment 1, the action comprising sending a confirmation to the individual that the body pose was determined.
- Embodiment 8. The method of embodiment 7, the confirmation sent in a way to be understandable by the individual using the individual's unaided natural senses.
- Embodiment 9. The method of embodiment 7, said sending the confirmation comprising presenting a prompt to the user to perform a response body pose.
- Embodiment 10. The method of embodiment 1, said deciding on the action comprising selecting a particular action from a set of predetermined actions based on the body pose.
- Embodiment 11. The method of embodiment 1, said deciding on the action comprising determining a message to deliver to the user based on the body pose.
- Embodiment 12. The method of embodiment 11, wherein the message is an alert, a command, or an alarm.
- Embodiment 13. The method of embodiment 11, wherein the message is delivered to the user as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, or a smell.
- Embodiment 14. The method of embodiment 11, the message comprising a command to the user to replicate the body pose.
- Embodiment 15. The method of embodiment 14, further comprising presenting a virtual image of the body pose to the user on a display of the HR system.
- Embodiment 16. The method of embodiment 1, wherein the sensor is remote from the user and the HR system.
- Embodiment 17. The method of embodiment 16, wherein the sensor is mounted on a drone controlled by the user.
- Embodiment 18. The method of embodiment 16, wherein the individual is outside of a field of view of the HR system.
- Embodiment 19. The method of embodiment 1, the action comprising displaying a virtual image associated with the at least one body part on a display of the HR system.
- Embodiment 20. The method of embodiment 1, wherein the sensor is conveyed by the user.
- Embodiment 21. The method of embodiment 20, wherein the sensor is integrated into the HR system.
- Embodiment 22. The method of embodiment 20, wherein the sensor has a 360° field of view.
- Embodiment 23. The method of embodiment 20, wherein the individual is within a field of view of the user.
- Embodiment 24. The method of embodiment 1, the action comprising sending a message to a computer remote from the HR system, wherein the message is based on the body pose.
- Embodiment 25. The method of embodiment 1, further comprising: ascertaining a second situation of the at least one body part in 3D space at a second time; determining the body pose based on both the first situation of the at least one body part and the second situation of the at least one body part.
- Embodiment 26. The method of embodiment 25, further comprising determining the body pose based on a motion between the first situation of the at least one body part and the second situation of the at least one body part.
- Embodiment 27. The method of embodiment 25, further comprising: computing a first difference between the first situation of the at least one body part and the second situation of the at least one body part; and determining the body pose in response to the first difference being less than a threshold; wherein the first time and the second time are separated by a predetermined interval.
- Embodiment 28. The method of embodiment 27, the predetermined interval being at least 1 second long.
- Embodiment 29. The method of embodiment 1, further comprising: ascertaining a second situation of the at least one body part in 3D space at a second time; ascertaining a third situation of the at least one body part in 3D space at a third time; recognizing a gesture based on the body pose, the second situation of the at least one body part and the third situation of the at least one body part; and deciding on the action based on the gesture.
- Embodiment 30. The method of embodiment 1, further comprising: ascertaining a second situation of the at least one body part in 3D space at a second time; determining a different body pose based on the second situation of the at least one body part; and deciding on the action based on a sequence of the body pose followed by the different body pose.
- Embodiment 31. An article of manufacture comprising a tangible medium, that is not a transitory propagating signal, encoding computer-readable instructions that, when applied to a computer system, instruct the computer system to perform a method comprising: receiving data from a sensor; detecting an individual in the sensor data; ascertaining a first situation of at least one body part of the individual in 3D space at a first time; determining a body pose based on the first situation of the at least one body part; deciding on an action based on the body pose; and performing the action on a hybrid reality (HR) system worn by a user different than the individual.
- Embodiment 32. A head-mounted display (HMD) comprising: a display; a structure, coupled to the display and adapted to position the display in a field-of-view (FOV) of a user; and a processor, coupled to the display, the processor configured to: receive data from a sensor; detect an individual not wearing the HMD in the sensor data; ascertain a first situation of at least one body part of the individual in 3D space at a first time; determine a body pose based on the first situation of the at least one body part; decide on an action based on the body pose; and perform the action.
- Embodiment 33. A method comprising: receiving data from a sensor; detecting at least one target in the sensor data, a situation of the at least one target controlled by an individual; ascertaining a first situation of the at least one target in 3D space at a first time; determining a target configuration based on the first situation of the at least one target; deciding on an action based on the target configuration; and performing the action on a hybrid reality (HR) system used by a user different than the individual.
- Embodiment 34. The method of embodiment 33, the HR system comprising a head-mounted display.
- Embodiment 35. The method of embodiment 33, wherein said ascertaining the first situation of the at least one target is performed by a computer remote from the HR system.
- Embodiment 36. The method of embodiment 33, wherein each target of the at least one target is positioned on the individual.
- Embodiment 37. The method of embodiment 33, wherein a first target of the at least one target is positioned on a body part of the individual.
- Embodiment 38. The method of embodiment 37, the body part being a head, an arm, a hand, a torso, a leg, or a foot.
- Embodiment 39. The method of embodiment 33, wherein a second target of the at least one target is positioned on an object separate from the individual.
- Embodiment 40. The method of embodiment 33, the at least one target comprising a first target positioned on a first hand of the individual and a second target positioned on a second hand of the individual, a head of the individual, or a torso of the individual.
- Embodiment 41. The method of embodiment 33, the at least one target comprising a first target, a second target, and a third target; the method further comprising: creating a triangle with a first vertex based on a first position of the first target, a second vertex based on a second position of the second target, and a third vertex based on a third position of the third target; and determining the target configuration based on characteristics of the triangle.
- Embodiment 42. The method of embodiment 33, the action comprising sending a confirmation to the individual that the target configuration was determined.
- Embodiment 43. The method of embodiment 42, the confirmation sent in a way to be understandable by the individual using the individual's unaided natural senses.
- Embodiment 44. The method of embodiment 42, said sending the confirmation comprising presenting a prompt to the user to perform a response body pose.
- Embodiment 45. The method of embodiment 33, said deciding on the action comprising selecting a particular action from a set of predetermined actions based on the target configuration.
- Embodiment 46. The method of embodiment 33, said deciding on the action comprising determining a message to deliver to the user based on the target configuration.
- Embodiment 47. The method of embodiment 46, wherein the message is one of an alert, a command, or an alarm.
- Embodiment 48. The method of embodiment 46, wherein the message is delivered to the user as text on a display of the HR system, an icon on the display, an animated sequence on the display, video images on the display, a still image on the display, audible speech, sound, a haptic indication, or a smell.
- Embodiment 49. The method of embodiment 46, the message comprising a command to the user to perform a body pose consistent with the target configuration.
- Embodiment 50. The method of embodiment 49, further comprising presenting a virtual image of the body pose to the user on a display of the HR system.
- Embodiment 51. The method of embodiment 33, wherein the sensor is remote from the user and the HR system.
- Embodiment 52. The method of embodiment 51, wherein the sensor is mounted on a drone controlled by the user.
- Embodiment 53. The method of embodiment 51, wherein the at least one target is outside of a field of view of the HR system.
- Embodiment 54. The method of embodiment 33, the action comprising displaying a virtual image associated with the at least one target on a display of the HR system.
- Embodiment 55. The method of embodiment 33, wherein the sensor is conveyed by the user.
- Embodiment 56. The method of embodiment 55, wherein the sensor is integrated into the HR system.
- Embodiment 57. The method of embodiment 55, wherein the sensor has a 360° field of view.
- Embodiment 58. The method of embodiment 55, wherein the at least one target is within a field of view of the user.
- Embodiment 59. The method of embodiment 33, the action comprising sending a message to a computer remote from the HR system, wherein the message is based on the target configuration.
- Embodiment 60. The method of embodiment 33, further comprising: ascertaining a second situation of the at least one target in 3D space at a second time; determining the target configuration based on both the first situation of the at least one target and the second situation of the at least one target.
- Embodiment 61. The method of embodiment 60, further comprising determining the target configuration based on a motion between the first situation of the at least one target and the second situation of the at least one target.
- Embodiment 62. The method of embodiment 60, further comprising: computing a first difference between the first situation of the at least one target and the second situation of the at least one target; and determining the target configuration in response to the first difference being less than a threshold; wherein the first time and the second time are separated by a predetermined interval.
- Embodiment 63. The method of embodiment 62, the predetermined interval being at least 1 second long.
- Embodiment 64. The method of embodiment 33, further comprising: ascertaining a second situation of the at least one target in 3D space at a second time; ascertaining a third situation of the at least one target in 3D space at a third time; recognizing a gesture based on the target configuration, the second situation of the at least one target and the third situation of the at least one target; and deciding on the action based on the gesture.
- Embodiment 65. The method of embodiment 33, further comprising: ascertaining a second situation of the at least one target in 3D space at a second time; determining a different target configuration based on the second situation of the at least one target; and deciding on the action based on a sequence of the target configuration followed by the different target configuration.
- Embodiment 66. An article of manufacture comprising a tangible medium, that is not a transitory propagating signal, encoding computer-readable instructions that, when applied to a computer system, instruct the computer system to perform a method comprising: receiving data from a sensor; detecting at least one target in the sensor data, a situation of the at least one target controlled by an individual; ascertaining a first situation of the at least one target in 3D space at a first time; determining a target configuration based on the first situation of the at least one target; deciding on an action based on the target configuration; and performing the action on a hybrid reality (HR) system worn by a user different than the individual.
- Embodiment 67. A head-mounted display (HMD) comprising: a display; a structure, coupled to the display and adapted to position the display in a field-of-view (FOV) of a user; and a processor, coupled to the display, the processor configured to: receive data from a sensor; detect at least one target in the sensor data, a situation of the at least one target controlled by an individual not wearing the HMD; ascertain a first situation of the at least one target in 3D space at a first time; determine a target configuration based on the first situation of the at least one target; decide on an action based on the target configuration; and perform the action.
- The description of the various embodiments provided above is illustrative in nature and is not intended to limit this disclosure, its application, or uses. Thus, variations beyond those described herein are intended to be within the scope of the embodiments. Such variations are not to be regarded as a departure from the intended scope of this disclosure. As such, the breadth and scope of the present disclosure should not be limited by the above-described exemplary embodiments, but should be defined only in accordance with the following claims and equivalents thereof.
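Embodiment 6 (and its target counterpart, embodiment 41) determines a body pose from characteristics of the triangle spanned by three tracked points. The following is a minimal sketch of that idea; the pose names, the choice of head and hands as the three vertices, and the distance thresholds are illustrative assumptions, not values taken from the specification.

```python
import math

def classify_pose(head, left_hand, right_hand):
    """Classify a body pose from the triangle spanned by three tracked
    3D points, per embodiment 6. Pose labels and thresholds here are
    hypothetical examples, not from the specification."""
    # Side lengths of the triangle formed by the three vertices.
    sides = sorted([
        math.dist(head, left_hand),
        math.dist(head, right_hand),
        math.dist(left_hand, right_hand),
    ])
    longest = sides[-1]  # for these vertices, roughly the hand-to-hand spread
    if longest > 1.5:
        return "arms-spread"
    if longest < 0.5:
        return "hands-together"
    return "neutral"
```

Any triangle characteristic could serve as the classification feature (side lengths, angles, area, orientation); the sketch uses only the longest side for brevity.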
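Embodiments 27 and 28 (and 62-63 for targets) register a pose only when the tracked body part stays nearly still across two samples separated by a predetermined interval, distinguishing a deliberate pose from a transient motion. A hedged sketch, where the 0.05 m drift threshold is an assumption and only the 1-second minimum interval comes from embodiment 28:

```python
import math

# Embodiment 28 requires an interval of at least 1 second; the distance
# threshold is an illustrative assumption not fixed by the specification.
HOLD_INTERVAL_S = 1.0
HOLD_THRESHOLD_M = 0.05

def pose_held(pos_t1, pos_t2, t1, t2,
              interval=HOLD_INTERVAL_S, threshold=HOLD_THRESHOLD_M):
    """Return True when a body part sampled at two times separated by at
    least `interval` seconds drifted less than `threshold`, i.e. the pose
    was held rather than merely passed through (embodiments 27-28)."""
    if t2 - t1 < interval:
        return False  # samples too close together to confirm a held pose
    return math.dist(pos_t1, pos_t2) < threshold
```

The same comparison applies unchanged to target positions in the embodiment-62 variant.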
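Embodiment 30 (and 65 for target configurations) decides on an action from a sequence of one body pose followed by a different one. A minimal sketch; the pose vocabulary and the action table are hypothetical, since the specification leaves the concrete poses and actions open:

```python
# Hypothetical mapping from a two-pose sequence to an HR-system action.
POSE_SEQUENCE_ACTIONS = {
    ("arms-raised", "arms-crossed"): "display-alert",
    ("hands-together", "arms-spread"): "display-all-clear",
}

def action_for_sequence(first_pose, second_pose):
    """Return the action for a first body pose followed by a different
    body pose, or None when the sequence is not a recognized signal."""
    if first_pose == second_pose:
        return None  # embodiment 30 requires a *different* second pose
    return POSE_SEQUENCE_ACTIONS.get((first_pose, second_pose))
```

In practice each pose in the sequence would itself be produced by a pose classifier over the ascertained body-part situations.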
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/214,811 US20210217247A1 (en) | 2018-12-21 | 2021-03-27 | Body pose message system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/230,278 US10970935B2 (en) | 2018-12-21 | 2018-12-21 | Body pose message system |
US17/214,811 US20210217247A1 (en) | 2018-12-21 | 2021-03-27 | Body pose message system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/230,278 Continuation US10970935B2 (en) | 2018-12-21 | 2018-12-21 | Body pose message system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210217247A1 true US20210217247A1 (en) | 2021-07-15 |
Family
ID=71098761
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/230,278 Active US10970935B2 (en) | 2018-12-21 | 2018-12-21 | Body pose message system |
US16/353,885 Abandoned US20200202628A1 (en) | 2018-12-21 | 2019-03-14 | Body target signaling |
US17/214,811 Abandoned US20210217247A1 (en) | 2018-12-21 | 2021-03-27 | Body pose message system |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/230,278 Active US10970935B2 (en) | 2018-12-21 | 2018-12-21 | Body pose message system |
US16/353,885 Abandoned US20200202628A1 (en) | 2018-12-21 | 2019-03-14 | Body target signaling |
Country Status (1)
Country | Link |
---|---|
US (3) | US10970935B2 (en) |
US10818088B2 (en) | 2018-07-10 | 2020-10-27 | Curious Company, LLC | Virtual barrier objects |
US10599381B2 (en) * | 2018-08-20 | 2020-03-24 | Dell Products, L.P. | Collaboration between head-mounted devices (HMDs) in co-located virtual, augmented, and mixed reality (xR) applications |
US10902678B2 (en) | 2018-09-06 | 2021-01-26 | Curious Company, LLC | Display of hidden information |
US10872584B2 (en) | 2019-03-14 | 2020-12-22 | Curious Company, LLC | Providing positional information using beacon devices |
- 2018
  - 2018-12-21 US US16/230,278 patent/US10970935B2/en active Active
- 2019
  - 2019-03-14 US US16/353,885 patent/US20200202628A1/en not_active Abandoned
- 2021
  - 2021-03-27 US US17/214,811 patent/US20210217247A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090051648A1 (en) * | 2007-08-20 | 2009-02-26 | Gesturetek, Inc. | Gesture-based mobile interaction |
US20160054807A1 (en) * | 2012-11-08 | 2016-02-25 | PlayVision Labs, Inc. | Systems and methods for extensions to alternative control of touch-based devices |
US20200349966A1 (en) * | 2018-05-04 | 2020-11-05 | Google Llc | Hot-word free adaptation of automated assistant function(s) |
Non-Patent Citations (2)
Title |
---|
Kumar SS, Wangyal T, Saboo V, Srinath R. Time series neural networks for real time sign language translation. In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018 Dec 17 (pp. 243-248). IEEE. *
San-Segundo R, Barra R, Córdoba R, d'Haro LF, Fernández F, Ferreiros J, Lucas JM, Macías-Guarasa J, Montero JM, Pardo JM. Speech to sign language translation system for Spanish. Speech Communication. 2008 Oct 30;50(11-12):1009. *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12130971B2 (en) * | 2022-01-07 | 2024-10-29 | Sony Interactive Entertainment Europe Limited | Method for obtaining a position of a peripheral device |
Also Published As
Publication number | Publication date |
---|---|
US20200202625A1 (en) | 2020-06-25 |
US20200202628A1 (en) | 2020-06-25 |
US10970935B2 (en) | 2021-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11995772B2 (en) | Directional instructions in a hybrid-reality system | |
US20210217247A1 (en) | Body pose message system | |
US11238666B2 (en) | Display of an occluded object in a hybrid-reality system | |
US10650600B2 (en) | Virtual path display | |
US20210043007A1 (en) | Virtual Path Presentation | |
US10872584B2 (en) | Providing positional information using beacon devices | |
US11282248B2 (en) | Information display by overlay on an object | |
US10764705B2 (en) | Perception of sound objects in mediated reality | |
US10366542B2 (en) | Audio processing for virtual objects in three-dimensional virtual visual space | |
JP2021060627A (en) | Information processing apparatus, information processing method, and program | |
US20220377486A1 (en) | Audio enhanced augmented reality | |
JP2023531849A (en) | Augmented reality device for audio recognition and its control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CURIOUS COMPANY, LLC, OREGON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JONES, ANTHONY MARK;JONES, JESSICA A.F.;YOUNG, BRUCE ARNOLD;REEL/FRAME:056197/0908 Effective date: 20181204 |
|
STCT | Information on status: administrative procedure adjustment |
Free format text: PROSECUTION SUSPENDED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |