WO2021021085A1 - Modification of projected structured light based on identified points within a captured image - Google Patents

Modification of projected structured light based on identified points within a captured image

Info

Publication number
WO2021021085A1
Authority
WO
WIPO (PCT)
Prior art keywords
structured light
points
reference object
identified
captured
Prior art date
Application number
PCT/US2019/043701
Other languages
English (en)
Inventor
Joseph NOURI
Robert Paul Martin
Mark LESSMAN
Tsung-Nung HUANG
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to US17/415,000 (US11676357B2)
Priority to CN201980098835.6A (CN114175629A)
Priority to EP19939908.0A (EP3973697A4)
Priority to PCT/US2019/043701 (WO2021021085A1)
Publication of WO2021021085A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means

Definitions

  • Extended reality (XR) technologies include virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies, and quite literally extend the reality that users experience.
  • XR technologies may employ head-mounted displays (HMDs), for instance.
  • An HMD is a display device worn on the head or as part of a helmet.
  • in VR technologies, the HMD wearer is immersed in an entirely virtual world, whereas in AR technologies, the HMD wearer’s direct or indirect view of the physical, real-world environment is augmented.
  • in MR technologies, the HMD wearer experiences the merging of real and virtual worlds.
  • FIG. 1 is a flowchart of an example method for identifying reference object points within captured images of an object illuminated by projected structured light by modifying the structured light in a recursively iterative manner.
  • FIGs. 2A and 2B are diagrams depicting example performance of the method of FIG. 1.
  • FIGs. 3A and 3B are diagrams depicting another example performance of the method of FIG. 1.
  • FIG. 4 is a diagram of an example head-mounted display (HMD).
  • FIG. 5 is a diagram of an example non-transitory computer-readable data storage medium.
  • FIG. 6 is a flowchart of an example method.
  • a head-mounted display can be employed as an extended reality (XR) technology to extend the reality experienced by the HMD’s wearer.
  • An HMD can include a small display in front of one or each eye of the wearer, as well as various sensors to detect or sense the wearer so that the images projected on the HMD’s display convincingly immerse the wearer within an XR, be it a virtual reality (VR), augmented reality (AR), a mixed reality (MR), or another type of XR.
  • sensors can include global positioning system (GPS) or other geolocation sensors to determine the geographic location of the wearer, as well as accelerometers, gyroscopes, compasses and other such sensors to detect motion and orientation of the wearer.
  • An HMD can further effectively include as a sensor a camera, which is an image-capturing device that captures still or motion images.
  • the camera of an HMD may be employed to capture images of the wearer’s lower face, including the mouth, so that the wearer’s facial expressions and correspondingly derived information, like facial cues and emotions, of the wearer can be assessed.
  • Detecting facial features of the wearer of an HMD provides for even fuller immersion within an XR, so that the XR suitably responds to the wearer’s facial expressions, facial cues, and emotions, and so that any graphical representation of the wearer within the XR, such as an avatar, changes in correspondence with changes in the wearer’s actual facial expressions.
  • Machine learning models can be trained to detect facial features of HMD wearers from captured images, by specifically identifying reference points corresponding to facial landmarks of the wearer’s facial features. For example, a machine learning model may identify the center point of the bottom of an HMD wearer’s upper lip and the center point of the top of the wearer’s lower lip. From this information, whether the wearer of the HMD has his or her mouth open or closed can be assessed. As another example, a machine learning model may identify the corners of the wearer’s mouth, and from this information in comparison to the center points of the wearer’s lips assess whether the user is smiling or frowning.
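The two assessments just described can be sketched in a few lines of code; the coordinate values, threshold, and function names below are illustrative assumptions for the sketch, not part of the described techniques.

```python
# Illustrative sketch: assessing mouth state from identified lip
# reference points. Points are (x, y) in image coordinates, with y
# increasing downward. All values here are hypothetical.

def mouth_is_open(upper_lip_bottom, lower_lip_top, threshold=5.0):
    """True if the vertical gap between the center point of the upper
    lip's bottom edge and the lower lip's top edge exceeds a pixel
    threshold (an assumed heuristic)."""
    return abs(lower_lip_top[1] - upper_lip_bottom[1]) > threshold

def is_smiling(left_corner, right_corner, upper_lip_bottom):
    """A face might be considered smiling when both mouth corners sit
    above (smaller y) the lip center point."""
    return (left_corner[1] < upper_lip_bottom[1]
            and right_corner[1] < upper_lip_bottom[1])

print(mouth_is_open((120, 200), (120, 214)))          # 14 px gap -> True
print(is_smiling((90, 195), (150, 196), (120, 200)))  # corners higher -> True
```

In practice the points would come from a trained model rather than being supplied by hand; the comparison logic is what the sketch is meant to show.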
  • For the resulting captured images to serve as machine learning model training data, the images have to be annotated to identify the reference points corresponding to the desired landmarks of the wearers’ facial features, on which model training occurs.
  • Techniques described herein ameliorate such issues associated with acquiring machine learning model training data in which reference points of interest are identified within captured images. While an object, such as an HMD wearer, is illuminated by projected structured light, the techniques recursively capture images of the object, identify the reference points within the captured images, and modify the structured light projected onto the object based on the identified reference points. At each recursive iteration, the techniques modify the structured light to improve identification of additional reference points from images of the object as illuminated by the modified structured light that are captured in the next iteration.
  • FIG. 1 shows an example method 100 for identifying reference object points within captured images of an object illuminated by projected structured light, by modifying the structured light in a recursively iterative manner.
  • the object may be a human, such as a human face or a portion thereof (e.g., the lower facial region of a person, including his or her mouth).
  • the method 100 may be performed by a processor executing program code stored on a non-transitory computer-readable data storage medium.
  • the method 100 includes projecting structured light onto the object (102).
  • Structured light is light of a known spatial pattern or shape that is projected onto an object to permit determination of surface, depth, and/or other information regarding the object.
  • Examples of structured light include a grid of intersecting horizontal and vertical lines, a sequence of parallel (e.g., horizontal or vertical) lines, and a single line.
  • Other examples include one or more circles, ovals, squares, other rectangles, triangles, and other shapes. When there is more than one such shape, the shapes may be organized within a grid.
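The pattern shapes listed above can be illustrated as binary projector masks (1 = illuminated pixel). This is only a sketch under that representation; an actual structured-light projector pipeline would differ.

```python
# Sketch: structured-light patterns as binary masks.
import numpy as np

def vertical_line(h, w, x):
    """A single vertical line at column x."""
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[:, x] = 1
    return mask

def grid(h, w, spacing):
    """A grid of intersecting horizontal and vertical lines."""
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[::spacing, :] = 1  # horizontal lines
    mask[:, ::spacing] = 1  # vertical lines
    return mask

g = grid(8, 8, 4)  # rows 0 and 4, columns 0 and 4
print(int(g.sum()))  # 16 + 16 - 4 overlapping pixels = 28
```

Tightening the grid between iterations then amounts to generating a new mask with a smaller `spacing`, as discussed later for FIGs. 3A and 3B.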
  • the method 100 includes capturing an image of the object as illuminated by the projected structured light (104), and identifying reference object points within the captured image (106).
  • the reference object points are reference points of the object that are of interest; that is, the reference object points are the points of the object that are desired to be identified within captured images.
  • the points are object points in that they are points of the object within the captured images; the points are reference points in that they are the points that are of interest.
  • the reference object points may be reference points on which a machine learning model can be trained, for instance.
  • the reference object points may be identified using a suitable image processing, pattern recognition, computer vision, or other technique. Such techniques include employing Hough lines and circles, and contouring, as well as image-gradient techniques to then perform feature extraction.
  • image-gradient techniques include scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and binary robust independent elementary features (BRIEF) techniques, for instance.
  • the reference object points identified in a particular performance instance of part 106 are not all the reference points of the object to be identified via the method 100. It may be said that the first time part 106 is performed, first reference object points are identified; the second time part 106 is performed, second reference object points are identified; the third time part 106 is performed, third reference object points are identified; and so on. Therefore, after the first iteration of part 106, there will be additional reference object points of the object to be identified (108) in one or more further iterations.
  • the method 100 includes modifying the structured light based on the reference object points that have been identified (110).
  • the structured light is modified so as to improve identification of additional reference object points the next time part 106 is performed.
  • the structured light is modified based on the reference object points that have been identified in that how or where the structured light is projected onto the object is modified according to the location, number, and so on, of the reference object points that have already been identified in prior iterations of part 106.
  • Modification of the structured light can include, for instance, changing the shape of the structured light projected onto the object, the position at which the structured light is projected on the object, and so on.
  • the method 100 is then repeated at part 104, with the capture of an image of the object as is now illuminated by the projected structured light as has been modified. Additional reference object points within this most recently captured image are identified in part 106.
  • the additional reference object points may be able to be better identified (or identified at all) the second time part 106 is performed as compared to the first time part 106 was performed, due to the projected structured light illuminating the object having been modified. That is, illumination of the object by the modified structured light permits or at least improves detectability of the additional reference object points by the image processing, pattern recognition, or computer vision technique being used.
  • the method 100 is finished (112).
  • the method 100 thus identifies reference object points within captured images of an object illuminated by projected structured light, over a number of recursive iterations 114 in which the projected structured light is modified.
  • the structured light is modified based on the reference object points that have already been identified, to permit or improve detection of reference object points in the next iteration 114.
  • the iterations 114 are recursive in that the structured light is modified in each iteration 114 based on at least the reference object points identified in the immediately prior iteration 114.
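The recursive iterations 114 can be sketched at a high level as a loop, with the project, capture, identify, and modify steps left as caller-supplied callables. The function name, signature, and stopping condition here are illustrative assumptions, not the claimed method itself.

```python
# High-level sketch of method 100's recursive loop (parts 102-110).
def identify_points_recursively(project, capture, identify, modify,
                                pattern, max_iterations=10):
    all_points = []
    for _ in range(max_iterations):
        project(pattern)                       # part 102 (or modified light)
        image = capture()                      # part 104
        new_points = identify(image)           # part 106
        if not new_points:                     # no additional points remain
            break                              # method finished (112)
        all_points.extend(new_points)
        # part 110: modify the light based on all points identified so far
        pattern = modify(pattern, all_points)
    return all_points
```

A caller would supply, for example, a light-source driver as `project`, a camera read as `capture`, and an image-processing routine as `identify`; the loop itself only expresses the recursion described above.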
  • FIGs. 2A and 2B show example performance of the method 100.
  • the object onto which structured light is projected is the lower region of a human face 200, including a portion of the nose 202, and also the mouth’s upper lip 204A and lower lip 204B, which are collectively referred to as the lips 204.
  • the face 200 depicted in FIGs. 2A and 2B is slightly smiling.
  • the lips 204 are not touching but rather are slightly open, revealing a gap 206 between the lips 204.
  • structured light in the form of a single vertical line 208 is projected onto the center of the face 200.
  • the person whose face 200 is depicted may be wearing an HMD in the form of glasses or goggles and that has a light source which can project the structured light.
  • the light source is able to project the single vertical line 208 specifically down the center of the face 200 of the wearer of the HMD due to the HMD being in the form of glasses or goggles, such that the HMD is positioned centrally from left to right on the wearer’s face 200.
  • within the captured image, reference object points 210A and 210B, collectively referred to as the points 210, and reference object points 212A and 212B, collectively referred to as the points 212, are identified.
  • the reference object points 210 and 212 correspond to visually discernible edges of the lips 204 against the rest of the face 200.
  • the point 210A identifies where the line 208 intersects the lower edge of the upper lip 204A and the point 210B identifies where the line 208 intersects the upper edge of the lower lip 204B.
  • the point 212A identifies where the line 208 intersects the upper edge of the upper lip 204A and the point 212B identifies where the line 208 intersects the lower edge of the lower lip 204B.
  • the reference object points 210 and 212 can be considered first reference object points that are identified in a first iteration 114 of the method 100 of FIG. 1.
  • the vertical line 208 is the structured light projected in part 102
  • the reference object points 210 and 212 are the reference object points identified in part 106 within the image, captured in part 104, of the face 200 as illuminated by the projected vertical line 208.
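One simple way the intersection points 210 and 212 might be located is by scanning pixel intensities along the projected line's column and taking the strongest gradients as lip edges. This is only an illustrative stand-in for the image-processing techniques mentioned earlier, not the specific method of the disclosure.

```python
# Sketch: locating reference points where a projected vertical line
# crosses lip edges, via the k largest intensity steps along the
# line's pixel column. Values are synthetic.

def edge_points_along_line(column, k=2):
    """Return row indices of the k largest absolute intensity steps
    in a 1-D column of pixel values, in ascending row order."""
    grads = [abs(column[i + 1] - column[i]) for i in range(len(column) - 1)]
    return sorted(sorted(range(len(grads)), key=lambda i: grads[i])[-k:])

# Synthetic column: bright skin (200), dark lip (80), bright skin again.
col = [200] * 5 + [80] * 4 + [200] * 5
print(edge_points_along_line(col))  # edges at rows 4 and 8 -> [4, 8]
```

With `k=4`, the same scan could in principle yield all four points 210A, 210B, 212A, and 212B along the line 208, provided the lip edges produce the strongest gradients.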
  • the structured light projected onto the face 200 is modified in part 110 based on the already identified reference object points 210 and 212.
  • the shape of the projected structured light changes from one vertical line 208 in FIG. 2A to two horizontal lines 218A and 218B, collectively referred to as the horizontal lines 218, in FIG. 2B.
  • the structured light projected onto the face 200 is modified based on the points 210 and 212 in the example of FIGs. 2A and 2B in that where the horizontal lines 218 are projected (i.e., their locations on the face 200) is particularly controlled by the points 210.
  • the line 218A is projected onto the face 200 so that it is tangential to the already identified point 210A corresponding to the lower edge of the upper lip 204A.
  • the line 218B is projected onto the face 200 so that it is tangential to the already identified point 210B corresponding to the upper edge of the lower lip 204B.
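The modification from FIG. 2A to FIG. 2B, replacing the single vertical line with two horizontal lines each tangential to an already-identified point, can be sketched as follows; the coordinates and function name are illustrative assumptions.

```python
# Sketch: choose the rows for the two horizontal lines 218 from the
# already-identified points 210 (hypothetical coordinates).

def horizontal_line_rows(identified_points):
    """Each horizontal line is projected at the row (y coordinate) of
    one identified lip point, so it is tangential to that point."""
    rows = sorted(p[1] for p in identified_points)
    return rows[0], rows[-1]

point_210a = (120, 198)  # line 208 meets lower edge of upper lip 204A
point_210b = (120, 212)  # line 208 meets upper edge of lower lip 204B
print(horizontal_line_rows([point_210a, point_210b]))  # (198, 212)
```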
  • as shown in FIG. 2B, from the captured image of the face 200 illuminated by the horizontal lines 218, additional reference object points 220A and 220B, collectively referred to as the points 220, and additional reference object points 222A and 222B, collectively referred to as the points 222, are identified.
  • the reference object points 220 and 222 like the points 210 and 212, correspond to visually discernible edges of the lips 204 against the rest of the face 200.
  • the points 220A and 220B identify where the lines 218A and 218B respectively intersect an outermost edge of the lips 204 at the face 200’s right side.
  • the points 222A and 222B identify where the lines 218A and 218B respectively intersect an outermost edge of the lips 204 at the face 200’s left side.
  • the points 220 and 222 each specifically intersect the lower lip 204B, which may be because the face 200 has a smiling expression.
  • the reference object points 220 and 222 can be considered second reference points that are identified in a second iteration 114 of the method 100 of FIG. 1.
  • the horizontal lines 218 are the projected structured light modified in part 110 of the prior, first iteration 114.
  • the reference object points 220 and 222 are the reference object points identified in the second iteration of part 106, from the image of the face 200 as illuminated by the projected lines 218.
  • the method 100 is thus finished after completion of the second iteration 114.
  • the example of FIGs. 2A and 2B shows how projected structured light in the form of lines can be modified so that the lines are projected over successive recursive iterations in precise locations on an object. Such modification of the projected structured light can thus permit desired reference object points of interest to be identified. If the projected structured light were not modified over recursive iterations, the reference object points of interest may not otherwise be able to be identified as precisely.
  • FIGs. 3A and 3B show another example performance of the method 100.
  • the object onto which structured light is projected is again the lower region of a human face 200, including a portion of the nose 202, and also the mouth’s lips 204.
  • the lips 204 as before are not touching, but rather are slightly open, revealing a gap 206 between the lips 204.
  • structured light in the form of a patterned grid 300 of horizontal and vertical lines is projected onto the lower region of the face 200.
  • the grid 300 is depicted as a two-dimensional (2D) overlay, but in actuality may fit the contours of the face 200.
  • the person whose face 200 is depicted may be wearing an HMD having a light source that can project the structured light.
  • as shown in FIG. 3A, from the captured image of the face 200 illuminated by the patterned grid 300, reference object points 302 are identified.
  • the points 302 are depicted as filled-in circles in FIG. 3A.
  • the reference object points 302 each correspond to visually discernible edges of the lips 204 against the rest of the face 200.
  • Each point 302 corresponds to where a horizontal line and/or a vertical line of the grid 300 intersects an edge of the lips 204.
  • the reference object points 302 are first reference object points that are identified in a first iteration 114 of the method 100 of FIG. 1.
  • the patterned grid 300 is the structured light projected in part 102, and the reference object points 302 are the reference object points identified in part 106 within the image, captured in part 104, of the face 200 as illuminated by the grid 300.
  • the structured light projected onto the face 200 is modified in part 110 based on the already identified object points 302. Specifically, the shape of the projected structured light changes from the patterned grid 300 in FIG. 3A to the patterned grid 310 in FIG. 3B, in which the horizontal and vertical lines are closer together than in the grid 300.
  • the structured light projected onto the face 200 is modified based on the points 302 in the example of FIGs. 3A and 3B in that where the grid 310 is projected on the face 200 is controlled by the points 302. Specifically, the grid 310 is projected from left to right from a set distance beyond the left-most point 302 to a set distance beyond the right most point 302.
  • the grid 310 is projected from top to bottom from a set distance beyond the upper-most point 302 to a set distance beyond the lower-most point 302. Projecting a smaller grid 310 as compared to the grid 300 may permit the lines of the grid 310 to be closer together than in the grid 300.
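The shrunken extent of the grid 310 described above amounts to a bounding box around the identified points 302 plus a set margin on each side. A minimal sketch, assuming pixel coordinates and an arbitrary margin value:

```python
# Sketch: compute the projected extent of the tighter grid 310 from
# the extreme already-identified points 302, plus a set margin
# (the margin value is an assumption, not from the description).

def grid_bounds(points, margin=10):
    """Return (left, top, right, bottom) of the region to project,
    extending a set distance beyond the outermost points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

pts = [(100, 180), (160, 175), (130, 210)]
print(grid_bounds(pts))  # (90, 165, 170, 220)
```

Projecting the same number of grid lines into this smaller region is what lets the lines of the grid 310 sit closer together than in the grid 300.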
  • as shown in FIG. 3B, from the captured image of the face 200 illuminated by the patterned grid 310, additional reference object points 312 are identified.
  • the points 312 are depicted as crosshatches in FIG. 3B.
  • the reference object points 312, like the points 302, correspond to visually discernible edges of the lips 204 against the rest of the face 200.
  • the points 312 identify where the horizontal lines and/or vertical lines of the grid 310 intersect edges of the lips 204.
  • the reference object points 312 are second reference points that are identified in a second iteration 114 of the method 100 of FIG. 1.
  • the patterned grid 310 is the projected structured light modified in part 110 of the prior, first iteration 114.
  • the reference object points 312 are the reference object points identified in the second iteration of part 106 from the image of the face 200 as illuminated by the projected grid 310.
  • the method 100 is thus finished after completion of the second iteration 114.
  • the example of FIGs. 3A and 3B shows how projected structured light in the form of a patterned grid can be modified so that its lines are projected over successive recursive iterations in precise locations on an object.
  • Such modification of the projected structured light can therefore permit definition of the facial feature in question - the lips 204 in this case - via more reference object points than if the structured light were not modified over successive iterations.
  • the number of lines of the grid that intersect the facial feature of interest increases.
  • the facial feature is defined at a greater resolution than may be possible if the projected structured light were not modified over recursive iterations.
  • the projected structured light is in the form of lines, either one or more parallel lines in FIGs. 2A and 2B, or a grid of intersecting lines in FIGs. 3A and 3B.
  • the structured light can be of other shapes or forms as well.
  • the structured light may be in the form or shape of circles, ovals, squares, other rectangles, triangles, and other shapes, which may be organized within a grid. Between iterations, the positions and/or shapes of the structured light can be modified to permit identification of desired reference object points over a number of recursive iterations.
  • the object of which reference object points are identified within images of the object is a human face, particularly the lower facial region and more particularly still the lips of the mouth.
  • the object can be a different part of the human face, such as a different facial feature and/or facial region.
  • eyes may be a facial feature of interest, so that eye position can then be tracked.
  • the object may be a different part of a person as well, like the user’s fingers, arms, wrists, and so on. The object may not be a person in other implementations.
  • FIG. 4 shows an example HMD 404 that can be worn by a user 400.
  • the HMD 404 may be in the form of goggles or glasses, such that the HMD 404 is positioned incident to the user 400’s eyes 402.
  • the HMD 404 includes a light source 406, a camera 408, and a processor 410.
  • the HMD 404 can include other components as well, such as one or more displays opposite the eyes 402 of the user 400.
  • the HMD 404 can be employed as an XR technology.
  • the light source 406 projects structured light onto the lower region of the user 400’s face 200, including a portion of the user’s nose 202 as well as the user’s lips 204, as has been described.
  • the light source 406 may be a microelectromechanical systems (MEMS) light emitter, a digital-light processing (DLP) light source, or another type of light source.
  • the structured light that the light source 406 projects may be in the visible spectrum, or may be in a non-visible spectrum, such as the infrared (IR) or ultraviolet (UV) spectrum.
  • the camera 408 captures images of the face 200 of the user 400, as illuminated by the structured light that the light source 406 projects.
  • the camera 408 can be a still image or a moving image (i.e., video) capturing device.
  • Examples of the camera 408 include semiconductor image sensors like charge-coupled device (CCD) image sensors and complementary metal-oxide-semiconductor (CMOS) image sensors.
  • the processor 410 may be a general-purpose processor, such as a central processing unit (CPU), or a special-purpose processor, such as an application-specific integrated circuit (ASIC).
  • the processor 410 may directly perform processing to identify the reference object points and to determine how to then modify the structured light.
  • the processor 410 may identify the reference object points indirectly, by transmitting the captured images to a computing system or device to which the HMD 404 is communicatively coupled that then directly performs processing to identify the points.
  • the computing system or device may also determine how to modify the structured light, and transmit corresponding commands or instructions back to the processor 410.
  • the processor 410 in turn controls the light source 406 to modify the structured light, in accordance with the received commands or instructions.
  • Instances of the HMD 404 may be worn by a variety of different users to collect a rich set of machine learning model training data without having to manually annotate captured images with reference object points (i.e., without interaction by a developer, analyst, or other user).
  • the reference object points are instead programmatically identified, such as in real-time, in a recursive iterative manner as has been described.
  • end or production use of the HMD 404 can thus entail facial feature or expression recognition using reference object points that are identified by the model.
  • the reference object points may be used for purposes other than machine learning model training as well.
  • end or production use of the HMD 404 may achieve facial feature or expression recognition using reference object points that are identified via the techniques that have been described herein.
  • the techniques described herein may not be used for training a machine learning model that then identifies such reference object points in an end use or production environment.
  • the light source 406 and/or the camera 408 may not be a part of the HMD 404.
  • the techniques described herein can thus be performed in relation to the capture of images of an object illuminated by projected structured light in non-HMD contexts.
  • a light source and a camera may be integrated within the display of a computing device. With the user’s face incident to the display, the light source may project structured light onto and capture images of the user’s face, from which reference object points are identified.
  • the computing device can modify the structured light based on the identified points and identify additional points from additional images in a recursively iterative manner as has been described.
  • FIG. 5 shows an example non-transitory computer-readable data storage medium 500.
  • the computer-readable data storage medium 500 stores program code 502.
  • the program code 502 is executable by a processor, such as the processor of an HMD or a computing system or device to which an HMD is communicatively coupled, to perform processing.
  • the processing includes projecting structured light onto an object (504), and capturing a (first) image of the object as illuminated by the projected structured light (506).
  • the processing includes identifying (first) reference object points within the captured (first) image (508), and modifying the structured light projected onto the object based on the identified (first) reference object points (510).
  • the processing includes capturing an additional (second) image of the object as illuminated by the modified projected structured light (512), and identifying additional (second) reference object points within the captured (second) additional image (514).
  • One or more additional recursive iterations can be performed to identify further reference object points.
  • the structured light projected onto the object may be modified a second time, based on the identified second reference object points, and a third image of the object, as illuminated by the projected structured light as modified the second time, captured.
  • Third reference object points can then be identified within the captured third image.
  • FIG. 6 shows an example method 600.
  • a light source projects structured light onto an object (602), and a camera captures images of the object as illuminated by the projected structured light (604).
  • a computing device which may include or be an HMD, identifies reference object points within the captured images of the object (606). The computing device then modifies the structured light projected onto the object based on the identified reference object points, as the images are captured and the reference object points are identified (608).
  • the techniques described herein thus capture images of an object and identify reference object points over recursive iterations in which the structured light illuminating the object is modified. At each recursive iteration, a current image of the object is captured, new reference object points are identified within the captured current image, and the structured light is modified based on the newly identified points. Such projected structured light modification can permit identification of reference object points that otherwise may not be able to be identified.

Abstract

Structured light is projected onto an object, and an image of the object as illuminated by the projected structured light is captured. Reference object points are identified within the captured image, and the structured light projected onto the object is modified based on the identified reference object points. An additional image of the object as illuminated by the modified projected structured light is captured, and additional reference object points are identified within the captured additional image.
PCT/US2019/043701 2019-07-26 2019-07-26 Modification of projected structured light based on identified points within captured image WO2021021085A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/415,000 US11676357B2 (en) 2019-07-26 2019-07-26 Modification of projected structured light based on identified points within captured image
CN201980098835.6A CN114175629A (zh) 2019-07-26 2019-07-26 基于捕捉的图像内的被识别的点来修改投射的结构光
EP19939908.0A EP3973697A4 (fr) Modification of projected structured light based on identified points within captured image
PCT/US2019/043701 WO2021021085A1 (fr) Modification of projected structured light based on identified points within captured image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2019/043701 WO2021021085A1 (fr) 2019-07-26 2019-07-26 Modification of projected structured light based on identified points within captured image

Publications (1)

Publication Number Publication Date
WO2021021085A1 true WO2021021085A1 (fr) 2021-02-04

Family

ID=74230471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/043701 WO2021021085A1 (fr) 2019-07-26 2019-07-26 Modification of projected structured light based on identified points within captured image

Country Status (4)

Country Link
US (1) US11676357B2 (fr)
EP (1) EP3973697A4 (fr)
CN (1) CN114175629A (fr)
WO (1) WO2021021085A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220408067A1 (en) * 2021-06-22 2022-12-22 Industrial Technology Research Institute Visual recognition based method and system for projecting patterned light, method and system applied to oral inspection, and machining system

Citations (4)

Publication number Priority date Publication date Assignee Title
US20120062719A1 (en) 2010-09-09 2012-03-15 University Of Southern California Head-Mounted Photometric Facial Performance Capture
US20150332459A1 (en) * 2012-12-18 2015-11-19 Koninklijke Philips N.V. Scanning device and method for positioning a scanning device
US20190029528A1 (en) * 2015-06-14 2019-01-31 Facense Ltd. Head mounted system to collect facial expressions
US10295827B1 (en) * 2017-04-27 2019-05-21 Facebook Technologies, Llc Diffractive optics beam shaping for structured light generator

Family Cites Families (23)

Publication number Priority date Publication date Assignee Title
US7185319B2 (en) 2002-07-09 2007-02-27 Microsoft Corporation Debugging distributed applications
US9443310B2 (en) * 2013-10-09 2016-09-13 Microsoft Technology Licensing, Llc Illumination modules that emit structured light
GB201407267D0 (en) * 2014-04-24 2014-06-11 Cathx Res Ltd Underwater surveys
KR20170124559A (ko) 2015-02-25 2017-11-10 Facebook, Inc. Identification of an object in a volume based on characteristics of light reflected by the object
US10512508B2 (en) * 2015-06-15 2019-12-24 The University Of British Columbia Imagery system
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method
DE112016004437T5 (de) 2015-09-29 2018-07-05 BinaryVR, Inc. Head-Mounted-Display mit Gesichtsausdruck-Erkennungsfähigkeit
US9983709B2 (en) * 2015-11-02 2018-05-29 Oculus Vr, Llc Eye tracking using structured light
CN105912986B (zh) 2016-04-01 2019-06-07 Beijing Megvii Technology Co., Ltd. Liveness detection method and system
US10282530B2 (en) * 2016-10-03 2019-05-07 Microsoft Technology Licensing, Llc Verifying identity based on facial dynamics
JP6266736B1 (ja) 2016-12-07 2018-01-24 Colopl, Inc. Method for communicating via a virtual space, program for causing a computer to execute the method, and information processing apparatus for executing the program
WO2019009894A1 (fr) 2017-07-06 2019-01-10 Hewlett-Packard Development Company, L.P. Light-blocking shutters for cameras
WO2019012535A1 (fr) 2017-07-12 2019-01-17 Guardian Optical Technologies Ltd. Systems and methods for acquiring information of an environment
US10586342B2 (en) 2017-08-31 2020-03-10 Facebook Technologies, Llc Shifting diffractive optical element for adjustable depth sensing resolution
US10863146B1 (en) * 2017-09-12 2020-12-08 Amazon Technologies, Inc. Setup and configuration of audio/video recording and communication devices
US10928190B2 (en) * 2017-09-27 2021-02-23 Brown University Techniques for shape measurement using high frequency patterns and related systems and methods
US10248842B1 (en) 2018-01-09 2019-04-02 Facebook Technologies, Llc Face tracking using structured light within a head-mounted display
US11153503B1 (en) * 2018-04-26 2021-10-19 AI Incorporated Method and apparatus for overexposing images captured by drones
US11182914B2 (en) * 2018-05-21 2021-11-23 Facebook Technologies, Llc Dynamic structured light for depth sensing systems based on contrast in a local area
US10901092B1 (en) * 2018-10-02 2021-01-26 Facebook Technologies, Llc Depth sensing using dynamic illumination with range extension
US11910125B2 (en) * 2018-12-13 2024-02-20 Lg Innotek Co., Ltd. Camera device
US11694433B2 (en) * 2019-02-15 2023-07-04 Google Llc Detection of projected infrared patterns using difference of Gaussian and blob identification
KR20200137227A (ko) 2019-05-29 2020-12-09 LG Innotek Co., Ltd. Camera module

Non-Patent Citations (1)

Title
See also references of EP3973697A4

Also Published As

Publication number Publication date
EP3973697A4 (fr) 2023-03-15
US11676357B2 (en) 2023-06-13
CN114175629A (zh) 2022-03-11
EP3973697A1 (fr) 2022-03-30
US20220189131A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
US11749025B2 (en) Eye pose identification using eye features
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
US9898651B2 (en) Upper-body skeleton extraction from depth maps
CN108717531B (zh) Human body pose estimation method based on Faster R-CNN
Lee et al. Handy AR: Markerless inspection of augmented reality objects using fingertip tracking
CN112184705B (zh) Human acupoint recognition, localization, and application system based on computer vision technology
US9019267B2 (en) Depth mapping with enhanced resolution
CN102096471B (zh) Machine-vision-based human-computer interaction method
JP2009536731A (ja) Modeling of humanoid forms from depth maps
JP2018518750A (ja) Augmentation of a depth map representation with a reflectance map representation
WO2021052208A1 (fr) Auxiliary photographing device for analyzing diseases associated with movement disorders, control method, and apparatus
CN110276239A (zh) Eye tracking method, electronic device, and non-transitory computer-readable recording medium
Perra et al. Adaptive eye-camera calibration for head-worn devices
CN111178170B (zh) Gesture recognition method and electronic device
CN110910426A (zh) Motion process and motion trend recognition method, storage medium, and electronic device
US11676357B2 (en) Modification of projected structured light based on identified points within captured image
CN117238031A (zh) Motion capture method and system for a virtual human
KR101861096B1 (ko) Method and apparatus for controlling information displayed on a screen by recognizing a user's hand gestures
Jiménez et al. Face tracking and pose estimation with automatic three-dimensional model construction
JP5688514B2 (ja) Gaze measurement system, method, and program
KR101844367B1 (ko) Head pose estimation method and apparatus using coarse global initialization by partial pose estimation
US11954943B2 (en) Method for generating synthetic data
Diaz et al. Preliminary experimental study of marker-based hand gesture recognition system
KR101385373B1 (ko) Face-detection-based hand gesture recognition method
Diaz et al. Toward haptic perception of objects in a visual and depth guided navigation

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19939908

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019939908

Country of ref document: EP

Effective date: 20211223

NENP Non-entry into the national phase

Ref country code: DE