WO2012033578A1 - Depth camera based on structured light and stereo vision - Google Patents

Depth camera based on structured light and stereo vision

Info

Publication number
WO2012033578A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
sensor
frame
structured light
pixel data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2011/046139
Other languages
English (en)
French (fr)
Inventor
Sagi Katz
Avishai Adler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Corp
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to CA2809240A priority Critical patent/CA2809240A1/en
Priority to KR1020137005894A priority patent/KR20140019765A/ko
Priority to EP11823916.9A priority patent/EP2614405A4/en
Priority to JP2013528202A priority patent/JP5865910B2/ja
Publication of WO2012033578A1 publication Critical patent/WO2012033578A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/25Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782Systems for determining direction or deviation from predetermined direction
    • G01S3/785Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864T.V. type tracking systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • a real-time depth camera is able to determine the distance to a human or other object in a field of view of the camera, and to update the distance substantially in real time based on a frame rate of the camera.
  • a depth camera can be used in motion capture systems, for instance, to obtain data regarding the location and movement of a human body or other subject in a physical space, and can use the data as an input to an application in a computing system.
  • the depth camera includes an illuminator which illuminates the field of view, and an image sensor which senses light from the field of view to form an image.
  • a depth camera system uses at least two image sensors, and a combination of structured light image processing and stereoscopic image processing to obtain a depth map of a scene in substantially real time.
  • the depth map can be updated for each new frame of pixel data which is acquired by the sensors.
  • the image sensors can be mounted at different distances from an illuminator, and can have different characteristics, to allow a more accurate depth map to be obtained while reducing the likelihood of occlusions.
  • a depth camera system includes an illuminator which illuminates an object in a field of view with a pattern of structured light, at least first and second sensors, and at least one control circuit.
  • the first sensor senses reflected light from the object to obtain a first frame of pixel data, and is optimized for shorter range imaging. This optimization can be realized in terms of, e.g., a relatively shorter distance between the first sensor and the illuminator, or a relatively small exposure time, spatial resolution and/or sensitivity to light of the first sensor.
  • the depth camera system further includes a second sensor which senses reflected light from the object to obtain a second frame of pixel data, where the second sensor is optimized for longer range imaging.
  • the depth camera system further includes at least one control circuit, which can be in a common housing with the sensors and illuminators, and/or in a separate component such as a computing environment.
  • the at least one control circuit derives a first structured light depth map of the object by comparing the first frame of pixel data to the pattern of the structured light, derives a second structured light depth map of the object by comparing the second frame of pixel data to the pattern of the structured light, and derives a merged depth map which is based on the first and second structured light depth maps.
  • Each depth map can include a depth value for each pixel location, such as in a grid of pixels.
  • stereoscopic image processing is also used to refine depth values.
  • the use of stereoscopic image processing may be triggered when one or more pixels of the first and/or second frames of pixel data are not successfully matched to a pattern of structured light, or when a depth value indicates a large distance that requires a larger base line to achieve good accuracy, for instance. In this manner, further refinement is provided to the depth values only as needed, to avoid unnecessary processing steps.
  • the depth data obtained by a sensor can be assigned weights based on characteristics of the sensor, and/or accuracy measures based on a degree of confidence in depth values.
  • the final depth map can be used as an input to an application in a motion capture system, for instance, where the object is a human which is tracked by the motion capture system, and where the application changes a display of the motion capture system in response to a gesture or movement by the human, such as by animating an avatar, navigating an on-screen menu, or performing some other action.
  • FIG. 1 depicts an example embodiment of a motion capture system.
  • FIG. 2 depicts an example block diagram of the motion capture system of FIG. 1.
  • FIG. 3 depicts an example block diagram of a computing environment that may be used in the motion capture system of FIG. 1.
  • FIG. 4 depicts another example block diagram of a computing environment that may be used in the motion capture system of FIG. 1.
  • FIG. 5A depicts an illumination frame and a captured frame in a structured light system.
  • FIG. 5B depicts two captured frames in a stereoscopic light system.
  • FIG. 6A depicts an imaging component having two sensors on a common side of an illuminator.
  • FIG. 6B depicts an imaging component having two sensors on one side of an illuminator, and one sensor on an opposite side of the illuminator.
  • FIG. 6C depicts an imaging component having three sensors on a common side of an illuminator.
  • FIG. 6D depicts an imaging component having two sensors on opposing sides of an illuminator, showing how the two sensors sense different portions of an object.
  • FIG. 7A depicts a process for obtaining a depth map of a field of view.
  • FIG. 7B depicts further details of step 706 of FIG. 7A, in which two structured light depth maps are merged.
  • FIG. 7C depicts further details of step 706 of FIG. 7A, in which two structured light depth maps and two stereoscopic depth maps are merged.
  • FIG. 7D depicts further details of step 706 of FIG. 7A, in which depth values are refined as needed using stereoscopic matching.
  • FIG. 7E depicts further details of another approach to step 706 of FIG. 7A, in which depth values of a merged depth map are refined as needed using stereoscopic matching.
  • FIG. 8 depicts an example method for tracking a human target using a control input as set forth in step 708 of FIG. 7A.
  • FIG. 9 depicts an example model of a human target as set forth in step 808 of FIG. 8.
  • a depth camera is provided for use in tracking one or more objects in a field of view.
  • the depth camera is used in a motion tracking system to track a human user.
  • the depth camera includes two or more sensors which are optimized to address variables such as lighting conditions, surface textures and colors, and the potential for occlusions.
  • the optimization can include optimizing placement of the sensors relative to one another and relative to an illuminator, as well as optimizing spatial resolution, sensitivity and exposure time of the sensors.
  • the optimization can also include optimizing how depth map data is obtained, such as by matching a frame of pixel data to a pattern of structured light and/or by matching a frame of pixel data to another frame.
  • the above mentioned disadvantages can be overcome by using a constellation of two or more sensors with a single illumination device to effectively extract 3D samples as if three depth cameras were used.
  • the two sensors can provide depth data by matching their captured images to a structured light pattern, while a third, virtual camera is obtained by matching the two images from the two sensors to each other using stereo technology.
  • using data fusion, it is possible to enhance the robustness of the 3D measurements, including robustness to inter-camera disruptions.
  • FIG. 1 depicts an example embodiment of a motion capture system 10 in which a human 8 interacts with an application, such as in the home of a user.
  • the motion capture system 10 includes a display 196, a depth camera system 20, and a computing environment or apparatus 12.
  • the depth camera system 20 may include an imaging component 22 having an illuminator 26, such as an infrared (IR) light emitter, an image sensor 24, such as an infrared camera, and a color (red-green-blue, or RGB) camera 28.
  • the depth camera system 20 and computing environment 12 provide an application in which an avatar 197 on the display 196 tracks the movements of the human 8.
  • the avatar may raise an arm when the human raises an arm.
  • the avatar 197 is standing on a road 198 in a 3-D virtual world.
  • a Cartesian world coordinate system may be defined which includes a z-axis which extends along the focal length of the depth camera system 20, e.g., horizontally, a y-axis which extends vertically, and an x-axis which extends laterally and horizontally.
  • the perspective of the drawing is modified as a simplification, as the display 196 extends vertically in the y-axis direction and the z-axis extends out from the depth camera system, perpendicular to the y-axis and the x-axis, and parallel to a ground surface on which the user 8 stands.
  • the motion capture system 10 is used to recognize, analyze, and/or track one or more human targets.
  • the computing environment 12 can include a computer, a gaming system or console, or the like, as well as hardware components and/or software components to execute applications.
  • the depth camera system 20 may be used to visually monitor one or more people, such as the human 8, such that gestures and/or movements performed by the human may be captured, analyzed, and tracked to perform one or more controls or actions within an application, such as animating an avatar or on-screen character or selecting a menu item in a user interface (UI).
  • the motion capture system 10 may be connected to an audiovisual device such as the display 196, e.g., a television, a monitor, a high-definition television (HDTV), or the like, or even a projection on a wall or other surface that provides a visual and audio output to the user.
  • An audio output can also be provided via a separate device.
  • the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that provides audiovisual signals associated with an application.
  • the display 196 may be connected to the computing environment 12.
  • the human 8 may be tracked using the depth camera system 20 such that the gestures and/or movements of the user are captured and used to animate an avatar or onscreen character and/or interpreted as input controls to the application being executed by computer environment 12.
  • Some movements of the human 8 may be interpreted as controls that may correspond to actions other than controlling an avatar.
  • the player may use movements to end, pause, or save a game, select a level, view high scores, communicate with a friend, and so forth.
  • the player may use movements to select the game or other application from a main user interface, or to otherwise navigate a menu of options.
  • a full range of motion of the human 8 may be available, used, and analyzed in any suitable manner to interact with an application.
  • the motion capture system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games and other applications which are meant for entertainment and leisure. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the human 8.
  • FIG. 2 depicts an example block diagram of the motion capture system 10 of FIG. 1.
  • the depth camera system 20 may be configured to capture video with depth information including a depth image that may include depth values, via any suitable technique including, for example, time-of-flight, structured light, stereo image, or the like.
  • the depth camera system 20 may organize the depth information into "Z layers," or layers that may be perpendicular to a Z axis extending from the depth camera along its line of sight.
  • the depth camera system 20 may include an imaging component 22 that captures the depth image of a scene in a physical space.
  • a depth image or depth map may include a two-dimensional (2-D) pixel area of the captured scene, where each pixel in the 2-D pixel area has an associated depth value which represents a linear distance from the imaging component 22 to the object, thereby providing a 3-D depth image.
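  • As an illustration of how such a depth map can be turned into 3-D samples, the sketch below converts a 2-D array of depth values into 3-D points using a simple pinhole camera model. The function and parameter names, and the intrinsic values in the example, are illustrative assumptions and are not taken from the patent.

```python
import numpy as np

def depth_map_to_points(depth_map, fx, fy, cx, cy):
    """Convert a 2-D depth map (one depth value per pixel, in meters) into an
    N x 3 array of 3-D points, assuming a simple pinhole camera model."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    x = (u - cx) * z / fx            # lateral (x-axis) offset from the optical axis
    y = (v - cy) * z / fy            # vertical (y-axis) offset
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth value

# Example: a 4x4 depth map of a flat surface 2 m away (intrinsics are made up).
demo = np.full((4, 4), 2.0)
print(depth_map_to_points(demo, fx=525.0, fy=525.0, cx=2.0, cy=2.0)[:3])
```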
  • the imaging component 22 includes an illuminator 26, a first image sensor (SI) 24, a second image sensor (S2) 29, and a visible color camera 28.
  • the sensors S1 and S2 can be used to capture the depth image of a scene.
  • the illuminator 26 is an infrared (IR) light emitter, and the first and second sensors are infrared light sensors.
  • a 3-D depth camera is formed by the combination of the illuminator 26 and the one or more sensors.
  • a depth map can be obtained by each sensor using various techniques.
  • the depth camera system 20 may use a structured light to capture depth information.
  • patterned light (i.e., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the scene via the illuminator 26. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response.
  • Such a deformation of the pattern may be captured by, for example, the sensors 24 or 29 and/or the color camera 28 and may then be analyzed to determine a physical distance from the depth camera system to a particular location on the targets or objects.
  • the sensors 24 and 29 are located on opposite sides of the illuminator 26, and at different baseline distances from the illuminator.
  • the sensor 24 is located at a distance BL1 from the illuminator 26, and the sensor 29 is located at a distance BL2 from the illuminator 26.
  • the distance between a sensor and the illuminator may be expressed in terms of a distance between central points, such as optical axes, of the sensor and the illuminator.
  • a sensor can be optimized for viewing objects which are closer in the field of view by placing the sensor relatively closer to the illuminator, while another sensor can be optimized for viewing objects which are further in the field of view by placing the sensor relatively further from the illuminator.
  • the sensor 24 can be considered to be optimized for shorter range imaging while the sensor 29 can be considered to be optimized for longer range imaging.
  • the sensors 24 and 29 can be collinear, such that they are placed along a common line which passes through the illuminator.
  • other configurations regarding the positioning of the sensors 24 and 29 are possible.
  • the sensors could be arranged circumferentially around an object which is to be scanned, or around a location in which a hologram is to be projected. It is also possible to arrange multiple depth camera systems, each with an illuminator and sensors, around an object. This can allow viewing of different sides of an object, providing a rotating view around the object. Using more depth cameras adds more visible regions of the object.
  • the depth camera system 20 may include a processor 32 that is in communication with the 3-D depth camera 22.
  • the processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions including, for example, instructions for receiving a depth image; generating a grid of voxels based on the depth image; removing a background included in the grid of voxels to isolate one or more voxels associated with a human target; determining a location or position of one or more extremities of the isolated human target; adjusting a model based on the location or position of the one or more extremities, or any other suitable instruction, which will be described in more detail below.
  • the processor 32 can access a memory 31 to use software 33 which derives a structured light depth map, software 34 which derives a stereoscopic vision depth map, and software 35 which performs depth map merging calculations.
  • the processor 32 can be considered to be at least one control circuit which derives a structured light depth map of an object by comparing a frame of pixel data to a pattern of the structured light which is emitted by the illuminator in an illumination plane.
  • the at least one control circuit can derive a first structured light depth map of an object by comparing a first frame of pixel data which is obtained by the sensor 24 to a pattern of the structured light which is emitted by the illuminator 26, and derive a second structured light depth map of the object by comparing a second frame of pixel data which is obtained by the sensor 29 to the pattern of the structured light.
  • the at least one control circuit can use the software 35 to derive a merged depth map which is based on the first and second structured light depth maps.
  • a structured light depth map is discussed further below, e.g., in connection with FIG. 5A.
  • the at least one control circuit can use the software 34 to derive at least a first stereoscopic depth map of the object by stereoscopic matching of a first frame of pixel data obtained by the sensor 24 to a second frame of pixel data obtained by the sensor 29, and to derive at least a second stereoscopic depth map of the object by stereoscopic matching of the second frame of pixel data to the first frame of pixel data.
  • the software 35 can merge one or more structured light depth maps and/or stereoscopic depth maps. A stereoscopic depth map is discussed further below, e.g., in connection with FIG. 5B.
  • the at least one control circuit can be provided by a processor which is outside the depth camera system as well, such as the processor 192 or any other processor.
  • the at least one control circuit can access software from the memory 31, for instance, which can be a tangible computer readable storage having computer readable software embodied thereon for programming at least one processor or controller 32 to perform a method for processing image data in a depth camera system as described herein.
  • the memory 31 can store instructions that are executed by the processor 32, as well as storing images such as frames of pixel data 36, captured by the sensors or color camera.
  • the memory 31 may include random access memory (RAM), read only memory (ROM), cache, flash memory, a hard disk, or any other suitable tangible computer readable storage component.
  • the memory component 31 may be a separate component in communication with the image capture component 22 and the processor 32 via a bus 21. According to another embodiment, the memory component 31 may be integrated into the processor 32 and/or the image capture component 22.
  • the depth camera system 20 may be in communication with the computing environment 12 via a communication link 37, such as a wired and/or a wireless connection.
  • the computing environment 12 may provide a clock signal to the depth camera system 20 via the communication link 37 that indicates when to capture image data from the physical space which is in the field of view of the depth camera system 20.
  • the depth camera system 20 may provide the depth information and images captured by, for example, the image sensors 24 and 29 and/or the color camera 28, and/or a skeletal model that may be generated by the depth camera system 20 to the computing environment 12 via the communication link 37.
  • the computing environment 12 may then use the model, depth information, and captured images to control an application.
  • the computing environment 12 may include a gestures library 190, such as a collection of gesture filters, each having information concerning a gesture that may be performed by the skeletal model (as the user moves).
  • a gesture filter can be provided for various hand gestures, such as swiping or flinging of the hands.
  • the data captured by the depth camera system 20 in the form of the skeletal model and movements associated with it may be compared to the gesture filters in the gesture library 190 to identify when a user (as represented by the skeletal model) has performed one or more specific movements. Those movements may be associated with various controls of an application.
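  • As a rough sketch of what such a gesture filter might look like, the example below checks the net horizontal displacement of a tracked hand joint over a sliding window of frames to detect a swipe. The class name, joint representation and thresholds are hypothetical; the patent does not specify a particular filter implementation.

```python
from collections import deque

class SwipeFilter:
    """Hypothetical gesture filter: reports a horizontal hand swipe when the
    net x-displacement of a tracked hand joint over a short window of frames
    exceeds a threshold."""

    def __init__(self, min_displacement_m=0.4, window=15):
        self.min_displacement_m = min_displacement_m
        self.history = deque(maxlen=window)   # recent (x, y, z) hand positions

    def update(self, hand_position):
        """Feed one frame of skeletal data; returns True when a swipe is detected."""
        self.history.append(hand_position)
        if len(self.history) < self.history.maxlen:
            return False
        dx = self.history[-1][0] - self.history[0][0]
        return abs(dx) >= self.min_displacement_m

# Usage: call update() once per depth frame with the hand joint of the skeletal model.
```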
  • the computing environment may also include a processor 192 for executing instructions which are stored in a memory 194 to provide audio-video output signals to the display device 196 and to achieve other functionality as described herein.
  • FIG. 3 depicts an example block diagram of a computing environment that may be used in the motion capture system of FIG. 1.
  • the computing environment can be used to interpret one or more gestures or other movements and, in response, update a visual space on a display.
  • the computing environment such as the computing environment 12 described above may include a multimedia console 100, such as a gaming console.
  • the multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102, a level 2 cache 104, and a flash ROM (Read Only Memory) 106.
  • the level 1 cache 102 and a level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput.
  • the CPU 101 may be provided having more than one core, and thus, additional level 1 and level 2 caches 102 and 104.
  • the memory 106 such as flash ROM may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered on.
  • a graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display.
  • a memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as RAM (Random Access Memory).
  • the multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118.
  • the USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)- 142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.).
  • the network interface (NW IF) 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
  • System memory 143 is provided to store application data that is loaded during the boot process.
  • a media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive.
  • the media drive 144 may be internal or external to the multimedia console 100.
  • Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100.
  • the media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection.
  • the system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100.
  • the audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link.
  • the audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
  • the front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100.
  • a system power supply module 136 provides power to the components of the multimedia console 100.
  • a fan 138 cools the circuitry within the multimedia console 100.
  • the CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
  • application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101.
  • the application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100.
  • applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
  • the multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
  • a specified amount of hardware resources are reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
  • the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers.
  • the CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
  • lightweight messages generated by the system applications are displayed by using a GPU interrupt to schedule code to render popup into an overlay.
  • the amount of memory required for an overlay depends on the overlay area size and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
  • the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities.
  • the system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above.
  • the operating system kernel identifies threads that are system application threads versus gaming application threads.
  • the system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
  • a multimedia console application manager controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
  • Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications.
  • the input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device.
  • the application manager preferably controls the switching of the input stream, without the gaming application's knowledge, and a driver maintains state information regarding focus switches.
  • the console 100 may receive additional inputs from the depth camera system 20 of FIG. 2, including the sensors 24 and 29.
  • FIG. 4 depicts another example block diagram of a computing environment that may be used in the motion capture system of FIG. 1.
  • the computing environment can be used to interpret one or more gestures or other movements and, in response, update a visual space on a display.
  • the computing environment 220 comprises a computer 241, which typically includes a variety of tangible computer readable storage media. This can be any available media that can be accessed by computer 241 and includes both volatile and nonvolatile media, removable and nonremovable media.
  • the system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 223 and random access memory (RAM) 260.
  • a basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within computer 241, such as during start-up, is typically stored in ROM 223.
  • RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259.
  • a graphics interface 231 communicates with a GPU 229.
  • FIG. 4 depicts operating system 225, application programs 226, other program modules 227, and program data 228.
  • the computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media, e.g., a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254, and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile tangible computer readable storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235.
  • the drives and their associated computer storage media discussed above and depicted in FIG. 4, provide storage of computer readable instructions, data structures, program modules and other data for the computer 241.
  • hard disk drive 238 is depicted as storing operating system 258, application programs 257, other program modules 256, and program data 255.
  • operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to depict that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and pointing device 252, commonly referred to as a mouse, trackball or touch pad.
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
  • the depth camera system 20 of FIG. 2, including sensors 24 and 29, may define additional input devices for the computer 241.
  • a monitor 242 or other type of display is also connected to the system bus 221 via an interface, such as a video interface 232.
  • computers may also include other peripheral output devices such as speakers 244 and printer 243, which may be connected through an output peripheral interface 233.
  • the computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246.
  • the remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been depicted in FIG. 4.
  • the logical connections include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • the computer 241 When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet.
  • the modem 250 which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism.
  • program modules depicted relative to the computer 241, or portions thereof may be stored in the remote memory storage device.
  • FIG. 4 depicts remote application programs 248 as residing on memory device 247. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • the computing environment can include tangible computer readable storage having computer readable software embodied thereon for programming at least one processor to perform a method for processing image data in a depth camera system as described herein.
  • the tangible computer readable storage can include, e.g., one or more of components 31, 194, 222, 234, 235, 230, 253 and 254.
  • a processor can include, e.g., one or more of components 32, 192, 229 and 259.
  • FIG. 5A depicts an illumination frame and a captured frame in a structured light system.
  • An illumination frame 500 represents an image plane of the illuminator, which emits structured light onto an object 520 in a field of view of the illuminator.
  • the illumination frame 500 includes an axis system with x2, y2 and z2 orthogonal axes.
  • F2 is a focal point of the illuminator and O2 is an origin of the axis system, such as at a center of the illumination frame 500.
  • the emitted structured light can include stripes, spots or other known illumination pattern.
  • a captured frame 510 represents an image plane of a sensor, such as sensor 24 or 29 discussed in connection with FIG. 2.
  • the captured frame 510 includes an axis system with x1, y1 and z1 orthogonal axes. F1 is a focal point of the sensor and O1 is an origin of the axis system, such as at a center of the captured frame 510.
  • y1 and y2 are aligned collinearly and z1 and z2 are parallel, for simplicity, although this is not required.
  • two or more sensors can be used but only one sensor is depicted here, for simplicity.
  • Rays of projected structured light are emitted from different x2, y2 locations in the illuminator plane, such as an example ray 502 which is emitted from a point P2 on the illumination frame 500.
  • the ray 502 strikes the object 520, e.g., a person, at a point P0 and is reflected in many directions.
  • a ray 512 is an example reflected ray which travels from P0 to a point P1 on the captured frame 510.
  • P1 is represented by a pixel in the sensor so that its x1, y1 location is known.
  • P2 lies on a plane which includes P1, F1 and F2.
  • a portion of this plane which intersects the illumination frame 500 is the epi-polar line 505.
  • the location of P2 along the epi-polar line 505 can be identified.
  • P2 is a corresponding point of P1. The closer the depth of the object, the longer the length of the epi-polar line.
  • the depth of P0 along the z1 axis can be determined by triangulation. This is a depth value which is assigned to the pixel P1 in a depth map. For some points in the illumination frame 500, there may not be a corresponding pixel in the captured frame 510, such as due to an occlusion or due to the limited field of view of the sensor. For each pixel in the captured frame 510 for which a corresponding point is identified in the illumination frame 500, a depth value can be obtained. The set of depth values for the captured frame 510 provides a depth map of the captured frame 510. A similar process can be carried out for additional sensors and their respective captured frames. Moreover, when successive frames of video data are obtained, the process can be carried out for each frame.
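  • The sketch below illustrates this kind of structured-light matching for a rectified setup: each pixel of the captured frame is compared against blocks of a stored reference image of the projected pattern along the same row (the epi-polar line), and the resulting disparity is triangulated into a depth value. The function names, block size, disparity range and the z = f*B/d relation (which assumes the reference corresponds to the pattern at infinity; a real device would calibrate against a reference plane at a known distance) are illustrative assumptions, not details from the patent.

```python
import numpy as np

def structured_light_depth(captured, reference, focal_px, baseline_m,
                           block=7, max_disp=64):
    """Match each pixel of the captured IR frame against the stored reference
    pattern along the same row (the epi-polar line for a rectified setup),
    then triangulate depth as z = focal_px * baseline_m / disparity.
    Returns a depth map with 0 where no match was made."""
    h, w = captured.shape
    half = block // 2
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = captured[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(1, min(max_disp, x - half) + 1):
                ref = reference[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float32)
                cost = np.sum(np.abs(patch - ref))       # sum-of-absolute-differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            if best_d > 0:
                depth[y, x] = focal_px * baseline_m / best_d
    return depth
```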
  • FIG. 5B depicts two captured frames in a stereoscopic light system.
  • Stereoscopic processing is similar to the processing described in FIG. 5A in that corresponding points in two frames are identified. However, in this case, corresponding pixels in two captured frames are identified, and the illumination is provided separately.
  • An illuminator 550 provides projected light on the object 520 in the field of view of the illuminator. This light is reflected by the object and sensed by two sensors, for example. A first sensor obtains a frame 530 of pixel data, while a second sensor obtains a frame 540 of pixel data.
  • An example ray 532 extends from a point P0 on the object to a pixel P2 in the frame 530, passing through a focal point F2 of the associated sensor.
  • an example ray 542 extends from a point P0 on the object to a pixel P1 in the frame 540, passing through a focal point F1 of the associated sensor.
  • stereo matching can involve identifying the point P2 on the epi-polar line 545 which corresponds to P1.
  • stereo matching can involve identifying the point P1 on the epi-polar line 548 which corresponds to P2.
  • stereo matching can be performed separately, once for each frame of a pair of frames. In some cases, stereo matching in one direction, from a first frame to a second frame, can be performed without performing stereo matching in the other direction, from the second frame to the first frame.
  • the depth of P0 along the z1 axis can be determined by triangulation. This is a depth value which is assigned to the pixel P1 in a depth map. For some points in the frame 540, there may not be a corresponding pixel in the frame 530, such as due to an occlusion or due to the limited field of view of the sensor. For each pixel in the frame 540 for which a corresponding pixel is identified in the frame 530, a depth value can be obtained. The set of depth values for the frame 540 provides a depth map of the frame 540.
  • the depth of P2 along the z2 axis can be determined by triangulation. This is a depth value which is assigned to the pixel P2 in a depth map. For some points in the frame 530, there may not be a corresponding pixel in the frame 540, such as due to an occlusion or due to the limited field of view of the sensor. For each pixel in the frame 530 for which a corresponding pixel is identified in the frame 540, a depth value can be obtained. The set of depth values for the frame 530 provides a depth map of the frame 530.
  • a similar process can be carried out for additional sensors and their respective captured frames. Moreover, when successive frames of video data are obtained, the process can be carried out for each frame.
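  • A minimal sketch of such stereoscopic matching is shown below, assuming rectified frames so that corresponding pixels lie on the same row. Matching is run in both directions, and a consistency check discards pixels whose two disparities disagree, which tends to remove occluded regions. Function names, block size and thresholds are illustrative assumptions, not from the patent.

```python
import numpy as np

def block_match(src, dst, sign, block=7, max_disp=48):
    """Match each pixel of `src` to `dst` along the same row. sign=-1 searches
    to the left (first-to-second frame matching); sign=+1 searches to the
    right (second-to-first frame matching). Returns an integer disparity map."""
    h, w = src.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = src[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(0, max_disp + 1):
                xx = x + sign * d
                if xx - half < 0 or xx + half >= w:
                    break
                cand = dst[y-half:y+half+1, xx-half:xx+half+1].astype(np.float32)
                cost = np.sum(np.abs(patch - cand))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def stereo_depth_with_check(frame1, frame2, focal_px, baseline_m, tol=1):
    """Run the matching in both directions, as described for FIG. 5B, and keep
    a depth value only where the two disparities agree within `tol` pixels."""
    d12 = block_match(frame1, frame2, sign=-1)
    d21 = block_match(frame2, frame1, sign=+1)
    h, w = frame1.shape
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            d = d12[y, x]
            if d > 0 and x - d >= 0 and abs(d21[y, x - d] - d) <= tol:
                depth[y, x] = focal_px * baseline_m / d
    return depth
```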
  • FIG. 6A depicts an imaging component 600 having two sensors on a common side of an illuminator.
  • the illuminator 26 is a projector which illuminates a human target or other object in a field of view with a structured light pattern
  • the light source can be an infrared laser, for instance, having a wavelength of 700 nm - 3,000 nm, including near-infrared light, having a wavelength of 0.75 μm - 1.4 μm, mid-wavelength infrared light, having a wavelength of 3 μm - 8 μm, and long-wavelength infrared light, having a wavelength of 8 μm - 15 μm, which is the thermal imaging region closest to the infrared radiation emitted by humans.
  • the illuminator can include a diffractive optical element (DOE) which receives the laser light and outputs multiple diffracted light beams.
  • a DOE is used to provide multiple smaller light beams, such as thousands of smaller light beams, from a single collimated light beam. Each smaller light beam has a small fraction of the power of the single collimated light beam and the smaller, diffracted light beams may have a nominally equal intensity.
  • the smaller light beams define a field of view of the illuminator in a desired predetermined pattern.
  • the DOE is a beam replicator, so all the output beams will have the same geometry as the input beam.
  • the field of view should extend in a sufficiently wide angle, in height and width, to illuminate the entire height and width of the human and an area in which the human may move around when interacting with an application of a motion tracking system.
  • An appropriate field of view can be set based on factors such as the expected height and width of the human, including the arm span when the arms are raised overhead or out to the sides, the size of the area over which the human may move when interacting with the application, the expected distance of the human from the camera and the focal length of the camera.
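  • As a rough illustration of this sizing, the sketch below computes the field-of-view angle needed to cover a subject of a given extent, plus a movement margin, at the closest expected distance. The numbers used are illustrative assumptions only.

```python
import math

def required_fov_deg(subject_extent_m, movement_margin_m, min_distance_m):
    """Angle needed to cover a subject of the given extent (height or arm span)
    plus a movement margin on each side, at the closest expected distance."""
    half_extent = subject_extent_m / 2.0 + movement_margin_m
    return math.degrees(2.0 * math.atan2(half_extent, min_distance_m))

# e.g. a 2.0 m arm span plus 0.5 m of side-to-side movement, viewed from 2 m away:
print(round(required_fov_deg(2.0, 0.5, 2.0), 1), "degrees horizontal")
```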
  • the RGB camera 28, discussed previously, may also be provided.
  • An RGB camera may also be provided in FIGs. 6B and 6C but is not depicted for simplicity.
  • the sensors 24 and 29 are on a common side of the illuminator 26.
  • the sensor 24 is at a baseline distance BL1 from the illuminator 26, and the sensor 29 is at a baseline distance BL2 from the illuminator 26.
  • the sensor 29 is optimized for shorter range imaging by virtue of its smaller baseline, while the sensor 24 is optimized for longer range imaging by virtue of its longer baseline.
  • a longer baseline can be achieved for the sensor which is furthest from the illuminator, for a fixed size of the imaging component 600 which typically includes a housing which is limited in size.
  • a shorter baseline improves shorter range imaging because the sensor can focus on closer objects, assuming a given focal length, thereby allowing a more accurate depth measurement for shorter distances.
  • a shorter baseline results in a smaller disparity and minimal occlusions.
  • a longer baseline improves the accuracy of longer range imaging because there is a larger angle between the light rays of corresponding points, which means that image pixels can detect smaller differences in the distance.
  • in FIG. 5A, it can be seen that an angle between rays 502 and 512 will be greater if the frames 500 and 510 are further apart.
  • in FIG. 5B, it can be seen that an angle between rays 532 and 542 will be greater if the frames 530 and 540 are further apart.
  • the process of triangulation to determine depth is more accurate when the sensors are further apart so that the angle between the light rays is greater.
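  • The effect of the baseline on accuracy can be made concrete with the usual triangulation relation z = f*B/d, which gives an approximate depth uncertainty of dz on the order of z^2 * Δd / (f*B) for a disparity error of Δd. The short sketch below, with illustrative numbers only, shows the uncertainty shrinking as the baseline grows.

```python
def depth_error_m(depth_m, focal_px, baseline_m, disparity_error_px=0.5):
    """Approximate depth uncertainty for a triangulating pair: since
    z = f * B / d, a disparity error dd maps to dz ~ z**2 * dd / (f * B)."""
    return depth_m ** 2 * disparity_error_px / (focal_px * baseline_m)

# Same target at 4 m, same sensor, two baselines: the longer baseline gives
# proportionally smaller depth uncertainty.
for b in (0.05, 0.10):
    print(f"baseline {b*100:.0f} cm -> ~{depth_error_m(4.0, 600.0, b)*100:.1f} cm uncertainty")
```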
  • a spatial resolution of a camera can be optimized.
  • the spatial resolution of a sensor such as a charge-coupled device (CCD) is a function of the number of pixels and their size relative to the projected image, and is a measure of how fine a detail can be detected by the sensor.
  • for a sensor which is optimized for shorter range imaging, a lower spatial resolution can be achieved by using relatively fewer pixels in a frame, and/or relatively larger pixels, because the pixel size relative to the projected image is relatively greater due to the shorter depth of the detected object in the field of view. This can result in cost savings and reduced energy consumption.
  • for a sensor which is optimized for longer range imaging, a higher spatial resolution should be used, compared to a sensor which is optimized for shorter range imaging.
  • a higher spatial resolution can be achieved by using relatively more pixels in a frame, and/or relatively smaller pixels, because the pixel size relative to the projected image is relatively smaller due to the longer depth of the detected object in the field of view.
  • a higher resolution produces a higher accuracy in the depth measurement.
  • Sensitivity refers to the extent to which a sensor reacts to incident light.
  • quantum efficiency is the percentage of photons incident upon a photoreactive surface of the sensor, such as a pixel, that will produce an electron-hole pair.
  • for a sensor which is optimized for shorter range imaging, a lower sensitivity is acceptable because relatively more photons will be incident upon each pixel due to the closer distance of the object which reflects the photons back to the sensor.
  • a lower sensitivity can be achieved, e.g., by a lower quality sensor, resulting in cost savings.
  • for a sensor which is optimized for longer range imaging, a higher sensitivity should be used, compared to a sensor which is optimized for shorter range imaging.
  • a higher sensitivity can be achieved by using a higher quality sensor, to allow detection where relatively fewer photons will be incident upon each pixel due to the further distance of the object which reflects the photons back to the sensor.
  • Exposure time is the amount of time in which light is allowed to fall on the pixels of the sensor during the process of obtaining a frame of image data, e.g., the time in which a camera shutter is open. During the exposure time, the pixels of the sensor accumulate or integrate charge. Exposure time is related to sensitivity, in that a longer exposure time can compensate for a lower sensitivity. However, a shorter exposure time is desirable to accurately capture motion sequences at shorter range, since a given movement of the imaged object translates to larger pixel offsets when the object is closer.
  • a shorter exposure time can be used for a sensor which is optimized for shorter range imaging, while a longer exposure time can be used for a sensor which is optimized for longer range imaging.
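  • As a back-of-the-envelope illustration of why exposure time matters more at short range, the sketch below estimates the pixel smear of a laterally moving object during one exposure, which grows as the object gets closer. The focal length, speed and exposure values are illustrative assumptions, not from the patent.

```python
def motion_blur_px(focal_px, speed_m_s, exposure_s, depth_m):
    """Approximate image-plane smear, in pixels, of an object moving laterally
    at the given speed during one exposure. The smear grows as depth shrinks,
    which is why a shorter exposure suits the shorter-range sensor."""
    return focal_px * speed_m_s * exposure_s / depth_m

print(motion_blur_px(600.0, 2.0, 0.010, 1.0))  # close object: ~12 px of smear
print(motion_blur_px(600.0, 2.0, 0.010, 4.0))  # far object:   ~3 px of smear
```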
  • FIG. 6B depicts an imaging component 610 having two sensors on one side of an illuminator, and one sensor on an opposite side of the illuminator. Adding a third sensor in this manner can result in imaging of an object with fewer occlusions, as well as more accurate imaging due to the additional depth measurements which are obtained.
  • One sensor such as sensor 612 can be positioned close to the illuminator, while the other two sensors are on opposite sides of the illuminator.
  • the sensor 24 is at a baseline distance BL1 from the illuminator 26
  • the sensor 29 is at a baseline distance BL2 from the illuminator 26
  • the third sensor 612 is at a baseline distance BL3 from the illuminator 26.
  • FIG. 6C depicts an imaging component 620 having three sensors on a common side of an illuminator. Adding a third sensor in this manner can result in more accurate imaging due to the additional depth measurements which are obtained.
  • each sensor can be optimized for a different depth range. For example, sensor 24, at the larger baseline distance BL3 from the illuminator, can be optimized for longer range imaging. Sensor 29, at the intermediate baseline distance BL2 from the illuminator, can be optimized for medium range imaging. And, sensor 612, at the smaller baseline distance BL1 from the illuminator, can be optimized for shorter range imaging. Similarly, spatial resolution, sensitivity and/or exposure times can be optimized to longer range levels for the sensor 24, intermediate range levels for the sensor 29, and shorter range levels for the sensor 612.
  • FIG. 6D depicts an imaging component 630 having two sensors on opposing sides of an illuminator, showing how the two sensors sense different portions of an object.
  • a sensor S1 24 is at a baseline distance BL1 from the illuminator 26 and is optimized for shorter range imaging.
  • a sensor S2 29 is at a baseline distance BL2 > BL1 from the illuminator 26 and is optimized for longer range imaging.
  • An RGB camera 28 is also depicted.
  • An object 660 is present in a field of view. Note that the perspective of the drawing is modified as a simplification, as the imaging component 630 is shown from a front view and the object 660 is shown from a top view.
  • Rays 640 and 642 are example rays of light which are projected by the illuminator 26.
  • Rays 632, 634 and 636 are example rays of reflected light which are sensed by the sensor S1 24, and rays 650 and 652 are example rays of reflected light which are sensed by the sensor S2 29.
  • the object includes five surfaces which are sensed by the sensors S1 24 and S2 29. However, due to occlusions, not all surfaces are sensed by both sensors. For example, a surface 661 is sensed by sensor S1 24 only and is occluded from the perspective of sensor S2 29. A surface 662 is also sensed by sensor S1 24 only and is occluded from the perspective of sensor S2 29. A surface 663 is sensed by both sensors S1 and S2. A surface 664 is sensed by sensor S2 only and is occluded from the perspective of sensor S1. A surface 665 is sensed by sensor S2 only and is occluded from the perspective of sensor S1. A surface 666 is sensed by both sensors S1 and S2. This indicates how the addition of a second sensor, or other additional sensors, can be used to image portions of an object which would otherwise be occluded. Furthermore, placing the sensors as far as practical from the illuminator is often desirable to minimize occlusions.
  • FIG. 7A depicts a process for obtaining a depth map of a field of view.
  • Step 700 includes illuminating a field of view with a pattern of structured light. Any type of structured light can be used, including coded structured light.
  • Steps 702 and 704 can be performed concurrently at least in part.
  • Step 702 includes detecting reflected infrared light at a first sensor, to obtain a first frame of pixel data. This pixel data can indicate, e.g., an amount of charge which was accumulated by each pixel during an exposure time, as an indication of an amount of light which was incident upon the pixel from the field of view.
  • step 704 includes detecting reflected infrared light at a second sensor, to obtain a second frame of pixel data.
  • step 706 includes processing the pixel data from both frames to derive a merged depth map. This can involve different techniques such as discussed further in connection with FIGs. 7B-7E.
  • Step 708 includes providing a control input to an application based on the merged depth map. This control input can be used for various purposes such as updating the position of an avatar on a display, selecting a menu item in a user interface (UI), or many other possible actions.
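  • The flow of steps 700-708 can be illustrated with a minimal Python/numpy sketch. The function names, the dummy constant disparity, and the focal length and baseline values below are illustrative assumptions, not part of the disclosure; a real implementation would derive a per-pixel disparity by matching each sensed frame against the projected pattern.

        import numpy as np

        def structured_light_depth(ir_frame, pattern, focal_px, baseline_m):
            # Placeholder for steps 702/704: matching the sensed frame against the
            # projected pattern would yield a per-pixel disparity here. A constant
            # dummy disparity stands in so the sketch runs end to end.
            disparity = np.ones_like(ir_frame, dtype=float)
            return focal_px * baseline_m / disparity   # triangulation: z = f*B/d

        def merge_depth_maps(depth_maps):
            # Step 706, simplest variant: unweighted per-pixel average.
            return np.mean(np.stack(depth_maps), axis=0)

        # Steps 700-708 end to end, with synthetic frames standing in for the sensors.
        frame_1 = np.random.rand(480, 640)   # first sensor, step 702
        frame_2 = np.random.rand(480, 640)   # second sensor, step 704
        pattern = np.random.rand(480, 640)   # structured light pattern, step 700
        d1 = structured_light_depth(frame_1, pattern, focal_px=570.0, baseline_m=0.025)
        d2 = structured_light_depth(frame_2, pattern, focal_px=570.0, baseline_m=0.075)
        merged = merge_depth_maps([d1, d2])  # step 706; control input for step 708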
  • FIG. 7B depicts further details of step 706 of FIG. 7A, in which two structured light depth maps are merged.
  • first and second structured light depth maps are obtained from the first and second frames, respectively, and the two depth maps are merged.
  • the process can be extended to merge any number of two or more depth maps.
  • At step 720, for each pixel in the first frame of pixel data (obtained in step 702 of FIG. 7A), an attempt is made to determine a corresponding point in the illumination frame, by matching the pattern of structured light. In some cases, due to occlusions or other factors, a corresponding point in the illumination frame may not be successfully determined for one or more pixels in the first frame.
  • a first structured light depth map is provided.
  • This depth map can identify each pixel in the first frame and a corresponding depth value.
  • At step 724, for each pixel in the second frame of pixel data (obtained in step 704 of FIG. 7A), an attempt is made to determine a corresponding point in the illumination frame. In some cases, due to occlusions or other factors, a corresponding point in the illumination frame may not be successfully determined for one or more pixels in the second frame.
  • a second structured light depth map is provided. This depth map can identify each pixel in the second frame and a corresponding depth value. Steps 720 and 722 can be performed concurrently at least in part with steps 724 and 726.
  • the structured light depth maps are merged to derive the merged depth map of step 706 of FIG. 7A.
  • the merging can be based on different approaches, including approaches which involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures.
  • the depth values are averaged among the two or more depth maps.
  • An example unweighted average of a depth value d1 for an ith pixel in the first frame and a depth value d2 for an ith pixel in the second frame is (d1+d2)/2.
  • An example weighted average of a depth value d1 of weight w1 for an ith pixel in the first frame and a depth value d2 of weight w2 for an ith pixel in the second frame is (w1*d1 + w2*d2)/(w1+w2).
  • One approach to merging depth values assigns a weight to the depth values of a frame based on the baseline distance between the sensor and the illuminator, so that a higher weight, indicating a higher confidence, is assigned when the baseline distance is greater, and a lower weight, indicating a lower confidence, is assigned when the baseline distance is less. This is done since a larger baseline distance yields a more accurate depth value.
  • the weights can be applied on a per-pixel or per-depth value basis.
  • the above example could be augmented with a depth value obtained from stereoscopic matching of an image from the sensor S1 to an image from the sensor S2, based on the distance BL1+BL2 in FIG. 6D. In that case, one approach assigns:
  • a weight of w1 = BL1/(BL1+BL2+(BL1+BL2)) to a depth value from sensor S1
  • a weight of w2 = BL2/(BL1+BL2+(BL1+BL2)) to a depth value from sensor S2
  • a weight of w3 = (BL1+BL2)/(BL1+BL2+(BL1+BL2)) to a depth value obtained from stereoscopic matching from S1 to S2.
  • If a depth value is additionally obtained from stereoscopic matching of an image from the sensor S2 to an image from the sensor S1 in FIG. 6D, one approach assigns:
  • a weight of w1 = BL1/(BL1+BL2+(BL1+BL2)+(BL1+BL2)) to a depth value from sensor S1
  • a weight of w2 = BL2/(BL1+BL2+(BL1+BL2)+(BL1+BL2)) to a depth value from sensor S2
  • a weight of w3 = (BL1+BL2)/(BL1+BL2+(BL1+BL2)+(BL1+BL2)) to a depth value obtained from stereoscopic matching from S1 to S2
  • a weight of w4 = (BL1+BL2)/(BL1+BL2+(BL1+BL2)+(BL1+BL2)) to a depth value obtained from stereoscopic matching from S2 to S1
  • For example, if BL2 is twice BL1, the normalized weights are w1 = 1/9, w2 = 2/9, w3 = 3/9 and w4 = 3/9. This is merely one possibility; a code sketch of this weighting follows.
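  • A minimal numpy sketch of this baseline-proportional weighting is given below; the array names, the image size, and the use of NaN to mark missing depth values are assumptions made for illustration only.

        import numpy as np

        def baseline_weighted_merge(depth_maps, baselines):
            # Merge per-pixel depth values with weights proportional to the baseline
            # that produced each map (a larger baseline gets a higher weight).
            w = np.asarray(baselines, dtype=float)
            w = w / w.sum()
            stack = np.stack(depth_maps)                 # shape (n_maps, H, W)
            valid = ~np.isnan(stack)                     # NaN marks a missing depth
            weights = w[:, None, None] * valid           # zero weight where missing
            weighted = np.where(valid, stack, 0.0) * weights
            return weighted.sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-9)

        # BL1 = 1 and BL2 = 2 reproduce the weights 1/9, 2/9, 3/9 and 3/9 above.
        BL1, BL2 = 1.0, 2.0
        H, W = 480, 640
        d_s1   = np.full((H, W), 2.00)   # structured light depth map, sensor S1
        d_s2   = np.full((H, W), 2.10)   # structured light depth map, sensor S2
        d_s1s2 = np.full((H, W), 2.05)   # stereoscopic matching, S1 to S2
        d_s2s1 = np.full((H, W), 2.05)   # stereoscopic matching, S2 to S1
        merged = baseline_weighted_merge([d_s1, d_s2, d_s1s2, d_s2s1],
                                         [BL1, BL2, BL1 + BL2, BL1 + BL2])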
  • a weight can also be provided based on a confidence measure, such that a depth value with a higher confidence measure is assigned a higher weight.
  • a confidence measure is a measure of noise in the depth value.
  • a "master" camera coordinate system is defined, and we transform and resample the other depth image to the "master" coordinate system.
  • An average is one solution, but not necessarily the best one as it doesn't solve cases of occlusions, where each camera might successfully observe a different location in space.
  • a confidence measure can be associated with each depth value in the depth maps.
  • Another approach is to merge the data in 3D space, where image pixels do not exist. In 3-D, volumetric methods can be utilized.
  • a weight can also be provided based on an accuracy measure, such that a depth value with a higher accuracy measure is assigned a higher weight. For example, based on the spatial resolution and the baseline distances between the sensors and the illuminator, and between the sensors, we can assign an accuracy measure for each depth sample. Various techniques are known for determining accuracy measures. For example, see "Stereo Accuracy and Error Modeling," by Point Grey Research, Richmond, BC, Canada, April 19, 2004, http://www.ptgrey.com/support/kb/data/kbStereoAccuracyShort.pdf. We can then calculate a weighted average based on these accuracies.
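  • One way to realize the "master" camera coordinate-system merge mentioned above is to back-project the secondary sensor's depth map to 3D points, apply the sensor-to-master rotation R and translation t, and re-project onto the master image plane. The numpy sketch below assumes pinhole intrinsic matrices K1 and K2 and rectified, metrically calibrated inputs; it uses nearest-neighbour splatting with no occlusion handling, so it is a simplification rather than a complete implementation.

        import numpy as np

        def resample_to_master(depth_2, K2, K1, R, t, out_shape):
            # Transform the depth map of a secondary sensor into the image plane of
            # the "master" sensor (nearest-neighbour splatting, no occlusion test).
            H, W = depth_2.shape
            v, u = np.mgrid[0:H, 0:W]
            z = depth_2.ravel()
            ok = np.isfinite(z) & (z > 0)
            # Back-project valid pixels of sensor 2 into its own 3D frame.
            x = (u.ravel() - K2[0, 2]) * z / K2[0, 0]
            y = (v.ravel() - K2[1, 2]) * z / K2[1, 1]
            pts = np.stack([x, y, z])[:, ok]
            # Rigid transform into the master frame, keep points in front of it.
            pts_m = R @ pts + t[:, None]
            pts_m = pts_m[:, pts_m[2] > 0]
            # Project with the master intrinsics K1 and splat the depths.
            u1 = np.round(K1[0, 0] * pts_m[0] / pts_m[2] + K1[0, 2]).astype(int)
            v1 = np.round(K1[1, 1] * pts_m[1] / pts_m[2] + K1[1, 2]).astype(int)
            out = np.full(out_shape, np.nan)
            inside = (u1 >= 0) & (u1 < out_shape[1]) & (v1 >= 0) & (v1 < out_shape[0])
            out[v1[inside], u1[inside]] = pts_m[2, inside]
            return out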
  • FIG. 7C depicts further details of step 706 of FIG. 7A, in which two structured light depth maps and two stereoscopic depth maps are merged.
  • first and second structured light depth maps are obtained from the first and second frames, respectively. Additionally, one or more stereoscopic depth maps are obtained. The first and second structured light depth maps and the one or more stereoscopic depth maps are merged.
  • the process can be extended to merge any number of two or more depth maps. Steps 740 and 742 can be performed concurrently at least in part with steps 744 and 746, steps 748 and 750, and steps 752 and 754.
  • step 740 for each pixel in the first frame of pixel data, we determine a corresponding point in the illumination frame and at step 742 we provide a first structured light depth map.
  • step 744 for each pixel in the first frame of pixel data, we determine a corresponding pixel in the second frame of pixel data and at step 746 we provide a first stereoscopic depth map.
  • step 748 for each pixel in a second frame of pixel data, we determine a corresponding point in the illumination frame and at step 750 we provide a second structured light depth map.
  • step 752 for each pixel in the second frame of pixel data, we determine a corresponding pixel in the first frame of pixel data and at step 754 we provide a second stereoscopic depth map.
  • Step 756 includes merging the different depth maps.
  • the merging can be based on different approaches, including approaches which involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures.
  • two stereoscopic depth maps are merged with two structured light depth maps.
  • the merging considers all depth maps together in a single merging step.
  • the merging occurs in multiple steps.
  • the structured light depth maps can be merged to obtain a first merged depth map
  • the stereoscopic depth maps can be merged to obtain a second merged depth map
  • the first and second merged depth maps are merged to obtain a final merged depth map.
  • the first structured light depth map is merged with the first stereoscopic depth map to obtain a first merged depth map
  • the second structured light depth map is merged with the second stereoscopic depth map to obtain a second merged depth map
  • the first and second merged depth maps are merged to obtain a final merged depth map.
  • Other approaches are possible as well.
  • only one stereoscopic depth map is merged with two structured light depth maps. The merging can occur in one or more steps.
  • the first structured light depth map is merged with the stereoscopic depth map to obtain a first merged depth map
  • the second structured light depth map is merged with the stereoscopic depth map to obtain the final merged depth map.
  • the two structured light depth maps are merged to obtain a first merged depth map
  • the first merged depth map is merged with the stereoscopic depth map to obtain the final merged depth map.
  • Other approaches are possible.
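  • As an illustration of the multi-step variant (structured light depth maps merged first, stereoscopic depth maps second, then a final merge), a short Python sketch follows; the unweighted NaN-aware average used here is only one of the merging rules contemplated above.

        import numpy as np

        def merge(depth_maps):
            # Unweighted per-pixel average that ignores missing (NaN) samples.
            return np.nanmean(np.stack(depth_maps), axis=0)

        def merge_in_stages(sl_1, sl_2, stereo_12, stereo_21):
            # Stage 1: merge the two structured light depth maps (steps 742, 750).
            merged_sl = merge([sl_1, sl_2])
            # Stage 2: merge the two stereoscopic depth maps (steps 746, 754).
            merged_stereo = merge([stereo_12, stereo_21])
            # Stage 3: merge the intermediate results into the final map (step 756).
            return merge([merged_sl, merged_stereo])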
  • FIG. 7D depicts further details of step 706 of FIG. 7A, in which depth values are refined as needed using stereoscopic matching.
  • This approach is adaptive in that stereoscopic matching is used to refine one or more depth values in response to detecting a condition that indicates refinement is desirable.
  • the stereoscopic matching can be performed for only a subset of the pixels in a frame.
  • refinement of the depth value of a pixel is desirable when the pixel cannot be matched to the structured light pattern, so that the depth value is null or a default value.
  • a pixel may not be matched to the structured light pattern due to occlusions, shadowing, lighting conditions, surface textures, or other reasons.
  • stereoscopic matching can provide a depth value where no depth value was previously obtained, or can provide a more accurate depth value, in some cases, due to the sensors being spaced apart by a larger baseline, compared to the baseline spacing between the sensors and the illuminator. See FIGs. 2, 6B and 6D, for instance.
  • refinement of the depth value of a pixel is desirable when the depth value exceeds a threshold distance, indicating that the corresponding point on the object is relatively far from the sensor.
  • stereoscopic matching can provide a more accurate depth value, in case the baseline between the sensors is larger than the baseline between each of the sensors and the illuminator.
  • the refinement can involve providing a depth value where none was provided before, or merging depth values, e.g., based on different approaches which involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures. Further, the refinement can be performed for the frames of each sensor separately, before the depth values are merged.
  • By performing stereoscopic matching only for pixels for which a condition is detected indicating that refinement is desirable, unnecessary processing is avoided. Stereoscopic matching is not performed for pixels for which no such condition is detected. However, it is also possible to perform stereoscopic matching for an entire frame when a condition is detected indicating that refinement is desirable for one or more pixels of the frame. In one approach, stereoscopic matching for an entire frame is initiated when refinement is indicated for a minimum number or portion of the pixels in a frame.
  • step 760 for each pixel in the first frame of pixel data, we determine a corresponding point in the illumination frame and at step 761, we provide a corresponding first structured light depth map.
  • Decision step 762 determines if a refinement of a depth value is indicated.
  • a criterion can be evaluated for each pixel in the first frame of pixel data, and, in one approach, can indicate whether refinement of the depth value associated with the pixel is desirable. In one approach, refinement is desirable when the associated depth value is unavailable or unreliable. Unreliability can be based on an accuracy measure and/or confidence measure, for instance. If the confidence measure exceeds a threshold confidence measure, the depth value may be deemed to be reliable. Or, if the accuracy measure exceeds a threshold accuracy measure, the depth value may be deemed to be reliable. In another approach, the confidence measure and the accuracy measure must both exceed respective threshold levels for the depth value to be deemed to be reliable.
  • step 763 performs stereoscopic matching of one or more pixels in the first frame of pixel data to one or more pixels in the second frame of pixel data. This results in one or more additional depth values of the first frame of pixel data.
  • step 764 for each pixel in the second frame of pixel data, we determine a corresponding point in the illumination frame and at step 765, we provide a corresponding second structured light depth map.
  • Decision step 766 determines if a refinement of a depth value is indicated. If refinement is desired, step 767 performs stereoscopic matching of one or more pixels in the second frame of pixel data to one or more pixels in the first frame of pixel data. This results in one or more additional depth values of the second frame of pixel data.
  • Step 768 merges the depth maps of the first and second frames of pixel data, where the merging includes depth values obtained from the stereoscopic matching of steps 763 and/or 767.
  • the merging can be based on different approaches, including approaches which involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures.
  • the merging can merge a depth value from the first structured light depth map, a depth value from the second structured light depth map, and one or more depth values from stereoscopic matching.
  • This approach can provide a more reliable result compared to an approach which discards a depth value from a structured light depth map and replaces it with a depth value from stereoscopic matching.
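  • The adaptive refinement of FIG. 7D can be sketched as follows. Rectified frames, a particular left/right ordering of the sensors, an SSD block-matching cost, and NaN as the marker for a missing structured light depth value are assumptions of this illustration, and the window size and disparity limit are arbitrary.

        import numpy as np

        def stereo_depth_at(py, px, frame_1, frame_2, focal_px, baseline_m,
                            window=5, max_disp=64):
            # Block matching of one pixel of frame 1 against the same row of frame 2
            # (rectified frames assumed). Returns a triangulated depth or NaN.
            h = window // 2
            H, W = frame_1.shape
            if py < h or py >= H - h or px < h or px >= W - h:
                return np.nan
            patch = frame_1[py - h:py + h + 1, px - h:px + h + 1]
            best_d, best_cost = 0, np.inf
            for d in range(1, max_disp):
                if px - d < h:
                    break
                cand = frame_2[py - h:py + h + 1, px - d - h:px - d + h + 1]
                cost = np.sum((patch - cand) ** 2)        # SSD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            return focal_px * baseline_m / best_d if best_d > 0 else np.nan

        def refine_depth_map(sl_depth, frame_1, frame_2, focal_px, baseline_12_m):
            # Steps 762-763: run stereoscopic matching only for pixels whose
            # structured light depth is missing (NaN); reliable pixels are kept.
            refined = sl_depth.copy()
            for py, px in zip(*np.where(np.isnan(sl_depth))):
                refined[py, px] = stereo_depth_at(py, px, frame_1, frame_2,
                                                  focal_px, baseline_12_m)
            return refined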
  • FIG. 7E depicts further details of another approach to step 706 of FIG. 7A, in which depth values of a merged depth map are refined as needed using stereoscopic matching.
  • the merging of the depth maps obtained by matching to a structured light pattern occurs before a refinement process.
  • Steps 760, 761, 764 and 765 are the same as the like-numbered steps in FIG. 7D.
  • Step 770 merges the structured light depth maps.
  • the merging can be based on different approaches, including approaches which involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures.
  • Step 771 is analogous to steps 762 and 766 of FIG. 7D and involves determining if refinement of a depth value is indicated.
  • a criterion can be evaluated for each pixel in the merged depth map, and, in one approach, can indicate whether refinement of the depth value associated with a pixel is desirable. In one approach, refinement is desirable when the associated depth value is unavailable or unreliable. Unreliability can be based on an accuracy measure and/or confidence measure, for instance. If the confidence measure exceeds a threshold confidence measure, the depth value may be deemed to be reliable. Or, if the accuracy measure exceeds a threshold accuracy measure, the depth value may be deemed to be reliable. In another approach, the confidence measure and the accuracy measure must both exceed respective threshold levels for the depth value to be deemed to be reliable.
  • step 772 and/or step 773 can be performed. In some cases, it is sufficient to perform stereoscopic matching in one direction, by matching a pixel in one frame to a pixel in another frame. In other cases, stereoscopic matching in both directions can be performed.
  • Step 772 performs stereoscopic matching of one or more pixels in the first frame of pixel data to one or more pixels in the second frame of pixel data. This results in one or more additional depth values of the first frame of pixel data.
  • Step 773 performs stereoscopic matching of one or more pixels in the second frame of pixel data to one or more pixels in the first frame of pixel data. This results in one or more additional depth values of the second frame of pixel data.
  • Step 774 refines the merged depth map of step 770 for one or more selected pixels for which stereoscopic matching was performed.
  • the refinement can involve merging depth values based on different approaches, including approaches which involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures.
  • If no refinement is desired at decision step 771, the process ends at step 775.
  • FIG. 8 depicts an example method for tracking a human target using a control input as set forth in step 708 of FIG. 7A.
  • a depth camera system can be used to track movements of a user, such as a gesture.
  • the movement can be processed as a control input at an application. For example, this could include updating the position of an avatar on a display, where the avatar represents the user, as depicted in FIG. 1, selecting a menu item in a user interface (UI), or many other possible actions.
  • the example method may be implemented using, for example, the depth camera system 20 and/or the computing environment 12, 100 or 420 as discussed in connection with FIGs. 2-4.
  • One or more human targets can be scanned to generate a model such as a skeletal model, a mesh human model, or any other suitable representation of a person.
  • each body part may be characterized as a mathematical vector defining joints and bones of the skeletal model. Body parts can move relative to one another at the joints.
  • the model may then be used to interact with an application that is executed by the computing environment.
  • the scan to generate the model can occur when an application is started or launched, or at other times as controlled by the application of the scanned person.
  • the person may be scanned to generate a skeletal model that may be tracked such that physical movements or motions of the user may act as a real-time user interface that adjusts and/or controls parameters of an application.
  • the tracked movements of a person may be used to move an avatar or other on-screen character in an electronic role-playing game, to control an on-screen vehicle in an electronic racing game, to control the building or organization of objects in a virtual environment, or to perform any other suitable control of an application.
  • depth information is received, e.g., from the depth camera system.
  • the depth camera system may capture or observe a field of view that may include one or more targets.
  • the depth information may include a depth image or map having a plurality of observed pixels, where each observed pixel has an observed depth value, as discussed.
  • the depth image may be downsampled to a lower processing resolution so that it can be more easily used and processed with less computing overhead. Additionally, one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth image; portions of missing and/or removed depth information may be filled in and/or reconstructed; and/or any other suitable processing may be performed on the received depth information such that the depth information may be used to generate a model such as a skeletal model (see FIG. 9).
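  • A minimal sketch of this preprocessing, assuming a floating point depth image and illustrative parameter values, could look like the following.

        import numpy as np
        from scipy.ndimage import median_filter

        def preprocess_depth(depth, factor=2):
            # Downsample to a lower processing resolution (simple decimation) and
            # smooth away isolated high-variance depth values with a median filter.
            small = depth[::factor, ::factor]
            return median_filter(small, size=3)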
  • Step 802 determines whether the depth image includes a human target. This can include flood filling each target or object in the depth image and comparing each target or object to a pattern to determine whether the depth image includes a human target. For example, various depth values of pixels in a selected area or point of the depth image may be compared to determine edges that may define targets or objects as described above. The likely Z values of the Z layers may be flood filled based on the determined edges. For example, the pixels associated with the determined edges and the pixels of the area within the edges may be associated with each other to define a target or an object in the capture area that may be compared with a pattern, which will be described in more detail below.
  • If the depth image includes a human target, at decision step 804, step 806 is performed. If decision step 804 is false, additional depth information is received at step 800.
  • the pattern to which each target or object is compared may include one or more data structures having a set of variables that collectively define a typical body of a human. Information associated with the pixels of, for example, a human target and a non-human target in the field of view, may be compared with the variables to identify a human target.
  • each of the variables in the set may be weighted based on a body part. For example, various body parts such as a head and/or shoulders in the pattern may have a weight value associated therewith that may be greater than other body parts such as a leg.
  • the weight values may be used when comparing a target with the variables to determine whether and which of the targets may be human. For example, matches between the variables and the target that have larger weight values may yield a greater likelihood of the target being human than matches with smaller weight values.
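  • A simple stand-in for the flood filling described for step 802, assuming a depth image in meters, a chosen seed pixel, and an illustrative depth tolerance, is sketched below; a real implementation would combine this with the edge detection and pattern comparison described above.

        import numpy as np
        from collections import deque

        def flood_fill_target(depth, seed, tol=0.05):
            # Collect the connected region around a seed pixel whose neighbouring
            # depth values differ by less than tol (meters).
            H, W = depth.shape
            mask = np.zeros((H, W), dtype=bool)
            mask[seed] = True
            queue = deque([seed])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and not mask[ny, nx]:
                        if abs(depth[ny, nx] - depth[y, x]) < tol:
                            mask[ny, nx] = True
                            queue.append((ny, nx))
            return mask   # bitmask of the candidate target, to be compared to a pattern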
  • Step 806 includes scanning the human target for body parts.
  • the human target may be scanned to provide measurements such as length, width, or the like associated with one or more body parts of a person to provide an accurate model of the person.
  • the human target may be isolated and a bitmask of the human target may be created to scan for one or more body parts.
  • the bitmask may be created by, for example, flood filling the human target such that the human target may be separated from other targets or objects in the capture area.
  • the bitmask may then be analyzed for one or more body parts to generate a model such as a skeletal model, a mesh human model, or the like of the human target.
  • measurement values determined by the scanned bitmask may be used to define one or more joints in a skeletal model.
  • the one or more joints may be used to define one or more bones that may correspond to a body part of a human.
  • the top of the bitmask of the human target may be associated with a location of the top of the head.
  • the bitmask may be scanned downward to then determine a location of a neck, a location of the shoulders and so forth.
  • a width of the bitmask, for example at a position being scanned, may be compared to a threshold value of a typical width associated with, for example, a neck, shoulders, or the like.
  • the distance from a previous position scanned and associated with a body part in a bitmask may be used to determine the location of the neck, shoulders or the like.
  • Some body parts such as legs, feet, or the like may be calculated based on, for example, the location of other body parts.
  • a data structure is created that includes measurement values of the body part.
  • the data structure may include scan results averaged from multiple depth images which are provided at different points in time by the depth camera system.
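  • The top-down scan of the bitmask can be sketched as follows; the shoulder-width threshold is an arbitrary illustrative value in pixels, not a value taken from the disclosure.

        import numpy as np

        def scan_widths(bitmask):
            # Width of the target bitmask at each occupied row, scanning top down.
            widths = bitmask.astype(int).sum(axis=1)
            rows = np.where(widths > 0)[0]
            return rows, widths[rows]

        def find_shoulder_row(bitmask, head_top_row, min_shoulder_width=80):
            # First row below the top of the head whose width exceeds a typical
            # shoulder-width threshold.
            rows, widths = scan_widths(bitmask)
            for r, w in zip(rows, widths):
                if r > head_top_row and w >= min_shoulder_width:
                    return r
            return None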
  • Step 808 includes generating a model of the human target.
  • measurement values determined by the scanned bitmask may be used to define one or more joints in a skeletal model.
  • the one or more joints are used to define one or more bones that correspond to a body part of a human.
  • One or more joints may be adjusted until the joints are within a range of typical distances between a joint and a body part of a human to generate a more accurate skeletal model.
  • the model may further be adjusted based on, for example, a height associated with the human target.
  • the model is tracked by updating the person's location several times per second.
  • information from the depth camera system is used to adjust the skeletal model such that the skeletal model represents a person.
  • one or more forces may be applied to one or more force-receiving aspects of the skeletal model to adjust the skeletal model into a pose that more closely corresponds to the pose of the human target in physical space.
  • FIG. 9 depicts an example model of a human target as set forth in step 808 of FIG. 8.
  • the model 900 is facing the depth camera, in the -z direction of FIG. 1, so that the cross-section shown is in the x-y plane.
  • the model includes a number of reference points, such as the top of the head 902, bottom of the head or chin 913, right shoulder 904, right elbow 906, right wrist 908 and right hand 910, represented by a fingertip area, for instance.
  • the right and left sides are defined from the user's perspective, facing the camera.
  • the model also includes a left shoulder 914, left elbow 916, left wrist 918 and left hand 920.
  • a waist region 922 is also depicted, along with a right hip 924, right knee 926, right foot 928, left hip 930, left knee 932 and left foot 934.
  • a shoulder line 912 is a line, typically horizontal, between the shoulders 904 and 914.
  • An upper torso centerline 925 which extends between the points 922 and 913, for example, is also depicted.
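  • A minimal data structure mirroring a few of the reference points of FIG. 9 might look as follows; the joint names and the placeholder x-y-z coordinates are assumptions for illustration, not values from the disclosure.

        import numpy as np

        # Placeholder 3D positions (meters, camera coordinates) for several of the
        # reference points of FIG. 9; a full skeletal model would carry all joints
        # and the bones (vectors) connecting them.
        skeletal_model = {
            "head_top_902": np.array([0.00, 1.75, 2.50]),
            "chin_913": np.array([0.00, 1.55, 2.50]),
            "right_shoulder_904": np.array([-0.20, 1.45, 2.50]),
            "left_shoulder_914": np.array([0.20, 1.45, 2.50]),
            "waist_922": np.array([0.00, 1.00, 2.50]),
        }

        def shoulder_line(model):
            # The (typically horizontal) shoulder line 912 between points 904 and 914.
            return model["left_shoulder_914"] - model["right_shoulder_904"]

        def upper_torso_centerline(model):
            # The upper torso centerline 925 between points 922 and 913.
            return model["chin_913"] - model["waist_922"]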
  • The foregoing provides a depth camera system which has a number of advantages.
  • One advantage is reduced occlusions. Since a wider baseline is used, one sensor may see information that is occluded to the other sensor. Fusing of the two depth maps produces a 3D image with more observable objects compared to a map produced by a single sensor.
  • Another advantage is a reduced shadow effect. Structured light methods inherently produce a shadow effect in locations that are visible to the sensors but are not "visible" to the light source. By applying stereoscopic matching in these regions, this effect can be reduced.
  • Another advantage is robustness to external light. There are many scenarios where external lighting might disrupt the structured light camera, so that it is not able to produce valid results.
  • In this case, stereoscopic data is obtained as an additional measure, since the external lighting may actually assist in measuring the distance.
  • the external light may come from an identical camera looking at the same scene.
  • operating two or more of the suggested cameras looking at the same scene becomes possible. This is because, even though the light patterns produced by one camera may disrupt the other camera from properly matching the patterns, the stereoscopic matching is still likely to succeed.
  • Another advantage is that, using the suggested configuration, it is possible to achieve greater accuracy at far distances due to the fact that the two sensors have a wider baseline. Both structured light and stereo measurement accuracy depend heavily on the distance between the sensors/projector.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Details Of Cameras Including Film Mechanisms (AREA)
  • Stroboscope Apparatuses (AREA)
  • Cameras In General (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
PCT/US2011/046139 2010-09-08 2011-08-01 Depth camera based on structured light and stereo vision Ceased WO2012033578A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA2809240A CA2809240A1 (en) 2010-09-08 2011-08-01 Depth camera based on structured light and stereo vision
KR1020137005894A KR20140019765A (ko) 2010-09-08 2011-08-01 구조광 및 스테레오 비전에 기초한 깊이 카메라
EP11823916.9A EP2614405A4 (en) 2010-09-08 2011-08-01 Depth camera based on structured light and stereo vision
JP2013528202A JP5865910B2 (ja) 2010-09-08 2011-08-01 構造化光および立体視に基づく深度カメラ

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/877,595 US20120056982A1 (en) 2010-09-08 2010-09-08 Depth camera based on structured light and stereo vision
US12/877,595 2010-09-08

Publications (1)

Publication Number Publication Date
WO2012033578A1 true WO2012033578A1 (en) 2012-03-15

Family

ID=45770424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/046139 Ceased WO2012033578A1 (en) 2010-09-08 2011-08-01 Depth camera based on structured light and stereo vision

Country Status (7)

Country Link
US (1) US20120056982A1
EP (1) EP2614405A4
JP (1) JP5865910B2
KR (1) KR20140019765A
CN (1) CN102385237B
CA (1) CA2809240A1
WO (1) WO2012033578A1

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015198260A (ja) * 2014-03-31 2015-11-09 アイホン株式会社 監視カメラシステム
US10033949B2 (en) 2016-06-16 2018-07-24 Semiconductor Components Industries, Llc Imaging systems with high dynamic range and phase detection pixels
CN108564614A (zh) * 2018-04-03 2018-09-21 Oppo广东移动通信有限公司 深度获取方法和装置、计算机可读存储介质和计算机设备
US10593712B2 (en) 2017-08-23 2020-03-17 Semiconductor Components Industries, Llc Image sensors with high dynamic range and infrared imaging toroidal pixels
US10931902B2 (en) 2018-05-08 2021-02-23 Semiconductor Components Industries, Llc Image sensors with non-rectilinear image pixel arrays
US11538183B2 (en) 2018-03-29 2022-12-27 Twinner Gmbh 3D object sensing system
US11857153B2 (en) 2018-07-19 2024-01-02 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots

Families Citing this family (353)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PL2023812T3 (pl) 2006-05-19 2017-07-31 The Queen's Medical Center Układ śledzenia ruchu dla adaptacyjnego obrazowania w czasie rzeczywistym i spektroskopii
EP3328048B1 (en) 2008-05-20 2021-04-21 FotoNation Limited Capturing and processing of images using monolithic camera array with heterogeneous imagers
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US8908995B2 (en) 2009-01-12 2014-12-09 Intermec Ip Corp. Semi-automatic dimensioning with imager on a portable device
US8503720B2 (en) 2009-05-01 2013-08-06 Microsoft Corporation Human body pose estimation
KR101711619B1 (ko) 2009-09-22 2017-03-02 페블스텍 리미티드 컴퓨터 장치의 원격 제어
US8514491B2 (en) 2009-11-20 2013-08-20 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
SG10201503516VA (en) 2010-05-12 2015-06-29 Pelican Imaging Corp Architectures for imager arrays and array cameras
US8330822B2 (en) * 2010-06-09 2012-12-11 Microsoft Corporation Thermally-tuned depth camera light source
US8428342B2 (en) * 2010-08-12 2013-04-23 At&T Intellectual Property I, L.P. Apparatus and method for providing three dimensional media content
KR20120020627A (ko) * 2010-08-30 2012-03-08 삼성전자주식회사 3d 영상 포맷을 이용한 영상 처리 장치 및 방법
KR101708696B1 (ko) * 2010-09-15 2017-02-21 엘지전자 주식회사 휴대 단말기 및 그 동작 제어방법
US10776635B2 (en) * 2010-09-21 2020-09-15 Mobileye Vision Technologies Ltd. Monocular cued detection of three-dimensional structures from depth images
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
US20120192088A1 (en) * 2011-01-20 2012-07-26 Avaya Inc. Method and system for physical mapping in a virtual world
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US8718748B2 (en) * 2011-03-29 2014-05-06 Kaliber Imaging Inc. System and methods for monitoring and assessing mobility
JP5138116B2 (ja) * 2011-04-19 2013-02-06 三洋電機株式会社 情報取得装置および物体検出装置
US8760499B2 (en) * 2011-04-29 2014-06-24 Austin Russell Three-dimensional imager and projection device
US8570372B2 (en) * 2011-04-29 2013-10-29 Austin Russell Three-dimensional imager and projection device
CN107404609B (zh) 2011-05-11 2020-02-11 快图有限公司 用于传送阵列照相机图像数据的方法
US20120287249A1 (en) * 2011-05-12 2012-11-15 Electronics And Telecommunications Research Institute Method for obtaining depth information and apparatus using the same
US20120293630A1 (en) * 2011-05-19 2012-11-22 Qualcomm Incorporated Method and apparatus for multi-camera motion capture enhancement using proximity sensors
CA2840083A1 (en) * 2011-06-29 2013-01-03 Smart Plate, Inc. System and methods for rendering content on a vehicle
RU2455676C2 (ru) * 2011-07-04 2012-07-10 Общество с ограниченной ответственностью "ТРИДИВИ" Способ управления устройством с помощью жестов и 3d-сенсор для его осуществления
WO2013015145A1 (ja) * 2011-07-22 2013-01-31 三洋電機株式会社 情報取得装置および物体検出装置
EP2747641A4 (en) 2011-08-26 2015-04-01 Kineticor Inc METHOD, SYSTEMS AND DEVICES FOR SCAN INTERNAL MOTION CORRECTION
WO2013043751A1 (en) 2011-09-19 2013-03-28 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
CN104081414B (zh) 2011-09-28 2017-08-01 Fotonation开曼有限公司 用于编码和解码光场图像文件的系统及方法
US8660362B2 (en) * 2011-11-21 2014-02-25 Microsoft Corporation Combined depth filtering and super resolution
US20130141433A1 (en) * 2011-12-02 2013-06-06 Per Astrand Methods, Systems and Computer Program Products for Creating Three Dimensional Meshes from Two Dimensional Images
WO2013126578A1 (en) 2012-02-21 2013-08-29 Pelican Imaging Corporation Systems and methods for the manipulation of captured light field image data
US10600235B2 (en) 2012-02-23 2020-03-24 Charles D. Huston System and method for capturing and sharing a location based experience
KR102038856B1 (ko) 2012-02-23 2019-10-31 찰스 디. 휴스턴 환경을 생성하고 환경내 위치기반 경험을 공유하는 시스템 및 방법
US10937239B2 (en) 2012-02-23 2021-03-02 Charles D. Huston System and method for creating an environment and for sharing an event
KR101862199B1 (ko) * 2012-02-29 2018-05-29 삼성전자주식회사 원거리 획득이 가능한 tof카메라와 스테레오 카메라의 합성 시스템 및 방법
KR102011169B1 (ko) * 2012-03-05 2019-08-14 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 광 폴오프에 기초한 깊이 이미지의 생성 기법
US20150109513A1 (en) 2012-04-26 2015-04-23 The Trustees of Collumbia University in the City York Systems, methods, and media for providing interactive refocusing in images
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Coporation Camera modules patterned with pi filter groups
US8462155B1 (en) * 2012-05-01 2013-06-11 Google Inc. Merging three-dimensional models based on confidence scores
US9779546B2 (en) 2012-05-04 2017-10-03 Intermec Ip Corp. Volume dimensioning systems and methods
US9007368B2 (en) * 2012-05-07 2015-04-14 Intermec Ip Corp. Dimensioning system calibration systems and methods
US10007858B2 (en) 2012-05-15 2018-06-26 Honeywell International Inc. Terminals and methods for dimensioning objects
EP2853097B1 (en) * 2012-05-23 2018-07-11 Intel Corporation Depth gradient based tracking
US9724597B2 (en) * 2012-06-04 2017-08-08 Sony Interactive Entertainment Inc. Multi-image interactive gaming device
JP6008148B2 (ja) * 2012-06-28 2016-10-19 パナソニックIpマネジメント株式会社 撮像装置
US9100635B2 (en) 2012-06-28 2015-08-04 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays and optic arrays
US8896594B2 (en) 2012-06-30 2014-11-25 Microsoft Corporation Depth sensing with depth-adaptive illumination
US20140002674A1 (en) 2012-06-30 2014-01-02 Pelican Imaging Corporation Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors
KR101896666B1 (ko) * 2012-07-05 2018-09-07 삼성전자주식회사 이미지 센서 칩, 이의 동작 방법, 및 이를 포함하는 시스템
WO2014020604A1 (en) * 2012-07-31 2014-02-06 Inuitive Ltd. Multiple sensors processing system for natural user interface applications
US10321127B2 (en) 2012-08-20 2019-06-11 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
AU2013305770A1 (en) 2012-08-21 2015-02-26 Pelican Imaging Corporation Systems and methods for parallax detection and correction in images captured using array cameras
EP2888698A4 (en) 2012-08-23 2016-06-29 Pelican Imaging Corp PROPERTY-BASED HIGH-RESOLUTION MOTION ESTIMATION FROM LOW-RESOLUTION IMAGES RECORDED WITH AN ARRAY SOURCE
KR101893788B1 (ko) 2012-08-27 2018-08-31 삼성전자주식회사 다시점 카메라간 정합 장치 및 방법
EP2893702A4 (en) * 2012-09-10 2016-06-08 Aemass Inc MULTI-DIMENSIONAL DATA ACQUISITION OF AN ENVIRONMENT WITH MULTIPLE DEVICES
US20140092281A1 (en) 2012-09-28 2014-04-03 Pelican Imaging Corporation Generating Images from Light Fields Utilizing Virtual Viewpoints
US9939259B2 (en) 2012-10-04 2018-04-10 Hand Held Products, Inc. Measuring object dimensions using mobile computer
US9633263B2 (en) 2012-10-09 2017-04-25 International Business Machines Corporation Appearance modeling for object re-identification using weighted brightness transfer functions
KR101874482B1 (ko) 2012-10-16 2018-07-05 삼성전자주식회사 깊이 영상으로부터 고해상도 3차원 영상을 복원하는 장치 및 방법
US9841311B2 (en) 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9064318B2 (en) 2012-10-25 2015-06-23 Adobe Systems Incorporated Image matting and alpha value techniques
DE102012110460A1 (de) 2012-10-31 2014-04-30 Audi Ag Verfahren zum Eingeben eines Steuerbefehls für eine Komponente eines Kraftwagens
US20140118240A1 (en) * 2012-11-01 2014-05-01 Motorola Mobility Llc Systems and Methods for Configuring the Display Resolution of an Electronic Device Based on Distance
US9811880B2 (en) * 2012-11-09 2017-11-07 The Boeing Company Backfilling points in a point cloud
US9304603B2 (en) * 2012-11-12 2016-04-05 Microsoft Technology Licensing, Llc Remote control using depth camera
US9201580B2 (en) 2012-11-13 2015-12-01 Adobe Systems Incorporated Sound alignment user interface
US9355649B2 (en) 2012-11-13 2016-05-31 Adobe Systems Incorporated Sound alignment using timing information
US9143711B2 (en) 2012-11-13 2015-09-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
US10368053B2 (en) * 2012-11-14 2019-07-30 Qualcomm Incorporated Structured light active depth sensing systems combining multiple images to compensate for differences in reflectivity and/or absorption
US9076205B2 (en) 2012-11-19 2015-07-07 Adobe Systems Incorporated Edge direction and curve based image de-blurring
US10249321B2 (en) 2012-11-20 2019-04-02 Adobe Inc. Sound rate modification
US9451304B2 (en) 2012-11-29 2016-09-20 Adobe Systems Incorporated Sound feature priority alignment
US10455219B2 (en) 2012-11-30 2019-10-22 Adobe Inc. Stereo correspondence and depth sensors
US9135710B2 (en) * 2012-11-30 2015-09-15 Adobe Systems Incorporated Depth map stereo correspondence techniques
US10249052B2 (en) 2012-12-19 2019-04-02 Adobe Systems Incorporated Stereo correspondence model fitting
US9208547B2 (en) 2012-12-19 2015-12-08 Adobe Systems Incorporated Stereo correspondence smoothness tool
US9214026B2 (en) 2012-12-20 2015-12-15 Adobe Systems Incorporated Belief propagation and affinity measures
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US9323346B2 (en) 2012-12-31 2016-04-26 Futurewei Technologies, Inc. Accurate 3D finger tracking with a single camera
SG11201504814VA (en) * 2013-01-03 2015-07-30 Saurav Suman A method and system enabling control of different digital devices using gesture or motion control
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
CN109008972A (zh) 2013-02-01 2018-12-18 凯内蒂科尔股份有限公司 生物医学成像中的实时适应性运动补偿的运动追踪系统
US9052746B2 (en) 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
WO2014130849A1 (en) 2013-02-21 2014-08-28 Pelican Imaging Corporation Generating compressed light field representation data
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
WO2014133974A1 (en) 2013-02-24 2014-09-04 Pelican Imaging Corporation Thin form computational and modular array cameras
US9774789B2 (en) 2013-03-08 2017-09-26 Fotonation Cayman Limited Systems and methods for high dynamic range imaging using array cameras
US9135516B2 (en) 2013-03-08 2015-09-15 Microsoft Technology Licensing, Llc User body angle, curvature and average extremity positions extraction using depth images
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
US9134114B2 (en) * 2013-03-11 2015-09-15 Texas Instruments Incorporated Time of flight sensor binning
US20140267701A1 (en) * 2013-03-12 2014-09-18 Ziv Aviv Apparatus and techniques for determining object depth in images
WO2014164909A1 (en) 2013-03-13 2014-10-09 Pelican Imaging Corporation Array camera architecture implementing quantum film sensors
US9092657B2 (en) 2013-03-13 2015-07-28 Microsoft Technology Licensing, Llc Depth image processing
WO2014165244A1 (en) 2013-03-13 2014-10-09 Pelican Imaging Corporation Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9080856B2 (en) 2013-03-13 2015-07-14 Intermec Ip Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US9124831B2 (en) 2013-03-13 2015-09-01 Pelican Imaging Corporation System and methods for calibration of an array camera
WO2014153098A1 (en) 2013-03-14 2014-09-25 Pelican Imaging Corporation Photmetric normalization in array cameras
US20140278455A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Providing Feedback Pertaining to Communication Style
US9142034B2 (en) 2013-03-14 2015-09-22 Microsoft Technology Licensing, Llc Center of mass state vector for analyzing user motion in 3D images
US9159140B2 (en) 2013-03-14 2015-10-13 Microsoft Technology Licensing, Llc Signal analysis for repetition detection and analysis
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9438888B2 (en) 2013-03-15 2016-09-06 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US20140307055A1 (en) * 2013-04-15 2014-10-16 Microsoft Corporation Intensity-modulated light pattern for active stereo
JP2014230179A (ja) * 2013-05-24 2014-12-08 ソニー株式会社 撮像装置及び撮像方法
US10228452B2 (en) 2013-06-07 2019-03-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US9239950B2 (en) 2013-07-01 2016-01-19 Hand Held Products, Inc. Dimensioning system
US10497140B2 (en) * 2013-08-15 2019-12-03 Intel Corporation Hybrid depth sensing pipeline
US9464885B2 (en) 2013-08-30 2016-10-11 Hand Held Products, Inc. System and method for package dimensioning
WO2015048694A2 (en) 2013-09-27 2015-04-02 Pelican Imaging Corporation Systems and methods for depth-assisted perspective distortion correction
US9565416B1 (en) 2013-09-30 2017-02-07 Google Inc. Depth-assisted focus in multi-camera systems
EP2869263A1 (en) * 2013-10-29 2015-05-06 Thomson Licensing Method and apparatus for generating depth map of a scene
US9426343B2 (en) 2013-11-07 2016-08-23 Pelican Imaging Corporation Array cameras incorporating independently aligned lens stacks
US9769459B2 (en) * 2013-11-12 2017-09-19 Microsoft Technology Licensing, Llc Power efficient laser diode driver circuit and method
US10119808B2 (en) 2013-11-18 2018-11-06 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US9456134B2 (en) 2013-11-26 2016-09-27 Pelican Imaging Corporation Array camera configurations incorporating constituent array cameras and constituent cameras
CA2931529C (en) 2013-11-27 2022-08-23 Children's National Medical Center 3d corrected imaging
US9154697B2 (en) 2013-12-06 2015-10-06 Google Inc. Camera selection based on occlusion of field of view
EP2887311B1 (en) 2013-12-20 2016-09-14 Thomson Licensing Method and apparatus for performing depth estimation
EP2887029B1 (de) 2013-12-20 2016-03-09 Multipond Wägetechnik GmbH Befüllungsvorrichtung und Verfahren zum Erfassen einer Befüllung
KR102106080B1 (ko) * 2014-01-29 2020-04-29 엘지이노텍 주식회사 깊이 정보 추출 장치 및 방법
US11265534B2 (en) * 2014-02-08 2022-03-01 Microsoft Technology Licensing, Llc Environment-dependent active illumination for stereo matching
CN103810685B (zh) * 2014-02-25 2016-05-25 清华大学深圳研究生院 一种深度图的超分辨率处理方法
KR102166691B1 (ko) * 2014-02-27 2020-10-16 엘지전자 주식회사 객체의 3차원 형상을 산출하는 장치 및 방법
WO2015134996A1 (en) 2014-03-07 2015-09-11 Pelican Imaging Corporation System and methods for depth regularization and semiautomatic interactive matting using rgb-d images
WO2015148391A1 (en) 2014-03-24 2015-10-01 Thomas Michael Ernst Systems, methods, and devices for removing prospective motion correction from medical imaging scans
CN103869593B (zh) * 2014-03-26 2017-01-25 深圳科奥智能设备有限公司 三维成像装置、系统及方法
US10349037B2 (en) 2014-04-03 2019-07-09 Ams Sensors Singapore Pte. Ltd. Structured-stereo imaging assembly including separate imagers for different wavelengths
US20160277724A1 (en) * 2014-04-17 2016-09-22 Sony Corporation Depth assisted scene recognition for a camera
BR112015030886B1 (pt) 2014-04-18 2022-09-27 Autonomous Solutions, Inc. Veículo, sistema de visão para uso por um veículo e método de direcionamento de um veículo com o uso de um sistema de visão
US9589359B2 (en) * 2014-04-24 2017-03-07 Intel Corporation Structured stereo
KR101586010B1 (ko) * 2014-04-28 2016-01-15 (주)에프엑스기어 증강 현실 기반 가상 피팅을 위한 의상의 물리적 시뮬레이션 장치 및 방법
US20150309663A1 (en) * 2014-04-28 2015-10-29 Qualcomm Incorporated Flexible air and surface multi-touch detection in mobile platform
CN103971405A (zh) * 2014-05-06 2014-08-06 重庆大学 一种激光散斑结构光及深度信息的三维重建方法
US9684370B2 (en) * 2014-05-07 2017-06-20 Microsoft Technology Licensing, Llc Reducing camera interference using image analysis
US20150334309A1 (en) * 2014-05-16 2015-11-19 Htc Corporation Handheld electronic apparatus, image capturing apparatus and image capturing method thereof
US9311565B2 (en) * 2014-06-16 2016-04-12 Sony Corporation 3D scanning with depth cameras using mesh sculpting
US20150381972A1 (en) * 2014-06-30 2015-12-31 Microsoft Corporation Depth estimation using multi-view stereo and a calibrated projector
EP3188660A4 (en) 2014-07-23 2018-05-16 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11290704B2 (en) * 2014-07-31 2022-03-29 Hewlett-Packard Development Company, L.P. Three dimensional scanning system and framework
KR102178978B1 (ko) 2014-07-31 2020-11-13 한국전자통신연구원 스테레오 매칭 방법 및 이를 수행하는 장치
US9823059B2 (en) 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
CN105451011B (zh) * 2014-08-20 2018-11-09 联想(北京)有限公司 调节功率的方法和装置
US9507995B2 (en) 2014-08-29 2016-11-29 X Development Llc Combination of stereo and structured-light processing
CN113256730B (zh) 2014-09-29 2023-09-05 快图有限公司 用于阵列相机的动态校准的系统和方法
US10810715B2 (en) 2014-10-10 2020-10-20 Hand Held Products, Inc System and method for picking validation
US10775165B2 (en) 2014-10-10 2020-09-15 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US9779276B2 (en) 2014-10-10 2017-10-03 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US10060729B2 (en) 2014-10-21 2018-08-28 Hand Held Products, Inc. Handheld dimensioner with data-quality indication
US9897434B2 (en) 2014-10-21 2018-02-20 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9762793B2 (en) 2014-10-21 2017-09-12 Hand Held Products, Inc. System and method for dimensioning
US9752864B2 (en) 2014-10-21 2017-09-05 Hand Held Products, Inc. Handheld dimensioning system with feedback
US9557166B2 (en) 2014-10-21 2017-01-31 Hand Held Products, Inc. Dimensioning system with multipath interference mitigation
TWI591514B (zh) 2014-11-07 2017-07-11 鴻海精密工業股份有限公司 手勢創建系統及方法
KR102305998B1 (ko) * 2014-12-08 2021-09-28 엘지이노텍 주식회사 영상 처리 장치
WO2016095192A1 (en) * 2014-12-19 2016-06-23 SZ DJI Technology Co., Ltd. Optical-flow imaging system and method using ultrasonic depth sensing
US10404969B2 (en) * 2015-01-20 2019-09-03 Qualcomm Incorporated Method and apparatus for multiple technology depth map acquisition and fusion
US9958758B2 (en) 2015-01-21 2018-05-01 Microsoft Technology Licensing, Llc Multiple exposure structured light pattern
US10185463B2 (en) * 2015-02-13 2019-01-22 Nokia Technologies Oy Method and apparatus for providing model-centered rotation in a three-dimensional user interface
US20160255334A1 (en) * 2015-02-26 2016-09-01 Dual Aperture International Co. Ltd. Generating an improved depth map using a multi-aperture imaging system
US9948920B2 (en) 2015-02-27 2018-04-17 Qualcomm Incorporated Systems and methods for error correction in structured light
US10068338B2 (en) * 2015-03-12 2018-09-04 Qualcomm Incorporated Active sensing spatial resolution improvement through multiple receivers and code reuse
US9530215B2 (en) 2015-03-20 2016-12-27 Qualcomm Incorporated Systems and methods for enhanced depth map retrieval for moving objects using active sensing technology
CN107637074B (zh) * 2015-03-22 2020-09-18 脸谱科技有限责任公司 使用立体摄像机与结构化光的头戴显示器的深度绘制
US10178374B2 (en) * 2015-04-03 2019-01-08 Microsoft Technology Licensing, Llc Depth imaging of a surrounding environment
US10341632B2 (en) 2015-04-15 2019-07-02 Google Llc. Spatial random access enabled video system with a three-dimensional viewing volume
US10546424B2 (en) 2015-04-15 2020-01-28 Google Llc Layered content delivery for virtual and augmented reality experiences
US10419737B2 (en) 2015-04-15 2019-09-17 Google Llc Data structures and delivery methods for expediting virtual reality playback
US10412373B2 (en) * 2015-04-15 2019-09-10 Google Llc Image capture for virtual reality displays
US10567464B2 (en) 2015-04-15 2020-02-18 Google Llc Video compression with adaptive view-dependent lighting removal
US10469873B2 (en) 2015-04-15 2019-11-05 Google Llc Encoding and decoding virtual reality video
US10540818B2 (en) 2015-04-15 2020-01-21 Google Llc Stereo image generation and interactive playback
US10444931B2 (en) 2017-05-09 2019-10-15 Google Llc Vantage generation and interactive playback
US10440407B2 (en) 2017-05-09 2019-10-08 Google Llc Adaptive control for immersive experience delivery
US9942474B2 (en) 2015-04-17 2018-04-10 Fotonation Cayman Limited Systems and methods for performing high speed video capture and depth estimation using array cameras
US10066933B2 (en) * 2015-05-04 2018-09-04 Facebook, Inc. Camera depth mapping using structured light patterns
CN106210698B (zh) * 2015-05-08 2018-02-13 光宝电子(广州)有限公司 深度相机的控制方法
US10488192B2 (en) 2015-05-10 2019-11-26 Magik Eye Inc. Distance sensor projecting parallel patterns
US9786101B2 (en) 2015-05-19 2017-10-10 Hand Held Products, Inc. Evaluating image values
US10785393B2 (en) 2015-05-22 2020-09-22 Facebook, Inc. Methods and devices for selective flash illumination
US9683834B2 (en) * 2015-05-27 2017-06-20 Intel Corporation Adaptable depth sensing system
CA2970692C (en) * 2015-05-29 2018-04-03 Arb Labs Inc. Systems, methods and devices for monitoring betting activities
KR101639227B1 (ko) * 2015-06-08 2016-07-13 주식회사 고영테크놀러지 3차원 형상 측정장치
US10066982B2 (en) 2015-06-16 2018-09-04 Hand Held Products, Inc. Calibrating a volume dimensioner
US20160377414A1 (en) 2015-06-23 2016-12-29 Hand Held Products, Inc. Optical pattern projector
US9857167B2 (en) 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US10560686B2 (en) 2015-06-23 2020-02-11 Huawei Technologies Co., Ltd. Photographing device and method for obtaining depth information
US9638791B2 (en) * 2015-06-25 2017-05-02 Qualcomm Incorporated Methods and apparatus for performing exposure estimation using a time-of-flight sensor
US9646410B2 (en) * 2015-06-30 2017-05-09 Microsoft Technology Licensing, Llc Mixed three dimensional scene reconstruction from plural surface models
US9835486B2 (en) 2015-07-07 2017-12-05 Hand Held Products, Inc. Mobile dimensioner apparatus for use in commerce
DE102016208049A1 (de) * 2015-07-09 2017-01-12 Inb Vision Ag Vorrichtung und Verfahren zur Bilderfassung einer vorzugsweise strukturierten Oberfläche eines Objekts
US10163247B2 (en) 2015-07-14 2018-12-25 Microsoft Technology Licensing, Llc Context-adaptive allocation of render model resources
EP3118576B1 (en) 2015-07-15 2018-09-12 Hand Held Products, Inc. Mobile dimensioning device with dynamic accuracy compatible with nist standard
US10094650B2 (en) 2015-07-16 2018-10-09 Hand Held Products, Inc. Dimensioning and imaging items
US20170017301A1 (en) 2015-07-16 2017-01-19 Hand Held Products, Inc. Adjusting dimensioning results using augmented reality
US9665978B2 (en) 2015-07-20 2017-05-30 Microsoft Technology Licensing, Llc Consistent tessellation via topology-aware surface tracking
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US9635339B2 (en) 2015-08-14 2017-04-25 Qualcomm Incorporated Memory-efficient coded light error correction
US9846943B2 (en) 2015-08-31 2017-12-19 Qualcomm Incorporated Code domain power control for structured light
CN105389845B (zh) * 2015-10-19 2017-03-22 北京旷视科技有限公司 三维重建的图像获取方法和系统、三维重建方法和系统
US10554956B2 (en) 2015-10-29 2020-02-04 Dell Products, Lp Depth masks for image segmentation for depth-based computational photography
US10249030B2 (en) 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US10225544B2 (en) 2015-11-19 2019-03-05 Hand Held Products, Inc. High resolution dot pattern
EP3380007A4 (en) 2015-11-23 2019-09-04 Kineticor, Inc. SYSTEMS, APPARATUS AND METHOD FOR TRACKING AND COMPENSATING THE PATIENT MOVEMENT DURING IMAGING MEDICAL TRACING
US10021371B2 (en) 2015-11-24 2018-07-10 Dell Products, Lp Method and apparatus for gross-level user and input detection using similar or dissimilar camera pair
US10007994B2 (en) 2015-12-26 2018-06-26 Intel Corporation Stereodepth camera using VCSEL projector with controlled projection lens
WO2017126711A1 (ko) * 2016-01-19 2017-07-27 전자부품연구원 스테레오스코픽 카메라의 최적 깊이 인식을 위한 조명 제어 방법 및 시스템
US10025314B2 (en) 2016-01-27 2018-07-17 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10229502B2 (en) * 2016-02-03 2019-03-12 Microsoft Technology Licensing, Llc Temporal time-of-flight
US10254402B2 (en) * 2016-02-04 2019-04-09 Goodrich Corporation Stereo range with lidar correction
US9912862B2 (en) 2016-02-29 2018-03-06 Aquifi, Inc. System and method for assisted 3D scanning
US10841491B2 (en) 2016-03-16 2020-11-17 Analog Devices, Inc. Reducing power consumption for time-of-flight depth imaging
CN105869167A (zh) * 2016-03-30 2016-08-17 天津大学 基于主被动融合的高分辨率深度图获取方法
US20170289515A1 (en) * 2016-04-01 2017-10-05 Intel Corporation High dynamic range depth generation for 3d imaging systems
JP6908025B2 (ja) * 2016-04-06 2021-07-21 ソニーグループ株式会社 Image processing apparatus and image processing method
US10136120B2 (en) 2016-04-15 2018-11-20 Microsoft Technology Licensing, Llc Depth sensing using structured illumination
KR101842141B1 (ko) 2016-05-13 2018-03-26 (주)칼리온 Three-dimensional scanning apparatus and method
US10339352B2 (en) 2016-06-03 2019-07-02 Hand Held Products, Inc. Wearable metrological apparatus
US9940721B2 (en) 2016-06-10 2018-04-10 Hand Held Products, Inc. Scene change detection in a dimensioner
US10163216B2 (en) 2016-06-15 2018-12-25 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
KR102442594B1 (ko) 2016-06-23 2022-09-13 한국전자통신연구원 Apparatus and method for computing a cost volume in a stereo matching system equipped with an illuminator
KR102529120B1 (ko) 2016-07-15 2023-05-08 삼성전자주식회사 Method, device, and recording medium for acquiring an image
US9947099B2 (en) * 2016-07-27 2018-04-17 Microsoft Technology Licensing, Llc Reflectivity map estimate from dot based structured light systems
US10574909B2 (en) 2016-08-08 2020-02-25 Microsoft Technology Licensing, Llc Hybrid imaging sensor for structured light object capture
US10271033B2 (en) * 2016-10-31 2019-04-23 Verizon Patent And Licensing Inc. Methods and systems for generating depth data by converging independently-captured depth maps
US10204448B2 (en) 2016-11-04 2019-02-12 Aquifi, Inc. System and method for portable active 3D scanning
US10643498B1 (en) 2016-11-30 2020-05-05 Realityworks, Inc. Arthritis experiential training tool and method
CN106682584B (zh) * 2016-12-01 2019-12-20 广州亿航智能技术有限公司 Unmanned aerial vehicle obstacle detection method and apparatus
US10469758B2 (en) 2016-12-06 2019-11-05 Microsoft Technology Licensing, Llc Structured light 3D sensors with variable focal length lenses and illuminators
US10554881B2 (en) 2016-12-06 2020-02-04 Microsoft Technology Licensing, Llc Passive and active stereo vision 3D sensors with variable focal length lenses
US10451714B2 (en) 2016-12-06 2019-10-22 Sony Corporation Optical micromesh for computerized devices
US10536684B2 (en) 2016-12-07 2020-01-14 Sony Corporation Color noise reduction in 3D depth map
JP7133554B2 (ja) 2016-12-07 2022-09-08 マジック アイ インコーポレイテッド Distance sensor including an adjustable-focus imaging sensor
US10909708B2 (en) 2016-12-09 2021-02-02 Hand Held Products, Inc. Calibrating a dimensioner using ratios of measurable parameters of optically-perceptible geometric elements
CN108399633A (zh) * 2017-02-06 2018-08-14 罗伯团队家居有限公司 Method and apparatus for stereo vision
CN106959075B (zh) * 2017-02-10 2019-12-13 深圳奥比中光科技有限公司 Method and system for accurate measurement using a depth camera
US10495735B2 (en) 2017-02-14 2019-12-03 Sony Corporation Using micro mirrors to improve the field of view of a 3D depth map
US10628950B2 (en) * 2017-03-01 2020-04-21 Microsoft Technology Licensing, Llc Multi-spectrum illumination-and-sensor module for head tracking, gesture recognition and spatial mapping
US10795022B2 (en) * 2017-03-02 2020-10-06 Sony Corporation 3D depth map
US10666927B2 (en) 2017-03-15 2020-05-26 Baker Hughes, A Ge Company, Llc Method and device for inspection of an asset
CN110431841B (zh) 2017-03-21 2021-08-17 奇跃公司 Depth sensing techniques for virtual, augmented, and mixed reality systems
US11047672B2 (en) 2017-03-28 2021-06-29 Hand Held Products, Inc. System for optically dimensioning
TWI672677B (zh) * 2017-03-31 2019-09-21 鈺立微電子股份有限公司 Depth map generation device for merging multiple depth maps
US10979687B2 (en) 2017-04-03 2021-04-13 Sony Corporation Using super imposition to render a 3D depth map
US10620316B2 (en) * 2017-05-05 2020-04-14 Qualcomm Incorporated Systems and methods for generating a structured light depth map with a non-uniform codeword pattern
US10474227B2 (en) 2017-05-09 2019-11-12 Google Llc Generation of virtual reality with 6 degrees of freedom from limited viewer data
CN109314776B (zh) * 2017-05-17 2021-02-26 深圳配天智能技术研究院有限公司 Image processing method, image processing device, and storage medium
US10542245B2 (en) * 2017-05-24 2020-01-21 Lg Electronics Inc. Mobile terminal and method for controlling the same
US10282857B1 (en) 2017-06-27 2019-05-07 Amazon Technologies, Inc. Self-validating structured light depth sensor system
CN109284653A (zh) * 2017-07-20 2019-01-29 微软技术许可有限责任公司 Detection of elongated objects based on computer vision
US10733748B2 (en) 2017-07-24 2020-08-04 Hand Held Products, Inc. Dual-pattern optical 3D dimensioning
US10482618B2 (en) 2017-08-21 2019-11-19 Fotonation Limited Systems and methods for hybrid depth regularization
EP3451023A1 (en) 2017-09-01 2019-03-06 Koninklijke Philips N.V. Time-of-flight depth camera with low resolution pixel imaging
US10613228B2 (en) 2017-09-08 2020-04-07 Microsoft Technology Licensing, Llc Time-of-flight augmented structured light range-sensor
US20190089939A1 (en) * 2017-09-18 2019-03-21 Intel Corporation Depth sensor optimization based on detected distance
US10521881B1 (en) * 2017-09-28 2019-12-31 Apple Inc. Error concealment for a head-mountable device
CN111492262B (zh) 2017-10-08 2024-06-28 魔眼公司 Distance measurement using a longitudinal grid pattern
EP3692501B1 (en) 2017-10-08 2025-08-20 Magik Eye Inc. Calibrating a sensor system including multiple movable sensors
US11209528B2 (en) 2017-10-15 2021-12-28 Analog Devices, Inc. Time-of-flight depth image processing systems and methods
US10679076B2 (en) 2017-10-22 2020-06-09 Magik Eye Inc. Adjusting the projection system of a distance sensor to optimize a beam layout
CN107742631B (zh) * 2017-10-26 2020-02-14 京东方科技集团股份有限公司 Depth imaging device and manufacturing method, display panel and manufacturing method, and apparatus
US10484667B2 (en) 2017-10-31 2019-11-19 Sony Corporation Generating 3D depth map using parallax
US11393114B1 (en) * 2017-11-08 2022-07-19 AI Incorporated Method and system for collaborative construction of a map
EP3711021A4 (en) 2017-11-13 2021-07-21 Carmel-Haifa University Economic Corporation Ltd. MOTION TRACKING WITH MULTIPLE 3D CAMERAS
CN108174180B (zh) * 2018-01-02 2019-07-30 京东方科技集团股份有限公司 Display device, display system, and three-dimensional display method
US10948596B2 (en) * 2018-01-24 2021-03-16 Sony Semiconductor Solutions Corporation Time-of-flight image sensor with distance determination
JP7067091B2 (ja) * 2018-02-02 2022-05-16 株式会社リコー Imaging apparatus and method for controlling the imaging apparatus
US10614292B2 (en) 2018-02-06 2020-04-07 Kneron Inc. Low-power face identification method capable of controlling power adaptively
US10306152B1 (en) * 2018-02-14 2019-05-28 Himax Technologies Limited Auto-exposure controller, auto-exposure control method and system based on structured light
JP2021518535A (ja) 2018-03-20 2021-08-02 マジック アイ インコーポレイテッド Distance measurement using projection patterns of varying densities
CN112119628B (zh) * 2018-03-20 2022-06-03 魔眼公司 Adjusting camera exposure for three-dimensional depth sensing and two-dimensional imaging
TWI719440B (zh) * 2018-04-02 2021-02-21 聯發科技股份有限公司 Stereo matching method and corresponding stereo matching device
CN110349196B (zh) * 2018-04-03 2024-03-29 联发科技股份有限公司 Depth fusion method and apparatus
US10584962B2 (en) 2018-05-01 2020-03-10 Hand Held Products, Inc. System and method for validating physical-item security
US10663567B2 (en) 2018-05-04 2020-05-26 Microsoft Technology Licensing, Llc Field calibration of a structured light range-sensor
US11040452B2 (en) 2018-05-29 2021-06-22 Abb Schweiz Ag Depth sensing robotic hand-eye camera using structured light
JP7292315B2 (ja) 2018-06-06 2023-06-16 マジック アイ インコーポレイテッド Distance measurement using high-density projection patterns
US10549186B2 (en) 2018-06-26 2020-02-04 Sony Interactive Entertainment Inc. Multipoint SLAM capture
CN108921098B (zh) * 2018-07-03 2020-08-18 百度在线网络技术(北京)有限公司 Human motion analysis method, apparatus, device, and storage medium
CN118836845A (zh) 2018-07-13 2024-10-25 拉布拉多系统公司 Visual navigation for mobile devices operable under varying ambient lighting conditions
WO2020019704A1 (zh) * 2018-07-27 2020-01-30 Oppo广东移动通信有限公司 Control system for a structured light projector, and electronic device
US11475584B2 (en) 2018-08-07 2022-10-18 Magik Eye Inc. Baffles for three-dimensional sensors having spherical fields of view
CN110855961A (zh) * 2018-08-20 2020-02-28 奇景光电股份有限公司 Depth sensing device and operating method thereof
US10877622B2 (en) * 2018-08-29 2020-12-29 Facebook Technologies, Llc Detection of structured light for depth sensing
JP7136507B2 (ja) * 2018-08-30 2022-09-13 ヴェオ ロボティクス, インコーポレイテッド Depth-sensing computer vision system
US20200082160A1 (en) * 2018-09-12 2020-03-12 Kneron (Taiwan) Co., Ltd. Face recognition module with artificial intelligence models
CN109389674B (zh) * 2018-09-30 2021-08-13 Oppo广东移动通信有限公司 Data processing method and apparatus, MEC server, and storage medium
US11158074B1 (en) 2018-10-02 2021-10-26 Facebook Technologies, Llc Depth sensing using temporal coding
US10901092B1 (en) 2018-10-02 2021-01-26 Facebook Technologies, Llc Depth sensing using dynamic illumination with range extension
US10896516B1 (en) * 2018-10-02 2021-01-19 Facebook Technologies, Llc Low-power depth sensing using dynamic illumination
US20210356552A1 (en) * 2018-10-15 2021-11-18 Nec Corporation Information processing apparatus, sensing system, method, program, and recording medium
CN111091592B (zh) * 2018-10-24 2023-08-15 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
JP2021508386A (ja) 2018-11-09 2021-03-04 ベイジン ディディ インフィニティ テクノロジー アンド ディベロップメント カンパニー リミティッド System and method for detecting in-vehicle conflicts
CN109633661A (zh) * 2018-11-28 2019-04-16 杭州凌像科技有限公司 Glass detection system and method based on fusion of an RGB-D sensor and an ultrasonic sensor
WO2020150131A1 (en) 2019-01-20 2020-07-23 Magik Eye Inc. Three-dimensional sensor including bandpass filter having multiple passbands
US11103748B1 (en) 2019-03-05 2021-08-31 Physmodo, Inc. System and method for human motion detection and tracking
EP3706070A1 (en) * 2019-03-05 2020-09-09 Koninklijke Philips N.V. Processing of depth maps for images
US11331006B2 (en) 2019-03-05 2022-05-17 Physmodo, Inc. System and method for human motion detection and tracking
US10897672B2 (en) * 2019-03-18 2021-01-19 Facebook, Inc. Speaker beam-steering based on microphone array and depth camera assembly input
US11474209B2 (en) 2019-03-25 2022-10-18 Magik Eye Inc. Distance measurement using high density projection patterns
CN113424522A (zh) * 2019-03-27 2021-09-21 Oppo广东移动通信有限公司 Three-dimensional tracking using hemispherical or spherical visible-light depth images
CN111829456B (zh) 2019-04-19 2024-12-27 株式会社三丰 三维形状测定装置以及三维形状测定方法
CN110069006B (zh) * 2019-04-30 2020-12-25 中国人民解放军陆军装甲兵学院 Method and system for generating synthetic parallax images for holographic stereograms
US11019249B2 (en) 2019-05-12 2021-05-25 Magik Eye Inc. Mapping three-dimensional depth map data onto two-dimensional images
JP2020193946A (ja) * 2019-05-30 2020-12-03 本田技研工業株式会社 Optical device and gripping system
CN110297491A (zh) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and system based on multiple structured-light binocular IR cameras
CN110441784B (zh) * 2019-08-27 2025-01-10 浙江舜宇光学有限公司 Depth image imaging system and method
WO2021055585A1 (en) 2019-09-17 2021-03-25 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
US11238641B2 (en) 2019-09-27 2022-02-01 Intel Corporation Architecture for contextual memories in map representation for 3D reconstruction and navigation
US11639846B2 (en) 2019-09-27 2023-05-02 Honeywell International Inc. Dual-pattern optical 3D dimensioning
CN114766003B (zh) 2019-10-07 2024-03-26 波士顿偏振测定公司 Systems and methods for augmenting sensor systems and imaging systems with polarization
US11107271B2 (en) * 2019-11-05 2021-08-31 The Boeing Company Three-dimensional point data based on stereo reconstruction using structured light
WO2021108002A1 (en) 2019-11-30 2021-06-03 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
WO2021113135A1 (en) 2019-12-01 2021-06-10 Magik Eye Inc. Enhancing triangulation-based three-dimensional distance measurements with time of flight information
US11450018B1 (en) * 2019-12-24 2022-09-20 X Development Llc Fusing multiple depth sensing modalities
EP4094181A4 (en) 2019-12-29 2024-04-03 Magik Eye Inc. Associating three-dimensional coordinates with two-dimensional feature points
JP7699132B2 (ja) 2020-01-05 2025-06-26 マジック アイ インコーポレイテッド Method for transferring the coordinate system of a three-dimensional camera to the incidence position of a two-dimensional camera
CN115552486A (zh) 2020-01-29 2022-12-30 因思创新有限责任公司 Systems and methods for characterizing object pose detection and measurement systems
CN115428028A (zh) 2020-01-30 2022-12-02 因思创新有限责任公司 Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
US11508088B2 (en) 2020-02-04 2022-11-22 Mujin, Inc. Method and system for performing automatic camera calibration
JP6800506B1 (ja) 2020-02-04 2020-12-16 株式会社Mujin Method and system for performing automatic camera calibration
WO2021191694A1 (en) * 2020-03-23 2021-09-30 Ricoh Company, Ltd. Information processing apparatus and method of processing information
WO2021243088A1 (en) 2020-05-27 2021-12-02 Boston Polarimetrics, Inc. Multi-aperture polarization optical systems using beam splitters
KR20220014495A (ko) * 2020-07-29 2022-02-07 삼성전자주식회사 Electronic device and control method thereof
CN112129262B (zh) * 2020-09-01 2023-01-06 珠海一微半导体股份有限公司 Visual ranging method for a multi-camera group, and visual navigation chip
CN112033352B (zh) * 2020-09-01 2023-11-07 珠海一微半导体股份有限公司 Robot with multi-camera ranging and visual ranging method
CN112070700B (zh) * 2020-09-07 2024-03-29 深圳市凌云视迅科技有限责任公司 Method and apparatus for removing protrusion interference noise from a depth image
EP4285325A4 (en) * 2021-01-28 2024-12-25 Visionary Machines Pty Ltd SYSTEMS AND METHODS FOR COMBINING MULTIPLE DEPTH MAPS
US12069227B2 (en) 2021-03-10 2024-08-20 Intrinsic Innovation Llc Multi-modal and multi-spectral stereo camera arrays
US12020455B2 (en) 2021-03-10 2024-06-25 Intrinsic Innovation Llc Systems and methods for high dynamic range image reconstruction
CN112767435B (zh) * 2021-03-17 2024-12-10 深圳市归位科技有限公司 Method and apparatus for detecting and tracking captive target animals
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US12067746B2 (en) 2021-05-07 2024-08-20 Intrinsic Innovation Llc Systems and methods for using computer vision to pick up small objects
WO2022245855A1 (en) * 2021-05-18 2022-11-24 Snap Inc. Varied depth determination using stereo vision and phase detection auto focus (pdaf)
US12175741B2 (en) 2021-06-22 2024-12-24 Intrinsic Innovation Llc Systems and methods for a vision guided end effector
US12340538B2 (en) 2021-06-25 2025-06-24 Intrinsic Innovation Llc Systems and methods for generating and using visual datasets for training computer vision models
US12172310B2 (en) 2021-06-29 2024-12-24 Intrinsic Innovation Llc Systems and methods for picking objects using 3-D geometry and segmentation
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
US12293535B2 (en) 2021-08-03 2025-05-06 Intrinsic Innovation Llc Systems and methods for training pose estimators in computer vision
US20240296682A1 (en) * 2021-12-30 2024-09-05 Mobileye Vision Technologies Ltd. Image position dependent blur control within hdr blending scheme
KR20240020854A (ko) * 2022-08-09 2024-02-16 삼성전자주식회사 Electronic device and method for obtaining a depth map
CN115049658B (zh) * 2022-08-15 2022-12-16 合肥的卢深视科技有限公司 RGB-D camera quality inspection method, electronic device, and storage medium
US12242672B1 (en) 2022-10-21 2025-03-04 Meta Platforms Technologies, Llc Triggering actions based on detected motions on an artificial reality device
KR102674408B1 (ko) * 2022-12-28 2024-06-12 에이아이다이콤 (주) Non-contact medical image control system
US20240221207A1 (en) * 2023-01-02 2024-07-04 Samsung Electronics Co., Ltd. Electronic device and method for providing content in virtual space

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5818959A (en) * 1995-10-04 1998-10-06 Visual Interface, Inc. Method of producing a three-dimensional image from two-dimensional images
US6269175B1 (en) * 1998-08-28 2001-07-31 Sarnoff Corporation Method and apparatus for enhancing regions of aligned images using flow estimation
JP2001264033A (ja) * 2000-03-17 2001-09-26 Sony Corp Three-dimensional shape measuring apparatus and method, three-dimensional modeling apparatus and method, and program providing medium
JP2002013918A (ja) * 2000-06-29 2002-01-18 Fuji Xerox Co Ltd Three-dimensional image generation apparatus and three-dimensional image generation method
JP2002152776A (ja) * 2000-11-09 2002-05-24 Nippon Telegr & Teleph Corp <Ntt> Range image encoding method and apparatus, and range image decoding method and apparatus
ES2264476T3 (es) * 2001-04-04 2007-01-01 Instro Precision Limited Surface profile measurement
JP2004265222A (ja) * 2003-03-03 2004-09-24 Nippon Telegr & Teleph Corp <Ntt> Interface method, apparatus, and program
CA2435935A1 (en) * 2003-07-24 2005-01-24 Guylain Lemelin Optical 3d digitizer with enlarged non-ambiguity zone
US8139109B2 (en) * 2006-06-19 2012-03-20 Oshkosh Corporation Vision system for an autonomous vehicle
US8090194B2 (en) * 2006-11-21 2012-01-03 Mantis Vision Ltd. 3D geometric modeling and motion capture using both single and dual imaging
CN101627280B (zh) * 2006-11-21 2013-09-25 曼蒂斯影像有限公司 Three-dimensional geometric modeling and three-dimensional video content creation
DE102007031157A1 (de) * 2006-12-15 2008-06-26 Sick Ag Optoelectronic sensor and method for detecting and determining the distance of an object
JP5120926B2 (ja) * 2007-07-27 2013-01-16 有限会社テクノドリーム二十一 Image processing apparatus, image processing method, and program
EP2313737B1 (en) * 2008-08-06 2018-11-14 Creaform Inc. System for adaptive three-dimensional scanning of surface characteristics
CN101556696B (zh) * 2009-05-14 2011-09-14 浙江大学 Real-time depth map acquisition algorithm based on a camera array
CN101582165B (zh) * 2009-06-29 2011-11-16 浙江大学 Camera array calibration algorithm based on grayscale images and spatial depth data
WO2011013079A1 (en) * 2009-07-30 2011-02-03 Primesense Ltd. Depth mapping based on pattern matching and stereoscopic information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6229913B1 (en) * 1995-06-07 2001-05-08 The Trustees Of Columbia University In The City Of New York Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two-images due to defocus
US20090016572A1 (en) * 2002-05-21 2009-01-15 University Of Kentucky Research Foundation (Ukrf), Colorado Non-Profit System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns
US20050219552A1 (en) * 2002-06-07 2005-10-06 Ackerman Jermy D Methods and systems for laser based real-time structured light depth extraction
US20070189750A1 (en) * 2006-02-16 2007-08-16 Sony Corporation Method of and apparatus for simultaneously capturing and generating multiple blurred images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2614405A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015198260A (ja) * 2014-03-31 2015-11-09 アイホン株式会社 Surveillance camera system
US10033949B2 (en) 2016-06-16 2018-07-24 Semiconductor Components Industries, Llc Imaging systems with high dynamic range and phase detection pixels
US10498990B2 (en) 2016-06-16 2019-12-03 Semiconductor Components Industries, Llc Imaging systems with high dynamic range and phase detection pixels
US10593712B2 (en) 2017-08-23 2020-03-17 Semiconductor Components Industries, Llc Image sensors with high dynamic range and infrared imaging toroidal pixels
US11538183B2 (en) 2018-03-29 2022-12-27 Twinner Gmbh 3D object sensing system
CN108564614A (zh) * 2018-04-03 2018-09-21 Oppo广东移动通信有限公司 Depth acquisition method and apparatus, computer-readable storage medium, and computer device
US10931902B2 (en) 2018-05-08 2021-02-23 Semiconductor Components Industries, Llc Image sensors with non-rectilinear image pixel arrays
US11857153B2 (en) 2018-07-19 2024-01-02 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots

Also Published As

Publication number Publication date
US20120056982A1 (en) 2012-03-08
JP5865910B2 (ja) 2016-02-17
JP2013544449A (ja) 2013-12-12
CA2809240A1 (en) 2013-03-15
KR20140019765A (ko) 2014-02-17
CN102385237B (zh) 2015-09-16
CN102385237A (zh) 2012-03-21
EP2614405A1 (en) 2013-07-17
EP2614405A4 (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN102385237B (zh) Depth camera based on structured light and stereo vision
US8558873B2 (en) Use of wavefront coding to create a depth image
US8602887B2 (en) Synthesis of information from multiple audiovisual sources
US9344707B2 (en) Probabilistic and constraint based articulated model fitting
US8610723B2 (en) Fully automatic dynamic articulated model calibration
US8279418B2 (en) Raster scanning for depth detection
US8866898B2 (en) Living room movie creation
JP5773944B2 (ja) Information processing apparatus and information processing method
US8503766B2 (en) Systems and methods for detecting a tilt angle from a depth image
US8654152B2 (en) Compartmentalizing focus area within field of view
US8983233B2 (en) Time-of-flight depth imaging
US20110234481A1 (en) Enhancing presentations using depth sensing cameras
US20110190055A1 (en) Visual based identity tracking
KR20120093197A (ko) Human tracking system
WO2011087887A2 (en) Tracking groups of users in motion capture system
US20120311503A1 (en) Gesture to trigger application-pertinent information

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 11823916

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2809240

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 20137005894

Country of ref document: KR

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2011823916

Country of ref document: EP

WWE WIPO information: entry into national phase

Ref document number: 2011823916

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2013528202

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE