CN110612429A - Three-dimensional image ranging system and method - Google Patents

Three-dimensional image ranging system and method

Info

Publication number
CN110612429A
Authority
CN
China
Prior art keywords
image
phase
light
pixel
coordinate
Prior art date
Legal status
Granted
Application number
CN201880000670.XA
Other languages
Chinese (zh)
Other versions
CN110612429B (en)
Inventor
杨孟达
Current Assignee
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Goodix Technology Co Ltd
Application filed by Shenzhen Goodix Technology Co., Ltd.
Publication of CN110612429A
Application granted
Publication of CN110612429B

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of Optical Distance (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A three-dimensional image ranging system (10) includes a light emitting module (12), a photosensitive pixel array (142), and an arithmetic unit (16). The light emitting module emits first structured light at a first time and second structured light at a second time, wherein the structured light phase difference between the first structured light and the second structured light is an odd multiple of (π/2). The photosensitive pixel array produces a first image (P1) at the first time and a second image (P2) at the second time. The arithmetic unit is used for generating an in-phase image (I) and a quadrature image (Q) according to the first image and the second image, and for generating a depth image (DP) corresponding to the target object according to the in-phase image and the quadrature image.

Description

Three-dimensional image ranging system and method
Technical Field
The present disclosure relates to a three-dimensional image ranging system and method, and more particularly, to a three-dimensional image ranging system and method capable of calculating a fine position of a structured light.
Background
With the rapid development of science and technology, the acquisition of three-dimensional information of objects has wide application prospects in various application fields, such as production automation, human-computer interaction, medical diagnosis, reverse engineering, digital modeling and the like. The structured light three-dimensional measurement method is widely applied as a non-contact three-dimensional information acquisition technology due to the advantages of simplicity in implementation, high speed, high precision and the like.
The basic idea of structured light three-dimensional measurement is to use the geometric relationships of structured light projections to obtain three-dimensional information of an object. First, a coded structured light template is projected onto the object to be measured by a projection device, and the projected pattern is recorded with a camera. The captured image is then matched against the projected structured light template, and once the matching points are found, the three-dimensional information of the target object is solved from the triangular relation among the projection points, the matching points, and the object.
However, the conventional structured light three-dimensional measurement system is not precise enough to calculate the coordinate position of the object in the captured image, and thus the accuracy of the three-dimensional information is limited. Accordingly, there is a need for improvement in the art.
Disclosure of Invention
It is therefore an object of some embodiments of the present invention to provide a three-dimensional image ranging system and method capable of calculating a fine position of a structured light, so as to overcome the disadvantages of the prior art.
In order to solve the above technical problem, an embodiment of the present application provides a three-dimensional image ranging system, including: a light emitting module, including a diffraction unit, a first light-emitting unit and a second light-emitting unit, wherein the first light-emitting unit is used for emitting first light to the diffraction unit at a first time, and the diffraction unit diffracts the first light at the first time to generate first structured light; the second light-emitting unit is used for emitting second light to the diffraction unit at a second time, the diffraction unit diffracts the second light at the second time to generate second structured light, and the structured light phase difference between the first structured light and the second structured light is an odd multiple of (π/2); a photosensitive pixel array for receiving reflected light corresponding to the first structured light at the first time to generate a first image, and receiving reflected light corresponding to the second structured light at the second time to generate a second image; and an arithmetic unit, coupled to the photosensitive pixel array, for performing the following steps: generating an in-phase image associated with the first structured light and a quadrature image associated with the second structured light according to the first image and the second image; and generating a depth image corresponding to a target object according to the in-phase image and the quadrature image.
For example, the diffraction unit includes a first diffraction subunit and a second diffraction subunit, the first diffraction subunit diffracts the first light in the first time to generate the first structured light, and the second diffraction subunit diffracts the second light in the second time to generate the second structured light.
For example, the first light emitting unit receives a first pulse signal and emits the first light according to the first pulse signal, and the second light emitting unit receives a second pulse signal and emits the second light according to the second pulse signal.
For example, duty ratios of the first pulse signal and the second pulse signal are smaller than a specific duty ratio, and light emitting powers of the first light emitting unit and the second light emitting unit are larger than a specific power.
For example, the first light emitted by the first light emitting unit has a first incident angle with respect to the diffraction unit, the second light emitted by the second light emitting unit has a second incident angle with respect to the diffraction unit, and the first incident angle and the second incident angle are different, so that the structured-light phase difference between the first structured light and the second structured light is an odd multiple of (pi/2).
For example, the photosensitive pixel array includes a plurality of photosensitive pixel circuits, a first photosensitive pixel circuit of the plurality of photosensitive pixel circuits includes a photosensitive element; the first photoelectric reading circuit is coupled to the photosensitive element and comprises: a first transfer gate coupled to the photosensitive element, wherein the first transfer gate is turned on at the first time; a first output transistor coupled to the first transmission gate; and a first read transistor coupled to the first output transistor for outputting a first output signal; and a second photoelectric reading circuit, coupled to the photosensitive element, comprising: a second transfer gate coupled to the photosensitive element, wherein the second transfer gate is turned on at the second time; a second output transistor coupled to the second transmission gate; and a second read transistor coupled to the second output transistor for outputting a second output signal; wherein a first pixel value of the first image corresponding to the first photosensitive pixel circuit is the first output signal, and a second pixel value of the second image corresponding to the first photosensitive pixel circuit is the second output signal; wherein a first in-phase pixel value in the in-phase image corresponding to the first photosensitive pixel circuit is associated with the first output signal, and a first quadrature pixel value in the quadrature image corresponding to the first photosensitive pixel circuit is associated with the second output signal.
For example, the first light-sensitive pixel circuit further includes: a third photoelectric reading circuit, coupled to the photosensitive element, comprising: a third transfer gate coupled to the photosensitive element, wherein the third transfer gate is turned on at a third time, and neither the first light-emitting unit nor the second light-emitting unit emits light at the third time; a third output transistor coupled to the third transmission gate; and a third read transistor coupled to the third output transistor for outputting a third output signal; wherein the first in-phase pixel value corresponding to the first light-sensitive pixel circuit in the in-phase image is related to the first output signal minus the third output signal, and the first quadrature pixel value corresponding to the first light-sensitive pixel circuit in the quadrature image is related to the second output signal minus the third output signal; wherein, a plurality of third output signals output by the plurality of photosensitive pixel circuits form a third image.
For example, the first photosensitive pixel circuit further includes a fourth photoelectric readout circuit, coupled to the photosensitive element, including: a fourth transfer gate coupled to the photosensitive element, wherein the fourth transfer gate is turned on at a fourth time, and a time interval is formed between the first time and the fourth time; a fourth output transistor coupled to the fourth transmission gate; and a fourth read transistor coupled to the fourth output transistor for outputting a fourth output signal; wherein the first in-phase pixel value corresponding to the first photosensitive pixel circuit in the in-phase image is related to the first output signal and a fourth output signal; a plurality of fourth output signals output by the plurality of photosensitive pixel circuits form a fourth image; the arithmetic unit acquires a flight time distance corresponding to the target object according to the first image and the fourth image.
For example, the arithmetic unit is configured to perform the following steps to generate the depth image corresponding to the target object according to the in-phase image and the quadrature image: generating a phase image according to the in-phase image and the quadrature image, wherein the phase image represents an image phase between the in-phase image formed by the first structured light and the quadrature image formed by the second structured light; generating a first light stripe image corresponding to a first phase angle according to the phase image, wherein the first light stripe image records the coordinate positions at which the image phase equals the first phase angle; and generating the depth image corresponding to the target object according to the first light stripe image.
For example, the first phase angle is 0.
For example, the arithmetic unit is configured to perform the following steps to generate the first light stripe image corresponding to the first phase angle according to the phase image: obtaining a first phase image pixel value located at a first pixel coordinate position in the phase image, and obtaining a second phase image pixel value located at a second pixel coordinate position in the phase image, wherein the first pixel coordinate position is directly adjacent to the second pixel coordinate position in a first dimension; determining whether the first phase angle is between the first phase image pixel value and the second phase image pixel value; when the first phase angle is between the first phase image pixel value and the second phase image pixel value, performing interpolation operation according to the first phase angle, the first phase image pixel value and the second phase image pixel value to obtain an interpolation result; and storing the interpolation result to the first pixel coordinate position of the first light stripe image, wherein the interpolation result is the light stripe image pixel value at the first pixel coordinate position in the first light stripe image.
For example, the operation unit is configured to perform the following steps to perform the interpolation operation according to the first phase angle, the first phase image pixel value, and the second phase image pixel value to obtain the interpolation result: calculating the interpolation result as r = (n-1) + (θ - PHI(n-1, m)) / (PHI(n, m) - PHI(n-1, m)), where θ represents the first phase angle, (n, m) represents the first pixel coordinate position, (n-1, m) represents the second pixel coordinate position, PHI(n, m) represents the first phase image pixel value, and PHI(n-1, m) represents the second phase image pixel value.
For example, the arithmetic unit is further configured to perform the following steps to generate the first light streak image corresponding to the first phase angle according to the phase image: when the first phase angle is not between the first phase image pixel value and the second phase image pixel value in the phase image, the light stripe image pixel value corresponding to the first pixel coordinate position in the first light stripe image is 0.
For example, the arithmetic unit is configured to perform the following steps to generate the depth image corresponding to the target object according to the first light stripe image: obtaining a first two-dimensional image coordinate of the target object corresponding to a third pixel coordinate position, wherein the first two-dimensional image coordinate is (x0, y0) = (m, LSP1(n, m)), y0 represents the coordinate value of the first two-dimensional image coordinate in a first dimension, x0 represents the coordinate value of the first two-dimensional image coordinate in a second dimension, n represents the coordinate value of the third pixel coordinate position in the first dimension, m represents the coordinate value of the third pixel coordinate position in the second dimension, and LSP1(n, m) represents the pixel value of the first light stripe image at the third pixel coordinate position; acquiring a first three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the first two-dimensional image coordinate; and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the first three-dimensional image coordinate.
For example, the arithmetic unit is further configured to perform the following steps to generate the depth image corresponding to the target object according to the in-phase image and the quadrature image: generating at least one second light stripe image corresponding to at least one second phase angle according to the phase image, wherein the at least one second phase angle is different from the first phase angle, and the at least one second light stripe image is different from the first light stripe image phase; integrating the first light stripe image and the at least one second light stripe image into an integrated image; and generating the depth image corresponding to the target object according to the integrated image.
For example, the at least one second phase angle is an integer multiple of (2 π/L), L being a positive integer greater than 1.
For example, the arithmetic unit is configured to perform the following steps to integrate the first light stripe image and the at least one second light stripe image into the integrated image: generating the integrated image as an addition result of the first striation image and the at least one second striation image.
For example, the arithmetic unit is configured to perform the following steps to generate the depth image corresponding to the target object according to the integrated image: obtaining a second two-dimensional image coordinate of the target object in the integrated image corresponding to a fourth pixel coordinate position, wherein the second two-dimensional image coordinate is (x0, y0) = (m, MG(n, m)), y0 represents the coordinate value of the second two-dimensional image coordinate in the first dimension, x0 represents the coordinate value of the second two-dimensional image coordinate in a second dimension, n represents the coordinate value of the fourth pixel coordinate position in the first dimension, m represents the coordinate value of the fourth pixel coordinate position in the second dimension, and MG(n, m) represents the pixel value of the integrated image at the fourth pixel coordinate position; acquiring a second three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the second two-dimensional image coordinate; and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the second three-dimensional image coordinate.
For example, the photosensitive pixel array receives background light at a third time to generate a third image, the photosensitive pixel array receives reflected light corresponding to the first structured light at a fourth time to generate a fourth image, and the arithmetic unit is further configured to perform the following steps to generate the depth image corresponding to the target object according to the in-phase image and the quadrature image: generating a time-of-flight distance corresponding to the target object according to the first image and the fourth image; determining an angle according to the time-of-flight distance and the first light stripe image, wherein the angle represents an included angle between the target object, the light emitting module and the photosensitive pixel array; and generating the depth image corresponding to the target object according to the integrated image and the angle.
For example, the computing unit is further configured to perform the following steps to generate the depth image corresponding to the target object according to the integrated image and the angle: obtaining a second two-dimensional image coordinate of the target object in the integrated image corresponding to a fourth pixel coordinate position, wherein the second two-dimensional image coordinate is (x0, y0) = (m, MG(n, m)), y0 represents the coordinate value of the second two-dimensional image coordinate in the first dimension, x0 represents the coordinate value of the second two-dimensional image coordinate in a second dimension, n represents the coordinate value of the fourth pixel coordinate position in the first dimension, m represents the coordinate value of the fourth pixel coordinate position in the second dimension, and MG(n, m) represents the integrated image pixel value of the integrated image at the fourth pixel coordinate position; acquiring a second three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the second two-dimensional image coordinate and the angle; and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the second three-dimensional image coordinate.
In order to solve the above technical problem, an embodiment of the present invention provides a three-dimensional image ranging method applied to a three-dimensional image ranging system, where the three-dimensional image ranging system includes a light emitting module and a photosensitive pixel array, the light emitting module emits a first structured light at a first time and emits a second structured light at a second time, the photosensitive pixel array receives a reflected light corresponding to the first structured light at the first time to generate a first image, and receives a reflected light corresponding to the second structured light at the second time to generate a second image, where a phase difference between the first structured light and the second structured light is an odd multiple of (pi/2), and the three-dimensional image ranging method includes: generating an in-phase image associated with the first structured light and a quadrature image associated with the second structured light based on the first image and the second image; and generating a depth image corresponding to the target object according to the in-phase image and the orthogonal image.
The embodiment of the application calculates the image phase by utilizing the first structured light and the second structured light with the structured light phase difference of (pi/2) or (3 pi/2); and calculates the fine position of the structured light in a first direction/dimension (perpendicular to the structured light) based on the image phase. Compared with the prior art, the depth information of the target object can be accurately obtained.
Drawings
FIG. 1 is an external view of a three-dimensional image ranging system according to an embodiment of the present disclosure;
FIG. 2 is a functional block diagram of the three-dimensional image ranging system of FIG. 1;
FIG. 3 is a schematic diagram of a diffraction unit according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a first structured light and a second structured light according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a light-sensitive pixel circuit according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a light-sensitive pixel circuit according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a light-sensitive pixel circuit according to an embodiment of the present disclosure;
FIG. 9 is a timing diagram of a plurality of signals according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a process according to an embodiment of the present application;
FIG. 11 is a schematic view of a phase image according to an embodiment of the present application;
FIG. 12 is a schematic view of a process according to an embodiment of the present application;
FIG. 13 is a schematic view of a light streak image according to an embodiment of the present application;
FIG. 14 is a schematic view of a process according to an embodiment of the present application;
FIG. 15 is a schematic view of a process according to an embodiment of the present application;
FIG. 16 is a schematic view of a light streak image according to an embodiment of the present application;
FIG. 17 is a schematic view of a light streak image according to an embodiment of the present application;
FIG. 18 is a schematic view of a light streak image according to an embodiment of the present application;
FIG. 19 is a schematic diagram of an integrated image according to an embodiment of the present application;
FIG. 20 is a schematic view of a process according to an embodiment of the present application;
FIG. 21 is a diagram illustrating a relative position between a light-emitting module and a light-sensing module according to an embodiment of the present disclosure;
FIG. 22 is a schematic view of a process according to an embodiment of the present application;
FIG. 23 is a schematic view of a process according to an embodiment of the present application;
FIG. 24 is a schematic view of a light emitting module according to an embodiment of the present application;
FIG. 25 is a schematic view of light projected onto an object in a stripe-like structure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the present specification and claims, the addition, subtraction, multiplication, and division operations performed between images A and B are performed pixel by pixel. In detail, assume that A(n, m), B(n, m), and C(n, m) represent the pixel values of image A, image B, and image C at the pixel coordinate position (n, m), respectively. Image C being equal to image A plus image B (denoted as C = A + B) means that the (n, m)-th pixel value of image C is C(n, m) = A(n, m) + B(n, m); image C being equal to image A minus image B (denoted as C = A - B) means that C(n, m) = A(n, m) - B(n, m); image C being equal to image A multiplied by image B (denoted as C = A × B) means that C(n, m) = A(n, m) × B(n, m); and image C being equal to image A divided by image B (denoted as C = A / B) means that C(n, m) = A(n, m) / B(n, m). Performing a function f on image A means performing the function f on each pixel of image A; for example, C = f(A) means that the (n, m)-th pixel value of image C is C(n, m) = f(A(n, m)). In addition, n is the row index (Row Index) and m is the column index (Column Index) of the image. Angles and phases are expressed in radians.
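The pixel-wise convention above can be illustrated with a minimal NumPy sketch (the array contents below are arbitrary example values, not data from the disclosure); axis 0 plays the role of the row index n (first dimension D1) and axis 1 the column index m (second dimension D2):

```python
import numpy as np

# Illustrative sketch of the pixel-wise convention described above.
# Axis 0 is the row index n (first dimension D1), axis 1 is the column
# index m (second dimension D2); A, B and the values are arbitrary examples.
A = np.array([[0.1, 0.2], [0.3, 0.4]])
B = np.array([[1.0, 2.0], [3.0, 4.0]])

C_add = A + B          # C(n, m) = A(n, m) + B(n, m)
C_sub = A - B          # C(n, m) = A(n, m) - B(n, m)
C_mul = A * B          # C(n, m) = A(n, m) * B(n, m)
C_div = A / B          # C(n, m) = A(n, m) / B(n, m)
C_fun = np.arctan(A)   # C = f(A): the function applied to every pixel
```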
Referring to fig. 25, fig. 25 is a schematic diagram illustrating striped structured light projected onto an object (a palm). The striped structured light is perpendicular to the first direction D1 (or the first dimension D1) and parallel to the second direction D2 (or the second dimension D2). As shown in fig. 25, when the striped structured light is projected onto the object it is distorted by the shape of the object, so the image corresponding to the striped structured light varies strongly in the first direction/dimension D1. In the prior art, the coordinate position of the striped structured light image in the first direction/dimension D1 is limited by the pixel itself and cannot be further refined, which affects the distance calculated by the triangulation method. The present application can calculate the fine position of the striped structured light in the first direction/dimension D1, thereby accurately obtaining the depth information of the object.
Specifically, fig. 1 is an external view of a three-dimensional image ranging system 10 according to an embodiment of the present disclosure, and fig. 2 is a functional block diagram of the three-dimensional image ranging system 10. The three-dimensional image ranging system 10 may be disposed in an electronic device 1, and the electronic device 1 may be a smart phone or a tablet computer. The three-dimensional image ranging system 10 includes a light emitting module 12, a light sensing module 14 and an operation unit 16. The light emitting module 12 emits first structured light SL1 at a first time T1 and emits second structured light SL2 at a second time T2, wherein there is a structured light phase difference between the first structured light SL1 and the second structured light SL2, and the structured light phase difference is an odd multiple of (π/2); for example, the structured light phase difference may be (π/2) or (3π/2). The light sensing module 14 may include a lens (Lens) 146 and a photosensitive pixel array 142. The photosensitive pixel array 142 includes a plurality of photosensitive pixel circuits 144 arranged in an array, and is used for receiving the reflected light corresponding to the first structured light SL1 at the first time T1 to generate a first image P1, and receiving the reflected light corresponding to the second structured light SL2 at the second time T2 to generate a second image P2. The operation unit 16 is coupled to the photosensitive pixel array 142 and may include a processor or a differential operational amplifier; the operation unit 16 is configured to receive the first image P1 and the second image P2 and generate a depth image corresponding to the target object according to the first image P1 and the second image P2.
The light emitting module 12 includes a diffraction unit DE, a first light emitting unit LE1, and a second light emitting unit LE2, wherein the diffraction unit DE may include a single diffractive optical element (Diffractive Optical Element, DOE), and the first light emitting unit LE1 and the second light emitting unit LE2 may be light emitting diode (LED) or laser (Laser) emitting units. The first light emitting unit LE1 is used for emitting first light L1 to the diffraction unit DE at the first time T1, and the diffraction unit DE diffracts the first light L1 at the first time T1 to generate the first structured light SL1; the second light emitting unit LE2 is used for emitting second light L2 to the diffraction unit DE at the second time T2, and the diffraction unit DE diffracts the second light L2 at the second time T2 to generate the second structured light SL2.
The first light-emitting unit LE1 and the second light-emitting unit LE2 can emit strong light instantaneously (like the flash of a typical camera), and the first light L1 and the second light L2 can be pulse modulated (Pulse Modulated) light, so that the light signals related to the structured light SL1 and SL2 are not easily interfered with by background light. In other words, the first light emitting unit LE1 receives a first pulse signal pm1 and emits the first light L1 according to the first pulse signal pm1, and the second light emitting unit LE2 receives a second pulse signal pm2 and emits the second light L2 according to the second pulse signal pm2. The duty ratios of the first pulse signal pm1 and the second pulse signal pm2 are less than a specific duty ratio, and the light emitting powers of the first light emitting unit LE1 and the second light emitting unit LE2 are greater than a specific power. In an embodiment, the duty ratios of the first pulse signal pm1 and the second pulse signal pm2 may be smaller than 1/50 or smaller than 1/1000, and the light emitting powers of the first light emitting unit LE1 and the second light emitting unit LE2 may be larger than 4 watts. For the remaining technical details of how the light emitting module 12 generates the structured light SL1 and SL2 according to the pulse signals pm1 and pm2, reference may be made to existing documents, and the details are not repeated herein.
Fig. 3 is a schematic diagram of the first structured light SL1 and the second structured light SL2 according to an embodiment of the present disclosure. For convenience of illustration, in fig. 3a the first structured light SL1 and the second structured light SL2 are indicated by black stripes and diagonally shaded stripes, respectively, and the light stripes (Light Stripe) of the first structured light SL1 and of the second structured light SL2 are both perpendicular to the first direction/dimension D1 and parallel to the second direction/dimension D2. In fact, the black stripes and the hatched stripes represent the brightest light stripes of the first structured light SL1 and the second structured light SL2, respectively, and the curves CV1 and CV2 in fig. 3b represent the intensity distribution curves (Intensity Profile) of the first structured light SL1 and the second structured light SL2 along the first direction D1, respectively; the curves CV1 and CV2 are sinusoidal (Sinusoid) curves. In the example shown in fig. 3b, the curve CV2 can be regarded as being delayed by (π/2) compared to the curve CV1 (or, equivalently, the curve CV1 can be regarded as being advanced by (3π/2) compared to the curve CV2); that is, the structured light phase of the second structured light SL2 is delayed by (π/2) compared to the structured light phase of the first structured light SL1 (or the structured light phase of the first structured light SL1 is advanced by (3π/2) compared to that of the second structured light SL2). The curve CV1 can thus be regarded as a sine wave (Sine Wave) and the curve CV2 as a cosine wave (Cosine Wave).
On the other hand, the light stripes (black stripes) corresponding to the first structured light SL1 are separated from each other by a distance PD, and the fact that the structured light phase of the second structured light SL2 is delayed by (π/2) compared to that of the first structured light SL1 means that the light stripes (i.e., the diagonally shaded stripes) corresponding to the second structured light SL2 are located PD/4 below the light stripes (i.e., the black stripes) corresponding to the first structured light SL1, where PD may represent the period of the curve CV1.
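For illustration only, the quadrature relationship between the curves CV1 and CV2 described above can be sketched as follows; the period PD, amplitude, and sampling range are values assumed for this sketch and are not parameters taken from the disclosure:

```python
import numpy as np

# Illustrative sketch: intensity profiles along the first direction D1.
# PD is the stripe period; SL2 is delayed by (pi/2), i.e. shifted by PD/4.
PD = 40.0                                                 # assumed period (pixels)
y = np.arange(0, 200)                                     # positions along D1
cv1 = 0.5 + 0.5 * np.sin(2 * np.pi * y / PD)              # profile of SL1
cv2 = 0.5 + 0.5 * np.sin(2 * np.pi * y / PD - np.pi / 2)  # profile of SL2, delayed pi/2
# A quarter-period (PD/4) shift makes CV1 and CV2 a quadrature (sine/cosine-like)
# pair, so the brightest stripes of SL2 sit PD/4 below those of SL1.
```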
In order to achieve the (π/2) delay of the structured light phase of the second structured light SL2 compared to the structured light phase of the first structured light SL1, or equivalently to place the light stripes (diagonally shaded stripes) of the second structured light SL2 at PD/4 below the light stripes (black stripes) of the first structured light SL1, the incident angles of the first light-emitting unit LE1 and the second light-emitting unit LE2 with respect to the diffraction unit DE may be adjusted. Referring to fig. 4, fig. 4 is a schematic diagram of the first light-emitting unit LE1, the second light-emitting unit LE2 and the diffraction unit DE according to an embodiment of the present application. As shown in fig. 4, the first light L1 emitted from the first light-emitting unit LE1, after passing through the collimator (Collimator) CM, has a first incident angle μ1 with respect to the diffraction unit DE, and the second light L2 emitted by the second light-emitting unit LE2, after passing through the collimator CM, has a second incident angle μ2 with respect to the diffraction unit DE, wherein the first incident angle μ1 and the second incident angle μ2 are different. The first incident angle μ1 and the second incident angle μ2 can be adjusted so that the structured light phase of the second structured light SL2 is delayed by (π/2) compared to the structured light phase of the first structured light SL1, i.e., the structured light phase difference between the first structured light SL1 and the second structured light SL2 is (π/2).
In addition, since the curve CV1 is a sine wave and the curve CV2 is a cosine wave, the arithmetic unit 16 can generate an in-phase (In-Phase) image I associated with the first structured light SL1 and a quadrature (Quadrature) image Q associated with the second structured light SL2 according to the first image P1 corresponding to the reflected light of the first structured light SL1 and the second image P2 corresponding to the reflected light of the second structured light SL2, calculate the image phase between the in-phase image I and the quadrature image Q, and generate a depth image corresponding to the target object according to the image phase.
In addition, in the present specification and claims, pixel values of different rows (Row) in the image represent pixel values captured by the photosensitive module 14 corresponding to different positions in the first direction/dimension D1, and pixel values of different columns (Column) in the image represent pixel values captured by the photosensitive module 14 corresponding to different positions in the second direction/dimension D2.
The operation of the arithmetic unit 16 can be summarized as a flow A0. Fig. 5 is a schematic diagram of the process A0 according to an embodiment of the present application. The process A0 can be performed by the arithmetic unit 16 and comprises the following steps:
step A02: an in-phase image I associated with the first structured light SL1 and an orthogonal image Q associated with the second structured light SL2 are generated according to the first image P1 and the second image P2.
Step A04: and generating a depth image DP corresponding to the target object OBJ according to the in-phase image I and the orthogonal image Q.
In step A02, the arithmetic unit 16 generates the in-phase image I according to the first image P1 and the quadrature image Q according to the second image P2. In detail, referring to fig. 6, 7 and 8, fig. 6, 7 and 8 are schematic diagrams of a photosensitive pixel circuit 60, a photosensitive pixel circuit 70 and a photosensitive pixel circuit 80, respectively, according to an embodiment of the present application. The photosensitive pixel circuits 60, 70, and 80 may be used to implement the photosensitive pixel circuit 144, and for convenience of description, the photosensitive pixel circuits 60, 70, and 80 may be the photosensitive pixel circuit (i.e., the (n, m)-th photosensitive pixel circuit) located at the pixel coordinate position (n, m) in the photosensitive pixel array 142.
The photosensitive pixel circuit 60 includes a photosensitive element PD, which may be a photosensitive Diode (Photo Diode), and photoelectric reading circuits 61 and 62. The photoelectric read circuit 61 includes a transmission gate TG1, an output transistor DV1, and a read transistor RD1, and the photoelectric read circuit 62 includes a transmission gate TG2, an output transistor DV2, and a read transistor RD 2. The transmission gates TG1 and TG2 are coupled to the photo sensor PD, the output transistors DV1 and DV2 are coupled to the transmission gates TG1 and TG2, respectively, and the read transistors RD1 and RD2 are coupled to the output transistors DV1 and DV2, respectively, and output a first output signal Pout1 and a second output signal Pout2, respectively. The transmission gates TG1 and TG2 receive signals TX1 and TX2, respectively, and the read transistors RD1 and RD2 receive a signal ROW. The photoelectric reading circuits 61 and 62 further include reset transistors RT1 and RT2, respectively, and the reset transistors RT1 and RT2 receive a reset signal Rst. The photosensitive pixel circuit 60 further includes an Anti-Blooming transistor AB for extracting photoelectrons generated by the photosensitive element PD receiving background light, so as to avoid affecting the normal operation of the circuit. The anti-blooming transistor AB receives the signal TX 5.
The light-sensitive pixel circuit 70 is similar to the light-sensitive pixel circuit 60, and like elements are labeled with like reference numerals. Unlike the light-sensitive pixel circuit 60, the light-sensitive pixel circuit 70 further includes a photoelectric reading circuit 63, and the photoelectric reading circuit 63 has the same circuit structure as the photoelectric reading circuits 61 and 62, and includes a transmission gate TG3, an output transistor DV3, and a read transistor RD3, wherein the transmission gate TG3 receives the signal TX3.
The light-sensitive pixel circuit 80 is similar to the light-sensitive pixel circuit 70, and like elements are labeled with like reference numerals. Unlike the light-sensitive pixel circuit 70, the light-sensitive pixel circuit 80 further includes a photoelectric reading circuit 64, and the photoelectric reading circuit 64 has the same circuit structure as the photoelectric reading circuits 61, 62 and 63, and includes a transmission gate TG4, an output transistor DV4, and a read transistor RD4, wherein the transmission gate TG4 receives the signal TX4. The light-sensitive pixel circuit 80 can be used to calculate the time of flight (Time of Flight, ToF) of the light emitted by the light-emitting module 12, and the operation unit 16 can obtain the time-of-flight distance corresponding to the target object by a time-of-flight ranging method according to the time of flight of the light, which will be described in detail later.
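The exact time-of-flight calculation is only referenced here and described later; purely as a hedged illustration, the sketch below shows one common two-window pulsed ToF estimate built from the first, third and fourth images. The pulse width T_PULSE and the ratio-based formula are assumptions of this sketch, not the method stated in the text:

```python
import numpy as np

C_LIGHT = 3.0e8          # speed of light (m/s)
T_PULSE = 30e-9          # assumed light-pulse width (s); not specified in the text

def tof_distance(P1, P4, P3, t_pulse=T_PULSE):
    """Hedged sketch of a common two-window pulsed ToF estimate.

    P1: image integrated while the pulse is emitted (first time T1)
    P4: image integrated in a delayed window (fourth time T4)
    P3: background image (third time T3), subtracted from both windows
    """
    q1 = np.clip(P1 - P3, 1e-12, None)       # background-corrected charge, window 1
    q2 = np.clip(P4 - P3, 0.0, None)         # background-corrected charge, window 2
    ratio = q2 / (q1 + q2)                   # fraction of the pulse arriving late
    return 0.5 * C_LIGHT * t_pulse * ratio   # per-pixel distance in metres
```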
Referring to fig. 9, fig. 9 shows timing diagrams of the first pulse signal pm1, the second pulse signal pm2, the signals TX1, TX2, TX3, TX4, TX5, and the reset signal Rst according to an embodiment of the present application. As shown in fig. 9, the first pulse signal pm1 and the second pulse signal pm2 have pulses at a time T1' and a time T2', respectively; the first light emitting unit LE1 emits the first light L1 at the time T1', and the second light emitting unit LE2 emits the second light L2 at the time T2', wherein the time T1' overlaps with and lies within the first time T1, and the time T2' overlaps with and lies within the second time T2.
At the first time T1, the transfer gate TG1 of each photosensitive pixel circuit 144 (which may be the photosensitive pixel circuit 60, 70, or 80) in the photosensitive pixel array 142 is turned on, the read transistor RD1 of each photosensitive pixel circuit 144 outputs the first output signal Pout1, and the photosensitive pixel array 142 outputs the first image P1 according to the first output signals Pout1 output by the photosensitive pixel circuits 144.
At the second time T2, the transfer gate TG2 of each photosensitive pixel circuit 144 (which may be the photosensitive pixel circuit 60, 70, or 80) in the photosensitive pixel array 142 is turned on, the read transistor RD2 of each photosensitive pixel circuit 144 outputs the second output signal Pout2, and the photosensitive pixel array 142 outputs the second image P2 according to the second output signals Pout2 output by the photosensitive pixel circuits 144.
At the third time T3, the transfer gate TG3 of each photosensitive pixel circuit 144 (which may be the photosensitive pixel circuit 70 or 80) in the photosensitive pixel array 142 is turned on, the read transistor RD3 of each photosensitive pixel circuit 144 outputs the third output signal Pout3, and the photosensitive pixel array 142 outputs the third image P3 according to the third output signals Pout3 output by the photosensitive pixel circuits 144. The third time T3 does not overlap with the first time T1 or the second time T2, i.e., at the third time T3 neither the first light emitting unit LE1 nor the second light emitting unit LE2 emits light. In other words, at the third time T3 the photosensitive pixel array 142 receives the background light to generate the third image P3.
At the fourth time T4, the transfer gate TG4 of each photosensitive pixel circuit 144 (which may be the photosensitive pixel circuit 80) in the photosensitive pixel array 142 is turned on, the read transistor RD4 of each photosensitive pixel circuit 144 outputs the fourth output signal Pout4, and the photosensitive pixel array 142 outputs the fourth image P4 according to the fourth output signals Pout4 output by the photosensitive pixel circuits 144. There is a time interval Td between the fourth time T4 and the first time T1.
In one embodiment, the photosensitive pixel circuit 144 is implemented by the photosensitive pixel circuit 60, and the arithmetic unit 16 generates the in-phase image I as the first image P1 and the quadrature image Q as the second image P2, i.e., I = P1 (formula 1.1) and Q = P2 (formula 1.2); that is, the (n, m)-th in-phase image pixel value I(n, m) in the in-phase image I may be Pout1, and the (n, m)-th quadrature image pixel value Q(n, m) in the quadrature image Q may be Pout2. When the light emitting powers of the first light emitting unit LE1 and the second light emitting unit LE2 are strong enough that the background light is negligible, the arithmetic unit 16 can generate the in-phase image I and the quadrature image Q according to formulas 1.1 and 1.2.
In one embodiment, the photosensitive pixel circuit 144 is implemented by the photosensitive pixel circuit 70, and the arithmetic unit 16 generates the in-phase image I as the first image P1 minus the third image P3, and the quadrature image Q as the second image P2 minus the third image P3, i.e., I = P1 - P3 (formula 2.1) and Q = P2 - P3 (formula 2.2); that is, the (n, m)-th in-phase image pixel value I(n, m) may be Pout1 - Pout3, and the (n, m)-th quadrature image pixel value Q(n, m) may be Pout2 - Pout3. When the operation unit 16 generates the in-phase image I and the quadrature image Q according to formulas 2.1 and 2.2, the interference of the background light can be eliminated.
In one embodiment, the photosensitive pixel circuit 144 is implemented by the photosensitive pixel circuit 80, and the operation unit 16 generates the in-phase image I as the first image P1 plus the fourth image P4 minus 2 times the third image P3, and the quadrature image Q as the second image P2 minus the third image P3, i.e., I = P1 + P4 - 2 × P3 (formula 3.1) and Q = P2 - P3 (formula 2.2); in other words, the (n, m)-th in-phase image pixel value I(n, m) may be Pout1 + Pout4 - 2 × Pout3, and the (n, m)-th quadrature image pixel value Q(n, m) may be Pout2 - Pout3. When the photosensitive pixel circuit 144 is implemented by the photosensitive pixel circuit 80 and the operation unit 16 generates the in-phase image I and the quadrature image Q according to formulas 3.1 and 2.2, the operation unit 16 additionally calculates the time-of-flight distance corresponding to the target object according to the first image P1 and the fourth image P4. Formulas 2.1, 2.2, and 3.1 in step A02 can be implemented by a differential operational amplifier in the operation unit 16.
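A minimal sketch of formulas 1.1/1.2, 2.1/2.2 and 3.1, selecting the variant according to which readout images are available (the function name and control flow are illustrative only):

```python
import numpy as np

def make_iq(P1, P2, P3=None, P4=None):
    """Sketch of formulas 1.1/1.2, 2.1/2.2 and 3.1 for the in-phase image I
    and the quadrature image Q, depending on which readout images exist."""
    if P3 is None:                       # pixel circuit of fig. 6: formulas 1.1, 1.2
        return P1, P2
    if P4 is None:                       # pixel circuit of fig. 7: formulas 2.1, 2.2
        return P1 - P3, P2 - P3
    # pixel circuit of fig. 8: formulas 3.1 and 2.2
    return P1 + P4 - 2.0 * P3, P2 - P3
```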
In step a04, the arithmetic unit 16 generates a depth image DP corresponding to the target object OBJ based on the in-phase image I and the quadrature image Q. Referring to fig. 10, fig. 10 is a schematic diagram of a process B0 according to an embodiment of the present disclosure. Process B0 is a detailed operation of step A04, which includes the following steps:
step B02: a phase image PHI is generated from the in-phase image I and the quadrature image Q.
Step B04: generating a first light stripe image LSP1 corresponding to a first phase angle θ1 according to the phase image PHI.
Step B06: generating a depth image DP corresponding to the target object OBJ according to the first light stripe image LSP1.
In step B02, the arithmetic unit 16 generates the phase image PHI as PHI = tan⁻¹(I/Q) (formula 4), that is, the (n, m)-th pixel value PHI(n, m) of the phase image PHI is PHI(n, m) = tan⁻¹(I(n, m)/Q(n, m)). Since the first structured light SL1 and the second structured light SL2 have a (π/2) structured light phase difference, the curve CV1 can be regarded as a sine wave and the curve CV2 as a cosine wave, and thus the phase image PHI can be regarded as the image phase between the in-phase image I and the quadrature image Q.
Referring to fig. 11, fig. 11 is a schematic diagram of a phase image PHI according to an embodiment of the present disclosure. In fig. 11, the phase image PHI is a 20 × 10 image, and fig. 11 shows the phase image PHI after mask (Mask) processing, in which the image of the non-effective region (i.e., the region not corresponding to the target object OBJ) has been excluded; that is, after executing formula 4, the operation unit 16 may set the pixel values of the phase image PHI corresponding to the non-effective region to a specific value outside the range 0 to 2π, which is -1 in the embodiment of fig. 11. In other words, the pixels with pixel values between 0 and 2π in the phase image PHI of fig. 11 form the image of the effective region (i.e., the target image corresponding to the target object OBJ). In addition, in one embodiment, the operation unit 16 may determine the masked region of the mask processing, i.e., determine the effective region or the non-effective region, according to the signal strength of the output signals (e.g., Pout1, Pout2, Pout3, or Pout4) of the photosensitive pixel circuits.
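A sketch of formula 4 together with the mask step might look as follows; the use of arctan2, the signal-strength criterion, and the threshold value are assumptions of this sketch, since the text only states that the masked region is determined from the output-signal strength:

```python
import numpy as np

def phase_image(I, Q, signal, threshold=50.0, mask_value=-1.0):
    """Sketch of formula 4 plus the mask step described above.

    The text writes PHI = tan^-1(I/Q); arctan2 is used here as one common
    way to obtain an unambiguous angle in [0, 2*pi). 'signal' and
    'threshold' stand in for the output-signal-strength test, which the
    text mentions but does not quantify.
    """
    phi = np.mod(np.arctan2(I, Q), 2 * np.pi)             # image phase per pixel
    return np.where(signal > threshold, phi, mask_value)  # mask the non-effective region
```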
In step B04, the operation unit 16 generates the first light stripe image LSP1 corresponding to the first phase angle θ1 according to the phase image PHI. In one embodiment, the operation unit 16 may execute step B04 according to the masked phase image PHI (as shown in fig. 11). In one embodiment, the first phase angle θ1 is 0. Referring to fig. 12, fig. 12 is a schematic diagram of a process C0 according to an embodiment of the present disclosure. Generally, the process C0 is used to generate a (general) light stripe image LSP corresponding to a (general) phase angle θ; for step B04, the arithmetic unit 16 executes the process C0 to generate the (specific) light stripe image LSP1 corresponding to the (specific) phase angle θ1, that is, in step B04 the arithmetic unit 16 substitutes the (specific) phase angle θ1 into the process C0 to generate the (specific) light stripe image LSP1. The following description of the process C0 is therefore given in terms of the (general) phase angle θ and the (general) light stripe image LSP. The process C0 shown in fig. 12 includes the following steps:
step C00: and starting.
Step C02: the (n, m) -th pixel value PHI (n, m) of the phase image PHI is obtained.
Step C04: determine whether the pixel value PHI (n, m) is between 0 and 2 pi? If yes, go to step C06; if not, go to step C14.
Step C06: the (n-1, m) -th pixel value PHI (n-1, m) of the phase image PHI is obtained.
Step C08: determine whether the phase angle θ is between the pixel value PHI(n, m) and the pixel value PHI(n-1, m). If yes, go to step C10; if not, go to step C14.
Step C10: an interpolation operation is performed to obtain an interpolation result r.
Step C12: let LSP (n, m) be r.
Step C14: let LSP (n, m) be 0.
Step C16: and (6) ending.
In step C00, the arithmetic unit 16 skips the row index n = 1 and starts the calculation from row indices greater than or equal to 2 (n ≥ 2). In addition, in an embodiment, the operation unit 16 may initialize all pixel values of the light stripe image LSP to 0.
In step C02, the arithmetic unit 16 obtains the (n, m) th pixel value PHI (n, m) (which may correspond to the first phase image pixel value in the claims) of the phase image PHI. In step C06, the operation unit 16 obtains the (n-1, m) th pixel value PHI (n-1, m) (which may correspond to the second phase image pixel value in the claims) of the phase image PHI. It should be noted that the (n, m) th pixel of the phase image PHI and the (n-1, m) th pixel of the phase image PHI are two pixels directly adjacent to each other in the first direction/dimension D1.
In step C08, the operation unit 16 determines whether the phase angle θ is between the pixel value PHI(n-1, m) and the pixel value PHI(n, m); for example, the operation unit 16 may determine whether PHI(n-1, m) ≤ θ < PHI(n, m) (or whether PHI(n-1, m) < θ ≤ PHI(n, m), PHI(n-1, m) ≤ θ ≤ PHI(n, m), or PHI(n-1, m) < θ < PHI(n, m)) is true (True). In addition, when the phase angle θ is 0 (θ = 0), the arithmetic unit 16 determines in step C08 whether PHI(n-1, m) + π ≤ θ + π < PHI(n, m) + π in the sense of formula 5 below, i.e., whether formula 5 is true.
mod(PHI(n-1, m) + π, 2π) ≤ mod(θ + π, 2π) < mod(PHI(n, m) + π, 2π) (formula 5)
In step C10, the computing unit 16 calculates the interpolation result r according to the phase angle θ and the phase image pixel values PHI(n, m) and PHI(n-1, m), as shown in formula 6 below. The purpose of step C10 is to calculate the fine position at which the structured light phase difference between the first structured light SL1 and the second structured light SL2 equals the phase angle θ; more precisely, through the interpolation operation of step C10, the computing unit 16 obtains the fine position in the first direction D1 (or the first dimension D1) at which the structured light phase difference is exactly the phase angle θ.
r = (n-1) + (θ - PHI(n-1, m)) / (PHI(n, m) - PHI(n-1, m)) (formula 6)
In step C12, the arithmetic unit 16 stores the interpolation result r to the pixel coordinate position (n, m) of the light stripe image LSP, and the interpolation result r becomes the light stripe image pixel value LSP(n, m) at the pixel coordinate position (n, m) in the light stripe image LSP, i.e., LSP(n, m) = r.
In addition, the non-zero light stripe image pixel values LSP(n, m) obtained by the operation unit 16 through the process C0 lie between n-1 and n, i.e., n-1 < LSP(n, m) < n. Further, when the light stripe image pixel value LSP(n, m) is not 0, it indicates that the structured light phase difference (between the first structured light SL1 and the second structured light SL2) equal to the phase angle θ occurs between the pixel coordinate position (n-1, m) and the pixel coordinate position (n, m), and LSP(n, m) represents the fine position in the first direction/dimension D1 at which that structured light phase difference equal to the phase angle θ occurs.
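Process C0 can be sketched as follows; the interpolation line implements the linear interpolation of formula 6 as reconstructed above, and the 0-based NumPy row index differs by one from the 1-based row numbering used in the text:

```python
import numpy as np

def light_stripe_image(phi, theta, mask_value=-1.0):
    """Sketch of process C0: for every pixel (n, m), test whether the phase
    angle theta lies between PHI(n-1, m) and PHI(n, m) and, if so, store the
    interpolated fine row position (formula 6) at LSP(n, m)."""
    rows, cols = phi.shape
    lsp = np.zeros((rows, cols))                 # all pixel values preset to 0
    for n in range(1, rows):                     # step C00: start from the second row
        for m in range(cols):
            p_cur, p_prev = phi[n, m], phi[n - 1, m]
            if p_cur == mask_value or p_prev == mask_value:
                continue                         # outside the effective region (step C04)
            if theta == 0.0:                     # formula 5: shift by pi before comparing
                lo = np.mod(p_prev + np.pi, 2 * np.pi)
                hi = np.mod(p_cur + np.pi, 2 * np.pi)
                t = np.mod(theta + np.pi, 2 * np.pi)
            else:
                lo, hi, t = p_prev, p_cur, theta
            if lo <= t < hi:                     # step C08
                # formula 6 (linear interpolation), giving a value between n-1 and n;
                # note: 0-based rows here, 1-based rows in the text.
                lsp[n, m] = (n - 1) + (t - lo) / (hi - lo)
    return lsp
```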
Referring to fig. 13, fig. 13 shows the light stripe image LSP1 generated after the operation unit 16 executes step B04 (wherein the phase angle θ is the first phase angle θ1, and θ = θ1 = 0). For convenience of illustration, fig. 13 further indicates the first direction/dimension D1 and the second direction/dimension D2, wherein the first dimension D1 may correspond to the y-axis of the two-dimensional image coordinates, and the second dimension D2 may correspond to the x-axis of the two-dimensional image coordinates. As can be seen from fig. 13, a structured light phase difference (between the first structured light SL1 and the second structured light SL2) of 0 occurs at the dot-shaded positions of the light stripe image LSP1. It is thus known that the structured light phase difference of 0 occurs between the pixel coordinate position (n-1, m) and the pixel coordinate position (n, m), and further, the fine position in the first direction D1 (or the first dimension D1) at which the structured light phase difference of 0 occurs is LSP1(n, m). For example, the structured light phase difference of 0 occurs between the pixel coordinate position (8, 4) and the pixel coordinate position (9, 4), i.e., between 8 and 9 in the first direction/dimension D1; more precisely, the fine position at which the structured light phase difference of 0 occurs is 8.75 in the first direction/dimension D1.
In step B06, the operation unit 16 generates the depth image DP corresponding to the target object OBJ according to the light stripe image LSP1. In one embodiment, the computing unit 16 performs the depth/distance calculation only for the pixel coordinate positions whose light stripe image pixel values in LSP1 are not 0. Referring to fig. 14, fig. 14 is a schematic diagram of a process D0 according to an embodiment of the present disclosure. Process D0 is a detailed operation of step B06, which includes the following steps:
step D02: acquiring the two-dimensional image coordinates (x0, y0) of the object OBJ corresponding to the pixel coordinate position (n, m), where (x0, y0) = (m, LSP1(n, m)).
Step D04: acquiring the three-dimensional image coordinates (X0, Y0, Z0) of the object OBJ corresponding to the pixel coordinate position (n, m) from the two-dimensional image coordinates (x0, y0).
Step D06: calculating the depth image pixel value DP(n, m) of the object OBJ corresponding to the pixel coordinate position (n, m) from the three-dimensional image coordinates (X0, Y0, Z0).
In step D02, the arithmetic unit 16 obtains the two-dimensional image coordinates (x0, y0) of the target object OBJ corresponding to the pixel coordinate position (n, m) as (x0, y0) = (m, LSP1(n, m)), where x0 represents the coordinate value of the two-dimensional image coordinates (x0, y0) in the second dimension D2, y0 represents the coordinate value of the two-dimensional image coordinates (x0, y0) in the first dimension D1, n represents the coordinate value of the pixel coordinate position (n, m) in the first dimension D1, m represents the coordinate value of the pixel coordinate position (n, m) in the second dimension D2, and the light stripe image pixel value LSP1(n, m) is not 0.
In step D04, the operation unit 16 acquires the three-dimensional image coordinates (X0, Y0, Z0) of the object OBJ corresponding to the pixel coordinate position (n, m) according to the two-dimensional image coordinates (x0, y0). In one embodiment, the computing unit 16 can use formula 7 (i.e., triangulation) to calculate the three-dimensional image coordinates (X0, Y0, Z0) from the two-dimensional image coordinates (x0, y0), where b is the distance between the light emitting module 12 and the light sensing module 14, α is the pitch angle (Elevation Angle) of the target object OBJ with respect to the light emitting module 12, ρ is the azimuth angle (Azimuth Angle) of the target object OBJ with respect to the light emitting module 12, and f is the focal length of the lens. The details of formula 7/triangulation are known to those skilled in the art, reference can be made to the prior art, and they are not repeated herein.
[Formula 7, supplied as an equation image in the original: the triangulation relation that maps the two-dimensional image coordinates (x0, y0) to the three-dimensional image coordinates (X0, Y0, Z0) in terms of b, α, ρ and f.]
In step D06, the operation unit 16 calculates the depth image pixel value DP(n, m) from the three-dimensional image coordinates (X0, Y0, Z0). In one embodiment, the computing unit 16 can calculate the depth image pixel value DP(n, m) as
[Equation image in the original: the expression for the depth image pixel value DP(n, m) in terms of the three-dimensional image coordinates (X0, Y0, Z0).]
In addition, when LSP1(n, m) is 0, in one embodiment the depth image pixel value DP(n, m) may be 0 (i.e., DP(n, m) = 0).
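Steps D04 and D06 can likewise be sketched in a few lines. The snippet below is an illustration under two explicit assumptions that are not spelled out in the text above: formula 7 is taken in the common laser-plane triangulation form Z0 = b·f/(f·cot α - y0), X0 = x0·Z0/f, Y0 = y0·Z0/f (the role of the azimuth angle ρ is omitted), and the depth pixel DP(n, m) is taken as the Euclidean distance to the reconstructed three-dimensional point.

```python
import numpy as np

def depth_from_stripe(lsp: np.ndarray, b: float, f: float, alpha: float) -> np.ndarray:
    """Sketch of process D0: depth image from a light stripe image via triangulation."""
    rows, cols = lsp.shape
    dp = np.zeros((rows, cols), dtype=float)
    for m in range(cols):
        for n in range(rows):
            if lsp[n, m] == 0:
                continue                                  # DP(n, m) = 0 where no stripe was found
            x0, y0 = float(m), float(lsp[n, m])           # step D02: (x0, y0) = (m, LSP1(n, m))
            # step D04 (assumed form of formula 7): plane triangulation with baseline b,
            # elevation angle alpha and focal length f
            z0 = b * f / (f / np.tan(alpha) - y0)
            x3d, y3d = x0 * z0 / f, y0 * z0 / f
            # step D06 (assumed): depth pixel as the distance to the 3-D point
            dp[n, m] = np.sqrt(x3d ** 2 + y3d ** 2 + z0 ** 2)
    return dp
```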
In addition, in the prior art, the two-dimensional image coordinates (x0, y0) used for the triangulation distance calculation are (x0, y0) = (m, n), where n is an integer, so the coordinate position of the image phase of the striped structured light in the first direction/dimension D1 cannot be expressed finely. In contrast, the present application performs the triangulation distance calculation with the two-dimensional image coordinates (x0, y0) = (m, LSP1(n, m)), where LSP1(n, m) is a rational number between n-1 and n. In other words, the present application can finely express the coordinate position of the image phase of the striped structured light in the first direction/dimension D1, and thereby accurately obtain the depth information of the object.
In brief, the three-dimensional image ranging system 10 utilizes the light emitting module 12, which includes the first light emitting unit LE1 and the second light emitting unit LE2, to emit the first structured light SL1 and the second structured light SL2 with a phase difference of (π/2) or (3π/2); using the process A0, it generates an in-phase image I associated with the first structured light SL1 and a quadrature image Q associated with the second structured light SL2; using the process B0, it generates a phase image PHI representing the image phase between the in-phase image I and the quadrature image Q; using the process C0, it generates the light stripe image LSP1 corresponding to the first phase angle θ1; and using the process D0, it generates a depth image DP corresponding to the target object OBJ according to the light stripe image LSP1. Compared with the prior art, the present application can obtain the fine position in the first direction/dimension D1 at which the structured-light phase difference equals the first phase angle θ1, and can therefore calculate the depth value of the target object OBJ more accurately. Further, the three-dimensional ranging system 10 utilizes step B04/process C0 to calculate the fine position of the striped structured light in the first direction/dimension D1, so as to increase the fineness of the depth value of the target object OBJ. In addition, the three-dimensional image ranging system 10 calculates the depth value by using the image phase between the in-phase image I and the orthogonal image Q, and the image phase is independent of the reflected light intensity, so the depth value calculated by the three-dimensional image ranging system 10 is not affected by the light reflectivity of the target object OBJ.
However, as can be seen from fig. 13, the light stripe image LSP1 is a sparse (Sparse) image, so the depth image DP is also sparse, where a sparse image means that most of the pixels in the image have the value 0 and only a few pixels have values other than 0. In order to make the depth image DP denser (Dense), in addition to the light stripe image LSP1 corresponding to the first phase angle θ1, the present application may also generate at least one light stripe image LSP' (which may correspond to the at least one second light stripe image in the claims) corresponding to at least one phase angle θ' (which may correspond to the at least one second phase angle in the claims) to increase the density of the depth image DP, where the at least one phase angle θ' is different from the first phase angle θ1 and the at least one light stripe image LSP' is different from the light stripe image LSP1.
Specifically, fig. 15 is a schematic diagram of a process E0 according to an embodiment of the present application, where the process E0 includes the following steps:
step E02: a phase image PHI is generated from the in-phase image I and the quadrature image Q.
Step E04: a light stripe image LSP1 corresponding to the first phase angle θ1 is generated according to the phase image PHI.
Step E06: at least one light stripe image LSP' corresponding to at least one phase angle θ' is generated according to the phase image PHI, wherein the at least one phase angle θ' is different from the first phase angle θ1 and the at least one light stripe image LSP' is different from the light stripe image LSP1.
Step E08: the light stripe image LSP1 and the at least one light stripe image LSP' are integrated into an integrated image MG.
Step E10: according to the integrated image MG, a depth image DP corresponding to the object OBJ is generated.
Steps E02 and E04 in the process E0 are the same as steps B02 and B04 in the process B0, and are not described herein again.
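As a reference point for the sketches in this section, step E02 (identical to step B02) can be illustrated as follows, under the assumption that the image phase between the in-phase image I and the quadrature image Q is the four-quadrant arctangent of Q over I mapped to [0, 2π); the exact relation used in process B0 is the one defined earlier in this document, so this is an assumption made only for illustration.

```python
import numpy as np

def phase_image(i_img: np.ndarray, q_img: np.ndarray) -> np.ndarray:
    """Sketch of step E02/B02: per-pixel image phase from the I and Q images
    (assumed atan2 form, mapped to [0, 2*pi))."""
    phi = np.arctan2(q_img, i_img)       # four-quadrant arctangent, in (-pi, pi]
    return np.mod(phi, 2.0 * np.pi)      # so phase angles 0, pi/2, pi, 3*pi/2 are directly comparable
```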
In step E06, the operation unit 16 generates at least one light stripe image LSP' corresponding to at least one phase angle θ' according to the phase image PHI. In the following embodiment, the arithmetic unit 16 generates 3 light stripe images LSP' corresponding to 3 phase angles θ' different from the first phase angle θ1 according to the phase image PHI; in other words, the computing unit 16 can generate light stripe images LSP2, LSP3, LSP4 corresponding to the phase angles θ2, θ3, θ4 according to the phase image PHI, where the phase angles θ2, θ3, θ4 can be (π/2), (π) and (3π/2), respectively. In step E06, the arithmetic unit 16 substitutes the (specific) phase angles θ2, θ3, θ4 into the process C0 to generate the (specific) light stripe images LSP2, LSP3, LSP4. For details of the operation of step E06, please refer to the paragraphs related to the process C0, which are not repeated herein. Referring to figs. 16-18, fig. 16 shows the light stripe image LSP2 corresponding to the phase angle θ2 (θ2 = π/2), fig. 17 shows the light stripe image LSP3 corresponding to the phase angle θ3 (θ3 = π), and fig. 18 shows the light stripe image LSP4 corresponding to the phase angle θ4 (θ4 = 3π/2). The characteristics of the light stripe images LSP2, LSP3, LSP4 are similar to those of the light stripe image LSP1 and are not further described herein.
In step E08, the operation unit 16 merges (Merge) the light stripe image LSP1 and the at least one light stripe image LSP' into an integrated image MG. Since the first phase angle θ1 is different from the at least one phase angle θ', the pixel positions with non-0 values in the light stripe image LSP1 and in the at least one light stripe image LSP' differ from one another, so the operation unit 16 can generate the integrated image MG as the sum of the light stripe image LSP1 and the at least one light stripe image LSP'. That is, following the above embodiment, since the phase angles θ1, θ2, θ3, θ4 are different from one another, the non-0 pixel positions in the light stripe images LSP1, LSP2, LSP3, LSP4 are all different, so the operation unit 16 can add the light stripe images LSP1, LSP2, LSP3, LSP4 to generate the integrated image MG as their addition result, i.e., MG = LSP1 + LSP2 + LSP3 + LSP4.
Referring to fig. 19, fig. 19 shows the integrated image MG obtained by integrating the light stripe images LSP1, LSP2, LSP3, LSP4 shown in figs. 13, 16, 17 and 18. As can be seen from fig. 19, the integrated image MG is denser than the light stripe image LSP1.
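Under the stripe-extraction sketch given earlier, step E06 amounts to re-running that routine at the additional phase angles, and step E08 reduces to a per-pixel sum. The snippet below illustrates the merge; the function name and the use of a plain sum follow the description MG = LSP1 + LSP2 + LSP3 + LSP4 above and are not a verbatim implementation of the patent.

```python
import numpy as np
from typing import Sequence

def integrate_stripe_images(stripes: Sequence[np.ndarray]) -> np.ndarray:
    """Sketch of step E08: merge stripe images whose non-0 pixel positions do not overlap.

    Because the phase angles theta1..theta4 differ, the non-0 pixels of LSP1..LSP4
    sit at different positions, so the integrated image MG is simply their sum."""
    return np.sum(np.stack(stripes, axis=0), axis=0)

# Usage sketch: the stripe images would come from the process-C0 sketch, one per phase angle
#   stripes = [light_stripe_image(phi, t) for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
#   mg = integrate_stripe_images(stripes)
```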
In step E10, the arithmetic unit 16 generates a depth image DP corresponding to the object OBJ according to the integrated image MG. Referring to fig. 20, fig. 20 is a schematic diagram of a process F0 according to an embodiment of the present disclosure. Flow F0 is a detailed operation of step E10, which includes the following steps:
step F02: acquiring the two-dimensional image coordinates (x0, y0) of the object OBJ corresponding to the pixel coordinate position (n, m), where (x0, y0) = (m, MG(n, m)).
Step F04: acquiring the three-dimensional image coordinates (X0, Y0, Z0) of the object OBJ corresponding to the pixel coordinate position (n, m) from the two-dimensional image coordinates (x0, y0).
Step F06: calculating the depth image pixel value DP(n, m) of the object OBJ corresponding to the pixel coordinate position (n, m) from the three-dimensional image coordinates (X0, Y0, Z0).
The flow F0 is similar to the flow D0, except that in step F02 the coordinate value y0 is the integrated image pixel value MG(n, m), which is the pixel value of the integrated image MG at the pixel coordinate position (n, m), and the integrated image pixel value MG(n, m) is not 0. For details of the operation of the process F0, please refer to the paragraphs related to the process D0, which are not repeated herein.
Briefly, the three-dimensional image ranging system 10 utilizes the process E0 to generate the light stripe images LSP1~LSP4 corresponding to the phase angles θ1~θ4, and integrates the light stripe images LSP1~LSP4 into a denser integrated image MG. Therefore, the depth image DP generated by the computing unit 16 executing the processes E0 and F0 is also denser, so the number of non-0 depth image pixels of the depth image DP can be increased.
It should be noted that the processes D0 and F0 both calculate the depth/distance of the target object OBJ simply by triangulation. However, the triangulation method suffers from the problem of light plane ambiguity (Light Plane Ambiguity). Referring to fig. 21, fig. 21 is a diagram of the relative positions of the light emitting module 12, the light receiving module 14, and the positions OA, OB and OC. Briefly, the light plane ambiguity problem is that the three-dimensional image ranging system cannot distinguish whether the object OBJ is located at the position OA, the position OB or the position OC (the positions OA, OB and OC respectively correspond to different angles αA, αB, αC with respect to the light emitting module 12), which adversely affects the accuracy of the ranging.
In order to solve the light plane ambiguity problem, the three-dimensional image distance measuring system 10 may first obtain the time-of-flight distance corresponding to the target object by the time-of-flight ranging method (with the photosensitive pixel circuit 144 implemented by the photosensitive pixel circuit 80), then calculate the angle α required by the triangulation method according to the time-of-flight distance, and finally calculate the depth/distance corresponding to the target object OBJ by triangulation according to the angle α. The angle α is the pitch angle of the target object OBJ relative to the light emitting module 12, i.e., the included angle between the target object OBJ, the light emitting module 12 and the photosensitive pixel array 142.
Referring to fig. 22, fig. 22 is a schematic diagram of a process G0 according to an embodiment of the present disclosure. The process G0 can be executed by the arithmetic unit 16, wherein the process G0 comprises the following steps:
step G02: generating a time-of-flight distance DToF corresponding to the target object OBJ according to the first image P1 and the fourth image P4.
Step G04: determining an angle α_opt according to the time-of-flight distance DToF and the light stripe image LSP1, wherein the angle α_opt represents the included angle between the target object OBJ, the light emitting module 12 and the photosensitive pixel array 142.
Step G06: generating a depth image DP corresponding to the target object OBJ according to the integrated image MG and the angle α_opt.
In step G02, the arithmetic unit 16 first obtains a time-of-flight image TF (with the photosensitive pixel circuit 144 implemented by the photosensitive pixel circuit 80) according to the first image P1 and the fourth image P4, and then calculates the time-of-flight distance DToF corresponding to the target object OBJ according to the time-of-flight image TF. In one embodiment, the computing unit 16 may calculate the time-of-flight image TF as TF = (P4 - P3)/(P1 + P4 - 2·P3) × (c × T) (formula 8), where c represents the speed of light and T represents the duration of the on-period of the transmission gate; in other words, the computing unit 16 may generate the time-of-flight image pixel value TF(n, m) at the pixel coordinate position (n, m) as (Pout4 - Pout3)/(Pout1 + Pout4 - 2 × Pout3) × (c × T). Each time-of-flight image pixel value TF(n, m) in the time-of-flight image TF represents the time-of-flight distance between the target object OBJ and the three-dimensional image ranging system 10 at the pixel coordinate position (n, m). In addition, the computing unit 16 can generate the time-of-flight distance DToF as a statistical value of the time-of-flight image TF; for example, the time-of-flight distance DToF may be the average value, the maximum value, the median or the mode of the time-of-flight image TF. The details of formula 8/the time-of-flight ranging method are known to those skilled in the art and are not described herein.
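A short sketch of the per-pixel computation of step G02 follows, using formula 8 as written above; treating the images as NumPy arrays and choosing the median as the statistic for DToF are assumptions made only for this illustration.

```python
import numpy as np

C_LIGHT = 299_792_458.0   # speed of light c, in m/s

def time_of_flight_distance(p1: np.ndarray, p3: np.ndarray, p4: np.ndarray,
                            t_gate: float) -> float:
    """Sketch of step G02: time-of-flight image TF (formula 8) and a single
    statistic of it used as the time-of-flight distance D_ToF."""
    denom = p1 + p4 - 2.0 * p3
    safe = np.where(denom == 0.0, 1.0, denom)                # avoid division by zero
    tf = np.where(denom == 0.0, 0.0,
                  (p4 - p3) / safe * (C_LIGHT * t_gate))     # TF = (P4-P3)/(P1+P4-2*P3)*(c*T)
    return float(np.median(tf))   # D_ToF as a statistic of TF (mean, max or mode also allowed)
```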
In step G04, the operation unit 16 determines the angle α_opt according to the time-of-flight distance DToF and the light stripe image LSP1. The computing unit 16 can obtain a plurality of possible angles α(1)~α(K) in advance, where the angles α(1)~α(K) may be related to the characteristics of the diffraction unit DE. The computing unit 16 can execute the procedure D0 K times according to the angles α(1)~α(K) obtained in advance; that is, the angles α(1)~α(K) are respectively substituted into formula 7 in step D04 to generate K distances DΔ,(1)~DΔ,(K) by triangulation. The arithmetic unit 16 can compare the triangulation distances DΔ,(1)~DΔ,(K) with the time-of-flight distance DToF obtained in step G02 to obtain DΔ,(k_opt), the distance among DΔ,(1)~DΔ,(K) closest to the time-of-flight distance DToF. The arithmetic unit 16 then determines the angle α_opt as the angle α(k_opt) corresponding to DΔ,(k_opt) among the angles α(1)~α(K), i.e., α_opt = α(k_opt).
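The candidate-angle search of step G04 can be sketched as a nearest-match selection. Here triangulated_distance(α) stands for running a triangulation such as the process-D0 sketch with that candidate angle and reducing the resulting depth image to a single value (for example the median of its non-0 pixels); that reduction, like the names below, is an assumption for illustration.

```python
import numpy as np
from typing import Callable, Sequence

def choose_elevation_angle(candidates: Sequence[float], d_tof: float,
                           triangulated_distance: Callable[[float], float]) -> float:
    """Sketch of step G04: pick the candidate angle alpha_(k) whose triangulated
    distance D_delta,(k) is closest to the time-of-flight distance D_ToF."""
    d_delta = np.array([triangulated_distance(a) for a in candidates])  # D_delta,(1)..D_delta,(K)
    k_opt = int(np.argmin(np.abs(d_delta - d_tof)))                     # closest to D_ToF
    return candidates[k_opt]                                            # alpha_opt = alpha_(k_opt)
```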
In step G06, the arithmetic unit 16 generates the depth image DP corresponding to the object OBJ according to the integrated image MG generated in step E08 and the angle α _ opt generated in step G04. Referring to fig. 23, fig. 23 is a schematic diagram of a process H0 according to an embodiment of the present disclosure. Process H0 is a detailed operation of step G06, which includes the following steps:
step H02: acquiring the two-dimensional image coordinates (x0, y0) of the target object OBJ corresponding to the pixel coordinate position (n, m), where (x0, y0) = (m, MG(n, m)).
Step H04: acquiring the three-dimensional image coordinates (X0, Y0, Z0) of the object OBJ corresponding to the pixel coordinate position (n, m) from the two-dimensional image coordinates (x0, y0) and the angle α_opt.
Step H06: calculating the depth image pixel value DP(n, m) of the object OBJ corresponding to the pixel coordinate position (n, m) from the three-dimensional image coordinates (X0, Y0, Z0).
The flow H0 is similar to the flows F0 and D0. The difference between the flow H0 and the flow F0 is that in step H04 the computing unit 16 substitutes the angle α_opt, together with the coordinates (x0, y0) obtained in step H02, into formula 7 in step D04 to generate the three-dimensional image coordinates (X0, Y0, Z0) corresponding to the angle α_opt, and in step H06 the computing unit 16 calculates the depth image pixel value DP(n, m) from the three-dimensional image coordinates (X0, Y0, Z0) corresponding to the angle α_opt obtained in step H04. For details of the operation of the process H0, please refer to the paragraphs related to the process F0 and the process D0, which are not repeated herein.
In brief, the three-dimensional image ranging system 10 uses the process G0 to obtain the time-of-flight distance DToF of the target object OBJ, compares the triangulation distances DΔ,(1)~DΔ,(K) with the time-of-flight distance DToF to determine the angle α_opt, and generates the depth image DP according to the integrated image MG and the angle α_opt. In other words, the present application utilizes the time-of-flight distance DToF to eliminate the light plane ambiguity while preserving the advantages of both time-of-flight ranging and triangulation.
It should be noted that the above-mentioned embodiments are provided to illustrate the concept of the present application, and those skilled in the art can make various modifications without departing from its scope. For example, the diffraction unit is not limited to a single diffractive optical element; referring to fig. 24, fig. 24 is a schematic diagram of a diffraction unit DE' according to an embodiment of the present application. The diffraction unit DE' includes a first diffractive subunit DE1 and a second diffractive subunit DE2; the first diffractive subunit DE1 can be a diffractive optical element, and the second diffractive subunit DE2 can be another diffractive optical element. The first light emitting unit LE1 can emit light toward the first diffractive subunit DE1, and the second light emitting unit LE2 can emit light toward the second diffractive subunit DE2. In other words, an embodiment in which the first diffractive subunit DE1 diffracts the first light L1 at the first time T1 to generate the first structured light SL1, and the second diffractive subunit DE2 diffracts the second light L2 at the second time T2 to generate the second structured light SL2, also falls within the scope of the present application.
In summary, the present application calculates the image phase by using the first structured light and the second structured light with the structured light phase difference of (pi/2) or (3 pi/2); and calculates the fine position of the structured light in a first direction/dimension (perpendicular to the structured light) based on the image phase. Compared with the prior art, the depth information of the target object can be accurately obtained.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (33)

  1. A three-dimensional image ranging system, comprising:
    a light emitting module comprising:
    a diffraction unit;
    the first light-emitting unit is used for emitting first light to the diffraction unit in a first time, and the diffraction unit performs diffraction on the first light in the first time to generate first structured light;
    the second light-emitting unit is used for emitting second light to the diffraction unit in a second time, the diffraction unit performs diffraction on the second light in the second time to generate second structured light, and the phase difference of the structured light between the first structured light and the second structured light is an odd multiple of (pi/2);
    a photosensitive pixel array for receiving the reflected light corresponding to the first structured light at the first time to generate a first image and receiving the reflected light corresponding to the second structured light at the second time to generate a second image;
    an arithmetic unit, coupled to the photosensitive pixel array, for performing the following steps:
    generating an in-phase image associated with the first structured light and a quadrature image associated with the second structured light based on the first image and the second image; and
    and generating a depth image corresponding to a target object according to the in-phase image and the orthogonal image.
  2. The three-dimensional image ranging system according to claim 1, wherein the diffractive unit comprises a first diffractive subunit and a second diffractive subunit, the first diffractive subunit diffracting the first light during the first time to generate the first structured light, and the second diffractive subunit diffracting the second light during the second time to generate the second structured light.
  3. The system of claim 1, wherein the first light emitting unit receives a first pulse signal and emits the first light according to the first pulse signal, and the second light emitting unit receives a second pulse signal and emits the second light according to the second pulse signal.
  4. The three-dimensional image ranging system of claim 3, wherein a duty ratio of the first pulse signal and the second pulse signal is smaller than a specific duty ratio, and a light emitting power of the first light emitting unit and the second light emitting unit is larger than a specific power.
  5. The three-dimensional image ranging system as claimed in claim 1, wherein the first light emitted by the first light emitting unit has a first incident angle with respect to the diffraction unit, the second light emitted by the second light emitting unit has a second incident angle with respect to the diffraction unit, and the first incident angle and the second incident angle are different, so that the structured-light phase difference between the first structured light and the second structured light is an odd multiple of (pi/2).
  6. The system of claim 1, wherein the array of photosensitive pixels comprises a plurality of photosensitive pixel circuits, a first photosensitive pixel circuit of the plurality of photosensitive pixel circuits comprising:
    a photosensitive element;
    the first photoelectric reading circuit is coupled to the photosensitive element and comprises:
    a first transfer gate coupled to the photosensitive element, wherein the first transfer gate is turned on at the first time;
    a first output transistor coupled to the first transmission gate; and
    a first reading transistor coupled to the first output transistor for outputting a first output signal; and
    a second photoelectric reading circuit coupled to the photosensitive element, comprising:
    a second transfer gate coupled to the photosensitive element, wherein the second transfer gate is turned on at the second time;
    a second output transistor coupled to the second transmission gate; and
    a second read transistor coupled to the second output transistor for outputting a second output signal;
    wherein a first pixel value of the first image corresponding to the first photosensitive pixel circuit is the first output signal, and a second pixel value of the second image corresponding to the first photosensitive pixel circuit is the second output signal;
    wherein a first in-phase pixel value in the in-phase image corresponding to the first photosensitive pixel circuit is associated with the first output signal, and a first quadrature pixel value in the quadrature image corresponding to the first photosensitive pixel circuit is associated with the second output signal.
  7. The three-dimensional video ranging system of claim 6, wherein the first light-sensitive pixel circuit further comprises:
    a third photoelectric reading circuit, coupled to the photosensitive element, comprising:
    a third transfer gate coupled to the photosensitive element, wherein the third transfer gate is turned on at a third time, and neither the first light-emitting unit nor the second light-emitting unit emits light at the third time;
    a third output transistor coupled to the third transmission gate; and
    a third read transistor coupled to the third output transistor for outputting a third output signal;
    wherein the first in-phase pixel value corresponding to the first light-sensitive pixel circuit in the in-phase image is related to the first output signal minus the third output signal, and the first quadrature pixel value corresponding to the first light-sensitive pixel circuit in the quadrature image is related to the second output signal minus the third output signal;
    wherein, a plurality of third output signals output by the plurality of photosensitive pixel circuits form a third image.
  8. The three-dimensional video ranging system of claim 6, wherein the first light-sensitive pixel circuit further comprises:
    a fourth photoelectric reading circuit, coupled to the photosensitive element, comprising:
    a fourth transfer gate coupled to the photosensitive element, wherein the fourth transfer gate is turned on at a fourth time, and a time interval is formed between the first time and the fourth time;
    a fourth output transistor coupled to the fourth transmission gate; and
    a fourth reading transistor coupled to the fourth output transistor for outputting a fourth output signal;
    wherein the first in-phase pixel value corresponding to the first photosensitive pixel circuit in the in-phase image is related to the first output signal and a fourth output signal;
    a plurality of fourth output signals output by the plurality of photosensitive pixel circuits form a fourth image;
    the arithmetic unit acquires a flight time distance corresponding to the target object according to the first image and the fourth image.
  9. The three-dimensional image ranging system of claim 1, wherein the computing unit is configured to perform the following steps to generate the depth image corresponding to the target object according to the in-phase image and the quadrature image:
    generating a phase image according to the in-phase image and the orthogonal image, wherein the phase image represents an image phase between the in-phase image formed by the first structured light and the orthogonal image formed by the second structured light;
    generating a first striation image corresponding to a first phase angle according to the phase image, wherein the first striation image records the image phase as a coordinate position of the first phase angle; and
    generating the depth image corresponding to the target object according to the first light stripe image.
  10. The three-dimensional image ranging system of claim 9, wherein the first phase angle is 0.
  11. The three-dimensional image ranging system as claimed in claim 9, wherein the computing unit is configured to perform the following steps to generate the first light stripe image corresponding to the first phase angle according to the phase image:
    obtaining a first phase image pixel value located at a first pixel coordinate position in the phase image, and obtaining a second phase image pixel value located at a second pixel coordinate position in the phase image, wherein the first pixel coordinate position is directly adjacent to the second pixel coordinate position in a first dimension;
    determining whether the first phase angle is between the first phase image pixel value and the second phase image pixel value;
    when the first phase angle is between the first phase image pixel value and the second phase image pixel value, performing interpolation operation according to the first phase angle, the first phase image pixel value and the second phase image pixel value to obtain an interpolation result;
    and storing the interpolation result to the first pixel coordinate position of the first light stripe image, wherein the interpolation result is the light stripe image pixel value at the first pixel coordinate position in the first light stripe image.
  12. The system as claimed in claim 11, wherein the computing unit is configured to perform the interpolation operation according to the first phase angle, the first phase image pixel value and the second phase image pixel value to obtain the interpolation result:
    calculating the interpolation result as
    (n-1) + (θ - PHI(n-1, m)) / (PHI(n, m) - PHI(n-1, m))
    Where θ represents the first phase angle, (n, m) represents the first pixel coordinate position, (n-1, m) represents the second pixel coordinate position, PHI (n, m) represents the first phase image pixel value, and PHI (n-1, m) represents the second phase image pixel value.
  13. The system of claim 11, wherein the computing unit is further configured to generate the first light stripe image corresponding to the first phase angle according to the phase image by:
    when the first phase angle is not between the first phase image pixel value and the second phase image pixel value in the phase image, the light stripe image pixel value corresponding to the first pixel coordinate position in the first light stripe image is 0.
  14. The system as claimed in claim 9, wherein the computing unit is configured to perform the following steps to generate the depth image corresponding to the target object according to the first light stripe image:
    obtaining a first two-dimensional image coordinate of the target object corresponding to a third pixel coordinate position, wherein the first two-dimensional image coordinate is (x0, y0) = (m, LSP1(n, m)), y0 represents a coordinate value of the first two-dimensional image coordinate in a first dimension, x0 represents a coordinate value of the first two-dimensional image coordinate in a second dimension, n represents the coordinate value of the third pixel coordinate position in the first dimension, m represents the coordinate value of the third pixel coordinate position in the second dimension, and LSP1(n, m) represents the pixel value of the first striation image at the third pixel coordinate position;
    acquiring a first three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the first two-dimensional image coordinate; and
    and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the first three-dimensional image coordinate.
  15. The three-dimensional image ranging system of claim 9, wherein the computing unit is further configured to perform the following steps to generate the depth image corresponding to the target object according to the in-phase image and the quadrature image:
    generating at least one second light stripe image corresponding to at least one second phase angle according to the phase image, wherein the at least one second phase angle is different from the first phase angle, and the at least one second light stripe image is different from the first light stripe image; and
    integrating the first light stripe image and the at least one second light stripe image into an integrated image; and
    and generating the depth image corresponding to the target object according to the integrated image.
  16. The system of claim 15, wherein the at least one second phase angle is an integer multiple of (2 pi/L), L being a positive integer greater than 1.
  17. The system as claimed in claim 15, wherein the computing unit is configured to perform the following steps to integrate the first light stripe image and the at least one second light stripe image into the integrated image:
    generating the integrated image as an addition result of the first striation image and the at least one second striation image.
  18. The three-dimensional image ranging system of claim 15, wherein the computing unit is configured to perform the following steps to generate the depth image corresponding to the target object according to the integrated image:
    obtaining a second two-dimensional image coordinate of the target object in the integrated image corresponding to a fourth pixel coordinate position, wherein the second two-dimensional image coordinate is (x0, y0) = (m, MG(n, m)), y0 represents a coordinate value of the second two-dimensional image coordinate in the first dimension, x0 represents a coordinate value of the second two-dimensional image coordinate in a second dimension, n represents the coordinate value of the fourth pixel coordinate position in the first dimension, m represents the coordinate value of the fourth pixel coordinate position in the second dimension, and MG(n, m) represents the integrated image pixel value of the integrated image at the fourth pixel coordinate position;
    acquiring a second three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the second two-dimensional image coordinate; and
    and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the second three-dimensional image coordinate.
  19. The three-dimensional image ranging system of claim 15, wherein the photosensitive pixel array receives background light at a third time to generate a third image, the photosensitive pixel array receives reflected light corresponding to the first structured light at a fourth time to generate a fourth image, and the computing unit is further configured to perform the following steps to generate the depth image corresponding to the target object according to the in-phase image and the quadrature image:
    generating a time-of-flight distance corresponding to the target object according to the first image and the fourth image;
    determining an angle according to the time-of-flight distance and the first light stripe image, wherein the angle represents an included angle between the target object, the light emitting module and the photosensitive pixel array; and
    and generating the depth image corresponding to the target object according to the integrated image and the angle.
  20. The system of claim 19, wherein the computing unit is further configured to perform the following steps to generate the depth image corresponding to the target object according to the integrated image and the angle:
    obtaining a second two-dimensional image coordinate of the target object in the integrated image corresponding to a fourth pixel coordinate position, wherein the second two-dimensional image coordinate is (x0, y0) = (m, MG(n, m)), y0 represents a coordinate value of the second two-dimensional image coordinate in the first dimension, x0 represents a coordinate value of the second two-dimensional image coordinate in a second dimension, n represents the coordinate value of the fourth pixel coordinate position in the first dimension, m represents the coordinate value of the fourth pixel coordinate position in the second dimension, and MG(n, m) represents the integrated image pixel value of the integrated image at the fourth pixel coordinate position;
    acquiring a second three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the second two-dimensional image coordinate and the angle; and
    and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the second three-dimensional image coordinate.
  21. A three-dimensional image ranging method is applied to a three-dimensional image ranging system and is characterized in that the three-dimensional image ranging system comprises a light emitting module and a photosensitive pixel array, the light emitting module emits first structured light in a first time and emits second structured light in a second time, the photosensitive pixel array receives reflected light corresponding to the first structured light in the first time to generate a first image and receives reflected light corresponding to the second structured light in the second time to generate a second image, and the phase difference of the structured light between the first structured light and the second structured light is odd times (pi/2), and the three-dimensional image ranging method comprises the following steps:
    generating an in-phase image associated with the first structured light and a quadrature image associated with the second structured light based on the first image and the second image; and
    and generating a depth image corresponding to a target object according to the in-phase image and the orthogonal image.
  22. The method of claim 21, wherein the step of generating the depth image corresponding to the target object according to the in-phase image and the quadrature image comprises:
    generating a phase image according to the in-phase image and the orthogonal image, wherein the phase image represents an image phase between the in-phase image formed by the first structured light and the orthogonal image formed by the second structured light;
    generating a first striation image corresponding to a first phase angle according to the phase image, wherein the first striation image records the image phase as a coordinate position of the first phase angle; and
    generating the depth image corresponding to the target object according to the first light stripe image.
  23. The three-dimensional image ranging method of claim 22, wherein the first phase angle is 0.
  24. The method as claimed in claim 22, wherein the step of generating the first light stripe image corresponding to the first phase angle according to the phase image comprises:
    obtaining a first phase image pixel value located at a first pixel coordinate position in the phase image, and obtaining a second phase image pixel value located at a second pixel coordinate position in the phase image, wherein the first pixel coordinate position is directly adjacent to the second pixel coordinate position in a first dimension;
    determining whether the first phase angle is between the first phase image pixel value and the second phase image pixel value;
    when the first phase angle is between the first phase image pixel value and the second phase image pixel value, performing interpolation operation according to the first phase angle, the first phase image pixel value and the second phase image pixel value to obtain an interpolation result;
    and storing the interpolation result to the first pixel coordinate position of the first light stripe image, wherein the interpolation result is the light stripe image pixel value at the first pixel coordinate position in the first light stripe image.
  25. The method of claim 24, wherein the step of performing the interpolation operation to obtain the interpolation result according to the first phase angle, the first phase image pixel value and the second phase image pixel value comprises:
    calculating the interpolation result as
    (n-1) + (θ - PHI(n-1, m)) / (PHI(n, m) - PHI(n-1, m))
    Where θ represents the first phase angle, (n, m) represents the first pixel coordinate position, (n-1, m) represents the second pixel coordinate position, PHI (n, m) represents the first phase image pixel value, and PHI (n-1, m) represents the second phase image pixel value.
  26. The method as claimed in claim 24, wherein the step of generating the first light stripe image corresponding to the first phase angle according to the phase image further comprises:
    when the first phase angle is not between the first phase image pixel value and the second phase image pixel value in the phase image, the light stripe image pixel value corresponding to the first pixel coordinate position in the first light stripe image is 0.
  27. The method of claim 22, wherein the step of generating the depth image corresponding to the target object according to the first light stripe image comprises:
    obtaining a first two-dimensional image coordinate of the target object corresponding to a third pixel coordinate position, wherein the first two-dimensional image coordinate is (x0, y0) = (m, LSP1(n, m)), y0 represents a coordinate value of the first two-dimensional image coordinate in a first dimension, x0 represents a coordinate value of the first two-dimensional image coordinate in a second dimension, n represents the coordinate value of the third pixel coordinate position in the first dimension, m represents the coordinate value of the third pixel coordinate position in the second dimension, and LSP1(n, m) represents the pixel value of the first striation image at the third pixel coordinate position;
    acquiring a first three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the first two-dimensional image coordinate; and
    and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the first three-dimensional image coordinate.
  28. The method of claim 22, wherein the step of generating the depth image corresponding to the target object according to the in-phase image and the quadrature image further comprises:
    generating at least one second light stripe image corresponding to at least one second phase angle according to the phase image, wherein the at least one second phase angle is different from the first phase angle, and the at least one second light stripe image is different from the first light stripe image; and
    integrating the first light stripe image and the at least one second light stripe image into an integrated image; and
    and generating the depth image corresponding to the target object according to the integrated image.
  29. The method as claimed in claim 28, wherein the at least one second phase angle is an integer multiple of (2 pi/L), and L is a positive integer greater than 1.
  30. The method as claimed in claim 28, wherein the step of integrating the first light stripe image and the at least one second light stripe image into the integrated image comprises:
    generating the integrated image as an addition result of the first striation image and the at least one second striation image.
  31. The three-dimensional image ranging method of claim 28, wherein the step of generating the depth image corresponding to the target object based on the integrated image comprises:
    obtaining a second two-dimensional image coordinate of the target object in the integrated image corresponding to a fourth pixel coordinate position, wherein the second two-dimensional image coordinate is (x0, y0) = (m, MG(n, m)), y0 represents a coordinate value of the second two-dimensional image coordinate in the first dimension, x0 represents a coordinate value of the second two-dimensional image coordinate in a second dimension, n represents the coordinate value of the fourth pixel coordinate position in the first dimension, m represents the coordinate value of the fourth pixel coordinate position in the second dimension, and MG(n, m) represents the integrated image pixel value of the integrated image at the fourth pixel coordinate position;
    acquiring a second three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the second two-dimensional image coordinate; and
    and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the second three-dimensional image coordinate.
  32. The method of claim 28, wherein the photosensitive pixel array receives background light at a third time to generate a third image, the photosensitive pixel array receives reflected light corresponding to the first structured light at a fourth time to generate a fourth image, and the step of generating the depth image corresponding to the target object according to the in-phase image and the quadrature image further comprises:
    generating a time-of-flight distance corresponding to the target object according to the first image and the fourth image;
    determining an angle according to the time-of-flight distance and the first light stripe image, wherein the angle represents an included angle between the target object, the light emitting module and the photosensitive pixel array; and
    and generating the depth image corresponding to the target object according to the integrated image and the angle.
  33. The method of claim 32, wherein the step of generating the depth image corresponding to the target object according to the integrated image and the angle further comprises:
    obtaining a second two-dimensional image coordinate of the target object in the integrated image corresponding to a fourth pixel coordinate position, wherein the second two-dimensional image coordinate is (x0, y0) = (m, MG(n, m)), y0 represents a coordinate value of the second two-dimensional image coordinate in the first dimension, x0 represents a coordinate value of the second two-dimensional image coordinate in a second dimension, n represents the coordinate value of the fourth pixel coordinate position in the first dimension, m represents the coordinate value of the fourth pixel coordinate position in the second dimension, and MG(n, m) represents the integrated image pixel value of the integrated image at the fourth pixel coordinate position;
    acquiring a second three-dimensional image coordinate of the target object corresponding to the third pixel coordinate position according to the second two-dimensional image coordinate and the angle; and
    and calculating the depth image pixel value of the target object corresponding to the third pixel coordinate position according to the second three-dimensional image coordinate.
CN201880000670.XA 2018-04-10 2018-04-10 Three-dimensional image ranging system and method Active CN110612429B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/082441 WO2019196001A1 (en) 2018-04-10 2018-04-10 Three-dimensional image ranging system and method

Publications (2)

Publication Number Publication Date
CN110612429A (en) 2019-12-24
CN110612429B CN110612429B (en) 2021-03-26

Country Status (2)

Country Link
CN (1) CN110612429B (en)
WO (1) WO2019196001A1 (en)



Also Published As

Publication number Publication date
WO2019196001A1 (en) 2019-10-17
CN110612429B (en) 2021-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant