CN110244318B - 3D imaging method based on asynchronous ToF discrete point cloud - Google Patents
- Publication number
- CN110244318B (application CN201910364140.XA)
- Authority
- CN
- China
- Prior art keywords
- discrete
- target object
- collimated light
- light beams
- asynchronous
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4811—Constructional features, e.g. arrangements of optical elements common to transmitter and receiver
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention provides a 3D imaging method based on an asynchronous ToF discrete point cloud. A plurality of discrete collimated light beams are projected in time sequence onto different areas of a target object within a preset time period by an asynchronous discrete light beam projector. A light detector array imager sequentially receives the discrete collimated light beams reflected by the target object and measures their propagation time, thereby obtaining depth data for the plurality of regions of the target object, and depth data of the target object surface is generated from the regional depth data. The invention achieves optimized precision and point cloud density in a specific 3D imaging application scene, keeps the optical power of each projection low, and satisfies the limits of power consumption and laser eye safety.
Description
Technical Field
The invention relates to the field of 3D imaging, in particular to a 3D imaging method based on asynchronous ToF discrete point cloud.
Background
The ToF (time-of-flight) technique is a 3D imaging technique in which measurement light is emitted from a projector and reflected back to a receiver by a target object, so that the spatial distance from the object to the sensor is obtained from the propagation time of the measurement light along its path. Common ToF techniques include the single-point scanning projection method and the surface light projection method.
The ToF method of single-point scanning projection uses a single-point projector to project a single collimated beam whose direction is controlled by a scanning device, so that it can be projected onto different target locations. After the single collimated beam is reflected by the target object, part of the light is received by a single-point light detector, yielding the depth measurement for the current projection direction. This method concentrates all the optical power on one target point, achieving a high signal-to-noise ratio at that point and therefore high-precision depth measurement. Scanning of the entire target object relies on scanning devices such as mechanical motors, MEMS mirrors, or optical phased arrays. The depth data points obtained by scanning are spliced together to form the discrete point cloud data required for 3D imaging. This method is advantageous for long-range 3D imaging, but requires a complex projection scanning system, which is costly.
The ToF method of surface light projection projects a surface light beam with a continuous energy distribution, which continuously covers the target object surface. The light detector is a light detector array capable of acquiring the propagation time of the light beam. When the optical signal reflected by the target object is imaged onto the detector array through an optical imaging system, the depth obtained at each detector image point is the depth information of the corresponding object position under the object-image relationship. This method needs no complex scanning system. However, since the optical power density of surface light projection is much lower than that of a single collimated beam, the signal-to-noise ratio is greatly reduced compared with single-point scanning projection, so the method is only applicable to shorter-range, lower-precision scenes.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a 3D imaging method based on asynchronous ToF discrete point cloud. The invention adopts an asynchronous discrete beam projection method, which keeps the optical power of each projection low and satisfies the limits of power consumption and laser eye safety.
The invention provides a 3D imaging method based on asynchronous ToF discrete point cloud, which comprises the following steps:
projecting a plurality of discrete collimated beams sequentially in time sequence over a preset period of time by an asynchronous discrete beam projector onto different regions of a target object;
sequentially receiving the plurality of discrete collimated light beams reflected by the target object through a light detector array imager and measuring the propagation time of the plurality of discrete collimated light beams, further obtaining the depth data of the plurality of regions of the target object, and generating the depth data of the surface of the target object according to the depth data of the plurality of regions of the target object.
Preferably, the asynchronous discrete beam projector comprises an edge-emitting laser and a beam projector disposed on an optical path;
the edge-emitting laser is used for projecting laser to the beam projector;
the light beam projector is used for projecting the incident laser to different areas of the target object in sequence according to time in a preset time period.
Preferably, the asynchronous discrete beam projector comprises a laser array, a collimating lens and a beam splitting device which are arranged on an optical path;
the laser array comprises a plurality of lasers which are arranged in an array, the lasers are divided into a plurality of laser emission groups, and lasers with a first order of magnitude are projected to the collimating lens through each laser emission group in sequence according to time in a preset time period;
the collimating lens is used for collimating the incident multiple laser beams and then emitting collimated light beams with a first order of magnitude;
the beam splitting device is used for splitting the incident collimated light beam with the first order of magnitude to emit a collimated light beam with a second order of magnitude;
the second order of magnitude is greater than the first order of magnitude.
Preferably, the photodetector array imager comprises an optical imaging lens, a photodetector array and a driving circuit;
the optical detector array comprises a plurality of optical detectors distributed in an array; the plurality of optical detectors are divided into a plurality of optical detector groups, and each optical detector group corresponds to one laser emission group;
the optical detector group is used for receiving collimated light beams which are emitted by the corresponding laser emitting group and then reflected by the target object;
the optical imaging lens is used for enabling direction vectors of the collimated light beams which penetrate through the optical imaging lens and enter the light detector array to be in one-to-one correspondence with the light detectors;
the driving circuit is used for measuring the propagation time of the plurality of discrete collimated light beams and further generating depth data of the plurality of regions of the surface of the target object, and further generating the depth data of the surface of the target object according to the depth data of the plurality of regions of the target object.
Preferably, the plurality of discrete collimated light beams are arranged periodically in a predetermined shape.
Preferably, the preset shape includes any one of the following shapes or any plurality of shapes that can be switched with each other:
- a straight line;
-a triangle;
-a quadrilateral;
-a rectangle;
-circular;
-a hexagon;
-a pentagon.
Preferably, the beam projector comprises a first surface and a second surface, the first surface comprising a multi-grating structure;
the edge-emitting laser couples light into a beam projector to form a coupled-in beam, wherein the beam projector is configured to direct the coupled-in beam for total internal reflection between the first surface and the second surface;
the grating structure is configured to disrupt total internal reflection to cause at least part of the in-coupled beam to be coupled out of the beam projector, the part of the in-coupled beam coupled out of the beam projector forming a coupled-out beam; the multi-grating structures are in a plurality of numbers and are divided into a plurality of grating structure groups, and one grating structure group is controlled to be opened in time sequence in the preset time period so as to project a plurality of discrete collimated light beams to different areas of the target object.
Preferably, the asynchronous discrete beam projector comprises a plurality of sets of edge-emitting lasers and beam projectors arranged on an optical path;
the edge-emitting laser is used for projecting laser to the beam projector; the beam projector is used for projecting the incident laser light into a plurality of discrete collimated light beams;
a plurality of discrete collimated light beams are projected onto different regions of the target object by sequentially turning on, in time sequence within the preset time period, one set of edge-emitting laser and beam projector arranged on an optical path at a time.
The invention provides a 3D imaging method based on asynchronous ToF discrete point cloud, which comprises the following steps:
projecting a plurality of discrete collimated light beams to different areas of a target object sequentially in time within a preset time period through an asynchronous discrete light beam projector, so that the plurality of discrete collimated light beams penetrate through a display panel and then irradiate the target object;
and receiving a plurality of discrete collimated light beams which penetrate through the display panel after being reflected by the target object through a light detector array imager, obtaining depth data of a plurality of areas of the target object according to the propagation time of the plurality of discrete collimated light beams, and further generating depth data of the surface of the target object according to the depth data of the plurality of areas of the target object.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the asynchronous discrete light beam projector projects a plurality of discrete collimated light beams to different regions of the target object in a time sequence within a preset time period, so that the light detector array imager can obtain the depth data of a plurality of regions of the target object, the depth data of the surface of the target object is generated according to the depth data of the plurality of regions of the target object, the power density of the light beams is improved, and the balance between the signal-to-noise ratio and the point cloud density is realized, thereby realizing the optimized precision and the point cloud density in a specific 3D imaging application scene, keeping lower light power during each projection, and meeting the limits of power consumption and laser eye safety.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort. Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flowchart illustrating steps of a method for 3D imaging based on an asynchronous ToF discrete point cloud according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a 3D imaging apparatus based on an asynchronous ToF discrete point cloud according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an asynchronous discrete beam projector according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another configuration of an asynchronous discrete beam projector in accordance with an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an optical imaging lens according to an embodiment of the present disclosure;
FIGS. 6(a), (b), and (c) are schematic diagrams of a periodic arrangement of a plurality of discrete collimated light beams according to an embodiment of the present invention; and
FIGS. 7(a), (b), and (c) are schematic illustrations of non-periodic arrangements of a plurality of discrete collimated light beams in an embodiment of the invention;
fig. 8 is a flowchart illustrating steps of a method for 3D imaging based on asynchronous ToF discrete point clouds according to another embodiment of the present invention.
In the figure:
1 is a light detector array imager;
2 is an asynchronous discrete beam projector;
3 is a target object;
101 is a photodetector array;
102 is an optical imaging lens;
201 is an edge-emitting laser;
202 is a beam projector;
203 is a laser array;
204 is a collimating lens;
205 is a beam splitting device.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention; all such variations and modifications fall within the scope of the present invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for convenience in describing the embodiments of the present invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed in a particular orientation, and be in any way limiting of the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
The invention provides a compromise between the single-point scanning projection method and the surface light projection method: a plurality of discrete collimated light beams are projected simultaneously by one projector and matched with a ToF photodetector array, so that a 3D point cloud containing depth data of a plurality of target points can be obtained in a single measurement. The number of simultaneously projected collimated beams may vary from a few to tens of thousands, depending on the requirements of the actual application. By controlling the number of beams, the invention balances and optimizes the beam power density (i.e. the signal-to-noise ratio) against the point cloud density at the same total power. When the number of beams is small, each point obtains a higher signal-to-noise ratio and precision, but the point cloud is sparse; with a larger number of beams, the point cloud is denser but the signal-to-noise ratio and precision are relatively degraded, though still better than with surface light projection. To improve the precision and density of the point cloud at the same time, the invention adopts an asynchronous projection method: the beams of each projection are relatively sparsely distributed, and the beam distributions at different moments differ, thereby obtaining point cloud data of different areas of the target object. The point clouds obtained from multiple projection measurements are spliced to obtain a relatively dense point cloud. In this way, optimized precision and point cloud density can be achieved in a specific 3D imaging application scene while keeping the optical power of each projection low, so that the limits of power consumption and laser eye safety can be met.
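The splicing step described above — merging the sparse point sets from successive asynchronous projections into one denser cloud — can be sketched as follows. The pixel-indexed dictionaries and sample depths are illustrative assumptions, not part of the patent:

```python
def stitch_point_clouds(sparse_clouds):
    """Merge the sparse point clouds from successive asynchronous
    projections into one denser cloud.

    Each element of sparse_clouds maps a pixel index (row, col) to a
    measured depth in metres; different projections cover different,
    non-overlapping regions of the target, so a simple union suffices.
    """
    merged = {}
    for cloud in sparse_clouds:
        merged.update(cloud)  # disjoint regions: plain dictionary union
    return merged

# Two interleaved projections, each covering alternate pixels of one row.
proj_a = {(0, 0): 1.50, (0, 2): 1.52}
proj_b = {(0, 1): 1.51, (0, 3): 1.53}
dense = stitch_point_clouds([proj_a, proj_b])  # 4 points after splicing
```

Real systems would of course index the cloud by detector position and carry per-point timestamps, but the union of region-disjoint measurements is the essence of the asynchronous splicing.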
Fig. 1 is a flowchart illustrating steps of a 3D imaging method based on an asynchronous ToF discrete point cloud according to an embodiment of the present invention, and as shown in fig. 1, the 3D imaging method based on an asynchronous ToF discrete point cloud according to the present invention includes the following steps:
step S101: projecting a plurality of discrete collimated light beams sequentially in time to different regions of the target object 3 by the asynchronous discrete light beam projector 2 over a preset period of time;
step S102: the light detector array imager 1 sequentially receives the plurality of discrete collimated light beams reflected by the target object 3 and measures the propagation time of the plurality of discrete collimated light beams, so that the depth data of the plurality of regions of the target object 3 can be obtained, and the depth data of the surface of the target object 3 is generated according to the depth data of the plurality of regions of the target object 3.
In an embodiment of the present invention, the preset time period may be set to 10 milliseconds; the target object 3 at least comprises a first area and a second area, so that a plurality of discrete collimated light beams can be firstly projected to the first area by a group of lasers in the asynchronous discrete light beam projector 2 within a preset time period to obtain depth data of the first area, then a plurality of discrete collimated light beams are projected to the second area by another group of lasers in the asynchronous discrete light beam projector 2 to obtain depth data of the second area, and depth data of the surface of the target object 3 is generated according to the depth data of the first area and the depth data of the second area. The first region and the second region may be adjacent regions or may be connected regions. The first region and the second region may also be interlaced together, for example, the first region includes a plurality of randomly distributed first sub-regions, and the second region includes a plurality of randomly distributed second sub-regions at the gaps of any adjacent first sub-regions. In a variant the target object 3 may comprise a plurality of zones, such as three zones, four zones, etc.
In an embodiment of the present invention, steps S101 to S102 may be repeatedly performed to generate multiple sets of depth data of the surface of the target object 3, and optimal depth data of the surface of the target object 3 may then be generated iteratively from these repeated measurements.
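The patent does not fix the exact form of this iterative refinement; one plausible minimal form is per-pixel averaging of the repeated depth frames, sketched below under that assumption:

```python
def average_depth_frames(frames):
    """Combine repeated surface-depth measurements by per-pixel averaging.

    This is only one simple candidate for the iterative refinement of
    repeated S101-S102 cycles; each frame maps a pixel index to a depth,
    and all frames are assumed to cover the same pixels.
    """
    n = len(frames)
    return {k: sum(f[k] for f in frames) / n for k in frames[0]}

# Two repeated measurements of the same surface point, averaged.
frames = [{(0, 0): 1.50}, {(0, 0): 1.54}]
refined = average_depth_frames(frames)  # depth near 1.52 m
```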
Fig. 2 is a schematic structural diagram of a 3D imaging device based on an asynchronous ToF discrete point cloud in the present invention. As shown in fig. 2, the device is used for implementing the 3D imaging method based on asynchronous ToF discrete point cloud of the present invention, and includes an asynchronous discrete light beam projector 2 and a light detector array imager 1;
the asynchronous discrete light beam projector 2 is used for projecting a plurality of discrete collimated light beams to different areas of the target object 3 in sequence in time within a preset time period;
the photodetector array imager 1 is configured to sequentially receive the plurality of discrete collimated light beams reflected by the target object 3 and measure propagation time of the plurality of discrete collimated light beams, so as to obtain depth data of a plurality of regions of the target object 3, and generate depth data of the surface of the target object 3 according to the depth data of the plurality of regions of the target object 3.
In this embodiment, the asynchronous discrete light beam projector 2 projects a plurality of discrete collimated light beams to different regions of the target object 3 in time sequence within a preset time period, so that the photodetector array imager 1 can obtain depth data of a plurality of regions of the target object 3, and the depth data of the surface of the target object 3 is generated from the regional depth data. This improves the beam power density and achieves a balance between signal-to-noise ratio and point cloud density, so that optimized precision and point cloud density can be achieved in a specific 3D imaging application scene, a lower optical power can be maintained at each projection, and the limits of power consumption and laser eye safety can be satisfied.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings, so that the above objects, features and advantages of the present invention — which form its core idea — can be more clearly understood. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
In an embodiment of the present invention, a plurality of discrete collimated light beams projected in a discrete pattern by the asynchronous discrete light beam projector 2 are reflected by the target object 3, and the partially reflected collimated light beams are received by the photodetector array 101. Each photodetector obtains the flight time t of the corresponding beam from emission to reception, so that the flight distance s = ct is obtained from the speed of light c, and the depth information of each surface position of the target object 3 irradiated by the discrete beams can thus be measured. These discrete-position depth data points construct point cloud data that can reproduce the 3D morphology of the object, enabling 3D imaging of the target object 3. The plurality of discrete collimated light beams are cone-shaped (tapered) beams.
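The flight-distance relation s = ct above can be expressed as a minimal sketch; assuming the projector and detector are effectively co-located, the depth is half the round-trip flight distance:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(t_flight_s: float) -> float:
    """Depth of a target point from the measured round-trip flight time.

    The beam travels to the target and back, so the flight distance is
    s = c * t and the one-way depth is s / 2 (co-located optics assumed).
    """
    s = C * t_flight_s  # total flight distance in metres
    return s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
depth = tof_depth(10e-9)
```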
In an embodiment of the invention, the number of discrete collimated light beams is between two and several tens of thousands, for example from 2 up to tens of thousands of beams.
In an embodiment of the invention, the asynchronous ToF discrete point cloud based 3D imaging method provided by the invention comprises a driving circuit connected with the asynchronous discrete light beam projector 2 and the photodetector array imager 1. The drive circuitry is used to control the asynchronous discrete beam projector 2 and the photodetector array imager 1 to be turned on or off simultaneously.
The driving circuit may be a separate dedicated circuit, such as a dedicated SoC chip, an FPGA chip, an ASIC chip, or the like, or may include a general-purpose processor: for example, when the depth camera is integrated into an intelligent terminal such as a mobile phone, television, or computer, the processor in the terminal may serve as at least part of the driving circuit.
Fig. 3 is a schematic diagram of a structure of an asynchronous discrete beam projector according to the present invention, and as shown in fig. 3, the asynchronous discrete beam projector 2 includes an edge-emitting laser 201 and a beam projector 202 disposed on an optical path;
the edge-emitting laser 201 is used for projecting laser to the beam projector 202;
the beam projector 202 is configured to sequentially project the incident laser light to different regions of the target object 3 in a time sequence within a preset time period.
In one embodiment of the invention, the beam projector 202 is provided with a plurality of sets of beam projection ports for projecting a plurality of discrete collimated beams of light onto different areas of the target object 3 by controlling the sets of beam projection ports chronologically in sequence over the predetermined period of time. That is, each group of beam projection ports is opened once in a preset time period to project a plurality of discrete collimated beams, and when one group of beam projection ports is opened, the other groups of beam projection ports are closed. The opening and closing of the light beam projection port can be realized by driving a stop block through electromagnetic driving or a micro motor.
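The one-group-open-at-a-time sequencing described above can be sketched as follows; the callable port groups and the equal time slots are illustrative assumptions:

```python
def run_projection_cycle(port_groups, period_s=0.010):
    """Open each group of beam-projection ports once, in time sequence,
    within the preset period; while one group fires, the others stay
    closed, as in the asynchronous projection scheme above.

    port_groups is a list of callables, each firing one group of ports;
    in a real device the firing and pacing would be done by hardware.
    """
    fired = []
    for group_id, fire in enumerate(port_groups):
        fired.append(group_id)  # record which group is currently open
        fire()                  # project this group's discrete beams
        # a real driver would wait period_s / len(port_groups) here
    return fired

# Two port groups fired once each, in order, within one period.
order = run_projection_cycle([lambda: None, lambda: None])
```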
In one embodiment of the present invention, the beam projector 202 is capable of splitting incident light from the edge-emitting laser 201 into any number of collimated beams. The emitting direction of the edge-emitting laser 201 and the projecting direction of the beam projector 202 may be the same, or may be at 90 degrees or any angle required for the optical system design.
In an embodiment of the invention, the inner surface of the beam splitting projector is processed with a micro-nano structured optical chip and matched with an optical lens. The beam splitting projector can perform the function of splitting incident light from the edge-emitting laser 201 into any number of collimated beams. The emission direction of the edge-emitting laser 201 and the projection direction of the beam splitting projector may be the same, or may be at 90 degrees or any angle required for the optical system design.
In an embodiment of the present invention, the asynchronous discrete beam projector 2 comprises a plurality of sets of edge-emitting lasers 201 and beam projectors 202 arranged on an optical path;
the edge-emitting laser 201 is used for projecting laser to the beam projector 202; the beam projector 202 for projecting the incident laser light into a plurality of discrete collimated beams;
A plurality of discrete collimated light beams are projected to different regions of the target object 3 by turning on, in time sequence within the preset period, one set of the edge-emitting laser 201 and the beam projector 202 disposed on an optical path at a time. That is, each such set is turned on once within the preset time period to project a plurality of discrete collimated beams, and while one set is turned on, the other sets are turned off.
Fig. 4 is a schematic view showing another structure of the asynchronous discrete beam projector of the present invention, and as shown in fig. 4, the asynchronous discrete beam projector 2 includes a laser array 203, a collimator lens 204, and a beam splitting device 205 disposed on an optical path;
the laser array 203 comprises a plurality of lasers arranged in an array, the plurality of lasers are divided into a plurality of laser emission groups, and lasers with a first order of magnitude are projected to the collimating lens 204 sequentially through each laser emission group in time sequence within a preset time period;
the collimating lens 204 is configured to collimate the incident multiple laser beams and emit collimated light beams of a first order of magnitude;
the beam splitting device 205 is configured to split the incident collimated light beam of the first order of magnitude and emit a collimated light beam of a second order of magnitude;
the second order of magnitude is greater than the first order of magnitude.
In an embodiment of the present invention, each laser or each group of lasers may be controlled to emit light individually, so that by controlling each group of lasers to emit light in a preset time period, projection of different areas of the target object 3 is achieved. The number of laser emission groups corresponds to the number of regions of the target object 3.
In an embodiment of the present invention, the laser array 203 may be formed by a plurality of Vertical Cavity Surface Emitting Lasers (VCSELs) or a plurality of Edge Emitting Lasers (EELs). The multiple laser beams become highly parallel collimated beams after passing through the collimating lens 204. The beam splitting device 205 may be used to obtain more collimated beams, as required by the number of discrete beams in the practical application. The beam splitting device 205 may employ a diffractive optical element (DOE), a spatial light modulator (SLM), or the like.
Fig. 5 is a schematic structural diagram of an optical imaging lens in the present invention, and as shown in fig. 5, the photodetector array imager 1 includes an optical imaging lens 102, a photodetector array 101, and a driving circuit;
the optical detector array 101 comprises a plurality of optical detectors distributed in an array; the plurality of optical detectors are divided into a plurality of optical detector groups, and each optical detector group corresponds to one laser emission group;
the optical detector group is used for receiving collimated light beams which are emitted by the corresponding laser emitting group and then reflected by the target object 3;
the optical imaging lens 102 is configured to enable a direction vector of the collimated light beam entering the light detector array 101 through the optical imaging lens 102 to have a one-to-one correspondence with the light detectors;
the driving circuit is configured to measure propagation times of the plurality of discrete collimated light beams and further generate depth data of a plurality of regions on the surface of the target object 3, and further generate depth data of the surface of the target object 3 according to the depth data of the plurality of regions of the target object 3.
In an embodiment of the present invention, the optical detector groups correspond one-to-one with the laser emission groups. That is, once the spatial relationship between each detector and the projected collimated light beams has been calibrated, in asynchronous mode only the optical detector group corresponding to the laser group currently projecting may be turned on, while the other optical detector groups are turned off; this reduces the power consumption of the optical detectors and the system's noise interference. The number of optical detector groups corresponds to the number of regions of the target object 3.
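The gating logic above, where only the detector group paired with the currently firing laser group is enabled, reduces to a one-hot enable mask. A minimal sketch (function name and boolean-mask representation are illustrative assumptions):

```python
def detector_gate(active_laser_group: int, num_groups: int) -> list[bool]:
    """In asynchronous mode, enable only the photodetector group paired with
    the currently firing laser group; all others stay off to cut power
    consumption and noise interference."""
    return [g == active_laser_group for g in range(num_groups)]

# While laser group 2 projects, only detector group 2 is on
print(detector_gate(2, 4))
```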
In an embodiment of the present invention, to filter out background noise, a narrowband filter is usually installed inside the optical imaging lens 102, so that only incident collimated light beams within a preset wavelength band reach the photodetector array 101. The preset band may be centered on the wavelength of the incident collimated light beam, or extend from 50 nanometers below to 50 nanometers above that wavelength. The photodetector array 101 may be arranged periodically or aperiodically. Each photodetector, together with an auxiliary circuit, can measure the time of flight of a collimated beam. Depending on the number of discrete collimated light beams, the photodetector array 101 may be a combination of multiple single-point photodetectors or a sensor chip integrating multiple photodetectors. To further optimize detector sensitivity, the illumination spot of one discrete collimated light beam on the target object 3 may correspond to one or more light detectors. When several light detectors share the same illumination spot, their signals can be connected by a circuit, effectively combining them into a light detector with a larger detection area.
In an embodiment of the invention, the plurality of discrete collimated light beams are periodically arranged in a predetermined shape, that is, in a geometrically regular distribution.
Fig. 6(a), (b), and (c) are schematic diagrams of the periodic arrangement of a plurality of discrete collimated light beams in the present invention, and as shown in fig. 6, in an embodiment of the present invention, the preset shape includes any one of the following shapes or any plurality of shapes that can be switched with each other:
- a straight line;
-a triangle;
-a quadrilateral;
-a rectangle;
-circular;
-a hexagon;
-a pentagon.
The periodic arrangement of the plurality of discrete collimated light beams is not limited to the above shapes; other shapes may also be used. As shown in fig. 6(a), when the preset shape is a rectangle, the unit arrangement of the collimated light beams in one period is a rectangle that repeats periodically in space. As shown in fig. 6(b), when the preset shape is a triangle, the unit arrangement in one period is a triangle that repeats periodically in space. As shown in fig. 6(c), when the preset shape is a hexagon, the unit arrangement in one period is a hexagon that repeats periodically in space. Because any practical implementation is constrained by its optical system, the actual cross-sectional arrangement of the collimated light beams may show distortion such as stretching or twisting, and the cross-sectional energy distribution of each beam may be a circle, a ring, an ellipse, or the like. Arrangements such as those in fig. 6 help simplify the spatial correspondence between the plurality of discrete collimated light beams and the photodetector array 101.
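The periodic unit-cell arrangements above can be sketched by generating ideal beam-spot coordinates, e.g. a rectangular lattice and a hexagonal lattice (offset rows). This is an illustrative sketch; the function names, pitch parameter, and ideal (undistorted) coordinates are assumptions.

```python
def periodic_spots(rows: int, cols: int, pitch: float) -> list[tuple[float, float]]:
    """Rectangular periodic arrangement: a rectangular unit cell repeated in space."""
    return [(c * pitch, r * pitch) for r in range(rows) for c in range(cols)]

def hex_spots(rows: int, cols: int, pitch: float) -> list[tuple[float, float]]:
    """Hexagonal periodic arrangement: every other row shifted by half a pitch,
    with row spacing pitch * sqrt(3)/2."""
    dy = pitch * 3 ** 0.5 / 2
    return [((c + 0.5 * (r % 2)) * pitch, r * dy)
            for r in range(rows) for c in range(cols)]

print(periodic_spots(2, 2, 1.0))
print(hex_spots(2, 2, 1.0))
```

In a real system these ideal positions would be warped by the optical distortion the text mentions (stretching, twisting).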
In an embodiment of the invention, the plurality of discrete collimated light beams are non-periodically arranged in another predetermined shape.
In an embodiment of the present invention, the aperiodic arrangement includes any one of the following arrangements or any plurality of arrangements that can be switched with each other:
-a random arrangement;
-a spatial coding arrangement;
-a quasi-lattice arrangement.
The aperiodic arrangement of the plurality of discrete collimated light beams is likewise not limited to the above; other arrangements may also be used. As shown in fig. 7(a), the spatial coding arrangement starts from a periodic arrangement and deletes part of the light beams, thereby encoding position into the pattern; the actual coding is not limited to the example in fig. 7(a). As shown in fig. 7(b), in the random arrangement the collimated light beams are distributed randomly, so that the similarity between the patterns at different positions is small or close to zero. As shown in fig. 7(c), in the quasi-lattice arrangement the collimated light beams are aperiodic at short range but periodic over long distances. Because any practical implementation is constrained by its optical system, the actual cross-sectional arrangement may show distortion such as stretching or twisting, and the cross-sectional energy distribution of each beam may be a circle, a ring, an ellipse, or the like. Arrangements such as those in fig. 7 facilitate uniform sampling of non-deterministic targets, optimizing the final 3D depth map.
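The spatial-coding arrangement described above, deleting some beams from a periodic grid so that local patterns become position-distinctive, can be sketched as follows. The function name and the repeating binary code are illustrative assumptions, not the patent's actual coding.

```python
def coded_spots(rows: int, cols: int, pitch: float,
                code: list[int]) -> list[tuple[float, float]]:
    """Spatial-coding arrangement: start from a rows x cols periodic grid and
    keep only the beams whose (repeating) code bit is 1, deleting the rest."""
    spots = []
    for r in range(rows):
        for c in range(cols):
            if code[(r * cols + c) % len(code)]:   # code bit 0 -> beam deleted
                spots.append((c * pitch, r * pitch))
    return spots

# 2x3 grid with code [1, 1, 0]: every third beam is removed
print(coded_spots(2, 3, 1.0, [1, 1, 0]))
```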
In an embodiment of the present invention, the light detector employs any one of the following light sensors:
-a CMOS light sensor;
-a CCD light sensor;
SPAD light sensor.
The light detector is not limited to the sensors listed above; other types of light sensor may also be used.
Fig. 8 is a flowchart illustrating steps of a 3D imaging method based on an asynchronous ToF discrete point cloud according to another embodiment of the present invention, and as shown in fig. 8, the 3D imaging method based on an asynchronous ToF discrete point cloud according to the present invention includes the following steps:
projecting a plurality of discrete collimated light beams sequentially to different areas of the target object 3 in time sequence within a preset time period by the asynchronous discrete light beam projector 2, so that the plurality of discrete collimated light beams irradiate the target object 3 after penetrating through the display panel;
receiving a plurality of discrete collimated light beams which penetrate through the display panel after being reflected by the target object 3 through the photodetector array imager 1, obtaining depth data of a plurality of areas of the target object 3 according to propagation time of the plurality of discrete collimated light beams, and further generating depth data of the surface of the target object 3 according to the depth data of the plurality of areas of the target object 3.
In an embodiment of the present invention, the photodetector array imager 1 maintains the spatial correspondence between the projected plurality of discrete collimated light beams and the photodetector array 101, so that each photodetector in the array can measure the propagation time of light with a ToF method based on either a time-continuously modulated beam or a pulse, and then compute the distance traveled from the speed of light.
The pulse-based ToF method, also known as the direct ToF method, works as follows: the light detector sensitively records the waveform of a light pulse, and comparing its arrival with the emission time of the pulse yields the time the collimated light beam took to travel between the asynchronous discrete light beam projector 2 and the photodetector array imager 1. In this method a single-photon avalanche diode (SPAD) is a commonly used photodetector: it can count the photons of an optical pulse sensitively and at high speed, so counting the photons arriving at different times within a pulse time window recovers the overall waveform of the pulse. The pulse-based ToF method places low power-consumption demands on the projector and helps suppress interference from multipath beams.
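The direct ToF measurement described above amounts to histogramming SPAD photon timestamps within the pulse window, taking the peak bin as the round-trip time, and converting to one-way distance. A minimal sketch (bin width, function name, and peak-picking by maximum count are illustrative assumptions):

```python
from collections import Counter

C = 299_792_458.0  # speed of light, m/s

def dtof_distance(photon_times_s: list[float], bin_width_s: float = 1e-10) -> float:
    """Direct (pulse-based) ToF sketch: histogram photon arrival times into
    fixed-width bins, take the most-populated bin as the pulse round-trip
    time, and convert to one-way distance via the speed of light."""
    bins = Counter(int(t / bin_width_s) for t in photon_times_s)
    peak_bin, _ = max(bins.items(), key=lambda kv: kv[1])
    round_trip = (peak_bin + 0.5) * bin_width_s   # bin-center estimate
    return C * round_trip / 2.0                   # one-way distance
```

With 100 ps bins the depth resolution of this naive peak-pick is about 1.5 cm; real systems refine it with waveform fitting.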
The ToF method based on a time-continuously modulated beam is also called the indirect ToF method. It works as follows: the time-continuous modulation is usually sinusoidal, and the light detector can be realized with CMOS or CCD photosensitive technology; the asynchronous discrete light beam projector 2 continuously emits high-frequency-modulated collimated light beams toward the target object 3, and the light detector array 101 receives them after reflection by the target object 3. Each photodetector records the phase difference between the emitted and received collimated beams, from which the depth of the corresponding position on the surface of the target object 3 is obtained. Because this method integrates energy over time, it achieves higher accuracy than pulsed measurement; and since the light source need not emit short, high-intensity pulses, different types of light sources and different modulation methods can be used.
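The phase-to-depth conversion described above is commonly done with four samples of the correlation waveform taken at 0°, 90°, 180° and 270° of the modulation period. A sketch under that common four-phase assumption (the patent does not specify the demodulation scheme; function name and sample convention are ours):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(a0: float, a1: float, a2: float, a3: float,
                  mod_freq_hz: float) -> float:
    """Indirect ToF sketch: four correlation samples at 0/90/180/270 degrees
    give the phase shift between emitted and received beams; the phase maps
    to distance within the unambiguous range c / (2 * f)."""
    phase = math.atan2(a3 - a1, a0 - a2)  # radians, may be negative
    phase %= 2 * math.pi                  # wrap into [0, 2*pi)
    return C * phase / (4 * math.pi * mod_freq_hz)
```

At 20 MHz modulation the unambiguous range is c / (2 f) ≈ 7.5 m; beyond that, the measured phase wraps around.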
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.
Claims (9)
1. A 3D imaging method based on an asynchronous ToF discrete point cloud, characterized by comprising the following steps:
projecting a plurality of discrete collimated beams sequentially in time sequence over a preset period of time by an asynchronous discrete beam projector onto different regions of a target object;
sequentially receiving the plurality of discrete collimated light beams reflected by the target object through a light detector array imager and measuring the propagation time of the plurality of discrete collimated light beams, further obtaining the depth data of a plurality of areas of the target object, and generating the depth data of the surface of the target object according to the depth data of the plurality of areas of the target object; the plurality of areas are adjacent areas;
the asynchronous discrete beam projector comprises an edge-emitting laser and a beam projector which are arranged on a light path;
the edge-emitting laser is used for projecting laser to the beam projector;
the light beam projector is used for projecting the incident laser to different areas of the target object in sequence according to time in a preset time period.
2. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 1 wherein the asynchronous discrete beam projector comprises a laser array, a collimating lens and a beam splitting device arranged on an optical path;
the laser array comprises a plurality of lasers arranged in an array; the lasers are divided into a plurality of laser emission groups, and within a preset time period each laser emission group in turn projects a first-order-of-magnitude number of laser beams to the collimating lens;
the collimating lens is used for collimating the incident multiple laser beams and then emitting collimated light beams with a first order of magnitude;
the beam splitting device is used for splitting the incident collimated light beam with the first order of magnitude to emit a collimated light beam with a second order of magnitude;
the second order of magnitude is greater than the first order of magnitude.
3. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 2, wherein the photodetector array imager comprises an optical imaging lens, a photodetector array and a driving circuit;
the optical detector array comprises a plurality of optical detectors distributed in an array; the optical detectors are divided into a plurality of optical detector groups, and each optical detector group corresponds to one laser emission group;
the optical detector group is used for receiving collimated light beams which are emitted by the corresponding laser emitting group and then reflected by the target object;
the optical imaging lens is used for enabling direction vectors of the collimated light beams which penetrate through the optical imaging lens and enter the light detector array to be in one-to-one correspondence with the light detectors;
the driving circuit is used for measuring the propagation time of the plurality of discrete collimated light beams and further generating depth data of the plurality of regions of the surface of the target object, and further generating the depth data of the surface of the target object according to the depth data of the plurality of regions of the target object.
4. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 1, wherein the plurality of discrete collimated light beams are periodically arranged in a predetermined shape.
5. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 4, wherein the preset shape comprises any one of the following shapes or any plurality of shapes that can be switched from one to another:
- a straight line;
-a triangle;
-a quadrilateral;
-a rectangle;
-circular;
-a hexagon;
-a pentagon.
6. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 1, wherein the plurality of discrete collimated light beams are non-periodically arranged in another predetermined shape; the aperiodic arrangement comprises any one of the following arrangements or any plurality of arrangements which can be switched with each other:
-a random arrangement;
-a spatial coding arrangement;
-a quasi-lattice arrangement.
7. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 1 wherein the beam projector comprises a first surface and a second surface, the first surface comprising a multi-grating structure;
the edge-emitting laser couples light into a beam projector to form a coupled-in beam, wherein the beam projector is configured to direct the coupled-in beam for total internal reflection between the first surface and the second surface;
the grating structure is configured to disrupt total internal reflection so that at least part of the coupled-in beam is coupled out of the beam projector, the coupled-out part forming a coupled-out beam; there are a plurality of the grating structures, divided into a plurality of grating structure groups, and within the preset time period one grating structure group at a time is switched on in time sequence so as to project the plurality of discrete collimated light beams onto different areas of the target object.
8. The asynchronous ToF discrete point cloud based 3D imaging method according to claim 1 wherein the asynchronous discrete beam projector comprises a plurality of sets of edge emitting lasers and beam projectors arranged in an optical path;
the edge-emitting laser is used for projecting laser to the beam projector; the beam projector is used for projecting the incident laser light into a plurality of discrete collimated light beams;
the plurality of discrete collimated light beams are projected onto different regions of the target object by sequentially controlling, in time sequence within the preset time period, which set of edge-emitting laser and beam projector arranged in the optical path is turned on.
9. A 3D imaging method based on an asynchronous ToF discrete point cloud, characterized by comprising the following steps:
projecting a plurality of discrete collimated light beams to different areas of a target object sequentially in time within a preset time period through an asynchronous discrete light beam projector, so that the plurality of discrete collimated light beams penetrate through a display panel and then irradiate the target object;
and receiving a plurality of discrete collimated light beams which penetrate through the display panel after being reflected by the target object through a light detector array imager, obtaining depth data of a plurality of areas of the target object according to the propagation time of the plurality of discrete collimated light beams, and further generating depth data of the surface of the target object according to the depth data of the plurality of areas of the target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910364140.XA CN110244318B (en) | 2019-04-30 | 2019-04-30 | 3D imaging method based on asynchronous ToF discrete point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110244318A CN110244318A (en) | 2019-09-17 |
CN110244318B true CN110244318B (en) | 2021-08-17 |
Family
ID=67883594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910364140.XA Active CN110244318B (en) | 2019-04-30 | 2019-04-30 | 3D imaging method based on asynchronous ToF discrete point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110244318B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112824935B (en) * | 2019-11-20 | 2023-02-28 | 深圳市光鉴科技有限公司 | Depth imaging system, method, device and medium based on modulated light field |
CN112824934B (en) * | 2019-11-20 | 2024-05-07 | 深圳市光鉴科技有限公司 | TOF multipath interference removal method, system, equipment and medium based on modulated light field |
CN111289990A (en) * | 2020-03-06 | 2020-06-16 | 浙江博升光电科技有限公司 | Distance measurement method based on vertical cavity surface emitting laser array |
CN111289954B (en) * | 2020-03-31 | 2022-03-15 | 四川长虹电器股份有限公司 | Point cloud division and track matching method for millimeter wave radar target tracking |
CN111398976B (en) * | 2020-04-01 | 2022-08-23 | 宁波飞芯电子科技有限公司 | Detection device and method |
CN111398977B (en) * | 2020-04-10 | 2021-12-03 | 深圳市灵明光子科技有限公司 | Imaging device capable of improving resolution, imaging method thereof and detection equipment |
WO2022032516A1 (en) * | 2020-08-12 | 2022-02-17 | 深圳市速腾聚创科技有限公司 | Laser radar and detection method therefor, storage medium, and detection system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106574964A (en) * | 2014-12-22 | 2017-04-19 | 谷歌公司 | Integrated camera system having two dimensional image capture and three dimensional time-of-flight capture with a partitioned field of view |
CN108646260A (en) * | 2018-07-02 | 2018-10-12 | 中国科学院西安光学精密机械研究所 | Staring type lens-free laser three-dimensional imaging device and imaging method |
CN109313267A (en) * | 2016-06-08 | 2019-02-05 | 松下知识产权经营株式会社 | Range-measurement system and distance measuring method |
CN109343070A (en) * | 2018-11-21 | 2019-02-15 | 深圳奥比中光科技有限公司 | Time flight depth camera |
CN109416399A (en) * | 2016-04-26 | 2019-03-01 | 深瞳科技公司 | 3-D imaging system |
CN109597211A (en) * | 2018-12-25 | 2019-04-09 | 深圳奥比中光科技有限公司 | A kind of projective module group, depth camera and depth image acquisition method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9651417B2 (en) * | 2012-02-15 | 2017-05-16 | Apple Inc. | Scanning depth engine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||