CN216448823U - Device for obtaining object surface depth information - Google Patents


Info

Publication number
CN216448823U
CN216448823U (application CN202120728817.6U)
Authority
CN
China
Prior art keywords
image
light
interference
stripe
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202120728817.6U
Other languages
Chinese (zh)
Inventor
于亚冰
王志玲
谭磊
朱华
Current Assignee
SG Micro Beijing Co Ltd
Original Assignee
SG Micro Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by SG Micro Beijing Co Ltd
Priority to CN202120728817.6U
Application granted
Publication of CN216448823U
Legal status: Active


Abstract

The utility model discloses a device for acquiring depth information of an object surface. The device projects coherent light onto the surface of a measured object and obtains a first fringe image from the light reflected by that surface together with a reference beam; a second fringe image with a larger fringe width is then computed from the first fringe image; and three-dimensional point cloud data representing the surface depth information of the measured object is obtained from the second fringe image. Compared with conventional laser scanning and time-of-flight methods, the device is lower in cost while maintaining measurement accuracy, and can be widely applied in fields such as three-dimensional face recognition and gesture recognition.

Description

Device for obtaining object surface depth information
Technical Field
The utility model relates to the technical field of three-dimensional recognition, in particular to a device for acquiring depth information of an object surface.
Background
With the rapid development of computer, information-processing and optoelectronic technologies, three-dimensional profile measurement has found wide application, with broad prospects and research significance in industrial manufacturing, product inspection, medical imaging, film and television special effects, face recognition, cultural relic protection and other fields. Optical three-dimensional profile measurement in particular offers non-contact operation, high measurement accuracy and high real-time response, and is widely regarded as the most promising three-dimensional profile measurement technology.
Existing optical three-dimensional profile measurement techniques mainly comprise the laser scanning method, the time-of-flight method and the grating projection method.
In the laser scanning method, a line laser scans the whole object under test by rotating either the laser or the object; the fringe pattern on the object surface deforms with the height of the object, a camera captures the corresponding images, and the three-dimensional surface data of the object is recovered through camera calibration and reconstruction. The method offers simple post-processing and high measurement accuracy. However, the scanning system is expensive, mechanical errors are introduced as the laser or the object moves, and calibration is required over the whole scan, so measurement is slow.
In the time-of-flight method, a laser emits two pulses: one is routed directly to a photoelectric sensor at the emitter, starting a timer, while the other reaches the sensor after reflecting off the surface of the measured object, stopping the timer. The height of the measured profile relative to the laser emitter is then determined from the timed interval and the speed of light in the medium. The drawbacks of the time-of-flight method are that it requires electronics with high response frequency and high resolution, making the measurement system expensive, and that it scans point by point, making measurement slow.
For three-dimensional profile measurement systems based on the grating projection method (optical triangulation), the difficulties lie in demodulating the phase function and in ensuring system stability. Moreover, because the fringes produced by grating projection are relatively widely spaced and do not capture fine detail of the measured surface, the demodulated result cannot accurately reflect the object height where the surface changes abruptly.
SUMMARY OF THE UTILITY MODEL
In view of the above problems, an object of the present invention is to provide an apparatus for acquiring depth information of an object surface, which has a lower cost than the conventional laser scanning method and time-of-flight method while ensuring measurement accuracy, and can be widely used in the fields of three-dimensional face recognition, gesture recognition, and the like.
According to an embodiment of the present utility model, there is provided an apparatus for obtaining depth information of an object surface, including a light source emitting system, an image sensor, and an image processing system. The light source emitting system is configured to project coherent light onto the surface of the object to be measured and onto the image sensor in a first mode, and onto the surface of the object to be measured alone in a second mode. The image sensor is configured to acquire an interference image of the surface in the first mode and an interference-free image of the surface in the second mode. The image processing system is configured to obtain a first fringe image based on the difference between the interference image and the interference-free image, to calculate from it a second fringe image whose fringe width differs from that of the first fringe image, and to obtain from the second fringe image three-dimensional point cloud data representing the depth information of the measured object surface.
Optionally, the image processing system includes: an image extraction module for obtaining the first fringe image based on the interference image and the non-interference image; a fringe calculation module for calculating the second fringe image based on the first fringe image; and a depth information calculation module for obtaining the three-dimensional point cloud data based on the second fringe image.
Optionally, the fringe width of the second fringe image is greater than the fringe width of the first fringe image.
Optionally, the image extraction module obtains the first fringe image by filtering the interference-free image from the interference image.
Optionally, the fringe calculation module is configured to obtain light intensity information of interference fringes in the first fringe image, and multiply the light intensity information by a preset sine wave that changes along a phase change direction of the interference fringes to obtain a new interference fringe, so as to obtain the second fringe image.
Optionally, the depth information calculation module establishes the proportional relationship between the second fringe image and the world coordinate system by two-point distance calibration, and obtains the three-dimensional point cloud data through a spatial restoration operation on the fringe image.
Optionally, the depth information calculation module calculates the three-dimensional point cloud data from the first fringe image and the second fringe image based on the binocular disparity method.
Optionally, the light source emitting system includes: a light source for emitting the coherent light; and the optical module is used for receiving the coherent light, converting the coherent light, projecting the converted light to the surface of the object to be measured and the image sensor in the first mode, and projecting the converted light to the surface of the object to be measured in the second mode.
Optionally, the optical module includes: beam expanders arranged perpendicular to the incident direction of the coherent light; and a parallel plate disposed at an angle to the incident direction of the coherent light, wherein the parallel plate works in a semi-transmitting, semi-reflecting mode in the first mode and in a total-reflection mode in the second mode.
Optionally, the parallel plate includes an electro-optic absorption film and an electrode, and the selective passing of the coherent light is realized by controlling the on/off of the voltage applied to the parallel plate.
Optionally, the light source includes: a semiconductor laser for emitting coherent laser light; and the modulator is used for carrying out high-frequency modulation on the coherent laser and transmitting the modulated coherent laser to the optical module.
Optionally, the image processing system further includes: the light source control module is used for turning off a light source in a third mode so as to facilitate the image sensor to acquire a two-dimensional image of the surface of the measured object in a natural light environment; and the three-dimensional reconstruction module is used for associating the three-dimensional point cloud data with the two-dimensional image so as to obtain a three-dimensional model of the measured object.
Optionally, the three-dimensional reconstruction module is configured to construct a surface contour of the measured object based on the three-dimensional point cloud data, and perform mapping processing on the surface contour by using the two-dimensional image to obtain a three-dimensional model of the measured object.
The device for acquiring object surface depth information described above illuminates the measured object with coherent light to obtain a first fringe image, computes from it a second fringe image with a larger fringe width, and obtains from the second fringe image three-dimensional point cloud data representing the surface depth information of the measured object.
In addition, the device for acquiring object surface depth information requires neither an array laser nor an array sensor and places low demands on the high-speed performance of the processing circuitry; compared with the conventional laser scanning and time-of-flight methods it is lower in cost while maintaining measurement accuracy, and it can be widely applied in fields such as three-dimensional face recognition and gesture recognition.
Furthermore, the device strips the interference-free image, which contains no interference-fringe information, out of the interference image carrying the interference-fringe information to obtain the first fringe image; this reduces the influence of laser-beam non-uniformity, removes laser speckle from the interference image, and improves the accuracy and reliability of the measurement.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 is a schematic structural diagram of an apparatus for acquiring depth information of an object surface according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an image processing system of the device for acquiring depth information of the object surface in FIG. 1;
fig. 3 is a flowchart illustrating a method for obtaining depth information of an object surface according to a second embodiment of the present invention.
Detailed Description
Various embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. Like elements in the various figures are denoted by the same or similar reference numerals. For purposes of clarity, the various features in the drawings are not necessarily drawn to scale. It should be understood that the specific embodiments described herein are merely illustrative of the utility model and are not intended to limit the utility model.
Fig. 1 is a schematic structural diagram of an apparatus for acquiring depth information of an object surface according to a first embodiment of the present invention. As shown in fig. 1, the apparatus 100 for obtaining depth information of an object surface includes a light source 110, an optical module 120, an image sensor 130, and an image processing system 140.
The light source 110 and the optical module 120 constitute the light source emitting system of the apparatus, which emits coherent light. Further, the light source 110 is, for example, a semiconductor laser for emitting coherent laser light, which has the advantages of small volume and long service life. The optical module 120 receives and converts the coherent light emitted by the light source 110 and projects the converted beam onto the surface of the object 101 and onto the image sensor 130. The image sensor 130 is, for example, a CCD camera or a CMOS camera, and captures images of the surface of the object 101 in real time. The image processing system 140 is configured to process the images obtained by the image sensor to obtain three-dimensional point cloud data representing the depth information of the surface of the object 101.
In another embodiment, the light source emitting system further includes a modulator for modulating the coherent laser light emitted from the semiconductor laser at a high frequency and emitting the modulated coherent laser light to the optical module 120.
The principle by which the apparatus 100 acquires the depth information of the surface of the object 101 is as follows. The coherent light emitted by the light source 110 is split; part of the beam is incident on the surface of the object 101, while the other part is incident on the image sensor 130 as reference light. The first reflected light from the surface of the object 101 and the reference light form, at the image sensor 130, an interference image whose fringe pattern is deformed by the height variation of the object surface. A first fringe image is then obtained from the interference image, and three-dimensional point cloud data representing the depth information of the surface of the object 101 is obtained from the first fringe image.
Deriving the first fringe image from the interference image in this way serves to reduce the influence of beam non-uniformity and to strip laser speckle out of the interference image. Specifically, after the interference image is obtained, all of the coherent light emitted by the light source 110 is directed onto the surface of the object 101; the image sensor 130 then receives the second reflected light from the surface and captures a non-interference image containing no interference-fringe information. The non-interference image is filtered out of the interference image to extract the interference fringes, yielding a first fringe image free of ambient-light interference.
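As a minimal sketch of this extraction step (synthetic data; the patent does not specify the exact filtering operation, so plain subtraction of the second-mode image from the first-mode image is assumed):

```python
import numpy as np

def extract_fringes(interference_img, no_interference_img):
    # Remove the fringe-free illumination/speckle background captured in
    # the second mode, keeping only the interference-fringe term.
    return interference_img.astype(float) - no_interference_img.astype(float)

# Synthetic demo: a non-uniform beam profile plus sinusoidal fringes
x = np.linspace(0.0, 1.0, 256)
background = 100.0 + 20.0 * x                     # uneven illumination
fringes = 30.0 * np.cos(2.0 * np.pi * 10.0 * x)   # interference fringes
interference = background + fringes               # first-mode image
no_interference = background                      # second-mode image

recovered = extract_fringes(interference, no_interference)
# recovered now equals the pure fringe term, free of the beam profile
```

In practice the two images would come from the image sensor's first and second modes rather than being synthesized.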
In one embodiment, the transformation process of the coherent light by the optical module 120 may be controlled such that it projects the transformed light to the surface of the object to be measured 101 and the image sensor 130 in the first mode, and projects the transformed light to the surface of the object to be measured 101 in the second mode.
Further, the optical module 120 may include a beam expander 121, a beam expander 122, a parallel plate 123, a parallel plate 124, and a beam expander 125. The beam expanders 121 and 122 are arranged perpendicular to the incident direction of the coherent light and change the beam diameter and divergence angle of the coherent laser. The parallel plate 123 is disposed at a first angle to the incident direction of the coherent light, and the parallel plate 124 at a second angle. The parallel plate 123 receives the beam shaped by the beam expander 122; in the first mode it operates in a semi-transmitting, semi-reflecting state, projecting part of that beam onto the surface of the object 101 and the remainder onto the parallel plate 124, from which it is reflected onto the image sensor 130. In the second mode the parallel plate 123 operates in a total-reflection state and projects the entire beam onto the surface of the object 101. For example, the parallel plate 123 includes an electro-optic absorption film and electrodes, and selective passage of the coherent light is achieved by switching the voltage applied to the parallel plate 123 on and off.
Further, the principle by which the image processing system 140 obtains the three-dimensional point cloud data representing the surface depth information of the object 101 from the first fringe image is as follows: a second fringe image whose fringe width is greater than that of the first fringe image is computed from the first fringe image, and the three-dimensional point cloud data is then obtained from the second fringe image.
Coherent light arriving at the image plane at different incident angles forms plane waves whose equal-phase fronts propagate across the image plane; the sum of the direction vectors of these plane waves and of the object-image wave defines the phase-change direction of the interference fringes. Hence, after the laser is modulated by the surface of the measured object, the light-intensity profile of the interference fringes in the image captured by the image sensor 130 is a sine wave of fixed period. Because the optical paths of the object image and of the illumination within their respective aperture-angle ranges are fixed, the variation of the interference fringes is caused mainly by fringe-position shifts that follow the surface-depth variation of the measured object. Let Di denote the depth change of the object image at a given position, P the effectively detectable intensity of the coherent part of the imaging beam, N the intensity outside the coherent part (for example, unfocused stray light and the incoherent part of the imaging light), X the distance along the phase-change direction of the fringes measured from a chosen origin, and let α, ω, λ and c denote the incident angle, angular frequency, wavelength and speed of the coherent light, respectively. The fringe intensity I can then be written as:
I = N + P · cos[(ω / c) · (X · sin α + Di)]
wherein the fringe spacing d along X is related to the wavelength of the coherent light and to the depth change Di of the object image by:
Di = λ = d × sin α
that is, the fringes repeat with spacing d = λ / sin α along X, and a depth change of Di = λ displaces the fringe pattern by one full spacing d.
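A short numerical check of this fringe model can be made as follows. The exact formula is reconstructed here from the listed variables, so treat the expression in the code as an assumption, and all parameter values as hypothetical; the check confirms that the synthetic intensity is periodic in X with spacing d = λ / sin α.

```python
import numpy as np

# Hypothetical parameters for the assumed two-beam fringe model
c = 3.0e8                       # speed of light, m/s
lam = 650e-9                    # wavelength, m
omega = 2.0 * np.pi * c / lam   # angular frequency
alpha = np.deg2rad(30.0)        # incident angle
N, P = 50.0, 40.0               # incoherent offset, coherent amplitude
Di = 0.0                        # flat surface: no depth change

def intensity(X):
    # Assumed model: I = N + P*cos((omega/c) * (X*sin(alpha) + Di))
    return N + P * np.cos((omega / c) * (X * np.sin(alpha) + Di))

d = lam / np.sin(alpha)         # expected fringe spacing along X
# One full fringe spacing returns the intensity to its peak value N + P
peak0 = intensity(0.0)
peak1 = intensity(d)
```

With these numbers the spacing is d = 650 nm / sin 30° = 1.3 µm, and the intensity at X = 0 and X = d is the same peak value N + P.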
since the coherent light projected onto the surface of the object to be measured in this embodiment is a plurality of equidistant parallel curve arrays and the stripes are fine, the amount of calculation for directly obtaining the depth information of the object to be measured on the first stripe image is very large. Therefore, interference fringes with larger fringe width are generated by simulating the incident angle of coherent light, firstly, light intensity information I of the interference fringes in a first fringe image is obtained, then the light intensity information is modulated by a sine function, for example, the obtained light intensity information I is multiplied by a preset sine wave which changes along the phase change direction of the interference fringes to obtain a new interference fringe, so that a second fringe image with the fringe width larger than that of the first fringe image is obtained, and if the wavelength of the sine wave which is multiplied by the light intensity information I along the X change direction is (d + delta), the fringe width of the new interference fringes is increased to be (1+ d/delta) times of the wavelength of the sine wave, and then the depth information of the object to be detected is obtained by identifying the amplified fringes.
Further, the image processing system 140 of this embodiment establishes the proportional relationship between the second fringe image and the world coordinate system by two-point distance calibration, and obtains the three-dimensional point cloud data through a spatial restoration operation on the fringe image.
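A minimal sketch of the two-point calibration idea (hypothetical coordinates and distance): the known physical distance between two marked points, divided by their distance in the image, gives the scale used for the spatial restoration.

```python
import math

def scale_from_two_points(p1_px, p2_px, known_distance_mm):
    # mm-per-pixel scale from one pair of calibration marks
    return known_distance_mm / math.dist(p1_px, p2_px)

# Hypothetical: two marks known to be 50 mm apart appear 100 px apart
mm_per_px = scale_from_two_points((10.0, 10.0), (70.0, 90.0), 50.0)

# An image distance can then be mapped into world units
world_mm = 240.0 * mm_per_px   # 240 px correspond to 120 mm
```

The patent does not detail the restoration operation itself; this sketch only shows how a single calibrated scale links image coordinates to world coordinates.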
In another embodiment, when effective depth information cannot be extracted from the second fringe image, the three-dimensional point cloud data can instead be computed from the first fringe image and the second fringe image by the binocular disparity method. As before, if the wavelength of the sine wave multiplied with the intensity I along the X direction is (d + δ), the effective incident angle of the new interference fringes differs from that of the first fringe image by (δ/d) radians, so the apparent height changes produced by the object-image depth at the two viewing angles can be computed by the binocular disparity method, yielding the three-dimensional point cloud data carrying the surface depth information of the measured object.
Binocular vision measurement acquires two images of the same target with a binocular camera and, based on similar-triangle geometry, computes the third-dimension distance from the coordinate difference of matched pixels found in the left and right images. The position difference of each matched pixel pair, mapped into the grey-scale range of the images, forms a disparity image. Once the disparity image is obtained, the three-dimensional coordinates of each of its pixels are computed from the binocular-camera similar-triangle principle, yielding the three-dimensional point cloud of the target object.
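The similar-triangle relation underlying binocular measurement can be sketched as follows (hypothetical camera parameters): depth Z = f·B / disparity, with f the focal length in pixels and B the baseline between the two views.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    # Similar triangles: Z = f * B / d; result is in the baseline's units.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Hypothetical: 800 px focal length, 60 mm baseline, 16 px disparity
z_mm = depth_from_disparity(16.0, 800.0, 60.0)
# Larger disparity means the matched point is closer to the cameras
z_near = depth_from_disparity(32.0, 800.0, 60.0)
```

Doubling the disparity halves the recovered depth, which is the behaviour the parallax image encodes in its grey levels.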
Further, the image processing system 140 is also configured to construct a three-dimensional model of the object 101 based on the three-dimensional point cloud data. For example, the image processing system 140 turns off the light source 110 in the third mode and then obtains the three-dimensional model of the object 101 from the three-dimensional point cloud data together with the two-dimensional image acquired by the image sensor 130 in the natural-light environment: it constructs the surface contour of the object 101 from the three-dimensional point cloud data and then maps the two-dimensional image onto that surface contour to obtain the model.
It should be understood that the method for implementing three-dimensional reconstruction of the measured object according to the three-dimensional point cloud data of the present invention is not limited to the above-described embodiments, and other methods for implementing three-dimensional reconstruction of the measured object according to the three-dimensional point cloud data in the field are also applicable to the present invention.
Fig. 2 is a schematic diagram of the image processing system of the device for acquiring depth information of the object surface in fig. 1.
As shown in fig. 2, the image processing system 140 includes an image extraction module 141, a fringe calculation module 142, a depth information calculation module 143, a light source control module 144, and a three-dimensional reconstruction module 145.
The image extraction module 141 is configured to obtain the first fringe image from the interference image P1 acquired by the image sensor 130 in the first mode and the non-interference image P2 acquired by the image sensor 130 in the second mode.
The fringe calculation module 142 is configured to calculate, based on the first fringe image, a second fringe image whose fringe width is greater than that of the first. Specifically, the module 142 generates interference fringes of larger width by simulating a different incident angle of the coherent light: it first obtains the intensity profile I of the interference fringes in the first fringe image and then modulates it with a sine function, for example by multiplying I with a preset sine wave varying along the phase-change direction of the fringes, producing new interference fringes and hence the second fringe image.
The depth information calculation module 143 is configured to obtain the three-dimensional point cloud data from the second fringe image. In one embodiment, the module 143 establishes the proportional relationship between the second fringe image and the world coordinate system by two-point distance calibration and obtains the three-dimensional point cloud data through a spatial restoration operation on the fringe image. In another embodiment, the module 143 computes the three-dimensional point cloud data from the first fringe image and the second fringe image by the binocular disparity method.
The light source control module 144 is configured to turn off the light source 110 in the third mode. The three-dimensional reconstruction module 145 is used for obtaining a three-dimensional model of the measured object from the two-dimensional image P3, captured by the image sensor in the natural-light environment, and the three-dimensional point cloud data. The two-dimensional image P3 records the third reflected light produced by ambient light on the surface of the object 101. Further, the three-dimensional reconstruction module 145 constructs a surface contour of the measured object based on the three-dimensional point cloud data and maps the two-dimensional image P3 onto the surface contour to obtain the three-dimensional model.
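A minimal sketch of the mapping step, using a pinhole projection with hypothetical intrinsics fx, fy, cx, cy (the patent does not specify the camera model): each 3-D point is projected into the 2-D image P3 and picks up the image value at that pixel.

```python
import numpy as np

def texture_points(points_xyz, image, fx, fy, cx, cy):
    # Project each point with a pinhole model (u = fx*X/Z + cx,
    # v = fy*Y/Z + cy) and attach the image value found at that pixel.
    X, Y, Z = points_xyz.T
    u = np.clip((fx * X / Z + cx).astype(int), 0, image.shape[1] - 1)
    v = np.clip((fy * Y / Z + cy).astype(int), 0, image.shape[0] - 1)
    colors = image[v, u].reshape(len(points_xyz), -1)
    return np.hstack([points_xyz, colors])

# Hypothetical demo: 10x10 grey image, two points one metre from the camera
img = np.arange(100.0).reshape(10, 10)
pts = np.array([[0.0, 0.0, 1.0],
                [0.2, -0.3, 1.0]])
textured = texture_points(pts, img, fx=10.0, fy=10.0, cx=5.0, cy=5.0)
# Each textured row is (X, Y, Z, grey); the first point lands on pixel (5, 5)
```

A real implementation would also handle occlusion and interpolation; this only illustrates the point-to-pixel association.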
Fig. 3 is a flowchart illustrating a method for obtaining depth information of a surface of an object according to a second embodiment of the present invention. As shown in fig. 3, the method for acquiring depth information of an object surface of the present embodiment includes steps S310 to S350.
In step S310, coherent light is projected onto the surface of the object to be measured and onto the image sensor, and an interference image is obtained at the image sensor. Specifically, the coherent light emitted by the light source is split; part of the beam is incident on the surface of the object, while the other part is incident on the image sensor as reference light. The first reflected light from the object surface and the reference light form, at the image sensor, an interference image whose fringe pattern is deformed by the height variation of the object surface.
In step S320, coherent light is projected onto the surface of the object to be measured, and an interference-free image is obtained on the image sensor. Specifically, after the interference image is obtained, all coherent light emitted by the light source is incident on the surface of the object to be measured, then the image sensor obtains second reflected light reflected by the surface of the object to be measured, and a non-interference image without interference fringe information is shot.
In step S330, a first fringe image is obtained from the interference image and the non-interference image. Specifically, the non-interference image is filtered out of the interference image to extract the interference fringes and obtain the first fringe image, which reduces the influence of beam non-uniformity and strips laser speckle from the interference image.
In step S340, a second fringe image is calculated based on the first fringe image. Specifically, interference fringes of larger width are generated by simulating a different incident angle of the coherent light: the intensity profile of the interference fringes in the first fringe image is obtained first and is then modulated by a sine function, for example by multiplying it with a preset sine wave varying along the phase-change direction of the fringes, producing new interference fringes and hence a second fringe image whose fringe width is greater than that of the first.
In step S350, three-dimensional point cloud data is obtained based on the second fringe image, and a three-dimensional model of the object to be measured is constructed based on the point cloud data. Specifically, the proportional relationship between the second fringe image and the world coordinate system can be established by two-point distance calibration and the three-dimensional point cloud data obtained through a spatial restoration operation on the fringe image; alternatively, the point cloud data can be computed from the first and second fringe images by the binocular disparity method. A surface contour of the measured object is then constructed from the three-dimensional point cloud data, and a two-dimensional image, which records the third reflected light produced by ambient light on the object surface, is mapped onto that contour to obtain the three-dimensional model of the measured object.
It should be understood that the method of three-dimensional reconstruction of the measured object from the three-dimensional point cloud data in step S350 is not limited to the above embodiment; other methods in the art for reconstructing a measured object in three dimensions from point cloud data are equally applicable to the present utility model.
In summary, the device and method for acquiring depth information of an object surface according to the present utility model illuminate the measured object with coherent light to obtain a first fringe image, compute from it a second fringe image with a larger fringe width, and derive from the second fringe image three-dimensional point cloud data representing the depth information of the surface of the measured object.

In addition, the device for acquiring depth information of an object surface does not need an array laser and sensor and places low demands on the speed of the processing circuitry; compared with the traditional laser scanning and time-of-flight methods it therefore has a lower cost while maintaining measurement accuracy, and it can be widely applied in fields such as three-dimensional face recognition and gesture recognition.

Furthermore, by stripping the non-interference image, which carries no fringe information, from the interference image, which does, to obtain the first fringe image, the device reduces the influence of laser beam non-uniformity and removes laser speckle from the interference image, thereby improving the accuracy and reliability of the measurement.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between those entities or actions. Likewise, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to that process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
The embodiments described above do not set forth every detail, nor do they limit the utility model to the specific embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the utility model and its practical application, thereby enabling others skilled in the art to make best use of the utility model, with various modifications suited to the particular use contemplated. The scope of the utility model is to be determined by the following claims.

Claims (12)

1. An apparatus for acquiring depth information of an object surface, which is characterized by comprising a light source emitting system, an image sensor and an image processing system,
the light source emission system comprises a light source and an optical module, the light source being configured to emit coherent light and the optical module being configured to receive and convert the coherent light, to project the converted light onto the surface of a measured object and onto the image sensor in a first mode, and to project the converted light onto the surface of the measured object in a second mode,
the image sensor is used for acquiring an interference image of the surface of the measured object in the first mode and acquiring an interference-free image of the surface of the measured object in the second mode,
the image processing system is used for obtaining three-dimensional point cloud data representing the surface depth information of the measured object based on the interference image and the non-interference image.
2. The apparatus of claim 1, wherein the image processing system comprises:
an image extraction module, configured to obtain a first fringe image based on the interference image and the non-interference image;
a fringe calculation module, configured to calculate a second fringe image based on the first fringe image; and
a depth information calculation module, configured to obtain the three-dimensional point cloud data based on the second fringe image.
3. The apparatus of claim 2, wherein the fringe width of the second fringe image is greater than the fringe width of the first fringe image.
4. The apparatus of claim 2, wherein the image extraction module obtains the first fringe image by filtering the non-interference image out of the interference image.
5. The apparatus according to claim 2, wherein the fringe calculation module is configured to obtain light intensity information of interference fringes in the first fringe image, and multiply the light intensity information by a preset sine wave that changes along a phase change direction of the interference fringes to obtain new interference fringes, so as to obtain the second fringe image.
6. The apparatus according to claim 2, wherein the depth information calculation module establishes a proportional relationship between the second fringe image and a world coordinate system by calibrating the distance between two points, and obtains the three-dimensional point cloud data through a spatial restoration operation on the fringe image.
7. The apparatus of claim 2, wherein the depth information calculation module calculates the three-dimensional point cloud data from the first fringe image and the second fringe image based on a binocular disparity method.
8. The apparatus of claim 1, wherein the optical module comprises:
a beam expander, arranged perpendicular to the incidence direction of the coherent light; and
a parallel plate, disposed at an angle to the incidence direction of the coherent light,
wherein the parallel plate operates in a semi-transmissive, semi-reflective mode in the first mode and in a totally reflective mode in the second mode.
9. The apparatus of claim 8, wherein the parallel plate comprises an electro-optical absorption film and electrodes, and selective passage of the coherent light is achieved by switching the voltage applied across the parallel plate on and off.
10. The apparatus of claim 1, wherein the light source comprises:
a semiconductor laser for emitting coherent laser light; and
a modulator, configured to apply high-frequency modulation to the coherent laser light and to transmit the modulated coherent laser light to the optical module.
11. The apparatus of claim 1, wherein the image processing system further comprises:
a light source control module, configured to turn off the light source in a third mode so that the image sensor can capture a two-dimensional image of the surface of the measured object under ambient light; and
a three-dimensional reconstruction module, configured to associate the three-dimensional point cloud data with the two-dimensional image to obtain a three-dimensional model of the measured object.
12. The apparatus of claim 11, wherein the three-dimensional reconstruction module is configured to construct a surface contour of the measured object based on the three-dimensional point cloud data, and to map the two-dimensional image onto the surface contour to obtain the three-dimensional model of the measured object.
CN202120728817.6U 2021-04-09 2021-04-09 Device for obtaining object surface depth information Active CN216448823U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202120728817.6U CN216448823U (en) 2021-04-09 2021-04-09 Device for obtaining object surface depth information


Publications (1)

Publication Number Publication Date
CN216448823U true CN216448823U (en) 2022-05-06

Family

ID=81346750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202120728817.6U Active CN216448823U (en) 2021-04-09 2021-04-09 Device for obtaining object surface depth information

Country Status (1)

Country Link
CN (1) CN216448823U (en)


Legal Events

Date Code Title Description
GR01 Patent grant