CN114332341A - Point cloud reconstruction method, device and system

Info

Publication number
CN114332341A
Authority
CN
China
Prior art keywords
point cloud
dimensional point
dimensional
light
signal
Legal status
Pending
Application number
CN202011065325.XA
Other languages
Chinese (zh)
Inventor
周开城
宋展
谷飞飞
罗洪鹍
唐梦研
周俊伟
Current Assignee
Huawei Technologies Co Ltd
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Huawei Technologies Co Ltd
Shenzhen Institute of Advanced Technology of CAS
Application filed by Huawei Technologies Co Ltd and Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011065325.XA
Publication of CN114332341A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a point cloud reconstruction method, device and system, and relates to the technical field of three-dimensional reconstruction. The system comprises: a light projection module for projecting a first light signal and a second light signal onto the surface of an object to be measured; a photosensitive module for receiving the first light signal and a third light signal obtained after the surface of the object to be measured acts on the first light signal, and for receiving the second light signal and a fourth light signal obtained after the surface of the object to be measured acts on the second light signal; a point cloud reconstruction module for reconstructing a first three-dimensional point cloud based on the first and third light signals, and a second three-dimensional point cloud based on the second and fourth light signals; and a point cloud fusion module for fusing the first and second three-dimensional point clouds to obtain a target point cloud of the object to be measured. The precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, while its density is lower.

Description

Point cloud reconstruction method, device and system
Technical Field
The application relates to the technical field of three-dimensional reconstruction, in particular to a point cloud reconstruction method, device and system.
Background
Time-of-flight (TOF) depth detection is an important three-dimensional reconstruction technology in the field of computer vision, and is widely applied in fields such as automatic driving, augmented reality (AR)/virtual reality (VR), and industrial robotics. In general, TOF technology calculates the depth information of an object to be measured by continuously transmitting light pulses to it and measuring the interval between the emission of a pulse and the reception of its reflection by a sensor; the measured distance is then d = c·Δt/2, where c is the speed of light and Δt is the measured flight time. When three-dimensional reconstruction is carried out through TOF technology, the method has the advantages of strong anti-interference performance, high detection speed, high three-dimensional point cloud density, and the like.
However, a three-dimensional point cloud reconstructed from TOF depth measurements of the object to be measured suffers from multipath reflection: light may reach the sensor along several paths, so the accuracy of the reconstructed three-dimensional point cloud is poor, which limits its application in scenarios with higher measurement accuracy requirements.
Disclosure of Invention
The application provides a point cloud reconstruction method, device and system, which realize high-precision, high-density three-dimensional point cloud reconstruction by fusing a low-precision dense point cloud (i.e., one with high point cloud density) with a high-precision sparse point cloud (i.e., one with low point cloud density).
In order to achieve the above purpose, the present application provides the following technical solutions:
In a first aspect, the present application provides a point cloud reconstruction system, comprising: a light projection module for projecting a first light signal and a second light signal onto the surface of an object to be measured. The first light signal is used for reconstructing a first three-dimensional point cloud of the object to be measured, and the second light signal is used for reconstructing a second three-dimensional point cloud of the object to be measured, where the precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is lower than that of the second three-dimensional point cloud. A photosensitive module for receiving the first light signal and a third light signal obtained after the surface of the object to be measured acts on the first light signal, and for receiving the second light signal and a fourth light signal obtained after the second light signal is reflected by the surface of the object to be measured. A point cloud reconstruction module for reconstructing the first three-dimensional point cloud based on the first and third light signals, and the second three-dimensional point cloud based on the second and fourth light signals. A point cloud fusion module for fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be measured.
Based on the system, the first three-dimensional point cloud with high precision and low density and the second three-dimensional point cloud with low precision and high density can be fused, so that the point cloud with high precision and high density can be reconstructed for the object to be measured.
In one possible design, if the first light signal is a light signal for projecting a preset coding pattern, the first three-dimensional point cloud is a point cloud constructed by the structured light technique. If the first light signal is a light pulse signal, the first three-dimensional point cloud is a point cloud constructed based on the time-of-flight (TOF) technique.
With this possible design, the point cloud reconstruction system can reconstruct the first three-dimensional point cloud by means of structured light techniques or TOF techniques.
In another possible design, if the first light signal is a light signal for projecting a preset coding pattern, the light projection module includes a light-shielding grating (or a light-shielding mask) for modulating the light source signal of the light projection module into a light signal that projects the preset coding pattern.
With this possible design, miniaturization of the light projection module projecting the first light signal can be achieved.
In another possible design, if the second light signal is a light pulse signal, the second three-dimensional point cloud is a point cloud constructed by the TOF technique.
Through the possible design mode, reconstruction of the second three-dimensional point cloud through the TOF technology can be achieved.
In another possible design manner, the point cloud fusion module is specifically configured to correct the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain the target point cloud.
In another possible design manner, the point cloud fusion module is specifically configured to determine a first triangulated point cloud based on the first three-dimensional point cloud, where the first triangulated point cloud includes n (n is a positive integer) triangular faces; and to determine a second triangulated point cloud based on the first three-dimensional point cloud and the second three-dimensional point cloud, where the second triangulated point cloud includes n sub-region faces. Here, the n triangular faces correspond one-to-one with the n sub-region faces. Next, the motion transformation relationship between the n triangular faces and the n sub-region faces is determined. Then, the second three-dimensional point cloud is corrected according to the motion transformation relationship to obtain the target point cloud.
In another possible design, the point cloud fusion module is further configured to align the first three-dimensional point cloud and the second three-dimensional point cloud before correcting the second three-dimensional point cloud based on the first three-dimensional point cloud.
In a second aspect, the present application provides a point cloud reconstruction method, including: acquiring a first three-dimensional point cloud and a second three-dimensional point cloud of an object to be measured; and fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be measured. The precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is lower than that of the second three-dimensional point cloud.
In another possible design, the first three-dimensional point cloud is a point cloud constructed by the structured light technique or the time-of-flight (TOF) technique, and the second three-dimensional point cloud is a point cloud constructed by the TOF technique.
In another possible design, if the first three-dimensional point cloud is a point cloud constructed by the structured light technique, the light signal projected onto the surface of the object to be measured is a light signal for projecting a preset coding pattern. The preset coding pattern is the pattern projected by a light signal obtained by modulating the light source signal of the light projection module with the light-shielding grating.
In another possible design manner, the "fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain the target point cloud of the object to be measured" includes: and correcting the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain a target point cloud.
In another possible design manner, the above "fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain the target point cloud of the object to be measured" includes: determining a first triangulated point cloud based on the first three-dimensional point cloud, the first triangulated point cloud including n (n is a positive integer) triangular faces; determining a second triangulated point cloud based on the first three-dimensional point cloud and the second three-dimensional point cloud, the second triangulated point cloud including n sub-region faces, where the n triangular faces correspond one-to-one with the n sub-region faces; determining the motion transformation relationship between the n triangular faces and the n sub-region faces; and correcting the second three-dimensional point cloud according to the motion transformation relationship to obtain the target point cloud. A sketch of this procedure is given below.
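For illustration only, the following is a minimal Python sketch of this triangulation-based fusion, assuming numpy and scipy are available and that the two clouds are already aligned in one coordinate system. The per-face least-squares plane fit stands in for the claimed motion transformation, and all names (fuse_point_clouds, fit_plane) are hypothetical, not part of the patented method.

```python
# Hypothetical sketch of the triangulation-based fusion described above.
# Assumes both clouds are already aligned in one coordinate system.
import numpy as np
from scipy.spatial import Delaunay

def fit_plane(pts):
    """Least-squares plane z = a*x + b*y + c through 3 or more points."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def fuse_point_clouds(sparse_xyz, dense_xyz):
    """Correct a low-precision dense cloud with a high-precision sparse cloud."""
    # 1. Triangulate the sparse cloud on the x-y reference plane,
    #    giving n triangular faces.
    tri = Delaunay(sparse_xyz[:, :2])

    # 2. Partition the dense cloud into n sub-region faces: each dense point
    #    is assigned to the triangle whose 2D projection contains it
    #    (find_simplex returns -1 for points outside the hull).
    face_id = tri.find_simplex(dense_xyz[:, :2])

    corrected = dense_xyz.copy()
    for f in range(len(tri.simplices)):
        members = face_id == f
        if not members.any():
            continue
        # 3. Stand-in for the per-face motion transformation: fit the plane
        #    of the accurate triangular face.
        plane = fit_plane(sparse_xyz[tri.simplices[f]])
        # 4. Correct the dense points by moving their z onto that plane.
        x, y = corrected[members, 0], corrected[members, 1]
        corrected[members, 2] = plane[0] * x + plane[1] * y + plane[2]
    return corrected
```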
In another possible design, before "correcting the second three-dimensional point cloud based on the first three-dimensional point cloud" above, the method further includes: the first three-dimensional point cloud and the second three-dimensional point cloud are aligned.
The description of the beneficial effects of the second aspect and any one of the possible design manners thereof may be the description of the beneficial effects of the first aspect and any one of the possible design manners thereof, and will not be repeated herein.
In a third aspect, the present application provides a point cloud reconstruction apparatus.
In one possible design, the point cloud reconstruction device is configured to perform any of the methods provided in the second aspect. The point cloud reconstruction device may be divided into functional modules according to any one of the methods provided by the second aspect. For example, each functional module may be divided according to a corresponding function, or two or more functions may be integrated into one processing module. For example, the point cloud reconstruction device may be divided, by function, into an acquisition unit, a fusion unit, and the like. For descriptions of the possible technical solutions executed by each divided functional module and their beneficial effects, reference may be made to the technical solutions provided by the second aspect or its corresponding possible designs, and details are not repeated here.
In another possible design, the point cloud reconstruction apparatus includes a memory and one or more processors, the memory being coupled to the one or more processors. The memory is used to store computer instructions, and the processor is configured to invoke the computer instructions to perform any of the methods provided by the second aspect and any of its possible designs.
In a fourth aspect, the present application provides a computer-readable storage medium, such as a non-transitory computer-readable storage medium, having stored thereon a computer program (or instructions) which, when run on a computer, causes the computer to perform any of the methods provided by any of the possible implementations of the second aspect.
In a fifth aspect, the present application provides a computer program product which, when run on a point cloud reconstruction device, causes any of the methods provided by any of the possible implementations of the second aspect to be performed.
In a sixth aspect, the present application provides a chip system, comprising a processor configured to call, from a memory, and run a computer program stored in the memory, so as to perform any method provided by the implementations of the second aspect.
It is understood that any one of the apparatuses, computer storage media, computer program products, or chip systems provided above can be applied to the corresponding methods provided above, and therefore, the beneficial effects achieved by the apparatuses, the computer storage media, the computer program products, or the chip systems can refer to the beneficial effects in the corresponding methods, and are not described herein again.
In the present application, the names of the point cloud reconstruction devices and systems are not limited to the devices or functional modules themselves, and in practical implementations, the devices or functional modules may be represented by other names. Insofar as the functions of the respective devices or functional modules are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic diagram of a point cloud reconstruction system according to an embodiment of the present disclosure;
fig. 2 shows a coding pattern encoded from 8 basic coding patterns according to an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of a computing device according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a point cloud reconstruction method provided in the embodiment of the present application;
fig. 5 is a schematic triangulation diagram of a first two-dimensional image and a second two-dimensional image according to an embodiment of the present application;
fig. 6 is a schematic diagram of four-corner subdivision of a first two-dimensional image according to an embodiment of the present application;
fig. 7 is a schematic diagram of a triangular surface in a first triangulated point cloud and a sub-area surface in a second triangulated point cloud provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a point cloud reconstruction apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a computer program product according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the embodiments of the present application, some terms or techniques referred to in the embodiments of the present application are described below:
1) point cloud
A collection of data points sampled from the appearance surface of an object may be referred to as a point cloud. Generally, the point cloud of an object may be measured with a three-dimensional measuring instrument.
The precision of a point cloud measures the difference between the point cloud of the object measured by the three-dimensional measuring instrument and the actual contour of the object. The higher the precision of the point cloud, the more closely it coincides with the actual contour of the object.
Generally, if the number of measured points on the appearance surface of the object is small and the distance between points is large, the point cloud obtained by the measurement may be referred to as a sparse point cloud.
If the measured points on the appearance surface of the object are numerous and closely spaced, the measured point cloud may be called a dense point cloud.
2) Other terms
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the embodiments of the present application, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "and/or" describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in the present application generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that, in the embodiments of the present application, the size of the serial number of each process does not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be understood that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also understood that the term "if" may be interpreted to mean "when", "where", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined..." or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining...", "in response to determining...", "upon detecting [a stated condition or event]", or "in response to detecting [a stated condition or event]", depending on the context.
It should be appreciated that reference throughout this specification to "one embodiment," "an embodiment," "one possible implementation" means that a particular feature, structure, or characteristic described in connection with the embodiment or implementation is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "one possible implementation" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The embodiments of the present application provide a point cloud reconstruction method, device and system, which obtain a target point cloud of an object to be measured by fusing a first three-dimensional point cloud and a second three-dimensional point cloud of the object.
As an example, the first three-dimensional point cloud may be a sparse point cloud of the object to be measured with higher accuracy obtained by the structured light technique, and the second three-dimensional point cloud may be a dense point cloud of the object to be measured with lower accuracy obtained by the TOF technique. The TOF technique may include a direct time of flight (dToF) technique and an indirect time of flight (iToF) technique, which are not described in detail herein.
Of course, the second three-dimensional point cloud may also be a point cloud constructed by technologies such as radar ranging or acoustic ranging, which is not limited in the embodiments of the present application.
In the following description of the embodiment of the present application, a point cloud reconstruction system provided in the embodiment of the present application is described by taking an example that a first three-dimensional point cloud is a sparse point cloud with higher accuracy of an object to be measured obtained by a structured light technique, and a second three-dimensional point cloud is a dense point cloud with lower accuracy of the object to be measured obtained by a TOF technique.
Referring to fig. 1, an embodiment of the present application provides a point cloud reconstruction system 10, where the point cloud reconstruction system 10 includes a light projection module 11, a light sensing module 12, a point cloud reconstruction module 13, and a point cloud fusion module 14.
The light projection module 11 may include a first light projection component and a second light projection component, and the first light projection component may be configured to generate and project a first light signal onto the surface of the object 101 to be measured, where the first light signal is used to reconstruct a first three-dimensional point cloud of the object to be measured. The second light projecting assembly may be configured to generate and project a second light signal onto the surface of the object 101 to be measured, the second light signal being used to reconstruct a second three-dimensional point cloud of the object to be measured. The precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is lower than that of the second three-dimensional point cloud.
Wherein the first light signal may be a light signal for projecting a preset coding pattern, and the preset coding pattern may be a pattern designed in advance by a developer. When the first light projection assembly projects the first light signal on the surface of the object 101 to be measured, the predetermined coding pattern is projected on the surface of the object 101 to be measured. If the surface of the object 101 to be measured is uneven, different points on the surface of the object 101 to be measured modulate the predetermined coding pattern, so as to reflect the modulated third optical signal to the photosensitive module 12. Or, it can be understood that, after the first optical signal is projected on the surface of the object 101 to be measured, the preset coding pattern is overlapped with the surface of the object 101 to be measured, so as to obtain an overlapped image of the preset coding pattern and the surface of the object 101 to be measured. In this way, the surface of the object 101 may reflect the third optical signal representing the superimposed image to the photosensitive module 12.
The light source of the first light projection assembly may be a light source that emits a continuous light signal, or may be a light source that emits a light pulse signal, which is not specifically limited in this embodiment of the present application.
Optionally, the preset coding pattern may be a pattern obtained by coding different coding patterns.
Exemplarily, referring to fig. 2, fig. 2 shows a coding pattern encoded from the 8 basic coding patterns provided by an embodiment of the present application. Fig. 2 (a) shows the 8 basic coding patterns. Fig. 2 (b) shows the coding pattern obtained by tiling the 8 patterns of fig. 2 (a) over a 27 × 36 coding grid, using windows of size 2 × 2 as coding windows. The grid intersection point of each coding window is a main coding point; for example, the grid intersection point 12 of the coding window 11 is a main coding point.
During encoding, each cell of the coding grid can be embedded with any one of the coding patterns shown in fig. 2 (a), so each cell serves as one coding element. A coding window of size 2 × 2 therefore contains 2 × 2 = 4 coding elements. Since the pattern embedded in each cell may be any one of the 8 patterns shown in fig. 2 (a), the coding elements of a 2 × 2 coding window admit 8^4 combinations, i.e., one coding window can represent 8^4 = 4096 distinct codes. The size of the coding grid is related to the hardware configuration of the projection device in the light projection module 11, and is not described herein again.
It should be understood that the coding patterns and coding scheme described above are merely exemplary, and the embodiments of the present application are not limited thereto. A quick numerical check of the window combinatorics follows.
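As an illustration only, the snippet below verifies the count of codewords for a 2 × 2 window over 8 primitives; window_code is a hypothetical stand-in for a real decoder, not part of the patented scheme.

```python
# Each cell of the coding grid holds one of 8 primitives, so a 2 x 2 coding
# window carries 4 coding elements and can represent 8**4 distinct codes.
NUM_PRIMITIVES = 8
WINDOW_CELLS = 2 * 2
print(NUM_PRIMITIVES ** WINDOW_CELLS)  # 4096 possible codewords

def window_code(cells):
    """Map the 4 primitive indices of a 2 x 2 window to a unique integer code."""
    code = 0
    for c in cells:          # cells: e.g. (3, 0, 7, 5), each index in 0..7
        code = code * NUM_PRIMITIVES + c
    return code              # 0 .. 8**4 - 1, unique per primitive arrangement

assert window_code((7, 7, 7, 7)) == NUM_PRIMITIVES ** WINDOW_CELLS - 1
```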
Of course, the first optical signal may also be an optical signal for projecting a speckle array, or may be an optical signal for projecting a regular pattern, for example, a uniform black-and-white stripe, and the like, which is not specifically limited in this embodiment of the present application.
Alternatively, the preset coding pattern may be encoded in advance by any device (e.g., a computer device) or apparatus with encoding processing capability, which then sends an encoding control signal to the first light projection assembly, so that the first light projection assembly can project the first light signal carrying the preset coding pattern. In this case, the first light projection assembly may be a projection device, such as a projector.
Optionally, the preset encoding pattern may also be etched on a Diffractive Optical Element (DOE) in advance through an etching device, so as to obtain a light-shielding grating with a preset encoding pattern structure. Alternatively, the predetermined coding pattern may be engraved on a mask (mask) to obtain a light-shielding mask having a predetermined coding pattern structure.
When the light-shielding grating or the light-shielding mask with the preset coding pattern structure is covered on the lens of the first light projection assembly, the light signal projected by the first light projection assembly can be modulated by the light-shielding grating or the light-shielding mask, and then the first light signal with the preset coding pattern can be obtained.
In this case, the first light projection assembly may be a vertical-cavity surface-emitting laser (VCSEL), a light-emitting diode (LED), a digital light processing (DLP) device, or the like, which is not limited in this embodiment.
By configuring the first light projecting component with a light shielding grating or a light shielding mask to generate the first light signal, it is possible to achieve efficient output of the first light signal and miniaturization of the first light projecting component.
The second optical signal may be an optical pulse signal, for example a homogeneous optical pulse signal.
In particular, the second light signal may be generated by modulating the emission frequency and/or intensity of the light source of the second light projecting assembly. When the second optical signal is projected onto the surface of the object 101 to be measured, the fourth optical signal is obtained by reflection on the surface of the object 101 to be measured.
The light source of the second light signal may be a surface light source or a dense light source, which is not limited in this embodiment of the present application. In this case, when the second three-dimensional point cloud is reconstructed by the TOF technique with a surface light source or a dense light source, the problem of multipath reflection is significant, so the second three-dimensional point cloud reconstructed in this way is a dense point cloud with lower accuracy.
Of course, the light source of the second optical signal may also be a sparse light source, and this embodiment of the present application is not limited thereto. At this time, when the second three-dimensional point cloud is reconstructed by the TOF technique, if the light source of the second optical signal is a sparse light source, the influence of multipath reflection can be effectively reduced, so that the reconstructed second three-dimensional point cloud is a sparse point cloud with higher accuracy. It can be seen that, in the embodiment of the present application, the high-precision and sparse first three-dimensional point cloud reconstructed by the structured light technology may be replaced by reconstructing the high-precision and sparse first three-dimensional point cloud by the TOF technology based on the sparse light source, which is not specifically limited in this respect.
Optionally, the second light projecting component may be a VCSEL, an LED, or a DLP, which is not limited in this embodiment of the present application.
It should be understood that the first and second light projecting components may share a light source or separate light sources may be used. When the first and second light projecting components share a light source, the light source may be a light source that is modulated and may generate a light pulse signal. In this case, the first light signal projected by the first light projecting assembly may be a light pulse signal for projecting a preset encoding pattern. This is not particularly limited.
It should be understood that the first and second light projecting components described above may be two separate sets of light projecting components. Of course, the first light projection assembly and the second light projection assembly may be the same light projection assembly.
When the first light projection assembly and the second light projection assembly are the same set of light projection assemblies, the assembly may include an optical device such as a VCSEL, an LED or a DLP device, as well as the light-shielding grating (or light-shielding mask) described above and a motorized stage. The motorized stage is used to move the light-shielding grating (or light-shielding mask) so that it can cover the lens of the light projection assembly, enabling the assembly to generate the first light signal.
In this configuration, the light projection assembly may project the first light signal and the second light signal in a time-sharing manner. That is, the motorized stage is controlled so that the light-shielding grating (or light-shielding mask) covers the projection lens of the light projection assembly, and the assembly generates and projects the first light signal; the motorized stage is then controlled to remove the light-shielding grating (or light-shielding mask) from the projection lens, and the assembly is controlled to generate and project the second light signal.
It can be seen that, in this case, the light projection assembly projects the light pulses of the light source and the coding pattern in a time-sharing manner, which will not be described in detail here.
It should be understood that, when the same set of light projection assemblies is used, the time interval between projecting the first light signal and the second light signal onto the object 101 to be measured is on the order of milliseconds. Therefore, even when the object 101 to be measured is in motion, the object detected by the first light signal and by the second light signal can be considered to be in the same motion state.
Of course, the moving speed of the object in motion is a moving speed smaller than a preset threshold. The value of the preset threshold is not limited in the embodiment of the present application.
Additionally, it should be understood that both the first optical signal and the second optical signal may be infrared optical signals.
And a photosensitive module 12 including a first photosensor and a second photosensor. The first light sensor may be configured to receive the third light signal. The third optical signal is an optical signal obtained after the first optical signal is projected on the surface of the object 101 to be measured and the preset coding pattern is modulated on the surface of the object 101 to be measured. The second light sensor may be configured to receive a fourth light signal. The fourth optical signal is the optical signal reflected to the second optical sensor by the surface of the object 101 to be measured after the second optical signal is projected on the surface of the object 101 to be measured.
The first optical sensor and the second optical sensor may be two independent optical sensors or may be the same optical sensor.
When the first optical sensor and the second optical sensor are the same optical sensor, the light projection module 11 projects the first optical signal and the second optical signal to the object 101 to be measured in a time-sharing manner, so that the optical sensor can receive the third optical signal and the fourth optical signal in a time-sharing manner.
Optionally, the optical sensor may be an optical sensor including only a single charge-coupled device (CCD), or may be an optical sensor including an array CCD, which is not specifically limited in this embodiment of the present application.
Alternatively, the light sensor may be a light sensor for detecting infrared light.
It can be seen that when the photosensitive module 12 receives the third light signal, it acquires the first image; when it receives the fourth light signal, it acquires the second image.
It should be understood that the first image and the second image are both grayscale images here.
In practical applications, the light projection module 11 and the light sensing module 12 may be two modules in a depth camera device. Of course, the light projection module 11 and the light sensing module 12 may be modules in two independent devices, which is not limited in the embodiment of the present application.
And a point cloud reconstruction module 13, configured to reconstruct a first three-dimensional point cloud according to the first optical signal and the third optical signal.
Optionally, the point cloud reconstruction module 13 may reconstruct the first three-dimensional point cloud according to a preset coding pattern corresponding to the first light signal and the first image corresponding to the third light signal.
For example, the point cloud reconstruction module 13 may perform main coding point detection on the first image to determine the position information of the coding window corresponding to each main coding point. Then, according to the determined position information of the coding window and the position information of the same coding window in the preset coding pattern corresponding to the first light signal, the point cloud reconstruction module 13 may determine, based on the triangulation principle (not detailed in this embodiment of the present application), the depth information of the pixel point corresponding to the main coding point of that coding window, within the binocular triangle structure formed by the light projection module 11, the light sensing module 12, and the object 101 to be measured. In this way, the point cloud reconstruction module 13 can determine the depth information of the pixel point corresponding to each main coding point in the first image.
In the binocular triangle structure, the light projection module 11 and the light sensing module 12 act as the two viewpoints. It should be understood that the distance (baseline) between the light projection module 11 and the light sensing module 12 is recorded in advance in the point cloud reconstruction module 13.
Then, the point cloud reconstruction module 13 may map the pixel corresponding to each main encoding point in the first image in a first preset coordinate system according to the position information and the depth information of the pixel corresponding to each main encoding point on the first image, so as to obtain the first three-dimensional point cloud. Wherein the first predetermined coordinate system is a three-dimensional coordinate system.
It can be seen that the number of points in the first three-dimensional point cloud is the same as the number of main encoding points in the preset encoding pattern.
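As background, the sketch below illustrates the triangulation relation used in such a binocular structure, assuming a rectified projector-sensor pair with known baseline and focal length; the variable names are illustrative, and a real system would also calibrate lens distortion and the projector model.

```python
# Depth from structured-light triangulation in a rectified projector-camera
# setup: a main coding point seen at column u_cam in the first image matches
# column u_proj in the preset coding pattern; their disparity gives depth.
def depth_from_disparity(u_cam, u_proj, focal_px, baseline_m):
    """Z = f * B / d, the classic triangulation relation."""
    disparity = u_cam - u_proj          # in pixels
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# Example: f = 800 px, baseline 0.05 m, disparity 20 px -> Z = 2.0 m
print(depth_from_disparity(u_cam=520, u_proj=500, focal_px=800, baseline_m=0.05))
```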
In order to improve the detection accuracy and robustness of the main coding points, the first light projection assembly may also project first light signals with different coding patterns onto the object 101 to be measured, that is, the first light projection assembly may project multiple coding patterns onto the object 101, which is not specifically limited in this embodiment of the application. For example, the multiple coding patterns may be grid stripes together with geometric coding figures, or the inverse images of the grid and the geometric coding figures, and the like.
In this way, the point cloud reconstruction module 13 can perform reconstruction of the first three-dimensional point cloud according to a corresponding algorithm, which is not described herein in detail in the embodiments of the present application.
The point cloud reconstruction module 13 is further configured to reconstruct a second three-dimensional point cloud according to the second optical signal and the fourth optical signal.
Optionally, the point cloud reconstruction module 13 may reconstruct the second three-dimensional point cloud according to a time difference between the light projection module 11 projecting the second light signal and the light sensing module 12 receiving the fourth light signal.
Here, the time difference may be a light pulse flight time determined based on the dToF technique or the iToF technique, which is not specifically limited. The flight time is the time taken by the second light signal projected by the light projection module 11 to reach the photosensitive module 12 after being reflected by the surface of the object to be measured, that is, the time difference between the projection of the second light signal by the light projection module 11 and the reception of the fourth light signal by the photosensitive module 12.
Optionally, the point cloud reconstruction module 13 may determine depth information of a pixel point in the second image, for example, determine depth information of each pixel point in the second image, according to a time difference between the light projection module 11 projecting the second light signal and the light sensing module 12 receiving the fourth light signal.
Then, the point cloud reconstruction module 13 may map the pixel points in the second image in a second preset coordinate system according to the position information and the depth information of the pixel points in the second image, so as to obtain the second three-dimensional point cloud. Wherein the second predetermined coordinate system is a three-dimensional coordinate system.
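For illustration, a minimal sketch of this TOF reconstruction step (depth from flight time, then back-projection into the second preset coordinate system), assuming a pinhole model whose intrinsics fx, fy, cx, cy are placeholders; dToF/iToF-specific timing details are omitted.

```python
# TOF back-projection sketch: per-pixel flight time -> depth -> 3D point.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_point_cloud(dt, fx, fy, cx, cy):
    """dt: HxW array of time differences (s) between projecting the second
    light signal and receiving the fourth; returns an HxWx3 point cloud."""
    z = C * dt / 2.0                        # light travels out and back
    h, w = dt.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))             # second preset coordinate system

# Example: a flat wall 1.5 m away produces dt = 2 * 1.5 / C everywhere.
dt = np.full((4, 4), 2 * 1.5 / C)
cloud = tof_point_cloud(dt, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud[0, 0])  # x, y, z of the top-left pixel
```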
Optionally, before reconstructing the second three-dimensional point cloud, the point cloud reconstruction module 13 may optimize the second image corresponding to the fourth light signal according to a preset algorithm, so as to remove error points caused by multipath reflection in the second image; an illustrative example follows.
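The preset algorithm is not specified here; as one illustrative stand-in (an assumption, not the patented preprocessing), a median-based outlier test can suppress isolated depth errors.

```python
# Illustrative stand-in for the unspecified preprocessing: suppress isolated
# error points (e.g., from multipath reflection) in the second image with a
# median-based outlier test. The threshold is an arbitrary placeholder.
import numpy as np
from scipy.ndimage import median_filter

def remove_outlier_depths(depth, threshold=0.1):
    """Zero out pixels deviating from their 3x3 median by > threshold (m)."""
    med = median_filter(depth, size=3)
    cleaned = depth.copy()
    cleaned[np.abs(depth - med) > threshold] = 0.0   # 0 marks removed points
    return cleaned
```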
And a point cloud fusion module 14, configured to fuse the first three-dimensional point cloud and the second three-dimensional point cloud reconstructed by the point cloud reconstruction module 13 to obtain a target point cloud of the object 101 to be detected.
For the process of fusing the first three-dimensional point cloud and the second three-dimensional point cloud by the point cloud fusion module 14, the following description of the method may be referred to, and is not repeated here.
In practical applications, the point cloud reconstruction module 13 and the point cloud fusion module 14 may implement their functions by a processor in the depth camera device. Of course, the point cloud reconstruction module 13 and the point cloud fusion module 14 may also realize the functions thereof by a point cloud reconstruction device having a computing processing capability, which is not specifically limited in this embodiment of the application.
The point cloud reconstruction apparatus may be any computing device with computing processing capability, for example, the computing device may be a general-purpose computer, and the like, which is not specifically limited in this embodiment of the present application.
It should be understood that the point cloud reconstruction module 13 and the point cloud fusion module 14 may be disposed in the same computing device, and of course, the point cloud reconstruction module 13 and the point cloud fusion module 14 may also be disposed in different computing devices, which is not limited in this embodiment of the present application.
Referring to fig. 3, fig. 3 provides a schematic diagram of a hardware structure of a computing device 30 according to an embodiment of the present disclosure. The computing device 30 may be used to implement the functions of the point cloud reconstruction module 13, or the functions of the point cloud fusion module 14, or the functions of both the point cloud reconstruction module 13 and the point cloud fusion module 14.
As shown in fig. 3, computing device 30 includes a processor 31, a memory 32, a communication interface 33, and a bus 34. The processor 31, the memory 32, and the communication interface 33 may be connected by a bus 34.
The processor 31 is a control center of the computing device 30, and may be a Central Processing Unit (CPU), other general-purpose processors, or the like. Wherein a general purpose processor may be a microprocessor or any conventional processor or the like.
As one example, processor 31 may include one or more CPUs, such as CPU 0 and CPU 1 shown in fig. 3.
The memory 32 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that may store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In one possible implementation, the memory 32 may exist independently of the processor 31. Memory 32 may be coupled to processor 31 via bus 34 for storing data, instructions or program code. The processor 31 can implement the point cloud reconstruction method provided by the embodiment of the present application when it calls and executes the instructions or program codes stored in the memory 32.
In another possible implementation, the memory 32 may also be integrated with the processor 31.
A communication interface 33, configured to connect the computing device 30 and other devices (such as the photosensitive module 12, etc.) through a communication network, where the communication network may be an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), or the like. The communication interface 33 may include a receiving unit for receiving data, and a transmitting unit for transmitting data.
The bus 34 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
It should be noted that the configuration shown in fig. 3 does not constitute a limitation of the computing device 30; the computing device 30 may include more or fewer components than shown, combine some components, or arrange the components differently from fig. 3.
The point cloud reconstruction method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings.
Referring to fig. 4, fig. 4 shows a schematic flow chart of a point cloud reconstruction method provided in an embodiment of the present application. The method may be performed by a point cloud reconstruction apparatus comprising the point cloud fusion module 14 described above. The method may comprise the steps of:
s101, a point cloud reconstruction device obtains a first three-dimensional point cloud and a second three-dimensional point cloud of an object to be detected.
The first three-dimensional point cloud may be a sparse point cloud, and the second three-dimensional point cloud may be a dense point cloud; the density of the first three-dimensional point cloud is lower than that of the second, while its precision is higher.
Alternatively, the first three-dimensional point cloud may be a point cloud reconstructed based on an image (e.g., the image represented by the third light signal) obtained by superimposing the preset coding pattern on the surface of the object to be measured after the light signal (e.g., the first light signal) projects the preset coding pattern on the surface of the object to be measured. That is, the first three-dimensional point cloud may be a three-dimensional point cloud reconstructed by structured light techniques. For the description of the preset coding pattern, reference may be made to the above description, which is not repeated herein.
Optionally, a sparse light source can effectively reduce the influence of multipath reflection on point cloud reconstruction, that is, a sparse light source can improve the accuracy of point cloud reconstruction by the TOF technique. Therefore, the first three-dimensional point cloud may also be a point cloud reconstructed based on the flight time of a light pulse signal from a sparse light source reflected by the surface of the object to be measured and arriving at the photosensitive module (i.e., the time difference between the projection module emitting the light pulse signal and the photosensitive module receiving its reflection from the object to be measured). That is, the first three-dimensional point cloud may be a three-dimensional point cloud reconstructed by the TOF technique based on a light pulse signal projected by a sparse light source. The sparse light source may be obtained by modulating a surface light source, for example by adding a light-shielding grating or light-shielding mask to the surface light source, which is not limited here.
Alternatively, the second three-dimensional point cloud may be a point cloud reconstructed based on the flight time of the surface light source or the densely projected light pulse signal (for example, the time difference between the projection of the second light signal by the light projection module 11 and the reception of the fourth light signal by the light sensing module 12) that reaches the light sensing module after being reflected by the surface of the object to be measured. That is, the second three-dimensional point cloud may be a three-dimensional point cloud reconstructed by TOF techniques based on a surface light source or densely projected light pulse signals.
Optionally, the point cloud reconstruction device may obtain the first three-dimensional point cloud and the second three-dimensional point cloud of the object to be detected from other devices in advance or in real time through the communication interface.
The other device may include the point cloud reconstruction module 13, which may be connected to the photosensitive module 12, and may acquire a first image corresponding to the third optical signal and a second image corresponding to the fourth optical signal acquired by the photosensitive module 12. In this way, the other device can reconstruct the first three-dimensional point cloud of the object to be measured based on the first image and reconstruct the second three-dimensional point cloud of the object to be measured based on the second image. Here, the specific process of reconstructing the first three-dimensional point cloud and the second three-dimensional point cloud by the other device may refer to the description above of reconstructing the first three-dimensional point cloud and the second three-dimensional point cloud of the object to be detected by the point cloud reconstruction module 13, and is not described again.
Then, the other device can send the reconstructed first three-dimensional point cloud and the reconstructed second three-dimensional point cloud to the point cloud reconstruction device through the communication interface. In response, the point cloud reconstruction device acquires the first three-dimensional point cloud and the second three-dimensional point cloud.
Of course, the point cloud reconstruction device may also include the point cloud reconstruction module 13. In this way, the point cloud reconstruction device may be directly connected to the photosensitive module 12, and may obtain a first image corresponding to the third optical signal and a second image corresponding to the fourth optical signal, which are acquired by the photosensitive module 12. Thus, the point cloud reconstruction device can reconstruct a first three-dimensional point cloud of the object to be measured based on the first image and reconstruct a second three-dimensional point cloud of the object to be measured based on the second image. The specific process of reconstructing the first three-dimensional point cloud and the second three-dimensional point cloud by the point cloud reconstruction device can refer to the description of reconstructing the first three-dimensional point cloud and the second three-dimensional point cloud of the object to be detected by the point cloud reconstruction module 13, and is not repeated.
And S102, fusing the acquired first three-dimensional point cloud and the acquired second three-dimensional point cloud by using a point cloud reconstruction device to obtain a target point cloud of the object to be detected.
The first three-dimensional point cloud reconstructed by the structured light technique in the above embodiments generally has high precision and low density, while the second three-dimensional point cloud reconstructed by the TOF technique generally has low precision and high density. Therefore, the point cloud reconstruction device fuses the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be measured with both high precision and high density.
Specifically, the point cloud reconstruction device can correct the second three-dimensional point cloud based on the first three-dimensional point cloud to realize the fusion of the first three-dimensional point cloud and the second three-dimensional point cloud, so as to obtain the target point cloud of the object to be detected.
Specifically, the process of fusing the first three-dimensional point cloud and the second three-dimensional point cloud by the point cloud reconstruction device may include the following steps:
s1021 (optional), the point cloud reconstruction device aligns the first three-dimensional point cloud and the second three-dimensional point cloud.
As can be seen from the above description, the first and second three-dimensional point clouds acquired by the point cloud reconstruction device are point clouds reconstructed under different preset coordinate systems by different algorithm systems. For example, the first three-dimensional point cloud is a point cloud reconstructed in a first preset coordinate system, and the second three-dimensional point cloud is a point cloud reconstructed in a second preset coordinate system.
Therefore, before fusing the first three-dimensional point cloud and the second three-dimensional point cloud, the point cloud reconstruction device may convert the first three-dimensional point cloud into the coordinate system of the second three-dimensional point cloud, or convert the second three-dimensional point cloud into the coordinate system of the first three-dimensional point cloud, so as to achieve coordinate alignment of the two point clouds.
Optionally, the point cloud reconstructing apparatus may rotate and/or translate the first three-dimensional point cloud based on a corresponding relationship between a coordinate system (i.e., a first preset coordinate system) where the first three-dimensional point cloud is located and a coordinate system (i.e., a second preset coordinate system) where the second three-dimensional point cloud is located, so as to convert the first three-dimensional point cloud into the second preset coordinate system, so that the first three-dimensional point cloud and the second three-dimensional point cloud are aligned in the second preset coordinate system.
Optionally, the point cloud reconstruction device may instead rotate and/or translate the second three-dimensional point cloud based on the corresponding relationship between the first preset coordinate system and the second preset coordinate system, so as to convert the second three-dimensional point cloud into the first preset coordinate system and align the first and second three-dimensional point clouds there. The embodiments of the present application do not limit this.
In the following, the case where the point cloud reconstruction device aligns the first three-dimensional point cloud and the second three-dimensional point cloud in the first preset coordinate system (i.e., converts the second three-dimensional point cloud into the coordinate system of the first three-dimensional point cloud, namely the first preset coordinate system) is taken as an example.
It should be understood that the point cloud reconstruction device is preset with a corresponding relationship between a first preset coordinate system and a second preset coordinate system. The corresponding relationship may be determined after the first optical sensor for acquiring the first image and the second optical sensor for acquiring the second image are jointly calibrated in advance. For example, internal parameters of the second optical sensor may be calibrated, and external parameters between the second optical sensor and the first optical sensor may be calibrated, which is not described in detail in this embodiment of the application.
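For illustration only, the following minimal Python sketch (using numpy) shows how such a preset correspondence could be applied to convert the second three-dimensional point cloud into the first preset coordinate system. The rotation R and translation t stand in for the jointly calibrated extrinsic parameters; all names are hypothetical and are not part of the patent.

    import numpy as np

    def align_second_cloud(cloud_2, R, t):
        # Convert the second three-dimensional point cloud (in the second
        # preset coordinate system) into the first preset coordinate system
        # by the rigid transform from the joint calibration.
        # cloud_2: (N, 3) points; R: (3, 3) rotation; t: (3,) translation.
        return cloud_2 @ R.T + t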
S1022: the point cloud reconstruction device corrects the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain the target point cloud of the object to be measured.
The point cloud reconstruction device can correct the second three-dimensional point cloud based on the first three-dimensional point cloud according to the following steps to obtain a target point cloud of the object to be detected:
step 1, the point cloud reconstruction device determines a first two-dimensional image based on the first three-dimensional point cloud and determines a second two-dimensional image based on the second three-dimensional point cloud.
The point cloud reconstruction device may map the first three-dimensional point cloud from a first preset coordinate system to a preset reference surface according to a corresponding relationship between the coordinate system (i.e., the first preset coordinate system) where the first three-dimensional point cloud is located and the preset reference surface, so as to obtain a first two-dimensional image. It can be seen that the pixel points in the first two-dimensional image correspond to the points in the first three-dimensional point cloud one to one.
Each pixel point in the first two-dimensional image comprises position information of the pixel point in the first two-dimensional image and depth information of a point corresponding to the pixel point in the first three-dimensional point cloud.
Here, the preset reference surface may be, for example, the imaging surface of the first light sensor for acquiring the first image, which is not limited thereto.
In addition, the point cloud reconstruction device can map the second three-dimensional point cloud from the first preset coordinate system to the preset reference surface according to the corresponding relation between the first preset coordinate system and the preset reference surface so as to obtain a second two-dimensional image. It can be seen that the pixel points in the second two-dimensional image correspond to the points in the second three-dimensional point cloud one to one.
And each pixel point in the second two-dimensional image comprises the position information of the pixel point in the second two-dimensional image and the depth information of a point corresponding to the pixel point in the second three-dimensional point cloud.
The point cloud reconstruction device can preset the corresponding relation between a first preset coordinate system and the preset reference surface. The correspondence may be determined after calibrating the first light projecting assembly for projecting the first light signal and the first light sensor for acquiring the first image in advance. For example, the corresponding relationship may be determined by calibrating an internal parameter and an external parameter between a first light projection component for projecting the first light signal and a first light sensor for acquiring the first image, which is not described in detail herein.
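As an illustrative sketch of step 1 only, the following Python code projects a three-dimensional point cloud onto such a reference surface under a pinhole-camera assumption; the intrinsic matrix K is a hypothetical stand-in for the calibrated parameters mentioned above.

    import numpy as np

    def project_to_reference_plane(cloud, K):
        # Map a point cloud (first preset coordinate system) onto the preset
        # reference surface, taken here as the imaging plane of the first
        # light sensor with a hypothetical 3x3 intrinsic matrix K.
        # Returns (N, 3) rows [u, v, depth]: the pixel position in the
        # resulting two-dimensional image plus the depth of the source point.
        uvw = cloud @ K.T                  # pinhole projection
        uv = uvw[:, :2] / uvw[:, 2:3]      # divide by depth to get pixels
        return np.column_stack([uv, cloud[:, 2]])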
Step 2: the point cloud reconstruction device triangulates the first two-dimensional image to divide it into n triangular surfaces, where n is an integer greater than 1.
Specifically, the point cloud reconstruction device can triangulate the first two-dimensional image by connecting every three adjacent pixel points in it.
For example, fig. 5 (a) shows a triangulated first two-dimensional image. As shown in fig. 5 (a), the first two-dimensional image 51 includes 20 pixel points; when the point cloud reconstruction device connects every three adjacent pixel points (for example, points A1, B1, and C1), the triangulated first two-dimensional image is obtained.
It should be understood that the point cloud reconstruction device may also perform an m-sided polygonal subdivision of the first two-dimensional image, that is, sequentially connect every m adjacent points in the first two-dimensional image to divide it into a plurality of m-sided faces. Here, m is an integer greater than 3.
As an example, as shown in fig. 6, the point cloud reconstruction device may perform a quadrilateral subdivision of the first two-dimensional image 60 to divide it into a plurality of quadrilateral faces (e.g., the quadrilateral face ABCD); this is not specifically limited in the embodiments of the present application.
Optionally, the point cloud reconstruction device further sets an identifier for a pixel point corresponding to a vertex of each triangular surface obtained by subdivision.
For example, the point cloud reconstruction device may set the identifiers 11, 12, and 13 to the pixel points corresponding to the three vertices constituting the triangular surface 1. The point cloud reconstruction device may set the identifiers 21, 22, 23 for the pixel points corresponding to the three vertices constituting the triangular surface 2.
It will be appreciated that different triangular surfaces may share one or two vertices. Any pixel point in the first two-dimensional image can serve as a shared vertex of at most 3 triangular surfaces.
In the following description, the case where the point cloud reconstruction device triangulates the first two-dimensional image is taken as an example.
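As an illustrative sketch of step 2, the following Python code (assuming scipy is available) triangulates the pixel positions of the first two-dimensional image. Delaunay triangulation is used here as one standard way of connecting adjacent points; the patent itself only requires that adjacent pixel points be connected.

    from scipy.spatial import Delaunay

    def triangulate_image(points_uv):
        # Triangulate the sparse first two-dimensional image.
        # points_uv: (N, 2) pixel positions. The returned object's
        # .simplices attribute is an (n, 3) index array giving the three
        # vertex indices of each of the n triangular surfaces.
        return Delaunay(points_uv)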
Step 3: the point cloud reconstruction device determines, in the second two-dimensional image, the pixel points corresponding to the positions of the pixel points in the first two-dimensional image, and divides the second two-dimensional image into n sub-regions based on the determined pixel points, where the n sub-regions correspond one-to-one to the n triangular surfaces in the first two-dimensional image.
Specifically, the point cloud reconstruction device determines a pixel point corresponding to the position of the pixel point in the first two-dimensional image in the second two-dimensional image based on the position of the pixel point in the first two-dimensional image.
Then, the point cloud reconstruction device may sequentially connect the pixel points in the second two-dimensional image corresponding to the positions of the three pixel points identified as forming one triangular surface (e.g., the triangular surface 1) in the first two-dimensional image, so as to obtain the sub-region 1 corresponding to the triangular surface 1 in the second two-dimensional image.
In this way, the point cloud reconstruction device can divide the second two-dimensional image into n sub-regions that correspond one-to-one to the n triangular surfaces in the first two-dimensional image.
It should be understood that, since the first three-dimensional point cloud is a sparse point cloud reconstructed based on the structured light technique and the second three-dimensional point cloud is a dense point cloud reconstructed based on the TOF technique, the number of pixel points in the second two-dimensional image mapped from the second three-dimensional point cloud is greater than the number of pixel points in the first two-dimensional image mapped from the first three-dimensional point cloud. Therefore, any one of the n sub-regions into which the second two-dimensional image is divided may further include at least one pixel point in addition to its vertices.
Illustratively, referring to fig. 5, the point cloud reconstruction device determines the pixel point at the corresponding position in the second two-dimensional image 52 based on the position of each pixel point in the first two-dimensional image 51: for example, the pixel point A2 corresponding to the position of the pixel point A1 shown in fig. 5 (a), the pixel point B2 corresponding to the position of the pixel point B1, the pixel point C2 corresponding to the position of the pixel point C1, and so on.
Since the pixel points A1, B1, and C1 in the first two-dimensional image 51 form the triangular surface A1B1C1, sequentially connecting the corresponding pixel points A2, B2, and C2 in the second two-dimensional image 52 yields the sub-region A2B2C2 corresponding to the triangular surface A1B1C1. It can further be seen that the sub-region A2B2C2 shown in fig. 5 (b) includes a plurality of additional pixel points (indicated by black dots in fig. 5 (b)).
Optionally, the point cloud reconstruction device further sets an identifier for a pixel corresponding to a vertex of each divided sub-region.
For example, the point cloud reconstruction device may set the identifiers 11, 12, and 13 to the pixels corresponding to the three vertices constituting the sub-region 1. The point cloud reconstruction means may set the identifiers 21, 22, 23 for the pixels corresponding to the three vertices constituting the sub-region 2.
It will be appreciated that different sub-regions may share one or two vertices. Any vertex pixel point in the second two-dimensional image can serve as a shared vertex of at most 3 sub-regions.
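As an illustrative sketch of step 3, the following Python code assigns every pixel of the dense second two-dimensional image to the sub-region of the triangular surface that contains it, reusing the Delaunay structure from the sketch under step 2; this point-in-triangle lookup is one possible reading of the position correspondence described in this step.

    def assign_to_subregions(tri, dense_uv):
        # Divide the dense second two-dimensional image into n sub-regions by
        # locating, for every dense pixel, the containing triangular surface.
        # dense_uv: (M, 2) pixel positions. Returns an (M,) array of
        # sub-region indices (-1 for pixels outside the triangulated area).
        return tri.find_simplex(dense_uv)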
Step 4: the point cloud reconstruction device maps the triangulated first two-dimensional image to the first preset coordinate system to obtain a first triangulated point cloud, and maps the second two-dimensional image divided into the n sub-regions to the first preset coordinate system to obtain a second triangulated point cloud.
The first triangulated point cloud comprises n triangular surfaces, and points in the first triangulated point cloud corresponding to the vertexes of any one of the n triangular surfaces have the same identification.
The second triangulated point cloud comprises n sub-area surfaces, and points in the second triangulated point cloud corresponding to the vertexes of any sub-area surface in the n sub-area surfaces have the same identification.
It should be understood that n triangular faces in the first triangulated point cloud and n sub-region faces in the second triangulated point cloud are in one-to-one correspondence.
Since the depth information of the points in the first triangulated point cloud is different from the depth information of the points in the second triangulated point cloud, the n triangular faces in the first triangulated point cloud and the n sub-area faces in the second triangulated point cloud do not coincide.
For example, fig. 7 shows a triangular surface 71 in the first triangulated point cloud and a sub-area surface 72 in the second triangulated point cloud. The triangular surface 71 is any one of the triangular surfaces in the first triangulated point cloud, and the sub-area surface 72 is the sub-area surface in the second triangulated point cloud corresponding to the triangular surface 71.
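As an illustrative sketch of step 4, the following Python code maps [u, v, depth] pixels back into the first preset coordinate system, inverting the projection sketch given under step 1; the intrinsic matrix K remains a hypothetical stand-in for the calibrated parameters.

    import numpy as np

    def unproject_from_reference_plane(uvd, K):
        # Map [u, v, depth] rows of a two-dimensional image back into the
        # first preset coordinate system, yielding a triangulated point cloud.
        uv1 = np.column_stack([uvd[:, :2], np.ones(len(uvd))])
        rays = uv1 @ np.linalg.inv(K).T    # normalized viewing rays
        return rays * uvd[:, 2:3]          # scale each ray by its depth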
Step 5: the point cloud reconstruction device determines the motion transformation relationships between the n triangular surfaces in the first triangulated point cloud and the n sub-area surfaces in the second triangulated point cloud.
Optionally, the point cloud reconstruction device may calculate, by using a covariance decomposition method, a motion transformation relationship between the first triangular surface and the first sub-region surface based on the position information of the vertex of the first triangular surface in the first triangulated point cloud and the position information of the vertex of the first sub-region surface corresponding to the first triangular surface in the second triangulated point cloud.
The first triangular surface is any one of the n triangular surfaces, and the first sub-area surface is the sub-area surface, among the n sub-area surfaces, corresponding to the first triangular surface.
It should be understood that transforming the first sub-area surface by this motion transformation relationship yields the first triangular surface; that is, the motion transformation relationship transforms the first sub-area surface into the first triangular surface.
Exemplarily, referring to fig. 7, the sub-area 72 shown in fig. 7 may be transformed into the triangular surface 71 by a motion transformation relationship determined by the point cloud reconstruction device.
It can be seen that the point cloud reconstruction device can calculate the motion transformation relationship between a triangular surface and a sub-area surface from each corresponding pair of faces in the first triangulated point cloud and the second triangulated point cloud. In this way, the point cloud reconstruction device can determine the motion transformation relationship between each triangular surface in the first triangulated point cloud and its corresponding sub-area surface in the second triangulated point cloud: for example, the motion transformation relationship between the triangular surface 1 and the sub-area surface 1, between the triangular surface 2 and the sub-area surface 2, between the triangular surface 3 and the sub-area surface 3, and so on. Here, the sub-area surface corresponding to the triangular surface 1 is the sub-area surface 1, the sub-area surface corresponding to the triangular surface 2 is the sub-area surface 2, and the sub-area surface corresponding to the triangular surface 3 is the sub-area surface 3.
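As an illustrative sketch of step 5, the following Python code estimates the rigid motion between one corresponding pair of faces by covariance decomposition; the Kabsch/SVD method is used here as one common form of such a decomposition, though the patent does not fix a particular algorithm.

    import numpy as np

    def face_motion_transform(src_tri, dst_tri):
        # Estimate the rigid motion (R, t) that maps a sub-area surface of
        # the second triangulated point cloud onto its corresponding
        # triangular surface in the first triangulated point cloud.
        # src_tri, dst_tri: (3, 3) arrays of vertex positions.
        src_c, dst_c = src_tri.mean(axis=0), dst_tri.mean(axis=0)
        H = (src_tri - src_c).T @ (dst_tri - dst_c)    # 3x3 covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                 # D guards against reflections
        t = dst_c - R @ src_c
        return R, t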
Step 6: the point cloud reconstruction device corrects the second three-dimensional point cloud according to the determined motion transformation relationships to obtain the target point cloud of the object to be measured.
Specifically, the point cloud reconstruction device may transform the points in each sub-area according to the determined motion transformation relationship for transforming each sub-area in the second triangulated point cloud, and add the transformed points to the first three-dimensional point cloud, so as to obtain the target point cloud of the object to be measured.
The points in each sub-area surface are the points of the second three-dimensional point cloud that correspond to the pixel points in the corresponding sub-region of the second two-dimensional image.
For example, if the sub-area surface 1 in the second triangulated point cloud corresponds to the sub-region 1 in the second two-dimensional image, the points in the sub-area surface 1 are the points of the second three-dimensional point cloud corresponding to the pixel points in the sub-region 1.
As an example, the point cloud reconstruction device may transform the points in the sub-area surface 1 according to the motion transformation relationship determined for the sub-area surface 1 in the second triangulated point cloud, thereby correcting the points of the second three-dimensional point cloud corresponding to the sub-area surface 1. By transforming the points in every sub-area surface according to the corresponding motion transformation relationship in this way, the point cloud reconstruction device corrects the whole second three-dimensional point cloud. The point cloud reconstruction device then adds the transformed points to the first three-dimensional point cloud to obtain the target point cloud of the object to be measured.
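As an illustrative sketch of step 6, the following Python code applies the per-face motion transformation relationships to the points of each sub-area surface and adds the result to the first three-dimensional point cloud; the data layout (an index array and a transform dictionary) is an assumption of this sketch, not part of the patent.

    import numpy as np

    def correct_and_fuse(sparse_cloud, dense_cloud, face_ids, transforms):
        # face_ids: (M,) sub-region index of each dense point (-1 = outside);
        # transforms: dict mapping sub-region index -> (R, t) as above.
        parts = [sparse_cloud]
        for k, (R, t) in transforms.items():
            pts = dense_cloud[face_ids == k]   # points of sub-area surface k
            parts.append(pts @ R.T + t)        # rigidly correct them
        return np.vstack(parts)                # the target point cloud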
It should be understood that the point cloud reconstruction device may correct the second three-dimensional point cloud according to the first three-dimensional point cloud based on the method of steps 1 to 6 above to obtain the target point cloud of the object to be measured. Of course, the point cloud reconstruction device may also correct the second three-dimensional point cloud according to the first three-dimensional point cloud based on any other method, which is not specifically limited in this embodiment of the application.
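Tying the sketches above together, the following minimal end-to-end driver chains steps 1 to 6. The nearest-pixel vertex matching via a k-d tree is an illustrative reading of the position correspondence of step 3, and the whole function is a sketch under the stated assumptions, not the patent's definitive implementation.

    from scipy.spatial import cKDTree

    def fuse_point_clouds(sparse_cloud, dense_cloud, K):
        # Both clouds are assumed already aligned in the first preset
        # coordinate system (S1021); K is the hypothetical intrinsic matrix.
        uvd_s = project_to_reference_plane(sparse_cloud, K)      # step 1
        uvd_d = project_to_reference_plane(dense_cloud, K)
        tri = triangulate_image(uvd_s[:, :2])                    # step 2
        face_ids = assign_to_subregions(tri, uvd_d[:, :2])       # step 3
        # Match each sparse vertex to its nearest dense pixel.
        nearest = cKDTree(uvd_d[:, :2]).query(uvd_s[:, :2])[1]
        # Step 4 (an identity round-trip here, kept to mirror the text).
        dense_3d = unproject_from_reference_plane(uvd_d, K)
        transforms = {}
        for k, face in enumerate(tri.simplices):                 # step 5
            src = dense_3d[nearest[face]]    # sub-area surface vertices
            dst = sparse_cloud[face]         # triangular surface vertices
            transforms[k] = face_motion_transform(src, dst)
        keep = face_ids >= 0
        return correct_and_fuse(sparse_cloud, dense_3d[keep],    # step 6
                                face_ids[keep], transforms)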
By the method, the density of the point cloud reconstructed based on the structured light technology can be obviously improved, and the accuracy of the point cloud reconstructed based on the TOF technology can be improved.
By way of example, table 1 shows the beneficial effects brought by the point cloud reconstruction method provided in this embodiment of the application.
TABLE 1
(Table 1 is provided as an image in the original publication and is not reproduced here.)
It can be seen that, in the different measurement scenes, the dense point cloud reconstructed based on the TOF technique has lower measurement accuracy than the sparse point cloud reconstructed based on the structured light technique, but its density is far higher. After the sparse point cloud and the dense point cloud are fused by the point cloud reconstruction method provided in this embodiment of the application, both the density and the measurement accuracy are greatly improved.
In summary, according to the point cloud reconstruction system and method provided by the embodiment of the present application, a sparse point cloud (i.e., a first three-dimensional point cloud) with high precision and low density obtained based on a structured light technology and a dense point cloud (i.e., a second three-dimensional point cloud) with low precision and high density obtained based on a TOF technology can be fused, so that point cloud reconstruction with high precision and high density of the profile of an object to be measured is realized.
The solution provided by the embodiments of the present application has been described above mainly from the perspective of the method. To implement the above functions, the point cloud reconstruction device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the point cloud reconstruction device may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is only one kind of logical function division; there may be other division manners in actual implementation.
As shown in fig. 8, fig. 8 is a schematic structural diagram of a point cloud reconstruction apparatus 80 according to an embodiment of the present disclosure. The point cloud reconstruction device 80 may be used to perform the point cloud reconstruction method described above, for example, to perform the method shown in fig. 4. The point cloud reconstruction apparatus 80 may include an acquisition unit 81 and a fusion unit 82.
The acquiring unit 81 is configured to acquire a first three-dimensional point cloud and a second three-dimensional point cloud of the object to be measured. And the fusion unit 82 is used for fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be detected. The precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is smaller than that of the second three-dimensional point cloud.
As an example, in conjunction with fig. 4, the obtaining unit 81 may be configured to perform S101, and the fusing unit 82 may be configured to perform S102.
Optionally, the first three-dimensional point cloud is a point cloud constructed by a structured light technique or a time of flight TOF technique. The second three-dimensional point cloud is a point cloud constructed by the TOF technique.
Optionally, if the first three-dimensional point cloud is a point cloud constructed by the structured light technique, the light signal projected onto the surface of the object to be measured is a light signal for projecting a preset coding pattern. The preset coding pattern is the pattern projected by the optical signal obtained after a light-shielding grating modulates a light source signal of the light projection module.
Optionally, the fusion unit 82 is specifically configured to correct the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain the target point cloud.
As an example, in connection with fig. 4, the fusing unit 82 may be configured to perform S1022.
Optionally, the fusion unit 82 is specifically configured to: determine a first triangulated point cloud based on the first three-dimensional point cloud, where the first triangulated point cloud includes n triangular surfaces (n is a positive integer); determine a second triangulated point cloud based on the first three-dimensional point cloud and the second three-dimensional point cloud, where the second triangulated point cloud includes n sub-area surfaces; determine the motion transformation relationships between the n triangular surfaces and the n sub-area surfaces; and correct the second three-dimensional point cloud according to the motion transformation relationships to obtain the target point cloud. The n triangular surfaces correspond one-to-one to the n sub-area surfaces.
As an example, in connection with fig. 4, the fusing unit 82 may be configured to perform S1022.
Optionally, the point cloud reconstructing apparatus 80 further includes: an alignment unit 83 for aligning the first three-dimensional point cloud and the second three-dimensional point cloud before the fusion unit 82 corrects the second three-dimensional point cloud based on the first three-dimensional point cloud.
As an example, in connection with fig. 4, the alignment unit 83 may be configured to perform S1021.
For the detailed description of the above alternative modes, reference may be made to the foregoing method embodiments, which are not described herein again. In addition, for the explanation and the description of the beneficial effects of any point cloud reconstruction apparatus 80 provided above, reference may be made to the corresponding method embodiment described above, and details are not repeated.
As an example, in connection with fig. 3, the acquiring unit 81, the fusing unit 82, and the aligning unit 83 in the point cloud reconstructing apparatus 80 may be implemented by the processor 31 in fig. 3 executing the program code in the memory 32 in fig. 3.
An embodiment of the present application further provides a chip system 90. As shown in fig. 9, the chip system 90 includes at least one processor and at least one interface circuit. By way of example, when the chip system 90 includes one processor and one interface circuit, the processor may be the processor 91 shown in the solid-line box in fig. 9 (or the processor 91 shown in the dashed-line box), and the interface circuit may be the interface circuit 92 shown in the solid-line box in fig. 9 (or the interface circuit 92 shown in the dashed-line box). When the chip system 90 includes two processors and two interface circuits, the two processors include the processor 91 shown in the solid-line box in fig. 9 and the processor 91 shown in the dashed-line box, and the two interface circuits include the interface circuit 92 shown in the solid-line box in fig. 9 and the interface circuit 92 shown in the dashed-line box. This is not limited.
The processor 91 and the interface circuit 92 may be interconnected by wires. For example, the interface circuit 92 may be used to receive signals (e.g., acquire a first three-dimensional point cloud and a second three-dimensional point cloud, etc.). As another example, the interface circuit 92 may be used to send signals to other devices, such as the processor 91. Illustratively, the interface circuit 92 may read instructions stored in the memory and send the instructions to the processor 91. The instructions, when executed by the processor 91, may cause the point cloud reconstruction apparatus to perform the various steps in the embodiments described above. Of course, the chip system 90 may also include other discrete devices, which is not specifically limited in this embodiment.
Another embodiment of the present application further provides a computer-readable storage medium, in which instructions are stored, and when the instructions are executed on a point cloud reconstruction apparatus, the point cloud reconstruction apparatus performs each step performed by the point cloud reconstruction apparatus in the method flow shown in the above method embodiment.
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded on a computer-readable storage medium in a machine-readable format or encoded on other non-transitory media or articles of manufacture.
Fig. 10 schematically illustrates a conceptual partial view of a computer program product comprising a computer program for executing a computer process on a computing device provided by an embodiment of the application.
In one embodiment, the computer program product is provided using a signal bearing medium 100. The signal bearing medium 100 may include one or more program instructions that, when executed by one or more processors, may provide the functions, or portions of the functions, described above with respect to fig. 4. Thus, for example, one or more features described with reference to S101-S102 in fig. 4 may be undertaken by one or more instructions associated with the signal bearing medium 100. The program instructions shown in fig. 10 are likewise described by way of example.
In some examples, signal bearing medium 100 may comprise a computer readable medium 101, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disc (DVD), a digital tape, a memory, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
In some implementations, the signal bearing medium 100 may comprise a computer recordable medium 102 such as, but not limited to, a memory, a read/write (R/W) CD, a R/W DVD, and the like.
In some implementations, the signal bearing medium 100 may include a communication medium 103 such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
The signal bearing medium 100 may be conveyed by a wireless form of the communication medium 103, such as a wireless communication medium complying with the IEEE 802.11 standard or another transmission protocol. The one or more program instructions may be, for example, computer-executable instructions or logic-implementing instructions.
In some examples, a point cloud reconstruction apparatus, such as described with respect to fig. 4, may be configured to provide various operations, functions, or actions in response to being programmed by one or more of computer readable medium 101, computer recordable medium 102, and/or communication medium 103.
It should be understood that the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and that some elements may be omitted altogether depending upon the desired results. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. The processes or functions according to the embodiments of the present application are generated in whole or in part when the computer instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (21)

1. A point cloud reconstruction system, comprising:
the device comprises a light projection module, a light source module and a light source module, wherein the light projection module is used for respectively projecting a first light signal and a second light signal to the surface of an object to be detected, the first light signal is used for reconstructing a first three-dimensional point cloud of the object to be detected, and the second light signal is used for reconstructing a second three-dimensional point cloud of the object to be detected; wherein the precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is less than that of the second three-dimensional point cloud;
the photosensitive module is used for receiving the first optical signal and a third optical signal obtained after the surface of the object to be detected acts on the first optical signal, and for receiving the second optical signal and a fourth optical signal obtained after the surface of the object to be detected acts on the second optical signal;
a point cloud reconstruction module to reconstruct the first three-dimensional point cloud based on the first light signal and the third light signal, and to reconstruct the second three-dimensional point cloud based on the second light signal and the fourth light signal;
and the point cloud fusion module is used for fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be detected.
2. The system of claim 1, wherein if the first light signal is a light signal for projecting a preset coding pattern, the first three-dimensional point cloud is a point cloud constructed by a structured light technique;
if the first light signal is a light pulse signal, the first three-dimensional point cloud is a point cloud constructed based on time-of-flight TOF techniques.
3. The system according to claim 1 or 2, wherein if the first light signal is a light signal for projecting a preset coding pattern, the light projection module comprises a light-shielding grating for modulating a light source signal of the light projection module to obtain the first light signal.
4. The system of any of claims 1-3, wherein the second light signal is a light pulse signal and the second three-dimensional point cloud is a point cloud constructed by the TOF technique.
5. The system according to any one of claims 1-4,
the point cloud fusion module is specifically configured to correct the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain the target point cloud.
6. The system according to any one of claims 1-5, characterized in that the point cloud fusion module is specifically configured to:
determining a first triangulated point cloud based on the first three-dimensional point cloud, wherein the first triangulated point cloud comprises n triangular faces, and n is a positive integer;
determining a second triangulated point cloud based on the first three-dimensional point cloud and the second three-dimensional point cloud, wherein the second triangulated point cloud comprises n sub-area surfaces, and the n triangular surfaces are in one-to-one correspondence with the n sub-area surfaces;
determining the motion transformation relation between the n triangular surfaces and the n sub-area surfaces;
and correcting the second three-dimensional point cloud according to the motion transformation relation to obtain the target point cloud.
7. The system of claim 5 or 6,
the point cloud fusion module is further configured to align the first three-dimensional point cloud and the second three-dimensional point cloud before correcting the second three-dimensional point cloud based on the first three-dimensional point cloud.
8. A method of point cloud reconstruction, comprising:
acquiring a first three-dimensional point cloud and a second three-dimensional point cloud of an object to be detected, wherein the precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is smaller than that of the second three-dimensional point cloud;
and fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be detected.
9. The method of claim 8,
the first three-dimensional point cloud is constructed by a structured light technology or a time of flight (TOF) technology; the second three-dimensional point cloud is a point cloud constructed by the TOF technique.
10. The method according to claim 9, wherein if the first three-dimensional point cloud is a point cloud constructed by a structured light technique, the light signal projected onto the surface of the object to be measured is a light signal for projecting a preset coding pattern, and the preset coding pattern is a pattern projected by a light signal obtained by modulating a light source signal of a light projection module with a light-shielding grating.
11. The method according to any one of claims 8 to 10, wherein the fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain the target point cloud of the object to be measured specifically comprises:
and correcting the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain the target point cloud.
12. The method according to any one of claims 8 to 11, wherein the fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain the target point cloud of the object to be measured specifically comprises:
determining a first triangulated point cloud based on the first three-dimensional point cloud, wherein the first triangulated point cloud comprises n triangular faces, and n is a positive integer;
determining a second triangulated point cloud based on the first three-dimensional point cloud and the second three-dimensional point cloud, wherein the second triangulated point cloud comprises n sub-area surfaces, and the n triangular surfaces are in one-to-one correspondence with the n sub-area surfaces;
determining the motion transformation relation between the n triangular surfaces and the n sub-area surfaces;
and correcting the second three-dimensional point cloud according to the motion transformation relation to obtain the target point cloud.
13. The method of claim 11 or 12, wherein prior to said correcting said second three-dimensional point cloud based on said first three-dimensional point cloud, said method further comprises:
aligning the first and second three-dimensional point clouds.
14. A point cloud reconstruction apparatus, comprising:
the device comprises an acquisition unit, a detection unit and a processing unit, wherein the acquisition unit is used for acquiring a first three-dimensional point cloud and a second three-dimensional point cloud of an object to be detected, the precision of the first three-dimensional point cloud is higher than that of the second three-dimensional point cloud, and the density of the first three-dimensional point cloud is lower than that of the second three-dimensional point cloud;
and the fusion unit is used for fusing the first three-dimensional point cloud and the second three-dimensional point cloud to obtain a target point cloud of the object to be detected.
15. The apparatus of claim 14,
the first three-dimensional point cloud is constructed by a structured light technology or a time of flight (TOF) technology; the second three-dimensional point cloud is a point cloud constructed by the TOF technique.
16. The apparatus of claim 15, wherein if the first three-dimensional point cloud is a point cloud constructed by a structured light technique, the light signal projected onto the surface of the object to be measured is a light signal for projecting a preset coding pattern, and the preset coding pattern is a pattern projected by a light signal obtained by modulating a light source signal of a light projection module with a light-shielding grating.
17. The apparatus of any one of claims 14-16,
the fusion unit is specifically configured to correct the second three-dimensional point cloud based on the first three-dimensional point cloud to obtain the target point cloud.
18. The device according to any one of claims 14 to 15, wherein the fusion unit is specifically configured to:
determining a first triangulated point cloud based on the first three-dimensional point cloud, wherein the first triangulated point cloud comprises n triangular faces, and n is a positive integer;
determining a second triangulated point cloud based on the first three-dimensional point cloud and the second three-dimensional point cloud, wherein the second triangulated point cloud comprises n sub-area surfaces, and the n triangular surfaces are in one-to-one correspondence with the n sub-area surfaces;
determining the motion transformation relation between the n triangular surfaces and the n sub-area surfaces;
and correcting the second three-dimensional point cloud according to the motion transformation relation to obtain the target point cloud.
19. The apparatus according to any one of claims 14-15, further comprising:
an alignment unit for aligning the first three-dimensional point cloud and the second three-dimensional point cloud before the fusion unit corrects the second three-dimensional point cloud based on the first three-dimensional point cloud.
20. A point cloud reconstruction apparatus, the apparatus comprising: a memory for storing computer instructions and one or more processors for invoking the computer instructions to perform the method of any of claims 8-13.
21. A computer-readable storage medium, having stored thereon a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 8-13.
CN202011065325.XA 2020-09-30 2020-09-30 Point cloud reconstruction method, device and system Pending CN114332341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011065325.XA CN114332341A (en) 2020-09-30 2020-09-30 Point cloud reconstruction method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011065325.XA CN114332341A (en) 2020-09-30 2020-09-30 Point cloud reconstruction method, device and system

Publications (1)

Publication Number Publication Date
CN114332341A true CN114332341A (en) 2022-04-12

Family

ID=81031940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011065325.XA Pending CN114332341A (en) 2020-09-30 2020-09-30 Point cloud reconstruction method, device and system

Country Status (1)

Country Link
CN (1) CN114332341A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228830A (en) * 2023-03-13 2023-06-06 广州图语信息科技有限公司 Three-dimensional reconstruction method and device for triangular mesh coding structured light
CN116228830B (en) * 2023-03-13 2024-01-26 广州图语信息科技有限公司 Three-dimensional reconstruction method and device for triangular mesh coding structured light

Similar Documents

Publication Publication Date Title
US9501833B2 (en) Method and system for providing three-dimensional and range inter-planar estimation
US8172407B2 (en) Camera-projector duality: multi-projector 3D reconstruction
CN111566437B (en) Three-dimensional measurement system and three-dimensional measurement method
CN103069250B (en) 3-D measuring apparatus, method for three-dimensional measurement
CN104541127B (en) Image processing system and image processing method
US20220277516A1 (en) Three-dimensional model generation method, information processing device, and medium
CN116664651A (en) Method and processing system for updating first image based on second image
KR101681095B1 (en) Apparatus and method for generating depth image that have same viewpoint and same resolution with color image
CN107860337B (en) Structured light three-dimensional reconstruction method and device based on array camera
KR20100134403A (en) Apparatus and method for generating depth information
CN110546686A (en) System and method for generating structured light depth map with non-uniform codeword pattern
CN112816949B (en) Sensor calibration method and device, storage medium and calibration system
CN113012277A (en) DLP (digital light processing) -surface-based structured light multi-camera reconstruction method
CN107408306B (en) Method, device and readable medium for generating depth map information of object
Furukawa et al. One-shot entire shape acquisition method using multiple projectors and cameras
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
Gadasin et al. Reconstruction of a Three-Dimensional Scene from its Projections in Computer Vision Systems
KR20230065978A (en) Systems, methods and media for directly repairing planar surfaces in a scene using structured light
CN114332341A (en) Point cloud reconstruction method, device and system
EP3832601A1 (en) Image processing device and three-dimensional measuring system
WO2022066583A1 (en) Decoding an image for active depth sensing to account for optical distortions
Gu et al. 3dunderworld-sls: an open-source structured-light scanning system for rapid geometry acquisition
KR100933304B1 (en) An object information estimator using the single camera, a method thereof, a multimedia device and a computer device including the estimator, and a computer-readable recording medium storing a program for performing the method.
CN110726534A (en) Visual field range testing method and device for visual device
CN112655203A (en) Method and apparatus for acquiring 3D image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination