CN106548489B - A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image - Google Patents
- Publication number: CN106548489B
- Application number: CN201610835431A (filed as CN201610835431.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- color
- depth
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
- G06T2207/10048—Infrared image
Abstract
The invention discloses a registration method for a depth image and a color image, and a three-dimensional image acquisition apparatus. The method comprises: acquiring an invisible light image and a color image of an object to be measured using an acquisition device; calculating, for each pixel in the invisible light image, a pixel coordinate offset value relative to a reference image; and calculating the depth value of each pixel in the color image using the offset values and the parameters of the acquisition device. In this way, the invention does not need to compute depth values for the acquired invisible light image, which saves system storage space, avoids errors introduced by complex calculation, and improves the efficiency and accuracy of registration.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a registration method of a depth image and a color image and a three-dimensional image acquisition device.
Background
In the fields of three-dimensional photography and three-dimensional artificial intelligence, acquiring a depth image and a color image of a target simultaneously with a structured-light depth camera and a color camera is currently a high-precision and easily realized approach. Because the depth camera and the color camera are geometrically offset from each other, a certain parallax exists between the depth image and the color image; that is, the same spatial point maps to different pixel positions in the depth image and the color image.
In the prior art, in order to align a depth image with a color image, a pixel parallax is generally calculated by using a depth value in the depth image, and then the depth image is subjected to deviation correction by using the parallax.
Disclosure of Invention
The invention mainly solves the technical problem of providing a registration method for a depth image and a color image, and a three-dimensional image acquisition device, which do not need to calculate depth values for the acquired invisible light image, thereby saving system storage space, avoiding errors caused by complex calculation, and improving the efficiency and accuracy of registration.
In order to solve the technical problems, the invention adopts a technical scheme that: a method of registration of a depth image with a color image is provided, the method comprising: collecting an invisible light image and a color image of an object to be detected by using collection equipment; calculating a pixel coordinate offset value of each pixel in the invisible light image relative to the reference image; and calculating the depth value of each pixel in the color image by using the offset value and the parameters of the acquisition equipment.
The method for acquiring the invisible light image and the color image of the object to be detected by using the acquisition equipment comprises the following steps: projecting a structured light pattern to an object to be detected by using a structured light depth camera, and collecting an invisible light image of the object to be detected; and acquiring a color image of the object to be measured by using a color camera.
The method for calculating the depth value of each pixel in the color image by using the offset value and the parameters of the acquisition equipment comprises the following steps: calculating the pixel coordinate and the depth value of each pixel in the invisible light image on the corresponding pixel on the color image by using the offset value and the parameters of the acquisition equipment; and processing the pixel coordinates and the depth values of the color image by using an interpolation algorithm to obtain the depth values of all pixels in the color image.
The method for calculating, by using the offset value and the parameters of the acquisition device, the pixel coordinates and depth value of the pixel in the color image corresponding to each pixel in the invisible light image comprises: calculating the correspondence between the pixel coordinates (u_D, v_D) of the invisible light image and the pixel coordinates (u_R, v_R) of the color image, together with the depth value corresponding to the pixel coordinates (u_R, v_R) of the color image, by the following formula:

Z_D = Z_0·b·f / (b·f + Z_0·Δ)
Z_R·[u_R, v_R, 1]^T = M_R·(R·M_D^(-1)·Z_D·[u_D, v_D, 1]^T + T)

where [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous pixel coordinates in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera; Δ is the pixel coordinate offset value; b is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image plane and the acquisition module.
The interpolation algorithm is one of trilinear interpolation, tricubic interpolation, and Kriging interpolation.
The structured light depth camera is an infrared structured light depth camera and comprises an infrared projection module and an infrared receiving module; the structured light pattern is an irregular speckle pattern.
The calculating of the pixel coordinate offset value of each pixel in the invisible light image relative to the reference image includes: determining a displacement mapping relation between each pixel in the invisible light image and the corresponding pixel in the reference image; determining a corresponding search algorithm; and calculating the pixel coordinate offset value Δ according to the displacement mapping relation and the search algorithm.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided a three-dimensional image acquisition apparatus, the acquisition apparatus comprising: the acquisition equipment is used for acquiring the invisible light image and the color image of the object to be detected; a processor for calculating a pixel coordinate offset value of each pixel in the invisible light image relative to a reference image; and calculating the depth value of each pixel in the color image by using the deviation value and the parameters of the acquisition equipment, thereby obtaining the three-dimensional image of the object to be detected.
Wherein the acquisition device comprises a structured light depth camera and a color camera; the structured light depth camera is used for projecting a structured light pattern to the object to be detected and acquiring an invisible light image of the object to be detected; the color camera is used for collecting color images of the object to be measured.
Wherein the processor is specifically configured to calculate the correspondence between the pixel coordinates (u_D, v_D) of the invisible light image and the pixel coordinates (u_R, v_R) of the color image, together with the depth value corresponding to the pixel coordinates (u_R, v_R) of the color image, using the following formula:

Z_D = Z_0·b·f / (b·f + Z_0·Δ)
Z_R·[u_R, v_R, 1]^T = M_R·(R·M_D^(-1)·Z_D·[u_D, v_D, 1]^T + T)

where [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous pixel coordinates in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera; Δ is the pixel coordinate offset value; b is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image plane and the acquisition module.
The invention has the beneficial effects that: unlike the prior art, the depth image and color image registration method of the present invention includes: collecting an invisible light image and a color image of an object to be detected by using collection equipment; calculating a pixel coordinate offset value of each pixel in the invisible light image relative to the reference image; and calculating the depth value of each pixel in the color image by using the offset value and the parameters of the acquisition equipment. In this way, the depth value on the corresponding pixel coordinate on the acquired color image can be obtained by directly using the deviation value calculated by the acquired invisible light image and the reference image and the parameter of the acquisition device. The depth value of the collected invisible light image does not need to be calculated, so that the storage space of the system is saved, errors caused by complex calculation are avoided, and the registration efficiency and precision are improved.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of the registration method of a depth image and a color image according to the present invention;
FIG. 2 is a schematic flowchart of an example of S12 in an embodiment of the registration method of a depth image and a color image according to the present invention;
FIG. 3 is a schematic flowchart of an example of S13 in an embodiment of the registration method of a depth image and a color image according to the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of a three-dimensional image acquisition device according to the present invention;
FIG. 5 is a schematic structural diagram of another embodiment of the three-dimensional image acquisition device according to the present invention.
Detailed Description
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a registration method of a depth image and a color image according to the present invention, the method includes:
S11: Acquiring the invisible light image and the color image of the object to be measured by using the acquisition device.
The invisible light image can be acquired through the invisible light receiving module, the color image can be acquired through the color camera, and the acquisition mode can be photographing or shooting.
Generally, the invisible light projection module projects light toward the object to be measured, and then the invisible light receiving module collects an invisible light image of the object to be measured. The invisible light projection module and the invisible light receiving module can jointly form an invisible light shooting camera.
The invisible light projection module is composed of a light source and a diffractive optical element: the light source emits laser at a single wavelength in the invisible band, and the diffractive optical element collimates the laser and then splits it into a plurality of irregularly distributed invisible light beams. In other embodiments, the light source may be a light source array, such as a VCSEL array, and the array may be arranged in a pattern consistent with a sub-pattern of the projected beam pattern. The invisible light projection module and the invisible light receiving module are separated by a certain distance.
In addition, the invisible light projection module, the invisible light receiving module and the color camera can be arranged on the same straight line or at a certain angle. Due to the fact that the invisible light receiving module and the color camera are not overlapped in geometry, pixel parallax exists between the collected invisible light image and the collected color image, namely the relative positions of pixel coordinates of a certain point in space in the invisible light image and the color image are different.
The invisible light may be infrared light, ultraviolet light, or the like, and may be, for example, infrared light having a wavelength of 830nm or 850 nm.
Optionally, in an embodiment, S11 may specifically be:
projecting a structured light pattern to an object to be detected by using a structured light depth camera, and collecting an invisible light image of the object to be detected; and acquiring a color image of the object to be measured by using a color camera.
The structured light depth camera is an infrared structured light depth camera and comprises an infrared projection module and an infrared receiving module; the structured light pattern is an irregular speckle pattern.
It will be appreciated that in the general case, or when the object to be measured is in motion, the structured light depth camera and the color camera should have the same acquisition frequency so that the invisible light image and the color image can be captured at the same moment. Of course, when the captured object is completely stationary, or in other special cases, the invisible light image and the color image may also be acquired at different times.
S12: a pixel coordinate offset value of each pixel in the invisible light image with respect to the reference image is calculated.
Wherein the reference image is an invisible light image with known depth values. In particular, a plate may be placed in a plane perpendicular to the optical axis of the structured light depth camera, the plate having a known distance to the structured light depth camera. And projecting the structured light pattern to the flat plate and photographing or shooting through a structured light depth camera to obtain a reference image. The pattern of the reference image and the pattern of the invisible light image for collecting the object to be measured are respectively collected under the projection of the same projection module, namely, the consistency of the structured light pattern (speckle pattern) is ensured.
Alternatively, the known depth of the reference image may be set arbitrarily; in general, the middle value of the depth measurement range of the structured light depth camera may be selected. For example, if the depth measurement range of the structured light depth camera is (a, b), the depth of the reference image may be set to (a + b)/2.
Optionally, referring to fig. 2, in an embodiment, S12 may specifically include:
S121: Determining the displacement mapping relation between each pixel in the invisible light image and the corresponding pixel in the reference image.
S122: a corresponding search algorithm is determined.
S123: and calculating the pixel coordinate offset value delta according to the displacement mapping relation and a search algorithm.
Specifically, the calculation of the offset value is briefly described as follows:
the method comprises the steps of extracting a multi-pixel area at least containing a target pixel point in an acquired invisible light image, finding an area which is extremely similar to the pixel area (the similarity reaches a preset condition) in a reference image (namely finding a corresponding pixel point of the target pixel point in the invisible light image in the reference image) due to the fact that speckle patterns are the same, and obtaining the coordinate of the pixel corresponding to the target pixel point in the reference image, so that the offset value of the pixel coordinate of the same point in a space in the invisible light image and the reference image can be obtained by comparing the offset conditions of the two pixel points.
Specifically, a displacement mapping function is first determined for each pixel. In general, this function must account for both the translation and the deformation of each point of the object to be measured between the acquired invisible light image and the reference image. In this embodiment, because the patterns in the two images differ only by position changes caused by depth changes of the object, with no significant deformation, the function can be simplified to pure translation: X' = X + Δ, where X' and X are the pixel coordinates of a point of the object in the acquired invisible light image and in the reference image, respectively, and Δ is the pixel coordinate offset value to be solved.
Second, a corresponding search algorithm is determined. Newton iteration is commonly used, but it involves many square-root and division operations, making the algorithm inefficient to implement and execute. This embodiment may instead use a search algorithm based on iterative least squares. Since only translation along the X direction needs to be considered, a one-dimensional search suffices, which greatly improves both the efficiency and the accuracy of the algorithm.
Finally, the offset value Δ can be solved by combining the displacement mapping function with the iterative least squares method.
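The one-dimensional offset search of S121–S123 can be sketched as follows. This is not the patent's iterative least-squares implementation; it is a minimal illustration that matches a window around each pixel against the reference image with zero-normalized cross-correlation along the X direction. The window and search-range parameters (`half_win`, `max_shift`) are invented example values:

```python
import numpy as np

def pixel_offset(ir_img, ref_img, y, x, half_win=4, max_shift=32):
    """Find the horizontal offset d that best matches a window around
    (x, y) in the captured speckle image against the reference image,
    scoring candidates by zero-normalized cross-correlation (ZNCC)."""
    patch = ir_img[y - half_win:y + half_win + 1,
                   x - half_win:x + half_win + 1].astype(np.float64)
    patch = patch - patch.mean()
    best_delta, best_score = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        xs = x + d
        cand = ref_img[y - half_win:y + half_win + 1,
                       xs - half_win:xs + half_win + 1].astype(np.float64)
        if cand.shape != patch.shape:
            continue  # candidate window falls off the reference image
        cand = cand - cand.mean()
        denom = np.linalg.norm(patch) * np.linalg.norm(cand)
        score = (patch * cand).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_score, best_delta = score, d
    return best_delta
```

In a real implementation the integer match above would be refined to sub-pixel precision by the iterative least-squares step the description mentions; the brute-force scan is kept here only to make the 1-D search idea concrete.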
S13: and calculating the depth value of each pixel in the color image by using the offset value and the parameters of the acquisition equipment.
The offset value is the offset value Δ calculated in S12, and the capturing devices include a capturing device for an invisible light image and a capturing device for a color image, which may be a structured light depth camera and a color camera in this embodiment. The acquisition device parameters comprise external parameters and internal parameters, wherein the external parameters comprise the offset degree between the structured light depth camera and the color camera, such as distance and rotation angle, and the internal parameters comprise parameters inside the camera, such as the lens and focal length of the camera.
Optionally, referring to fig. 3, in an embodiment, S13 may specifically include:
S131: Calculating, by using the offset value and the parameters of the acquisition device, the pixel coordinates and the depth value of the pixel in the color image corresponding to each pixel in the invisible light image.
Since each pixel of the captured invisible light image has a correspondence relationship with the reference image and the captured color image, respectively, a correspondence relationship between the color image and each pixel coordinate in the reference image can be established.
Specifically, the correspondence between the pixel coordinates (u_D, v_D) of the invisible light image and the pixel coordinates (u_R, v_R) of the color image, together with the depth value corresponding to (u_R, v_R), can be calculated using the following formula:

Z_D = Z_0·b·f / (b·f + Z_0·Δ)
Z_R·[u_R, v_R, 1]^T = M_R·(R·M_D^(-1)·Z_D·[u_D, v_D, 1]^T + T)

where [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous pixel coordinates in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera; Δ is the pixel coordinate offset value; b is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image plane and the acquisition module.
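A numeric sketch of this per-pixel mapping is shown below. The function is an illustrative implementation, not the patent's code: it assumes the standard structured-light triangulation relation Z_D = Z_0·b·f / (b·f + Z_0·Δ) together with the pinhole transfer between the two cameras, and all matrix and parameter values in the test are invented examples:

```python
import numpy as np

def map_to_color(u_d, v_d, delta, M_D, M_R, R, T, b, f, Z0):
    """Project a depth-camera pixel (u_d, v_d) with offset delta into
    the color camera, returning (u_R, v_R, Z_R).
    Z_D comes from triangulation against the reference plane at Z0;
    the transfer is Z_R * p_R = M_R @ (R @ inv(M_D) @ (Z_D * p_D) + T)."""
    Z_D = (Z0 * b * f) / (b * f + Z0 * delta)  # depth at the IR pixel
    p_d = np.array([u_d, v_d, 1.0])            # homogeneous IR pixel
    # back-project into the depth-camera frame, move into the color frame
    X_color = R @ (np.linalg.inv(M_D) @ (Z_D * p_d)) + T
    p_r = M_R @ X_color
    Z_R = p_r[2]
    return p_r[0] / Z_R, p_r[1] / Z_R, Z_R
```

With Δ = 0 the point lies on the reference plane (Z_D = Z_0), and a pure horizontal baseline T shifts the pixel by f·T_x/Z_0 columns, as expected from the pinhole model.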
S132: and processing the pixel coordinates and the depth values of the color image by using an interpolation algorithm to obtain the depth values of all pixels in the color image.
Because the acquired invisible light image has a certain offset from the color image, not every pixel in the color image has a corresponding pixel in the invisible light image; in other words, some pixels in the color image cannot obtain a depth value directly. The pixel coordinates and depth values already computed for the color image can therefore be processed with an interpolation algorithm to obtain depth values for all pixels in the color image.
Optionally, the interpolation algorithm is one of trilinear interpolation, tricubic interpolation, and Kriging interpolation, which is not limited herein.
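As a concrete stand-in for those interpolation options, the sketch below densifies the sparse (u_R, v_R, Z_R) samples into a full depth map for the color camera. `densify_depth` is an illustrative assumption, not the patent's algorithm: it rounds each sample to its nearest pixel and fills the remaining holes with the nearest sampled value, a deliberately crude substitute for trilinear/tricubic/Kriging interpolation:

```python
import numpy as np

def densify_depth(points_uv, depths, width, height):
    """Build a dense (height, width) depth map from sparse color-camera
    samples. Each (u_R, v_R) is rounded onto the pixel grid; any pixel
    left empty is assigned the depth of the nearest seeded sample
    (nearest-neighbour fill)."""
    dense = np.full((height, width), np.nan)
    pts = np.asarray(points_uv, dtype=float)
    z = np.asarray(depths, dtype=float)
    cols = np.clip(np.round(pts[:, 0]).astype(int), 0, width - 1)
    rows = np.clip(np.round(pts[:, 1]).astype(int), 0, height - 1)
    dense[rows, cols] = z
    # fill holes with the value of the nearest seeded sample
    hole_r, hole_c = np.where(np.isnan(dense))
    for r, c in zip(hole_r, hole_c):
        d2 = (rows - r) ** 2 + (cols - c) ** 2
        dense[r, c] = z[np.argmin(d2)]
    return dense
```

A production system would replace the nearest-neighbour fill with one of the smoother interpolants the patent names; the structure of the step — scatter the samples, then fill every remaining color pixel — is the same.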
Thus, through the above-mentioned S11-S13, the depth value of each pixel in the acquired color image is obtained, i.e., the color image with the depth information can be acquired, in other words, the registration of the depth image and the color image is completed.
Of course, in other embodiments, the RGB values of each pixel in the invisible light image may be obtained by the above-mentioned corresponding method, and then the depth value of the invisible light image is obtained by performing a correlation algorithm with the reference image, so as to obtain a color image with depth information.
Unlike the prior art, the registration method of the depth image and the color image according to the present embodiment includes: collecting an invisible light image and a color image of an object to be detected by using collection equipment; calculating a pixel coordinate offset value of each pixel in the invisible light image relative to the reference image; and calculating the depth value of each pixel in the color image by using the offset value and the parameters of the acquisition equipment. In this way, the depth value on the corresponding pixel coordinate on the acquired color image can be obtained by directly using the deviation value calculated by the acquired invisible light image and the reference image and the parameter of the acquisition device. The depth value of the collected invisible light image does not need to be calculated, so that the storage space of the system is saved, errors caused by complex calculation are avoided, and the registration efficiency and precision are improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a three-dimensional image capturing device according to the present invention, the device includes:
an acquisition device 41 for acquiring an invisible light image and a color image of an object to be measured;
a processor 42 for calculating a pixel coordinate offset value for each pixel in the invisible light image relative to the reference image; and calculating the depth value of each pixel in the color image by using the deviation value and the parameters of the acquisition equipment, thereby obtaining the three-dimensional image of the object to be detected.
The capturing device 41 specifically includes a structured light depth camera 411 and a color camera 412.
The structured light depth camera 411 is used for projecting a structured light pattern to the object to be measured and collecting an invisible light image of the object to be measured.
The color camera 412 is used to acquire a color image of the object to be measured.
In another embodiment, as shown in fig. 5, the structured light depth camera 411 is an infrared structured light depth camera 50, which includes: an infrared projection module 51 for projecting an infrared structured light pattern to an object to be measured; and the infrared receiving module 52 is used for acquiring an infrared image of the object to be detected.
Optionally, the infrared projection module 51 is configured to project an infrared speckle pattern, and the infrared receiving module 52 is configured to receive an infrared speckle image of the object to be detected.
It is understood that the infrared projection module 51, the infrared receiving module 52 and the color camera 412 can be disposed on the same straight line, or disposed at a certain angle.
Optionally, in other embodiments, the processor 42 is also used for
The correspondence between the pixel coordinates (u_D, v_D) of the invisible light image and the pixel coordinates (u_R, v_R) of the color image, together with the depth value corresponding to (u_R, v_R), is calculated by the following formula:

Z_D = Z_0·b·f / (b·f + Z_0·Δ)
Z_R·[u_R, v_R, 1]^T = M_R·(R·M_D^(-1)·Z_D·[u_D, v_D, 1]^T + T)

where [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous pixel coordinates in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera; Δ is the pixel coordinate offset value; b is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image plane and the acquisition module.
The following detailed description of the principles and procedures of the present embodiment is provided as a specific example:
1. firstly, an infrared camera and a color camera are used for collecting speckle images and color images of a target.
The processor must set the acquisition times and frequencies of the infrared receiving module of the infrared camera and of the color camera, so that the target images are captured synchronously. The intrinsic parameters (focal length and principal point) of the infrared receiving module and the color camera, and the extrinsic parameters (rotation and translation) of the color camera relative to the infrared receiving module, must be calibrated in advance. Generally speaking, these parameters are stored in system-designated memory and can be retrieved whenever needed for subsequent calculation.
2. Then the offset value Δ of each pixel in the infrared image relative to the reference speckle image is calculated by using a digital image correlation method.
Similar to the calibration of the camera's intrinsic and extrinsic parameters, the reference speckle image is also acquired in advance. Specifically, a flat plate is placed at a known distance Z_0 from the infrared receiving module, perpendicular to its optical axis; the infrared projection module projects speckles onto the plate, and the speckle image collected by the infrared receiving module is taken as the reference speckle image.
3. And calculating, by combining the camera parameters and the offset value Δ, a group of three-dimensional point cloud data containing depth values and coordinate values under the pixel coordinate system of the color camera.
After the offset value Δ of each pixel in the infrared speckle image is calculated in step 2, a group of three-dimensional point cloud data (u_R, v_R, Z_R) consisting of pixel coordinates and depth values is computed directly from the following formula:

Z_D = Z_0·b·f / (b·f + Z_0·Δ)
Z_R·[u_R, v_R, 1]^T = M_R·(R·M_D^(-1)·Z_D·[u_D, v_D, 1]^T + T)

where [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous pixel coordinates in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera; Δ is the pixel coordinate offset value; b is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image plane and the acquisition module.
The obtained three-dimensional point cloud data refers to coordinate values and depth values of space points corresponding to all pixels of the depth camera imaged in a pixel coordinate system of the color camera.
4. And finally, calculating the depth value of each pixel point of the color camera by utilizing an interpolation algorithm.
The interpolation algorithm can be one of trilinear interpolation, tricubic interpolation, and Kriging interpolation.
Different from the prior art, the three-dimensional image capturing device of the present embodiment includes: the acquisition equipment is used for acquiring the invisible light image and the color image of the object to be detected; a processor for calculating a pixel coordinate offset value of each pixel in the invisible light image relative to a reference image; and calculating the depth value of each pixel in the color image by using the offset value and the parameters of the acquisition equipment. In this way, the depth value on the corresponding pixel coordinate on the acquired color image can be obtained by directly using the deviation value calculated by the acquired invisible light image and the reference image and the parameter of the acquisition device. The depth value of the collected invisible light image does not need to be calculated, so that the storage space of the system is saved, errors caused by complex calculation are avoided, and the registration efficiency and precision are improved.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (7)
1. A method for registering a depth image with a color image, comprising:
collecting an invisible light image and a color image of an object to be detected by using collection equipment;
calculating a pixel coordinate offset value of each pixel in the invisible light image relative to a reference image;
calculating, by using the following formulas, the correspondence between the pixel coordinates (u_D, v_D) of the invisible light image and the pixel coordinates (u_R, v_R) of the color image, as well as the depth value corresponding to the pixel coordinates (u_R, v_R) of the color image:
Z_R·[u_R, v_R, 1]^T = M_R·(R·Z_D·M_D^(-1)·[u_D, v_D, 1]^T + T), with Z_D = B·f·Z_0/(B·f + Δ·Z_0);
wherein [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous coordinates of the pixels in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera, respectively; Δ is the pixel coordinate offset value; B is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image and the acquisition module;
and processing the pixel coordinates and the depth values of the color image by using an interpolation algorithm to obtain the depth values of all pixels in the color image.
2. The registration method according to claim 1,
the method for acquiring the invisible light image and the color image of the object to be detected by using the acquisition equipment comprises the following steps:
projecting a structured light pattern to the object to be detected by using a structured light depth camera, and collecting an invisible light image of the object to be detected; and
and acquiring a color image of the object to be detected by using a color camera.
3. The registration method according to claim 1,
the interpolation algorithm is one of trilinear interpolation, tricubic interpolation and the Kriging interpolation algorithm.
4. The registration method according to claim 2,
the structured light depth camera is an infrared structured light depth camera and comprises an infrared projection module and an infrared receiving module;
the structured light pattern is an irregular speckle pattern.
5. The registration method according to claim 1,
the calculating a pixel coordinate offset value of each pixel in the invisible light image relative to a reference image comprises:
determining a displacement mapping relation between each pixel in the invisible light image and a corresponding pixel in the reference image;
determining a corresponding search algorithm;
and calculating the pixel coordinate offset value Δ according to the displacement mapping relation and the search algorithm.
6. A three-dimensional image acquisition apparatus, comprising:
the acquisition equipment is used for acquiring the invisible light image and the color image of the object to be detected;
a processor for calculating a pixel coordinate offset value for each pixel in the invisible light image relative to a reference image; and
calculating, by using the following formulas, the correspondence between the pixel coordinates (u_D, v_D) of the invisible light image and the pixel coordinates (u_R, v_R) of the color image, as well as the depth value corresponding to the pixel coordinates (u_R, v_R) of the color image:
Z_R·[u_R, v_R, 1]^T = M_R·(R·Z_D·M_D^(-1)·[u_D, v_D, 1]^T + T), with Z_D = B·f·Z_0/(B·f + Δ·Z_0);
wherein [u_R, v_R, 1]^T and [u_D, v_D, 1]^T are the homogeneous coordinates of the pixels in the color image and invisible light image coordinate systems, respectively; Z_R is the depth value corresponding to the coordinates (u_R, v_R); M_R, M_D, R and T are the acquisition device parameters, where M_R and M_D are the intrinsic matrices of the color camera and the structured light depth camera, respectively, and R and T are the rotation matrix and translation matrix of the structured light depth camera relative to the color camera, respectively; Δ is the pixel coordinate offset value; B is the distance between the projection module and the acquisition module in the structured light depth camera; f is the focal length of the lens of the acquisition module; and Z_0 is the distance between the reference image and the acquisition module;
and processing the pixel coordinates and the depth values of the color image by using an interpolation algorithm to obtain the depth values of all pixels in the color image, thereby obtaining the three-dimensional image of the object to be detected.
7. The acquisition device of claim 6,
the acquisition device comprises a structured light depth camera and a color camera;
the structured light depth camera is used for projecting a structured light pattern to the object to be detected and collecting an invisible light image of the object to be detected;
the color camera is used for collecting a color image of the object to be detected.
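Numerically, the correspondence recited in the claims above can be sketched as follows: the offset Δ yields the triangulated depth Z_D against the reference plane at distance Z_0, the depth pixel is back-projected through the inverse of M_D, transformed by R and T, and re-projected through M_R. This is a simplified single-pixel sketch with illustrative camera parameters; the sign convention for Δ follows one common structured-light formulation and is an assumption, not the patent's stated convention:

```python
import numpy as np

def register_pixel(uD, vD, delta, MD, MR, R, T, B, f, Z0):
    """Map a depth-camera pixel (uD, vD) with measured offset `delta`
    to color-camera pixel coordinates (uR, vR) and depth ZR."""
    # Triangulated depth from the offset against the reference plane at Z0:
    # 1/ZD = 1/Z0 + delta/(B*f)  =>  ZD = B*f*Z0 / (B*f + delta*Z0)
    ZD = B * f * Z0 / (B * f + delta * Z0)
    # Back-project to a 3-D point in the depth-camera frame ...
    pD = ZD * np.linalg.inv(MD) @ np.array([uD, vD, 1.0])
    # ... rotate/translate into the color-camera frame, then project.
    pR = MR @ (R @ pD + T)
    return pR[0] / pR[2], pR[1] / pR[2], pR[2]

# Illustrative intrinsics: identical pinhole cameras 5 cm apart.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
R, T = np.eye(3), np.array([0.05, 0.0, 0.0])
uR, vR, ZR = register_pixel(320, 240, 0.0, K, K, R, T, B=0.05, f=500.0, Z0=1.0)
print(uR, vR, ZR)  # -> 345.0 240.0 1.0 (delta=0 means the reference plane)
```

With Δ = 0 the point lies exactly on the reference plane (Z_D = Z_0), and the 5 cm baseline shifts its color-camera image by B·f/Z_0 = 25 pixels; running this over every depth pixel produces the scattered samples that the interpolation step then resamples onto the color grid.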
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610835431.9A CN106548489B (en) | 2016-09-20 | 2016-09-20 | A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106548489A CN106548489A (en) | 2017-03-29 |
CN106548489B true CN106548489B (en) | 2019-05-10 |
Family
ID=58368097
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610835431.9A Active CN106548489B (en) | 2016-09-20 | 2016-09-20 | A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106548489B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107424187B (en) * | 2017-04-17 | 2023-10-24 | 奥比中光科技集团股份有限公司 | Depth calculation processor, data processing method and 3D image device |
CN108871310A (en) * | 2017-05-12 | 2018-11-23 | 中华映管股份有限公司 | Thermal image positioning system and localization method |
CN107360066A (en) * | 2017-06-29 | 2017-11-17 | 深圳奥比中光科技有限公司 | A kind of household service robot and intelligent domestic system |
CN107229262A (en) * | 2017-06-29 | 2017-10-03 | 深圳奥比中光科技有限公司 | A kind of intelligent domestic system |
CN107437261B (en) * | 2017-07-14 | 2021-03-09 | 梅卡曼德(北京)机器人科技有限公司 | Depth image acquisition method |
CN108413893B (en) * | 2018-03-12 | 2020-06-05 | 四川大学 | Method and device for detecting surface shape of planar element by speckle deflection technique |
CN110363806B (en) * | 2019-05-29 | 2021-12-31 | 中德(珠海)人工智能研究院有限公司 | Method for three-dimensional space modeling by using invisible light projection characteristics |
CN110705487B (en) * | 2019-10-08 | 2022-07-29 | 清华大学深圳国际研究生院 | Palm print acquisition equipment and method and image acquisition device thereof |
CN111045030B (en) * | 2019-12-18 | 2022-09-13 | 奥比中光科技集团股份有限公司 | Depth measuring device and method |
CN111882596B (en) * | 2020-03-27 | 2024-03-22 | 东莞埃科思科技有限公司 | Three-dimensional imaging method and device for structured light module, electronic equipment and storage medium |
CN111537074A (en) * | 2020-03-31 | 2020-08-14 | 深圳奥比中光科技有限公司 | Temperature measuring method and system |
CN111721236B (en) * | 2020-05-24 | 2022-10-25 | 奥比中光科技集团股份有限公司 | Three-dimensional measurement system and method and computer equipment |
CN112734862A (en) * | 2021-02-10 | 2021-04-30 | 北京华捷艾米科技有限公司 | Depth image processing method and device, computer readable medium and equipment |
CN112950502B (en) * | 2021-02-26 | 2024-02-13 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and storage medium |
CN116843731A (en) * | 2022-03-23 | 2023-10-03 | 腾讯科技(深圳)有限公司 | Object recognition method and related equipment |
CN115797426B (en) * | 2023-02-13 | 2023-05-12 | 合肥的卢深视科技有限公司 | Image alignment method, electronic device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102625127A (en) * | 2012-03-24 | 2012-08-01 | 山东大学 | Optimization method suitable for virtual viewpoint generation of 3D television |
CN103607584A (en) * | 2013-11-27 | 2014-02-26 | 浙江大学 | Real-time registration method for depth maps shot by kinect and video shot by color camera |
CN103778643A (en) * | 2014-01-10 | 2014-05-07 | 深圳奥比中光科技有限公司 | Method and device for generating target depth information in real time |
CN103796001A (en) * | 2014-01-10 | 2014-05-14 | 深圳奥比中光科技有限公司 | Method and device for synchronously acquiring depth information and color information |
CN104463880A (en) * | 2014-12-12 | 2015-03-25 | 中国科学院自动化研究所 | RGB-D image acquisition method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6516429B2 (en) * | 2014-09-16 | 2019-05-22 | キヤノン株式会社 | Distance measuring device, imaging device, and distance measuring method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106548489B (en) | A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image | |
CN110555889B (en) | CALTag and point cloud information-based depth camera hand-eye calibration method | |
CN110276808B (en) | Method for measuring unevenness of glass plate by combining single camera with two-dimensional code | |
CN106595528B (en) | A kind of micro- binocular stereo vision measurement method of telecentricity based on digital speckle | |
KR101666959B1 (en) | Image processing apparatus having a function for automatically correcting image acquired from the camera and method therefor | |
CN109859272B (en) | Automatic focusing binocular camera calibration method and device | |
CN107860337B (en) | Structured light three-dimensional reconstruction method and device based on array camera | |
Douxchamps et al. | High-accuracy and robust localization of large control markers for geometric camera calibration | |
CN109186491A (en) | Parallel multi-thread laser measurement system and measurement method based on homography matrix | |
US20150187140A1 (en) | System and method for image composition thereof | |
CN109827521B (en) | Calibration method for rapid multi-line structured optical vision measurement system | |
WO2007015059A1 (en) | Method and system for three-dimensional data capture | |
CN113034612B (en) | Calibration device, method and depth camera | |
CN112184811B (en) | Monocular space structured light system structure calibration method and device | |
WO2018201677A1 (en) | Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system | |
WO2013005244A1 (en) | Three-dimensional relative coordinate measuring device and method | |
CN109272555B (en) | External parameter obtaining and calibrating method for RGB-D camera | |
JP2015021862A (en) | Three-dimensional measurement instrument and three-dimensional measurement method | |
KR101943046B1 (en) | Calibration Method of Projector-Camera using Auxiliary RGB-D camera | |
CN110827360B (en) | Photometric stereo measurement system and method for calibrating light source direction thereof | |
WO2022222291A1 (en) | Optical axis calibration method and apparatus of optical axis detection system, terminal, system, and medium | |
JP2020008502A (en) | Depth acquisition device by polarization stereo camera, and method of the same | |
JP7489253B2 (en) | Depth map generating device and program thereof, and depth map generating system | |
CN112950727B (en) | Large-view-field multi-target simultaneous ranging method based on bionic curved compound eye | |
Zhang et al. | Improved camera calibration method and accuracy analysis for binocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
CP01 | Change in the name or title of a patent holder ||
Address after: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808
Patentee after: Obi Zhongguang Technology Group Co., Ltd
Address before: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808
Patentee before: SHENZHEN ORBBEC Co.,Ltd.