CN112734862A - Depth image processing method and device, computer readable medium and equipment - Google Patents

Authority: CN (China)
Prior art keywords: depth image, pixel, pixel point, camera, matrix
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110183299.9A
Other languages: Chinese (zh)
Inventors: 郭建亚, 李骊
Current assignee: Beijing HJIMI Technology Co Ltd (the listed assignees may be inaccurate)
Original assignee: Beijing HJIMI Technology Co Ltd
Application filed by Beijing HJIMI Technology Co Ltd
Priority to CN202110183299.9A
Publication of CN112734862A

Classifications

    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20032 Median filtering

Abstract

The method acquires an original depth image and, for each pixel point in it, uses registration matrix parameters together with the pixel value of that pixel point to calculate by registration the position to which the pixel point maps in the three-channel color image corresponding to the original depth image. Because the registration matrix parameters are fixed matrix parameters computed in advance from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera, the fixed matrix parameter part required during registration does not need to be recomputed when each pixel point in the original depth image is mapped to its corresponding position in the RGB image. This saves computation during registration, reduces the time consumed by registering the original depth image, and realizes real-time registration of the original depth image.

Description

Depth image processing method and device, computer readable medium and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a computer-readable medium, and a device for processing a depth image.
Background
An existing three-channel color depth (Red Green Blue-Depth, RGB-D) camera includes a depth camera and a color camera. A depth image acquired by the depth camera needs to be registered with the three-channel color (Red Green Blue, RGB) image acquired by the color camera, so that the registered depth image and the color image are aligned.
However, in the prior art, registering the depth image with the RGB image requires performing matrix multiplication, matrix addition, and matrix inversion operations multiple times for every pixel point in the depth image. Because the registration of each pixel point involves complex matrix operations over several fixed and non-fixed parameter parts, the registration process has a large amount of computation and takes a long time. In particular, when a video stream shot by an RGB-D camera is registered, it is difficult to register the depth image and the color image of every frame in the video stream in real time.
Disclosure of Invention
In view of the defects of the prior art, the present application provides a depth image processing method and apparatus, a computer-readable medium, and a device, so as to reduce the amount of computation and the time consumed by the registration process.
The first aspect of the present application discloses a depth image processing method, including:
acquiring an original depth image;
for each pixel point in the original depth image, using registration matrix parameters and the pixel value of the pixel point in the original depth image, calculating by registration the corresponding position to which the pixel point maps in the three-channel color RGB image corresponding to the original depth image; the registration matrix parameters are fixed matrix parameters obtained by pre-calculation from the internal parameters of an RGB camera, the internal parameters of a depth camera, and the external parameters of the RGB camera and the depth camera;
mapping each pixel point in the original depth image to a corresponding position in a depth image to be registered; the depth image to be registered is a depth image whose resolution is consistent with that of the RGB image; the corresponding position in the depth image to be registered to which a pixel point in the original depth image is mapped is consistent with the corresponding position in the RGB image to which that pixel point is mapped;
respectively determining the pixel value of each pixel point in the original depth image mapped to the corresponding position in the depth image to be registered by using the pixel value of each pixel point in the original depth image;
and taking the depth image to be registered after the pixel value is determined as the depth image after the registration processing.
Optionally, in the depth image processing method, the registration matrix parameters include: a first matrix and a second matrix; the first matrix is a fixed matrix parameter obtained by pre-calculation from the internal parameters of the RGB camera and the external parameters of the RGB camera and the depth camera; the second matrix is a fixed matrix parameter obtained by pre-calculation from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera.
Optionally, in the above depth image processing method, the internal reference of the RGB camera includes: the scale factor of the RGB image in the x-axis direction, the scale factor of the RGB image in the y-axis direction and the principal point coordinates of the RGB image; an internal reference of the depth camera, comprising: the scale factor of the original depth image in the x-axis direction, the scale factor of the original depth image in the y-axis direction and the principal point coordinates of the original depth image; external reference of the RGB camera and the depth camera comprises: rotation parameters of a coordinate system of the RGB camera relative to a coordinate system of the depth camera, and translation parameters of the coordinate system of the RGB camera relative to the coordinate system of the depth camera.
Optionally, in the depth image processing method, the calculating of the registration matrix parameter includes:
calculating to obtain a first matrix according to the internal matrix and the transformation matrix of the RGB camera; calculating to obtain a second matrix according to the internal matrix of the RGB camera, the transformation matrix and the internal matrix of the depth camera; the internal matrix of the RGB camera is obtained through internal parameter construction of the RGB camera; the internal matrix of the depth camera is obtained through the internal reference construction of the depth camera; the transformation matrix is obtained through the external parameters of the RGB camera and the depth camera.
Optionally, in the depth image processing method, the determining, by using a pixel value of each pixel point in the original depth image, a pixel value of each pixel point in the original depth image that is mapped to a corresponding position in the depth image to be registered includes:
respectively calculating the mapping pixel value of each pixel point in the original depth image by using the pixel value of each pixel point in the original depth image and the registration matrix parameter;
for each pixel point in the depth image to be registered: if the pixel point has a mapping relation with a plurality of pixel points in the original depth image, taking the minimum of the mapping pixel values of those pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered; and if the pixel point has a mapping relation with a single pixel point in the original depth image, taking the mapping pixel value of that pixel point as the pixel value of the pixel point in the depth image to be registered.
Optionally, in the depth image processing method, the determining, by using a pixel value of each pixel point in the original depth image, a pixel value of each pixel point in the original depth image that is mapped to a corresponding position in the depth image to be registered includes:
for each pixel point in the depth image to be registered: if the pixel point has a mapping relation with a plurality of pixel points in the original depth image, taking the minimum of the pixel values of those pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered; and if the pixel point has a mapping relation with a single pixel point in the original depth image, taking the pixel value of that pixel point as the pixel value of the pixel point in the depth image to be registered.
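The minimum-value rule above can be sketched as follows. This is a minimal Python sketch, not the patent's implementation: it assumes a pixel value of zero marks an unset target pixel, and the function and variable names are illustrative.

```python
import numpy as np

def resolve_mapped_depths(mappings, out_shape):
    """Fill the depth image to be registered, keeping the minimum depth
    (the surface nearest the camera) whenever several source pixel points
    map to the same target pixel. `mappings` is an iterable of
    (row, col, depth) triples produced by the registration step."""
    registered = np.zeros(out_shape, dtype=np.float64)  # 0 marks "no data"
    for r, c, z in mappings:
        if 0 <= r < out_shape[0] and 0 <= c < out_shape[1]:
            # Keep the smaller (nearer) depth when two sources collide.
            if registered[r, c] == 0 or z < registered[r, c]:
                registered[r, c] = z
    return registered
```

Taking the minimum is what makes occlusions come out right: of two surfaces that project onto the same RGB pixel, only the nearer one is visible.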
Optionally, in the depth image processing method, after determining the depth image to be registered after the pixel value is set as the depth image after the registration processing, the method further includes:
detecting, for each pixel point in the registered depth image, whether the pixel point is an invalid pixel point, whether the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than a first preset value, and whether the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than a second preset value; an invalid pixel point is a pixel point whose pixel value is zero or larger than the maximum working distance of the depth camera;
if the pixel point is an invalid pixel point, the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than the first preset value, and the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than the second preset value, determining that the pixel point is a hole pixel point;
and performing hole filling on each hole pixel point.
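The hole test above can be sketched on a single image row. This is a hedged Python sketch: the detection conditions follow the text, but filling a hole with the mean of its two horizontal valid neighbours is this sketch's assumption, since the patent only states that hole filling is performed.

```python
def fill_horizontal_holes(row, max_dist, gap_thresh, dev_thresh):
    """Detect and fill hole pixels along one row of the registered depth image.

    A pixel is invalid when its value is 0 or exceeds the depth camera's
    maximum working distance (max_dist). It is a hole when the nearest valid
    neighbours on its left and right are both closer than gap_thresh pixels
    and their values differ by less than dev_thresh."""
    out = list(row)
    n = len(out)
    valid = [0 < v <= max_dist for v in out]
    for i in range(n):
        if valid[i]:
            continue
        # Nearest valid neighbour on each side (None if there is none).
        left = next((j for j in range(i - 1, -1, -1) if valid[j]), None)
        right = next((j for j in range(i + 1, n) if valid[j]), None)
        if left is None or right is None:
            continue
        if (i - left) < gap_thresh and (right - i) < gap_thresh \
                and abs(out[left] - out[right]) < dev_thresh:
            out[i] = (out[left] + out[right]) / 2.0  # fill rule assumed here
    return out
```

The two-sided conditions keep genuine occlusion boundaries (where left and right depths differ sharply) from being filled, while small registration gaps between similar depths are closed.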
The second aspect of the present application discloses a depth image processing apparatus, including:
an acquisition unit configured to acquire an original depth image;
the registration processing unit is used for registering and calculating the corresponding position of each pixel point mapped to the three-channel color RGB image corresponding to the original depth image by using the registration matrix parameter and the pixel value of the pixel point in the original depth image aiming at each pixel point in the original depth image; the registration matrix parameters are fixed matrix parameters obtained by pre-calculating internal parameters of an RGB camera, internal parameters of a depth camera and external parameters of the RGB camera and the depth camera;
the mapping unit is used for mapping each pixel point in the original depth image to a corresponding position in a depth image to be registered; the depth image to be registered is a depth image whose resolution is consistent with that of the RGB image; the corresponding position in the depth image to be registered to which a pixel point in the original depth image is mapped is consistent with the corresponding position in the RGB image to which that pixel point is mapped;
the first determining unit is used for respectively determining the pixel value of each pixel point in the original depth image, which is mapped to the corresponding position in the depth image to be registered, by using the pixel value of each pixel point in the original depth image;
and the second determining unit is used for taking the depth image to be registered after the pixel value is determined as the depth image after the registration processing.
Optionally, in the depth image processing apparatus, the registration matrix parameters include: a first matrix and a second matrix; the first matrix is a fixed matrix parameter obtained by pre-calculating internal parameters of the RGB camera and external parameters of the RGB camera and the depth camera; the second matrix is a fixed matrix parameter obtained by pre-calculating internal parameters of the RGB camera, the depth camera and external parameters of the RGB camera and the depth camera.
Optionally, in the depth image processing apparatus, the internal reference of the RGB camera includes: the scale factor of the RGB image in the x-axis direction, the scale factor of the RGB image in the y-axis direction and the principal point coordinates of the RGB image; an internal reference of the depth camera, comprising: the scale factor of the original depth image in the x-axis direction, the scale factor of the original depth image in the y-axis direction and the principal point coordinates of the original depth image; external reference of the RGB camera and the depth camera comprises: rotation parameters of a coordinate system of the RGB camera relative to a coordinate system of the depth camera, and translation parameters of the coordinate system of the RGB camera relative to the coordinate system of the depth camera.
Optionally, in the processing apparatus for depth images, the apparatus further includes:
the first calculation unit is used for calculating to obtain a first matrix according to the internal matrix and the transformation matrix of the RGB camera;
the second calculation unit is used for calculating to obtain a second matrix according to the internal matrix of the RGB camera, the transformation matrix and the internal matrix of the depth camera; the internal matrix of the RGB camera is obtained through internal parameter construction of the RGB camera; the internal matrix of the depth camera is obtained through the internal reference construction of the depth camera; the transformation matrix is obtained through the external parameters of the RGB camera and the depth camera.
Optionally, in the above depth image processing apparatus, the first determining unit includes:
the first calculating subunit is configured to calculate, by using the pixel value of each pixel in the original depth image and the registration matrix parameter, a mapping pixel value of each pixel in the original depth image;
the first determining subunit is configured to, for each pixel point in the depth image to be registered: if the pixel point has a mapping relation with a plurality of pixel points in the original depth image, use the minimum of the mapping pixel values of those pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered; and if the pixel point has a mapping relation with a single pixel point in the original depth image, use the mapping pixel value of that pixel point as the pixel value of the pixel point in the depth image to be registered.
Optionally, in the above depth image processing apparatus, the first determining unit includes:
the second determining subunit is configured to, for each pixel point in the depth image to be registered, if a mapping relationship exists between the pixel point and a plurality of pixel points in the original depth image, use a minimum value of pixel values of the plurality of pixel points in the original depth image, which have a mapping relationship with the pixel point, as a pixel value of the pixel point in the depth image to be registered;
and the third determining subunit is configured to, if the pixel point has a mapping relation with a single pixel point in the original depth image, use the pixel value of that pixel point in the original depth image as the pixel value of the pixel point in the depth image to be registered.
Optionally, in the processing apparatus for depth images, the apparatus further includes:
a hole detection unit, configured to detect, for each pixel point in the depth image after the registration processing, whether the pixel point is an invalid pixel point, whether pixel distances between the pixel point and adjacent valid pixel points in the horizontal direction are both smaller than a first preset value, and whether an absolute deviation of a pixel value between an adjacent valid pixel point on the left of the pixel point and an adjacent valid pixel point on the right of the pixel point is smaller than a second preset value; the invalid pixel points are pixel points with pixel values being zero or pixel values being larger than the maximum working distance of the depth camera;
a third determining unit, configured to determine that the pixel point is a hole pixel point if the pixel point is an invalid pixel point, the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than the first preset value, and the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than the second preset value;
and the filling processing unit is used for filling the holes for each hole pixel point.
A third aspect of the application discloses a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements the method as described in any of the first aspects above.
The fourth aspect of the present application discloses an apparatus comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as in any one of the first aspects above.
According to the technical scheme above, the depth image processing method provided by the embodiments of the present application, for each pixel point in the original depth image, uses the registration matrix parameters and the pixel value of that pixel point to calculate by registration the corresponding position to which the pixel point maps in the RGB image. Because the registration matrix parameters are fixed matrix parameters obtained in advance from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera, the fixed matrix parameter part required during registration does not need to be recomputed when each pixel point in the original depth image is mapped to its corresponding position in the RGB image. This saves computation during registration, reduces the time consumed by registering the original depth image, and realizes real-time registration of the original depth image.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a schematic flowchart of a depth image processing method disclosed in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining pixel values of a depth image to be registered according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for filling a hole in a depth image according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a depth image processing system according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a depth image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, the embodiment of the application discloses a depth image processing method, which specifically includes the following steps:
and S101, acquiring an original depth image.
The original depth image is acquired by the depth camera in the RGB-D camera, and the pixel values of the pixel points in the original depth image are depth values. When the RGB-D camera shoots, the depth camera acquires an original depth image and the color camera acquires the RGB image corresponding to that original depth image; the pixel values of the pixel points in the RGB image are RGB values. Before the original depth image is registered, its pixel points do not coincide with the pixel points in the RGB image: the original depth image and the RGB image are taken from two different viewing angles and therefore have no pixel-to-pixel correspondence. For example, the pixel value (i.e., depth value) of the pixel point located at (x1, y1) in the original depth image is not the depth of the pixel point located at (x1, y1) in the RGB image, so subsequent registration is required.
The depth camera may be a structured-light depth camera, a binocular depth camera, a binocular structured-light depth camera, or the like; the specific type of depth camera does not affect the implementation of the embodiments of the present application.
It should be noted that, in step S101, multiple original depth images may be acquired, or a single original depth image may be acquired.
S102, for each pixel point in the original depth image, using the registration matrix parameters and the pixel value of the pixel point in the original depth image, calculate by registration the corresponding position to which the pixel point maps in the RGB image corresponding to the original depth image, where the registration matrix parameters are fixed matrix parameters obtained in advance from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera.
Specifically, the pixel value of each pixel point in the original depth image is respectively subjected to registration calculation with the registration matrix parameter, so that each pixel point in the original depth image is mapped to the corresponding position in the RGB image corresponding to the original depth image.
The registration matrix parameters are fixed matrix parameters obtained by pre-calculation from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera. These three sets of parameters are the calibration parameters of the RGB-D camera, obtained by calibrating the RGB-D camera in advance; the specific calibration method does not affect the implementation of the embodiments of the present application. The internal parameters of the RGB camera are geometric parameters related to RGB camera imaging, such as distortion, focal length, and principal point. The internal parameters of the depth camera are geometric parameters related to depth camera imaging, such as distortion, focal length, and principal point. The external parameters of the RGB camera and the depth camera describe the position-and-pose transformation between the two cameras and comprise two parameters: rotation and translation.
The matrix operations between the fixed parameters involved in the registration calculation can be carried out in advance. The registration calculation of every pixel point involves the same fixed parameters, namely the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera, so this fixed part can be computed once in advance to obtain the registration matrix parameters. When the registration calculation of each pixel point in the original depth image is then performed, the registration matrix parameters do not need to be recomputed and can be used directly. For the same RGB-D camera, the registration matrix parameters to be calculated are identical for every pixel point in the original depth image; calculating them in advance therefore avoids repeating this calculation for each pixel point, which reduces the amount of computation, shortens the time consumed by registering the original depth image, and enables real-time registration of the original depth image.
In the prior art, registering a depth image with an RGB image requires multiple matrix multiplication, matrix addition, and matrix inversion operations for every pixel point in the depth image. These involve complex matrix operations between fixed parameters, between fixed and non-fixed parameters, and between non-fixed parameters, so the amount of computation during registration is large, registration takes a long time, and the requirement of real-time registration of a video stream cannot be met.
In the present application, the fixed matrix parameters used in the registration calculation of every pixel point are extracted: the registration matrix parameters are obtained by pre-calculation from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera. Because these registration matrix parameters are fixed matrix parameters shared by the registration calculation of every pixel point, each pixel point in the original depth image can be mapped directly to its corresponding position in the RGB image using the registration matrix parameters and the pixel value of that pixel point, without repeatedly computing the fixed matrix parameter part as in the prior art. This reduces the amount of computation during registration, shortens the registration time, and can meet the requirement of real-time registration of a video stream.
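As a rough illustration of the per-pixel saving, the mapping can be sketched in NumPy. The patent does not give the formulas, so this sketch assumes the registration matrix parameters consist of a precomputed 3x3 matrix applied to each homogeneous pixel coordinate and a 3x1 matrix added afterwards; with those in hand, the per-pixel work reduces to one scale, one matrix-vector multiply, and one add. Names and array layout are illustrative.

```python
import numpy as np

def register_depth(depth, M1, M2):
    """Map every depth pixel into RGB-image coordinates using fixed
    registration matrices M1 (3x1, additive part) and M2 (3x3,
    multiplicative part), computed once per camera pair.
    Returns target columns, target rows, and mapped depths."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                             # pixel coordinates
    p = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    q = depth.ravel() * (M2 @ p) + M1                     # fixed part reused
    z = q[2]                                              # depth in RGB frame
    return q[0] / z, q[1] / z, z

# With identity matrices the mapping is the identity (illustrative check):
cols, rows, z = register_depth(np.full((2, 2), 2.0), np.zeros((3, 1)), np.eye(3))
```

Nothing inside the loop over pixels (here vectorised) touches the calibration parameters directly; all inversion and matrix-matrix products live in the precomputation of M1 and M2.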
Optionally, in a specific embodiment of the present application, the registration matrix parameters obtained by pre-calculation may be stored in a memory, and when the registration calculation needs to be performed, the registration matrix parameters are extracted to participate in the calculation.
Optionally, in a specific embodiment of the present application, the registration matrix parameters include: a first matrix and a second matrix. The first matrix is a fixed matrix parameter obtained by pre-calculating internal parameters of the RGB camera and external parameters of the RGB camera and the depth camera. The second matrix is a fixed matrix parameter obtained by pre-calculating internal parameters of the RGB camera, internal parameters of the depth camera and external parameters of the RGB camera and the depth camera.
The internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera all belong to the RGB-D camera calibration parameters, i.e., fixed parameters of the RGB-D camera. The internal parameters of the RGB camera and the external parameters of the RGB camera and the depth camera are used to calculate the first matrix; the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters of the RGB camera and the depth camera are used to calculate the second matrix. The first matrix and the second matrix are the fixed matrix parameters required in the registration calculation of each pixel point in the original depth image.
Optionally, in a specific embodiment of the present application, the internal parameters of the RGB camera include: the scale factor f1x of the RGB image in the x-axis direction, the scale factor f1y of the RGB image in the y-axis direction, and the principal point coordinates (c1x, c1y) of the RGB image. The internal parameters of the depth camera include: the scale factor f2x of the original depth image in the x-axis direction, the scale factor f2y of the original depth image in the y-axis direction, and the principal point coordinates (c2x, c2y) of the original depth image. The external parameters of the RGB camera and the depth camera include: the rotation parameter R of the coordinate system of the RGB camera relative to the coordinate system of the depth camera, and the translation parameter t of the coordinate system of the RGB camera relative to the coordinate system of the depth camera.
Here, f1x is the ratio of the lens focal length f1 of the RGB camera to the horizontal pixel pitch d1x of the RGB image sensor; f1y is the ratio of f1 to the vertical pixel pitch d1y of the RGB image sensor; f2x is the ratio of the lens focal length f2 of the depth camera to the horizontal pixel pitch d2x of the depth image sensor; and f2y is the ratio of f2 to the vertical pixel pitch d2y of the depth image sensor.
Optionally, in a specific embodiment of the present application, the calculating process of the registration matrix parameters includes:
and calculating to obtain a first matrix according to the internal matrix and the transformation matrix of the RGB camera. And calculating to obtain a second matrix according to the internal matrix of the RGB camera, the transformation matrix and the internal matrix of the depth camera.
The internal matrix of the RGB camera is obtained through internal parameter construction of the RGB camera, the internal matrix of the depth camera is obtained through internal parameter construction of the depth camera, and the transformation matrix is obtained through external parameter construction of the RGB camera and the depth camera.
Optionally, the internal matrix of the RGB camera is:

    K1 = [ f1x  0    c1x ]
         [ 0    f1y  c1y ]
         [ 0    0    1   ]

where f1x is the scale factor of the RGB image in the x-axis direction, f1y is the scale factor of the RGB image in the y-axis direction, c1x is the x-axis coordinate of the principal point of the RGB image, and c1y is the y-axis coordinate of the principal point of the RGB image.
The internal matrix of the depth camera is:

    K2 = [ f2x  0    c2x ]
         [ 0    f2y  c2y ]
         [ 0    0    1   ]

where f2x is the scale factor of the original depth image in the x-axis direction, f2y is the scale factor of the original depth image in the y-axis direction, c2x is the x-axis coordinate of the principal point of the original depth image, and c2y is the y-axis coordinate of the principal point of the original depth image.
The transformation matrix is T = [R^(-1)   -R^(-1)·t], a 3 × 4 matrix, where R is the rotation parameter of the coordinate system of the RGB camera relative to the coordinate system of the depth camera, and t is the translation parameter of the coordinate system of the RGB camera relative to the coordinate system of the depth camera.
Alternatively, the transformation matrix is T = [R'   t'], also a 3 × 4 matrix, where R' is the rotation parameter of the coordinate system of the depth camera relative to the coordinate system of the RGB camera, and t' is the translation parameter of the coordinate system of the depth camera relative to the coordinate system of the RGB camera.
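As an illustrative sketch only (not part of the claimed method), the construction of K1, K2, and T from the calibration parameters described above can be written in NumPy as follows; the function name and argument layout are assumptions made for illustration:

```python
import numpy as np

# Illustrative sketch (not part of the application): building K1, K2 and the
# 3x4 transformation matrix T = [R^-1  -R^-1 t] from the calibration parameters.
# Function and argument names are assumptions made for illustration.
def build_matrices(f1x, f1y, c1x, c1y, f2x, f2y, c2x, c2y, R, t):
    K1 = np.array([[f1x, 0.0, c1x],
                   [0.0, f1y, c1y],
                   [0.0, 0.0, 1.0]])          # internal matrix of the RGB camera
    K2 = np.array([[f2x, 0.0, c2x],
                   [0.0, f2y, c2y],
                   [0.0, 0.0, 1.0]])          # internal matrix of the depth camera
    R_inv = np.linalg.inv(R)                  # R, t: RGB frame relative to depth frame
    T = np.hstack([R_inv, -R_inv @ np.asarray(t, dtype=float).reshape(3, 1)])
    return K1, K2, T
```

All three outputs are fixed for a given RGB-D camera, so this construction runs once per device, not per frame.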
Optionally, an embodiment of calculating the first matrix according to the internal matrix and the transformation matrix of the RGB camera includes:
The i-th column A_i of the first matrix A is calculated as:

    A_i = i^(-1) · K1 · T · P

where P = [0, 0, 0, 1]^T and i = 1, 2, …, D. D is the maximum working distance of the depth camera, i.e. the effective value range of the pixel values of the original depth image is [1, D], so the first matrix A is constructed as a 3 × D matrix whose i-th column A_i is a 3 × 1 column vector. K1 is the internal matrix of the RGB camera, and T is the transformation matrix.
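A minimal NumPy sketch of the first-matrix construction described above, under the assumption that the depth values are the integers 1 … D; the function name is illustrative:

```python
import numpy as np

# Sketch of the first matrix: column i (1-based) is A_i = (1/i) * K1 @ T @ P,
# with P = [0, 0, 0, 1]^T, so A has shape 3 x D. Names are illustrative.
def build_first_matrix(K1, T, D):
    P = np.array([0.0, 0.0, 0.0, 1.0])
    base = K1 @ (T @ P)                      # fixed 3-vector K1 * T * P
    i = np.arange(1, D + 1, dtype=float)
    return base[:, None] / i[None, :]        # divide the column by each depth i
```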
Optionally, an embodiment of calculating the second matrix according to the internal matrix of the RGB camera, the transformation matrix, and the internal matrix of the depth camera includes:
The element B_vu in row v, column u of the second matrix B is calculated as:

    B_vu = K1 · T · [ (K2^(-1) · Q)^T, 0 ]^T

where Q = [u, v, 1]^T, v = 1, 2, …, M, and u = 1, 2, …, N; the fourth homogeneous component is zero because B_vu carries only the rotation part of the transformation, the translation part being carried by the first matrix A. The original depth image is an image with M rows and N columns, i.e. the resolution of the original depth image is N × M, and the second matrix B is a matrix of dimension M × N × 3. K1 is the internal matrix of the RGB camera, K2 is the internal matrix of the depth camera, and T is the transformation matrix.
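The second matrix can be sketched in NumPy as follows, assuming T is the 3 × 4 transformation matrix and the back-projected ray K2^(-1)·Q is extended with a fourth component of zero, consistent with the formula above; names are illustrative:

```python
import numpy as np

# Sketch of the second matrix: B[v-1, u-1] = K1 @ T @ [K2^-1 @ Q; 0] with
# Q = [u, v, 1]^T; the appended 0 makes the back-projected ray a pure
# direction, so T contributes only its rotation part. Names are illustrative.
def build_second_matrix(K1, K2, T, M, N):
    K2_inv = np.linalg.inv(K2)
    B = np.empty((M, N, 3))
    for v in range(1, M + 1):
        for u in range(1, N + 1):
            ray = np.append(K2_inv @ np.array([u, v, 1.0]), 0.0)  # 4-vector
            B[v - 1, u - 1] = K1 @ (T @ ray)
    return B
```

Like the first matrix, B depends only on the calibration parameters and the image resolution, so it too can be computed once and stored.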
Optionally, in a specific embodiment of the present application, an implementation manner of executing step S102 includes:
for each pixel point in the original depth image, the corresponding position (u', v') of the pixel point in the RGB image is calculated as:

    u' = (B_vu1 + A_{d_vu,1}) / (B_vu3 + A_{d_vu,3})
    v' = (B_vu2 + A_{d_vu,2}) / (B_vu3 + A_{d_vu,3})

where (u', v') denotes the corresponding position in the RGB image to which the pixel point in row v, column u of the original depth image is mapped. B_vuk denotes the k-th element of the vector B_vu in the second matrix, i.e. the element in row v, column u, layer k of the second matrix B, with k = 1, 2, 3. A_{d_vu,k} denotes the element in row k of the d_vu-th column of the first matrix A. d_vu denotes the pixel value of the pixel point in row v, column u of the original depth image.
The calculation formulas of the first matrix A and the second matrix B are given in the relevant parts above and are not repeated here.
Alternatively, in rare cases, the pixel values of the original depth image output by some depth cameras may be fractional. In that case, d_vu in the formula for calculating the corresponding position (u', v') in the RGB image can take the integer value obtained by rounding the pixel value of the pixel point in row v, column u of the original depth image, or the integer value obtained by quantization. For example, for the fraction 12.6, rounding gives the integer 13, and quantizing with 0.1 as the quantization unit gives the integer 126.
It can be seen from the calculation formula of the corresponding position (u', v') in the RGB image that, once the fixed matrix parameters needed by every pixel point during registration, i.e. the first matrix and the second matrix, have been calculated in advance, the formula for the corresponding position (u', v') of a pixel point of the original depth image in the RGB image becomes extremely simple: only a small number of addition and division operations are needed, with no complex matrix multiplication, matrix addition, or matrix inversion. The registration operation for each pixel point of the depth image therefore involves very little computation, and every original depth image can be registered in real time.
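The simplified per-pixel mapping can be sketched as follows, assuming A is the 3 × D first matrix and B the M × N × 3 second matrix computed beforehand; rounding (u', v') to integer pixel coordinates is an assumption made for illustration:

```python
# Sketch of the simplified per-pixel registration, assuming A (3 x D) and
# B (M x N x 3) were precomputed as above; rounding (u', v') to integer
# pixel coordinates is an assumption made for illustration.
def map_pixel(u, v, d_vu, A, B):
    if d_vu < 1:
        return None                      # invalid depth value: no mapping
    Bvu = B[v - 1, u - 1]                # 3-vector for pixel (row v, column u)
    Ad = A[:, d_vu - 1]                  # column of A for depth value d_vu
    denom = Bvu[2] + Ad[2]
    u_p = (Bvu[0] + Ad[0]) / denom       # only additions and divisions remain
    v_p = (Bvu[1] + Ad[1]) / denom
    return int(round(u_p)), int(round(v_p))
```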
S103, mapping each pixel point in the original depth image to a corresponding position in the depth image to be registered. The depth image to be registered is a depth image whose resolution is consistent with that of the RGB image, and the corresponding position in the depth image to be registered to which a pixel point in the original depth image is mapped is consistent with the corresponding position in the RGB image to which that pixel point is mapped.
Specifically, a depth image with the resolution consistent with that of the RGB image is constructed as the depth image to be registered, for example, the RGB image is an image with M rows and N columns, and then the depth image to be registered is also an image with M rows and N columns. And then, carrying out initialization processing on pixel values in the depth image to be registered, namely enabling all the pixel values in the depth image to be registered to be zero.
And then, according to the mapping of each pixel point in the original depth image obtained in the step S102 to the corresponding position in the RGB image, determining that each pixel point in the original depth image is respectively mapped to the corresponding position in the depth image to be registered. The resolution ratio of the depth image to be registered is consistent with that of the RGB image, and the pixel points in the original depth image are mapped to the corresponding positions in the depth image to be registered and are consistent with the corresponding positions in the RGB image mapped by the pixel points in the original depth image. For example, the position of the pixel point in the v-th row and u-th column in the original depth image is (u, v), and the corresponding position of the pixel point mapped into the RGB image is calculated as (u ', v') through step S102, so that the corresponding position of the pixel point mapped into the depth image to be registered is also (u ', v').
And S104, respectively determining the pixel value of each pixel point in the original depth image mapped to the corresponding position in the depth image to be registered by using the pixel value of each pixel point in the original depth image.
Specifically, for each pixel point in the depth image to be registered, the pixel value of the pixel point is determined by the pixel value of the pixel point in the original depth image having a mapping relationship with the pixel point. For example, the pixel value of the pixel point may be directly set as the pixel value of the pixel point in the original depth image having the mapping relationship. And after the pixel values of all pixel points in the depth image to be registered are determined, a depth image corresponding to the RGB image can be obtained.
Optionally, in a specific embodiment of the present application, an implementation manner of performing step S104 includes:
for each pixel point in the depth image to be registered: if the pixel point has a mapping relationship with a plurality of pixel points in the original depth image, the minimum of the pixel values of those pixel points is taken as the pixel value of the pixel point in the depth image to be registered; if the pixel point has a mapping relationship with exactly one pixel point in the original depth image, the pixel value of that one pixel point is taken as the pixel value of the pixel point in the depth image to be registered.
If the depth camera and the color camera included in the RGB-D camera are parallel, the depth value of a pixel point in the original depth image can be considered consistent with the depth value at the position in the RGB image with which it has a mapping relationship. For each pixel point in the depth image to be registered, if a plurality of pixel points in the original depth image are all mapped to the position of that pixel point, the pixel point has a mapping relationship with all of them, and the pixel values (depth values) of those pixel points may differ from one another; since one pixel point in the depth image to be registered cannot hold several pixel values, a single value must be selected. A mapping relationship with a plurality of pixel points indicates that several objects may overlap at the same point, so the minimum of the pixel values of the pixel points in the original depth image having a mapping relationship with this pixel point, i.e. the depth of the nearest object, is taken as the pixel value of the pixel point in the depth image to be registered.
And if the pixel point in the depth image to be registered has a mapping relation with a pixel point in the original depth image, directly taking the pixel value of the pixel point in the original depth image, which has the mapping relation with the pixel point, as the pixel value of the depth image to be registered.
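The minimum-value rule described above can be sketched as a scatter-with-minimum, assuming the mapping from source to target positions has already been computed and is supplied as a dictionary of 0-based coordinates (a hypothetical data layout chosen for illustration):

```python
import numpy as np

# Sketch of the minimum-value rule: scatter each source depth to its target
# position and keep the smallest value on collisions. The dictionary of
# 0-based (row, col) -> (row', col') pairs is a hypothetical layout chosen
# for illustration.
def scatter_min(depth, mapping, M, N):
    out = np.zeros((M, N), dtype=depth.dtype)      # zero-initialised target
    for (v, u), (v_p, u_p) in mapping.items():
        if not (0 <= v_p < M and 0 <= u_p < N):
            continue                               # mapped outside the image
        d = depth[v, u]
        if out[v_p, u_p] == 0 or d < out[v_p, u_p]:
            out[v_p, u_p] = d                      # keep the nearest object
    return out
```

Target pixels that receive no source pixel keep the initial value zero, which is exactly the hole situation addressed later in the document.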
Optionally, referring to fig. 2, in an embodiment of the present application, an implementation of step S104 is performed, including:
s201, respectively calculating a mapping pixel value of each pixel point in the original depth image by using the pixel value of each pixel point in the original depth image and the registration matrix parameter.
The mapped pixel value of a pixel point in the original depth image refers to the pixel value that the pixel point takes when it is mapped to its corresponding position in the RGB image. When the depth camera in an RGB-D camera is not parallel to the color camera, the pixel value of a pixel point of the original depth image is not equal to its pixel value at the corresponding position in the RGB image; therefore, the mapped pixel value of each pixel point in the original depth image must be calculated through registration using the registration matrix parameters and the pixel value in the original depth image.
Optionally, for each pixel point (u, v) in the original depth image, the mapped pixel value d'_{v'u'} is calculated as:

    d'_{v'u'} = d_vu · (B_vu3 + A_{d_vu,3})

where B_vu3 denotes the 3rd element of the vector B_vu in the second matrix, i.e. the element in row v, column u, layer 3 of the second matrix B; A_{d_vu,3} denotes the element in row 3 of the d_vu-th column of the first matrix A; d_vu denotes the pixel value of the pixel point in row v, column u of the original depth image; and d'_{v'u'} is the mapped pixel value of that pixel point. The calculation formulas of the first matrix and the second matrix are given in the relevant parts above and are not repeated here.
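A sketch of the mapped-pixel-value formula above, with the same assumed A (3 × D) and B (M × N × 3) layouts as in the earlier sketches:

```python
# Sketch of the mapped pixel value d'_{v'u'} = d_vu * (B_vu3 + A_{d_vu,3}),
# with the same assumed A (3 x D) and B (M x N x 3) layouts as above.
def mapped_depth(u, v, d_vu, A, B):
    Bvu3 = B[v - 1, u - 1, 2]            # layer-3 element for pixel (v, u)
    Ad3 = A[2, d_vu - 1]                 # row-3 element of column d_vu
    return d_vu * (Bvu3 + Ad3)
```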
S202, aiming at each pixel point in the depth image to be registered, judging whether the number of the pixel points and the number of the pixel points with the mapping relation in the original depth image are multiple or not.
In step S103, each pixel point in the original depth image is mapped to a corresponding position in the depth image to be registered, so that the mapping relationship between each pixel point in the image to be registered and the pixel point in the original depth image can be known.
If the pixel points in the depth image to be registered have a mapping relationship with a plurality of pixel points in the original depth image, that is, the number of the pixel points and the pixel points having the mapping relationship in the original depth image is multiple, step S203 is executed. If the pixel point in the depth image to be registered only has a mapping relationship with one pixel point in the original depth image, that is, the number of the pixel point and the pixel point having the mapping relationship in the original depth image is not multiple, step S204 is executed.
S203, taking the minimum value of the mapping pixel values of a plurality of pixel points which have the mapping relation with the pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered.
For each pixel point in the depth image to be registered, if a plurality of pixel points in the original depth image are all mapped to corresponding positions of the pixel point, it is indicated that the pixel point has a mapping relationship with the plurality of pixel points in the original depth image, and mapping pixel values (depth values) of the plurality of pixel points in the original depth image may be different from each other, but one pixel point in the depth image to be registered cannot have a plurality of pixel values, so that one pixel point needs to be selected as the pixel value of the pixel point. And the pixel point has a mapping relation with a plurality of pixel points in the original depth image, which indicates that a plurality of objects may be overlapped on the same point, so that the minimum value of the mapping pixel values of the plurality of pixel points having the mapping relation with the pixel points in the original depth image is used as the pixel value of the pixel point in the depth image to be registered.
And S204, taking the mapping pixel value of a pixel point which has a mapping relation with the pixel point in the original depth image as the pixel value of the depth image to be registered.
And if the pixel point in the depth image to be registered has a mapping relation with a pixel point in the original depth image, directly taking the mapping pixel value of the pixel point in the original depth image having the mapping relation with the pixel point as the pixel value of the depth image to be registered.
And S105, taking the depth image to be registered after the pixel value is determined as the depth image after the registration processing.
The depth image to be registered, once its pixel values are determined, is aligned pixel-for-pixel with the RGB image and reflects the depth value of each pixel point in the RGB image; it is the result of registering the original depth image, so the depth image to be registered after the pixel values are determined can be used as the depth image after registration processing.
Optionally, referring to fig. 3, in an embodiment of the present application, after the step S104 is executed, the method further includes:
s301, performing hole detection on the depth image after the registration processing to obtain all hole pixel points in the depth image after the registration processing.
Some pixel points in the depth image after registration processing are hole pixel points, so hole detection needs to be performed on the registered depth image to find all of them. There are many ways to perform hole detection, such as connected component analysis or line scanning, and the implementation of the embodiment of the present application is not affected by the choice of hole detection method.
Optionally, in a specific embodiment of the present application, an implementation manner of executing step S301 includes:
and aiming at each pixel point in the depth image after registration processing, detecting whether the pixel point is an invalid pixel point, whether the pixel distance between the pixel point and the adjacent effective pixel point in the horizontal direction is smaller than a first preset value, and whether the absolute deviation of the pixel value between the adjacent effective pixel point on the left side of the pixel point and the adjacent effective pixel point on the right side of the pixel point is smaller than a second preset value.
And the invalid pixel points are pixel points with the pixel values being zero or the pixel values being larger than the maximum working distance of the depth camera. And if the pixel point is an invalid pixel point, the pixel distance between the pixel point and the adjacent effective pixel point in the horizontal direction is smaller than a first preset value, and the absolute deviation of the pixel value between the adjacent effective pixel point on the left side of the pixel point and the adjacent effective pixel point on the right side of the pixel point is smaller than a second preset value, determining that the pixel point is a cavity pixel point.
Specifically, for each pixel point in the depth image after the registration processing, when the pixel point satisfies the three conditions that the pixel point is an invalid pixel point, the pixel distance between the pixel point and the adjacent effective pixel point in the horizontal direction is smaller than a first preset value, and the absolute deviation of the pixel value between the adjacent effective pixel point on the left side of the pixel point and the adjacent effective pixel point on the right side of the pixel point is smaller than a second preset value, the pixel point can be determined as a hollow pixel point. If only partial conditions of the three conditions that the pixel point is an invalid pixel point, the pixel distance between the pixel point and the adjacent effective pixel point in the horizontal direction is smaller than a first preset value, and the absolute deviation of the pixel value between the adjacent effective pixel point on the left side of the pixel point and the adjacent effective pixel point on the right side of the pixel point is smaller than a second preset value are met, or the three conditions are not met, the pixel point is not a hollow pixel point.
An invalid pixel point is a pixel point whose pixel value is zero or greater than the maximum working distance of the depth camera; a pixel point whose value is neither zero nor beyond the maximum working distance is a valid pixel point. The condition that the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are smaller than the first preset value means that the distance to the nearest valid pixel point on the left and the distance to the nearest valid pixel point on the right are both smaller than the first preset value; the first preset value can be set according to the actual application. Here, an adjacent valid pixel point means the nearest valid pixel point. Likewise, the second preset value, which bounds the absolute deviation between the pixel values of the left and right adjacent valid pixel points, can be set according to the actual application.
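The three-condition hole test can be sketched on one image row as follows; the parameter names (max_dist for the first preset value, dev_thresh for the second, D for the maximum working distance) are assumptions for illustration:

```python
# Sketch of the three-condition hole test on one image row. The parameter
# names (max_dist for the first preset value, dev_thresh for the second,
# D for the maximum working distance) are assumptions for illustration.
def is_hole(row, u, max_dist, dev_thresh, D):
    def valid(x):
        return 0 < x <= D                    # neither zero nor beyond range
    if valid(row[u]):
        return False                         # condition 1: must be invalid
    left = next((i for i in range(u - 1, -1, -1) if valid(row[i])), None)
    right = next((i for i in range(u + 1, len(row)) if valid(row[i])), None)
    if left is None or right is None:
        return False
    if u - left >= max_dist or right - u >= max_dist:
        return False                         # condition 2: neighbours too far
    return abs(int(row[left]) - int(row[right])) < dev_thresh  # condition 3
```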
S302, carrying out hole filling processing on each hole pixel point.
There are many hole filling methods, for example nearest-neighbor filling or median filtering, and the implementation of the embodiment of the present application is not affected by the choice of hole filling method.
Optionally, for each hole pixel point in the depth image after the registration processing, searching for an effective pixel point closest to the hole pixel point in the up, down, left, and right directions of the hole pixel point, and setting the pixel value of the hole pixel point as the pixel value of the closest effective pixel point.
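The nearest-neighbor filling described above can be sketched as follows, stepping outward in the four axis directions and assuming values in (0, D] are valid; names are illustrative:

```python
import numpy as np

# Sketch of nearest-neighbour hole filling: step outward in the four axis
# directions and copy the first valid value found. Assumes values in (0, D]
# are valid; names are illustrative.
def fill_hole(img, v, u, D):
    M, N = img.shape
    for r in range(1, max(M, N)):
        for dv, du in ((-r, 0), (r, 0), (0, -r), (0, r)):  # up, down, left, right
            y, x = v + dv, u + du
            if 0 <= y < M and 0 <= x < N and 0 < img[y, x] <= D:
                img[v, u] = img[y, x]
                return img
    return img                               # no valid pixel found: leave as is
```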
Optionally, referring to fig. 4, based on the depth image processing method provided in the embodiment of the present application, the embodiment of the present application correspondingly discloses a depth image processing system, which includes an RGB-D camera 401 and a computing device 402. The computing device 402 is a device with storage and computing capabilities that may be separate from the RGB-D camera 401 or may be internal to the RGB-D camera. A computing device 402, comprising: a memory 403, a registration unit 404, a matrix access unit 405, and a matrix calculation unit 406.
The RGB-D camera 401 inputs the acquired original depth image to the computing device 402. After the computing device 402 receives the original depth image transmitted by the RGB-D camera, the matrix access unit 405 in the computing device 402 extracts the registration matrix parameters from the memory, and the registration unit 404 uses the registration matrix parameters and the pixel values of the pixel points in the original depth image to calculate, through registration, the corresponding position of each pixel point in the RGB image corresponding to the original depth image, and maps each pixel point in the original depth image to a corresponding position in the depth image to be registered. The depth image to be registered is a depth image whose resolution is consistent with that of the RGB image, and the corresponding position in the depth image to be registered to which a pixel point of the original depth image is mapped is consistent with its corresponding position in the RGB image. The registration unit 404 then determines, using the pixel value of each pixel point in the original depth image, the pixel value at the corresponding position in the depth image to be registered, takes the depth image to be registered after the pixel values are determined as the depth image after registration processing, and outputs the depth image after registration processing.
Specifically, the matrix calculation unit 406 calculates in advance the registration matrix parameters according to the internal parameters of the RGB camera, the internal parameters of the depth camera, and the calibration parameters of the RGB-D camera, which are external parameters of the RGB camera and the depth camera, and then outputs the registration matrix parameters to the matrix access unit 405, and the matrix access unit 405 stores the registration matrix parameters to the memory 403. When the registration unit 404 acquires the original depth image, a registration matrix parameter calculated in advance is extracted by the matrix access unit 405, and registration calculation processing is performed.
According to the depth image processing method provided by the embodiment of the application, aiming at each pixel point in the original depth image, the corresponding position of the pixel point mapped to the RGB image is calculated through registration by utilizing the registration matrix parameter and the pixel value of the pixel point in the original depth image. Because the registration matrix parameters are fixed matrix parameters obtained by utilizing the internal parameters of the RGB camera, the internal parameters of the depth camera and the external parameters of the RGB camera and the depth camera in advance, when each pixel point in the original depth image is mapped to the corresponding position in the RGB image through registration calculation, the fixed matrix parameter part required in the registration process does not need to be repeatedly calculated, the calculation amount in the registration process is saved, the time consumption for registration processing the original depth image is reduced, and the real-time registration processing of the original depth image is realized.
Referring to fig. 5, based on the depth image processing method provided in the embodiment of the present application, the embodiment of the present application correspondingly discloses a depth image processing apparatus, including: an acquisition unit 501, a registration processing unit 502, a mapping unit 503, a first determination unit 504, and a second determination unit 505.
An obtaining unit 501 is configured to obtain an original depth image.
And the registration processing unit 502 is configured to, for each pixel point in the original depth image, register and calculate, by using the registration matrix parameter and the pixel value of the pixel point in the original depth image, a corresponding position of the pixel point mapped to the three-channel color RGB image corresponding to the original depth image. The registration matrix parameters are fixed matrix parameters obtained by pre-calculating internal parameters of the RGB camera, internal parameters of the depth camera and external parameters of the RGB camera and the depth camera.
Optionally, in a specific embodiment of the present application, the registration matrix parameters include: a first matrix and a second matrix. The first matrix is a fixed matrix parameter obtained by pre-calculating internal parameters of the RGB camera and external parameters of the RGB camera and the depth camera. The second matrix is a fixed matrix parameter obtained by pre-calculating internal parameters of the RGB camera, internal parameters of the depth camera and external parameters of the RGB camera and the depth camera.
Optionally, in a specific embodiment of the present application, the internal reference of the RGB camera includes: the scale factor of the RGB image in the x-axis direction, the scale factor of the RGB image in the y-axis direction and the principal point coordinates of the RGB image. An internal reference for a depth camera, comprising: the scale factor of the original depth image in the x-axis direction, the scale factor of the original depth image in the y-axis direction and the principal point coordinates of the original depth image. External reference of an RGB camera and a depth camera, comprising: rotation parameters of the coordinate system of the RGB camera relative to the coordinate system of the depth camera, and translation parameters of the coordinate system of the RGB camera relative to the coordinate system of the depth camera.
The mapping unit 503 is configured to map each pixel point in the original depth image to a corresponding position in the depth image to be registered. The depth image to be registered is a depth image with the resolution consistent with that of the RGB image, and pixel points in the original depth image are mapped to corresponding positions in the depth image to be registered and consistent with the corresponding positions in the RGB image mapped by the pixel points in the original depth image.
The first determining unit 504 is configured to determine, by using a pixel value of each pixel point in the original depth image, a pixel value of each pixel point in the original depth image that is mapped to a corresponding position in the depth image to be registered.
Optionally, in a specific embodiment of the present application, the first determining unit 504 includes: a first calculating subunit and a first determining subunit.
And the first calculating subunit is used for respectively calculating the mapping pixel value of each pixel point in the original depth image by using the pixel value of each pixel point in the original depth image and the registration matrix parameter.
The first determining subunit is configured to, for each pixel point in the depth image to be registered, if a mapping relationship exists between the pixel point and a plurality of pixel points in the original depth image, use a minimum value of mapping pixel values of the plurality of pixel points in the original depth image, which have a mapping relationship with the pixel point, as a pixel value of the pixel point in the depth image to be registered, and if a mapping relationship exists between the pixel point and one pixel point in the original depth image, use a mapping pixel value of one pixel point in the original depth image, which has a mapping relationship with the pixel point, as a pixel value of the depth image to be registered.
Optionally, in a specific embodiment of the present application, the first determining unit 504 includes: a second determining subunit and a third determining subunit.
The second determining subunit is configured to, for each pixel point in the depth image to be registered, if the pixel point has a mapping relationship with a plurality of pixel points in the original depth image, take the minimum of the pixel values of those pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered.
The third determining subunit is configured to, if the pixel point has a mapping relationship with exactly one pixel point in the original depth image, take the pixel value of that pixel point as the pixel value of the pixel point in the depth image to be registered.
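Both variants above resolve many-to-one mappings the same way: when several source pixels land on the same target position, the smallest (closest-to-camera) depth wins, which handles occlusion. The scatter step can be sketched as follows, assuming NumPy arrays and integer target coordinates that have already been computed (the function name and array layout are illustrative, not from the patent):

```python
import numpy as np

def scatter_min_depth(target_shape, coords, depths):
    """Scatter mapped depth values into the depth image to be registered.

    coords: (N, 2) integer (row, col) target positions, one per source pixel.
    depths: (N,) mapped depth values; 0 marks an invalid measurement.
    Where several source pixels map to the same target position, the
    minimum (nearest-to-camera) depth is kept, as described above.
    """
    out = np.zeros(target_shape, dtype=np.float64)  # 0 = no mapping yet
    for (r, c), d in zip(coords, depths):
        if d <= 0:
            continue  # skip invalid source pixels
        if out[r, c] == 0 or d < out[r, c]:
            out[r, c] = d
    return out
```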
The second determining unit 505 is configured to take the depth image to be registered, with its pixel values determined, as the depth image after the registration processing.
Optionally, in a specific embodiment of the present application, the method further includes: the device comprises a hole detection unit, a third determination unit and a filling processing unit.
The hole detection unit is configured to detect, for each pixel point in the depth image after the registration processing, whether the pixel point is an invalid pixel point, whether the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than a first preset value, and whether the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than a second preset value. An invalid pixel point is a pixel point whose pixel value is zero or greater than the maximum working distance of the depth camera.
The third determining unit is configured to determine that the pixel point is a hole pixel point if the pixel point is an invalid pixel point, the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than the first preset value, and the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than the second preset value.
The filling processing unit is configured to perform hole filling on each hole pixel point.
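The three-part hole test above can be sketched for a single image row as follows. The linear-interpolation fill is an illustrative assumption: the text only requires that hole pixel points be filled, and does not fix the fill rule.

```python
def find_holes(row, max_dist, d1, d2):
    """Return (index, left, right) for each hole pixel in one image row.

    A pixel is a hole if it is invalid (value 0 or above the depth
    camera's maximum working distance max_dist), both horizontal
    distances to the nearest valid neighbours are below d1, and the
    absolute deviation between the left and right valid neighbours'
    values is below d2.
    """
    valid = [0 < v <= max_dist for v in row]
    holes = []
    for i, ok in enumerate(valid):
        if ok:
            continue
        left = next((j for j in range(i - 1, -1, -1) if valid[j]), None)
        right = next((j for j in range(i + 1, len(row)) if valid[j]), None)
        if left is None or right is None:
            continue
        if i - left < d1 and right - i < d1 and abs(row[left] - row[right]) < d2:
            holes.append((i, left, right))
    return holes

def fill_holes(row, holes):
    # Illustrative fill rule: linear interpolation between the two
    # valid neighbours found by find_holes.
    out = list(row)
    for i, left, right in holes:
        t = (i - left) / (right - left)
        out[i] = row[left] + t * (row[right] - row[left])
    return out
```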
The specific principle and the implementation process of the depth image processing apparatus disclosed in the embodiment of the present application are the same as those of the depth image processing method disclosed in the embodiment of the present application, and reference may be made to corresponding parts in the depth image processing method disclosed in the embodiment of the present application, which are not described herein again.
In the depth image processing apparatus provided in this embodiment of the present application, the registration processing unit 402 is configured to calculate, for each pixel point in the original depth image, the corresponding position to which the pixel point is mapped in the RGB image, by using the registration matrix parameters and the pixel value of the pixel point. Because the registration matrix parameters are fixed matrix parameters calculated in advance from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters between the RGB camera and the depth camera, the fixed matrix part of the registration calculation does not need to be recomputed when each pixel point in the original depth image is mapped to its corresponding position in the RGB image. This saves computation during registration, reduces the time consumed in registering the original depth image, and enables real-time registration of the original depth image.
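The benefit of precomputing the fixed matrices can be sketched as follows. The decomposition below is an illustration, not the patent's exact formula: it assumes the role of the "second matrix" is played by K_rgb · R · K_depth⁻¹ and the role of the "first matrix" by K_rgb · t, so that each depth pixel (u, v) with depth z maps to the RGB image with one matrix-vector product and one addition per pixel.

```python
import numpy as np

def precompute_registration(K_rgb, K_depth, R, t):
    # Fixed parts, computed once per calibration (assumed decomposition):
    # z_rgb * [u', v', 1]^T = B @ (z * [u, v, 1]^T) + A
    B = K_rgb @ R @ np.linalg.inv(K_depth)   # role of the "second matrix"
    A = K_rgb @ np.asarray(t, dtype=float)   # role of the "first matrix"
    return A, B

def map_depth_pixel(u, v, z, A, B):
    # Per-pixel registration uses only the precomputed A and B.
    p = B @ (z * np.array([u, v, 1.0])) + A
    return p[0] / p[2], p[1] / p[2], p[2]
```

With identity intrinsics and a zero-motion extrinsic, the mapping reduces to the identity, which is a quick sanity check of the decomposition.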
An embodiment of the present application discloses a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements any one of the depth image processing methods set forth in the above embodiments.
An embodiment of the present application discloses a device, including: one or more processors; and a storage device having one or more programs stored thereon, which, when executed by the one or more processors, cause the one or more processors to implement any one of the depth image processing methods set forth in the above embodiments.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not only include those elements, but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method for processing a depth image, comprising:
acquiring an original depth image;
for each pixel point in the original depth image, calculating, by using registration matrix parameters and the pixel value of the pixel point in the original depth image, the corresponding position to which the pixel point is mapped in a three-channel color RGB image corresponding to the original depth image; wherein the registration matrix parameters are fixed matrix parameters pre-calculated from internal parameters of an RGB camera, internal parameters of a depth camera, and external parameters between the RGB camera and the depth camera;
mapping each pixel point in the original depth image to a corresponding position in a depth image to be registered; wherein the depth image to be registered is a depth image whose resolution is consistent with that of the RGB image, and the corresponding position in the depth image to be registered to which a pixel point in the original depth image is mapped is consistent with the corresponding position in the RGB image to which that pixel point is mapped;
respectively determining the pixel value of each pixel point in the original depth image mapped to the corresponding position in the depth image to be registered by using the pixel value of each pixel point in the original depth image;
and taking the depth image to be registered after the pixel value is determined as the depth image after the registration processing.
2. The method of claim 1, wherein the registration matrix parameters comprise a first matrix and a second matrix; the first matrix is a fixed matrix parameter pre-calculated from the internal parameters of the RGB camera and the external parameters between the RGB camera and the depth camera; and the second matrix is a fixed matrix parameter pre-calculated from the internal parameters of the RGB camera, the internal parameters of the depth camera, and the external parameters between the RGB camera and the depth camera.
3. The method of claim 1, wherein the internal parameters of the RGB camera comprise: the scale factor of the RGB image in the x-axis direction, the scale factor of the RGB image in the y-axis direction, and the principal point coordinates of the RGB image; the internal parameters of the depth camera comprise: the scale factor of the original depth image in the x-axis direction, the scale factor of the original depth image in the y-axis direction, and the principal point coordinates of the original depth image; and the external parameters between the RGB camera and the depth camera comprise: rotation parameters of the coordinate system of the RGB camera relative to the coordinate system of the depth camera, and translation parameters of the coordinate system of the RGB camera relative to the coordinate system of the depth camera.
4. The method according to claim 2, wherein the calculation of the registration matrix parameters comprises:
calculating the first matrix from the internal matrix of the RGB camera and the transformation matrix; and calculating the second matrix from the internal matrix of the RGB camera, the transformation matrix, and the internal matrix of the depth camera; wherein the internal matrix of the RGB camera is constructed from the internal parameters of the RGB camera, the internal matrix of the depth camera is constructed from the internal parameters of the depth camera, and the transformation matrix is obtained from the external parameters between the RGB camera and the depth camera.
5. The method of claim 1, wherein the determining, by using the pixel value of each pixel point in the original depth image, the pixel value of each pixel point in the original depth image mapped to the corresponding position in the depth image to be registered comprises:
respectively calculating the mapping pixel value of each pixel point in the original depth image by using the pixel value of each pixel point in the original depth image and the registration matrix parameter;
for each pixel point in the depth image to be registered: if the pixel point has a mapping relationship with a plurality of pixel points in the original depth image, taking the minimum of the mapping pixel values of those pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered; and if the pixel point has a mapping relationship with exactly one pixel point in the original depth image, taking the mapping pixel value of that pixel point as the pixel value of the pixel point in the depth image to be registered.
6. The method according to claim 1, wherein the determining, by using the pixel value of each pixel point in the original depth image, the pixel value of each pixel point in the original depth image mapped to the corresponding position in the depth image to be registered comprises:
for each pixel point in the depth image to be registered: if the pixel point has a mapping relationship with a plurality of pixel points in the original depth image, taking the minimum of the pixel values of those pixel points in the original depth image as the pixel value of the pixel point in the depth image to be registered; and if the pixel point has a mapping relationship with exactly one pixel point in the original depth image, taking the pixel value of that pixel point as the pixel value of the pixel point in the depth image to be registered.
7. The method according to any one of claims 1 to 6, wherein after the depth image to be registered, with its pixel values determined, is taken as the depth image after the registration processing, the method further comprises:
detecting, for each pixel point in the depth image after the registration processing, whether the pixel point is an invalid pixel point, whether the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than a first preset value, and whether the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than a second preset value; wherein an invalid pixel point is a pixel point whose pixel value is zero or greater than the maximum working distance of the depth camera;
if the pixel point is an invalid pixel point, the pixel distances between the pixel point and its adjacent valid pixel points in the horizontal direction are both smaller than the first preset value, and the absolute deviation between the pixel values of the adjacent valid pixel point on the left of the pixel point and the adjacent valid pixel point on the right of the pixel point is smaller than the second preset value, determining that the pixel point is a hole pixel point;
and filling the holes in each hole pixel point.
8. A depth image processing apparatus, comprising:
an acquisition unit configured to acquire an original depth image;
the registration processing unit is used for registering and calculating the corresponding position of each pixel point mapped to the three-channel color RGB image corresponding to the original depth image by using the registration matrix parameter and the pixel value of the pixel point in the original depth image aiming at each pixel point in the original depth image; the registration matrix parameters are fixed matrix parameters obtained by pre-calculating internal parameters of an RGB camera, internal parameters of a depth camera and external parameters of the RGB camera and the depth camera;
the mapping unit is used for mapping each pixel point in the original depth image to a corresponding position in the depth image to be registered; the depth image to be registered is a depth image with the resolution consistent with the RGB image; mapping pixel points in the original depth image to corresponding positions in the depth image to be registered, wherein the pixel points in the original depth image are consistent with the corresponding positions in the RGB image mapped by the pixel points in the original depth image;
the first determining unit is used for respectively determining the pixel value of each pixel point in the original depth image, which is mapped to the corresponding position in the depth image to be registered, by using the pixel value of each pixel point in the original depth image;
and the second determining unit is used for taking the depth image to be registered after the pixel value is determined as the depth image after the registration processing.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An apparatus, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
CN202110183299.9A 2021-02-10 2021-02-10 Depth image processing method and device, computer readable medium and equipment Pending CN112734862A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110183299.9A CN112734862A (en) 2021-02-10 2021-02-10 Depth image processing method and device, computer readable medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110183299.9A CN112734862A (en) 2021-02-10 2021-02-10 Depth image processing method and device, computer readable medium and equipment

Publications (1)

Publication Number Publication Date
CN112734862A true CN112734862A (en) 2021-04-30

Family

ID=75596691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110183299.9A Pending CN112734862A (en) 2021-02-10 2021-02-10 Depth image processing method and device, computer readable medium and equipment

Country Status (1)

Country Link
CN (1) CN112734862A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797426A (en) * 2023-02-13 2023-03-14 合肥的卢深视科技有限公司 Image alignment method, electronic device and storage medium
CN116503567A (en) * 2023-06-26 2023-07-28 广州智算信息技术有限公司 Intelligent modeling management system based on AI big data

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463880A (en) * 2014-12-12 2015-03-25 中国科学院自动化研究所 RGB-D image acquisition method
CN105701827A (en) * 2016-01-15 2016-06-22 中林信达(北京)科技信息有限责任公司 Method and device for jointly calibrating parameters of visible light camera and infrared camera
CN106548489A (en) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN106780618A (en) * 2016-11-24 2017-05-31 周超艳 3 D information obtaining method and its device based on isomery depth camera
CN107507235A (en) * 2017-08-31 2017-12-22 山东大学 A kind of method for registering of coloured image and depth image based on the collection of RGB D equipment
US20180225866A1 (en) * 2015-08-06 2018-08-09 Heptagon Micro Optics Pte. Ltd. Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN108629756A (en) * 2018-04-28 2018-10-09 东北大学 A kind of Kinect v2 depth images Null Spot restorative procedure
CN109615601A (en) * 2018-10-23 2019-04-12 西安交通大学 A method of fusion colour and gray scale depth image
CN109636732A (en) * 2018-10-24 2019-04-16 深圳先进技术研究院 A kind of empty restorative procedure and image processing apparatus of depth image
CN109840884A (en) * 2017-11-29 2019-06-04 杭州海康威视数字技术股份有限公司 A kind of image split-joint method, device and electronic equipment
CN110163898A (en) * 2019-05-07 2019-08-23 腾讯科技(深圳)有限公司 A kind of depth information method for registering and device
CN110209997A (en) * 2019-06-10 2019-09-06 成都理工大学 Depth camera automatic Calibration algorithm based on three-dimensional feature point
US20190295271A1 (en) * 2017-04-17 2019-09-26 Shenzhen Orbbec Co., Ltd. Depth calculation processor, data processing method and 3d image device
CN110480637A (en) * 2019-08-12 2019-11-22 浙江大学 A kind of mechanical arm part image identification grasping means based on Kinect sensor
CN110807809A (en) * 2019-10-25 2020-02-18 中山大学 Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN111161338A (en) * 2019-12-26 2020-05-15 浙江大学 Point cloud density improving method for depth prediction based on two-dimensional image gray scale
CN112070695A (en) * 2020-09-03 2020-12-11 深圳大学 Correction method of registration matrix and computer equipment
CN112132906A (en) * 2020-09-22 2020-12-25 西安电子科技大学 External reference calibration method and system between depth camera and visible light camera
CN112150372A (en) * 2019-06-28 2020-12-29 深圳创想未来机器人有限公司 Depth map correction method, depth map correction device and robot
CN112184783A (en) * 2020-09-22 2021-01-05 西安交通大学 Three-dimensional point cloud registration method combined with image information

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463880A (en) * 2014-12-12 2015-03-25 中国科学院自动化研究所 RGB-D image acquisition method
US20180225866A1 (en) * 2015-08-06 2018-08-09 Heptagon Micro Optics Pte. Ltd. Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN105701827A (en) * 2016-01-15 2016-06-22 中林信达(北京)科技信息有限责任公司 Method and device for jointly calibrating parameters of visible light camera and infrared camera
CN106548489A (en) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN106780618A (en) * 2016-11-24 2017-05-31 周超艳 3 D information obtaining method and its device based on isomery depth camera
US20190295271A1 (en) * 2017-04-17 2019-09-26 Shenzhen Orbbec Co., Ltd. Depth calculation processor, data processing method and 3d image device
CN107507235A (en) * 2017-08-31 2017-12-22 山东大学 A kind of method for registering of coloured image and depth image based on the collection of RGB D equipment
CN109840884A (en) * 2017-11-29 2019-06-04 杭州海康威视数字技术股份有限公司 A kind of image split-joint method, device and electronic equipment
CN108629756A (en) * 2018-04-28 2018-10-09 东北大学 A kind of Kinect v2 depth images Null Spot restorative procedure
CN109615601A (en) * 2018-10-23 2019-04-12 西安交通大学 A method of fusion colour and gray scale depth image
CN109636732A (en) * 2018-10-24 2019-04-16 深圳先进技术研究院 A kind of empty restorative procedure and image processing apparatus of depth image
CN110163898A (en) * 2019-05-07 2019-08-23 腾讯科技(深圳)有限公司 A kind of depth information method for registering and device
CN110209997A (en) * 2019-06-10 2019-09-06 成都理工大学 Depth camera automatic Calibration algorithm based on three-dimensional feature point
CN112150372A (en) * 2019-06-28 2020-12-29 深圳创想未来机器人有限公司 Depth map correction method, depth map correction device and robot
CN110480637A (en) * 2019-08-12 2019-11-22 浙江大学 A kind of mechanical arm part image identification grasping means based on Kinect sensor
CN110807809A (en) * 2019-10-25 2020-02-18 中山大学 Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN111161338A (en) * 2019-12-26 2020-05-15 浙江大学 Point cloud density improving method for depth prediction based on two-dimensional image gray scale
CN112070695A (en) * 2020-09-03 2020-12-11 深圳大学 Correction method of registration matrix and computer equipment
CN112132906A (en) * 2020-09-22 2020-12-25 西安电子科技大学 External reference calibration method and system between depth camera and visible light camera
CN112184783A (en) * 2020-09-22 2021-01-05 西安交通大学 Three-dimensional point cloud registration method combined with image information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Pei, SHI Jiaoying, ZHOU Ji, JIANG Longchao: "Camera self-calibration using the antisymmetry of planar constraints", Journal of Computer Research and Development, no. 04 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797426A (en) * 2023-02-13 2023-03-14 合肥的卢深视科技有限公司 Image alignment method, electronic device and storage medium
CN115797426B (en) * 2023-02-13 2023-05-12 合肥的卢深视科技有限公司 Image alignment method, electronic device and storage medium
CN116503567A (en) * 2023-06-26 2023-07-28 广州智算信息技术有限公司 Intelligent modeling management system based on AI big data
CN116503567B (en) * 2023-06-26 2024-03-12 广州智算信息技术有限公司 Intelligent modeling management system based on AI big data

Similar Documents

Publication Publication Date Title
TWI729995B (en) Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
KR100513055B1 (en) 3D scene model generation apparatus and method through the fusion of disparity map and depth map
CN110163898B (en) Depth information registration method, device, system, equipment and storage medium
CN108629756B (en) Kinectv2 depth image invalid point repairing method
CN109656033B (en) Method and device for distinguishing dust and defects of liquid crystal display screen
US9183634B2 (en) Image processing apparatus and image processing method
US9818199B2 (en) Method and apparatus for estimating depth of focused plenoptic data
CN109931906B (en) Camera ranging method and device and electronic equipment
CN112184811B (en) Monocular space structured light system structure calibration method and device
CN112734862A (en) Depth image processing method and device, computer readable medium and equipment
US20200342583A1 (en) Method, apparatus and measurement device for measuring distortion parameters of a display device, and computer-readable medium
CN116188558B (en) Stereo photogrammetry method based on binocular vision
CN112489137A (en) RGBD camera calibration method and system
CN114299156A (en) Method for calibrating and unifying coordinates of multiple cameras in non-overlapping area
CN110595738B (en) Laser detection method, device and equipment and depth camera
KR20230137937A (en) Device and method for correspondence analysis in images
TWI571099B (en) Device and method for depth estimation
CN113077523B (en) Calibration method, calibration device, computer equipment and storage medium
US20170186171A1 (en) Depth image processing method and depth image processing system
RU2692970C2 (en) Method of calibration of video sensors of the multispectral system of technical vision
JP4605582B2 (en) Stereo image recognition apparatus and method
CN110232715B (en) Method, device and system for self calibration of multi-depth camera
CN111292297A (en) Welding seam detection method, device and equipment based on binocular stereo vision and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination