CN114630096B - Method, device and equipment for densification of TOF camera point cloud and readable storage medium - Google Patents

Method, device and equipment for densification of TOF camera point cloud and readable storage medium

Info

Publication number
CN114630096B
Authority
CN
China
Prior art keywords
angle
pixel
new
tof camera
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210242970.7A
Other languages
Chinese (zh)
Other versions
CN114630096A (en)
Inventor
Xu Yuan (徐渊)
Zhou Jianhua (周建华)
Zhang Yiqing (张宜庆)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Technology University filed Critical Shenzhen Technology University
Publication of CN114630096A publication Critical patent/CN114630096A/en
Application granted granted Critical
Publication of CN114630096B publication Critical patent/CN114630096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size

Abstract

The embodiment of the application belongs to the technical fields of image processing and artificial intelligence and relates to a method for densifying TOF camera point clouds, comprising the steps of: acquiring the shooting data of a TOF camera at each angle, together with the angle value measured by an angle limiter at each angle; calculating the original three-dimensional coordinates of each angle; calculating the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data; calculating, according to the angular resolution, the angle of each pixel in each row relative to the row position of the camera principal point; and calculating, according to the angle of each pixel and the original three-dimensional coordinates, the new three-dimensional coordinates of the rotated TOF camera in the original-position coordinate system. The application also provides a densification apparatus, a device, and a readable storage medium for TOF camera point clouds. In addition, the application relates to blockchain technology: the point cloud data may be stored in a blockchain. The application accomplishes the densification of a sparse point cloud.

Description

Method, device and equipment for densification of TOF camera point cloud and readable storage medium
Technical Field
The present application relates to the field of image processing and artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for densification of a TOF camera point cloud.
Background
In the prior art, an original sparse point cloud is densified by up-sampling methods (random mean-density interpolation, local plane sampling, triangulation-cube interpolation). Such methods are feasible when the resolution of the 3D sensor is sufficient, but when the point spacing is relatively large they clearly increase the depth error and the resulting point cloud is quite unnatural.
Disclosure of Invention
The embodiment of the application aims to provide a method, an apparatus and a device for densification of TOF camera point clouds, and a readable storage medium, so as to solve the technical problems of the up-sampling methods adopted in the prior art, namely that when the point spacing is large the point cloud is formed unnaturally and the depth error is large.
In order to solve the above technical problems, the embodiment of the present application provides a method for densification of a TOF camera point cloud, which adopts the following technical scheme: the method comprises the following steps:
acquiring shooting data of the TOF camera at each angle, and acquiring angle values of the TOF camera at each angle, wherein the shooting data comprise depth data and resolution data;
calculating original three-dimensional coordinates of each angle according to the built-in parameters of the camera, the depth data and the angle values, and setting an original position in each angle;
calculating the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data;
calculating, according to the angular resolution, the angle of each pixel in each row relative to the row position of the camera principal point;
calculating a new three-dimensional coordinate of the rotating TOF camera under the original position coordinate system according to the angle of each pixel and the original three-dimensional coordinate;
fusing the new three-dimensional coordinates of the angle of each pixel and performing smoothing and filtering;
and taking the processed new three-dimensional coordinates as new point cloud data and outputting the new point cloud data.
In order to solve the above technical problem, an embodiment of the present application further provides a densification device of a TOF camera point cloud, the device including:
the acquisition module is used for acquiring shooting data of the TOF camera at all angles and acquiring angle values of the TOF camera shot at all angles, wherein the shooting data comprise depth data and resolution data;
the original three-dimensional coordinate calculation module is used for calculating original three-dimensional coordinates of each angle according to the built-in parameters of the camera, the depth data and the angle values, and setting an original position in each angle;
the angular resolution calculation module is used for calculating the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data;
the angle calculation module is used for calculating, according to the angular resolution, the angle of each pixel in each row relative to the row position of the camera principal point;
a new three-dimensional coordinate calculation module, configured to calculate a new three-dimensional coordinate of the rotating TOF camera in the original position coordinate system according to the angle of each pixel and the original three-dimensional coordinate;
the fusion module is used for fusing the new three-dimensional coordinates of the angle of each pixel and performing smoothing and filtering;
and the generation module is used for taking the processed new three-dimensional coordinates as new point cloud data and outputting the new point cloud data.
In order to solve the above technical problems, the embodiment of the present application further provides a computer device, which adopts the following technical scheme: the computer device comprises a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, implement the steps of the densification method of a TOF camera point cloud described above.
In order to solve the above technical problems, an embodiment of the present application further provides a computer readable storage medium, which adopts the following technical schemes: the computer readable storage medium has stored thereon computer readable instructions which when executed by a processor implement the steps of a densification method of a TOF camera point cloud as described above.
Compared with the prior art, the method acquires the shooting data of the TOF camera at each angle, together with the angle value measured by the angle limiter at each angle; calculates the original three-dimensional coordinates of each angle according to the built-in parameters of the camera, the depth data and the angle values; calculates the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data; calculates, according to the angular resolution, the angle of each pixel in each row relative to the row position of the camera principal point; calculates, according to the angle of each pixel and the original three-dimensional coordinates, the new three-dimensional coordinates of the rotated TOF camera in the original-position coordinate system; fuses the new three-dimensional coordinates of the angle of each pixel and performs smoothing and filtering; and outputs the processed new three-dimensional coordinates as new point cloud data. By recalculating new three-dimensional coordinates from the depth data at each angle, densification of the sparse point cloud is achieved, adverse effects on the new point cloud such as depth dislocation and artifacts are avoided, and depth data of high precision and high fidelity are obtained.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings required for describing the embodiments are briefly introduced below. The drawings in the following description are clearly only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a densification method of a TOF camera point cloud;
FIG. 3 is a diagram illustrating the calculation of the angular resolution;
FIG. 4 is a diagram illustrating the calculation of the new three-dimensional coordinates;
FIG. 5 is a top view of an original point cloud and a new point cloud;
FIG. 6 is a front view of an original point cloud and a new point cloud;
FIG. 7 is a structural schematic diagram of one embodiment of a densification apparatus of a TOF camera point cloud;
FIG. 8 is a schematic structural view of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the densification method of a TOF camera point cloud provided by the embodiment of the application is generally executed by a server, and correspondingly, the densification apparatus of a TOF camera point cloud is generally arranged in the server.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2-6, a flow chart of one embodiment of a densification method of a TOF camera point cloud in accordance with the present application is shown. The densification method of the TOF camera point cloud comprises the following steps:
step S201: acquiring shooting data of the TOF camera at each angle, and acquiring angle values of the TOF camera at each angle, wherein the shooting data comprise depth data and resolution data;
it should be noted that, the TOF camera adopts tofv2.0 depth camera, in order to reduce the influence of ambient light, the TOF module adopts vcsel of 940nm wavelength as the light source to supporting 940nm anti-reflection camera lens, because the light duty cycle of 940nm wavelength is little in ambient light or the sunlight, so the shooting data of TOF camera output can be better resist the interference, place the steering wheel that can measure rotation angle below the TOF camera, steering wheel is through adjusting pulse width signal control pivoted angle, wherein, is equipped with the angle limiter in the steering wheel, and the angle limiter is located the center of steering wheel is used for measuring the rotation angle of steering wheel, at every turn control in 1 degree rotation angle scope. Because the technical scheme does not consider the angle change of the y axis (pitch angle) but only the angle change of the x axis (yaw angle), the approximate optical center position of the TOF depth camera needs to be overlapped with the center position of the steering engine, the yaw error is reduced as much as possible, at the moment, the TOF and the steering engine are understood as a front 3D module, the cost of the device is low, and the device acquires a data sequence rotating by a small angle and transmits the data sequence to a computer to finish new densification of point cloud.
The angle values are set as follows: the original position is set to 0 degrees, rotation to the left of the original position is a negative increment, and rotation to the right of the original position is a positive increment. A reciprocating small-angle combination is set for the TOF camera, which rotates through the original position, the negative increments, and the positive increments. The servo is driven to rotate through the small-angle combination in the angle sequence, and the shooting data of the TOF camera at each angle and the corresponding angle value measured by the angle limiter are acquired.
The original position is 0°. During the rotation measured by the angle limiter, the y-axis direction is unchanged; rotation of the servo and TOF camera to the left along the x-axis direction is taken as a negative increment (one 1° rotation gives −1°), and rotation to the right is taken as a positive increment (one 1° rotation gives 1°). As shown in FIG. 3, the final direction sequence over the angles may be (0°, 1°, 2°, 3°, 4°, −1°, −2°, −3°, −4°), or (4°, 3°, 2°, 1°, 0°, −1°, −2°, −3°, −4°), or (0°, −1°, −2°, −3°, −4°, 1°, 2°, 3°, 4°).
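The reciprocating sweep described above can be sketched in a few lines. This is only an illustration; the function name, step size and maximum angle are assumptions and not part of the patent:

```python
# Hypothetical sketch of the reciprocating small-angle sweep: 0 deg origin,
# positive increments to the right, negative increments to the left.
def sweep_sequence(max_deg=4, step_deg=1):
    """Return one possible capture order in degrees: 0, +1..+max, -1..-max."""
    positives = list(range(step_deg, max_deg + step_deg, step_deg))
    negatives = [-a for a in positives]
    return [0] + positives + negatives

print(sweep_sequence())  # [0, 1, 2, 3, 4, -1, -2, -3, -4]
```

The other sequences mentioned in the text (e.g. sweeping from +4° down to −4°) would just be reorderings of the same set of angles.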
Step S202: calculating original three-dimensional coordinates of each angle according to the built-in parameters of the camera, the depth data and the angle values, and setting an original position in each angle;
the original position is a 0-degree position, so that corresponding data can be conveniently acquired according to the original position, and the original three-dimensional coordinate can acquire X: x-direction values in the TOF camera coordinate system; y: y-direction values in TOF camera coordinate system; z: the z-direction value in the TOF camera coordinate system facilitates the subsequent calculation of the angular resolution.
Step S203: calculating the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data;
As shown in fig. 3, the inherent spatial angular resolution of each pixel at the original position is calculated according to the built-in parameters of the camera and the resolution data. Under the pinhole model, a pixel at coordinate c(u, v) with depth z_c corresponds to the three-dimensional point
x_c = (u − u_0)·z_c / f_x, y_c = (v − v_0)·z_c / f_y,
so the angle of pixel column u relative to the optical axis is arctan((u − u_0)/f_x), and the angular resolution is the angle between adjacent pixels:
Δα(u) = arctan((u + 1 − u_0)/f_x) − arctan((u − u_0)/f_x),
wherein (x_c, y_c, z_c) is the three-dimensional coordinate point of the pixel coordinate c(u, v), f_x and f_y are the focal lengths of the TOF camera in the x and y directions, (u_0, v_0) are the principal-point coordinates of the TOF camera, and the angular resolution so obtained is the spatial angular resolution at each pixel coordinate of the pixel coordinate system.
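Assuming the standard pinhole formulation, in which the angle of pixel column u relative to the principal point is arctan((u − u0)/fx), the per-pixel angle and its angular resolution can be sketched as follows (function names are illustrative assumptions):

```python
import numpy as np

def pixel_angle(u, u0, fx):
    """Angle (radians) of pixel column u relative to the principal point u0."""
    return np.arctan((u - u0) / fx)

def angular_resolution(u, u0, fx):
    """Inherent spatial angular resolution at column u: the angle between
    adjacent pixel columns under the pinhole model."""
    return pixel_angle(u + 1, u0, fx) - pixel_angle(u, u0, fx)
```

Near the principal point this resolution is approximately arctan(1/fx), and it shrinks slightly toward the image edges.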
Step S204: calculating, according to the angular resolution, an angle α of each pixel in each row relative to the row position of the camera principal point;
the step of calculating the angle of each pixel of each row relative to the row position where the camera principal point position is located according to the angle resolution comprises the following steps: acquiring the angle of each pixel of the original position relative to the line position of the main reference point in the camera according to the angle resolution; dividing the positive and negative of the angle according to the positive and negative directions and the main point position; after a small angle combination is reciprocated, calculating the angle of each pixel of the row from the original position relative to the original position, and calculating the angle of each pixel of the row according to the equal increment of the negative increment and the positive increment.
Step S205: calculating a new three-dimensional coordinate of the rotating TOF camera under the original position coordinate system according to the angle of each pixel and the original three-dimensional coordinate;
the step of calculating a new three-dimensional coordinate of the rotating TOF camera in the original position coordinate system from the angle of each pixel and the original three-dimensional coordinate includes:
as shown in fig. 4, setting the depth data as z and converting the depth data as z to distance=z/cos (θ), and rotating by a certain angle under a reference coordinate system of 0 ° coordinate system (original position) to radian rad, then new x=distance=sin (α+rad), new y=y is unchanged, and new z=distance=cos (θ+rad).
Step S206: fusing the new three-dimensional coordinates of the angle of each pixel and performing smoothing filtering treatment;
the step of fusing and smoothing the new three-dimensional coordinates of the angle of each pixel further comprises the following steps: inquiring whether the new three-dimensional coordinates of the angle of each pixel have the corresponding depth data; when the new three-dimensional coordinates have depth data, fusing the new three-dimensional coordinates of the angle of each pixel and performing smoothing filtering treatment; when the new three-dimensional coordinates do not have depth data, the new three-dimensional coordinates without depth data are deleted. And filtering the new three circles to obtain coordinates, and avoiding the influence of depth coordinates without depth data on the densification of the new point cloud.
The step of fusing and smoothing the new three-dimensional coordinates of the angle of each pixel comprises the following steps: fusing new three-dimensional coordinates of the angle of each pixel; resampling and filtering the new three-dimensional coordinates of the angle of each fused pixel; smoothing the new three-dimensional coordinates of the angle of each pixel after filtering, and calculating a normal; and displaying the normal line on the new three-dimensional coordinates of the angle of each smoothed pixel to form a new point cloud.
Point cloud smoothing is achieved by resampling, which is implemented by the moving least squares (MLS, Moving Least Squares) method. The MLS fitting function is built from a coefficient vector a(x) and a basis function p(x), where a(x) is not constant but a function of the coordinate x. Building the fitting function: on a local sub-domain U of the fitting region, the fitting function is expressed as
f(x) = Σ_{i=1}^{m} a_i(x) p_i(x) = p^T(x) a(x),   (1)
wherein a(x) = [a_1(x), a_2(x), …, a_m(x)]^T are the coefficients to be solved, functions of the coordinate x, and p(x) = [p_1(x), p_2(x), …, p_m(x)]^T is called the basis function, a polynomial of complete order [7]. A commonly used quadratic basis is [1, u, v, u², v², uv]^T, with which the fitting function of equation (1) is generally expressed as:
f(x) = a_0(x) + a_1(x)u + a_2(x)v + a_3(x)u² + a_4(x)v² + a_5(x)uv.
The coefficients are obtained by minimizing the weighted residual of the above equation, where w(x − x_i) is the weight function of node x_i.
The weight function is the core of the moving least squares method. In the moving least squares method, the weight function w(x − x_i) should have compact support, i.e. it is nonzero only in a sub-domain around x and is zero outside it; this sub-domain is called the support domain of the weight function (i.e. the influence domain of x), and its radius is denoted s. A commonly used weight is the cubic spline weight function: with r_i = ||x − x_i|| and s_i = r_i / h_i,
w(s_i) = 2/3 − 4s_i² + 4s_i³, for s_i ≤ 1/2;
w(s_i) = 4/3 − 4s_i + 4s_i² − (4/3)s_i³, for 1/2 < s_i ≤ 1;
w(s_i) = 0, for s_i > 1;
wherein h_i is the size of the weight-function support domain of the i-th node and β is an introduced influence coefficient.
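The cubic spline weight can be written directly. This sketch assumes the standard normalization s = r/h, with r the distance to node x_i and h the node's support radius:

```python
def cubic_spline_weight(r, h):
    """Standard cubic-spline MLS weight as a function of distance r and
    support radius h; zero outside the support domain."""
    s = r / h
    if s <= 0.5:
        return 2/3 - 4*s**2 + 4*s**3
    elif s <= 1.0:
        return 4/3 - 4*s + 4*s**2 - (4/3)*s**3
    return 0.0   # outside the support domain
```

The two polynomial branches join continuously at s = 1/2 and the weight decays to zero at s = 1, which is what gives the MLS fit its locality.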
There are generally two methods for computing point cloud normals: 1. use a surface-reconstruction method to obtain, from the point cloud data, the surface corresponding to the sampled points, and then compute the surface normal from the surface model; 2. infer the surface normal directly from the point cloud dataset by approximation, estimating the surface normal for each point in the cloud. Specifically, the surface normal at a point is estimated from the neighborhood of points around it (also referred to as the k-neighborhood) by analyzing the eigenvectors and eigenvalues of the covariance matrix computed from the point's nearest neighbors.
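The second, covariance-based method can be sketched in plain numpy, as a stand-in for what a point-cloud library would do internally:

```python
import numpy as np

def estimate_normal(neighbors):
    """Estimate the surface normal at a point from its k-neighborhood.

    neighbors: (k, 3) array. The normal is the eigenvector of the
    neighborhood covariance matrix with the smallest eigenvalue,
    i.e. the direction of least spread.
    """
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-eigenvalue direction
```

Note the sign of the returned vector is arbitrary; libraries typically flip it to point toward the sensor viewpoint.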
Step S207: and taking the processed new three-dimensional coordinates as new point cloud data and outputting the new point cloud data.
As shown in fig. 5 to 6: (a) top view of the original single-frame sparse point cloud; (b) top view of the superimposed (densified) point cloud; (c) front view of the original single-frame sparse point cloud; (d) front view of the superimposed (densified) point cloud. In (a) and (c) the original point cloud is seen to be very sparse, with only some five hundred points, and contains two relatively large invalid blanks in the middle due to the characteristics of the camera hardware. After the angle limiter is rotated through 1°–4° to the left and right in sequence, i.e. in (b) and (d), the densification of the processed new point cloud is obvious while the original characteristics of the camera are maintained. In this scheme the TOF camera acquires depth data, the angle-limiter device acquires the rotation angle in the horizontal direction while the vertical direction is unchanged, depth data are obtained at each angle, and the three-dimensional data are recalculated in the three-dimensional coordinate system of the original position, achieving the densification of the sparse point cloud.
In this embodiment, the shooting data of the TOF camera at each angle are acquired, together with the angle value measured by the angle limiter at each angle; the original three-dimensional coordinates of each angle are calculated according to the built-in parameters of the camera, the depth data and the angle values; the inherent spatial angular resolution of each pixel at the original position is calculated according to the built-in parameters of the camera and the resolution data; the angle of each pixel in each row relative to the row position of the camera principal point is calculated according to the angular resolution; the new three-dimensional coordinates of the rotated TOF camera in the original-position coordinate system are calculated according to the angle of each pixel and the original three-dimensional coordinates; the new three-dimensional coordinates of the angle of each pixel are fused and smoothed and filtered; and the processed new three-dimensional coordinates are output as new point cloud data. By recalculating new three-dimensional coordinates from the depth data at each angle, densification of the sparse point cloud is achieved, adverse effects on the new point cloud such as depth dislocation and artifacts are avoided, and depth data of high precision and high fidelity are obtained.
It should be emphasized that, to further ensure the privacy and security of the point cloud data, the point cloud data may also be stored in a node of a blockchain.
Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a string of data blocks generated in association by cryptographic means, each containing a batch of network-transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The embodiment of the application can acquire and process the related data based on the artificial intelligence technology. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by computer readable instructions stored in a computer readable storage medium that, when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
With further reference to fig. 7, as an implementation of the method shown in fig. 2-6 described above, the present application provides an embodiment of a densification apparatus of a TOF camera point cloud, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2-6, and the apparatus may be applied in particular in various electronic devices.
As shown in fig. 7, the densification apparatus 300 of the TOF camera point cloud according to the present embodiment includes: an acquisition module 301, an original three-dimensional coordinate calculation module 302, an angular resolution calculation module 303, an angle calculation module 304, a new three-dimensional coordinate calculation module 305, a fusion module 306, and a generation module 307. Wherein:
the acquiring module 301 is configured to acquire shooting data of the TOF camera at each angle, and acquire an angle value of the TOF camera at each angle, where the shooting data includes depth data and resolution data;
the original three-dimensional coordinate calculation module 302 is configured to calculate the original three-dimensional coordinates of each pixel at each angle according to the built-in (intrinsic) parameters of the camera, the depth data, and the angle values, and to designate an original position among the angles;
an angular resolution calculation module 303, configured to calculate an intrinsic spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data;
an angle calculating module 304, configured to calculate, according to the angular resolution, an angle of each pixel in each row relative to a row position where the camera principal point position is located;
a new three-dimensional coordinate calculation module 305, configured to calculate a new three-dimensional coordinate of the rotating TOF camera in the original position coordinate system according to the angle of each pixel and the original three-dimensional coordinate;
the fusion module 306 is configured to fuse the new three-dimensional coordinates of the angle of each pixel and perform smoothing filtering processing;
and the generating module 307 is configured to take the processed new three-dimensional coordinates as new point cloud data and output the new point cloud data.
According to this embodiment, the shooting data of the TOF camera at each angle and the angle value measured by the angle limiter at each angle are acquired; the original three-dimensional coordinates at each angle are calculated from the built-in (intrinsic) parameters of the camera, the depth data, and the angle values; the intrinsic spatial angular resolution of each pixel at the original position is calculated from the built-in parameters of the camera and the resolution data; the angle of each pixel in each row relative to the row position of the camera principal point is calculated from the angular resolution; new three-dimensional coordinates of the rotating TOF camera in the original position coordinate system are calculated from the angle of each pixel and the original three-dimensional coordinates; the new three-dimensional coordinates for each pixel angle are fused and smoothed by filtering; and the processed new three-dimensional coordinates are output as new point cloud data. By recalculating new three-dimensional coordinates from the depth data of each angle, the sparse point cloud is densified, adverse effects such as depth misalignment and artifacts in the new point cloud are avoided, and depth data with high precision and high fidelity are obtained.
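Putting the steps of this embodiment together, a hypothetical end-to-end pass over the captured frames might look like the following sketch. The pinhole angle model atan((u - u0)/fx), the function names, and the treatment of the row index as the y coordinate are illustrative assumptions; the patent's own formulas are not reproduced here:

```python
import math

def densify(frames, fx, u0):
    """Fuse depth frames captured at several steering-engine rotations into
    one point list in the original (0-degree) coordinate system.
    frames: list of (rad, depth_rows) pairs, where rad is the measured
    rotation in radians and depth_rows is a list of rows of depth values
    indexed by pixel column u. All names and models here are assumptions."""
    cloud = []
    for rad, depth_rows in frames:
        for v, row in enumerate(depth_rows):
            for u, z in enumerate(row):
                if z <= 0:  # no depth data at this pixel: delete/skip
                    continue
                alpha = math.atan((u - u0) / fx)  # per-pixel angle (assumed pinhole)
                distance = z / math.cos(alpha)    # ray length along the pixel direction
                cloud.append((distance * math.sin(alpha + rad),
                              float(v),           # y: row index, for illustration only
                              distance * math.cos(alpha + rad)))
    return cloud

pts = densify([(0.0, [[1.0, 1.0]]), (0.05, [[1.0, 1.0]])], fx=100.0, u0=0.5)
```

Each frame contributes its own reprojected points, so two rotations of the same scene roughly double the point density, which is the densification effect described above.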
In some optional implementations of this embodiment, the acquiring module 301 includes:
the angle setting sub-module is used for setting the original position to 0 degrees, with rotation to the left of the original position as a negative increment and rotation to the right as a positive increment;
the setting sub-module is used for setting the reciprocating rotation of the TOF camera as a small-angle combination, wherein the reciprocating TOF camera rotates through the original position, the negative increments, and the positive increments;
and the angle value acquisition sub-module is used for driving the steering engine to perform the small-angle combined rotation in angle order, and for acquiring, at each angle, the shooting data of the TOF camera at the corresponding angle and the angle value of the corresponding angle measured by the angle limiter.
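The reciprocating small-angle combination driven by the steering engine can be sketched as below; the step size and number of steps are illustrative assumptions, since the embodiment does not fix concrete values:

```python
def small_angle_sequence(step_deg=0.5, n_steps=3):
    """Build a reciprocating rotation sequence around the 0-degree original
    position: the camera visits the origin, then alternating negative (left)
    and positive (right) increments, e.g. 0, -0.5, +0.5, -1.0, +1.0, ...
    step_deg and n_steps are assumed values, not fixed by the patent."""
    angles = [0.0]
    for k in range(1, n_steps + 1):
        angles.append(-k * step_deg)   # left rotation: negative increment
        angles.append(+k * step_deg)   # right rotation: positive increment
    return angles

seq = small_angle_sequence(step_deg=0.5, n_steps=2)
```

The steering engine would then be driven through this list in order, capturing one frame and one angle-limiter reading per entry.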
In some optional implementations of this embodiment, the angle calculation module 304 includes:
the angle difference sub-module is used for acquiring, according to the angular resolution, the angle of each pixel at the original position relative to the row position where the camera principal point is located;
the resolution difference sub-module is used for dividing the angles into positive and negative according to the positive and negative rotation directions and the principal point position;
and the angle calculation sub-module is used for calculating, after a reciprocating small-angle combination from the original position, the angle of each pixel in each row relative to the row position where the camera principal point is located according to the angular resolution, dividing the angles into positive and negative according to the positive and negative directions and the principal point position.
In some optional implementations of the present embodiment, the calculation formula of the angular resolution calculation module 303 is:
wherein x_c, y_c, z_c are the three-dimensional coordinates of the point at pixel coordinate (u, v), fx and fy are the focal lengths of the TOF camera in the x and y directions, and u_0, v_0 are the principal point coordinates of the TOF camera; the resulting angular resolution is the spatial angular resolution at each pixel coordinate of the pixel coordinate system.
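The formula referenced above appears only as an image in the original publication and is not reproduced here. Under the standard pinhole model implied by the symbols fx, fy and (u_0, v_0), the per-pixel viewing angle and the angular step between adjacent pixel columns could be sketched as follows; this is an assumed reconstruction, not the patent's literal formula:

```python
import math

def pixel_angle(u, u0, fx):
    """Horizontal viewing angle of pixel column u relative to the principal
    point column u0, under an assumed pinhole model with focal length fx
    (in pixels). Negative left of the principal point, positive right."""
    return math.atan((u - u0) / fx)

def angular_resolution(u, u0, fx):
    """Intrinsic spatial angular resolution at column u: the angle subtended
    between adjacent pixel columns. It is largest at the principal point and
    shrinks toward the image edges."""
    return pixel_angle(u + 1, u0, fx) - pixel_angle(u, u0, fx)
```

The same construction applies per row with fy and v_0 for the vertical direction.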
In some optional implementations of the present embodiment, the new three-dimensional coordinate calculation module 305 includes:
and a new three-dimensional coordinate calculation sub-module, used for setting the depth data as z, converting it into distance = z/cos(θ), and converting a rotation by a certain angle under the reference coordinate system of the 0-degree coordinate system (the original position) into radians rad, wherein: new x = distance × sin(α + rad), new y = y (unchanged), and new z = distance × cos(θ + rad).
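A minimal sketch of this reprojection is given below, assuming the θ in distance = z/cos(θ) and the α in the sine term denote the same per-pixel angle (the two symbols appear interchangeably in the text):

```python
import math

def rotate_point(z, y, alpha, rad):
    """Re-express a depth sample taken at rotation `rad` (radians, relative
    to the 0-degree original position) in the original coordinate system.
    z: measured depth; y: vertical coordinate (unchanged by a horizontal
    rotation); alpha: per-pixel horizontal angle in radians.
    Assumption: theta == alpha in the patent's formulas."""
    distance = z / math.cos(alpha)           # ray length along the pixel direction
    new_x = distance * math.sin(alpha + rad)
    new_y = y                                # unchanged
    new_z = distance * math.cos(alpha + rad)
    return new_x, new_y, new_z
```

With alpha = 0 and rad = 0 the point is returned unchanged, matching the original-position case; any rotation preserves the ray length, only redistributing it between x and z.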
In some optional implementations of this embodiment, the densification apparatus 300 of the TOF camera point cloud further includes:
the query module is used for querying whether the new three-dimensional coordinates of the angle of each pixel have the corresponding depth data;
when the new three-dimensional coordinates have depth data, fusing the new three-dimensional coordinates of the angle of each pixel and performing smoothing filtering treatment;
when the new three-dimensional coordinates do not have depth data, the new three-dimensional coordinates without depth data are deleted.
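The query-and-delete behaviour can be sketched with a simple validity filter; representing "no depth data" as a non-positive or NaN depth value is an assumed convention, not something the embodiment specifies:

```python
import math

def filter_valid(points):
    """Keep only coordinates that carry usable depth data.
    points: list of (x, y, z) tuples; entries with z <= 0 or NaN z are
    treated as 'no depth data' and deleted (an assumed convention)."""
    return [p for p in points if p[2] > 0 and not math.isnan(p[2])]

pts = [(0.1, 0.2, 1.5), (0.0, 0.0, 0.0), (0.3, 0.1, float('nan'))]
kept = filter_valid(pts)
```

Only points that survive this query proceed to the fusion and smoothing step.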
In some optional implementations of the present embodiment, the fusing module 306 includes:
the fusion sub-module fuses the new three-dimensional coordinates of the angle of each pixel;
the filtering sub-module resamples and filters the new three-dimensional coordinates of the angle of each fused pixel;
a smoothing sub-module for smoothing the new three-dimensional coordinates of the angle of each pixel after filtering and calculating the normal;
and the forming sub-module is used for displaying the normals on the smoothed new three-dimensional coordinates of the angle of each pixel to form a new point cloud.
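The fuse / resample / smooth-and-normal chain of sub-modules might be sketched with NumPy as below; voxel-grid averaging for resampling and PCA for normal estimation are common stand-ins, not the specific filters of this embodiment:

```python
import numpy as np

def fuse(clouds):
    """Fuse per-angle point clouds (each an (N, 3) array) into one array."""
    return np.vstack(clouds)

def voxel_resample(pts, voxel=0.05):
    """Resample by averaging all points falling into the same voxel; this
    also acts as a mild smoothing filter (an assumed stand-in)."""
    keys = np.floor(pts / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    counts = np.zeros(inv.max() + 1)
    np.add.at(sums, inv, pts)      # accumulate point sums per voxel
    np.add.at(counts, inv, 1.0)    # count points per voxel
    return sums / counts[:, None]

def plane_normal(neighborhood):
    """Estimate a local normal via PCA: the right-singular vector of the
    centered neighborhood with the smallest singular value."""
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

cloud = fuse([np.zeros((2, 3)), np.ones((3, 3))])
resampled = voxel_resample(np.array([[0.0, 0.0, 0.0],
                                     [0.01, 0.0, 0.0],
                                     [1.0, 0.0, 0.0]]))
normal = plane_normal(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]))
```

In a full pipeline, `plane_normal` would be applied per point over a k-nearest-neighbor neighborhood before the normals are displayed on the smoothed cloud.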
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 8, fig. 8 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. It should be noted that only a computer device 4 having components 41-43 is shown in FIG. 8, but it should be understood that not all of the illustrated components need be implemented, and that more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to predetermined or stored instructions, and its hardware includes, but is not limited to, microprocessors, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, and the like.
The computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or other computing device. The computer device can perform human-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad, a voice control device, or the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit and an external storage device of the computer device 4. In this embodiment, the memory 41 is typically used for storing an operating system and various application software installed on the computer device 4, such as computer readable instructions of a densification method of a TOF camera point cloud. In addition, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or process data, such as computer readable instructions for executing a densification method of the TOF camera point cloud.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
According to this embodiment, the shooting data of the TOF camera at each angle and the angle value measured by the angle limiter at each angle are acquired; the original three-dimensional coordinates at each angle are calculated from the built-in (intrinsic) parameters of the camera, the depth data, and the angle values; the intrinsic spatial angular resolution of each pixel at the original position is calculated from the built-in parameters of the camera and the resolution data; the angle of each pixel in each row relative to the row position of the camera principal point is calculated from the angular resolution; new three-dimensional coordinates of the rotating TOF camera in the original position coordinate system are calculated from the angle of each pixel and the original three-dimensional coordinates; the new three-dimensional coordinates for each pixel angle are fused and smoothed by filtering; and the processed new three-dimensional coordinates are output as new point cloud data. By recalculating new three-dimensional coordinates from the depth data of each angle, the sparse point cloud is densified, adverse effects such as depth misalignment and artifacts in the new point cloud are avoided, and depth data with high precision and high fidelity are obtained.
The present application also provides another embodiment, namely, a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of a method for densifying a TOF camera point cloud as described above.
According to this embodiment, the shooting data of the TOF camera at each angle and the angle value measured by the angle limiter at each angle are acquired; the original three-dimensional coordinates at each angle are calculated from the built-in (intrinsic) parameters of the camera, the depth data, and the angle values; the intrinsic spatial angular resolution of each pixel at the original position is calculated from the built-in parameters of the camera and the resolution data; the angle of each pixel in each row relative to the row position of the camera principal point is calculated from the angular resolution; new three-dimensional coordinates of the rotating TOF camera in the original position coordinate system are calculated from the angle of each pixel and the original three-dimensional coordinates; the new three-dimensional coordinates for each pixel angle are fused and smoothed by filtering; and the processed new three-dimensional coordinates are output as new point cloud data. By recalculating new three-dimensional coordinates from the depth data of each angle, the sparse point cloud is densified, adverse effects such as depth misalignment and artifacts in the new point cloud are avoided, and depth data with high precision and high fidelity are obtained.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware, though in many cases the former is preferred. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present application.
It is apparent that the above-described embodiments are only some embodiments of the present application, not all of them, and the preferred embodiments of the present application are shown in the drawings, which do not limit the scope of the patent claims. This application may be embodied in many different forms; these embodiments are provided so that this disclosure will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features. All equivalent structures made using the content of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of protection of the application.

Claims (8)

1. A method for densifying a TOF camera point cloud, wherein the TOF camera is placed on a steering engine and rotates synchronously with the steering engine, the approximate optical center position of the TOF camera coincides with the center position of the steering engine, and an angle limiter is located at the center of the steering engine for measuring the rotation angle of the steering engine, the method comprising the following steps:
acquiring shooting data of the TOF camera at each angle, and acquiring angle values of the TOF camera at each angle, wherein the shooting data comprise depth data and resolution data;
calculating the original three-dimensional coordinates of each pixel under each angle according to the built-in parameters of the camera, the depth data and the angle values, and setting an original position in each angle;
calculating the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data;
calculating an angle alpha of each pixel of each row relative to the position of the main point of the camera at the row position according to the angle resolution;
calculating a new three-dimensional coordinate of the rotating TOF camera under the original position coordinate system according to the angle of each pixel and the original three-dimensional coordinate;
fusing the new three-dimensional coordinates of each pixel and performing smoothing filtering treatment;
taking the processed new three-dimensional coordinates as new point cloud data and outputting the new point cloud data;
the step of calculating the angle α of each pixel of each row relative to the row position where the camera principal point is located according to the angular resolution comprises the following steps:
acquiring, according to the angular resolution, the angle of each pixel at the original position relative to the row position where the camera principal point is located;
dividing the angles into positive and negative according to the negative and positive increment directions of each angle and the camera principal point position;
after a reciprocating small-angle combination, calculating the angle of each pixel of the row at the original position relative to the original position, and calculating the angle α of each pixel of every other row equivalently according to the negative and positive increments;
the step of acquiring the shooting data of the TOF camera at each angle and acquiring the angle value measured for the TOF camera at each angle comprises the following steps:
setting the original position to 0 degrees, with rotation to the left of the original position as a negative increment and rotation to the right as a positive increment;
setting the reciprocating rotation of the TOF camera as a small-angle combination, wherein the reciprocating TOF camera rotates through the original position, the negative increments, and the positive increments;
and driving the steering engine to perform the small-angle combined rotation in angle order, and acquiring the shooting data of the TOF camera at each angle and the angle value of the corresponding angle measured by the angle limiter.
2. The method of claim 1, wherein the calculation formula for calculating the intrinsic spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data is:
wherein x_c, y_c, z_c are the three-dimensional coordinates of the point at pixel coordinate (u, v), fx and fy are the focal lengths of the TOF camera in the x and y directions, and u_0, v_0 are the principal point coordinates of the TOF camera; the resulting angular resolution is the spatial angular resolution at each pixel coordinate of the pixel coordinate system.
3. The method of densification of a TOF camera point cloud according to claim 2, wherein the step of calculating new three-dimensional coordinates of the rotating TOF camera in the original position coordinate system from the angle of each pixel and the original three-dimensional coordinates comprises:
setting the depth data as z, converting it into distance = z/cos(θ), and converting a rotation by a certain angle under the 0-degree coordinate system (the original position coordinate system) into radians rad, wherein: new x = distance × sin(α + rad), new y = y (unchanged), and new z = distance × cos(θ + rad).
4. The method of claim 1, wherein the step of merging and smoothing the new three-dimensional coordinates of each pixel further comprises:
inquiring whether the new three-dimensional coordinates of each pixel have the corresponding depth data;
when the new three-dimensional coordinates have depth data, fusing the new three-dimensional coordinates of each pixel and performing smoothing filtering treatment;
when the new three-dimensional coordinates do not have depth data, the new three-dimensional coordinates without depth data are deleted.
5. The method of claim 1, wherein the step of fusing and smoothing the new three-dimensional coordinates of each pixel comprises:
fusing the new three-dimensional coordinates of each pixel;
resampling and filtering the new three-dimensional coordinates of each fused pixel;
smoothing the new three-dimensional coordinates of each pixel after filtering, and calculating a normal;
and displaying the normal line on the smoothed new three-dimensional coordinates of each pixel to form a new point cloud.
6. A densification apparatus of a TOF camera point cloud, wherein the TOF camera is placed on a steering engine and rotates synchronously with the steering engine, the approximate optical center position of the TOF camera coincides with the center position of the steering engine, and an angle limiter is located at the center of the steering engine for measuring the rotation angle of the steering engine, the apparatus comprising:
the acquisition module is used for acquiring shooting data of the TOF camera at all angles and acquiring angle values of the TOF camera shot at all angles, wherein the shooting data comprise depth data and resolution data;
the original three-dimensional coordinate calculation module is used for calculating the original three-dimensional coordinate of each pixel under each angle according to the built-in parameters of the camera, the depth data and the angle values, and setting an original position in each angle;
the angular resolution calculation module is used for calculating the inherent spatial angular resolution of each pixel at the original position according to the built-in parameters of the camera and the resolution data;
the angle calculation module is used for calculating the angle alpha of each pixel of each row relative to the position of the main point of the camera at the position of the row according to the angle resolution;
a new three-dimensional coordinate calculation module, configured to calculate a new three-dimensional coordinate of the rotating TOF camera in the original position coordinate system according to the angle of each pixel and the original three-dimensional coordinate;
the fusion module is used for fusing the new three-dimensional coordinates of the angle of each pixel and carrying out smooth filtering treatment;
the generation module is used for taking the processed new three-dimensional coordinates as new point cloud data and outputting the new point cloud data;
the angle calculation module is further configured to:
acquiring, according to the angular resolution, the angle of each pixel at the original position relative to the row position where the camera principal point is located;
dividing the angles into positive and negative according to the negative and positive increment directions of each angle and the camera principal point position;
after a reciprocating small-angle combination, calculating the angle of each pixel of the row at the original position relative to the original position, and calculating the angle α of each pixel of every other row equivalently according to the negative and positive increments;
the acquisition module is further configured to:
setting the original position to 0 degrees, with rotation to the left of the original position as a negative increment and rotation to the right as a positive increment;
setting the reciprocating rotation of the TOF camera as a small-angle combination, wherein the reciprocating TOF camera rotates through the original position, the negative increments, and the positive increments;
and driving the steering engine to perform the small-angle combined rotation in angle order, and acquiring the shooting data of the TOF camera at each angle and the angle value of the corresponding angle measured by the angle limiter.
7. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, implement the steps of the densification method of a TOF camera point cloud according to any one of claims 1 to 5.
8. A computer readable storage medium having stored thereon computer readable instructions which when executed by a processor implement the steps of the densification method of a TOF camera point cloud according to any of claims 1 to 5.
CN202210242970.7A 2022-01-05 2022-03-11 Method, device and equipment for densification of TOF camera point cloud and readable storage medium Active CN114630096B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022100111047 2022-01-05
CN202210011104 2022-01-05

Publications (2)

Publication Number Publication Date
CN114630096A CN114630096A (en) 2022-06-14
CN114630096B true CN114630096B (en) 2023-10-27

Family

ID=81901915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210242970.7A Active CN114630096B (en) 2022-01-05 2022-03-11 Method, device and equipment for densification of TOF camera point cloud and readable storage medium

Country Status (1)

Country Link
CN (1) CN114630096B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109494278A (en) * 2019-01-10 2019-03-19 深圳技术大学(筹) Single-photon avalanche diode, active quenching circuit, pulsed TOF sensor and forming method
WO2020014951A1 (en) * 2018-07-20 2020-01-23 深圳市道通智能航空技术有限公司 Method and apparatus for building local obstacle map, and unmanned aerial vehicle
CN111563923A (en) * 2020-07-15 2020-08-21 浙江大华技术股份有限公司 Method for obtaining dense depth map and related device
CN112106112A (en) * 2019-09-16 2020-12-18 深圳市大疆创新科技有限公司 Point cloud fusion method, device and system and storage medium
CN112434709A (en) * 2020-11-20 2021-03-02 西安视野慧图智能科技有限公司 Aerial survey method and system based on real-time dense three-dimensional point cloud and DSM of unmanned aerial vehicle
CN112733641A (en) * 2020-12-29 2021-04-30 深圳依时货拉拉科技有限公司 Object size measuring method, device, equipment and storage medium
CN113284251A (en) * 2021-06-11 2021-08-20 清华大学深圳国际研究生院 Cascade network three-dimensional reconstruction method and system with self-adaptive view angle
CN113643382A (en) * 2021-08-22 2021-11-12 浙江大学 Dense coloring point cloud obtaining method and device based on rotating laser fusion camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3024510C (en) * 2016-06-01 2022-10-04 Velodyne Lidar, Inc. Multiple pixel scanning lidar


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hualu Li et al. TOF Camera Array for Package Volume Measurement. International Conference on Information Science and Control Engineering, 2021, pp. 2260-2264. *
Yang Hongfei et al. Application of image fusion in three-dimensional reconstruction of space targets. Infrared and Laser Engineering, 2018, (09), full text. *
Cheng Yuanwen et al. Research and design of a single-point phase-type TOF depth detector. Laser Journal, 2021, Vol. 42, No. 1, pp. 65-70. *

Also Published As

Publication number Publication date
CN114630096A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
JP6745328B2 (en) Method and apparatus for recovering point cloud data
CA2423212C (en) Apparatus and method for generating a three-dimensional representation from a two-dimensional image
US8487927B2 (en) Validating user generated three-dimensional models
CN112614213A (en) Facial expression determination method, expression parameter determination model, medium and device
CN109858333A (en) Image processing method, device, electronic equipment and computer-readable medium
Gu et al. Single-shot structured light sensor for 3d dense and dynamic reconstruction
CN111524216B (en) Method and device for generating three-dimensional face data
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN116310076A (en) Three-dimensional reconstruction method, device, equipment and storage medium based on nerve radiation field
CN113450579B (en) Method, device, equipment and medium for acquiring speed information
Cao et al. Accurate 3-D reconstruction under IoT environments and its applications to augmented reality
Sheng et al. A lightweight surface reconstruction method for online 3D scanning point cloud data oriented toward 3D printing
CN113052955A (en) Point cloud completion method, system and application
Dubey et al. Image alignment in pose variations of human faces by using corner detection method and its application for PIFR system
CN114630096B (en) Method, device and equipment for densification of TOF camera point cloud and readable storage medium
CN111192312B (en) Depth image acquisition method, device, equipment and medium based on deep learning
van Dam et al. Face reconstruction from image sequences for forensic face comparison
Cao et al. Stable image matching for 3D reconstruction in outdoor
CN114627170A (en) Three-dimensional point cloud registration method and device, computer equipment and storage medium
CN113791426A (en) Radar P display interface generation method and device, computer equipment and storage medium
Zhang et al. Particle swarm optimisation algorithm for non-linear camera calibration
Wang et al. Approach for improving efficiency of three-dimensional object recognition in light-field display
CN116091570B (en) Processing method and device of three-dimensional model, electronic equipment and storage medium
CN111311712B (en) Video frame processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant