CN116612254A - Point cloud data processing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116612254A
CN116612254A (application CN202310511118.XA)
Authority
CN
China
Prior art keywords
voxel
laser line
vector
projection
voxels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310511118.XA
Other languages
Chinese (zh)
Inventor
Name withheld at the applicant's request
Current Assignee
Shenzhen Xhorse Electronics Co Ltd
Original Assignee
Shenzhen Xhorse Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xhorse Electronics Co Ltd filed Critical Shenzhen Xhorse Electronics Co Ltd
Priority to CN202310511118.XA priority Critical patent/CN116612254A/en
Publication of CN116612254A publication Critical patent/CN116612254A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a point cloud data processing method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a current frame, the current frame being a three-dimensional point cloud image obtained by multi-line laser scanning of an object; extracting the laser lines in the current frame; determining, based on the laser lines, the intersecting voxels at the laser line intersections; for each intersecting voxel, projecting the intersecting voxel onto the laser line intersection to obtain voxel projection data; determining a voxel normal vector of the intersecting voxel based on the voxel projection data; and displaying a three-dimensional image of the object based on the voxel projection data and the voxel normal vectors. The method can improve the quality of the three-dimensional image.

Description

Point cloud data processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and apparatus for processing point cloud data, a computer device, and a storage medium.
Background
In three-dimensional laser scanning, the computer displays the point cloud computed from the scan in real time, so that the user can intuitively see the real-time effect and position of the scan. Conventional methods either process a depth map to obtain the three-dimensional image, or display the computed real-time point cloud directly, in which case details cannot be distinguished on the screen. Conventional methods therefore produce three-dimensional images of poor quality.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a point cloud data processing method, apparatus, computer device, and storage medium capable of improving the quality of three-dimensional images.
A method of point cloud data processing, the method comprising:
acquiring a current frame; the current frame is a three-dimensional point cloud picture obtained by multi-line laser scanning of an object;
extracting a laser line in the current frame;
determining, based on the laser lines, intersecting voxels at the laser line intersections;
for each intersecting voxel, projecting the intersecting voxel onto the laser line intersection to obtain voxel projection data;
determining a voxel normal vector of the intersecting voxel based on the voxel projection data;
and displaying a three-dimensional image of the object based on the voxel projection data and the voxel normal vector.
A point cloud data processing apparatus, the apparatus comprising:
the current frame acquisition module is used for acquiring a current frame; the current frame is a three-dimensional point cloud picture obtained by multi-line laser scanning of an object;
a laser line extraction module, configured to extract a laser line in the current frame;
an intersecting voxel determining module for determining an intersecting voxel at an intersecting portion of the laser line based on the laser line;
a voxel projection module, configured to project, for each intersecting voxel, the intersecting voxel onto the laser line intersection to obtain voxel projection data;
a voxel normal vector determination module for determining a voxel normal vector of the intersecting voxels based on the voxel projection data;
and a three-dimensional image display module, configured to display a three-dimensional image of the object based on the voxel projection data and the voxel normal vector.
A computer device, comprising a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the point cloud data processing method embodiments.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the point cloud data processing method embodiments.
According to the point cloud data processing method and apparatus, computer device, and storage medium above, the three-dimensional point cloud image of the current frame is acquired and the laser lines in it are extracted, so that the voxel data at the laser line intersections can be determined; since only the voxels at the intersections are processed subsequently, the amount of data to be processed is reduced. For each intersecting voxel, the voxel is projected onto the laser line intersection to obtain voxel projection data, from which its voxel normal vector is determined, and a three-dimensional image of the object is displayed based on the voxel projection data and the voxel normal vectors. Multi-line laser scanning thus reveals the detailed texture of the object surface: the quality of the three-dimensional image is good, and the real-time display is close to what the naked eye sees.
Drawings
FIG. 1 is an application environment diagram of a point cloud data processing method in one embodiment;
FIG. 2 is a flow chart of a method of point cloud data processing according to an embodiment;
FIG. 3 is a schematic diagram of a current frame in one embodiment;
FIG. 4 is a schematic view of surrounding voxels projected onto a laser line in one embodiment;
FIG. 5 is a schematic view of voxels projected onto the same laser line in one embodiment;
FIG. 6 is a schematic diagram of current camera view and voxel normal vectors in one embodiment;
FIG. 7 is an image schematic of an object in one embodiment;
FIG. 8 is a three-dimensional image of an object formed solely from coordinates of three-dimensional points in one embodiment;
FIG. 9 is a schematic representation of a three-dimensional image of an object in another embodiment;
FIG. 10 is a block diagram of a point cloud data processing device in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without any inventive effort, are intended to be within the scope of the application.
It should be noted that, in the embodiments of the present application, all directional indicators (such as up, down, left, right, front, and rear) are merely used to explain the relative positional relationships, movements, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change correspondingly, and a connection may be direct or indirect.
Furthermore, descriptions such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or an order among the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, provided that the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and outside the scope of protection claimed in the present application.
The terms "first," "second," and the like, as used herein, may be used to describe various data, but such data is not limited by these terms. These terms are only used to distinguish one data element from another. For example, a first three-dimensional point may be referred to as a second three-dimensional point, and similarly, a second three-dimensional point may be referred to as a first three-dimensional point, without departing from the scope of the application. Both the first three-dimensional point and the second three-dimensional point are three-dimensional points, but they are not the same three-dimensional point.
It is to be understood that in the following embodiments, "connected" is understood to mean "electrically connected", "communicatively connected", etc., if the connected circuits, modules, units, etc., have electrical or data transfer between them.
The point cloud data processing method provided by the application can be applied to an application environment as shown in FIG. 1. FIG. 1 is an application environment diagram of a point cloud data processing method in one embodiment. The computer device 110 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, portable wearable device, and the like. The three-dimensional laser scanner 120 includes 2 cameras and 2 multi-line laser transmitters. Each multi-line laser transmitter emits a plurality of laser lines, and the laser lines emitted by the two transmitters intersect. The cameras of the three-dimensional laser scanner are used to capture images.
In one embodiment, it is noted that much existing work addresses the fusion of point clouds or depth maps, and most of it operates on depth maps.
As shown in FIG. 2, the flow of the point cloud data processing method in one embodiment includes:
step 202, obtaining a current frame; the current frame is a three-dimensional point cloud image obtained by multi-line laser scanning of an object.
Specifically, the multi-line laser consists of a plurality of intersecting laser lines emitted by the three-dimensional laser scanner. The object may be any object existing in nature, such as various models, plants, animals, scenery, and the like. The three-dimensional laser scanner scans the object to obtain a three-dimensional point cloud image and transmits it to the computer device, which acquires it as the current frame.
Alternatively, the current frame is obtained by combining the three-dimensional point cloud images scanned at two successive moments. As shown in FIG. 3, a schematic diagram of a current frame in one embodiment: FIG. 3(a) is the three-dimensional point cloud obtained by scanning at the earlier moment, FIG. 3(b) is the three-dimensional point cloud obtained by scanning at the later moment, and FIG. 3(c) is a schematic diagram of the current frame. It will be appreciated that such a combined frame contains more surrounding voxels and intersecting voxels than a single scan, so more intersections are obtained, and the cost of the three-dimensional scanner can be reduced.
Step 204, extracting the laser line in the current frame.
Specifically, the computer device employs a laser line extraction algorithm to extract the laser lines in the current frame.
At step 206, intersecting voxels at the intersection of the laser lines are determined based on the laser lines.
The laser line intersection may be a plane, a curved surface, or a volume formed within a certain range of the crossing laser lines. The number of intersecting laser lines is at least two. Intersecting voxels are the voxels located at the laser line intersection.
Optionally, the computer device may find identical three-dimensional point coordinates in the coordinate sets of the respective laser lines; the voxels at those coordinates are intersecting voxels, and the surface around those coordinates is the laser line intersection. For example, if laser line 1 corresponds to coordinate set A and laser line 2 corresponds to coordinate set B, and sets A and B contain the same three-dimensional point coordinates, then the voxel at those coordinates is an intersecting voxel and the surface around those coordinates is the laser line intersection.
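The shared-coordinate check can be sketched as follows; the function names, the voxel quantization step, and the sample coordinates are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: intersecting voxels are found as the shared
# three-dimensional point coordinates of two laser lines' coordinate sets.
# Points are quantized to a voxel grid so that points from different lines
# landing in the same voxel compare equal.

def voxel_key(point, voxel_size):
    """Quantize a 3D point to its voxel grid index."""
    return tuple(int(c // voxel_size) for c in point)

def intersecting_voxels(line_a, line_b, voxel_size=1.0):
    """Return voxel indices that appear in both lines' coordinate sets."""
    set_a = {voxel_key(p, voxel_size) for p in line_a}
    set_b = {voxel_key(p, voxel_size) for p in line_b}
    return set_a & set_b

# Laser line 1 (coordinate set A) and laser line 2 (coordinate set B)
line1 = [(0.2, 0.3, 0.1), (1.1, 1.2, 0.9), (2.4, 2.1, 1.8)]
line2 = [(2.5, 0.1, 1.7), (1.3, 1.4, 0.6), (0.1, 2.2, 0.2)]
shared = intersecting_voxels(line1, line2)
print(shared)  # {(1, 1, 0)} — the voxel where the two lines cross
```

The quantization step plays the role of the "surface around the three-dimensional point coordinates": two lines intersect wherever their samples fall in the same voxel.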
Step 208: for each intersecting voxel, project the intersecting voxel onto the laser line intersection to obtain voxel projection data.
Specifically, the voxel projection data may include a voxel projection vector, i.e. the vector obtained by perpendicularly projecting the intersecting voxel onto the laser line intersection, and may also include intermediate data generated during the projection. For each intersecting voxel, the computer device perpendicularly projects the voxel onto the laser line intersection to obtain the voxel projection data. The laser line intersection here may refer to the intersection of valid laser lines.
At step 210, voxel normal vectors of intersecting voxels are determined based on the voxel projection data.
The voxel normal vector represents the orientation of a voxel and is an important feature of it. When each voxel has its own distinct orientation, the whole three-dimensional image can present the detailed texture of the surface, including properties such as reflection and occlusion, so that the real-time display is close to what is actually seen.
Specifically, the computer device may determine the voxel normal vector of an intersecting voxel based on the voxel projection vector in the voxel projection data; the voxel normal vector has the same or the opposite direction as the voxel projection vector, the specific choice being the one that faces the viewing direction.
Step 212 displays a three-dimensional image of the object based on the voxel projection data and the voxel normal vector.
Specifically, the voxel projection data includes the target location of the intersecting voxel, which the computer device may obtain by adding the voxel projection vector to the coordinates of the intersecting voxel. The voxel normal vector affects the shading of the voxel when the three-dimensional image is displayed. The computer device displays the three-dimensional image of the object based on the target locations and the corresponding voxel normal vectors.
Alternatively, the computer device may display a three-dimensional image of the object after three-dimensional reconstruction based on the voxel projection data and the voxel normal vector.
In this embodiment, the three-dimensional point cloud image of the current frame is acquired and the laser lines in it are extracted, so that the voxel data at the laser line intersections can be determined; since only the voxels at the intersections are processed subsequently, the amount of data to be processed is reduced. For each intersecting voxel, the voxel is projected onto the laser line intersection to obtain voxel projection data, from which its voxel normal vector is determined, and a three-dimensional image of the object is displayed based on the voxel projection data and the voxel normal vectors. Multi-line laser scanning thus reveals the detailed texture of the object surface: the quality of the three-dimensional image is good, and the real-time display is close to what the naked eye sees.
In one embodiment, determining intersection voxels at an intersection of laser lines based on the laser lines includes: for surrounding voxels of the laser line, projecting the surrounding voxels onto the corresponding laser line; when the number of laser lines projected by the surrounding voxels is at least two, the surrounding voxels are determined to be intersecting voxels located at the intersecting portion of the laser lines.
Wherein, surrounding voxels refer to voxels within a preset distance around the laser line as a reference.
Specifically, each surrounding voxel of a laser line is projected onto the nearby laser lines. When a surrounding voxel projects onto at least two laser lines, it is an intersecting voxel located at the laser line intersection.
In this embodiment, intersecting voxels are the voxels at the intersection of at least two laser lines, so if a surrounding voxel can be projected onto at least two laser lines it is an intersecting voxel. This determines accurately which voxels are intersecting voxels, and the voxel projection data obtained during the projection facilitates the subsequent three-dimensional image processing.
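A rough sketch of this classification; the nearest-sample distance test stands in for a true projection, and all names and values are illustrative:

```python
import math

def is_intersecting_voxel(voxel_center, laser_lines, max_dist):
    """Classify a surrounding voxel: it is an intersecting voxel when it
    can be projected onto at least two laser lines.  The distance to the
    nearest sampled laser point is used here as a cheap stand-in for a
    true perpendicular projection test."""
    hits = 0
    for line in laser_lines:
        if min(math.dist(voxel_center, q) for q in line) <= max_dist:
            hits += 1
    return hits >= 2

# A voxel near the crossing of two lines counts; one near a single line does not.
lines = [[(0.1, 0.0, 0.0), (1.0, 0.0, 0.0)],   # laser line 1
         [(0.0, 0.1, 0.0), (0.0, 1.0, 0.0)],   # laser line 2
         [(5.0, 5.0, 5.0)]]                    # a distant laser line
print(is_intersecting_voxel((0.0, 0.0, 0.0), lines, max_dist=0.2))  # True
```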
In one embodiment, for surrounding voxels of a laser line, projecting the surrounding voxels onto the corresponding laser line comprises:
within a search grid formed based on two three-dimensional points on a laser line, surrounding voxels in each search grid are projected onto the laser line between the two three-dimensional points.
Specifically, the search grid is a voxel grid. The two three-dimensional points may be points on diagonal corners of the search grid. It will be appreciated that the search grid may also be a voxel grid of two three-dimensional points re-expanded by a certain size.
A laser line L: {P1, P2, …, Pn} consists of n points; after laser line extraction, each point has coordinates (x, y, z) and a tangent vector (tx, ty, tz), which are used to define and acquire the surrounding voxels of the laser line. A global voxel grid is assumed according to the scanning resolution. The bounding box (outer rectangular box) formed by two consecutive points of the laser line is obtained and expanded outward by one layer of voxels of size ε (an offset), yielding the search grid bbx(ε). All voxels within bbx(ε) participate in the computation of the TSDF (Truncated Signed Distance Function) value. The computer device projects the surrounding voxels onto the laser line segment between the two three-dimensional points.
For each voxel Voxel_i ∈ bbx(ε), recall that bbx(ε) is determined by two consecutive points on the laser line L: {P1, P2, …, Pn}; denote the two selected laser points P1 and P2, and represent the voxel whose TSDF value is to be computed by its center point P. Mathematically, P should be projected onto the line segment from P1 to P2.
In this embodiment, through analysis, the two three-dimensional points are the closest three-dimensional points to the surrounding voxels, the surrounding voxels should be projected onto a line segment between the two three-dimensional points, and the surrounding voxels in each search grid are projected onto the laser line between the two three-dimensional points based on the search grids formed by the two three-dimensional points on the laser line, so that the accurate projection can be ensured, and the accuracy of the subsequent data can be ensured.
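A minimal sketch of constructing the search grid bbx(ε) from two consecutive laser points (the function name and sample values are illustrative assumptions):

```python
import math

def search_grid(p1, p2, voxel_size, eps):
    """Enumerate the voxel indices of bbx(eps): the bounding box of two
    consecutive laser points P1 and P2, expanded outward by eps, on a
    global voxel grid of the given resolution."""
    lo = [min(a, b) - eps for a, b in zip(p1, p2)]
    hi = [max(a, b) + eps for a, b in zip(p1, p2)]
    lo_i = [math.floor(c / voxel_size) for c in lo]
    hi_i = [math.floor(c / voxel_size) for c in hi]
    return [(i, j, k)
            for i in range(lo_i[0], hi_i[0] + 1)
            for j in range(lo_i[1], hi_i[1] + 1)
            for k in range(lo_i[2], hi_i[2] + 1)]

grid = search_grid((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), voxel_size=1.0, eps=0.5)
print(len(grid))  # 12 voxels surround this segment
```

Every voxel listed here would then participate in the TSDF computation for the segment P1P2.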
In one embodiment, projecting surrounding voxels onto corresponding laser lines comprises:
obtaining a first equation, wherein the first equation represents that the first vector is parallel to the second vector; the first vector is a vector formed by the first three-dimensional point and surrounding voxels after moving a distance parameter along the first projection vector; the second vector is a vector formed by the second three-dimensional point and surrounding voxels after moving a distance parameter along the second projection vector; the first three-dimensional point and the second three-dimensional point are three-dimensional points on the laser line;
Substituting coordinates of surrounding voxels, the first three-dimensional point and the second three-dimensional point into a first equation, and solving to obtain a distance value;
moving the first three-dimensional point along the first projection vector by a distance value to obtain a first projection point;
moving the second three-dimensional point along the second projection vector by a distance value to obtain a second projection point;
determining a location scaling factor based on the distance between the surrounding voxel and the first projection point and the distance between the first projection point and the second projection point;
determining a target projection vector of surrounding voxels projected onto a corresponding laser line based on the position scaling factor, the first projection vector and the second projection vector;
and moving the surrounding voxel along the target projection vector by the distance value, so that it is projected onto the laser line between the two three-dimensional points.
Specifically, FIG. 4 is a schematic diagram of projecting surrounding voxels onto a laser line in one embodiment. Intuitively, the center point P of a surrounding voxel is projected onto the line segment between the first three-dimensional point P1 and the second three-dimensional point P2 on the laser line, giving the target projection point Pc. Projecting P to Pc is equivalent to moving P a distance d along a unit direction vector, so the geometric model amounts to solving for that unit vector and for d. Let v1 and v2 be the projection directions at P1 and P2: v1 is the cross product of the tangent vector at P1 with the normal vector of the plane P P1 P2, and likewise v2 is the cross product of the tangent vector at P2 with the normal vector of the plane P P1 P2. Let q1 and q2 be the points obtained by moving P1 along v1 and P2 along v2 by the same length d.
It will be appreciated that in this geometric model P is equivalent to a moving point on a line segment whose position varies continuously, so the associated quantities also vary continuously. In other words, the position of P on segment q1q2, the distance d, and the position of Pc on P1P2 must satisfy a relationship, assumed here to be linear, which gives the position scale coefficient:
u = |q1P| / |q1q2|
The conditions the model must satisfy are that q1, q2, and P are collinear, and
Pc = u·P2 + (1 − u)·P1
By derivation of the model, a cross-product equation in d (the first equation) is obtained, expressing that the vector from q1 to P is parallel to the vector from q2 to P, i.e. that the first vector and the second vector are parallel:
(P − q1) × (P − q2) = 0
Solving this equation yields the distance d: moving P1 along v1 by d gives q1, and likewise moving P2 along v2 by d gives q2. From these, the position scale coefficient u and the target projection point Pc are obtained, and from Pc the target projection vector (from P to Pc) follows. Since P1 and P2 are points of the laser line with tangent vectors tP1 and tP2, the tangent vector at Pc is obtained in the same proportion:
tPc = u·tP2 + (1 − u)·tP1
The distance value d, the target projection vector, and the projection point tangent vector tPc together constitute the computed TSDF value of this laser line segment for the voxel. The computer device may store this voxel projection data of the surrounding voxel, i.e. the distance value d, the target projection vector, and the projection point tangent vector tPc, for subsequent processing.
In this embodiment, it is found through analysis that the projection of the surrounding voxels onto the laser line satisfies a certain relationship, so by constructing an equation for the geometric relationship among the surrounding voxels, the first three-dimensional point and the second three-dimensional point, the surrounding voxels can be accurately projected onto the laser line between the two three-dimensional points, the intersecting voxels can be determined, the voxel projection data of the surrounding voxels can be obtained in advance, and the intersecting voxels can be processed in the subsequent process, thereby improving the processing efficiency.
In one embodiment, projecting intersecting voxels to a laser line intersection to obtain voxel projection data comprises: determining a normal vector of the laser line crossing part; and projecting the crossed voxels to the laser line crossing part along the normal vector to obtain voxel projection data.
Specifically, the computer device may fit the planes of the laser line intersecting portions to obtain a fitted plane, calculate a normal vector of the fitted plane, and project intersecting voxels to the laser line intersecting portions along the normal vector of the laser line intersecting portions to obtain voxel projection data.
In this embodiment, by determining a normal vector of the laser line intersecting portion, the intersecting voxels are projected to the laser line intersecting portion along the normal vector, voxel projection data is obtained, and the voxels are orthographically projected to the laser line intersecting portion, thereby improving accuracy of the voxel projection data.
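A minimal sketch of projecting a voxel center onto the fitted plane along its unit normal (the function name and sample values are illustrative assumptions):

```python
def project_onto_plane(p, plane_point, normal):
    """Project point P onto the plane through plane_point with the given
    unit normal, moving along the normal; returns the projected point
    and the signed distance travelled."""
    d = sum((a - b) * n for a, b, n in zip(p, plane_point, normal))
    return [a - d * n for a, n in zip(p, normal)], d

pc, d = project_onto_plane((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(pc, d)  # [1.0, 2.0, 0.0] 3.0
```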
In one embodiment, determining a normal vector of the laser line intersection comprises: acquiring a laser line tangent vector of a crossed laser line corresponding to the laser line crossing part; a normal vector of the laser line intersection is determined based on the laser line tangent vector of the intersecting laser line.
Wherein intersecting laser lines refer to laser lines that intersect at least one other laser line. The laser line tangent vector may specifically be the tangent vector of the laser line at the laser line intersection point.
Specifically, the computer device obtains a laser line tangent vector for each intersecting laser line at the intersection of the laser lines. The computer equipment constructs a covariance matrix based on each laser line tangent vector, and decomposes the covariance matrix to obtain a corresponding eigenvector, namely, a unit normal vector of the laser line intersecting part.
Voxels at different intersections of the multiple laser lines are processed (as the device can determine by itself, sometimes two laser lines cross, and sometimes a surrounding voxel lies where 3 or 4 laser lines cross). Suppose the voxel lies in the model of one frame of intersecting laser lines and TSDF values have been computed for N laser lines; the tangent vectors ti of these lines form a covariance matrix C. Because C is built from several intersecting laser lines, i.e. the ti are not parallel to one another, the normal vector can be found from the sum of the autocorrelation matrices formed by the ti:
C = Σ_{i=1..N} ti ti^T
Applying SVD (Singular Value Decomposition) or eigenvalue decomposition to C yields three eigenvalues and their corresponding eigenvectors. Following the principle of PCA (Principal Component Analysis), the eigenvalue with the smallest modulus and its eigenvector are selected from the three. That eigenvector is the normal vector of the laser line intersection, i.e. the unit projection vector of the voxel at the intersection.
Alternatively, the computer device may cross-multiply the two laser line cut vectors to obtain a normal vector for the laser line intersection.
In this embodiment, in the laser scanning process, the intersecting part of the laser line is not necessarily a plane, and in most cases is a curved surface, so that the normal vector of the intersecting part of the laser line is calculated by the laser line tangent vector of the intersecting laser line corresponding to the intersecting part of the laser line, so that the intersecting voxel is perpendicularly projected to the intersecting part of the laser line, the projection is more accurate, and the three-dimensional image of the object is more true.
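For the two-laser-line case, the cross-product alternative can be sketched as (illustrative function name):

```python
def cross(t1, t2):
    """Cross product of two laser line tangent vectors; for two
    intersecting laser lines this gives a normal of the intersection."""
    return (t1[1] * t2[2] - t1[2] * t2[1],
            t1[2] * t2[0] - t1[0] * t2[2],
            t1[0] * t2[1] - t1[1] * t2[0])

print(cross((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 0.0, 1.0)
```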
In one embodiment, determining a normal vector of a laser line intersection based on a laser line tangent vector of an intersecting laser line comprises: acquiring the weight of each laser line tangent vector; a normal vector of the laser line intersection is determined based on the laser line tangent vector of the intersecting laser line and the corresponding weights.
Each laser line tangent vector has a corresponding weight, and the weight is associated with a distance value.
Specifically, the weight of each laser line tangent vector is obtained; a covariance matrix is constructed from the laser line tangent vectors of the intersecting laser lines and the corresponding weights, and the covariance matrix is decomposed to obtain the corresponding eigenvector, i.e. the unit normal vector of the laser line intersection.
The formula is as follows:

C = Σ_{i=1}^{N} ω_i · t_i · t_i^T

where N is the number of laser lines, t_i is the tangent vector of laser line i, and ω_i is the weight. The weights may be determined based on the voxel resolution δ at the time of scanning and the distance value d in the voxel projection data, where σ is the envelope size, generally on the order of bbx(ε); a suitable recommended choice of σ gives better results.
In this embodiment, repeated experiments during multi-line laser scanning show that the image quality improves continuously; compared with a scheme that uses the laser line tangent vectors directly without weighting, weighting each laser line yields a better three-dimensional image.
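The weighted covariance construction and eigenvector selection can be sketched as follows. The Gaussian form of the weight, ω = exp(-(d/σ)²), is an assumption for illustration only; the patent states that the weight depends on d, the resolution δ, and the envelope size σ without reproducing the exact expression here. Function and variable names are likewise illustrative:

```python
import numpy as np

def intersection_normal(tangents, d_values, sigma):
    """Unit normal of a laser line intersection from the tangent vectors t_i
    of N crossing laser lines, each weighted by its distance value d_i.
    The Gaussian weight omega = exp(-(d/sigma)^2) is an assumed form."""
    C = np.zeros((3, 3))
    for t, d in zip(tangents, d_values):
        w = np.exp(-(d / sigma) ** 2)
        C += w * np.outer(t, t)            # sum of weighted autocorrelation matrices
    eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns eigenvalues in ascending order
    return eigvecs[:, 0]                   # eigenvector of the smallest eigenvalue

# Three tangents spanning the xy-plane: the recovered normal is the z axis.
ts = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
      np.array([0.7, 0.7, 0.0])]
n = intersection_normal(ts, d_values=[0.1, 0.2, 0.1], sigma=0.5)
```

The smallest-eigenvalue eigenvector is the direction most orthogonal to all tangent vectors, which is exactly the PCA argument in the text.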
In one embodiment, the voxel projection data includes a voxel projection vector. Projecting intersecting voxels to the laser line intersection along a normal vector to obtain voxel projection data, comprising: acquiring laser line projection data obtained by projecting the intersecting voxels to the corresponding laser lines; the laser line projection data includes a laser line projection vector; determining the sum of projection vectors of laser lines; and projecting the sum of the laser line projection vectors to the laser line intersection part along the normal vector to obtain a voxel projection vector.
The laser line projection data refers to data generated by projecting intersecting voxels to corresponding points on the laser line. The laser line projection data includes a distance value and a laser line projection vector. The laser line projection vector refers to a vector formed by projecting intersecting voxels to corresponding points on the laser line.
Specifically, obtaining a laser line projection vector obtained by projecting the intersecting voxels to the corresponding laser line includes: in a search grid formed based on two three-dimensional points on a laser line, projecting surrounding voxels in each search grid onto the laser line between the two three-dimensional points to obtain voxel projection data of the surrounding voxels; laser line projection vectors obtained by projecting intersecting voxels to corresponding laser lines are obtained from voxel projection data of surrounding voxels. The computer device calculates the sum of the projection vectors of the laser lines and projects the sum to the intersection part of the laser lines along the normal vector to obtain the voxel projection vector.
For example, for a certain voxel, the laser line projection vector obtained by projecting the intersecting voxel onto laser line i is

f_i = P_i − P

where P_i is the corresponding target projection point of voxel P on laser line i.
It will be appreciated that a weight may be added to the laser line projection vector, the weight being associated with the corresponding laser line.
Then the sum of the laser line projection vectors is projected onto the laser line intersection along the normal vector:

f_pv = ((Σ_i (P_i − P)) · n) · n

where P_i is the target projection point of voxel P on laser line i and n is the unit normal vector of the intersection.
In this embodiment, a laser line projection vector obtained by projecting an intersecting voxel onto a corresponding laser line is obtained, a sum of the laser line projection vectors is determined, and the sum is projected onto a laser line intersecting portion along a normal vector, so that the intersecting voxel can be orthographically projected onto the laser line intersecting portion, and a true projection position of the voxel can be determined, thereby obtaining a three-dimensional image with good effect.
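A minimal sketch of this step, under the assumption that the laser line projection vector of line i is P_i − P (with P_i the target projection point on line i) and that n_hat is the unit normal of the intersection; the function name is illustrative:

```python
import numpy as np

def voxel_projection_vector(P, proj_points, n_hat):
    """Project the sum of the laser line projection vectors f_i = P_i - P
    onto the unit normal n_hat of the intersection: f_pv = ((sum f_i) . n) n."""
    f_sum = np.sum([Pi - P for Pi in proj_points], axis=0)
    return (f_sum @ n_hat) * n_hat

P = np.array([0.0, 0.0, 0.3])                       # intersecting voxel center
Pi_list = [np.array([0.1, 0.0, 0.0]),               # projection point on line 1
           np.array([0.0, -0.1, 0.0])]              # projection point on line 2
n_hat = np.array([0.0, 0.0, 1.0])                   # intersection unit normal
f_pv = voxel_projection_vector(P, Pi_list, n_hat)
target = P + f_pv                                   # projected voxel position
```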
In one embodiment, obtaining laser line projection data resulting from projecting intersecting voxels onto corresponding laser lines includes: when more than one set of voxel projection data is obtained by projecting an intersecting voxel onto the same laser line, taking the set of data closest to the laser line as the voxel projection data obtained by projection onto that laser line.
Specifically, the set of data closest to the laser line is the set with the smallest d. During three-dimensional laser scanning, each voxel may be selected more than once in the calculation. When the laser line bends sharply or the scanned region is complex, some voxels may lie within the bbx(ε) of several small line segments and thus produce several tsdf values, which must be screened: when several tsdf values have been calculated for the same laser line, the group with the smallest d is selected.
For example, fig. 5 shows a schematic view of voxels projected onto the same laser line in one embodiment. The blocks in the figure are intersecting voxels; laser lines (1) and (2) are actually identical, and the computer device selects the voxel projection data projected onto laser line (1).
In this embodiment, if more than one set of data is generated by projecting an intersecting voxel onto the same laser line, projection accuracy would be affected; taking the set of data closest to the laser line as the voxel projection data for that laser line therefore improves the accuracy of three-dimensional scanning and the quality of the three-dimensional image.
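The screening rule can be sketched as a small helper. The record layout (line_id, d, data) is an illustrative assumption, not a structure defined by the patent:

```python
def select_per_line(tsdf_records):
    """Keep, for each laser line, the projection record with the smallest
    distance d when a voxel was projected onto that line more than once.
    Records are illustrative (line_id, d, data) tuples."""
    best = {}
    for line_id, d, data in tsdf_records:
        if line_id not in best or d < best[line_id][0]:
            best[line_id] = (d, data)
    return {line_id: data for line_id, (d, data) in best.items()}

# Two candidate projections onto line 1; the one with smaller d wins.
records = [(1, 0.08, "a"), (1, 0.03, "b"), (2, 0.05, "c")]
kept = select_per_line(records)
```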
In one embodiment, the voxel projection data includes a voxel projection vector; determining voxel normal vectors of intersecting voxels based on voxel projection data includes: acquiring the current position of the camera; determining the current camera line of sight based on the current position of the camera and the position of the intersecting voxel; determining the included angle between the current camera line of sight and the voxel projection vector; when the included angle is larger than 90 degrees, taking the vector opposite to the direction of the voxel projection vector as the voxel normal vector; when the included angle is smaller than or equal to 90 degrees, taking the vector with the same direction as the voxel projection vector as the voxel normal vector.
Here, the camera line of sight means looking at the voxel from the camera's position; the line of sight is a straight line without direction, and is represented by a vector in the course of the calculation. The angle between the current camera line of sight and the voxel projection vector can be characterized by the inner product of the two. The voxel normal vector influences the rendering of the three-dimensional image: the image looks best when the voxel normal vector points toward the side where the line of sight lies.
When the voxel normal vector and the current camera line of sight included angle is greater than 90 degrees, i.e. the inner product is smaller than 0, then the intersecting voxel is above the laser line intersection, i.e. at the side close to the line of sight, and the direction of the voxel normal vector is opposite to the direction of the voxel projection vector. When the included angle between the voxel normal vector and the current camera sight line is smaller than or equal to 90 degrees, namely, the inner product is larger than or equal to 0, the voxel is arranged below the intersection part of the laser lines, namely, at the side far away from the sight line, and the direction of the voxel normal vector is the same as the direction of the voxel projection vector.
In particular, the current camera line of sight may be V_c − P_v, where V_c is the current camera position and P_v is the coordinate of the voxel center point. Fig. 6 shows a schematic diagram of the current camera line of sight and the voxel normal vector in one embodiment. In fig. 6(a), the inner product of the voxel normal vector n_v and the current camera line of sight V_c − P_v is smaller than 0 and the included angle is larger than 90 degrees, so the voxel normal vector n_v is opposite to the voxel projection vector f_pv, i.e. it points upward. In fig. 6(b), the inner product of the two is greater than or equal to 0 and the included angle is smaller than or equal to 90 degrees, so the voxel normal vector n_v is the same as the voxel projection vector f_pv, i.e. it points upward.
Then an inner product is formed between the voxel projection vector and the current camera line of sight, and the voxel normal vector is obtained according to its sign:

n_v = f_pv, if (V_c − P_v) · f_pv ≥ 0
n_v = −f_pv, if (V_c − P_v) · f_pv < 0
For the line of sight of the next frame, the inner product of the normal vector of the current frame and the real-time line of sight V_c − P_v of the next frame is formed; when this inner product is larger than that of the current frame, the real-time line of sight of the next frame is stored, and the voxel normal vector of the next frame is determined based on the real-time line of sight of the next frame.
In this embodiment, based on the current position of the camera and the position of the intersecting voxel, the current camera line of sight is determined, and then the voxel normal vector is determined by calculating the included angle between the current camera line of sight and the voxel projection vector, so that the normal vector direction of the voxel is towards the area where the camera is located, and the imaging effect of the three-dimensional image is improved.
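A sketch of the orientation rule, assuming the voxel normal vector is returned as a unit vector (the patent does not state whether it is normalized); the function name is illustrative:

```python
import numpy as np

def oriented_voxel_normal(f_pv, camera_pos, voxel_center):
    """Orient the voxel normal along +/- the voxel projection vector so that
    it faces the camera: flip it when the angle with the current line of
    sight V_c - P_v exceeds 90 degrees (inner product < 0)."""
    sight = camera_pos - voxel_center
    n = f_pv / np.linalg.norm(f_pv)
    return n if sight @ f_pv >= 0 else -n

camera = np.array([0.0, 0.0, 5.0])
voxel = np.array([0.0, 0.0, 0.0])
# Projection vector points away from the camera, so the normal is flipped.
n_up = oriented_voxel_normal(np.array([0.0, 0.0, -2.0]), camera, voxel)
```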
In one embodiment, the voxel projection data includes a voxel projection vector; displaying a three-dimensional image of an object based on voxel normal vectors and voxel projection data, comprising: adding the coordinates of the intersecting voxels with the voxel projection vector to obtain the target position of the intersecting voxels; a three-dimensional image of the object is displayed based on the target location and the corresponding voxel normal vector.
Wherein the target position of the intersecting voxel is the position of the intersecting voxel in the three-dimensional image.
Specifically, the voxel projection vector represents the orthographic projection of the intersecting voxel onto the plane in which the intersecting portion lies. The target position point_v of the intersecting voxel is

point_v = P_v + f_pv

where P_v is the coordinate of the intersecting voxel and f_pv is the voxel projection vector.
In this embodiment, adding the coordinates of the intersecting voxel to the voxel projection vector gives the target position of the intersecting voxel, and the three-dimensional image of the object is displayed based on the target position and the corresponding voxel normal vector, so that the surface texture of the three-dimensional image is more distinct and the imaging effect is improved.
In one embodiment, the case in which the current frame is a fusion of two frames of images captured by the camera at two different times is described as an example. Specifically, the camera uses 7 pairs of laser lines that form intersections: at the previous moment the binocular camera sees 7 pairs of parallel laser lines, and at the next moment the 7 pairs of parallel laser lines are rotated by a certain angle so that they intersect the laser lines of the previous moment. From the binocular camera's perspective, the crossed laser map, which is actually two consecutive frames, is therefore treated as one frame in the subsequent fusion.
On this premise, each frame image can be regarded as a first group of parallel lasers and a second group of parallel lasers that form intersections, and the processing of the current frame can be described as follows: the two groups of parallel laser lines are processed separately, and the processing results are combined to calculate the tsdf values of all target voxels of the frame.
1. Calculating tsdf values of voxels surrounding the laser line
The laser line L = {P_1, P_2, ..., P_n} consists of n points; after extraction of the laser line, each point contains coordinates (x, y, z) and a tangent vector (t_x, t_y, t_z). The surrounding voxels of the laser line are defined and acquired as follows. According to the scanning resolution, a global voxel grid is assumed; the bounding box (rectangular outer box) formed by two consecutive points of the laser line is acquired, and an offset ε is added, so that a layer of voxel grid of size ε is expanded outside the bounding box to form a search grid bbx(ε). All voxels within bbx(ε) participate in the computation of the tsdf value. The computer device projects surrounding voxels onto the laser line between the two three-dimensional points.
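The construction of the search grid can be sketched as follows; the voxel index convention (floor of coordinate divided by the resolution) is an illustrative choice, not specified by the patent:

```python
import numpy as np
from itertools import product

def bbx_voxels(p1, p2, delta, eps):
    """Voxel indices of the search grid bbx(eps): the bounding box of two
    consecutive laser points, expanded by eps on every side, on a global
    grid of resolution delta."""
    lo = np.floor((np.minimum(p1, p2) - eps) / delta).astype(int)
    hi = np.floor((np.maximum(p1, p2) + eps) / delta).astype(int)
    # every integer index triple inside the expanded box
    return [idx for idx in product(*(range(l, h + 1) for l, h in zip(lo, hi)))]

# A short segment along x, with resolution and offset chosen as exact binary
# fractions so the index arithmetic is exact.
vox = bbx_voxels(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                 delta=0.5, eps=0.5)
```

Every voxel in the returned list would then participate in the tsdf computation for this segment.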
For each Voxel_i ∈ bbx(ε), note that this bbx(ε) is determined by two consecutive points on the laser line L = {P_1, P_2, ..., P_n}; denote the two selected laser points as P_1 and P_2. The Voxel whose tsdf value needs to be calculated is represented by its center position point P. Mathematically, P should be projected onto the segment between P_1 and P_2, and the projected target projection point P_c should satisfy the relationship of fig. 4.
Intuitively, the center point P of a surrounding voxel in the figure is projected onto the line segment between the first three-dimensional point P_1 and the second three-dimensional point P_2 of the two points on the laser line, giving the target projection point P_c. Note that projecting P to P_c is equivalent to moving P along a unit direction vector by a length d; that is, the geometric model is equivalent to solving for that unit direction vector and d. Let v_1 and v_2 denote the direction vectors at P_1 and P_2, and let q_1 and q_2 be the points obtained by moving P_1 and P_2 along v_1 and v_2, respectively, by the same length d. Here v_1 is the outer product of the tangent vector through P_1 and the normal vector of the plane P P_1 P_2; in the same way, v_2 is the outer product of the tangent vector through P_2 and the normal vector of the plane P P_1 P_2.
It will be appreciated that P in this geometric model is equivalent to a moving point on a line segment whose position varies continuously, so that its value also varies continuously. In other words, the position of P on segment q_1 q_2, the distance d, and the position of P_c on P_1 P_2 must satisfy a relationship, here assumed to be linear, giving the position scale relationship

u = |q_1 P| / |q_1 q_2| = |P_1 P_c| / |P_1 P_2|

Then, within this model, the conditions to be met are: q_1, q_2 and P are collinear;
P_c = u·P_2 + (1 − u)·P_1
By derivation within the model, an outer product equation (i.e., the first equation) for d can be obtained:

(q_1 − P) × (q_2 − P) = 0
the outer product equation represents q 1 P and q 2 P parallel, i.e. the first vector and the second vector are parallel. The distance d can be found by the outer product equation, so that P 1 TowardsThe distance d is shifted to q 1 P in the same way 2 Towards->The distance d is shifted to q 2 Further, the position proportionality coefficient u and the target projection point P can be obtained c Knowing the target projection point P c Then the target projection vector can be obtained +.>Note that P 1 And P 2 Is the point of the laser line, with tangent vector t P1 And t P2 Also according to the proportion, P can be obtained c Is the tangent vector t of (2) Pc . At this time, the distance value d, the target projection vector +.>And projection point-cut vector t Pc It is a calculated value of the laser line segment at a tsdf value of the Voxel. The computer device may store voxel projection data of surrounding voxels, i.e. comprising the distance value d, mesh as described aboveTarget projection vector +.>And projection point-cut vector t Pc And carrying out subsequent treatment.
During three-dimensional laser scanning, each voxel may be selected more than once in the calculation. When the laser line bends sharply or the scanned region is complex, some voxels may lie within the bbx(ε) of several small line segments and thus produce several tsdf values, which must be screened: when several tsdf values have been calculated for the same laser line, the group with the smallest d is selected.
2. Integrating tsdf values of voxels of the intersection to generate covariance matrix
Voxels at different crossing locations of the multiple laser lines are processed (the device itself can recognize that sometimes two laser lines cross, and sometimes surrounding voxels lie at a location where 3 or 4 laser lines cross). Assume that the voxel lies within the model of one frame of intersecting laser lines, and that tsdf values have been calculated for N laser lines; these tsdf values form a covariance matrix C.
The formula is as follows:

C = Σ_{i=1}^{N} ω_i · t_i · t_i^T

where N is the number of laser lines, t_i is the tangent vector of laser line i (i denotes the i-th laser line), and ω_i is the weight.
The laser line projection vector is

f_i = P_i − P

where P_i is the target projection point of voxel P on laser line i.
The weights may be determined based on the voxel resolution δ at the time of scanning and the distance value d in the voxel projection data. The voxel resolution δ can be a default value or can be modified as required. Here σ is the envelope size, generally on the order of bbx(ε); a suitable recommended choice of σ gives better results.
It will be appreciated that the voxel data may be represented by the tsdf values, i.e. d, the target projection vector and t_Pc; by C; and/or by the laser line projection vector and ω.
Since matrix C is defined by a plurality of intersecting laser lines, i.e. the t_i are not vectors parallel to each other, the normal vector can be found from the sum of the autocorrelation matrices composed of the t_i. Performing SVD or eigenvalue decomposition on the matrix C yields three eigenvalues and their corresponding eigenvectors. According to the principle of PCA, the eigenvalue with the smallest modulus and its corresponding eigenvector are selected from the three. That eigenvector is the projection unit vector that the voxels of the intersection should have.
The voxel projection vector f_pv is the projection of the sum of the laser line projection vectors onto the projection unit vector n, i.e.

f_pv = ((Σ_i f_i) · n) · n

where f_i = P_i − P is the laser line projection vector of laser line i.
Then the Voxel has the projection point P + f_pv.
In terms of the normal vector, the projection vector has already been obtained from the SVD decomposition of the covariance matrix according to equation (7). Let V_c be the position of the camera, rotationally translated with respect to the first frame position, at the current shooting; the current line of sight is V_c − P_v, where P_v is the center point coordinate of the Voxel. Forming the inner product of the projection vector and the line of sight, the voxel normal vector n_v is obtained according to the sign:

n_v = f_pv, if (V_c − P_v) · f_pv ≥ 0
n_v = −f_pv, if (V_c − P_v) · f_pv < 0
Wherein, for the line of sight of the next frame, the normal vector of the current frame and the real-time line of sight V of the next frame are combined c -P v Making an inner product; when the inner product is greater than the inner product of the current frame, the next frame real-time line of sight is stored, and a voxel normal vector of the next frame is determined based on the next frame real-time line of sight. When the inner product is smaller than the inner product of the current frame, a voxel normal vector of the next frame is determined based on the real-time line of sight of the current frame.
As shown in fig. 7, an image of an object in one embodiment is schematically illustrated. Fig. 7 is an image of the pendant captured with a camera; the pattern thickness of the pendant is less than 2 mm. As shown in fig. 8, a three-dimensional image of an object formed from the coordinates of three-dimensional points alone in one embodiment. Fig. 8 is a three-dimensional image formed from the coordinates of the three-dimensional points obtained after multiple scans of the pendant by the three-dimensional scanner; the image data of fig. 8 does not contain any normal vector. The three-dimensional image of the object shown in fig. 9 is obtained through the processing of the methods in the embodiments of the present application. It can be seen that the texture in fig. 9 is clear: fig. 9 is clearly recognizable as a dragon pendant, and its details are essentially consistent with the camera image of fig. 7.
In practice, the overall algorithm logic is very complex: a single point landing where it should not, or deviating, yields a poor point cloud, because errors accumulate continuously as scanning proceeds. In the debugging stage, if the directions and positions of some three-dimensional points are suddenly found to be problematic, detection shows that the problematic points differ each time, because the errors accumulate and propagate; therefore the data of every frame must be checked, and since each frame directly contains tens of thousands of points, several hundred thousand data items are checked per frame. Moreover, the detection data include not only the coordinates and directions of the point cloud but also the covariance matrix, the projection vectors of each frame, and the computation of the matrix eigenvalues. When every point of a point cloud has a unique direction, the point cloud can present the detailed texture of the surface, even including properties such as reflection and occlusion, so that the real-time display effect is close to real observation.
In one embodiment, a method for processing point cloud data includes:
step (a 1), obtaining a current frame; the current frame is a three-dimensional point cloud image obtained by multi-line laser scanning of an object.
And (a 2) extracting the laser line in the current frame.
Step (a 3), in a search grid formed based on two three-dimensional points on a laser line, for surrounding voxels in each search grid, obtaining a first equation, the first equation representing that the first vector is parallel to the second vector; the first vector is a vector formed by the first three-dimensional point and surrounding voxels after moving a distance parameter along the first projection vector; the second vector is a vector formed by the second three-dimensional point and surrounding voxels after moving a distance parameter along the second projection vector; the first three-dimensional point and the second three-dimensional point are three-dimensional points on the laser line.
And (a 4) substituting coordinates of the surrounding voxels, the first three-dimensional point and the second three-dimensional point into a first equation, and solving to obtain a distance value.
And (a 5) moving the first three-dimensional point along the first projection vector by a distance value to obtain a first projection point.
And (a 6) moving the second three-dimensional point along the second projection vector by a distance value to obtain a second projection point.
And (a 7) determining a position scaling factor based on the distance between the surrounding voxels and the first projection point and the distance between the first projection point and the second projection point.
And (a 8) determining a target projection vector of surrounding voxels projected onto the corresponding laser line based on the position scaling factor, the first projection vector and the second projection vector.
And (a 9) projecting the surrounding voxels onto the laser line between the two three-dimensional points by moving the distance value along the target projection vector.
And (a 10) acquiring laser line tangent vectors of the crossed laser lines corresponding to the laser line crossed parts when the number of the laser lines projected by surrounding voxels is at least two.
And (a 11) obtaining the weight of each laser line tangent vector.
And (a 12) determining a normal vector of the intersecting part of the laser lines based on the laser line tangent vector of the intersecting laser lines and the corresponding weights.
Step (a 13), corresponding to each intersecting voxel, determining a normal vector of the intersecting portion of the laser line.
And (a 14), when more than one set of voxel projection data is obtained by projecting the intersecting voxel onto the same laser line, taking the set of data closest to the laser line as the voxel projection data obtained by projection onto that laser line.
And (a 15) obtaining laser line projection data obtained by projecting the intersecting voxels to the corresponding laser line. The laser line projection data includes a laser line projection vector.
Step (a 16), determining the sum of the projection vectors of the laser lines.
And (a 17) projecting the sum of the projection vectors of the laser lines to the intersection part of the laser lines along the normal vector to obtain the projection vector of the voxels.
Step (a 18), the current position of the camera is acquired.
Step (a 19) of determining a current camera view based on the current position of the camera and the position of the intersecting voxels.
And (a 20) determining the included angle between the current camera vision and the voxel projection vector.
And (a 21) taking a vector opposite to the direction of the voxel projection vector as a voxel normal vector when the included angle is larger than 90 degrees.
And (a 22) taking the vector with the same direction as the voxel projection vector as the voxel normal vector when the included angle is smaller than or equal to 90 degrees.
And (a 23) adding the coordinates of the intersecting voxels to the voxel projection vector to obtain the target position of the intersecting voxels.
And (a 24) displaying the three-dimensional image of the object based on the target position and the corresponding voxel normal vector.
In this embodiment, a three-dimensional point cloud image of the current frame is obtained and the laser lines in the current frame are extracted, so that voxel data at the laser line intersections are determined and only voxels at the intersections are subsequently processed, which reduces the amount of data to be processed. For each intersecting voxel, the intersecting voxel is projected to the laser line intersection to obtain voxel projection data, then the voxel normal vector of the intersecting voxel is determined, and a three-dimensional image of the object is displayed based on the voxel projection data and the voxel normal vector. Multi-line laser scanning thus displays the detailed texture of the object surface; the quality of the three-dimensional image of the object is good, and the real-time display effect is close to that observed by the naked eye.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in sequence as indicated by the arrows, and steps (a 1) to (a 24) are numbered in sequence, these steps are not necessarily performed in the order indicated by the arrows or numerals. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, a block diagram of a point cloud data processing device in one embodiment is shown. Fig. 10 provides a point cloud data processing apparatus, which may employ software modules or hardware modules, or a combination of both, as part of a computer device, and the apparatus specifically includes: a current frame acquisition module 1002, a laser line extraction module 1004, a cross voxel determination module 1006, a voxel projection module 1008, a voxel normal vector determination module 1010, and a three-dimensional image display module 1012, wherein:
A current frame acquisition module 1002, configured to acquire a current frame; the current frame is a three-dimensional point cloud picture obtained by multi-line laser scanning of an object;
a laser line extraction module 1004, configured to extract a laser line in a current frame;
an intersecting voxel determination module 1006 for determining intersecting voxels at the intersection of the laser lines based on the laser lines;
a voxel projection module 1008 for projecting the intersecting voxels to the laser line intersecting portion corresponding to the intersecting voxels to obtain voxel projection data;
a voxel normal vector determination module 1010 for determining voxel normal vectors of intersecting voxels based on the voxel projection data;
a three-dimensional image display module 1012 for displaying a three-dimensional image of the object based on the voxel projection data and the voxel normal vector.
In this embodiment, a three-dimensional point cloud image of the current frame is obtained and the laser lines in the current frame are extracted, so that voxel data at the laser line intersections are determined and only voxels at the intersections are subsequently processed, which reduces the amount of data to be processed. For each intersecting voxel, the intersecting voxel is projected to the laser line intersection to obtain voxel projection data, then the voxel normal vector of the intersecting voxel is determined, and a three-dimensional image of the object is displayed based on the voxel projection data and the voxel normal vector. Multi-line laser scanning thus displays the detailed texture of the object surface; the quality of the three-dimensional image of the object is good, and the real-time display effect is close to that observed by the naked eye.
In one embodiment, the cross voxel determination module 1006 is configured to: for surrounding voxels of the laser line, projecting the surrounding voxels onto the corresponding laser line; when the number of laser lines projected by the surrounding voxels is at least two, the surrounding voxels are determined to be intersecting voxels located at the intersecting portion of the laser lines.
In this embodiment, the intersecting voxels are voxels at the intersection of at least two laser lines; if a surrounding voxel can be projected onto at least two laser lines, this indicates that it is an intersecting voxel. The intersecting voxels can thus be determined accurately, and voxel projection data corresponding to the surrounding voxels are obtained after the projection, facilitating subsequent three-dimensional image processing.
In one embodiment, voxel projection module 1008 is to: within a search grid formed based on two three-dimensional points on a laser line, surrounding voxels in each search grid are projected onto the laser line between the two three-dimensional points.
In this embodiment, through analysis, the two three-dimensional points are the closest three-dimensional points to the surrounding voxels, the surrounding voxels should be projected onto a line segment between the two three-dimensional points, and the surrounding voxels in each search grid are projected onto the laser line between the two three-dimensional points based on the search grids formed by the two three-dimensional points on the laser line, so that the accurate projection can be ensured, and the accuracy of the subsequent data can be ensured.
In one embodiment, the voxel projection module 1008 is configured to: obtain a first equation, wherein the first equation represents that the first vector is parallel to the second vector; the first vector is the vector formed between the surrounding voxel and the first three-dimensional point after the first three-dimensional point has moved a distance parameter along the first projection vector; the second vector is the vector formed between the surrounding voxel and the second three-dimensional point after the second three-dimensional point has moved the distance parameter along the second projection vector; the first three-dimensional point and the second three-dimensional point are three-dimensional points on the laser line;
substituting coordinates of surrounding voxels, the first three-dimensional point and the second three-dimensional point into a first equation, and solving to obtain a distance value;
moving the first three-dimensional point along the first projection vector by a distance value to obtain a first projection point;
moving the second three-dimensional point along the second projection vector by a distance value to obtain a second projection point;
determining a location scaling factor based on the distance between the surrounding voxel and the first projection point and the distance between the first projection point and the second projection point;
determining a target projection vector of surrounding voxels projected onto a corresponding laser line based on the position scaling factor, the first projection vector and the second projection vector;
the surrounding voxels are moved along the target projection vector by the distance value, so that they are projected onto the laser line between the two three-dimensional points.
In this embodiment, analysis shows that the projection of a surrounding voxel onto the laser line satisfies a definite geometric relationship. By constructing an equation for the geometric relationship among the surrounding voxel, the first three-dimensional point, and the second three-dimensional point, the surrounding voxel can be accurately projected onto the laser line between the two points; the intersecting voxels can thus be determined and their voxel projection data obtained in advance, so that only the intersecting voxels need to be processed subsequently, which improves the processing efficiency.
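The construction above can be sketched numerically. This is a 2-D sketch under our own reading of the geometry, with our own symbol names (none appear in the source): Q is the surrounding voxel, A and B the first and second three-dimensional points, u and v the first and second projection vectors, taken here as pointing from the line toward the side on which the voxel lies. The parallelism condition becomes a polynomial in the distance parameter t:

```python
import numpy as np

def cross2(a, b):
    """Scalar (z-component) cross product of 2-D vectors."""
    return a[0] * b[1] - a[1] * b[0]

def project_voxel(Q, A, B, u, v):
    """Project surrounding voxel Q onto the laser line segment A-B.
    Returns (distance value t, projected point)."""
    a0, b0 = A - Q, B - Q
    # Parallelism of (A + t*u - Q) and (B + t*v - Q):
    # cross2(a0 + t*u, b0 + t*v) = 0 is at most quadratic in t.
    coeffs = [cross2(u, v), cross2(a0, v) + cross2(u, b0), cross2(a0, b0)]
    roots = np.roots(coeffs)
    # Choose the smallest positive real root as the distance value.
    t = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    A1 = A + t * u                     # first projection point
    B1 = B + t * v                     # second projection point
    # Position scaling factor: where Q sits between the projection points.
    s = np.linalg.norm(Q - A1) / np.linalg.norm(B1 - A1)
    w = (1.0 - s) * u + s * v          # target projection vector
    # Under our sign convention (u, v point toward the voxel), moving Q
    # by the distance value toward the line means subtracting t*w.
    return t, Q - t * w
```

For a voxel at (1, 0.5) above the segment from (0, 0) to (2, 0) with both projection vectors equal to (0, 1), the solved distance value is 0.5 and the voxel lands at (1, 0) on the line.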
In one embodiment, voxel projection module 1008 is configured to: determining a normal vector of the laser line crossing part; and projecting the crossed voxels to the laser line crossing part along the normal vector to obtain voxel projection data.
In this embodiment, a normal vector of the laser line crossing portion is determined and the intersecting voxels are projected onto the crossing portion along that normal vector to obtain the voxel projection data; the voxels are thus orthographically projected onto the laser line crossing portion, which improves the accuracy of the voxel projection data.
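Projecting along a known normal is a standard point-to-plane projection. A minimal sketch, assuming the crossing portion is locally approximated by a plane through a point `p0` with normal `n` (the point and normal names are our own):

```python
import numpy as np

def project_along_normal(voxel, p0, n):
    """Orthographically project a voxel onto the crossing portion,
    locally approximated as the plane through p0 with normal n."""
    n = n / np.linalg.norm(n)          # unit normal
    d = np.dot(voxel - p0, n)          # signed distance to the plane
    return voxel - d * n               # foot of the perpendicular
```

A voxel at (1, 1, 3) projected onto the z = 0 plane along its normal lands at (1, 1, 0).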
In one embodiment, the voxel projection module 1008 is configured to: acquire the laser line tangent vectors of the intersecting laser lines corresponding to the laser line crossing portion, and determine the normal vector of the laser line crossing portion based on the laser line tangent vectors of the intersecting laser lines.
In this embodiment, during laser scanning the laser line crossing portion is not necessarily planar and is in most cases a curved surface. The normal vector of the crossing portion is therefore calculated from the laser line tangent vectors of the intersecting laser lines corresponding to that crossing portion, so that the intersecting voxels are projected perpendicularly onto the crossing portion; the projection is more accurate, and the three-dimensional image of the object is more faithful.
In one embodiment, voxel projection module 1008 is configured to obtain weights for each laser line tangent vector; a normal vector of the laser line intersection is determined based on the laser line tangent vector of the intersecting laser line and the corresponding weights.
In this embodiment, repeated experiments during multi-line laser scanning show that the image quality can be improved further: compared with computing the normal directly from the laser line tangent vectors, weighting each laser line tangent vector yields a better three-dimensional image.
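The patent does not give the weighting formula, so the following is only one plausible sketch: for each pair of intersecting-line tangent vectors, a candidate normal is the normalized cross product of the pair, and the candidates are blended using the product of the two lines' weights (sign consistency between pairs would need extra handling in practice):

```python
import numpy as np
from itertools import combinations

def crossing_normal(tangents, weights):
    """Blend pairwise cross products of laser line tangent vectors,
    each pair weighted by the product of its two line weights, into
    a single unit normal for the crossing portion."""
    n = np.zeros(3)
    for i, j in combinations(range(len(tangents)), 2):
        pair = np.cross(tangents[i], tangents[j])
        length = np.linalg.norm(pair)
        if length > 1e-12:             # skip near-parallel tangents
            n += weights[i] * weights[j] * (pair / length)
    return n / np.linalg.norm(n)
```

For two perpendicular tangents along x and y with equal weights, the blended normal is the z axis, as expected for a locally flat crossing.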
In one embodiment, the voxel projection data includes a voxel projection vector, and the voxel projection module 1008 is further configured to: acquire the laser line projection data obtained by projecting the intersecting voxels onto the corresponding laser lines, the laser line projection data including laser line projection vectors; determine the sum of the laser line projection vectors; and project the sum of the laser line projection vectors onto the laser line crossing portion along the normal vector to obtain the voxel projection vector.
In this embodiment, the laser line projection vectors obtained by projecting an intersecting voxel onto the corresponding laser lines are acquired, their sum is determined, and the sum of the laser line projection vectors is projected onto the laser line crossing portion along the normal vector; the intersecting voxel is thereby orthographically projected onto the crossing portion, the true projected position of the voxel can be determined, and a three-dimensional image with a good effect is obtained.
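Projecting the summed vector "along the normal" can be read as keeping only its component in the normal direction. A sketch under that reading (the function and argument names are our own):

```python
import numpy as np

def voxel_projection_vector(line_projection_vectors, n):
    """Sum the per-line projection vectors, then keep only the
    component of the sum along the crossing-portion normal n."""
    n = n / np.linalg.norm(n)                      # unit normal
    total = np.sum(line_projection_vectors, axis=0)
    return np.dot(total, n) * n                    # projection onto n
```

Two per-line vectors whose tangential components cancel, (1, 0, 1) and (-1, 0, 1), leave a voxel projection vector of (0, 0, 2) along a z-axis normal.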
In one embodiment, the voxel projection module 1008 is configured to: when more than one set of voxel projection data is obtained by projecting an intersecting voxel onto the same laser line, take the set of data closest to the laser line as the voxel projection data obtained by projection onto that laser line.
In this embodiment, if more than one set of data produced by projecting an intersecting voxel onto the same laser line were all taken into account, the projection accuracy would suffer; taking the set of data closest to the laser line as the voxel projection data for that laser line therefore improves the accuracy of the three-dimensional scan and the quality of the three-dimensional image.
In one embodiment, the voxel normal vector determination module 1010 is configured to: acquire the current position of the camera; determine the current camera line of sight based on the current position of the camera and the position of the intersecting voxel; determine the included angle between the current camera line of sight and the voxel projection vector; when the included angle is larger than 90 degrees, take a vector opposite in direction to the voxel projection vector as the voxel normal vector; and when the included angle is smaller than or equal to 90 degrees, take a vector in the same direction as the voxel projection vector as the voxel normal vector.
In this embodiment, the current camera line of sight is determined based on the current position of the camera and the position of the intersecting voxel, and the voxel normal vector is then determined from the included angle between the current camera line of sight and the voxel projection vector, so that the voxel normal vector points toward the area where the camera is located, improving the imaging effect of the three-dimensional image.
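This is the common "orient the normal toward the viewer" test. A sketch of the angle rule; the source does not fix the direction of the sight vector, so we assume it runs from the voxel to the camera, which makes the 90-degree rule come out as stated:

```python
import numpy as np

def oriented_voxel_normal(camera_pos, voxel_pos, proj_vec):
    """Orient the voxel normal toward the camera. An included angle
    above 90 degrees corresponds to a negative dot product between
    the sight vector and the projection vector, in which case the
    projection vector is flipped."""
    sight = camera_pos - voxel_pos     # assumed voxel -> camera direction
    return proj_vec if np.dot(sight, proj_vec) >= 0 else -proj_vec
```

With the camera on the +z axis, both (0, 0, 1) and (0, 0, -1) projection vectors yield a normal of (0, 0, 1), i.e. facing the camera.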
In one embodiment, the three-dimensional image display module 1012 is configured to add the coordinates of the intersecting voxels to the voxel projection vectors to obtain the target location of the intersecting voxels; a three-dimensional image of the object is displayed based on the target location and the corresponding voxel normal vector.
In this embodiment, adding the voxel projection vector to the coordinates of an intersecting voxel gives the target position of that voxel, and the three-dimensional image of the object is displayed based on the target position and the corresponding voxel normal vector, so that the shading of the three-dimensional image is more distinct and the imaging effect is improved.
For the specific limitations of the point cloud data processing device, reference may be made to the limitations of the point cloud data processing method above, which are not repeated here. The respective modules in the point cloud data processing device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal device whose internal structure may be as shown in FIG. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a point cloud data processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the methods of the above embodiments may be accomplished by a computer program stored in a non-volatile computer-readable storage medium which, when executed, may include the steps of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the application.

Claims (14)

1. A method for processing point cloud data, the method comprising:
acquiring a current frame; the current frame is a three-dimensional point cloud picture obtained by multi-line laser scanning of an object;
extracting a laser line in the current frame;
determining crossing voxels at crossing portions of the laser line based on the laser line;
corresponding to each crossed voxel, projecting the crossed voxels to the laser line crossed part to obtain voxel projection data;
determining a voxel normal vector of the intersected voxels based on the voxel projection data;
and displaying a three-dimensional image of the object based on the voxel projection data and the voxel normal vector.
2. The method of claim 1, wherein the determining intersection voxels at the intersection of the laser lines based on the laser lines comprises:
for surrounding voxels of the laser line, projecting the surrounding voxels onto a corresponding laser line;
and when the number of the laser lines projected by the surrounding voxels is at least two, determining the surrounding voxels as crossing voxels positioned at the crossing part of the laser lines.
3. The method of claim 2, wherein for surrounding voxels of the laser line, projecting the surrounding voxels onto a corresponding laser line comprises:
Within a search grid formed based on two three-dimensional points on the laser line, for surrounding voxels in each of the search grids, projecting the surrounding voxels onto the laser line between the two three-dimensional points.
4. A method according to claim 3, wherein said projecting said surrounding voxels onto a laser line between said two three-dimensional points comprises:
obtaining a first equation, wherein the first equation represents that the first vector is parallel to the second vector; the first vector is a vector formed by the first three-dimensional point and the surrounding voxels after moving a distance parameter along a first projection vector; the second vector is a vector formed by the second three-dimensional point and the surrounding voxels after the distance parameter is moved along a second projection vector; the first three-dimensional point and the second three-dimensional point are three-dimensional points on the laser line;
substituting coordinates of the surrounding voxels, the first three-dimensional point and the second three-dimensional point into the first equation, and solving to obtain a distance value;
moving the first three-dimensional point by the distance value along the first projection vector to obtain a first projection point;
moving the second three-dimensional point by the distance value along the second projection vector to obtain a second projection point;
Determining a location scaling factor based on the distance between the surrounding voxel and the first projection point and the distance between the first projection point and the second projection point;
determining a target projection vector of the surrounding voxels projected onto the corresponding laser line based on the position scaling factor, the first projection vector and the second projection vector;
and moving the surrounding voxels along the target projection vector by the distance value to project onto a laser line between the two three-dimensional points.
5. The method of claim 1, wherein the projecting the intersecting voxels to the laser line intersecting portion to obtain voxel projection data comprises:
determining a normal vector of the laser line crossing portion;
and projecting the crossed voxels to the laser line crossing part along the normal vector to obtain voxel projection data.
6. The method of claim 5, wherein said determining a normal vector of the laser line intersection comprises:
acquiring a laser line tangent vector of a crossed laser line corresponding to the laser line crossing part;
a normal vector of the laser line intersection is determined based on the laser line tangent vector of the intersecting laser line.
7. The method of claim 6, wherein the determining a normal vector of the laser line intersection based on the laser line tangent vector of the intersecting laser line comprises:
acquiring the weight of each laser line tangent vector;
and determining the normal vector of the intersecting part of the laser lines based on the laser line tangent vector of the intersecting laser lines and the corresponding weight.
8. The method of claim 5, wherein the voxel projection data comprises a voxel projection vector;
said projecting said intersecting voxels along said normal vector to said laser line intersection portion to obtain voxel projection data comprising:
acquiring laser line projection data obtained by projecting the intersecting voxels to the corresponding laser lines; the laser line projection data includes a laser line projection vector;
determining a sum of the laser line projection vectors;
and projecting the sum of the laser line projection vectors to the laser line intersection part along the normal vector to obtain a voxel projection vector.
9. The method of claim 8, wherein the acquiring laser line projection data resulting from projecting the intersecting voxels into corresponding laser lines comprises:
when more than one group of voxel projection data is obtained by projecting the intersecting voxels onto the same laser line, taking a group of data closest to the laser line as the voxel projection data obtained by projection onto the corresponding laser line.
10. The method of claim 1, wherein the voxel projection data comprises a voxel projection vector;
the determining a voxel normal vector of the intersected voxels based on the voxel projection data comprises:
acquiring the current position of a camera;
determining a current camera view based on the current position of the camera and the position of the intersecting voxel;
determining an included angle between the current camera line of sight and the voxel projection vector;
when the included angle is larger than 90 degrees, a vector opposite to the direction of the voxel projection vector is taken as a voxel normal vector;
and when the included angle is smaller than or equal to 90 degrees, taking a vector which is the same as the direction of the voxel projection vector as a voxel normal vector.
11. The method of claim 1, wherein the voxel projection data comprises a voxel projection vector;
the displaying the three-dimensional image of the object based on the voxel projection data and the voxel normal vector comprises:
Adding the coordinates of the crossed voxels to the voxel projection vector to obtain the target position of the crossed voxels;
and displaying a three-dimensional image of the object based on the target position and the corresponding voxel normal vector.
12. A point cloud data processing apparatus, the apparatus comprising:
the current frame acquisition module is used for acquiring a current frame; the current frame is a three-dimensional point cloud picture obtained by multi-line laser scanning of an object;
a laser line extraction module, configured to extract a laser line in the current frame;
an intersecting voxel determining module for determining an intersecting voxel at an intersecting portion of the laser line based on the laser line;
the voxel projection module is used for projecting the intersecting voxels to the laser line intersecting part corresponding to each intersecting voxel to obtain voxel projection data;
a voxel normal vector determination module for determining a voxel normal vector of the intersecting voxels based on the voxel projection data;
and the three-dimensional image display module is used for displaying a three-dimensional image of the object based on the voxel projection data and the voxel normal vector.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 11 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 11.
CN202310511118.XA 2023-05-06 2023-05-06 Point cloud data processing method and device, computer equipment and storage medium Pending CN116612254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310511118.XA CN116612254A (en) 2023-05-06 2023-05-06 Point cloud data processing method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116612254A true CN116612254A (en) 2023-08-18

Family

ID=87677398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310511118.XA Pending CN116612254A (en) 2023-05-06 2023-05-06 Point cloud data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116612254A (en)

Similar Documents

Publication Publication Date Title
CN111639626B (en) Three-dimensional point cloud data processing method and device, computer equipment and storage medium
US10636168B2 (en) Image processing apparatus, method, and program
JP5248806B2 (en) Information processing apparatus and information processing method
US9972120B2 (en) Systems and methods for geometrically mapping two-dimensional images to three-dimensional surfaces
JP6740033B2 (en) Information processing device, measurement system, information processing method, and program
US10930008B2 (en) Information processing apparatus, information processing method, and program for deriving a position orientation of an image pickup apparatus using features detected from an image
US11490062B2 (en) Information processing apparatus, information processing method, and storage medium
JPWO2019035155A1 (en) Image processing system, image processing method, and program
JP2017129992A (en) Information processor and control method thereof
CN113361365B (en) Positioning method, positioning device, positioning equipment and storage medium
CN114782646A (en) House model modeling method and device, electronic equipment and readable storage medium
US11475508B2 (en) Methods and systems for evaluating a size of a garment
CN113763478A (en) Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN111435069B (en) Method and device for measuring volume
JP2022027111A (en) Measurement processing device, method and program
JP2021189946A (en) Detection apparatus, detection method, and detection program
CN116485902A (en) Mark point matching method, device, computer equipment and storage medium
CN116612254A (en) Point cloud data processing method and device, computer equipment and storage medium
JP2015118101A (en) Information processing device and method and program
CN115100257A (en) Sleeve alignment method and device, computer equipment and storage medium
CN114882194A (en) Method and device for processing room point cloud data, electronic equipment and storage medium
CN113298883A (en) Method, electronic device and storage medium for calibrating a plurality of cameras
JP2005140623A (en) Image measuring method and instrument, program, and recording medium
JP2006300656A (en) Image measuring technique, device, program, and recording medium
JP2020187626A (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination