CN116612253A - Point cloud fusion method, device, computer equipment and storage medium - Google Patents

Point cloud fusion method, device, computer equipment and storage medium

Info

Publication number
CN116612253A
Authority
CN
China
Prior art keywords
voxel
fusion
laser line
current
voxels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310505477.4A
Other languages
Chinese (zh)
Inventor
Name not published at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xhorse Electronics Co Ltd
Original Assignee
Shenzhen Xhorse Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xhorse Electronics Co Ltd
Priority to CN202310505477.4A
Publication of CN116612253A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a point cloud fusion method, a point cloud fusion device, computer equipment and a storage medium. The method comprises the following steps: acquiring laser lines in a current frame, the current frame being a three-dimensional point cloud image obtained by multi-line laser scanning; determining, based on the laser lines, intersecting voxels located at the current laser line intersection; for each intersecting voxel, projecting the intersecting voxel onto the current laser line intersection to obtain current voxel projection data; acquiring a backward frame, the backward frame being a three-dimensional point cloud image scanned after the current frame; determining backward voxel projection data of the intersecting voxels in the backward frame; for each intersecting voxel, performing point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and a fusion normal vector of a fusion frame; and obtaining the fusion frame based on the fusion projection data and the fusion normal vector. With this method, the quality of the three-dimensional image can be improved without affecting the real-time performance of scanning.

Description

Point cloud fusion method, device, computer equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a point cloud fusion method, a point cloud fusion device, computer equipment and a storage medium.
Background
In three-dimensional laser scanning, the computer displays the point cloud computed from the scan in real time, so the user can intuitively see the real-time effect and the scanning position. Without further processing, the computed real-time point cloud contains no direction information, and details cannot be distinguished in the on-screen display. When every point of the point cloud has a unique direction, the point cloud can present the detailed surface texture, and even properties such as reflection and occlusion, so that the real-time display approaches what is actually seen. The process of giving the real-time point cloud a direction is called point cloud fusion. Point cloud fusion has high technical requirements, complex logic and workflow, and is difficult to verify and debug. As a result, conventional point cloud fusion approaches have poor real-time performance.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a point cloud fusion method, apparatus, computer device, and storage medium that can improve three-dimensional image quality without affecting the real-time performance of scanning.
A method of point cloud fusion, the method comprising:
acquiring laser lines in a current frame; the current frame is a three-dimensional point cloud image obtained based on multi-line laser scanning;
determining, based on the laser lines, intersecting voxels at the current laser line intersection;
for each intersecting voxel, projecting the intersecting voxel onto the current laser line intersection to obtain current voxel projection data;
acquiring a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame;
determining backward voxel projection data of the intersecting voxels in the backward frame;
for each intersecting voxel, performing point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and a fusion normal vector of a fusion frame;
and obtaining the fusion frame based on the fusion projection data and the fusion normal vector.
A point cloud fusion device, the device comprising:
a laser line processing module, configured to acquire laser lines in the current frame; the current frame is a three-dimensional point cloud image obtained based on multi-line laser scanning;
a voxel projection data determining module, configured to determine, based on the laser lines, intersecting voxels at the current laser line intersection, and, for each intersecting voxel, to project the intersecting voxel onto the current laser line intersection to obtain current voxel projection data;
a point cloud image acquisition module, configured to acquire a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame;
the voxel projection data determining module is further configured to determine backward voxel projection data of the intersecting voxels in the backward frame;
a fusion frame data determining module, configured to perform, for each intersecting voxel, point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and a fusion normal vector of a fusion frame;
and a fusion frame acquisition module, configured to obtain the fusion frame based on the fusion projection data and the fusion normal vector.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the point cloud fusion method embodiments when the computer program is executed.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of an embodiment of a point cloud fusion method.
According to the point cloud fusion method, device, computer equipment and storage medium above, the three-dimensional point cloud image of the current frame is obtained and the laser lines in it are extracted, so that voxel data at the laser line intersections can be determined; only voxels at the laser line intersections are processed subsequently, which reduces the amount of data to be processed. For each intersecting voxel, the voxel is projected onto the laser line intersection to obtain voxel projection data and a voxel normal vector, backward voxel projection data of the intersecting voxel in the backward frame is obtained, point cloud fusion is performed based on the current and backward voxel projection data to obtain the fusion projection data and fusion normal vector of the fusion frame, and the fusion frame is obtained from them. With multi-line laser scanning, the detailed surface texture of the object can be displayed, the three-dimensional image quality is good, and the real-time display effect is close to what the naked eye sees; at the same time, the amount of data is reasonable and the computational complexity is low, so the real-time performance of three-dimensional scanning is not affected.
Drawings
FIG. 1 is an application environment diagram of a point cloud fusion method in one embodiment;
FIG. 2 is a flow diagram of a point cloud fusion method according to one embodiment;
FIG. 3 is a schematic diagram of a current frame in one embodiment;
FIG. 4 is a schematic diagram of a current camera view and fusion normal vector in one embodiment;
FIG. 5 is a schematic view of surrounding voxels projected onto a laser line in one embodiment;
FIG. 6 is a schematic diagram of voxels projected onto the same laser line in one embodiment;
FIG. 7 is a schematic diagram of a memory module in one embodiment;
FIG. 8 is a schematic diagram of an object to be scanned in one embodiment;
FIG. 9 is an original three-dimensional point cloud image of a gantry scanned by a three-dimensional scanner in one embodiment;
FIG. 10 is a schematic diagram of a fused frame in one embodiment;
FIG. 11 is a schematic diagram of a reconstructed fusion frame in one embodiment;
FIG. 12 is a schematic view of an object to be scanned in another embodiment;
FIG. 13 is an original three-dimensional point cloud image of a stamp scanned by a three-dimensional scanner in one embodiment;
FIG. 14 is a schematic diagram of a fused frame in another embodiment;
FIG. 15 is a schematic diagram of a fusion frame in yet another embodiment;
FIG. 16 is a block diagram of a point cloud fusion device in one embodiment;
Fig. 17 is an internal structural view of a computer device in one embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without any inventive effort, are intended to be within the scope of the application.
It should be noted that, in the embodiments of the present application, all directional indicators (such as up, down, left, right, front and rear) are used only to explain the relative positional relationship and movement of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly. A connection may be a direct connection or an indirect connection.
Furthermore, descriptions such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only if the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present application.
The terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first three-dimensional point may be referred to as a second three-dimensional point, and similarly, a second three-dimensional point may be referred to as a first three-dimensional point, without departing from the scope of the application. Both the first three-dimensional point and the second three-dimensional point are three-dimensional points, but they are not the same three-dimensional point.
It is to be understood that in the following embodiments, "connected" is understood to mean "electrically connected", "communicatively connected", etc., if the connected circuits, modules, units, etc., have electrical or data transfer between them.
The point cloud fusion method provided by the application can be applied to an application environment as shown in fig. 1. FIG. 1 is an application environment diagram of a point cloud fusion method in one embodiment. The computer device 110 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device. The three-dimensional laser scanner 120 includes two cameras and two multi-line laser emitters. Each multi-line laser emitter emits a plurality of laser lines, and the laser lines emitted by the two emitters intersect. The cameras of the three-dimensional laser scanner are used to capture images. The three-dimensional laser scanner 120 works as follows: it emits a plurality of intersecting laser lines onto the object, a three-dimensional point cloud image is obtained for each frame from the cameras, and the preceding and following frames are fused to obtain a fusion frame. The computer device acquires enough three-dimensional point cloud images, fuses them repeatedly, and the final fusion frame is the three-dimensional image of the object.
Much research exists on fusing point clouds or depth maps, but almost all of it is carried out on depth maps. Since the embodiments of the present application are applied to multi-line laser scanning, a complete depth map cannot be presented in real time, so depth-map fusion cannot be applied directly. Little research currently addresses point cloud fusion for line lasers, which is one of the unfavorable factors. The difficulty of publishing related papers and their application scenarios also illustrate how technically demanding point cloud fusion is.
Next, taking three-dimensional scanning of multi-line laser as an example, the overall scheme of fusion in the embodiment of the present application will be described. Fig. 2 is a schematic flow chart of a point cloud fusion method according to an embodiment, which includes the following steps:
step 202, obtaining a laser line in a current frame; the current frame is a three-dimensional point cloud image obtained based on multi-line laser scanning.
Specifically, the current frame is a three-dimensional point cloud image obtained based on multi-line laser scanning of an object. The multi-line laser is a plurality of intersecting laser lines emitted by a three-dimensional laser scanner. The object may specifically be any object existing in nature, such as various models, plants, animals, sceneries, and the like. The three-dimensional laser scanner scans the object to obtain a three-dimensional point cloud image, the three-dimensional point cloud image is transmitted to computer equipment, and the computer equipment acquires a current frame.
Alternatively, the current frame is obtained by combining the three-dimensional point cloud images scanned at two successive moments. Fig. 3 is a schematic diagram of a current frame in one embodiment: fig. 3 (a) is the three-dimensional point cloud obtained by the earlier scan, fig. 3 (b) is the three-dimensional point cloud obtained by the later scan, and fig. 3 (c) is the combined current frame. It will be appreciated that such a combined frame yields more surrounding voxels and intersecting voxels than a single scan, so more intersections are obtained and the cost of the three-dimensional scanner can be reduced. The computer device extracts the laser lines in the current frame using a laser line extraction algorithm.
At step 204, intersecting voxels at the intersection of the laser lines are determined based on the laser lines.
The laser line crossing portion may be a plane, a curved surface or a body formed within a certain range of laser line crossing. The number of intersecting laser lines is at least two. The intersecting voxels are voxels located at the intersection of the laser lines.
Optionally, the computer device may obtain the same three-dimensional point coordinates from the coordinate sets corresponding to the laser lines, where voxels corresponding to the same three-dimensional point coordinates are intersecting voxels, and the surfaces around the three-dimensional point coordinates are intersecting portions of the laser lines. For example, the laser line 1 corresponds to the coordinate set a, and the laser line 2 corresponds to the coordinate set B, and when the coordinate set a and the coordinate set B contain the same three-dimensional point coordinates, the voxel corresponding to the three-dimensional point coordinates is the intersecting voxel, and the surrounding surface is the intersecting portion of the laser line.
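As an illustration of the step above, the following is a minimal sketch (not part of the original disclosure) of how intersecting voxels could be identified, assuming each laser line's surrounding voxels are represented as a set of integer voxel-grid indices; the function name and data layout are illustrative only.

```python
def find_intersecting_voxels(line_voxel_sets):
    """Return voxel indices that belong to the surroundings of two or more laser lines."""
    counts = {}
    for voxel_set in line_voxel_sets:       # one set of (i, j, k) indices per laser line
        for idx in voxel_set:
            counts[idx] = counts.get(idx, 0) + 1
    # A voxel shared by at least two laser lines lies at a laser line intersection.
    return {idx for idx, c in counts.items() if c >= 2}

# Example: voxel (5, 5, 5) appears for both lines, so it is an intersecting voxel.
line_1 = {(5, 5, 5), (5, 6, 5)}
line_2 = {(5, 5, 5), (4, 5, 5)}
print(find_intersecting_voxels([line_1, line_2]))  # {(5, 5, 5)}
```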
Step 206, for each intersecting voxel, projecting the intersecting voxel onto the current laser line intersection to obtain the current voxel projection data.
Specifically, the voxel projection data may include a voxel projection vector, which is a vector obtained by vertically projecting intersecting voxels to the laser line intersecting portion. Voxel projection data may include the target location, i.e. the location of the final projection point. The current voxel projection data may comprise the current voxel projection vector, the current target location of the intersecting voxel. The voxel projection data may also include intermediate data generated during the projection process, and the like. Corresponding to each intersecting voxel, the computer device vertically projects the intersecting voxel to the current laser line intersecting portion to obtain current voxel projection data.
Step 208, obtaining a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame.
Specifically, the backward frame may be the three-dimensional point cloud image of a frame subsequent to the current frame. If the current frame is obtained by combining the three-dimensional point cloud images scanned at two successive moments, for example at times t1 and t2, the backward frame can be synthesized from the three-dimensional point cloud images at times t2 and t3, where t1, t2 and t3 are in chronological order.
At step 210, backward voxel projection data of intersecting voxels in the backward frame is determined.
It will be appreciated that the backward voxel projection data is calculated in the same manner as the current voxel projection data.
Specifically, a laser line in a backward frame is acquired; determining crossing voxels at crossing portions of the backward laser line based on the laser line in the backward frame; corresponding to each intersecting voxel, the intersecting voxels are projected to the backward laser line intersecting portion, and backward voxel projection data is obtained. That is, the intersecting voxels that require voxel projection data to be acquired are located at the current laser line intersection as well as at the backward laser line intersection.
Step 212, for each intersecting voxel, performing point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and a fusion normal vector of the fusion frame.
The process of giving the real-time point cloud a direction is called point cloud fusion. The point cloud fusion can be widely applied to three-dimensional scanning products. The fused projection data of the fused frame includes fused projection data of each intersecting voxel.
In particular, the fused projection data includes a target location of the intersecting voxels in the fused frame. The fused projection data may also include fused projection vectors. For the intersecting voxels at the current laser line intersecting portion and the backward laser line intersecting portion, the computer equipment performs point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data of a fusion frame; a fusion normal vector for the intersecting voxel is determined based on the fusion projection data.
Step 214, obtaining a fusion frame based on the fusion projection data and the fusion normal vector.
Specifically, the computer device constructs the intersection voxel based on the fused projection data and the fused normal vector, thereby obtaining a fused frame.
Alternatively, the computer device may perform three-dimensional point cloud reconstruction based on the fusion frame to obtain a three-dimensional reconstructed image. A three-dimensional image with better effect can be obtained through reconstruction.
Alternatively, the computer device may perform a point cloud correction process on the fusion frame to obtain a three-dimensional image of the object.
Optionally, the computer device takes the fusion frame as the new current frame and returns to the step of acquiring a backward frame, repeating until scanning ends, to obtain the three-dimensional image of the object. For example, if the current frame is frame t1 and the backward frame is frame t2, point cloud fusion of frames t1 and t2 yields fusion frame A; fusion frame A then becomes the current frame, its backward frame is frame t3, and point cloud fusion of fusion frame A with frame t3 yields the next fusion frame.
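The incremental loop described above can be sketched as follows; this is a hypothetical outline, with fuse standing in for the per-voxel fusion of steps 202 to 214 and frame_stream for the sequence of scanned backward frames.

```python
def incremental_fusion(first_frame, frame_stream, fuse):
    """Fuse frames one after another until scanning ends.

    first_frame : the initial current frame (e.g. frame t1).
    frame_stream: iterable of backward frames (t2, t3, ...) in scan order.
    fuse        : callable (current_frame, backward_frame) -> fusion_frame.
    """
    current = first_frame
    for backward in frame_stream:
        current = fuse(current, backward)   # fusion frame A becomes the new current frame
    return current                          # final fusion frame = 3D image of the object
```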
In this embodiment, the three-dimensional point cloud image of the current frame is obtained and the laser lines in it are extracted, so that voxel data at the laser line intersections can be determined; only voxels at the laser line intersections are processed subsequently, which reduces the amount of data to be processed. For each intersecting voxel, the voxel is projected onto the laser line intersection to obtain voxel projection data and a voxel normal vector, backward voxel projection data of the intersecting voxel in the backward frame is obtained, point cloud fusion is performed based on the current and backward voxel projection data to obtain the fusion projection data and fusion normal vector of the fusion frame, and the fusion frame is obtained from them. With multi-line laser scanning, the detailed surface texture of the object can be displayed, the three-dimensional image quality is good, and the real-time display effect is close to what the naked eye sees; at the same time, the amount of data is reasonable and the computational complexity is low, so the real-time performance of three-dimensional scanning is not affected.
In one embodiment, performing point cloud fusion based on current voxel projection data and backward voxel projection data to obtain fusion projection data and a fusion normal vector of a fusion frame, including:
adding the current voxel projection data and the backward voxel projection data to obtain fusion projection data of a fusion frame;
determining a fusion normal vector of the intersecting voxel based on the fusion projection data.
Here, the current voxel projection data comprises a current laser line projection vector, and may further comprise the weights ω of the laser line tangent vectors and a current covariance matrix formed from the laser line tangent vectors and their corresponding weights. Likewise, the backward voxel projection data includes a backward laser line projection vector, and may further comprise the weights ω of the laser line tangent vectors and a backward covariance matrix formed from the laser line tangent vectors and their corresponding weights. The covariance matrix C is determined based on the laser line tangent vectors.
Specifically, the computer device adds the current laser line projection vector and the backward laser line projection vector to obtain a fused laser line projection vector; adds the current covariance matrix and the backward covariance matrix to obtain a fused covariance matrix; performs an eigendecomposition of the fused covariance matrix to obtain a unit projection vector; projects the fused laser line projection vector along the unit projection vector to obtain a fusion projection vector; and determines the fusion normal vector based on the fusion projection vector. The direction of the fusion normal vector is the same as or opposite to the fusion projection vector.
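A hedged numpy sketch of this per-voxel fusion follows, assuming the per-frame laser line projection vectors and covariance matrices have already been computed; the variable names and the handling of normalization are illustrative, not taken from the original disclosure.

```python
import numpy as np

def fuse_voxel(cur_f, cur_C, back_f, back_C, cam_pos, voxel_center):
    """Fuse one intersecting voxel's current and backward projection data.

    cur_f, back_f : (3,) summed laser line projection vectors in the two frames.
    cur_C, back_C : (3, 3) covariance matrices of weighted laser line tangent vectors.
    Returns the fusion projection vector and the fusion normal vector.
    """
    fused_f = np.asarray(cur_f, float) + np.asarray(back_f, float)   # add the projection vectors
    fused_C = np.asarray(cur_C, float) + np.asarray(back_C, float)   # add the covariance matrices

    # Eigendecompose the fused covariance matrix; the eigenvector with the
    # smallest-magnitude eigenvalue serves as the unit projection direction.
    eigvals, eigvecs = np.linalg.eigh(fused_C)
    n = eigvecs[:, np.argmin(np.abs(eigvals))]

    # Project the fused laser line projection vector along the unit direction.
    fused_proj = np.dot(fused_f, n) * n

    # Orient the fusion normal vector toward the side of the camera line of sight.
    view = np.asarray(cam_pos, float) - np.asarray(voxel_center, float)
    normal = -fused_proj if np.dot(view, fused_proj) < 0 else fused_proj
    return fused_proj, normal
```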
In this embodiment, the fusion projection data of the fusion frame is obtained by adding the current voxel projection data and the backward voxel projection data, and the fusion normal vector of the intersecting voxel is determined based on the fusion projection data. Fusing voxels between frames yields more accurate data and improves the quality of the three-dimensional image, while the reasonable data volume and low computational complexity leave the real-time performance of three-dimensional scanning unaffected.
In one embodiment, adding the current voxel projection data and the backward voxel projection data to obtain fused projection data of the fused frame includes: respectively determining voxel types of the intersected voxels in the current frame and the backward frame; and when the voxel types of the intersecting voxels in the current frame and the backward frame are the same, adding the current voxel projection data and the backward voxel projection data to obtain fusion projection data of the fusion frame.
The voxel types are internal voxel and external voxel. Having the same voxel type means both are internal voxels or both are external voxels. An internal voxel is a voxel inside the object; an external voxel is a voxel outside the object.
Specifically, the computer device determines the voxel type of the intersecting voxel in the current frame and in the backward frame, respectively. When the voxel types in the two frames are the same, the computer device adds the current laser line projection vector and the backward laser line projection vector to obtain the fused laser line projection vector, adds the current covariance matrix and the backward covariance matrix to obtain the fused covariance matrix, performs an eigendecomposition of the fused covariance matrix to obtain the unit projection vector, and projects the fused laser line projection vector along the unit projection vector to obtain the fusion projection vector. When the voxel types of the intersecting voxel in the current frame and the backward frame differ, the intersecting voxel is removed and is not displayed in the fused frame.
In this embodiment, analysis shows that spurious (mixed) points appear when the voxel types differ. By determining the voxel type of the intersecting voxel in the current frame and in the backward frame separately, and adding the current and backward voxel projection data only when the types are the same, the fusion projection data of the fusion frame contains fewer mixed points and the quality of the three-dimensional image improves.
In one embodiment, the current voxel projection data includes a current voxel projection vector. Determining the voxel type of the intersecting voxel in the current frame comprises: acquiring the current position of the camera; determining the current camera line of sight based on that position and the position of the intersecting voxel; and determining the voxel type of the intersecting voxel in the current frame based on the current camera line of sight and the current voxel projection vector.
The current camera line of sight may be expressed as V_c - P_v, where V_c is the current camera position and P_v is the coordinate of the voxel center point.
Determining the voxel type of the intersecting voxel in the current frame based on the current camera line of sight and the current voxel projection vector comprises: determining the angle between the current camera line of sight and the voxel normal vector; when the angle is larger than 90 degrees, determining the voxel type to be an external voxel; when the angle is smaller than 90 degrees, determining the voxel type to be an internal voxel.
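A small sketch of this classification, assuming the voxel normal vector is already available; treating the boundary case of exactly 90 degrees as internal is an assumption, since the text only covers the strict cases.

```python
import numpy as np

def classify_voxel(cam_pos, voxel_center, voxel_normal):
    """Classify a voxel as 'external' or 'internal' from the current camera line of sight."""
    view = np.asarray(cam_pos, float) - np.asarray(voxel_center, float)  # line of sight V_c - P_v
    # Angle above 90 degrees (negative inner product) -> external voxel, otherwise internal.
    return "external" if np.dot(view, np.asarray(voxel_normal, float)) < 0 else "internal"
```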
In this embodiment, analysis shows that spurious (mixed) points appear when the voxel types differ. By determining the voxel type of the intersecting voxel in the current frame and in the backward frame separately, and adding the current and backward voxel projection data only when the types are the same, the fusion projection data of the fusion frame contains fewer mixed points and the quality of the three-dimensional image improves.
In one embodiment, determining the fusion normal vector of the intersecting voxel based on the fusion projection data comprises: acquiring the position of the camera; determining the camera line of sight based on the position of the camera and the position of the intersecting voxel; determining the angle between the camera line of sight and the fusion projection vector; when the angle is larger than 90 degrees, taking the vector opposite to the fusion projection vector as the fusion normal vector; when the angle is smaller than or equal to 90 degrees, taking the vector with the same direction as the fusion projection vector as the fusion normal vector.
Here, the camera line of sight means looking at the voxel from the camera's position; the line of sight itself is straight and has no direction, but is represented by a vector during calculation. The camera line of sight may be V_c - P_v, where V_c is the current camera position and P_v is the coordinate of the voxel center point. Since the camera lines of sight of the preceding and following frames differ little, either the current camera line of sight or the backward camera line of sight can be used; the current camera line of sight refers to that of the current frame and the backward camera line of sight to that of the backward frame. The embodiments of the application use the current camera line of sight.
The included angle between the camera's line of sight and the fused projection vector can be represented by the inner product of the two. The fusion normal vector influences the presentation effect of the three-dimensional image, and the image effect of the fusion normal vector pointing to the side where the sight is located is good.
Specifically, determining the fusion normal vector of the intersecting voxel based on the camera line of sight and the fusion projection vector comprises: determining the angle between the camera line of sight and the fusion projection vector; when the angle is larger than 90 degrees, taking the vector opposite to the fusion projection vector as the fusion normal vector; when the angle is smaller than or equal to 90 degrees, taking the vector with the same direction as the fusion projection vector as the fusion normal vector.
When the angle between the fusion projection vector and the camera line of sight is greater than 90 degrees, i.e. the inner product is smaller than 0, the intersecting voxel lies above the laser line intersection, on the side close to the line of sight, and the direction of the fusion normal vector is opposite to the direction of the fusion projection vector. When the angle is smaller than or equal to 90 degrees, i.e. the inner product is greater than or equal to 0, the intersecting voxel lies below the laser line intersection, on the side away from the line of sight, and the direction of the fusion normal vector is the same as the direction of the fusion projection vector.
Specifically, the current camera line of sight may be V_c - P_v, where V_c is the current camera position and P_v is the coordinate of the voxel center point. Fig. 4 is a schematic diagram of the current camera line of sight and the fusion normal vector in one embodiment. In fig. 4 (a), the inner product of the fusion projection vector f_pv and the current camera line of sight V_c - P_v is smaller than 0 and the angle between them is larger than 90 degrees, so the fusion normal vector n_v is opposite to the fusion projection vector f_pv and points upward. In fig. 4 (b), the inner product is greater than or equal to 0 and the angle is smaller than or equal to 90 degrees, so the fusion normal vector n_v has the same direction as the fusion projection vector f_pv, i.e. it also points upward.
The inner product of the fusion projection vector and the current camera line of sight is therefore computed, and the fusion normal vector is obtained from its sign: n_v = f_pv when (V_c - P_v) · f_pv >= 0, and n_v = -f_pv otherwise.
For the line of sight required by the next fusion frame, the fusion normal vector is taken as an inner product with the stored camera line of sight of the previous fusion frame and with the current camera line of sight, respectively, and the line of sight whose inner product has the larger modulus is selected as the line of sight required by the next fusion frame, so that the fusion normal vector of the next fusion frame is determined based on this line of sight and the fusion projection vector of the next frame.
In this embodiment, the camera line of sight is determined from the position of the camera and the position of the intersecting voxel, and the angle between the camera line of sight and the fusion projection vector is then computed to determine the fusion normal vector, so that the voxel normal vector points toward the region where the camera is located and the imaging effect of the three-dimensional image is improved.
In one embodiment, determining intersection voxels at an intersection of laser lines based on the laser lines includes: for surrounding voxels of the laser line, projecting the surrounding voxels onto the corresponding laser line; when the number of laser lines projected by the surrounding voxels is at least two, the surrounding voxels are determined to be intersecting voxels located at the intersecting portion of the laser lines.
Wherein, surrounding voxels refer to voxels within a preset distance around the laser line as a reference.
Specifically, for surrounding voxels of the laser line, the surrounding voxels are projected onto the surrounding laser line. When the number of laser lines projected by a surrounding voxel is at least two, the surrounding voxel is indicated as an intersecting voxel located at the intersection of the laser lines.
In this embodiment, the intersecting voxels are voxels of the intersecting portion of at least two laser lines, if surrounding voxels can be projected onto at least two laser lines, it is illustrated that the surrounding voxels are intersecting voxels, which voxels are intersecting voxels can be accurately determined, and voxel projection data corresponding to the surrounding voxels can be obtained after projection, so that subsequent three-dimensional image processing is facilitated.
In one embodiment, for surrounding voxels of a laser line, projecting the surrounding voxels onto the corresponding laser line comprises: within a search grid formed based on two three-dimensional points on a laser line, surrounding voxels in each search grid are projected onto the laser line between the two three-dimensional points.
Specifically, the search grid is a voxel grid. The two three-dimensional points may be points on diagonal corners of the search grid. It will be appreciated that the search grid may also be a voxel grid of two three-dimensional points re-expanded by a certain size.
It is known that the laser line L = {P_1, P_2, ..., P_n} consists of n points; after laser line extraction, each point contains coordinates (x, y, z) and a tangent vector (t_x, t_y, t_z), which are used to define and acquire the surrounding voxels of the laser line. According to the scanning resolution, a global voxel grid is assumed. The bounding box formed by two consecutive points of the laser line is acquired and a bias ε is added, i.e. the search grid bbx(ε) is formed by expanding the bounding box outward by ε voxels. All voxels within bbx(ε) participate in the calculation of voxel projection data. The computer device projects the surrounding voxels onto the laser line between the two three-dimensional points.
For each voxel Voxel_i ∈ bbx(ε), note that this bbx(ε) is determined by two consecutive points on the laser line L = {P_1, P_2, ..., P_n}; the two selected laser points are denoted P_1 and P_2, and the voxel for which the tsdf value needs to be calculated is replaced by its center point P. Mathematically, P should be projected onto the line segment between P_1 and P_2.
In this embodiment, through analysis, the two three-dimensional points are the closest three-dimensional points to the surrounding voxels, the surrounding voxels should be projected onto the laser line segment between the two three-dimensional points, and the surrounding voxels in each search grid are projected onto the laser line between the two three-dimensional points based on the search grids formed by the two three-dimensional points on the laser line, so that the accurate projection can be ensured, and the accuracy of the subsequent data can be ensured.
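The search grid construction can be sketched as follows, under the assumption that the global voxel grid is axis-aligned with cell size voxel_size and that ε is given in whole voxels; the function and parameter names are illustrative.

```python
import numpy as np

def voxels_in_bbx(p1, p2, eps, voxel_size):
    """Integer voxel indices inside the search grid bbx(eps) of laser points p1 and p2.

    The grid is the bounding box of the two consecutive laser line points,
    expanded outward by eps voxels on every side; every voxel inside it
    participates in the voxel projection data calculation.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    lo = np.floor(np.minimum(p1, p2) / voxel_size).astype(int) - eps
    hi = np.floor(np.maximum(p1, p2) / voxel_size).astype(int) + eps
    return [(i, j, k)
            for i in range(lo[0], hi[0] + 1)
            for j in range(lo[1], hi[1] + 1)
            for k in range(lo[2], hi[2] + 1)]
```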
In one embodiment, projecting surrounding voxels onto a laser line between two three-dimensional points comprises:
obtaining a first equation, wherein the first equation represents that the first vector is parallel to the second vector; the first vector is a vector formed by the first three-dimensional point and surrounding voxels after moving a distance parameter along the first projection vector; the second vector is a vector formed by the second three-dimensional point and surrounding voxels after moving a distance parameter along the second projection vector; the first three-dimensional point and the second three-dimensional point are three-dimensional points on the laser line;
substituting coordinates of surrounding voxels, the first three-dimensional point and the second three-dimensional point into a first equation, and solving to obtain a distance value;
moving the first three-dimensional point along the first projection vector by a distance value to obtain a first projection point;
Moving the second three-dimensional point along the second projection vector by a distance value to obtain a second projection point;
determining a location scaling factor based on the distance between the surrounding voxel and the first projection point and the distance between the first projection point and the second projection point;
determining a target projection vector of surrounding voxels projected onto a corresponding laser line based on the position scaling factor, the first projection vector and the second projection vector;
and moving the surrounding voxel along the target projection vector by the distance value, so that it is projected onto the laser line between the two three-dimensional points.
Specifically, fig. 5 is a schematic diagram of the projection of a surrounding voxel onto a laser line in one embodiment. The center point P of the surrounding voxel is projected onto the line segment between two consecutive points on the laser line, a first three-dimensional point P_1 and a second three-dimensional point P_2, giving the target projection point P_c. Projecting P to P_c is equivalent to moving P a distance d along a unit direction vector, so this geometric model amounts to solving for that unit direction vector and d. Let v_1 and v_2 be direction vectors at P_1 and P_2, and let q_1 and q_2 be the points obtained by moving P_1 and P_2 the same distance d along v_1 and v_2, respectively. Here v_1 is the outer product of the tangent vector at P_1 with the normal vector of the plane P P_1 P_2; similarly, v_2 is the outer product of the tangent vector at P_2 with the normal vector of the plane P P_1 P_2.
It will be appreciated that in this geometric model P is equivalent to a moving point on the segment q_1 q_2 whose position varies continuously, so the associated values also vary continuously. In other words, the position of P on the segment q_1 q_2, the distance d, and the position of P_c on P_1 P_2 must satisfy a relationship, here assumed to be linear, giving the position scale relationship u = |q_1 P| / |q_1 q_2| = |P_1 P_c| / |P_1 P_2|.
Within this model, the condition that needs to be met is that q_1, q_2 and P are collinear.
By deriving the model, the outer product equation (i.e. the first equation) can be obtained: (q_1 - P) × (q_2 - P) = 0.
The outer product equation states that q_1 P and q_2 P are parallel, i.e. the first vector and the second vector are parallel. The distance d can be found from the outer product equation; P_1 is then moved a distance d along v_1 to obtain q_1, and likewise P_2 is moved a distance d along v_2 to obtain q_2. From these, the position scale factor u and the target projection point P_c can be obtained, and once P_c is known the target projection vector can be obtained. Note that P_1 and P_2 are points of the laser line with tangent vectors t_P1 and t_P2, so the tangent vector t_Pc at P_c can be obtained by the same proportion. The distance value d, the target projection vector and the projection-point tangent vector t_Pc constitute the tsdf calculation of this laser line segment at the voxel. The computer device may store the voxel projection data of the surrounding voxel, i.e. the distance value d, the target projection vector and the projection-point tangent vector t_Pc, for subsequent processing.
In this embodiment, it is found through analysis that the projection of the surrounding voxels onto the laser line satisfies a certain relationship, so by constructing an equation for the geometric relationship among the surrounding voxels, the first three-dimensional point and the second three-dimensional point, the surrounding voxels can be accurately projected onto the laser line between the two three-dimensional points, the intersecting voxels can be determined, the voxel projection data of the surrounding voxels can be obtained in advance, and the intersecting voxels can be processed in the subsequent process, thereby improving the processing efficiency.
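The disclosure solves a parallelism equation that couples the travel distance d with the moving point on q_1 q_2; as a simpler stand-in for that derivation, the sketch below uses a plain orthogonal point-to-segment projection and interpolates the tangent vector with the same position scale factor u. It is an approximation for illustration, not the patented solver.

```python
import numpy as np

def project_voxel_to_segment(P, P1, P2, t1, t2):
    """Approximate projection of a voxel center P onto the laser segment P1-P2.

    Returns the distance value d, the unit target projection vector and the
    projection-point tangent vector, mirroring the tsdf data stored per voxel.
    """
    P, P1, P2 = (np.asarray(x, float) for x in (P, P1, P2))
    seg = P2 - P1
    u = np.clip(np.dot(P - P1, seg) / np.dot(seg, seg), 0.0, 1.0)   # position scale factor
    Pc = P1 + u * seg                                               # target projection point
    tc = (1.0 - u) * np.asarray(t1, float) + u * np.asarray(t2, float)
    tc /= np.linalg.norm(tc)                                        # projection-point tangent vector
    d = np.linalg.norm(Pc - P)                                      # distance value
    f = (Pc - P) / d if d > 0 else np.zeros(3)                      # unit target projection vector
    return d, f, tc
```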
In one embodiment, projecting the intersected voxels to the current laser line intersection to obtain current voxel projection data comprises: determining a normal vector of a crossing part of the current laser line; and projecting the crossed voxels to the crossed part of the current laser line along the normal vector to obtain the projection data of the current voxels.
Specifically, the computer device may fit the laser line intersecting portion to obtain a fitting surface, calculate a normal vector of the fitting surface, and project the normal vector of the intersecting voxel along the laser line intersecting portion to the current laser line intersecting portion to obtain current voxel projection data.
In this embodiment, by determining a normal vector of the current laser line intersection, the intersection voxel is projected to the current laser line intersection along the normal vector, so as to obtain current voxel projection data, and the intersection voxel is orthographically projected to the current laser line intersection, so that accuracy of the voxel projection data is improved.
In one embodiment, determining the normal vector of the current laser line intersection comprises: acquiring a current laser line tangent vector of a crossed laser line corresponding to the laser line crossing part; a normal vector of the current laser line intersection is determined based on the current laser line tangent vector.
Wherein intersecting laser lines refer to laser lines that intersect at least one other laser line. The laser line tangent vector may specifically be the tangent vector of the laser line at the laser line intersection point. Then the current laser line tangent vector is the tangent vector of the laser line in the current frame. In particular a tangent vector at the intersection point.
Specifically, the computer device obtains the current laser line tangent vector of each intersecting laser line located at the current laser line intersection. The computer device constructs a current covariance matrix from the current laser line tangent vectors and decomposes it; the corresponding eigenvector is the unit normal vector of the laser line intersection.
Voxels at different intersection positions of the multiple laser lines are processed (as the device itself can determine, sometimes two laser lines intersect and sometimes a surrounding voxel lies where 3 or 4 laser lines intersect). Assume that, in the model of one frame of intersecting laser lines, the tsdf values of a voxel are calculated from N laser lines; the current laser line tangent vectors then form a covariance matrix C. Since the matrix C is defined by several intersecting laser lines, i.e. the tangent vectors t_i are not parallel to one another, the normal vector can be found from the autocorrelation matrix formed by summing over the t_i. Applying SVD (Singular Value Decomposition) or eigenvalue decomposition to the covariance matrix C yields three eigenvalues and the corresponding eigenvectors. Following the principle of PCA (Principal Component Analysis), the eigenvalue with the smallest modulus and its corresponding eigenvector are selected; this eigenvector is the normal vector of the laser line intersection, specifically a unit normal vector.
Alternatively, the computer device may cross-multiply the two laser line cut vectors to obtain a normal vector for the laser line intersection.
In this embodiment, in the laser scanning process, the intersecting part of the laser line is not necessarily a plane, and in most cases is a curved surface, so that the normal vector of the intersecting part of the laser line is calculated by the laser line tangent vector of the intersecting laser line corresponding to the intersecting part of the laser line, so that the intersecting voxel is perpendicularly projected to the intersecting part of the laser line, the projection is more accurate, and the three-dimensional image of the object is more true.
In one embodiment, determining the normal vector of the current laser line intersection based on the current laser line tangent vectors includes: acquiring the weights of the current laser line tangent vectors; and determining the normal vector of the laser line intersection based on the laser line tangent vectors of the intersecting laser lines and the corresponding weights.
Wherein each of the laser tangent vectors has a corresponding weight and is associated with a distance value.
Specifically, the weight of each laser line tangent vector is obtained, a covariance matrix is constructed based on the product of the laser line tangent vector of the crossed laser line and the corresponding weight, and the covariance matrix is decomposed to obtain the corresponding feature vector, namely, the unit normal vector of the crossed part of the laser line is obtained.
The covariance matrix is formed as C = Σ ω_i · t_i · t_iᵀ, summing over the N laser lines, where t_i is the tangent vector of laser line i and ω_i is the corresponding weight.
The weights may be determined from the voxel resolution δ at the time of scanning and the distance value d in the voxel projection data; the weighting function depends on σ, the envelope size, which is generally the size of bbx(ε), and a suitable choice of σ gives better results.
In this embodiment, repeated experiments during multi-line laser scanning show that the image quality keeps improving; compared with directly using the laser line tangent vectors without weights, weighting each laser line yields a better three-dimensional image.
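A sketch of the weighted normal estimation described above, assuming the weights ω_i have already been computed from δ, d and σ (the exact weighting function is not reproduced here); the eigenvector associated with the smallest-magnitude eigenvalue of the weighted covariance matrix is taken as the unit normal.

```python
import numpy as np

def intersection_normal(tangents, weights):
    """Unit normal of the laser line intersection from weighted tangent vectors.

    tangents: (N, 3) tangent vectors of the N intersecting laser lines.
    weights : (N,) per-line weights.
    """
    T = np.asarray(tangents, dtype=float)
    w = np.asarray(weights, dtype=float)
    C = (T * w[:, None]).T @ T                       # C = sum_i w_i * t_i t_i^T
    eigvals, eigvecs = np.linalg.eigh(C)
    return eigvecs[:, np.argmin(np.abs(eigvals))]    # eigenvector of smallest |eigenvalue|
```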
In one embodiment, the current voxel projection data includes a current voxel projection vector. Projecting the intersecting voxel onto the current laser line intersection to obtain the current voxel projection data comprises: acquiring the current laser line projection data obtained by projecting the intersecting voxel onto each corresponding laser line, the current laser line projection data including a current laser line projection vector; determining the sum of the current laser line projection vectors; and projecting this sum along the normal vector of the current laser line intersection onto the intersection to obtain the current voxel projection vector.
The current laser line projection data refers to data generated by projecting the intersection voxels to corresponding points on the laser line. The current laser line projection data includes a distance value and a current laser line projection vector. The current laser line projection vector refers to a vector formed by projecting intersecting voxels to corresponding points on the laser line.
Specifically, obtaining a current laser line projection vector obtained by projecting the intersecting voxels to the corresponding laser line includes: in a search grid formed based on two three-dimensional points on a laser line, projecting surrounding voxels in each search grid onto the laser line between the two three-dimensional points to obtain current voxel projection data of the surrounding voxels; and acquiring a current laser line projection vector obtained by projecting the intersected voxels to the corresponding laser lines from the current voxel projection data of surrounding voxels. The computer device calculates the sum of the projection vectors of the laser lines and projects the sum to the intersection part of the laser lines along the normal vector to obtain the projection vector of the current voxel.
For example, for a given voxel with center P, the laser line projection vector for laser line i is P_i - P, where P_i is the target projection point of voxel P on laser line i, and the sum of the current laser line projection vectors is Σ_i (P_i - P).
It will be appreciated that a weight may be added to the laser line projection vector, the weight being associated with the corresponding laser line.
The sum of the laser line projection vectors is then projected onto the laser line intersection along the normal vector, giving the current voxel projection vector f_pv = (f_v · n_v) n_v, where f_v is the sum of the laser line projection vectors and n_v is the unit normal vector of the laser line intersection.
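Put together, the current voxel projection vector can be sketched as below; treating each per-line projection vector as P_i - P is an assumption consistent with the description above.

```python
import numpy as np

def current_voxel_projection(P, targets, normal):
    """Current voxel projection vector of an intersecting voxel.

    P      : voxel center point.
    targets: target projection points P_i of the voxel on each intersecting laser line.
    normal : unit normal vector of the laser line intersection.
    """
    P = np.asarray(P, dtype=float)
    n = np.asarray(normal, dtype=float)
    f_v = np.sum([np.asarray(Pi, dtype=float) - P for Pi in targets], axis=0)  # sum of projection vectors
    return np.dot(f_v, n) * n                                                  # projection along the normal
```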
In this embodiment, a laser line projection vector obtained by projecting an intersecting voxel onto a corresponding laser line is obtained, a sum of the laser line projection vectors is determined, and the sum is projected onto a laser line intersecting portion along a normal vector, so that the intersecting voxel can be orthographically projected onto the laser line intersecting portion, and a true projection position of the voxel can be determined, thereby obtaining a three-dimensional image with good effect.
In one embodiment, obtaining the current laser line projection data resulting from projecting the intersecting voxel onto the corresponding laser line includes: when the intersecting voxel is projected more than once onto the same laser line, using the set of data whose projection is closest to the laser line as the current voxel projection data obtained by projecting onto that laser line.
Specifically, the set of data closest to the laser line is the set with the smallest d. During three-dimensional laser scanning, each voxel may be selected more than once in the calculation; when the laser line bends strongly or the scanned area is complex, a voxel may lie in the bbx(ε) of several small line segments, producing several sets of voxel projection data. These must therefore be screened: when the tsdf value is calculated for the same laser line and the voxel has been computed several times, the set of voxel projection data with the smallest d is selected.
For example, as shown in fig. 6, a schematic diagram of voxels projected onto the same laser line in one embodiment. The blocks in fig. 6 are intersecting voxels, the laser lines (1) (2) are actually identical, and the computer device selects voxel projection data projected onto the laser line (1).
In this embodiment, if more than one set of data generated by projecting the intersecting voxels onto the same laser line is calculated, the projection accuracy will be affected, and the set of data closest to the laser line is used as the current voxel projection data obtained by projecting the same laser line, so that the accuracy of three-dimensional scanning can be improved, and the quality of three-dimensional images can be improved.
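A small sketch of this screening step, assuming each candidate projection is tagged with the identifier of the laser line that produced it; names are illustrative.

```python
def keep_closest_per_line(projections):
    """Keep, per laser line, only the projection with the smallest distance d.

    projections: iterable of (line_id, d, data) tuples, where several tuples may
    share the same line_id when a voxel falls into the bbx of multiple segments.
    """
    best = {}
    for line_id, d, data in projections:
        if line_id not in best or d < best[line_id][0]:
            best[line_id] = (d, data)
    return {line_id: data for line_id, (d, data) in best.items()}
```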
In one embodiment, the point cloud fusion method further comprises: extracting a laser line in a current frame; screening effective laser lines which accord with the preset laser line characteristics from the laser lines; for three-dimensional points on the effective laser line, acquiring a tangent vector before smoothing of the three-dimensional points; smoothing the effective laser line based on the tangent vector before smoothing of the three-dimensional points to obtain a tangent vector set; the tangent vector set comprises smoothed tangent vectors of three-dimensional points on the effective laser line; the method for acquiring the laser line tangent vector of the crossed laser line corresponding to the laser line crossing part comprises the following steps: and obtaining the laser line tangent vector of the crossed laser line corresponding to the laser line crossed part from the tangent vector set.
The laser line characteristics comprise that the length of the laser line reaches a preset length, the length between three-dimensional points does not exceed a preset distance, and the like.
Specifically, the computer device employs a laser line extraction algorithm to extract the laser lines in the current frame. The laser line characteristics include that the length of the laser line reaches a preset length, the length between the three-dimensional points does not exceed a preset distance, and the like. The computer device determines a laser line characteristic of a laser line in the current frame, the laser line being a valid laser line when the laser line characteristic meets a preset laser line characteristic. The tangent vector before smoothing may be the original tangent vector of the three-dimensional point on the active laser line. The tangent vector before smoothing may be an average value of the tangent vector of the three-dimensional point and the tangent vector of the front and rear points. The computer equipment performs tangent vector smoothing on the effective laser line based on the tangent vector before three-dimensional point smoothing to obtain a tangent vector set, wherein the tangent vector set comprises the smoothed tangent vectors of the three-dimensional points on the effective laser line.
In this embodiment, the validity of the laser line is determined by screening the laser line, then the tangent vector of the three-dimensional point is obtained for smoothing, the effective laser line, the effective three-dimensional point and the smoothed tangent vector are obtained, and then the three-dimensional image of the object is constructed based on the obtained effective three-dimensional point and the smoothed tangent vector, so that the noise of the three-dimensional image is greatly reduced, and the quality of the three-dimensional image is improved.
In one embodiment, screening valid laser lines that meet the preset laser line characteristics from the laser lines includes: deleting the laser lines whose length does not reach the preset length, and reserving candidate laser lines whose length reaches the preset length; in the case that the distance between three-dimensional points of a candidate laser line exceeds the preset distance, cutting off the candidate laser line to obtain cut laser lines; and combining the candidate laser lines and the cut laser lines to obtain the effective laser lines.
The preset length and the preset distance can be set according to requirements, and both are pre-stored in the computer device. The length reaching the preset length may mean, for example, that the laser line reaches a certain number of pixels, or that it contains at least a certain number of three-dimensional points.
Because the laser lines in three-dimensional space calculated by binocular vision are derived from the laser lines matched on the left and right camera images of the binocular system, the laser line on the images is the central bright line of the laser stripe captured in the images; therefore the laser center line can be extracted during image processing, and a Steger center-line extraction algorithm is currently adopted. Without width correction, the end points of laser lines extracted from complex images have center offsets, while adding width correction involves searching and computing a very large amount of data; therefore, part of the extracted laser lines are discarded during processing, and laser lines of fewer than N pixels can be deleted.
Since the shooting angle and occlusion can affect the continuity of the laser lines on the image when the object is photographed, at some angles a single extracted laser line may actually be composed of laser segments that are not in one plane or cannot be well connected in three dimensions. For example, when a small wooden block is photographed obliquely, the laser line on the surface of the block and the laser line on the desktop behind it may appear connected end to end in the image and thus be judged as one laser line. For this case, a truncation operation on the laser line is required. First, on the left and right camera pixel planes of the binocular camera, the line can be cut according to the left and right laser line matching; generally, laser line segments that are actually far apart are rarely connected end to end on both camera planes at the same time to form one laser line, because this does not conform to the logic of parallax. However, some laser line segments may be very close on the image while their three-dimensional distance exceeds a certain threshold, so the second operation is to cut according to the distance between consecutive three-dimensional points of the actual laser line: if the distance exceeds the threshold, the laser line needs to be cut between the two three-dimensional points.
Specifically, the computer device deletes the laser lines whose length does not reach the preset length and reserves the candidate laser lines, where a candidate laser line is a laser line whose length reaches the preset length. The computer device determines the distance between the three-dimensional points on each laser line and, in the case that the distance between three-dimensional points of a candidate laser line exceeds the preset distance, cuts off the candidate laser line to obtain cut laser lines. The candidate laser lines and the cut laser lines are then combined to obtain the effective laser lines.
It will be appreciated that, because of the cut candidate laser lines, the number of effective laser lines screened out may be greater than the number of lines of the multi-line laser. For example, the candidate laser lines include candidate laser line 1, candidate laser line 2 and candidate laser line 3. If the distance between three-dimensional points in candidate laser line 1 exceeds the preset distance, candidate laser line 1 is cut into laser lines 1' and 1'', and the effective laser lines then include laser line 1', laser line 1'', candidate laser line 2 and candidate laser line 3.
In this embodiment, the laser line with the length not reaching the preset length is deleted, the candidate laser line with the length reaching the preset length is reserved, the candidate laser line is cut off under the condition that the distance between the three-dimensional points of the candidate laser line exceeds the preset distance, the cut-off laser line is obtained, the candidate laser line and the cut-off laser line are finally combined to obtain an effective laser line, the laser line is processed based on the laser parallax principle, the accuracy of subsequent laser intersection calculation is guaranteed, and therefore the display effect of the three-dimensional image is optimized.
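The screening and truncation described above can be sketched roughly as follows (Python/NumPy; the re-check of the length after cutting is an assumption of this example, not a requirement of the method):

import numpy as np

def screen_laser_lines(lines, min_points, max_gap):
    # lines: list of (n, 3) arrays of ordered three-dimensional points, one array per laser line
    valid = []
    for pts in lines:
        pts = np.asarray(pts, dtype=float)
        if len(pts) < min_points:
            continue                                  # delete laser lines that are too short
        gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        cut_at = np.where(gaps > max_gap)[0] + 1      # cut where the 3D distance exceeds the preset distance
        for segment in np.split(pts, cut_at, axis=0):
            if len(segment) >= min_points:            # assumed: re-apply the length screen after cutting
                valid.append(segment)
    return valid                                      # candidate laser lines plus the cut laser lines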
In one embodiment, the preset distance is determined based on voxel resolution. In particular, the threshold value of the preset distance is not arbitrary, and the subsequent processing is mainly performed on voxels, so that the threshold value is very relevant to the voxels and is related to the acquisition of the tangent vector. Optionally, 3-5 times of voxel resolution is selected as the preset distance threshold. Such as 3-fold, 3.5-fold, 4-fold, 4.5-fold, 5-fold, etc.
In this embodiment, the preset distance is determined based on the voxel resolution, so that whether the same laser line is used or not can be accurately resolved, and the accuracy of subsequent data is improved.
In one embodiment, obtaining a pre-smoothing tangent vector of a three-dimensional point includes: acquiring a first adjacent point coordinate and a second adjacent point coordinate of the same laser line adjacent to the three-dimensional point; determining a first tangential vector formed by coordinates of the three-dimensional point and the first adjacent point; determining a second tangent vector formed by coordinates of the three-dimensional point and the second adjacent point; and carrying out average processing on the basis of the first tangent vector and the second tangent vector, and determining the tangent vector before smoothing of the three-dimensional point.
One of the first adjacent point coordinates and the second adjacent point coordinates is a previous coordinate point of the three-dimensional point, and the other is a subsequent coordinate point. The first tangential vector and the second tangential vector are oriented substantially the same.
Specifically, assume a laser line segment L: {P_1, P_2, ..., P_n} consisting of n points, each point having coordinates P_i = (x_i, y_i, z_i). The tangent vector of point P_i before smoothing is then calculated as t_i = ((P_i − P_{i−1}) + (P_{i+1} − P_i)) / 2 = (P_{i+1} − P_{i−1}) / 2, for 1 < i < n.
in this embodiment, the tangent vectors of adjacent points on the same laser line are determined, and the tangent vectors are subjected to average processing, so that the tangent vectors before three-dimensional point smoothing is determined, the tangent vectors can be subjected to preliminary smoothing processing, and the subsequent smoothing processing is better.
In one embodiment, smoothing the effective laser line based on a pre-smoothing tangent vector of the three-dimensional point to obtain a smoothed tangent vector of the three-dimensional point on the effective laser line, including: obtaining a smooth order N; and obtaining the smoothed tangent vector of the three-dimensional point on the effective laser line based on the average of the tangent vectors before smoothing of the front N points and the rear N points of the three-dimensional point.
Wherein the smoothing order N indicates that the tangent vectors before smoothing of the N points before and the N points after the three-dimensional point are used. The smoothing order N is stored in advance in the computer device. N is positively correlated with the smoothness: the larger N is, the smoother the result and the smaller the error of the point cloud, but some accuracy of the object surface is lost; the smaller N is, the more object detail remains, but the larger the error. Selecting an appropriate smoothing order is therefore important.
Specifically, the computer device obtains the smoothing order N. And obtaining the smoothed tangent vector of the three-dimensional point on the effective laser line based on the average of the tangent vectors before smoothing of the front N points and the rear N points of the three-dimensional point.
For example, the smoothed tangent vector may be written as t*_i = (1/(2N+1)) · Σ_{k=i−N}^{i+N} t_k, where i is the index of the current three-dimensional point and the t_k are the tangent vectors before smoothing.
In this embodiment, the smoothing order N is obtained, and the smoothed tangent vector of the three-dimensional point on the effective laser line is obtained based on the average of the tangent vectors before smoothing of the front N points and the rear N points, and in the calculation process of the mathematical geometric model, the obtained voxel projection data is more accurate, so as to improve the display quality of the three-dimensional image.
In one embodiment, constructing a three-dimensional image of an object based on the smoothed tangent vectors includes: deleting the data of the first N+1 points and the last N+1 points on the effective laser line, and constructing a three-dimensional image of the object based on the rest of the smoothed tangent vectors.
Specifically, when the tangent vector before smoothing is acquired, the data of the first point and the last point of the effective laser line are already deleted. When the smoothing order is N, the unsmoothed tangent vectors of the N points before and the N points after the target point are required, so if there are fewer than N such points on either side of P_i, the tangent vector of that point cannot be smoothed; these invalid points therefore need to be deleted. Consequently, the data of the first N+1 points and the last N+1 points on the effective laser line need to be deleted. A laser line should still have at least two points when processed, so according to the smoothing order N the number of three-dimensional points on the effective laser line must be at least 2N+4. For example, if there are 10 points on the effective laser line and N is 2, then the coordinates, tangent vectors and other data of the 1st to 3rd points and of the 8th to 10th points are deleted, and the coordinates and tangent vectors of the 4th to 7th points are retained.
In this embodiment, the data of the first n+1 points and the last n+1 points on the effective laser line are deleted, and the three-dimensional image of the object is constructed based on the remaining smoothed tangent vectors, so that the calculation error of the laser line data is reduced, the error of the three-dimensional image of the object is reduced, and the quality of the three-dimensional image of the object is improved.
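The three embodiments above can be put together in a short Python/NumPy sketch — pre-smoothing tangents by averaging the two neighboring difference vectors, order-N smoothing of the tangents, and removal of the first and last N+1 points; the function name, array layout and final unitization step follow the description of the method but are otherwise assumptions of this illustration:

import numpy as np

def smoothed_tangents(points, N):
    # points: (n, 3) array of ordered 3D points on one effective laser line, n >= 2N + 4
    P = np.asarray(points, dtype=float)
    t = 0.5 * (P[2:] - P[:-2])                  # pre-smoothing tangents for the interior points
    smoothed = []
    for i in range(N, len(t) - N):
        window = t[i - N:i + N + 1]             # N tangents before, N after, and the point itself
        v = window.mean(axis=0)
        smoothed.append(v / np.linalg.norm(v))  # unitize the final tangent vector
    kept_points = P[N + 1:len(P) - (N + 1)]     # first and last N+1 points are deleted
    return kept_points, np.asarray(smoothed)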
In one embodiment, the point cloud fusion method further comprises: acquiring a voxel to be processed in a fusion frame and a neighborhood voxel of the voxel to be processed; smoothing the voxels to be processed based on the neighborhood voxels to obtain a smoothing result; performing point cloud correction on the voxel to be processed based on the smoothing result to obtain a corrected voxel to be processed; and displaying the three-dimensional image formed by the corrected voxels to be processed.
Wherein the voxel to be processed refers to a voxel having voxel projection data; specifically, it may be an intersecting voxel having a fusion normal vector and a fusion projection vector. The neighborhood voxels of a voxel to be processed are the voxels adjacent to it; for example, the 26 voxels surrounding the voxel to be processed are its neighborhood voxels.
The smoothing result may be a smoothed current frame containing the voxel projection data of the smoothed voxels to be processed; it may specifically comprise the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed. The smoothing process refers to smoothing the voxel normal vector of the voxel to be processed and updating the coordinates of the voxel to be processed based on the smoothed voxel normal vector; it includes normal vector smoothing. The point cloud correction moves the points, i.e. changes the coordinates of the three-dimensional points.
Specifically, the computer device obtains a voxel to be processed from the fusion frame, and searches a neighborhood voxel of the voxel to be processed from a neighborhood of the voxel to be processed. It will be appreciated that the voxel to be processed has corresponding voxel projection data and that the voxel to be processed has been projected onto the laser line intersection.
For each voxel to be processed, the computer device needs to acquire voxel projection data of its neighborhood voxels as well as voxel projection data of its own voxels. The computer equipment adopts a point cloud filtering algorithm to carry out smoothing processing on the voxels to be processed based on the neighborhood voxels, and voxel projection data after the smoothing processing corresponding to the voxels to be processed is obtained. And carrying out smoothing treatment on the voxels to be treated based on the voxel projection data of the neighborhood voxels and the voxel projection data of the voxels to be treated to obtain a smoothing treatment result. The computer equipment adopts RIMLS algorithm to carry out point cloud correction on the voxels to be processed on the basis of the smoothing processing result, and voxel normal vector and voxel projection data of the corrected voxels to be processed are obtained. The corrected voxel to be processed is actually subjected to point cloud smoothing and point cloud correction. The computer device displays a three-dimensional image formed by the voxels to be processed after the real-time correction based on the corrected voxel normal vector and voxel projection data.
In this embodiment, by acquiring the voxel to be processed and the corresponding neighborhood data in the fusion frame and performing smoothing processing, the normal vector of the voxel to be processed can be corrected, so that the transition is smoother; and carrying out point cloud correction on the voxels to be processed based on the smoothing result, and displaying a three-dimensional image formed by the corrected voxels to be processed, so that a three-dimensional image with continuous, consistent and smooth normal vector and good three-dimensional point coordinate connection can be obtained, the quality of the three-dimensional image is improved, and the real-time performance of the three-dimensional image is ensured.
In one embodiment, obtaining a voxel to be processed in a fused frame includes: acquiring surrounding voxels of a laser line in the fusion frame; and when the surrounding voxels have corresponding voxel projection data in the current frame and the backward frame and the corresponding voxel types are the same, taking the surrounding voxels as the voxels to be processed.
Wherein, the same voxel type refers to the same internal voxel or the same external voxel. An internal voxel means that the voxel is inside the object. The external voxel means that the voxel is outside the object.
Specifically, the computer device extracts the laser lines in the fusion frame and acquires the surrounding voxels of the laser lines. The surrounding voxels have corresponding voxel projection data in at least two frames, specifically in at least two consecutive frames; this means the voxel exists in several image frames, which reduces the influence of incomplete scanning and excessive viewing-angle deviation and reduces the overall error of the voxel projection data. The voxel types of a surrounding voxel being the same in at least two frames means the voxel is determined to be on the same side of the object; a voxel must be calculated at least twice to be taken as an effective voxel for smoothing correction, and only surrounding voxels with the same voxel type in at least two frames are used for subsequent processing, which reduces clutter in the three-dimensional image.
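A minimal sketch of this selection, assuming the per-frame voxel projection data are stored in dictionaries keyed by voxel index and carrying a voxel type flag, might be:

def select_voxels_to_process(surrounding_voxels, current_frame, backward_frame):
    # current_frame / backward_frame: dict mapping voxel index -> {"type": "inside" or "outside", ...} (assumed layout)
    selected = []
    for v in surrounding_voxels:
        cur = current_frame.get(v)
        back = backward_frame.get(v)
        if cur is not None and back is not None and cur["type"] == back["type"]:
            selected.append(v)          # projection data present in both frames with the same voxel type
    return selected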
In one embodiment, obtaining a neighborhood voxel of a voxel to be processed comprises: calculating the identification of a storage module where the voxel to be processed is located; calculating a hash value according to the identifier, and searching from the hash table according to the hash value to obtain the position of the voxel to be processed in the storage module; based on the position of the voxel to be processed in the storage module, a neighborhood voxel of the voxel to be processed is obtained.
The whole scanned data structure is based on hash voxels; a hash table for looking up voxels is established to facilitate positioning and searching, and every voxel belongs to a block (storage module). Each block is made up of 8 × 8 × 8 voxels, and for each block a hash value is calculated, corresponding to a stored pointer in the hash table.
Therefore, the computer device calculates the identifier of the block where each voxel to be processed is located and calculates a hash value from the identifier, thereby obtaining the pointer to the block from the corresponding position of the hash table; the position of the voxel to be processed within the block is then located according to the resolution, so that the data of the voxel to be processed is obtained.
The embodiment of the application establishes a lookup table of neighborhood voxels. The lookup table is mainly organized around the block where a voxel is located, and adjacent blocks are located during the lookup; note that the neighborhood search result differs depending on the position of the voxel within its block.
Specifically, when looking up the position of each voxel in the storage module, the hash data structure is searched, the corresponding block is located among all blocks according to the pointer, and then the voxel within the selected storage module is located according to the coordinates of the voxel and the voxel resolution. The computer device acquires the voxel to be processed from its position in the storage module, determines the storage positions of the voxels in the corresponding neighborhood based on that position, and acquires the corresponding neighborhood voxels from their storage positions.
FIG. 7 is a schematic diagram of a memory module in one embodiment. FIG. 7 (a) is a schematic diagram of a memory module and its neighboring memory modules, and FIG. 7 (b) is a schematic diagram of a memory module, where each memory module is composed of 8 × 8 × 8 voxels. Fig. 7 (a) shows the structure of the neighbor blocks established around a central block, and fig. 7 (b) shows the voxel arrangement structure of a block. Obviously, if a voxel is in the interior of a block, its 26 neighborhood voxels are all in that block. If the voxel is located at the outermost layer of the block, a table is established according to the situation: when the voxel is located at one of the 8 vertex positions of the block, its 26 neighborhood voxels are distributed in its own block and the 7 blocks around that vertex, and the neighbor blocks differ for different vertices; when a voxel is located on one of the 12 edges but not at a vertex, its 26 neighborhood voxels are distributed in its own block and the 3 blocks around that edge; when a voxel is located on a face but not on a vertex or edge, its 26 neighborhood voxels are distributed in its own block and the 1 block adjacent to that face. Based on this information, a table of 512 rows can be built; the row number of each row represents the sequence number and position of the voxel among the 512 voxels of the block, and the information in the row contains the locations of the neighbor blocks and the positions of the corresponding neighborhood voxels in those blocks. During the search, the voxel data of all neighbors can be taken directly from the corresponding row.
In the embodiment, the identification of the storage module where the voxel to be processed is located is calculated, the hash value is calculated according to the identification, the position of the voxel to be processed in the storage module is obtained by searching from the hash table according to the hash value, and the neighborhood voxel of the voxel to be processed is obtained based on the position of the voxel to be processed in the storage module, so that the neighborhood voxel of the voxel to be processed can be quickly obtained, and the real-time performance of point cloud correction is improved.
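The addressing scheme described above can be sketched as follows; the hash mixing constants and the helper names are assumptions of this illustration, not part of the claimed method:

import numpy as np

BLOCK = 8                                  # each storage module (block) holds 8 x 8 x 8 = 512 voxels

def block_id(voxel_index):
    # integer coordinates of the block containing the global voxel index (ix, iy, iz)
    return tuple(int(c) // BLOCK for c in voxel_index)

def block_hash(bid, table_size):
    # toy spatial hash of a block identifier (illustrative constants only)
    x, y, z = bid
    return ((x * 73856093) ^ (y * 19349663) ^ (z * 83492791)) % table_size

def local_index(voxel_index):
    # position of the voxel inside its block as a linear index in [0, 511]
    lx, ly, lz = (int(c) % BLOCK for c in voxel_index)
    return lx + BLOCK * (ly + BLOCK * lz)

def neighborhood_indices(voxel_index):
    # global indices of the 26 neighborhood voxels; each resolves to (block_id, local_index)
    base = np.asarray(voxel_index, dtype=int)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]
    return [tuple(base + np.asarray(o)) for o in offsets]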
In one embodiment, smoothing the voxel to be processed based on the neighborhood voxels to obtain a smoothing result, including: acquiring fusion normal vectors of neighbor voxels and fusion normal vectors of voxels to be processed; carrying out normal vector correction on the fusion normal vector of the neighbor voxels and the fusion normal vector of the voxels to be processed to obtain corrected fusion normal vector in the fusion frame; and carrying out smoothing treatment on the voxels to be treated based on the corrected fusion normal vector to obtain a smoothing treatment result.
Specifically, for the fusion frame, the computer device corrects the normal vectors of the voxel to be processed and its neighborhood voxels to obtain the corrected fusion normal vectors contained in the fusion frame. In particular, for each voxel, when the fusion normal vector of a certain voxel differs in direction from the fusion normal vectors of most other voxels within a certain range, the computer device corrects the fusion normal vector of that voxel to obtain a corrected fusion normal vector.
The point cloud smoothing is mainly used to smooth the fusion normal vector of the voxel to be processed; when the fusion normal vector is smoothed, the fusion projection vector is smoothed as well, so that the position coordinates of the voxel to be processed are also smoothed. The computer device adopts a point cloud smoothing algorithm to smooth the voxel to be processed based on the corrected fusion normal vector, obtaining a smoothing result. The smoothing result comprises the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed.
It is understood that the computer device may perform smoothing on the voxel to be processed and the corresponding neighborhood voxels based on the corrected fusion normal vector, to obtain a smoothing result.
In this embodiment, the normal vector correction is performed on the normal vector of the neighbor voxels and the fusion normal vector of the voxel to be processed, and then the smoothing processing is performed on the voxel to be processed and the corresponding neighbor voxels based on the corrected fusion normal vector, so as to obtain a smoothing processing result, that is, fine tuning is performed after coarse tuning, so that the accuracy of the normal vector of the fusion voxel can be improved, and therefore, the detail texture of the three-dimensional image is not lost under the condition of smoothing.
In one embodiment, performing normal vector correction on the fusion normal vectors of the neighborhood voxels and the fusion normal vector of the voxel to be processed to obtain corrected fusion normal vectors in the fusion frame includes: obtaining a local plane based on fitting of the voxel to be processed and the neighborhood voxels; determining the orientation of the local plane; and performing orientation correction on the fusion normal vector of the voxel to be processed and the fusion normal vectors of the neighborhood voxels based on the orientation of the local plane to obtain corrected fusion normal vectors.
In particular, the computer device obtains a local plane by fitting the voxel to be processed and the corresponding neighborhood voxels. The local plane is a small-scale plane, and the computer device can calculate its normal vector. The computer device reverses or keeps the normal vector of the voxel to be processed based on the orientation of the local plane, so that the angle between the normal vector of the voxel to be processed and the orientation of the local plane lies within a certain range, thereby obtaining the corrected normal vector of the current frame.
In this embodiment, in terms of normal vectors and their processing, the fusion normal vector n_v calculated for some voxels may be opposite in direction to those of other voxels in the neighborhood, possibly because of the scanning view angle and calculation errors. It is therefore corrected first: the projected points of the neighborhood voxels, Point_v = P_v + f_pv, are fitted to a local facet, and then, according to the sign of <n_p, n_v>, a fusion normal vector n_v with the opposite direction is reversed or kept unchanged, where n_p is the normal vector of the fitted local facet; of course, n_p is taken in the same direction as the line of sight "sight", i.e. <sight, n_p> > 0.
In this embodiment, a local plane is fitted based on the voxels to be processed and the neighborhood voxels, the orientation of the local plane is determined, and the normal vector of the voxels to be processed and the neighborhood voxels is subjected to orientation correction based on the orientation of the local plane, so as to obtain a corrected normal vector, and the normal vector error caused by the scanning view angle and the calculation error can be corrected, so that the accuracy of the normal vector of the voxels is improved, and the three-dimensional image is smoother.
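A possible sketch of this orientation correction — fitting a local plane by PCA, orienting its normal toward the line of sight, and flipping the voxel's fusion normal vector if necessary — is given below; it is only one way to realize the step described above, and the function name is illustrative:

import numpy as np

def correct_fusion_normal(n_v, neighborhood_points, sight):
    # neighborhood_points: (k, 3) array of projected points Point_v = P_v + f_pv of the neighborhood voxels
    Q = np.asarray(neighborhood_points, dtype=float)
    Q = Q - Q.mean(axis=0)
    # the eigenvector of the smallest eigenvalue of Q^T Q is the normal n_p of the fitted local facet
    _, eigvecs = np.linalg.eigh(Q.T @ Q)
    n_p = eigvecs[:, 0]
    if np.dot(n_p, np.asarray(sight, dtype=float)) < 0:
        n_p = -n_p                                     # keep <sight, n_p> > 0 as stated above
    n_v = np.asarray(n_v, dtype=float)
    return n_v if np.dot(n_p, n_v) > 0 else -n_v       # reverse n_v when its direction is opposite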
In one embodiment, the smoothing result includes a smoothed fusion normal vector to be processed, a smoothed fusion projection vector to be processed, and a smoothed fusion projection position to be processed;
performing point cloud correction on the voxel to be processed based on the smoothing result to obtain a corrected voxel to be processed, including:
and carrying out point cloud correction on the voxel to be processed based on the smoothed vector of the fusion method to be processed, the smoothed vector of the fusion projection to be processed and the smoothed position of the fusion projection to be processed, and obtaining the corrected voxel to be processed.
The fusion normal vector to be processed is used to represent the orientation of the voxel to be processed, which is an important feature of a voxel. When each voxel has its own unique direction, the whole three-dimensional image can present the detailed texture of the surface, even including properties such as reflection and occlusion, so that the real-time display effect is close to what is actually seen. The fusion projection vector to be processed refers to the vector obtained by projecting the voxel to be processed onto the laser line intersection portion. The fusion projection position to be processed is the coordinates of the voxel to be processed. The target projection point refers to the point obtained by projecting the voxel to be processed onto the laser line intersection portion.
Specifically, the computer device inputs the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed into a point cloud correction algorithm to move the points, and iterates until the cost is minimal to obtain the corrected voxel to be processed. At this time, the fusion projection position of the voxel to be processed has changed, and the fusion normal vector may also change. The computer device displays the corrected target projection points of the voxels to be processed in real time to obtain the corrected current frame.
In this embodiment, the point cloud correction is performed based on the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed, so as to obtain the coordinates of the corrected voxel to be processed, and the corrected voxel to be processed is displayed, so that the corrected fusion frame is obtained; the object surface becomes smoother, the connections are more natural, and the quality of the three-dimensional image is improved.
In one embodiment, smoothing the voxel to be processed based on the neighborhood voxels to obtain a smoothing result, including: smoothing the voxels to be processed based on the neighborhood voxels by adopting a WLOP algorithm to obtain a smoothing result;
performing point cloud correction on the voxel to be processed based on the smoothing result to obtain a corrected voxel to be processed, including: and carrying out local point cloud correction on the voxels to be processed based on the smoothing result by adopting an RIMLS algorithm until the local cost is minimum, and obtaining corrected voxels to be processed.
Among them, the WLOP (Weighted Locally Optimal Projection) algorithm is a point cloud smoothing algorithm that can generate a denoised, evenly distributed set of sample points from the original point cloud. The RIMLS (Robust Implicit Moving Least Squares) algorithm is also a point cloud denoising algorithm. Local point cloud correction means performing point cloud correction with each laser line crossing portion as a local region and minimizing the cost of each local region.
Specifically, some common approaches, such as KNN (K-Nearest Neighbor) search or local smoothing over all voxels, cannot support large data; for example, when the data volume exceeds one million points, searching the K nearest neighbors in real time is very time-consuming, while building an Octree in real time requires continuously updating branches and leaves when searching with the existing octree, which affects the real-time display of the three-dimensional image. The biggest disadvantage, however, is that processing all voxels generates a considerable number of useless voxels: only one layer of voxels is needed at the surface of the object, and the remaining empty voxels are far too many, seriously consuming computer memory, so such methods cannot be applied to a real-time scanning scene.
The computer device can input the corrected fusion normal vectors of the voxel to be processed and the neighborhood voxels into the WLOP algorithm, and smooth the fusion normal vector of the voxel to be processed to obtain the smoothing result. As before, the WLOP algorithm is mainly used to smooth the fusion normal vector of the voxel to be processed; when the fusion normal vector is smoothed, the voxel projection vector is smoothed as well, so the position coordinates of the voxel to be processed are also smoothed. The computer device then inputs the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed into the RIMLS algorithm to move the points, and iterates until the cost is minimal to obtain the corrected voxel to be processed.
Therefore, in this embodiment, smoothing is performed on the voxels to be processed and the neighbor voxels thereof by using the WLOP algorithm to obtain a smoothing result, and then, local point cloud correction is performed on the voxels to be processed based on the smoothing result by using the RIMLS algorithm, so that the voxels most relevant to the object are selected for processing, the whole voxel processing is avoided, the real-time performance of three-dimensional scanning is improved, and the point cloud correction effect is not affected while the point cloud correction efficiency is improved.
In one embodiment, a point cloud fusion method includes:
Step (a 1), extracting a laser line in a current frame; the current frame is a three-dimensional point cloud image obtained based on multi-line laser scanning.
And (a 2) deleting the laser lines whose length does not reach the preset length, and reserving the candidate laser lines whose length reaches the preset length.
And (a 3) cutting off the candidate laser line to obtain the cut laser line when the distance between the three-dimensional points of the candidate laser line exceeds the preset distance.
And (a 4) combining the candidate laser line and the cut laser line to obtain an effective laser line.
And (a 5) for the three-dimensional points on the effective laser line, acquiring a first adjacent point coordinate and a second adjacent point coordinate adjacent to the three-dimensional point on the same laser line.
And (a 6) determining a first tangential vector formed by the coordinates of the three-dimensional point and the first adjacent point.
And (a 7) determining a second tangent vector formed by the coordinates of the three-dimensional point and the second adjacent point.
And (a 8) performing an averaging process based on the first tangent vector and the second tangent vector to determine a tangent vector before smoothing of the three-dimensional point.
And (a 9) obtaining a smoothing order N.
Step (a 10), obtaining a tangent vector set based on the average of the tangent vectors before smoothing of the N points before and the N points after the three-dimensional point; the tangent vector set includes the smoothed tangent vectors of the three-dimensional points on the effective laser line, where the data of the first N+1 points and the last N+1 points on the effective laser line have been deleted.
Step (a 11), determining crossing voxels at the crossing portion of the current laser line based on the laser line.
And (a 12) obtaining, for each intersecting voxel, a laser line tangent vector of the intersecting laser line corresponding to the laser line intersecting portion from the tangent vector set.
And (a 13) determining a normal vector of the crossing portion of the current laser line based on the current laser line tangential vector.
And (a 14) projecting the normal vector of the crossed voxel along the crossed part of the current laser line to obtain the projection data of the current voxel.
And (a 15) when the current voxel projection data obtained by projecting the intersected voxels to the same laser line is more than one group, taking the group of data closest to the laser line as the current voxel projection data obtained by projecting the intersected voxels to the corresponding laser line.
Step (a 16), obtaining a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame.
Step (a 17), obtaining backward voxel projection data of the crossed voxels in the backward frame;
step (a 18), corresponding to each intersecting voxel, determining the voxel type of the intersecting voxel in the current frame and the backward frame, respectively.
Step (a 19), the current position of the camera is acquired.
Step (a 20) determines a current camera view based on the current location and the location of the current intersection voxel.
Step (a 21) of determining the voxel type of the intersecting voxel in the current frame based on the current camera view and the current voxel projection data.
Step (a 22), when the voxel types of the crossed voxels in the current frame and the backward frame are the same, adding the current voxel projection data and the backward voxel projection data to obtain fusion projection data of a fusion frame; the fused projection data includes fused projection vectors.
Step (a 23), the position of the camera is acquired.
Step (a 24) of determining a camera line of sight based on the position of the camera and the position of the intersecting voxels.
And (a 25) determining an included angle between the camera sight and the fusion projection vector.
And (a 26) taking a vector opposite to the fusion projection vector direction as the fusion normal vector when the included angle is larger than 90 degrees.
And (a 27) taking a vector with the same direction as the fusion projection vector as the fusion normal vector when the included angle is smaller than or equal to 90 degrees.
And (a 28) obtaining a fusion frame based on the fusion projection data and the fusion normal vector.
Step (a 29), acquiring surrounding voxels of the laser line in the fusion frame; the voxels to be processed refer to surrounding voxels satisfying a preset condition.
And (a 30) taking the surrounding voxels as the voxels to be processed when the surrounding voxels have corresponding voxel projection data in at least two frames and the types of the corresponding voxels of the surrounding voxels in at least two frames are the same.
And (a 31) calculating the identification of the storage module where the voxel to be processed is located.
And (a 32) calculating a hash value according to the identification, and searching from the hash table according to the hash value to obtain the position of the voxel to be processed in the storage module.
And (a 33) acquiring a neighborhood voxel of the voxel to be processed based on the position of the voxel to be processed in the storage module.
And (a 34) obtaining the fusion normal vectors of the neighborhood voxels and the fusion normal vector of the voxel to be processed.
Step (a 35), obtaining a local plane based on the fitting of the voxels to be processed and the neighborhood voxels.
And (a 36) determining the orientation of the local plane.
And (a 37) carrying out orientation correction on the fusion normal vector of the voxel to be processed and the fusion normal vectors of the neighborhood voxels based on the orientation of the local plane, and obtaining corrected fusion normal vectors.
Step (a 38), carrying out point cloud smoothing on the voxel to be processed based on the corrected fusion normal vector by adopting a WLOP algorithm to obtain a smoothing result; the smoothing result comprises the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed.
And (a 39), carrying out local point cloud correction by using the RIMLS algorithm based on the smoothed fusion normal vector to be processed, the smoothed fusion projection vector to be processed and the smoothed fusion projection position to be processed until the local cost is minimal, obtaining the corrected voxel to be processed.
And (a 40) displaying the three-dimensional image formed by the corrected voxels to be processed.
In this embodiment, the point cloud fusion can be divided into four main parts: calculating the data of the laser lines, processing the voxel data around the laser lines of each frame, fusing the voxel projection data between frames, and real-time smoothing and local correction. The laser lines are screened to obtain effective laser lines, the normal vector of the laser line crossing portion is calculated, the surrounding voxels are projected onto the corresponding laser lines to obtain the laser line projection vectors, and the laser line projection vectors are projected along the normal of the laser line crossing portion onto the crossing portion of the effective laser lines to obtain the voxel projection data of a single frame; the voxel projection data of two frames are then fused to obtain the fusion projection data and the fusion normal vectors of the fusion frame, so that the fusion frame is displayed, each point in the fusion frame presents the surface detail texture, even including properties such as reflection and occlusion, and the real-time display effect is close to what is actually seen.
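As an illustration of the inter-frame fusion and normal-vector orientation described in steps (a22) to (a28), the following sketch adds the current and backward voxel projection data of an intersecting voxel and orients the fusion normal vector according to the camera line of sight; the sign convention for the line of sight (pointing from the camera towards the voxel) and the function name are assumptions of this example:

import numpy as np

def fuse_intersecting_voxel(current_projection, backward_projection, camera_position, voxel_position):
    # fusion projection vector: sum of the projection data of the same voxel in two consecutive frames
    fused_projection = np.asarray(current_projection, dtype=float) + np.asarray(backward_projection, dtype=float)
    sight = np.asarray(voxel_position, dtype=float) - np.asarray(camera_position, dtype=float)
    if np.dot(sight, fused_projection) < 0:      # included angle greater than 90 degrees
        fused_normal = -fused_projection
    else:                                        # included angle smaller than or equal to 90 degrees
        fused_normal = fused_projection
    return fused_projection, fused_normal / np.linalg.norm(fused_normal)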
In one embodiment, a specific implementation of the point cloud fusion method is described comprehensively in terms of its computations. In the embodiment of the application, the description takes as an example that the voxel projection data comprise tsdf values, i.e. a covariance matrix C, the laser line projection vectors f_i with weights ω_i, the voxel projection vector f_pv, the distance value d, the target projection vector PP_c, the normal vector of the laser line crossing portion (also referred to as the projection unit vector in the embodiment of the application), the three-dimensional point coordinates P_v and the voxel projection position Point_v. The normal vector of a voxel is n_v. The laser line data include the tangent vector before smoothing and the smoothed tangent vector t_Pc.
1. Description of the necessity of Point cloud fusion
In the three-dimensional laser scanning process, the computer displays the point cloud obtained by scanning calculation in real time, and a user can intuitively see the real-time effect and the position of scanning. It is noted that the calculated real-time point cloud does not contain any direction without any processing, and details cannot be distinguished in the screen display. When each point of a point cloud has a unique direction, the point cloud can present the detail texture of the surface, even including the properties of reflection, shielding and the like, so that the effect displayed in real time is close to the effect actually seen.
The process of giving a real-time point cloud a direction is called point cloud fusion (also called depth map fusion in the field of depth maps). The point cloud fusion is widely applied to almost all three-dimensional scanning products, but meanwhile, the technical requirements of the point cloud fusion are quite high, the logic and the flow are quite complex, the requirements of detection and debugging are quite high, and the next section talks about specific difficulties.
2. Technical difficulties of point cloud fusion
There are many papers about fusion of point clouds or depth maps, almost all of which operate on depth maps; considering that our device performs multi-line laser scanning and cannot present a complete depth map in real time, depth-map fusion cannot be applied directly. In terms of point cloud fusion for line lasers, there are currently very few papers, which is one of the optimistic factors. In addition, the point cloud fusion of laser lines overlaps in part with depth-map point cloud fusion, but the latter cannot be used directly. The publication difficulty and application scenarios of the related papers also illustrate the high technical difficulty of fusion.
In practice, the whole algorithm logic is quite complex; a single step often fails or deviates, and the resulting point cloud effect is poor, because errors accumulate continuously as scanning proceeds. In the debugging stage, if the direction and position of some three-dimensional points suddenly look wrong, it often turns out that the data detected each time are not those points, because errors accumulate and spread; therefore the data of every frame need to be checked, and the number of points per frame is directly in the tens of thousands (more precisely, the number of field points), so hundreds of thousands of data items are checked for each frame. In addition, the data to be checked include not only the coordinates and directions of the point cloud, but also the related covariance matrices, the projection vectors of each frame, the calculation of matrix eigenvalues, the effect of local three-dimensional matching, the effect of laser line extraction from the images, the pose of each frame and other factors, so debugging is very demanding, even when the mathematical logic and principle of each step are clearly understood.
3. Step of point cloud fusion
The whole point cloud fusion is divided into the following four main steps, two auxiliary steps and a final reconstruction step.
The four main steps are respectively: calculating data of laser lines (data preparation), processing voxel data around the laser lines of each frame (intra-frame tsdf value calculation), integrating voxel data of different frames (fusion of voxel tsdf values between frames), and smoothing and local correction in real time (optimizing the direction and position of point cloud). The 2 auxiliary steps are respectively an acquisition step of laser line three-dimensional point cloud before four main steps and a step of displaying and storing the point cloud in real time after the four main steps.
The final reconstruction step is to perform three-dimensional reconstruction from the fused point cloud after scanning is finished; optional algorithms include the Screened Poisson algorithm, the Marching Cubes algorithm, and the like.
Here, the description is mainly made for four main steps.
3.1 calculating data of laser lines
After the three-dimensional coordinate data of the laser lines are taken from the auxiliary step, fusion cannot be entered directly, because the subsequent fusion algorithm needs the three-dimensional tangent vector of each point on the laser line. This step therefore calculates the tangent vector of each point on the laser line and performs appropriate tangent-vector smoothing; at the same time, the laser lines are screened according to their quality and the tangent-vector smoothing order, so as to remove laser lines that cannot be used for subsequent fusion and end points on the laser lines that cannot be used for fusion.
3.1.1 screening laser lines and chop lines
Before calculating the tangent vector of a point on a laser line, a smoothing order (or degree of smoothing) needs to be set. The smoothing calculation uses the target point and its adjacent points on the laser line; the higher the smoothing order, the more adjacent points are needed, so the number of points required of a laser line is constrained accordingly. For example, if the smoothing order is 2, at least 5 points are needed to calculate the tangent vector of the middle point, and laser line segments with fewer than 5 points cannot calculate and smooth the tangent vector, so such segments need to be discarded;
meanwhile, note that the laser line segment is actually obtained according to an algorithm for extracting the laser center line in an image shot by the binocular camera, but because the algorithm for extracting the laser line has noise at the online end points, the end points of the corresponding laser line segments are required to be removed, so that the laser line segment which does not meet the screening quantity threshold value needs to be removed, for example, if the threshold value is 5, the first 5 points and the last 5 points of the default end points contain larger noise when the image extracts the center laser line, so that at least 11 points are required to be effective corresponding to the laser line segment, and of course, only 11 points are required to be negatively removed by the condition of tangent vector calculation;
then, due to occlusion of the object, a laser line segment extracted from the image may span a height difference in the actual scene. For example, the laser lines on a table and under the table may be connected together in the captured image because of the shooting angle. Laser line truncation is therefore required: although the parts may not be far apart, during the later fusion the distance may be greater than the fusion resolution and create clutter; this problem is explained within the fusion section.
3.1.2 calculating the tangent vector of the three-dimensional points on the effective laser line and removing the end points
This subsection corresponds mainly to the "calculate and smooth the tangent vectors of points" part of the main steps.
Assume the laser line segment L: {P_1, P_2, ..., P_n} consists of n points, each point P_i = (x_i, y_i, z_i), with 1 ≤ i ≤ n. The tangent vector before smoothing is calculated as:

t_i = ((P_i − P_{i−1}) + (P_{i+1} − P_i)) / 2 = (P_{i+1} − P_{i−1}) / 2    (1)

At this time, if the smoothing order is set to N, the smoothed tangent vector of the three-dimensional point is:

t*_i = (1/(2N+1)) · Σ_{k=i−N}^{i+N} t_k    (2)

The calculated t*_i is then unitized (normalized) to obtain the final tangent vector of the point.

The whole subsection can be summarized with the two formulas (1) and (2). It should be noted that, according to formula (1), calculating the pre-smoothing tangent vector of P_i requires the previous point P_{i−1} and the next point P_{i+1}; if P_i is an end point of the laser line, P_1 or P_n, formula (1) fails because the point on one side is missing, that is, the tangent vectors of the points at the two ends of the laser line segment cannot be calculated, so the 2 points at the head and tail of the laser line segment need to be deleted. After the unsmoothed tangent vectors of the remaining points are obtained, the tangent vectors can be smoothed according to formula (2). Formula (2) shows that when the smoothing order is N, the unsmoothed tangent vectors of the N points before and the N points after the target point are needed; therefore, if there are fewer than N such points on either side of P_i, the tangent vector of that point cannot be smoothed, so these invalid points also need to be deleted, and 2N+2 points are deleted in total. At least two points should remain when the laser line finally enters the fusion, so according to the smoothing order N the laser line needs at least 2N+4 points.
3.2 processing voxel data around each frame of laser line
This section needs a clarification at the outset: each "frame" here is not a picture taken at a single moment, but the set of pictures completed by one cycle of the multiple laser-line modes. Specifically, the device uses 7 pairs of crossed laser lines: at one moment 7 parallel laser lines are projected, and at the next moment 7 parallel laser lines rotated by a certain angle are projected so that they cross the former. Therefore, from the camera's perspective, the crossed-laser pictures of two consecutive shots are actually regarded as one frame in the subsequent fusion.
According to this premise, each frame can be regarded as a first group of parallel laser lines and a second group of parallel laser lines, the two groups forming intersections. This section can then be described as: the two groups of parallel laser lines are processed by two threads, and the processing results are combined to calculate the tsdf values of all target voxels of the frame.
The flow is: the tsdf values of surrounding voxels are calculated for each group of laser line data, then the tsdf values of each line are screened, then some voxels corresponding to the intersection positions of the two groups of laser lines are detected, the tsdf values of the voxels are locally fused, the calculation of covariance matrix and the calculation of direction are carried out on the voxels at the intersection parts, the directions of the voxels are corrected and the sight line is judged, finally, whether the voxels are outside or inside an object is judged, and the data are stored for fusion of tsdf values of the subsequent frames.
3.2.1 calculating tsdf values of voxels surrounding the laser line
The laser line L { P is known 1 ,P 2 ,......,P n Consists of n points, and after extraction of the laser line each point contains coordinates (x, y, z) and a tangent vector (t x ,t y ,t z ) Is used for defining and acquiring surrounding voxels of the laser line. According to the scanning resolution, a global voxel grid is assumed, a boundingbox (an outsourced rectangular box) formed by two continuous points of a laser line is acquired, and a bias e is added, so that a search grid bbx (epsilon) can be formed by expanding an epsilon-size voxel grid outside the boundingbox. Wherein all voxels within bbx (epsilon) need to participate in the computation of the tsdf value. The computer device projects surrounding voxels onto the laser lines of the two three-dimensional points.
For each Voxel Voxel i E bbx (ε), it is noted that this bbx (ε) is defined by the laser line L: { P 1 ,P 2 ,......,P n Two consecutive points on the beam are determined, and the selected two laser points are designated as P 1 And P 2 The Voxel for which the tsdf value needs to be calculated is replaced by the center position point P. Mathematically, P should be projected to P 1 And P 2 And projected target projection point P c The relationship of fig. 4 should be satisfied.
Intuitively, the central point P of surrounding voxels in the graph is a first three-dimensional point P of two points on the laser line 1 And (d)Two three-dimensional point P 2 Projection is carried out on the line segment of the line to obtain a target projection point P c . Note that P projects to P c Is equivalent to P along a unit direction vectorThe length of d is walked, i.e. this geometric model is equivalent to solving +.>And d. Let->Andrespectively P 1 And P 2 Having a direction vector, q 1 And q 2 Is P 1 And P 2 Along the respective direction->And->The same length d is taken to obtain the point. Wherein (1)>Is through P 1 Is related to the tangent vector of PP 1 P 2 The outer product of the normal vector of the plane. Similarly, let go of>Is through P 2 Is related to the tangent vector and PP 1 P 2 The outer product of the normal vector of the plane.
It will be appreciated that P in such a geometric model is equivalent to a moving point on a line segment, whose position should be continuously variable so that it has a value that also continuously varies. In other words P is in line segment q 1 q 2 Position and distance d and P c At P 1 P 2 The above positions need to satisfy a relationship, here assumed to be linear, such that the position scale relationship:
then, within this model, the conditions that need to be met are: q 1 、q 2 Collinear with the P three points;
by derivation of the model, an outer product equation (i.e., first equation) for d can be derived:
the outer product equation represents q 1 P and q 2 P parallel, i.e. the first vector and the second vector are parallel. The distance d can be found by the outer product equation, so that P 1 TowardsThe distance d is shifted to q 1 P in the same way 2 Towards->The distance d is shifted to q 2 Further, the position proportionality coefficient u and the target projection point P can be obtained c Knowing the target projection point P c Then the target projection vector can be obtained +.>Note that P 1 And P 2 Is the point of the laser line, with tangent vector t P1 And t P2 Also according to the proportion, P can be obtained c Is the tangent vector t of (2) Pc . At this time, the distance value d, the target projection vector +.>And projection point-cut vector t Pc It is a calculated value of the laser line segment at a tsdf value of the Voxel. The computer device may store voxel projection data of surrounding voxels, i.e. comprising the distance value d, object projection vector +.>And projection point-cut vector t Pc And carrying out subsequent treatment.
In the three-dimensional laser scanning process, a voxel may be selected more than once during the calculation. When the laser line is strongly curved or the scanned region is complex, a voxel may lie inside the bbx(ε) of several small line segments and thus receive several tsdf values, so screening is needed: when tsdf values are calculated for the same laser line, if a voxel has several candidates, the group of tsdf values with the smallest d is selected.
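A small sketch of this screening step (the tuple layout and names are illustrative): per voxel and per laser line, only the candidate with the smallest |d| is kept.

    def screen_candidates(candidates):
        """candidates: iterable of (voxel_idx, line_id, d, proj_vec, t_Pc)."""
        best = {}
        for vox, line, d, pv, t in candidates:
            key = (vox, line)
            if key not in best or abs(d) < abs(best[key][0]):
                best[key] = (d, pv, t)          # keep the smallest-|d| tsdf group
        return best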
3.2.2 integration of tsdf values of voxels at the intersection to generate covariance matrix
Voxels at the different crossing locations of the multiple laser lines are then processed (it should be appreciated that sometimes two laser lines cross, and sometimes a surrounding voxel sits at a location where 3 or 4 laser lines cross). Assuming that, within one frame of intersecting laser lines, tsdf values have been calculated for a voxel from N laser lines, these tsdf values are combined into a covariance matrix C.
The formula is as follows (the weighted sum of the autocorrelation matrices of the laser line tangent vectors, cf. 3.2.3):

    C = Σ_{i=1..N} ω_i · t_i · t_iᵀ
the number N of the laser lines is included, and the cutting vector t of the laser lines i Weight ω. i represents the ith laser line.
The laser line projection vector, i.e. the target projection vector obtained in 3.2.1 for each intersecting laser line, is likewise accumulated for the voxel with the weights ω_i.
The weights may be determined from the voxel resolution δ at the time of scanning and the distance value d in the voxel projection data; δ can be a default value or can be modified as required. The weight is a decaying function of d with an envelope size σ, where σ is generally chosen according to bbx(ε); a suitable σ gives a better weighting effect.
It will be appreciated that the voxel data may thus be represented by tsdf quantities: for a single laser line, d, the target projection vector and t_Pc; after integration, C and/or the accumulated projection vector and ω.
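The accumulation of per-line tsdf values into C, the weighted projection vector and the weight sum can be sketched as below. The Gaussian form of the weight is an assumption (the text only states that the weight depends on d, the voxel resolution δ and the envelope size σ), as is the simple weighted sum of the projection vectors.

    import numpy as np

    def accumulate_voxel(tsdf_list, sigma):
        """tsdf_list: [(d, proj_vec, t_i), ...], one entry per intersecting laser line."""
        C, f, w_sum = np.zeros((3, 3)), np.zeros(3), 0.0
        for d, proj_vec, t_i in tsdf_list:
            w = np.exp(-(d / sigma) ** 2)        # assumed weight form
            C += w * np.outer(t_i, t_i)          # weighted autocorrelation of tangents
            f += w * np.asarray(proj_vec)
            w_sum += w
        return C, f, w_sum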
3.2.3 solving the projection vector of the intersection voxel and determining voxel position information and line of sight
Since the matrix C is defined by a plurality of intersecting laser lines, i.e. the tangent vectors t_i are not all parallel to each other, the normal vector can be found from the sum of the autocorrelation matrices formed by the t_i. SVD decomposition or eigenvalue decomposition of the matrix C yields three eigenvalues and the corresponding eigenvectors. According to the principle of PCA, the eigenvalue with the smallest modulus and its corresponding eigenvector are selected. This eigenvector is the projection unit vector that the voxel at the intersection should have.
The voxel projection vector f_pv, with its length, is the projection of the accumulated target projection vector onto this projection unit vector, i.e. its component along the unit vector.
The voxel then has the projection point P + f_pv.
As regards the normal vector, the projection vector has been obtained from equation (7) of the SVD decomposition of the covariance matrix. The position of the camera at the current shot, rotated and translated with respect to the first-frame position, is V_c, so the current line of sight is V_c − P_v, where P_v is the centre point coordinate of the voxel. The inner product of the projection vector and the line of sight is then taken, and according to its sign the voxel normal vector n_v is the projection unit vector oriented towards the camera side (equation (8)).
For the line of sight of the next frame, the inner product of the normal vector of the current frame and the real-time line of sight V_c − P_v of the next frame is computed. When this inner product is greater than that of the current frame, the real-time line of sight of the next frame is stored and the voxel normal vector of the next frame is determined from it; when it is smaller, the voxel normal vector of the next frame is determined from the real-time line of sight of the current frame.
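Section 3.2.3 can be sketched as follows; the use of eigh on the symmetric matrix C and the names camera_pos / voxel_center are assumptions, and the orientation rule simply makes the normal face the camera side, as described above.

    import numpy as np

    def solve_voxel_normal(C, f, camera_pos, voxel_center):
        w, V = np.linalg.eigh(C)                  # C is symmetric
        u = V[:, np.argmin(np.abs(w))]            # eigenvector of smallest |eigenvalue|
        f_pv = np.dot(f, u) * u                   # projection vector along u
        sight = camera_pos - voxel_center         # current line of sight V_c - P_v
        n_v = u if np.dot(u, sight) >= 0 else -u  # orient the normal towards the camera
        return f_pv, n_v, sight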
3.2.4 storing voxel attributes for subsequent fusion
In the frame-to-frame processing of the fusion, the stored tsdf value data are C, the accumulated projection vector, ω, f_pv and n_v, obtained by formulas (4)–(8). Within each frame, the temporarily stored tsdf value data are t_Pc, d and the projection data calculated by the geometric model of 3.2.1.
3.3 voxel tsdf value fusion and update between frames
The fusion is mainly performed on the intersecting parts of the intersecting laser lines, because at an intersection the eigenvalues of the covariance matrix are clearly separated mathematically and the normal vector of the local plane onto which the point projects can be solved directly. For non-intersecting parts (the parts between parallel lines), two eigenvalues of the corresponding covariance matrix are close to 0, and the two eigenvectors other than the main direction cannot be determined; because of the multiple solutions, the projection vector of the voxel, and more precisely the normal vector of the plane, has no clear direction in the point cloud. Processing voxels between parallel lines requires additional methods and models; this document is primarily directed to the processing of voxels at intersections.
This part is explained in two sections: one is the hash data structure, which is well suited to real-time three-dimensional scanning; the other is the tsdf value fusion of voxels between frames.
3.3.1 Hash voxel data structure
The following describes the hash data structure and the manner in which the real-time voxel neighborhood is created and searched as set forth herein.
Operating the hash voxel data structure comprises the following parts: establishing the hash structure lookup table; the hash lookup table search mode; deleting hash data; and establishing and searching the hash data of the real-time voxel neighbourhood.
(1) Building a hash structure lookup table:
First, the number of buckets of the hash data structure (hash table) to be built is set, each bucket representing one hash value; then the number of hash entries in each bucket is set, each entry representing the storage position of different data that share the same hash value (a hash collision). After this preparation, voxel blocks (each containing 8 × 8 × 8 voxels) are created to store the tsdf values of the voxels during the actual fusion, and a lookup pointer is created for each bucket of the hash table. When voxel data arrive, the hash value of the block containing the voxel is calculated from the three prime numbers 73856093, 19349669 and 83492791, and a new entry is created in the corresponding bucket; if the entry already exists, it does not need to be created again. If the bucket is full, a new bucket is opened after all existing buckets, the buckets corresponding to the original hash value are linked with pointers, and a new entry is created in the new bucket.
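A compressed sketch of this construction, assuming the standard xor-combination of the three primes (the text names the primes but not the exact mixing) and using Python lists as buckets instead of fixed-size buckets with overflow chaining:

    P1, P2, P3 = 73856093, 19349669, 83492791    # the three primes named above

    class VoxelHash:
        def __init__(self, n_buckets=1 << 16, block_size=8):
            self.n_buckets, self.block_size = n_buckets, block_size
            self.buckets = [[] for _ in range(n_buckets)]   # entries: (block_coord, block)

        def _hash(self, bx, by, bz):
            return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % self.n_buckets

        def get_or_create_block(self, vx, vy, vz):
            b = (vx // self.block_size, vy // self.block_size, vz // self.block_size)
            bucket = self.buckets[self._hash(*b)]
            for coord, block in bucket:                      # walk existing entries
                if coord == b:
                    return block
            block = {}                                       # local voxel index -> tsdf data
            bucket.append((b, block))                        # new entry for this hash value
            return block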
(2) Hash lookup and localization of voxels
When searching for the position of a voxel, the search starts from the hash table; the corresponding block is then located among all blocks according to the pointer, and the voxel inside the selected block is finally located from the voxel coordinates and the voxel resolution.
If the block containing the corresponding voxel cannot be found when the hash table is searched, the hash data structure is built as in the first step, creating a hash entry and the corresponding block.
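Continuing the sketch above, locating a voxel amounts to one hash lookup for its block followed by the local index inside the 8 × 8 × 8 block; delta is the voxel resolution and the names are illustrative.

    def locate_voxel(vhash, p, delta):
        v = tuple(int(c // delta) for c in p)                # global voxel index
        block = vhash.get_or_create_block(*v)                # created on demand, as in (2)
        local = tuple(c % vhash.block_size for c in v)       # position inside the block
        return block, local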
(3) Deletion of hash data
In practice there will inevitably be voxels or blocks that are obviously wrong or that can be merged; one option is to leave the space idle, another is to delete the redundant space data in real time. When deleting, note that the corresponding hash entry or hash bucket must be deleted together with the block. When deleting hash table data, note also that the pointer chain needs to be locally re-linked, similar to pointer re-linking when deleting from a linked list.
(4) Neighborhood hash data structures and lookup tables presented herein
In the subsequent fusion process it was found that, if the local point cloud is to be detected and smoothly moved and corrected in real time, a neighbourhood search mode must be added for acceleration, otherwise the real-time effect is difficult to achieve. Therefore, when each block is first established, a neighbourhood of blocks is introduced; for ease of lookup, the position of each voxel inside the block (8 × 8 × 8 voxels) is treated separately, and a lookup table of several hundred lines is established so that the corresponding line can be read directly from the voxel coordinates and the neighbouring voxels can be located directly; an illustrative sketch is given below. During direct table lookup, the hash value and the pointer into the hash table can also be computed in real time to assist subsequent searches.
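A sketch of such a per-block neighbourhood lookup table (one entry per local voxel position, here against a 26-neighbourhood; the table layout is an assumption): each entry stores, for every neighbour offset, the block offset and the neighbour's local position, so a neighbour is found with one table read plus at most one hash lookup.

    def build_neighbour_table(block_size=8):
        offsets = [(dx, dy, dz)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                   if (dx, dy, dz) != (0, 0, 0)]
        table = {}
        for i in range(block_size):
            for j in range(block_size):
                for k in range(block_size):
                    entries = []
                    for dx, dy, dz in offsets:
                        x, y, z = i + dx, j + dy, k + dz
                        block_off = (x // block_size, y // block_size, z // block_size)
                        local = (x % block_size, y % block_size, z % block_size)
                        entries.append((block_off, local))
                    table[(i, j, k)] = entries
        return table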
3.3.2 fusion of voxel tsdf values
According to the summary of 3.2.4, the tsdf value data preserved in a voxel are C, the accumulated projection vector, ω, f_pv and n_v. According to the description of the line of sight in 3.2.3, the line of sight is V_c − P_v, recorded as sight; this quantity is stored in a new vector field within each voxel. In addition, a new data structure of the same size is added; it is consistent with the existing voxel storage and represents the back-side voxels through the correspondence of the pointers of the existing hash table.
The back-side voxels arise because, owing to factors such as the scanning angle and errors during actual scanning, the projection direction vector of a voxel at a given moment may point in the same direction as the line of sight V_c − P_v, in which case the voxel is considered to be inside the scanned region at that moment; when the projection direction vector and V_c − P_v point in opposite directions, the voxel is considered to be outside the currently scanned region. That is, the voxel lies either inside or outside the object surface. The newly added back-side voxel structure stores internal voxels, while the voxels in the blocks of the hash table of 3.3.1 store external voxels. Therefore, each time a voxel is processed it must be determined whether it is an external or an internal voxel at that moment; the sign is given by the following formula:
When sign in formula (9) takes the value 1, the voxel is outside the object; when it is −1, the voxel is inside the object. The two cases must be fused separately, otherwise the tsdf values of the voxels become disordered and stray points appear.
The process of tsdf value fusion can be described as the following steps (an illustrative sketch is given after the list):
(1) Judge the corresponding temporary tsdf value calculated earlier (in the per-frame processing) to determine the sign of the voxel, i.e. whether the voxel is external or internal by formula (9); after the internal/external determination the operations are the same, and the description below takes the case of a voxel outside the object;
(2) When the voxel is an external voxel, the temporary tsdf value is added directly to the tsdf value stored in the external voxel data structure, updating C, the accumulated projection vector and ω. Two cases are distinguished: in the first, the corresponding voxel already exists in the established external voxel structure Voxel_outer, and the corresponding C, projection vector and ω only need to be added directly; in the second, the corresponding voxel does not yet exist. If the block containing the voxel exists, a new slot is created at the corresponding voxel position of the block to store C, the projection vector and ω; if the block does not exist either, a new bucket, entry, block and voxel are created, following the hash data structure construction of 3.3.1, to store C, the projection vector and ω, and the neighbourhood block data are built according to the neighbourhood lookup table;
(3) After C, the accumulated projection vector and ω have been updated, the eigenvector corresponding to the smallest eigenvalue modulus of C is solved by SVD decomposition, and the voxel projection vector f_pv and the normal vector n_v are then calculated according to equations (7) and (8);
(4) The inner product of n_v with the stored sight and with the current line of sight V_c − P_v is computed; the line of sight giving the larger modulus is selected, and the sight is updated and stored accordingly.
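A sketch of steps (2)–(4) for one external voxel, under the same assumptions as the earlier accumulate_voxel / solve_voxel_normal sketches (the dict keys and the exact storage layout are illustrative; step (1), the internal/external sign test, is assumed to have been done by the caller when choosing the storage structure):

    import numpy as np

    def fuse_voxel(voxel, C_new, f_new, w_new, camera_pos, voxel_center):
        # (2) add the temporary tsdf data of this frame to the stored data
        voxel["C"] = voxel.get("C", np.zeros((3, 3))) + C_new
        voxel["f"] = voxel.get("f", np.zeros(3)) + f_new
        voxel["w"] = voxel.get("w", 0.0) + w_new
        # (3) re-solve the projection unit vector and the normal from the fused C
        w_eig, V = np.linalg.eigh(voxel["C"])
        u = V[:, np.argmin(np.abs(w_eig))]
        f_pv = np.dot(voxel["f"], u) * u
        sight_new = camera_pos - voxel_center
        n_v = u if np.dot(u, sight_new) >= 0 else -u
        # (4) keep the line of sight whose inner product with n_v has the larger modulus
        sight_old = voxel.get("sight", sight_new)
        voxel["sight"] = (sight_new
                          if abs(np.dot(n_v, sight_new)) >= abs(np.dot(n_v, sight_old))
                          else sight_old)
        voxel["n_v"], voxel["f_pv"] = n_v, f_pv
        return voxel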
3.4 performing smoothing and correction
At this point, each frame of the current scan produces a point cloud that is updated in real time from the voxels to be processed; voxels that have already been processed do not update the corresponding point cloud, so every frame yields a fused point cloud for updating.
The directly fused point cloud already has directions, and details and textures can be seen in the visual display. However, owing to scanning incompleteness and errors, the positions and directions of the point cloud deviate from the real object surface, and if the errors accumulate too much the point positions produce stray points. To make the point cloud of the real-time scan appear smooth and closer to the real object surface, the directions of the local point cloud must be smoothed and the point positions corrected in real time during scanning.
After reviewing a number of papers, the well-known WLOP algorithm is adopted for the real-time direction smoothing. The algorithm is originally aimed at locally optimizing a global point cloud; by adding the neighbourhood data structure and combining it with the local optimization of WLOP, smoothing can be achieved during real-time scanning. For correcting the real-time point cloud positions, the well-known RIMLS algorithm is adopted. This algorithm likewise performs local optimization iterations on a global point cloud to obtain a globally optimized point cloud; as a whole it is quite complex and computationally heavy. Here it is applied and modified for real-time scanning using the locally computed model, so that the positions of the real-time point cloud are corrected.
The final model surface is clean and smooth, close to the true value, and the scanning effect is finally given.
In this part, WLOP smoothing is performed first, followed by RIMLS local real-time point shifting.
3.4.1 WLOP real-time smoothing
WLOP can realize smoothing and filtering of a global point cloud, making the normal vectors continuous and smooth with almost no stray points.
In the actual fusion process the global point cloud cannot be processed, so the effect that the original paper achieves on a global point cloud cannot be reached in real time, but processing the local point cloud can come close to that smoothing effect. In our real-time fusion process (an illustrative sketch is given after the list):
(1) Acquiring all voxels to be processed in the current scanning frame and tsdf value data thereof according to 3.3;
(2) Simultaneously acquire the tsdf value data of the neighbouring voxels of the voxels to be processed; the search mode is to obtain the voxel data of the corresponding positions directly from the neighbourhood lookup table established at the start;
(3) According to the n_v, point_v and P_v of the voxel to be processed and of the neighbourhood voxels, carry out local WLOP smoothing iterations until a threshold is met or the maximum number of iterations is reached, discarding voxels that do not meet the threshold condition;
(4) The smoothed data are saved back to the voxels.
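A strongly simplified single WLOP-style iteration over one local neighbourhood, as a sketch of step (3); the support radius h, the repulsion weight mu and the kernel are assumptions, and the full WLOP formulation of the cited paper contains additional terms.

    import numpy as np

    def wlop_iteration(X, P, h, mu=0.45):
        """X: (m,3) points being smoothed; P: (n,3) fixed neighbourhood points."""
        theta = lambda r: np.exp(-(r / (h / 4.0)) ** 2)
        X_new = np.empty_like(X)
        for i, x in enumerate(X):
            # attraction towards the raw neighbourhood points
            r = np.linalg.norm(P - x, axis=1) + 1e-9
            a = theta(r) / r
            attract = (P * a[:, None]).sum(0) / (a.sum() + 1e-12)
            # repulsion from the other smoothed points (keeps the spacing even)
            dist = np.linalg.norm(X - x, axis=1) + 1e-9
            dist[i] = np.inf
            b = theta(dist) / dist
            repulse = ((x - X) * b[:, None]).sum(0) / (b.sum() + 1e-12)
            X_new[i] = attract + mu * repulse
        return X_new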
3.4.2 RIMLS real-time movement correction of the local point cloud
The RIMLS algorithm mainly corrects the point cloud of each local range (within a subtree of the octree) of the global point cloud, thereby influencing the cost of the global correction, and then iterates continuously; when the global cost is lowest, the point clouds of the individual local corrections together constitute the globally corrected point cloud.
In the actual fusion process of the embodiments of the present application, this algorithm, which originally operates globally, is likewise modified so as to operate locally (an illustrative sketch is given after the list).
(1) Acquiring all voxels to be processed in the current scanning frame and tsdf value data thereof according to 3.3;
(2) Simultaneously acquire the tsdf value data of the neighbouring voxels of the voxels to be processed; the search mode is to obtain the voxel data of the corresponding positions directly from the neighbourhood lookup table established at the start;
(3) According to the n_v, point_v and P_v of the voxel to be processed and of the neighbourhood voxels, carry out local RIMLS iterative calculation on the normal vectors smoothed by the WLOP processing, so that the tested value meets the threshold condition during the iteration;
(4) The locations of the smoothed points are saved.
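As a very reduced stand-in for step (3) — not the full RIMLS of the cited paper, which re-weights by normal residuals over several iterations — one robust moving-least-squares projection of a voxel point onto its locally fitted plane could look like this; every parameter is an assumption.

    import numpy as np

    def mls_project(x, neighbours, normals, h, sigma_n=0.6):
        r = np.linalg.norm(neighbours - x, axis=1)
        w = np.exp(-(r / h) ** 2)                              # spatial weight
        n_avg = (normals * w[:, None]).sum(0)
        n_avg /= np.linalg.norm(n_avg) + 1e-12
        dn = np.linalg.norm(normals - n_avg, axis=1)           # normal residuals
        w *= np.exp(-(dn / sigma_n) ** 2)                      # robust re-weighting
        c = (neighbours * w[:, None]).sum(0) / (w.sum() + 1e-12)
        return x - np.dot(x - c, n_avg) * n_avg                # move onto the local plane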
In one embodiment, as shown in fig. 8, a schematic diagram of an object to be scanned in one embodiment is shown. The pattern of the dragon pendant in fig. 8 is less than 2 mm thick at its thickest part. Fig. 9 is the original three-dimensional point cloud obtained by scanning the dragon pendant with a three-dimensional scanner in one embodiment. It can be seen that only a roughly circular outline is visible in the original three-dimensional point cloud image, with no detail. The original three-dimensional point cloud image is processed by the point cloud fusion method in the embodiment of the application, and the fusion frame shown in fig. 10 is obtained. The left side of fig. 10 is the front of the dragon pendant and the right side is the back. The texture in fig. 10 is clear, the shading is obvious, and the result is close to the effect of a photograph of the original object. Fig. 11 is a schematic diagram of a reconstructed fusion frame in one embodiment. The fusion normal vector and the fusion projection vector are input into a three-dimensional reconstruction algorithm, and a fusion frame as shown in fig. 11 can be obtained; that is, three-dimensional reconstruction of fig. 10 gives the fusion frame shown in fig. 11. Compared with fig. 10, the reconstructed fusion frame comes even closer to the effect seen by the naked eye.
In one embodiment, as shown in fig. 12, a schematic diagram of an object to be scanned in another embodiment is shown. In fig. 12, the stamp text is raised by less than 1 mm. The characters in fig. 12 are the lines of the poem "Ode to the Lime" ("Hewn out of the deep mountains by a thousand hammers and ten thousand chisels, it treats the burning of the fierce fire as nothing; it fears not being crushed to powder, so long as its whiteness is left in the world of men"), engraved in relief (yang engraving). Fig. 13 is the original three-dimensional point cloud image of the stamp scanned by a three-dimensional scanner in one embodiment. It can be seen that only a roughly square outline is visible in the original three-dimensional point cloud image, with no detail. The original three-dimensional point cloud image is processed by the point cloud fusion method in the embodiment of the application, and the fusion frame shown in fig. 14 is obtained. The left side of fig. 14 is the front of the stamp, and the right side is the side adjoining the front. The texture in fig. 14 is clear, the shading is obvious, the result is close to the effect of a photograph of the original object, the content of the poem can largely be read, and the accuracy is very high. Fig. 15 is a schematic diagram of a fusion frame in yet another embodiment. The fusion frame in fig. 15 shows the back of the stamp of fig. 14 and the side adjoining the back. The Chinese characters in fig. 15 are clearer than those in fig. 14 and approach the effect of the characters printed by the seal. The point cloud fusion method provided by the embodiment of the application is applied to a multi-line laser scanning scene and, through the optimization of the algorithm, can obtain an accurate three-dimensional image of an object.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, and the steps (a1) to (a40) are shown in the order indicated by their numbering, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be executed in other orders. Moreover, at least some of the steps in fig. 2 may include several sub-steps or stages; these are not necessarily performed at the same moment but may be performed at different moments, and their order is not necessarily sequential, so they may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 16, a block diagram of a point cloud fusion device in one embodiment is shown. Fig. 16 provides a point cloud fusion apparatus, which may employ a software module or a hardware module, or a combination of both, as part of a computer device, and the apparatus specifically includes: a laser line processing module 1602, a voxel projection data determination module 1604, a point cloud image acquisition module 1606, a fusion frame data determination module 1608, and a fusion frame acquisition module 1610, wherein:
A laser line processing module 1602, configured to obtain a laser line in a current frame; the current frame is a three-dimensional point cloud picture obtained based on multi-line laser scanning;
a voxel projection data determination module 1604, configured to determine intersecting voxels at the intersection of the current laser lines based on the laser lines, and to project, for each intersecting voxel, the intersecting voxel onto the current laser line intersection to obtain current voxel projection data;
a point cloud image acquisition module 1606, configured to acquire a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame;
a voxel projection data determination module 1604 for determining backward voxel projection data of intersecting voxels in the backward frame;
the fusion frame data determining module 1608 is used for carrying out point cloud fusion corresponding to each crossed voxel based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and fusion normal vector of a fusion frame;
a fused frame acquisition module 1610, configured to acquire a fused frame based on the fused projection data and the fusion normal vector.
In the embodiment, a three-dimensional point cloud image of the current frame is obtained, laser lines in the current frame are extracted, so that voxel data at the intersection part of the laser lines are determined, and voxels at the intersection part of the laser lines are processed subsequently, so that the data quantity to be processed can be reduced; corresponding to each crossed voxel, projecting the crossed voxel to a laser line crossed part to obtain voxel projection data and a voxel normal vector, obtaining backward voxel projection data of the crossed voxel in a backward frame, carrying out point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and fusion normal vector of a fusion frame, obtaining the fusion frame based on the fusion projection data and the fusion normal vector, adopting multi-line laser scanning to display detail textures on the surface of an object, and realizing good three-dimensional image quality of the object, wherein the real-time display effect is close to that of naked eyes; meanwhile, the data volume is reasonable, the calculation complexity is low, and the real-time performance of three-dimensional scanning is not affected.
In one embodiment, the fusion frame data determination module 1608 is configured to: add the current voxel projection data and the backward voxel projection data to obtain fusion projection data of a fusion frame; and determine fusion normal vectors of the intersecting voxels based on the fusion projection data.
In this embodiment, the fusion projection data of the fusion frame is obtained by adding the current voxel projection data and the backward voxel projection data, the fusion normal vector of the intersecting voxels is determined based on the fusion projection data, more accurate data is obtained by voxel fusion between the frames, the quality of the three-dimensional image is improved, and meanwhile, the real-time performance of the three-dimensional scanning is not affected due to reasonable data quantity and low calculation complexity.
In one embodiment, the fusion frame data determination module 1608 is configured to: respectively determining voxel types of the intersected voxels in the current frame and the backward frame; and when the voxel types of the intersecting voxels in the current frame and the backward frame are the same, adding the current voxel projection data and the backward voxel projection data to obtain fusion projection data of the fusion frame.
In this embodiment, it is found through analysis that a mixed point occurs when the voxel types are different, and by determining voxel data of intersecting voxels in a current frame and a backward frame respectively, when the voxel types in the current frame and the backward frame are the same, the current voxel projection data and the backward voxel projection data are added to obtain the fusion projection data of the fusion frame, so as to reduce the fusion mixed point and improve the quality of the three-dimensional image.
In one embodiment, the fusion frame data determination module 1608 is configured to: acquiring the current position of a camera; determining a current camera view based on the current location and the location of the current intersection voxel; the voxel type of the intersected voxel in the current frame is determined based on the current camera view and the current voxel projection data.
In this embodiment, the current camera view is determined based on the current position of the camera and the position of the current intersecting voxel, and the voxel type of the intersecting voxel in the current frame is then determined based on the current camera view and the current voxel projection data, so that the internal or external state of the voxel can be judged from the actual shooting viewpoint before fusion.
In one embodiment, the fused projection data includes fused projection vectors; a fusion frame data determination module 1608 for: acquiring the position of a camera; determining a camera line of sight based on the position of the camera and the position of the intersecting voxels; determining an included angle between the camera line of sight and the fusion projection vector; when the included angle is larger than 90 degrees, a vector opposite to the fusion projection vector is used as the fusion normal vector; when the included angle is smaller than or equal to 90 degrees, the vector with the same direction as the fusion projection vector is used as the fusion normal vector.
In this embodiment, based on the position of the camera and the position of the intersecting voxel, the camera line of sight is determined, and then the angle between the camera line of sight and the fusion projection vector is calculated, so as to determine the fusion normal vector, so that the normal vector direction of the voxel is towards the area where the camera is located, and the imaging effect of the three-dimensional image is improved.
In one embodiment, the fusion frame data determination module 1608 is configured to: determining a normal vector of a crossing part of the current laser line; and projecting the intersecting voxels to the current laser line intersecting part along the normal vector of the current laser line intersecting part to obtain current voxel projection data.
In this embodiment, by determining a normal vector of the current laser line intersection, the intersection voxel is projected to the current laser line intersection along the normal vector, so as to obtain current voxel projection data, and the intersection voxel is orthographically projected to the current laser line intersection, so that accuracy of the voxel projection data is improved.
In one embodiment, the fusion frame data determination module 1608 is configured to: acquiring a current laser line tangent vector of a crossed laser line corresponding to the laser line crossing part; a normal vector of the current laser line intersection is determined based on the current laser line tangent vector.
In this embodiment, in the laser scanning process, the intersecting part of the laser line is not necessarily a plane, and in most cases is a curved surface, so that the normal vector of the intersecting part of the laser line is calculated by the laser line tangent vector of the intersecting laser line corresponding to the intersecting part of the laser line, so that the intersecting voxel is perpendicularly projected to the intersecting part of the laser line, the projection is more accurate, and the three-dimensional image of the object is more true.
In one embodiment, the laser line processing module 1602 is configured to: extracting a laser line in a current frame; screening effective laser lines which accord with the preset laser line characteristics from the laser lines; for three-dimensional points on the effective laser line, acquiring a tangent vector before smoothing of the three-dimensional points; smoothing the effective laser line based on the tangent vector before smoothing of the three-dimensional points to obtain a tangent vector set; the set of tangent vectors includes smoothed tangent vectors of three-dimensional points on the active laser line. A fusion frame data determination module 1608 for: and obtaining the laser line tangent vector of the crossed laser line corresponding to the laser line crossed part from the tangent vector set.
In this embodiment, the validity of the laser line is determined by screening the laser line, then the tangent vector of the three-dimensional point is obtained for smoothing, the effective laser line, the effective three-dimensional point and the smoothed tangent vector are obtained, and then the three-dimensional image of the object is constructed based on the obtained effective three-dimensional point and the smoothed tangent vector, so that the noise of the three-dimensional image is greatly reduced, and the quality of the three-dimensional image is improved.
In one embodiment, the laser line processing module 1602 is configured to: deleting the laser lines whose length does not reach the preset length, and reserving candidate laser lines whose length reaches the preset length; under the condition that the distance between three-dimensional points of the candidate laser lines exceeds a preset distance, cutting off the candidate laser lines to obtain cut laser lines; and combining the candidate laser line and the cut laser line to obtain the effective laser line.
In this embodiment, the laser line with the length not reaching the preset length is deleted, the candidate laser line with the length reaching the preset length is reserved, the candidate laser line is cut off under the condition that the distance between the three-dimensional points of the candidate laser line exceeds the preset distance, the cut-off laser line is obtained, the candidate laser line and the cut-off laser line are finally combined to obtain an effective laser line, the laser line is processed based on the laser parallax principle, the accuracy of subsequent laser intersection calculation is guaranteed, and therefore the display effect of the three-dimensional image is optimized.
In one embodiment, the point cloud fusion device further includes a point cloud correction module, configured to: acquiring a voxel to be processed in a fusion frame and a neighborhood voxel of the voxel to be processed; smoothing the voxels to be processed based on the neighborhood voxels to obtain a smoothing result; performing point cloud correction on the voxel to be processed based on the smoothing result to obtain a corrected voxel to be processed; and displaying the three-dimensional image formed by the corrected voxels to be processed.
In this embodiment, by acquiring the voxel to be processed and the corresponding neighborhood data in the fusion frame and performing smoothing processing, the normal vector of the voxel to be processed can be corrected, so that the transition is smoother; and carrying out point cloud correction on the voxels to be processed based on the smoothing result, and displaying a three-dimensional image formed by the corrected voxels to be processed, so that a three-dimensional image with continuous, consistent and smooth normal vector and good three-dimensional point coordinate connection can be obtained, the quality of the three-dimensional image is improved, and the real-time performance of the three-dimensional image is ensured.
In one embodiment, the point cloud correction module is configured to: acquiring fusion normal vectors of neighbor voxels and fusion normal vectors of voxels to be processed; carrying out normal vector correction on the fusion normal vector of the neighbor voxels and the fusion normal vector of the voxels to be processed to obtain corrected fusion normal vector in the fusion frame; and carrying out smoothing treatment on the voxels to be treated based on the corrected fusion normal vector to obtain a smoothing treatment result.
In this embodiment, the normal vector correction is performed on the normal vector of the neighbor voxels and the fusion normal vector of the voxel to be processed, and then the smoothing processing is performed on the voxel to be processed and the corresponding neighbor voxels based on the corrected fusion normal vector, so as to obtain a smoothing processing result, that is, fine tuning is performed after coarse tuning, so that the accuracy of the normal vector of the fusion voxel can be improved, and therefore, the detail texture of the three-dimensional image is not lost under the condition of smoothing.
In one embodiment, the point cloud correction module is configured to: obtaining a local plane based on fitting of the voxels to be processed and the neighborhood voxels; determining the orientation of the local plane; and carrying out orientation correction on the fusion normal vector of the voxels to be processed and the neighborhood voxels based on the orientation of the local plane, and obtaining the corrected fusion normal vector in the fusion frame.
In this embodiment, a local plane is fitted based on the voxels to be processed and the neighborhood voxels, the orientation of the local plane is determined, and the normal vector of the voxels to be processed and the neighborhood voxels is subjected to orientation correction based on the orientation of the local plane, so as to obtain a corrected normal vector, and the normal vector error caused by the scanning view angle and the calculation error can be corrected, so that the accuracy of the normal vector of the voxels is improved, and the three-dimensional image is smoother.
In one embodiment, the point cloud correction module is configured to: smoothing the voxels to be processed based on the neighborhood voxels by adopting a WLOP algorithm to obtain a smoothing result; and carrying out local point cloud correction on the voxels to be processed based on the smoothing result by adopting an RIMLS algorithm until the local cost is minimum, and obtaining corrected voxels to be processed.
In this embodiment, smoothing is performed on voxels to be processed by using a WLOP algorithm to obtain a smoothing result, and then, local point cloud correction is performed on the voxels to be processed by using a RIMLS algorithm based on the smoothing result, and voxels most relevant to the object are selected for processing, so that full voxel processing is avoided, instantaneity of three-dimensional scanning is improved, and point cloud correction efficiency is improved without affecting point cloud correction effects.
For specific limitations of the point cloud fusion device, reference may be made to the above limitation of the point cloud fusion method, and no further description is given here. The modules in the point cloud fusion device can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal device, and an internal structure diagram thereof may be as shown in fig. 17. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a point cloud fusion method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 17 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods in accordance with the embodiments may be accomplished by way of a computer program stored in a non-transitory computer readable storage medium, which when executed may comprise the steps of the above described embodiments of the methods. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the application.

Claims (16)

1. A method of point cloud fusion, the method comprising:
acquiring a laser line in a current frame; the current frame is a three-dimensional point cloud picture obtained based on multi-line laser scanning;
determining crossing voxels at a crossing portion of the current laser line based on the laser line;
corresponding to each intersecting voxel, projecting the intersecting voxel to the current laser line intersecting part to obtain current voxel projection data;
acquiring a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame;
determining backward voxel projection data of the intersected voxels in the backward frame;
corresponding to each crossed voxel, carrying out point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and fusion normal vector of a fusion frame;
and obtaining the fusion frame based on the fusion projection data and the fusion normal vector.
2. The method of claim 1, wherein the performing point cloud fusion based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and a fusion normal vector of a fusion frame comprises:
adding the current voxel projection data and the backward voxel projection data to obtain fusion projection data of a fusion frame;
And determining fusion normal vectors of the intersecting voxels based on the fusion projection data.
3. The method of claim 2, wherein the adding the current voxel projection data and the backward voxel projection data to obtain fused projection data of a fused frame comprises:
determining voxel types of the intersected voxels in the current frame and the backward frame respectively;
and when the voxel types of the intersected voxels in the current frame and the backward frame are the same, adding the current voxel projection data and the backward voxel projection data to obtain fusion projection data of a fusion frame.
4. A method according to claim 3, wherein said determining the voxel type of the intersected voxels in the current frame comprises:
acquiring the current position of a camera;
determining a current camera view based on the current location and a location of a current intersection voxel;
a voxel type of the intersecting voxel in the current frame is determined based on the current camera view and the current voxel projection data.
5. The method of claim 2, wherein the fused projection data comprises fused projection vectors;
The determining a fusion normal vector of the intersecting voxels based on the fusion projection data comprises:
acquiring the position of a camera;
determining a camera line of sight based on the position of the camera and the position of the intersecting voxel;
determining an included angle between the camera sight and the fusion projection vector;
when the included angle is larger than 90 degrees, a vector with the opposite direction to the fusion projection vector is used as the fusion normal vector;
and when the included angle is smaller than or equal to 90 degrees, taking a vector with the same direction as the fusion projection vector as the fusion normal vector.
6. The method of claim 1, wherein said projecting the intersected voxels into the current laser line intersection portion to obtain current voxel projection data comprises:
determining a normal vector of the current laser line crossing portion;
and projecting the intersecting voxels to the current laser line intersecting part along the normal vector of the current laser line intersecting part to obtain current voxel projection data.
7. The method of claim 6, wherein the determining the normal vector of the current laser line intersection comprises:
acquiring a current laser line tangent vector of the crossed laser line corresponding to the current laser line crossing part;
A normal vector of the current laser line intersection is determined based on the current laser line tangent vector.
8. The method of claim 7, wherein the method further comprises:
extracting a laser line in the current frame;
screening effective laser lines which accord with the preset laser line characteristics from the laser lines;
for the three-dimensional points on the effective laser line, acquiring a tangent vector before smoothing of the three-dimensional points;
smoothing the effective laser line based on the tangent vector before smoothing of the three-dimensional points to obtain a tangent vector set; the tangent vector set comprises smoothed tangent vectors of three-dimensional points on the effective laser line;
the obtaining the current laser line tangent vector of the crossed laser line corresponding to the current laser line crossing part comprises the following steps:
and acquiring the current laser line tangent vector of the crossed laser line corresponding to the current laser line crossed part from the tangent vector set.
9. The method of claim 8, wherein the screening the laser lines for valid laser lines that meet a predetermined laser line characteristic comprises:
deleting laser lines with the length not reaching the preset length, and reserving candidate laser lines with the length reaching the preset length;
Cutting off the candidate laser line to obtain a cut laser line under the condition that the distance between the three-dimensional points of the candidate laser line exceeds a preset distance;
and combining the candidate laser line and the cut laser line to obtain an effective laser line.
10. The method according to any one of claims 1 to 9, further comprising:
acquiring a voxel to be processed in the fusion frame and a neighborhood voxel of the voxel to be processed;
carrying out smoothing treatment on the voxels to be treated based on the neighborhood voxels to obtain a smoothing treatment result;
performing point cloud correction on the voxel to be processed based on the smoothing result to obtain a corrected voxel to be processed;
and displaying the three-dimensional image formed by the corrected voxels to be processed.
11. The method according to claim 10, wherein smoothing the voxel to be processed based on the neighborhood voxels to obtain a smoothed result comprises:
acquiring fusion normal vectors of the neighborhood voxels and fusion normal vectors of the voxels to be processed;
carrying out normal vector correction on the fusion normal vector of the neighborhood voxels and the fusion normal vector of the voxels to be processed to obtain corrected fusion normal vector in the fusion frame;
And carrying out smoothing treatment on the voxels to be treated based on the corrected fusion normal vector to obtain a smoothing treatment result.
12. The method according to claim 11, wherein performing normal vector correction on the fusion normal vector of the neighborhood voxel and the fusion normal vector of the voxel to be processed to obtain a corrected fusion normal vector in the fusion frame, comprises:
obtaining a local plane based on the voxel to be processed and the neighborhood voxel fitting;
determining an orientation of the local plane;
and carrying out orientation correction on the fusion normal vector of the voxel to be processed and the fusion normal vector of the neighborhood voxel based on the orientation of the local plane to obtain a corrected fusion normal vector in the fusion frame.
13. The method according to claim 10, wherein smoothing the voxel to be processed based on the neighborhood voxels to obtain a smoothed result comprises:
smoothing the voxels to be processed based on the neighborhood voxels by adopting a WLOP algorithm to obtain a smoothing result;
and performing point cloud correction on the voxel to be processed based on the smoothing result to obtain a corrected voxel to be processed, wherein the method comprises the following steps:
And carrying out local point cloud correction on the voxel to be processed based on the smoothing result by adopting an RIMLS algorithm until the local cost is minimum, and obtaining the corrected voxel to be processed.
14. A point cloud fusion device, the device comprising:
the laser line processing module is used for acquiring a laser line in the current frame; the current frame is a three-dimensional point cloud picture obtained based on multi-line laser scanning;
a voxel projection data determining module for determining crossing voxels at the crossing portion of the current laser line based on the laser line; and the method is used for corresponding to each intersecting voxel, projecting the intersecting voxels to the intersecting part of the current laser line, and obtaining current voxel projection data;
the point cloud image acquisition module is used for acquiring a backward frame; the backward frame is a three-dimensional point cloud image scanned after the current frame;
the voxel projection data determining module is used for determining backward voxel projection data of the crossed voxels in the backward frame;
the fusion frame data determining module is used for carrying out point cloud fusion corresponding to each crossed voxel and based on the current voxel projection data and the backward voxel projection data to obtain fusion projection data and fusion normal vector of a fusion frame;
And the fusion frame acquisition module is used for acquiring the fusion frame based on the fusion projection data and the fusion normal vector.
15. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 13 when the computer program is executed.
16. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 13.
CN202310505477.4A 2023-05-06 2023-05-06 Point cloud fusion method, device, computer equipment and storage medium Pending CN116612253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310505477.4A CN116612253A (en) 2023-05-06 2023-05-06 Point cloud fusion method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116612253A true CN116612253A (en) 2023-08-18

Family

ID=87675726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310505477.4A Pending CN116612253A (en) 2023-05-06 2023-05-06 Point cloud fusion method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116612253A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination