CN115546371A - Point cloud optimization method and system, electronic device and storage medium - Google Patents

Point cloud optimization method and system, electronic device and storage medium Download PDF

Info

Publication number
CN115546371A
CN115546371A (application CN202211262451.3A)
Authority
CN
China
Prior art keywords
point cloud
rgb image
rendering
gridding
gridded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211262451.3A
Other languages
Chinese (zh)
Inventor
万庭雷
刘欣
王旭光
周扬帆
嵇亚飞
程诚
黎江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Nano Tech and Nano Bionics of CAS
Original Assignee
Suzhou Institute of Nano Tech and Nano Bionics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Nano Tech and Nano Bionics of CAS filed Critical Suzhou Institute of Nano Tech and Nano Bionics of CAS
Priority to CN202211262451.3A priority Critical patent/CN115546371A/en
Publication of CN115546371A publication Critical patent/CN115546371A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a point cloud optimization method and system, an electronic device and a storage medium. The point cloud optimization method comprises the following steps: acquiring an initial point cloud and an RGB image of a target object; preprocessing the initial point cloud to obtain a preprocessed point cloud; gridding the preprocessed point cloud to obtain a gridded point cloud; calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system; performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors, and outputting a rendering result; and calculating a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridded point cloud. The point cloud optimization method acquires the RGB image information of the target object as the system input to guide the direction of point cloud optimization, so no additional equipment is required and the extra cost of building a redundant system is avoided.

Description

Point cloud optimization method and system, electronic device and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular to a structured-light point cloud optimization method and system, an electronic device, and a storage medium.
Background
Structured-light three-dimensional reconstruction is non-contact and highly accurate, and is increasingly widely applied in fields such as industrial inspection, cultural relic digitization, and intelligent terminals. The development of machine learning has further promoted its deep application in fields with higher equipment safety requirements such as medical treatment and cosmetology. In these applications, the accuracy and density of the reconstructed point cloud have an extremely important influence on efficiency and precision: a more accurate point cloud yields higher positioning efficiency and a better reconstruction effect, while a denser point cloud tends to greatly improve the resolution of the application.
In practical applications, due to the influence of the natural environment, the instruments and equipment, and the inherent properties of the signal, the point cloud obtained by structured-light three-dimensional reconstruction is often contaminated by noise, which degrades the reconstruction quality and ultimately affects the application effect. To optimize the point cloud generated by structured-light three-dimensional reconstruction, point cloud optimization based on a depth camera, on photometric stereo, on maximum likelihood estimation, and so on have been proposed, but limitations such as the need for additional equipment and the constraints of their noise-reduction principles prevent these methods from being adopted in many practical applications.
Therefore, in view of the above problems, it is necessary to provide a new point cloud optimization method, system, electronic device and storage medium.
Disclosure of Invention
The invention aims to provide a point cloud optimization method, a point cloud optimization system, an electronic device and a storage medium, which can improve point cloud precision, increase point cloud density and improve the object reconstruction effect.
To achieve the above object, the technical solutions provided by the present invention are as follows:
in a first aspect, the present invention provides a point cloud optimization method, which includes:
acquiring an initial point cloud and an RGB image of a target object;
preprocessing the initial point cloud to obtain a preprocessed point cloud;
gridding the preprocessed point cloud to obtain a gridded point cloud;
calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system;
performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors, and outputting a rendering result;
and calculating a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridded point cloud.
In one or more embodiments, the acquiring an initial point cloud and RGB image of a target object includes:
the method comprises the steps of obtaining an initial point cloud of a target object according to a reference camera of a structural light module, and obtaining an RGB image of the target object according to an RGB camera of the structural light module.
In one or more embodiments, the pre-processing the initial point cloud to obtain a pre-processed point cloud includes:
and carrying out down-sampling on the initial point cloud according to a voxel grid method, and carrying out smoothing treatment on the initial point cloud after down-sampling according to a moving least square method to obtain a preprocessed point cloud.
In one or more embodiments, the gridding the preprocessed point cloud to obtain a gridded point cloud includes:
and carrying out triangular patch meshing on the preprocessed point cloud according to a greedy projection method to obtain a triangular patch meshed point cloud.
In one or more embodiments, the calculating vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system includes:
and calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system according to the conversion relation between the preprocessed point cloud coordinate system and the RGB image coordinate system.
In one or more embodiments, the performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors, and outputting a rendering result, includes:
performing projection transformation on the gridded point cloud according to the vertex coordinates and the normal vectors, rasterizing the vertices and normal vectors of the gridded point cloud after the projection transformation, performing shading, and outputting a coarse rendering result;
rasterizing the vertices and normal vectors of the gridded point cloud according to the vertex coordinates and the normal vectors, applying displacement mapping to the rasterized vertices and normal vectors, performing shading, and outputting a fine rendering result.
In one or more embodiments, the calculating a loss function from the rendering results and the RGB image to iteratively optimize the gridded point cloud includes:
respectively calculating L1-norm loss functions between the coarse rendering result and the RGB image and between the fine rendering result and the RGB image, and back-propagating them to iteratively optimize the illumination, albedo and displacement map information in the differentiable rendering process, so as to iteratively optimize the gridded point cloud.
In a second aspect, the present invention provides a point cloud optimization system, which includes:
the structured light module is used for acquiring an initial point cloud and an RGB image of a target object;
the preprocessing module is used for preprocessing the initial point cloud to obtain a preprocessed point cloud;
the gridding module is used for gridding the preprocessed point cloud to obtain a gridded point cloud;
the calculation module is used for calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system;
the differentiable rendering module is used for performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors and outputting a rendering result;
and the optimization module is used for calculating a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridded point cloud.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the point cloud optimization method as described above when executing the program.
In a fourth aspect, the present invention provides a computer storage medium having computer-executable instructions stored thereon, which when executed by a processor, are configured to implement the point cloud optimization method as described above.
Compared with the prior art, the point cloud optimization method provided by the invention acquires the RGB image information of the target object as the system input to guide the direction of point cloud optimization, so no additional equipment is required and the extra cost of building a redundant system is avoided. In addition, the differentiable rendering process can up-sample the original point cloud; when the resolution of the RGB camera is higher, the obtained point cloud contains more detail, which greatly raises the upper limit of point cloud optimization and makes the reconstruction effect more realistic.
Drawings
FIG. 1 is a block flow diagram of a point cloud optimization method in accordance with an embodiment of the present invention;
FIG. 2 is a schematic view of a usage scenario of a structured light module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the manner of gridding in accordance with an embodiment of the present invention;
FIG. 4 is a block flow diagram of coarse rendering in an embodiment of the invention;
FIG. 5 is a block flow diagram of a fine rendering in an embodiment of the invention;
FIG. 6 is a block diagram of a point cloud optimization system according to an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
Referring to fig. 1, a flow chart of a point cloud optimization method according to an embodiment of the present invention is shown, where the point cloud optimization method includes the following steps:
s101: and acquiring an initial point cloud and an RGB image of the target object.
In an exemplary embodiment, an initial point cloud of an object is acquired according to a reference camera of a structured light module, and an RGB image of the object is acquired according to an RGB camera of the structured light module.
As shown in fig. 2, the structured light module includes a reference camera (e.g., camera 1 and camera 2 in fig. 2, which are typically CCD cameras) for acquiring a point cloud of the target object, a projector for projecting specific light information onto the surface of the target object, and an RGB camera for acquiring an RGB image of the target object.
In this embodiment, how the structured light module acquires the point cloud data of the target object is not specifically limited, and whether the reference camera in the structured light module is monocular or binocular is not specifically limited, as long as the point cloud data of the target object can be acquired. Of course, in other embodiments, the point cloud data of the target object may also be acquired by point cloud acquisition modules other than the structured light module.
S102: and preprocessing the initial point cloud to obtain a preprocessed point cloud.
In an exemplary embodiment, the specific manner of preprocessing the initial point cloud to obtain a preprocessed point cloud includes: and carrying out down-sampling on the initial point cloud according to a voxel grid method, and carrying out smoothing treatment on the initial point cloud after down-sampling according to a moving least square method to obtain a preprocessed point cloud.
It should be noted that down-sampling the initial point cloud with a voxel grid method removes redundant duplicate points from the point cloud data while fully preserving the spatial geometric characteristics of the point cloud model.
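As a concrete illustration of this preprocessing step, below is a minimal voxel-grid down-sampling sketch. The Open3D library and the voxel size are assumptions of this sketch; the patent does not prescribe a specific library or parameter.

```python
# Minimal voxel-grid down-sampling sketch (Open3D and the voxel size are assumptions).
import numpy as np
import open3d as o3d

points = np.random.rand(100000, 3)          # stand-in for the initial point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Each occupied voxel is replaced by the centroid of the points inside it, which
# removes redundant duplicate points while preserving the overall geometry.
down = pcd.voxel_down_sample(voxel_size=0.005)
print(len(down.points))
```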
Specifically, smoothing the down-sampled initial point cloud according to the moving least squares method includes the following steps:
Determining the fitting function:

f(x) = \sum_{j=1}^{k} p_j(x)\, q_j(x) = \mathbf{p}^{T}(x)\, \mathbf{q}(x)    (1)

where \mathbf{p}^{T}(x) = [p_1(x)\ p_2(x)\ \cdots\ p_k(x)] is the vector of basis functions and q_j(x)\ (j = 1, 2, \ldots, k) are the coefficients to be determined, which are functions of the spatial point x = [x, y, z].

The weighted discrete L_2 norm is:

J = \sum_{i=1}^{m} w_i(x)\, \big[\mathbf{p}^{T}(x_i)\, \mathbf{q}(x) - y_i\big]^{2}    (2)

where y_i is the sample value at node x = x_i and w_i(x) is a weight function; \mathbf{q}(x) is obtained when Eq. (2) attains its minimum. Differentiating Eq. (2) with respect to \mathbf{q} gives:

A(x)\, \mathbf{q}(x) = B(x)\, \mathbf{y}    (3)

\mathbf{q}(x) = A^{-1}(x)\, B(x)\, \mathbf{y}    (4)

where A(x) and B(x) are:

A(x) = \sum_{i=1}^{m} w_i(x)\, \mathbf{p}(x_i)\, \mathbf{p}^{T}(x_i)    (5)

B(x) = [\, w(x_1)\mathbf{p}(x_1),\ w(x_2)\mathbf{p}(x_2),\ \ldots,\ w(x_m)\mathbf{p}(x_m)\,]    (6)

The shape function \Phi(x) is:

\Phi(x) = [\Phi_1(x)\ \Phi_2(x)\ \cdots\ \Phi_m(x)] = \mathbf{p}^{T}(x)\, A^{-1}(x)\, B(x)    (7)

and the fitting function is obtained as:

f(x) = \Phi(x)\, \mathbf{y} = \sum_{i=1}^{m} \Phi_i(x)\, y_i    (8)

Finally, a commonly used weight function w_i(x) (e.g., a compactly supported spline or Gaussian weight) is adopted as Eq. (9).
After the down-sampling and smoothing of the initial point cloud are finished, the resulting preprocessed point cloud is smoother while still preserving its original characteristics, so the subsequent optimization can start from a low-noise initial value, reducing the optimization time.
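To make the moving-least-squares fit of Eqs. (1)-(9) concrete, the following NumPy sketch evaluates f(x) = p^T(x) A^{-1}(x) B(x) y at a query point. The linear basis, the Gaussian weight and the bandwidth h are illustrative assumptions, not values taken from the patent.

```python
# NumPy sketch of the MLS fit in Eqs. (1)-(8); linear basis, Gaussian weight and
# bandwidth h are illustrative assumptions.
import numpy as np

def mls_fit(x_query, nodes, values, h=0.5):
    """Evaluate f(x) = p(x)^T A(x)^{-1} B(x) y at x_query (Eqs. (4), (7), (8))."""
    p = lambda x: np.array([1.0, x[0], x[1], x[2]])               # basis vector p(x)
    w = np.exp(-np.sum((nodes - x_query) ** 2, axis=1) / h ** 2)  # weights w_i(x)
    P = np.stack([p(xi) for xi in nodes])                         # m x k matrix of p(x_i)
    A = (P * w[:, None]).T @ P                                    # A(x) = sum_i w_i p_i p_i^T, Eq. (5)
    B = (P * w[:, None]).T                                        # columns w_i p(x_i), Eq. (6)
    q = np.linalg.solve(A, B @ values)                            # q(x) = A^{-1} B y, Eq. (4)
    return p(x_query) @ q                                         # f(x) = p(x)^T q(x), Eq. (1)

# Toy example: smooth a noisy scalar sample attached to each node; for point cloud
# smoothing, each coordinate (or the height over a local tangent plane) is re-fitted
# from its neighbourhood in the same way.
nodes = np.random.rand(200, 3)
values = nodes[:, 2] + 0.01 * np.random.randn(200)
print(mls_fit(nodes[0], nodes, values))
```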
S103: and meshing the preprocessed point cloud to obtain a meshed point cloud.
Since the subsequent rendering is to convert the three-dimensional continuous space into a two-dimensional image for display, the preprocessed point cloud needs to be meshed.
Specifically, triangular patch meshing is carried out on the preprocessed point cloud according to a greedy projection method, and triangular patch meshing point cloud is obtained.
Specifically, when the initial point cloud of the target object is obtained through the structured light module, the points of the initial point cloud have a one-to-one correspondence with pixels of the picture taken by the reference camera and are uniformly distributed. The simplified greedy projection triangulation shown in Fig. 3 can therefore be applied directly on the picture: neighboring triples of pixels are connected into triangles, and the triangulation proceeds in a consistent direction. Once the whole picture has been divided, the point cloud corresponding to the pixels is connected in the same way, which completes the triangular patch meshing of the preprocessed point cloud in three-dimensional space.
Here, the gray area in Fig. 3(a) marks the pixels corresponding to the point cloud on the picture taken by the reference camera, and Fig. 3(b) is the triangular mesh obtained by triangulating those pixels.
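A minimal sketch of the simplified triangulation of Fig. 3 is given below: for an organized H x W point cloud with one point per reference-camera pixel, every 2 x 2 block of pixels is split into two triangles with a consistent winding order. The helper name and index layout are illustrative; in practice, pixels without a valid depth would additionally be masked out.

```python
# Sketch of the simplified grid triangulation of Fig. 3 for an organized H x W point cloud.
import numpy as np

def grid_triangles(H, W):
    idx = np.arange(H * W).reshape(H, W)      # vertex index of each pixel
    tl, tr = idx[:-1, :-1], idx[:-1, 1:]      # top-left / top-right corner of each quad
    bl, br = idx[1:, :-1], idx[1:, 1:]        # bottom-left / bottom-right corner
    t1 = np.stack([tl, bl, tr], axis=-1)      # first triangle of each 2x2 block
    t2 = np.stack([tr, bl, br], axis=-1)      # second triangle, same winding direction
    return np.concatenate([t1.reshape(-1, 3), t2.reshape(-1, 3)], axis=0)

faces = grid_triangles(480, 640)              # (2*(H-1)*(W-1), 3) vertex indices
print(faces.shape)
```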
S104: and calculating the vertex coordinates and normal vectors of the gridding point cloud in the RGB image coordinate system.
Since the subsequent optimization is performed based on the RGB image, the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system need to be calculated according to the transformation relationship between the preprocessed point cloud coordinate system and the RGB image coordinate system.
In an exemplary embodiment, when the initial point cloud of the target object is obtained through the structured light module, the origin coordinates of the initial point cloud are the optical center coordinates of the reference camera, so that the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system need to be calculated according to the conversion relationship between the reference camera coordinate system and the RGB camera coordinate system.
The following describes a specific way of calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system, taking a structured light module whose reference camera is a binocular camera as an example.
Since the binocular system undergoes epipolar rectification during point cloud generation, the rectified point cloud first needs to be rotated back to the original coordinate system. Let the rectification rotation matrix be R_{rec} and let a three-dimensional point in space be p = [x, y, z]^{T}; the coordinates of this point rotated back to the original coordinate system are:

p' = R_{rec}^{-1}\, p    (10)

Let the rotation-translation matrix of the RGB camera relative to the reference camera be [R_{RGB}\ t_{RGB}]; the point cloud is then transformed into the RGB camera coordinate system:

[X, Y, Z]^{T} = R_{RGB}\, p' + t_{RGB}    (11)

Let the intrinsic matrix of the RGB camera be K_{RGB} with distortion coefficients (k_1, k_2, p_1, p_2). With the normalized image coordinates x = X/Z, y = Y/Z and r^{2} = x^{2} + y^{2}, the RGB camera pixel coordinates (u, v) corresponding to the point cloud satisfy:

x' = x\,(1 + k_1 r^{2} + k_2 r^{4}) + 2 p_1 x y + p_2 (r^{2} + 2 x^{2})    (12)

y' = y\,(1 + k_1 r^{2} + k_2 r^{4}) + p_1 (r^{2} + 2 y^{2}) + 2 p_2 x y    (13)

[u, v, 1]^{T} = K_{RGB}\, [x', y', 1]^{T}    (14)

Since the coordinate origin of the texture map generated for the mesh is the lower-left corner of the image and is normalized to (0, 1), whereas the origin of the RGB pixel plane obtained from Eq. (14) is the upper-left corner of the image, the pixel coordinates are converted into mesh-map coordinates. Let the width of the map image be W and its height be H; then:

u_{map} = u / W, \quad v_{map} = 1 - v / H    (15)
when the reference camera in the structural optical module is a monocular camera, epipolar line correction is not needed, and only the following calculation needs to be performed by skipping the formula (10).
The foregoing coordinate transformation process is referred to as perspective projective transformation, and is denoted as y = Persp (x), where x and y are both three-dimensional vectors.
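A hedged sketch of the Persp(.) mapping of Eqs. (10)-(15) follows: it undoes the epipolar rectification, moves the point into the RGB camera frame, applies a Brown distortion model with (k1, k2, p1, p2), projects with K_RGB, and converts the pixel coordinates into lower-left-origin texture coordinates. Variable names and the exact distortion model are assumptions consistent with the reconstruction above.

```python
# Hedged sketch of y = Persp(x), Eqs. (10)-(15); extrinsics, intrinsics and the
# Brown distortion model below are assumptions consistent with the reconstruction above.
import numpy as np

def persp(p, R_rec, R_rgb, t_rgb, K_rgb, dist, W, H):
    p = R_rec.T @ p                                  # Eq. (10): undo the epipolar rectification
    X, Y, Z = R_rgb @ p + t_rgb                      # Eq. (11): into the RGB camera frame
    x, y = X / Z, Y / Z                              # normalized image coordinates
    k1, k2, p1, p2 = dist
    r2 = x * x + y * y
    xd = x * (1 + k1 * r2 + k2 * r2 ** 2) + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # Eq. (12)
    yd = y * (1 + k1 * r2 + k2 * r2 ** 2) + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y  # Eq. (13)
    u, v, _ = K_rgb @ np.array([xd, yd, 1.0])        # Eq. (14): pixel coordinates
    return np.array([u / W, 1.0 - v / H, Z])         # Eq. (15): map coordinates (depth kept)

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # toy intrinsics
print(persp(np.array([0.1, 0.2, 1.0]), np.eye(3), np.eye(3), np.zeros(3), K, (0, 0, 0, 0), 640, 480))
```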
S105: and carrying out micro-rendering on the gridding point cloud according to the vertex coordinates and the normal vector, and outputting a rendering result.
It should be noted that rendering converts points in three-dimensional space into pixels, and differentiable rendering makes this process differentiable so that every function in the pipeline has a gradient and satisfies the conditions for back-propagation, i.e., the inputs can be optimized by propagating gradients back from the result. To convert the three-dimensional triangular patch mesh into a two-dimensional picture, ray tracing or rasterization is generally adopted; this method uses PyTorch3D-based rasterization to better exploit GPU-accelerated computation.
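The patent only states that PyTorch3D-based rasterization is used; the following minimal sketch shows such a rasterization call against the public PyTorch3D API. The camera model, image size and other settings are this sketch's assumptions, not values given in the patent.

```python
# Minimal PyTorch3D rasterization sketch; settings are illustrative assumptions.
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import FoVPerspectiveCameras, RasterizationSettings, MeshRasterizer

verts = torch.rand(1000, 3)                   # vertices of the meshed point cloud
faces = torch.randint(0, 1000, (2000, 3))     # triangular patch vertex indices
mesh = Meshes(verts=[verts], faces=[faces])

cameras = FoVPerspectiveCameras()             # differentiable pinhole camera
settings = RasterizationSettings(image_size=512, blur_radius=0.0, faces_per_pixel=1)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=settings)

fragments = rasterizer(mesh)                  # per-pixel face ids, barycentrics, depth
print(fragments.pix_to_face.shape)            # (1, 512, 512, 1)
```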
In an exemplary embodiment, the differentiable rendering in step S105 includes: performing projection transformation on the gridded point cloud according to the vertex coordinates and the normal vectors, rasterizing the vertices and normal vectors of the gridded point cloud after the projection transformation, performing shading, and outputting a coarse rendering result; and rasterizing the vertices and normal vectors of the gridded point cloud according to the vertex coordinates and the normal vectors, applying displacement mapping to the rasterized vertices and normal vectors, performing shading, and outputting a fine rendering result.
Referring to fig. 4 and 5, flow charts of the differentiable rendering process according to an embodiment of the invention are shown: fig. 4 is a block diagram of the coarse rendering flow, and fig. 5 is a block diagram of the fine rendering flow.
Specifically, the rendered output picture is the combination of the surface albedo of the target object and the shading produced by illumination: the intrinsic color of the target object surface is denoted albedo, and the shading generated by illumination is denoted shading, so the rendered output is:

texture = albedo * shading    (16)
the shadow coloring is calculated by a normal vector and illumination after rasterization, and the invention adopts a Spherical harmonic illumination model (SH) to simulate global illumination. Then for a normal vector n i The vertex of (a), shaded colored as:
Figure BDA0003891706400000101
wherein
Figure BDA0003891706400000102
Is formed by a normal vector n i The calculated spherical harmonic basis function is used as a basis function,
Figure BDA0003891706400000103
is the spherical harmonic coefficient, the illumination is represented in this embodiment using a second order spherical harmonic basis function B = 3.
Different ways of obtaining the rasterized normal vectors yield different shading and therefore different rendering outputs. As shown in Fig. 4, if the number of original triangular patches is kept unchanged, the vertices v_coarse and normal vectors n_coarse of the original triangular patch mesh are rasterized together with their projective transforms v_trans = Persp(v_coarse) and n_trans = Persp(n_coarse), so the number of rasterized vertices and normal vectors remains unchanged; this process can in fact be regarded as displaying the triangular patches on the two-dimensional plane according to the projection transformation relation of step S104.
The fine rendering process shown in Fig. 5 rasterizes the vertices and normal vectors directly, without the projection transformation of step S104. This is equivalent to re-triangulating every pixel on the two-dimensional RGB image plane and rendering the three-dimensional patches according to this per-pixel triangulation. Because the original triangular patch mesh has been down-sampled and the resolution of the RGB camera used for the map is generally higher than the number of initial point cloud points, the re-division performed during rasterization up-samples the triangular patch mesh to the resolution of the RGB map image, yielding a denser mesh.
A displacement map is a map that changes the shape of the triangular patch mesh, i.e., it displaces the vertices of the mesh. In this embodiment, the displacement map has the same first two dimensions as the rasterized vertices and normal vectors; that is, each pixel stores the proportion by which the corresponding vertex moves along its normal vector. Let a rasterized dense vertex be v_i with corresponding normal vector n_i, and let the displacement map value at this point be z_i. After applying the displacement map, the point becomes:

v'_i = v_i + z_i \cdot n_i    (18)
the vertices of the gridded point cloud can be continuously optimized by continuously optimizing the replacement map.
S106: and calculating a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridding point cloud.
In an exemplary embodiment, L1-norm loss functions are computed between the coarse rendering result and the RGB image and between the fine rendering result and the RGB image, and the illumination, albedo and displacement map information in the differentiable rendering process is iteratively optimized by back-propagation, so as to iteratively optimize the gridded point cloud.

Specifically, the whole rendering flow has three unknowns: the albedo of the target object surface, the illumination, and the displacement map. The method is based on the PyTorch framework; the three unknowns are initialized as random numbers close to 0 and added to an Adam (Adaptive Moment Estimation) optimizer for iterative optimization. The L1-norm loss between the coarse rendering result render_{coarse}, the fine rendering result render_{detail} and the RGB image I_{RGB} is:

Loss = \alpha\, \lVert render_{coarse} - I_{RGB} \rVert_1 + \beta\, \lVert render_{detail} - I_{RGB} \rVert_1    (19)

where \alpha and \beta are the coefficients of the loss function. The variables in the Adam optimizer (albedo, illumination and displacement map) are iteratively optimized by back-propagating the final global loss until the loss falls below a threshold or a maximum number of iterations is reached.
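A hedged sketch of the optimization loop described above, assuming PyTorch: albedo, spherical-harmonic illumination and the displacement map start near zero and are updated by Adam from the L1 losses of Eq. (19). The two render functions are trivial differentiable placeholders standing in for the coarse and fine differentiable rendering pipelines of step S105, which the patent does not specify as code.

```python
# Sketch of the iterative optimization with Adam and the L1 losses of Eq. (19);
# the two render functions below are trivial differentiable placeholders, NOT the
# actual coarse/fine differentiable rendering pipelines of step S105.
import torch

def render_coarse_fn(albedo, sh_coeff):
    return albedo * sh_coeff.abs().sum()                              # placeholder for Fig. 4

def render_detail_fn(albedo, sh_coeff, disp):
    return albedo * sh_coeff.abs().sum() + disp.view(480, 640, 1)     # placeholder for Fig. 5

rgb = torch.rand(480, 640, 3)                                  # captured RGB image
albedo = torch.full((480, 640, 3), 1e-3, requires_grad=True)   # unknowns start near zero
sh_coeff = torch.full((9,), 1e-3, requires_grad=True)
disp = torch.full((480 * 640, 1), 1e-3, requires_grad=True)

optimizer = torch.optim.Adam([albedo, sh_coeff, disp], lr=1e-2)
alpha, beta = 1.0, 1.0                                         # loss weights (assumed values)

for step in range(500):
    optimizer.zero_grad()
    loss = alpha * (render_coarse_fn(albedo, sh_coeff) - rgb).abs().mean() \
         + beta * (render_detail_fn(albedo, sh_coeff, disp) - rgb).abs().mean()   # Eq. (19)
    loss.backward()
    optimizer.step()
    if loss.item() < 1e-3:                                     # stop when the loss is small enough
        break
```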
In summary, the point cloud optimization method provided by the invention acquires the RGB image information of the target object as the system input to guide the direction of point cloud optimization, so no additional equipment is required and the extra cost of building a redundant system is avoided. In addition, the differentiable rendering process can up-sample the original point cloud; when the resolution of the RGB camera is higher, the obtained point cloud contains more detail, which greatly raises the upper limit of point cloud optimization and makes the reconstruction effect more realistic.
Based on the same inventive concept as the aforementioned point cloud optimization method, the invention further provides a point cloud optimization system 600, which comprises a structured light module 601, a preprocessing module 602, a gridding module 603, a calculation module 604, a differentiable rendering module 605 and an optimization module 606.
The structured light module 601 is configured to acquire an initial point cloud and an RGB image of a target object; the specific acquisition manner may refer to step S101. The preprocessing module 602 is configured to preprocess the initial point cloud to obtain a preprocessed point cloud; the specific preprocessing manner may refer to step S102. The gridding module 603 is configured to mesh the preprocessed point cloud to obtain a gridded point cloud; the specific gridding manner may refer to step S103. The calculation module 604 is configured to calculate the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system; the specific calculation manner may refer to step S104. The differentiable rendering module 605 is configured to perform differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors and output a rendering result; the specific rendering manner may refer to step S105. The optimization module 606 is configured to calculate a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridded point cloud; the specific optimization manner may refer to step S106.
Referring to fig. 7, an embodiment of the present invention further provides an electronic device 700. The electronic device 700 includes at least one processor 701, a storage 702 (e.g., a non-volatile storage), a memory 703 and a communication interface 704, which are connected together via a bus 705. The at least one processor 701 is configured to invoke at least one program instruction stored or encoded in the storage 702, so as to cause the at least one processor 701 to perform the operations and functions of the methods described in the embodiments of this specification.
In embodiments of the present description, the electronic device 700 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile electronic devices, smart phones, tablet computers, cellular phones, personal Digital Assistants (PDAs), handsets, messaging devices, wearable electronic devices, consumer electronic devices, and the like.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon computer-executable instructions for implementing various operations and functions of the method described in the embodiments of the present specification when executed by a processor.
The computer-readable storage medium can be any available media or data storage device that can be accessed by a computer, including but not limited to magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), solid-state disks (SSDs), etc.).
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of specific exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (10)

1. A point cloud optimization method, comprising:
acquiring an initial point cloud and an RGB image of a target object;
preprocessing the initial point cloud to obtain a preprocessed point cloud;
gridding the preprocessed point cloud to obtain a gridded point cloud;
calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system;
performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors, and outputting a rendering result;
and calculating a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridded point cloud.
2. The point cloud optimization method of claim 1, wherein said obtaining an initial point cloud and RGB image of a target object comprises:
the method comprises the steps of obtaining an initial point cloud of a target object according to a reference camera of a structural light module, and obtaining an RGB image of the target object according to an RGB camera of the structural light module.
3. The point cloud optimization method of claim 1, wherein the pre-processing the initial point cloud to obtain a pre-processed point cloud comprises:
and carrying out down-sampling on the initial point cloud according to a voxel grid method, and carrying out smoothing treatment on the down-sampled initial point cloud according to a moving least square method to obtain a preprocessed point cloud.
4. The point cloud optimization method of claim 1, wherein the gridding the preprocessed point cloud to obtain a gridded point cloud comprises:
and carrying out triangular patch meshing on the preprocessed point cloud according to a greedy projection method to obtain a triangular patch meshed point cloud.
5. The point cloud optimization method of claim 1, wherein said computing vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system comprises:
and calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system according to the conversion relation between the preprocessed point cloud coordinate system and the RGB image coordinate system.
6. The point cloud optimization method of claim 1, wherein the performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors and outputting a rendering result comprises:
performing projection transformation on the gridded point cloud according to the vertex coordinates and the normal vectors, rasterizing the vertices and normal vectors of the gridded point cloud after the projection transformation, performing shading, and outputting a coarse rendering result;
rasterizing the vertices and normal vectors of the gridded point cloud according to the vertex coordinates and the normal vectors, applying displacement mapping to the rasterized vertices and normal vectors, performing shading, and outputting a fine rendering result.
7. The point cloud optimization method of claim 6, wherein the calculating a loss function from the rendering results and the RGB image to iteratively optimize the gridded point cloud comprises:
respectively calculating L1-norm loss functions between the coarse rendering result and the RGB image and between the fine rendering result and the RGB image, and back-propagating them to iteratively optimize the illumination, albedo and displacement map information in the differentiable rendering process, so as to iteratively optimize the gridded point cloud.
8. A point cloud optimization system, comprising:
the structured light module is used for acquiring an initial point cloud and an RGB image of a target object;
the preprocessing module is used for preprocessing the initial point cloud to obtain a preprocessed point cloud;
the gridding module is used for gridding the preprocessed point cloud to obtain a gridded point cloud;
the calculation module is used for calculating the vertex coordinates and normal vectors of the gridded point cloud in the RGB image coordinate system;
the differentiable rendering module is used for performing differentiable rendering on the gridded point cloud according to the vertex coordinates and the normal vectors and outputting a rendering result;
and the optimization module is used for calculating a loss function according to the rendering result and the RGB image so as to iteratively optimize the gridded point cloud.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the point cloud optimization method of any one of claims 1-7 when executing the program.
10. A computer storage medium having computer executable instructions stored thereon, which when executed by a processor, are configured to implement the point cloud optimization method of any one of claims 1 to 7.
CN202211262451.3A 2022-10-14 2022-10-14 Point cloud optimization method and system, electronic device and storage medium Pending CN115546371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211262451.3A CN115546371A (en) 2022-10-14 2022-10-14 Point cloud optimization method and system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211262451.3A CN115546371A (en) 2022-10-14 2022-10-14 Point cloud optimization method and system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115546371A true CN115546371A (en) 2022-12-30

Family

ID=84735698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211262451.3A Pending CN115546371A (en) 2022-10-14 2022-10-14 Point cloud optimization method and system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115546371A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310224A (en) * 2023-05-09 2023-06-23 小视科技(江苏)股份有限公司 Method and device for quickly reconstructing three-dimensional target
CN116596985A (en) * 2023-07-17 2023-08-15 国网上海市电力公司 Self-adaptive illumination model modeling method and system
CN116596985B (en) * 2023-07-17 2023-10-20 国网上海市电力公司 Self-adaptive illumination model modeling method and system

Similar Documents

Publication Publication Date Title
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
US9767598B2 (en) Smoothing and robust normal estimation for 3D point clouds
CN115100339B (en) Image generation method, device, electronic equipment and storage medium
Min et al. Depth video enhancement based on weighted mode filtering
US8406509B2 (en) Three-dimensional surface generation method
CN115546371A (en) Point cloud optimization method and system, electronic device and storage medium
US11263356B2 (en) Scalable and precise fitting of NURBS surfaces to large-size mesh representations
CN111612882B (en) Image processing method, image processing device, computer storage medium and electronic equipment
US9147279B1 (en) Systems and methods for merging textures
US9965893B2 (en) Curvature-driven normal interpolation for shading applications
CN114373056A (en) Three-dimensional reconstruction method and device, terminal equipment and storage medium
US20220343522A1 (en) Generating enhanced three-dimensional object reconstruction models from sparse set of object images
CN113962858A (en) Multi-view depth acquisition method
CN114332125A (en) Point cloud reconstruction method and device, electronic equipment and storage medium
CN113140034A (en) Room layout-based panoramic new view generation method, device, equipment and medium
CN113989434A (en) Human body three-dimensional reconstruction method and device
CN115375847A (en) Material recovery method, three-dimensional model generation method and model training method
Szirmay-Kalos Filtering and gradient estimation for distance fields by quadratic regression
Abdelfattah et al. On Image to 3D Volume Construction for E-Commerce Applications
CN116012666B (en) Image generation, model training and information reconstruction methods and devices and electronic equipment
Heimann et al. Joint Geometry and Attribute Upsampling of Point Clouds Using Frequency-Selective Models with Overlapped Support
KR102559691B1 (en) Method and device for reconstructing neural rendering-based geometric color integrated 3D mesh
US20240013341A1 (en) Point cloud processing method and electronic device
CN115100382B (en) Nerve surface reconstruction system and method based on hybrid characterization
CN117152330B (en) Point cloud 3D model mapping method and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination