CN113316801A - Point cloud hole filling method and device and storage medium - Google Patents

Point cloud hole filling method and device and storage medium

Info

Publication number
CN113316801A
Authority
CN
China
Prior art keywords
point cloud
hole
filling
image
dimensional image
Prior art date
Legal status
Pending
Application number
CN201980078422.1A
Other languages
Chinese (zh)
Inventor
夏清
李延召
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN113316801A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A point cloud hole filling method, device and storage medium. The method comprises the following steps: for point cloud data to be subjected to hole filling, projecting the point cloud data onto a two-dimensional plane, and forming a two-dimensional image based on the projected two-dimensional plane (S410); generating a hole template corresponding to the two-dimensional image, the hole template indicating the position of a point cloud hole in the two-dimensional image (S420); and filling the two-dimensional image by using the hole template (S430). Projecting the three-dimensional point cloud onto a two-dimensional plane before hole filling greatly reduces the sparsity of the point distribution and lowers the processing difficulty; at the same time, the hole template is generated with full consideration of the hole characteristics produced by different point cloud scanning modes, and the holes are filled using the hole template, so the method can be adapted to the scenes of various point cloud detection systems.

Description

Point cloud hole filling method and device and storage medium
Description
Technical Field
The present invention generally relates to the field of laser detection technology, and more particularly, to a method and an apparatus for filling a point cloud hole, and a storage medium.
Background
The three-dimensional point cloud detection system including the laser radar can sense objects in the surrounding environment through the ranging and scanning module, and can sense distance information, position information, reflectivity information and the like of the surrounding objects. Each sampling can obtain the information of one three-dimensional space point, and the points are displayed together in the three-dimensional space to form a point cloud.
In a laser radar scanning system, the laser point cloud is always sparsely distributed in space because of the limitations of the laser radar scanning mechanism, and because of external occlusion, the sky and other factors, the generated three-dimensional point cloud data inevitably contains point cloud holes. Because of these objective factors, point cloud holes exist in a scene no matter how dense the point cloud scanning is. These holes cause great difficulty for subsequent point cloud data processing, detection, recognition and other techniques.
Disclosure of Invention
In order to solve the above problems, embodiments of the present invention provide a point cloud hole filling scheme, which considers not only the sparse and unstructured nature of laser point cloud data but also the characteristics of the scanning mode of the laser radar, and fills hole regions in the point cloud data through a hole template. The following briefly describes the point cloud hole filling scheme proposed by the present invention; more details are given in the detailed description below with reference to the drawings.
According to an aspect of the present invention, there is provided a point cloud hole filling method, the method including: projecting the point cloud data to be subjected to hole filling onto a two-dimensional plane, and forming a two-dimensional image based on the projected two-dimensional plane; generating a hole template corresponding to the two-dimensional image, wherein the hole template marks the position of a point cloud hole in the two-dimensional image; and filling the two-dimensional image by using the hole template.
In one embodiment of the invention, the projecting the point cloud data onto a two-dimensional plane comprises: and projecting the point cloud data onto a two-dimensional plane in the forward looking direction of the laser radar.
In an embodiment of the present invention, the forming a two-dimensional image based on the two-dimensional plane obtained after the projection includes: and gridding the two-dimensional plane obtained after projection to form a two-dimensional image.
In an embodiment of the present invention, a pixel value of each pixel point of the two-dimensional image formed after the gridding is determined according to a parameter value of a point cloud point falling into a position of the pixel point.
In one embodiment of the invention, the parameter value is a depth value or a reflectivity of the point cloud point.
In an embodiment of the invention, at most one point cloud point falls in each grid of the two-dimensional image formed after the two-dimensional plane gridding, and the pixel value of the grid in which no point cloud point falls is 0.
In an embodiment of the present invention, the hole template and the two-dimensional image have the same size, and a pixel value of each pixel of the hole template is a first pixel value or a second pixel value, where the first pixel value indicates that the position of the pixel is a hole, and the second pixel value indicates that the position of the pixel is not a hole.
In an embodiment of the present invention, the generating a hole template corresponding to the two-dimensional image includes: generating an image template which has the same size and gridding as the two-dimensional image; for a first grid in the two-dimensional image into which no point cloud point falls, setting the pixel value of the grid corresponding to the first grid in the image template to the first pixel value; and for a second grid in the two-dimensional image into which a point cloud point falls, setting the pixel value of the grid corresponding to the second grid in the hole template to the second pixel value.
In an embodiment of the present invention, the filling the two-dimensional image with the hole template includes: and filling the two-dimensional image by using the global information of the two-dimensional image based on a neural network to obtain a first filling image, and determining point cloud data of the hole position in the first filling image according to point cloud data adjacent to the hole position by using the hole template to obtain a second filling image.
In one embodiment of the invention, the sharpness of at least some positions in the second filler image is higher than the sharpness of corresponding positions in the first filler image.
In one embodiment of the invention, the pixel values of the first mesh into which no point cloud point falls in the two-dimensional image are different from the pixel values of the mesh corresponding to the first mesh in the first filler image.
In an embodiment of the invention, the pixel values of a second grid into which a point cloud point in the two-dimensional image falls are the same as the pixel values of the grid corresponding to the second grid in the first filler image.
In one embodiment of the present invention, the populating the two-dimensional image with global information of the two-dimensional image based on the neural network includes: and performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
In an embodiment of the present invention, the obtaining a second filling image by using the hole template includes: generating an attention map based on an attention mechanism by utilizing the hole template based on a second filling neural network, wherein the attention map indicates the correlation degree of the position of the hole and the adjacent area of the hole; performing feature extraction on the first filling image to obtain a feature map; and obtaining the second filling image according to the attention diagram and the feature diagram.
In one embodiment of the invention, the method further comprises: after obtaining the second filler image, filtering the second filler image to obtain a filtered image.
In one embodiment of the invention, the method further comprises: the filtering the second pad image comprises: and filtering the second filling image in a bilateral filtering mode.
In one embodiment of the invention, the method further comprises: and inversely transforming the second filling image or the filtered image into a three-dimensional space to obtain three-dimensional point cloud data after hole filling.
In one embodiment of the invention, the method further comprises: outputting the second filler image or the filtered image directly for a detection or recognition task.
According to another aspect of the present invention, there is provided a point cloud hole filling method, including: acquiring a point cloud, wherein a hole exists in the point cloud; filling the hole by using the global information of the point cloud to obtain a first filling result; acquiring a position to be filled, wherein the position to be filled is located in the hole; and determining the point cloud data to be filled at the position to be filled by using point cloud data of positions adjacent to the position to be filled in the first filling result.
In one embodiment of the invention, the point cloud is projected onto a two-dimensional plane, and the two-dimensional plane is subjected to gridding processing to obtain a two-dimensional image.
In an embodiment of the present invention, the filling the hole with the global information of the point cloud includes: and determining point cloud data of the hole position in the two-dimensional image by using the global information of the point cloud data in the two-dimensional image.
In an embodiment of the present invention, the determining point cloud data of hole positions in the two-dimensional image by using global information of point cloud data in the two-dimensional image includes: and performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
In an embodiment of the present invention, the acquiring the position to be filled includes: and generating a hole template corresponding to the position of the hole according to the hole, wherein the pixel value of each pixel point in the hole template is a first pixel value or a second pixel value, the first pixel value indicates that the position of the pixel point is the hole, and the second pixel value indicates that the position of the pixel point is not the hole.
In one embodiment of the present invention, the determining of the point cloud data filled at the position to be filled by using point cloud data of positions adjacent to the position to be filled in the first filling result comprises: generating an attention map for the position to be filled based on an attention mechanism, the attention map indicating the degree of correlation of the position to be filled with its neighboring regions; performing feature extraction on the first filling result to obtain a feature map; and obtaining a second filling result from the attention map and the feature map based on a second filling neural network.
According to yet another aspect of the present invention, there is provided a point cloud hole filling apparatus, the apparatus comprising a memory and a processor, the memory having stored thereon a computer program for execution by the processor, the computer program, when executed by the processor, performing the above point cloud hole filling method.
According to a further aspect of the invention, a storage medium is provided, on which a computer program is stored which, when running, executes the above-described point cloud hole filling method.
According to the point cloud hole filling method, device and storage medium, the three-dimensional point cloud is projected onto a two-dimensional plane before hole filling is performed, which greatly reduces the sparsity of the point distribution and lowers the processing difficulty. At the same time, the hole template is generated with full consideration of the hole characteristics produced by different point cloud scanning modes, and the holes are filled using the hole template, so the method and device can be adapted to the scenes of various point cloud detection systems.
Drawings
Fig. 1 shows a schematic diagram of a laser radar scanning mode.
Fig. 2 shows a schematic diagram of another lidar scanning approach.
Fig. 3A-3D illustrate exemplary point cloud hole diagrams.
FIG. 4 shows a schematic flow diagram of a point cloud hole filling method according to an embodiment of the invention.
Fig. 5 shows a schematic diagram of a point cloud projection.
Fig. 6A to 6D are schematic views showing hole templates corresponding to different holes.
Fig. 7A shows a schematic view of an image to be filled.
Fig. 7B shows a schematic diagram of a hole template for the image to be filled shown in fig. 7A.
Fig. 7C and 7D respectively show schematic diagrams of a first filled image and a second filled image obtained by the point cloud hole filling method according to the embodiment of the invention.
Fig. 8A to 8D are schematic diagrams illustrating filling results obtained after the point cloud holes illustrated in fig. 3A to 3D are filled by using the point cloud hole filling method according to the embodiment of the present invention;
FIG. 9 shows a schematic flow diagram of a point cloud hole filling method according to another embodiment of the invention.
FIG. 10 shows a schematic block diagram of a point cloud hole filling apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, detailed steps and detailed structures will be set forth in the following description in order to explain the present invention. The following describes preferred embodiments of the invention in detail; however, the invention is capable of other embodiments in addition to those described.
In a laser radar scanning system, due to the limitation of the laser radar scanning mechanism, the distribution of the laser point cloud is always sparse in space, and due to the influence of external occlusion, the sky and the like, holes are inevitably formed in the generated three-dimensional point cloud data. Reference is now made to the accompanying drawings for an illustrative description. Fig. 1 shows a schematic diagram of one lidar scanning mode, fig. 2 shows a schematic diagram of another lidar scanning mode, and fig. 3A to 3D show exemplary point cloud hole schematic diagrams. The point cloud holes shown in fig. 3A to 3D are generated under the laser radar scanning mode shown in fig. 1; although they differ from one another because of external occlusion, the sky and other factors, on the whole they remain closely related to the scanning mode of the laser radar.
Based on this, the point cloud hole filling scheme of the present invention takes the characteristics of the laser radar scanning mode into account and generates a hole template to fill the hole regions in the point cloud data, so it can be applied to point cloud hole filling for laser radars with various scanning modes. The following detailed description refers to the accompanying drawings.
FIG. 4 shows a schematic flow diagram of a point cloud hole filling method 400 according to an embodiment of the invention. As shown in fig. 4, a point cloud hole filling method 400 according to an embodiment of the present invention may include the following steps:
in step S410, for point cloud data to be subjected to hole filling, the point cloud data is projected onto a two-dimensional plane, and a two-dimensional image is formed based on the projected two-dimensional plane.
Laser point clouds are scattered in three-dimensional space. Existing hole filling approaches for laser point clouds usually extract features directly from the point cloud in three-dimensional space, for example the contour and gray-scale features of the edge of a missing hole, and then perform region growing and hole filling on the missing region based on the extracted local features. However, feature extraction in three-dimensional space suffers from a large amount of computation, extraction errors, insufficient adaptability and other problems; in addition, hole filling based on local features extracted in three-dimensional space cannot use the global features of the point cloud data, so the filled holes conform only to the regional distribution and not to the global characteristics, and the filling result is not accurate enough. Therefore, in the embodiment of the invention, the point cloud data to be hole-filled is projected onto a two-dimensional plane, converting the three-dimensional point cloud data into a two-dimensional image before the subsequent hole filling is performed. This reduces the processing difficulty and, because the hole filling is performed on a two-dimensional image, lays a foundation for global and/or local filling, which helps improve the accuracy of the filling result.
In an embodiment of the present invention, projecting the point cloud data onto the two-dimensional plane may refer to: and projecting the point cloud data onto a two-dimensional plane in the forward looking direction of the laser radar, namely a two-dimensional plane perpendicular to the direction of the central axis of the laser emitted by the laser radar, as shown in fig. 5. The projection of the point cloud data onto the plane may sufficiently reflect the scanning pattern of the lidar regardless of the distance of the point cloud from the lidar. After the point cloud data is projected onto the two-dimensional plane, the two-dimensional plane obtained after projection may be gridded to form a two-dimensional image. Each grid in the two-dimensional image can be regarded as a pixel of the two-dimensional image, and the pixel value of each pixel in the two-dimensional image can be determined according to the parameter value of the point cloud point falling into the position of the pixel point (grid). For example, assuming that the mesh is set to have a size such that at most one point cloud point can fall within each mesh, the pixel value of the mesh (i.e., the pixel) may be a depth value or a reflectance value or the like of the point cloud point falling within the mesh. For another example, assuming that the set mesh size is such that more than one point cloud point may fall within each mesh, the pixel value of the mesh (i.e., the pixel) may be a weighted average of the depth values or reflectance values of the point cloud points falling within the mesh, and the like, without limitation. For a mesh into which no point cloud point falls, the pixel value of the mesh (i.e., the pixel) may be set to 0.
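As a purely illustrative sketch of this projection and gridding step, the following Python code rasterizes a point cloud into a depth image in which empty grids keep the value 0. The helper name project_to_grid, the axis convention (x forward), the plane bounds and the grid resolution are assumptions made for the example and are not values taken from the present disclosure.

```python
import numpy as np

def project_to_grid(points, h=64, w=512, u_range=(-1.0, 1.0), v_range=(-0.25, 0.25)):
    """Rasterize an (N, 3) point cloud onto the forward-looking plane as an h x w depth image.

    Axis convention (x forward), plane bounds and grid size are illustrative
    assumptions. The pixel value of a grid is the depth of the point falling
    into it; grids with no point keep the value 0.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    front = x > 1e-6                                  # keep points in front of the sensor
    u = y[front] / x[front]                           # horizontal coordinate on the plane
    v = z[front] / x[front]                           # vertical coordinate on the plane
    depth = np.linalg.norm(points[front], axis=1)

    col = ((u - u_range[0]) / (u_range[1] - u_range[0]) * (w - 1)).astype(int)
    row = ((v_range[1] - v) / (v_range[1] - v_range[0]) * (h - 1)).astype(int)

    image = np.zeros((h, w), dtype=np.float32)
    inside = (col >= 0) & (col < w) & (row >= 0) & (row < h)
    image[row[inside], col[inside]] = depth[inside]   # at most one point per grid assumed
    return image
```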
The two-dimensional image obtained in step S410 is the image to be filled, as shown in fig. 7A. Referring back to FIG. 4, the following steps of the point cloud hole filling method 400 according to an embodiment of the invention will be described.
In step S420, a hole template corresponding to the two-dimensional image is generated, and the hole template indicates the position of the point cloud hole in the two-dimensional image.
In the embodiment of the present invention, the three-dimensional point cloud data is projected onto the two-dimensional plane in step S410, and the two-dimensional image obtained after projection reflects the scanning mode of the laser radar, so that the hole template corresponding to the two-dimensional image is generated in step S420, that is, the scanning mode of the laser radar is taken into account in the filling process of the point cloud holes, so that the scheme of the present invention can be applied to the point cloud hole filling of the laser radar in various scanning modes.
For example, generating a hole template corresponding to the two-dimensional image may include: generating an image template which has the same size and gridding as the two-dimensional image; setting the pixel value of a grid corresponding to a first grid in the image template as a first pixel value (for example, 0 value) for the first grid into which no point cloud point falls in the two-dimensional image; for a second grid into which a cloud point of a point in the two-dimensional image falls, setting a pixel value of a grid corresponding to the second grid in the hole template to be a second pixel value (for example, 1 or other non-0 value). That is, the size of the hole template generated in step S420 is the same as the size of the two-dimensional image obtained in step S410, and the pixel value of each pixel of the hole template generated in step S420 is a first pixel value or a second pixel value, where the first pixel value indicates that the position of the pixel is a hole (no point cloud point falls), and the second pixel value indicates that the position of the pixel is not a hole (a point cloud point falls). Thus, the hole template identifies the location of the point cloud hole in the two-dimensional image, e.g., in the above example, the location of the point cloud hole at the pixel having the first pixel value. Fig. 6A to 6D schematically illustrate hole templates corresponding to different holes, and it is apparent from fig. 6A to 6D that the hole areas with white color (having the first pixel value) are to-be-filled areas.
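The following minimal sketch builds such a hole template from the gridded image. The helper name make_hole_template is hypothetical, and the 0/1 encoding follows the example values given above (first pixel value 0 for a hole, second pixel value 1 otherwise); any two distinguishable values would serve the same purpose.

```python
import numpy as np

def make_hole_template(image):
    """Build a hole template with the same size as the gridded two-dimensional image.

    Grids into which no point falls (pixel value 0) receive the first pixel
    value 0 (hole); grids containing a point receive the second pixel value 1
    (not a hole). The concrete values are only an illustrative encoding.
    """
    template = np.ones_like(image, dtype=np.float32)  # second pixel value: not a hole
    template[image == 0] = 0.0                        # first pixel value: hole to be filled
    return template
```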
Referring back now to FIG. 4, the subsequent steps of the point cloud hole filling method 400 according to an embodiment of the invention are continuously described.
In step S430, the two-dimensional image is filled with the hole template.
In the embodiment of the present invention, the hole template indicating the position of the point cloud hole in the two-dimensional image is obtained in step S420, and the two-dimensional image is filled using the hole template in step S430, so that the point cloud hole can be filled without obtaining the hole position in advance in the scheme of the present invention. In addition, as mentioned above, the hole template reflects the scanning mode of the lidar, so that the scheme of the present invention can be applied to the point cloud hole filling of the lidar in various scanning modes.
In an embodiment of the invention, a neural network can be trained with various hole templates to perform point cloud hole filling on the two-dimensional image. In this way, the two-dimensional image obtained in step S410 may be filled based on the trained neural network and the hole template obtained in step S420. Most existing deep-learning-based two-dimensional image restoration algorithms rely on the color, texture and similar characteristics of color image data, so it is difficult to fill holes in data without specific color and texture information, such as point cloud data. Therefore, corresponding hole templates are generated for different types of point cloud holes for the neural network to learn, so that the two-dimensional image can be filled through deep learning even without color and texture information.
Furthermore, as mentioned above, in the embodiment of the present invention, the point cloud data to be hole-filled is projected onto the two-dimensional plane, the three-dimensional point cloud data is converted into the two-dimensional image (i.e., the image to be filled, such as shown in fig. 7A), and then the subsequent hole-filling processing is performed, so that the processing difficulty can be reduced, and meanwhile, since the hole-filling processing is performed on the two-dimensional image, a foundation is laid for global and/or local filling, which is beneficial to improving the accuracy of the filling result. Based on this, in the embodiment of the present invention, hole filling can be performed from coarse to fine, from global to local based on the neural network. Illustratively, filling the two-dimensional image with a hole template (e.g., as shown in fig. 7B) in step S430 may include: based on a neural network, the two-dimensional image is filled with global information of the two-dimensional image to obtain a first filled image (for example, as shown in fig. 7C), and the hole template is used, in the first filled image, point cloud data of the hole position is determined according to point cloud data adjacent to the hole position, so that a second filled image (for example, as shown in fig. 7D) is obtained.
The filling the two-dimensional image with global information of the two-dimensional image based on a neural network may include: convolution operation, pooling operation, and deconvolution operation. That is, the two-dimensional image is subjected to a global operation by the neural network, and through the global operation, the pixel value of the position not corresponding to the hole in the original two-dimensional image is kept unchanged (or may be partially or completely changed), and the pixel value of the position corresponding to the hole in the original two-dimensional image is changed, for example, from the original 0 value to another value. Generally, the two-dimensional image is filled with global information of the two-dimensional image based on a neural network, so that the hole positions in the two-dimensional image are roughly and not accurately filled, and the visual effect is fuzzy. For distinguishing from the subsequent filling, the image obtained after the filling is referred to as a first filled image. For example, the operation of filling the two-dimensional image with global information of the two-dimensional image based on a neural network may be implemented by a first filling neural network (coarse filling network).
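A minimal PyTorch sketch of such a first filling (coarse) network is given below. It only follows the operations named above — convolution, pooling and deconvolution — so that filled values draw on a wide spatial context; the class name CoarseFillNet, the channel counts and the network depth are illustrative assumptions rather than the actual architecture of the disclosure.

```python
import torch
import torch.nn as nn

class CoarseFillNet(nn.Module):
    """Sketch of the first filling neural network (coarse filling network)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # pooling enlarges the receptive field
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(),  # deconvolution
            nn.ConvTranspose2d(32, 1, kernel_size=2, stride=2),              # back to input size
        )

    def forward(self, x):               # x: (B, 1, H, W) gridded depth image
        return self.decoder(self.encoder(x))
```

The output of such a network is the first (coarse) filled image; as noted above, pixel values at non-hole positions may or may not change during this global operation.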
In the embodiment of the present invention, the hole template generated in step S420 may be utilized and, in the first filled image, the point cloud data of the hole position is determined according to the point cloud data adjacent to the hole position, so as to obtain a second filled image, which is used as the final filled image. Specifically, this may include: generating an attention map based on an attention mechanism by using the hole template, wherein the attention map indicates the degree of correlation between the position of the hole and its adjacent area; performing feature extraction on the first filling image to obtain a feature map; and obtaining the second filling image according to the attention map and the feature map. Based on the attention map and the feature map obtained by feature extraction on the first filling image, a more accurate pixel value to be filled at the hole position can be determined, so that the filling effect is optimized with local information. Overall, compared with the first filling image, the sharpness of at least some positions in the second filling image is higher than that of the corresponding positions in the first filling image. For example, the operation of filling the first filling image with the hole template to obtain the second filling image may be implemented by a second filling neural network (fine filling network).
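The sketch below illustrates one plausible reading of this second (fine) filling stage as a simplified pixel-level attention: every pixel of the coarse result is re-estimated from features of non-hole pixels, weighted by cosine similarity, and only the hole positions are replaced. The class name FineFillAttention and all architectural details are assumptions for illustration, not the exact network of the disclosure; the full similarity matrix is quadratic in the number of pixels, so in practice it would be computed patch-wise or at reduced resolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineFillAttention(nn.Module):
    """Sketch of the second filling neural network (fine filling network)."""

    def __init__(self, channels=32):
        super().__init__()
        self.feature = nn.Conv2d(1, channels, kernel_size=3, padding=1)   # feature extraction
        self.out = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, coarse, template):
        # coarse: (B, 1, H, W) first filling image
        # template: (B, 1, H, W) hole template, 0 = hole, 1 = known grid
        feat = self.feature(coarse)                        # (B, C, H, W)
        b, c, h, w = feat.shape
        f = F.normalize(feat.flatten(2), dim=1)            # (B, C, H*W), unit-norm features
        sim = torch.bmm(f.transpose(1, 2), f)              # (B, HW, HW) cosine similarity
        known = template.flatten(2)                        # (B, 1, HW), 1 = known source pixel
        sim = sim.masked_fill(known == 0, float("-inf"))   # attend only to known pixels
        attn = torch.softmax(sim, dim=2)                   # attention map over neighbouring grids
        gathered = torch.bmm(feat.flatten(2), attn.transpose(1, 2)).view(b, c, h, w)
        fine = self.out(gathered)
        # Keep known grids from the coarse result; replace only the hole grids.
        return coarse * template + fine * (1.0 - template)
```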
In an embodiment of the present invention, the aforementioned second filler image may be output as a final result for a detection or recognition task or the like. Or, further, the second filling image may be inversely transformed into a three-dimensional space, so as to obtain three-dimensional point cloud data after hole filling. In other embodiments, the second filling image may be further filtered, for example, the second filling image is filtered by using bilateral filtering to remove some noise points or some details that may be present, so as to obtain a filtered image. Similarly, the filtered image may be back-transformed into three-dimensional space to obtain hole-filled three-dimensional point cloud data, or the filtered image may be directly output for detection or recognition tasks, etc.
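A sketch of this optional post-processing, assuming the projection geometry used in the earlier example, is shown below: the second filled image is smoothed with OpenCV's bilateral filter and then inversely transformed back into three-dimensional points. The helper name postprocess and the filter parameters are illustrative assumptions, not values from the disclosure.

```python
import numpy as np
import cv2

def postprocess(filled_image, u_range=(-1.0, 1.0), v_range=(-0.25, 0.25)):
    """Bilateral filtering followed by an inverse transform to a 3D point cloud."""
    # Bilateral filtering removes noise while preserving depth discontinuities.
    smoothed = cv2.bilateralFilter(filled_image.astype(np.float32),
                                   d=5, sigmaColor=2.0, sigmaSpace=2.0)

    # Inverse transform: each non-empty grid (row, col) with depth r becomes a 3D point.
    h, w = smoothed.shape
    rows, cols = np.nonzero(smoothed)
    u = u_range[0] + cols / (w - 1) * (u_range[1] - u_range[0])
    v = v_range[1] - rows / (h - 1) * (v_range[1] - v_range[0])
    rays = np.stack([np.ones_like(u), u, v], axis=1)        # line-of-sight direction per grid
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    points = rays * smoothed[rows, cols][:, None]            # scale unit rays by depth
    return smoothed, points
```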
Based on the above description, according to the point cloud hole filling method provided by the embodiment of the invention, the three-dimensional point cloud is projected onto a two-dimensional plane before hole filling is performed, which greatly reduces the sparsity of the point distribution and lowers the processing difficulty. At the same time, the hole template is generated with full consideration of the hole characteristics produced by different point cloud scanning modes, and the holes are filled using the hole template, so the method can be adapted to the scenes of various point cloud detection systems. Fig. 8A to 8D are schematic diagrams illustrating the filling results obtained after the point cloud holes shown in fig. 3A to 3D are filled by the point cloud hole filling method according to the embodiment of the present invention; it can be seen that the method can obtain a good point cloud hole filling effect.
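For orientation only, the fragment below chains the hypothetical helpers sketched above (project_to_grid, make_hole_template, CoarseFillNet, FineFillAttention, postprocess) in the order of steps S410 to S430 on a dummy point cloud; it illustrates the data flow, not trained networks or real sensor data.

```python
import numpy as np
import torch

# Dummy point cloud in front of the sensor (x forward), purely for illustration.
points = np.random.rand(2000, 3) * np.array([20.0, 10.0, 2.0]) + np.array([1.0, -5.0, -1.0])

image = project_to_grid(points, h=32, w=64)        # step S410: project and grid
template = make_hole_template(image)               # step S420: mark hole positions
x = torch.from_numpy(image)[None, None]            # (1, 1, H, W)
mask = torch.from_numpy(template)[None, None]

coarse_net, fine_net = CoarseFillNet(), FineFillAttention()
coarse = coarse_net(x)                             # step S430: global (coarse) filling
fine = fine_net(coarse, mask).clamp(min=0)         # step S430: local refinement at holes
filtered, filled_points = postprocess(fine[0, 0].detach().numpy())  # optional post-processing
```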
The above exemplarily describes the point cloud hole filling method according to one embodiment of the present invention. A point cloud hole filling method according to another embodiment of the invention is described below with reference to fig. 9. FIG. 9 shows a schematic flow diagram of a point cloud hole filling method 900 according to another embodiment of the invention. As shown in FIG. 9, the point cloud hole filling method 900 according to another embodiment of the invention may include the following steps:
in step S910, a point cloud is obtained, in which holes exist.
In step S920, the hole is filled with the global information of the point cloud, so as to obtain a first filling result.
In step S930, a position to be filled is obtained, and the position to be filled is located in the hole.
In step S940, point cloud data filled at the position to be filled is determined using point cloud data of a neighboring position of the position to be filled in the first filling result.
In the embodiment of the invention, the global information of the point cloud may be global information of the three-dimensional point cloud data in a three-dimensional space, or global information of a two-dimensional image obtained by projecting the three-dimensional point cloud data onto a two-dimensional plane. The related technical content of the two-dimensional image obtained by projecting the three-dimensional point cloud data onto the two-dimensional plane can be referred to the description in the foregoing, and for brevity, the description is omitted here.
In an embodiment of the present invention, the point cloud is projected onto a two-dimensional plane, and the two-dimensional plane is gridded to obtain a two-dimensional image; filling the hole by using the global information of the point cloud then includes determining the point cloud data of the hole positions in the two-dimensional image by using the global information of the point cloud data in the two-dimensional image. The related technical content can be found in the foregoing description and is not repeated here.
In an embodiment of the present invention, filling the hole with the global information of the point cloud may include: determining the point cloud data of the hole positions in the two-dimensional image by using the global information of the point cloud data in the two-dimensional image. Specifically, the first filling neural network may fill the two-dimensional image with the global information of the point cloud through convolution, pooling and deconvolution operations. For the details of this part of the technical content, reference may be made to the foregoing description, and for the sake of brevity, a detailed description is omitted here.
In an embodiment of the present invention, the filling of the first filling result may include filling with local information of the two-dimensional image. Further, a hole template corresponding to the hole position is generated according to the hole, and the pixel value of each pixel point in the hole template is a first pixel value or a second pixel value, wherein the first pixel value indicates that the position of the pixel point is a hole, and the second pixel value indicates that the position of the pixel point is not a hole. Generating the hole template corresponding to the point cloud hole according to the point cloud hole may include: obtaining a two-dimensional image according to the point cloud, and generating the hole template according to the two-dimensional image, the hole template having the same size as the two-dimensional image. For the details of this part of the technical content, reference may be made to the foregoing description, and for the sake of brevity, a detailed description is omitted here.
In an embodiment of the present invention, determining the point cloud data filled at the position to be filled by using point cloud data of positions adjacent to the position to be filled in the first filling result may include: generating an attention map for the position to be filled based on an attention mechanism, the attention map indicating the degree of correlation of the position to be filled with its neighboring regions; performing feature extraction on the first filling result to obtain a feature map; and obtaining a second filling result from the attention map and the feature map based on a second filling neural network. For the details of this part of the technical content, reference may be made to the foregoing description, and for the sake of brevity, a detailed description is omitted here.
Based on the above description, the point cloud hole filling method according to another embodiment of the present invention considers not only the local information of the hole but also the global information of the point cloud when performing point cloud hole filling, so as to improve the accuracy of the point cloud filling result. Further, according to the point cloud hole filling method of another embodiment of the present invention, when the point cloud hole is filled, the hole can be filled by using the hole template, and the method can be adapted to scenes of various point cloud detection systems.
The above exemplarily describes the point cloud hole filling methods according to the embodiments of the present invention. A point cloud hole filling apparatus provided according to another aspect of the present invention is described below with reference to fig. 10. FIG. 10 shows a schematic block diagram of a point cloud hole filling apparatus 1000 according to an embodiment of the invention. The point cloud hole filling apparatus 1000 includes a memory 1010 and a processor 1020.
The memory 1010 stores a program for implementing the corresponding steps in the point cloud hole filling method according to the embodiment of the present invention. The processor 1020 is configured to execute a program stored in the memory 1010 to perform the corresponding steps of the point cloud hole filling method according to the embodiment of the present invention.
In one embodiment, the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform the steps of: projecting the point cloud data to a two-dimensional plane aiming at the point cloud data to be subjected to hole filling, and forming a two-dimensional image based on the projected two-dimensional plane; generating a hole template corresponding to the two-dimensional image, wherein the hole template marks the position of a point cloud hole in the two-dimensional image; and filling the two-dimensional image by using the hole template.
In one embodiment of the invention, the projecting the point cloud data onto a two-dimensional plane, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, comprises: and projecting the point cloud data onto a two-dimensional plane in the forward looking direction of the laser radar.
In one embodiment of the present invention, the forming of a two-dimensional image based on the projected two-dimensional plane, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, includes: gridding the two-dimensional plane obtained after projection to form a two-dimensional image.
In an embodiment of the present invention, a pixel value of each pixel point of the two-dimensional image formed after the gridding is determined according to a parameter value of a point cloud point falling into a position of the pixel point.
In one embodiment of the present invention, the parameter value is a depth value or a reflectance of the point cloud point.
In an embodiment of the invention, at most one point cloud point falls in each grid of the two-dimensional image formed after the two-dimensional plane gridding, and the pixel value of the grid in which no point cloud point falls is 0.
In an embodiment of the present invention, the hole template and the two-dimensional image have the same size, and a pixel value of each pixel of the hole template is a first pixel value or a second pixel value, where the first pixel value indicates that the position of the pixel is a hole, and the second pixel value indicates that the position of the pixel is not a hole.
In one embodiment of the present invention, the generating a hole template corresponding to the two-dimensional image, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, includes: generating an image template which has the same size and gridding as the two-dimensional image; for a first grid in the two-dimensional image into which no point cloud point falls, setting the pixel value of the grid corresponding to the first grid in the image template to the first pixel value; and for a second grid in the two-dimensional image into which a point cloud point falls, setting the pixel value of the grid corresponding to the second grid in the hole template to the second pixel value.
In one embodiment of the present invention, the filling of the two-dimensional image with the hole template, which is performed by the point cloud hole filling apparatus 1000 when the program is executed by the processor 1020, includes: and filling the two-dimensional image by using the global information of the two-dimensional image based on a neural network to obtain a first filling image, and determining point cloud data of the hole position in the first filling image according to point cloud data adjacent to the hole position by using the hole template to obtain a second filling image.
In one embodiment of the invention, the sharpness of at least some positions in the second filler image is higher than the sharpness of corresponding positions in the first filler image.
In one embodiment of the invention, the pixel values of the first mesh into which no point cloud point falls in the two-dimensional image are different from the pixel values of the mesh corresponding to the first mesh in the first filler image.
In an embodiment of the invention, the pixel values of a second grid into which a point cloud point in the two-dimensional image falls are the same as the pixel values of the grid corresponding to the second grid in the first filler image.
In one embodiment of the present invention, the neural-network-based filling of the two-dimensional image with global information of the two-dimensional image, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, comprises: performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
In one embodiment of the present invention, the utilizing the hole template to obtain a second filling image, which is executed by the point cloud hole filling apparatus 1000 when the program is executed by the processor 1020, includes: generating an attention map based on an attention mechanism by utilizing the hole template based on a second filling neural network, wherein the attention map indicates the correlation degree of the position of the hole and the adjacent area of the hole; performing feature extraction on the first filling image to obtain a feature map; and obtaining the second filling image according to the attention diagram and the feature diagram.
In one embodiment of the invention, the program when executed by the processor 1020 further causes the point cloud hole filling apparatus 1000 to perform the steps of: after obtaining the second filler image, filtering the second filler image to obtain a filtered image.
In one embodiment of the invention, the filtering the second filled image, which the point cloud hole filling apparatus 1000 performs when the program is executed by the processor 1020, comprises: and filtering the second filling image in a bilateral filtering mode.
In one embodiment of the invention, the program when executed by the processor 1020 further causes the point cloud hole filling apparatus 1000 to perform the steps of: and inversely transforming the second filling image or the filtered image into a three-dimensional space to obtain three-dimensional point cloud data after hole filling.
In one embodiment of the invention, the program when executed by the processor 1020 further causes the point cloud hole filling apparatus 1000 to perform the steps of: outputting the second filler image or the filtered image directly for a detection or recognition task.
In one embodiment, the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform the steps of: acquiring a point cloud, wherein a hole exists in the point cloud; filling the hole by using the global information of the point cloud to obtain a first filling result; acquiring a position to be filled, wherein the position to be filled is located in the hole; and determining the point cloud data to be filled at the position to be filled by using point cloud data of positions adjacent to the position to be filled in the first filling result.
In one embodiment of the invention, the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform the steps of: and projecting the point cloud onto a two-dimensional plane, and carrying out meshing processing on the two-dimensional plane to obtain a two-dimensional image.
In one embodiment of the present invention, the filling of the hole with the global information of the point cloud, which is executed by the point cloud hole filling apparatus 1000 when the program is executed by the processor 1020, includes: and determining point cloud data of the hole position in the two-dimensional image by using the global information of the point cloud data in the two-dimensional image.
In one embodiment of the present invention, the determining of the point cloud data of hole positions in the two-dimensional image by using global information of the point cloud data in the two-dimensional image, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, includes: performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
In one embodiment of the present invention, the obtaining the positions to be filled, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, includes: and generating a hole template corresponding to the position of the hole according to the hole, wherein the pixel value of each pixel point in the hole template is a first pixel value or a second pixel value, the first pixel value indicates that the position of the pixel point is the hole, and the second pixel value indicates that the position of the pixel point is not the hole.
In one embodiment of the present invention, the determining of the point cloud data filled at the position to be filled by using point cloud data of positions adjacent to the position to be filled in the first filling result, which the program when executed by the processor 1020 causes the point cloud hole filling apparatus 1000 to perform, includes: generating an attention map for the position to be filled based on an attention mechanism, the attention map indicating the degree of correlation of the position to be filled with its neighboring regions; performing feature extraction on the first filling result to obtain a feature map; and obtaining a second filling result from the attention map and the feature map based on a second filling neural network.
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor are used for executing the respective steps of the point cloud hole filling method of an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In one embodiment, the computer program instructions, when executed by a computer, may perform a point cloud hole filling method according to an embodiment of the invention.
In one embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: projecting the point cloud data to a two-dimensional plane aiming at the point cloud data to be subjected to hole filling, and forming a two-dimensional image based on the projected two-dimensional plane; generating a hole template corresponding to the two-dimensional image, wherein the hole template marks the position of a point cloud hole in the two-dimensional image; and filling the two-dimensional image by using the hole template.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the projecting the point cloud data onto a two-dimensional plane, comprising: and projecting the point cloud data onto a two-dimensional plane in the forward looking direction of the laser radar.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the forming a two-dimensional image based on the two-dimensional plane obtained after projection, comprising: and gridding the two-dimensional plane obtained after projection to form a two-dimensional image.
In an embodiment of the present invention, a pixel value of each pixel point of the two-dimensional image formed after the gridding is determined according to a parameter value of a point cloud point falling into a position of the pixel point.
In one embodiment of the invention, the parameter value is a depth value or a reflectivity of the point cloud point.
In an embodiment of the invention, at most one point cloud point falls in each grid of the two-dimensional image formed after the two-dimensional plane gridding, and the pixel value of the grid in which no point cloud point falls is 0.
In an embodiment of the present invention, the hole template and the two-dimensional image have the same size, and a pixel value of each pixel of the hole template is a first pixel value or a second pixel value, where the first pixel value indicates that the position of the pixel is a hole, and the second pixel value indicates that the position of the pixel is not a hole.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the generating a hole template corresponding to the two-dimensional image, comprising: generating an image template which has the same size and gridding as the two-dimensional image; for a first grid in the two-dimensional image into which no point cloud point falls, setting the pixel value of the grid corresponding to the first grid in the image template to the first pixel value; and for a second grid in the two-dimensional image into which a point cloud point falls, setting the pixel value of the grid corresponding to the second grid in the hole template to the second pixel value.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the filling of the two-dimensional image with the hole template, comprising: and filling the two-dimensional image by using the global information of the two-dimensional image based on a neural network to obtain a first filling image, and determining point cloud data of the hole position in the first filling image according to point cloud data adjacent to the hole position by using the hole template to obtain a second filling image.
In one embodiment of the invention, the sharpness of at least some positions in the second filler image is higher than the sharpness of corresponding positions in the first filler image.
In one embodiment of the invention, the pixel values of the first mesh into which no point cloud point falls in the two-dimensional image are different from the pixel values of the mesh corresponding to the first mesh in the first filler image.
In an embodiment of the invention, the pixel values of a second grid into which a point cloud point in the two-dimensional image falls are the same as the pixel values of the grid corresponding to the second grid in the first filler image.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the neural-network-based filling of the two-dimensional image with global information of the two-dimensional image, comprising: performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the obtaining a second fill image using the hole template, comprising: generating an attention map based on an attention mechanism by utilizing the hole template based on a second filling neural network, wherein the attention map indicates the correlation degree of the position of the hole and the adjacent area of the hole; performing feature extraction on the first filling image to obtain a feature map; and obtaining the second filling image according to the attention diagram and the feature diagram.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, further cause the computer or processor to perform the steps of: after obtaining the second filler image, filtering the second filler image to obtain a filtered image.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the filtering of the second filling image, comprising: filtering the second filling image in a bilateral filtering manner.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, further cause the computer or processor to perform the steps of: and inversely transforming the second filling image or the filtered image into a three-dimensional space to obtain three-dimensional point cloud data after hole filling.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, further cause the computer or processor to perform the steps of: outputting the second filling image or the filtered image directly for a detection or recognition task.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: acquiring a point cloud, wherein a hole exists in the point cloud; filling the hole by using the global information of the point cloud to obtain a first filling result; acquiring a position to be filled, wherein the position to be filled is located in the hole; and determining the point cloud data to be filled at the position to be filled by using point cloud data in the first filling result adjacent to the position to be filled.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: projecting the point cloud onto a two-dimensional plane, and performing gridding processing on the two-dimensional plane to obtain a two-dimensional image.
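For illustration only, the sketch below projects a point cloud onto a two-dimensional plane in the forward-looking direction and grids the plane so that each grid keeps at most one point; the pinhole-style projection, the choice of depth as the stored pixel value and the overwrite rule when several points fall into the same grid are assumptions of the example.

```python
import numpy as np

def project_point_cloud_to_grid(points: np.ndarray,
                                fx: float, fy: float, cx: float, cy: float,
                                height: int, width: int) -> np.ndarray:
    """Project an N x 3 point cloud (z pointing forward) onto a gridded
    2D image; grids into which no point falls keep a pixel value of 0."""
    image = np.zeros((height, width), dtype=np.float32)
    pts = points[points[:, 2] > 0]                        # keep points in front of the sensor
    u = np.round(fx * pts[:, 0] / pts[:, 2] + cx).astype(int)
    v = np.round(fy * pts[:, 1] / pts[:, 2] + cy).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    image[v[valid], u[valid]] = pts[valid, 2]             # later points overwrite earlier ones
    return image
```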
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the filling of the hole with the global information of the point cloud, comprising: and determining point cloud data of the hole position in the two-dimensional image by using the global information of the point cloud data in the two-dimensional image.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the determining point cloud data for hole locations in the two-dimensional image using global information of point cloud data in the two-dimensional image, comprising: and performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the obtaining a location to be filled, comprising: and generating a hole template corresponding to the position of the hole according to the hole, wherein the pixel value of each pixel point in the hole template is a first pixel value or a second pixel value, the first pixel value indicates that the position of the pixel point is the hole, and the second pixel value indicates that the position of the pixel point is not the hole.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the determining of the point cloud data filled at the position to be filled by using point cloud data in the first filling result adjacent to the position to be filled, comprising: generating an attention map based on an attention mechanism by using the position to be filled, the attention map indicating the degree of correlation between the position to be filled and its adjacent area; performing feature extraction on the first filling result to obtain a feature map; and obtaining a second filling result according to the attention map and the feature map based on a second filling neural network.
Based on the above description, according to the point cloud hole filling method, device and storage medium of the embodiments of the invention, the three-dimensional point cloud is projected onto a two-dimensional plane before hole filling is performed, which greatly reduces the sparsity of the point distribution and lowers the processing difficulty. At the same time, the hole template is generated with full consideration of the characteristics of the hole patterns produced by different point cloud scanning modes, and the holes are filled by using the hole template, so that the method can be adapted to the scenes of different point cloud detection systems, problems such as insufficient or uneven sampling of the laser radar are well alleviated, details of the filled image are better presented, and interference from the sampling mode is avoided. The point cloud data filled according to the embodiments of the invention can be used directly for tasks such as detection and recognition, or can be inversely transformed back into three-dimensional space, thereby overcoming the deficiency caused by point cloud sparsity. The embodiments of the invention can train different types of hole templates for different radar scanning modes and can thus be well adapted to a variety of scenes.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. The features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on a computer-readable storage medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (26)

  1. A point cloud hole filling method, the method comprising:
    for point cloud data to be subjected to hole filling, projecting the point cloud data onto a two-dimensional plane, and forming a two-dimensional image based on the two-dimensional plane obtained after the projection;
    generating a hole template corresponding to the two-dimensional image, wherein the hole template marks the position of a point cloud hole in the two-dimensional image; and
    and filling the two-dimensional image by using the hole template.
  2. The method of claim 1, wherein the projecting the point cloud data onto a two-dimensional plane comprises:
    and projecting the point cloud data onto a two-dimensional plane in the forward looking direction of the laser radar.
  3. The method according to claim 1 or 2, wherein the forming a two-dimensional image based on the two-dimensional plane obtained after the projection comprises:
    and gridding the two-dimensional plane obtained after projection to form a two-dimensional image.
  4. The method according to claim 3, wherein the pixel value of each pixel point of the two-dimensional image formed after gridding is determined according to the parameter value of the point cloud point falling into the position of the pixel point.
  5. The method of claim 4, wherein the parameter value is a depth value or a reflectivity of the point cloud point.
  6. The method according to any one of claims 3 to 5, wherein the two-dimensional image formed after gridding the two-dimensional plane has at most one point cloud point in each grid, and the pixel value of a grid into which no point cloud point falls is 0.
  7. The method of any of claims 1-6, wherein the hole template is the same size as the two-dimensional image, and wherein a pixel value of each pixel of the hole template is a first pixel value or a second pixel value, wherein the first pixel value indicates that the pixel is a hole at its location and the second pixel value indicates that the pixel is not a hole at its location.
  8. The method of claim 7, wherein generating the hole template corresponding to the two-dimensional image comprises:
    generating an image template which has the same size and gridding as the two-dimensional image;
    for a first grid in the two-dimensional image into which no point cloud point falls, setting the pixel value of the grid corresponding to the first grid in the image template to a first pixel value;
    and for a second grid in the two-dimensional image into which a point cloud point falls, setting the pixel value of the grid corresponding to the second grid in the hole template to the second pixel value.
  9. The method according to any one of claims 1-8, wherein the filling the two-dimensional image with the hole template comprises:
    filling the two-dimensional image by using the global information of the two-dimensional image based on a neural network to obtain a first filling image; and determining, by using the hole template, point cloud data at the hole position in the first filling image according to point cloud data adjacent to the hole position, so as to obtain a second filling image.
  10. The method of claim 9, wherein the sharpness of at least some positions in the second filling image is higher than the sharpness of the corresponding positions in the first filling image.
  11. The method according to claim 9 or 10, wherein the pixel value of a first grid in the two-dimensional image into which no point cloud point falls is different from the pixel value of the grid corresponding to the first grid in the first filling image.
  12. The method of claim 11, wherein the pixel value of a second grid in the two-dimensional image into which a point cloud point falls is the same as the pixel value of the grid corresponding to the second grid in the first filling image.
  13. The method according to any one of claims 9-12, wherein the neural network-based populating the two-dimensional image with global information for the two-dimensional image comprises: and performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
  14. The method according to any of claims 9-13, wherein said obtaining a second fill image using said hole template comprises:
    generating, based on a second filling neural network, an attention map based on an attention mechanism by using the hole template, wherein the attention map indicates the degree of correlation between the position of the hole and the area adjacent to the hole;
    performing feature extraction on the first filling image to obtain a feature map;
    and obtaining the second filling image according to the attention map and the feature map.
  15. The method according to any one of claims 1-14, further comprising:
    after obtaining the second filling image, filtering the second filling image to obtain a filtered image.
  16. The method of claim 15, wherein the filtering the second filling image comprises: filtering the second filling image by means of bilateral filtering.
  17. The method according to any one of claims 1-16, further comprising:
    and inversely transforming the second filling image or the filtered image into a three-dimensional space to obtain three-dimensional point cloud data after hole filling.
  18. The method according to any one of claims 1-17, further comprising:
    outputting the second filling image or the filtered image directly for a detection or recognition task.
  19. A point cloud hole filling method, the method comprising:
    acquiring a point cloud, wherein holes exist in the point cloud;
    filling the hole by using the global information of the point cloud to obtain a first filling result;
    acquiring a position to be filled, wherein the position to be filled is positioned in the hole;
    and determining the point cloud data to be filled at the position to be filled by using point cloud data in the first filling result adjacent to the position to be filled.
  20. The method of claim 19, wherein the method further comprises:
    and projecting the point cloud onto a two-dimensional plane, and performing gridding processing on the two-dimensional plane to obtain a two-dimensional image.
  21. The method of claim 20, wherein the filling the hole with global information of the point cloud comprises:
    and determining point cloud data of the hole position in the two-dimensional image by using the global information of the point cloud data in the two-dimensional image.
  22. The method of claim 21, wherein determining point cloud data for hole locations in the two-dimensional image using global information of point cloud data in the two-dimensional image comprises: and performing convolution operation, pooling operation and deconvolution operation on the two-dimensional image based on the first filling neural network.
  23. The method according to any one of claims 19-22, wherein the acquiring a position to be filled comprises:
    and generating a hole template corresponding to the position of the hole according to the hole, wherein the pixel value of each pixel point in the hole template is a first pixel value or a second pixel value, the first pixel value indicates that the position of the pixel point is the hole, and the second pixel value indicates that the position of the pixel point is not the hole.
  24. The method of claim 19, wherein the determining the point cloud data filled at the position to be filled by using point cloud data in the first filling result adjacent to the position to be filled comprises:
    generating an attention map based on an attention mechanism by using the position to be filled, the attention map indicating the degree of correlation between the position to be filled and its adjacent area;
    performing feature extraction on the first filling result to obtain a feature map;
    and obtaining a second filling result according to the attention map and the feature map based on a second filling neural network.
  25. A point cloud hole filling apparatus, characterized in that the apparatus comprises a memory and a processor, the memory having stored thereon a computer program to be executed by the processor, and the computer program, when executed by the processor, performs the point cloud hole filling method of any one of claims 1-24.
  26. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed, performs the point cloud hole filling method of any one of claims 1-24.
CN201980078422.1A 2019-12-09 2019-12-09 Point cloud hole filling method and device and storage medium Pending CN113316801A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/124049 WO2021114030A1 (en) 2019-12-09 2019-12-09 Method and device for filling holes in point cloud, and storage medium

Publications (1)

Publication Number Publication Date
CN113316801A (en) 2021-08-27

Family

ID=76329293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980078422.1A Pending CN113316801A (en) 2019-12-09 2019-12-09 Point cloud hole filling method and device and storage medium

Country Status (2)

Country Link
CN (1) CN113316801A (en)
WO (1) WO2021114030A1 (en)



Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10046459B2 (en) * 2015-11-16 2018-08-14 Abb Schweiz Ag Three-dimensional visual servoing for robot positioning
US10574967B2 (en) * 2017-03-23 2020-02-25 The Boeing Company Autonomous performance of an operation on an object using a generated dense 3D model of the object
CN108198145B (en) * 2017-12-29 2020-08-28 百度在线网络技术(北京)有限公司 Method and device for point cloud data restoration

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190096086A1 (en) * 2017-09-22 2019-03-28 Zoox, Inc. Three-Dimensional Bounding Box From Two-Dimensional Image and Point Cloud Data
CN108399609A (en) * 2018-03-06 2018-08-14 北京因时机器人科技有限公司 A kind of method for repairing and mending of three dimensional point cloud, device and robot
CN110286387A (en) * 2019-06-25 2019-09-27 深兰科技(上海)有限公司 Obstacle detection method, device and storage medium applied to automated driving system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114048845A (en) * 2022-01-14 2022-02-15 深圳大学 Point cloud repairing method and device, computer equipment and storage medium
CN114048845B (en) * 2022-01-14 2022-06-03 深圳大学 Point cloud repairing method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2021114030A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
US9245200B2 (en) Method for detecting a straight line in a digital image
CN111210429B (en) Point cloud data partitioning method and device and obstacle detection method and device
US10510148B2 (en) Systems and methods for block based edgel detection with false edge elimination
US9189862B2 (en) Outline approximation for point cloud of building
AU2011362799B2 (en) 3D streets
US8885925B2 (en) Method for 3D object identification and pose detection using phase congruency and fractal analysis
CN111582054B (en) Point cloud data processing method and device and obstacle detection method and device
CN110879994A (en) Three-dimensional visual inspection detection method, system and device based on shape attention mechanism
US20080211809A1 (en) Method, medium, and system with 3 dimensional object modeling using multiple view points
US20100066737A1 (en) Dynamic-state estimating apparatus, dynamic-state estimating method, and program
US10497128B2 (en) Method and system for sea background modeling and suppression on high-resolution remote sensing sea images
US9405959B2 (en) System and method for classification of objects from 3D reconstruction
CN111553946B (en) Method and device for removing ground point cloud and method and device for detecting obstacle
CN113449534B (en) Two-dimensional code image processing method and device
CN112257605A (en) Three-dimensional target detection method, system and device based on self-labeling training sample
CN115240149A (en) Three-dimensional point cloud detection and identification method and device, electronic equipment and storage medium
Yogeswaran et al. 3d surface analysis for automated detection of deformations on automotive body panels
CN113316801A (en) Point cloud hole filling method and device and storage medium
EP1591960A1 (en) Method and apparatus for image processing
KR101927861B1 (en) Method and apparatus for removing noise based on mathematical morphology from geometric data of 3d space
CN111444839A (en) Target detection method and system based on laser radar
Gurram et al. Uniform grid upsampling of 3D lidar point cloud data
CN111932566A (en) Method, device and system for generating model contour map
US20230368462A1 (en) Information processing apparatus, information processing method, and recording medium
JP2004102402A (en) Partition data creating method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination