CN114332259A - Point cloud coding and decoding method based on vehicle-mounted laser radar - Google Patents

Point cloud coding and decoding method based on vehicle-mounted laser radar

Info

Publication number
CN114332259A
CN114332259A
Authority
CN
China
Prior art keywords
point cloud
map
distance
distance map
coding
Prior art date
Legal status
Pending
Application number
CN202111634168.4A
Other languages
Chinese (zh)
Inventor
陈建
林育芳
郑明魁
陈元相
廖燕俊
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202111634168.4A
Publication of CN114332259A

Abstract

The invention relates to a point cloud coding and decoding method based on a vehicle-mounted laser radar, comprising an encoding process and a decoding process. The encoding process comprises the following steps: step A1: inputting laser radar data at the encoding end; step A2: performing point cloud format conversion; step A3: down-sampling based on 3D point cloud segmentation; step A4: performing distance map coding and occupancy map coding. The decoding process comprises the following steps: step B1: converting the coded distance map and occupancy map back into a 3D point cloud; step B2: up-sampling based on 3D point cloud segmentation; step B3: outputting the reconstructed point cloud at the decoding end. For vehicle-mounted point clouds with huge data volumes, the method can effectively remove data redundancy and further improve compression performance.

Description

Point cloud coding and decoding method based on vehicle-mounted laser radar
Technical Field
The invention belongs to the field of vehicle-mounted point cloud, and particularly relates to a point cloud coding and decoding method based on a vehicle-mounted laser radar.
Background
The 3D point cloud reflects the three-dimensional world digitally in the form of a point set. Point clouds have a wide range of applications, and one of their key enabling technologies is point cloud coding, which has become a popular research field in recent years. In 2017, the Moving Picture Experts Group (MPEG) issued a call for proposals on the compression of three categories of point clouds (static, dynamic, and dynamically acquired); academia and industry then continued to study how to compress such data, and a draft international standard was introduced at the end of 2020, in which G-PCC (Geometry-based Point Cloud Compression) mainly targets the first and third categories.
Point clouds captured by lidar (Light Detection and Ranging) belong to the third category. The core information in a vehicle-mounted point cloud is the geometric attribute, i.e., the coordinates; additional information such as a reflectivity attribute is also carried. Because the raw data volume of a vehicle-mounted lidar is huge, direct storage or transmission occupies a large amount of memory and bandwidth, and the vehicle-mounted point cloud contains a large amount of redundant information, so compression is necessary.
In recent years, researchers have found that storage space can be saved by converting point clouds into range images. For example, Tu et al. proposed converting raw data packets from LiDAR sensors into a distance-map format, i.e., storing the 3D LiDAR point cloud information losslessly in a 2D matrix and then compressing it with conventional image coding techniques to reduce spatial redundancy. On this basis, researchers further studied the temporal correlation of LiDAR point cloud sequences and performed inter-frame prediction using motion estimation based on SLAM (Simultaneous Localization and Mapping) or an optical-flow method based on U-Net, further improving coding performance. Although these algorithms improve the compression of LiDAR point clouds, they do not fully exploit the characteristics of the vehicle-mounted point cloud, and the compression performance still has room for improvement.
Disclosure of Invention
The invention aims to provide a point cloud coding and decoding method based on a vehicle-mounted laser radar, which can effectively remove data redundancy for vehicle-mounted point clouds with huge data volume and further improve the compression performance.
To achieve the above object, the invention adopts the following technical scheme: a point cloud coding and decoding method based on a vehicle-mounted laser radar comprises an encoding process and a decoding process, wherein the encoding process comprises the following steps:
step A1: inputting laser radar data at an encoding end;
step A2: carrying out point cloud format conversion;
a21: generating a distance map from raw information of LiDAR data;
a22: generating geometric attributes of the 3D point cloud;
a23: generating a distance map index attribute of the 3D point cloud;
step A3: down-sampling based on 3D point cloud segmentation;
a31: carrying out point cloud segmentation according to the geometric attribute characteristics of the vehicle-mounted point cloud;
a32: adopting a corresponding down-sampling method according to the characteristics of the point cloud after segmentation;
a33: fusing the point clouds subjected to down sampling;
step A4: carrying out distance map coding and occupancy map coding;
a41: generating a down-sampled 2D distance map according to the distance map index attribute;
a42: generating an occupancy map according to the distance map index attribute;
a43: encoding the vectorized distance map;
a44: encoding the occupancy map;
the decoding process comprises the steps of:
step B1: converting the coded distance map and occupancy map into a 3D point cloud;
b11: decoding the distance vector to obtain the distance map;
b12: decoding the occupancy map;
b13: converting the 2D distance map into a 3D point cloud;
step B2: upsampling based on 3D point cloud segmentation;
b21: segmenting the point cloud according to the geometric attribute characteristics of the vehicle-mounted point cloud;
b22: selecting a proper interpolation algorithm to perform upsampling on the segmented point cloud;
b23: fusing the point clouds subjected to the upsampling;
step B3: and outputting the reconstructed point cloud at a decoding end.
Further, in the step A2 point cloud format conversion, step A22 generates the geometric attributes of the point cloud and step A23 generates the index attributes of the distance map, the index attributes comprising the row and column numbers of the points on the distance map; the row and column numbers of the distance map form the index attribute of the point cloud, so that step A41 at the encoding end can generate the down-sampled 2D distance map from the index attribute information of the point cloud.
Further, in the step A3 down-sampling based on 3D point cloud segmentation, step A31 performs point cloud segmentation according to the geometric attribute characteristics of the vehicle-mounted point cloud, step A32 adopts a corresponding down-sampling mode according to the characteristics of the segmented point clouds, and step A33 fuses the point clouds down-sampled in the different modes.
Further, step A31 divides the vehicle-mounted point cloud into ground and non-ground point clouds, and step A32 applies farthest-point down-sampling to the ground point cloud according to its ring-shaped (circular-scan) features; for the non-ground point cloud, voxel-based down-sampling is adopted according to the dense features of the target point cloud; step A33 combines the down-sampled ground and non-ground point clouds, from which the down-sampled 2D distance map and occupancy map are obtained for processing in step A4.
Further, in the step A4 distance map coding and occupancy map coding, step A41 recovers the row and column information of the down-sampled distance map according to the distance map index attributes of the 3D point cloud and extracts the distance values of the corresponding positions from the complete distance map generated in step A21 to generate the down-sampled 2D distance map; step A42 fills 1 at the row and column positions indicated by the index attributes and 0 at the remaining positions, thereby generating the occupancy map; step A43 obtains the distance vector according to the occupancy map and Morton-code ordering and then encodes the distance vector; and step A44 encodes the occupancy map.
Further, in step B1 the coded distance map and occupancy map are converted into the 3D point cloud: step B12 occupancy map decoding recovers the down-sampled occupancy map; step B11 distance map decoding reconstructs the distance vector and recovers the 2D distance image according to the Morton inverse ordering of the occupancy map; and step B13 converts the distance image into the 3D point cloud.
Further, in the step B2 up-sampling based on 3D point cloud segmentation, after the 3D point cloud is obtained in step B1, steps B21 and B23 perform point cloud segmentation and point cloud fusion using the same algorithms as steps A31 and A33, and step B22 applies the corresponding up-sampling processing to the different types of point clouds.
Compared with the prior art, the invention has the following beneficial effects: the point cloud coding and decoding method based on the vehicle-mounted laser radar can process raw lidar data and achieves a good redundancy-removal effect on vehicle-mounted point clouds with large data volumes. It offers a high compression rate and good reconstruction performance, further improves compression performance, and has strong practicability and broad application prospects.
Drawings
FIG. 1 is a general framework diagram of a method of an embodiment of the invention;
FIG. 2 is a schematic diagram of distance map transformation point cloud geometry in an embodiment of the invention;
FIG. 3 is a schematic diagram of down-sampling based on 3D point cloud segmentation in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the farthest point down-sampling in an embodiment of the present invention;
FIG. 5 is a schematic illustration of voxel-based downsampling in an embodiment of the present invention;
FIG. 6 is a schematic diagram of encoding in an embodiment of the present invention;
FIG. 7 is a schematic diagram of upsampling based on 3D point cloud segmentation in an embodiment of the present invention;
FIG. 8 is a schematic diagram of upsampling for ground point clouds in an embodiment of the present invention;
FIG. 9 is a schematic diagram of upsampling for non-ground point clouds in an embodiment of the invention.
In the drawings, parallelograms represent data and rectangles represent processing steps.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in fig. 1, the embodiment provides a point cloud encoding and decoding method based on a vehicle-mounted laser radar, which includes an encoding process and a decoding process. The encoding process comprises the steps of:
step A1: inputting laser radar data at an encoding end;
step A2: carrying out point cloud format conversion;
a21: generating a distance map from raw information of LiDAR data;
a22: generating geometric attributes of the 3D point cloud;
a23: generating a distance map index attribute of the 3D point cloud;
step A3: down-sampling based on 3D point cloud segmentation;
a31: carrying out point cloud segmentation according to the geometric attribute characteristics of the vehicle-mounted point cloud;
a32: adopting a corresponding down-sampling method according to the characteristics of the point cloud after segmentation;
a33: fusing the point clouds subjected to down sampling;
step A4: carrying out distance map coding and occupancy map coding;
a41: generating a down-sampled 2D distance map according to the distance map index attribute;
a42: generating an occupancy map according to the distance map index attribute;
a43: encoding the vectorized distance map;
a44: the occupancy map is encoded.
The decoding process comprises the steps of:
step B1: converting the coded distance map and occupancy map into a 3D point cloud;
b11: decoding the distance vector to obtain the distance map;
b12: decoding the occupancy map;
b13: converting the 2D distance map into a 3D point cloud;
step B2: upsampling based on 3D point cloud segmentation;
b21: segmenting the point cloud according to the geometric attribute characteristics of the vehicle-mounted point cloud;
b22: selecting a proper interpolation algorithm to perform upsampling on the segmented point cloud;
b23: fusing the point clouds subjected to the upsampling;
step B3: and outputting the reconstructed point cloud at a decoding end.
Steps A1 to B3 above are performed in sequence; only the data set and working list need to be set at the beginning, and the down-sampling at the encoding end corresponds to the up-sampling at the decoding end.
The step a1 inputs lidar data, and the raw information of the lidar data includes a distance value r and a reflectivity obtained by scanning in different pitch angle ω and yaw angle α directions.
In the step A2 point cloud format conversion, the step A21 distance map mapping arranges the distance values of the different pitch angles and yaw azimuths into a matrix in row-column index order to form the 2D distance map.
As shown in Table 1, in step A21 the vehicle-mounted laser radar (a Velodyne HDL-32E) captures the distance values of 32 pitch angles and 2172 yaw-angle azimuths at each time node, and the circular vehicle-mounted laser scan is represented as a 2D distance map: the row and column numbers given by the coordinate pairs in parentheses correspond to the index values of the angles ω and α, and the distance values of the azimuths are arranged into a matrix in row-column index order.
Table 1. Layout of the 2D distance map
(0,1) distance    (0,2) distance    ...    (0,2172) distance
(1,1) distance    (1,2) distance    ...    (1,2172) distance
...
(31,1) distance   (31,2) distance   ...    (31,2172) distance
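For illustration, the following is a minimal sketch in Python of assembling the 2D distance map of Table 1 from per-point (pitch index, yaw index, distance) triples; the function name, the 0-based indexing, and the zero fill for unobserved cells are illustrative assumptions rather than requirements of the method.

    import numpy as np

    # Assumed layout: 32 pitch rows and 2172 yaw columns (HDL-32E figures above).
    ROWS, COLS = 32, 2172

    def build_distance_map(pitch_idx, yaw_idx, distances):
        """Arrange distance values into a ROWS x COLS matrix in
        row-column index order; unobserved cells are left at 0."""
        range_image = np.zeros((ROWS, COLS), dtype=np.float32)
        range_image[pitch_idx, yaw_idx] = distances
        return range_image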
As shown in fig. 2, step A22 generates the geometric attributes of the 3D point cloud. In this embodiment, the coordinate transformation is performed according to the following formulas, converting the i-th point (r_i, ω_i, α_i) in the spherical coordinate system to (x_i, y_i, z_i) in Cartesian coordinates:
x_i = r_i × cos(ω_i) × sin(α_i)
y_i = r_i × cos(ω_i) × cos(α_i)
z_i = r_i × sin(ω_i)
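A direct transcription of these formulas as a sketch (the batch array form is an implementation choice; the same relations, inverted, give the 2D-to-3D recovery used in step B13):

    import numpy as np

    def spherical_to_cartesian(r, omega, alpha):
        """Convert points (r_i, omega_i, alpha_i) in spherical coordinates
        (omega = pitch angle, alpha = yaw angle, both in radians) to
        Cartesian (x, y, z), following the three formulas above."""
        x = r * np.cos(omega) * np.sin(alpha)
        y = r * np.cos(omega) * np.cos(alpha)
        z = r * np.sin(omega)
        return np.stack([x, y, z], axis=-1)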
Simultaneously with the generation of the geometric attributes of the point cloud, step A23 generates the index attributes of the distance map, i.e., the row and column numbers of each point on the distance map, which are the index values of the angles ω and α. These row and column numbers form the index attribute of the point cloud, so that step A41 at the encoding end can generate the down-sampled 2D distance map from the index attribute information of the point cloud.
As shown in fig. 3, step A31 performs point cloud segmentation according to the geometric attribute features of the vehicle-mounted point cloud. In this embodiment, a progressive filter is used to segment the point cloud: the size of the filter window is gradually increased, and an iterative decision is made in combination with a height-difference threshold (i.e., the height difference on the Z axis) to obtain the ground and non-ground point clouds. Segmentation stops when the window reaches its maximum size (the size of a building); the points filtered out form the ground point cloud, and the remainder form the non-ground point cloud. The iterative formula for the window size is:
G_k = 2kb + b
where k is the number of iterations and b is the initial window size (selected by the user).
The height-difference threshold is calculated as follows:
dh_{T,k} = dh_min,                        k = 1
dh_{T,k} = S(G_k - G_{k-1})c + dh_min,    k > 1
dh_{T,k} = dh_max,                        if dh_{T,k} > dh_max
where dh_min is the initial elevation difference of the non-ground point cloud, S is the slope, c is the grid cell size, and dh_max is the maximum elevation-difference threshold (typically the building size).
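The following is a much-simplified sketch of this iterative idea, using a grid-based local-minimum test rather than a true morphological filter; all parameter values, the fixed iteration count, and the min-z grid approximation are illustrative assumptions, not taken from the patent.

    import numpy as np
    from collections import defaultdict

    def segment_ground(points, b=0.5, s=0.15, c=0.5,
                       dh_min=0.2, dh_max=3.0, n_iter=5):
        """Grow the window G_k = 2kb + b each iteration and flag points whose
        height above the local minimum exceeds the growing threshold as
        non-ground. points: (N, 3) array of x, y, z."""
        is_ground = np.ones(len(points), dtype=bool)
        g_prev = b
        for k in range(1, n_iter + 1):
            g_k = 2 * k * b + b                        # window size G_k = 2kb + b
            if k == 1:
                dh_t = dh_min
            else:
                dh_t = min(s * (g_k - g_prev) * c + dh_min, dh_max)
            g_prev = g_k
            cells = [tuple(t) for t in
                     np.floor(points[:, :2] / g_k).astype(np.int64)]
            zmin = defaultdict(lambda: np.inf)         # lowest z per window
            for cell, z in zip(cells, points[:, 2]):
                zmin[cell] = min(zmin[cell], z)
            for i, (cell, z) in enumerate(zip(cells, points[:, 2])):
                if z - zmin[cell] > dh_t:              # too high above local minimum
                    is_ground[i] = False
        return points[is_ground], points[~is_ground]   # ground, non-ground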
Step A32 adopts a corresponding down-sampling method according to the characteristics of the segmented point clouds. In this embodiment, farthest-point down-sampling is adopted for the ground point cloud, both because the distance map is denser for points closer to the radar and to accommodate the ringed structure of the ground point cloud. For the non-ground point cloud, voxel-based down-sampling is adopted according to the dense features of the target point cloud.
Specifically, as shown in fig. 4, for farthest-point down-sampling of the ground point cloud, a point is first selected at random to form a set A; then, among the remaining points, the point farthest from set A is added to A, and the corresponding farthest distance is recorded in an array L. This repeats until the distances from all remaining points to set A are smaller than every element in L, at which point selection ends; the elements of A are the sampling points and form the down-sampled ground point cloud. As shown in fig. 5, for voxel-based down-sampling of the non-ground point cloud, a voxel-grid filter is created, the voxel size is set, and the points within each voxel are quantized to the voxel center.
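Sketches of the two down-sampling modes follow; the fixed sample count in the farthest-point routine stands in for the distance-based stopping rule described above, purely for brevity, and the function names are illustrative.

    import numpy as np

    def farthest_point_sampling(points, n_samples):
        """Greedy farthest-point sampling for the ground cloud: start from a
        random point, then repeatedly add the point farthest from the set
        selected so far."""
        rng = np.random.default_rng()
        idx = [int(rng.integers(len(points)))]
        dist = np.linalg.norm(points - points[idx[0]], axis=1)
        for _ in range(n_samples - 1):
            nxt = int(np.argmax(dist))               # farthest remaining point
            idx.append(nxt)
            dist = np.minimum(dist,
                              np.linalg.norm(points - points[nxt], axis=1))
        return points[idx]

    def voxel_downsample(points, voxel_size):
        """Voxel-grid down-sampling for the non-ground cloud: keep one point
        per occupied voxel, quantized to the voxel center as in the text."""
        keys = np.floor(points / voxel_size).astype(np.int64)
        _, first = np.unique(keys, axis=0, return_index=True)
        return (keys[first] + 0.5) * voxel_size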
As shown in fig. 6, in the step A4 occupancy map coding and distance map coding, in this embodiment step A41 recovers the row and column information of the down-sampled distance map according to the distance map index attributes of the 3D point cloud and extracts the distance values of the corresponding positions from the complete distance map generated in A21 to generate the down-sampled 2D distance map; step A42 generates the occupancy map from the distance map index attributes of the 3D point cloud, i.e., each position is set to 1 if it is occupied and 0 otherwise.
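A sketch of steps A41/A42 under the same assumed layout as above (the function name and the zero fill are illustrative assumptions):

    import numpy as np

    def build_occupancy_and_range(rows, cols, full_range_image):
        """From the retained points' distance-map index attributes (row, col):
        set 1 at occupied positions and 0 elsewhere (occupancy map), and copy
        the distance values of those positions from the complete distance map
        (down-sampled 2D distance map)."""
        occupancy = np.zeros(full_range_image.shape, dtype=np.uint8)
        occupancy[rows, cols] = 1
        sampled = np.zeros_like(full_range_image)
        sampled[rows, cols] = full_range_image[rows, cols]
        return occupancy, sampled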
In step A43, the distance map is vectorized into a one-dimensional distance vector according to the Morton-code ordering of the occupancy map, and the one-dimensional distance vector is compressed with JPEG (Joint Photographic Experts Group) coding. The Morton code is calculated as:
M(u,v) = sum over l = 1..d of (u_l * 2^(2l-1) + v_l * 2^(2l-2))
where (u, v) is the coordinate of a point in the 2D distance map and u_l, v_l ∈ {0,1} are the binary digits of u and v from the lowest bit (l = 1) to the highest bit (l = d). The binary form of the Morton code of (u, v) can then be written as:
[m_{2d}, m_{2d-1}, ..., m_2, m_1] = [u_d, v_d, ..., u_1, v_1]
i.e., the bits of u and v are interleaved.
in step A44, PNG (Portable Network graphics) compression is taken for the occupancy map.
Further, in step B1 the coded distance map and occupancy map are converted into a 3D point cloud. In this embodiment, step B12 decodes the occupancy map, restoring it by PNG decompression; step B11 decodes the distance map, reconstructing the one-dimensional distance vector by JPEG decompression and recovering the 2D distance image according to the Morton inverse ordering of the occupancy map; and step B13 converts the 2D distance image into the 3D point cloud, recovering the down-sampled 3D point cloud with the same coordinate conversion formulas as step A22. The inverse of the Morton code de-interleaves the bits:
u = sum over l = 1..d of m_{2l} * 2^(l-1),  v = sum over l = 1..d of m_{2l-1} * 2^(l-1)
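A bit-level sketch of the interleaving and its inverse; d = 12 is an assumed bit depth wide enough for the 2172 columns above, and the loop index runs 0-based (l' = l - 1 relative to the formulas).

    def morton_encode(u, v, d=12):
        """Interleave bits so that m_{2l} = u_l and m_{2l-1} = v_l,
        matching the forward formula above."""
        m = 0
        for l in range(d):
            m |= ((u >> l) & 1) << (2 * l + 1)   # u bit -> position 2l+1
            m |= ((v >> l) & 1) << (2 * l)       # v bit -> position 2l
        return m

    def morton_decode(m, d=12):
        """De-interleave, recovering (u, v); used for the inverse ordering
        that restores the 2D distance image in step B11."""
        u = v = 0
        for l in range(d):
            u |= ((m >> (2 * l + 1)) & 1) << l
            v |= ((m >> (2 * l)) & 1) << l
        return u, v

At the encoder, the occupied (u, v) positions are sorted by their Morton codes to vectorize the distance map into the one-dimensional distance vector; the decoder applies the inverse to the same ordering to place decoded values back into the 2D image.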
as shown in fig. 7, in the upsampling based on 3D point cloud segmentation in step B2, after obtaining A3D point cloud in step B1, steps B21 and B23 perform point cloud segmentation and point cloud fusion by using the same algorithm as steps a31 and a33, and step B22 performs corresponding upsampling processing on different types of point clouds. In this embodiment, after the point cloud is divided into the ground and the non-ground, the ground point cloud is upsampled based on polynomial curve fitting according to the vehicle-mounted point cloud characteristics and the downsampling mode in step a32, and the non-ground point cloud is upsampled by using a cubic spline curve.
Specifically, for the ground point cloud, since the number of retained samples is small, up-sampling based on polynomial fitting is adopted as shown in fig. 8: the algorithm creates a K-D tree, sets the K-neighborhood radius of the polynomial fitting, and then performs surface reconstruction with the fitted polynomial curve. For the non-ground point cloud, a cubic spline curve is adopted for up-sampling, as shown in fig. 9: the down-sampled non-ground point cloud is input and fitted with a cubic spline; reference coordinates on the y and z axes are set to solve for the x coordinate, and the new coordinate points (x, y, z) are added to the geometric attributes of the point cloud, yielding the up-sampled non-ground point cloud.
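Much-simplified sketches of the two up-sampling modes follow: a 1D polynomial fit over radial distance stands in for the K-neighborhood polynomial surface fitting, and an index-parameterized spline stands in for the y/z-referenced spline solve; degrees, sample counts, and parameterization are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def upsample_ground(ground, n_new=1000, deg=3):
        """Fit z as a polynomial of the radial distance rho = sqrt(x^2 + y^2)
        and sample new (rho, z) pairs along the fitted curve."""
        rho = np.hypot(ground[:, 0], ground[:, 1])
        coeffs = np.polyfit(rho, ground[:, 2], deg)
        rho_new = np.linspace(rho.min(), rho.max(), n_new)
        return rho_new, np.polyval(coeffs, rho_new)

    def upsample_nonground(points, n_new=200):
        """Fit each coordinate with a cubic spline over an index parameter
        and insert new samples, adding points to the geometric attributes."""
        t = np.arange(len(points), dtype=float)
        splines = [CubicSpline(t, points[:, k]) for k in range(3)]
        t_new = np.linspace(0.0, len(points) - 1.0, n_new)
        return np.stack([s(t_new) for s in splines], axis=1)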
Further, step B3 outputs the reconstructed 3D point cloud at the decoding end.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention; other and further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow. Any simple modification, equivalent change, or adaptation of the above embodiments according to the technical essence of the present invention falls within the protection scope of the technical solution of the present invention.

Claims (7)

1. A point cloud coding and decoding method based on a vehicle-mounted laser radar is characterized by comprising a coding process and a decoding process, wherein the coding process comprises the following steps:
step A1: inputting laser radar data at an encoding end;
step A2: carrying out point cloud format conversion;
a21: generating a distance map from raw information of LiDAR data;
a22: generating geometric attributes of the 3D point cloud;
a23: generating a distance map index attribute of the 3D point cloud;
step A3: down-sampling based on 3D point cloud segmentation;
a31: carrying out point cloud segmentation according to the geometric attribute characteristics of the vehicle-mounted point cloud;
a32: adopting a corresponding down-sampling method according to the characteristics of the point cloud after segmentation;
a33: fusing the point clouds subjected to down sampling;
step A4: carrying out distance map coding and occupancy map coding;
a41: generating a down-sampled 2D distance map according to the distance map index attribute;
a42: generating an occupancy map according to the distance map index attribute;
a43: encoding the vectorized distance map;
a44: encoding the occupancy map;
the decoding process comprises the steps of:
step B1: converting the coded distance map and occupancy map into a 3D point cloud;
b11: decoding the distance vector to obtain the distance map;
b12: decoding the occupancy map;
b13: converting the 2D distance map into a 3D point cloud;
step B2: upsampling based on 3D point cloud segmentation;
b21: segmenting the point cloud according to the geometric attribute characteristics of the vehicle-mounted point cloud;
b22: selecting a proper interpolation algorithm to perform upsampling on the segmented point cloud;
b23: fusing the point clouds subjected to the upsampling;
step B3: and outputting the reconstructed point cloud at a decoding end.
2. The point cloud encoding and decoding method based on the vehicle-mounted laser radar according to claim 1, wherein in the step A2 point cloud format conversion, the step A22 generates the geometric attributes of the point cloud and the step A23 generates the index attributes of the distance map, the index attributes comprising the row and column numbers of the points on the distance map; the row and column numbers of the distance map form the index attribute of the point cloud, so that the step A41 at the encoding end can generate the down-sampled 2D distance map from the index attribute information of the point cloud.
3. The point cloud coding and decoding method based on the vehicle-mounted laser radar according to claim 1, wherein in the step A3 down-sampling based on 3D point cloud segmentation, the step A31 performs point cloud segmentation according to the geometric attribute characteristics of the vehicle-mounted point cloud, the step A32 adopts a corresponding down-sampling mode according to the characteristics of the segmented point clouds, and the step A33 fuses the point clouds down-sampled in the different modes.
4. The point cloud coding and decoding method based on the vehicle-mounted laser radar according to claim 3, wherein the step A31 divides the vehicle-mounted point cloud into ground and non-ground point clouds, and the step A32 applies farthest-point down-sampling to the ground point cloud according to its ring-shaped feature; for the non-ground point cloud, voxel-based down-sampling is adopted according to the dense features of the target point cloud; the step A33 combines the down-sampled ground and non-ground point clouds, from which the down-sampled 2D distance map and occupancy map are obtained for processing in the step A4.
5. The point cloud coding and decoding method based on the vehicle-mounted laser radar according to claim 1, wherein in the step A4 distance map coding and occupancy map coding, the step A41 recovers the row and column information of the down-sampled distance map according to the distance map index attributes of the 3D point cloud and extracts the distance values of the corresponding positions from the complete distance map generated in the step A21 to generate the down-sampled 2D distance map; the step A42 fills 1 at the row and column positions indicated by the index attributes and 0 at the remaining positions according to the distance map index attributes of the 3D point cloud, thereby generating the occupancy map; the step A43 obtains the distance vector according to the occupancy map and Morton-code ordering and then encodes the distance vector; and the step A44 encodes the occupancy map.
6. The point cloud coding and decoding method based on the vehicle-mounted laser radar according to claim 1, wherein in the step B1 the coded distance map and occupancy map are converted into the 3D point cloud: the step B12 occupancy map decoding recovers the down-sampled occupancy map, the step B11 distance map decoding reconstructs the distance vector and recovers the 2D distance image according to the Morton inverse ordering of the occupancy map, and the step B13 converts the distance image into the 3D point cloud.
7. The point cloud coding and decoding method based on the vehicle-mounted laser radar according to claim 1, wherein in the step B2 up-sampling based on 3D point cloud segmentation, after the 3D point cloud is obtained in the step B1, the steps B21 and B23 perform point cloud segmentation and point cloud fusion using the same algorithms as the steps A31 and A33, and the step B22 applies the corresponding up-sampling processing to the different types of point clouds.
CN202111634168.4A 2021-12-29 2021-12-29 Point cloud coding and decoding method based on vehicle-mounted laser radar Pending CN114332259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111634168.4A CN114332259A (en) 2021-12-29 2021-12-29 Point cloud coding and decoding method based on vehicle-mounted laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111634168.4A CN114332259A (en) 2021-12-29 2021-12-29 Point cloud coding and decoding method based on vehicle-mounted laser radar

Publications (1)

Publication Number Publication Date
CN114332259A 2022-04-12

Family

ID=81017636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111634168.4A Pending CN114332259A (en) 2021-12-29 2021-12-29 Point cloud coding and decoding method based on vehicle-mounted laser radar

Country Status (1)

Country Link
CN (1) CN114332259A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474047A (en) * 2022-09-13 2022-12-13 福州大学 LiDAR point cloud encoding method and decoding method based on enhanced map correlation

Similar Documents

Publication Publication Date Title
Huang et al. 3d point cloud geometry compression on deep learning
JP5615354B2 (en) 3D mesh compression method using repetitive patterns
US6262737B1 (en) 3D mesh compression and coding
CN110390638B (en) High-resolution three-dimensional voxel model reconstruction method
US9002121B2 (en) Method and apparatus for encoding geometry patterns, and method for apparatus for decoding geometry patterns
Lee et al. Vertex data compression for triangular meshes
CN109949222B (en) Image super-resolution reconstruction method based on semantic graph
CN110691243A (en) Point cloud geometric compression method based on deep convolutional network
JP2015504559A (en) Method and apparatus for compression of mirror symmetry based 3D model
CN110278444B (en) Sparse representation three-dimensional point cloud compression method adopting geometric guidance
CN113554058A (en) Method, system, device and storage medium for enhancing resolution of visual target image
CN113518226A (en) G-PCC point cloud coding improvement method based on ground segmentation
Ho et al. Compressing large polygonal models
CN114332259A (en) Point cloud coding and decoding method based on vehicle-mounted laser radar
US10373384B2 (en) Lightfield compression using disparity predicted replacement
Yuan et al. A sampling-based 3D point cloud compression algorithm for immersive communication
Fan et al. Deep geometry post-processing for decompressed point clouds
CN115994849B (en) Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling
Ding et al. Point Diffusion Implicit Function for Large-scale Scene Neural Representation
CN115393452A (en) Point cloud geometric compression method based on asymmetric self-encoder structure
CN115065822A (en) Point cloud geometric information compression system, method and computer system
CN115769269A (en) Point cloud attribute compression
JP2023541271A (en) High density mesh compression
An et al. Research on the self-similarity of point cloud outline for accurate compression
CN113674369B (en) Method for improving G-PCC compression by deep learning sampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination