WO2020189709A1 - Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
- Publication number
- WO2020189709A1 (PCT application PCT/JP2020/011928)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional data
- dimensional
- information
- coding
- scale value
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/004—Predictors, e.g. intraframe, interframe coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/40—Tree coding, e.g. quadtree, octree
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- The present disclosure relates to a three-dimensional data coding method, a three-dimensional data decoding method, a three-dimensional data coding device, and a three-dimensional data decoding device.
- Devices or services that utilize 3D data are expected to become widespread in a wide range of fields such as computer vision for autonomous operation of automobiles or robots, map information, monitoring, infrastructure inspection, or video distribution.
- Three-dimensional data is acquired in various ways, for example by a distance sensor such as a range finder, by a stereo camera, or by a combination of a plurality of monocular cameras.
- One method of representing 3D data is the point cloud, which represents the shape of a three-dimensional structure as a set of points in three-dimensional space.
- In a point cloud, the position and color of each point are stored.
- Point clouds are expected to become the mainstream method for expressing three-dimensional data, but the amount of point cloud data is very large. In the storage or transmission of 3D data, it is therefore essential to compress the data amount by coding, as is done for two-dimensional video (for example, with MPEG-4 AVC or HEVC standardized by MPEG).
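- To make the data amount concrete, the following is a minimal sketch (an illustration, not part of the disclosure) of uncompressed point storage, where each point holds a three-float position and a three-byte color:

```python
import struct

# Each point: three 32-bit floats (x, y, z) + three bytes (r, g, b) = 15 bytes.
def pack_point(x, y, z, r, g, b):
    return struct.pack("<fffBBB", x, y, z, r, g, b)

point = pack_point(1.0, 2.0, 3.0, 255, 128, 0)
print(len(point))          # 15 bytes per point
print(len(point) * 10**6)  # 15,000,000 bytes (~15 MB) for a million-point cloud
```

Even before adding attributes such as reflectance, a scan with hundreds of millions of points runs to gigabytes, which is why coding-based compression is considered essential.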
- Point cloud compression is partially supported by a public library (the Point Cloud Library) that performs point-cloud-related processing.
- There is also a known technique for searching for and displaying facilities located around a vehicle using three-dimensional map data (see, for example, Patent Document 1).
- An object of the present disclosure is to provide a three-dimensional data coding method, a three-dimensional data decoding method, a three-dimensional data coding device, or a three-dimensional data decoding device that can reduce the memory capacity used.
- The three-dimensional data coding method according to one aspect of the present disclosure is a three-dimensional data coding method that uses a first coding method and a second coding method different from the first coding method. Using a first table that shows the correspondence between a plurality of values of a first quantization parameter and a plurality of values of a first scale value, and that is common to the first coding method and the second coding method, the first quantization parameter is converted into the first scale value, or the first scale value is converted into the first quantization parameter. Coded attribute information is generated by coding that includes a first quantization process in which each of a plurality of first coefficient values, based on a plurality of pieces of attribute information of a plurality of three-dimensional points included in the point cloud data, is divided by the first scale value. A bitstream containing the coded attribute information and the first quantization parameter is then generated.
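- The first quantization process described above can be sketched in Python as follows. This is an illustrative sketch, not the method defined by the disclosure: the QP-to-scale table (scale doubling every 4 QP steps, a convention borrowed from video coding) and the QP range of 0-31 are assumptions.

```python
# Hypothetical shared QP-to-scale table: a single table used by both
# coding methods, mapping each quantization parameter to a scale value.
QP_TO_SCALE = [2 ** (qp / 4) for qp in range(32)]

def quantize(coefficients, qp):
    """Divide each attribute-based coefficient by the scale for this QP."""
    scale = QP_TO_SCALE[qp]
    return [round(c / scale) for c in coefficients]

coeffs = [100.0, -52.0, 7.0]  # first coefficient values (illustrative)
print(quantize(coeffs, 8))    # QP 8 -> scale 4.0 -> [25, -13, 2]
```

Because the table is shared between the two coding methods, only the quantization parameter needs to be signaled in the bitstream, and neither method needs its own table in memory.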
- The three-dimensional data decoding method according to one aspect of the present disclosure is a three-dimensional data decoding method that uses a first decoding method and a second decoding method different from the first decoding method. From a bitstream, coded attribute information, in which a plurality of pieces of attribute information of a plurality of three-dimensional points included in point cloud data are encoded, and a first quantization parameter are acquired. Using a first table that shows the correspondence between a plurality of values of the first quantization parameter and a plurality of values of a first scale value, and that is common to the first decoding method and the second decoding method, the first quantization parameter is converted into the first scale value. The plurality of pieces of attribute information are then decoded by decoding that includes a first inverse quantization process of multiplying each of a plurality of first quantization coefficients, based on the coded attribute information, by the first scale value.
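- The corresponding first inverse quantization on the decoder side multiplies by the scale value drawn from the same shared table. Again an illustrative sketch under the same assumed table, not the disclosed specification:

```python
# The decoder must use the same (here hypothetical) QP-to-scale table
# as the encoder; only the QP itself travels in the bitstream.
QP_TO_SCALE = [2 ** (qp / 4) for qp in range(32)]

def dequantize(quantized_coefficients, qp):
    """Multiply each first quantization coefficient by the scale for this QP."""
    scale = QP_TO_SCALE[qp]
    return [q * scale for q in quantized_coefficients]

print(dequantize([25, -13, 2], 8))  # scale 4.0 -> [100.0, -52.0, 8.0]
```

Note that a coefficient originally equal to 7.0 comes back as 8.0: quantization is lossy, and the quantization parameter controls the trade-off between data amount and fidelity.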
- The present disclosure can provide a three-dimensional data coding method, a three-dimensional data decoding method, a three-dimensional data coding device, or a three-dimensional data decoding device that can reduce the memory capacity used.
- FIG. 1 is a diagram showing a configuration of coded three-dimensional data according to the first embodiment.
- FIG. 2 is a diagram showing an example of a prediction structure between SPCs belonging to the lowest layer of GOS according to the first embodiment.
- FIG. 3 is a diagram showing an example of a prediction structure between layers according to the first embodiment.
- FIG. 4 is a diagram showing an example of the coding order of GOS according to the first embodiment.
- FIG. 5 is a diagram showing an example of the coding order of GOS according to the first embodiment.
- FIG. 6 is a block diagram of the three-dimensional data encoding device according to the first embodiment.
- FIG. 7 is a flowchart of the coding process according to the first embodiment.
- FIG. 8 is a block diagram of the three-dimensional data decoding device according to the first embodiment.
- FIG. 9 is a flowchart of the decoding process according to the first embodiment.
- FIG. 10 is a diagram showing an example of meta information according to the first embodiment.
- FIG. 11 is a diagram showing a configuration example of SWLD according to the second embodiment.
- FIG. 12 is a diagram showing an operation example of the server and the client according to the second embodiment.
- FIG. 13 is a diagram showing an operation example of the server and the client according to the second embodiment.
- FIG. 14 is a diagram showing an operation example of the server and the client according to the second embodiment.
- FIG. 15 is a diagram showing an operation example of the server and the client according to the second embodiment.
- FIG. 16 is a block diagram of the three-dimensional data encoding device according to the second embodiment.
- FIG. 17 is a flowchart of the coding process according to the second embodiment.
- FIG. 18 is a block diagram of the three-dimensional data decoding device according to the second embodiment.
- FIG. 19 is a flowchart of the decoding process according to the second embodiment.
- FIG. 20 is a diagram showing a configuration example of the WLD according to the second embodiment.
- FIG. 21 is a diagram showing an example of the octree structure of the WLD according to the second embodiment.
- FIG. 22 is a diagram showing a configuration example of the SWLD according to the second embodiment.
- FIG. 23 is a diagram showing an example of the octree structure of SWLD according to the second embodiment.
- FIG. 24 is a block diagram of the three-dimensional data creation device according to the third embodiment.
- FIG. 25 is a block diagram of the three-dimensional data transmission device according to the third embodiment.
- FIG. 26 is a block diagram of the three-dimensional information processing apparatus according to the fourth embodiment.
- FIG. 27 is a block diagram of the three-dimensional data creation device according to the fifth embodiment.
- FIG. 28 is a diagram showing a configuration of the system according to the sixth embodiment.
- FIG. 29 is a block diagram of the client device according to the sixth embodiment.
- FIG. 30 is a block diagram of the server according to the sixth embodiment.
- FIG. 31 is a flowchart of the three-dimensional data creation process by the client device according to the sixth embodiment.
- FIG. 32 is a flowchart of the sensor information transmission process by the client device according to the sixth embodiment.
- FIG. 33 is a flowchart of the three-dimensional data creation process by the server according to the sixth embodiment.
- FIG. 34 is a flowchart of the three-dimensional map transmission process by the server according to the sixth embodiment.
- FIG. 35 is a diagram showing a configuration of a modified example of the system according to the sixth embodiment.
- FIG. 36 is a diagram showing a configuration of a server and a client device according to the sixth embodiment.
- FIG. 37 is a block diagram of the three-dimensional data encoding device according to the seventh embodiment.
- FIG. 38 is a diagram showing an example of the predicted residual according to the seventh embodiment.
- FIG. 39 is a diagram showing an example of the volume according to the seventh embodiment.
- FIG. 40 is a diagram showing an example of an octree representation of the volume according to the seventh embodiment.
- FIG. 41 is a diagram showing an example of a bit string of the volume according to the seventh embodiment.
- FIG. 42 is a diagram showing an example of an octree representation of the volume according to the seventh embodiment.
- FIG. 43 is a diagram showing an example of the volume according to the seventh embodiment.
- FIG. 44 is a diagram for explaining the intra prediction process according to the seventh embodiment.
- FIG. 45 is a diagram for explaining the rotation and translation processing according to the seventh embodiment.
- FIG. 46 is a diagram showing a syntax example of the RT application flag and RT information according to the seventh embodiment.
- FIG. 47 is a diagram for explaining the inter-prediction process according to the seventh embodiment.
- FIG. 48 is a block diagram of the three-dimensional data decoding device according to the seventh embodiment.
- FIG. 49 is a flowchart of the three-dimensional data coding process by the three-dimensional data coding apparatus according to the seventh embodiment.
- FIG. 50 is a flowchart of the three-dimensional data decoding process by the three-dimensional data decoding apparatus according to the seventh embodiment.
- FIG. 51 is a diagram showing an example of three-dimensional points according to the eighth embodiment.
- FIG. 52 is a diagram showing a setting example of LoD according to the eighth embodiment.
- FIG. 53 is a diagram showing an example of a threshold value used for setting LoD according to the eighth embodiment.
- FIG. 54 is a diagram showing an example of attribute information used for the predicted value according to the eighth embodiment.
- FIG. 55 is a diagram showing an example of the exponential Golomb code according to the eighth embodiment.
- FIG. 56 is a diagram showing processing for the exponential Golomb code according to the eighth embodiment.
- FIG. 57 is a diagram showing an example of syntax of the attribute header according to the eighth embodiment.
- FIG. 58 is a diagram showing an example of syntax of attribute data according to the eighth embodiment.
- FIG. 59 is a flowchart of the three-dimensional data coding process according to the eighth embodiment.
- FIG. 60 is a flowchart of the attribute information coding process according to the eighth embodiment.
- FIG. 61 is a diagram showing processing for the exponential Golomb code according to the eighth embodiment.
- FIG. 62 is a diagram showing an example of a reverse lookup table showing the relationship between the remaining code and the value according to the eighth embodiment.
- FIG. 63 is a flowchart of the three-dimensional data decoding process according to the eighth embodiment.
- FIG. 64 is a flowchart of the attribute information decoding process according to the eighth embodiment.
- FIG. 65 is a block diagram of the three-dimensional data encoding device according to the eighth embodiment.
- FIG. 66 is a block diagram of the three-dimensional data decoding device according to the eighth embodiment.
- FIG. 67 is a flowchart of the three-dimensional data coding process according to the eighth embodiment.
- FIG. 68 is a flowchart of the three-dimensional data decoding process according to the eighth embodiment.
- FIG. 69 is a diagram showing a first example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- FIG. 70 is a diagram showing an example of attribute information used for the predicted value according to the ninth embodiment.
- FIG. 71 is a diagram showing a second example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- FIG. 72 is a diagram showing a third example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- FIG. 73 is a diagram showing a fourth example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- FIG. 74 is a diagram for explaining the coding of attribute information using RAHT according to the tenth embodiment.
- FIG. 75 is a diagram showing an example in which the quantization scale is set for each layer according to the tenth embodiment.
- FIG. 76 is a diagram showing an example of a first code string and a second code string according to the tenth embodiment.
- FIG. 77 is a diagram showing an example of a truncated unary code according to the tenth embodiment.
- FIG. 78 is a diagram for explaining the inverse Haar transform according to the tenth embodiment.
- FIG. 79 is a diagram showing an example of syntax of attribute information according to the tenth embodiment.
- FIG. 80 is a diagram showing an example of the coding coefficient and ZeroCnt according to the tenth embodiment.
- FIG. 81 is a flowchart of the three-dimensional data coding process according to the tenth embodiment.
- FIG. 82 is a flowchart of the attribute information coding process according to the tenth embodiment.
- FIG. 83 is a flowchart of the coding coefficient coding process according to the tenth embodiment.
- FIG. 84 is a flowchart of the three-dimensional data decoding process according to the tenth embodiment.
- FIG. 85 is a flowchart of the attribute information decoding process according to the tenth embodiment.
- FIG. 86 is a flowchart of the coding coefficient decoding process according to the tenth embodiment.
- FIG. 87 is a block diagram of the attribute information coding unit according to the tenth embodiment.
- FIG. 88 is a block diagram of the attribute information decoding unit according to the tenth embodiment.
- FIG. 89 is a diagram for explaining the processing of the quantization unit and the inverse quantization unit according to the eleventh embodiment.
- FIG. 90 is a diagram for explaining a default value and a quantization delta of the quantization value according to the eleventh embodiment.
- FIG. 91 is a block diagram showing a configuration of a first coding unit included in the three-dimensional data coding device according to the eleventh embodiment.
- FIG. 92 is a block diagram showing the configuration of the divided portion according to the eleventh embodiment.
- FIG. 93 is a block diagram showing a configuration of a position information coding unit and an attribute information coding unit according to the eleventh embodiment.
- FIG. 94 is a block diagram showing the configuration of the first decoding unit according to the eleventh embodiment.
- FIG. 95 is a block diagram showing a configuration of a position information decoding unit and an attribute information decoding unit according to the eleventh embodiment.
- FIG. 96 is a flowchart showing an example of processing related to determination of a quantized value in coding of position information or coding of attribute information according to the eleventh embodiment.
- FIG. 97 is a flowchart showing an example of decoding processing of position information and attribute information according to the eleventh embodiment.
- FIG. 98 is a diagram for explaining a first example of the method of transmitting the quantization parameter according to the eleventh embodiment.
- FIG. 99 is a diagram for explaining a second example of the method of transmitting the quantization parameter according to the eleventh embodiment.
- FIG. 100 is a diagram for explaining a third example of the method of transmitting the quantization parameter according to the eleventh embodiment.
- FIG. 101 is a flowchart of the point cloud data coding process according to the eleventh embodiment.
- FIG. 102 is a flowchart showing an example of a process of determining the QP value according to the eleventh embodiment and updating the additional information.
- FIG. 103 is a flowchart showing an example of a process of encoding the determined QP value according to the eleventh embodiment.
- FIG. 104 is a flowchart of the point cloud data decoding process according to the eleventh embodiment.
- FIG. 105 is a flowchart showing an example of a process of acquiring the QP value according to the eleventh embodiment and decoding the QP value of the slice or tile.
- FIG. 106 is a diagram showing an example of GPS syntax according to the eleventh embodiment.
- FIG. 107 is a diagram showing an example of APS syntax according to the eleventh embodiment.
- FIG. 108 is a diagram showing an example of syntax of the header of the position information according to the eleventh embodiment.
- FIG. 109 is a diagram showing an example of syntax of the header of the attribute information according to the eleventh embodiment.
- FIG. 110 is a diagram for explaining another example of the method of transmitting the quantization parameter according to the eleventh embodiment.
- FIG. 111 is a diagram for explaining another example of the method of transmitting the quantization parameter according to the eleventh embodiment.
- FIG. 112 is a diagram for explaining a ninth example of the method of transmitting the quantization parameter according to the eleventh embodiment.
- FIG. 113 is a diagram for explaining a control example of the QP value according to the eleventh embodiment.
- FIG. 114 is a flowchart showing an example of a method of determining a QP value based on the quality of the object according to the eleventh embodiment.
- FIG. 115 is a flowchart showing an example of a method for determining a QP value based on rate control according to the eleventh embodiment.
- FIG. 116 is a flowchart of the coding process according to the eleventh embodiment.
- FIG. 117 is a flowchart of the decoding process according to the eleventh embodiment.
- FIG. 118 is a diagram for explaining an example of a method of transmitting the quantization parameter according to the twelfth embodiment.
- FIG. 119 is a diagram showing a first example of the syntax of the APS according to the twelfth embodiment and the syntax of the header of the attribute information.
- FIG. 120 is a diagram showing a second example of the APS syntax according to the twelfth embodiment.
- FIG. 121 is a diagram showing a second example of the syntax of the header of the attribute information according to the twelfth embodiment.
- FIG. 122 is a diagram showing the relationship between the SPS, APS, and the header of the attribute information according to the twelfth embodiment.
- FIG. 123 is a flowchart of the coding process according to the twelfth embodiment.
- FIG. 124 is a flowchart of the decoding process according to the twelfth embodiment.
- FIG. 125 is a block diagram showing a configuration of a three-dimensional data encoding device according to the thirteenth embodiment.
- FIG. 126 is a block diagram showing a configuration of a three-dimensional data decoding device according to the thirteenth embodiment.
- FIG. 127 is a diagram showing a setting example of LoD according to the thirteenth embodiment.
- FIG. 128 is a diagram showing an example of the hierarchical structure of RAHT according to the thirteenth embodiment.
- FIG. 129 is a block diagram of the three-dimensional data encoding device according to the thirteenth embodiment.
- FIG. 130 is a block diagram of the divided portion according to the thirteenth embodiment.
- FIG. 131 is a block diagram of the attribute information coding unit according to the thirteenth embodiment.
- FIG. 132 is a block diagram of the three-dimensional data decoding device according to the thirteenth embodiment.
- FIG. 133 is a block diagram of the attribute information decoding unit according to the thirteenth embodiment.
- FIG. 134 is a diagram showing an example of setting the quantization parameter in the tile and slice division according to the thirteenth embodiment.
- FIG. 135 is a diagram showing a setting example of the quantization parameter according to the thirteenth embodiment.
- FIG. 136 is a diagram showing a setting example of the quantization parameter according to the thirteenth embodiment.
- FIG. 137 is a diagram showing a syntax example of the attribute information header according to the thirteenth embodiment.
- FIG. 138 is a diagram showing a syntax example of the attribute information header according to the thirteenth embodiment.
- FIG. 139 is a diagram showing a setting example of the quantization parameter according to the thirteenth embodiment.
- FIG. 140 is a diagram showing an example of syntax of the attribute information header according to the thirteenth embodiment.
- FIG. 141 is a diagram showing a syntax example of the attribute information header according to the thirteenth embodiment.
- FIG. 142 is a flowchart of the three-dimensional data coding process according to the thirteenth embodiment.
- FIG. 143 is a flowchart of the attribute information coding process according to the thirteenth embodiment.
- FIG. 144 is a flowchart of the ΔQP determination process according to the thirteenth embodiment.
- FIG. 145 is a flowchart of the three-dimensional data decoding process according to the thirteenth embodiment.
- FIG. 146 is a flowchart of the attribute information decoding process according to the thirteenth embodiment.
- FIG. 147 is a block diagram of the attribute information coding unit according to the thirteenth embodiment.
- FIG. 148 is a block diagram of the attribute information decoding unit according to the thirteenth embodiment.
- FIG. 149 is a diagram showing a setting example of the quantization parameter according to the thirteenth embodiment.
- FIG. 150 is a diagram showing a syntax example of the attribute information header according to the thirteenth embodiment.
- FIG. 151 is a diagram showing a syntax example of the attribute information header according to the thirteenth embodiment.
- FIG. 152 is a flowchart of the three-dimensional data coding process according to the thirteenth embodiment.
- FIG. 153 is a flowchart of the attribute information coding process according to the thirteenth embodiment.
- FIG. 154 is a flowchart of the three-dimensional data decoding process according to the thirteenth embodiment.
- FIG. 155 is a flowchart of the attribute information decoding process according to the thirteenth embodiment.
- FIG. 156 is a block diagram of the attribute information coding unit according to the thirteenth embodiment.
- FIG. 157 is a block diagram of the attribute information decoding unit according to the thirteenth embodiment.
- FIG. 158 is a diagram showing a syntax example of the attribute information header according to the thirteenth embodiment.
- FIG. 159 is a flowchart of the three-dimensional data coding process according to the thirteenth embodiment.
- FIG. 160 is a flowchart of the three-dimensional data decoding process according to the thirteenth embodiment.
- FIG. 161 is a block diagram of the three-dimensional data encoding device according to the fourteenth embodiment.
- FIG. 162 is a block diagram of the three-dimensional data decoding device according to the fourteenth embodiment.
- FIG. 163 is a block diagram of the three-dimensional data encoding device according to the fourteenth embodiment.
- FIG. 164 is a block diagram of the three-dimensional data decoding device according to the fourteenth embodiment.
- FIG. 165 is a block diagram of the attribute information coding unit according to the fourteenth embodiment.
- FIG. 166 is a block diagram of the attribute information decoding unit according to the fourteenth embodiment.
- FIG. 167 is a block diagram of the three-dimensional data decoding device according to the fourteenth embodiment.
- FIG. 168 is a block diagram of the three-dimensional data decoding device according to the fourteenth embodiment.
- FIG. 169 is a diagram showing an example of a table according to the fourteenth embodiment.
- FIG. 170 is a block diagram of the scale value calculation unit according to the fourteenth embodiment.
- FIG. 171 is a block diagram of the three-dimensional data encoding device according to the fourteenth embodiment.
- FIG. 172 is a block diagram of the scale value calculation unit according to the fourteenth embodiment.
- FIG. 173 is a flowchart of the three-dimensional data decoding process according to the fourteenth embodiment.
- FIG. 174 is a block diagram of the scale value calculation unit according to the fourteenth embodiment.
- FIG. 175 is a block diagram of the three-dimensional data decoding device according to the fourteenth embodiment.
- FIG. 176 is a flowchart of the three-dimensional data coding process according to the fourteenth embodiment.
- FIG. 177 is a flowchart of the three-dimensional data decoding process according to the fourteenth embodiment.
- The three-dimensional data coding method is a three-dimensional data coding method that uses a first coding method and a second coding method different from the first coding method. Using a first table, which shows the correspondence between a plurality of values of a first quantization parameter and a plurality of values of a first scale value and is common to the first coding method and the second coding method, the first quantization parameter is converted into the first scale value, or the first scale value is converted into the first quantization parameter. Coded attribute information is generated by coding that includes a first quantization process in which each of a plurality of first coefficient values, based on a plurality of attribute information of a plurality of three-dimensional points included in point cloud data, is divided by the first scale value, and a bitstream including the coded attribute information and the first quantization parameter is generated.
- the first table can be shared between the first coding method and the second coding method.
- the three-dimensional data coding method can reduce the memory capacity used.
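As a rough sketch of the shared table lookup and first quantization process described above (the table contents, function names, and rounding are illustrative assumptions, not values from this disclosure):

```python
# Hypothetical shared first table mapping quantization-parameter values to
# scale values; both coding methods look up the same table.
QP_TO_SCALE = {4: 1, 10: 2, 16: 4, 22: 8, 28: 16}

def qp_to_scale(qp):
    """Convert the first quantization parameter into the first scale value."""
    return QP_TO_SCALE[qp]

def scale_to_qp(scale):
    """Convert the first scale value into the first quantization parameter."""
    return next(qp for qp, s in QP_TO_SCALE.items() if s == scale)

def first_quantization(coefficient_values, scale):
    """First quantization process: divide each coefficient by the scale."""
    return [c // scale for c in coefficient_values]

coded_attribute = first_quantization([100, 50, -25], qp_to_scale(16))
```

Because both methods consult the same table object, only one copy needs to be held in memory, which is the stated benefit.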
- For example, a plurality of post-shift attribute information may be generated by performing a leftward bit shift on each of the plurality of attribute information, and the plurality of first coefficient values may be generated by performing a conversion process on the plurality of post-shift attribute information using the plurality of position information of the plurality of three-dimensional points. The first scale value is a value obtained by multiplying a second scale value for quantization by a coefficient corresponding to the leftward bit shift, so that dividing each of the plurality of first coefficient values by the first scale value performs both the quantization and a rightward bit shift with the same number of bits as the leftward bit shift.
- the accuracy of the three-dimensional data coding method can be improved.
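The point that the leftward bit shift can be undone inside the quantization step, because the first scale value absorbs the coefficient 2^k, can be sketched as follows (integer arithmetic and the shift width are assumptions):

```python
SHIFT = 8  # assumed number of bits of the leftward bit shift

def quantize_combined(first_coefficients, quant_scale):
    # First scale value = quantization scale times the coefficient (2**SHIFT)
    # corresponding to the leftward bit shift; a single division then performs
    # both the quantization and a rightward shift of SHIFT bits.
    first_scale = quant_scale * (1 << SHIFT)
    return [c // first_scale for c in first_coefficients]

def quantize_two_step(first_coefficients, quant_scale):
    # Reference computation: quantize first, then shift right by SHIFT bits.
    return [(c // quant_scale) >> SHIFT for c in first_coefficients]
```

For positive integer scales the nested floor divisions agree, so the combined form saves one pass over the coefficients.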
- For example, a first bit count, which is the number of bits of the leftward bit shift and the rightward bit shift used in the first coding method, may differ from a second bit count, which is the number of bits of the leftward bit shift and the rightward bit shift used in the second coding method. In the conversion from the first scale value to the first quantization parameter, or from the first quantization parameter to the first scale value, when the first coding method is used, the first quantization parameter is determined by applying the first table to the first scale value, or the first scale value is determined by applying the first table to the first quantization parameter. When the second coding method is used, (i) the first quantization parameter may be determined by bit-shifting the first scale value by the number of bits of the difference between the first bit count and the second bit count and applying the first table to the first scale value after the bit shift, or (ii) a third scale value may be determined by applying the first table to the first quantization parameter, and the first scale value may be calculated by bit-shifting the third scale value by the number of bits of the difference between the first bit count and the second bit count.
- According to this, the three-dimensional data coding method can use a common table even when the number of bit-shift bits differs between the first coding method and the second coding method. As a result, the three-dimensional data coding method can reduce the memory capacity used.
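One way the table can still be shared when the two coding methods use different shift widths is to bit-shift the looked-up scale by the difference of the widths. The widths and table values below are illustrative assumptions; a left shift is shown for the case where the first bit count exceeds the second:

```python
FIRST_BITS = 8    # assumed shift width of the first coding method
SECOND_BITS = 6   # assumed shift width of the second coding method
SHARED_TABLE = {4: 1, 10: 2, 16: 4}   # hypothetical shared first table

def first_scale_second_method(qp):
    # Determine a third scale value from the shared first table, then shift it
    # by the bit-count difference to obtain the first scale value.
    third_scale = SHARED_TABLE[qp]
    return third_scale << (FIRST_BITS - SECOND_BITS)
```

The first method reads `SHARED_TABLE[qp]` directly, so both methods rely on the one table.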
- For example, the three-dimensional data coding method may further use a second table, which shows the correspondence between a plurality of values of a second quantization parameter and a plurality of values of a fourth scale value and is common to the first coding method and the second coding method, to convert the second quantization parameter into the fourth scale value, or the fourth scale value into the second quantization parameter. Coded position information may be generated by coding that includes a second quantization process in which each of a plurality of second coefficient values based on the plurality of position information of the plurality of three-dimensional points is divided by the fourth scale value, and the bitstream may further include the coded position information and the second quantization parameter.
- the second table can be shared between the first coding method and the second coding method.
- the three-dimensional data coding method can reduce the memory capacity used.
- The three-dimensional data decoding method is a three-dimensional data decoding method that uses a first decoding method and a second decoding method different from the first decoding method. Coded attribute information, in which a plurality of attribute information of a plurality of three-dimensional points included in point cloud data is encoded, and a first quantization parameter are acquired from a bitstream. Using a first table, which shows the correspondence between a plurality of values of the first quantization parameter and a plurality of values of a first scale value and is common to the first decoding method and the second decoding method, the first quantization parameter is converted into the first scale value, and the plurality of attribute information is decoded by decoding that includes a first inverse quantization process in which each of a plurality of first quantization coefficients based on the coded attribute information is multiplied by the first scale value.
- the first table can be shared between the first decoding method and the second decoding method.
- the three-dimensional data decoding method can reduce the memory capacity used.
- For example, a plurality of first coefficient values may be generated from the plurality of first quantization coefficients by the first inverse quantization process, a plurality of post-shift attribute information may be generated by performing an inverse conversion process on the plurality of first coefficient values using a plurality of position information of the plurality of three-dimensional points, and the plurality of attribute information may be generated by bit-shifting each of the plurality of post-shift attribute information to the right. The first scale value is a value obtained by multiplying a second scale value for inverse quantization by a coefficient corresponding to the rightward bit shift, so that multiplying each of the plurality of first quantization coefficients by the first scale value performs both the inverse quantization and a leftward bit shift with the same number of bits as the rightward bit shift.
- the accuracy of the three-dimensional data decoding method can be improved.
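The decoding-side counterpart, in which the first scale value again absorbs the shift coefficient so that one multiplication performs the inverse quantization and the scaling, can be sketched as follows (names and shift width are assumptions):

```python
SHIFT = 8  # assumed number of bits of the rightward bit shift

def first_inverse_quantization(quantized_coefficients, inv_quant_scale):
    # First scale value = inverse-quantization scale times the coefficient
    # (2**SHIFT) corresponding to the rightward bit shift.
    first_scale = inv_quant_scale * (1 << SHIFT)
    return [q * first_scale for q in quantized_coefficients]

def attributes_from_post_shift(post_shift_attributes):
    # Attribute information is recovered by shifting each post-shift
    # attribute value to the right by the same number of bits.
    return [v >> SHIFT for v in post_shift_attributes]
```

Working at the up-shifted precision until the final right shift is what preserves accuracy through the inverse conversion.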
- For example, a first bit count, which is the number of bits of the rightward bit shift and the leftward bit shift used in the first decoding method, may differ from a second bit count, which is the number of bits of the rightward bit shift and the leftward bit shift used in the second decoding method. In the conversion from the first quantization parameter to the first scale value, when the first decoding method is used, the first scale value is determined by applying the first table to the first quantization parameter. When the second decoding method is used, a third scale value may be determined by applying the first table to the first quantization parameter, and the first scale value may be calculated by bit-shifting the third scale value by the number of bits of the difference between the first bit count and the second bit count.
- According to this, the three-dimensional data decoding method can use a common table even when the number of bit-shift bits differs between the first decoding method and the second decoding method. As a result, the three-dimensional data decoding method can reduce the memory capacity used.
- For example, the three-dimensional data decoding method may further acquire, from the bitstream, coded position information in which a plurality of position information of the plurality of three-dimensional points is encoded, and a second quantization parameter. The second quantization parameter may be converted into a fourth scale value, and the plurality of position information may be decoded by decoding that includes a second inverse quantization process in which each of a plurality of second quantization coefficients based on the coded position information is multiplied by the fourth scale value.
- the second table can be shared between the first decoding method and the second decoding method.
- the three-dimensional data decoding method can reduce the memory capacity used.
- For example, when the first quantization parameter is smaller than 4, the first quantization parameter may be regarded as 4.
- the three-dimensional data decoding method can correctly perform decoding.
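The lower bound on the first quantization parameter described above amounts to a simple clamp, which might look like:

```python
def effective_qp(first_qp):
    # When the first quantization parameter is smaller than 4, regard it as 4
    # so that table lookups and scale derivation behave correctly.
    return max(first_qp, 4)
```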
- The three-dimensional data coding device is a three-dimensional data coding device that uses a first coding method and a second coding method different from the first coding method, and includes a processor and a memory. Using the memory, the processor converts a first quantization parameter into a first scale value, or the first scale value into the first quantization parameter, by using a first table that shows the correspondence between a plurality of values of the first quantization parameter and a plurality of values of the first scale value and is common to the first coding method and the second coding method; generates coded attribute information by coding that includes a first quantization process in which each of a plurality of first coefficient values based on a plurality of attribute information of a plurality of three-dimensional points included in point cloud data is divided by the first scale value; and generates a bitstream including the coded attribute information and the first quantization parameter.
- the three-dimensional data coding apparatus can share the first table between the first coding method and the second coding method.
- the three-dimensional data encoding device can reduce the memory capacity used.
- The three-dimensional data decoding device is a three-dimensional data decoding device that uses a first decoding method and a second decoding method different from the first decoding method, and includes a processor and a memory. Using the memory, the processor acquires, from a bitstream, coded attribute information obtained by encoding a plurality of attribute information of a plurality of three-dimensional points included in point cloud data, and a first quantization parameter; converts the first quantization parameter into a first scale value using a first table common to the first decoding method and the second decoding method; and decodes the plurality of attribute information by decoding that includes a first inverse quantization process in which each of a plurality of first quantization coefficients based on the coded attribute information is multiplied by the first scale value.
- the three-dimensional data decoding device can share the first table between the first decoding method and the second decoding method. As a result, the three-dimensional data decoding device can reduce the memory capacity used.
- These general or specific aspects may be realized as a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
- FIG. 1 is a diagram showing a configuration of coded three-dimensional data according to the present embodiment.
- the three-dimensional space is divided into a space (SPC) corresponding to a picture in coding a moving image, and three-dimensional data is encoded in units of spaces.
- the space is further divided into volumes (VLM) corresponding to macroblocks and the like in moving image coding, and prediction and conversion are performed in units of VLM.
- the volume includes a plurality of voxels (VXL) which are the smallest units to which the position coordinates are associated.
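The nesting of processing units described above (world, GOS, SPC, VLM, VXL) can be sketched as a set of container types; the field names are hypothetical and carry none of the coding semantics:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Voxel:                 # VXL: minimum unit associated with position coordinates
    position: Tuple[int, int, int]

@dataclass
class Volume:                # VLM: unit of prediction and conversion
    voxels: List[Voxel] = field(default_factory=list)

@dataclass
class Space:                 # SPC: corresponds to a picture in video coding
    volumes: List[Volume] = field(default_factory=list)

@dataclass
class GroupOfSpaces:         # GOS: random access unit
    spaces: List[Space] = field(default_factory=list)

@dataclass
class World:                 # WLD: processing unit containing multiple GOSs
    groups: List[GroupOfSpaces] = field(default_factory=list)
```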
- Prediction, as with prediction performed for two-dimensional images, generates predicted three-dimensional data similar to the processing unit to be processed by referring to another processing unit, and encodes the difference between the predicted three-dimensional data and the processing unit to be processed.
- this prediction includes not only spatial prediction that refers to other prediction units at the same time, but also time prediction that refers to prediction units at different times.
- When encoding a three-dimensional space represented by point cloud data such as a point cloud, a three-dimensional data coding device (hereinafter also referred to as a coding device) collectively encodes each point of the point cloud, or a plurality of points contained in a voxel, according to the size of the voxel. If the voxels are subdivided, the three-dimensional shape of the point cloud can be expressed with high accuracy; if the voxel size is increased, the three-dimensional shape of the point cloud is expressed roughly.
- In the following, the case where the 3D data is a point cloud is described as an example; however, the 3D data is not limited to a point cloud and may be 3D data in any format.
- Hierarchical voxels may be used. In this case, for the n-th layer, it may be indicated in order whether or not sample points exist in the (n-1)-th or lower layers (the layers below the n-th layer). For example, when decoding only down to the n-th layer, a sample point can be regarded as existing at the center of a voxel in the n-th layer and decoded accordingly.
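The layered occupancy indication described above can be sketched as follows, assuming each layer is given as a voxel edge length from coarse to fine and records which voxels contain sample points at that layer or below (a structural sketch only; all names are hypothetical):

```python
def layer_occupancy(sample_points, layer_sizes):
    """For each layer, given as a voxel edge length from coarse to fine,
    indicate which voxels contain a sample point at that layer or below."""
    occupancy = []
    for size in layer_sizes:
        occupied = {tuple(c // size for c in p) for p in sample_points}
        occupancy.append(occupied)
    return occupancy

def decode_at_layer(voxel_index, size):
    # When decoding only down to this layer, a sample point is regarded as
    # existing at the center of the occupied voxel.
    return tuple(i * size + size // 2 for i in voxel_index)
```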
- the coding device acquires point group data using a distance sensor, a stereo camera, a monocular camera, a gyro, an inertial sensor, or the like.
- As in moving-image coding, each space is classified into at least one of three prediction structures: an intra space (I-SPC) that can be decoded independently, a predictive space (P-SPC) that can refer in only one direction, and a bidirectional space (B-SPC) in which bidirectional reference is possible.
- the space has two types of time information, a decoding time and a display time.
- As further processing units, there are a GOS (Group Of Space), which is a random access unit, and a world (WLD), which exists as a processing unit including a plurality of GOSs.
- the spatial area occupied by the world is associated with the absolute position on the earth by GPS or latitude and longitude information. This position information is stored as meta information.
- the meta information may be included in the coded data or may be transmitted separately from the coded data.
- all SPCs may be three-dimensionally adjacent to each other, or there may be SPCs that are not three-dimensionally adjacent to other SPCs.
- processing such as encoding, decoding or reference of three-dimensional data included in a processing unit such as GOS, SPC or VLM is also described simply as encoding, decoding or referencing the processing unit.
- the three-dimensional data included in the processing unit includes, for example, at least one set of a spatial position such as three-dimensional coordinates and a characteristic value such as color information.
- a plurality of SPCs in the same GOS or a plurality of VLMs in the same SPC occupy different spaces from each other, but have the same time information (decoding time and display time).
- the SPC that is the first in the decoding order in GOS is I-SPC.
- There are two types of GOS: closed GOS and open GOS.
- the closed GOS is a GOS capable of decoding all SPCs in the GOS when decoding is started from the head I-SPC.
- On the other hand, in an open GOS, some SPCs whose display time is earlier than that of the head I-SPC in the GOS refer to a different GOS, and decoding cannot be performed using only that GOS.
- A WLD may be decoded in the direction opposite to the coding order, and reverse playback is difficult if there are dependencies between GOSs. Therefore, in such a case, a closed GOS is basically used.
- the GOS has a layer structure in the height direction, and encoding or decoding is performed in order from the SPC of the lower layer.
- FIG. 2 is a diagram showing an example of a prediction structure between SPCs belonging to the lowest layer of GOS.
- FIG. 3 is a diagram showing an example of a prediction structure between layers.
- There are one or more I-SPCs in a GOS. Objects such as humans, animals, cars, bicycles, traffic lights, and landmark buildings exist in the three-dimensional space, and it is effective to encode objects that are particularly small in size individually as I-SPCs.
- a three-dimensional data decoding device (hereinafter, also referred to as a decoding device) decodes only the I-SPC in the GOS when decoding the GOS with a low processing amount or a high speed.
- the coding device may switch the coding interval or appearance frequency of the I-SPC according to the density of the objects in the WLD.
- the coding device or the decoding device encodes or decodes a plurality of layers in order from the lower layer (layer 1). As a result, it is possible to raise the priority of data near the ground, which has a larger amount of information for, for example, an autonomous vehicle.
- the encoded data used in a drone or the like may be encoded or decoded in order from the SPC of the upper layer in the height direction in the GOS.
- the encoding device or the decoding device may encode or decode a plurality of layers so that the decoding device can roughly grasp the GOS and gradually increase the resolution.
- For example, the encoding device or decoding device may encode or decode layers 3, 8, 1, 9, and so on, in that order.
- Objects in a three-dimensional space include static objects or scenes such as buildings or roads, and dynamic objects such as cars or humans. Object detection is performed separately, for example by extracting feature points from point cloud data or from camera images of a stereo camera or the like.
- the first method is a method of encoding a static object and a dynamic object without distinguishing them.
- the second method is a method of distinguishing between a static object and a dynamic object by identification information.
- GOS is used as an identification unit.
- the GOS including the SPC constituting the static object and the GOS including the SPC constituting the dynamic object are distinguished by the identification information stored in the coded data or separately from the coded data.
- the SPC may be used as the identification unit.
- the SPC including the VLM that constitutes the static object and the SPC that includes the VLM that constitutes the dynamic object are distinguished by the above identification information.
- VLM or VXL may be used as the identification unit.
- VLM or VXL including a static object and VLM or VXL including a dynamic object are distinguished by the above identification information.
- the encoding device may encode the dynamic object as one or more VLMs or SPCs, and encode the VLM or SPC including the static object and the SPC including the dynamic object as different GOSs. Further, when the size of the GOS is variable according to the size of the dynamic object, the encoding device separately stores the size of the GOS as meta information.
- the coding device may encode the static object and the dynamic object independently of each other, and superimpose the dynamic object on the world composed of the static objects.
- the dynamic object is composed of one or more SPCs, and each SPC is associated with one or more SPCs constituting the static object on which the SPC is superimposed.
- the dynamic object may be represented by one or more VLMs or VXLs instead of the SPC.
- the encoding device may encode the static object and the dynamic object as different streams.
- the encoding device may generate a GOS including one or more SPCs constituting a dynamic object. Further, the encoding device may set the GOS including the dynamic object (GOS_M) and the GOS of the static object corresponding to the spatial area of GOS_M to the same size (occupy the same spatial area). As a result, the superimposition processing can be performed in units of GOS.
- the P-SPC or B-SPC constituting the dynamic object may refer to the SPC included in a different encoded GOS.
- the reference across the GOS is effective from the viewpoint of compression ratio.
- the above-mentioned first method and the second method may be switched depending on the use of the coded data. For example, when the coded three-dimensional data is used as a map, it is desirable that the dynamic objects can be separated, so the coding device uses the second method. On the other hand, the coding device uses the first method when it is not necessary to separate dynamic objects when coding three-dimensional data of an event such as a concert or a sport.
- The decoding time and the display time of a GOS or SPC can be stored in the encoded data or as meta information. Moreover, the time information of all static objects may be the same. In this case, the actual decoding time and display time may be determined by the decoding device. Alternatively, a different value may be assigned to each GOS or SPC as the decoding time, and the same value may be assigned as the display time. Further, as with a decoder model in moving-image coding such as the HEVC HRD (Hypothetical Reference Decoder), a model may be introduced that guarantees that a decoder can decode without failure if it has a buffer of a predetermined size and reads the bit stream at a predetermined bit rate according to the decoding time.
- the coordinates of the three-dimensional space in the world are represented by three coordinate axes (x-axis, y-axis, and z-axis) that are orthogonal to each other.
- For example, GOSs in the xz plane are encoded consecutively.
- the y-axis value is updated after the coding of all GOS in a certain xz plane is completed. That is, as the coding progresses, the world extends in the y-axis direction.
- the GOS index numbers are set in the order of coding.
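The coding order described above, in which all GOSs in an xz plane are encoded before the y value is updated and index numbers follow the coding order, can be sketched as (the grid indexing scheme is an assumption):

```python
def gos_coding_order(nx, ny, nz):
    """Yield (index, (x, y, z)) pairs for GOSs in coding order: every GOS in
    an xz plane is encoded before y is updated, so the world grows along y."""
    order = []
    index = 0
    for y in range(ny):
        for z in range(nz):
            for x in range(nx):
                order.append((index, (x, y, z)))  # index numbers follow coding order
                index += 1
    return order
```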
- the three-dimensional space of the world is associated with GPS or geographical absolute coordinates such as latitude and longitude on a one-to-one basis.
- the three-dimensional space may be represented by a position relative to a preset reference position.
- the x-axis, y-axis, and z-axis directions of the three-dimensional space are expressed as direction vectors determined based on latitude, longitude, and the like, and the direction vectors are stored together with encoded data as meta information.
- the size of GOS is fixed, and the encoding device stores the size as meta information. Further, the size of the GOS may be switched depending on, for example, whether it is an urban area or indoors or outdoors. That is, the size of the GOS may be switched depending on the amount or nature of the objects that are valuable as information.
- the encoding device may adaptively switch the size of the GOS or the interval of the I-SPC in the GOS according to the density of objects and the like in the same world. For example, the encoding device reduces the size of the GOS and shortens the interval between I-SPCs in the GOS as the density of objects increases.
- The GOS is subdivided in order to realize random access with fine granularity.
- the 7th to 10th GOSs are located behind the 3rd to 6th GOSs, respectively.
- FIG. 6 is a block diagram of the three-dimensional data encoding device 100 according to the present embodiment.
- FIG. 7 is a flowchart showing an operation example of the three-dimensional data encoding device 100.
- the three-dimensional data coding device 100 shown in FIG. 6 generates coded three-dimensional data 112 by encoding the three-dimensional data 111.
- the three-dimensional data coding device 100 includes an acquisition unit 101, a coding area determination unit 102, a division unit 103, and a coding unit 104.
- the acquisition unit 101 acquires the three-dimensional data 111, which is the point cloud data (S101).
- the coding area determination unit 102 determines the area to be coded among the spatial areas corresponding to the acquired point cloud data (S102). For example, the coding area determination unit 102 determines the spatial area around the position as the area to be coded according to the position of the user or the vehicle.
- the division unit 103 divides the point cloud data included in the region to be encoded into each processing unit.
- the processing unit is the above-mentioned GOS, SPC, or the like.
- the region to be encoded corresponds to, for example, the above-mentioned world.
- the division unit 103 divides the point cloud data into processing units based on the preset size of GOS or the presence / absence or size of the dynamic object (S103). Further, the division unit 103 determines the start position of the SPC which is the head in the coding order in each GOS.
- the coding unit 104 generates the coded three-dimensional data 112 by sequentially coding the plurality of SPCs in each GOS (S104).
- Here, an example is shown in which each GOS is encoded after the region to be encoded is divided into GOSs and SPCs; however, the processing procedure is not limited to the above. For example, a procedure may be used in which the configuration of one GOS is determined, that GOS is encoded, and then the configuration of the next GOS is determined.
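The flow of FIG. 7 (S101 to S104) can be sketched as a skeleton; the division and coding helpers below are placeholders standing in for the real GOS/SPC processing, not the disclosed algorithms:

```python
def split_into_gos(points, gos_size=2):
    # Hypothetical division: fixed-size chunks stand in for GOS/SPC division
    # based on preset GOS size and the presence/absence of dynamic objects.
    return [points[i:i + gos_size] for i in range(0, len(points), gos_size)]

def encode_gos(gos):
    # Placeholder for sequentially coding the SPCs in a GOS (S104).
    return tuple(gos)

def encode_three_dimensional_data(point_cloud, in_region):
    points = list(point_cloud)                            # S101: acquire data
    region_points = [p for p in points if in_region(p)]   # S102: determine region
    gos_list = split_into_gos(region_points)              # S103: divide into units
    return [encode_gos(g) for g in gos_list]              # S104: encode each GOS
```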
- the three-dimensional data encoding device 100 generates the encoded three-dimensional data 112 by encoding the three-dimensional data 111.
- As described above, the three-dimensional data encoding device 100 divides the three-dimensional data into first processing units (GOS), which are random access units each associated with three-dimensional coordinates; divides each first processing unit (GOS) into a plurality of second processing units (SPC); and divides each second processing unit (SPC) into a plurality of third processing units (VLM). Each third processing unit (VLM) includes one or more voxels (VXL), which are the minimum units to which position information is associated.
- the three-dimensional data encoding device 100 generates encoded three-dimensional data 112 by encoding each of the plurality of first processing units (GOS). Specifically, the three-dimensional data coding apparatus 100 encodes each of the plurality of second processing units (SPCs) in each first processing unit (GOS). Further, the three-dimensional data coding apparatus 100 encodes each of the plurality of third processing units (VLM) in each second processing unit (SPC).
- Here, when the first processing unit (GOS) to be processed is a closed GOS, the three-dimensional data coding apparatus 100 encodes the second processing unit (SPC) to be processed, which is included in the first processing unit (GOS) to be processed, with reference to another second processing unit (SPC) included in that same first processing unit (GOS). That is, the three-dimensional data coding apparatus 100 does not refer to a second processing unit (SPC) included in a first processing unit (GOS) different from the first processing unit (GOS) to be processed.
- On the other hand, when the first processing unit (GOS) to be processed is an open GOS, the second processing unit (SPC) to be processed, which is included in the first processing unit (GOS) to be processed, is encoded with reference to another second processing unit (SPC) included in that same first processing unit (GOS), or to a second processing unit (SPC) included in a first processing unit (GOS) different from the first processing unit (GOS) to be processed.
- Further, as the type of the second processing unit (SPC) to be processed, one of the following types is selected: a first type (I-SPC) that does not refer to any other second processing unit (SPC), a second type (P-SPC) that refers to one other second processing unit (SPC), and a third type (B-SPC) that refers to two other second processing units (SPC). The second processing unit (SPC) to be processed is then encoded according to the selected type.
- FIG. 8 is a block diagram of the three-dimensional data decoding device 200 according to the present embodiment.
- FIG. 9 is a flowchart showing an operation example of the three-dimensional data decoding device 200.
- the three-dimensional data decoding device 200 shown in FIG. 8 generates the decoded three-dimensional data 212 by decoding the encoded three-dimensional data 211.
- the coded three-dimensional data 211 is, for example, the coded three-dimensional data 112 generated by the three-dimensional data coding device 100.
- the three-dimensional data decoding device 200 includes an acquisition unit 201, a decoding start GOS determination unit 202, a decoding SPC determination unit 203, and a decoding unit 204.
- the acquisition unit 201 acquires the coded three-dimensional data 211 (S201).
- the decoding start GOS determination unit 202 determines the GOS to be decoded (S202). Specifically, the decoding start GOS determination unit 202 refers to the meta information stored in the coded three-dimensional data 211 or stored separately from the coded three-dimensional data, and determines, as the GOS to be decoded, the GOS including the SPC corresponding to the spatial position, object, or time from which decoding is to start.
- the decoding SPC determination unit 203 determines the type (I, P, B) of the SPC to be decoded in the GOS (S203). For example, the decoding SPC determination unit 203 determines whether (1) only the I-SPC is decoded, (2) the I-SPC and the P-SPC are decoded, or (3) all types are decoded. If the type of SPC to be decoded is determined in advance, such as decoding all SPCs, this step may not be performed.
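The three decoding modes described above — I-SPC only, I-SPC and P-SPC, or all types — reduce to a filter over the SPC sequence. A minimal sketch with invented names:

```python
def spcs_to_decode(spc_types, mode):
    """Return the indices of SPCs to decode.
    mode 1: only I-SPC; mode 2: I-SPC and P-SPC; mode 3: all types."""
    allowed = {1: {"I"}, 2: {"I", "P"}, 3: {"I", "P", "B"}}[mode]
    return [i for i, t in enumerate(spc_types) if t in allowed]
```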
- the decoding unit 204 acquires, in the coded three-dimensional data 211, the address position at which the first SPC in the decoding order (same as the coding order) in the GOS starts, acquires the coded data of the first SPC from that address position, and sequentially decodes each SPC starting from the first SPC (S204).
- the address position is stored in meta information or the like.
- the three-dimensional data decoding device 200 generates the decoded three-dimensional data 212. Specifically, the three-dimensional data decoding device 200 decodes each item of the coded three-dimensional data 211 of the first processing units (GOS), which are random access units and are each associated with three-dimensional coordinates. As a result, the decoded three-dimensional data 212 of the first processing units (GOS) is generated. More specifically, the three-dimensional data decoding apparatus 200 decodes each of the plurality of second processing units (SPC) in each first processing unit (GOS). Further, the three-dimensional data decoding apparatus 200 decodes each of the plurality of third processing units (VLM) in each second processing unit (SPC).
- This meta information is generated by the three-dimensional data coding apparatus 100 and is included in the coded three-dimensional data 112 (211).
- FIG. 10 is a diagram showing an example of a table included in the meta information. It should be noted that not all the tables shown in FIG. 10 need to be used, and at least one table may be used.
- the address may be an address in a logical format or a physical address of an HDD or a memory.
- information that identifies a file segment may be used instead of the address.
- a file segment is a unit in which one or more GOS or the like is segmented.
- a plurality of GOSs to which the object belongs may be indicated in the object-GOS table. If the plurality of GOSs are closed GOSs, the encoding device and the decoding device can perform coding or decoding in parallel. On the other hand, if the plurality of GOSs are open GOSs, the compression efficiency can be further improved by the plurality of GOSs referring to each other.
- the three-dimensional data encoding device 100 extracts feature points peculiar to an object from a three-dimensional point cloud or the like when coding the world, detects the object based on the feature points, and can set the detected object as a random access point.
- the three-dimensional data coding apparatus 100 generates first information indicating the plurality of first processing units (GOS) and the three-dimensional coordinates associated with each of the plurality of first processing units (GOS). The coded three-dimensional data 112 (211) includes this first information. The first information further indicates at least one of an object, a time, and a data storage destination associated with each of the plurality of first processing units (GOS).
- the three-dimensional data decoding device 200 acquires the first information from the coded three-dimensional data 211, uses the first information to specify the coded three-dimensional data 211 of the first processing unit corresponding to the specified three-dimensional coordinates, object, or time, and decodes that coded three-dimensional data 211.
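As a sketch of how the first information might be used, a coordinate-to-GOS table lets the decoder jump straight to the coded data of the relevant first processing unit. The tables and byte addresses below are invented for illustration:

```python
# Hypothetical first-information tables: three-dimensional coordinates
# mapped to a GOS index, and each GOS mapped to the byte address where
# its coded data starts in the stream.
coord_to_gos = {(0, 0, 0): 0, (100, 0, 0): 1}
gos_to_address = {0: 0x1000, 1: 0x2400}

def decode_start_address(coord):
    """Byte address at which decoding of the GOS containing the
    specified coordinates should start."""
    return gos_to_address[coord_to_gos[coord]]
```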
- the three-dimensional data encoding device 100 may generate and store the following meta information. Further, the three-dimensional data decoding device 200 may use this meta information at the time of decoding.
- a profile is defined according to the application, and information indicating the profile may be included in the meta information.
- profiles for urban or suburban or flying objects are defined, each defining a maximum or minimum size of a world, SPC or VLM.
- for urban areas, the minimum size of the VLM is set smaller because more detailed information is required than for suburbs.
- the meta information may include a tag value indicating the type of object.
- this tag value is associated with the VLM, SPC, or GOS that makes up the object. For example, the tag value "0" may indicate "person", the tag value "1" may indicate "car", the tag value "2" may indicate "traffic light", and so on, with a tag value assigned to each object type.
- a tag value indicating the size or a property such as a dynamic object or a static object may be used.
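The tag-value convention in the example above amounts to a small lookup table. Only the three assignments listed in the text are from the source; anything beyond them would be illustrative:

```python
# Tag values following the example in the text:
# 0 = person, 1 = car, 2 = traffic light.
TAG_TO_OBJECT = {0: "person", 1: "car", 2: "traffic light"}

def object_type(tag: int) -> str:
    """Resolve a tag value to its object type, if known."""
    return TAG_TO_OBJECT.get(tag, "unknown")
```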
- the meta information may include information indicating the range of the spatial area occupied by the world.
- the meta information may store the size of the SPC or VXL as the header information common to a plurality of SPCs such as the entire stream of encoded data or the SPC in the GOS.
- the meta information may include identification information of the distance sensor, camera, or the like used for generating the point cloud, or information indicating the position accuracy of the points in the point cloud.
- the meta information may include information indicating whether the world is composed of only static objects or includes dynamic objects.
- the encoding device or decoding device may encode or decode two or more SPCs or GOSs different from each other in parallel.
- the GOS to be encoded or decoded in parallel can be determined based on meta information indicating the spatial position of the GOS or the like.
- the encoding device or decoding device may encode or decode the GOS or SPC contained in a space specified based on GPS, route information, zoom magnification, or the like.
- the decoding device may perform decoding in order from the space closest to the self-position or the traveling path.
- the coding device or decoding device may encode or decode a space far from its own position or travel path with a lower priority than a space closer to it.
- lowering the priority means, for example, lowering the processing order, lowering the resolution (processing by thinning), or lowering the image quality (increasing the coding efficiency, for example by increasing the quantization step).
- the decoding device may decode only the lower layer.
- the decoding device may preferentially decode from the lower layer according to the zoom magnification of the map or the application.
- the coding device or decoding device may encode or decode at a reduced resolution, except for the area within a specific height from the road surface (the recognition area).
- the coding device may individually encode the point clouds that express the indoor and outdoor spatial shapes. For example, by separating the GOS representing indoors (indoor GOS) from the GOS representing outdoors (outdoor GOS), the decoding device can select the GOS to be decoded according to the viewpoint position when using the coded data.
- the coding device may encode the indoor GOS and the outdoor GOS having similar coordinates so as to be adjacent to each other in the coding stream.
- the encoding device associates both identifiers and stores information indicating the associated identifiers in the coding stream or in the meta information separately stored.
- the decoding device can distinguish between the indoor GOS and the outdoor GOS whose coordinates are close to each other by referring to the information in the meta information.
- the coding device may switch the size of GOS or SPC between the indoor GOS and the outdoor GOS. For example, the coding device sets the size of the GOS indoors to be smaller than that outdoors. Further, the coding device may change the accuracy when extracting the feature points from the point cloud, the accuracy of object detection, and the like between the indoor GOS and the outdoor GOS.
- the encoding device may add information to the encoding data for the decoding device to distinguish the dynamic object from the static object and display it.
- the decoding device can display the dynamic object together with the red frame or the explanatory characters.
- the decoding device may display only a red frame or explanatory characters instead of the dynamic object.
- the decoding device may display a finer object type. For example, a red frame may be used for cars and a yellow frame may be used for humans.
- the encoding device or the decoding device may decide whether to encode or decode the dynamic objects and the static objects as different SPCs or GOSs according to the frequency of appearance of dynamic objects or the ratio of dynamic objects to static objects. For example, when the appearance frequency or ratio of dynamic objects exceeds a threshold value, an SPC or GOS in which dynamic objects and static objects coexist is allowed, and when the appearance frequency or ratio of dynamic objects does not exceed the threshold value, an SPC or GOS in which dynamic and static objects are mixed is not allowed.
- when detecting a dynamic object from the two-dimensional image information of a camera instead of from the point cloud, the encoding device may separately acquire information for identifying the detection result (a frame, characters, or the like) and the object position, and encode this information as part of the three-dimensional coded data. In this case, the decoding device superimposes and displays the auxiliary information (frame or characters) indicating the dynamic object on the decoding result of the static object.
- the encoding device may change the density of VXL or VLM in the SPC according to the complexity of the shape of the static object and the like. For example, the encoding device sets VXL or VLM more densely as the shape of the static object becomes more complex. Further, the coding apparatus may determine the quantization step in quantizing the spatial position or the color information according to the density of VXL or VLM. For example, the coding device sets the quantization step smaller as the VXL or VLM becomes denser.
- the coding device or decoding device encodes or decodes the space in units of spaces having coordinate information.
- a volume contains a voxel, which is the smallest unit to which location information is associated.
- the coding device and the decoding device perform encoding or decoding of arbitrary elements using a table in which each element of spatial information, including coordinates, objects, time, and the like, is associated with a GOS, or a table in which such elements are associated with each other.
- the decoding device determines the coordinates using the values of the selected elements, identifies the volume, voxel or space from the coordinates, and decodes the space including the volume or voxel or the specified space.
- the coding device determines a volume, voxel or space that can be selected by an element by feature point extraction or object recognition, and encodes it as a randomly accessible volume, voxel or space.
- a space is classified into three types: an I-SPC that can be encoded or decoded by itself, a P-SPC that is encoded or decoded by referring to any one processed space, and a B-SPC that is encoded or decoded by referring to any two processed spaces.
- One or more volumes correspond to static or dynamic objects.
- the space containing the static object and the space containing the dynamic object are encoded or decoded as different GOS. That is, the SPC including the static object and the SPC including the dynamic object are assigned to different GOS.
- dynamic objects are encoded or decoded for each object and are associated with one or more spaces containing static objects. That is, a plurality of dynamic objects are individually encoded, and the obtained coded data of the plurality of dynamic objects is associated with an SPC containing a static object.
- the coding device and the decoding device raise the priority of the I-SPC in the GOS to perform coding or decoding.
- the coding apparatus performs coding so that the deterioration of the I-SPC is reduced (so that the original three-dimensional data is reproduced more faithfully after decoding).
- the decoding device decodes only the I-SPC, for example.
- the coding device may perform coding by changing the frequency of using the I-SPC according to the sparseness or the number (quantity) of objects in the world. That is, the encoding device changes the frequency of selecting the I-SPC according to the number or density of objects contained in the three-dimensional data. For example, the encoder increases the frequency of using the I-SPC as the objects in the world become denser.
- the encoding device sets a random access point in GOS units, and stores information indicating a spatial area corresponding to GOS in the header information.
- the encoding device uses, for example, a default value as the space size of GOS.
- the coding device may change the size of the GOS according to the number (quantity) or density of objects or dynamic objects. For example, the encoding device reduces the spatial size of the GOS as the objects or dynamic objects become denser or larger in number.
- the space or volume includes a feature point group derived by using the information obtained by a sensor such as a depth sensor, a gyro, or a camera.
- the coordinates of the feature points are set at the center position of the voxel.
- high accuracy of position information can be realized by subdividing voxels.
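Snapping a feature point to its voxel center, as described, bounds the position error by half the voxel size per axis, so subdividing voxels directly improves accuracy. A minimal sketch; the snapping rule and `voxel_size` parameter are assumptions for illustration:

```python
def voxel_center(point, voxel_size):
    """Snap a point to the center of the voxel containing it; the
    quantization error is at most voxel_size / 2 per axis, so smaller
    (subdivided) voxels give higher position accuracy."""
    return tuple((int(c // voxel_size) + 0.5) * voxel_size for c in point)
```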
- the feature point group is derived using a plurality of pictures.
- the plurality of pictures have at least two types of time information: the actual time information, and time information that is the same across the plurality of pictures associated with the space (for example, the coded time used for rate control or the like).
- encoding or decoding is performed in GOS units including one or more spaces.
- the coding device and the decoding device refer to the space in the processed GOS and predict the P space or the B space in the GOS to be processed.
- the encoding device and the decoding device do not refer to different GOSs, and use the processed space in the GOS to be processed to predict the P space or B space in the GOS to be processed.
- the encoding device and the decoding device transmit or receive a coded stream in units of worlds including one or more GOS.
- the GOS has a layer structure in at least one direction in the world, and the encoding device and the decoding device encode or decode from the lower layer.
- a randomly accessible GOS belongs to the lowest layer.
- the GOS belonging to the upper layer refers to the GOS belonging to the same layer or lower. That is, the GOS is spatially divided in predetermined directions and includes a plurality of layers, each containing one or more SPCs.
- the encoding device and the decoding device encode or decode each SPC with reference to the SPC included in the same layer as the SPC or a layer lower than the SPC.
- the encoding device and the decoding device continuously encode or decode the GOS within the world unit including the plurality of GOS.
- the encoding device and the decoding device write or read information indicating the order (direction) of coding or decoding as metadata. That is, the coded data includes information indicating the coding order of the plurality of GOS.
- the encoding device and the decoding device encode or decode two or more spaces or GOS different from each other in parallel.
- the coding device and the decoding device encode or decode the space or GOS spatial information (coordinates, size, etc.).
- the encoding device and the decoding device encode or decode a space or GOS included in a specific space specified based on external information regarding its own position and/or area size, such as GPS, route information, or magnification.
- the coding device or decoding device encodes or decodes the space far from its own position with a lower priority than the space near it.
- the coding device sets one direction in which the world is located according to the magnification or the application, and encodes the GOS having a layer structure in that direction. Further, the decoding device preferentially decodes the GOS having the layer structure in one direction of the world set according to the magnification or the application from the lower layer.
- the coding device changes the feature point extraction included in the space, the accuracy of object recognition, the space area size, etc. between indoors and outdoors.
- the coding device and the decoding device encode or decode the indoor GOS and the outdoor GOS having similar coordinates adjacent to each other in the world, and encode or decode these identifiers in association with each other.
- a three-dimensional data coding method and a three-dimensional data coding device that provide a function of transmitting and receiving only the information necessary according to the application within the coded data of a three-dimensional point cloud, and a three-dimensional data decoding method and a three-dimensional data decoding apparatus for decoding that coded data, will be described.
- a voxel (VXL) having a certain amount of features or more is defined as a feature voxel (FVXL), and a world (WLD) composed of FVXLs is defined as a sparse world (SWLD).
- FIG. 11 is a diagram showing a configuration example of the sparse world and the world.
- the SWLD includes FGOS, which is a GOS composed of FVXL, FSPC, which is an SPC composed of FVXL, and FVLM, which is a VLM composed of FVXL.
- the data structure and prediction structure of FGOS, FSPC and FVLM may be the same as those of GOS, SPC and VLM.
- the feature amount expresses the three-dimensional position information of a VXL or the visible light information at the VXL position, and is in particular a feature amount detected at corners, edges, and the like of a three-dimensional object. Specifically, this feature amount is one of the three-dimensional feature amounts or visible light feature amounts described below, but any feature amount that represents the position, luminance, color information, or the like of a VXL may be used.
- a SHOT feature amount (Signature of Histograms of OrienTations)
- a PFH feature amount (Point Feature Histograms)
- a PPF feature amount (Point Pair Feature)
- the SHOT feature amount is obtained by dividing the periphery of a VXL into regions, calculating the inner products of the normal vector at the reference point and the normal vectors in each divided region, and forming a histogram. This SHOT feature quantity has the characteristics of a high number of dimensions and high feature expressive power.
- the PFH feature amount is obtained by selecting a large number of two-point pairs in the vicinity of a VXL, calculating normal vectors and the like from each pair, and forming a histogram. Since the PFH feature quantity is a histogram feature, it is robust against some disturbances and has high feature expressive power.
- the PPF feature amount is calculated using normal vectors and the like for each pair of two VXLs. Since all VXLs are used for the PPF feature quantity, it is robust against occlusion.
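To make the pair-based descriptors above concrete, here is a toy, PFH-flavored feature: the angle between two normal vectors, accumulated into a small histogram over all point pairs. This is purely illustrative; real PFH/PPF descriptors use richer tuples of angles and distances:

```python
import math

def pair_angle(n1, n2):
    """Angle between two (unit) normal vectors, a toy stand-in for the
    angle components of PFH/PPF pair features."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety

def angle_histogram(normals, bins=4):
    """Histogram of pairwise normal angles over [0, pi)."""
    hist = [0] * bins
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            ang = pair_angle(normals[i], normals[j])
            hist[min(bins - 1, int(ang / math.pi * bins))] += 1
    return hist
```

As with PFH, a histogram like this is comparatively robust to disturbances affecting only a few pairs.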
- SIFT (Scale-Invariant Feature Transform)
- SURF (Speeded Up Robust Features)
- HOG (Histogram of Oriented Gradients)
- SWLD is generated by calculating the above-mentioned feature amount from each VXL of WLD and extracting FVXL.
- the SWLD may be updated every time the WLD is updated, or may be updated periodically after a lapse of a certain period of time regardless of the update timing of the WLD.
- SWLD may be generated for each feature amount. For example, SWLD1 based on the SHOT feature amount and SWLD2 based on the SIFT feature amount may be generated separately for each feature amount, and the SWLD may be used properly according to the application. Further, the calculated feature amount of each FVXL may be held in each FVXL as the feature amount information.
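The SWLD generation described above reduces to a threshold filter over per-voxel feature amounts. A minimal sketch with invented data; real feature amounts would be descriptors such as SHOT, PFH, or PPF:

```python
def extract_swld(wld_features, threshold):
    """Keep only voxels whose feature amount meets the threshold;
    these FVXLs make up the sparse world (SWLD)."""
    return {pos: f for pos, f in wld_features.items() if f >= threshold}
```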
- the read time from the hard disk and the bandwidth and transfer time at the time of network transfer can be suppressed by using the SWLD information instead of the WLD.
- the network bandwidth and transfer time can be suppressed by holding WLD and SWLD as map information in the server and switching the map information to be transmitted between WLD and SWLD in response to a request from the client.
- a specific example is shown below.
- FIGS. 12 and 13 are diagrams showing usage examples of SWLD and WLD.
- when the client 1, which is an in-vehicle device, needs map information for self-position estimation, the client 1 sends a request for acquisition of map data for self-position estimation to the server (S301).
- the server transmits SWLD to the client 1 in response to the acquisition request (S302).
- the client 1 makes a self-position determination using the received SWLD (S303).
- the client 1 acquires VXL information around the client 1 by various methods, such as a distance sensor such as a range finder, a stereo camera, or a combination of a plurality of monocular cameras, and estimates self-position information from the obtained VXL information and the SWLD. Here, the self-position information includes the three-dimensional position information and the orientation of the client 1.
- when the client 2, which is an in-vehicle device, needs map information for map drawing such as a three-dimensional map, the client 2 sends a request for acquisition of map data for map drawing to the server (S311).
- the server transmits the WLD to the client 2 in response to the acquisition request (S312).
- the client 2 draws a map using the received WLD (S313).
- the client 2 creates a rendered image using, for example, an image taken by itself with a visible light camera or the like and a WLD acquired from a server, and draws the created image on a screen of a car navigation system or the like.
- the server sends the SWLD to the client for applications that mainly require the features of each VXL, such as self-position estimation, and sends the WLD to the client when detailed VXL information is required, such as for map drawing. This makes it possible to send and receive map data efficiently.
- the client may determine by itself whether the SWLD or the WLD is necessary and request the server to transmit the SWLD or the WLD. Further, the server may determine whether to transmit the SWLD or the WLD according to the situation of the client or the network.
- FIG. 14 is a diagram showing an operation example in this case.
- the client accesses the server via the low-speed network (S321) and acquires the SWLD from the server as map information (S322).
- the client accesses the server via the high-speed network (S323) and acquires the WLD from the server (S324).
- the client can acquire appropriate map information according to the network bandwidth of the client.
- the client receives SWLD via LTE outdoors, and acquires WLD via Wi-Fi (registered trademark) when entering indoors such as a facility. This allows the client to obtain more detailed map information indoors.
- the client may request WLD or SWLD from the server according to the bandwidth of the network used by the client.
- the client may send information indicating the bandwidth of the network used by the client to the server, and the server may send data (WLD or SWLD) suitable for the client according to the information.
- the server may determine the network bandwidth of the client and transmit data (WLD or SWLD) suitable for the client.
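The server-side choice sketched in this scenario is a simple bandwidth test. The threshold value and names below are invented placeholders:

```python
def select_map_by_bandwidth(bandwidth_mbps, high_speed_mbps=50.0):
    """Send the full WLD only over a sufficiently fast network;
    otherwise fall back to the lighter SWLD."""
    return "WLD" if bandwidth_mbps >= high_speed_mbps else "SWLD"
```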
- FIG. 15 is a diagram showing an operation example in this case.
- when the client is moving at high speed (S331), the client receives the SWLD from the server (S332).
- otherwise, the client receives the WLD from the server (S334).
- the client can acquire map information suitable for the speed while suppressing the network bandwidth.
- the client can update the rough map information at an appropriate speed by receiving the SWLD with a small amount of data while traveling on the highway.
- the client can acquire more detailed map information by receiving the WLD while traveling on the general road.
- the client may request WLD or SWLD from the server according to its own moving speed.
- the client may send information indicating its own movement speed to the server, and the server may send data (WLD or SWLD) suitable for the client according to the information.
- the server may determine the moving speed of the client and transmit data (WLD or SWLD) suitable for the client.
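Similarly, the speed-based policy in this scenario can be sketched as follows. The speed threshold is illustrative, not from the source:

```python
def select_map_by_speed(speed_kmh, highway_kmh=80.0):
    """A fast-moving client gets the small SWLD so rough map updates
    keep up with its motion; a slow-moving client gets the detailed WLD."""
    return "SWLD" if speed_kmh >= highway_kmh else "WLD"
```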
- the client may first acquire the SWLD from the server and then acquire the WLD of an important area within it. For example, when acquiring map data, the client first acquires rough map information as SWLD, narrows down an area where many features such as buildings, signs, or people appear, and acquires the WLD of the narrowed-down area afterwards. As a result, the client can acquire detailed information of the required area while suppressing the amount of data received from the server.
- the server may create a separate SWLD for each object from the WLD, and the client may receive each according to the purpose.
- the server recognizes a person or a car in advance from the WLD and creates a person's SWLD and a car's SWLD.
- the client receives the SWLD of a person when it wants to acquire the information of the surrounding people, and the SWLD of the car when it wants to acquire the information of the car.
- the type of such SWLD may be distinguished by the information (flag or type, etc.) added to the header or the like.
- FIG. 16 is a block diagram of the three-dimensional data encoding device 400 according to the present embodiment.
- FIG. 17 is a flowchart of the three-dimensional data coding process by the three-dimensional data coding apparatus 400.
- the three-dimensional data coding device 400 shown in FIG. 16 encodes the input three-dimensional data 411 to generate coded three-dimensional data 413 and 414, which are coded streams.
- the coded three-dimensional data 413 is coded three-dimensional data corresponding to WLD
- the coded three-dimensional data 414 is coded three-dimensional data corresponding to SWLD.
- the three-dimensional data coding device 400 includes an acquisition unit 401, a coding region determination unit 402, a SWLD extraction unit 403, a WLD coding unit 404, and a SWLD coding unit 405.
- the acquisition unit 401 acquires the input three-dimensional data 411, which is the point cloud data in the three-dimensional space (S401).
- the coding area determination unit 402 determines the spatial area to be coded based on the spatial area in which the point cloud data exists (S402).
- the SWLD extraction unit 403 defines the spatial region to be encoded as WLD, and calculates the feature amount from each VXL included in WLD. Then, the SWLD extraction unit 403 extracts VXL having a feature amount equal to or higher than a predetermined threshold value, defines the extracted VXL as FVXL, and adds the FVXL to SWLD to generate the extracted three-dimensional data 412. (S403). That is, the extracted three-dimensional data 412 having the feature amount equal to or larger than the threshold value is extracted from the input three-dimensional data 411.
- the WLD coding unit 404 generates coded three-dimensional data 413 corresponding to WLD by encoding the input three-dimensional data 411 corresponding to WLD (S404). At this time, the WLD coding unit 404 adds information to the header of the coded three-dimensional data 413 to distinguish that the coded three-dimensional data 413 is a stream containing the WLD.
- the SWLD coding unit 405 generates the coded three-dimensional data 414 corresponding to SWLD by encoding the extracted three-dimensional data 412 corresponding to SWLD (S405). At this time, the SWLD coding unit 405 adds information to the header of the coded three-dimensional data 414 to distinguish that the coded three-dimensional data 414 is a stream containing the SWLD.
- the processing order of the process of generating the coded three-dimensional data 413 and the process of generating the coded three-dimensional data 414 may be reversed from the above. Moreover, a part or all of these processes may be performed in parallel.
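The S401-S405 flow can be condensed into a few lines. This is a toy sketch: feature amounts are plain numbers, and "encoding" merely wraps the data with a header tag:

```python
def encode_wld_and_swld(input_3d_data, threshold):
    """Extract the data whose feature amount meets the threshold (S403),
    then produce a WLD stream (S404) and an SWLD stream (S405), each
    distinguished by its header."""
    extracted = {p: f for p, f in input_3d_data.items() if f >= threshold}
    coded_wld = {"header": "WLD", "payload": input_3d_data}
    coded_swld = {"header": "SWLD", "payload": extracted}
    return coded_wld, coded_swld
```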
- for example, to distinguish the two streams, a parameter "world_type" is defined.
- a specific flag may be included in one of the coded three-dimensional data 413 and 414.
- for example, a flag indicating that the stream contains the SWLD may be included in the coded three-dimensional data 414. In this case, the decoding device can determine whether a stream contains the WLD or the SWLD depending on the presence or absence of the flag.
- the coding method used by the WLD coding unit 404 when coding the WLD and the coding method used by the SWLD coding unit 405 when coding the SWLD may be different.
- for example, in the coding method used for SWLD, inter-prediction may be prioritized among intra-prediction and inter-prediction, compared with the coding method used for WLD.
- the three-dimensional position expression method may be different between the coding method used for SWLD and the coding method used for WLD.
- for example, in SWLD, the three-dimensional position of an FVXL may be expressed by three-dimensional coordinates, while in WLD the three-dimensional position may be expressed by an octree described later, or vice versa.
- the SWLD coding unit 405 encodes so that the data size of the SWLD coded three-dimensional data 414 is smaller than the data size of the WLD coded three-dimensional data 413.
- in SWLD, the correlation between data items may be lower than in WLD, so the coding efficiency may be lowered and the data size of the coded three-dimensional data 414 may become larger than the data size of the WLD coded three-dimensional data 413. Therefore, when the data size of the obtained coded three-dimensional data 414 is larger than the data size of the WLD coded three-dimensional data 413, the SWLD coding unit 405 performs recoding to regenerate coded three-dimensional data 414 with a reduced data size.
- for example, the SWLD extraction unit 403 regenerates the extracted three-dimensional data 412 with a reduced number of extracted feature points, and the SWLD coding unit 405 encodes the regenerated extracted three-dimensional data 412.
- the degree of quantization in the SWLD coding unit 405 may be coarser.
- the degree of quantization can be coarsened by rounding the data in the lowest layer.
- when the data size of the SWLD coded three-dimensional data 414 cannot be made smaller than the data size of the WLD coded three-dimensional data 413, the SWLD coding unit 405 may not generate the SWLD coded three-dimensional data 414.
- the WLD coded 3D data 413 may be copied to the SWLD coded 3D data 414. That is, the WLD coded three-dimensional data 413 may be used as it is as the SWLD coded three-dimensional data 414.
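The fallback logic above — recode until the SWLD stream is smaller than the WLD stream, and copy the WLD stream if recoding stops helping — might look like this. The sizes and the `recode` callback are stand-ins, not the patent's method:

```python
def finalize_swld_size(wld_size, swld_size, recode):
    """Recode (e.g. with fewer feature points or coarser quantization)
    while the SWLD stream is not smaller than the WLD stream; if
    recoding no longer shrinks the data, reuse the WLD stream size."""
    while swld_size >= wld_size:
        new_size = recode(swld_size)
        if new_size >= swld_size:   # recoding no longer helps
            return wld_size         # copy the WLD stream as-is
        swld_size = new_size
    return swld_size
```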
- FIG. 18 is a block diagram of the three-dimensional data decoding device 500 according to the present embodiment.
- FIG. 19 is a flowchart of the three-dimensional data decoding process by the three-dimensional data decoding apparatus 500.
- the three-dimensional data decoding device 500 shown in FIG. 18 generates the decoded three-dimensional data 512 or 513 by decoding the encoded three-dimensional data 511.
- the coded three-dimensional data 511 is, for example, the coded three-dimensional data 413 or 414 generated by the three-dimensional data coding device 400.
- the three-dimensional data decoding device 500 includes an acquisition unit 501, a header analysis unit 502, a WLD decoding unit 503, and a SWLD decoding unit 504.
- the acquisition unit 501 acquires the coded three-dimensional data 511 (S501).
- the header analysis unit 502 analyzes the header of the coded three-dimensional data 511 and determines whether the coded three-dimensional data 511 is a stream containing WLD or a stream containing SWLD (S502). For example, the determination is made by referring to the above-mentioned world_type parameter.
- when the coded three-dimensional data 511 is a stream containing WLD (Yes in S503), the WLD decoding unit 503 generates the WLD decoded three-dimensional data 512 by decoding the coded three-dimensional data 511 (S504). On the other hand, when the coded three-dimensional data 511 is a stream containing SWLD (No in S503), the SWLD decoding unit 504 generates the SWLD decoded three-dimensional data 513 by decoding the coded three-dimensional data 511 (S505).
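A minimal sketch of steps S502 to S505, assuming (purely for illustration) that the world_type field occupies the first byte of the stream header; the actual header layout and field values are not specified here.

```python
from enum import IntEnum

class WorldType(IntEnum):
    # hypothetical values; the actual world_type encoding is not given here
    WLD = 0
    SWLD = 1

def decode_stream(bitstream, decode_wld, decode_swld):
    """Dispatch decoding based on a world_type field assumed to be
    the first byte of the header (illustrative assumption)."""
    world_type = WorldType(bitstream[0])  # S502: analyze the header
    payload = bitstream[1:]
    if world_type == WorldType.WLD:
        return decode_wld(payload)   # S504: WLD decoded data 512
    else:
        return decode_swld(payload)  # S505: SWLD decoded data 513
```

The two decoder callbacks stand in for the WLD decoding unit 503 and the SWLD decoding unit 504, which may use different decoding methods.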
- the decoding method used by the WLD decoding unit 503 when decoding the WLD and the decoding method used by the SWLD decoding unit 504 when decoding the SWLD may be different.
- in the decoding method used for SWLD, inter-prediction may be given higher priority between intra-prediction and inter-prediction than in the decoding method used for WLD.
- the three-dimensional position expression method may be different between the decoding method used for SWLD and the decoding method used for WLD.
- for example, in SWLD, the three-dimensional position of an FVXL may be expressed by three-dimensional coordinates, while in WLD the three-dimensional position may be expressed by an octree described later, or vice versa.
- FIG. 20 is a diagram showing an example of VXL of WLD.
- FIG. 21 is a diagram showing an octree structure of the WLD shown in FIG. 20.
- the WLD shown in FIG. 20 includes VXL1 to VXL3, which are VXLs containing point clouds (hereinafter, effective VXLs).
- the octree structure is composed of nodes and leaves. Each node has a maximum of eight nodes or leaves. Each leaf has VXL information.
- the leaves 1, 2 and 3 represent VXL1, VXL2 and VXL3 shown in FIG. 20, respectively.
- each node and leaf correspond to a three-dimensional position.
- node 1 corresponds to the entire block shown in FIG. 20.
- the block corresponding to node 1 is divided into eight blocks; among them, blocks containing an effective VXL are set as nodes, and the other blocks are set as leaves.
- the block corresponding to each node is further divided into eight nodes or leaves, and this process is repeated for each level of the tree hierarchy. In addition, all blocks in the lowest layer are set as leaves.
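The recursive subdivision described above can be sketched as follows; the child ordering and the representation of nodes and leaves are illustrative assumptions.

```python
def build_octree(voxels, origin, size):
    """Recursively build an octree over a cubic block.
    `voxels` is a set of occupied (x, y, z) integer voxel coordinates,
    `origin` the block's minimum corner, `size` its edge length (a power
    of two). Returns 1 for an occupied leaf, 0 for an empty leaf, or a
    list of eight children for an internal node (sketch only)."""
    inside = {v for v in voxels
              if all(origin[i] <= v[i] < origin[i] + size for i in range(3))}
    if not inside:
        return 0              # block with no effective VXL -> empty leaf
    if size == 1:
        return 1              # lowest layer -> leaf holding VXL information
    half = size // 2
    children = []
    for dz in (0, half):      # subdivide into eight sub-blocks
        for dy in (0, half):
            for dx in (0, half):
                child_origin = (origin[0] + dx, origin[1] + dy, origin[2] + dz)
                children.append(build_octree(inside, child_origin, half))
    return children
```

Coarsening the quantization as mentioned earlier would correspond to stopping this recursion one level above `size == 1` and rounding the occupancy of the lowest layer.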
- FIG. 22 is a diagram showing an example of SWLD generated from the WLD shown in FIG. 20.
- VXL1 and VXL2 shown in FIG. 20 are determined to be FVXL1 and FVXL2 and are added to the SWLD.
- VXL3 is not determined to be FVXL and is not included in SWLD.
- FIG. 23 is a diagram showing an octree structure of the SWLD shown in FIG. 22. In the octree structure shown in FIG. 23, the leaf 3 corresponding to VXL3 shown in FIG. 21 is deleted. As a result, the node 3 shown in FIG. 21 no longer has an effective VXL and is changed to a leaf.
- the number of leaves of SWLD is generally smaller than the number of leaves of WLD, and the coded three-dimensional data of SWLD is also smaller than the coded three-dimensional data of WLD.
- for example, a client such as an in-vehicle device receives the SWLD from the server when performing self-position estimation and performs self-position estimation using the SWLD; when performing obstacle detection, the client may perform obstacle detection based on surrounding three-dimensional information acquired by itself using various means such as a range finder (distance sensor), a stereo camera, or a combination of a plurality of monocular cameras.
- the server may hold a subsample world (subWLD) that subsamples the WLD and send SWLDs and subWLDs to the client for static obstacle detection.
- the server may generate a mesh from the WLD and hold it in advance as a mesh world (MWLD). For example, the client receives the MWLD when it needs coarse 3D drawing and the WLD when it needs detailed 3D drawing. As a result, the network bandwidth can be suppressed.
- although the server sets each VXL whose feature amount is equal to or higher than the threshold value as an FVXL, FVXLs may also be determined by a different method. For example, the server may determine that VXLs, VLMs, SPCs, or GOSs constituting a traffic signal or an intersection are necessary for self-position estimation, driving assistance, automatic driving, etc., and include them in the SWLD as FVXL, FVLM, FSPC, or FGOS. Further, the above determination may be made manually. FVXLs or the like obtained by these methods may be added to the FVXLs or the like set based on the feature amount. That is, the SWLD extraction unit 403 may further extract data corresponding to an object having a predetermined attribute from the input three-dimensional data 411 as the extracted three-dimensional data 412.
- the server may separately hold FVXLs of a traffic signal, an intersection, or the like, which are necessary for self-position estimation, driving assistance, automatic driving, etc., as an upper layer (for example, a lane world) of the SWLD.
- the server may also add an attribute to VXL in the WLD for each random access unit or a predetermined unit.
- the attribute includes, for example, information indicating whether it is necessary or unnecessary for self-position estimation, or information indicating whether it is important as traffic information such as a signal or an intersection.
- the attribute may include a correspondence relationship with features (intersections, roads, etc.) in lane information (GDF: Geographic Data Files, etc.).
- the following method may be used as a method for updating WLD or SWLD.
- Update information indicating changes in people, construction, or rows of trees (for trucks) is uploaded to the server as point clouds or metadata.
- the server updates the WLD based on the upload, and then updates the SWLD with the updated WLD.
- the client may send the 3D information generated by itself to the server together with the update notification.
- the server uses WLD to update the SWLD. If the SWLD is not updated, the server determines that the WLD itself is out of date.
- in the above, information for distinguishing WLD from SWLD is added as header information of the coded stream; however, when there are many kinds of worlds, such as a mesh world or a lane world, information for distinguishing them may be added to the header information. Further, when there are many SWLDs having different feature amounts, information for distinguishing them may be added to the header information.
- although the SWLD is composed of FVXLs, it may also include VXLs that are not determined to be FVXLs.
- the SWLD may include an adjacent VXL used in calculating the features of the FVXL.
- the client can calculate the feature amount of the FVXL when the SWLD is received.
- the SWLD may include information for distinguishing whether each VXL is FVXL or VXL.
- as described above, the three-dimensional data encoding device 400 extracts, from the input three-dimensional data 411 (first three-dimensional data), the extracted three-dimensional data 412 (second three-dimensional data) whose feature amount is equal to or larger than the threshold value, and generates the coded three-dimensional data 414 (first coded three-dimensional data) by encoding the extracted three-dimensional data 412.
- the three-dimensional data coding device 400 generates coded three-dimensional data 414 in which data having a feature amount equal to or larger than a threshold value is encoded.
- the amount of data can be reduced as compared with the case where the input three-dimensional data 411 is encoded as it is. Therefore, the three-dimensional data encoding device 400 can reduce the amount of data to be transmitted.
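The extraction step that makes this data reduction possible can be sketched minimally as follows, with `feature_of` standing in for any three-dimensional feature computation (an assumption; the actual feature amount used is not specified in this passage).

```python
def extract_swld(input_points, feature_of, threshold):
    """Sketch of the SWLD extraction: from the input three-dimensional
    data (the WLD), keep only the points whose feature amount is equal
    to or larger than the threshold. These become the FVXL candidates."""
    return [p for p in input_points if feature_of(p) >= threshold]
```

Raising the threshold shrinks the extracted data and hence the coded three-dimensional data 414, at the cost of fewer feature points for matching.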
- the three-dimensional data coding device 400 further encodes the input three-dimensional data 411 to generate coded three-dimensional data 413 (second coded three-dimensional data).
- the three-dimensional data coding apparatus 400 can selectively transmit the coded three-dimensional data 413 and the coded three-dimensional data 414 according to, for example, the intended use.
- the extracted three-dimensional data 412 is encoded by the first coding method
- the input three-dimensional data 411 is encoded by the second coding method different from the first coding method.
- the three-dimensional data coding apparatus 400 can use a coding method suitable for each of the input three-dimensional data 411 and the extracted three-dimensional data 412.
- for example, in the first coding method, inter-prediction is given higher priority between intra-prediction and inter-prediction than in the second coding method.
- the three-dimensional data encoding device 400 can raise the priority of inter-prediction with respect to the extracted three-dimensional data 412 in which the correlation between adjacent data tends to be low.
- the three-dimensional position representation method differs between the first coding method and the second coding method.
- for example, in the second coding method, the three-dimensional position is represented by an octree, while in the first coding method, the three-dimensional position is represented by three-dimensional coordinates.
- the three-dimensional data encoding device 400 can use a more suitable three-dimensional position representation method for three-dimensional data having a different number of data (number of VXL or FVXL).
- at least one of the coded three-dimensional data 413 and 414 includes an identifier indicating whether the data is coded three-dimensional data obtained by encoding the input three-dimensional data 411 or coded three-dimensional data obtained by encoding a part of the input three-dimensional data 411. That is, the identifier indicates whether the coded three-dimensional data is the WLD coded three-dimensional data 413 or the SWLD coded three-dimensional data 414.
- the decoding device can easily determine whether the acquired coded three-dimensional data is the coded three-dimensional data 413 or the coded three-dimensional data 414.
- the three-dimensional data coding apparatus 400 encodes the extracted three-dimensional data 412 so that the data amount of the coded three-dimensional data 414 is smaller than the data amount of the coded three-dimensional data 413.
- the three-dimensional data coding apparatus 400 can make the amount of data of the coded three-dimensional data 414 smaller than the amount of data of the coded three-dimensional data 413.
- the three-dimensional data encoding device 400 further extracts data corresponding to an object having a predetermined attribute from the input three-dimensional data 411 as the extraction three-dimensional data 412.
- an object having a predetermined attribute is an object necessary for self-position estimation, driving assistance, automatic driving, or the like, such as a signal or an intersection.
- the three-dimensional data coding device 400 can generate the coded three-dimensional data 414 including the data required by the decoding device.
- the three-dimensional data encoding device 400 (server) further transmits one of the encoded three-dimensional data 413 and 414 to the client according to the state of the client.
- the three-dimensional data encoding device 400 can transmit appropriate data according to the state of the client.
- the client status includes the client communication status (for example, network bandwidth) or the client movement speed.
- the three-dimensional data encoding device 400 further transmits one of the encoded three-dimensional data 413 and 414 to the client in response to the request of the client.
- the three-dimensional data encoding device 400 can transmit appropriate data according to the request of the client.
- the three-dimensional data decoding device 500 decodes the coded three-dimensional data 413 or 414 generated by the three-dimensional data coding device 400.
- the three-dimensional data decoding device 500 decodes, by a first decoding method, the coded three-dimensional data 414 obtained by encoding the extracted three-dimensional data 412 whose feature amount extracted from the input three-dimensional data 411 is equal to or greater than the threshold value. Further, the three-dimensional data decoding device 500 decodes, by a second decoding method different from the first decoding method, the coded three-dimensional data 413 obtained by encoding the input three-dimensional data 411.
- according to this, the three-dimensional data decoding device 500 can selectively receive the coded three-dimensional data 414, in which data having a feature amount equal to or larger than the threshold value is encoded, and the coded three-dimensional data 413, according to, for example, the intended use.
- the three-dimensional data decoding device 500 can reduce the amount of data to be transmitted.
- the three-dimensional data decoding apparatus 500 can use a decoding method suitable for the input three-dimensional data 411 and the extracted three-dimensional data 412, respectively.
- for example, in the first decoding method, inter-prediction is given higher priority between intra-prediction and inter-prediction than in the second decoding method.
- the three-dimensional data decoding device 500 can raise the priority of inter-prediction for the extracted three-dimensional data in which the correlation between adjacent data tends to be low.
- the three-dimensional position representation method differs between the first decoding method and the second decoding method.
- for example, in the second decoding method, the three-dimensional position is represented by an octree, while in the first decoding method, the three-dimensional position is represented by three-dimensional coordinates.
- the three-dimensional data decoding apparatus 500 can use a more suitable three-dimensional position expression method for three-dimensional data having different numbers of data (number of VXL or FVXL).
- at least one of the coded three-dimensional data 413 and 414 includes an identifier indicating whether the data is coded three-dimensional data obtained by encoding the input three-dimensional data 411 or coded three-dimensional data obtained by encoding a part of the input three-dimensional data 411.
- the three-dimensional data decoding device 500 identifies the coded three-dimensional data 413 and 414 with reference to the identifier.
- the three-dimensional data decoding apparatus 500 can easily determine whether the acquired coded three-dimensional data is the coded three-dimensional data 413 or the coded three-dimensional data 414.
- the three-dimensional data decoding device 500 further notifies the server of the status of the client (three-dimensional data decoding device 500).
- the three-dimensional data decoding device 500 receives one of the coded three-dimensional data 413 and 414 transmitted from the server according to the state of the client.
- the three-dimensional data decoding device 500 can receive appropriate data according to the state of the client.
- the client status includes the client communication status (for example, network bandwidth) or the client movement speed.
- the three-dimensional data decoding device 500 further requests one of the coded three-dimensional data 413 and 414 from the server, and receives the requested one of the coded three-dimensional data 413 and 414 transmitted from the server in response to the request.
- the three-dimensional data decoding device 500 can receive appropriate data according to the application.
- FIG. 24 is a block diagram of the three-dimensional data creation device 620 according to the present embodiment.
- this three-dimensional data creation device 620 is included in, for example, the own vehicle, and creates denser third three-dimensional data 636 by synthesizing the received second three-dimensional data 635 with the first three-dimensional data 632 created by the three-dimensional data creation device 620.
- This three-dimensional data creation device 620 includes a three-dimensional data creation unit 621, a request range determination unit 622, a search unit 623, a reception unit 624, a decoding unit 625, and a synthesis unit 626.
- the three-dimensional data creation unit 621 creates the first three-dimensional data 632 using the sensor information 631 detected by the sensor provided in the own vehicle.
- the request range determination unit 622 determines the request range, which is the three-dimensional space range in which the data is insufficient in the created first three-dimensional data 632.
- the search unit 623 searches for peripheral vehicles that possess the three-dimensional data of the required range, and transmits the required range information 633 indicating the required range to the peripheral vehicles specified by the search.
- the receiving unit 624 receives the coded three-dimensional data 634, which is a coded stream of the required range, from the surrounding vehicles (S624).
- the search unit 623 may indiscriminately make a request to all the vehicles existing in the specific range and receive the coded three-dimensional data 634 from the other party who responded. Further, the search unit 623 may make a request not only to the vehicle but also to an object such as a traffic light or a sign and receive the coded three-dimensional data 634 from the object.
- the decoding unit 625 acquires the second three-dimensional data 635 by decoding the received coded three-dimensional data 634.
- the synthesizing unit 626 creates denser third three-dimensional data 636 by synthesizing the first three-dimensional data 632 and the second three-dimensional data 635.
- FIG. 25 is a block diagram of the three-dimensional data transmission device 640.
- the three-dimensional data transmission device 640 is included in, for example, the above-mentioned peripheral vehicle; it processes the fifth three-dimensional data 652 created by the peripheral vehicle into the sixth three-dimensional data 654 required by the own vehicle, generates the coded three-dimensional data 634 by encoding the sixth three-dimensional data 654, and transmits the coded three-dimensional data 634 to the own vehicle.
- the three-dimensional data transmission device 640 includes a three-dimensional data creation unit 641, a reception unit 642, an extraction unit 643, an encoding unit 644, and a transmission unit 645.
- the three-dimensional data creation unit 641 creates the fifth three-dimensional data 652 using the sensor information 651 detected by the sensors provided in the surrounding vehicles.
- the receiving unit 642 receives the request range information 633 transmitted from the own vehicle.
- the extraction unit 643 processes the fifth three-dimensional data 652 into the sixth three-dimensional data 654 by extracting, from the fifth three-dimensional data 652, the three-dimensional data of the required range indicated by the request range information 633.
- the coding unit 644 encodes the sixth three-dimensional data 654 to generate the coded three-dimensional data 634 which is a coded stream. Then, the transmission unit 645 transmits the coded three-dimensional data 634 to the own vehicle.
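Assuming the request range information 633 specifies an axis-aligned box (an illustrative assumption; the actual representation of the required range is not given here), the extraction performed by the extraction unit 643 can be sketched as:

```python
def extract_required_range(points, range_min, range_max):
    """Sketch of extraction unit 643: keep only the points of the fifth
    three-dimensional data that fall inside the requested box, producing
    the sixth three-dimensional data to be encoded and transmitted."""
    return [p for p in points
            if all(range_min[i] <= p[i] <= range_max[i] for i in range(3))]
```

The resulting subset is then encoded by the coding unit 644 and sent by the transmission unit 645.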
- each vehicle may have the functions of both the three-dimensional data creation device 620 and the three-dimensional data transmission device 640.
- self-position estimation can be realized by matching a three-dimensional map with three-dimensional information around the own vehicle (hereinafter, own vehicle detection three-dimensional data) acquired by a sensor mounted on the own vehicle, such as a range finder (LiDAR, etc.) or a stereo camera, and estimating the position of the own vehicle in the three-dimensional map.
- like the HD maps advocated by HERE, the three-dimensional map may include not only a three-dimensional point cloud but also two-dimensional map data such as shape information of roads and intersections, or information that changes in real time, such as traffic jams and accidents.
- a three-dimensional map is composed of a plurality of layers such as three-dimensional data, two-dimensional data, and metadata that changes in real time, and the device can acquire or refer to only necessary data.
- the point cloud data may be the SWLD described above, or may include point cloud data that is not a feature point.
- data transmission / reception of the point cloud is performed on the basis of one or a plurality of random access units.
- the following method can be used as a matching method between the 3D map and the vehicle detection 3D data.
- the device compares the shapes of the point clouds with each other and determines that portions having high similarity between feature points are at the same position.
- the apparatus compares and matches the feature points constituting the SWLD with the three-dimensional feature points extracted from the own vehicle detection three-dimensional data.
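One possible sketch of this feature-point matching step; the point/descriptor representation, the distance measure, and the acceptance threshold are illustrative assumptions rather than the matching method defined in this disclosure.

```python
def match_feature_points(map_points, detected_points, max_dist=0.5):
    """For each feature point extracted from the own vehicle detection
    three-dimensional data, find the nearest SWLD feature point and
    accept the pair if it is close enough (illustrative sketch)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d in enumerate(detected_points):
        j, best = min(enumerate(dist(d, m) for m in map_points),
                      key=lambda t: t[1])
        if best <= max_dist:
            matches.append((i, j))  # detected point i <-> map point j
    return matches
```

The accepted pairs would then feed a pose estimation (e.g. a rigid alignment) to locate the own vehicle in the three-dimensional map.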
- FIG. 26 is a block diagram showing a configuration example of the three-dimensional information processing apparatus 700 according to the present embodiment.
- the three-dimensional information processing device 700 is mounted on a moving body such as an automobile. As shown in FIG. 26, the three-dimensional information processing device 700 includes a three-dimensional map acquisition unit 701, an own vehicle detection data acquisition unit 702, an abnormality case determination unit 703, a coping operation determination unit 704, and an operation control unit 705.
- the three-dimensional information processing device 700 may include a two-dimensional or one-dimensional sensor (not shown) for detecting structures or moving bodies around the own vehicle, such as a camera that acquires two-dimensional images or a sensor that acquires one-dimensional data using ultrasonic waves or a laser.
- the three-dimensional information processing device 700 may include a communication unit (not shown) for acquiring a three-dimensional map via a mobile communication network such as 4G or 5G, vehicle-to-vehicle communication, or road-to-vehicle communication.
- the three-dimensional map acquisition unit 701 acquires a three-dimensional map 711 in the vicinity of the traveling route.
- the three-dimensional map acquisition unit 701 acquires the three-dimensional map 711 through the mobile communication network, vehicle-to-vehicle communication, or road-to-vehicle communication.
- the vehicle detection data acquisition unit 702 acquires the vehicle detection three-dimensional data 712 based on the sensor information. For example, the vehicle detection data acquisition unit 702 generates the vehicle detection three-dimensional data 712 based on the sensor information acquired by the sensor included in the vehicle.
- the abnormality case determination unit 703 detects an abnormality case by performing a predetermined check on at least one of the acquired three-dimensional map 711 and the own vehicle detection three-dimensional data 712. That is, the abnormality case determination unit 703 determines whether at least one of the acquired three-dimensional map 711 and the own vehicle detection three-dimensional data 712 is abnormal.
- the coping action determination unit 704 determines the coping action for the abnormal case.
- the operation control unit 705 controls the operation of each processing unit necessary for executing the coping operation, such as the three-dimensional map acquisition unit 701.
- the three-dimensional information processing device 700 ends the process.
- the three-dimensional information processing device 700 uses the three-dimensional map 711 and the own vehicle detection three-dimensional data 712 to estimate the self-position of the vehicle having the three-dimensional information processing device 700. Next, the three-dimensional information processing device 700 automatically drives the vehicle using the result of self-position estimation.
- the three-dimensional information processing apparatus 700 acquires the map data (three-dimensional map 711) including the first three-dimensional position information via the communication path.
- the first three-dimensional position information is encoded in units of subspaces having three-dimensional coordinate information, and includes a plurality of random access units, each of which is an aggregate of one or more of the subspaces and can be decoded independently.
- the first three-dimensional position information is data (SWLD) in which feature points whose three-dimensional feature quantities are equal to or greater than a predetermined threshold value are encoded.
- the three-dimensional information processing device 700 generates the second three-dimensional position information (own vehicle detection three-dimensional data 712) from the information detected by the sensor.
- the three-dimensional information processing device 700 determines whether the first three-dimensional position information or the second three-dimensional position information is abnormal by performing an abnormality determination process on the first three-dimensional position information or the second three-dimensional position information.
- the three-dimensional information processing apparatus 700 determines a coping operation for the abnormality. Next, the three-dimensional information processing apparatus 700 executes the control necessary for carrying out the coping operation.
- the three-dimensional information processing apparatus 700 can detect an abnormality in the first three-dimensional position information or the second three-dimensional position information and take a coping action.
- FIG. 27 is a block diagram showing a configuration example of the three-dimensional data creation device 810 according to the present embodiment.
- the three-dimensional data creation device 810 is mounted on a vehicle, for example.
- the three-dimensional data creation device 810 transmits and receives three-dimensional data to and from an external traffic monitoring cloud, a vehicle in front or a following vehicle, and creates and stores three-dimensional data.
- the three-dimensional data creation device 810 includes a data reception unit 811, a communication unit 812, a reception control unit 813, a format conversion unit 814, a plurality of sensors 815, a three-dimensional data creation unit 816, a three-dimensional data synthesis unit 817, a three-dimensional data storage unit 818, a communication unit 819, a transmission control unit 820, a format conversion unit 821, and a data transmission unit 822.
- the data receiving unit 811 receives the three-dimensional data 831 from the traffic monitoring cloud or the vehicle in front.
- the three-dimensional data 831 includes information such as a point cloud, a visible light image, depth information, sensor position information, or speed information, which includes a region that cannot be detected by the sensor 815 of the own vehicle, for example.
- the communication unit 812 communicates with the traffic monitoring cloud or the vehicle in front, and transmits a data transmission request or the like to the traffic monitoring cloud or the vehicle in front.
- the reception control unit 813 exchanges information such as the corresponding format with the communication destination via the communication unit 812, and establishes communication with the communication destination.
- the format conversion unit 814 generates the three-dimensional data 832 by performing format conversion or the like on the three-dimensional data 831 received by the data receiving unit 811. Further, the format conversion unit 814 performs decompression or decoding processing when the three-dimensional data 831 is compressed or encoded.
- the plurality of sensors 815 are a group of sensors that acquire information outside the vehicle, such as a LiDAR, a visible light camera, or an infrared camera, and generate sensor information 833.
- the sensor information 833 is three-dimensional data such as a point cloud (point cloud data) when the sensor 815 is a laser sensor such as LiDAR.
- the number of sensors 815 does not have to be plural.
- the three-dimensional data creation unit 816 generates three-dimensional data 834 from the sensor information 833.
- the three-dimensional data 834 includes information such as point cloud, visible light image, depth information, sensor position information, and speed information.
- the three-dimensional data synthesizing unit 817 constructs three-dimensional data 835 including the space ahead of the vehicle in front, which cannot be detected by the sensor 815 of the own vehicle, by synthesizing the three-dimensional data 834 created based on the sensor information 833 of the own vehicle with the three-dimensional data 832 created by the traffic monitoring cloud or the vehicle in front.
- the three-dimensional data storage unit 818 stores the generated three-dimensional data 835 and the like.
- the communication unit 819 communicates with the traffic monitoring cloud or the following vehicle, and transmits a data transmission request or the like to the traffic monitoring cloud or the following vehicle.
- the transmission control unit 820 exchanges information such as compatible formats with the communication destination via the communication unit 819, and establishes communication with the communication destination. Further, the transmission control unit 820 determines a transmission area, which is the space of the three-dimensional data to be transmitted, based on the three-dimensional data construction information of the three-dimensional data 832 generated by the three-dimensional data synthesis unit 817 and the data transmission request from the communication destination.
- the transmission control unit 820 determines a transmission area including the space in front of the own vehicle that cannot be detected by the sensor of the following vehicle in response to a data transmission request from the traffic monitoring cloud or the following vehicle. Further, the transmission control unit 820 determines the transmission area by determining whether or not the space that can be transmitted or the space that has been transmitted is updated based on the three-dimensional data construction information. For example, the transmission control unit 820 determines the area specified in the data transmission request and the area in which the corresponding three-dimensional data 835 exists as the transmission area. Then, the transmission control unit 820 notifies the format conversion unit 821 of the format and the transmission area supported by the communication destination.
- the format conversion unit 821 generates the three-dimensional data 837 by converting the three-dimensional data 836 in the transmission area, among the three-dimensional data 835 stored in the three-dimensional data storage unit 818, into a format supported by the receiving side.
- the format conversion unit 821 may reduce the amount of data by compressing or encoding the three-dimensional data 837.
- the data transmission unit 822 transmits the three-dimensional data 837 to the traffic monitoring cloud or the following vehicle.
- the three-dimensional data 837 includes, for example, information such as a point cloud ahead of the own vehicle, visible light images, depth information, or sensor position information, covering an area that is a blind spot for the following vehicle.
- note that the format conversion may be omitted when it is not required.
- as described above, the three-dimensional data creation device 810 acquires, from the outside, three-dimensional data 831 of an area that cannot be detected by the sensor 815 of the own vehicle, and generates the three-dimensional data 835 by synthesizing the three-dimensional data 831 with the three-dimensional data 834 based on the sensor information 833 detected by the sensor 815 of the own vehicle.
- the three-dimensional data creation device 810 can generate three-dimensional data in a range that cannot be detected by the sensor 815 of the own vehicle.
- further, in response to a data transmission request from the traffic monitoring cloud or the following vehicle, the three-dimensional data creation device 810 can transmit, to the traffic monitoring cloud, the following vehicle, or the like, three-dimensional data including the space ahead of the own vehicle that cannot be detected by the sensors of the following vehicle.
- FIG. 28 is a diagram showing a configuration of a three-dimensional map and a sensor information transmission / reception system according to the present embodiment.
- the system includes a server 901 and client devices 902A and 902B.
- when the client devices 902A and 902B are not particularly distinguished, they are also referred to as the client device 902.
- the client device 902 is, for example, an in-vehicle device mounted on a moving body such as a vehicle.
- the server 901 is, for example, a traffic monitoring cloud or the like, and can communicate with a plurality of client devices 902.
- the server 901 transmits a three-dimensional map composed of a point cloud to the client device 902.
- the configuration of the three-dimensional map is not limited to the point cloud, and may represent other three-dimensional data such as a mesh structure.
- the client device 902 transmits the sensor information acquired by the client device 902 to the server 901.
- the sensor information includes, for example, at least one of LiDAR acquisition information, visible light image, infrared image, depth image, sensor position information, and speed information.
- the data sent and received between the server 901 and the client device 902 may be compressed to reduce the data, or may remain uncompressed to maintain the accuracy of the data.
- a three-dimensional compression method based on an octree structure can be used for the point cloud.
- a two-dimensional image compression method can be used for visible light images, infrared images, and depth images.
- the two-dimensional image compression method is, for example, MPEG-4 AVC or HEVC standardized by MPEG.
- the server 901 transmits the three-dimensional map managed by the server 901 to the client device 902 in response to the transmission request of the three-dimensional map from the client device 902.
- the server 901 may transmit the three-dimensional map without waiting for the three-dimensional map transmission request from the client device 902.
- the server 901 may broadcast a three-dimensional map to one or more client devices 902 in a predetermined space.
- further, the server 901 may transmit, at regular intervals, a three-dimensional map suited to the position of the client device 902 to a client device 902 that has issued a transmission request at least once.
- the server 901 may transmit the three-dimensional map to the client device 902 every time the three-dimensional map managed by the server 901 is updated.
- the client device 902 issues a three-dimensional map transmission request to the server 901. For example, when the client device 902 wants to perform self-position estimation during traveling, the client device 902 transmits a three-dimensional map transmission request to the server 901.
- further, the client device 902 may issue a three-dimensional map transmission request to the server 901 a certain time before the client device 902 exits the space indicated by the three-dimensional map that it holds. For example, when the client device 902 is within a predetermined distance of the boundary of the space indicated by the held three-dimensional map, the client device 902 may issue a three-dimensional map transmission request to the server 901. Further, when the movement route and movement speed of the client device 902 are known, the time at which the client device 902 will exit the space indicated by the held three-dimensional map may be predicted from them.
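- the route- and speed-based prediction above can be sketched as follows. This is an illustrative model, not from the specification: the held map region is approximated as an axis-aligned 2D box, the client is assumed to move at constant velocity, and all function and parameter names are hypothetical.

```python
def time_until_map_exit(pos, velocity, box_min, box_max):
    """Return the earliest time at which a client moving at constant
    velocity crosses the boundary of the held map region (None if static)."""
    times = []
    for axis in range(2):
        v = velocity[axis]
        if v > 0:
            times.append((box_max[axis] - pos[axis]) / v)
        elif v < 0:
            times.append((box_min[axis] - pos[axis]) / v)
    return min(times) if times else None

def should_request_map(pos, velocity, box_min, box_max, lead_time=10.0):
    """Issue a map transmission request if the client is predicted to
    leave the held map region within `lead_time` seconds."""
    t = time_until_map_exit(pos, velocity, box_min, box_max)
    return t is not None and t <= lead_time
```

A client 5 m from the boundary moving toward it at 1 m/s would request a new map under the default 10-second lead time, while a client 100 m away would not.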
- the client device 902 transmits the sensor information to the server 901 in response to the sensor information transmission request transmitted from the server 901.
- the client device 902 may transmit the sensor information to the server 901 without waiting for a sensor information transmission request from the server 901. For example, once the client device 902 has received a sensor information transmission request from the server 901, the client device 902 may periodically transmit the sensor information to the server 901 for a certain period. Further, when the alignment error between the three-dimensional data created by the client device 902 based on the sensor information and the three-dimensional map obtained from the server 901 is equal to or greater than a certain value, the client device 902 may determine that the three-dimensional map around it may have changed, and may transmit that fact together with the sensor information to the server 901.
- the server 901 issues a sensor information transmission request to the client device 902.
- the server 901 receives the position information of the client device 902 such as GPS from the client device 902.
- when the server 901 determines, based on the position information of the client device 902, that the client device 902 is approaching a space with little information in the three-dimensional map managed by the server 901, the server 901 sends a sensor information transmission request to the client device 902 in order to generate a new three-dimensional map.
- the server 901 may also issue a sensor information transmission request when it wants to update the three-dimensional map, check road conditions at times such as snowfall or disaster, check the traffic congestion situation, or check the situation of an incident or accident.
- the client device 902 may set the amount of sensor information data to be transmitted to the server 901 according to the communication state or the available bandwidth at the time of receiving the sensor information transmission request from the server 901.
- Setting the amount of sensor information data to be transmitted to the server 901 means, for example, increasing or decreasing the data itself, or appropriately selecting a compression method.
- FIG. 29 is a block diagram showing a configuration example of the client device 902.
- the client device 902 receives a three-dimensional map composed of a point cloud or the like from the server 901, and estimates the self-position of the client device 902 from the three-dimensional data created based on the sensor information of the client device 902. Further, the client device 902 transmits the acquired sensor information to the server 901.
- the client device 902 includes a data reception unit 1011, a communication unit 1012, a reception control unit 1013, a format conversion unit 1014, a plurality of sensors 1015, a three-dimensional data creation unit 1016, a three-dimensional image processing unit 1017, a three-dimensional data storage unit 1018, a format conversion unit 1019, a communication unit 1020, a transmission control unit 1021, and a data transmission unit 1022.
- the data receiving unit 1011 receives the three-dimensional map 1031 from the server 901.
- the three-dimensional map 1031 is data including a point cloud such as WLD or SWLD.
- the three-dimensional map 1031 may include either compressed data or uncompressed data.
- the communication unit 1012 communicates with the server 901 and transmits a data transmission request (for example, a three-dimensional map transmission request) or the like to the server 901.
- the reception control unit 1013 exchanges information such as the corresponding format with the communication destination via the communication unit 1012, and establishes communication with the communication destination.
- the format conversion unit 1014 generates the three-dimensional map 1032 by performing format conversion or the like on the three-dimensional map 1031 received by the data receiving unit 1011. Further, the format conversion unit 1014 performs decompression or decoding processing when the three-dimensional map 1031 is compressed or encoded. If the three-dimensional map 1031 is uncompressed data, the format conversion unit 1014 does not perform decompression or decoding processing.
- the plurality of sensors 1015 are a group of sensors that acquire information outside the vehicle on which the client device 902 is mounted, such as a LiDAR, a visible light camera, an infrared camera, or a depth sensor, and generate sensor information 1033.
- the sensor information 1033 is three-dimensional data such as a point cloud (point cloud data) when the sensor 1015 is a laser sensor such as LiDAR.
- the number of sensors 1015 does not have to be plural.
- the three-dimensional data creation unit 1016 creates three-dimensional data 1034 around the own vehicle based on the sensor information 1033. For example, the three-dimensional data creation unit 1016 creates point cloud data with color information around the own vehicle by using the information acquired by LiDAR and the visible light image obtained by the visible light camera.
- the three-dimensional image processing unit 1017 performs self-position estimation processing and the like for the own vehicle by using the received three-dimensional map 1032, such as a point cloud, and the three-dimensional data 1034 around the own vehicle generated from the sensor information 1033.
- the three-dimensional image processing unit 1017 creates three-dimensional data 1035 around the own vehicle by synthesizing the three-dimensional map 1032 and the three-dimensional data 1034, and estimates the self-position using the created three-dimensional data 1035. Processing may be performed.
- the three-dimensional data storage unit 1018 stores the three-dimensional map 1032, the three-dimensional data 1034, the three-dimensional data 1035, and the like.
- the format conversion unit 1019 generates the sensor information 1037 by converting the sensor information 1033 into a format supported by the receiving side.
- the format conversion unit 1019 may reduce the amount of data by compressing or encoding the sensor information 1037. Further, the format conversion unit 1019 may omit the process when it is not necessary to perform the format conversion. Further, the format conversion unit 1019 may control the amount of data to be transmitted according to the designation of the transmission range.
- the communication unit 1020 communicates with the server 901 and receives a data transmission request (sensor information transmission request) and the like from the server 901.
- the transmission control unit 1021 exchanges information such as compatible formats with the communication destination via the communication unit 1020 to establish communication.
- the data transmission unit 1022 transmits the sensor information 1037 to the server 901.
- the sensor information 1037 includes information acquired by the plurality of sensors 1015, such as information acquired by LiDAR, a luminance image acquired by a visible light camera, an infrared image acquired by an infrared camera, a depth image acquired by a depth sensor, sensor position information, and speed information.
- FIG. 30 is a block diagram showing a configuration example of the server 901.
- the server 901 receives the sensor information transmitted from the client device 902, and creates three-dimensional data based on the received sensor information.
- the server 901 updates the three-dimensional map managed by the server 901 by using the created three-dimensional data. Further, the server 901 transmits the updated three-dimensional map to the client device 902 in response to the transmission request of the three-dimensional map from the client device 902.
- the server 901 includes a data reception unit 1111, a communication unit 1112, a reception control unit 1113, a format conversion unit 1114, a three-dimensional data creation unit 1116, a three-dimensional data synthesis unit 1117, a three-dimensional data storage unit 1118, a format conversion unit 1119, a communication unit 1120, a transmission control unit 1121, and a data transmission unit 1122.
- the data receiving unit 1111 receives the sensor information 1037 from the client device 902.
- the sensor information 1037 includes, for example, information acquired by LiDAR, a brightness image acquired by a visible light camera, an infrared image acquired by an infrared camera, a depth image acquired by a depth sensor, sensor position information, speed information, and the like.
- the communication unit 1112 communicates with the client device 902 and transmits a data transmission request (for example, a sensor information transmission request) or the like to the client device 902.
- the reception control unit 1113 exchanges information such as the corresponding format with the communication destination via the communication unit 1112 to establish communication.
- when the received sensor information 1037 is compressed or encoded, the format conversion unit 1114 generates the sensor information 1132 by performing decompression or decoding processing. If the sensor information 1037 is uncompressed data, the format conversion unit 1114 does not perform decompression or decoding processing.
- the three-dimensional data creation unit 1116 creates three-dimensional data 1134 around the client device 902 based on the sensor information 1132. For example, the three-dimensional data creation unit 1116 creates point cloud data with color information around the client device 902 using the information acquired by LiDAR and the visible light image obtained by the visible light camera.
- the 3D data synthesis unit 1117 updates the 3D map 1135 by synthesizing the 3D data 1134 created based on the sensor information 1132 with the 3D map 1135 managed by the server 901.
- the three-dimensional data storage unit 1118 stores the three-dimensional map 1135 and the like.
- the format conversion unit 1119 generates the 3D map 1031 by converting the 3D map 1135 into a format supported by the receiving side.
- the format conversion unit 1119 may reduce the amount of data by compressing or encoding the three-dimensional map 1135. Further, the format conversion unit 1119 may omit the process when it is not necessary to perform the format conversion. Further, the format conversion unit 1119 may control the amount of data to be transmitted according to the designation of the transmission range.
- the communication unit 1120 communicates with the client device 902 and receives a data transmission request (three-dimensional map transmission request) or the like from the client device 902.
- the transmission control unit 1121 exchanges information such as the corresponding format with the communication destination via the communication unit 1120 to establish communication.
- the data transmission unit 1122 transmits the three-dimensional map 1031 to the client device 902.
- the three-dimensional map 1031 is data including a point cloud such as WLD or SWLD.
- the three-dimensional map 1031 may include either compressed data or uncompressed data.
- FIG. 31 is a flowchart showing an operation when the client device 902 acquires a three-dimensional map.
- the client device 902 requests the server 901 to transmit a three-dimensional map (point cloud, etc.) (S1001). At this time, the client device 902 may request the server 901 to transmit a three-dimensional map related to the position information by transmitting the position information of the client device 902 obtained by GPS or the like together.
- the client device 902 receives the three-dimensional map from the server 901 (S1002). If the received 3D map is compressed data, the client device 902 decodes the received 3D map to generate an uncompressed 3D map (S1003).
- the client device 902 creates three-dimensional data 1034 around the client device 902 from the sensor information 1033 obtained by the plurality of sensors 1015 (S1004).
- the client device 902 estimates the self-position of the client device 902 using the three-dimensional map 1032 received from the server 901 and the three-dimensional data 1034 created from the sensor information 1033 (S1005).
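- the client-side flow of steps S1001 to S1005 above can be sketched as follows. This is a toy stand-in, not the actual implementation: the server's map is modeled as a dict of point-list tiles compressed with zlib, and the "self-position estimation" is reduced to a centroid alignment between map points and sensor points; all names are hypothetical.

```python
import json
import zlib

def request_map(server_maps, position):
    """S1001-S1002: request and receive the (compressed) map tile whose
    center is nearest to the client's position."""
    tile = min(server_maps,
               key=lambda c: abs(c[0] - position[0]) + abs(c[1] - position[1]))
    return zlib.compress(json.dumps(server_maps[tile]).encode())

def decode_map(blob):
    """S1003: decompress the received map back into a point list."""
    return json.loads(zlib.decompress(blob))

def estimate_pose(map_points, sensor_points):
    """S1004-S1005: toy 'alignment' -- the translation that moves the
    sensor-point centroid onto the map-point centroid."""
    cx = (sum(p[0] for p in map_points) / len(map_points)
          - sum(p[0] for p in sensor_points) / len(sensor_points))
    cy = (sum(p[1] for p in map_points) / len(map_points)
          - sum(p[1] for p in sensor_points) / len(sensor_points))
    return (cx, cy)
```

In a real system the alignment step would be an iterative registration (e.g. ICP-style matching) against the decoded point cloud rather than a centroid difference.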
- FIG. 32 is a flowchart showing the operation when the sensor information is transmitted by the client device 902.
- the client device 902 receives the sensor information transmission request from the server 901 (S1011).
- the client device 902 transmits the sensor information 1037 to the server 901 (S1012).
- when the sensor information 1033 includes a plurality of pieces of information obtained by the plurality of sensors 1015, the client device 902 may generate the sensor information 1037 by compressing each piece of information with a compression method suited to that information.
- FIG. 33 is a flowchart showing the operation when the server 901 acquires the sensor information.
- the server 901 requests the client device 902 to transmit the sensor information (S1021).
- the server 901 receives the sensor information 1037 transmitted from the client device 902 in response to the request (S1022).
- the server 901 creates three-dimensional data 1134 using the received sensor information 1037 (S1023).
- the server 901 reflects the created three-dimensional data 1134 on the three-dimensional map 1135 (S1024).
- FIG. 34 is a flowchart showing the operation when the server 901 transmits the three-dimensional map.
- the server 901 receives a three-dimensional map transmission request from the client device 902 (S1031).
- the server 901 that has received the three-dimensional map transmission request transmits the three-dimensional map 1031 to the client device 902 (S1032).
- the server 901 may extract a three-dimensional map in the vicinity thereof according to the position information of the client device 902 and transmit the extracted three-dimensional map.
- the server 901 may compress the three-dimensional map composed of the point cloud by using, for example, a compression method based on an octree structure, and transmit the compressed three-dimensional map.
- the server 901 creates three-dimensional data 1134 in the vicinity of the position of the client device 902 using the sensor information 1037 received from the client device 902. Next, the server 901 calculates the difference between the three-dimensional data 1134 and the three-dimensional map 1135 by matching the created three-dimensional data 1134 against the three-dimensional map 1135 of the same area managed by the server 901. When the difference is equal to or greater than a predetermined threshold value, the server 901 determines that some abnormality has occurred in the vicinity of the client device 902. For example, when land subsidence occurs due to a natural disaster such as an earthquake, a large difference is conceivable between the three-dimensional map 1135 managed by the server 901 and the three-dimensional data 1134 created based on the sensor information 1037.
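- the threshold comparison described above can be sketched as follows. This is an illustrative stand-in: the "difference" after matching is taken here as the mean nearest-neighbor distance from the sensor-derived points to the map points, which is one common choice but is not specified by the text; names are hypothetical.

```python
def detect_map_change(map_points, sensor_points, threshold):
    """Flag an abnormality when the mean nearest-neighbor distance from
    the client-derived points to the server map exceeds `threshold`."""
    def nn_dist(p, pts):
        return min(
            ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2) ** 0.5
            for q in pts
        )
    mean_residual = (sum(nn_dist(p, map_points) for p in sensor_points)
                     / len(sensor_points))
    return mean_residual >= threshold
```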
- the sensor information 1037 may include information indicating at least one of the sensor type, the sensor performance, and the sensor model number. Further, a class ID or the like corresponding to the performance of the sensor may be added to the sensor information 1037. For example, when the sensor information 1037 is information acquired by LiDAR, it is conceivable to assign identifiers according to sensor performance, such as class 1 for a sensor that can acquire information with an accuracy of several millimeters, class 2 for a sensor with an accuracy of several centimeters, and class 3 for a sensor with an accuracy of several meters. Further, the server 901 may estimate the performance information of the sensor and the like from the model number of the client device 902.
- when the client device 902 is mounted on a vehicle, the server 901 may determine the sensor specification information from the vehicle type. In this case, the server 901 may acquire information on the vehicle type in advance, or the sensor information may include that information. Further, the server 901 may use the acquired sensor information 1037 to switch the degree of correction applied to the three-dimensional data 1134 created using the sensor information 1037. For example, when the sensor performance is high accuracy (class 1), the server 901 does not apply correction to the three-dimensional data 1134. When the sensor performance is low accuracy (class 3), the server 901 applies correction to the three-dimensional data 1134 according to the accuracy of the sensor. For example, the server 901 makes the degree (strength) of the correction stronger as the accuracy of the sensor becomes lower.
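- the class assignment and accuracy-dependent correction strength can be sketched as follows. The millimeter/centimeter/meter class boundaries follow the text, but the exact numeric thresholds and strength values are illustrative assumptions, not from the specification.

```python
def sensor_class(accuracy_m):
    """Class 1: ~mm accuracy, class 2: ~cm, class 3: ~m
    (class identifiers as described in the text; thresholds assumed)."""
    if accuracy_m < 0.01:
        return 1
    if accuracy_m < 1.0:
        return 2
    return 3

def correction_strength(cls):
    """Lower accuracy (higher class) -> stronger correction;
    class 1 (high accuracy) receives no correction."""
    return {1: 0.0, 2: 0.5, 3: 1.0}[cls]
```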
- the server 901 may simultaneously issue a transmission request for sensor information to a plurality of client devices 902 in a certain space.
- when the server 901 receives a plurality of pieces of sensor information from the plurality of client devices 902, the server 901 does not need to use all of the sensor information for creating the three-dimensional data 1134.
- for example, the server 901 may select the sensor information to be used according to the performance of the sensors. For example, the server 901 may select highly accurate sensor information (class 1) from the plurality of received pieces of sensor information and create the three-dimensional data 1134 using the selected sensor information.
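- the performance-based selection can be sketched as follows, a minimal illustration assuming each piece of sensor information carries the class identifier described above; the report structure and field names are hypothetical.

```python
def select_sensor_info(reports):
    """Keep only the reports from the best (lowest-numbered) class present,
    e.g. class-1 (high-accuracy) sensor information when available."""
    best = min(r["class"] for r in reports)
    return [r for r in reports if r["class"] == best]
```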
- the server 901 is not limited to a server such as a traffic monitoring cloud, and may be another client device (in-vehicle device).
- FIG. 35 is a diagram showing a system configuration in this case.
- the client device 902C issues a sensor information transmission request to the nearby client device 902A, and acquires the sensor information from the client device 902A. Then, the client device 902C creates three-dimensional data using the acquired sensor information of the client device 902A, and updates the three-dimensional map of the client device 902C. As a result, the client device 902C can generate a three-dimensional map of the space that can be acquired from the client device 902A by utilizing the performance of the client device 902C. For example, it is considered that such a case occurs when the performance of the client device 902C is high.
- the client device 902A that provided the sensor information is given the right to acquire the highly accurate three-dimensional map generated by the client device 902C.
- the client device 902A receives a highly accurate 3D map from the client device 902C in accordance with its rights.
- the client device 902C may issue a request for transmitting sensor information to a plurality of nearby client devices 902 (client device 902A and client device 902B).
- when a nearby client device 902 has a high-performance sensor, the client device 902C can create three-dimensional data using the sensor information obtained by that high-performance sensor.
- FIG. 36 is a block diagram showing the functional configurations of the server 901 and the client device 902.
- the server 901 includes, for example, a three-dimensional map compression / decoding processing unit 1201 that compresses and decodes a three-dimensional map, and a sensor information compression / decoding processing unit 1202 that compresses and decodes sensor information.
- the client device 902 includes a three-dimensional map decoding processing unit 1211 and a sensor information compression processing unit 1212.
- the three-dimensional map decoding processing unit 1211 receives the encoded data of the compressed three-dimensional map, decodes the encoded data, and acquires the three-dimensional map.
- the sensor information compression processing unit 1212 compresses the sensor information itself instead of the three-dimensional data created from the acquired sensor information, and transmits the compressed sensor information encoded data to the server 901.
- in this way, the client device 902 only needs to internally hold a processing unit (device or LSI) that performs the process of decoding a three-dimensional map (point cloud, etc.), and does not need to hold a processing unit that performs the process of compressing the three-dimensional data of a three-dimensional map (point cloud, etc.). As a result, the cost and power consumption of the client device 902 can be suppressed.
- as described above, the client device 902 is mounted on a moving body, and creates three-dimensional data 1034 of the surroundings of the moving body from the sensor information 1033, obtained by the sensor 1015 mounted on the moving body, indicating the surrounding state of the moving body.
- the client device 902 estimates the self-position of the moving body using the created three-dimensional data 1034.
- the client device 902 transmits the acquired sensor information 1033 to the server 901 or another mobile body 902.
- in this way, the client device 902 transmits the sensor information 1033 to the server 901 and the like. Thereby, the amount of transmitted data can be reduced compared with the case of transmitting three-dimensional data. Further, since the client device 902 does not need to perform processing such as compression or encoding of three-dimensional data, the processing amount of the client device 902 can be reduced. Therefore, the client device 902 can reduce the amount of data to be transmitted or simplify the configuration of the device.
- the client device 902 further transmits a three-dimensional map transmission request to the server 901, and receives the three-dimensional map 1031 from the server 901. In estimating the self-position, the client device 902 estimates the self-position using the three-dimensional data 1034 and the three-dimensional map 1032.
- the sensor information 1033 includes at least one of the information obtained by the laser sensor, the luminance image, the infrared image, the depth image, the position information of the sensor, and the speed information of the sensor.
- the sensor information 1033 includes information indicating the performance of the sensor.
- the client device 902 encodes or compresses the sensor information 1033, and in transmitting the sensor information, transmits the encoded or compressed sensor information 1037 to the server 901 or another mobile body 902. According to this, the client device 902 can reduce the amount of data to be transmitted.
- the client device 902 includes a processor and a memory, and the processor uses the memory to perform the above processing.
- further, the server 901 can communicate with the client device 902 mounted on a moving body, and receives from the client device 902 the sensor information 1037, obtained by the sensor 1015 mounted on the moving body, indicating the surrounding situation of the moving body. The server 901 creates three-dimensional data 1134 around the moving body from the received sensor information 1037.
- the server 901 creates the three-dimensional data 1134 using the sensor information 1037 transmitted from the client device 902. As a result, there is a possibility that the amount of data to be transmitted can be reduced as compared with the case where the client device 902 transmits three-dimensional data. Further, since it is not necessary for the client device 902 to perform processing such as compression or coding of three-dimensional data, the processing amount of the client device 902 can be reduced. Therefore, the server 901 can reduce the amount of data to be transmitted or simplify the configuration of the device.
- the server 901 further transmits a transmission request for sensor information to the client device 902.
- the server 901 updates the three-dimensional map 1135 using the created three-dimensional data 1134, and sends the three-dimensional map 1135 to the client device 902 in response to the transmission request of the three-dimensional map 1135 from the client device 902. Send.
- the sensor information 1037 includes at least one of the information obtained by the laser sensor, the luminance image, the infrared image, the depth image, the position information of the sensor, and the speed information of the sensor.
- the sensor information 1037 includes information indicating the performance of the sensor.
- the server 901 further corrects the three-dimensional data according to the performance of the sensor. According to this, the three-dimensional data creation method can improve the quality of the three-dimensional data.
- further, the server 901 receives a plurality of pieces of sensor information 1037 from the plurality of client devices 902, and selects the sensor information 1037 to be used for creating the three-dimensional data 1134 based on the plurality of pieces of information indicating sensor performance included in the plurality of pieces of sensor information 1037. According to this, the server 901 can improve the quality of the three-dimensional data 1134.
- the server 901 decodes or decompresses the received sensor information 1037, and creates three-dimensional data 1134 from the decoded or decompressed sensor information 1132. According to this, the server 901 can reduce the amount of data to be transmitted.
- the server 901 includes a processor and a memory, and the processor uses the memory to perform the above processing.
- FIG. 37 is a block diagram of the three-dimensional data encoding device 1300 according to the present embodiment.
- the three-dimensional data coding device 1300 encodes three-dimensional data to generate a coded bit stream (hereinafter, also simply referred to as a bit stream) which is a coded signal.
- the three-dimensional data coding apparatus 1300 includes a division unit 1301, a subtraction unit 1302, a conversion unit 1303, a quantization unit 1304, an inverse quantization unit 1305, an inverse conversion unit 1306, an addition unit 1307, a reference volume memory 1308, an intra prediction unit 1309, a reference space memory 1310, an inter prediction unit 1311, a prediction control unit 1312, and an entropy coding unit 1313.
- the division unit 1301 divides each space (SPC) included in the three-dimensional data into a plurality of volumes (VLM), which are coding units. The division unit 1301 also expresses the voxels in each volume as an octree. Note that the space and the volume may have the same size, in which case the space is represented as an octree. Further, the division unit 1301 may add information necessary for the octree representation (depth information, etc.) to a bitstream header or the like.
- the subtraction unit 1302 calculates the difference between the volume (encoded volume) output from the division unit 1301 and the predicted volume generated by the intra prediction or inter prediction described later, and uses the calculated difference as the prediction residual. It is output to the conversion unit 1303.
- FIG. 38 is a diagram showing a calculation example of the predicted residual.
- the bit strings of the coded volume and the predicted volume shown here are, for example, position information indicating the positions of three-dimensional points (for example, a point cloud) included in the volume.
- FIG. 39 is a diagram showing a structural example of a volume including a plurality of voxels.
- FIG. 40 is a diagram showing an example in which the volume shown in FIG. 39 is converted into an octree structure.
- the leaves 1, 2 and 3 represent the voxels VXL1, VXL2 and VXL3 shown in FIG. 39, respectively, and represent the VXL including the point cloud (hereinafter, effective VXL).
- the octree is represented by a binary sequence of 0s and 1s, for example.
- the binary sequence shown in FIG. 40 is assigned to each node and leaf.
- this binary string is scanned according to a breadth-first or depth-first scan order. For example, when breadth-first scan is performed, the binary sequence shown in A of FIG. 41 is obtained. When depth-first scan is performed, the binary sequence shown in B of FIG. 41 is obtained.
- the binary sequence obtained by this scan is encoded by entropy coding to reduce the amount of information.
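- As an illustrative sketch of the breadth-first and depth-first scans described above, the following hypothetical Python fragment serializes the occupancy bits of a toy octree in both orders (the node layout and bit convention here are assumptions for illustration, not the exact bitstream format of this embodiment):

```python
from collections import deque

# A node is a list of 8 children; a leaf is 1 (occupied voxel) or 0 (empty).
# Each internal node contributes one occupancy bit per child.

def breadth_first_bits(root):
    """Emit occupancy bits level by level (breadth-first)."""
    bits, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        for child in node:
            bits.append(0 if child == 0 else 1)
            if isinstance(child, list):
                queue.append(child)
    return bits

def depth_first_bits(node):
    """Emit a node's occupancy bits, then recurse into occupied children."""
    bits = [0 if child == 0 else 1 for child in node]
    for child in node:
        if isinstance(child, list):
            bits.extend(depth_first_bits(child))
    return bits
```

For trees of depth 3 or more the two orders produce different sequences, which is why the scan order must be fixed (or signaled) before entropy coding.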
- the depth in the octree representation is used to control the granularity of the point cloud information contained in the volume. If the depth is set large, the point cloud information can be reproduced to a finer level, but the amount of data for expressing the nodes and leaves increases. Conversely, if the depth is set small, the amount of data decreases, but since multiple pieces of point cloud information with different positions and colors are treated as having the same position and the same color, the information held by the original point cloud information is lost.
- FIG. 42 is a diagram showing an example in which the octree with a depth of 2 shown in FIG. 40 is represented by an octree with a depth of 1.
- the octree shown in FIG. 42 has a smaller amount of data than the octree shown in FIG. 40. That is, the octree shown in FIG. 42 has a smaller number of bits after binarization than the octree shown in FIG. 40.
- the leaf 1 and the leaf 2 shown in FIG. 40 are represented by the leaf 1 shown in FIG. 42. That is, the information that the leaf 1 and the leaf 2 shown in FIG. 40 were at different positions is lost.
- FIG. 43 is a diagram showing a volume corresponding to the octree shown in FIG. 42.
- VXL1 and VXL2 shown in FIG. 39 correspond to VXL12 shown in FIG. 43.
- the three-dimensional data coding apparatus 1300 generates the color information of VXL12 shown in FIG. 43 from the color information of VXL1 and VXL2 shown in FIG. 39.
- the three-dimensional data coding apparatus 1300 calculates, for example, the average value, the median value, the weighted average value, or the like of the color information of VXL1 and VXL2 as the color information of VXL12. In this way, the three-dimensional data encoding device 1300 may control the reduction of the amount of data by changing the depth of the octree.
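- The color merge described for VXL1 and VXL2 into VXL12 can be sketched as follows (a hypothetical helper; the averaging rule, channel layout, and rounding are assumptions for illustration):

```python
def merge_colors(colors, weights=None):
    """Merge per-voxel RGB colors by a (weighted) average.

    colors:  list of (r, g, b) tuples from the voxels being merged
    weights: optional per-voxel weights (e.g. point counts); defaults to
             an unweighted average
    """
    if weights is None:
        weights = [1] * len(colors)
    total = sum(weights)
    return tuple(
        round(sum(c[ch] * w for c, w in zip(colors, weights)) / total)
        for ch in range(3)
    )
```

For example, averaging (100, 50, 0) and (200, 150, 100) yields (150, 100, 50) as the merged voxel's color.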
- the three-dimensional data encoding device 1300 may set the octree depth information in any of world units, space units, and volume units. At that time, the three-dimensional data encoding device 1300 may add the depth information to the world header information, the space header information, or the volume header information. In addition, the same value may be used as the depth information for all worlds, spaces, and volumes at different times. In this case, the three-dimensional data encoding device 1300 may add the depth information to header information that manages the worlds for all times.
- the conversion unit 1303 applies frequency conversion such as orthogonal conversion to the predicted residual of the color information of the voxel in the volume. For example, the conversion unit 1303 creates a one-dimensional array by scanning the predicted residuals in a certain scan order. After that, the conversion unit 1303 converts the one-dimensional array into the frequency domain by applying the one-dimensional orthogonal transformation to the created one-dimensional array.
- the conversion unit 1303 may use an orthogonal transformation of two or more dimensions instead of one dimension. For example, the conversion unit 1303 maps the predicted residuals to a two-dimensional array in a certain scan order, and applies a two-dimensional orthogonal transformation to the obtained two-dimensional array. Further, the conversion unit 1303 may select the orthogonal conversion method to be used from a plurality of orthogonal conversion methods. In this case, the three-dimensional data coding apparatus 1300 adds information indicating which orthogonal conversion method was used to the bit stream. Further, the conversion unit 1303 may select the orthogonal conversion method to be used from a plurality of orthogonal conversion methods having different dimensions. In this case, the three-dimensional data coding apparatus 1300 adds information indicating the dimension of the orthogonal conversion method used to the bit stream.
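- The one-dimensional orthogonal transformation of a scanned residual array can be sketched with a DCT-II/DCT-III pair (the text does not mandate a specific transform; the orthonormal DCT is used here purely as one example of an orthogonal transform):

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a 1-D residual array (forward transform)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct2(c):
    """Orthonormal DCT-III, the inverse of dct2."""
    n = len(c)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
            s += scale * c[k] * math.cos(math.pi * (i + 0.5) * k / n)
        out.append(s)
    return out
```

Because the pair is orthonormal, applying `idct2` to `dct2(x)` recovers the original residuals up to floating-point error, which is what the inverse conversion unit relies on.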
- the conversion unit 1303 matches the scan order of the predicted residuals to the scan order (breadth-first, depth-first, etc.) of the octree in the volume. As a result, it is not necessary to add information indicating the scan order of the predicted residuals to the bit stream, so the overhead can be reduced. Further, the conversion unit 1303 may apply a scan order different from the scan order of the octree. In this case, the three-dimensional data coding apparatus 1300 adds information indicating the scan order of the predicted residuals to the bit stream. As a result, the three-dimensional data coding apparatus 1300 can efficiently encode the predicted residuals.
- the three-dimensional data encoding device 1300 may add information (a flag, etc.) indicating whether or not the scan order of the octree is applied to the bitstream, and may add information indicating the scan order of the predicted residuals to the bitstream when the scan order of the octree is not applied.
- the conversion unit 1303 may convert not only the predicted residual of the color information but also other attribute information of the voxel.
- the conversion unit 1303 may convert and encode information such as the reflectance obtained when the point cloud is acquired by LiDAR or the like.
- the conversion unit 1303 may skip the process, for example, when the space does not have attribute information such as color information. Further, the three-dimensional data coding apparatus 1300 may add information (a flag) indicating whether or not to skip the processing of the conversion unit 1303 to the bit stream.
- the quantization unit 1304 generates a quantization coefficient by performing quantization on the frequency component of the predicted residual generated by the conversion unit 1303 using the quantization control parameter. This reduces the amount of information.
- the generated quantization coefficient is output to the entropy coding unit 1313.
- the quantization unit 1304 may control the quantization control parameters in world units, space units, or volume units.
- the three-dimensional data coding apparatus 1300 adds the quantization control parameter to each header information or the like.
- the quantization unit 1304 may perform quantization control by changing the weight for each frequency component of the predicted residual. For example, the quantization unit 1304 may finely quantize the low frequency component and coarsely quantize the high frequency component. In this case, the three-dimensional data encoding device 1300 may add a parameter representing the weight of each frequency component to the header.
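- The weighted quantization just described can be given as a minimal sketch, assuming a base step size (the quantization control parameter) scaled by a per-frequency weight so that low frequencies are quantized finely and high frequencies coarsely (the names and rounding rule are illustrative assumptions):

```python
def quantize(coeffs, base_step, weights):
    """Quantize transform coefficients with a per-frequency step size."""
    return [round(c / (base_step * w)) for c, w in zip(coeffs, weights)]

def dequantize(qcoeffs, base_step, weights):
    """Inverse quantization: scale coefficients back by the same steps."""
    return [q * base_step * w for q, w in zip(qcoeffs, weights)]
```

Note that dequantization only approximates the original coefficients; the rounding loss is exactly the information discarded to reduce the code amount.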
- the quantization unit 1304 may skip the process when the space does not have attribute information such as color information. Further, the three-dimensional data coding apparatus 1300 may add information (flag) indicating whether or not to skip the processing of the quantization unit 1304 to the bit stream.
- the inverse quantization unit 1305 generates an inverse quantization coefficient of the predicted residual by performing inverse quantization on the quantization coefficient generated by the quantization unit 1304 using the quantization control parameter, and outputs the generated inverse quantization coefficient to the inverse conversion unit 1306.
- the inverse conversion unit 1306 generates a predicted residual after applying the inverse conversion by applying the inverse conversion to the inverse quantization coefficient generated by the inverse quantization unit 1305. Since the predicted residual after applying the inverse conversion is the predicted residual generated after quantization, it does not have to completely match the predicted residual output by the conversion unit 1303.
- the addition unit 1307 adds the predicted residual after applying the inverse conversion generated by the inverse conversion unit 1306 and the predicted volume, generated by the intra prediction or inter prediction described later, that was used to generate the predicted residual before quantization, to generate a reconstructed volume. This reconstructed volume is stored in the reference volume memory 1308 or the reference space memory 1310.
- the intra prediction unit 1309 generates a prediction volume of the coded volume by using the attribute information of the adjacent volume stored in the reference volume memory 1308.
- the attribute information includes voxel color information or reflectivity.
- the intra prediction unit 1309 generates color information or a prediction value of reflectance of the volume to be encoded.
- FIG. 44 is a diagram for explaining the operation of the intra prediction unit 1309.
- the volume idx is identifier information added to the volumes in the space, and different values are assigned to each volume.
- the order of allocating the volume idx may be the same as the coding order, or may be different from the coding order.
- the predicted residual is generated by subtracting the predicted value of the color information from the color information of each voxel included in the coding target volume. The processing from the conversion unit 1303 onward is performed on this predicted residual. Further, in this case, the three-dimensional data encoding device 1300 adds the adjacent volume information and the prediction mode information to the bit stream.
- the adjacent volume information is information indicating the adjacent volume used for the prediction, and for example, indicates the volume idx of the adjacent volume used for the prediction.
- the prediction mode information indicates the mode used to generate the prediction volume.
- the mode is, for example, an average value mode in which a predicted value is generated from the average value of the voxels in the adjacent volume, a median mode in which a predicted value is generated from the median value of the voxels in the adjacent volume, and the like.
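- The two intra-prediction modes mentioned above can be sketched as follows (hypothetical helper names; the modes follow the description, while the residual computation mirrors the subtraction performed before the conversion unit):

```python
import statistics

def predict(adjacent_values, mode):
    """Predict a voxel attribute from an adjacent volume's voxel values."""
    if mode == "average":
        return statistics.mean(adjacent_values)
    if mode == "median":
        return statistics.median(adjacent_values)
    raise ValueError(f"unknown prediction mode: {mode}")

def residuals(target_values, predicted):
    """Predicted residual: target attribute minus predicted value."""
    return [v - predicted for v in target_values]
```

The median mode is less sensitive to outlier voxels in the adjacent volume than the average mode, which is one reason an encoder might signal the mode per volume.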
- FIG. 45 is a diagram schematically showing an inter-prediction process according to the present embodiment.
- the inter-prediction unit 1311 encodes (inter-predicts) a space (SPC) at a certain time T_Cur using a coded space at a different time T_LX.
- the inter-prediction unit 1311 applies rotation and translation processing to the coded spaces at different times T_LX to perform the coding processing.
- the three-dimensional data encoding device 1300 adds RT information related to rotation and translation processing applied to spaces at different times T_LX to the bit stream.
- the different time T_LX is, for example, a time T_L0 before the certain time T_Cur.
- the three-dimensional data coding apparatus 1300 may add RT information RT_L0 related to rotation and translation processing applied to the space at time T_L0 to the bit stream.
- the different time T_LX is, for example, a time T_L1 after the certain time T_Cur.
- the three-dimensional data coding apparatus 1300 may add RT information RT_L1 related to rotation and translation processing applied to the space at time T_L1 to the bit stream.
- the inter-prediction unit 1311 performs coding (bi-prediction) with reference to both spaces at different time T_L0 and time T_L1.
- the three-dimensional data encoding device 1300 may add both the RT information RT_L0 and RT_L1 related to rotation and translation applied to the respective spaces to the bit stream.
- in the above description, T_L0 is set to a time before T_Cur and T_L1 is set to a time after T_Cur, but this is not always the case.
- both T_L0 and T_L1 may be times before T_Cur.
- both T_L0 and T_L1 may be times after T_Cur.
- the RT information related to rotation and translation applied to each space may be added to the bit stream.
- the three-dimensional data coding apparatus 1300 manages a plurality of encoded spaces to be referenced by two reference lists (L0 list and L1 list).
- the first reference space in the L0 list is L0R0
- the second reference space in the L0 list is L0R1
- the first reference space in the L1 list is L1R0
- the second reference space in the L1 list is L1R1.
- the three-dimensional data encoding device 1300 adds the RT information RT_L0R0 of L0R0, the RT information RT_L0R1 of L0R1, the RT information RT_L1R0 of L1R0, and the RT information RT_L1R1 of L1R1 to the bit stream. For example, the three-dimensional data encoding device 1300 adds these RT information to the header of the bit stream or the like.
- the three-dimensional data encoding device 1300 determines whether to apply rotation and translation for each reference space when encoding is performed by referring to a plurality of reference spaces at different times. At that time, the three-dimensional data coding apparatus 1300 may add information (an RT application flag, etc.) indicating whether or not rotation and translation are applied for each reference space to the header information of the bitstream. For example, the three-dimensional data coding apparatus 1300 calculates RT information and an ICP error value by using the ICP (Iterative Closest Point) algorithm for each reference space referenced from the coding target space.
- when the ICP error value is smaller than a predetermined constant value, the three-dimensional data coding apparatus 1300 determines that rotation and translation do not need to be performed, and sets the RT application flag to off. On the other hand, when the ICP error value is larger than the above constant value, the three-dimensional data coding apparatus 1300 sets the RT application flag to on and adds the RT information to the bit stream.
- FIG. 46 is a diagram showing an example of syntax in which RT information and RT application flag are added to the header.
- the number of bits to be assigned to each syntax may be determined within the range that the syntax can take. For example, when the number of reference spaces included in the reference list L0 is eight, 3 bits may be assigned to MaxRefSpc_l0.
- the number of bits to be assigned may be variable according to the value that each syntax can take, or may be fixed regardless of the value that can be taken.
- the three-dimensional data encoding device 1300 may add the fixed number of bits to another header information.
- MaxRefSpc_l0 shown in FIG. 46 indicates the number of reference spaces included in the reference list L0.
- RT_flag_l0 [i] is an RT application flag of the reference space i in the reference list L0.
- when RT_flag_l0 [i] is 1, rotation and translation are applied to the reference space i.
- when RT_flag_l0 [i] is 0, rotation and translation are not applied to the reference space i.
- R_l0 [i] and T_l0 [i] are RT information of the reference space i in the reference list L0.
- R_l0 [i] is rotation information of the reference space i in the reference list L0.
- the rotation information indicates the content of the applied rotation process, for example, a rotation matrix, a quaternion, or the like.
- T_l0 [i] is translation information of the reference space i in the reference list L0.
- the translation information indicates the content of the applied translation process, for example, a translation vector or the like.
- MaxRefSpc_l1 indicates the number of reference spaces included in the reference list L1.
- RT_flag_l1 [i] is an RT application flag of the reference space i in the reference list L1. When RT_flag_l1 [i] is 1, rotation and translation are applied to the reference space i. When RT_flag_l1 [i] is 0, rotation and translation are not applied to the reference space i.
- R_l1 [i] and T_l1 [i] are RT information of the reference space i in the reference list L1.
- R_l1 [i] is rotation information of the reference space i in the reference list L1.
- the rotation information indicates the content of the applied rotation process, for example, a rotation matrix, a quaternion, or the like.
- T_l1 [i] is translation information of the reference space i in the reference list L1.
- the translation information indicates the content of the applied translation process, for example, a translation vector or the like.
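- Applying the RT information carried in the header (rotation information R, here assumed to be in 3x3 rotation-matrix form, and translation information T as a translation vector) to every point of a reference space can be sketched as the following hypothetical helper:

```python
import math

def apply_rt(points, R, T):
    """Apply rotation matrix R then translation vector T to each 3-D point."""
    out = []
    for x, y, z in points:
        rx = R[0][0] * x + R[0][1] * y + R[0][2] * z + T[0]
        ry = R[1][0] * x + R[1][1] * y + R[1][2] * z + T[1]
        rz = R[2][0] * x + R[2][1] * y + R[2][2] * z + T[2]
        out.append((rx, ry, rz))
    return out

def rotation_z(theta):
    """Rotation matrix for an angle theta (radians) about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]
```

If R is instead signaled as a quaternion, the decoder would first convert it to this matrix form (or apply it directly) before transforming the reference space.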
- the inter-prediction unit 1311 generates a prediction volume of the coded target volume by using the information of the encoded reference space stored in the reference space memory 1310. As described above, the inter-prediction unit 1311 uses the coded space and the reference space to bring the overall positional relationship between the coded space and the reference space closer to each other before generating the predicted volume of the coded volume.
- the RT information is obtained using the ICP (Iterative Closest Point) algorithm. Then, the inter-prediction unit 1311 obtains the reference space B by applying rotation and translation processing to the reference space using the obtained RT information. After that, the inter-prediction unit 1311 generates a predicted volume of the coding target volume in the coding target space by using the information in the reference space B.
- the three-dimensional data coding apparatus 1300 adds the RT information used for obtaining the reference space B to the header information of the coding target space and the like.
- the inter-prediction unit 1311 brings the overall positional relationship between the coding target space and the reference space closer by applying rotation and translation processing to the reference space, and then generates the predicted volume using the information of the reference space, so that the accuracy of the predicted volume can be improved. As a result, the predicted residual can be suppressed and the code amount can be reduced.
- an example in which ICP is performed using the coding target space and the reference space has been shown, but the present invention is not necessarily limited to this.
- the inter-prediction unit 1311 performs ICP using at least one of the coded target space in which the number of voxels or point clouds is thinned out and the reference space in which the number of voxels or point clouds is thinned out in order to reduce the processing amount. By doing so, RT information may be obtained.
- when the ICP error value obtained as a result of ICP is smaller than a predetermined first threshold value, that is, for example, when the positional relationship between the coding target space and the reference space is close, the inter-prediction unit 1311 determines that rotation and translation processing are not necessary and does not perform rotation and translation. In this case, the three-dimensional data encoding device 1300 may suppress the overhead by not adding the RT information to the bit stream.
- when the ICP error value is larger than a predetermined second threshold value, the inter-prediction unit 1311 may determine that the shape change between the spaces is large, and may apply intra prediction to all the volumes of the coding target space.
- the space to which the intra prediction is applied is referred to as an intra space.
- the second threshold value is a value larger than the first threshold value.
- the method is not limited to ICP, and any method may be applied as long as it is a method of obtaining RT information from two voxel sets or two point cloud sets.
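- As one deliberately minimal example of obtaining alignment information and an error value from two point sets, the following translation-only sketch uses centroid matching; real ICP additionally estimates rotation and iterates over nearest-neighbor correspondences, so this fragment only illustrates the idea (all names are hypothetical):

```python
def centroid(points):
    """Centroid of a list of 3-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def estimate_translation(source, target):
    """Translation that moves the source centroid onto the target centroid."""
    cs, ct = centroid(source), centroid(target)
    return tuple(ct[i] - cs[i] for i in range(3))

def alignment_error(source, target, t):
    """Mean squared distance after applying translation t
    (assumes index correspondence between the two sets)."""
    return sum(
        sum((s[i] + t[i] - g[i]) ** 2 for i in range(3))
        for s, g in zip(source, target)
    ) / len(source)
```

An encoder could compare such an error value against the first and second thresholds described above to decide between signaling RT information, skipping it, or falling back to intra prediction.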
- the inter-prediction unit 1311 searches, for example, in the reference space for the volume whose attribute information, such as shape or color, is closest to that of the coding target volume in the coding target space, and uses it as the prediction volume of the coding target volume. Further, this reference space is, for example, the reference space after the above-mentioned rotation and translation processing has been performed.
- the inter-prediction unit 1311 generates a prediction volume from the volume (reference volume) obtained by the search.
- FIG. 47 is a diagram for explaining the operation of generating the predicted volume.
- while sequentially scanning the reference volumes in the reference space, the inter-prediction unit 1311 searches for the volume with the smallest predicted residual, which is the difference from the coding target volume.
- the inter-prediction unit 1311 selects the volume with the smallest prediction residual as the prediction volume.
- the predicted residual between the coded volume and the predicted volume is encoded by the processing of the conversion unit 1303 and subsequent units.
- the predicted residual is the difference between the attribute information of the coded volume and the attribute information of the predicted volume.
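- The reference-volume search described above can be sketched as follows, using the sum of absolute differences of attribute values as an illustrative residual metric (the actual metric is not specified in the text):

```python
def best_prediction_volume(target, reference_volumes):
    """Return (index, volume) of the reference volume whose attribute
    values give the smallest prediction residual against the target."""
    def sad(a, b):
        # sum of absolute differences, an illustrative residual measure
        return sum(abs(x - y) for x, y in zip(a, b))
    best_idx = min(range(len(reference_volumes)),
                   key=lambda i: sad(target, reference_volumes[i]))
    return best_idx, reference_volumes[best_idx]
```

The returned index plays the role of the volume idx that is added to the bit stream header so the decoder can locate the same reference volume.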
- the three-dimensional data encoding device 1300 adds the volume idx of the reference volume in the reference space that was referenced as the predicted volume to the header of the bit stream or the like.
- the prediction control unit 1312 controls whether to encode the coded volume using intra prediction or inter prediction.
- the mode including the intra prediction and the inter prediction is called a prediction mode.
- the prediction control unit 1312 calculates the prediction residual when the coding target volume is predicted by intra prediction and the prediction residual when it is predicted by inter prediction as evaluation values, and selects the prediction mode with the smaller evaluation value.
- note that the prediction control unit 1312 may calculate the actual code amounts by applying orthogonal conversion, quantization, and entropy coding to the prediction residual of the intra prediction and the prediction residual of the inter prediction, respectively, and select the prediction mode using the obtained code amounts as the evaluation values. Further, overhead information other than the predicted residual (reference volume idx information, etc.) may be added to the evaluation value. Further, the prediction control unit 1312 may always select intra prediction when it is determined in advance that the coding target space is to be encoded as an intra space.
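- The prediction-mode decision can be sketched as the following hypothetical helper, where the evaluation value is the residual magnitude plus optional overhead; as noted above, a real encoder may instead compare actual code amounts after transform, quantization, and entropy coding:

```python
def select_prediction_mode(intra_residual, inter_residual,
                           intra_overhead=0, inter_overhead=0):
    """Choose 'intra' or 'inter' by comparing evaluation values.

    Each evaluation value is the sum of absolute residuals plus any
    side-information overhead (e.g. reference volume idx bits).
    """
    intra_cost = sum(abs(r) for r in intra_residual) + intra_overhead
    inter_cost = sum(abs(r) for r in inter_residual) + inter_overhead
    return "intra" if intra_cost <= inter_cost else "inter"
```

Including the overhead term matters: a slightly larger inter residual can still win if its side information is cheap, and vice versa.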
- the entropy coding unit 1313 generates a coded signal (coded bit stream) by variable-length coding the quantization coefficient which is the input from the quantization unit 1304. Specifically, the entropy coding unit 1313, for example, binarizes the quantization coefficient and arithmetically encodes the obtained binary signal.
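- The binarization step that precedes arithmetic coding can be sketched with an order-0 exponential-Golomb code (an assumption for illustration; the text does not specify which binarization scheme the entropy coding unit uses):

```python
def exp_golomb(value):
    """Order-0 exponential-Golomb binarization of a non-negative integer:
    (leading zeros equal to the bit length minus one) + binary of value+1."""
    v = value + 1
    bits = bin(v)[2:]                      # binary string without '0b'
    return "0" * (len(bits) - 1) + bits
```

Small coefficient values map to short codewords, which suits the residual statistics after quantization; the resulting binary string is then arithmetically coded.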
- FIG. 48 is a block diagram of the three-dimensional data decoding device 1400 according to the present embodiment.
- the three-dimensional data decoding device 1400 includes an entropy decoding unit 1401, an inverse quantization unit 1402, an inverse conversion unit 1403, an addition unit 1404, a reference volume memory 1405, an intra prediction unit 1406, and a reference space memory 1407. , Inter-prediction unit 1408 and prediction control unit 1409.
- the entropy decoding unit 1401 decodes the encoded signal (encoded bit stream) in a variable length. For example, the entropy decoding unit 1401 arithmetically decodes the coded signal to generate a binary signal, and generates a quantization coefficient from the generated binary signal.
- the inverse quantization unit 1402 generates an inverse quantization coefficient by inversely quantizing the quantization coefficient input from the entropy decoding unit 1401 using the quantization parameter added to the bit stream or the like.
- the inverse conversion unit 1403 generates a predicted residual by inversely converting the inverse quantization coefficient input from the inverse quantization unit 1402. For example, the inverse conversion unit 1403 generates a predicted residual by performing an inverse orthogonal transformation of the inverse quantization coefficient based on the information added to the bit stream.
- the addition unit 1404 adds the prediction residual generated by the inverse conversion unit 1403 and the prediction volume generated by the intra prediction or the inter prediction to generate the reconstructed volume.
- This reconstructed volume is output as decoded three-dimensional data and is stored in the reference volume memory 1405 or the reference space memory 1407.
- the intra prediction unit 1406 generates a predicted volume by intra prediction using the reference volume in the reference volume memory 1405 and the information added to the bit stream. Specifically, the intra prediction unit 1406 acquires the adjacent volume information (for example, volume idx) added to the bit stream and the prediction mode information, and uses the adjacent volume indicated by the adjacent volume information to obtain the prediction mode information. The predicted volume is generated according to the mode indicated by.
- the details of these processes are the same as the processes by the intra-prediction unit 1309 described above, except that the information given to the bit stream is used.
- the inter-prediction unit 1408 generates a prediction volume by inter-prediction using the reference space in the reference space memory 1407 and the information added to the bit stream. Specifically, the inter-prediction unit 1408 applies rotation and translation processing to the reference space using the RT information for each reference space added to the bit stream, and uses the applied reference space to calculate the prediction volume. Generate. When the RT application flag for each reference space exists in the bit stream, the inter-prediction unit 1408 applies rotation and translation processing to the reference space according to the RT application flag. The details of these processes are the same as the processes by the inter-prediction unit 1311 described above, except that the information given to the bit stream is used.
- the prediction control unit 1409 controls whether to decode the decoding target volume by intra-prediction or inter-prediction. For example, the prediction control unit 1409 selects intra prediction or inter prediction according to the information added to the bit stream indicating the prediction mode to be used. Note that the prediction control unit 1409 may always select intra prediction when it is determined in advance that the decoding target space is decoded in the intra space.
- the three-dimensional data encoding device 1300 may divide a space into subspaces and apply rotation and translation in units of subspaces. In this case, the three-dimensional data encoding device 1300 generates RT information for each subspace, and adds the generated RT information to the header of the bit stream or the like. Further, the three-dimensional data coding apparatus 1300 may apply rotation and translation in volume units, which are coding units.
- the three-dimensional data coding apparatus 1300 generates RT information for each coded volume, and adds the generated RT information to the header of the bit stream or the like. Further, the above may be combined. That is, the three-dimensional data coding apparatus 1300 may apply rotation and translation in large units, and then apply rotation and translation in fine units. For example, the three-dimensional data encoding device 1300 may apply rotation and translation in space units, and may apply different rotations and translations to each of the plurality of volumes contained in the obtained space.
- the three-dimensional data coding apparatus 1300 may apply scale processing to change the size of the three-dimensional data.
- the three-dimensional data encoding device 1300 may apply any one or two of rotation, translation, and scale.
- the type of processing applied to each unit may be different. For example, rotation and translation may be applied in space units, and translation may be applied in volume units.
- FIG. 49 is a flowchart of the inter-prediction process by the three-dimensional data coding apparatus 1300.
- the three-dimensional data coding apparatus 1300 generates predicted position information (for example, a predicted volume) by using the position information of the three-dimensional points included in the reference three-dimensional data (for example, the reference space) at a time different from that of the target three-dimensional data (for example, the coding target space) (S1301).
- the three-dimensional data coding apparatus 1300 generates predicted position information by applying rotation and translation processing to the position information of the three-dimensional points included in the reference three-dimensional data.
- the three-dimensional data coding apparatus 1300 may perform the rotation and translation processing in a first unit (for example, a space), and generate the predicted position information in a second unit (for example, a volume) finer than the first unit.
- for example, among the plurality of volumes included in the reference space after the rotation and translation processing, the three-dimensional data coding apparatus 1300 searches for the volume whose position information has the smallest difference from the coding target volume included in the coding target space, and uses the obtained volume as the predicted volume.
- the three-dimensional data coding apparatus 1300 may perform rotation and translation processing and generation of predicted position information in the same unit.
- the three-dimensional data encoding device 1300 may generate the predicted position information by applying first rotation and translation processing in a first unit (for example, a space) to the position information of the three-dimensional points included in the reference three-dimensional data, and then applying second rotation and translation processing in a second unit (for example, a volume) finer than the first unit to the position information of the three-dimensional points obtained by the first rotation and translation processing.
- the position information and the predicted position information of the three-dimensional points are represented by an octree structure as shown in FIG. 41, for example.
- for example, the position information and the predicted position information of the three-dimensional points are represented in a scan order in which breadth is prioritized over depth in the octree structure.
- alternatively, the position information and the predicted position information of the three-dimensional points are represented in a scan order in which depth is prioritized over breadth in the octree structure.
- further, the three-dimensional data coding apparatus 1300 encodes an RT application flag indicating whether or not to apply rotation and translation processing to the position information of the three-dimensional points included in the reference three-dimensional data. That is, the three-dimensional data coding apparatus 1300 generates a coded signal (coded bit stream) including the RT application flag. Further, the three-dimensional data coding apparatus 1300 encodes RT information indicating the contents of the rotation and translation processing. That is, the three-dimensional data coding device 1300 generates a coded signal (coded bit stream) including the RT information. The three-dimensional data coding apparatus 1300 encodes the RT information when the RT application flag indicates that rotation and translation processing are applied, and need not encode the RT information when the RT application flag indicates that rotation and translation processing are not applied.
- the three-dimensional data includes, for example, position information of three-dimensional points and attribute information (color information, etc.) of each three-dimensional point.
- the three-dimensional data coding apparatus 1300 generates predictive attribute information using the attribute information of the three-dimensional points included in the reference three-dimensional data (S1302).
- the three-dimensional data coding device 1300 encodes the position information of the three-dimensional points included in the target three-dimensional data by using the predicted position information. For example, the three-dimensional data coding apparatus 1300 calculates the difference position information which is the difference between the position information of the three-dimensional points included in the target three-dimensional data and the predicted position information as shown in FIG. 38 (S1303).
- the three-dimensional data coding device 1300 encodes the attribute information of the three-dimensional points included in the target three-dimensional data by using the predicted attribute information. For example, the three-dimensional data encoding device 1300 calculates the difference attribute information which is the difference between the attribute information of the three-dimensional point included in the target three-dimensional data and the predicted attribute information (S1304). Next, the three-dimensional data coding apparatus 1300 converts and quantizes the calculated difference attribute information (S1305).
- the three-dimensional data coding device 1300 encodes (for example, entropy coding) the difference position information and the difference attribute information after quantization (S1306). That is, the three-dimensional data coding apparatus 1300 generates a coded signal (coded bit stream) including the difference position information and the difference attribute information.
- the three-dimensional data encoding device 1300 does not have to perform steps S1302, S1304, and S1305. Further, the three-dimensional data coding apparatus 1300 may perform only one of the coding of the position information of the three-dimensional points and the coding of the attribute information of the three-dimensional points.
- the order of processing shown in FIG. 49 is an example, and is not limited to this.
- since the processing for position information (S1301, S1303) and the processing for attribute information (S1302, S1304, S1305) are independent of each other, they may be performed in any order, or some of them may be processed in parallel.
- as described above, the three-dimensional data encoding device 1300 generates predicted position information using the position information of the three-dimensional points included in the reference three-dimensional data at a time different from that of the target three-dimensional data, and encodes the difference position information, which is the difference between the position information of the three-dimensional points included in the target three-dimensional data and the predicted position information.
- as a result, the amount of data of the coded signal can be reduced, so that the coding efficiency can be improved.
- the three-dimensional data encoding device 1300 also generates predicted attribute information using the attribute information of the three-dimensional points included in the reference three-dimensional data, and encodes the difference attribute information, which is the difference between the attribute information of the three-dimensional points included in the target three-dimensional data and the predicted attribute information. As a result, the amount of data of the coded signal can be reduced, so that the coding efficiency can be improved.
- the three-dimensional data coding device 1300 includes a processor and a memory, and the processor uses the memory to perform the above processing.
- FIG. 48 is a flowchart of inter-prediction processing by the three-dimensional data decoding device 1400.
- the three-dimensional data decoding device 1400 decodes (for example, entropy decoding) the difference position information and the difference attribute information from the coded signal (coded bit stream) (S1401).
- the three-dimensional data decoding device 1400 decodes, from the coded signal, an RT application flag indicating whether or not rotation and translation processing is applied to the position information of the three-dimensional points included in the reference three-dimensional data. Further, the three-dimensional data decoding device 1400 decodes RT information indicating the contents of the rotation and translation processing. The three-dimensional data decoding device 1400 decodes the RT information when the RT application flag indicates that rotation and translation processing is applied, and does not need to decode the RT information when the RT application flag indicates that it is not applied.
- the three-dimensional data decoding device 1400 performs inverse quantization and inverse conversion on the decoded difference attribute information (S1402).
- the three-dimensional data decoding device 1400 generates predicted position information (for example, a predicted volume) using the position information of the three-dimensional points included in the reference three-dimensional data (for example, the reference space) at a time different from that of the target three-dimensional data (for example, the decoding target space) (S1403). Specifically, the three-dimensional data decoding device 1400 generates the predicted position information by applying rotation and translation processing to the position information of the three-dimensional points included in the reference three-dimensional data.
- when the RT application flag indicates that rotation and translation processing is applied, the three-dimensional data decoding device 1400 applies the rotation and translation processing indicated by the RT information to the position information of the three-dimensional points included in the reference three-dimensional data.
- when the RT application flag indicates that rotation and translation processing is not applied, the three-dimensional data decoding device 1400 does not apply rotation and translation processing to the position information of the three-dimensional points included in the reference three-dimensional data.
- the three-dimensional data decoding device 1400 may perform the rotation and translation processing in a first unit (for example, space) and generate the predicted position information in a second unit (for example, volume) finer than the first unit.
- the three-dimensional data decoding apparatus 1400 may perform rotation and translation processing and generation of predicted position information in the same unit.
- the three-dimensional data decoding device 1400 may apply the first rotation and translation processing in a first unit (for example, space) to the position information of the three-dimensional points included in the reference three-dimensional data, and then generate the predicted position information by applying the second rotation and translation processing in a second unit (for example, volume) finer than the first unit to the position information of the three-dimensional points obtained by the first rotation and translation processing.
- the position information and the predicted position information of the three-dimensional points are represented by an octree structure as shown in FIG. 41, for example.
- for example, the position information and the predicted position information of the three-dimensional points are represented in a breadth-first scan order of the octree structure.
- alternatively, the position information and the predicted position information of the three-dimensional points are represented in a depth-first scan order of the octree structure.
- the three-dimensional data decoding device 1400 generates predicted attribute information using the attribute information of the three-dimensional points included in the reference three-dimensional data (S1404).
- the three-dimensional data decoding device 1400 restores the position information of the three-dimensional points included in the target three-dimensional data by decoding the coded position information included in the coded signal using the predicted position information.
- the coded position information is, for example, difference position information; in this case, the three-dimensional data decoding device 1400 restores the position information of the three-dimensional points included in the target three-dimensional data by adding the difference position information and the predicted position information (S1405).
- the three-dimensional data decoding device 1400 restores the attribute information of the three-dimensional points included in the target three-dimensional data by decoding the coded attribute information included in the coded signal using the predicted attribute information.
- the coded attribute information is, for example, difference attribute information; in this case, the three-dimensional data decoding device 1400 restores the attribute information of the three-dimensional points included in the target three-dimensional data by adding the difference attribute information and the predicted attribute information (S1406).
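The restoration in steps S1405 and S1406 reduces to adding the decoded difference (residual) to the prediction. A minimal sketch in Python; the function and variable names are illustrative, not from the specification:

```python
# Restore a value as prediction + decoded difference, per component,
# as done for both position information and attribute information.
def restore(predicted, difference):
    return [p + d for p, d in zip(predicted, difference)]

# Position example: predicted position (x, y, z) plus decoded difference.
predicted_pos = [10, 20, 30]
diff_pos = [1, -2, 0]
print(restore(predicted_pos, diff_pos))  # [11, 18, 30]
```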
- the three-dimensional data decoding apparatus 1400 does not have to perform steps S1402, S1404, and S1406. Further, the three-dimensional data decoding apparatus 1400 may perform only one of the decoding of the position information of the three-dimensional point and the decoding of the attribute information of the three-dimensional point.
- since the processing for position information (S1403, S1405) and the processing for attribute information (S1402, S1404, S1406) are independent of each other, they may be performed in any order, or some of them may be processed in parallel.
- the information of the three-dimensional point cloud includes position information (geometry) and attribute information (attribute).
- the position information includes coordinates (x coordinate, y coordinate, z coordinate) with respect to a certain point.
- the attribute information includes information indicating the color information (RGB, YUV, etc.), reflectance, normal vector, etc. of each three-dimensional point.
- the three-dimensional data coding device can encode the attribute information by using a coding method different from the position information.
- in the following, integer values are used as the values of the attribute information.
- for example, when each color component of the color information RGB or YUV has 8-bit precision, each color component takes an integer value from 0 to 255.
- when the reflectance value has 10-bit precision, the reflectance value takes an integer value from 0 to 1023.
- when the value of the attribute information is not an integer, the three-dimensional data encoding device may multiply the value by a scale value and then round it so that the value of the attribute information becomes an integer value.
- the three-dimensional data encoding device may add this scale value to the header of the bit stream or the like.
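A minimal sketch of this scale-based integer conversion. The scale value of 100 and the function names below are assumptions for illustration; in practice the scale value would be the one signaled in the header:

```python
# Convert a non-integer attribute value to an integer by multiplying by a
# scale value and rounding; the decoder divides by the same scale value.
def to_integer(value, scale):
    return int(round(value * scale))

def from_integer(ivalue, scale):
    return ivalue / scale

scale = 100  # assumed scale value added to the bitstream header
q = to_integer(3.14159, scale)
print(q, from_integer(q, scale))  # 314 3.14
```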
- the code amount can be reduced by entropy-coding the difference absolute value Diffp using a coding table in which the number of generated bits decreases as the value becomes smaller.
- the reference three-dimensional point is a three-dimensional point within a predetermined distance range from the target three-dimensional point.
- for example, when the distance between the target three-dimensional point p and a three-dimensional point q is small, the three-dimensional data encoding device determines that the position of the three-dimensional point q is close to the position of the target three-dimensional point p, and determines that the value of the attribute information of the three-dimensional point q is used to generate the predicted value of the attribute information of the target three-dimensional point p.
- the distance calculation method may be another method, for example, Mahalanobis distance or the like may be used. Further, the three-dimensional data coding apparatus may determine that the three-dimensional point outside the predetermined distance range from the target three-dimensional point is not used for the prediction process.
- for example, when the distance between the target three-dimensional point p and a three-dimensional point r is equal to or greater than the threshold value THd, the three-dimensional data encoding device may determine that the three-dimensional point r is not used for prediction of the target three-dimensional point p.
- the three-dimensional data encoding device may add information indicating the threshold value THd to the header of the bit stream or the like.
- FIG. 51 is a diagram showing an example of three-dimensional points.
- the distance d(p, q) between the target three-dimensional point p and the three-dimensional point q is smaller than the threshold value THd. Therefore, the three-dimensional data encoding device determines that the three-dimensional point q is a reference three-dimensional point of the target three-dimensional point p, and that the value of the attribute information Aq of the three-dimensional point q is used to generate the predicted value Pp of the attribute information Ap of the target three-dimensional point p.
- on the other hand, the three-dimensional data encoding device determines that the three-dimensional point r is not a reference three-dimensional point of the target three-dimensional point p, and that the value of the attribute information Ar of the three-dimensional point r is not used to generate the predicted value Pp of the attribute information Ap of the target three-dimensional point p.
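The reference-point selection described above can be sketched as follows, assuming Euclidean distance d(p, q); the function names are illustrative:

```python
import math

# Euclidean distance between two 3D points.
def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Keep only candidate points closer to the target than the threshold THd;
# points at or beyond THd are not used for prediction.
def select_reference_points(target, candidates, thd):
    return [q for q in candidates if distance(target, q) < thd]

p = (0.0, 0.0, 0.0)
q = (1.0, 0.0, 0.0)   # close: becomes a reference point
r = (10.0, 0.0, 0.0)  # far: excluded from prediction
print(select_reference_points(p, [q, r], 5.0))  # [(1.0, 0.0, 0.0)]
```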
- when the three-dimensional data encoding device encodes the attribute information of the target three-dimensional point using a predicted value, it uses a three-dimensional point whose attribute information has already been encoded and decoded as the reference three-dimensional point.
- the three-dimensional data decoding device uses the three-dimensional point whose attribute information has already been decoded as the reference three-dimensional point.
- each classified layer is referred to as LoD (Level of Detail).
- the three-dimensional data encoding device selects the initial point a0 and assigns it to LoD0.
- the three-dimensional data encoding device extracts a point a1 whose distance from the point a0 is larger than the threshold Thres_LoD [0] of LoD0 and assigns it to LoD0.
- the three-dimensional data encoding device extracts a point a2 whose distance from the point a1 is larger than the threshold Thres_LoD [0] of LoD0 and assigns it to LoD0.
- the three-dimensional data encoding device configures LoD0 so that the distance between each point in LoD0 is larger than the threshold value Thres_LoD [0].
- the three-dimensional data encoding device selects the point b0 to which LoD has not yet been assigned and assigns it to LoD1.
- the three-dimensional data encoding device extracts the unassigned point b1 whose distance from the point b0 is larger than the threshold Thres_LoD [1] of LoD1 and assigns it to LoD1.
- the three-dimensional data encoding device extracts the unassigned point b2 whose distance from the point b1 is larger than the threshold Thres_LoD [1] of LoD1 and assigns it to LoD1.
- the three-dimensional data encoding device configures LoD1 so that the distance between each point in LoD1 is larger than the threshold value Thres_LoD [1].
- the three-dimensional data encoding device selects the point c0 to which LoD has not yet been assigned and assigns it to LoD2.
- the three-dimensional data encoding device extracts the unassigned point c1 whose distance from the point c0 is larger than the threshold Thres_LoD [2] of LoD2 and assigns it to LoD2.
- the three-dimensional data encoding device extracts the unassigned point c2 whose distance from the point c1 is larger than the threshold Thres_LoD [2] of LoD2 and assigns it to LoD2.
- the three-dimensional data encoding device configures LoD2 so that the distance between each point in LoD2 is larger than the threshold value Thres_LoD [2].
- in this way, the thresholds Thres_LoD [0], Thres_LoD [1], and Thres_LoD [2] of the respective LoDs are set.
- the three-dimensional data encoding device may add information indicating the threshold value of each LoD to the header of the bit stream or the like. For example, in the case of the example shown in FIG. 53, the three-dimensional data encoding device may add the thresholds Thres_LoD [0], Thres_LoD [1], and Thres_LoD [2] to the header.
- the three-dimensional data encoding device may allocate all three-dimensional points to which LoD is not assigned to the lowest layer of LoD. In this case, the three-dimensional data coding apparatus can reduce the code amount of the header by not adding the threshold value of the lowest layer of LoD to the header. For example, in the case of the example shown in FIG. 53, the three-dimensional data encoding device adds the thresholds Thres_LoD [0] and Thres_LoD [1] to the header, and does not add Thres_LoD [2] to the header. In this case, the three-dimensional data decoding device may estimate the value of Thres_LoD [2] to be 0. Further, the three-dimensional data encoding device may add the number of layers of LoD to the header. As a result, the three-dimensional data decoding device can determine the LoD of the lowest layer by using the number of layers of LoD.
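The LoD construction above can be sketched as follows. This is a simplified reading of the procedure (each layer keeps only points farther than its threshold from every point already in that layer, and all leftover points are allocated to the lowest layer), not the normative algorithm:

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Assign points to LoD layers: a point enters layer l only if it is farther
# than thresholds[l] from every point already assigned to layer l; points
# left over after all thresholds go to the lowest layer.
def build_lods(points, thresholds):
    layers = [[] for _ in range(len(thresholds) + 1)]
    unassigned = list(points)
    for level, th in enumerate(thresholds):
        remaining = []
        for p in unassigned:
            if all(dist(p, q) > th for q in layers[level]):
                layers[level].append(p)
            else:
                remaining.append(p)
        unassigned = remaining
    layers[-1] = unassigned  # lowest layer gets all remaining points
    return layers

pts = [(0, 0, 0), (4, 0, 0), (1, 0, 0), (5, 0, 0)]
print(build_lods(pts, [2.0]))
# LoD0 keeps points more than 2.0 apart; the rest fall to the lowest layer
```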
- in the LoD hierarchy, a higher layer (a layer closer to LoD0) is a sparse point cloud in which the three-dimensional points are farther apart, and a lower layer is a dense point cloud in which the three-dimensional points are closer together.
- LoD0 is the uppermost layer.
- the method of selecting the initial three-dimensional point when setting each LoD may depend on the coding order used at the time of position information coding. For example, the three-dimensional data encoding device selects, as the initial point a0 of LoD0, the first three-dimensional point encoded at the time of position information coding, and configures LoD0 by selecting the points a1 and a2 with the initial point a0 as the base point. Then, the three-dimensional data encoding device may select, as the initial point b0 of LoD1, the earliest three-dimensional point whose position information is encoded among the three-dimensional points that do not belong to LoD0.
- that is, the three-dimensional data encoding device may select, as the initial point n0 of LoDn, the earliest three-dimensional point whose position information is encoded among the three-dimensional points that do not belong to the layers above LoDn (LoD0 to LoDn-1).
- the three-dimensional data decoding device can configure the same LoD as at the time of encoding by using the same initial point selection method at the time of decoding, so that the bit stream can be appropriately decoded.
- the three-dimensional data decoding apparatus selects, as the initial point n0 of LoDn, the three-dimensional point whose position information is decoded earliest among the three-dimensional points that do not belong to the upper layer of LoDn.
- the method of generating the predicted value of the attribute information of the three-dimensional point by using the LoD information will be described below.
- for example, the three-dimensional data encoding device encodes the three-dimensional points included in LoD0 in order, and the predicted values of the target three-dimensional points included in LoD1 are generated using the attribute information of the encoded and decoded (hereinafter simply referred to as "encoded") three-dimensional points included in LoD0 and LoD1.
- the three-dimensional data encoding device generates the predicted value of the attribute information of a three-dimensional point by calculating the average of the attribute values of N or fewer encoded three-dimensional points around the target three-dimensional point to be encoded. Further, the three-dimensional data encoding device may add the value of N to the header of the bit stream or the like. The three-dimensional data encoding device may change the value of N for each three-dimensional point and add the value of N for each three-dimensional point. As a result, an appropriate N can be selected for each three-dimensional point, so the accuracy of the predicted value can be improved and the prediction residual can be reduced.
- the three-dimensional data encoding device may add the value of N to the header of the bitstream and fix the value of N in the bitstream. As a result, it is not necessary to encode or decode the value of N for each three-dimensional point, so that the amount of processing can be reduced. Further, the three-dimensional data encoding device may separately encode the value of N for each LoD. As a result, the coding efficiency can be improved by selecting an appropriate N for each LoD.
- the three-dimensional data encoding device may calculate the predicted value of the attribute information of the three-dimensional points by the weighted average value of the attribute information of the surrounding N coded three-dimensional points. For example, the three-dimensional data encoding device calculates the weight using the distance information of the target three-dimensional point and the surrounding N three-dimensional points.
- for example, a larger value of N may be set for the upper layers of the LoD hierarchy and a smaller value of N for the lower layers. Since the distance between the three-dimensional points belonging to an upper layer of LoD is large, the prediction accuracy may be improved by setting a large value of N and selecting and averaging a plurality of surrounding three-dimensional points. Since the distance between the three-dimensional points belonging to a lower layer of LoD is short, efficient prediction is possible while suppressing the amount of averaging processing by setting a small value of N.
- FIG. 54 is a diagram showing an example of attribute information used for the predicted value.
- the peripheral point P' is selected based on the distance from the point P.
- the predicted value of the attribute information of the point b2 shown in FIG. 54 is generated by using the attribute information of the points a0, a1, a2, b0, and b1.
- the predicted value is calculated by a distance-dependent weighted average.
- the predicted value a2p of the point a2 is calculated by the weighted average of the attribute information of the points a0 and a1 as shown in (Equation A2) and (Equation A3).
- here, Ai is the value of the attribute information of the point ai.
- the predicted value b2p of the point b2 is calculated by the weighted average of the attribute information of the points a0, a1, a2, b0, and b1 as shown in (Equation A4) to (Equation A6).
- here, Bi is the value of the attribute information of the point bi.
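A sketch of the distance-dependent weighted average used for the predicted values. The exact weights are given by (Equation A2) to (Equation A6), which are not reproduced here; inverse-distance weights and illustrative function names are assumed:

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Distance-dependent weighted average of neighbor attribute values;
# neighbors is a list of (position, attribute_value) pairs of encoded
# points, assumed distinct from the target position.
def predict_attribute(target_pos, neighbors):
    weights = [1.0 / dist(target_pos, pos) for pos, _ in neighbors]
    total = sum(weights)
    return sum(w * a for w, (_, a) in zip(weights, neighbors)) / total

neighbors = [((1.0, 0.0, 0.0), 100), ((2.0, 0.0, 0.0), 200)]
p = (0.0, 0.0, 0.0)
print(predict_attribute(p, neighbors))  # the closer point is weighted more
```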
- the three-dimensional data encoding device calculates the difference value (predicted residual) between the value of the attribute information of the three-dimensional point and the predicted value generated from the surrounding points, and quantizes the calculated predicted residual.
- a three-dimensional data encoder performs quantization by dividing the predicted residuals by a quantization scale (also called a quantization step).
- the smaller the quantization scale, the smaller the error (quantization error) caused by quantization; conversely, the larger the quantization scale, the larger the quantization error.
- the three-dimensional data coding device may change the quantization scale used for each LoD.
- for example, a smaller quantization scale may be used for the upper layers and a larger quantization scale for the lower layers. Since the attribute information values of the three-dimensional points belonging to an upper layer may be used as predicted values for the three-dimensional points belonging to the lower layers, coding efficiency can be improved by using a small quantization scale in the upper layer, which suppresses the quantization error that can occur there and improves the accuracy of the predicted values.
- the three-dimensional data coding device may add a quantization scale used for each LoD to the header or the like. As a result, the three-dimensional data decoding device can correctly decode the quantization scale, so that the bit stream can be appropriately decoded.
- the three-dimensional data coding apparatus may convert a signed integer value (signed quantization value), which is a predicted residual after quantization, into an unsigned integer value (unsigned quantization value). This eliminates the need to consider the occurrence of negative integers when entropy encoding the predicted residuals.
- the three-dimensional data coding apparatus does not necessarily have to convert the signed integer value into an unsigned integer value, and for example, the sign bit may be separately entropy-encoded.
- the predicted residual is calculated by subtracting the predicted value from the original value.
- for example, as shown in (Equation A7), the prediction residual a2r of the point a2 is calculated by subtracting the predicted value a2p of the point a2 from the value A2 of the attribute information of the point a2.
- the predicted residual is quantized by dividing by QS (Quantization Step).
- the quantized value a2q at point a2 is calculated by (Equation A9).
- the quantized value b2q at point b2 is calculated by (Equation A10).
- here, QS_LoD0 is the QS for LoD0, and QS_LoD1 is the QS for LoD1. That is, the QS may be changed according to the LoD.
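A sketch of quantizing the prediction residual with a QS chosen per LoD, as in (Equation A9) and (Equation A10). Rounding to the nearest integer is an assumption here; the equations may specify a different rounding:

```python
# Quantize a prediction residual by dividing by the quantization step QS;
# dequantization multiplies back, leaving a quantization error of up to QS/2.
def quantize(residual, qs):
    return int(round(residual / qs))

def dequantize(quantized, qs):
    return quantized * qs

QS_LoD0 = 1  # smaller QS for the upper layer (smaller quantization error)
QS_LoD1 = 2  # larger QS for the lower layer

a2r = 5  # residual of a point in LoD0
b2r = 6  # residual of a point in LoD1
print(quantize(a2r, QS_LoD0), quantize(b2r, QS_LoD1))  # 5 3
```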
- the three-dimensional data encoding device converts the signed integer value, which is the above-mentioned quantization value, into an unsigned integer value as follows.
- when the signed integer value a2q is smaller than 0, the three-dimensional data encoding device sets the unsigned integer value a2u to -1 - (2 × a2q). When the signed integer value a2q is 0 or more, the three-dimensional data encoding device sets the unsigned integer value a2u to 2 × a2q.
- similarly, when the signed integer value b2q is smaller than 0, the three-dimensional data encoding device sets the unsigned integer value b2u to -1 - (2 × b2q). When the signed integer value b2q is 0 or more, the three-dimensional data encoding device sets the unsigned integer value b2u to 2 × b2q.
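The signed-to-unsigned mapping above sends negative quantized values to odd unsigned values and non-negative ones to even values, so the entropy coder never has to handle negative integers. A direct transcription (the inverse mapping is added here for illustration):

```python
# Map a signed quantized value q to an unsigned value u:
#   q < 0  ->  u = -1 - 2q  (odd)
#   q >= 0 ->  u = 2q       (even)
def to_unsigned(q):
    return -1 - (2 * q) if q < 0 else 2 * q

# Inverse mapping used on the decoder side.
def to_signed(u):
    return -(u + 1) // 2 if u % 2 == 1 else u // 2

for q in (-3, -1, 0, 1, 2):
    print(q, "->", to_unsigned(q))
```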
- the three-dimensional data encoding device may encode the predicted residual (unsigned integer value) after quantization by entropy coding.
- unsigned integer values may be binarized and then binary arithmetic coding may be applied.
- the three-dimensional data encoding device may switch the binarization method according to the value of the predicted residual. For example, when the predicted residual pu is smaller than the threshold value R_TH, the three-dimensional data encoding device binarizes the predicted residual pu with the fixed number of bits required to express the threshold value R_TH. When the predicted residual pu is equal to or greater than the threshold value R_TH, the three-dimensional data encoding device binarizes the value of (pu - R_TH) using exponential Golomb or the like and combines it with the binarized data of the threshold value R_TH.
- for example, when the threshold value R_TH is 63 and the predicted residual pu is smaller than 63, the three-dimensional data encoding device binarizes the predicted residual pu with 6 bits. When the predicted residual pu is 63 or more, the three-dimensional data encoding device performs arithmetic coding by binarizing the binary data (111111) of the threshold value R_TH and (pu - 63) using exponential Golomb.
- in a more specific example, when the predicted residual pu is 32, the three-dimensional data encoding device generates the 6-bit binary data (100000) and arithmetically encodes this bit string. When the predicted residual pu is 66, the three-dimensional data encoding device generates the binary data (111111) of the threshold value R_TH and the bit string (00100) representing the value 3 (66 - 63) in exponential Golomb, and arithmetically encodes the combined bit string (111111 + 00100).
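The binarization in this example (R_TH = 63, a 6-bit fixed part, and an order-0 exponential Golomb code for the excess) can be sketched as:

```python
# Order-0 exponential Golomb code of a non-negative integer:
# (leading zeros) + binary(value + 1); e.g. 3 -> "00100".
def exp_golomb(value):
    code = bin(value + 1)[2:]
    return "0" * (len(code) - 1) + code

# Binarize a predicted residual pu: fixed nbits bits below the threshold,
# otherwise the fixed code of R_TH followed by exp-Golomb of (pu - R_TH).
def binarize(pu, r_th=63, nbits=6):
    if pu < r_th:
        return format(pu, "0{}b".format(nbits))
    return format(r_th, "0{}b".format(nbits)) + exp_golomb(pu - r_th)

print(binarize(32))  # 100000
print(binarize(66))  # 11111100100  (111111 + 00100)
```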
- by switching the binarization method according to the magnitude of the predicted residual in this way, the three-dimensional data encoding device can encode while suppressing a sharp increase in the number of binarized bits when the predicted residual becomes large.
- the three-dimensional data encoding device may add the threshold value R_TH to the header of the bit stream or the like.
- for example, when large prediction residuals are unlikely to occur, the three-dimensional data encoding device sets the threshold value R_TH to a large value. This reduces the possibility of having to encode the binarized data of the threshold value R_TH, so the coding efficiency is improved.
- conversely, when large prediction residuals are likely to occur, the three-dimensional data encoding device sets the threshold value R_TH to a small value. As a result, a sudden increase in the bit length of the binarized data can be prevented.
- the three-dimensional data encoding device may switch the threshold value R_TH for each LoD and add the threshold value R_TH for each LoD to the header or the like. That is, the three-dimensional data encoding device may switch the binarization method for each LoD. For example, in an upper layer the distance between the three-dimensional points is long, so the prediction accuracy is poor and the prediction residual may be large as a result. Therefore, the three-dimensional data encoding device prevents a sudden increase in the bit length of the binarized data by setting the threshold value R_TH small for the upper layers. In a lower layer the distance between the three-dimensional points is short, so the prediction accuracy may be high and the prediction residual small as a result. Therefore, the three-dimensional data encoding device improves the coding efficiency by setting the threshold value R_TH large for the lower layers.
- FIG. 55 is a diagram showing an example of the exponential Golomb code, and is a diagram showing the relationship between the value before binarization (multi-value) and the bit (code) after binarization. In addition, 0 and 1 shown in FIG. 55 may be inverted.
- for example, the three-dimensional data encoding device applies arithmetic coding to the binarized data of the predicted residuals, which can improve the coding efficiency.
- the binarized data consists of an n-bit code, which is the part binarized with a fixed n bits, and a remaining code, which is the part binarized using exponential Golomb.
- the three-dimensional data encoding device may switch the method of applying arithmetic coding between the n-bit code and the remaining code.
- a three-dimensional data coding device performs arithmetic coding on an n-bit code using a different coding table (probability table) for each bit.
- the three-dimensional data coding apparatus may change the number of coding tables used for each bit. For example, the three-dimensional data coding apparatus performs arithmetic coding using one coding table for the first bit b0 of the n-bit code. Further, the three-dimensional data coding apparatus uses two coding tables for the next bit b1. Further, the three-dimensional data coding apparatus switches the coding table used for arithmetic coding of bit b1 according to the value (0 or 1) of b0.
- the three-dimensional data encoding device further uses four coding tables for the next bit b2. Further, the three-dimensional data coding apparatus switches the coding table used for arithmetic coding of bit b2 according to the values (0 to 3) of b0 and b1.
- in this way, the three-dimensional data encoding device uses 2^(n-1) coding tables when arithmetically coding the bit bn-1 of the n-bit code, and switches the coding table to be used according to the values (generation pattern) of the bits before bn-1. As a result, the three-dimensional data encoding device can use an appropriate coding table for each bit, so that the coding efficiency can be improved.
- the three-dimensional data encoding device may reduce the number of coding tables used for each bit. For example, when arithmetically coding the bit bn-1, the three-dimensional data encoding device may switch among 2^m coding tables according to the values (generation pattern) of the m bits (m < n-1) before bn-1. As a result, the coding efficiency can be improved while reducing the number of coding tables used for each bit.
- the three-dimensional data coding apparatus may update the probability of occurrence of 0 and 1 in each coding table according to the value of the binarized data actually generated. Further, the three-dimensional data encoding device may fix the occurrence probabilities of 0 and 1 in the coding table of some bits. As a result, the number of updates of the occurrence probability can be suppressed, so that the processing amount can be reduced.
- the number of coding tables for b0 is one (CTb0).
- There are two coding tables for b1 (CTb10, CTb11). Further, the coding table to be used is switched according to the value (0 or 1) of b0.
- There are four coding tables for b2 (CTb20, CTb21, CTb22, CTb23). Further, the coding table to be used is switched according to the values (0 to 3) of b0 and b1.
- There are 2^(n-1) coding tables for bn-1 (CTbn0, CTbn1, ..., CTbn(2^(n-1)-1)). Further, the coding table to be used is switched according to the value of b0b1...bn-2 (0 to 2^(n-1)-1).
- FIG. 56 is a diagram for explaining processing when, for example, the remaining code is an exponential Golomb code.
- the remaining code, which is the portion binarized using exponential Golomb, includes a prefix part and a suffix part as shown in FIG. 56.
- the three-dimensional data coding apparatus switches the coding table between the prefix part and the suffix part. That is, the three-dimensional data coding device arithmetically encodes each bit included in the prefix part using the coding table for the prefix, and arithmetically encodes each bit included in the suffix part using the coding table for the suffix.
- the three-dimensional data coding device may update the probability of occurrence of 0 and 1 in each coding table according to the value of the binarized data actually generated.
- the three-dimensional data coding apparatus may fix the occurrence probabilities of 0 and 1 in either coding table.
- the number of updates of the occurrence probability can be suppressed, so that the processing amount can be reduced.
- the three-dimensional data encoding device may update the occurrence probability with respect to the prefix unit and fix the occurrence probability with respect to the suffix unit.
- the three-dimensional data coding apparatus decodes the quantized predicted residual by inverse quantization and reconstruction, and uses the decoded value, which is the decoded predicted residual, for prediction of three-dimensional points subsequent to the three-dimensional point to be encoded. Specifically, the three-dimensional data encoding device calculates the inverse quantization value by multiplying the quantized predicted residual (quantization value) by the quantization scale, and adds the inverse quantization value and the predicted value to obtain the decoded value (reconstruction value).
- the inverse quantization value a2iq of the point a2 is calculated by (Equation A11) using the quantization value a2q of the point a2.
- the inverse quantization value b2iq of the point b2 is calculated by (Equation A12) using the quantization value b2q of the point b2.
- QS_LoD0 is a QS for LoD0
- QS_LoD1 is a QS for LoD1. That is, the QS may be changed according to LoD.
- the decoded value a2rec of the point a2 is calculated by adding the predicted value a2p of the point a2 to the inverse quantization value a2iq of the point a2, as shown in (Equation A13).
- the decoded value b2rec of the point b2 is calculated by adding the predicted value b2p of the point b2 to the inverse quantization value b2iq of the point b2.
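The encoder-side reconstruction described by (Equation A11) to (Equation A13) can be condensed into the following Python sketch. It is a simplified illustration, not the full apparatus: variable names mirror the equations, and the per-LoD quantization scale is passed in explicitly.

```python
def reconstruct(quantized, predicted, qs):
    """Inverse quantization and reconstruction.

    (Equation A11/A12): inverse quantization value = quantized value * QS
    (Equation A13):     decoded value = inverse quantization value + predicted value
    """
    inverse_quantized = quantized * qs
    return inverse_quantized + predicted

# Example with per-LoD scales (the QS may be changed according to LoD):
qs_lod = {0: 2, 1: 4}                    # QS_LoD0, QS_LoD1 (example values)
a2rec = reconstruct(5, 100, qs_lod[0])   # point a2 in LoD0
```

Because the encoder reconstructs with the same QS the decoder will use, later predictions on both sides are computed from identical decoded values.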
- FIG. 57 is a diagram showing an example of syntax of the attribute header (attribute_header) according to the present embodiment.
- the attribute header is header information of attribute information.
- the attribute header includes the layer number information (NumLoD), the three-dimensional point count information (NumOfPoint[i]), the layer threshold value (Thres_Lod[i]), the peripheral point count information (NumNeighborPoint[i]), the prediction threshold value (THd[i]), the quantization scale (QS[i]), and the binarization threshold value (R_TH[i]).
- the number of layers information indicates the number of layers of LoD used.
- the three-dimensional point count information indicates the number of three-dimensional points belonging to layer i.
- the three-dimensional data encoding device may add three-dimensional point total number information (AllNumOfPoint) indicating the total number of three-dimensional points to another header.
- the three-dimensional data encoding device does not have to add NumOfPoint[NumLoD-1], which indicates the number of three-dimensional points belonging to the lowest layer, to the header.
- In this case, the three-dimensional data decoding device can calculate NumOfPoint[NumLoD-1] by (Equation A15). As a result, the amount of code in the header can be reduced.
- the hierarchy threshold (Thres_Lod [i]) is a threshold used for setting the hierarchy i.
- the three-dimensional data encoding device and the three-dimensional data decoding device configure LoDi so that the distance between each point in LoDi is larger than the threshold value Thres_LoD[i]. Further, the three-dimensional data encoding device does not have to add the value of Thres_Lod[NumLoD-1] (the bottom layer) to the header. In this case, the three-dimensional data decoding device estimates the value of Thres_Lod[NumLoD-1] to be 0. As a result, the amount of code in the header can be reduced.
- The peripheral point count information indicates the upper limit of the number of peripheral points used to generate the predicted value of a three-dimensional point belonging to layer i.
- When the number of peripheral points M is smaller than NumNeighborPoint[i], the three-dimensional data coding apparatus may calculate the predicted value using the M peripheral points. Further, when the three-dimensional data encoding device does not need to separate the value of NumNeighborPoint[i] for each LoD, it may add one piece of peripheral point count information (NumNeighborPoint) used in all LoDs to the header.
- the prediction threshold value (THd [i]) indicates the upper limit of the distance between the surrounding 3D point and the target 3D point used for predicting the target 3D point to be encoded or decoded in the layer i.
- the three-dimensional data encoding device and the three-dimensional data decoding device do not use three-dimensional points whose distance from the target three-dimensional point exceeds THd[i] for prediction. If it is not necessary to separate the value of THd[i] for each LoD, the three-dimensional data encoding device may add one prediction threshold value (THd) used in all LoDs to the header.
- The quantization scale (QS[i]) indicates the quantization scale used in the quantization and inverse quantization of layer i.
- the binarization threshold value (R_TH [i]) is a threshold value for switching the binarization method of the predicted residual of the three-dimensional points belonging to the layer i.
- the three-dimensional data encoding device binarizes the predicted residual pu with a fixed number of bits when the predicted residual is smaller than the threshold value R_TH, and when the predicted residual is equal to or greater than the threshold value R_TH, generates the binarized data of the threshold value R_TH and binarizes the value of (pu - R_TH) using exponential Golomb.
- If it is not necessary to separate the value of R_TH[i] for each LoD, the three-dimensional data encoding device may add one binarization threshold value (R_TH) used in all LoDs to the header.
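The binarization rule above can be sketched in Python. This is a minimal sketch under assumptions: n is taken as the number of bits needed to represent R_TH (consistent with the 63 -> 6-bit, 255 -> 8-bit relation stated for the n-bit code), and order-0 exponential Golomb is assumed for the remaining part.

```python
def exp_golomb(v):
    """Order-0 exponential-Golomb code: v = 3 -> '00100'."""
    x = v + 1
    prefix = "0" * (x.bit_length() - 1)   # leading zeros = code length hint
    return prefix + format(x, "b")

def binarize_residual(pu, r_th):
    """Binarize an unsigned predicted residual pu.

    pu < r_th : fixed-length code of n bits.
    pu >= r_th: the n-bit part carries r_th itself, followed by the
                exponential-Golomb code of (pu - r_th).
    """
    n = r_th.bit_length()                 # 63 -> 6 bits, 255 -> 8 bits
    if pu < r_th:
        return format(pu, f"0{n}b")
    return format(r_th, f"0{n}b") + exp_golomb(pu - r_th)
```

For example, with R_TH = 63 a residual of 32 becomes the 6-bit code "100000", while 66 becomes "111111" (the binarized threshold) followed by the exponential-Golomb code of 3.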
- the three-dimensional data encoding device may entropy encode at least one of NumLoD, Thres_Lod[i], NumNeighborPoint[i], THd[i], QS[i], and R_TH[i] and add it to the header.
- the three-dimensional data encoding device may binarize each value and perform arithmetic coding.
- the three-dimensional data coding apparatus may encode each value with a fixed length in order to reduce the amount of processing.
- the three-dimensional data encoding device does not have to add at least one of NumLoD, Thres_Lod[i], NumNeighborPoint[i], THd[i], QS[i], and R_TH[i] to the header.
- at least one of these values may be specified by a profile or level of a standard or the like. As a result, the bit amount of the header can be reduced.
- FIG. 58 is a diagram showing an example of syntax of attribute data (attribute_data) according to the present embodiment.
- This attribute data includes encoded data of the attribute information of a plurality of three-dimensional points. As shown in FIG. 58, the attribute data includes an n-bit code and a remaining code.
- the n-bit code is the coded data of the predicted residual of the value of the attribute information, or a part thereof.
- the bit length of the n-bit code depends on the value of R_TH [i]. For example, when the value indicated by R_TH [i] is 63, the n-bit code is 6 bits, and when the value indicated by R_TH [i] is 255, the n-bit code is 8 bits.
- the remaining code is the portion of the coded data of the predicted residual of the value of the attribute information that is encoded with exponential Golomb.
- the remaining code is encoded or decoded when the n-bit code is the same as R_TH [i]. Further, the three-dimensional data decoding device adds the value of the n-bit code and the value of the remaining code to decode the predicted residual. If the n-bit code is not the same value as R_TH [i], the remaining code may not be encoded or decoded.
- FIG. 59 is a flowchart of the three-dimensional data coding process by the three-dimensional data coding apparatus.
- the three-dimensional data encoding device encodes the position information (geometry) (S3001). For example, the three-dimensional data encoding device performs coding using an octree representation.
- When the position of a three-dimensional point changes due to quantization or the like after the position information is encoded, the three-dimensional data encoding device reassigns the attribute information of the original three-dimensional points to the changed three-dimensional points (S3002). For example, the three-dimensional data encoding device reassigns by interpolating the value of the attribute information according to the amount of change in the position. For example, the three-dimensional data encoding device detects N three-dimensional points before the change that are close to the three-dimensional position after the change, and weight-averages the values of the attribute information of the N three-dimensional points.
- the three-dimensional data encoding device determines the weights in the weighted average based on the distance from the changed three-dimensional position to each of the N three-dimensional points. Then, the three-dimensional data encoding device determines the value obtained by the weighted averaging as the value of the attribute information of the three-dimensional point after the change. Further, when two or more three-dimensional points are changed to the same three-dimensional position due to quantization or the like, the three-dimensional data encoding device may assign the average value of the attribute information of the two or more three-dimensional points before the change as the value of the attribute information of the three-dimensional point after the change.
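The reassignment step can be sketched as below. Note the assumptions: the text only says the weights are based on distance, so inverse-distance weighting is an assumed rule, and `originals` is a hypothetical list of (position, attribute) pairs.

```python
import math

def reassign_attribute(new_pos, originals, n=3, eps=1e-9):
    """Weighted average of the attribute values of the N original
    three-dimensional points nearest to the changed position.

    Weights are taken as inverse distances (an assumed rule; eps
    guards against a zero distance)."""
    def dist(entry):
        return math.dist(new_pos, entry[0])
    nearest = sorted(originals, key=dist)[:n]
    weights = [1.0 / (dist(e) + eps) for e in nearest]
    values = [e[1] for e in nearest]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

When two original points are equidistant from the new position, the result reduces to their plain average, consistent with the equal-position rule described above.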
- the three-dimensional data encoding device encodes the attribute information (Attribute) after reassignment (S3003).
- the three-dimensional data encoding device may encode the plurality of types of attribute information in order.
- the three-dimensional data encoding device may generate a bit stream in which the reflectance coding result is added after the color coding result.
- the order of the plurality of encoding results of the attribute information added to the bit stream is not limited to this order, and may be any order.
- the three-dimensional data encoding device may add information indicating the encoded data start location of each attribute information in the bit stream to the header or the like.
- the three-dimensional data decoding device can selectively decode the attribute information that needs to be decoded, so that the decoding process of the attribute information that does not need to be decoded can be omitted. Therefore, the processing amount of the three-dimensional data decoding device can be reduced.
- the three-dimensional data coding apparatus may encode a plurality of types of attribute information in parallel and integrate the coding results into one bit stream. As a result, the three-dimensional data encoding device can encode a plurality of types of attribute information at high speed.
- FIG. 60 is a flowchart of the attribute information coding process (S3003).
- the three-dimensional data encoding device sets LoD (S3011). That is, the 3D data encoding device assigns each 3D point to any of the plurality of LoDs.
- the three-dimensional data encoding device starts a loop in units of LoD (S3012). That is, the three-dimensional data encoding device repeats the processes of steps S3013 to S3021 for each LoD.
- the three-dimensional data encoding device starts a loop in units of three-dimensional points (S3013). That is, the three-dimensional data coding apparatus repeats the processes of steps S3014 to S3020 for each three-dimensional point.
- the three-dimensional data encoding device searches for a plurality of peripheral points that are three-dimensional points existing around the target three-dimensional point used for calculating the predicted value of the target three-dimensional point to be processed (S3014).
- the three-dimensional data encoding device calculates a weighted average of the values of the attribute information of the plurality of peripheral points, and sets the obtained value as the predicted value P (S3015).
- the three-dimensional data encoding device calculates the predicted residual, which is the difference between the attribute information of the target three-dimensional point and the predicted value (S3016).
- the three-dimensional data coding device calculates the quantization value by quantizing the predicted residual (S3017).
- the three-dimensional data coding device arithmetically encodes the quantized value (S3018).
- the three-dimensional data coding device calculates the inverse quantization value by inversely quantizing the quantization value (S3019).
- the three-dimensional data coding apparatus generates a decoded value by adding the predicted value to the inverse quantization value (S3020).
- the three-dimensional data encoding device ends the loop in units of three-dimensional points (S3021). Further, the three-dimensional data encoding device ends the loop in units of LoD (S3022).
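The per-point loop S3013 to S3021 can be condensed into the following sketch. It is a simplified illustration: the peripheral-point search and weighted-average prediction (S3014 to S3015) are abstracted into a caller-supplied `predict` function, and rounding is an assumed quantization rule.

```python
def encode_attributes(attrs, qs, predict):
    """One pass of the attribute-coding loop over three-dimensional points.

    predict(decoded_so_far) stands in for the peripheral-point search
    and weighted-average prediction (S3014-S3015)."""
    decoded = []
    quantized = []
    for a in attrs:
        pred = predict(decoded)            # S3014-S3015: predicted value P
        residual = a - pred                # S3016: predicted residual
        q = round(residual / qs)           # S3017: quantization (rounding assumed)
        quantized.append(q)                # S3018: q would be arithmetically coded
        decoded.append(q * qs + pred)      # S3019-S3020: dequantize + reconstruct
    return quantized, decoded
```

Reconstructing inside the loop (S3019 to S3020) is what lets later predictions use decoded values, keeping the encoder and decoder in sync.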
- The three-dimensional data decoding device generates decoded binarized data by arithmetically decoding the binarized data of the attribute information in the bit stream generated by the three-dimensional data coding device, in the same manner as the three-dimensional data coding device.
- For example, when the application method of arithmetic coding is switched in the three-dimensional data coding device between the n-bit binarized part (n-bit code) and the part binarized using exponential Golomb (remaining code), the three-dimensional data decoding device performs arithmetic decoding in accordance with the applied method.
- the three-dimensional data decoding device performs arithmetic decoding using a different coding table (decoding table) for each bit in the arithmetic decoding method of n-bit code.
- the three-dimensional data decoding device may change the number of coding tables used for each bit. For example, the three-dimensional data decoding device performs arithmetic decoding using one coding table for the first bit b0 of the n-bit code. Further, the three-dimensional data decoding device uses two coding tables for the next bit b1. Further, the three-dimensional data decoding device switches the coding table used for arithmetic decoding of bit b1 according to the value (0 or 1) of b0.
- the three-dimensional data decoding device further uses four coding tables for the next bit b2. Further, the three-dimensional data decoding device switches the coding table used for arithmetic decoding of bit b2 according to the values (0 to 3) of b0 and b1.
- the three-dimensional data decoding apparatus uses 2^(n-1) coding tables when arithmetically decoding each bit bn-1 of the n-bit code. Further, the three-dimensional data decoding device switches the coding table to be used according to the values (occurrence pattern) of the bits preceding bn-1. As a result, the three-dimensional data decoding apparatus can appropriately decode a bit stream with improved coding efficiency, because an appropriate coding table is used for each bit.
- the three-dimensional data decoding device may reduce the number of coding tables used for each bit. For example, when arithmetically decoding each bit bn-1, the three-dimensional data decoding device may switch among 2^m coding tables according to the values (occurrence pattern) of the m bits (m < n-1) preceding bn-1. As a result, the three-dimensional data decoding device can appropriately decode a bit stream with improved coding efficiency while suppressing the number of coding tables used for each bit.
- the three-dimensional data decoding device may update the probability of occurrence of 0 and 1 in each coding table according to the value of the binarized data actually generated. Further, the three-dimensional data decoding device may fix the occurrence probabilities of 0 and 1 in the coding table of some bits. As a result, the number of updates of the occurrence probability can be suppressed, so that the processing amount can be reduced.
- the number of coding tables for b0 is one (CTb0).
- There are two coding tables for b1 (CTb10, CTb11). Further, the coding table is switched according to the value (0 or 1) of b0.
- There are four coding tables for b2 (CTb20, CTb21, CTb22, CTb23). Further, the coding table is switched according to the values (0 to 3) of b0 and b1.
- There are 2^(n-1) coding tables for bn-1 (CTbn0, CTbn1, ..., CTbn(2^(n-1)-1)). Further, the coding table is switched according to the value of b0b1...bn-2 (0 to 2^(n-1)-1).
- FIG. 61 is a diagram for explaining processing when, for example, the remaining code is an exponential Golomb code.
- the portion binarized and encoded using exponential Golomb by the three-dimensional data coding apparatus (the remaining code) includes a prefix part and a suffix part as shown in FIG. 61.
- the three-dimensional data decoding device switches the coding table between the prefix part and the suffix part. That is, the three-dimensional data decoding device arithmetically decodes each bit included in the prefix part using the coding table for the prefix, and arithmetically decodes each bit included in the suffix part using the coding table for the suffix.
- the three-dimensional data decoding device may update the probability of occurrence of 0 and 1 in each coding table according to the value of the binarized data generated at the time of decoding.
- the three-dimensional data decoding device may fix the probability of occurrence of 0 and 1 in either coding table.
- the number of updates of the occurrence probability can be suppressed, so that the processing amount can be reduced.
- the three-dimensional data decoding device may update the occurrence probability with respect to the prefix unit and fix the occurrence probability with respect to the suffix unit.
- the three-dimensional data decoding device debinarizes (multi-values) the arithmetically decoded binarized data of the predicted residual in accordance with the coding method used in the three-dimensional data coding device, thereby decoding the quantized predicted residual (an unsigned integer value).
- the three-dimensional data decoding device first calculates the value of the n-bit code decoded by arithmetically decoding the binarized data of the n-bit code.
- the three-dimensional data decoding device compares the value of the n-bit code with the value of R_TH.
- When the value of the n-bit code matches the value of R_TH, the three-dimensional data decoding device determines that bits encoded with exponential Golomb follow, and arithmetically decodes the remaining code, which is the binarized data encoded with exponential Golomb. Then, the three-dimensional data decoding device calculates the value of the remaining code from the decoded remaining code by using a reverse lookup table showing the relationship between the remaining code and its value.
- FIG. 62 is a diagram showing an example of a reverse lookup table showing the relationship between the remaining code and its value.
- the three-dimensional data decoding apparatus obtains a multi-valued predicted residual after quantization by adding the obtained value of the remaining code to R_TH.
- When the value of the n-bit code does not match the value of R_TH (that is, the value is smaller than R_TH), the three-dimensional data decoding device determines the value of the n-bit code as-is as the quantized predicted residual. As a result, the three-dimensional data decoding device can appropriately decode the bit stream generated by the three-dimensional data coding device while switching the binarization method according to the value of the predicted residual.
- The three-dimensional data decoding device may decode the value of the threshold value R_TH from the header and switch the decoding method using the decoded value of the threshold value R_TH. Further, when the threshold value R_TH is added to the header or the like for each LoD, the three-dimensional data decoding device switches the decoding method using the threshold value R_TH decoded for each LoD.
- the three-dimensional data decoding device obtains the value of the remaining code by decoding the remaining code with exponential Golomb. For example, in the example shown in FIG. 62, the remaining code is 00100, and 3 is obtained as the value of the remaining code.
- the three-dimensional data decoding apparatus obtains the predicted residual value 66 by adding the threshold value R_TH value 63 and the remaining code value 3.
- the three-dimensional data decoding device sets the n-bit code value 32 as the predicted residual value.
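The decoding path just described can be sketched in Python. The reverse lookup of FIG. 62 is implemented here directly as order-0 exponential-Golomb decoding, an assumption consistent with the 00100 -> 3 example; `bits` is a hypothetical string of already-arithmetically-decoded binary data.

```python
def exp_golomb_decode(bits):
    """Reverse lookup of a remaining code: '00100' -> 3.

    Count the leading zeros, then read that many further bits."""
    zeros = len(bits) - len(bits.lstrip("0"))
    return int(bits[zeros:2 * zeros + 1], 2) - 1

def decode_residual(bits, r_th):
    """Recover the quantized predicted residual (unsigned integer)."""
    n = r_th.bit_length()                  # e.g. r_th = 63 -> 6-bit code
    n_bit_code = int(bits[:n], 2)
    if n_bit_code != r_th:                 # smaller than r_th:
        return n_bit_code                  # use the n-bit code as-is
    return r_th + exp_golomb_decode(bits[n:])
```

With R_TH = 63, the bits "111111" followed by "00100" decode to 63 + 3 = 66, and "100000" decodes to 32, matching the two worked examples above.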
- the three-dimensional data decoding device converts the decoded predicted residual after quantization from an unsigned integer value to a signed integer value by, for example, the reverse processing of the processing in the three-dimensional data coding device.
- the three-dimensional data decoding apparatus can appropriately decode the generated bit stream without considering the generation of negative integers when entropy-coding the predicted residuals.
- the three-dimensional data decoding device does not necessarily have to convert an unsigned integer value into a signed integer value. For example, when decoding a bit stream generated by separately entropy-coding the sign bit, the three-dimensional data decoding device may decode the sign bit.
- the three-dimensional data decoding device generates a decoded value by decoding the quantized predicted residual, converted into a signed integer value, by inverse quantization and reconstruction. Further, the three-dimensional data decoding device uses the generated decoded value for prediction of three-dimensional points subsequent to the one being decoded. Specifically, the three-dimensional data decoding device calculates the inverse quantization value by multiplying the quantized predicted residual by the decoded quantization scale, and adds the inverse quantization value and the predicted value to obtain the decoded value.
- the decoded unsigned integer value (unsigned quantization value) is converted to a signed integer value by the following processing.
- the three-dimensional data decoding apparatus sets the signed integer value a2q to -((a2u + 1) >> 1) when the LSB (least significant bit) of the decoded unsigned integer value a2u is 1.
- the three-dimensional data decoding device sets the signed integer value a2q to (a2u >> 1) when the LSB of the unsigned integer value a2u is not 1.
- the three-dimensional data decoding device sets the signed integer value b2q to -((b2u + 1) >> 1) when the LSB of the decoded unsigned integer value b2u is 1.
- the three-dimensional data decoding device sets the signed integer value b2q to (b2u >> 1) when the LSB of the unsigned integer value b2u is not 1.
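The unsigned-to-signed conversion above, together with the encoder-side mapping it inverts, can be sketched as follows. The forward direction is an assumption consistent with this inverse (the encoder side is described only in outline here).

```python
def to_signed(u):
    """LSB 1 -> negative: -((u + 1) >> 1); LSB 0 -> u >> 1."""
    return -((u + 1) >> 1) if (u & 1) else (u >> 1)

def to_unsigned(s):
    """Assumed encoder-side mapping that to_signed inverts:
    negatives map to odd codes, non-negatives to even codes."""
    return -s * 2 - 1 if s < 0 else s * 2

# Round trip: ..., -2 <-> 3, -1 <-> 1, 0 <-> 0, 1 <-> 2, 2 <-> 4, ...
```

This interleaving is what allows the predicted residuals to be entropy coded without considering the occurrence of negative integers.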
- the details of the inverse quantization and reconstruction processing by the three-dimensional data decoding apparatus are the same as those of the inverse quantization and reconstruction processing in the three-dimensional data coding apparatus.
- FIG. 63 is a flowchart of the three-dimensional data decoding process by the three-dimensional data decoding device.
- the three-dimensional data decoding device decodes the position information (geometry) from the bit stream (S3031). For example, the three-dimensional data decoding device performs decoding using an octree representation.
- the three-dimensional data decoding device decodes the attribute information (Attribute) from the bit stream (S3032). For example, when decoding a plurality of types of attribute information, the three-dimensional data decoding device may decode the plurality of types of attribute information in order. For example, when decoding a color and a reflectance as attribute information, the three-dimensional data decoding apparatus decodes the color coding result and the reflectance coding result in the order in which they were added to the bit stream. For example, if the reflectance coding result is added after the color coding result in the bit stream, the three-dimensional data decoding device decodes the color coding result and then decodes the reflectance coding result. The three-dimensional data decoding device may decode the coding results of the attribute information added to the bit stream in any order.
- the three-dimensional data decoding device may acquire information indicating the encoded data start location of each attribute information in the bit stream by decoding the header or the like. As a result, the three-dimensional data decoding device can selectively decode the attribute information that needs to be decoded, so that the decoding process of the attribute information that does not need to be decoded can be omitted. Therefore, the processing amount of the three-dimensional data decoding device can be reduced. Further, the three-dimensional data decoding device may decode a plurality of types of attribute information in parallel and integrate the decoding results into one three-dimensional point cloud. As a result, the three-dimensional data decoding device can decode a plurality of types of attribute information at high speed.
- FIG. 64 is a flowchart of the attribute information decoding process (S3032).
- the three-dimensional data decoding device sets LoD (S3041). That is, the three-dimensional data decoding device assigns each of the plurality of three-dimensional points having the decoded position information to any of the plurality of LoDs.
- this allocation method is the same as the allocation method used in the three-dimensional data coding apparatus.
- the three-dimensional data decoding device starts a loop in units of LoD (S3042). That is, the three-dimensional data decoding device repeats the processes of steps S3043 to S3049 for each LoD.
- the three-dimensional data decoding device starts a loop in units of three-dimensional points (S3043). That is, the three-dimensional data decoding apparatus repeats the processes of steps S3044 to S3048 for each three-dimensional point.
- the three-dimensional data decoding device searches for a plurality of peripheral points that are three-dimensional points existing around the target three-dimensional point used for calculating the predicted value of the target three-dimensional point to be processed (S3044).
- the three-dimensional data decoding device calculates a weighted average of the values of the attribute information of the plurality of peripheral points, and sets the obtained value as the predicted value P (S3045). Note that these processes are the same as the processes in the three-dimensional data coding apparatus.
- the three-dimensional data decoding device arithmetically decodes the quantized value from the bit stream (S3046). Further, the three-dimensional data decoding apparatus calculates the dequantized value by dequantizing the decoded quantization value (S3047). Next, the three-dimensional data decoding device generates a decoded value by adding the predicted value to the inverse quantization value (S3048). Next, the three-dimensional data decoding device ends the loop in units of three-dimensional points (S3049). Further, the three-dimensional data decoding device ends the loop in units of LoD (S3050).
- FIG. 65 is a block diagram showing the configuration of the three-dimensional data encoding device 3000 according to the present embodiment.
- the three-dimensional data coding device 3000 includes a position information coding unit 3001, an attribute information reassignment unit 3002, and an attribute information coding unit 3003.
- The position information coding unit 3001 encodes the position information (geometry) of a plurality of three-dimensional points included in the input point group.
- the attribute information reallocation unit 3002 reassigns the values of the attribute information of a plurality of three-dimensional points included in the input point group by using the coding and decoding results of the position information.
- the attribute information coding unit 3003 encodes the reassigned attribute information (attribute). Further, the three-dimensional data coding apparatus 3000 generates a bit stream including the coded position information and the coded attribute information.
- FIG. 66 is a block diagram showing the configuration of the three-dimensional data decoding device 3010 according to the present embodiment.
- the three-dimensional data decoding device 3010 includes a position information decoding unit 3011 and an attribute information decoding unit 3012.
- the position information decoding unit 3011 decodes the position information (geometry) of a plurality of three-dimensional points from the bit stream.
- the attribute information decoding unit 3012 decodes the attribute information (attribute) of a plurality of three-dimensional points from the bit stream. Further, the three-dimensional data decoding device 3010 generates an output point group by combining the decoded position information and the decoded attribute information.
- the three-dimensional data coding apparatus performs the processing shown in FIG. 67.
- the three-dimensional data encoding device encodes a three-dimensional point having attribute information.
- the three-dimensional data encoding device calculates a predicted value of the attribute information of the three-dimensional point (S3061).
- the three-dimensional data encoding device calculates the predicted residual, which is the difference between the attribute information of the three-dimensional point and the predicted value (S3062).
- the three-dimensional data encoding device generates binary data by binarizing the predicted residuals (S3063).
- the three-dimensional data encoding device arithmetically encodes the binary data (S3064).
- the three-dimensional data coding apparatus calculates the predicted residual of the attribute information, and further binarizes and arithmetically encodes the predicted residual to code the coded data of the attribute information. Can be reduced.
- the three-dimensional data coding device uses a different coding table for each bit of binary data. According to this, the three-dimensional data coding apparatus can improve the coding efficiency.
- the three-dimensional data coding device selects the coding table to be used for arithmetic coding of a target bit according to the values of the bits higher than the target bit included in the binary data. According to this, the three-dimensional data coding apparatus can select the coding table according to the values of the higher-order bits, so that the coding efficiency can be improved.
- when the predicted residual is smaller than the threshold value (R_TH), the three-dimensional data encoding device generates binary data by binarizing the predicted residual with a fixed number of bits (the first code, an n-bit code).
- when the predicted residual is equal to or greater than the threshold value (R_TH), the three-dimensional data encoding device generates binary data including the first code with the fixed number of bits and a second code obtained by binarizing, with exponential Golomb coding, the value obtained by subtracting the threshold value (R_TH) from the predicted residual.
- the three-dimensional data coding apparatus uses different arithmetic coding methods for the first code and the second code.
- the three-dimensional data coding apparatus can perform arithmetic coding of the first code and the second code by an arithmetic coding method suitable for each of them, so that the coding efficiency can be improved.
- the three-dimensional data encoding device quantizes the predicted residual, and in binarization (S3063), the quantized predicted residual is binarized.
- the threshold (R_TH) is changed according to the quantization scale in the quantization. According to this, the three-dimensional data coding apparatus can use an appropriate threshold value according to the quantization scale, so that the coding efficiency can be improved.
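The n-bit / exponential Golomb split described above can be sketched as follows. This is a minimal illustration under assumptions: bits are represented as strings, the exponential Golomb code is the order-0 variant, and the function names are invented for the example, not taken from the patent.

```python
def exp_golomb(value: int) -> str:
    """Order-0 exponential-Golomb binarization of a non-negative integer."""
    code = bin(value + 1)[2:]              # binary representation of value + 1
    return "0" * (len(code) - 1) + code    # leading zeros signal the code length

def binarize_residual(residual: int, n_bits: int, r_th: int) -> str:
    """First code: n-bit fixed-length part; second code: exp-Golomb of the excess."""
    if residual < r_th:
        return format(residual, f"0{n_bits}b")     # first code only
    first = format(r_th, f"0{n_bits}b")            # first code saturated at R_TH
    return first + exp_golomb(residual - r_th)     # append the second code
```

For example, with n_bits = 6 and R_TH = 63, a residual of 70 is emitted as the saturated first code `111111` followed by the exponential Golomb code of 7.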
- the second code includes a prefix part and a suffix part.
- the three-dimensional data coding apparatus uses different coding tables for the prefix part and the suffix part. According to this, the three-dimensional data coding apparatus can improve the coding efficiency.
- the three-dimensional data encoding device includes a processor and a memory, and the processor uses the memory to perform the above processing.
- the three-dimensional data decoding device performs the processing shown in FIG. 68.
- the three-dimensional data decoding device decodes a three-dimensional point having attribute information.
- the three-dimensional data decoding device calculates a predicted value of the attribute information of the three-dimensional point (S3071).
- the three-dimensional data decoding device arithmetically decodes the coded data included in the bit stream to generate binary data (S3072).
- the three-dimensional data decoding device generates a predicted residual by debinarizing (multi-value converting) the binary data (S3073).
- the three-dimensional data decoding device calculates the decoded value of the attribute information of the three-dimensional point by adding the predicted value and the predicted residual (S3074).
- according to this, the three-dimensional data decoding device can appropriately decode a bit stream of attribute information in which the predicted residual has been binarized and arithmetically encoded.
- the three-dimensional data decoding device uses a different coding table for each bit of binary data. According to this, the three-dimensional data decoding apparatus can appropriately decode the bit stream with improved coding efficiency.
- the three-dimensional data decoding apparatus selects a coding table to be used for arithmetic decoding of the target bit according to the value of the upper bit of the target bit included in the binary data. According to this, the three-dimensional data decoding apparatus can appropriately decode the bit stream with improved coding efficiency.
- the three-dimensional data decoding device generates a first value by debinarizing the first code (n-bit code) of the fixed number of bits included in the binary data.
- the three-dimensional data decoding device determines the first value as the predicted residual when the first value is smaller than the threshold value (R_TH); when the first value is equal to or greater than the threshold value (R_TH), it generates a second value by debinarizing the second code (remaining code), which is an exponential Golomb code, included in the binary data, and generates the predicted residual by adding the first value and the second value.
- the three-dimensional data decoding apparatus uses different arithmetic decoding methods for the first code and the second code.
- the three-dimensional data decoding device can appropriately decode the bit stream with improved coding efficiency.
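The decoder-side debinarization described above can be sketched as follows, mirroring the n-bit / exponential Golomb structure. Bit strings and function names are assumptions made for the illustration; the exponential Golomb code is the order-0 variant.

```python
def decode_exp_golomb(bits: str) -> int:
    """Decode a single order-0 exponential-Golomb code string."""
    zeros = 0
    while bits[zeros] == "0":              # count the leading-zero prefix
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2) - 1

def debinarize_residual(bits: str, n_bits: int, r_th: int) -> int:
    """First value from the n-bit code; if saturated, add the decoded second value."""
    first = int(bits[:n_bits], 2)
    if first < r_th:
        return first                       # first value is the predicted residual
    second = decode_exp_golomb(bits[n_bits:])   # remaining code
    return first + second
```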
- the three-dimensional data decoding device inverse-quantizes the predicted residual, and in the addition (S3074), adds the predicted value and the inverse-quantized predicted residual.
- the threshold (R_TH) is changed according to the quantization scale in inverse quantization. According to this, the three-dimensional data decoding apparatus can appropriately decode the bit stream with improved coding efficiency.
- the second code includes a prefix part and a suffix part.
- the three-dimensional data decoding apparatus uses different coding tables for the prefix part and the suffix part. According to this, the three-dimensional data decoding apparatus can appropriately decode the bit stream with improved coding efficiency.
- the three-dimensional data decoding device includes a processor and a memory, and the processor uses the memory to perform the above processing.
- the predicted value may be generated by a method different from that of the eighth embodiment.
- the three-dimensional point to be encoded may be referred to as a first three-dimensional point, and the surrounding three-dimensional points may be referred to as a second three-dimensional point.
- in the generation of the predicted value, the attribute value of the closest three-dimensional point, among the already encoded and decoded three-dimensional points around the three-dimensional point to be encoded, may be used as it is as the predicted value. Further, in the generation of the predicted value, prediction mode information (PredMode) may be added for each three-dimensional point so that one predicted value can be selected from a plurality of predicted values. That is, for example, with M prediction modes in total, the average value is assigned to prediction mode 0, the attribute value of three-dimensional point A is assigned to prediction mode 1, ..., and the attribute value of three-dimensional point Z is assigned to prediction mode M-1.
- PredMode: prediction mode information.
- the prediction mode used for prediction is added to the bitstream for each three-dimensional point.
- the first prediction mode value, indicating the first prediction mode in which the attribute information itself of a surrounding three-dimensional point is calculated as the predicted value, may be smaller than the second prediction mode value indicating the second prediction mode.
- the "average value" which is the predicted value calculated in the prediction mode 0 is the average value of the attribute values of the three-dimensional points around the three-dimensional point to be encoded.
- FIG. 69 is a diagram showing a first example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- FIG. 70 is a diagram showing an example of attribute information used for the predicted value according to the ninth embodiment.
- FIG. 71 is a diagram showing a second example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- the predicted value of the attribute information of the point b2 can be generated by using the attribute information of the points a0, a1, a2, and b1.
- a prediction mode may be selected that generates, as the predicted value, the attribute value of one of the points a0, a1, a2, and b1 based on the distance information from the point b2 to each of the points a0, a1, a2, and b1.
- the prediction mode is added for each three-dimensional point to be encoded.
- the predicted value is calculated according to the added prediction mode.
- the predicted value of the attribute information of the point a2 can be generated by using the attribute information of the points a0 and a1.
- the prediction mode is added for each three-dimensional point to be encoded.
- the predicted value is calculated according to the added prediction mode.
- a prediction mode to which no predicted value is assigned in the table may be set as "not available".
- the allocation of values in the prediction mode may be determined in the order of distance from the three-dimensional point to be encoded.
- a smaller prediction mode value may be assigned to a prediction mode as the distance from the three-dimensional point to be encoded to the surrounding three-dimensional point whose attribute information is used as the predicted value becomes shorter.
- in FIG. 69, the distance to the point b2, which is the three-dimensional point to be encoded, increases in the order of the points b1, a2, a1, and a0 (b1 is the nearest).
- for example, among two or more prediction modes, the attribute information of the point b1 is calculated as the predicted value in the prediction mode whose prediction mode value is "1", and the attribute information of the point a2 is calculated as the predicted value in the prediction mode whose prediction mode value is "2".
- that is, the prediction mode value indicating the prediction mode in which the attribute information of the point b1 is calculated as the predicted value is smaller than the prediction mode value indicating the prediction mode in which the attribute information of the point a2, which is farther from the point b2 than the point b1, is calculated as the predicted value.
- in this way, a small prediction mode value can be assigned to a point that is likely to be selected because its short distance makes the prediction more likely to be accurate, so the number of bits for encoding the prediction mode value can be reduced. Further, a small prediction mode value may be preferentially assigned to a three-dimensional point belonging to the same LoD as the three-dimensional point to be encoded.
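The mode table above can be mimicked with a small lookup: mode 0 yields the average of the neighbor attribute values, and modes 1 to M-1 yield the attribute value of one neighbor, sorted nearest-first. A minimal sketch under those assumptions (the function name and the unweighted average are illustrative choices, not the patent's normative definition):

```python
def predicted_value(mode: int, neighbor_attrs: list) -> float:
    """neighbor_attrs is sorted nearest-first; mode 0 = average, mode k = k-th entry."""
    if mode == 0:
        return sum(neighbor_attrs) / len(neighbor_attrs)
    return neighbor_attrs[mode - 1]        # mode 1 -> nearest neighbor, etc.
```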
- FIG. 72 is a diagram showing a third example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- the third example is an example in which the attribute information used for the predicted value is a value based on the color information (YUV) of the surrounding three-dimensional points.
- the attribute information used for the predicted value may be color information indicating the color of the three-dimensional point.
- the predicted value calculated in the prediction mode whose prediction mode value is "0" is the average of each of the YUV components that define the YUV color space.
- specifically, the predicted values are the weighted average Yave of Yb1, Ya2, Ya1, and Ya0, which are the Y-component values corresponding to the points b1, a2, a1, and a0, the weighted average Uave of Ub1, Ua2, Ua1, and Ua0, which are the corresponding U-component values, and the weighted average Vave of Vb1, Va2, Va1, and Va0, which are the corresponding V-component values.
- the predicted values calculated in the prediction modes whose prediction mode values are "1" to "4" are the color information of the surrounding three-dimensional points b1, a2, a1, and a0, respectively.
- the color information is indicated by a combination of the values of the Y component, the U component, and the V component.
- the color information is indicated here by values defined in the YUV color space, but is not limited to the YUV color space, and may be indicated by values defined in the RGB color space or in another color space.
- the average of two or more pieces of attribute information may be calculated as the predicted value of a prediction mode. Further, such an average or attribute information may indicate the values of two or more components that define the color space.
- for example, when the prediction mode whose prediction mode value is "2" is selected, the Y component, U component, and V component of the attribute value of the three-dimensional point to be encoded may be encoded using the predicted values Ya2, Ua2, and Va2, respectively. In this case, "2" as the prediction mode value is added to the bit stream.
- FIG. 73 is a diagram showing a fourth example of a table showing predicted values calculated in each prediction mode according to the ninth embodiment.
- the fourth example is an example in which the attribute information used for the predicted value is a value based on the reflectance information of the surrounding three-dimensional points.
- the reflectance information is, for example, information indicating the reflectance R.
- for example, the predicted value calculated in the prediction mode whose prediction mode value is "0" is the weighted average Rave of the reflectances Rb1, Ra2, Ra1, and Ra0 corresponding to the points b1, a2, a1, and a0, respectively. Further, the predicted values calculated in the prediction modes whose prediction mode values are "1" to "4" are the reflectances Rb1, Ra2, Ra1, and Ra0 of the surrounding three-dimensional points b1, a2, a1, and a0, respectively.
- for example, when the prediction mode whose prediction mode value is "3" is selected in the table of FIG. 73, the reflectance of the attribute value of the three-dimensional point to be encoded may be encoded using the predicted value Ra1. In this case, "3" as the prediction mode value is added to the bit stream.
- the attribute information may include first attribute information and second attribute information of a type different from the first attribute information.
- the first attribute information is, for example, color information.
- the second attribute information is, for example, reflectance information.
- the first predicted value may be calculated using the first attribute information, and the second predicted value may be calculated using the second attribute information.
- FIG. 74 is a diagram for explaining the coding of attribute information using RAHT.
- the three-dimensional data encoding device generates a Morton code based on the position information of the three-dimensional points, and sorts the attribute information of the three-dimensional points in the order of the Morton codes. For example, the three-dimensional data encoding device may sort in ascending order of Morton codes. The sort order is not limited to the Morton code order, and other orders may be used.
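Morton-code generation and sorting, as described above, can be sketched as follows. The bit-interleaving order (x in the lowest bit position) and the fixed bit width are illustrative assumptions; in practice the width follows the coordinate precision of the position information.

```python
def morton_code(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of x, y, z to obtain a Morton (z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# sort points (and, in parallel, their attribute information) in ascending Morton order
points = [(3, 1, 0), (1, 1, 1), (0, 0, 0)]
points.sort(key=lambda p: morton_code(*p))
```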
- the three-dimensional data encoding device generates the high-frequency component and the low-frequency component of the layer L by applying the Haar transform to the attribute information of two three-dimensional points adjacent to each other in Morton code order.
- the three-dimensional data encoding device may use a 2×2 matrix Haar transform.
- the generated high frequency component is included in the coding coefficient as the high frequency component of the layer L, and the generated low frequency component is used as an input value of the upper layer L + 1 of the layer L.
- the three-dimensional data encoding device generates the high-frequency component of the layer L using the attribute information of the layer L, and then continues to process the layer L + 1.
- in the processing of the layer L+1, the three-dimensional data encoding device generates the high-frequency component and the low-frequency component of the layer L+1 by applying the Haar transform to two low-frequency components obtained by the Haar transform of the attribute information of the layer L.
- the generated high frequency component is included in the coding coefficient as the high frequency component of the layer L + 1, and the generated low frequency component is used as the input value of the upper layer L + 2 of the layer L + 1.
- the three-dimensional data encoding device repeats such hierarchical processing, and determines that the highest layer Lmax has been reached when the number of low-frequency components input to the layer becomes one.
- the three-dimensional data coding apparatus includes the low frequency component of the layer Lmax-1 input to the layer Lmax in the coding coefficient. Then, the value of the low frequency component or the high frequency component included in the coding coefficient is quantized and encoded by using entropy coding or the like.
- when a three-dimensional point has no pairing partner, the value of the attribute information of the one existing three-dimensional point may be used as it is as an input value to the upper layer.
- as described above, the three-dimensional data encoding device hierarchically applies the Haar transform to the input attribute information, generates the high-frequency components and low-frequency components of the attribute information, and encodes them by applying the quantization and the like described later. Thereby, the coding efficiency can be improved.
- the three-dimensional data coding device may independently apply the Haar transform for each dimension and calculate each coding coefficient. For example, when the attribute information is color information (RGB, YUV, etc.), the three-dimensional data coding apparatus applies the Haar transform for each component and calculates the coding coefficient for each component.
- the three-dimensional data encoding device may apply the Haar transform in the order of layer L, layer L+1, ..., layer Lmax. The closer to the layer Lmax, the more the generated coding coefficients include the low-frequency components of the input attribute information.
- w0 and w1 shown in FIG. 74 are weights assigned to each three-dimensional point.
- the three-dimensional data encoding device may calculate the weight based on the distance information between the two adjacent three-dimensional points to which the Haar transform is applied.
- the three-dimensional data coding apparatus may improve the coding efficiency by increasing the weight as the distance becomes shorter.
- the three-dimensional data coding apparatus may calculate this weight by another method, or may not use the weight.
- in the example of FIG. 74, the input attribute information is a0, a1, a2, a3, a4, and a5. Further, among the coding coefficients after the Haar transform, Ta1, Ta5, Tb1, Tb3, Tc1, and d0 are encoded. The other coefficients (b0, b2, c0, etc.) are intermediate values and are not encoded.
- the high-frequency component Ta1 and the low-frequency component b0 are generated by applying the Haar transform to a0 and a1.
- the low-frequency component b0 is the average value of a0 and a1, and
- the high-frequency component Ta1 is the difference between a0 and a1.
- since a2 has no paired attribute information, a2 is used as it is as b1. Similarly, since a3 has no paired attribute information, a3 is used as it is as b2. Further, the high-frequency component Ta5 and the low-frequency component b3 are generated by applying the Haar transform to a4 and a5.
- the high-frequency component Tb1 and the low-frequency component c0 are generated by applying the Haar transform to b0 and b1.
- similarly, the high-frequency component Tb3 and the low-frequency component c1 are generated by applying the Haar transform to b2 and b3.
- the high-frequency component Tc1 and the low-frequency component d0 are generated by applying the Haar transform to c0 and c1.
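The hierarchical transform walked through above (low-frequency component = average, high-frequency component = difference, unpaired values passed through to the next layer) can be illustrated as follows. This is a simplified sketch: it pairs entries by index within each layer, whereas the actual scheme pairs points by Morton-code adjacency (which is why a2 and a3 remain unpaired in FIG. 74), and it omits the weights w0 and w1.

```python
def haar_pair(a, b):
    """Unnormalized Haar transform of a pair: low = average, high = difference."""
    return (a + b) / 2, b - a

def haar_levels(values):
    """Hierarchical Haar transform; unpaired values pass through to the next layer."""
    coeffs = []
    layer = list(values)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            low, high = haar_pair(layer[i], layer[i + 1])
            nxt.append(low)              # low-frequency component feeds the upper layer
            coeffs.append(high)          # high-frequency component is encoded
        if len(layer) % 2:               # no pairing partner: pass through as-is
            nxt.append(layer[-1])
        layer = nxt
    coeffs.append(layer[0])              # the remaining low-frequency component (d0)
    return coeffs
```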
- the three-dimensional data coding apparatus may quantize the coding coefficients after applying the Haar transform and then encode them.
- the three-dimensional data coding apparatus performs quantization by dividing a coding coefficient by the quantization scale (also referred to as the quantization step (QS)).
- QS: quantization step.
- the smaller the quantization scale, the smaller the error (quantization error) that can occur due to quantization.
- the larger the quantization scale, the larger the quantization error.
- FIG. 75 is a diagram showing an example of setting the quantization scale for each layer.
- the higher layer has a smaller quantization scale
- the lower layer has a larger quantization scale. Since the coding coefficient of the three-dimensional point belonging to the upper layer contains more low-frequency components than the lower layer, it is highly possible that it is an important component in human visual characteristics and the like. Therefore, by reducing the quantization scale of the upper layer and suppressing the quantization error that may occur in the upper layer, visual deterioration can be suppressed and the coding efficiency can be improved.
- the three-dimensional data coding device may add a quantization scale for each layer to the header or the like. As a result, the three-dimensional data decoding device can correctly decode the quantization scale and appropriately decode the bit stream.
- the three-dimensional data coding apparatus may adaptively switch the value of the quantization scale according to the importance of the target three-dimensional point to be encoded. For example, a 3D data coding device uses a small quantization scale for 3D points of high importance and a large quantization scale for 3D points of low importance. For example, the three-dimensional data encoding device may calculate the importance from the weights at the time of Har conversion and the like. For example, the three-dimensional data coding apparatus may calculate the quantization scale by using the added value of w0 and w1. By reducing the quantization scale of the three-dimensional points, which are of high importance in this way, the quantization error can be reduced and the coding efficiency can be improved.
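Quantization by a per-layer quantization step can be sketched as below. The round-half-up rounding rule and the concrete per-layer QS values are illustrative assumptions, not values taken from the patent; the patent only specifies division by QS (e.g. Ta1q = Ta1 / QS_L) with smaller QS in upper layers.

```python
def quantize(coeff: float, qs: float) -> int:
    """Divide the coding coefficient by the quantization step and round half up."""
    return int(coeff / qs + 0.5) if coeff >= 0 else -int(-coeff / qs + 0.5)

def dequantize(level: int, qs: float) -> float:
    """Inverse quantization: multiply the quantized value by the quantization step."""
    return level * qs

# illustrative per-layer steps: a smaller QS in upper layers limits the quantization
# error of visually important low-frequency components
qs_per_layer = {"L": 4.0, "L+1": 2.0, "Lmax": 1.0}
```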
- the higher the layer the smaller the QS value may be.
- the coding coefficient Ta1q after the quantization of the coding coefficient Ta1 of the attribute information a1 is represented by Ta1 / QS_L.
- the QS may have the same value in all layers or some layers.
- QW: Quality Weight.
- the QW is a value indicating the importance of a three-dimensional point to be encoded.
- as the QW, the above-mentioned sum of w0 and w1 may be used.
- the QW value becomes larger in the upper layer, and the prediction efficiency can be improved by suppressing the quantization error of the three-dimensional point.
- the three-dimensional data encoding device may first initialize the QW values of all three-dimensional points to 1, and update the QW of each three-dimensional point using the values of w0 and w1 at the time of the Haar transform.
- the three-dimensional data encoding device may change the initial values according to the hierarchy without initializing the QWs of all the three-dimensional points with the value 1. For example, by setting a larger initial value of QW in the upper layer, the quantization scale of the upper layer becomes smaller. As a result, the prediction error of the upper layer can be suppressed, so that the prediction accuracy of the lower layer can be improved and the coding efficiency can be improved.
- the three-dimensional data encoding device does not necessarily have to use QW.
- the three-dimensional data coding device scans and encodes the coding coefficient (unsigned integer value) after quantization in a certain order.
- the three-dimensional data encoding device encodes a plurality of three-dimensional points in order from the three-dimensional points included in the upper layer toward the lower layer.
- for example, in the example of FIG. 74, the three-dimensional data encoding device encodes the coding coefficients in the order Tc1q, Tb1q, Tb3q, Ta1q, and Ta5q, starting from the upper layer Lmax.
- here, the lower layer L tends to have coding coefficients of 0 after quantization. The reasons include the following.
- since the coding coefficient of the lower layer L represents a higher-frequency component than that of the upper layer, it tends to be 0 depending on the target three-dimensional point. Further, by switching the quantization scale according to the importance as described above, the quantization scale becomes larger in the lower layer, and the coding coefficient after quantization tends to be 0.
- FIG. 76 is a diagram showing an example of a first code string and a second code string.
- the three-dimensional data encoding device counts the number of times the value 0 occurs in the first code string, and encodes the number of times the value 0 occurs consecutively instead of the continuous value 0. That is, the three-dimensional data coding apparatus generates the second code string by replacing the coding coefficient of the continuous value 0 in the first code string with the number of consecutive times of 0 (ZeroCnt).
- ZeroCnt: the number of consecutive 0s.
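The replacement of zero runs with ZeroCnt can be sketched as follows; the tuple token representation is an assumption made for the illustration, standing in for the actual entropy-coded syntax.

```python
def to_second_code_string(first):
    """Replace each run of consecutive 0 coefficients with a single ZeroCnt token."""
    second = []
    i = 0
    while i < len(first):
        if first[i] == 0:
            run = 0
            while i < len(first) and first[i] == 0:
                run += 1
                i += 1
            second.append(("ZeroCnt", run))   # run of zeros -> one count
        else:
            second.append(("coeff", first[i]))
            i += 1
    return second
```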
- the three-dimensional data encoding device may entropy encode the value of ZeroCnt. For example, the three-dimensional data coding device binarizes the ZeroCnt value with a truncated unary code based on the total number T of encoded three-dimensional points, and arithmetically encodes each bit after binarization. FIG. 77 is a diagram showing an example of the truncated unary code when the total number of encoded three-dimensional points is T. At this time, the three-dimensional data coding apparatus may improve the coding efficiency by using a different coding table for each bit.
- the three-dimensional data encoding device uses the coding table 1 for the first bit, the coding table 2 for the second bit, and the coding table 3 for the subsequent bits.
- the three-dimensional data coding apparatus can improve the coding efficiency by switching the coding table for each bit.
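A truncated unary binarization with maximum symbol T, as in FIG. 77, can be sketched as follows. The 1s-then-terminating-0 convention is an assumption; some schemes use the opposite polarity.

```python
def truncated_unary(value: int, t: int) -> str:
    """`value` ones followed by a terminating 0; the 0 is omitted when value == T."""
    assert 0 <= value <= t
    return "1" * value + ("" if value == t else "0")
```

Because the maximum value T is known, the code for T itself needs no terminator, saving one bit at the top of the range.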
- alternatively, the three-dimensional data encoding device may perform arithmetic coding after binarizing ZeroCnt with an exponential Golomb code.
- the three-dimensional data encoding device may add a flag to the header to switch between using the truncated unary code and using the exponential Golomb code.
- the three-dimensional data coding apparatus can improve the coding efficiency by selecting the optimum binarization method.
- the three-dimensional data decoding device can correctly decode the bit stream by switching the binarization method with reference to the flag included in the header.
- the 3D data decoding device may convert the decoded quantized coefficient from an unsigned integer value to a signed integer value by the method opposite to that performed by the 3D data coding device. This allows the 3D data decoding device to appropriately decode the generated bitstream without having to consider the occurrence of negative integers when the coding coefficients are entropy encoded. Note that the three-dimensional data decoding apparatus does not necessarily have to convert the coding coefficient from an unsigned integer value to a signed integer value. For example, when the three-dimensional data decoding device decodes a bit stream in which the sign bit has been separately entropy encoded, it may decode that sign bit.
- the three-dimensional data decoding device decodes the quantized coding coefficient, converted into a signed integer value, by inverse quantization and the inverse Haar transform. Further, the three-dimensional data decoding device uses the decoded values for the prediction of subsequent three-dimensional points to be decoded. Specifically, the three-dimensional data decoding device calculates the inverse-quantized value by multiplying the quantized coding coefficient by the decoded quantization scale. Next, the three-dimensional data decoding device obtains the decoded value by applying the inverse Haar transform, described later, to the inverse-quantized value.
- for example, the three-dimensional data decoding device converts the decoded unsigned integer value a2u into a signed integer value by the following method.
- when a2u is an odd number, the signed integer value Ta1q is set to -((a2u + 1) >> 1).
- when a2u is an even number, the signed integer value Ta1q is set to (a2u >> 1).
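The odd/even mapping above is the usual zigzag conversion between signed and unsigned integers. A sketch, with function names invented for the example:

```python
def unsigned_to_signed(u: int) -> int:
    """Inverse zigzag mapping: odd -> negative, even -> non-negative."""
    return -((u + 1) >> 1) if u & 1 else u >> 1

def signed_to_unsigned(s: int) -> int:
    """Forward zigzag mapping used on the encoder side."""
    return (-s << 1) - 1 if s < 0 else s << 1
```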
- Ta1q: the quantized value of Ta1.
- QS_L: the quantization step of the layer L.
- the QS may have the same value in all layers or some layers.
- the three-dimensional data encoding device may add information indicating QS to the header or the like. As a result, the three-dimensional data decoding apparatus can correctly perform inverse quantization using the same QS as the QS used in the three-dimensional data encoding apparatus.
- FIG. 78 is a diagram for explaining the inverse Haar transform.
- the three-dimensional data decoding apparatus decodes the attribute values of the three-dimensional points by applying the inverse Haar transform to the coding coefficients after the inverse quantization.
- the three-dimensional data decoding device generates a Morton code based on the position information of the three-dimensional point, and sorts the three-dimensional points in the order of the Morton code. For example, the three-dimensional data decoding device may sort in ascending order of Morton code. The sort order is not limited to the Morton code order, and other orders may be used.
- the three-dimensional data decoding device restores the attribute information of two three-dimensional points adjacent in Morton code order in the layer L by applying the inverse Haar transform to the coding coefficient including the low-frequency component of the layer L+1 and the coding coefficient including the high-frequency component of the layer L.
- the three-dimensional data decoding device may use a 2×2 matrix inverse Haar transform.
- the restored attribute information of the layer L is used as an input value of the lower layer L-1.
- the three-dimensional data decoding device repeats such hierarchical processing, and ends the processing when all the attribute information of the lowest layer is decoded.
- when a three-dimensional point has no pairing partner, the three-dimensional data decoding device may substitute the value of the coding component of the layer L as it is for the attribute value of the one existing three-dimensional point.
- thereby, the three-dimensional data decoding device can correctly decode a bit stream whose coding efficiency has been improved by applying the Haar transform to all the values of the input attribute information.
- the three-dimensional data decoding device may independently apply the inverse Haar transform for each dimension and decode each coding coefficient. For example, when the attribute information is color information (RGB, YUV, etc.), the three-dimensional data decoding device applies the inverse Haar transform to the coding coefficient of each component and decodes each attribute value.
- the three-dimensional data decoding device may apply the inverse Haar transform in the order of layer Lmax, layer Lmax-1, ..., layer L. Further, w0 and w1 shown in FIG. 78 are weights assigned to each three-dimensional point. For example, the three-dimensional data decoding device may calculate the weights based on the distance information between the two adjacent three-dimensional points to which the inverse Haar transform is applied, and may thereby decode a bit stream whose coding efficiency has been improved by increasing the weight as the distance becomes shorter.
- in the example of FIG. 78, the coding coefficients after inverse quantization are Ta1, Ta5, Tb1, Tb3, Tc1, and d0, and a0, a1, a2, a3, a4, and a5 are obtained as decoded values.
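Since the low-frequency component of a pair is the average and the high-frequency component is the difference (as in FIG. 74), one inverse step can be sketched as follows; the function names are invented for the illustration.

```python
def haar_pair(a, b):
    """Forward pair transform: low = average, high = difference."""
    return (a + b) / 2, b - a

def inverse_haar_pair(low, high):
    """Recover the original pair of attribute values from average and difference."""
    return low - high / 2, low + high / 2
```

Applying this step from layer Lmax downward, feeding each restored value into the layer below, reproduces the input attribute values.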
- FIG. 79 is a diagram showing an example of syntax of attribute information (attribute_data).
- the attribute information (attribute_data) includes a zero continuous number (ZeroCnt), an attribute dimension number (attribute_dimension), and a coding coefficient (value [j] [i]).
- the zero continuous number (ZeroCnt) indicates the number of consecutive 0 values in the coding coefficients after quantization.
- the three-dimensional data coding apparatus may perform arithmetic coding after binarizing ZeroCnt.
- the three-dimensional data coding apparatus determines whether or not the layer L (layerL) to which the coding coefficient belongs is equal to or higher than a predetermined threshold value TH_layer, and adds it to the bit stream according to the determination result. You may switch the information. For example, the three-dimensional data encoding device adds all the coding coefficients of the attribute information to the bit stream if the determination result is true. Further, the three-dimensional data coding apparatus may add a part of the coding coefficient to the bit stream if the determination result is false.
- specifically, if the determination result is true, the three-dimensional data encoding device adds the coding result of all three components (RGB or YUV) of the color information to the bit stream. If the determination result is false, the three-dimensional data encoding device may omit some of the information, such as the G or Y component of the color information, from the bitstream while adding the other components. In this way, the three-dimensional data encoding device does not add to the bitstream a part of the coding coefficients of the layers (layers smaller than TH_layer) that contain coding coefficients representing high frequency components, whose deterioration is less noticeable visually, so the coding efficiency can be improved.
- the attribute dimension number (attribute_dimension) indicates the number of dimensions of the attribute information. For example, when the attribute information is the color information of a three-dimensional point (RGB, YUV, etc.), the number of attribute dimensions is set to the value 3 because the color information is three-dimensional. When the attribute information is reflectance, the number of attribute dimensions is set to the value 1 because the reflectance is one-dimensional. The number of attribute dimensions may be added to the header of the attribute information of the bitstream.
- the coding coefficient (value [j] [i]) indicates the coding coefficient after quantization of the j-th dimension attribute information of the i-th three-dimensional point. For example, when the attribute information is color information, value [99] [1] indicates the coding coefficient of the second dimension (for example, the G value) of the 100th three-dimensional point. When the attribute information is reflectance information, value [119] [0] indicates the coding coefficient of the first dimension (for example, reflectance) of the 120th three-dimensional point.
- the three-dimensional data encoding device may subtract the value 1 from the value [j] [i] and entropy encode the obtained value.
- the three-dimensional data decoding apparatus restores the coding coefficient by adding the value 1 to the value [j] [i] after the entropy decoding.
- for example, the three-dimensional data coding apparatus may apply this subtraction (1) when attribute_dimension is 1, or (2) when attribute_dimension is 2 or more and the values of all dimensions are equal.
- for example, when attribute_dimension is 1, the three-dimensional data coding device calculates the value by subtracting the value 1 from the coding coefficient, and encodes the calculated value.
- the three-dimensional data decoding device adds the value 1 to the decoded value to calculate the coding coefficient.
- the three-dimensional data coding apparatus encodes a value 9 obtained by subtracting the value 1 from the coding coefficient value 10.
- the three-dimensional data decoding device adds the value 1 to the decoded value 9 to calculate the value 10 of the coding coefficient.
- for example, when attribute_dimension is 3 and the quantized coding coefficients of the R, G, and B components are all equal, the three-dimensional data encoding device subtracts the value 1 from each coding coefficient and encodes the resulting values.
- the three-dimensional data decoding device adds 1 to each component of the decoded (0, 0, 0) to calculate the coding coefficients (1, 1, 1).
- on the other hand, when the coding coefficients are, for example, (2, 1, 2), not all components are equal, so the three-dimensional data coding device encodes (2, 1, 2) as it is.
- the three-dimensional data decoding device uses the decoded (2, 1, 2) as it is as the coding coefficient.
- value [0] [i] shown in FIG. 79 indicates the coding coefficient after the quantization of the first-dimensional attribute information of the i-th three-dimensional point.
- for example, when the layer L (layerL) to which the coding coefficient belongs is smaller than the threshold value TH_layer, the code amount may be reduced by adding only the attribute information of the first dimension to the bitstream (the attribute information of the second and subsequent dimensions is not added to the bitstream).
- the three-dimensional data encoding device may switch the calculation method of the ZeroCnt value according to the value of attribute_dimension.
- FIG. 80 is a diagram showing an example of the coding coefficient and ZeroCnt in this case.
- the three-dimensional data encoding device counts the number of consecutive coding coefficients in which the R, G, and B components are all 0, and adds the counted number to the bit stream as ZeroCnt.
- the three-dimensional data encoding device may calculate ZeroCnt for each dimension even when attribute_dimension is 2 or more, and add the calculated ZeroCnt to the bit stream.
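The all-components-zero counting rule described above can be sketched as follows. `zerocnt_tokens` is a hypothetical helper name for illustration; a real encoder would arithmetic-code the resulting symbols rather than return them as a list.

```python
def zerocnt_tokens(coeffs):
    """Encode a list of per-point coefficient tuples as (ZeroCnt, coeff) pairs.

    A coefficient counts as zero only when ALL of its components are 0,
    mirroring the all-of-R,G,B-zero rule described above.
    """
    tokens, zerocnt = [], 0
    for c in coeffs:
        if all(v == 0 for v in c):
            zerocnt += 1                 # extend the current zero run
        else:
            tokens.append((zerocnt, c))  # emit run length, then the value
            zerocnt = 0
    if zerocnt:                          # flush a trailing run of zeros
        tokens.append((zerocnt, None))
    return tokens

coeffs = [(1, 2, 3), (0, 0, 0), (0, 0, 0), (4, 0, 5), (0, 0, 0)]
tokens = zerocnt_tokens(coeffs)
# → [(0, (1, 2, 3)), (2, (4, 0, 5)), (1, None)]
```

Note that (4, 0, 5) is not counted as a zero even though one component is 0, since the rule requires all components to be 0.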
- FIG. 81 is a flowchart of the three-dimensional data coding process according to the present embodiment.
- the three-dimensional data encoding device encodes the position information (geometry) (S6601).
- a three-dimensional data encoding device encodes using an octree representation.
- the three-dimensional data encoding device converts the attribute information (S6602). For example, when the position of a three-dimensional point changes due to quantization or the like after coding the position information, the three-dimensional data coding device reassigns the attribute information of the original three-dimensional point to the changed three-dimensional point.
- the three-dimensional data encoding device may interpolate the value of the attribute information according to the amount of change in position and reassign it. For example, the three-dimensional data encoding device detects N three-dimensional points before the change that are close to the three-dimensional position after the change, performs weighted averaging of the attribute information values of the N three-dimensional points based on the distances from the three-dimensional position after the change to each of the N three-dimensional points, and sets the obtained value as the value of the attribute information of the changed three-dimensional point. Further, when two or more three-dimensional points are changed to the same three-dimensional position due to quantization or the like, the three-dimensional data encoding device may assign, as the value of the attribute information after the change, the average value of the attribute information of the two or more three-dimensional points before the change.
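The N-nearest-point weighted averaging above can be sketched as follows. The excerpt only states that the average is weighted based on distance, so the inverse-distance weighting and the helper name `reassign_attribute` are assumptions for illustration.

```python
def reassign_attribute(new_pos, candidates, n=3):
    """Assign an attribute to a moved point as the inverse-distance
    weighted average of its N nearest original points.

    candidates: list of (position, attribute_value) pairs from the
    original (pre-change) point cloud.
    """
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    nearest = sorted(candidates, key=lambda c: dist(new_pos, c[0]))[:n]
    eps = 1e-9  # a point exactly at new_pos dominates (avoid divide-by-zero)
    weights = [1.0 / (dist(new_pos, p) + eps) for p, _ in nearest]
    total = sum(weights)
    return sum(w * a for w, (_, a) in zip(weights, nearest)) / total

points = [((0, 0, 0), 10.0), ((2, 0, 0), 20.0),
          ((0, 2, 0), 30.0), ((9, 9, 9), 99.0)]
attr = reassign_attribute((1, 0, 0), points, n=3)
```

The far-away point (9, 9, 9) is excluded by the N-nearest selection, and the two closest points contribute most of the weight.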
- the three-dimensional data encoding device encodes the attribute information (S6603).
- when encoding a plurality of attribute information, the three-dimensional data coding apparatus may encode the plurality of attribute information in order.
- for example, when the color and the reflectance are encoded, the three-dimensional data encoding device generates a bit stream in which the reflectance coding result is added after the color coding result.
- the coding results of the plurality of attribute information may be added to the bit stream in any order.
- the three-dimensional data encoding device may add information indicating the start location of the encoded data of each attribute information in the bit stream to the header or the like.
- the three-dimensional data decoding device can selectively decode the attribute information that needs to be decoded, so that the decoding process of the attribute information that does not need to be decoded can be omitted. Therefore, the processing amount of the three-dimensional data decoding device can be reduced.
- the three-dimensional data coding apparatus may encode a plurality of attribute information in parallel and integrate the coding results into one bit stream. As a result, the three-dimensional data encoding device can encode a plurality of attribute information at high speed.
- FIG. 82 is a flowchart of the attribute information coding process (S6603).
- the three-dimensional data coding apparatus generates coding coefficients from the attribute information by Haar transform (S6611).
- the three-dimensional data coding device applies quantization to the coding coefficient (S6612).
- the three-dimensional data coding apparatus generates coding attribute information (bit stream) by coding the coding coefficient after quantization (S6613).
- the three-dimensional data coding device applies inverse quantization to the coding coefficient after quantization (S6614).
- next, the three-dimensional data coding apparatus decodes the attribute information by applying the inverse Haar transform to the coding coefficients after the inverse quantization (S6615). For example, the decoded attribute information is referenced in subsequent coding.
- FIG. 83 is a flowchart of the coding coefficient coding process (S6613).
- the three-dimensional data coding apparatus converts the coding coefficient from a signed integer value to an unsigned integer value (S6621).
- a three-dimensional data encoding device converts a signed integer value into an unsigned integer value as follows. If the signed integer value Ta1q is less than 0, the unsigned integer value is set to -1 - (2 × Ta1q). When the signed integer value Ta1q is 0 or more, the unsigned integer value is set to 2 × Ta1q. If the coding coefficient cannot take a negative value, the three-dimensional data coding device may encode the coding coefficient as it is as an unsigned integer value.
- the three-dimensional data coding apparatus determines whether the value of the coding coefficient to be processed is zero (S6623). When the value of the coding coefficient to be processed is zero (Yes in S6623), the three-dimensional data coding device increments ZeroCnt by 1 (S6624) and returns to step S6622.
- when the value of the coding coefficient to be processed is not zero (No in S6623), the three-dimensional data encoding device encodes ZeroCnt and resets ZeroCnt to 0 (S6625). Further, the three-dimensional data coding apparatus arithmetically encodes the coding coefficient to be processed (S6626), and returns to step S6622. For example, the three-dimensional data encoding device performs binary arithmetic coding. Further, the three-dimensional data coding apparatus may subtract the value 1 from the coding coefficient and encode the obtained value.
- steps S6623 to S6626 are repeated for each coding coefficient. Further, when all the coding coefficients have been processed (Yes in S6622), the three-dimensional data coding apparatus ends the processing.
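The loop in steps S6622 through S6626, together with the signed-to-unsigned mapping of step S6621, can be sketched as a single pass. The symbol list stands in for the arithmetic coder, which this sketch omits.

```python
def to_unsigned(q):
    """Signed-to-unsigned mapping described in step S6621."""
    return -1 - 2 * q if q < 0 else 2 * q

def encode_coefficients(coeffs):
    """One pass of the ZeroCnt run-length loop (S6622-S6626).

    A ('zerocnt', n) symbol is emitted before every nonzero value; a
    real codec would arithmetic-code these symbols instead.
    """
    symbols, zerocnt = [], 0
    for u in (to_unsigned(q) for q in coeffs):
        if u == 0:
            zerocnt += 1                          # S6624: extend zero run
        else:
            symbols.append(('zerocnt', zerocnt))  # S6625: emit and reset
            symbols.append(('value', u))          # S6626: emit coefficient
            zerocnt = 0
    if zerocnt:                                   # flush a trailing run
        symbols.append(('zerocnt', zerocnt))
    return symbols

syms = encode_coefficients([0, 0, 3, -1, 0, 2])
```

Here the signed inputs map to unsigned values 0, 0, 6, 1, 0, 4, so the output interleaves run lengths 2, 0, 1 with the nonzero values 6, 1, 4.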
- FIG. 84 is a flowchart of the three-dimensional data decoding process according to the present embodiment.
- the three-dimensional data decoding device decodes the position information (geometry) from the bit stream (S6631). For example, a three-dimensional data decoding device decodes using an octree representation.
- the three-dimensional data decoding device decodes the attribute information from the bit stream (S6632). For example, when decoding a plurality of attribute information, the three-dimensional data decoding device may decode the plurality of attribute information in order. For example, when decoding a color and a reflectance as attribute information, the three-dimensional data decoding apparatus decodes the color coding result and the reflectance coding result in the order in which they were added to the bit stream. For example, in a bitstream in which the color coding result is followed by the reflectance coding result, the three-dimensional data decoding device decodes the color coding result and then decodes the reflectance coding result. The three-dimensional data decoding device may decode the coding results of the attribute information added to the bit stream in any order.
- the three-dimensional data decoding device may acquire information indicating the start location of the encoded data of each attribute information in the bit stream by decoding the header or the like.
- the three-dimensional data decoding device can selectively decode the attribute information that needs to be decoded, so that the decoding process of the attribute information that does not need to be decoded can be omitted. Therefore, the processing amount of the three-dimensional data decoding device can be reduced.
- the three-dimensional data decoding device may decode a plurality of attribute information in parallel and integrate the decoding results into one three-dimensional point cloud. As a result, the three-dimensional data decoding device can decode a plurality of attribute information at high speed.
- FIG. 85 is a flowchart of the attribute information decoding process (S6632).
- the three-dimensional data decoding device decodes the coding coefficient from the bit stream (S6641).
- the three-dimensional data decoding device applies inverse quantization to the coding coefficient (S6642).
- the three-dimensional data decoding apparatus decodes the attribute information by applying the inverse Haar transformation to the coding coefficient after the inverse quantization (S6643).
- FIG. 86 is a flowchart of the coding coefficient decoding process (S6641).
- the three-dimensional data decoding device decodes ZeroCnt from the bit stream (S6651). If all the coding coefficients have not been processed (No in S6652), the three-dimensional data decoding device determines whether ZeroCnt is greater than 0 (S6653).
- if ZeroCnt is greater than 0 (Yes in S6653), the three-dimensional data decoding device sets the coding coefficient of the processing target to 0 (S6654). Next, the three-dimensional data decoding device subtracts 1 from ZeroCnt (S6655) and returns to step S6652.
- if ZeroCnt is 0 (No in S6653), the three-dimensional data decoding device decodes the coding coefficient of the processing target (S6656). For example, a three-dimensional data decoding device uses binary arithmetic decoding. Further, the three-dimensional data decoding device may add the value 1 to the decoded coding coefficient.
- the three-dimensional data decoding device decodes ZeroCnt, sets the obtained value in ZeroCnt (S6657), and returns to step S6652.
- the processes of steps S6653 to S6657 are repeated for each coding coefficient. Further, when all the coding coefficients have been processed (Yes in S6652), the three-dimensional data decoding apparatus converts the plurality of decoded coding coefficients from unsigned integer values to signed integer values (S6658).
- the three-dimensional data decoding apparatus may convert the decoded coding coefficient from an unsigned integer value to a signed integer value as described below.
- for example, when the LSB (least significant bit) of the decoded unsigned integer value Ta1u is 1, the signed integer value Ta1q is set to -((Ta1u + 1) >> 1). When the LSB of Ta1u is 0, the signed integer value Ta1q is set to (Ta1u >> 1).
- the three-dimensional data decoding device may use the decoded coding coefficient as it is as a signed integer value.
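The unsigned-to-signed conversion of step S6658 can be sketched as follows; this sketch checks it against the encoder-side mapping described earlier (negative Ta1q maps to -1 - 2 × Ta1q, non-negative to 2 × Ta1q), so the two are exact inverses.

```python
def to_signed(u):
    """Odd unsigned values (LSB = 1) decode to negative coefficients,
    even values to non-negative ones."""
    return -((u + 1) >> 1) if u & 1 else u >> 1

# Round trip against the encoder-side mapping for a range of values.
for q in range(-5, 6):
    u = -1 - 2 * q if q < 0 else 2 * q   # forward mapping from the encoder
    assert to_signed(u) == q
```

This is the zigzag mapping commonly used to feed signed residuals to codecs that expect non-negative symbols.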
- FIG. 87 is a block diagram of the attribute information coding unit 6600 included in the three-dimensional data coding device.
- the attribute information coding unit 6600 includes a sorting unit 6601, a Haar conversion unit 6602, a quantization unit 6603, an inverse quantization unit 6604, an inverse Haar conversion unit 6605, a memory 6606, and an arithmetic coding unit 6607.
- the sort unit 6601 generates Morton codes using the position information of the three-dimensional points, and sorts a plurality of three-dimensional points in the order of Morton codes.
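The Morton-order sort performed by the sort unit 6601 can be sketched as follows; `morton_code` is an illustrative helper that interleaves the coordinate bits, and the 10-bit coordinate width is an arbitrary choice for this sketch.

```python
def morton_code(x, y, z, bits=10):
    """Interleave the bits of x, y, z into a Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)        # x occupies bits 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)    # y occupies bits 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)    # z occupies bits 2, 5, 8, ...
    return code

points = [(3, 1, 0), (0, 0, 0), (1, 1, 1), (2, 0, 2)]
ordered = sorted(points, key=lambda p: morton_code(*p))
```

Sorting by Morton code places spatially close points near each other in the sequence, which helps the subsequent Haar transform exploit local attribute correlation.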
- the Haar conversion unit 6602 generates coding coefficients by applying the Haar transform to the attribute information.
- the quantization unit 6603 quantizes the coding coefficient of the attribute information.
- the dequantization unit 6604 dequantizes the coding coefficient after quantization.
- the inverse Haar conversion unit 6605 applies the inverse Haar transform to the coding coefficients.
- the memory 6606 stores the values of the attribute information of the plurality of decoded three-dimensional points. For example, the attribute information of the decoded three-dimensional point stored in the memory 6606 may be used for the prediction of the unencoded three-dimensional point and the like.
- the arithmetic coding unit 6607 calculates ZeroCnt from the coding coefficient after quantization, and arithmetically encodes ZeroCnt. In addition, the arithmetic coding unit 6607 arithmetically encodes the non-zero coding coefficient after quantization. The arithmetic coding unit 6607 may binarize the coding coefficient before arithmetic coding. Further, the arithmetic coding unit 6607 may generate and encode various header information.
- FIG. 88 is a block diagram of the attribute information decoding unit 6610 included in the three-dimensional data decoding device.
- the attribute information decoding unit 6610 includes an arithmetic decoding unit 6611, an inverse quantization unit 6612, an inverse Haar conversion unit 6613, and a memory 6614.
- the arithmetic decoding unit 6611 arithmetically decodes ZeroCnt and the coding coefficient included in the bit stream.
- the arithmetic decoding unit 6611 may decode various header information.
- the dequantization unit 6612 dequantizes the arithmetically decoded coding coefficient.
- the inverse Haar conversion unit 6613 applies the inverse Haar transform to the coding coefficients after the inverse quantization.
- the memory 6614 stores the values of the attribute information of the plurality of decoded three-dimensional points. For example, the attribute information of the decoded three-dimensional point stored in the memory 6614 may be used for predicting the undecoded three-dimensional point.
- the coding order is not necessarily limited to the one described above. For example, a method of scanning the coding coefficients after the Haar transform from the upper layer to the lower layer may be used. In this case as well, the three-dimensional data encoding device may encode the number of consecutive values of 0 as ZeroCnt.
- the three-dimensional data coding apparatus may switch whether or not to use the coding method using ZeroCnt described in the present embodiment for each WLD, SPC, or volume.
- the three-dimensional data coding apparatus may add information indicating whether or not the coding method using ZeroCnt is applied to the header information.
- the three-dimensional data decoding device can appropriately perform decoding.
- a three-dimensional data coding device counts the number of occurrences of a coding coefficient having a value of 0 for one volume.
- the three-dimensional data encoding device applies the method using ZeroCnt to the next volume when the count value exceeds a predetermined threshold value, and does not apply the method using ZeroCnt to the next volume when the count value is equal to or less than the threshold value.
- as a result, the three-dimensional data coding apparatus can appropriately switch whether or not to apply the coding method using ZeroCnt according to the characteristics of the three-dimensional points to be coded, so the coding efficiency can be improved.
- Slices and tiles are used to divide the point cloud data based on the characteristics and position of the point cloud data.
- the quality required for each divided point cloud data may differ due to hardware restrictions and real-time processing requirements.
- for example, slice data including plants is not so important, so its resolution (quality) can be reduced by quantization.
- on the other hand, important slice data can be given high resolution (quality) by setting the quantization value to a low value. Quantization parameters are used to enable such control of the quantization value.
- FIG. 89 is a diagram for explaining the processing of the quantization unit 5323 for quantizing the data and the inverse quantization unit 5333 for dequantizing the quantization data.
- the quantization unit 5323 quantizes the data using a scale, that is, performs the process using Equation G1, to calculate the quantized data.
- the inverse quantization unit 5333 dequantizes the quantized data using the scale, that is, performs the process using Equation G2, to calculate the inverse-quantized data.
- the scale used for quantization and the quantization value (QP value) have, for example, the following relationship.
- Quantization value (QP value) = log(scale) (Equation G3)
- Quantization value (QP value) = reference value (default value) + quantization delta (difference information) (Equation G4)
- the quantization value (QP value), the reference value (default value), and the quantization delta (difference information) are collectively referred to as quantization parameters.
- the quantization value is a value based on the default value, calculated by adding the quantization delta to the default value. If the quantization value is smaller than the default value, the quantization delta is a negative value. If the quantization value is larger than the default value, the quantization delta is a positive value. If the quantization value is equal to the default value, the quantization delta is zero. When the quantization delta is 0, the quantization delta may be omitted.
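Equations G1 through G4 can be sketched together as follows. Equation G3 only states that the QP value relates to the scale logarithmically; the base and step size used below (scale = 2^(QP/6), a common video-codec convention) are assumptions for this sketch, as are the concrete QP numbers.

```python
def qp_to_scale(qp):
    """QP relates to scale logarithmically (Equation G3); the 2**(qp/6)
    form here is an assumed convention, not taken from the source."""
    return 2.0 ** (qp / 6.0)

def quantize(data, qp):
    """Equation G1: quantized data = data / scale (then rounded)."""
    return round(data / qp_to_scale(qp))

def dequantize(qdata, qp):
    """Equation G2: inverse-quantized data = quantized data * scale."""
    return qdata * qp_to_scale(qp)

# Per-slice QP = reference (default) value + quantization delta (Equation G4).
default_qp, delta = 24, -6     # a negative delta means finer quantization
slice_qp = default_qp + delta  # 18 -> scale 8.0
q = quantize(101.0, slice_qp)
restored = dequantize(q, slice_qp)
```

The reconstruction error is bounded by half the scale, so lowering the QP of an important slice (via a negative delta) directly tightens that bound.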
- FIG. 91 is a block diagram showing a configuration of a first coding unit 5300 included in the three-dimensional data coding device according to the present embodiment.
- FIG. 92 is a block diagram showing the configuration of the division portion 5301 according to the present embodiment.
- FIG. 93 is a block diagram showing the configuration of the position information coding unit 5302 and the attribute information coding unit 5303 according to the present embodiment.
- the first coding unit 5300 generates coded data (coded stream) by encoding the point cloud data by the first coding method (GPCC (Geometry based PCC)).
- the first coding unit 5300 includes a dividing unit 5301, a plurality of position information coding units 5302, a plurality of attribute information coding units 5303, an additional information coding unit 5304, and a multiplexing unit 5305.
- the division unit 5301 generates a plurality of division data by dividing the point cloud data. Specifically, the division unit 5301 generates a plurality of division data by dividing the space of the point cloud data into a plurality of subspaces. Here, the subspace is one of tiles and slices, or a combination of tiles and slices. More specifically, the point cloud data includes position information, attribute information, and additional information. The division unit 5301 divides the position information into a plurality of division position information, and divides the attribute information into a plurality of division attribute information. In addition, the division unit 5301 generates additional information regarding the division.
- the division portion 5301 includes a tile division portion 5311 and a slice division portion 5312.
- the tile dividing unit 5311 divides the point cloud into tiles.
- the tile division unit 5311 may determine the quantization value used for each divided tile as tile addition information.
- the slice division unit 5312 further divides the tile obtained by the tile division unit 5311 into slices.
- the slice division unit 5312 may determine the quantization value used for each divided slice as slice addition information.
- the plurality of position information coding units 5302 generate a plurality of coded position information by encoding the plurality of divided position information. For example, the plurality of position information coding units 5302 process a plurality of divided position information in parallel.
- the position information coding unit 5302 includes a quantized value calculation unit 5321 and an entropy coding unit 5322.
- the quantization value calculation unit 5321 acquires the quantization value (quantization parameter) of the coded division position information.
- the entropy coding unit 5322 calculates the quantization position information by quantizing the division position information using the quantization value (quantization parameter) acquired by the quantization value calculation unit 5321.
- the plurality of attribute information coding units 5303 generate a plurality of coded attribute information by encoding the plurality of divided attribute information. For example, the plurality of attribute information coding units 5303 process a plurality of divided attribute information in parallel.
- the attribute information coding unit 5303 includes a quantized value calculation unit 5331 and an entropy coding unit 5332.
- the quantization value calculation unit 5331 acquires the quantization value (quantization parameter) of the coded division attribute information.
- the entropy coding unit 5332 calculates the quantization attribute information by quantizing the division attribute information using the quantization value (quantization parameter) acquired by the quantization value calculation unit 5331.
- the additional information coding unit 5304 generates coded additional information by encoding the additional information included in the point cloud data and the additional information related to the data division generated at the time of division by the division unit 5301.
- the multiplexing unit 5305 generates coded data (a coded stream) by multiplexing the plurality of coded position information, the plurality of coded attribute information, and the coded additional information, and transmits the generated coded data.
- the coded additional information is used at the time of decoding.
- in this example, the number of position information coding units 5302 and the number of attribute information coding units 5303 are each two, but each may be one, or three or more. Further, the plurality of divided data may be processed in parallel in the same chip, as with a plurality of cores in a CPU, may be processed in parallel by the cores of a plurality of chips, or may be processed in parallel by a plurality of cores of a plurality of chips.
- FIG. 94 is a block diagram showing the configuration of the first decoding unit 5340.
- FIG. 95 is a block diagram showing the configurations of the position information decoding unit 5342 and the attribute information decoding unit 5343.
- the first decoding unit 5340 restores the point cloud data by decoding the coded data (coded stream) generated by encoding the point cloud data by the first coding method (GPCC).
- the first decoding unit 5340 includes a demultiplexing unit 5341, a plurality of position information decoding units 5342, a plurality of attribute information decoding units 5343, an additional information decoding unit 5344, and a coupling unit 5345.
- the demultiplexing unit 5341 generates a plurality of coded position information, a plurality of coded attribute information, and coded additional information by demultiplexing the coded data (coded stream).
- the plurality of position information decoding units 5342 generate a plurality of quantized position information by decoding the plurality of coded position information. For example, the plurality of position information decoding units 5342 process a plurality of coded position information in parallel.
- the position information decoding unit 5342 includes a quantization value calculation unit 5351 and an entropy decoding unit 5352.
- the quantization value calculation unit 5351 acquires the quantization value of the quantization position information.
- the entropy decoding unit 5352 calculates the position information by dequantizing the quantization position information using the quantization value acquired by the quantization value calculation unit 5351.
- the plurality of attribute information decoding units 5343 generate a plurality of divided attribute information by decoding the plurality of coded attribute information. For example, the plurality of attribute information decoding units 5343 process a plurality of coded attribute information in parallel.
- the attribute information decoding unit 5343 includes a quantization value calculation unit 5361 and an entropy decoding unit 5362.
- the quantization value calculation unit 5361 acquires the quantization value of the quantization attribute information.
- the entropy decoding unit 5362 calculates the attribute information by dequantizing the quantization attribute information using the quantization value acquired by the quantization value calculation unit 5361.
- the plurality of additional information decoding units 5344 generate additional information by decoding the coded additional information.
- the joining unit 5345 generates position information by combining a plurality of divided position information using additional information.
- the coupling unit 5345 generates attribute information by combining a plurality of division attribute information using additional information. For example, the joining unit 5345 first generates the point cloud data corresponding to the tile by combining the decoded point cloud data for the slice using the slice addition information. Next, the joining unit 5345 restores the original point cloud data by joining the point cloud data corresponding to the tile using the tile addition information.
- in this example, the number of position information decoding units 5342 and the number of attribute information decoding units 5343 are each two, but each may be one, or three or more. Further, the plurality of divided data may be processed in parallel in the same chip, as with a plurality of cores in a CPU, may be processed in parallel by the cores of a plurality of chips, or may be processed in parallel by a plurality of cores of a plurality of chips.
- FIG. 96 is a flowchart showing an example of a process related to determination of a quantization value (Quantization Parameter value: QP value) in coding of position information (Geometry) or coding of attribute information (Attribute).
- the QP value is determined in consideration of the coding efficiency for each data unit of the position information or each data unit of the attribute information that constitutes the PCC frame, for example.
- the QP value is determined in the divided data unit in consideration of the coding efficiency of the divided data unit. Further, the QP value may be determined in units of data before division.
- the three-dimensional data coding device determines the QP value used for coding the position information (S5301).
- the three-dimensional data coding apparatus may determine the QP value for each of the plurality of divided slices based on a predetermined method. Specifically, the three-dimensional data encoding device determines the QP value based on the characteristics or quality of the data of the position information.
- the three-dimensional data encoding device may, for example, determine the density of the point cloud data for each data unit, that is, the number of points per unit area belonging to a slice, and determine a value corresponding to the density of the point cloud data as the QP value.
- the three-dimensional data encoding device may also determine the corresponding value as the QP value based on the number of points in the point cloud data, the distribution of the points, the bias of the points, the feature amount obtained from the point information, the number of feature points, or a recognized object.
- further, the three-dimensional data encoding device may determine an object in the position information of the map and determine the QP value based on the object, or may determine the QP value based on information or a feature amount obtained by projecting the three-dimensional point cloud in two dimensions.
- the corresponding QP value may be stored in the memory in advance as a table associated with the density of the point cloud data, the number of points, the distribution of points, or the bias of points.
- the corresponding QP value may be stored in the memory in advance as a table associated with the feature amount or the number of feature points obtained from the point information, or with an object recognized based on the point information. Further, the corresponding QP value may be determined based on the result of simulating the coding rate or the like with various QP values when encoding the position information of the point cloud data.
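The table lookup suggested above can be sketched as follows. The density thresholds and QP values are purely illustrative assumptions; a lower QP means finer quantization (higher quality), as described earlier.

```python
# Hypothetical precomputed table: QP keyed by point-density buckets.
# Entries are (minimum points per unit area, QP value), ascending.
DENSITY_QP_TABLE = [
    (0.0, 40),    # sparse slices tolerate coarse quantization
    (10.0, 32),
    (100.0, 24),  # dense slices get a lower QP (higher quality)
]

def qp_for_density(points_per_unit_area):
    """Pick the QP of the last bucket whose threshold the density reaches."""
    qp = DENSITY_QP_TABLE[0][1]
    for threshold, table_qp in DENSITY_QP_TABLE:
        if points_per_unit_area >= threshold:
            qp = table_qp
    return qp
```

The same bucket structure could key on the number of feature points or a recognized object class instead of density.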
- the three-dimensional data encoding device determines the reference value (default value) and the difference information (quantization delta) of the QP value of the position information (S5302). Specifically, the three-dimensional data encoding device determines the reference value and the difference information to be transmitted from the determined QP value by a predetermined method, and sets (adds) the determined reference value and difference information to at least one of the additional information and the data header.
- the three-dimensional data encoding device determines the QP value used for encoding the attribute information (S5303).
- the three-dimensional data coding apparatus may determine the QP value for each of the plurality of divided slices based on a predetermined method. Specifically, the three-dimensional data encoding device determines the QP value based on the characteristics or quality of the data of the attribute information.
- the three-dimensional data encoding device may determine the QP value for each data unit, for example, based on the characteristics of the attribute information. Color characteristics include, for example, luminance, chromaticity, saturation, histograms of these, color continuity, and the like.
- when the attribute information is reflectance, the QP value may be determined according to information based on the reflectance.
- the three-dimensional data encoding device may determine a QP value that gives high quality for the point cloud data constituting an object for which a face is detected. As described above, the three-dimensional data encoding device may determine the QP value for the point cloud data constituting an object according to the type of the object.
- the QP value based on each attribute information may be determined independently for each attribute information, the QP values of a plurality of attribute information may be determined based on one of them, or the QP values of the plurality of attribute information may be determined using the plurality of attribute information.
- the three-dimensional data encoding device determines the reference value (default value) and the difference information (quantization delta) of the QP value of the attribute information (S5304). Specifically, the three-dimensional data encoding device determines the reference value and the difference information to be transmitted by using the determined QP value and a predetermined method, and sets (adds) the determined reference value and difference information to at least one of the additional information and the data header.
- the three-dimensional data encoding device quantizes and encodes the position information and the attribute information based on the determined QP values of the position information and the attribute information, respectively (S5305).
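A minimal sketch of quantization and inverse quantization driven by a QP value (S5305). The QP-to-step mapping 2^((QP-4)/6), borrowed from common video codecs, is an assumption; the text does not fix a particular formula:

```python
def qp_to_step(qp: int) -> float:
    # Assumed mapping from QP to quantization step size.
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(values, qp):
    # Quantize position or attribute values with the determined QP.
    step = qp_to_step(qp)
    return [round(v / step) for v in values]

def dequantize(levels, qp):
    # Inverse quantization on the decoding side.
    step = qp_to_step(qp)
    return [level * step for level in levels]
```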
- the QP value of the position information and the attribute information may be determined based on the position information, may be determined based on the attribute information, or may be determined based on the position information and the attribute information.
- the QP values of the position information and the attribute information may be adjusted in consideration of the balance between the quality of the position information and the quality of the attribute information in the point cloud data.
- the QP values of the position information and the attribute information may be determined so that the quality of the position information is set high and the quality of the attribute information is set lower than the quality of the position information.
- the QP value of the attribute information may be determined so as to satisfy the restriction that it be equal to or greater than the QP value of the position information.
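The restriction above amounts to a simple clamp; the function name is illustrative:

```python
def constrain_attribute_qp(qp_attribute: int, qp_position: int) -> int:
    # A larger QP means coarser quantization, so forcing the attribute
    # QP to be at least the position QP keeps the attribute quality at
    # or below the position quality.
    return max(qp_attribute, qp_position)
```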
- the QP value may be adjusted so that the encoded data is encoded so as to be within a predetermined rate range.
- for example, when the code amount is likely to exceed a predetermined rate in the coding of the previous data unit, that is, when the difference from the predetermined rate is less than a first difference, the QP value may be adjusted so that the coding quality is lowered and the code amount of the data unit is reduced.
- conversely, when the difference from the predetermined rate is larger than a second difference that is larger than the first difference, that is, when there is a sufficiently large margin, the QP value may be adjusted so that the coding quality of the data unit is improved. The adjustment between data units may be performed, for example, between PCC frames, between tiles, or between slices.
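The two-threshold adjustment between data units can be sketched as follows, assuming an illustrative step of ±1 per adjustment; the function and parameter names are hypothetical:

```python
def adjust_qp(qp, code_amount, target_rate, first_diff, second_diff):
    """Adjust the QP for the next data unit (PCC frame, tile, or slice).

    margin is the remaining room up to the predetermined rate.
    """
    margin = target_rate - code_amount
    if margin < first_diff:
        return qp + 1   # lower the quality so the code amount shrinks
    if margin > second_diff:
        return qp - 1   # enough headroom: raise the quality
    return qp           # otherwise leave the QP unchanged
```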
- the QP value of the attribute information may be adjusted based on the coding rate of the position information.
- the processing order of the processing related to the position information and the processing related to the attribute information may be reversed, or the two may be performed in parallel.
- the processing in slice units has been described as an example, but processing in tile units or other data units can be performed in the same manner as in slice units. That is, the slices in the flowchart of FIG. 96 can be read as tiles or other data units.
- FIG. 97 is a flowchart showing an example of decoding processing of position information and attribute information.
- the three-dimensional data decoding apparatus acquires the reference value and the difference information indicating the QP value of the position information and the reference value and the difference information indicating the QP value of the attribute information (S5311). Specifically, the three-dimensional data decoding device analyzes one or both of the transmitted metadata and the header of the encoded data, and acquires the reference value and the difference information for deriving the QP value.
- the three-dimensional data decoding device derives a QP value based on a predetermined method using the acquired reference value and difference information (S5312).
- the three-dimensional data decoding device acquires the quantized position information and dequantizes the quantized position information using the derived QP value to decode the position information (S5313).
- the three-dimensional data decoding device acquires the quantization attribute information and dequantizes the quantization attribute information using the derived QP value to decode the attribute information (S5314).
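Steps S5311 to S5314 can be sketched as follows. The derivation of the QP value follows the reference-plus-difference rule of the text, while the QP-to-step formula is an assumed stand-in:

```python
def derive_qp(reference: int, deltas) -> int:
    # S5312: the QP value is the reference value plus the sum of the
    # transmitted difference information.
    return reference + sum(deltas)

def dequantize(levels, qp):
    # S5313/S5314: inverse quantization with the derived QP. The step
    # formula 2^((QP-4)/6) is an illustrative assumption.
    step = 2.0 ** ((qp - 4) / 6.0)
    return [level * step for level in levels]
```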
- FIG. 98 is a diagram for explaining the first example of the transmission method of the quantization parameter.
- FIG. 98A is a diagram showing an example of the relationship between QP values.
- Q G and Q A indicate the absolute value of the QP value used for encoding the position information and the absolute value of the QP value used for encoding the attribute information, respectively.
- Q G is an example of a first quantization parameter used to quantize the position information of each of a plurality of three-dimensional points.
- ⁇ (Q A , Q G ) indicates the difference information indicating the difference from the Q G used for deriving the Q A. That is, Q A is derived using Q G and ⁇ (Q A , Q G ). In this way, the QP value is transmitted separately as a reference value (absolute value) and a difference information (relative value). Further, in decoding, a desired QP value is derived from the transmitted reference value and difference information.
- FIG. 98 (b) is a diagram showing a first example of the relationship between the reference value of each QP value and the difference information.
- FIG. 98 (c) is a diagram showing a first example of the transmission order of the QP value, the position information, and the attribute information.
- the QP value is roughly divided into a PCC frame unit QP value (frame QP) and a data unit QP value (data QP) for each position information and each attribute information.
- the QP value of the data unit is the QP value used for coding determined in step S5301 of FIG. 96.
- Q G, which is the QP value used for encoding the position information in PCC frame units, is used as the reference value, and the QP value of each data unit is generated and transmitted as difference information indicating its difference from Q G.
- "1." Q G, the QP value used for encoding the position information in the PCC frame, is sent via GPS as the reference value.
- "2." Q A, the QP value used for encoding the attribute information in the PCC frame, is sent via APS as difference information indicating the difference from Q G.
- "3." and "5." Q Gs1 and Q Gs2, the QP values used for encoding the position information in the slice data, are sent in the headers of the encoded position information as difference information indicating the differences from Q G.
- "4." and "6." Q As1 and Q As2, the QP values used for encoding the attribute information in the slice data, are sent in the headers of the encoded attribute information as difference information indicating the differences from Q A.
- the information used for deriving the frame QP is described in the metadata (GPS, APS) related to the frame, and the information used for deriving the data QP is described in the metadata (the header of the encoded data) related to the data.
- the data QP is generated and transmitted as difference information indicating the difference from the frame QP. Therefore, the amount of data in the data QP can be reduced.
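The reference-plus-difference representation can be sketched as follows; the function names are illustrative:

```python
def make_difference_info(frame_qp: int, data_qps):
    # Encoder side: each data-unit QP is sent as its difference from
    # the frame QP, so only small values need to be transmitted.
    return [dq - frame_qp for dq in data_qps]

def recover_data_qps(frame_qp: int, deltas):
    # Decoder side: add each difference back to the frame QP.
    return [frame_qp + d for d in deltas]
```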
- the first decoding unit 5340 refers to the metadata indicated by the arrow in FIG. 98 (c) in each coded data, and acquires the reference value and the difference information corresponding to the coded data. Then, the first decoding unit 5340 derives the QP value corresponding to the coded data to be decoded based on the acquired reference value and the difference information.
- the first decoding unit 5340 acquires the reference information "1." and the difference information "2." and "6." indicated by the arrows in FIG. 98 (c) from the metadata or the header, and derives the QP value of Q As2 by adding the difference information "2." and "6." to the reference information "1." as shown in the following (Equation G6).
- the point cloud data includes position information and attribute information of 0 or more. That is, the point cloud data may not have attribute information or may have a plurality of attribute information.
- for example, one three-dimensional point may have color information as the attribute information, may have color information and reflectance information, or may have one or more pieces of color information associated with one or more pieces of viewpoint information.
- FIG. 99 is a diagram for explaining a second example of the transmission method of the quantization parameter.
- FIG. 99A is a diagram showing a second example of the relationship between the reference value of each QP value and the difference information.
- FIG. 99B is a diagram showing a second example of the transmission order of the QP value, the position information, and the attribute information.
- Each of the two pieces of color information is represented by a luminance (Luma) Y and color differences (Chroma) Cb and Cr.
- Q Y1, which is the QP value used for encoding the luminance Y1 of the first color, is derived using Q G, which is the reference value, and Δ(Q Y1, Q G) indicating the difference between them.
- the luminance Y1 is an example of the first luminance, and Q Y1 is an example of the second quantization parameter used to quantize the luminance Y1.
- Δ(Q Y1, Q G) is the difference information "2.".
- Q Cb1 and Q Cr1, which are the QP values used for encoding the color differences Cb1 and Cr1 of the first color, are derived using Q Y1 and Δ(Q Cb1, Q Y1) and Δ(Q Cr1, Q Y1) indicating the differences from Q Y1, respectively.
- the color difference Cb1 and Cr1 are examples of the first color difference
- Q Cb1 and Q Cr1 are examples of the third quantization parameter used for quantizing the color difference Cb1 and Cr1 as the first color difference.
- ⁇ (Q Cb1 , Q Y1 ) is the difference information “3.”
- ⁇ (Q Cr1 , Q Y1 ) is the difference information “4.”.
- ⁇ (Q Cb1 , Q Y1 ) and ⁇ (Q Cr1 , Q Y1 ) are examples of the first difference, respectively.
- separate values may be used for Q Cb1 and Q Cr1, or a common value may be used. When a common value is used, one of Q Cb1 and Q Cr1 may be used, and the other need not be used.
- Q Y1D, which is the QP value used for encoding the luminance Y1D of the first color in the slice data, is derived using Q Y1 and Δ(Q Y1D, Q Y1) indicating the difference between them.
- the luminance Y1D of the first color in the slice data is an example of the first luminance of one or more three-dimensional points included in the subspace, and Q Y1D is an example of the fifth quantization parameter used to quantize the luminance Y1D.
- ⁇ (Q Y1D , Q Y1 ) is the difference information “10.”, which is an example of the second difference.
- Q Cb1D and Q Cr1D, which are the QP values used for encoding the color differences Cb1D and Cr1D of the first color in the slice data, are derived using Q Cb1 and Q Cr1 and Δ(Q Cb1D, Q Cb1) and Δ(Q Cr1D, Q Cr1) indicating the differences from them, respectively.
- the color difference Cb1D and Cr1D of the first color in the slice data are examples of the first color difference of one or more three-dimensional points included in the subspace, and Q Cb1D and Q Cr1D are for quantizing the color difference Cb1D and Cr1D. This is an example of the sixth quantization parameter used in.
- ⁇ (Q Cb1D, Q Cb1) is a difference information "11"
- ⁇ (Q Cr1D, Q Cr1 ) is the difference information "12”.
- ⁇ (Q Cb1D, Q Cb1) and ⁇ (Q Cr1D, Q Cr1) is an example of a third difference.
- Q R, which is the QP value used for encoding the reflectance R, is derived using Q G, which is the reference value, and Δ(Q R, Q G) indicating the difference between them.
- Q R is an example of a fourth quantization parameter used to quantize the reflectance R.
- ⁇ (Q R , Q G ) is the difference information “8.”.
- Q RD, which is the QP value used for encoding the reflectance RD in the slice data, is derived using Q R and Δ(Q RD, Q R) indicating the difference between them.
- Δ(Q RD, Q R) is the difference information "16.".
- the difference information "9.” to “16.” indicates the difference information between the data QP and the frame QP.
- the difference information may be set to 0, or may be regarded as 0 by not transmitting the difference information.
- when decoding the color difference Cr2 of the second color, for example, the first decoding unit 5340 acquires the reference information "1." and the difference information "5.", "7.", and "15." indicated by the arrows in FIG. 99 (b) from the metadata or the header, and derives the QP value of the color difference Cr2 by adding the difference information "5.", "7.", and "15." to the reference information "1." as shown in the following (Equation G7).
- FIG. 100 is a diagram for explaining a third example of a method of transmitting quantization parameters.
- FIG. 100A is a diagram showing a third example of the relationship between the reference value of each QP value and the difference information.
- FIG. 100B is a diagram showing a third example of the transmission order of the QP value, the position information, and the attribute information.
- FIG. 100 (c) is a diagram for explaining the intermediate generation value of the difference information in the third example.
- in the third example, after the tiles are divided, the QP value (Q At1) for each tile and the difference information Δ(Q At1, Q A) are generated as intermediate values.
- after the slices are divided, the QP values (Q At1s1, Q At1s2) and the difference information (Δ(Q At1s1, Q At1), Δ(Q At1s2, Q At1)) for each slice are generated.
- the difference information "4.” in (a) of FIG. 100 is derived by the following (formula G8).
- when the first decoding unit 5340 decodes the attribute information At2s1 of slice 1 in tile 2, it acquires the reference information "1." and the difference information "2." and "8." indicated by the arrows in FIG. 100 (b) from the metadata or the header, and derives the QP value of the attribute information At2s1 by adding the difference information "2." and "8." to the reference information "1." as shown in the following (Equation G9).
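The intermediate per-tile values of the third example can be sketched as follows; the numeric values and function names are illustrative:

```python
def slice_delta_from_frame(delta_tile_from_frame, delta_slice_from_tile):
    # (Equation G8)-style bookkeeping: the difference information for a
    # slice relative to the frame QP is the tile's delta plus the
    # slice's delta relative to the tile.
    return delta_tile_from_frame + delta_slice_from_tile

def derive_qp(reference, *deltas):
    # (Equation G9)-style derivation: reference "1." plus the deltas.
    return reference + sum(deltas)

# Illustrative values: tile 2 is +3 from the frame attribute QP and
# slice 1 is -1 from tile 2.
delta_8 = slice_delta_from_frame(3, -1)   # transmitted as "8."
qp_at2s1 = derive_qp(32, -2, delta_8)     # "1." + "2." + "8."
```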
- FIG. 101 is a flowchart of the point cloud data coding process according to the present embodiment.
- the three-dimensional data encoding device determines the division method to be used (S5321).
- This division method includes whether or not to perform tile division and whether or not to perform slice division. Further, the division method may include the number of divisions when performing tile division or slice division, the type of division, and the like.
- the type of division is a method based on the object shape as described above, a method based on map information or position information, a method based on the amount of data or the amount of processing, and the like.
- the division method may be predetermined.
- when tile division is performed (Yes in S5322), the three-dimensional data encoding device generates a plurality of tile position information and a plurality of tile attribute information by dividing the position information and the attribute information in tile units (S5323). Further, the three-dimensional data encoding device generates tile additional information related to the tile division.
- the three-dimensional data encoding device generates a plurality of division position information and a plurality of division attribute information by dividing the plurality of tile position information and the plurality of tile attribute information (or the position information and the attribute information) (S5325). Further, the three-dimensional data encoding device generates position slice additional information and attribute slice additional information related to the slice division.
- the three-dimensional data encoding device generates a plurality of encoded position information and a plurality of encoded attribute information by encoding each of the plurality of division position information and the plurality of division attribute information (S5326).
- the three-dimensional data encoding device generates dependency information.
- the three-dimensional data encoding device generates encoded data (an encoded stream) by NAL-unitizing (multiplexing) the plurality of encoded position information, the plurality of encoded attribute information, and the additional information (S5327). In addition, the three-dimensional data encoding device sends out the generated encoded data.
- FIG. 102 is a flowchart showing an example of a process of determining a QP value and updating additional information in tile division (S5323) or slice division (S5325).
- the position information and the attribute information of tiles and/or slices may be divided individually by respective methods, or may be divided collectively by a common method. As a result, additional information divided for each tile and/or for each slice is generated.
- the three-dimensional data encoding device determines the reference value and the difference information of the QP value for each divided tile and / or for each slice (S5331). Specifically, the three-dimensional data encoding device determines the reference value and the difference information as illustrated in FIGS. 98 to 100.
- the three-dimensional data encoding device updates the additional information so that the determined reference value and the difference information are included (S5332).
- FIG. 103 is a flowchart showing an example of the process of encoding the determined QP value in the process of coding (S5326).
- the three-dimensional data encoding device encodes the QP value determined in step S5331 (S5341). Specifically, the three-dimensional data encoding device encodes the reference value and the difference information of the QP value included in the updated additional information.
- the three-dimensional data coding apparatus continues the coding process until the stop condition of the coding process is satisfied, for example, until there is no data to be coded (S5342).
- FIG. 104 is a flowchart of the point cloud data decoding process according to the present embodiment.
- the three-dimensional data decoding device determines the division method by analyzing the additional information (the tile additional information, the position slice additional information, and the attribute slice additional information) related to the division method included in the encoded data (encoded stream) (S5351).
- This division method includes whether or not to perform tile division and whether or not to perform slice division. Further, the division method may include the number of divisions when performing tile division or slice division, the type of division, and the like.
- the three-dimensional data decoding device generates the division position information and the division attribute information by decoding the plurality of encoded position information and the plurality of encoded attribute information included in the encoded data using the dependency information included in the encoded data (S5352).
- when the additional information indicates that slice division has been performed, the three-dimensional data decoding device generates a plurality of tile position information and a plurality of tile attribute information by combining the plurality of division position information and the plurality of division attribute information based on the position slice additional information and the attribute slice additional information (S5354).
- when the additional information indicates that tile division has been performed (Yes in S5355), the three-dimensional data decoding device generates the position information and the attribute information by combining the plurality of tile position information and the plurality of tile attribute information (or the plurality of division position information and the plurality of division attribute information) based on the tile additional information (S5356).
- FIG. 105 is a flowchart showing an example of a process of acquiring and decoding the QP value of a slice or a tile in the combining of the information divided for each slice (S5354) or the combining of the information divided for each tile (S5356).
- the position information and attribute information of slices or tiles may be combined by each method or may be combined by the same method.
- the three-dimensional data decoding device decodes the reference value and the difference information from the additional information of the coded stream (S5361).
- the three-dimensional data decoding device calculates the QP value using the decoded reference value and difference information, and updates the QP value used for inverse quantization to the calculated QP value (S5362). Thereby, the QP value for dequantizing the quantized attribute information of each tile or each slice can be derived.
- the three-dimensional data decoding device continues the decoding process until the stop condition of the decoding process is satisfied, for example, until there is no data to be decoded (S5363).
- FIG. 106 is a diagram showing an example of GPS syntax.
- FIG. 107 is a diagram showing an example of APS syntax.
- FIG. 108 is a diagram showing an example of syntax of the header of position information.
- FIG. 109 is a diagram showing an example of syntax of the header of the attribute information.
- GPS, which is additional information of the position information, includes QP_value indicating the absolute value serving as the reference for deriving a QP value.
- QP_value corresponds to, for example, Q G illustrated in FIGS. 98 to 100.
- APS, which is additional information of the attribute information, may be provided with information that defines a default viewpoint when one three-dimensional point has a plurality of pieces of color information for a plurality of viewpoints, for example, information indicating that the 0th viewpoint is always the default viewpoint.
- in that case, the three-dimensional data decoding device may decode or display the 0th attribute information.
- QP_delta_Attribute_to_Geometry indicates the difference information from the reference value (QP_value) described in GPS. This difference information is, for example, difference information from the luminance when the attribute information is color information.
- the GPS may include a flag indicating whether or not the Geometry_header (the header of the position information) contains difference information for calculating the QP value.
- the APS may include a flag indicating whether or not the Attribute_header (the header of the attribute information) contains difference information for calculating the QP value. The flag may indicate whether or not there is difference information of the data QP from the frame QP for calculating the data QP of the attribute information.
- when the fifth quantization parameter and the sixth quantization parameter are used for the quantization instead of the quantization using the second quantization parameter for quantizing the first luminance and the quantization using the third quantization parameter for quantizing the first color difference, identification information (a flag) indicating that the quantization was performed using the fifth quantization parameter and the sixth quantization parameter may be included.
- the header of the position information may include QP_delta_data_to_frame, which indicates the difference information from the reference value (QP_value) described in GPS. Further, the header of the position information may be divided into information for each tile and / or for each slice, and the corresponding QP value may be indicated for each tile and / or for each slice.
- the header of the attribute information may include QP_delta_data_to_frame, which indicates the difference information from the QP value described in APS.
- the reference value of the QP value has been described as the QP value of the position information in the PCC frame, but the present invention is not limited to this, and other values may be used as the reference value.
- FIG. 110 is a diagram for explaining another example of the transmission method of the quantization parameter.
- (a) and (b) of FIG. 110 show a fourth example in which a common reference value Q is set for the QP values of the position information and the attribute information in a PCC frame.
- in the fourth example, the reference value Q is stored in GPS, the difference information of the QP value (Q G) of the position information from the reference value Q is stored in GPS, and the difference information of the QP values (Q Y and Q R) of the attribute information is stored in APS.
- the reference value Q may be stored in the SPS.
- (c) and (d) of FIG. 110 show a fifth example in which a reference value is set independently for each of the position information and the attribute information.
- in the fifth example, the reference value Q G, which is the absolute value of the reference QP value, is set for the position information, the reference value Q Y is set for the color information of the attribute information, and the reference value Q R is set for the reflectance of the attribute information.
- the reference value of the QP value may be set for each of the position information and the plurality of types of attribute information.
- the fifth example may be combined with the other examples. That is, Q A in the first example and Q Y1, Q Y2, and Q R in the second example may be used as the reference values of the QP values.
- (E) and (f) of FIG. 110 show a sixth example of setting a common reference value Q in a plurality of PCC frames when there are a plurality of PCC frames.
- in the sixth example, the reference value Q is stored in SPS or GSPS, and the difference information between the QP value of the position information of each PCC frame and the reference value is stored in GPS.
- the difference information between PCC frames may be, for example, Δ(Q G(1), Q G(0)), with the first frame of the random access unit used as the reference value.
- further, the difference information of the QP value of each division unit is similarly stored in the data header and transmitted.
- FIG. 111 is a diagram for explaining another example of the transmission method of the quantization parameter.
- (a) and (b) of FIG. 111 show a seventh example in which a common reference value Q G is set for the position information and the attribute information in a PCC frame.
- in the seventh example, the reference value Q G is stored in GPS, and the difference information of the position information or the attribute information from the reference value is stored in each data header.
- the reference value Q G may be stored in the SPS.
- (c) and (d) of FIG. 111 show an eighth example in which the QP value of the attribute information is indicated by difference information from the QP value of the position information belonging to the same slice and tile.
- the reference value Q G may be stored in the SPS.
- FIG. 112 is a diagram for explaining a ninth example of a method of transmitting quantization parameters.
- (a) and (b) of FIG. 112 show the ninth example, in which a common QP value is set for the attribute information, and the QP values are indicated by the difference information from the QP value of the position information and the difference information from the common QP value of the attribute information, respectively.
- FIG. 113 is a diagram for explaining a control example of the QP value.
- in (a) of FIG. 113, the three-dimensional point cloud data is divided into tiles and encoded.
- when the point cloud data included in a tile is a main road, it is encoded using the QP value of the attribute information defined in advance.
- since the surrounding tiles are not important information, the quality of the data may be lowered and the coding efficiency may be improved by setting the difference information of the QP value to a positive value.
- FIG. 113 (b) shows an example of deriving the difference information when the quantization delta value is set in advance based on the objects included in the tile or slice. For example, if the divided data is slice data of a "building" included in a tile that is a "main road", the quantization delta value 0 of the tile ("main road") and the quantization delta value -5 of the slice data ("building") are added, and the difference information is derived as -5.
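The derivation in (b) of FIG. 113 can be sketched as follows. Only the "main road" delta (0) and "building" delta (-5) come from the text; the table structure and names are assumptions:

```python
# Quantization delta values preset per object type (partly assumed).
OBJECT_QP_DELTA = {
    "main road": 0,
    "building": -5,
}

def derive_difference_info(tile_object: str, slice_object: str) -> int:
    # The tile's quantization delta and the slice's quantization delta
    # are added to obtain the transmitted difference information.
    return OBJECT_QP_DELTA[tile_object] + OBJECT_QP_DELTA[slice_object]
```

For slice data of a "building" inside a "main road" tile this yields 0 + (-5) = -5, matching the example.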
- FIG. 114 is a flowchart showing an example of a method of determining a QP value based on the quality of an object.
- the three-dimensional data encoding device divides the point cloud data into one or more tiles based on the map information, and determines the objects included in each one or more tiles (S5371). Specifically, the three-dimensional data encoding device performs object recognition processing for recognizing what an object is, for example, using a learning model obtained by machine learning.
- High quality coding is, for example, coding at a bit rate greater than a predetermined rate.
- the three-dimensional data encoding device sets the QP value of the tile so that the coding quality is high (S5373).
- the three-dimensional data encoding device sets the QP value of the tile so that the coding quality is low (S5374).
- after step S5373 or step S5374, the three-dimensional data encoding device determines the objects in the tile and divides the tile into one or more slices (S5375).
- the three-dimensional data encoding device determines whether or not to encode the slice to be processed with high quality (S5376).
- the three-dimensional data encoding device sets the QP value of the slice so that the coding quality is high (S5377).
- the three-dimensional data encoding device sets the QP value of the slice so that the coding quality is low (S5378).
- the three-dimensional data encoding device determines the reference value and the difference information to be transmitted by a predetermined method based on the set QP values, and stores the determined reference value and difference information in at least one of the additional information and the header of the data (S5379).
- the three-dimensional data encoding device quantizes and encodes the position information and the attribute information based on the determined QP value (S5380).
- FIG. 115 is a flowchart showing an example of a method for determining a QP value based on rate control.
- the three-dimensional data encoding device encodes the point cloud data in order (S5381).
- the three-dimensional data encoding device determines the rate control status related to the coding process from the code amount of the encoded data and the occupancy of the coding buffer, and determines the quality of the next coding (S5382).
- the three-dimensional data encoding device determines whether or not to improve the coding quality (S5383).
- the QP value of the tile is set so that the coding quality becomes high (S5384).
- the QP value of the tile is set so that the coding quality is lowered (S5385).
- the three-dimensional data encoding device determines the reference value and the difference information to be transmitted by a predetermined method based on the set QP values, and stores the determined reference value and difference information in at least one of the additional information and the header of the data (S5386).
- the three-dimensional data encoding device quantizes and encodes the position information and the attribute information based on the determined QP value (S5387).
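The rate-control loop of FIG. 115 can be sketched as below. The update rule (a fixed step driven by buffer occupancy) and all thresholds are assumptions for illustration; the specification only states that the next coding quality is determined from the code amount and the buffer occupancy.

```python
# Hypothetical rate-control sketch (S5381-S5387): pick the QP for the next
# coding unit from the occupancy of the coded-data buffer.
def next_qp(current_qp: int, buffer_occupancy: float, target: float = 0.5,
            step: int = 2, qp_min: int = 0, qp_max: int = 51) -> int:
    """buffer_occupancy is the filled fraction of the coding buffer (0..1).
    Fuller than target -> raise QP (fewer bits, lower quality, S5385);
    emptier than target -> lower QP (more bits, higher quality, S5384)."""
    if buffer_occupancy > target:
        current_qp += step
    elif buffer_occupancy < target:
        current_qp -= step
    return max(qp_min, min(qp_max, current_qp))
```

The clamp keeps the QP inside a valid range even when the buffer stays persistently full or empty.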
- the three-dimensional data coding apparatus performs the processing shown in FIG. 116.
- the three-dimensional data coding apparatus quantizes the position information of each of the plurality of three-dimensional points using the first quantization parameter (S5391).
- for the first brightness and the first color difference indicating the first color in the attribute information of each of the plurality of three-dimensional points, the three-dimensional data coding apparatus quantizes the first brightness using the second quantization parameter and quantizes the first color difference using the third quantization parameter (S5392).
- the three-dimensional data encoding device generates a bit stream including the quantized position information, the quantized first brightness, the quantized first color difference, the first quantization parameter, the second quantization parameter, and the first difference between the second quantization parameter and the third quantization parameter (S5393).
- According to this, since the third quantization parameter is indicated by the first difference from the second quantization parameter, the coding efficiency can be improved.
- the three-dimensional data encoding device further quantizes the reflectance of the attribute information of each of the plurality of three-dimensional points by using the fourth quantization parameter. Further, in the generation, a bit stream including the quantized reflectance and the fourth quantization parameter is generated.
- Further, in the quantization, for each of a plurality of subspaces into which the target space including the plurality of three-dimensional points is divided, the first brightness of the one or more three-dimensional points included in the subspace is further quantized using the fifth quantization parameter, and the first color difference of the one or more three-dimensional points is further quantized using the sixth quantization parameter. In the generation, a bit stream further including a second difference between the second quantization parameter and the fifth quantization parameter and a third difference between the third quantization parameter and the sixth quantization parameter is generated.
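The reference-plus-difference signaling described above can be written out directly; the dictionary keys are illustrative names, and only the arithmetic (each later QP carried as a delta from its reference) reflects the text.

```python
# Sketch of carrying the QPs as one reference plus differences: the second
# QP is sent as-is, the third QP as the "first difference", and the
# per-subspace fifth/sixth QPs as the "second" and "third" differences.
def qp_differences(qp2: int, qp3: int, qp5: int, qp6: int) -> dict:
    return {
        "second_qp": qp2,            # reference, sent directly
        "first_diff": qp3 - qp2,     # third QP relative to the second
        "second_diff": qp5 - qp2,    # fifth QP relative to the second
        "third_diff": qp6 - qp3,     # sixth QP relative to the third
    }

d = qp_differences(30, 33, 28, 31)
# The decoder recovers each QP by adding the delta back to its reference:
assert d["second_qp"] + d["first_diff"] == 33
```

Since deltas between related QPs tend to be small, coding them instead of the absolute values is what yields the efficiency gain the text claims.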
- According to this, since the fifth quantization parameter is indicated by the second difference from the second quantization parameter and the sixth quantization parameter is indicated by the third difference from the third quantization parameter, the coding efficiency can be improved.
- Further, when the fifth quantization parameter and the sixth quantization parameter are used in the quantization, a bit stream further including identification information indicating that the quantization was performed using the fifth quantization parameter and the sixth quantization parameter is generated in the generation.
- According to this, since the three-dimensional data decoding apparatus that has acquired the bit stream can determine, using the identification information, that the quantization was performed using the fifth quantization parameter and the sixth quantization parameter, the processing load of the decoding process can be reduced.
- Further, for the second brightness and the second color difference indicating the second color in the attribute information of each of the plurality of three-dimensional points, the three-dimensional data encoding device quantizes the second brightness using the seventh quantization parameter and quantizes the second color difference using the eighth quantization parameter. In the generation, a bit stream further including the quantized second brightness, the quantized second color difference, the seventh quantization parameter, and a fourth difference between the seventh quantization parameter and the eighth quantization parameter is generated.
- According to this, since the eighth quantization parameter is indicated by the fourth difference from the seventh quantization parameter, the coding efficiency can be improved.
- two types of color information can be included in the attribute information of the three-dimensional point.
- the three-dimensional data encoding device includes a processor and a memory, and the processor uses the memory to perform the above processing.
- the three-dimensional data decoding device performs the process shown in FIG. 117.
- the three-dimensional data decoding device acquires, by acquiring the bit stream, the quantized position information, the quantized first brightness, the quantized first color difference, the first quantization parameter, the second quantization parameter, and the first difference between the second quantization parameter and the third quantization parameter (S5394).
- the three-dimensional data decoding device calculates the position information of a plurality of three-dimensional points by dequantizing the quantized position information using the first quantization parameter (S5395).
- the three-dimensional data decoding device calculates, of the first brightness and the first color difference indicating the first color of the plurality of three-dimensional points, the first brightness by inversely quantizing the quantized first brightness using the second quantization parameter (S5396).
- the three-dimensional data decoding device calculates the first color difference by inversely quantizing the quantized first color difference using the third quantization parameter obtained from the second quantization parameter and the first difference (S5397).
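A decoder-side sketch of steps S5395 to S5397: reconstruct the third quantization parameter from the second one and the first difference, then inverse-quantize. The scalar model (`level * step(qp)`) and the QP-to-step mapping are assumptions for illustration, not the codec's actual arithmetic.

```python
# Hypothetical inverse quantization: reconstruct the color-difference QP
# from the brightness QP plus the signaled first difference, then scale
# the quantized levels back.
def qp_to_step(qp: int) -> float:
    return 2.0 ** (qp / 6)   # assumed QP-to-step-size mapping

def dequantize(levels, qp):
    step = qp_to_step(qp)
    return [lv * step for lv in levels]

second_qp, first_diff = 24, 6
third_qp = second_qp + first_diff          # recover the third QP
brightness = dequantize([3, 4], second_qp)     # S5396
color_diff = dequantize([1, 2], third_qp)      # S5397
```

Only the sum `second_qp + first_diff` is mandated by the text; the step-size formula is a placeholder.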
- the three-dimensional data decoding device can correctly decode the position information and the attribute information of the three-dimensional point.
- Further, in the acquisition, the quantized reflectance and the fourth quantization parameter are further acquired by acquiring the bit stream.
- the three-dimensional data decoding apparatus further calculates the reflectances of the plurality of three-dimensional points by inversely quantizing the quantized reflectances using the fourth quantization parameter.
- the three-dimensional data decoding device can correctly decode the reflectance of the three-dimensional point.
- Further, the quantized first brightness is, for each of a plurality of subspaces into which the target space including the plurality of three-dimensional points is divided, the quantized brightness of the one or more three-dimensional points included in the subspace. In the calculation of the first brightness, the quantized first brightness is inversely quantized using the fifth quantization parameter obtained from the second quantization parameter and the second difference, whereby the first brightness of the one or more three-dimensional points is calculated. In the calculation of the first color difference, the quantized first color difference is inversely quantized using the sixth quantization parameter obtained from the third quantization parameter and the third difference, whereby the first color difference of the one or more three-dimensional points is calculated.
- identification information indicating that the quantization is performed using the fifth quantization parameter and the sixth quantization parameter is further acquired.
- In the calculation of the first brightness, when the identification information indicates that the quantization was performed using the fifth quantization parameter and the sixth quantization parameter, it is determined that the quantized first brightness is the quantized brightness of the one or more three-dimensional points.
- Likewise, in the calculation of the first color difference, when the identification information indicates that the quantization was performed using the fifth quantization parameter and the sixth quantization parameter, it is determined that the quantized first color difference is the quantized color difference of the one or more three-dimensional points.
- According to this, since the three-dimensional data decoding apparatus can determine, using the identification information, that the data was quantized using the fifth quantization parameter and the sixth quantization parameter, the processing load of the decoding process can be reduced.
- Further, in the acquisition, the quantized second brightness, the quantized second color difference, the seventh quantization parameter, and the fourth difference between the seventh quantization parameter and the eighth quantization parameter are further acquired.
- Further, the three-dimensional data decoding device calculates, of the second brightness and the second color difference indicating the second color of the plurality of three-dimensional points, the second brightness by inversely quantizing the quantized second brightness using the seventh quantization parameter.
- Further, the three-dimensional data decoding apparatus calculates the second color difference by inversely quantizing the quantized second color difference using the eighth quantization parameter obtained from the seventh quantization parameter and the fourth difference.
- the three-dimensional data decoding device can correctly decode the second color of the three-dimensional point.
- the three-dimensional data decoding device includes a processor and a memory, and the processor uses the memory to perform the above processing.
- the three-dimensional data coding method, the three-dimensional data decoding method, the three-dimensional data coding apparatus, and the three-dimensional data decoding apparatus described in the eighth embodiment may perform the processing described in the present embodiment.
- FIG. 118 is a diagram for explaining an example of a method of transmitting the quantization parameter according to the twelfth embodiment.
- FIG. 118A shows an example in which the reference value of the QP value is set in each of the position information and the attribute information.
- FIG. 118 is mainly different from FIG. 105 in the eighth embodiment in that the reference value of the QP value is set in the attribute information as well as in the position information. That is, the QP value of any one of the plurality of attribute information including the first color, the second color, and the reflectance is used as a common reference value, and the QP values of the other attribute information are indicated as difference information from the common reference value.
- Q_Y1, which is the QP value used for encoding the brightness Y1 of the first color, is set as the common reference value for the plurality of attribute information including the first color, the second color, and the reflectance.
- Q_Y2, which is the reference value of the second color, is derived using the common reference value Q_Y1 and the difference information "5", that is, Δ(Q_Y2, Q_Y1), from Q_Y1.
- Likewise, Q_R, which is the reference value of the reflectance, is derived using the common reference value Q_Y1 and the difference information "8", that is, Δ(Q_R, Q_Y1), from Q_Y1.
- the common reference value Q_Y1 is included in APS1, which is the APS corresponding to the first color.
- the fourth attribute may have a reference value set independently of the common reference value Q_Y1.
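The derivation in FIG. 118 can be written out directly. The value of Q_Y1 is not given in the text and is assumed here; the deltas "5" and "8" are the difference information from the figure.

```python
# Common-reference scheme of FIG. 118: Q_Y1, the QP of the first color's
# brightness, is the common reference; the other reference values are
# carried only as differences from it.
Q_Y1 = 20                  # common reference value (assumed), carried in APS1
delta_Q_Y2 = 5             # difference information for the second color
delta_Q_R = 8              # difference information for the reflectance

Q_Y2 = Q_Y1 + delta_Q_Y2   # reference value of the second color
Q_R = Q_Y1 + delta_Q_R     # reference value of the reflectance
```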
- the fifth attribute does not have to have a QP value.
- As described above, attribute information whose quantization uses the QP value serving as the common reference value for deriving the QP values used for the quantization of multiple attribute information, and attribute information whose quantization uses a reference value independent of the common reference value, may be mixed. Further, attribute information whose encoding does not use a QP value may be mixed.
- the QP value used for the quantization of the attribute information of the first color is a common reference value for deriving the QP value used for the quantization of a plurality of attribute information.
- the common reference value may be determined according to the following rules. For example, when all the attribute information is described in the control information such as SPS, the QP value included in the attribute information first shown in SPS among all the attribute information may be set as a common reference value. Alternatively, in the control information such as SPS, the attribute information in which the QP value set as a common reference value is used for quantization may be indicated.
- the attribute information in which the QP value set as the common reference value is used for the quantization may be shown at the beginning of the plurality of attribute information.
- Reduction of the coded data can be expected by indicating each QP value used for the quantization of each of the plurality of attribute information as a combination of a reference value and difference information.
- Alternatively, Q_Y1, Q_Y2, and Q_R may each be independently indicated in the respective APS for each attribute information. That is, Q_Y1 may be used as the reference value of the QP values belonging to the first color, Q_Y2 as the reference value of the QP values belonging to the second color, and Q_R as the reference value of the QP values belonging to the reflectance. In this case, Q_Y2 and Q_R are indicated as absolute values in the same manner as Q_Y1.
- the first example is a method of indicating the QP values of the attribute information when the metadata of a plurality of attribute information are collectively described in one APS.
- FIG. 119 is a diagram showing a first example of the syntax of APS and the syntax of the header of attribute information.
- Aps_idx indicates the index number of APS.
- aps_idx indicates the correspondence between the APS and the header of the attribute information.
- Sps_idx indicates the index number of the SPS corresponding to the APS.
- Num_of_atribute indicates the number of attribute information. When APS is set for each attribute information, the field or loop of num_of_atribute may not be included in APS.
- Attribute_type indicates the type (kind) of the attribute information.
- the APS may include information for referring to the type of the attribute information described in the SPS instead of the attribute_type.
- the QP value corresponding to attribute_type is shown.
- For example, when the attribute information is color, the QP value of luminance (Luma), indicated as an absolute value, is shown as the reference value, and the QP values of color difference (Chroma: Cb, Cr) are shown as difference information from the QP value of luminance.
- For example, when the attribute information is reflectance, the QP value of reflectance, indicated as an absolute value, is shown. As another example, when the type of attribute information does not have a QP value, no QP value is shown.
- the reference value of the attribute information may be indicated by a difference from the reference value of other attribute information.
- data_QP_delta_present_flag is a flag indicating whether or not the QP value for each Data (slice) exists in the header of the attribute information. When the flag is 1, the QP value for each Data (slice) is indicated in the header of the attribute information.
- Aps_idx is also included in the header of the attribute information.
- the correspondence between the APS and the header of the attribute information is indicated by aps_idx included in the APS and the header of the attribute information. That is, having a common aps_idx indicates that the APS and attribute information headers correspond to each other.
- Attribute_type indicates the type (kind) of the attribute information.
- As another example, the header of the attribute information may include information for referring to the type of the attribute information described in the APS or SPS instead of attribute_type.
- Each field in the if statement surrounded by the broken line 6702, that is, each of QP_delta_data_to_frame, QP_delta1_to_frame, and QP_delta2_to_frame, indicates the QP value of the data corresponding to attribute_type. Each QP value is indicated as difference information from the value described in the APS.
- the second example is a method of indicating the QP value of the attribute information when the metadata of one attribute information is independently described in one APS.
- In the second example, by forming a common header structure for various types (kinds) of attribute information, there is an effect of avoiding changes in the syntax structure according to the attribute information.
- FIG. 120 is a diagram showing a second example of APS syntax.
- FIG. 121 is a diagram showing a second example of the syntax of the attribute information header.
- APS includes a reference value and a difference value of the QP value of the frame. Further, when the data_QP_delta_present_flag of APS is 1, the header of the attribute information includes the difference information from the reference value of APS.
- the APS has N fields in which N QP values (N is an integer of 2 or more) are stored regardless of the type of attribute information.
- N is, for example, 3.
- For example, when the type of the attribute information is color, information indicating the QP value of Luma is stored in QP_value in the APS, and information indicating the QP values of Chroma is stored in QP_delta1 and QP_delta2. QP_value is the reference value, and QP_delta1 and QP_delta2 are difference information based on QP_value. That is, the QP value of Luma is indicated by QP_value, and the QP values of Chroma are indicated by the value obtained by adding QP_delta1 to QP_value and the value obtained by adding QP_delta2 to QP_value.
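The Luma/Chroma derivation just described can be sketched as follows; the field names follow the syntax of FIG. 120, while the concrete values are illustrative.

```python
# Second-example APS fields: QP_value holds the Luma QP as a reference,
# and QP_delta1 / QP_delta2 hold the two Chroma QPs as deltas from it.
def chroma_qps(qp_value: int, qp_delta1: int, qp_delta2: int):
    return qp_value + qp_delta1, qp_value + qp_delta2   # (Cb, Cr)

luma_qp = 28                       # stored in QP_value (illustrative)
cb_qp, cr_qp = chroma_qps(luma_qp, 3, 4)
```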
- the APS includes the reference value of the quantization parameter for quantizing the corresponding attribute information.
- In QP_delta_data_to_frame of the header of the attribute information, the difference information of the QP value of Luma from QP_value of the corresponding APS is stored. Similarly, QP_delta1_to_frame and QP_delta2_to_frame may store the difference information of the QP values of Chroma from QP_delta1 and QP_delta2 of the corresponding APS.
- When the type of the attribute information is reflectance, QP_delta_data_to_frame may store information indicating the QP value of reflectance, and QP_delta1_to_frame and QP_delta2_to_frame may store information indicating that the value is always 0 or invalid.
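Combining the APS-level values with the per-data (slice) deltas of the attribute information header can be sketched as below. The interpretation that the `*_to_frame` fields add on top of the APS-derived QPs is an assumption consistent with the text; the field names follow the syntax figures and the numeric values are illustrative.

```python
# Hypothetical resolution of the final per-slice QPs for a color attribute:
# start from the APS reference and deltas, then, when
# data_QP_delta_present_flag is 1, apply the header's per-slice deltas.
def data_qps(aps: dict, header: dict, data_qp_delta_present_flag: int):
    luma = aps["QP_value"]
    cb = aps["QP_value"] + aps["QP_delta1"]
    cr = aps["QP_value"] + aps["QP_delta2"]
    if data_qp_delta_present_flag:
        luma += header["QP_delta_data_to_frame"]
        cb += header["QP_delta1_to_frame"]
        cr += header["QP_delta2_to_frame"]
    return luma, cb, cr

aps = {"QP_value": 30, "QP_delta1": 2, "QP_delta2": 3}
header = {"QP_delta_data_to_frame": -2, "QP_delta1_to_frame": 1,
          "QP_delta2_to_frame": 0}
```

When the flag is 0 the header fields are absent and only the APS-level QPs apply.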
- In this case, the three-dimensional data decoding device may decode the information stored in QP_delta1 and QP_delta2, and in QP_delta1_to_frame and QP_delta2_to_frame that store information indicating 0 or invalidity, and may ignore the decoded information without using it, regardless of its content.
- Alternatively, data_QP_delta_present_flag may further be set to 0.
- In this way, the three-dimensional data decoding device may ignore the parameters stored in a specific field among the plurality of fields in the header of the specific attribute information corresponding to a specific type of attribute among the headers of the plurality of attribute information.
- the QP value can be indicated by a combination of the reference value and the difference information with a common syntax structure, so that an improvement in the coding rate can be expected.
- When the attribute information corresponding to one position information has two or more pieces of color information, they are indicated by a common QP reference value and its difference information, while the QP reference value of the reflectance is independently indicated in the APS; thus, the presentation may be changed according to the type.
- The method is not limited to those described in the eighth embodiment and the present embodiment; the reference value may be further divided into a reference value and difference information for signaling, or the difference information may be independently signaled as a reference value. For example, the combination of the reference value and the difference information may be changed according to the characteristics of the data, such as sending at least one reference value in the case of a unit that requires independent decoding, and sending difference information in the case of a unit that does not require independent decoding. As a result, both improved functionality and a reduced code amount can be expected.
- the amount of information of the combination of the reference value and the difference information may be calculated, and based on the calculation result, for example, the combination of the reference value and the difference information that minimizes the calculation result may be generated and transmitted.
- the meaning (semantics) of the field indicating the reference value and the field indicating the difference information may be adaptively changed.
- the meaning of each field may be changed, such as whether to invalidate each field according to the above rule, or a flag indicating that the meaning of each field is switched may be added.
- the reference destination of the reference value may be adaptively changed. In that case, a flag indicating that the reference destination has changed, an Id for specifying the reference destination, or the like may be indicated.
- FIG. 122 is a diagram showing the relationship between SPS, APS, and headers of attribute information.
- the tip of the arrow indicates a reference destination.
- the SPS contains information about multiple types of attribute information.
- the SPS may include a plurality of attribute_types corresponding to a plurality of attribute information and each indicating a different type of attribute information.
- the SPS includes an attribute_component_id, indicating a number for identifying the type of the attribute information, for each type of the attribute information.
- SPS is an example of control information.
- Attribute_type is an example of type information.
- Attribute_component_id included in the SPS is an example of the first identification information indicating that it is associated with any of a plurality of types of information.
- Attribute_header includes attribute_component_id corresponding to attribute_component_id included in SPS.
- APS is an example of the second attribute control information.
- Attribute_header is an example of the first attribute control information.
- Attribute_component_id included in APS is an example of the second identification information indicating that it is associated with any of a plurality of types of information.
- the three-dimensional data decoding device refers to the SPS represented by sps_idx included in the APS or Attribute_header. Then, in the referenced SPS, the three-dimensional data decoding device acquires the type of the attribute information corresponding to the attribute_component_id included in the APS or Attribute_header as the type of the attribute information corresponding to the information contained in that APS or Attribute_header.
- one APS corresponds to one of the types of attribute information.
- one attribute information header corresponds to one of the attribute information types.
- Each of the plurality of APSs corresponds to one or more attribute information headers. That is, one APS corresponds to one or more attribute information headers different from the one or more attribute information headers corresponding to the other APS.
- For example, the three-dimensional data decoding device can acquire from the SPS the attribute information (attribute_type, etc.) corresponding to the same attribute_component_id, that is, the same value such as 0.
- the order of the attribute information described in the SPS may be used instead of attribute_component_id. That is, the type information indicating the types of the plurality of attribute information may be stored (described) in a predetermined order in the SPS.
- the attribute_component_id included in the APS or Attribute_header indicates that the APS or Attribute_header containing the attribute_component_id is associated with the type information of one of the predetermined orders.
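The linkage of FIG. 122 can be sketched as a simple lookup: the APS (or Attribute_header) and the SPS carry the same attribute_component_id, and the SPS maps that id to the attribute type. The dictionary layout is an illustrative stand-in for the actual syntax.

```python
# Resolve the type of an APS / Attribute_header through the SPS using
# attribute_component_id (names follow FIG. 122; structure is illustrative).
sps = {"attributes": [
    {"attribute_component_id": 0, "attribute_type": "color"},
    {"attribute_component_id": 1, "attribute_type": "reflectance"},
]}

def resolve_attribute_type(sps: dict, component_id: int) -> str:
    for attr in sps["attributes"]:
        if attr["attribute_component_id"] == component_id:
            return attr["attribute_type"]
    raise KeyError(component_id)

aps = {"attribute_component_id": 1}
aps_type = resolve_attribute_type(sps, aps["attribute_component_id"])
```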
- In this case, the three-dimensional data decoding device may derive the arrival order of the APS or the attribute information and refer to the attribute information corresponding to the arrival order. Further, when the point cloud data includes attribute information whose APS or Attribute_header may or may not be present in each frame and attribute information whose APS or Attribute_header is always present in each frame, the attribute information that is always present in each frame may be sent earlier in the order, and the attribute information that may sometimes be absent may be sent later.
- FIGS. 118 and 122 a plurality of APSs corresponding to a plurality of attribute information are shown in one frame, but one APS may be used instead of the plurality of APSs. That is, one APS in this case includes information about the attribute information corresponding to the plurality of attribute information.
- aps_idx may include a sequence number corresponding to the frame number. As a result, the correspondence between APS and Attribute_header may be shown. In addition, aps_idx may have the function of attribute_component_id. By this method, the information of the entire sequence related to one or more kinds of APS or attribute information can be stored in the SPS and can be referred to from each APS or each Attribute_header.
- the APS or Attribute_header may directly include attribute_type, or the NAL unit header may include attribute_type.
- the three-dimensional data coding apparatus performs the processing shown in FIG. 123.
- the three-dimensional data encoding device encodes a plurality of attribute information possessed by each of the plurality of three-dimensional points using parameters (S6701).
- the three-dimensional data coding apparatus generates a bit stream including the plurality of encoded attribute information, control information, and a plurality of first attribute control information (S6702).
- the control information corresponds to the plurality of attribute information, and includes a plurality of type information indicating different types of attribute information.
- the plurality of first attribute control information corresponds to the plurality of attribute information, respectively.
- Each of the plurality of first attribute control information includes first identification information indicating that the information is associated with any of the plurality of types of information.
- According to this, since the three-dimensional data decoding device that has received the bit stream can specify the type of the attribute information corresponding to the first attribute control information by using the first identification information, the attribute information of the three-dimensional point can be decoded correctly and efficiently.
- the plurality of types of information are stored in a predetermined order in the control information.
- the first identification information indicates that the first attribute control information including the first identification information is associated with the type information of one of the predetermined orders.
- Since the type information is shown in a predetermined order without adding information indicating the type information, the data amount of the bit stream can be reduced, and the transmission amount of the bit stream can be reduced.
- the bit stream further includes a plurality of second attribute control information corresponding to the plurality of attribute information.
- Each of the plurality of second attribute control information includes a reference value of a parameter used for encoding the corresponding first attribute information.
- According to this, since each of the plurality of second attribute control information includes the reference value of the parameter, the attribute information corresponding to the second attribute control information can be encoded using the reference value.
- Since the three-dimensional data decoding device that has received the bit stream can specify the type of the second attribute information by using the second identification information, the attribute information of the three-dimensional point can be decoded correctly and efficiently.
- the first attribute control information includes difference information which is a difference of the parameter from the reference value. Therefore, the coding efficiency can be improved.
- the bit stream further includes a plurality of second attribute control information corresponding to the plurality of attribute information.
- Each of the plurality of second attribute control information includes second identification information indicating that it is associated with any of the plurality of type information. According to this, a bit stream from which the attribute information of the three-dimensional point can be decoded correctly and efficiently can be generated.
- Further, each of the plurality of first attribute control information has N fields in which N parameters (N is an integer of 2 or more) are stored. In the specific first attribute control information corresponding to a specific type of attribute, one of the N fields contains a value indicating invalidity.
- the three-dimensional data decoding device that has received the bit stream can specify the type of the first attribute information by using the first identification information, and can omit the decoding process in the case of the specific first attribute control information. Therefore, the attribute information of the three-dimensional point can be decoded correctly and efficiently.
- the plurality of attribute information is quantized using the quantization parameter as the parameter.
- the three-dimensional data encoding device includes a processor and a memory, and the processor uses the memory to perform the above processing.
- the three-dimensional data decoding device performs the processing shown in FIG. 124.
- the three-dimensional data decoding device acquires a plurality of encoded attribute information and parameters by acquiring a bit stream (S6711).
- the three-dimensional data decoding device decodes the plurality of attribute information possessed by each of the plurality of three-dimensional points by decoding the plurality of encoded attribute information using the parameters (S6712).
- the bit stream includes control information and a plurality of first attribute control information.
- the control information corresponds to the plurality of attribute information, and includes a plurality of type information indicating different types of attribute information.
- the plurality of first attribute control information corresponds to the plurality of attribute information, respectively.
- Each of the plurality of first attribute control information includes first identification information indicating that the information is associated with any of the plurality of types of information.
- According to this, since the three-dimensional data decoding device can specify the type of the attribute information corresponding to the first attribute control information by using the first identification information, the attribute information of the three-dimensional point can be decoded correctly and efficiently.
- the plurality of types of information are stored in a predetermined order in the control information.
- the first identification information indicates that the first attribute control information including the first identification information is associated with the type information of one of the predetermined orders.
- Since the type information is shown in a predetermined order without adding information indicating the type information, the data amount of the bit stream can be reduced, and the transmission amount of the bit stream can be reduced.
- the bit stream further includes a plurality of second attribute control information corresponding to the plurality of attribute information.
- Each of the plurality of second attribute control information includes a reference value of a parameter used for encoding the corresponding attribute information.
- Since the three-dimensional data decoding device can decode the attribute information corresponding to the second attribute control information using the reference value, the attribute information of the three-dimensional points can be decoded correctly and efficiently.
- the first attribute control information includes difference information which is a difference of the parameter from the reference value. According to this, since the attribute information can be decoded using the reference value and the difference information, the attribute information of the three-dimensional point can be decoded correctly and efficiently.
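The reference-plus-difference signalling can be sketched as follows; the function name and values are illustrative, not the normative derivation:

```python
def quantization_parameter(reference_qp: int, delta_qp: int) -> int:
    # The second attribute control information carries the reference value;
    # the first attribute control information carries only the difference.
    return reference_qp + delta_qp
```

Signalling only the (typically small) difference keeps the per-attribute overhead low.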
- the bit stream further includes a plurality of second attribute control information corresponding to the plurality of attribute information.
- Each of the plurality of second attribute control information includes second identification information indicating which of the plurality of pieces of type information it is associated with. According to this, since the type of the attribute information corresponding to the second attribute control information can be identified using the second identification information, the attribute information of the three-dimensional points can be decoded correctly and efficiently.
- each of the plurality of first attribute control information has a plurality of fields in which a plurality of parameters are stored.
- among the plurality of first attribute control information, in the specific first attribute control information corresponding to a specific type of attribute, the parameters stored in specific fields of the plurality of fields are ignored.
- the three-dimensional data decoding device can identify the type of the attribute information using the first identification information, and can omit the decoding process for the ignored parameters in the case of the specific first attribute control information.
- Thus, the attribute information of the three-dimensional points can be decoded correctly and efficiently.
- the plurality of encoded attribute information is inversely quantized using the quantization parameter as the parameter.
- the attribute information of the three-dimensional point can be correctly decoded.
- the three-dimensional data decoding device includes a processor and a memory, and the processor uses the memory to perform the above processing.
- the attribute information included in the PCC (Point Cloud Compression) data is transformed using one of a plurality of methods such as Lifting, RAHT (Region Adaptive Hierarchical Transform), or other transform methods.
- Lifting is one of the transform methods using LoD (Level of Detail).
- the amount of code can be reduced by quantizing the high-frequency components; that is, the transform process has strong energy-compaction characteristics. However, quantization causes a loss of accuracy that depends on the magnitude of the quantization parameter.
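The effect of quantization on transformed coefficients can be sketched with a uniform quantizer. Here `qstep` is an assumed quantization step for illustration, not the codec's normative mapping from the quantization parameter:

```python
def quantize(coefficients, qstep):
    # Larger steps shrink small (typically high-frequency) coefficients
    # toward zero, reducing the code amount at the cost of accuracy.
    return [round(c / qstep) for c in coefficients]

def dequantize(levels, qstep):
    # Reconstruction: the error per coefficient is bounded by qstep / 2.
    return [level * qstep for level in levels]
```

With `qstep = 4`, a large coefficient survives almost unchanged while a small one is rounded away entirely, which is exactly the energy-compaction behavior described above.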
- FIG. 125 is a block diagram showing a configuration of a three-dimensional data encoding device according to the present embodiment.
- This three-dimensional data coding device includes a subtraction unit 7001, a conversion unit 7002, a transformation matrix holding unit 7003, a quantization unit 7004, a quantization control unit 7005, and an entropy coding unit 7006.
- the subtraction unit 7001 calculates a coefficient value which is the difference between the input data and the reference data.
- the input data is attribute information included in the point cloud data, and the reference data is a predicted value of the attribute information.
- the conversion unit 7002 performs a conversion process on the coefficient values.
- this conversion process is a process of classifying a plurality of attribute information into LoD.
- this conversion process may be the Haar transform or the like.
- the transformation matrix holding unit 7003 holds a transformation matrix used for the conversion process by the conversion unit 7002.
- this transformation matrix is a Haar transformation matrix.
- the three-dimensional data coding apparatus may have functions for both the conversion process using LoD and a conversion process such as the Haar transform, or may have only one of these functions. Further, the three-dimensional data coding apparatus may selectively use these two types of conversion processes. Further, the three-dimensional data coding apparatus may switch the conversion process to be used for each predetermined processing unit.
- the quantization unit 7004 generates a quantization value by quantizing the coefficient value.
- the quantization control unit 7005 controls the quantization parameters used by the quantization unit 7004 for quantization.
- the quantization control unit 7005 may switch the quantization parameter (or the quantization step) according to the hierarchical structure of coding.
- the amount of generated code can be controlled for each layer by selecting an appropriate quantization parameter for each layer structure.
- the quantization control unit 7005 sets the quantization parameter of the layers at or below a certain layer, which contain frequency components having little influence on the subjective image quality, to the maximum value, thereby setting the quantization coefficients of those layers to 0.
- the quantization control unit 7005 can control the subjective image quality and the generated code amount more finely.
- the hierarchy is a hierarchy (depth in a tree structure) in LoD or RAHT (Haar transformation).
- the entropy coding unit 7006 generates a bit stream by entropy coding (for example, arithmetic coding) the quantization coefficient. Further, the entropy coding unit 7006 encodes the quantization parameter for each layer set by the quantization control unit 7005.
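A per-layer quantization scheme of the kind described above might look like the following sketch. The QP-to-step mapping (step doubling every 6 QP) and the ceiling value are assumptions for illustration, not the normative behavior:

```python
MAX_QP = 51  # assumed maximum; at this value a layer's coefficients are zeroed

def quantize_layers(coeffs_per_layer, qp_per_layer):
    out = []
    for coeffs, qp in zip(coeffs_per_layer, qp_per_layer):
        if qp >= MAX_QP:
            # Layers whose content has little subjective impact can be
            # discarded entirely by setting their QP to the maximum value.
            out.append([0] * len(coeffs))
        else:
            step = 2 ** (qp / 6)  # illustrative QP-to-step mapping
            out.append([round(c / step) for c in coeffs])
    return out
```

Selecting an appropriate QP per layer trades generated code amount against accuracy layer by layer, as the quantization control unit does.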
- FIG. 126 is a block diagram showing a configuration of a three-dimensional data decoding device according to the present embodiment.
- This three-dimensional data decoding device includes an entropy decoding unit 7011, an inverse quantization unit 7012, a quantization control unit 7013, an inverse conversion unit 7014, a transformation matrix holding unit 7015, and an addition unit 7016.
- the entropy decoding unit 7011 decodes the quantization coefficient and the quantization parameter for each layer from the bit stream.
- the dequantization unit 7012 generates a coefficient value by dequantizing the quantization coefficient.
- the quantization control unit 7013 controls the quantization parameter used by the inverse quantization unit 7012 for the inverse quantization, based on the per-layer quantization parameters obtained by the entropy decoding unit 7011.
- the inverse conversion unit 7014 applies an inverse transform to the coefficient values.
- For example, the inverse conversion unit 7014 performs an inverse Haar transform on the coefficient values.
- the transformation matrix holding unit 7015 holds a transformation matrix used for the inverse transformation processing by the inverse transformation unit 7014.
- this transformation matrix is an inverse Haar transformation matrix.
- the addition unit 7016 generates output data by adding reference data to the coefficient value.
- the output data is attribute information included in the point cloud data, and the reference data is a predicted value of the attribute information.
- the quantization parameter of the lower layers is reduced to improve the accuracy of the lower layers.
- Thereby, the prediction accuracy of the higher layers can be improved.
- the amount of data can be reduced by increasing the quantization parameter for the higher layers.
- the quantization tree value (Qt) can be set individually for each LoD according to the usage policy of the user.
- the quantization tree value is, for example, a quantization parameter.
- FIG. 127 is a diagram showing a setting example of LoD. For example, as shown in FIG. 127, independent Qt0 to Qt2 are set for LoD0 to LoD2, respectively.
- the quantization parameter of the lower layers is reduced to improve the accuracy of the lower layers.
- Thereby, the prediction accuracy of the higher layers can be improved.
- the amount of data can be reduced by increasing the quantization parameter for the higher layers.
- the quantization tree value (Qt) can be set individually for each depth of the tree structure according to the usage policy of the user.
- FIG. 128 is a diagram showing an example of a hierarchical structure (tree structure) of RAHT. For example, as shown in FIG. 128, independent Qt0 to Qt2 are set for each depth of the tree structure.
- FIG. 129 is a block diagram showing the configuration of the three-dimensional data coding device 7020 according to the present embodiment.
- the three-dimensional data encoding device 7020 generates encoded data (encoded stream) by encoding point cloud data (point cloud).
- the three-dimensional data coding device 7020 includes a dividing unit 7021, a plurality of position information coding units 7022, a plurality of attribute information coding units 7023, an additional information coding unit 7024, and a multiplexing unit 7025.
- the division unit 7021 generates a plurality of division data by dividing the point cloud data. Specifically, the division unit 7021 generates a plurality of division data by dividing the space of the point cloud data into a plurality of subspaces. Here, the subspace is one of tiles and slices, or a combination of tiles and slices. More specifically, the point cloud data includes position information, attribute information (color or reflectance, etc.), and additional information. The division unit 7021 generates a plurality of division position information by dividing the position information, and generates a plurality of division attribute information by dividing the attribute information. In addition, the division unit 7021 generates additional information regarding the division.
- the division unit 7021 first divides the point cloud into tiles. Next, the dividing unit 7021 further divides the obtained tile into slices.
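The two-stage division can be sketched as follows. The tile size, the per-slice point limit, and the use of a 2D grid key are assumptions for illustration; the actual division criteria are carried in the additional information:

```python
def divide(points, tile_size=100.0, max_slice_points=2):
    # Stage 1: group points into tiles by coarse spatial cell (x, y).
    tiles = {}
    for p in points:
        key = (int(p[0] // tile_size), int(p[1] // tile_size))
        tiles.setdefault(key, []).append(p)
    # Stage 2: split each tile into slices of bounded point count.
    slices = []
    for tile_points in tiles.values():
        for i in range(0, len(tile_points), max_slice_points):
            slices.append(tile_points[i:i + max_slice_points])
    return slices
```

Each resulting slice is a subspace whose geometry and attributes can then be encoded independently.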
Abstract
Description
First, the data structure of encoded three-dimensional data (hereinafter also referred to as encoded data) according to the present embodiment will be described. FIG. 1 is a diagram showing the structure of the encoded three-dimensional data according to the present embodiment.
When encoded point cloud data is used in an actual device or service, it is desirable to transmit and receive only the information necessary for the application in order to suppress network bandwidth. However, no such function has existed in encoding structures for three-dimensional data, and no encoding method for that purpose has existed either.
In the present embodiment, a method of transmitting and receiving three-dimensional data between vehicles will be described. For example, three-dimensional data is transmitted and received between the own vehicle and surrounding vehicles.
In the present embodiment, operations for abnormal cases in self-position estimation based on a three-dimensional map will be described.
In the present embodiment, a method of transmitting three-dimensional data to a following vehicle and the like will be described.
In Embodiment 5, an example was described in which a client device such as a vehicle transmits three-dimensional data to another vehicle or to a server such as a traffic monitoring cloud. In the present embodiment, the client device transmits sensor information obtained by a sensor to a server or another client device.
In the present embodiment, a three-dimensional data encoding method and decoding method using inter prediction processing will be described.
Information of a three-dimensional point cloud includes geometry information (geometry) and attribute information (attribute). The geometry information includes coordinates (x coordinate, y coordinate, z coordinate) relative to a reference point. When encoding the geometry information, instead of directly encoding the coordinates of each three-dimensional point, a method is used in which the position of each three-dimensional point is represented by an octree representation and the octree information is encoded, thereby reducing the code amount.
A predicted value may be generated by a method different from that of Embodiment 8. In the following, a three-dimensional point to be encoded may be referred to as a first three-dimensional point, and a three-dimensional point around it may be referred to as a second three-dimensional point.
In the following, as another method of encoding the attribute information of three-dimensional points, a method using RAHT (Region Adaptive Hierarchical Transform) will be described. FIG. 74 is a diagram for explaining encoding of attribute information using RAHT.
The quantization parameter will be described below.
FIG. 96 is a flowchart showing an example of processing for determining a quantization value (quantization parameter value: QP value) in the encoding of geometry information (Geometry) or the encoding of attribute information (Attribute).
QA: QP value for encoding attribute information in a PCC frame; transmitted, using the APS, as difference information "2." indicating the difference from QG.
QGs1, QGs2: QP values for encoding geometry information in slice data; transmitted, using the header of the encoded geometry data, as difference information "3." and "5." indicating the differences from QG.
QAs1, QAs2: QP values for encoding attribute information in slice data; transmitted, using the header of the encoded attribute data, as difference information "4." and "6." indicating the differences from QA.
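The QP values above can be reconstructed from the frame-level geometry QP (QG) and the signalled differences. A minimal sketch, with illustrative variable names; the numbering "2." to "6." follows the description:

```python
def slice_qps(qg, delta_qa, deltas_qgs, deltas_qas):
    qa = qg + delta_qa                  # "2.": frame-level attribute QP (APS)
    qgs = [qg + d for d in deltas_qgs]  # "3.", "5.": per-slice geometry QPs
    qas = [qa + d for d in deltas_qas]  # "4.", "6.": per-slice attribute QPs
    return qa, qgs, qas
```

Only QG is sent as an absolute value; everything else is a difference from its parent in this hierarchy, which keeps the signalled values small.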
The processing described in the present embodiment may be performed in the three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device described in Embodiment 8 above.
To achieve high compression, attribute information included in PCC (Point Cloud Compression) data is transformed using a plurality of methods such as Lifting, RAHT (Region Adaptive Hierarchical Transform), or other transform methods. Here, Lifting is one of the transform methods using LoD (Level of Detail).
101, 201, 401, 501 acquisition unit
102, 402 encoding region determination unit
103 division unit
104, 644 encoding unit
111 three-dimensional data
112, 211, 413, 414, 511, 634 encoded three-dimensional data
200, 500 three-dimensional data decoding device
202 decoding start GOS determination unit
203 decoding SPC determination unit
204, 625 decoding unit
212, 512, 513 decoded three-dimensional data
403 SWLD extraction unit
404 WLD encoding unit
405 SWLD encoding unit
411 input three-dimensional data
412 extracted three-dimensional data
502 header analysis unit
503 WLD decoding unit
504 SWLD decoding unit
620, 620A three-dimensional data creation device
621, 641 three-dimensional data creation unit
622 request range determination unit
623 search unit
624, 642 reception unit
626 synthesis unit
631, 651 sensor information
632 first three-dimensional data
633 request range information
635 second three-dimensional data
636 third three-dimensional data
640 three-dimensional data transmission device
643 extraction unit
645 transmission unit
652 fifth three-dimensional data
654 sixth three-dimensional data
700 three-dimensional information processing device
701 three-dimensional map acquisition unit
702 own-vehicle detection data acquisition unit
703 abnormal case determination unit
704 coping operation determination unit
705 operation control unit
711 three-dimensional map
712 own-vehicle detection three-dimensional data
810 three-dimensional data creation device
811 data reception unit
812, 819 communication unit
813 reception control unit
814, 821 format conversion unit
815 sensor
816 three-dimensional data creation unit
817 three-dimensional data synthesis unit
818 three-dimensional data storage unit
820 transmission control unit
822 data transmission unit
831, 832, 834, 835, 836, 837 three-dimensional data
833 sensor information
901 server
902, 902A, 902B, 902C client device
1011, 1111 data reception unit
1012, 1020, 1112, 1120 communication unit
1013, 1113 reception control unit
1014, 1019, 1114, 1119 format conversion unit
1015 sensor
1016, 1116 three-dimensional data creation unit
1017 three-dimensional image processing unit
1018, 1118 three-dimensional data storage unit
1021, 1121 transmission control unit
1022, 1122 data transmission unit
1031, 1032, 1135 three-dimensional map
1033, 1037, 1132 sensor information
1034, 1035, 1134 three-dimensional data
1117 three-dimensional data synthesis unit
1201 three-dimensional map compression/decoding processing unit
1202 sensor information compression/decoding processing unit
1211 three-dimensional map decoding processing unit
1212 sensor information compression processing unit
1300 three-dimensional data encoding device
1301 division unit
1302 subtraction unit
1303 conversion unit
1304 quantization unit
1305, 1402 inverse quantization unit
1306, 1403 inverse conversion unit
1307, 1404 addition unit
1308, 1405 reference volume memory
1309, 1406 intra prediction unit
1310, 1407 reference space memory
1311, 1408 inter prediction unit
1312, 1409 prediction control unit
1313 entropy encoding unit
1400 three-dimensional data decoding device
1401 entropy decoding unit
3000 three-dimensional data encoding device
3001 geometry information encoding unit
3002 attribute information reassignment unit
3003 attribute information encoding unit
3010 three-dimensional data decoding device
3011 geometry information decoding unit
3012 attribute information decoding unit
5300 first encoding unit
5301 division unit
5302 geometry information encoding unit
5303 attribute information encoding unit
5304 additional information encoding unit
5305 multiplexing unit
5311 tile division unit
5312 slice division unit
5321, 5331, 5351, 5361 quantization value calculation unit
5322, 5332 entropy encoding unit
5323 quantization unit
5333 inverse quantization unit
5340 first decoding unit
5341 demultiplexing unit
5342 geometry information decoding unit
5343 attribute information decoding unit
5344 additional information decoding unit
5345 combining unit
5352, 5362 entropy decoding unit
6600 attribute information encoding unit
6601 sorting unit
6602 Haar transform unit
6603 quantization unit
6604, 6612 inverse quantization unit
6605, 6613 inverse Haar transform unit
6606, 6614 memory
6607 arithmetic encoding unit
6610 attribute information decoding unit
6611 arithmetic decoding unit
7001 subtraction unit
7002 conversion unit
7003 transformation matrix holding unit
7004 quantization unit
7005 quantization control unit
7006 entropy encoding unit
7011 entropy decoding unit
7012 inverse quantization unit
7013 quantization control unit
7014 inverse conversion unit
7015 transformation matrix holding unit
7016 addition unit
7020 three-dimensional data encoding device
7021 division unit
7022 geometry information encoding unit
7023 attribute information encoding unit
7024 additional information encoding unit
7025 multiplexing unit
7031 tile division unit
7032 slice division unit
7035 conversion unit
7036 quantization unit
7037 entropy encoding unit
7040 three-dimensional data decoding device
7041 demultiplexing unit
7042 geometry information decoding unit
7043 attribute information decoding unit
7044 additional information decoding unit
7045 combining unit
7051 entropy decoding unit
7052 inverse quantization unit
7053 inverse conversion unit
7061, 7072 LoD setting unit
7062, 7073 search unit
7063, 7074 prediction unit
7064 subtraction unit
7065, 7083 quantization unit
7066, 7075, 7084, 7092 inverse quantization unit
7067, 7076 reconstruction unit
7068, 7077, 7086, 7094 memory
7069, 7087 arithmetic encoding unit
7070, 7088 ΔQP calculation unit
7071, 7091 arithmetic decoding unit
7081 sorting unit
7082 Haar transform unit
7085, 7093 inverse Haar transform unit
7901 subtraction unit
7902 conversion unit
7903 transformation matrix holding unit
7904 quantization unit
7905 quantization control unit
7906 entropy encoding unit
7911 entropy decoding unit
7912 inverse quantization unit
7913 quantization control unit
7914 inverse conversion unit
7915 transformation matrix holding unit
7916 addition unit
7920 three-dimensional data encoding device
7921, 7921A, 7922, 7922A attribute information encoding unit
7930 three-dimensional data decoding device
7931, 7931A, 7931B, 7932, 7932A, 7932B attribute information decoding unit
7941, 7941A, 7941B left bit shift unit
7942, 7942A, 7942B conversion unit
7943 right bit shift unit
7944, 7944A scale value calculation unit
7945, 7945A, 7945B quantization unit
7946, 7946A, 7946B entropy encoding unit
7951, 7951A, 7951B entropy decoding unit
7952, 7952A, 7952B, 7952C scale value calculation unit
7953, 7953A, 7953B, 7953C, 7953D inverse quantization unit
7954, 7954A, 7954B left bit shift unit
7955, 7955A, 7955B inverse conversion unit
7956, 7956A, 7956B right bit shift unit
7960A, 7960B, 7970A table
7961, 7971 common calculation unit
7962, 7963, 7964, 7972 bit shift unit
Claims (11)
- A three-dimensional data encoding method using a first encoding scheme and a second encoding scheme different from the first encoding scheme, the method comprising: converting a first quantization parameter into a first scale value, or the first scale value into the first quantization parameter, using a first table which indicates a correspondence between a plurality of values of the first quantization parameter and a plurality of values of the first scale value and which is common to the first encoding scheme and the second encoding scheme; generating encoded attribute information by encoding including a first quantization process of dividing, by the first scale value, each of a plurality of first coefficient values based on a plurality of pieces of attribute information of a plurality of three-dimensional points included in point cloud data; and generating a bitstream including the encoded attribute information and the first quantization parameter.
- The three-dimensional data encoding method according to claim 1, wherein, in the encoding including the first quantization process: a plurality of pieces of shifted attribute information are generated by applying a leftward bit shift to each of the plurality of pieces of attribute information; the plurality of first coefficient values are generated by applying, to the plurality of pieces of shifted attribute information, a transform process using a plurality of pieces of geometry information of the plurality of three-dimensional points; the first scale value is a value obtained by multiplying a second scale value for quantization by a coefficient corresponding to the leftward bit shift; and dividing each of the plurality of first coefficient values by the first scale value performs the quantization and a rightward bit shift of the same number of bits as the leftward bit shift.
- The three-dimensional data encoding method according to claim 2, wherein a first bit count, which is the number of bits of the leftward bit shift and the rightward bit shift used in the first encoding scheme, differs from a second bit count, which is the number of bits of the leftward bit shift and the rightward bit shift used in the second encoding scheme, and in the conversion from the first scale value to the first quantization parameter, or from the first quantization parameter to the first scale value: when the first encoding scheme is used, (i) the first quantization parameter is determined by applying the first table to the first scale value, or (ii) the first scale value is determined by applying the first table to the first quantization parameter; and when the second encoding scheme is used, (i) the first scale value is bit-shifted by the number of bits equal to the difference between the first bit count and the second bit count, and the first quantization parameter is determined by applying the first table to the bit-shifted first scale value, or (ii) a third scale value is determined by applying the first table to the first quantization parameter, and the first scale value is calculated by bit-shifting the third scale value by the number of bits equal to the difference between the first bit count and the second bit count.
- The three-dimensional data encoding method according to claim 1, further comprising: converting a second quantization parameter into a fourth scale value, or the fourth scale value into the second quantization parameter, using a second table which indicates a correspondence between a plurality of values of the second quantization parameter and a plurality of values of the fourth scale value and which is common to the first encoding scheme and the second encoding scheme; and generating encoded geometry information by encoding including a second quantization process of dividing, by the fourth scale value, each of a plurality of second coefficient values based on a plurality of pieces of geometry information of the plurality of three-dimensional points, wherein the bitstream further includes the encoded geometry information and the second quantization parameter.
- A three-dimensional data decoding method using a first decoding scheme and a second decoding scheme different from the first decoding scheme, the method comprising: obtaining, from a bitstream, encoded attribute information in which a plurality of pieces of attribute information of a plurality of three-dimensional points included in point cloud data are encoded, and a first quantization parameter; converting the first quantization parameter into a first scale value using a first table which indicates a correspondence between a plurality of values of the first quantization parameter and a plurality of values of the first scale value and which is common to the first decoding scheme and the second decoding scheme; and decoding the plurality of pieces of attribute information by decoding including a first inverse quantization process of multiplying, by the first scale value, each of a plurality of first quantized coefficients based on the encoded attribute information.
- The three-dimensional data decoding method according to claim 5, wherein, in the decoding including the first inverse quantization process: a plurality of first coefficient values are generated from the plurality of first quantized coefficients by the first inverse quantization process; a plurality of pieces of shifted attribute information are generated by applying, to the plurality of first coefficient values, an inverse transform process using a plurality of pieces of geometry information of the plurality of three-dimensional points; the plurality of pieces of attribute information are generated by applying a rightward bit shift to each of the plurality of pieces of shifted attribute information; the first scale value is a value obtained by multiplying a second scale value for inverse quantization by a coefficient corresponding to the rightward bit shift; and multiplying each of the plurality of first quantized coefficients by the first scale value performs a leftward bit shift of the same number of bits as the rightward bit shift and the inverse quantization.
- The three-dimensional data decoding method according to claim 6, wherein a first bit count, which is the number of bits of the rightward bit shift and the leftward bit shift used in the first decoding scheme, differs from a second bit count, which is the number of bits of the rightward bit shift and the leftward bit shift used in the second decoding scheme, and in the conversion from the first quantization parameter to the first scale value: when the first decoding scheme is used, the first scale value is determined by applying the first table to the first quantization parameter; and when the second decoding scheme is used, a third scale value is determined by applying the first table to the first quantization parameter, and the first scale value is calculated by bit-shifting the third scale value by the number of bits equal to the difference between the first bit count and the second bit count.
- The three-dimensional data decoding method according to claim 5, further comprising: obtaining, from the bitstream, encoded geometry information in which a plurality of pieces of geometry information of the plurality of three-dimensional points are encoded, and a second quantization parameter; converting the second quantization parameter into a fourth scale value using a second table which indicates a correspondence between a plurality of values of the second quantization parameter and a plurality of values of the fourth scale value and which is common to the first decoding scheme and the second decoding scheme; and decoding the plurality of pieces of geometry information by decoding including a second inverse quantization process of multiplying, by the fourth scale value, each of a plurality of second quantized coefficients based on the encoded geometry information.
- The three-dimensional data decoding method according to claim 5, wherein, when the first quantization parameter is smaller than 4, the first quantization parameter is regarded as 4.
- A three-dimensional data encoding device using a first encoding scheme and a second encoding scheme different from the first encoding scheme, the device comprising: a processor; and memory, wherein, using the memory, the processor: converts a first quantization parameter into a first scale value, or the first scale value into the first quantization parameter, using a first table which indicates a correspondence between a plurality of values of the first quantization parameter and a plurality of values of the first scale value and which is common to the first encoding scheme and the second encoding scheme; generates encoded attribute information by encoding including a first quantization process of dividing, by the first scale value, each of a plurality of first coefficient values based on a plurality of pieces of attribute information of a plurality of three-dimensional points included in point cloud data; and generates a bitstream including the encoded attribute information and the first quantization parameter.
- A three-dimensional data decoding device using a first decoding scheme and a second decoding scheme different from the first decoding scheme, the device comprising: a processor; and memory, wherein, using the memory, the processor: obtains, from a bitstream, encoded attribute information in which a plurality of pieces of attribute information of a plurality of three-dimensional points included in point cloud data are encoded, and a first quantization parameter; converts the first quantization parameter into a first scale value using a first table which indicates a correspondence between a plurality of values of the first quantization parameter and a plurality of values of the first scale value and which is common to the first decoding scheme and the second decoding scheme; and decodes the plurality of pieces of attribute information by decoding including a first inverse quantization process of multiplying, by the first scale value, each of a plurality of first quantized coefficients based on the encoded attribute information.
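The shared-table conversion of the claims can be sketched as below. The table contents are invented placeholders (the normative table is not reproduced here), `scheme_bits`/`table_bits` stand in for the two bit counts of the claims (assumed here with `scheme_bits >= table_bits`), and the clamp reflects the rule that a quantization parameter smaller than 4 is regarded as 4:

```python
# Placeholder QP-to-scale table shared by both coding schemes
# (values are illustrative assumptions, not the normative table).
QP_TO_SCALE = {0: 1, 4: 2, 8: 4, 12: 8}

def scale_from_qp(qp, scheme_bits=0, table_bits=0):
    qp = max(qp, 4)  # a QP smaller than 4 is regarded as 4
    base = QP_TO_SCALE[qp]
    # For the second scheme, the table result is bit-shifted by the
    # difference between the scheme's bit count and the table's bit count.
    return base << (scheme_bits - table_bits)
```

Sharing one table and adjusting only by a bit shift lets both schemes signal the same quantization parameter while using different fixed-point precisions.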
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2021010964A MX2021010964A (es) | 2019-03-18 | 2020-03-18 | Metodo de codificacion de datos tridimensionales, metodo de decodificacion de datos tridimensionales, dispositivo codificador de datos tridimensionales y dispositivo decodificador de datos tridimensionales. |
CN202080021032.3A CN113557550A (zh) | 2019-03-18 | 2020-03-18 | 三维数据编码方法、三维数据解码方法、三维数据编码装置、以及三维数据解码装置 |
BR112021016832-0A BR112021016832A2 (pt) | 2019-03-18 | 2020-03-18 | Método de codificação de dados tridimensionais, método de decodificação de dados tridimensionais, dispositivo de codificação de dados tridimensionais, e dispositivo de decodificação de dados tridimensionais |
EP20773942.6A EP3944195A4 (en) | 2019-03-18 | 2020-03-18 | METHOD OF ENCODING THREE-DIMENSIONAL DATA, METHOD OF DECODING OF THREE-DIMENSIONAL DATA, DEVICE FOR ENCODING OF THREE-DIMENSIONAL DATA, AND DECODER OF DECODING OF THREE-DIMENSIONAL DATA |
JP2021507391A JP7453211B2 (ja) | 2019-03-18 | 2020-03-18 | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 |
KR1020217029185A KR20210139261A (ko) | 2019-03-18 | 2020-03-18 | 삼차원 데이터 부호화 방법, 삼차원 데이터 복호 방법, 삼차원 데이터 부호화 장치, 및 삼차원 데이터 복호 장치 |
US17/469,257 US11889114B2 (en) | 2019-03-18 | 2021-09-08 | Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device |
US18/530,610 US20240114167A1 (en) | 2019-03-18 | 2023-12-06 | Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device |
JP2024035240A JP2024061790A (ja) | 2019-03-18 | 2024-03-07 | 符号化方法、復号方法、符号化装置、及び復号装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962819913P | 2019-03-18 | 2019-03-18 | |
US62/819,913 | 2019-03-18 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/469,257 Continuation US11889114B2 (en) | 2019-03-18 | 2021-09-08 | Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020189709A1 true WO2020189709A1 (ja) | 2020-09-24 |
Family
ID=72520338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/011928 WO2020189709A1 (ja) | 2019-03-18 | 2020-03-18 | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 |
Country Status (8)
Country | Link |
---|---|
US (2) | US11889114B2 (ja) |
EP (1) | EP3944195A4 (ja) |
JP (2) | JP7453211B2 (ja) |
KR (1) | KR20210139261A (ja) |
CN (1) | CN113557550A (ja) |
BR (1) | BR112021016832A2 (ja) |
MX (1) | MX2021010964A (ja) |
WO (1) | WO2020189709A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022062369A1 (zh) * | 2020-09-25 | 2022-03-31 | Oppo广东移动通信有限公司 | 点云编解码方法与系统、及点云编码器与点云解码器 |
WO2022075326A1 (ja) * | 2020-10-07 | 2022-04-14 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 |
WO2022075079A1 (ja) * | 2020-10-06 | 2022-04-14 | ソニーグループ株式会社 | 情報処理装置および方法 |
WO2022145214A1 (ja) * | 2020-12-28 | 2022-07-07 | ソニーグループ株式会社 | 情報処理装置および方法 |
WO2022191132A1 (ja) * | 2021-03-09 | 2022-09-15 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 |
WO2023204040A1 (ja) * | 2022-04-22 | 2023-10-26 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 復号方法、符号化方法、復号装置及び符号化装置 |
WO2024020403A1 (en) * | 2022-07-22 | 2024-01-25 | Bytedance Inc. | Method, apparatus, and medium for visual data processing |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230296758A1 (en) * | 2022-03-18 | 2023-09-21 | Nvidia Corporation | Sensor data point cloud generation for map creation and localization for autonomous systems and applications |
CN118175333A (zh) * | 2022-12-09 | 2024-06-11 | 维沃移动通信有限公司 | 属性变换解码方法、属性变换编码方法及终端 |
WO2024148573A1 (zh) * | 2023-01-12 | 2024-07-18 | Oppo广东移动通信有限公司 | 编解码方法、编码器、解码器以及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014020663A1 (ja) | 2012-07-30 | 2014-02-06 | 三菱電機株式会社 | 地図表示装置 |
JP2018078503A (ja) * | 2016-11-11 | 2018-05-17 | 日本電信電話株式会社 | データ符号化方法、データ符号化装置及びデータ符号化プログラム |
JP2018101404A (ja) * | 2016-09-13 | 2018-06-28 | ダッソー システムズDassault Systemes | 物理的属性を表す信号の圧縮 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10694210B2 (en) * | 2016-05-28 | 2020-06-23 | Microsoft Technology Licensing, Llc | Scalable point cloud compression with transform, and corresponding decompression |
JP7050683B2 (ja) * | 2016-08-26 | 2022-04-08 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元情報処理方法及び三次元情報処理装置 |
-
2020
- 2020-03-18 CN CN202080021032.3A patent/CN113557550A/zh active Pending
- 2020-03-18 MX MX2021010964A patent/MX2021010964A/es unknown
- 2020-03-18 WO PCT/JP2020/011928 patent/WO2020189709A1/ja unknown
- 2020-03-18 JP JP2021507391A patent/JP7453211B2/ja active Active
- 2020-03-18 BR BR112021016832-0A patent/BR112021016832A2/pt unknown
- 2020-03-18 EP EP20773942.6A patent/EP3944195A4/en active Pending
- 2020-03-18 KR KR1020217029185A patent/KR20210139261A/ko unknown
-
2021
- 2021-09-08 US US17/469,257 patent/US11889114B2/en active Active
-
2023
- 2023-12-06 US US18/530,610 patent/US20240114167A1/en active Pending
-
2024
- 2024-03-07 JP JP2024035240A patent/JP2024061790A/ja active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014020663A1 (ja) | 2012-07-30 | 2014-02-06 | 三菱電機株式会社 | 地図表示装置 |
JP2018101404A (ja) * | 2016-09-13 | 2018-06-28 | ダッソー システムズDassault Systemes | 物理的属性を表す信号の圧縮 |
JP2018078503A (ja) * | 2016-11-11 | 2018-05-17 | 日本電信電話株式会社 | データ符号化方法、データ符号化装置及びデータ符号化プログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3944195A4 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022062369A1 (zh) * | 2020-09-25 | 2022-03-31 | Oppo广东移动通信有限公司 | 点云编解码方法与系统、及点云编码器与点云解码器 |
WO2022075079A1 (ja) * | 2020-10-06 | 2022-04-14 | ソニーグループ株式会社 | 情報処理装置および方法 |
WO2022075326A1 (ja) * | 2020-10-07 | 2022-04-14 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 |
WO2022145214A1 (ja) * | 2020-12-28 | 2022-07-07 | ソニーグループ株式会社 | 情報処理装置および方法 |
EP4270323A4 (en) * | 2020-12-28 | 2024-05-29 | Sony Group Corporation | DEVICE AND METHOD FOR PROCESSING INFORMATION |
WO2022191132A1 (ja) * | 2021-03-09 | 2022-09-15 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 |
WO2023204040A1 (ja) * | 2022-04-22 | 2023-10-26 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 復号方法、符号化方法、復号装置及び符号化装置 |
WO2024020403A1 (en) * | 2022-07-22 | 2024-01-25 | Bytedance Inc. | Method, apparatus, and medium for visual data processing |
Also Published As
Publication number | Publication date |
---|---|
JP7453211B2 (ja) | 2024-03-19 |
BR112021016832A2 (pt) | 2021-10-19 |
US20240114167A1 (en) | 2024-04-04 |
US11889114B2 (en) | 2024-01-30 |
JP2024061790A (ja) | 2024-05-08 |
MX2021010964A (es) | 2021-10-13 |
EP3944195A4 (en) | 2022-08-03 |
KR20210139261A (ko) | 2021-11-22 |
US20210409769A1 (en) | 2021-12-30 |
EP3944195A1 (en) | 2022-01-26 |
CN113557550A (zh) | 2021-10-26 |
JPWO2020189709A1 (ja) | 2020-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7512209B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
WO2020189709A1 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7167147B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7330962B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7245244B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7167144B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7448519B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7389028B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7410879B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7478668B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7448517B2 (ja) | 三次元データの符号化方法、三次元データの復号方法、三次元データの符号化装置、及び三次元データの復号装置 | |
WO2020138352A1 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
WO2020075862A1 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
WO2020196677A1 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
WO2020196680A1 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7444849B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
JP7453212B2 (ja) | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置 | |
RU2806744C2 (ru) | Способ кодирования трехмерных данных, способ декодирования трехмерных данных, устройство кодирования трехмерных данных и устройство декодирования трехмерных данных |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20773942 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021507391 Country of ref document: JP Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112021016832 Country of ref document: BR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 112021016832 Country of ref document: BR Kind code of ref document: A2 Effective date: 20210825 |
|
ENP | Entry into the national phase |
Ref document number: 2020773942 Country of ref document: EP Effective date: 20211018 |