WO2020244296A1 - System and method for multi-layer representation of a depth map in intra-frame coding - Google Patents
System and method for multi-layer representation of a depth map in intra-frame coding
- Publication number
- WO2020244296A1 (PCT/CN2020/082464)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- pixels
- depth
- encoder
- module
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
Definitions
- The present invention relates to a system and method for processing a depth map, and more particularly to a system and method for multi-layer representation of a depth map in intra-frame coding.
- A depth map is an image or video that records the distance between each observable scene point and the optical center of the camera. By specifying the depth of a point in the scene, it provides additional information for the corresponding color pixels in color images or videos shot from the same position. The depth map is therefore a key component of the 3D multimedia experience.
- With the depth map, a display device has the 3D structure information needed to recover the scene depicted in the image or video.
- Compression and encoding of the depth map reduce the amount of depth map data and provide a standard that allows different terminal devices on the network to interpret the transmitted data. Depth map coding can also be regarded as part of the overall 3D video data compression process.
- The most common depth map coding schemes, like general video coding schemes, follow the same "hybrid" video coding framework.
- The frame is divided into blocks, exploiting the spatial dependence between adjacent blocks and frames.
- Data is predicted and coded from previously coded blocks and frames.
- Intra coding is a basic step of depth map coding: it is the part of the encoding process in which data is predicted only from previously coded data of the same frame.
- An intra-frame coding method operates only on the information contained in the current frame and does not operate on information contained in any other frame of the video sequence.
- The most advanced prior-art standard, 3D High Efficiency Video Coding ("3D-HEVC"), adopts the following intra-frame coding methods: 1. direct current (DC) and planar prediction; 2. wedgelet-based depth modeling; 3. contour-based depth modeling. Directional prediction can only handle smooth data. Wedgelet-based and contour-based depth modeling can handle sharp changes, but the blocks into which the frame is divided are limited to two layers. In addition, contour-based depth modeling usually derives the sharp changes from the corresponding texture video. The limitation of these methods is that the segmentation is performed only once: if the segmentation quality is poor, other coding methods must be relied upon to improve the reconstruction quality.
- By adaptively quantizing the image and dividing it into constant-valued layers, the method of the present invention provides a system and method for compressing depth map data of relatively complex scenes.
- The present invention provides a system for multi-layer representation of an intra-frame coded depth map, comprising the following devices: a blocking module, which divides the depth map data into blocks; and a progressive quantization module, which sets the progressive quantization and its stopping conditions for the blocked depth map data.
- One aspect of the present invention further comprises the following devices: a data encapsulation module, which performs data encapsulation on the multi-layer depth map once iteration has stopped; and a data output module, which outputs the encapsulated bit stream to the decoder side.
- The progressive quantization module, which sets the progressive quantization and its stopping conditions for the blocked depth map data, further comprises: a layering module, which decomposes the depth block into multiple layers, each layer containing a subset of the pixels of the depth block that is mutually exclusive with the other layers; a multi-layer representation module, which represents the multiple layers in a non-parametric way, so that a layer can represent any subset of the pixels in the depth block; and an iterative module, which encodes the multiple layers iteratively, continuously monitoring the reconstruction residual and treating the remaining areas with high priority.
- The progressive quantization module, which sets the progressive quantization and its stopping conditions for the blocked depth map data, comprises: a new layer creation module, with which the encoder initializes an empty layer list, creates a new layer in the layer list, and classifies all pixels of the block into this single layer; the encoder then repeats the following modules until the iterative process ends: a calculation module, with which, for each layer in the layer list, the encoder calculates the mean and variance of the depth values of all pixels in the layer and attaches each mean to the corresponding layer; a depth value reconstruction module, with which the encoder finds the maximum of all the calculated variances, identifies the layer with the largest variance (called the maximum variance layer), creates a reconstructed block by assigning to each pixel the mean of the layer to which it belongs, and calculates the sum of squared errors between the reconstructed block and the original depth block; and the new layer creation module, with which the encoder creates a new layer at the end of the layer list, selects all pixels of the maximum variance layer whose depth value is greater than the mean of that layer, removes them from the maximum variance layer, and assigns them to the new layer.
- The predetermined threshold is the required reconstruction quality expressed in terms of the sum of squared errors (SSE).
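As a minimal illustration of this stopping test, the following sketch (Python with NumPy; the function name, block size and threshold value are illustrative assumptions, not taken from the disclosure) compares the sum of squared errors between a reconstructed block and the original block against the required quality:

```python
import numpy as np

def meets_quality_target(original_block: np.ndarray,
                         reconstructed_block: np.ndarray,
                         sse_threshold: float) -> bool:
    """Return True when the sum of squared errors (SSE) between the
    reconstruction and the original falls below the predetermined threshold."""
    residual = original_block.astype(np.int64) - reconstructed_block.astype(np.int64)
    sse = int(np.sum(residual * residual))
    return sse < sse_threshold

# Hypothetical usage on an 8x8 block of 8-bit depth values.
original = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
flat_reconstruction = np.full((8, 8), int(original.mean()), dtype=np.uint8)
print(meets_quality_target(original, flat_reconstruction, sse_threshold=500.0))
```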
- The data encapsulation module further comprises: an average value attachment module, which uses the encoder to calculate, for every layer in the layer list, the mean and variance of the depth values of all pixels in the layer, and attaches each mean to the corresponding layer; a layer processing module, with which the encoder reorders the layers by sorting them in descending order of area (number of pixels), so that the layer with the most pixels is processed first; and a binary mapping module, with which the encoder takes the not-yet-encoded layer with the largest number of pixels and forms a binary map, marking all pixels of that layer as "1" and all other pixels as "0", and encodes this binary map with context-adaptive binary arithmetic coding; after completing the map, the encoder continues with the next largest layer and repeats until one layer remains; the last layer does not require a binary map because it automatically fills all remaining pixels.
- The final output data of the data encapsulation module is composed of the following: an integer representing the number of layers in the depth block; a series of bits containing the binary maps of all layers; and a series of integers representing the depth value of each layer.
- The present invention also provides a method for multi-layer representation of a depth map in intra-frame coding, which includes the following steps: dividing the depth map data into blocks; and setting the progressive quantization and its stopping conditions for the blocked depth map data.
- the present invention also provides an encoder for implementing the method of the present invention.
- The progressive quantization and its stopping conditions are set for the blocked depth map data as follows: a new layer creation module, with which the encoder initializes an empty layer list, creates a new layer in the layer list, and classifies all pixels of the block into this single layer; the encoder then repeats the following modules until the iterative process ends: a calculation module, with which, for each layer in the layer list, the encoder calculates the mean and variance of the depth values of all pixels in the layer and attaches each mean to the corresponding layer; a depth value reconstruction module, with which the encoder finds the maximum of all the calculated variances, identifies the layer with the largest variance (the maximum variance layer), creates a reconstructed block by assigning to each pixel the mean of the layer to which it belongs, and calculates the sum of squared errors between the reconstructed block and the original depth block; and the new layer creation module, with which the encoder creates a new layer at the end of the layer list, selects all pixels of the maximum variance layer whose depth value is greater than the mean of that layer, removes them from the maximum variance layer, and assigns them to the new layer.
- The method described in the present invention processes depth map data in a manner suited to the properties of depth data.
- Depth maps usually contain large, smooth areas with clear boundaries between them.
- The present invention exploits the smoothness of the depth map by reducing the pixels within the same smooth area to simple representative values, while investing resources in recording the pixel grouping.
- The quality target allows the reconstruction output of this method to better serve the overall quality and bit-rate control of the video encoder.
- One immediate application of the present invention is 3D video content compression for online video broadcasting. Another application is 3D video format conversion.
- Fig. 1 is a schematic diagram of a method for multi-layer representation of an intra-frame coded depth map according to the present invention.
- Figures 2a-2e are examples of the steps of setting progressive quantization and its stopping conditions in the method for multi-layer representation of intra-coded depth maps according to the present invention.
- Fig. 3 schematically shows a block diagram of a server for executing the method according to the present invention.
- Fig. 4 schematically shows a storage unit for holding or carrying program codes for implementing the method according to the present invention.
- the present invention aims to provide an effective method for processing complex depth map data.
- In the present invention, the depth map is in effect decomposed into several "shapes" which together cover all the pixels of the depth block to be processed, and each of these shapes has a depth value attached to it for use as the decoding result.
- The method uses a variance-based criterion to identify regions with poor reconstruction quality in an iterative manner, repeatedly improving the reconstruction quality.
- Fig. 1 is a schematic diagram of a method for multi-layer representation of an intra-frame coded depth map according to the present invention.
- the depth block is decomposed into multiple layers, and each layer contains a subset of pixels in the depth block that are mutually exclusive with other layers.
- See step B, "Set progressive quantization and its stopping conditions", below for details.
- the multiple layers are represented in a non-parametric manner, and each layer can represent any subset of all pixels in the depth block.
- See sub-step d of step B, "Set progressive quantization and its stopping conditions", below. This representation adapts to complex scenes and allows depth values to be assigned arbitrarily.
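To make the notion of a non-parametric layer concrete, the sketch below (Python with NumPy; the block values and variable names are illustrative assumptions) stores a layer simply as a boolean mask over the block, so it can describe any subset of pixels rather than a region constrained by a few shape parameters such as a wedgelet line:

```python
import numpy as np

# A toy 4x4 depth block.
block = np.array([[30, 30, 25, 25],
                  [30, 25, 25, 12],
                  [25, 25, 12, 10],
                  [25, 12, 10, 10]], dtype=np.uint8)

# A layer is just a boolean mask: any subset of pixels can be represented,
# with no parametric shape model behind it.
layer_mask = block >= 25
layer_mean = int(round(float(block[layer_mask].mean())))
print(layer_mask.astype(int))
print("representative depth value attached to this layer:", layer_mean)
```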
- the encoding process adopts an iterative method. See step B.
- Step A Divide the depth data into blocks
- the picture is divided into smaller units.
- the block width and height are usually powers of 2. For example, 2 ⁇ 2, 4 ⁇ 4, 8 ⁇ 8...
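A minimal blocking sketch follows (Python with NumPy). The 8×8 block size, the edge-replication padding for partial blocks and the function name are illustrative assumptions; the description only states that block dimensions are usually powers of 2:

```python
import numpy as np

def split_into_blocks(depth_map: np.ndarray, block_size: int = 8):
    """Yield (row, col, block) tuples covering the depth map in raster order.
    Border blocks are padded by edge replication so every block is square."""
    h, w = depth_map.shape
    pad_h = (-h) % block_size
    pad_w = (-w) % block_size
    padded = np.pad(depth_map, ((0, pad_h), (0, pad_w)), mode="edge")
    for r in range(0, padded.shape[0], block_size):
        for c in range(0, padded.shape[1], block_size):
            yield r, c, padded[r:r + block_size, c:c + block_size]

depth_map = np.random.randint(0, 256, size=(20, 30), dtype=np.uint8)
blocks = list(split_into_blocks(depth_map, block_size=8))
print(len(blocks))  # ceil(20/8) * ceil(30/8) = 3 * 4 = 12 blocks
```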
- Step B Set progressive quantization and its stopping conditions
- The step of setting the progressive quantization and its stopping conditions consists of several stages. The encoder first initializes an empty "layer list", creates a new layer in the list, and classifies all pixels of the block into this single layer. After initialization, the encoder repeats the following process until the stopping condition set during the process is met.
- a. For each layer in the layer list, the encoder calculates the mean and variance of the depth values of all pixels in the layer. Each mean is attached to the corresponding layer.
- b. The encoder creates a "reconstructed block" by assigning to each pixel the mean of the layer to which it belongs, and calculates the sum of squared errors (SSE) between the reconstructed block and the original depth block. If the SSE is less than a predetermined threshold (for example, the required reconstruction quality expressed in terms of SSE), the iterative process ends.
- c. The encoder finds the maximum value among all the variances calculated in step a and identifies the layer with the largest variance, referred to as the layer with the largest variance ("LLV").
- d. The encoder creates a new layer at the end of the layer list, selects all pixels of the LLV whose depth value is greater than the mean of the LLV, removes these pixels from the LLV, and assigns them to the new layer. The process then returns to step a.
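The sketch below walks through sub-steps a-d in Python with NumPy. It is an illustration under stated assumptions, not the reference implementation: the function and variable names are mine, means are rounded to integers for reconstruction, ties in the variance comparison are broken by list order, and a max_layers cap is added as a safeguard that the description does not mention.

```python
import numpy as np

def progressive_quantize(block: np.ndarray, sse_threshold: float, max_layers: int = 16):
    """Decompose a depth block into layers (lists of flat pixel indices)
    plus one representative mean depth per layer."""
    flat = block.astype(np.float64).ravel()
    layers = [list(range(flat.size))]              # layer 0 initially holds every pixel
    while True:
        # a. mean and variance of the depth values in every layer
        means = [flat[idx].mean() for idx in layers]
        variances = [flat[idx].var() for idx in layers]
        # b. reconstruct the block from the per-layer means and test the stop condition
        recon = np.empty_like(flat)
        for idx, m in zip(layers, means):
            recon[idx] = round(m)
        sse = float(np.sum((flat - recon) ** 2))
        if sse < sse_threshold or len(layers) >= max_layers:
            break
        # c. identify the layer with the largest variance (LLV)
        llv = int(np.argmax(variances))
        # d. move the LLV pixels above its mean into a new layer at the end of the list
        above = [i for i in layers[llv] if flat[i] > means[llv]]
        if not above:                              # nothing left to split
            break
        layers[llv] = [i for i in layers[llv] if flat[i] <= means[llv]]
        layers.append(above)
    return layers, [int(round(m)) for m in means], recon.reshape(block.shape)

# Hypothetical usage on a toy 4x4 block loosely modeled on Figures 2a-2e.
block = np.array([[30, 30, 25, 25],
                  [30, 25, 25, 12],
                  [25, 25, 12, 10],
                  [25, 12, 10, 10]], dtype=np.uint8)
layers, depth_values, recon = progressive_quantize(block, sse_threshold=50.0)
print(len(layers), depth_values)                   # e.g. three layers, one value each
```

Driving each split with the largest-variance layer concentrates additional layers exactly where the single-mean approximation is worst, which is what allows the reconstruction quality to be improved repeatedly rather than being fixed by a one-shot segmentation.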
- Figures 2a-2e are an example of the step of setting progressive quantization and its stopping conditions in the method for multi-layer representation of an intra-coded depth map according to the present invention.
- Figure 2a-1 is an example of a depth block, which is colored with different shades of gray to indicate different values.
- The white part 201 represents the area with the depth value "30".
- The light gray part 202 represents the area with the depth value "25".
- The darker gray part 203 represents the areas with the depth values "20" and "12".
- The dark gray part 204 represents the area with the depth value "10".
- Figure 2a-2 shows the initial layer list, in which all pixels belong to "layer 0".
- Figure 2b-1 shows that after the first iteration, the entire block is divided into two layers.
- The dark gray part 205 in Figure 2b-1 is the part of the depth map of Figure 2a-1 whose values are below 15; the light gray part 206 in Figure 2b-1 is the part whose values are above 20.
- Figure 2b-2 shows that, following Figure 2b-1, the block is divided into two layers and a new layer, "layer 1", is created in addition to "layer 0".
- Figure 2c-1 shows the depth map reconstructed from two average values: the dark gray part 205 of Figure 2b-1 takes the average depth value "10", and the light gray part 206 takes the average depth value "25".
- Figure 2c-2 shows the layer list after one iteration, consisting of "layer 1" and "layer 0": the part with the value "10" forms "layer 0" and the part with the value "25" forms "layer 1".
- Figure 2d-1 shows the second iteration: "layer 1" is split into two layers. The original depth values of "layer 1" in Figure 2a-1 are divided into a part with values less than or equal to "25" and a part with values equal to "30", i.e. the white part 207 and the darker gray part 208 in Figure 2d-1.
- Figure 2d-2 shows that the part with the depth value "30" becomes a new layer, "layer 2".
- Figure 2e-1 shows that the depth map is reconstructed from three average values.
- The dark gray part 205 of Figure 2b-1, whose average depth value is "10", remains unchanged; the white part 207 of Figure 2d-1, with depth values less than or equal to "25", is re-averaged and now takes the average value "22"; the darker gray part 208 of Figure 2d-1, with the depth value "30", remains unchanged and no further average is taken.
- Figure 2e-2 shows the layer list after the second iteration, consisting of "layer 2", "layer 1" and "layer 0": the part with the depth value "10" is "layer 0", the part with the depth value "22" is "layer 1", and the part with the depth value "30" is "layer 2".
Step C Data encapsulation
- Having completed the previous stage, the encoder now has a layer list of one or more layers; each layer has an average depth value attached and contains some of the pixels of the depth block. The data encapsulation is then completed through the following two sub-steps in order to output the data to the decoder side:
- Average value attachment step: for every layer in the layer list, the encoder calculates the mean and variance of the depth values of all pixels in the layer and attaches each average to the corresponding layer. The encoder then reorders the layers by sorting them in descending order of area (number of pixels), so that the layer with the most pixels is processed first.
- Binary mapping step: the encoder takes the not-yet-encoded layer with the largest number of pixels and forms a binary map, marking all pixels of that layer as "1" and all other pixels as "0", and encodes this binary map with context-adaptive binary arithmetic coding. After completing the map, the encoder continues with the next largest layer and repeats until one layer remains. The last layer does not require a binary map because it automatically fills all remaining pixels.
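The sketch below shows how the binary maps could be formed in descending order of layer size, with the last (smallest) layer left implicit. The context-adaptive binary arithmetic coding of each map is deliberately not reproduced; the function name and the example layer indices are illustrative assumptions:

```python
import numpy as np

def build_binary_maps(block_shape, layers):
    """Order layers by pixel count (largest first) and form one binary map per
    layer except the last, which is implied by the remaining unassigned pixels.
    Entropy coding of each map (CABAC in the description) is omitted here."""
    order = sorted(range(len(layers)), key=lambda k: len(layers[k]), reverse=True)
    maps = []
    for k in order[:-1]:                       # the smallest layer needs no map
        m = np.zeros(block_shape, dtype=np.uint8).ravel()
        m[layers[k]] = 1                       # pixels of this layer -> "1", others -> "0"
        maps.append(m.reshape(block_shape))
    return order, maps

# Hypothetical usage with three layers given as flat pixel indices of a 4x4 block.
layers = [[7, 10, 11, 13, 14, 15],             # e.g. the low-depth region
          [2, 3, 5, 6, 8, 9, 12],              # e.g. the mid-depth region
          [0, 1, 4]]                           # e.g. the high-depth region
order, maps = build_binary_maps((4, 4), layers)
print(order)                                   # layers processed from largest to smallest
for m in maps:
    print(m)
```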
- the decoder will receive the final output data.
- The final output data is composed of three parts: an integer representing the number of layers in the depth block, from step B (progressive quantization and its stopping conditions); a series of bits containing the binary maps of all layers, from step C (data encapsulation); and a series of integers corresponding to the depth value of each layer, from step B.
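A compact sketch of this three-part payload is given below (Python). The record and field names are assumptions, and the naive byte serialization merely stands in for the entropy-coded bitstream a real encoder would emit:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DepthBlockPayload:
    """Three-part output for one depth block: an integer layer count,
    the binary maps of the explicitly coded layers, and one
    representative depth value per layer."""
    num_layers: int
    binary_maps: List[List[int]]   # one flattened 0/1 map per explicitly coded layer
    layer_depths: List[int]        # mean depth value attached to each layer

    def to_bytes(self) -> bytes:
        # Naive, uncompressed serialization; a real encoder would entropy-code the maps.
        out = bytearray([self.num_layers])
        for m in self.binary_maps:
            out.extend(m)
        out.extend(self.layer_depths)
        return bytes(out)

# Hypothetical payload for a 4x4 block with three layers (the last map is implicit).
payload = DepthBlockPayload(
    num_layers=3,
    binary_maps=[[0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0],
                 [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1]],
    layer_depths=[11, 25, 30],
)
print(len(payload.to_bytes()), "bytes before entropy coding")
```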
- the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by their combination.
- In practice, a microprocessor or a digital signal processor (DSP) can be used to implement the method for improving video resolution and quality, as well as the video encoder and the decoder of the display terminal, according to the embodiments of the present invention.
- the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
- Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals can be downloaded from Internet websites, or provided on carrier signals, or provided in any other form.
- Figure 3 shows a server, such as an application server, that can implement the invention.
- The server conventionally includes a processor 1010 and a computer program product or computer-readable medium in the form of a memory 1020.
- the memory 1020 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- The memory 1020 has a storage space 1030 for program code 1031 for executing any of the steps of the above method.
- the storage space 1030 for program codes may include various program codes 1031 for implementing various steps in the above method. These program codes can be read from or written into one or more computer program products.
- These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference to FIG. 4.
- the storage unit may have storage segments, storage spaces, etc. arranged similarly to the storage 1020 in the server of FIG. 3.
- The program code can, for example, be compressed in an appropriate form.
- The storage unit includes computer-readable code 1031', i.e. code that can be read by a processor such as the processor 1010; when run by a server, this code causes the server to perform the steps of the method described above.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention relates to a system and method for processing a depth map, and in particular a system and method for multi-layer representation of a depth map in intra-frame coding. In the present invention, depth map data are divided into blocks, progressive quantization and stopping conditions are set for the blocked depth map data, data encapsulation is performed on the multi-layer depth map for which the iteration has stopped, the bit stream obtained after encapsulation is output to a decoder, pixels in the same smooth region are reduced to simple representative values, and resources are invested in recording the pixel grouping to preserve the smoothness of the depth map. The quality target allows the reconstruction output of the method to better serve the overall quality and bit-rate control of a video encoder. One immediate application of the present invention is 3D video content compression for online video broadcasting. Another application is 3D video format conversion.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910481532.4A CN112040245B (zh) | 2019-06-04 | 2019-06-04 | 用于帧内编码深度图多层表示的系统和方法 |
CN201910481532.4 | 2019-06-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020244296A1 true WO2020244296A1 (fr) | 2020-12-10 |
Family
ID=73575890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/082464 WO2020244296A1 (fr) | 2019-06-04 | 2020-03-31 | Système et procédé de représentation multicouche d'une carte de profondeur pendant un codage intra-image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112040245B (fr) |
WO (1) | WO2020244296A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060164268A1 (en) * | 2005-01-21 | 2006-07-27 | Samsung Electronics Co., Ltd. | Method, medium, and apparatus controlling bit rate in image data encoding |
CN101330631A (zh) * | 2008-07-18 | 2008-12-24 | 浙江大学 | 一种立体电视系统中深度图像的编码方法 |
CN101835056A (zh) * | 2010-04-29 | 2010-09-15 | 西安电子科技大学 | 基于模型的纹理视频与深度图的最优码率分配方法 |
CN104010196A (zh) * | 2014-03-14 | 2014-08-27 | 北方工业大学 | 基于hevc的3d质量可伸缩视频编码 |
CN106327458A (zh) * | 2016-08-31 | 2017-01-11 | 上海交通大学 | 一种基于图像分层渲染的方法 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6348918B1 (en) * | 1998-03-20 | 2002-02-19 | Microsoft Corporation | Stereo reconstruction employing a layered approach |
CN106162178B (zh) * | 2010-04-13 | 2019-08-13 | 三星电子株式会社 | 执行去块滤波的对视频进行解码的设备 |
WO2013068547A2 (fr) * | 2011-11-11 | 2013-05-16 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Codage multi-vues efficace utilisant une estimée de carte de profondeur et une mise à jour |
US20140204088A1 (en) * | 2013-01-18 | 2014-07-24 | Microsoft Corporation | Surface codec using reprojection onto depth maps |
US9467681B2 (en) * | 2013-03-25 | 2016-10-11 | Microsoft Technology Licensing, Llc | Representation and compression of depth data |
CN103581647B (zh) * | 2013-09-29 | 2017-01-04 | 北京航空航天大学 | 一种基于彩色视频运动矢量的深度图序列分形编码方法 |
CN104202612B (zh) * | 2014-04-15 | 2018-11-02 | 清华大学深圳研究生院 | 基于四叉树约束的编码单元的划分方法及视频编码方法 |
CN105007494B (zh) * | 2015-07-20 | 2018-11-13 | 南京理工大学 | 一种3d视频深度图像的帧内楔形分割模式选择方法 |
CN106686383A (zh) * | 2017-01-17 | 2017-05-17 | 湖南优象科技有限公司 | 一种保留深度图边缘的深度图帧内编码方法 |
CN108734208B (zh) * | 2018-05-15 | 2020-12-25 | 重庆大学 | 基于多模态深度迁移学习机制的多源异构数据融合系统 |
-
2019
- 2019-06-04 CN CN201910481532.4A patent/CN112040245B/zh active Active
-
2020
- 2020-03-31 WO PCT/CN2020/082464 patent/WO2020244296A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112040245A (zh) | 2020-12-04 |
CN112040245B (zh) | 2023-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11140401B2 (en) | Coded-block-flag coding and derivation | |
KR102499355B1 (ko) | 손실 및 무손실 영상 압축을 위한 형상-적응형 모델-기반 코덱 | |
US12041243B2 (en) | Systems and methods for compressing video | |
US12022078B2 (en) | Picture processing method and apparatus | |
CN110896483A (zh) | 压缩和解压缩图像数据的方法 | |
US11263786B2 (en) | Decoding data arrays | |
US10997795B2 (en) | Method and apparatus for processing three dimensional object image using point cloud data | |
US20230370600A1 (en) | A method and apparatus for encoding and decoding one or more views of a scene | |
WO2020244296A1 (fr) | Système et procédé de représentation multicouche d'une carte de profondeur pendant un codage intra-image | |
Yoshida et al. | Two-layer lossless coding for high dynamic range images based on range compression and adaptive inverse tone-mapping | |
CN102685483B (zh) | 解码方法 | |
CN104168482B (zh) | 一种视频编解码方法及装置 | |
CN115209147A (zh) | 摄像头视频传输带宽优化方法、装置、设备及存储介质 | |
CN118575193A (zh) | 点云数据帧压缩 | |
CN115442617A (zh) | 一种基于视频编码的视频处理方法和装置 | |
WO2021035717A1 (fr) | Procédé et appareil de prédiction de chrominance intra-trame, dispositif, et système de codage et de décodage vidéo | |
CN102685485B (zh) | 编码方法以及装置、解码方法以及装置 | |
KR20150096353A (ko) | 이미지 인코딩 시스템, 디코딩 시스템 및 그 제공방법 | |
CN115336264A (zh) | 帧内预测方法、装置、编码器、解码器、及存储介质 | |
US20240364897A1 (en) | Systems and methods for compressing video | |
CN115988201B (zh) | 一种编码胶片颗粒的方法、装置、电子设备和存储介质 | |
US20240195990A1 (en) | Residual-free palatte mode coding | |
WO2023240662A1 (fr) | Procédé de codage, procédé de décodage, codeur, décodeur, et support de stockage | |
CN102685484B (zh) | 编码方法以及装置、解码方法以及装置 | |
KR102267206B1 (ko) | 이미지 압축을 위한 하이브리드 팔레트-dpcm 코딩 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20818683; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20818683; Country of ref document: EP; Kind code of ref document: A1 |