US20190356925A1 - Systems and methods for providing 3d look-up table coding for color gamut scalability - Google Patents
- Publication number: US20190356925A1
- Application number: US 16/530,679
- Authority
- US
- United States
- Prior art keywords
- color mapping
- current
- mapping table
- vertex
- prediction mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/40—Tree coding, e.g. quadtree, octree
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/64—Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/34—Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- LUT_precision_luma_minus1 may be provided and/or used.
- (LUT_precision_luma_minus1+1) may be a precision parameter used to code the difference between the LUT parameter to be coded and its prediction for the luma (Y) component.
- a vertex_prediction_mode may be provided and/or used where a 0 may refer to using the parent octant of the current 3D LUT to predict the current vertex and/or a 1 may refer to using the collocated octant of the existing global 3D LUT to predict the current vertex. If prediction_mode may not be equal to 2, the vertex_prediction_mode may be set to be equal to the prediction_mode.
- the LUT parameter of the chroma V component may be reconstructed at a decoder.
- the flag coded_flag[n] may be set (e.g., once the value of n may be calculated using getVertex(y, u, v, i)). The flag may be used subsequently to track that the vertex “n” may have been coded and/or to avoid coding the vertex “n” again.
- the base station 180 a may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102 a .
- the base stations 180 a , 180 b , and/or 180 c may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like.
- the ASN gateway 182 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 109 , and the like.
Abstract
Systems and methods for improving efficiency in three-dimensional (3D) look-up table (LUT) coding and/or reducing table size of a 3D LUT may be provided. For example, octants associated with the 3D LUT may be provided for color space segmentation and coding may be performed on an octree associated with the octants where coding may include encoding nodes of the octree associated with the octants and corresponding vertices of the 3D LUT belonging to the nodes. The 3D LUT may also be signaled (e.g., based on a sequence and/or picture level).
Description
- This application claims the benefit of U.S. Provisional Patent Application No. 61/880,715 filed on Jan. 7, 2013, the contents of which are hereby incorporated by reference herein.
- A three-dimensional (3D) look-up table (LUT) may be generated from a color grading process by a colorist or it may be estimated by an encoder (for example, using an original signal in one color space and the corresponding signal in another color space). The 3D LUT may need to be sent in a bitstream from the encoder to a decoder, such that the decoder may apply a color gamut conversion process (e.g., the same color gamut conversion process) during inter-layer processing.
- Signaling overhead of 3D LUT may be significant, because the dimension of the table may be large. For example, a sample bit-depth may be 8 bits and a unit octant size may be 16×16×16 (e.g., the color space may be partitioned into 16×16×16 octants) and, as such, there may be 17×17×17 entries in the 3D LUT table. Each entry of the 3D LUT may have 3 components. Thus, the total uncompressed table size may be 117,912 (17×17×17×3×8) bits, which may result in significant signaling overhead. With this amount of overhead, 3D LUT may have to be signaled at a sequence level, because, for example, individual pictures may not be able to afford such an overhead. At the sequence level, each of the pictures in a sequence may use the same 3D LUT, which may result in a sub-optimal color gamut conversion and/or may degrade enhancement layer coding efficiency. The colorist may (for artistic production reasons) change the color gamut from picture to picture or from scene to scene, and so picture-level signaling of the 3D LUT may be required for effective color gamut prediction.
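As a sanity check on the arithmetic in this paragraph, the uncompressed table size can be recomputed directly (a sketch using the example values from the text):

```python
# A quick check of the uncompressed table-size arithmetic in the text:
# 17 x 17 x 17 vertices, 3 color components per entry, 8 bits per component.
entries_per_dim = 16 + 1          # a 16x16x16 octant grid -> 17 vertices per axis
components = 3                    # Y, U, V outputs per entry
bit_depth = 8
total_bits = entries_per_dim ** 3 * components * bit_depth
print(total_bits)  # 117912 bits, matching the figure in the text
```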
- Systems and methods for improving efficiency in three-dimensional (3D) look-up table (LUT) coding and/or reducing table size (e.g., a size in bits of a coded representation) of a 3D LUT may be provided. For example, octants associated with the 3D LUT may be provided for color space segmentation and coding may be performed on an octree associated with the octants. One or more of the octants may be a non-uniform octant. Parameters of an octant may be lossy coded with reduced precision. The 3D LUT may comprise vertices. One or more of the octants may be coarser such that there may be a larger distance between the vertices that may be neighboring.
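The lossy, reduced-precision coding of octant parameters can be sketched as follows. This is an illustrative model only: the residual/step scheme and the function names are assumptions loosely modeled on the (LUT_precision_luma_minus1+1) precision parameter mentioned in the text, not the patent's actual syntax.

```python
# Hedged sketch of lossy octant-parameter coding with reduced precision.
# The quantization-by-step scheme is an assumption for illustration.

def encode_residual(value, prediction, precision_minus1):
    """Quantize the (value - prediction) difference by the precision step."""
    step = precision_minus1 + 1
    return round((value - prediction) / step)   # coarser step -> fewer bits

def decode_value(coded_res, prediction, precision_minus1):
    """Decoder-side reconstruction: prediction plus the scaled residual."""
    step = precision_minus1 + 1
    return prediction + coded_res * step

coded = encode_residual(133, 128, 3)   # step of 4: residual 5 codes as 1
print(decode_value(coded, 128, 3))     # 132, within one step of the true 133
```

With precision_minus1 equal to 0 the scheme degenerates to lossless residual coding, which matches the intuition that the precision parameter trades bits for reconstruction error.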
- The octants may further provide hierarchical tree structured 3D data that may be organized in the octree for the coding. At least one of the following may apply: an octree may comprise multiple layers, each node in the octree may represent one of the octants, or each node may be referenced from a root. Further, one or more of the octants may be split at one or more of the layers and/or at least one of the octants may be segmented into sub-octants. A vertex of the vertices in the 3D LUT may belong to and/or correspond to one or more of the nodes that may represent the octants at different layers in the octree. The coding on the octree may be performed, for example, by calling and executing a coding octant function recursively to encode the nodes and the vertices associated therewith in the 3D LUT in a layer-first traversal order. The 3D LUT may also be signaled (e.g., based on a sequence and/or picture level).
-
FIG. 1 illustrates a block diagram of a scalable video coding system with one or more layers such as N layers. -
FIG. 2 illustrates a temporal and/or inter-layer prediction for stereoscopic (e.g., 2-view) video coding using Multi-view Video Coding (MVC). -
FIG. 3 illustrates a color primary comparison between a BT.709 (HDTV) and a BT.2020 (UHDTV) in a CIE color definition. -
FIGS. 4A-4B illustrate a visual difference to an end user between a BT.709 color gamut and a P3 color gamut, respectively. -
FIG. 5 depicts a color gamut scalability (CGS) coding with picture level inter-layer prediction (ILP). -
FIG. 6 illustrates a 3D look-up table for an 8-bit YUV signal. -
FIG. 7 illustrates a tri-linear 3D LUT. -
FIG. 8 illustrates an octree for 3D LUT coding. -
FIGS. 9A-9B illustrate a global 3D LUT with two layers and a picture level 3D LUT with three layers, respectively, where the picture level 3D LUT with three layers may be predicted from the coarser global 3D LUT with two layers. -
FIG. 10A depicts a diagram of an example communications system in which one or more disclosed embodiments may be implemented. -
FIG. 10B depicts a system diagram of an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 10A. -
FIG. 10C depicts a system diagram of an example radio access network and an example core network that may be used within the communications system illustrated in FIG. 10A. -
FIG. 10D depicts a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 10A. -
FIG. 10E depicts a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 10A. - Video data today may be transmitted over a combination of wired networks and wireless networks, which may further complicate the underlying transmission channel characteristics. In such scenarios, the premise of scalable video coding may provide an attractive solution to improve the quality of experience for video applications running on devices with different capabilities over heterogeneous networks. For example, scalable video coding may encode a signal (e.g., once) at a highest representation such as a temporal resolution, spatial resolution, quality, and/or the like, but may enable decoding from subsets of the video streams depending on the specific rate and/or representation used by certain applications running on a specific client device. Bandwidth and storage may be saved compared to non-scalable solutions. International video standards such as MPEG-2 Video, H.263, MPEG-4 Visual, and/or H.264 may have tools and/or profiles that support some modes of scalability.
-
FIG. 1 illustrates a block diagram of a simple block-based hybrid scalable video encoding system. The spatial/temporal signal resolution represented by the layer 1 (base layer) may be generated by down-sampling of the input video signal. In a subsequent encoding stage, an appropriate setting of the quantizer (Q1) may lead to a certain quality level of the base information. To more efficiently encode subsequent higher layers, a base-layer reconstruction Y1, which may be an approximation of higher layer resolution levels, may be utilized in the encoding/decoding of the subsequent layers. The up-sampling unit may perform up-sampling of the base layer reconstruction signal to layer 2's resolution. Down-sampling and up-sampling may be performed throughout the layers (1, 2 . . . N) and/or the down-sampling and up-sampling ratios may be different depending on the dimension of the scalability between two given layers. In the system of FIG. 1, for a given higher layer n (2≤n≤N), a differential signal may be generated by subtracting an upsampled lower layer signal (e.g., layer n−1 signal) from the current layer n signal. The difference signal thus obtained may be encoded. If the video signals represented by two layers (e.g., n1 and n2) may have the same spatial resolution, the corresponding down-sampling and up-sampling operations may be by-passed. A given layer n (1≤n≤N) or a plurality of layers may be decoded without using decoded information from higher layers. However, relying on coding of a residual signal (i.e., a difference signal between two layers) for each of the layers except the base layer, as provided by the system in FIG. 1, may sometimes cause visual artifacts due to, for example, quantizing and/or normalizing the residual signal to restrict its dynamic range and/or additional quantization performed during coding of the residual. Some or all of the higher layer encoders may adopt motion estimation and motion compensated prediction as an encoding mode.
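The inter-layer differential signal described above can be sketched with 1-D signals and factor-2 nearest-neighbor resampling standing in for real video resampling filters (which the text does not specify):

```python
# Hedged sketch of the differential signal between layers in FIG. 1.
# Nearest-neighbor factor-2 resampling is an illustrative stand-in for
# the actual down-sampling/up-sampling filters.

def downsample(x):
    return x[::2]                        # layer n -> layer n-1 resolution

def upsample(x):
    out = []
    for s in x:                          # nearest-neighbor, factor of 2
        out.extend([s, s])
    return out

layer2 = [10, 12, 14, 16, 18, 20, 22, 24]     # higher-layer input signal
layer1 = downsample(layer2)                    # base-layer signal
residual = [a - b for a, b in zip(layer2, upsample(layer1))]
print(residual)  # the differential signal the enhancement layer encodes
```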
However, motion estimation and compensation in a residual signal may be different from conventional motion estimation and may be prone to visual artifacts. To minimize such visual artifacts, a sophisticated residual quantization, as well as joint quantization between, for example, quantizing and/or normalizing the residual signal and the additional quantization performed during coding of the residual, may be provided and/or used, thereby increasing system complexity. - Scalable Video Coding (SVC) may be an extension of H.264 that may enable the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a relatively high reconstruction quality given the rate of the partial bit streams. One design feature of SVC may be single loop decoding. Single loop decoding may refer to the fact that an SVC decoder may set up one motion compensation loop at the layer being decoded, and may not have to set up motion compensation loop(s) at other lower layer(s). For example, the bitstream may include 2 layers such as layer 1 (base layer) and layer 2 (enhancement layer). If the decoder wants to reconstruct layer 2 video, a decoded picture buffer and motion compensated prediction may be set up for layer 2, but not for layer 1 (e.g., the base layer that layer 2 may depend on). Thus, SVC may not need and/or use a reference picture from lower layers to be fully reconstructed, for example, thereby reducing computational complexity and memory requirement at the decoder. - Single loop decoding may be achieved by constrained inter-layer texture prediction where, for a current block in a given layer, spatial texture prediction from a lower layer may be permitted if the corresponding lower layer block may be coded in intra mode (this may also be called restricted intra prediction). For example, when the lower layer block may be coded in intra mode, it may be reconstructed without the need for motion compensation operations and a decoded picture buffer. To improve rate-distortion efficiency of an enhancement layer, SVC may use additional inter-layer prediction techniques such as motion vector prediction, residual prediction, mode prediction, and/or the like from lower layers. Although the single loop decoding feature of SVC may reduce the computational complexity and memory requirements at the decoder, it may increase implementation complexity by relying heavily on block-level inter-layer prediction methods to achieve satisfactory performance. Furthermore, to compensate for the performance penalty incurred by imposing the single loop decoding constraint, encoder design and computation complexity may be increased such that desired performance may be achieved. Coding of interlaced content may not be well supported by SVC, which may affect its adoption by the broadcasting industry. Consequently, complications in SVC encoder and decoder design and system implementation may cause limited SVC adoption in the marketplace.
- Multi-view Video Coding (MVC) may be another extension of H.264 that may provide view scalability. In view scalability, the base layer bitstream may be decoded to reconstruct a conventional 2D video, and additional enhancement layers may be decoded to reconstruct other view representations of the same video signal. When views may be combined together and displayed by a proper 3D display, the user may experience 3D video with proper depth perception.
FIG. 2 may provide an example prediction structure of using MVC to code a stereoscopic video with a left view (layer 1) and a right view (layer 2). The left view video in FIG. 2 may be coded with an IBBP prediction structure. The right view video may be coded with a PBBB prediction structure. In the right view, a first picture collocated with the first I picture in the left view may be coded as a P picture. The other pictures in the right view may be coded as B pictures with the first prediction coming from temporal references in the right view and the second prediction coming from an inter-layer reference in the left view. - Stereoscopic 3D TVs, which may use 3D glasses, may be used for enjoying 3D content (e.g., movies, live sports, and/or the like) at home. Unlike SVC, MVC may not support the single loop decoding feature. As shown in
FIG. 2, decoding of the right view (layer 2) video may need the pictures (e.g., all or the entire pictures) in the left view (layer 1) to be available such that motion compensation loops may be supported in both views/layers. However, MVC may have a design advantage in that it may include high-level syntax changes, and may not include block-level changes to H.264/AVC. This may lead to an easier implementation, as the underlying MVC encoder/decoder logics may remain the same, may be easily duplicated, and/or reference pictures at slice/picture level may need to be correctly configured to enable MVC. This, coupled with an explosion of 3D video content (e.g., primarily 3D movie production and 3D live sports broadcasting) in recent years, may enable or allow MVC to enjoy much wider commercial success compared to SVC. MVC may also support coding of more than two views by extending the example in FIG. 2 to perform inter-layer prediction across multiple views. - In 3D video coding, for example, MPEG Frame Compatible (MFC) coding may also be provided and/or used. For example, as described herein, 3D content may be stereoscopic 3D video that may include two views such as the left and the right view. Stereoscopic 3D content delivery may be achieved by packing/multiplexing the two views into one frame (hence the name, frame compatible) and/or compressing and transmitting the packed video with an existing standard such as H.264/AVC. At the receiver side, after decoding, the frames may be unpacked and displayed as two views. Such multiplexing of the views may be done in the temporal domain or spatial domain. The two views may be spatially downsampled by a factor of two and packed by various arrangements (e.g., when done in the spatial domain to maintain the same picture size). For example, a side-by-side arrangement may put the downsampled left view on the left half of the picture and the downsampled right view on the right half of the picture.
Other arrangements may include top-and-bottom, line-by-line, checkerboard, and/or the like. The specific arrangement that may be used to achieve frame compatible 3D video may be conveyed by frame packing arrangement supplemental enhancement information (SEI) messages. Although such an arrangement may achieve 3D delivery with minimal increase in bandwidth requirement (e.g., there may still be some increase since the packed frames may be more difficult to compress), spatial downsampling may cause aliasing in the views and reduce the visual quality and user experience of 3D video. Thus, MFC development may focus on providing a scalable extension to frame compatible (i.e., two views packed into the same frame) base layer video and/or providing one or more enhancement layers to recover the full-resolution views, for example, for an improved 3D experience. As such, though geared toward offering 3D video delivery, the primary underlying technology enabling full-resolution MFC may be related closely (e.g., more closely) to spatial scalability technologies.
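The side-by-side packing described above can be sketched as follows, with simple column decimation standing in for a proper downsampling filter:

```python
# Hedged sketch of side-by-side frame-compatible packing: each view is
# horizontally downsampled by 2 (simple decimation here; real systems use
# proper anti-aliasing filters) and placed in one half of the frame.

def pack_side_by_side(left, right):
    """Pack two equal-size views (lists of pixel rows) into one frame."""
    packed = []
    for lrow, rrow in zip(left, right):
        packed.append(lrow[::2] + rrow[::2])   # left half | right half
    return packed

left  = [[1, 1, 2, 2], [3, 3, 4, 4]]
right = [[5, 5, 6, 6], [7, 7, 8, 8]]
frame = pack_side_by_side(left, right)
print(frame)  # [[1, 2, 5, 6], [3, 4, 7, 8]] -- same size as one input view
```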
- Requirements and/or use cases for scalable enhancements of HEVC may be provided, produced, and/or used. Additionally, one or more targets may have been established, for example, for spatial scalability. Compared to using non-scalable coding, measured for higher resolution video, the targets of 25% bit rate reduction for 2× spatial scalability and 50% bit rate reduction for 1.5× spatial scalability may be achieved. To broaden the use cases for scalable HEVC, the so-called standards scalability may be used. Standards scalability may refer to the type of scalability when the base layer may be encoded with an earlier standard such as H.264/AVC, or even MPEG-2, while the one or more enhancement layers may be encoded using a more recent standard such as the HEVC standard. Standards scalability may be aimed at providing backward compatibility for legacy content that may already be encoded using previous standards and enhancing the quality of the legacy content with one or more enhancement layers encoded with upcoming standards like HEVC that may provide better coding efficiency.
- Another 3D scalable video coding technique, called 3D video coding or 3DV, may also be provided and/or used. 3DV's primary task may be to develop various flavors of view scalability targeted for autostereoscopic applications. Autostereoscopic displays and applications may allow or enable people to experience 3D without the cumbersome glasses. To achieve a suitable or good 3D experience without glasses, more than two views may be provided and/or used. Coding many views (e.g., such as 9 views or 10 views) may be expensive. Therefore, 3DV may provide and/or use a hybrid approach of coding a few views (e.g., 2 or 3 views) with relatively large disparity together with the depth maps that may provide depth information of the views. At the display side, the coded views and depth maps may be decoded, and the remaining views may be generated using the decoded views and their depth maps using view synthesis technologies. 3DV may consider various methods to code the views and the depth maps, for example, coding them using a combination of different standards such as H.264/AVC, MVC and HEVC including coding the base layer with one standard (e.g., H.264/AVC) and coding one or more enhancement layers with another standard (e.g., HEVC). 3DV may provide a menu of different options for applications to choose from.
- Table 1 summarizes different types of scalabilities discussed herein. At the bottom of Table 1, bit-depth scalability and chroma format scalability may be tied to video formats (e.g., higher than 8-bit video, and chroma sampling formats higher than YUV4:2:0) primarily used by professional video applications.
- With advanced display technologies, Ultra high definition TV (UHDTV) that may be specified in ITU BT.2020 may support larger resolution, larger bit-depth, higher frame-rate, and wider color gamut compared to the HDTV specification (BT.709). With such a technique, the user experience may be greatly improved due to the high fidelity quality that BT.2020 may provide. UHDTV may support up to 4K (3840×2160) and 8K (7680×4320) resolution, with the frame-rate being up to 120 Hz, and the bit-depth of picture samples being 10 bits or 12 bits. The color space of UHDTV may be defined by BT.2020.
FIG. 3 illustrates a comparison between BT.709 (HDTV) and BT.2020 (UHDTV) in a CIE color definition. The volume of colors rendered in BT.2020 may be broader than that in BT.709, which may mean more visible color information may be rendered using the UHDTV specification. -
TABLE 1. Different types of scalabilities

  Scalability                 Example                          Standards
  View scalability            2D→3D (2 or more views)          MVC, MFC, 3DV
  Spatial scalability         720p→1080p                       SVC, scalable HEVC
  Quality (SNR) scalability   35 dB→38 dB                      SVC, scalable HEVC
  Temporal scalability        30 fps→60 fps                    H.264/AVC, SVC, scalable HEVC
  Standards scalability       H.264/AVC→HEVC                   3DV, scalable HEVC
  Bit-depth scalability       8-bit video→10-bit video         Scalable HEVC*
  Chroma format scalability   YUV4:2:0→YUV4:2:2, YUV4:4:4      Scalable HEVC*
  Aspect ratio scalability    4:3→16:9                         Scalable HEVC*
  Color gamut scalability     BT.709 (HDTV)→BT.2020 (UHDTV)    Scalable HEVC*

- One type of scalability that may be provided and/or used may be color gamut scalability. Color gamut scalable (CGS) coding may be multi-layer coding where two or more layers may have different color gamuts and bit-depths. For example, as shown in Table 1, in a 2-layer scalable system, the base layer may be an HDTV color gamut as defined in BT.709 and the enhancement layer may be a UHDTV color gamut as defined in BT.2020. Another color gamut that may be used may be the P3 color gamut. The P3 color gamut may be used in digital cinema applications. The inter-layer process in CGS coding may use color gamut conversion methods to convert a base layer color gamut to an enhancement layer color gamut. After color gamut conversion may be applied, the inter-layer reference pictures generated may be used to predict the enhancement layer pictures, for example, with better or improved accuracy.
FIGS. 4A-4B depict an example of a visual difference to the end users between the BT.709 color gamut and the P3 color gamut, respectively. In FIGS. 4A-4B, the same content may be color graded twice using a different color gamut. For example, the content in FIG. 4A may be color graded in BT.709 and rendered/displayed on a BT.709 display, and the content in FIG. 4B may be color graded in P3 and rendered/displayed on a BT.709 display. As shown, there is a noticeable color difference between the two images. - If, for example,
FIG. 4A is coded in the base layer and FIG. 4B is coded in the enhancement layer, for example, using the CGS coding system in FIG. 5, additional inter-layer processing may be provided and/or used to improve the enhancement layer coding efficiency. Color gamut conversion methods may also be used in inter-layer processing for CGS. Through the use of color gamut conversion methods, the colors in BT.709 space may be translated into the P3 space and may be used to more effectively predict the enhancement layer signal in the P3 space. - The model parameters for color gamut conversion may be different for different content even when the BL color gamut and the EL color gamut may be fixed (e.g., BL may be in BT.709 and EL may be in BT.2020). These parameters may depend on the color grading process during post production in content generation, where the colorist(s) may apply different grading parameters to different spaces and to different content to reflect his or her or their artistic intent. Moreover, the input video for color grading may include high fidelity pictures. In a scalable coding system, coding of the BL pictures may introduce quantization noise. With coding structures such as the hierarchical prediction structure, the level of quantization may be adjusted per picture or per group of pictures. Therefore, the model parameters generated from color grading may not be sufficiently accurate for coding purposes. It may be more effective for the encoder to compensate for the coding noise by estimating the model parameters on the fly. The encoder may estimate these parameters per picture or per group of pictures. These model parameters, for example, generated during the color grading process and/or by the encoder, may be signaled to the decoder at the sequence and/or picture level so the decoder may perform the same color gamut conversion process during inter-layer prediction.
- There may be various color gamut conversion methods, such as linear or piece-wise linear. In the film industry, a 3D Look-up Table (3D LUT) may be used for color gamut conversion from one color gamut to another. Additionally, 3D LUT for CGS coding may be provided and/or used.
FIG. 5 depicts an example CGS coding scheme with picture level inter-layer prediction (ILP). The ILP includes color gamut conversion from base layer (BL) color gamut to enhancement layer (EL) color gamut, upsampling from BL spatial resolution to EL spatial resolution, and/or inverse tone mapping (e.g., conversion of sample bit depth) from BL sample bit-depth to EL sample bit-depth. - As described herein, 3D LUT may be used for a color gamut conversion. For example, (y, u, v) may be denoted as the sample triplet in the color gamut of the base layer, and (Y, U, V) as the triplet in EL color gamut. In 3D LUT, the range of BL color space may be segmented into equal octants as shown in
FIG. 6 . The input of the 3D LUT may be (y, u, v) in the BL color gamut and the output of the 3D LUT may be the mapped triplet (Y, U, V) in the EL color gamut. During a conversion process, if the input (y, u, v) overlaps with one of the vertices of the octants, the output (Y, U, V) may be derived by referencing one of the 3D LUT entries directly. Otherwise, if the input (y, u, v) lies inside an octant (e.g., but not on one of its vertices), trilinear interpolation as shown in FIG. 7 may be applied with its nearest 8 vertices. The trilinear interpolation may be carried out using one or more of the following equations:
-
Y=K×Σi,j,k∈{0,1} si(y)×sj(u)×sk(v)×LUT[yi][uj][vk].Y
U=K×Σi,j,k∈{0,1} si(y)×sj(u)×sk(v)×LUT[yi][uj][vk].U
V=K×Σi,j,k∈{0,1} si(y)×sj(u)×sk(v)×LUT[yi][uj][vk].V
with K=1/((y1−y0)×(u1−u0)×(v1−v0)),
- where (yi, uj, vk) may represent the vertices of the BL color gamut (i.e., inputs to the 3D LUT), LUT[yi][uj][vk] may represent the vertices of the EL color gamut (i.e., outputs of the 3D LUT at the entry (yi, uj, vk)), and s0(y)=y1−y, s1(y)=y−y0, s0(u)=u1−u, s1(u)=u−u0, s0(v)=v1−v, s1(v)=v−v0.
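As an illustration, the trilinear interpolation described above can be sketched in Python for a single output component; the function and dictionary layout here are hypothetical conveniences, not part of any signaled syntax:

```python
def trilinear(lut, y, u, v, y0, y1, u0, u1, v0, v1):
    """Interpolate one output component from the 8 surrounding 3D LUT
    vertices, using the s0/s1 weights defined above."""
    weights_y = (y1 - y, y - y0)   # (s0(y), s1(y))
    weights_u = (u1 - u, u - u0)   # (s0(u), s1(u))
    weights_v = (v1 - v, v - v0)   # (s0(v), s1(v))
    corners_y, corners_u, corners_v = (y0, y1), (u0, u1), (v0, v1)
    total = 0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                total += (weights_y[i] * weights_u[j] * weights_v[k]
                          * lut[corners_y[i]][corners_u[j]][corners_v[k]])
    return total / ((y1 - y0) * (u1 - u0) * (v1 - v0))

# A LUT that is linear in its inputs is reproduced exactly; a handy sanity check.
lut = {yv: {uv: {vv: yv + 2 * uv + 3 * vv for vv in (0, 16)}
            for uv in (0, 16)} for yv in (0, 16)}
print(trilinear(lut, 8, 4, 2, 0, 16, 0, 16, 0, 16))  # 22.0
```

In a full implementation the same interpolation would be applied once per output component (Y, U, V) of the mapped triplet.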
- As described herein, the 3D LUT may be generated from the color grading process by the colorists, or it may be estimated by the encoder, for example, using the original signal in one color space and the corresponding signal in another color space. The 3D LUT may be sent in the bitstream from the encoder to the decoder such that the decoder may apply the same color gamut conversion process during inter-layer processing. The signaling overhead of the 3D LUT may be significant, because the dimension of the table may be large. For example, as shown in
FIG. 6 , a sample bit-depth may be 8 bits. If the unit octant size is 16×16×16, there may be 17×17×17 entries in the 3D LUT table. Each entry of the 3D LUT may also include three components. Thus, the total uncompressed table size may be 117912 (17×17×17×3×8) bits. With this amount of overhead, the 3D LUT may be signaled (e.g., only) at the sequence level, because individual pictures may not afford such a large overhead. Pictures in the sequence may use the same 3D LUT, which may result in sub-optimal color gamut conversion and may degrade enhancement layer coding efficiency. - As such, systems and/or methods may be provided to improve 3D LUT coding. For example, the table size of the 3D LUT may be reduced if non-uniform octants are used for color space segmentation. For some color regions, the octant may be coarser (e.g., there may be a larger distance between neighboring vertices) to reduce the number of entries in the 3D LUT table.
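The overhead figure above can be reproduced with a few lines of Python (the variable names are illustrative):

```python
bit_depth = 8
octant_size = 16

# 256 sample values split into 16-wide octants give 16 intervals,
# hence 17 vertices per dimension.
vertices_per_dim = (1 << bit_depth) // octant_size + 1
entries = vertices_per_dim ** 3          # 17 x 17 x 17 vertices in the table
bits = entries * 3 * bit_depth           # 3 components (Y, U, V), 8 bits each
print(vertices_per_dim, entries, bits)   # 17 4913 117912
```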
- For this kind of hierarchical tree structured three dimensional data, an octree may be provided and/or used for efficient coding. As shown in
FIG. 8 , there may be three layers, and octants 3 and 6 at layer 1 may be split. Each node in an octree may represent one octant, and each node may be referenced from the root. For example, the octant 0 at layer 2 belonging to octant 3 at layer 1 may be referenced as "0-3-0" from the root node, where "-" may be used as a layer delimiter. An octant may be segmented into 8 sub-octants if it is split further. Leaf octants in the octree may be encoded; for the example in FIG. 8, these include the layer-2 nodes and the layer-1 nodes that are not split further (e.g., all except octants 3 and 6). One vertex in the 3D LUT may belong to multiple nodes at different layers. For example, vertex N in FIG. 8 may belong to nodes 0-0, 0-1, 0-2, 0-3, and 0-3-0. When coding the 3D LUT using an octree, this overlapping relationship may be considered to avoid unnecessary signaling of the vertices (e.g., each vertex may be coded once) and may be used to provide efficient 3D LUT coding as described herein. - Table 2 lists syntax elements for 3D LUT coding. The function coding_octant( ) in Table 2 may be recursively called to encode vertices in the 3D LUT, in a layer-first traversal order, as described, for example, in the context of Table 3. The functions u(n), ue(v) and se(v) may be defined as: u(n): unsigned integer using n bits; ue(v): unsigned integer, 0-th order Exp-Golomb coded; and/or se(v): signed integer, 0-th order Exp-Golomb coded.
-
TABLE 2 Syntax of the 3D LUT coding 3D_LUT ( ) { Descriptor num_layers_minus1 u(3) prediction_mode u(2) LUT_precision_luma_minus1 ue(v) LUT_precision_chroma_minus1 ue(v) coding_octant( 0, 0, 0, 0 ) } - num_layers_minus1: (num_layers_minus1+1) may be used to calculate the number of layers the octree has. num_layers_minus1 may be 2 for the octree example in
FIG. 8 . - The prediction_mode may include three possible prediction modes for the octree coding. When the prediction_mode may be 0, the parent octant from the current 3D LUT may be used as the prediction to code each of its child octants. The prediction value for each child octant/vertex may be generated from its parent octant with trilinear interpolation. This prediction mode may be further discussed in the context of Table 3. When the prediction_mode may be 1, the existing global 3D LUT (e.g., as defined herein below) may be used as prediction to code the current 3D LUT and the prediction for each vertex may be generated from the collocated vertex in the existing global 3D LUT (e.g., the collocated vertex may be interpolated, if not existing). When the prediction_mode may be 2, both the current 3D LUT and the existing global 3D LUT may be used as predictions and the prediction used for each octant/vertex coding may be signaled for each octant/vertex separately.
- LUT_precision_luma_minus1 may be provided and/or used. (LUT_precision_luma_minus1+1) may be a precision parameter used to code the difference between the LUT parameter to be coded and its prediction for the luma (Y) component.
- LUT_precision_chroma_minus1 may be provided and/or used. (LUT_precision_chroma_minus1+1) may be the precision parameter used to code the difference between the LUT parameter and its prediction for the chroma (U, V) components. The precision parameter may be different from that for the luma signal.
- As discussed herein, LUT_precision_luma_minus1 and LUT_precision_chroma_minus1 may be used for a LUT parameter decoding process. Smaller values of these precision parameters may make the 3D LUT more accurate and reduce the distortion of color gamut conversion. Additionally, smaller values may increase the number of coding bits. Therefore, the appropriate values of these precision parameters may be determined by a rate-distortion optimization (RDO) process.
- Table 3 lists example syntax elements for octree coding in layer-first traversal order. An example of the layer-first traversal order coding may be shown in
FIG. 8 . For example, the 3D LUT may be shown in two representations in FIG. 8. On the left side, the 3D LUT may be shown as a beginning octant being recursively partitioned into smaller octants, with each octant having 8 vertices. On the right side, the corresponding octree representation of the 3D LUT may be shown. Each node in the octree on the right may correspond to one octant (or equivalently, 8 vertices) on the left. To code each octant (or each octree node), the 8 representing vertices may be coded. This may be reflected by the "for" loop of "for (i=0; i<8; i++)" in Table 3. In FIG. 8, the beginning octant in layer 0 may be coded in the form of the 8 vertices labeled "px", followed by coding of the 8 octants in layer 1, each of which has 8 vertices of its own. Nineteen of these vertices in layer 1 may be unique (e.g., they may be labeled as "qx") and may need to be coded. This may be a difference between the syntax in Table 3 and other syntaxes. After the octants in layer 1 are coded, octants 3 and 6 at layer 1 may each be split into 8 child octants in layer 2. - As shown, the proposed signaling may use a flag to indicate when a given vertex has been coded and may avoid sending the vertex repeatedly in the situation where the vertex may be shared by more than one node in the octree. In the example of
FIG. 8 , this may reduce the number of vertices to be coded inlayer 1 from 64 (8×8) to 19. Additionally, when the prediction_mode is 2, the proposed method may signal the prediction method (e.g., a collocated vertex in an existing 3D LUT or the parent vertex of the current 3D LUT) for each vertex. -
TABLE 3 Syntax elements for coding_octant( ) coding_octant ( layer, y, u, v ) { Descriptor for( i = 0; i<8 ; i++ ) { n = getVertex(y, u, v, i) if (!coded_flag[n]) { if (prediction_mode == 2) vertex_prediction_mode u(1) nonzero_residual_flag u(1) if (nonzero_residual_flag) { deltaY se(v) deltaU se(v) deltaV se(v) } coded_flag[n] = true } } octant_split_flag u(1) if (octant_split_flag) { for( i = 0; i<8 ; i++ ) { coding_octant ( layer+1, y+dy[i], u+du[i], v+dv[i] ) } } } - As shown, a vertex_prediction_mode may be provided and/or used where a 0 may refer to using the parent octant of the current 3D LUT to predict the current vertex and/or a 1 may refer to using the collocated octant of the existing global 3D LUT to predict the current vertex. If prediction_mode is not equal to 2, the vertex_prediction_mode may be set equal to the prediction_mode.
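The benefit of the coded_flag[n] bookkeeping in Table 3 can be illustrated with a short Python count: the 8 child octants of a split octant nominally carry 64 vertices, but only 27 grid points are distinct, and only 19 of those are new once the 8 parent corners have been coded (an illustrative sketch, not signaled syntax):

```python
# Vertices of the 8 child octants of one split octant, on a 3x3x3 grid of
# corner coordinates (0, 1, 2 in units of half the parent octant size).
child_vertices = set()
for oy in (0, 1):                # child octant origin on each axis
    for ou in (0, 1):
        for ov in (0, 1):
            for cy in (0, 1):    # corner within the child octant
                for cu in (0, 1):
                    for cv in (0, 1):
                        child_vertices.add((oy + cy, ou + cu, ov + cv))

# Corners of the parent octant itself (the "px" vertices, already coded).
parent_vertices = {(y, u, v) for y in (0, 2) for u in (0, 2) for v in (0, 2)}

naive = 8 * 8                                 # 8 octants x 8 vertices each
unique = len(child_vertices)                  # shared vertices counted once
new = len(child_vertices - parent_vertices)   # excluding parent corners
print(naive, unique, new)  # 64 27 19
```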
- A parent octant in the current 3D LUT may be used to predict the current vertex (e.g., when the vertex prediction mode may be set to 0). As previously explained, since layer-first traversal order may be used, the parent octant at layer l may be coded before the child octant at layer (l+1). In the example of
FIG. 8 , the 8 vertices px at layer 0 may be coded first. When coding one of the 19 vertices qx, a predictor may be formed first using trilinear interpolation from the 8 vertices px at layer 0. Instead of coding the vertex qx directly, the difference between qx and its predictor may be coded to reduce bit overhead.
- A deltaY or the delta of luma component may be encoded. The deltaY may be calculated as follows:
-
deltaY=(Y−prediction_Y+((LUT_precision_luma_minus1+1)>>1))/(LUT_precision_luma_minus1+1). (1) - The LUT parameter of the luma component may be reconstructed at a decoder as
-
Y=prediction_Y+deltaY×(LUT_precision_luma_minus1+1). (2) - If (LUT_precision_luma_minus1+1) is a power of 2, the division in Equation (1) may be substituted by right shifting log2(LUT_precision_luma_minus1+1) bits and/or the multiplication in Equation (2) may be substituted by left shifting log2(LUT_precision_luma_minus1+1) bits. Since a left/right shift may be easy to implement in hardware, a color gamut scalable coding system may find it beneficial to enforce (LUT_precision_luma_minus1+1) to be a power of 2. In such a case, the syntax element in Table 2 may be changed to represent the value of log2(LUT_precision_luma_minus1+1) instead.
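Equations (1) and (2) can be sketched in Python as follows (an illustrative example; the function names are not signaled syntax, and the integer division assumes a non-negative rounded numerator as in Equation (1)):

```python
def encode_delta(value, prediction, precision):
    """Quantize the residual with rounding, per Equation (1);
    precision corresponds to (LUT_precision_luma_minus1 + 1)."""
    return (value - prediction + (precision >> 1)) // precision

def decode_value(prediction, delta, precision):
    """Reconstruct the LUT parameter at the decoder, per Equation (2)."""
    return prediction + delta * precision

# With a precision of 4, a residual of 7 is coded as 2, and the value is
# reconstructed to within half a precision step of the original.
delta = encode_delta(103, 96, 4)
print(delta, decode_value(96, delta, 4))  # 2 104
```

Since 4 is a power of 2, the division and multiplication above could equivalently be implemented as a right shift and a left shift by log2(4) = 2 bits, as noted for the luma component.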
- A deltaU or the delta of the chroma U component may be encoded and, for example, calculated as follows:
-
deltaU=(U−prediction_U+((LUT_precision_chroma_minus1+1)>>1))/(LUT_precision_chroma_minus1+1). (3) - The LUT parameter of the chroma U component may be reconstructed at decoder as
-
U=prediction_U+deltaU×(LUT_precision_chroma_minus1+1). (4) - A deltaV or the delta of the chroma V component may be encoded. The deltaV may be calculated as follows:
-
deltaV=(V−prediction_V+((LUT_precision_chroma_minus1+1)>>1))/(LUT_precision_chroma_minus1+1). (5) - The LUT parameter of the chroma V component may be reconstructed at a decoder as
-
V=prediction_V+deltaV×(LUT_precision_chroma_minus1+1). (6) - Similar to the luma component, a color gamut scalable coding system may find it beneficial to enforce (LUT_precision_chroma_minus1+1) to be a power of 2 such that left/right shifts may be used in place of multiplication and division.
- If (LUT_precision_luma_minus1+1) or (LUT_precision_chroma_minus1+1) is not a power of 2, the division may be approximated by a combination of multiplication and shifting instead of applying the division directly, because dividers may be costly to implement in hardware implementations such as ASICs. For example, Equations (1), (3), and (5) may be implemented as Equations (7), (8), and (9):
-
deltaY=((Y−prediction_Y)*LUT_precision_luma_scale+(1<<(LUT_precision_luma_shift−1)))>>LUT_precision_luma_shift (7) -
deltaU=((U−prediction_U)*LUT_precision_chroma_scale+(1<<(LUT_precision_chroma_shift−1)))>>LUT_precision_chroma_shift (8) -
deltaV=((V−prediction_V)*LUT_precision_chroma_scale+(1<<(LUT_precision_chroma_shift−1)))>>LUT_precision_chroma_shift (9) - where (LUT_precision_luma_minus1+1) and (LUT_precision_chroma_minus1+1) may be calculated as:
-
(LUT_precision_luma_minus1+1)=(1<<LUT_precision_luma_shift)/LUT_precision_luma_scale (10) -
(LUT_precision_chroma_minus1+1)=(1<<LUT_precision_chroma_shift)/LUT_precision_chroma_scale. (11) - When LUT_precision_luma_minus1 is 0, deltaY may be calculated as the unquantized residual (i.e., Equation (1) with a precision of 1); likewise, deltaU and deltaV may be calculated as the unquantized residuals when LUT_precision_chroma_minus1 is 0.
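The multiply-and-shift substitution of Equations (7)-(11) can be sketched as follows (illustrative; the particular scale/shift pair is an assumed example, not mandated by the syntax):

```python
def approx_delta(residual, scale, shift):
    """Approximate residual / precision with a multiply and a right shift,
    per Equations (7)-(9); residual stands for (value - prediction), and
    the implied precision is (1 << shift) / scale, per Equations (10)-(11)."""
    return (residual * scale + (1 << (shift - 1))) >> shift

# Example: a precision of about 3 can be realized with shift = 8 and
# scale = 85, since (1 << 8) / 85 is roughly 3.01.
print(approx_delta(30, 85, 8))  # 10, matching round(30 / 3)
```

The single multiply plus shift replaces a true divider, which is the hardware saving motivating Equations (7)-(9).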
- Further, prediction_Y, prediction_U and prediction_V may be the predictions of one or more LUT parameters. They may be derived according to prediction_mode. If prediction_mode is 0, the prediction may be a trilinear interpolation from the upper layer of the current vertex. For example, if the current encoding vertex is vertex V in
FIG. 9B , its prediction may be trilinear interpolated from the 8 vertices of octant 3 at layer 1. If prediction_mode is 1, then the prediction may be equal to the collocated vertex in the global LUT. - An octant_split_flag may be provided and/or used where, for example, a 1 may indicate that the current octant may be split further and 8 child octants will be coded. An octant_split_flag equal to 0 may indicate that the current octant may be a leaf octant.
- The values of dy[i], du[i] and dv[i] in Table 3 may be defined in Table 4.
-
TABLE 4 Definition of dy, du, and dv i dy[i] du[i] dv[i] 0 0 0 0 1 0 0 (max_value_v+1)>>(1+layer) 2 0 (max_value_u+1)>>(1+layer) 0 3 0 (max_value_u+1)>>(1+layer) (max_value_v+1)>>(1+layer) 4 (max_value_y+1)>>(1+layer) 0 0 5 (max_value_y+1)>>(1+layer) 0 (max_value_v+1)>>(1+layer) 6 (max_value_y+1)>>(1+layer) (max_value_u+1)>>(1+layer) 0 7 (max_value_y+1)>>(1+layer) (max_value_u+1)>>(1+layer) (max_value_v+1)>>(1+layer) - The new function getVertex(y, u, v, i) may be used to derive the vertex index of the octant whose first vertex may be located at (y, u, v). The function getVertex (y, u, v, i) may be calculated using the following pseudo code:
-
getVertex (y, u, v, i) { get dy[i], du[i], dv[i] using Table 4 depending on the value of i octant_len_y = (max_value_y+1)>>num_layers_minus1 octant_len_u = (max_value_u+1)>>num_layers_minus1 octant_len_v = (max_value_v+1)>>num_layers_minus1 size_in_vertices = 1+ (1<<num_layers_minus1) n = ((y+(dy[i]<<1))/octant_len_y)*size_in_vertices*size_in_vertices + ((u+(du[i]<<1))/octant_len_u)*size_in_vertices + ((v+(dv[i]<<1))/octant_len_v) } - The values of octant_len_y, octant_len_u and octant_len_v may represent the length of the smallest octant for luma, color component u and color component v, respectively. The value of size_in_vertices may be the maximal number of vertices for each dimension.
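A runnable Python transcription of the pseudo code above may look like the following (a sketch: the layer is passed explicitly here because the Table 4 offsets depend on it, and the mapping of i to the per-axis offsets follows Table 4 with bit 2 of i selecting y, bit 1 selecting u, and bit 0 selecting v):

```python
def get_vertex(y, u, v, i, layer, max_y, max_u, max_v, num_layers_minus1):
    """Derive the global index of corner i of the octant at the given layer
    whose first vertex is located at (y, u, v)."""
    # Table 4: corner i offsets by half the octant size on each split axis.
    dy = ((max_y + 1) >> (1 + layer)) if i & 4 else 0
    du = ((max_u + 1) >> (1 + layer)) if i & 2 else 0
    dv = ((max_v + 1) >> (1 + layer)) if i & 1 else 0
    # Length of the smallest (leaf-layer) octant per dimension.
    octant_len_y = (max_y + 1) >> num_layers_minus1
    octant_len_u = (max_u + 1) >> num_layers_minus1
    octant_len_v = (max_v + 1) >> num_layers_minus1
    size_in_vertices = 1 + (1 << num_layers_minus1)
    return (((y + (dy << 1)) // octant_len_y) * size_in_vertices * size_in_vertices
            + ((u + (du << 1)) // octant_len_u) * size_in_vertices
            + ((v + (dv << 1)) // octant_len_v))

# For 8-bit components and num_layers_minus1 = 1, the vertex grid is 3x3x3,
# so indices run from 0 to 26.
print(get_vertex(0, 0, 0, 0, 0, 255, 255, 255, 1))        # 0  (origin corner)
print(get_vertex(0, 0, 0, 7, 0, 255, 255, 255, 1))        # 26 (far corner)
print(get_vertex(128, 128, 128, 0, 1, 255, 255, 255, 1))  # 13 (grid center)
```

Note that (dy << 1) recovers the full octant length at the current layer, since the Table 4 offsets are half the octant size.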
- The flag coded_flag[n] may be set (e.g., once the value of n may be calculated using getVertex(y, u, v, i)). The flag may be used subsequently to track that the vertex “n” may have been coded and/or to avoid coding the vertex “n” again.
- Sequence and picture level 3D LUT signaling may also be provided and/or used. For example, for 3D LUT signaling, the system may signal 3D LUT based on sequence level analysis. This may be a global or sequence level 3D LUT. Such sequence level 3D LUT may be conveyed in high level parameter sets such as Video Parameter Set (VPS), Sequence Parameter Set (SPS) or Picture Parameter Set (PPS). Pictures in the video sequence may use the global 3D LUT.
- Table 5 shows an exemplary syntax table that signals the global 3D LUT in the PPS. In addition to using a 3D LUT, the syntax in Table 5 may support using other types of color gamut conversion methods such as linear mapping, piecewise linear mapping, and/or the like. The specific color gamut conversion method may be coded into the PPS using ue(v) coding, which may enable additional color gamut conversion methods to be supported. Additionally, Table 5 includes a flag (color_gamut_scalability_flag) to indicate whether syntax elements to support color gamut scalability may be included. Although whether CGS may be supported between two specific layers or not may be signaled in the VPS (e.g., using a scalability_mask), it may be desirable to have parsing independency between the VPS and the PPS. Therefore, an additional 1-bit flag in the PPS may be included as in Table 5 to indicate CGS.
-
TABLE 5 Signaling global 3D LUT (using PPS as example) picture_param_set( ) { Descriptor color_gamut_scalability_flag u(1) if (color_gamut_scalability_flag) { color_gamut_conversion_method ue(v) if (color_gamut_conversion_method == LINEAR) Signal linear mapping information else if (color_gamut_conversion_method == PIECEWISE_LINEAR ) Signal piecewise linear mapping information else if (color_gamut_conversion_method == 3D_LUT) 3D_LUT ( 0, 0, 0, 0 ) else if (color_gamut_conversion_method == POLYNOMIAL ) Signal polynomial model order and coefficients . . . }
- A color_gamut_conversion_method may also be used to indicate the specific method used to perform color gamut conversion between two layers. The set of color gamut conversion methods may include linear mapping, piecewise linear, 3D LUT, polynomial model, etc. When the color_gamut_conversion_method may be set to 3D LUT, the 3D LUT( ) signaling as defined in Table 2 may be used.
- For one or more pictures in the video sequence, the encoder may decide to signal picture level 3D LUT in the bitstream (e.g., to further improve scalable coding efficiency). This updated 3D LUT information may be signaled inside the slice segment header along with the coded slice data and/or it may be signaled in a separate NAL unit such as Adaption Parameter Set (APS). To distinguish from the global 3D LUT, the latter case may be the picture level 3D LUT.
-
TABLE 6 Signaling picture level 3D LUT (using slice segment header as example) slice_segment_header( ) { Descriptor . . . else if (color_gamut_conversion_method == 3D_LUT ) { 3D_LUT_present_flag u(1) if (3D_LUT_present_flag) 3D_LUT ( 0, 0, 0, 0 ) . . . }
- The picture level 3D LUT may be coded by predicting from parent vertices in the current 3D LUT, it may be coded using the global 3D LUT as its prediction, and/or it may use both as predictions and signal the prediction mode at the vertex level. This may be achieved by setting the value of prediction_mode in Table 2 to the appropriate value. A picture level 3D LUT may have a different number of layers compared to the global 3D LUT. For example, it may have more layers, making the picture level 3D LUT more accurate, which may in turn improve the color gamut conversion process and therefore the enhancement layer coding efficiency. If the picture level 3D LUT has more layers than the sequence level 3D LUT, it may be beneficial to code it using the sequence level 3D LUT as prediction. In this case, collocated vertices that may not already exist in the global 3D LUT may be derived by trilinear interpolation from their neighboring vertices that already exist in the global 3D LUT.
FIGS. 9A-9B show such an example, where the picture level 3D LUT (FIG. 9B ) may have 3 layers and the global 3D LUT (FIG. 9A ) may have 2 layers and may be used as a prediction. When the vertex V at layer 2 in the picture level 3D LUT is coded, the prediction may be trilinear interpolated from the neighboring vertices P0, P1, . . . P7 at layer 1 in the global 3D LUT.
FIG. 10A depicts a diagram of anexample communications system 100 in which one or more disclosed embodiments may be implemented and/or may be used. Thecommunications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. Thecommunications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, thecommunications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like. - As shown in
FIG. 10A , thecommunications system 100 may include wireless transmit/receive units (WTRUs) 102 a, 102 b, 102 c, and/or 102 d (which generally or collectively may be referred to as WTRU 102), a radio access network (RAN) 103/104/105, acore network 106/107/109, a public switched telephone network (PSTN) 108, theInternet 110, andother networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of theWTRUs WTRUs - The
communications systems 100 may also include abase station 114 a and abase station 114 b. Each of thebase stations WTRUs core network 106/107/109, theInternet 110, and/or thenetworks 112. By way of example, thebase stations 114 a and/or 114 b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While thebase stations base stations - The
base station 114 a may be part of theRAN 103/104/105, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. Thebase station 114 a and/or thebase station 114 b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with thebase station 114 a may be divided into three sectors. Thus, in one embodiment, thebase station 114 a may include three transceivers, i.e., one for each sector of the cell. In another embodiment, thebase station 114 a may employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell. - The
base stations 114 a and/or 114 b may communicate with one or more of theWTRUs air interface 115/116/117, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). Theair interface 115/116/117 may be established using any suitable radio access technology (RAT). - More specifically, as noted above, the
communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, thebase station 114 a in theRAN 103/104/105 and theWTRUs air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA). - In another embodiment, the
base station 114 a and theWTRUs air interface 115/116/117 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A). - In other embodiments, the
base station 114 a and theWTRUs - The
base station 114 b inFIG. 10A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like. In one embodiment, thebase station 114 b and theWTRUs base station 114 b and theWTRUs base station 114 b and theWTRUs FIG. 10A , thebase station 114 b may have a direct connection to theInternet 110. Thus, thebase station 114 b may not be required to access theInternet 110 via thecore network 106/107/109. - The
RAN 103/104/105 may be in communication with thecore network 106/107/109, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of theWTRUs core network 106/107/109 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown inFIG. 10A , it will be appreciated that theRAN 103/104/105 and/or thecore network 106/107/109 may be in direct or indirect communication with other RANs that employ the same RAT as theRAN 103/104/105 or a different RAT. For example, in addition to being connected to theRAN 103/104/105, which may be utilizing an E-UTRA radio technology, thecore network 106/107/109 may also be in communication with another RAN (not shown) employing a GSM radio technology. - The
core network 106/107/109 may also serve as a gateway for theWTRUs PSTN 108, theInternet 110, and/orother networks 112. ThePSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). TheInternet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. Thenetworks 112 may include wired or wireless communications networks owned and/or operated by other service providers. For example, thenetworks 112 may include another core network connected to one or more RANs, which may employ the same RAT as theRAN 103/104/105 or a different RAT. - Some or all of the
WTRUs communications system 100 may include multi-mode capabilities, i.e., theWTRUs WTRU 102 c shown inFIG. 10A may be configured to communicate with thebase station 114 a, which may employ a cellular-based radio technology, and with thebase station 114 b, which may employ an IEEE 802 radio technology. -
FIG. 10B depicts a system diagram of anexample WTRU 102. As shown inFIG. 10B , theWTRU 102 may include aprocessor 118, atransceiver 120, a transmit/receiveelement 122, a speaker/microphone 124, akeypad 126, a display/touchpad 128,non-removable memory 130,removable memory 132, apower source 134, a global positioning system (GPS)chipset 136, andother peripherals 138. It will be appreciated that theWTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment. Also, embodiments contemplate that thebase stations base stations FIG. 10B and described herein. - The
processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 10B depicts the processor 118 and the transceiver 120 as separate components, it may be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., thebase station 114 a) over theair interface 115/116/117. For example, in one embodiment, the transmit/receiveelement 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receiveelement 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receiveelement 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receiveelement 122 may be configured to transmit and/or receive any combination of wireless signals. - In addition, although the transmit/receive
element 122 is depicted inFIG. 10B as a single element, theWTRU 102 may include any number of transmit/receiveelements 122. More specifically, theWTRU 102 may employ MIMO technology. Thus, in one embodiment, theWTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over theair interface 115/116/117. - The
transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receiveelement 122 and to demodulate the signals that are received by the transmit/receiveelement 122. As noted above, theWTRU 102 may have multi-mode capabilities. Thus, thetransceiver 120 may include multiple transceivers for enabling theWTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example. - The
processor 118 of theWTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, thekeypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). Theprocessor 118 may also output user data to the speaker/microphone 124, thekeypad 126, and/or the display/touchpad 128. In addition, theprocessor 118 may access information from, and store data in, any type of suitable memory, such as thenon-removable memory 130 and/or theremovable memory 132. Thenon-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. Theremovable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, theprocessor 118 may access information from, and store data in, memory that is not physically located on theWTRU 102, such as on a server or a home computer (not shown). - The
processor 118 may receive power from thepower source 134, and may be configured to distribute and/or control the power to the other components in theWTRU 102. Thepower source 134 may be any suitable device for powering theWTRU 102. For example, thepower source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NIC), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like. - The
processor 118 may also be coupled to theGPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of theWTRU 102. In addition to, or in lieu of, the information from theGPS chipset 136, theWTRU 102 may receive location information over theair interface 115/116/117 from a base station (e.g.,base stations WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment. - The
processor 118 may further be coupled toother peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, theperipherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like. -
FIG. 10C depicts a system diagram of the RAN 103 and the core network 106 according to an embodiment. As noted above, the RAN 103 may employ a UTRA radio technology to communicate with the WTRUs over the air interface 115. The RAN 103 may also be in communication with the core network 106. As shown in FIG. 10C, the RAN 103 may include Node-Bs, which may each include one or more transceivers for communicating with the WTRUs over the air interface 115. The Node-Bs may each be associated with a particular cell (not shown) within the RAN 103. The RAN 103 may also include RNCs 142 a and/or 142 b. It will be appreciated that the RAN 103 may include any number of Node-Bs and RNCs while remaining consistent with an embodiment. - As shown in
FIG. 10C, the Node-Bs 140 a and/or 140 b may be in communication with the RNC 142 a. Additionally, the Node-B 140 c may be in communication with the RNC 142 b. The Node-Bs may communicate with the respective RNCs via an Iub interface, and the RNCs may be in communication with one another via an Iur interface. Each of the RNCs may be configured to control the respective Node-Bs to which it is connected and to carry out or support other functionality, such as outer-loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, and data encryption. - The
core network 106 shown in FIG. 10C may include a media gateway (MGW) 144, a mobile switching center (MSC) 146, a serving GPRS support node (SGSN) 148, and/or a gateway GPRS support node (GGSN) 150. While each of the foregoing elements is depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. - The
RNC 142 a in the RAN 103 may be connected to the MSC 146 in the core network 106 via an IuCS interface. The MSC 146 may be connected to the MGW 144. The MSC 146 and the MGW 144 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs and traditional land-line communications devices. - The
RNC 142 a in the RAN 103 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface. The SGSN 148 may be connected to the GGSN 150. The SGSN 148 and the GGSN 150 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs and IP-enabled devices. - As noted above, the
core network 106 may also be connected to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers. -
FIG. 10D depicts a system diagram of the RAN 104 and the core network 107 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs over the air interface 116. The RAN 104 may also be in communication with the core network 107. - The
RAN 104 may include eNode-Bs, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs may each include one or more transceivers for communicating with the WTRUs over the air interface 116. In one embodiment, the eNode-Bs may implement MIMO technology. The eNode-B 160 a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102 a. - Each of the eNode-Bs may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 10D, the eNode-Bs may communicate with one another over an X2 interface. - The
core network 107 shown in FIG. 10D may include a mobility management entity (MME) 162, a serving gateway 164, and a packet data network (PDN) gateway 166. While each of the foregoing elements is depicted as part of the core network 107, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. - The
MME 162 may be connected to each of the eNode-Bs in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs, and the like. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA. - The serving
gateway 164 may be connected to each of the eNode-Bs in the RAN 104 via the S1 interface. The serving gateway 164 may generally route and forward user data packets to/from the WTRUs. The serving gateway 164 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs, managing and storing contexts of the WTRUs, and the like. - The serving
gateway 164 may also be connected to the PDN gateway 166, which may provide the WTRUs 102 a, 102 b, and/or 102 c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs and IP-enabled devices. - The
core network 107 may facilitate communications with other networks. For example, the core network 107 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs and traditional land-line communications devices. For example, the core network 107 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 107 and the PSTN 108. In addition, the core network 107 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers. -
FIG. 10E depicts a system diagram of the RAN 105 and the core network 109 according to an embodiment. The RAN 105 may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs over the air interface 117. As will be further discussed below, the communication links between the different functional entities of the WTRUs, the RAN 105, and the core network 109 may be defined as reference points. - As shown in
FIG. 10E, the RAN 105 may include base stations and an ASN gateway 182, though it will be appreciated that the RAN 105 may include any number of base stations and ASN gateways while remaining consistent with an embodiment. The base stations may each be associated with a particular cell (not shown) in the RAN 105 and may each include one or more transceivers for communicating with the WTRUs over the air interface 117. In one embodiment, the base stations may implement MIMO technology. The base station 180 a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102 a. The base stations may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, and quality of service (QoS) policy enforcement. The ASN gateway 182 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 109, and the like. - The
air interface 117 between the WTRUs and the RAN 105 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs may establish a logical interface (not shown) with the core network 109. The logical interface between the WTRUs and the core network 109 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management. - The communication link between each of the
base stations may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations and the ASN gateway 182 may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs. - As shown in
FIG. 10E, the RAN 105 may be connected to the core network 109. The communication link between the RAN 105 and the core network 109 may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example. The core network 109 may include a mobile IP home agent (MIP-HA) 184, an authentication, authorization, accounting (AAA) server 186, and a gateway 188. While each of the foregoing elements is depicted as part of the core network 109, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. - The MIP-HA may be responsible for IP address management, and may enable the WTRUs 102 a, 102 b, and/or 102 c to roam between different ASNs and/or different core networks. The MIP-HA 184 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs and IP-enabled devices. The AAA server 186 may be responsible for user authentication and for supporting user services. The gateway 188 may facilitate interworking with other networks. For example, the gateway 188 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs and traditional land-line communications devices. In addition, the gateway 188 may provide the WTRUs 102 a, 102 b, and/or 102 c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers. - Although not shown in
FIG. 10E, it will be appreciated that the RAN 105 may be connected to other ASNs and the core network 109 may be connected to other core networks. The communication link between the RAN 105 and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs between the RAN 105 and the other ASNs. The communication link between the core network 109 and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks. - Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read-only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Claims (21)
1-23. (canceled)
24. A method comprising:
receiving, for a current picture in a video sequence, a color mapping table prediction mode indication for predicting a current color mapping table for the current picture, wherein the current color mapping table is partitioned into a plurality of segments, a segment corresponding to a portion of a color space, the current color mapping table having a plurality of color mapping coefficient parameters that are associated with various vertices of the plurality of segments;
determining, based on the color mapping table prediction mode indication associated with the current picture, a color mapping predictor for predicting at least one vertex in the current color mapping table; and
reconstructing at least one color mapping coefficient parameter of the current color mapping table for the current picture based on the determined color mapping predictor.
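By way of illustration only, the partitioned color mapping table recited in claim 24 can be sketched as a small data structure: the table covers the full color space, each segment covers an octant of that space, and color mapping coefficient parameters live at segment vertices. All class and field names below are hypothetical, not the patent's normative syntax.

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    # One color mapping coefficient parameter per output component (Y, U, V).
    coeffs: tuple

@dataclass
class Segment:
    min_corner: tuple                              # lower (Y, U, V) corner of the octant
    size: int                                      # side length of the octant
    vertices: dict = field(default_factory=dict)   # corner offset -> Vertex
    children: list = field(default_factory=list)   # eight sub-segments if split

def split(segment):
    """Partition a segment into its eight child octants."""
    half = segment.size // 2
    for dy in (0, 1):
        for du in (0, 1):
            for dv in (0, 1):
                corner = (segment.min_corner[0] + dy * half,
                          segment.min_corner[1] + du * half,
                          segment.min_corner[2] + dv * half)
                segment.children.append(Segment(corner, half))

root = Segment((0, 0, 0), 256)   # the table's full color space as one segment
split(root)
print(len(root.children))        # 8
```

Splitting the root once yields the eight first-level octants; a finer table repeats the same split on selected children.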
25. The method of claim 24, further comprising:
receiving a global color mapping table associated with a plurality of pictures in the video sequence;
on a condition that the color mapping table prediction mode indication indicates that the global color mapping table is to be used as the color mapping predictor, determining the global color mapping table as the color mapping predictor for predicting the current color mapping table; and
predicting the plurality of color mapping coefficient parameters of the current color mapping table based on a plurality of corresponding color mapping coefficient parameters in the global color mapping table.
26. The method of claim 25, wherein the global color mapping table associated with the plurality of pictures is received in a picture parameter set.
27. The method of claim 24, wherein the current color mapping table is partially reconstructed, and the method further comprises:
on a condition that the color mapping table prediction mode indication indicates that a reconstructed parent segment of a current segment in the current color mapping table is to be used as the color mapping predictor, determining the parent segment of the current segment as the color mapping predictor for predicting the current segment; and
predicting a plurality of color mapping coefficient parameters associated with the current segment of the current color mapping table based on a plurality of vertices associated with the parent segment.
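A minimal sketch of the parent-segment prediction of claim 27, assuming a dyadic octree split: every vertex of a child segment then sits at a parent corner, an edge midpoint, a face center, or the cube center, so averaging the nearest parent-corner parameters (equivalently, trilinear interpolation at those dyadic positions) yields the predictor. The function and argument names are illustrative.

```python
def parent_predictor(parent_corners, child_vertex, parent_min, parent_size):
    """Average the parent-corner parameters nearest to child_vertex.

    parent_corners: dict mapping (i, j, k) in {0, 1}^3 to a parameter value.
    child_vertex:   absolute (Y, U, V) position of the child-segment vertex,
                    assumed to lie on the parent's dyadic half-grid.
    """
    half = parent_size / 2
    contributing = []
    for (i, j, k), value in parent_corners.items():
        corner = (parent_min[0] + i * parent_size,
                  parent_min[1] + j * parent_size,
                  parent_min[2] + k * parent_size)
        # A corner contributes when the child vertex is at the corner itself
        # or halfway toward it along every axis.
        if all(abs(child_vertex[d] - corner[d]) <= half for d in range(3)):
            contributing.append(value)
    return sum(contributing) / len(contributing)

corners = {(i, j, k): 100 * i + 10 * j + k
           for i in (0, 1) for j in (0, 1) for k in (0, 1)}
print(parent_predictor(corners, (16, 16, 16), (0, 0, 0), 32))  # 55.5 (cube center)
print(parent_predictor(corners, (16, 0, 0), (0, 0, 0), 32))    # 50.0 (edge midpoint)
```

A corner vertex copies its collocated parent parameter; midpoints average two, four, or eight parent corners.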
28. The method of claim 24, further comprising:
receiving a global color mapping table associated with a plurality of pictures in the video sequence;
on a condition that the color mapping table prediction mode indication indicates that the color mapping predictor comprises a portion of the global color mapping table and a portion of the current color mapping table, identifying a vertex prediction mode for a current vertex of the current color mapping table based on a vertex prediction mode indication associated with the current vertex; and
predicting a plurality of color mapping coefficient parameters associated with the current vertex based on the vertex prediction mode for the current vertex, wherein the plurality of color mapping coefficient parameters associated with the current vertex are predicted based on at least one vertex associated with a reconstructed parent segment on a condition that the vertex prediction mode indication indicates that the reconstructed parent segment from the current color mapping table is to be used to predict the current vertex.
29. The method of claim 24, further comprising:
receiving a global color mapping table associated with a plurality of pictures in the video sequence;
on a condition that the color mapping table prediction mode indication indicates that the color mapping predictor comprises a portion of the global color mapping table and a portion of the current color mapping table, identifying a vertex prediction mode for a current vertex of the current color mapping table based on a vertex prediction mode indication associated with the current vertex; and
predicting a plurality of color mapping coefficient parameters associated with the current vertex based on the vertex prediction mode for the current vertex, wherein the plurality of color mapping coefficient parameters associated with the current vertex are predicted based on a collocated segment of the global color mapping table on a condition that the vertex prediction mode indication indicates that the collocated segment of the global color mapping table is to be used to predict the current vertex.
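The per-vertex selection in claims 28 and 29 amounts to a dispatch on the signaled vertex prediction mode indication. The mode constants and the callable predictor stand-ins below are assumptions for illustration; the claims do not fix the bitstream syntax at this level of detail.

```python
# Hypothetical mode values; the claims only require that a vertex prediction
# mode indication selects between the two predictor sources.
FROM_PARENT_SEGMENT = 0   # reconstructed parent segment of the current table
FROM_GLOBAL_TABLE = 1     # collocated segment of the global table

def predict_vertex(mode, vertex, parent_predict, global_predict):
    """Dispatch one vertex to the predictor source signaled by its mode."""
    if mode == FROM_PARENT_SEGMENT:
        return parent_predict(vertex)
    if mode == FROM_GLOBAL_TABLE:
        return global_predict(vertex)
    raise ValueError("unknown vertex prediction mode")

# Toy stand-in predictors for the two sources.
parent_stub = lambda v: sum(v) / len(v)
global_stub = lambda v: v[0]
print(predict_vertex(FROM_PARENT_SEGMENT, (3, 6, 9), parent_stub, global_stub))  # 6.0
print(predict_vertex(FROM_GLOBAL_TABLE, (3, 6, 9), parent_stub, global_stub))    # 3
```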
30. The method of claim 24, wherein reconstructing the at least one color mapping coefficient parameter of the current color mapping table for the current picture based on the determined color mapping predictor further comprises:
deriving a plurality of color mapping coefficient parameters associated with a current vertex in the current color mapping table via trilinear interpolation based on a plurality of neighboring vertices that neighbor a collocated vertex in the determined color mapping predictor.
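The trilinear derivation of claim 30 can be sketched as follows, assuming the predictor table stores parameters on a regular lattice with a known step: the eight lattice vertices neighboring the collocated position are weighted by the fractional offsets along each color axis. All names are illustrative.

```python
def trilinear_predict(grid, step, y, u, v):
    """Interpolate the 8 predictor vertices surrounding position (y, u, v).

    grid maps lattice points (coordinate multiples of step) to parameter
    values; (y, u, v) must fall inside a fully populated grid cell.
    """
    y0, u0, v0 = (y // step) * step, (u // step) * step, (v // step) * step
    fy, fu, fv = (y - y0) / step, (u - u0) / step, (v - v0) / step
    pred = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                weight = ((fy if i else 1 - fy) *
                          (fu if j else 1 - fu) *
                          (fv if k else 1 - fv))
                pred += weight * grid[(y0 + i * step,
                                       u0 + j * step,
                                       v0 + k * step)]
    return pred

# Toy predictor grid storing the linear function y + u + v at each vertex;
# trilinear interpolation reproduces a linear function exactly.
grid = {(y, u, v): y + u + v
        for y in (0, 16) for u in (0, 16) for v in (0, 16)}
print(trilinear_predict(grid, 16, 4, 8, 12))  # 24.0
```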
31. The method of claim 24, wherein reconstructing the at least one color mapping coefficient parameter of the current color mapping table for the current picture based on the determined color mapping predictor further comprises:
deriving a plurality of color mapping coefficient parameters associated with a current vertex in the current color mapping table based on a collocated vertex in the determined color mapping predictor.
32. The method of claim 24, wherein reconstructing the at least one color mapping coefficient parameter of the current color mapping table for the current picture based on the determined color mapping predictor further comprises:
determining a prediction value for a color mapping coefficient parameter associated with a current vertex in the current color mapping table based on the determined color mapping predictor;
receiving a residual value for the color mapping coefficient parameter associated with the current vertex; and
deriving the color mapping coefficient parameter associated with the current vertex in the current color mapping table based on the prediction value and the residual value for the color mapping coefficient parameter.
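Claim 32's reconstruction can be summarized as prediction plus signaled residual; the additive relation below is an assumption for illustration (the claim itself only requires deriving the parameter from the prediction value and the residual value).

```python
def encode_residual(actual_value, prediction_value):
    # Encoder-side mirror: only the (typically small) residual is signaled.
    return actual_value - prediction_value

def reconstruct_coefficient(prediction_value, residual_value):
    # Decoder side: recover the color mapping coefficient parameter.
    return prediction_value + residual_value

prediction = 130                   # from the determined color mapping predictor
residual = encode_residual(134, prediction)
print(residual)                                       # 4
print(reconstruct_coefficient(prediction, residual))  # 134
```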
33. The method of claim 24, wherein the color mapping table prediction mode indication for predicting the current color mapping table for the current picture is received at a picture level.
34. A video decoding device comprising:
a processor configured to:
receive, for a current picture in a video sequence, a color mapping table prediction mode indication for predicting a current color mapping table for the current picture, wherein the current color mapping table is partitioned into a plurality of segments, a segment corresponding to a portion of a color space, the current color mapping table having a plurality of color mapping coefficient parameters that are associated with various vertices of the plurality of segments;
determine, based on the color mapping table prediction mode indication associated with the current picture, a color mapping predictor for predicting at least one vertex in the current color mapping table; and
reconstruct at least one color mapping coefficient parameter of the current color mapping table for the current picture based on the determined color mapping predictor.
35. The video decoding device of claim 34, wherein the processor is further configured to:
receive a global color mapping table associated with a plurality of pictures in the video sequence;
on a condition that the color mapping table prediction mode indication indicates that the global color mapping table is to be used as the color mapping predictor, determine the global color mapping table as the color mapping predictor for predicting the current color mapping table for the current picture; and
predict the plurality of color mapping coefficient parameters of the current color mapping table based on a plurality of corresponding color mapping coefficient parameters in the global color mapping table.
36. The video decoding device of claim 35, wherein the global color mapping table associated with the plurality of pictures is received in a picture parameter set.
37. The video decoding device of claim 34, wherein the current color mapping table is partially reconstructed, and the processor is further configured to:
on a condition that the color mapping table prediction mode indication indicates that a reconstructed parent segment of a current segment in the current color mapping table is to be used as the color mapping predictor, determine the parent segment of the current segment as the color mapping predictor for predicting the current segment; and
predict a plurality of color mapping coefficient parameters associated with the current segment of the current color mapping table based on a plurality of vertices associated with the parent segment.
38. The video decoding device of claim 34, wherein the processor is further configured to:
receive a global color mapping table associated with a plurality of pictures in the video sequence;
on a condition that the color mapping table prediction mode indication indicates that the color mapping predictor comprises a portion of the global color mapping table and a portion of the current color mapping table, identify a vertex prediction mode for a current vertex of the current color mapping table based on a vertex prediction mode indication associated with the current vertex; and
predict a plurality of color mapping coefficient parameters associated with the current vertex based on the vertex prediction mode for the current vertex, wherein the plurality of color mapping coefficient parameters associated with the current vertex are predicted based on at least one vertex associated with a reconstructed parent segment on a condition that the vertex prediction mode indication indicates that the reconstructed parent segment from the current color mapping table is to be used to predict the current vertex.
39. The video decoding device of claim 34, wherein the processor is further configured to:
receive a global color mapping table associated with a plurality of pictures in the video sequence;
on a condition that the color mapping table prediction mode indication indicates that the color mapping predictor comprises a portion of the global color mapping table and a portion of the current color mapping table, identify a vertex prediction mode for a current vertex of the current color mapping table based on a vertex prediction mode indication associated with the current vertex; and
predict a plurality of color mapping coefficient parameters associated with the current vertex based on the vertex prediction mode for the current vertex, wherein the plurality of color mapping coefficient parameters associated with the current vertex are predicted based on a collocated segment of the global color mapping table on a condition that the vertex prediction mode indication indicates that the collocated segment of the global color mapping table is to be used to predict the current vertex.
40. The video decoding device of claim 34, wherein the processor is further configured to:
derive a plurality of color mapping coefficient parameters associated with a current vertex in the current color mapping table via trilinear interpolation based on a plurality of neighboring vertices that neighbor a collocated vertex in the determined color mapping predictor.
41. The video decoding device of claim 34, wherein the processor is further configured to:
derive a plurality of color mapping coefficient parameters associated with a current vertex in the current color mapping table based on a collocated vertex in the determined color mapping predictor.
42. The video decoding device of claim 34, wherein the processor is further configured to:
determine a prediction value for a color mapping coefficient parameter associated with a current vertex in the current color mapping table based on the determined color mapping predictor;
receive a residual value for the color mapping coefficient parameter associated with the current vertex; and
derive the color mapping coefficient parameter associated with the current vertex in the current color mapping table based on the prediction value and the residual value for the color mapping coefficient parameter.
43. The video decoding device of claim 34, wherein the color mapping table prediction mode indication for predicting the current color mapping table for the current picture is received at a picture level.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/530,679 US20190356925A1 (en) | 2013-09-20 | 2019-08-02 | Systems and methods for providing 3d look-up table coding for color gamut scalability |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361880715P | 2013-09-20 | 2013-09-20 | |
PCT/US2014/056608 WO2015042432A1 (en) | 2013-09-20 | 2014-09-19 | Systems and methods for providing 3d look-up table coding for color gamut scalability |
US201615022386A | 2016-03-16 | 2016-03-16 | |
US15/933,349 US10390029B2 (en) | 2013-09-20 | 2018-03-22 | Systems and methods for providing 3D look-up table coding for color gamut scalability |
US16/530,679 US20190356925A1 (en) | 2013-09-20 | 2019-08-02 | Systems and methods for providing 3d look-up table coding for color gamut scalability |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/933,349 Continuation US10390029B2 (en) | 2013-09-20 | 2018-03-22 | Systems and methods for providing 3D look-up table coding for color gamut scalability |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190356925A1 (en) | 2019-11-21 |
Family
ID=51691147
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/022,386 Active US9955174B2 (en) | 2013-09-20 | 2014-09-19 | Systems and methods for providing 3D look-up table coding for color gamut scalability |
US15/933,349 Expired - Fee Related US10390029B2 (en) | 2013-09-20 | 2018-03-22 | Systems and methods for providing 3D look-up table coding for color gamut scalability |
US16/530,679 Abandoned US20190356925A1 (en) | 2013-09-20 | 2019-08-02 | Systems and methods for providing 3d look-up table coding for color gamut scalability |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/022,386 Active US9955174B2 (en) | 2013-09-20 | 2014-09-19 | Systems and methods for providing 3D look-up table coding for color gamut scalability |
US15/933,349 Expired - Fee Related US10390029B2 (en) | 2013-09-20 | 2018-03-22 | Systems and methods for providing 3D look-up table coding for color gamut scalability |
Country Status (9)
Country | Link |
---|---|
US (3) | US9955174B2 (en) |
EP (2) | EP3047639B1 (en) |
JP (2) | JP6449892B2 (en) |
KR (2) | KR102028186B1 (en) |
CN (2) | CN105556943B (en) |
DK (1) | DK3047639T3 (en) |
HK (1) | HK1222075A1 (en) |
RU (1) | RU2016115055A (en) |
WO (1) | WO2015042432A1 (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210297659A1 (en) | 2018-09-12 | 2021-09-23 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US11245892B2 (en) | 2018-06-29 | 2022-02-08 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11381848B2 (en) | 2018-06-05 | 2022-07-05 | Beijing Bytedance Network Technology Co., Ltd. | Main concept of EQT, unequally four partitions and signaling |
US11463685B2 (en) | 2018-07-02 | 2022-10-04 | Beijing Bytedance Network Technology Co., Ltd. | LUTS with intra prediction modes and intra mode prediction from non-adjacent blocks |
US20220337835A1 (en) * | 2019-10-28 | 2022-10-20 | Lg Electronics Inc. | Image encoding/decoding method and device using adaptive color transform, and method for transmitting bitstream |
US11528500B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Partial/full pruning when adding a HMVP candidate to merge/AMVP |
US11528501B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US11539981B2 (en) | 2019-06-21 | 2022-12-27 | Beijing Bytedance Network Technology Co., Ltd. | Adaptive in-loop color-space transform for video coding |
US11589071B2 (en) | 2019-01-10 | 2023-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Invoke of LUT updating |
US11641483B2 (en) | 2019-03-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools |
US11671591B2 (en) | 2019-11-07 | 2023-06-06 | Beijing Bytedance Network Technology Co., Ltd | Quantization properties of adaptive in-loop color-space transform for video coding |
US11695921B2 (en) | 2018-06-29 | 2023-07-04 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11877002B2 (en) | 2018-06-29 | 2024-01-16 | Beijing Bytedance Network Technology Co., Ltd | Update of look up table: FIFO, constrained FIFO |
US11895318B2 (en) | 2018-06-29 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
US11909989B2 (en) | 2018-06-29 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Number of motion candidates in a look up table to be checked according to mode |
US11909951B2 (en) | 2019-01-13 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Interaction between lut and shared merge list |
US11956464B2 (en) | 2019-01-16 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Inserting order of motion candidates in LUT |
US11973971B2 (en) | 2018-06-29 | 2024-04-30 | Beijing Bytedance Network Technology Co., Ltd | Conditions for updating LUTs |
Families Citing this family (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102481406B1 (en) * | 2013-04-08 | 2022-12-27 | 돌비 인터네셔널 에이비 | Method for encoding and method for decoding a lut and corresponding devices |
CN105556943B (en) * | 2013-09-20 | 2019-03-29 | Vid拓展公司 | 3D look-up table coding is provided with the system and method for colour gamut scalability |
US9948916B2 (en) | 2013-10-14 | 2018-04-17 | Qualcomm Incorporated | Three-dimensional lookup table based color gamut scalability in multi-layer video coding |
WO2015089352A1 (en) | 2013-12-13 | 2015-06-18 | Vid Scale, Inc | Color gamut scalable video coding device and method for the phase alignment of luma and chroma using interpolation |
US9756337B2 (en) | 2013-12-17 | 2017-09-05 | Qualcomm Incorporated | Signaling color values for 3D lookup table for color gamut scalability in multi-layer video coding |
US10531105B2 (en) | 2013-12-17 | 2020-01-07 | Qualcomm Incorporated | Signaling partition information for 3D lookup table for color gamut scalability in multi-layer video coding |
US20160286224A1 (en) * | 2015-03-26 | 2016-09-29 | Thomson Licensing | Method and apparatus for generating color mapping parameters for video encoding |
PT3166795T (en) | 2015-05-15 | 2019-02-04 | Hewlett Packard Development Company L P Texas Lp | Printer cartridges and memory devices containing compressed multi-dimensional color tables |
JP6872098B2 (en) * | 2015-11-12 | 2021-05-19 | ソニーグループ株式会社 | Information processing equipment, information recording media, information processing methods, and programs |
US10674043B2 (en) | 2016-07-08 | 2020-06-02 | Hewlett-Packard Development Company, L.P. | Color table compression |
US9992382B2 (en) | 2016-07-08 | 2018-06-05 | Hewlett-Packard Development Company, L.P. | Color table compression |
US10602028B2 (en) | 2016-07-08 | 2020-03-24 | Hewlett-Packard Development Company, L.P. | Color table compression |
WO2018022011A1 (en) * | 2016-07-26 | 2018-02-01 | Hewlett-Packard Development Company, L.P. | Indexing voxels for 3d printing |
JP6769231B2 (en) * | 2016-10-17 | 2020-10-14 | 富士通株式会社 | Moving image coding device, moving image coding method, moving image decoding device, moving image decoding method, and moving image coding computer program and moving image decoding computer program |
EP3367659A1 (en) * | 2017-02-28 | 2018-08-29 | Thomson Licensing | Hue changing color gamut mapping |
US10805646B2 (en) * | 2018-06-22 | 2020-10-13 | Apple Inc. | Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables |
US10861196B2 (en) | 2017-09-14 | 2020-12-08 | Apple Inc. | Point cloud compression |
US10897269B2 (en) | 2017-09-14 | 2021-01-19 | Apple Inc. | Hierarchical point cloud compression |
US11818401B2 (en) | 2017-09-14 | 2023-11-14 | Apple Inc. | Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables |
US11113845B2 (en) | 2017-09-18 | 2021-09-07 | Apple Inc. | Point cloud compression using non-cubic projections and masks |
US10909725B2 (en) | 2017-09-18 | 2021-02-02 | Apple Inc. | Point cloud compression |
EP3467777A1 (en) * | 2017-10-06 | 2019-04-10 | Thomson Licensing | A method and apparatus for encoding/decoding the colors of a point cloud representing a 3d object |
US10607373B2 (en) | 2017-11-22 | 2020-03-31 | Apple Inc. | Point cloud compression with closed-loop color conversion |
CA3090465A1 (en) * | 2018-02-08 | 2019-08-15 | Panasonic Intellectual Property Corporation Of America | Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device |
US11010928B2 (en) | 2018-04-10 | 2021-05-18 | Apple Inc. | Adaptive distance based point cloud compression |
US10909726B2 (en) | 2018-04-10 | 2021-02-02 | Apple Inc. | Point cloud compression |
EP3779886A4 (en) * | 2018-04-10 | 2021-06-16 | Panasonic Intellectual Property Corporation of America | Three-dimensional data coding method, three-dimensional data decoding method, three-dimensional data coding device, and three-dimensional data decoding device |
US10909727B2 (en) | 2018-04-10 | 2021-02-02 | Apple Inc. | Hierarchical point cloud compression with smoothing |
US10939129B2 (en) | 2018-04-10 | 2021-03-02 | Apple Inc. | Point cloud compression |
EP3804130A1 (en) | 2018-06-05 | 2021-04-14 | Telefonaktiebolaget LM Ericsson (publ) | Low-power approximate dpd actuator for 5g-new radio |
EP3804129A1 (en) | 2018-06-05 | 2021-04-14 | Telefonaktiebolaget LM Ericsson (publ) | Digital predistortion low power implementation |
WO2020003283A1 (en) | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for updating luts |
US11017566B1 (en) | 2018-07-02 | 2021-05-25 | Apple Inc. | Point cloud compression with adaptive filtering |
US11202098B2 (en) | 2018-07-05 | 2021-12-14 | Apple Inc. | Point cloud compression with multi-resolution video encoding |
US11012713B2 (en) | 2018-07-12 | 2021-05-18 | Apple Inc. | Bit stream structure for compressed point cloud data |
US10771797B2 (en) * | 2018-07-30 | 2020-09-08 | Logmein, Inc. | Enhancing a chroma-subsampled video stream |
US11367224B2 (en) | 2018-10-02 | 2022-06-21 | Apple Inc. | Occupancy map block-to-patch information compression |
EP3672244B1 (en) * | 2018-12-20 | 2020-10-28 | Axis AB | Methods and devices for encoding and decoding a sequence of image frames in which the privacy of an object is protected |
KR20210104055A (en) * | 2018-12-21 | 2021-08-24 | 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 | A three-dimensional data encoding method, a three-dimensional data decoding method, a three-dimensional data encoding apparatus, and a three-dimensional data decoding apparatus |
US11057564B2 (en) | 2019-03-28 | 2021-07-06 | Apple Inc. | Multiple layer flexure for supporting a moving image sensor |
KR102609947B1 (en) | 2019-04-02 | 2023-12-04 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Bidirectional optical flow-based video coding and decoding |
WO2020211867A1 (en) | 2019-04-19 | 2020-10-22 | Beijing Bytedance Network Technology Co., Ltd. | Delta motion vector in prediction refinement with optical flow process |
KR20210152470A (en) | 2019-04-19 | 2021-12-15 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Gradient calculation of different motion vector refinements |
CN113711608B (en) | 2019-04-19 | 2023-09-01 | 北京字节跳动网络技术有限公司 | Suitability of predictive refinement procedure with optical flow |
US10970882B2 (en) * | 2019-07-24 | 2021-04-06 | At&T Intellectual Property I, L.P. | Method for scalable volumetric video coding |
US10979692B2 (en) | 2019-08-14 | 2021-04-13 | At&T Intellectual Property I, L.P. | System and method for streaming visible portions of volumetric video |
CN117499661A (en) | 2019-09-09 | 2024-02-02 | 北京字节跳动网络技术有限公司 | Coefficient scaling for high precision image and video codecs |
BR112022005133A2 (en) | 2019-09-21 | 2022-10-11 | Beijing Bytedance Network Tech Co Ltd | VIDEO DATA PROCESSING METHOD AND APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE AND RECORDING MEDIA |
US11562507B2 (en) | 2019-09-27 | 2023-01-24 | Apple Inc. | Point cloud compression using video encoding with time consistent patches |
US11627314B2 (en) | 2019-09-27 | 2023-04-11 | Apple Inc. | Video-based point cloud compression with non-normative smoothing |
US11538196B2 (en) | 2019-10-02 | 2022-12-27 | Apple Inc. | Predictive coding for point cloud compression |
US11895307B2 (en) | 2019-10-04 | 2024-02-06 | Apple Inc. | Block-based predictive coding for point cloud compression |
CN110781932B (en) * | 2019-10-14 | 2022-03-11 | 国家广播电视总局广播电视科学研究院 | Ultrahigh-definition film source color gamut detection method for multi-class image conversion and comparison |
CN114902665A (en) | 2019-10-28 | 2022-08-12 | Lg电子株式会社 | Image encoding/decoding method and apparatus using adaptive transform and method of transmitting bitstream |
WO2021086023A1 (en) * | 2019-10-28 | 2021-05-06 | 엘지전자 주식회사 | Image encoding/decoding method and apparatus for performing residual processing using adaptive transformation, and method of transmitting bitstream |
WO2021101317A1 (en) * | 2019-11-22 | 2021-05-27 | 엘지전자 주식회사 | Image encoding/decoding method and device using lossless color transform, and method for transmitting bitstream |
US11798196B2 (en) | 2020-01-08 | 2023-10-24 | Apple Inc. | Video-based point cloud compression with predicted patches |
US11475605B2 (en) | 2020-01-09 | 2022-10-18 | Apple Inc. | Geometry encoding of duplicate points |
US11620768B2 (en) | 2020-06-24 | 2023-04-04 | Apple Inc. | Point cloud geometry compression using octrees with multiple scan orders |
US11615557B2 (en) | 2020-06-24 | 2023-03-28 | Apple Inc. | Point cloud compression using octrees with slicing |
US11948338B1 (en) | 2021-03-29 | 2024-04-02 | Apple Inc. | 3D volumetric content encoding using 2D videos and simplified 3D meshes |
CN114363702B (en) * | 2021-12-28 | 2023-09-08 | 上海网达软件股份有限公司 | Method, device, equipment and storage medium for converting SDR video into HDR video |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2899461B2 (en) | 1990-12-20 | 1999-06-02 | 株式会社リコー | Color signal interpolation method, color signal interpolation device, and color correction method |
US5583656A (en) | 1992-12-31 | 1996-12-10 | Eastman Kodak Company | Methods and apparatus for attaching compressed look-up table (LUT) representations of N to M-dimensional transforms to image data and for processing image data utilizing the attached compressed LUTs |
US5504821A (en) | 1993-03-31 | 1996-04-02 | Matsushita Electric Industrial Co., Ltd. | Color converting apparatus for performing a three-dimensional color conversion of a colored picture in a color space with a small capacity of memory |
US5737032A (en) | 1995-09-05 | 1998-04-07 | Videotek, Inc. | Serial digital video processing with concurrent adjustment in RGB and luminance/color difference |
KR100189908B1 (en) | 1996-05-06 | 1999-06-01 | 윤종용 | Color correction apparatus & method therefor using two-dimensional chromaticity partitioning |
US6400843B1 (en) * | 1999-04-22 | 2002-06-04 | Seiko Epson Corporation | Color image reproduction with accurate inside-gamut colors and enhanced outside-gamut colors |
US6301393B1 (en) * | 2000-01-21 | 2001-10-09 | Eastman Kodak Company | Using a residual image formed from a clipped limited color gamut digital image to represent an extended color gamut digital image |
RU2190306C2 (en) | 2000-05-10 | 2002-09-27 | Государственное унитарное предприятие "Калужский научно-исследовательский институт телемеханических устройств" | Method and device for processing half-tone color picture using vector diffusion of chroma error |
US7929610B2 (en) | 2001-03-26 | 2011-04-19 | Sharp Kabushiki Kaisha | Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding |
US7643675B2 (en) | 2003-08-01 | 2010-01-05 | Microsoft Corporation | Strategies for processing image information using a color information data structure |
KR100552695B1 (en) | 2003-11-20 | 2006-02-20 | 삼성전자주식회사 | Method and apparatus for color control in color image |
US7659911B2 (en) | 2004-04-21 | 2010-02-09 | Andreas Wittenstein | Method and apparatus for lossless and minimal-loss color conversion |
CN101005620B (en) | 2004-09-03 | 2011-08-10 | 微软公司 | Innovations in coding and decoding macroblock and motion information for interlaced and progressive video |
US7755817B2 (en) * | 2004-12-07 | 2010-07-13 | Chimei Innolux Corporation | Color gamut mapping |
JP4477675B2 (en) | 2005-01-26 | 2010-06-09 | キヤノン株式会社 | Color processing apparatus and method |
US7961963B2 (en) | 2005-03-18 | 2011-06-14 | Sharp Laboratories Of America, Inc. | Methods and systems for extended spatial scalability with picture-level adaptation |
CN1859576A (en) | 2005-10-11 | 2006-11-08 | 华为技术有限公司 | Upsampling method and system for spatially scalable video coding |
JP5039142B2 (en) | 2006-10-25 | 2012-10-03 | フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン | Quality scalable coding method |
US8233536B2 (en) | 2007-01-23 | 2012-07-31 | Sharp Laboratories Of America, Inc. | Methods and systems for multiplication-free inter-layer image prediction |
JP2008211310A (en) | 2007-02-23 | 2008-09-11 | Seiko Epson Corp | Image processing apparatus and image display device |
JP4996501B2 (en) * | 2007-04-06 | 2012-08-08 | キヤノン株式会社 | Multidimensional data encoding apparatus, decoding apparatus, and control method therefor |
US8237990B2 (en) | 2007-06-28 | 2012-08-07 | Adobe Systems Incorporated | System and method for converting over-range colors |
US8625676B2 (en) * | 2007-06-29 | 2014-01-07 | Pai Kung Limited Liability Company | Video bitstream decoding using least square estimates |
US7684084B2 (en) * | 2007-09-25 | 2010-03-23 | Xerox Corporation | Multiple dimensional color conversion to minimize interpolation error |
US9538176B2 (en) * | 2008-08-08 | 2017-01-03 | Dolby Laboratories Licensing Corporation | Pre-processing for bitdepth and color format scalable video coding |
US8169434B2 (en) * | 2008-09-29 | 2012-05-01 | Microsoft Corporation | Octree construction on graphics processing units |
RU2011140810A (en) | 2009-03-09 | 2013-04-20 | Конинклейке Филипс Электроникс Н.В. | Conversion of multiple primary colors |
US8860745B2 (en) * | 2009-06-01 | 2014-10-14 | Stmicroelectronics, Inc. | System and method for color gamut mapping |
US9185422B2 (en) | 2010-07-15 | 2015-11-10 | Qualcomm Incorporated | Variable localized bit-depth increase for fixed-point transforms in video coding |
CN103141099B (en) * | 2010-10-01 | 2016-10-26 | 杜比实验室特许公司 | The selection of wave filter for the optimization that reference picture processes |
CN101977316B (en) | 2010-10-27 | 2012-07-25 | 无锡中星微电子有限公司 | Scalable coding method |
UA109312C2 (en) * | 2011-03-04 | 2015-08-10 | | Pulse-code modulation with quantization for coding video information |
GB2495468B (en) * | 2011-09-02 | 2017-12-13 | Skype | Video coding |
US11184623B2 (en) * | 2011-09-26 | 2021-11-23 | Texas Instruments Incorporated | Method and system for lossless coding mode in video coding |
KR20130068823A (en) | 2011-12-16 | 2013-06-26 | 삼성전자주식회사 | Method and apparatus for image signal processing |
EP2803190B1 (en) * | 2012-01-09 | 2017-10-25 | Dolby Laboratories Licensing Corporation | Hybrid reference picture reconstruction method for multiple layered video coding systems |
US9673936B2 (en) * | 2013-03-15 | 2017-06-06 | Google Inc. | Method and system for providing error correction to low-latency streaming video |
KR102481406B1 (en) * | 2013-04-08 | 2022-12-27 | 돌비 인터네셔널 에이비 | Method for encoding and method for decoding a lut and corresponding devices |
CN105556943B (en) * | 2013-09-20 | 2019-03-29 | Vid拓展公司 | 3D look-up table coding is provided with the system and method for colour gamut scalability |
US9756337B2 (en) * | 2013-12-17 | 2017-09-05 | Qualcomm Incorporated | Signaling color values for 3D lookup table for color gamut scalability in multi-layer video coding |
KR102024982B1 (en) * | 2014-03-19 | 2019-09-24 | 애리스 엔터프라이지즈 엘엘씨 | Scalable coding of video sequences using tone mapping and different color gamuts |
JP6330507B2 (en) * | 2014-06-19 | 2018-05-30 | ソニー株式会社 | Image processing apparatus and image processing method |
- 2014
  - 2014-09-19 CN CN201480051925.7A patent/CN105556943B/en active Active
  - 2014-09-19 JP JP2016544028A patent/JP6449892B2/en active Active
  - 2014-09-19 WO PCT/US2014/056608 patent/WO2015042432A1/en active Application Filing
  - 2014-09-19 EP EP14783940.1A patent/EP3047639B1/en active Active
  - 2014-09-19 US US15/022,386 patent/US9955174B2/en active Active
  - 2014-09-19 EP EP18174350.1A patent/EP3386179A1/en not_active Withdrawn
  - 2014-09-19 KR KR1020167010164A patent/KR102028186B1/en active IP Right Grant
  - 2014-09-19 CN CN201910167252.6A patent/CN110033494A/en active Pending
  - 2014-09-19 RU RU2016115055A patent/RU2016115055A/en not_active Application Discontinuation
  - 2014-09-19 DK DK14783940.1T patent/DK3047639T3/en active
  - 2014-09-19 KR KR1020197028387A patent/KR20190112854A/en not_active IP Right Cessation
- 2016
  - 2016-08-26 HK HK16110179.4A patent/HK1222075A1/en unknown
- 2018
  - 2018-03-22 US US15/933,349 patent/US10390029B2/en not_active Expired - Fee Related
  - 2018-12-06 JP JP2018229354A patent/JP6701310B2/en not_active Expired - Fee Related
- 2019
  - 2019-08-02 US US16/530,679 patent/US20190356925A1/en not_active Abandoned
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11445224B2 (en) | 2018-06-05 | 2022-09-13 | Beijing Bytedance Network Technology Co., Ltd. | Shape of EQT subblock |
US11570482B2 (en) | 2018-06-05 | 2023-01-31 | Beijing Bytedance Network Technology Co., Ltd. | Restriction of extended quadtree |
US11381848B2 (en) | 2018-06-05 | 2022-07-05 | Beijing Bytedance Network Technology Co., Ltd. | Main concept of EQT, unequally four partitions and signaling |
US11438635B2 (en) | 2018-06-05 | 2022-09-06 | Beijing Bytedance Network Technology Co., Ltd. | Flexible tree partitioning processes for visual media coding |
US11528501B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US11695921B2 (en) | 2018-06-29 | 2023-07-04 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11973971B2 (en) | 2018-06-29 | 2024-04-30 | Beijing Bytedance Network Technology Co., Ltd | Conditions for updating LUTs |
US11528500B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Partial/full pruning when adding a HMVP candidate to merge/AMVP |
US11895318B2 (en) | 2018-06-29 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
US11877002B2 (en) | 2018-06-29 | 2024-01-16 | Beijing Bytedance Network Technology Co., Ltd | Update of look up table: FIFO, constrained FIFO |
US11245892B2 (en) | 2018-06-29 | 2022-02-08 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11909989B2 (en) | 2018-06-29 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Number of motion candidates in a look up table to be checked according to mode |
US11706406B2 (en) | 2018-06-29 | 2023-07-18 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11463685B2 (en) | 2018-07-02 | 2022-10-04 | Beijing Bytedance Network Technology Co., Ltd. | LUTS with intra prediction modes and intra mode prediction from non-adjacent blocks |
US20210297659A1 (en) | 2018-09-12 | 2021-09-23 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US11589071B2 (en) | 2019-01-10 | 2023-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Invoke of LUT updating |
US11909951B2 (en) | 2019-01-13 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Interaction between lut and shared merge list |
US11956464B2 (en) | 2019-01-16 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Inserting order of motion candidates in LUT |
US11962799B2 (en) | 2019-01-16 | 2024-04-16 | Beijing Bytedance Network Technology Co., Ltd | Motion candidates derivation |
US11641483B2 (en) | 2019-03-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools |
US11778233B2 (en) | 2019-06-21 | 2023-10-03 | Beijing Bytedance Network Technology Co., Ltd | Selective use of adaptive in-loop color-space transform and other video coding tools |
US11539981B2 (en) | 2019-06-21 | 2022-12-27 | Beijing Bytedance Network Technology Co., Ltd. | Adaptive in-loop color-space transform for video coding |
US11677946B2 (en) * | 2019-10-28 | 2023-06-13 | Lg Electronics Inc. | Image encoding/decoding method and device using adaptive color transform, and method for transmitting bitstream |
US20220337835A1 (en) * | 2019-10-28 | 2022-10-20 | Lg Electronics Inc. | Image encoding/decoding method and device using adaptive color transform, and method for transmitting bitstream |
US11671591B2 (en) | 2019-11-07 | 2023-06-06 | Beijing Bytedance Network Technology Co., Ltd | Quantization properties of adaptive in-loop color-space transform for video coding |
Also Published As
Publication number | Publication date |
---|---|
JP2019068451A (en) | 2019-04-25 |
EP3047639B1 (en) | 2018-07-18 |
KR102028186B1 (en) | 2019-10-02 |
CN110033494A (en) | 2019-07-19 |
EP3047639A1 (en) | 2016-07-27 |
US9955174B2 (en) | 2018-04-24 |
CN105556943B (en) | 2019-03-29 |
DK3047639T3 (en) | 2018-10-15 |
WO2015042432A1 (en) | 2015-03-26 |
JP6701310B2 (en) | 2020-05-27 |
US20180213241A1 (en) | 2018-07-26 |
HK1222075A1 (en) | 2017-06-16 |
JP6449892B2 (en) | 2019-01-09 |
KR20190112854A (en) | 2019-10-07 |
KR20160058163A (en) | 2016-05-24 |
RU2016115055A (en) | 2017-10-25 |
EP3386179A1 (en) | 2018-10-10 |
US10390029B2 (en) | 2019-08-20 |
JP2016534679A (en) | 2016-11-04 |
US20160295219A1 (en) | 2016-10-06 |
CN105556943A (en) | 2016-05-04 |
Similar Documents
Publication | Title |
---|---|
US10390029B2 (en) | Systems and methods for providing 3D look-up table coding for color gamut scalability |
US10986370B2 (en) | Combined scalability processing for multi-layer video coding |
US10440340B2 (en) | Providing 3D look-up table (LUT) estimation for color gamut scalability |
US11109044B2 (en) | Color space conversion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |