CN108989819B - Data compression method and device adopting respective corresponding color spaces for modes - Google Patents
- Publication number: CN108989819B (application CN201710410669.1A)
- Authority
- CN
- China
- Prior art keywords
- color space
- bitdepth
- coding
- data
- variant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Color Television Systems (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a data compression method and a data compression device in which each coding mode performs encoding and decoding in a data color space suited to that mode's characteristics. In the method and the device, L (usually 2 ≤ L ≤ 10) predetermined coding modes are classified into K (2 ≤ K ≤ L) groups; the K groups of coding modes respectively adopt K mutually different corresponding data color spaces, and the coding modes of the same group all adopt the same data color space. Thus, the data color space is adapted to the characteristics of each coding mode, and the overall coding efficiency is improved. When the color space of the original input data is different from the k-th (1 ≤ k ≤ K) color space among the K color spaces, the original input data needs to be converted into data of the k-th color space using color space conversion.
Description
Technical Field
The present invention relates to an encoding and decoding system for lossy or lossless compression of data, and more particularly to a method and apparatus for encoding and decoding image and video data.
Background
As human society enters the era of big data, cloud computing, mobile computing, cloud-mobile computing, 4K and 8K ultra-high-definition video resolutions, 4G/5G communication, virtual reality, and artificial intelligence, technology for compressing all kinds of data, including big data, image data, and video data, with ultra-high compression ratios and extremely high quality has become indispensable.
A data set is a set of data elements (e.g., bytes, bits, pixels, pixel components, spatial sampling points, transform-domain coefficients). When compressing (encoding, and correspondingly decoding) a data set arranged in a certain one-dimensional, two-dimensional, or multi-dimensional spatial shape (e.g., a one-dimensional data queue, a two-dimensional data file, a frame of an image, a video sequence, a transform domain, a transform block, a plurality of transform blocks, a three-dimensional scene, or a sequence of continuously changing three-dimensional scenes), especially a data set of two or more dimensions, the data set is usually divided into a number of subsets with predetermined shapes and/or sizes, called coding blocks (decoding blocks from the decoding perspective, collectively codec blocks), and these blocks are encoded or decoded one by one in a predetermined order, taking a codec block as the unit. At any one time, the coding block being encoded is referred to as the current coding block, and the decoding block being decoded is referred to as the current decoding block; the two are collectively referred to as the current codec block, or simply the current block. A data element (sometimes simply an element) being encoded or decoded is referred to as the currently encoded or currently decoded data element, collectively the current data element, or simply the current element.
When an element represents a color, the element is composed of N components (typically 1 ≤ N ≤ 5). For example, it may consist of 3 components: (G, B, R), i.e. (green, blue, red); (Y, U, V), i.e. (luminance, chroma 1, chroma 2); (Y, Cb, Cr), i.e. (luminance, chroma-difference blue, chroma-difference red); (Y, Cg, Co), i.e. (luminance, chroma green, chroma orange); (H, S, V), i.e. (hue, saturation, value); or (H, S, L), i.e. (hue, saturation, lightness). Or it may consist of 4 components: (C, M, Y, K), i.e. (cyan, magenta, yellow, black); (R, G, B, A), i.e. (red, green, blue, alpha); (Y, U, V, A), i.e. (luminance, chroma 1, chroma 2, alpha); (Y, Cb, Cr, A), i.e. (luminance, chroma blue, chroma red, alpha); or (Y, Cg, Co, A), i.e. (luminance, chroma green, chroma orange, alpha). These different representations of color are referred to as color spaces. Different color spaces can typically be converted into each other, either losslessly (without losing any color information) or lossily (possibly losing a portion of the color information). Color space conversion thus establishes a relationship between different representations of the same color that is either lossless, i.e. mathematically exact, or lossy, i.e. mathematically erroneous to some degree.
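To make the lossless case concrete, the sketch below uses the well-known YCoCg-R lifting transform (an illustrative assumption, not a formula taken from this patent) to convert an (R, G, B) triple to a (Y, Cg, Co) representation and back with no loss of color information:

```python
def rgb_to_ycgco_r(r, g, b):
    # YCoCg-R lifting transform: pure integer arithmetic, exactly invertible.
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, cg, co

def ycgco_r_to_rgb(y, cg, co):
    # Exact inverse: undo the lifting steps in reverse order.
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trips exactly for any integer triple.
assert ycgco_r_to_rgb(*rgb_to_ycgco_r(100, 50, 30)) == (100, 50, 30)
```

Because each lifting step only adds or subtracts values that the inverse can recompute, no rounding information is discarded; a conversion built from floating-point matrix products and rounding, by contrast, is generally lossy.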
For a codec block with a certain shape (not necessarily limited to a square or a rectangle, but may be any other reasonable shape), it is necessary to divide it into finer primitives (basic units) in many cases, and one primitive is encoded or decoded one primitive after another in a predetermined time sequence. The same type of encoding or decoding operation is typically performed for all samples within a primitive. At any one time, the primitive being encoded or decoded is referred to as the current primitive. The result of encoding a primitive is one or more encoding parameters, and finally a compressed data stream containing the encoding parameters is generated. Decoding a primitive by parsing the compressed data stream to obtain one or more encoding parameters, and recovering reconstructed data samples from the one or more encoding parameters.
Examples of primitives include codec blocks (the entire block as one primitive), sub-blocks, micro-blocks, strings, byte strings, alpha strings, pixel strings, sample strings, index strings, lines.
One notable feature of many common data sets is the presence of many matching (i.e., similar or even identical) patterns. For example, there are typically many matching pixel patterns in images and video sequences. Therefore, existing data compression technology generally adopts a matching (also called prediction or compensation) approach, in which a "prediction value" (also called a "compensation value" or "reference sample", e.g. a "reference pixel") is used to match (also called predict, represent, compensate, or approximate) the sample currently being encoded or decoded (the "current sample"), to achieve lossless or lossy compression of the data. In many cases, the basic operation of the matching approach is to copy the reference sample, i.e. the sample at the reference position, to the position of the current sample; the matching or prediction approach is therefore also called a copy approach. In the matching approach, reconstructed (also called restored) samples that have undergone at least partial encoding operations and at least partial decoding operations constitute a reference set (also called the reference set space or reference buffer). The reconstructed samples and their positions in the reference set correspond one-to-one to the original samples and their positions in the original data set.
When encoding and decoding a current block, the matching approach divides the current block into a number of matching (also called prediction) primitives, and each matching primitive has one or more matching (coding) parameters (also called a matching relation, copy parameter, copy relation, or reference relation) representing the characteristics of that matching primitive. One of the most important matching parameters is the displacement vector (also referred to as a motion vector, position offset, relative position, relative address, relative coordinate, relative index, etc.). The displacement vector represents the relative displacement between a sample of the current primitive and its reference sample; after the data samples are arranged into one-dimensional data, it corresponds to a one-dimensional offset. Clearly, the reference position of the reference sample is derived from the displacement vector. The displacement vector of the current primitive is referred to as the current displacement vector. Other examples of matching parameters include: primitive patterns, scan patterns, match types, match lengths, unmatched (unpredicted) samples, etc.
Examples of matching primitives include codec blocks, sub-blocks, micro-blocks, strings, pixel strings, sample strings, index strings, lines.
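The copy operation behind such matching primitives can be sketched as follows for a one-dimensional sample buffer (a minimal illustration; the function name and layout are assumptions, not the patent's codec):

```python
def copy_string(reconstructed, current_pos, displacement, match_length):
    """Decode one matched string: copy match_length samples starting at the
    reference position (current_pos - displacement) in the reference set of
    already-reconstructed samples to the current position."""
    for i in range(match_length):
        # Copying sample by sample lets the reference region overlap the
        # samples being produced (i.e. displacement < match_length), which
        # replicates a short periodic pattern.
        reconstructed.append(reconstructed[current_pos - displacement + i])

buf = [7, 7, 9, 9]                       # already-reconstructed samples
copy_string(buf, len(buf), displacement=2, match_length=4)
# The period-2 pattern is extended: buf == [7, 7, 9, 9, 9, 9, 9, 9]
```

This is why a displacement vector plus a match length is all the decoder needs to reproduce the matched string: the reference position is derived from the displacement, exactly as described above.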
In existing data compression technology, multiple coding modes are often used to compress data, such as the conventional prediction-plus-transform Hybrid coding mode and its variants, and the various coding modes based on matching and their variants.
In existing data compression technology, all coding modes use the same color space to encode and decode data elements; that is, the color space of the data elements is fixed and does not change with the coding mode. On the other hand, for a given coding mode the coding efficiency may differ between color spaces, and its efficiency on data in one color space may be much higher than on data in another. Existing data compression technology does not exploit the fact that a coding mode's efficiency can change with the data color space, which limits the improvement of coding efficiency.
Disclosure of Invention
In order to solve this problem in data compression, the invention provides a data compression method and a data compression device in which each coding mode performs encoding and decoding in a data color space suited to that mode's characteristics. In the method and the device, L (usually 2 ≤ L ≤ 10) predetermined coding modes are classified into K (2 ≤ K ≤ L) groups; the K groups of coding modes respectively adopt K mutually different corresponding data color spaces, and the coding modes of the same group all adopt the same data color space. Thus, the data color space is adapted to the characteristics of each coding mode, and the overall coding efficiency is improved. When the color space of the original input data is different from the k-th (1 ≤ k ≤ K) color space among the K color spaces, the original input data needs to be converted into data of the k-th color space using color space conversion.
For example, when original input data in the (R, G, B) color space is encoded using predetermined L1 (1 ≤ L1 ≤ 10) Hybrid coding modes and variants thereof together with predetermined L2 (1 ≤ L2 ≤ 10) matching coding modes, the L = L1 + L2 coding modes are grouped into 2 groups (K = 2): the L1 Hybrid coding modes and their variants belong to the first group, and the L2 matching coding modes belong to the second group. The first group of coding modes uses the (Y, U, V) data color space, while the second group uses the (R, G, B) data color space.
The most basic unique technical feature of the encoding method or apparatus of the present invention is that, when using predetermined L (L > 1) kinds of encoding modes, the encoding modes are grouped into K (K > 1) groups, corresponding to the K kinds of color spaces, respectively, according to the characteristics of each encoding mode itself and the requirements for the data color spaces. When a current coding block is coded by using a coding mode, the corresponding color space is selected according to the group to which the coding mode belongs, and the current coding block is coded to generate a compressed data stream at least containing an identification code of the coding mode which can be used for determining the corresponding color space and/or information equivalent to the identification code. Preferably, if the color space of the original input data is not the corresponding color space, the data of the current coding block is first converted into the data of the corresponding color space by using color space conversion, and then the converted data is encoded. Preferably, the data generated during encoding of the currently encoded block is converted into data of the other color space or spaces using color space conversion. Fig. 1 is a schematic diagram of an encoding method or apparatus of the present invention.
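The grouping-and-selection step can be sketched as follows (a schematic illustration only; the mode names, the two-group mapping, and the conversion hook are assumptions in the spirit of the example above, which maps Hybrid modes to (Y, U, V) and matching modes to (R, G, B)):

```python
# Hypothetical grouping of L = 4 coding modes into K = 2 groups,
# each group bound to one data color space.
MODE_GROUP = {
    "hybrid_intra": "YUV", "hybrid_inter": "YUV",   # group 1
    "block_match": "RGB", "string_match": "RGB",    # group 2
}

def encode_block(block_rgb, mode, convert):
    """Encode one block with `mode`, converting the input to the color
    space of the mode's group when it differs from the input's (RGB)."""
    space = MODE_GROUP[mode]
    data = block_rgb if space == "RGB" else convert(block_rgb, space)
    # ... mode-specific encoding of `data` would happen here ...
    return {"mode_id": mode, "space": space, "payload": data}

stream = encode_block([(1, 2, 3)], "hybrid_intra",
                      convert=lambda px, s: [("y", "u", "v")])
# stream["space"] == "YUV"; only the identification code "hybrid_intra"
# needs to reach the decoder, which re-derives the space from MODE_GROUP.
```

The key point mirrored here is that the color space is not signalled separately: the coding mode identification code in the compressed data stream is sufficient for the decoder to determine the corresponding color space.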
The most basic specific technical feature of the decoding method or device of the present invention is to parse the compressed data stream, obtain the information of the coding mode, and decode a current decoding block using the corresponding color space of one of the predetermined multiple color spaces according to the information of the coding mode. Preferably, if a color space of the reconstructed data (equivalent to a color space of original input data of the encoder) is not the corresponding color space, data generated in the decoding process of the currently decoded block is converted into data of the corresponding color space using color space conversion. Preferably, the data generated during decoding of the currently decoded block is converted into data of other color space or spaces using color space conversion. Fig. 2 is a schematic diagram of a decoding method or apparatus of the present invention.
According to an aspect of the present invention, there is provided an encoding method or apparatus for data compression, comprising at least steps or modules for performing the following functions and operations:
according to the characteristics of the predetermined L (L > 1) coding modes themselves and their requirements on the data color space, the coding modes are grouped into K (K > 1) groups, corresponding respectively to K predetermined color spaces. When a current coding block is encoded using a coding mode, the corresponding color space is selected according to the group to which the coding mode belongs, and the current coding block is encoded, generating a compressed data stream containing at least an identification code of the coding mode, usable for determining the corresponding color space, and/or information equivalent to the identification code.
From a first aspect, the present invention provides an encoding method for compressing data, characterized by comprising at least the following steps:
grouping the coding modes into K (K > 1) groups according to at least the characteristics of the predetermined L (L > 1) coding modes themselves and their requirements on the data color space, the K groups corresponding respectively to K predetermined color spaces; when a current coding block is encoded using a coding mode, selecting the corresponding color space at least according to the group to which the coding mode belongs, and encoding the current coding block to generate a compressed data stream containing at least an identification code of the coding mode, usable for determining the corresponding color space, and/or information equivalent to the identification code.
From a second aspect, the present invention provides an encoding apparatus for compressing data, comprising at least the following modules:
the encoding and/or compressed-data-stream generating module, used for grouping the coding modes into K (K > 1) groups according to at least the characteristics of the predetermined L (L > 1) coding modes themselves and their requirements on the data color space, the K groups corresponding respectively to K predetermined color spaces; when a current coding block is encoded using a coding mode, the corresponding color space is selected at least according to the group to which the coding mode belongs, and the current coding block is encoded to generate a compressed data stream containing at least an identification code of the coding mode, usable for determining the corresponding color space, and/or information equivalent to the identification code.
According to another aspect of the present invention, there is also provided a decoding method or apparatus for data compression, comprising at least the steps or modules for performing the following functions and operations:
the method comprises the steps of analyzing a compressed data stream generated by using predetermined L (L > 1) encoding modes, acquiring information of the encoding modes, and decoding a current decoding block by adopting a corresponding color space of one of predetermined K (K > 1) color spaces according to the information of the encoding modes.
From a third perspective, the present invention provides a decoding method for compressing data, characterized by at least the following steps:
the method comprises the steps of analyzing a compressed data stream generated by at least using predetermined L (L > 1) encoding modes, at least obtaining information of the encoding modes, and decoding a current decoding block by adopting a corresponding color space of at least one of predetermined K (K > 1) color spaces according to the information of the encoding modes.
From a fourth perspective, the present invention provides a decoding apparatus for compressing data, characterized by comprising at least the following modules:
the compressed-data-stream parsing and decoding module, which parses a compressed data stream generated using at least predetermined L (L > 1) coding modes, obtains at least information of the coding mode, and decodes a current decoding block using the corresponding color space among at least K (K > 1) predetermined color spaces according to the information of the coding mode.
The present invention is applicable to encoding and decoding for lossy compression of data, and is also applicable to encoding and decoding for lossless compression of data. The present invention is applicable to encoding and decoding of one-dimensional data such as character string data or byte string data or one-dimensional graphics or fractal graphics, and is also applicable to encoding and decoding of two-dimensional or higher data such as image or video data.
In the present invention, the data involved in data compression includes one or a combination of the following types of data:
1) One-dimensional data;
2) Two-dimensional data;
3) Multi-dimensional data;
4) Graphics;
5) Fractal graphics;
6) An image;
7) A sequence of images;
8) Video;
9) A three-dimensional scene;
10) A sequence of continuously changing three-dimensional scenes;
11) A virtual-reality scene;
12) A sequence of continuously changing virtual-reality scenes;
13) An image in pixel form;
14) Transform-domain data of an image;
15) A set of two or more bytes;
16) A set of bits in two or more dimensions;
17) A set of pixels;
18) A set of four-component (C, M, Y, K) pixels;
19) A set of four-component (R, G, B, A) pixels;
20) A set of four-component (Y, U, V, A) pixels;
21) A set of four-component (Y, Cb, Cr, A) pixels;
22) A set of four-component (Y, Cg, Co, A) pixels;
23) A set of three-component (R, G, B) pixels;
24) A set of three-component (Y, U, V) pixels;
25) A set of three-component (Y, Cb, Cr) pixels;
26) A set of three-component (Y, Cg, Co) pixels;
27) A set of three-component (H, S, V) pixels;
28) A set of three-component (H, S, L) pixels.
In the present invention, the coding mode includes one of the following coding modes or a combination thereof or a variant thereof:
1) Hybrid coding modes including intra prediction;
2) Hybrid coding modes including inter prediction;
3) Coding modes including a wavelet transform;
4) Coding modes including residual coding;
5) Coding modes including a matching approach;
6) Coding modes including block matching;
7) Coding modes including sub-block matching;
8) Coding modes including micro-block matching;
9) Coding modes including line matching;
10) Coding modes including string matching;
11) Coding modes including pixel-string matching;
12) Coding modes including sample-string matching;
13) Coding modes including index-string matching;
14) Coding modes including main-reference-buffer string matching;
15) Coding modes including secondary-reference-buffer string matching;
16) Coding modes including string prediction;
17) Coding modes including universal string prediction;
18) Coding modes including offset-string prediction;
19) Coding modes including coordinate-string prediction;
20) Coding modes including unmatched pixels;
21) Coding modes including unmatched pixel strings;
22) Coding modes including unpredictable pixels;
23) Coding modes including unpredictable pixel strings.
In the present invention, in the case where the data is a picture, a sequence of pictures, a video, or the like, the encoded block or the decoded block is a coded region or a decoded region of the picture, including at least one of: a full picture, a sub-picture of a picture, a macroblock, a largest coding unit LCU, a coding tree unit CTU, a coding unit CU, a sub-region of a CU, a prediction unit PU, a transform unit TU.
In the present invention, a primitive includes one or a combination of the following: a codec block, a sub-block, a micro-block, a string, a byte string, an alpha string, a pixel string, a sample string, an index string, a line, a match block, a match sub-block, a match micro-block, a match string, a match pixel string, a match sample string, a match index string, a match bar, a match line, an offset string, a coordinate string, an unpredictable pixel, a string of unpredictable pixels, or a coordinate.
Drawings
Fig. 1 is a schematic diagram of an encoding method or apparatus of the present invention.
Fig. 2 is a schematic diagram of a decoding method or apparatus of the present invention.
Detailed Description
The following describes the technical features of the present invention through specific embodiments. Other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure of this specification. The invention may also be implemented or applied through other, different specific embodiments, and the details of this specification may be modified or changed in various ways without departing from the spirit and scope of the present invention.
The following are further implementation details or variations of the present invention.
Embodiment or variant 1
In the encoding method or apparatus or the decoding method or apparatus, when the color space of the original input data or the color space of the reconstructed data is not the corresponding color space, color space conversion is used to convert between these color spaces, including one or a combination of the following conversions:
converting the color space of the original input data into the corresponding color space;
or converting the color space of the reconstructed data into the corresponding color space;
or converting the corresponding color space into the color space of the original input data;
or converting the corresponding color space into the color space of the reconstructed data.
Embodiment or variant 2
In the encoding method or apparatus or the decoding method or apparatus, data of one color space involved before and/or in the middle and/or after the encoding or decoding process of the current block is converted into data of other one or more color spaces using color space conversion.
Embodiment or variant 3
In the encoding method or apparatus or the decoding method or apparatus, the predetermined L coding modes include at least predetermined L1 Hybrid coding modes or variants thereof and predetermined L2 matching coding modes; the predetermined K color spaces include at least one or a combination of the following color spaces:
the (R, G, B) color space or variants thereof;
the (Y, U, V) color space or variants thereof;
the (Y, Cb, Cr) color space or variants thereof;
the (Y, Cg, Co) color space or variants thereof;
the (H, S, V) color space or variants thereof;
the (H, S, L) color space or variants thereof;
the (C, M, Y, K) color space or variants thereof;
the (R, G, B, A) color space or variants thereof;
the (Y, U, V, A) color space or variants thereof;
the (Y, Cb, Cr, A) color space or variants thereof;
the (Y, Cg, Co, A) color space or variants thereof.
Embodiment or variant 4
In the encoding method or apparatus or the decoding method or apparatus, the predetermined L coding modes include at least predetermined L1 prediction-and-transform-based Hybrid coding modes or variants thereof and predetermined L2 universal-string-prediction-based coding modes or variants thereof; the L1 coding modes are grouped into K1 groups, corresponding respectively to K1 predetermined color spaces; the L2 coding modes are grouped into K2 groups, corresponding respectively to K2 predetermined color spaces; the K1 predetermined color spaces include at least the (Y, U, V) color space or a variant thereof; the K2 predetermined color spaces include at least the (R, G, B) color space or a variant thereof.
Embodiment or variant 5
In the encoding method or apparatus or the decoding method or apparatus, the color space conversion is used to convert the predicted data (i.e., prediction sample data) and/or the matched data (i.e., copied reference data) and/or the compensated data and/or the reconstructed data generated in the encoding or decoding of the current block from one color space to another color space or spaces.
Embodiment or variant 6
In the encoding method or apparatus or the decoding method or apparatus, the predetermined K color spaces include the (R, G, B) color space or variants thereof and the (Y, U, V) color space or variants thereof; the color space conversion between the (R, G, B) color space or its variants and the (Y, U, V) color space or its variants is one of the following five sets of conversion formulas, or a combination or variant thereof:
the first set of equations is a forward transform of
t = 0.2126*R + 0.7152*G + 0.0722*B
Y = floor ((16 + 219 * t / (2 BitDepth –1)) * 2 BitDepth–8 + 0.5)
U = floor ((128 + 112 * (B – t) / ((2 BitDepth –1) * (1 – 0.0722))) * 2 BitDepth–8 + 0.5)
V = floor ((128 + 112 * (R – t) / ((2 BitDepth –1) * (1 – 0.2126))) * 2 BitDepth–8 + 0.5)
The inverse of the first set of equations is
t = floor ((Y - 16 * 2 BitDepth–8 ) * (2 BitDepth –1) / (219 * 2 BitDepth–8 ) + 0.5)
R = clamp [t + (Cr - 2 BitDepth–1 ) * (2 BitDepth –1) * (1-0.2126)/(112 * 2 BitDepth–8 ) + 0.5]
B = clamp [t + (Cb - 2 BitDepth–1 ) * (2 BitDepth –1) * (1-0.0722)/(112 * 2 BitDepth–8 ) + 0.5]
G = clamp [( t - 0.2126*R - 0.0722*B)/0.7152 + 0.5]
Where t is a floating point number, R, G, B, Y, U, V are integers, and the binary digit is BitDepth. floor (x) is the largest integer less than or equal to x, clamp [ x [ ]]First, the maximum of 0 and floor (x) is selected, and then the sum of the maximum and 2 is selected BitDepth -the smallest of 1;
the second set of equations is a positive transformation of
t = 0.2126*R + 0.7152*G + 0.0722*B
Y = floor (t)
U = floor (2 BitDepth–1 + 0.5 * (B – t) / (1 – 0.0722))
V = floor (2 BitDepth–1 + 0.5 * (R – t) / (1 – 0.2126))
The inverse of the second set of equations is
R = clamp [Y + (Cr - 2 BitDepth–1 ) * 2 * (1-0.2126) + 0.5]
B = clamp [Y + (Cb - 2 BitDepth–1 ) * 2 * (1-0.0722) + 0.5]
G = clamp [(Y - 0.2126*R - 0.0722*B)/0.7152 + 0.5]
The positive transformation formula of the third set of formula is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((R - B - 1) >> 1) + (1<<(BitDepth-1))
V = (G – Y) + (1<<(BitDepth-1))
The inverse of the third set of equations is
R = clip3(0, (1<<BitDepth) - 1, Y – V + U)
G = clip3(0, (1<<BitDepth) - 1, Y + V - (1<<(BitDepth-1)))
B = clip3(0, (1<<BitDepth) - 1, Y – V – U + (1<<BitDepth))
Wherein the content of the first and second substances,<<is a binary left-shift operation and is,>>is a binary right shift operation, clip3 (0, (1)<<BitDepth) -1, x) is the maximum of 0 and x, and then the sum is taken as 2 BitDepth -the smallest of 1;
the positive transformation formula of the fourth set of formula is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((G - B - 1) >> 1) + (1<<(BitDepth-1))
V = ((G - R - 1) >> 1) + (1<<(BitDepth-1))
The fourth set of equations has the inverse transformation formula of
G = clip3(0, (1<<BitDepth) - 1, Y + ((U + V - (1<<BitDepth)) >> 1))
B = clip3(0, (1<<BitDepth) - 1, G - 2U + (1<<BitDepth) - 1)
R = clip3(0, (1<<BitDepth) - 1, G - 2V + (1<<BitDepth) - 1)
The positive transformation formula of the fifth set of formula is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((Y - B) >> 1) + (1<<(BitDepth-1))
V = ((Y - R) >> 1) + (1<<(BitDepth-1))
The inverse transformation formula of the fifth set of formula is
G = clip3(0, (1<<BitDepth) - 1, Y + U + V - (1<<BitDepth))
B = clip3(0, (1<<BitDepth) - 1, Y - 2U + (1<<BitDepth))
R = clip3(0, (1<<BitDepth) - 1, Y - 2V + (1<<BitDepth))。
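The shift-based sets of formulas can be implemented directly with integer arithmetic. The sketch below implements the third set as written (with BitDepth = 8 assumed for illustration); note that this transform is not exactly invertible in general, since the right shifts discard low-order bits:

```python
BIT_DEPTH = 8
HALF = 1 << (BIT_DEPTH - 1)          # 128
MAX_VAL = (1 << BIT_DEPTH) - 1       # 255

def clip3(lo, hi, x):
    # First take max(lo, x), then the minimum of that and hi.
    return min(hi, max(lo, x))

def forward3(r, g, b):
    # Third set, forward transform.
    y = (g + ((r + b) >> 1) + 1) >> 1
    u = ((r - b - 1) >> 1) + HALF
    v = (g - y) + HALF
    return y, u, v

def inverse3(y, u, v):
    # Third set, inverse transform with clipping to the valid range.
    r = clip3(0, MAX_VAL, y - v + u)
    g = clip3(0, MAX_VAL, y + v - HALF)
    b = clip3(0, MAX_VAL, y - v - u + (1 << BIT_DEPTH))
    return r, g, b

y, u, v = forward3(100, 50, 30)      # -> (58, 162, 120)
r, g, b = inverse3(y, u, v)          # -> (100, 50, 32): R and G exact, B off by 2
```

The small error on B illustrates the lossy character of this set; the lossless conversions discussed earlier in the description must retain the bits that these shifts discard.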
Embodiment or variant 7
In the encoding method or apparatus or the decoding method or apparatus, a codec block has, in the compressed data stream, a direct, indirect, or mixed direct-and-indirect coding mode identification code, and the codec block is encoded and decoded using the corresponding color space according to the value of the identification code. A direct coding mode identification code consists of one or more bit strings (binary symbol strings) in the compressed data stream. An indirect coding mode identification code is a coding mode identification code derived from other codec parameters and/or other syntax elements of the compressed data stream, or a predetermined default value of the identification code. A mixed direct-and-indirect coding mode identification code is partially direct (i.e., composed of one or more bit strings in the compressed data stream) and partially indirect (i.e., derived from other codec parameters and/or other syntax elements of the compressed data stream, or from a predetermined default).
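A decoder-side sketch of the direct and indirect cases (illustrative only; the bit layout, the names, and the default value are assumptions, not the patent's syntax):

```python
def read_mode_id(bits, pos, other_syntax, default_id=0):
    """Return (mode_id, new_pos). A direct identification code is read as
    bits from the stream; an indirect one is derived from other codec
    parameters/syntax elements, or falls back to a predetermined default."""
    if other_syntax.get("mode_signalled", False):
        # Direct: the identification code is a bit string in the stream
        # (here, a hypothetical 2-bit fixed-length code).
        mode_id = (bits[pos] << 1) | bits[pos + 1]
        return mode_id, pos + 2
    if "derived_mode" in other_syntax:
        # Indirect: derived from other already-parsed syntax elements.
        return other_syntax["derived_mode"], pos
    # Indirect: predetermined default value of the identification code.
    return default_id, pos

mode, pos = read_mode_id([1, 0, 1], 0, {"mode_signalled": True})
# mode == 2 (bits '10'), pos == 2; the decoder then selects the color
# space corresponding to the group that mode 2 belongs to.
```

A mixed direct-and-indirect code would combine both branches, e.g. reading part of the code from the stream and deriving the rest from context.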
Claims (20)
1. An encoding method for data compression, comprising at least the steps of:
1) Grouping at least L (L > 1) predetermined coding modes into K (K > 1) groups according to the characteristics of the coding modes and their requirements on the data color space, the K groups respectively corresponding to K predetermined color spaces;
2) When a coding mode is used for coding a current coding block, selecting a corresponding color space according to a group to which the coding mode belongs, and coding the current coding block;
3) Generating a compressed data stream containing at least information usable to determine the identification code of the coding mode and hence the corresponding color space.
2. An encoding apparatus for data compression, comprising at least modules for performing the following functions and operations:
1) Grouping at least L (L > 1) predetermined coding modes into K (K > 1) groups according to the characteristics of the coding modes and their requirements on the data color space, the K groups respectively corresponding to K predetermined color spaces;
2) When a coding mode is used for coding a current coding block, selecting a corresponding color space according to a group to which the coding mode belongs, and coding the current coding block;
3) Generating a compressed data stream containing at least information usable to determine the identification code of the coding mode and hence the corresponding color space.
3. A decoding method for data compression, comprising at least the steps of:
1) Parsing a compressed data stream generated using at least L (L > 1) predetermined coding modes, obtaining at least information usable to determine the coding mode and its corresponding color space;
2) Decoding the current decoded block using, according to the coding-mode information, the corresponding one of at least K (K > 1) predetermined color spaces.
4. The decoding method according to claim 3, characterized in that the coding mode comprises one of the following coding modes or a combination or a variant thereof:
1) Hybrid coding mode including intra prediction;
2) Hybrid coding mode including inter-prediction;
3) A coding mode including wavelet transform;
4) A coding mode comprising residual coding;
5) A coding mode comprising a matching mode;
6) A coding mode comprising block matching;
7) A coding mode comprising sub-block matching;
8) A coding mode comprising micro-block matching;
9) A coding mode comprising line matching;
10) A coding mode comprising string matching;
11) A coding mode comprising pixel string matching;
12) A coding mode comprising sample string matching;
13) A coding mode comprising index string matching;
14) A coding mode comprising main reference buffer string matching;
15) A coding mode comprising secondary reference buffer string matching;
16) A coding mode comprising string prediction;
17) A coding mode comprising universal string prediction;
18) A coding mode comprising offset string prediction;
19) A coding mode comprising coordinate string prediction;
20) A coding mode comprising unmatched pixels;
21) A coding mode comprising unmatched pixel strings;
22) A coding mode comprising unpredictable pixels;
23) A coding mode comprising unpredictable pixel strings.
5. The decoding method according to claim 3, wherein:
the data involved in the data compression is image data or image sequence data or video data;
the decoding block is a decoding area of the image, and comprises at least one of the following: a full picture, a sub-picture of a picture, a macroblock, a largest coding unit LCU, a coding tree unit CTU, a coding unit CU, a sub-region of a CU, a sub-coding unit SubCU, a prediction unit PU, a sub-region of a PU, a sub-prediction unit SubPU, a transform unit TU, a sub-region of a TU, a sub-transform unit SubTU.
6. The decoding method according to claim 3, characterized by comprising the following functions and/or operations:
when the color space of the original input data or the color space of the reconstructed data is not said corresponding color space, interconversion between these color spaces is performed using a color space conversion comprising one or a combination of the following conversions:
converting the color space of the original input data into the corresponding color space
Or
Converting the color space of the reconstructed data into the corresponding color space
Or
Converting the corresponding color space into a color space of the original input data
Or
Converting the corresponding color space to a color space of the reconstructed data;
and/or
data in one color space involved before, during, and/or after the decoding process of the current block is converted into data in one or more other color spaces using color space conversion.
7. The decoding method according to claim 3, wherein said L predetermined coding modes comprise at least L1 predetermined hybrid coding modes or variants thereof and L2 predetermined matching coding modes; the K predetermined color spaces include at least one of, or a combination of, the following color spaces:
(R, G, B) color space or a variant thereof;
(Y, U, V) color space or a variant thereof;
(Y, Cb, Cr) color space or a variant thereof;
(Y, Cg, Co) color space or a variant thereof;
(H, S, V) color space or a variant thereof;
(H, S, L) color space or a variant thereof;
(C, M, Y, K) color space or a variant thereof;
(R, G, B, A) color space or a variant thereof;
(Y, U, V, A) color space or a variant thereof;
(Y, Cb, Cr, A) color space or a variant thereof;
(Y, Cg, Co, A) color space or a variant thereof.
8. The decoding method according to claim 3, wherein the L predetermined coding modes comprise at least L1 predetermined prediction-and-transform-based hybrid coding modes or variants thereof and L2 predetermined universal-string-prediction-based coding modes or variants thereof; the L1 coding modes are classified into K1 groups corresponding respectively to K1 predetermined color spaces; the L2 coding modes are classified into K2 groups corresponding respectively to K2 predetermined color spaces; the K1 predetermined color spaces include at least a (Y, U, V) color space or a variant thereof; the K2 predetermined color spaces include at least an (R, G, B) color space or a variant thereof.
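The grouping in claim 8 can be pictured as a simple dispatch table. The mode names and the exact two-group split below are illustrative assumptions; the claim only requires that hybrid (prediction-and-transform) modes map to a group whose color space includes (Y, U, V) and universal-string-prediction modes to one that includes (R, G, B):

```python
# Hypothetical mode-to-color-space table in the spirit of claim 8.
# Mode names are invented for this sketch, not taken from the patent.
MODE_GROUPS = {
    "intra_hybrid": "YUV",                   # prediction+transform group (K1)
    "inter_hybrid": "YUV",
    "universal_string_prediction": "RGB",    # string-prediction group (K2)
    "offset_string_prediction": "RGB",
}

def pick_color_space(mode_name):
    # Select the color space for the current block from its mode's group.
    try:
        return MODE_GROUPS[mode_name]
    except KeyError:
        raise ValueError(f"unknown coding mode: {mode_name}")
```

A decoder following this scheme would resolve the block's coding mode first, then decode the block entirely in the color space its group names.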
9. The decoding method according to claim 3, wherein the predicted data generated in decoding, i.e. prediction sample data, and/or the matched data, i.e. copied reference data, and/or the compensated data and/or the reconstructed data, of the current block are converted from one color space to another color space or spaces using color space conversion.
10. The decoding method according to claim 3, wherein the K predetermined color spaces include an (R, G, B) color space or a variant thereof and a (Y, U, V) color space or a variant thereof; the color space conversion between the (R, G, B) color space or its variants and the (Y, U, V) color space or its variants is one of, or a combination of, or a variant of, the following five sets of conversion formulas:
The forward transform of the first set of formulas is
t = 0.2126*R + 0.7152*G + 0.0722*B
Y = floor((16 + 219 * t / (2^BitDepth - 1)) * 2^(BitDepth-8) + 0.5)
U = floor((128 + 112 * (B - t) / ((2^BitDepth - 1) * (1 - 0.0722))) * 2^(BitDepth-8) + 0.5)
V = floor((128 + 112 * (R - t) / ((2^BitDepth - 1) * (1 - 0.2126))) * 2^(BitDepth-8) + 0.5)
The inverse transform of the first set of formulas is
t = floor((Y - 16 * 2^(BitDepth-8)) * (2^BitDepth - 1) / (219 * 2^(BitDepth-8)) + 0.5)
R = clamp[t + (Cr - 2^(BitDepth-1)) * (2^BitDepth - 1) * (1 - 0.2126) / (112 * 2^(BitDepth-8)) + 0.5]
B = clamp[t + (Cb - 2^(BitDepth-1)) * (2^BitDepth - 1) * (1 - 0.0722) / (112 * 2^(BitDepth-8)) + 0.5]
G = clamp[(t - 0.2126*R - 0.0722*B) / 0.7152 + 0.5]
where t is a floating-point number; R, G, B, Y, U and V are integers of bit depth BitDepth; Cb and Cr denote the chroma components U and V, respectively; floor(x) is the largest integer less than or equal to x; clamp[x] first takes the maximum of 0 and floor(x), and then takes the minimum of that value and 2^BitDepth - 1;
The forward transform of the second set of formulas is
t = 0.2126*R + 0.7152*G + 0.0722*B
Y = floor(t)
U = floor(2^(BitDepth-1) + 0.5 * (B - t) / (1 - 0.0722))
V = floor(2^(BitDepth-1) + 0.5 * (R - t) / (1 - 0.2126))
The inverse transform of the second set of formulas is
R = clamp[Y + (Cr - 2^(BitDepth-1)) * 2 * (1 - 0.2126) + 0.5]
B = clamp[Y + (Cb - 2^(BitDepth-1)) * 2 * (1 - 0.0722) + 0.5]
G = clamp[(Y - 0.2126*R - 0.0722*B) / 0.7152 + 0.5]
The forward transform of the third set of formulas is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((R - B - 1) >> 1) + (1<<(BitDepth-1))
V = (G - Y) + (1<<(BitDepth-1))
The inverse transform of the third set of formulas is
R = clip3(0, (1<<BitDepth) - 1, Y - V + U)
G = clip3(0, (1<<BitDepth) - 1, Y + V - (1<<(BitDepth-1)))
B = clip3(0, (1<<BitDepth) - 1, Y - V - U + (1<<BitDepth))
where << is a binary left-shift operation, >> is a binary right-shift operation, and clip3(0, (1<<BitDepth) - 1, x) first takes the maximum of 0 and x, and then takes the minimum of that value and (1<<BitDepth) - 1;
The forward transform of the fourth set of formulas is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((G - B - 1) >> 1) + (1<<(BitDepth-1))
V = ((G - R - 1) >> 1) + (1<<(BitDepth-1))
The inverse transform of the fourth set of formulas is
G = clip3(0, (1<<BitDepth) - 1, Y + ((U + V - (1<<BitDepth)) >> 1))
B = clip3(0, (1<<BitDepth) - 1, G - 2*U + (1<<BitDepth) - 1)
R = clip3(0, (1<<BitDepth) - 1, G - 2*V + (1<<BitDepth) - 1)
The forward transform of the fifth set of formulas is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((Y - B) >> 1) + (1<<(BitDepth-1))
V = ((Y - R) >> 1) + (1<<(BitDepth-1))
The inverse transform of the fifth set of formulas is
G = clip3(0, (1<<BitDepth) - 1, Y + U + V - (1<<BitDepth))
B = clip3(0, (1<<BitDepth) - 1, Y - 2*U + (1<<BitDepth))
R = clip3(0, (1<<BitDepth) - 1, Y - 2*V + (1<<BitDepth)).
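The first set of formulas in claim 10 (a floating-point, limited-range conversion with BT.709 luma weights) transcribes into Python as follows. The function names are ours, and Cb/Cr are taken to be the U/V produced by the forward transform, as the formulas themselves imply:

```python
import math

def clamp(x, bitdepth=8):
    # clamp[x]: max(0, floor(x)), then min with 2^BitDepth - 1.
    return min(max(0, math.floor(x)), (1 << bitdepth) - 1)

def rgb_to_yuv1(r, g, b, bitdepth=8):
    # Forward transform of the first set of formulas
    # (BT.709 weights, limited-range scaling).
    full = (1 << bitdepth) - 1
    scale = 1 << (bitdepth - 8)
    t = 0.2126 * r + 0.7152 * g + 0.0722 * b
    y = math.floor((16 + 219 * t / full) * scale + 0.5)
    u = math.floor((128 + 112 * (b - t) / (full * (1 - 0.0722))) * scale + 0.5)
    v = math.floor((128 + 112 * (r - t) / (full * (1 - 0.2126))) * scale + 0.5)
    return y, u, v

def yuv1_to_rgb(y, cb, cr, bitdepth=8):
    # Inverse transform of the first set of formulas.
    full = (1 << bitdepth) - 1
    scale = 1 << (bitdepth - 8)
    half = 1 << (bitdepth - 1)
    t = math.floor((y - 16 * scale) * full / (219 * scale) + 0.5)
    r = clamp(t + (cr - half) * full * (1 - 0.2126) / (112 * scale) + 0.5, bitdepth)
    b = clamp(t + (cb - half) * full * (1 - 0.0722) / (112 * scale) + 0.5, bitdepth)
    g = clamp((t - 0.2126 * r - 0.0722 * b) / 0.7152 + 0.5, bitdepth)
    return r, g, b
```

At 8-bit depth, (R, G, B) = (200, 100, 50) maps to (Y, Cb, Cr) = (117, 96, 174) and round-trips exactly for this input, though the limited-range quantisation does not guarantee exactness for every RGB triple.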
11. The decoding method according to claim 3, wherein said decoding block has, in said compressed data stream, a direct, an indirect, or a mixed direct-and-indirect coding mode identification code, and said decoding block is decoded using the corresponding color space according to the value of said identification code; the direct coding mode identification code consists of one or more bit strings, namely binary symbol strings, in the compressed data stream; the indirect coding mode identification code is derived from other decoding parameters and/or other syntax elements of the compressed data stream, or from a predetermined identification code default value; the mixed direct-and-indirect coding mode identification code is partially direct and partially indirect.
12. A decoding device for data compression, comprising at least the following modules for performing the following functions and operations:
1) Parsing a compressed data stream generated using at least L (L > 1) predetermined coding modes, obtaining at least information usable to determine the coding mode and its corresponding color space;
2) Decoding the current decoded block using, according to the coding-mode information, the corresponding one of at least K (K > 1) predetermined color spaces.
13. The decoding device according to claim 12, wherein the coding mode comprises one of the following coding modes or a combination or a variant thereof:
1) Hybrid coding mode including intra prediction;
2) Hybrid coding mode including inter-prediction;
3) A coding mode including a wavelet transform;
4) A coding mode comprising residual coding;
5) A coding mode comprising a matching mode;
6) A coding mode comprising block matching;
7) A coding mode comprising sub-block matching;
8) A coding mode comprising micro-block matching;
9) A coding mode comprising line matching;
10) A coding mode comprising string matching;
11) A coding mode comprising pixel string matching;
12) A coding mode comprising sample string matching;
13) A coding mode comprising index string matching;
14) A coding mode comprising main reference buffer string matching;
15) A coding mode comprising secondary reference buffer string matching;
16) A coding mode comprising string prediction;
17) A coding mode comprising universal string prediction;
18) A coding mode comprising offset string prediction;
19) A coding mode comprising coordinate string prediction;
20) A coding mode comprising unmatched pixels;
21) A coding mode comprising unmatched pixel strings;
22) A coding mode comprising unpredictable pixels;
23) A coding mode comprising unpredictable pixel strings.
14. The decoding apparatus according to claim 12, wherein:
the data involved in the data compression is image data or image sequence data or video data;
the decoding block is a decoding area of the image and comprises at least one of the following: a full picture, a sub-picture of a picture, a macroblock, a largest coding unit LCU, a coding tree unit CTU, a coding unit CU, a sub-region of a CU, a sub-coding unit SubCU, a prediction unit PU, a sub-region of a PU, a sub-prediction unit SubPU, a transform unit TU, a sub-region of a TU, a sub-transform unit SubTU.
15. The decoding apparatus according to claim 12, characterized by comprising the following functions and/or operations:
when the color space of the original input data or the color space of the reconstructed data is not the corresponding color space, interconversion between these color spaces is performed using a color space conversion comprising one or a combination of the following conversions:
converting the color space of the original input data into the corresponding color space
or
converting the color space of the reconstructed data into the corresponding color space
or
converting the corresponding color space into the color space of the original input data
or
converting the corresponding color space into the color space of the reconstructed data;
and/or
data in one color space involved before, during, and/or after the decoding process of the current block is converted into data in one or more other color spaces using color space conversion.
16. The decoding apparatus according to claim 12, wherein the L predetermined coding modes comprise at least L1 predetermined hybrid coding modes or variants thereof and L2 predetermined matching coding modes; the K predetermined color spaces include at least one of, or a combination of, the following color spaces:
(R, G, B) color space or a variant thereof;
(Y, U, V) color space or a variant thereof;
(Y, Cb, Cr) color space or a variant thereof;
(Y, Cg, Co) color space or a variant thereof;
(H, S, V) color space or a variant thereof;
(H, S, L) color space or a variant thereof;
(C, M, Y, K) color space or a variant thereof;
(R, G, B, A) color space or a variant thereof;
(Y, U, V, A) color space or a variant thereof;
(Y, Cb, Cr, A) color space or a variant thereof;
(Y, Cg, Co, A) color space or a variant thereof.
17. The decoding apparatus according to claim 12, wherein the L predetermined coding modes comprise at least L1 predetermined prediction-and-transform-based hybrid coding modes or variants thereof and L2 predetermined universal-string-prediction-based coding modes or variants thereof; the L1 coding modes are classified into K1 groups corresponding respectively to K1 predetermined color spaces; the L2 coding modes are classified into K2 groups corresponding respectively to K2 predetermined color spaces; the K1 predetermined color spaces include at least a (Y, U, V) color space or a variant thereof; the K2 predetermined color spaces include at least an (R, G, B) color space or a variant thereof.
18. Decoding device according to claim 12, characterized in that the predicted data, i.e. prediction sample data, and/or the matched data, i.e. copied reference data, and/or the compensated data and/or the reconstructed data, generated in decoding for the current block are transformed from one color space to the other color space or spaces using a color space transformation.
19. The decoding apparatus according to claim 12, wherein the K predetermined color spaces include an (R, G, B) color space or a variant thereof and a (Y, U, V) color space or a variant thereof; the color space conversion between the (R, G, B) color space or its variants and the (Y, U, V) color space or its variants is one of, or a combination of, or a variant of, the following five sets of conversion formulas:
The forward transform of the first set of formulas is
t = 0.2126*R + 0.7152*G + 0.0722*B
Y = floor((16 + 219 * t / (2^BitDepth - 1)) * 2^(BitDepth-8) + 0.5)
U = floor((128 + 112 * (B - t) / ((2^BitDepth - 1) * (1 - 0.0722))) * 2^(BitDepth-8) + 0.5)
V = floor((128 + 112 * (R - t) / ((2^BitDepth - 1) * (1 - 0.2126))) * 2^(BitDepth-8) + 0.5)
The inverse transform of the first set of formulas is
t = floor((Y - 16 * 2^(BitDepth-8)) * (2^BitDepth - 1) / (219 * 2^(BitDepth-8)) + 0.5)
R = clamp[t + (Cr - 2^(BitDepth-1)) * (2^BitDepth - 1) * (1 - 0.2126) / (112 * 2^(BitDepth-8)) + 0.5]
B = clamp[t + (Cb - 2^(BitDepth-1)) * (2^BitDepth - 1) * (1 - 0.0722) / (112 * 2^(BitDepth-8)) + 0.5]
G = clamp[(t - 0.2126*R - 0.0722*B) / 0.7152 + 0.5]
where t is a floating-point number; R, G, B, Y, U and V are integers of bit depth BitDepth; Cb and Cr denote the chroma components U and V, respectively; floor(x) is the largest integer less than or equal to x; clamp[x] first takes the maximum of 0 and floor(x), and then takes the minimum of that value and 2^BitDepth - 1;
The forward transform of the second set of formulas is
t = 0.2126*R + 0.7152*G + 0.0722*B
Y = floor(t)
U = floor(2^(BitDepth-1) + 0.5 * (B - t) / (1 - 0.0722))
V = floor(2^(BitDepth-1) + 0.5 * (R - t) / (1 - 0.2126))
The inverse transform of the second set of formulas is
R = clamp[Y + (Cr - 2^(BitDepth-1)) * 2 * (1 - 0.2126) + 0.5]
B = clamp[Y + (Cb - 2^(BitDepth-1)) * 2 * (1 - 0.0722) + 0.5]
G = clamp[(Y - 0.2126*R - 0.0722*B) / 0.7152 + 0.5]
The forward transform of the third set of formulas is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((R - B - 1) >> 1) + (1<<(BitDepth-1))
V = (G - Y) + (1<<(BitDepth-1))
The inverse transform of the third set of formulas is
R = clip3(0, (1<<BitDepth) - 1, Y - V + U)
G = clip3(0, (1<<BitDepth) - 1, Y + V - (1<<(BitDepth-1)))
B = clip3(0, (1<<BitDepth) - 1, Y - V - U + (1<<BitDepth))
where << is a binary left-shift operation, >> is a binary right-shift operation, and clip3(0, (1<<BitDepth) - 1, x) first takes the maximum of 0 and x, and then takes the minimum of that value and (1<<BitDepth) - 1;
The forward transform of the fourth set of formulas is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((G - B - 1) >> 1) + (1<<(BitDepth-1))
V = ((G - R - 1) >> 1) + (1<<(BitDepth-1))
The inverse transform of the fourth set of formulas is
G = clip3(0, (1<<BitDepth) - 1, Y + ((U + V - (1<<BitDepth)) >> 1))
B = clip3(0, (1<<BitDepth) - 1, G - 2*U + (1<<BitDepth) - 1)
R = clip3(0, (1<<BitDepth) - 1, G - 2*V + (1<<BitDepth) - 1)
The forward transform of the fifth set of formulas is
Y = (G + ((R + B) >> 1) + 1) >> 1
U = ((Y - B) >> 1) + (1<<(BitDepth-1))
V = ((Y - R) >> 1) + (1<<(BitDepth-1))
The inverse transform of the fifth set of formulas is
G = clip3(0, (1<<BitDepth) - 1, Y + U + V - (1<<BitDepth))
B = clip3(0, (1<<BitDepth) - 1, Y - 2*U + (1<<BitDepth))
R = clip3(0, (1<<BitDepth) - 1, Y - 2*V + (1<<BitDepth)).
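For comparison with the floating-point first set, the shift-based third set of formulas transcribes to integer-only Python (function names ours). It is cheap to compute but only near-lossless; in the example below, B reconstructs as 52 rather than 50:

```python
def clip3(lo, hi, x):
    # Clamp x into [lo, hi], as in the inverse transform.
    return max(lo, min(hi, x))

def rgb_to_yuv3(r, g, b, bitdepth=8):
    # Forward transform of the third set of formulas.
    y = (g + ((r + b) >> 1) + 1) >> 1
    u = ((r - b - 1) >> 1) + (1 << (bitdepth - 1))
    v = (g - y) + (1 << (bitdepth - 1))
    return y, u, v

def yuv3_to_rgb(y, u, v, bitdepth=8):
    # Inverse transform of the third set of formulas.
    hi = (1 << bitdepth) - 1
    r = clip3(0, hi, y - v + u)
    g = clip3(0, hi, y + v - (1 << (bitdepth - 1)))
    b = clip3(0, hi, y - v - u + (1 << bitdepth))
    return r, g, b
```

For (R, G, B) = (200, 100, 50) at 8-bit depth this yields (Y, U, V) = (113, 202, 115) and reconstructs to (200, 100, 52): exact in R and G, off by two in B.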
20. The decoding apparatus according to claim 12, wherein the decoding block has, in the compressed data stream, a direct, an indirect, or a mixed direct-and-indirect coding mode identification code, and the decoding block is decoded using the corresponding color space according to the value of the identification code; the direct coding mode identification code consists of one or more bit strings, namely binary symbol strings, in the compressed data stream; the indirect coding mode identification code is derived from other decoding parameters and/or other syntax elements of the compressed data stream, or from a predetermined identification code default value; the mixed direct-and-indirect coding mode identification code is partially direct and partially indirect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710410669.1A CN108989819B (en) | 2017-06-03 | 2017-06-03 | Data compression method and device adopting respective corresponding color spaces for modes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108989819A CN108989819A (en) | 2018-12-11 |
CN108989819B true CN108989819B (en) | 2023-04-07 |
Family
ID=64502706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710410669.1A Active CN108989819B (en) | 2017-06-03 | 2017-06-03 | Data compression method and device adopting respective corresponding color spaces for modes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108989819B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110312136A (en) * | 2019-05-10 | 2019-10-08 | 同济大学 | A kind of coding and decoding methods of pair of multi-component data |
CN112622276B (en) * | 2020-11-30 | 2022-08-09 | 北京恒创增材制造技术研究院有限公司 | Color model maximum color difference representation method for 3DP multi-color printing |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008271411A (en) * | 2007-04-24 | 2008-11-06 | Canon Inc | Image coding unit, control method of image coding unit, program and recording medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI413974B (en) * | 2008-10-16 | 2013-11-01 | Princeton Technology Corp | Method of eliminating blur on display |
CN103077396B (en) * | 2013-01-11 | 2016-08-03 | 上海电机学院 | The vector space Feature Points Extraction of a kind of coloured image and device |
CN103096092B (en) * | 2013-02-07 | 2015-12-02 | 上海国茂数字技术有限公司 | The method and system of encoding and decoding error correction is carried out based on color notation conversion space |
GB2521606A (en) * | 2013-12-20 | 2015-07-01 | Canon Kk | Method and apparatus for transition encoding in video coding and decoding |
MX2016012130A (en) * | 2014-03-19 | 2017-04-27 | Arris Entpr Inc | Scalable coding of video sequences using tone mapping and different color gamuts. |
GB2539488B8 (en) * | 2015-06-18 | 2020-08-19 | Gurulogic Microsystems Oy | Encoder, decoder and method employing palette utilization and compression |
US10225561B2 (en) * | 2015-10-08 | 2019-03-05 | Mediatek Inc. | Method and apparatus for syntax signaling in image and video compression |
US20170105012A1 (en) * | 2015-10-08 | 2017-04-13 | Mediatek Inc. | Method and Apparatus for Cross Color Space Mode Decision |
CN106657945B (en) * | 2016-12-30 | 2018-10-16 | 上海集成电路研发中心有限公司 | A kind of gamma correction implementation method of non-linear piecewise |
- 2017-06-03 CN CN201710410669.1A patent/CN108989819B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008271411A (en) * | 2007-04-24 | 2008-11-06 | Canon Inc | Image coding unit, control method of image coding unit, program and recording medium |
Also Published As
Publication number | Publication date |
---|---|
CN108989819A (en) | 2018-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107071450B (en) | Coding and decoding method and device for data compression | |
KR102071764B1 (en) | Picture coding and decoding methods and devices | |
CN110691244B (en) | Method and apparatus for binarization and context adaptive coding of syntax in video coding | |
JP6749922B2 (en) | Improved Palette Mode in High Efficiency Video Coding (HEVC) Screen Content Coding (SCC) | |
KR101868247B1 (en) | Image encoding and decoding method and device | |
CN104853211A (en) | Image compression method and apparatus employing various forms of reference pixel storage spaces | |
CN110087090B (en) | Data coding and decoding method adopting mixed string matching and intra-frame prediction | |
CN107770540B (en) | Data compression method and device for fusing multiple primitives with different reference relations | |
CN108989819B (en) | Data compression method and device adopting respective corresponding color spaces for modes | |
CN116803077A (en) | Residual and coefficient coding for video coding | |
CN112637600B (en) | Method and device for encoding and decoding data in a lossy or lossless compression mode | |
CN108989807A (en) | Each component uses the data compression method and device of respective corresponding data array format | |
CN112532990A (en) | String length parameter coding and decoding methods and devices | |
CN108989820B (en) | Data compression method and device adopting respective corresponding chroma sampling formats at all stages | |
CN107770543B (en) | Data compression method and device for sequentially increasing cutoff values in multiple types of matching parameters | |
CN113365074B (en) | Encoding and decoding method and device for limiting point prediction frequent position and point vector number thereof | |
CN113395515B (en) | Coding and decoding method and device for point prediction of component down-sampling format data | |
CN113518222B (en) | Coding and decoding method and device for different types of strings by adopting different length binarization schemes | |
CN117241032A (en) | Image adopting multiple sampling formats, sequence thereof, video compression method and device | |
CN112601086B (en) | String length parameter hybrid coding and decoding method and device | |
CN113938683A (en) | Coding and decoding method and device for point prediction chroma reconstruction value from multiple reference positions | |
CN112672160B (en) | Encoding and decoding method and device for fusing intra-frame block copy and string copy encoding parameters | |
CN108989800B (en) | Data compression method and apparatus for generating a compressed data byte stream in byte units | |
CN114245130A (en) | Data coding and decoding method and device for multiplexing point vector by using historical point prediction information table | |
Chithra | A Novel Lossy Image Compression Based On Color Prediction Technique |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||