CN108366242B - Video compression method for adaptively adjusting chroma distortion weight factor according to video content - Google Patents


Info

Publication number
CN108366242B
Authority
CN
China
Prior art keywords
coding
sub
mode
modes
image
Prior art date
Legal status
Active
Application number
CN201810184812.4A
Other languages
Chinese (zh)
Other versions
CN108366242A (en)
Inventor
赵利平
林涛
胡珂立
沈士根
周海平
肖立国
Current Assignee
Beijing Boya Ruishi Technology Co ltd
Original Assignee
University of Shaoxing
Priority date
Filing date
Publication date
Application filed by University of Shaoxing
Priority to CN201810184812.4A
Publication of CN108366242A
Application granted
Publication of CN108366242B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/48Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/643Hue control means, e.g. flesh tone control

Abstract

The invention relates to a video compression method for adaptively adjusting chroma distortion weight factors according to video content. The method comprises: dividing the video content into a plurality of categories according to its characteristics and predetermined classification conditions; presetting a plurality of coding modes and the coding sub-modes under each coding mode; matching coding sub-modes to each category of video content; matching a chroma distortion weight factor to each coding sub-mode; and encoding according to the category of the video content with the matched coding sub-mode and the matched chroma distortion weight factor. This approach enables the RDCost-based coding mode selection mechanism to select the optimal coding mode more reliably, effectively improving overall coding efficiency.

Description

Video compression method for adaptively adjusting chroma distortion weight factor according to video content
Technical Field
The invention relates to the technical field of video compression, in particular to a video compression method for adaptively adjusting chroma distortion weight factors according to video content.
Background
In recent years, as human society has entered the era of artificial intelligence, big data, virtual reality, augmented reality, mixed reality, cloud computing, mobile computing, cloud-mobile computing, and 4G/5G communication, multimedia data such as traditional video, surveillance video, entertainment media, social media, medical images, consumer-captured images and videos, and various new forms of data at ultra-high-definition (4K) and ultra-high-definition (8K) image resolutions has grown day by day and has become the main data form of Internet big data.
With the wide application of multimedia big data, compressing multimedia data with an ultra-high compression ratio and extremely high quality has become an indispensable technology. The widespread use of multimedia big data has also produced various new types of data sets.
The YUV color format, also known as the YCbCr color format, is a common format for video data sets and generally consists of three components: one luminance (luma) component and two chrominance (chroma) components. The YUV color format can be further subdivided into several sub-formats depending on whether the chrominance components are down-sampled: in the YUV4:4:4 pixel color format, 1 pixel consists of 1 Y component, 1 U component, and 1 V component; in the YUV4:2:2 pixel color format, 2 horizontally adjacent pixels consist of 2 Y components, 1 U component, and 1 V component; in the YUV4:2:0 pixel color format, 4 pixels arranged in a 2x2 spatial neighborhood consist of 4 Y components, 1 U component, and 1 V component. One component is generally represented by one 8- to 16-bit number. The YUV4:2:2 and YUV4:2:0 pixel color formats are obtained by down-sampling the chrominance components of the YUV4:4:4 pixel color format. A pixel component is also referred to as a pixel sample (sample) or simply a sample.
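As an illustration of these sub-formats (a sketch added for clarity, not part of the patent text; the function name is hypothetical), the following counts the Y, U, and V samples that cover a 2x2 block of pixels in each case:

```python
# Illustrative sketch (not from the patent): samples covering a 2x2 block of
# pixels for the three YUV sub-formats described above.

def samples_per_2x2_block(chroma_format: str) -> dict:
    """Return the number of Y, U and V samples covering a 2x2 pixel block."""
    if chroma_format == "4:4:4":      # every pixel keeps its own U and V
        return {"Y": 4, "U": 4, "V": 4}
    if chroma_format == "4:2:2":      # U, V shared by 2 horizontally adjacent pixels
        return {"Y": 4, "U": 2, "V": 2}
    if chroma_format == "4:2:0":      # U, V shared by a 2x2 group of pixels
        return {"Y": 4, "U": 1, "V": 1}
    raise ValueError(f"unknown chroma format: {chroma_format}")

print(samples_per_2x2_block("4:2:0"))   # {'Y': 4, 'U': 1, 'V': 1}
```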
In order to adapt to the different contents and properties of the regions within a frame in these various new types of data sets, the latest international video compression standard HEVC and the second-generation Chinese digital audio and video coding standard AVS2 divide a frame into sub-images, blocks of MxM pixels called coding units (CUs), and encode the frame CU by CU with the CU as the basic coding unit. Commonly used values of M are 8, 16, 32, and 64. Encoding a video image sequence is therefore the sequential encoding of the individual coding units. The CUs within a frame may have different sizes, some 8x8, some 64x64, and so on. So that CUs of different sizes can be spliced seamlessly, a frame is always first divided into largest coding units (LCUs) of identical size, NxN pixels, and each LCU is then further divided into several CUs of different sizes. For example, a frame is first divided into LCUs of 64x64 pixels (N = 64) of identical size. One LCU may consist of 3 CUs of 32x32 pixels and 4 CUs of 16x16 pixels, while another LCU consists of 2 CUs of 32x32 pixels, 3 CUs of 16x16 pixels, and 20 CUs of 8x8 pixels. Encoding a frame thus means encoding its CUs one by one.
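As a worked check of the two example partitions (a sketch added for clarity, not part of the patent text), the CU areas of each example sum exactly to the 64x64 LCU area:

```python
# Illustrative sketch (not from the patent): the two example LCU partitions in
# the text tile a 64x64 LCU exactly, which can be checked by summing CU areas.

def covers_lcu(cu_counts: dict, lcu_size: int = 64) -> bool:
    """cu_counts maps CU side length -> number of CUs of that size."""
    total_area = sum(count * side * side for side, count in cu_counts.items())
    return total_area == lcu_size * lcu_size

print(covers_lcu({32: 3, 16: 4}))          # True: 3*32^2 + 4*16^2 = 64^2
print(covers_lcu({32: 2, 16: 3, 8: 20}))   # True: 2*32^2 + 3*16^2 + 20*8^2 = 64^2
```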
Image regions in these various new types of data sets have different properties and need to be encoded with different modes (such as intra mode, inter mode, intra block copy (IBC) mode, pattern mode, and string matching (prediction) mode). Both HEVC and AVS2 employ a coding mode selection mechanism based on rate-distortion cost (RDcost) to adaptively select the optimal coding mode for different image regions. Let m_i denote a coding mode, S(m) the set of candidate coding modes, and m* the best mode currently selected for the CU; let D(m_i) and R(m_i) denote, respectively, the distortion and the number of bits obtained by coding the current CU with mode m_i. The RDcost-based coding mode selection mechanism traverses every m_i in S(m), codes the current CU with m_i to obtain its RDcost value, and selects the mode with the minimum RDcost as m*. One example of the RDcost calculation and of the RDcost-based coding mode selection mechanism is:
RDcost(m_i) = D(m_i) + λ·R(m_i),   m* = arg min over m_i ∈ S(m) of RDcost(m_i)    (1)
where λ is a weight factor between the distortion and the number of bits, and D(m_i) is generally computed as the sum of absolute differences (SAD) or the sum of squared errors (SSE) over the three components of the pixels.
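A minimal sketch of this selection mechanism (added for clarity, not taken from any reference encoder; `encode_cu` is a hypothetical callable that returns the distortion and bit count of coding the CU with a given mode):

```python
# Illustrative sketch (not from any reference software): the RDcost-based mode
# selection of equation (1).

def select_best_mode(cu, candidate_modes, lam, encode_cu):
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:          # traverse all m_i in S(m)
        distortion, bits = encode_cu(cu, mode)
        rdcost = distortion + lam * bits  # RDcost(m_i) = D(m_i) + lambda * R(m_i)
        if rdcost < best_cost:
            best_mode, best_cost = mode, rdcost
    return best_mode, best_cost           # best_mode plays the role of m*
```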
Let w and h denote the width and height of the currently coded CU, let L_{x,y} and PL_{x,y} denote the luminance component and its prediction value for the pixel at coordinates (x, y) in the currently coded CU, and let C_{x,y} = (U_{x,y}, V_{x,y}) and PC_{x,y} = (PU_{x,y}, PV_{x,y}) denote the two chrominance components and their prediction values for that pixel. Let α denote the chroma distortion weight factor. One example of the calculation of D(m_i), using SSE, is:
D(m_i) = Σ_{x=0..w-1} Σ_{y=0..h-1} [ (L_{x,y} - PL_{x,y})² + α·( (U_{x,y} - PU_{x,y})² + (V_{x,y} - PV_{x,y})² ) ]    (2)
where the prediction value PL_{x,y} of the luminance component and the prediction values PU_{x,y}, PV_{x,y} of the two chrominance components generally vary with the coding mode m_i; different coding modes may produce different prediction values.
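A minimal sketch of the weighted distortion of equation (2), assuming SSE and assuming `orig` and `pred` hold the Y, U, and V planes of the current CU as lists of rows (added for clarity, not part of the patent text):

```python
# Illustrative sketch: SSE over the luma plane plus alpha times the SSE over
# both chroma planes, as in the reconstructed equation (2).

def weighted_sse_distortion(orig, pred, alpha):
    """orig and pred are dicts with keys "Y", "U", "V" mapping to w x h planes."""
    def sse(a, b):
        return sum((ax - bx) ** 2
                   for row_a, row_b in zip(a, b)
                   for ax, bx in zip(row_a, row_b))
    luma_sse = sse(orig["Y"], pred["Y"])
    chroma_sse = sse(orig["U"], pred["U"]) + sse(orig["V"], pred["V"])
    return luma_sse + alpha * chroma_sse
```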
To further improve coding efficiency, a coding mode m_i may have multiple sub-modes m_i1, m_i2, …, m_in. For example, the inter mode includes skip/direct sub-modes with 2N×2N and N×N partitions and the inter prediction sub-modes: a non-partition sub-mode (2N×2N), a horizontal partition sub-mode (2N×N), a vertical partition sub-mode (N×2N), irregular (asymmetric) partition sub-modes (2N×nU, 2N×nD, nL×2N, nR×2N), and a quad-partition sub-mode (N×N).
In existing data compression technology, the distortion calculation uses the same chroma distortion weight factor for all coding modes; that is, in formula (2), α is a constant for every m_i and does not vary with the coding mode m_i.
For the many new types of data encountered in various new applications, different image regions have different properties, are coded with different modes, and suffer different degrees of distortion. These distortions affect the mode selection of subsequent coding units to different degrees, and using a constant chroma distortion weight factor between the luminance and chrominance components across different modes, and even sub-modes, prevents the RDCost-based mode selection mechanism from reliably selecting the best mode, resulting in low compression efficiency.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a video compression method that adaptively adjusts the chroma distortion weight factor according to the video content, so that the RDcost-based coding mode selection mechanism can select the optimal coding mode more reliably and the overall coding efficiency is effectively improved.
In order to achieve the above object, the present invention has the following configurations:
the video compression method for adaptively adjusting the chroma distortion weight factor according to the video content comprises the following steps:
dividing the video content into a plurality of categories according to the characteristics of the video content and preset classification conditions;
presetting a plurality of coding modes and coding sub-modes under each coding mode;
matching coding sub-modes for each category of the video content;
matching chroma distortion weight factors for each coding sub-mode;
and respectively adopting the matched coding sub-mode and the matched chroma distortion weight factor to code according to the category of the video content.
Optionally, ci represents a category of video content, 1 ≤ i ≤ n, n ≥ 2; α_cimjk represents the chroma distortion weight factor corresponding to category ci, coding mode mj (1 ≤ j ≤ J), and sub-mode mjk of mode mj (1 ≤ k ≤ Kj);
the method for dividing the video content into a plurality of categories according to the characteristics of the video content and the preset classification conditions comprises the following steps:
when the characteristics of the coding unit meet the predetermined i-th condition, the current coding unit is classified into category ci.
Optionally, the data in the video includes one or a combination of the following:
one-dimensional data; two-dimensional data; multidimensional data; a graph; dimension division graphics; an image; a sequence of images; video; a three-dimensional scene; a sequence of continuously changing three-dimensional scenes; a virtual reality scene; a sequence of scenes of continuously changing virtual reality; an image in the form of pixels; transform domain data of the image;
a set of bytes in two or more dimensions; a set of bits in two or more dimensions; a set of pixels; a set of three-component pixels Y, U, V; a set of three-component pixels Y, Cb, Cr; a set of three-component pixels Y, Cg, Co;
the chrominance component is one of:
u, V two components;
or the first chroma difference and the second chroma difference;
or the first chroma and the second chroma;
or the chroma difference blue and the chroma difference red;
or the chroma difference green and the chroma difference orange;
when the data is a picture, a sequence of pictures, a video, the video content comprises an encoded block or a decoded block, the encoded block or decoded block comprising at least one of: a group of pictures, a predetermined number of pictures, a picture, a frame of picture, a field of picture, a sub-picture of a picture, a slice, a macroblock, a maximum coding unit, a coding tree unit, a coding unit, a sub-region, a prediction unit, a transform unit.
Optionally, the characteristics of the video content include one or a combination of the following:
texture of the code block;
or a pixel value of the encoded block;
or the total number of different pixel values of the coding block;
or histogram distribution of the coding blocks;
or the frequency of the pixel value or component value;
or the prediction direction of the coding block;
or a reference (quote) frequency of the coding block or pixel value;
or the degree of importance of the encoded block or pixel value;
or the coding mode of an adjacent image coding block having the same image coordinate position as the coding block;
or the coding mode of an adjacent image coding block having an adjacent image coordinate position to the coding block;
Or an average of the characteristics of the coded blocks;
or a maximum value of a characteristic of the encoded block;
or a minimum of a characteristic of the coded block;
or a sum of features of the encoded blocks;
or a difference in characteristics of the encoded blocks;
or a mean square error of a characteristic of the coded block;
or the proportion of the features of the coding blocks to the whole;
or the eigenvalue or eigenvector of a two-dimensional matrix formed by the pixel values of the coding blocks;
or the number of encoded blocks that satisfy one or a set of predetermined conditions;
or the difference of the characteristics of the current coded image and the adjacent frame image;
or the difference between the characteristics of the current coded image and the previous frame image;
or the number of coding units with the difference value of the characteristics of the current coding image and the adjacent frame image being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding image and the previous frame image as a preset value;
or the number of the CUs of which the difference value of the characteristics of the coding tree units at the same position of the current coding tree unit and the previous frame image is a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the adjacent position coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the left coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the right coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the upper coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the coding tree unit at the upper left as a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the coding tree unit at the upper right as a preset value;
or the difference between the maximum characteristic value and the minimum characteristic value of the current coding block;
or the characteristic value of the current coding block is the proportion of the number of preset values;
or the characteristic value of the current coding block is the number of the maximum n characteristic values or the proportion of the number to the total number;
or the characteristic value of the current coding block is the number of the minimum n characteristic values or the proportion of the number to the total number;
or the characteristic value of the current coding block is the number of n characteristic values with the maximum number or the proportion of the number to the total number;
or the number of the n characteristic values of the current coding block with the smallest count, or the proportion of this number to the total number.
Optionally, the predetermined condition is one or a combination of the following conditions:
the characteristic value of the video content is larger than a preset value;
or the characteristic value of the video content is less than a preset value;
or the characteristic value of the video content is equal to a preset value;
or the characteristic of the video content takes a value between two predetermined values.
Optionally, the coding mode includes one or a combination of the following coding modes:
an intra-coding mode;
or an inter-coding mode;
or an IBC mode;
or a pattern mode;
or a string matching (prediction) mode;
the encoding sub-mode comprises one or a combination of the following encoding sub-modes:
intra-frame coding sub-mode;
or an inter-frame coding sub-mode;
or an IBC sub-mode;
or a pattern sub-mode;
or a string matching (prediction) sub-mode;
the inter-coding sub-mode comprises one or a combination of the following coding sub-modes:
skip sub-mode;
or Direct sub-mode;
or a non-partition sub-mode;
or a partition sub-mode;
or an F_Skip_Temporal sub-mode;
or other F_Skip sub-modes.
Optionally, the encoding is performed by respectively using the matched encoding sub-mode and the matched chroma distortion weighting factor according to the category of the video content, and the encoding includes:
for all coding modes or sub-modes of the coding modes, different types of video contents correspond to different chroma distortion weight factors;
or for a predetermined part of coding modes or sub-modes of the coding modes, the video contents of different categories all correspond to different chroma distortion weight factors, and for other coding modes or sub-modes of the coding modes, the video contents of different categories all correspond to the same chroma distortion weight factors;
or for a predetermined part of coding modes, dividing the sub-modes of each coding mode into a plurality of groups, wherein each sub-mode of the same group has the same chroma distortion weight factor;
or all sub-modes of the same coding mode have the same chroma distortion weighting factor for a predetermined fraction of the coding modes.
Optionally, the encoding is performed by respectively using the matched encoding sub-mode and the matched chroma distortion weighting factor according to the category of the video content, and the encoding includes:
traversing all coding modes mj except one predetermined coding mode ms, wherein 1 ≤ j ≤ J and j ≠ s, with the chroma distortion weight factors of the coding mode mj and all of its coding sub-modes being α_mj;
dividing the sub-modes of the predetermined coding mode ms into three groups, with the chroma distortion weight factors of the sub-modes as follows:
if the current coding block is classified into category ci, setting the chroma distortion weight factor to α_cims1 if the sub-mode belongs to the first group;
setting the chroma distortion weight factor to α_cims2 if the sub-mode belongs to the second group;
setting the chroma distortion weight factor to α_cims3 if the sub-mode belongs to the third group.
Optionally, the encoding modes include intra mode, inter mode, and string prediction mode;
the coding sub-modes of the inter mode are divided into three groups: the first group comprises the F_Skip_Temporal sub-mode, the second group comprises the other F_Skip sub-modes, and the third group comprises the other inter sub-modes;
the manner of classifying the video content includes:
calculating the number of coding units for which the difference between the current coded image and the previous frame image is zero, denoted zeroSAD8x8;
dividing coded images into a complete matching image, a basic matching image, and a scene switching image according to zeroSAD8x8;
dividing the current coding maximum coding unit into a local static maximum coding unit or a local motion maximum coding unit according to whether the current coding maximum coding unit and an adjacent available maximum coding unit (the left, right, or upper maximum coding unit) are respectively identical to the co-located maximum coding unit and adjacent available maximum coding unit of the previous frame;
video content is divided into the following categories:
the first category: the current coding image is a complete matching image and the current coding maximum coding unit is divided into a local static maximum coding unit;
in the second category: the current coding image is a basic matching image and the current coding maximum coding unit is divided into a local static maximum coding unit;
in the third category: the current coding image is a scene switching image and the current coding maximum coding unit is divided into a local static maximum coding unit;
the fourth category: the current coding image is a complete matching image and the current coding maximum coding unit is divided into a local motion maximum coding unit;
in the fifth category: maximum coding unit for other cases.
The corresponding relation comprises:
the chroma distortion weight factor of the intra prediction mode is set to α_m1, independent of the video content;
the chroma distortion weight factor of the string prediction mode is set to α_m2, independent of the video content;
the chroma distortion weight factor of each sub-mode m3k of the inter coding mode m3 is set according to the video content as follows:
if the current coding block is classified into category ci, the chroma distortion weight factors of the first group of sub-modes are all α_cim31, those of the second group of sub-modes are all α_cim32, and those of the third group of sub-modes are all α_cim33.
Optionally, the way to classify coded images into complete matching images, basic matching images, and scene switching images according to zeroSAD8x8 is:
if the value of zeroSAD8x8 is greater than or equal to threshold 1, the encoded image is a fully matched image;
otherwise, if the value of the zeroSAD8x8 is greater than or equal to the threshold value 2, the coded image is a basic matching image;
otherwise, the coded image is a scene change image.
The invention adopts a video compression method that adaptively adjusts the chroma distortion weight factor according to the video content, encoding with a distortion calculation that adapts the chroma distortion weight factor to the characteristics of the video content. Different chroma distortion weight factors are used for different modes and their sub-modes according to those characteristics. The RDCost-based coding mode selection mechanism can therefore select the optimal coding mode more reliably, and the overall coding efficiency is effectively improved.
Drawings
Fig. 1 is a flowchart of a video compression method for adaptively adjusting chroma distortion weighting factors according to video content according to the present invention.
Detailed Description
In order to more clearly describe the technical contents of the present invention, the following further description is given in conjunction with specific embodiments.
The most basic distinguishing technical feature of this coding method lies in the distortion calculation of the RDcost-based coding mode selection mechanism: different chroma distortion weight factors α are adaptively applied to different modes and their sub-modes according to the video content characteristics of each coding block. That is, α in equation (2) is not a constant but a variable α_cimj (or α_cimjk) determined by the video content category ci (1 ≤ i ≤ n) and the coding mode mj, 1 ≤ j ≤ J (or its sub-mode mjk, 1 ≤ j ≤ J, 1 ≤ k ≤ Kj).
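One way to realize such a variable α is a lookup table keyed by category, mode, and sub-mode; the sketch below is an illustration with placeholder keys and values, not the patent's implementation:

```python
# Illustrative sketch (not the patent's implementation): the chroma distortion
# weight factor as a lookup keyed by (content category, coding mode, sub-mode)
# instead of a single constant. All keys and numeric values are placeholders.

ALPHA_TABLE = {
    # (category, mode, sub-mode) -> alpha_{ci, mj, mjk}
    ("c1", "inter", "skip"): 0.8,
    ("c1", "inter", "2NxN"): 1.0,
    ("c2", "inter", "skip"): 1.4,
    ("c2", "inter", "2NxN"): 1.0,
}
DEFAULT_ALPHA = 1.0

def chroma_weight(category, mode, sub_mode):
    return ALPHA_TABLE.get((category, mode, sub_mode), DEFAULT_ALPHA)
```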
As shown in fig. 1, according to an aspect of the present invention, there is provided an encoding method for data compression of a video data set including a chrominance component, comprising the steps of:
classification of video content: according to different characteristics of video content, dividing the video content into different categories;
establishing the correspondence among the category of the video content, the coding mode or its sub-mode, and the chroma distortion weight factor: for the video content being encoded, when calculating the distortion term of the RDcost of a coding mode or of one of its sub-modes, use the chroma distortion weight factor corresponding to the category to which the video content belongs and to that coding mode or sub-mode.
Preferably, let c1, c2, …, cn (n ≥ 2) denote the n categories of video content; let α_cimjk denote the chroma distortion weight factor corresponding to category ci (1 ≤ i ≤ n), mode mj (1 ≤ j ≤ J), and sub-mode mjk of mode mj (1 ≤ k ≤ Kj, where Kj ≥ 1; when Kj = 1 there is only one sub-mode, i.e. mode mj is identical to its sub-mode mj1).
For category c1:
the chroma distortion weight factors of sub-modes m11, m12, …, m1k, … of mode m1 are α_c1m11, α_c1m12, …, α_c1m1k, …, respectively;
the chroma distortion weight factors of sub-modes m21, m22, …, m2k, … of mode m2 are α_c1m21, α_c1m22, …, α_c1m2k, …, respectively;
and so on, until the chroma distortion weight factors of sub-modes mJ1, mJ2, …, mJk, … of mode mJ are α_c1mJ1, α_c1mJ2, …, α_c1mJk, …, respectively.
For category c2:
the chroma distortion weight factors of sub-modes m11, m12, …, m1k, … of mode m1 are α_c2m11, α_c2m12, …, α_c2m1k, …, respectively;
the chroma distortion weight factors of sub-modes m21, m22, …, m2k, … of mode m2 are α_c2m21, α_c2m22, …, α_c2m2k, …, respectively;
and so on, until the chroma distortion weight factors of sub-modes mJ1, mJ2, …, mJk, … of mode mJ are α_c2mJ1, α_c2mJ2, …, α_c2mJk, …, respectively.
By analogy,
for category cn:
the chroma distortion weight factors of sub-modes m11, m12, …, m1k, … of mode m1 are α_cnm11, α_cnm12, …, α_cnm1k, …, respectively;
the chroma distortion weight factors of sub-modes m21, m22, …, m2k, … of mode m2 are α_cnm21, α_cnm22, …, α_cnm2k, …, respectively;
and so on, until the chroma distortion weight factors of sub-modes mJ1, mJ2, …, mJk, … of mode mJ are α_cnmJ1, α_cnmJ2, …, α_cnmJk, …, respectively.
Among α_c1m11, α_c1m12, …, α_c1m1k, …, α_c1m21, α_c1m22, …, α_c1m2k, …, α_c1mJ1, α_c1mJ2, …, α_c1mJk, …, α_cnm11, α_cnm12, …, α_cnm1k, …, α_cnm21, α_cnm22, …, α_cnm2k, …, α_cnmJ1, α_cnmJ2, …, α_cnmJk, …, at least two are unequal.
The invention is suitable for coding and decoding data by lossy compression. The invention is applicable to the encoding of image or video data containing chrominance components.
In the present invention, the data involved in data compression includes one or a combination of the following types of data:
one-dimensional data; two-dimensional data; multidimensional data; a graph; dimension division graphics; an image; a sequence of images; video; a three-dimensional scene; a sequence of continuously changing three-dimensional scenes; a virtual reality scene; a sequence of scenes of continuously changing virtual reality; an image in the form of pixels; transform domain data of the image;
a set of bytes in two or more dimensions; a set of bits in two or more dimensions; a set of pixels; a set of three-component pixels (Y, U, V); a set of three-component pixels (Y, Cb, Cr); a set of three-component pixels (Y, Cg, Co).
In the present invention, in the case where the data is a picture, a sequence of pictures, a video, or the like, the encoded block or the decoded block is a coded region or a decoded region of the picture, including at least one of: a group of pictures, a predetermined number of pictures, a picture, a frame of a picture, a field of a picture, a sub-picture of a picture, a slice, a macroblock, a maximum coding unit LCU, a coding tree unit CTU, a coding unit CU, a sub-region of a CU, a prediction unit PU, a transform unit TU.
In this embodiment, the chrominance component is one of the following cases:
u, V two components;
or two components of chroma difference 1 and chroma difference 2;
or two components of chroma 1 and chroma 2;
or the chroma difference blue and the chroma difference red;
or two components of chromaticity difference green and chromaticity difference orange.
In this embodiment, the different characteristics of the video content include one or a combination of the following:
texture of the code block;
or a pixel value of the encoded block;
or the total number of different pixel values of the coding block;
or histogram distribution of the coding blocks;
or the frequency of the pixel value or component value;
or the prediction direction of the coding block;
or a reference (quote) frequency of the coding block or pixel value;
or the degree of importance of the encoded block or pixel value;
or the coding mode of an adjacent image coding block having the same image coordinate position as the coding block;
or the coding mode of an adjacent image coding block having an adjacent image coordinate position to the coding block;
Or an average of the characteristics of the coded blocks;
or a maximum value of a characteristic of the encoded block;
or a minimum of a characteristic of the coded block;
or a sum of features of the encoded blocks;
or a difference in characteristics of the encoded blocks;
or a mean square error of a characteristic of the coded block;
or the proportion of the features of the coding blocks to the whole;
or the eigenvalue or eigenvector of a two-dimensional matrix formed by the pixel values of the coding blocks;
or the number of encoded blocks that satisfy one or a set of predetermined conditions.
In this embodiment, the different characteristics of the video content include one or a combination of the following:
difference values of features of the current coded image and the adjacent frame image;
or the difference between the characteristics of the current coded image and the previous frame image;
or the number of CUs of which the difference value of the characteristics of the current coding image and the adjacent frame image is a preset value;
or the number of CUs of which the difference value of the characteristics of the current coding image and the previous frame image is a preset value;
or the number of the CUs with the CTU characteristic difference value of the same position of the current coding CTU and the previous frame image as a preset value;
or the number of CUs with the difference value between the current coding CTU and the adjacent position CTU as a preset value;
or the number of the CUs with the difference value between the current coding CTU and the left CTU as a preset value;
or the number of the CUs with the difference value between the current coding CTU and the right CTU as a preset value;
or the number of the CUs with the difference value between the current coding CTU and the upper CTU as a preset value;
or the number of the CUs with the difference value between the current coding CTU and the CTU characteristics at the upper left as a preset value;
or the number of the CUs with the difference value between the current coding CTU and the CTU at the upper right as a preset value;
or the difference between the maximum characteristic value and the minimum characteristic value of the current coding block;
or the characteristic value of the current coding block is the proportion of the number of preset values;
or the characteristic value of the current coding block is the number of the maximum n characteristic values or the proportion of the number to the total number;
or the characteristic value of the current coding block is the number of the minimum n characteristic values or the proportion of the number to the total number;
or the characteristic value of the current coding block is the number of n characteristic values with the maximum number or the proportion of the number to the total number;
or the number of the n characteristic values of the current coding block with the smallest count, or the proportion of this number to the total number.
In this embodiment, the video content is classified into different categories according to different characteristics of the video content as follows:
if the characteristics of the video content meet a preset condition 1, classifying the current CU into a category 1;
if the characteristics of the video content meet a preset condition 2, classifying the current CU into a category 2;
by the way of analogy, the method can be used,
and if the characteristics of the video content meet a predetermined condition n (n ≥ 2), the current CU is classified into category n.
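A minimal sketch of this classification step (added for illustration; the predicates and the feature name `zero_sad_ratio` are hypothetical placeholders):

```python
# Illustrative sketch: classify the current CU into category i when its features
# satisfy the i-th predetermined condition, mirroring the scheme above.

def classify_cu(features, conditions):
    """`conditions` is an ordered list of predicates; the first one satisfied
    decides the (1-based) category."""
    for i, condition in enumerate(conditions, start=1):
        if condition(features):
            return i
    return len(conditions)   # fall back to the last category

# Example with two placeholder conditions on a hypothetical feature value.
conditions = [lambda f: f["zero_sad_ratio"] >= 0.99,
              lambda f: f["zero_sad_ratio"] >= 0.5]
print(classify_cu({"zero_sad_ratio": 0.7}, conditions))   # -> 2
```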
In this embodiment, the predetermined condition is one or a combination of the following:
the characteristic value of the video content is larger than a preset value
Or the characteristic value of the video content is less than a predetermined value
Or the characteristic value of the video content is equal to a predetermined value
Or the characteristic of the video content takes a value between two predetermined values.
In this embodiment, the coding modes include one or a combination of the following coding modes:
an intra-coding mode;
or an inter-coding mode;
or an IBC mode;
or a pattern mode;
or a string matching (prediction) mode.
In this embodiment, the sub-modes include one or a combination of the following sub-modes:
intra-frame coding sub-mode;
or an inter-frame coding sub-mode;
or an IBC sub-mode;
or a pattern sub-mode;
or a string matching (prediction) sub-mode.
In this embodiment, the inter-coding sub-mode includes one or a combination of the following sub-modes:
skip sub-mode;
or Direct sub-mode;
or a non-partition sub-mode;
or a partition sub-mode;
or an F_Skip_Temporal sub-mode;
or other F_Skip sub-modes.
In one embodiment, the correspondence is one of the following or a reasonable combination thereof:
for all coding modes or sub-modes of the coding modes, different types of video contents correspond to different chroma distortion weight factors;
or for a predetermined part of coding modes or sub-modes of the coding modes, the video contents of different categories all correspond to different chroma distortion weight factors, and for other coding modes or sub-modes of the coding modes, the video contents of different categories all correspond to the same chroma distortion weight factors;
or for a predetermined part of coding modes, dividing the sub-modes of each coding mode into a plurality of groups, wherein each sub-mode of the same group has the same chroma distortion weight factor;
or all sub-modes of the same coding mode have the same chroma distortion weighting factor for a predetermined fraction of the coding modes.
In this embodiment, the correspondence relationship has the following form:
If the current coding block is classified into category 1 (c1), traverse all coding modes mj (1 ≤ j ≤ J) and all sub-modes mjk of each coding mode mj (1 ≤ k ≤ Kj); the chroma distortion weight factor is α_c1mjk.
If the current coding block is classified into category 2 (c2), traverse all coding modes mj (1 ≤ j ≤ J) and all sub-modes mjk of each coding mode mj (1 ≤ k ≤ Kj); the chroma distortion weight factor is α_c2mjk.
By analogy,
if the current coding block is classified into category n (cn), traverse all coding modes mj (1 ≤ j ≤ J) and all sub-modes mjk of each coding mode mj (1 ≤ k ≤ Kj); the chroma distortion weight factor is α_cnmjk.
In another embodiment, the correspondence has the following form:
Traverse all coding modes mj except one predetermined coding mode ms (1 ≤ j ≤ J and j ≠ s); the chroma distortion weight factor of each such coding mode and all of its sub-modes is α_mj.
The chroma distortion weight factors of the predetermined coding mode ms and its sub-modes are:
if the current coding unit is classified into category 1 (c1), traverse all sub-modes msk of the predetermined coding mode (1 ≤ k ≤ Ks); the chroma distortion weight factor is α_c1msk;
if the current coding unit is classified into category 2 (c2), traverse all sub-modes msk of the predetermined coding mode (1 ≤ k ≤ Ks); the chroma distortion weight factor is set to α_c2msk;
by analogy,
if the current coding unit is classified into category n (cn), traverse all sub-modes msk of the predetermined coding mode (1 ≤ k ≤ Ks); the chroma distortion weight factor is set to α_cnmsk.
In another embodiment, the correspondence has the following form:
Traverse all coding modes mj except one predetermined coding mode ms (1 ≤ j ≤ J and j ≠ s); the chroma distortion weight factors of each such coding mode and all of its sub-modes are α_mj.
The sub-modes of the predetermined coding mode ms are divided into three groups, and their chroma distortion weight factors are:
if the current coding block is classified into category 1 (c1):
set the chroma distortion weight factor to α_c1ms1 if the sub-mode belongs to the first group;
set the chroma distortion weight factor to α_c1ms2 if the sub-mode belongs to the second group;
set the chroma distortion weight factor to α_c1ms3 if the sub-mode belongs to the third group.
If the current coding block is classified into category 2 (c2), further determine:
set the chroma distortion weight factor to α_c2ms1 if the sub-mode belongs to the first group;
set the chroma distortion weight factor to α_c2ms2 if the sub-mode belongs to the second group;
set the chroma distortion weight factor to α_c2ms3 if the sub-mode belongs to the third group.
By analogy,
if the current coding block is classified into category n (cn), further determine:
set the chroma distortion weight factor to α_cnms1 if the sub-mode belongs to the first group;
set the chroma distortion weight factor to α_cnms2 if the sub-mode belongs to the second group;
set the chroma distortion weight factor to α_cnms3 if the sub-mode belongs to the third group.
In yet another embodiment, the correspondence has the following form:
Traverse all coding modes mj except one predetermined coding mode ms (1 ≤ j ≤ J and j ≠ s); the chroma distortion weight factor of each such coding mode and all of its sub-modes is α_mj.
The chroma distortion weight factors of the predetermined coding mode ms and its sub-modes are:
if the current coding block is classified into category 1 (c1), the chroma distortion weight factor of all sub-modes of the predetermined coding mode is α_c1ms;
if the current coding block is classified into category 2 (c2), the chroma distortion weight factor of all sub-modes of the predetermined coding mode is α_c2ms;
by analogy,
if the current coding block is classified into category n (cn), the chroma distortion weight factor of all sub-modes of the predetermined coding mode is α_cnms.
In yet another embodiment of the present invention, the encoding modes include an intra mode, an inter mode, and a string prediction mode;
the sub-modes of the inter mode are divided into three groups: the first group comprises the F_Skip_Temporal sub-mode, the second group comprises the other F_Skip sub-modes, and the third group comprises the other inter sub-modes;
the manner of classifying the video content includes:
calculating the number of CUs for which the difference between the current coded image and the previous frame image is zero, denoted zeroSAD8x8;
dividing coded images into a complete matching image, a basic matching image, and a scene switching image according to zeroSAD8x8;
dividing the current coding LCU into a local static LCU or a local motion LCU according to whether the current coding LCU and an adjacent available LCU (the left, right, or upper LCU) are respectively identical to the co-located LCU and adjacent available LCU of the previous frame;
video content is divided into the following categories:
class 1: the current coded image is a complete matching image and the current coded LCU is divided into local static LCUs;
class 2: the current coded image is a basic matching image and the current coded LCU is divided into local static LCUs;
class 3: the current coding image is a scene switching image and the current coding LCU is divided into local static LCUs;
class 4: the current coded image is a complete matching image and the current coded LCU is divided into local motion LCUs;
class 5: LCUs in all other cases.
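The five categories above combine an image-level class with an LCU-level stationarity flag; a minimal sketch of that combination (added for illustration, with assumed string labels) is:

```python
# Illustrative sketch (not from any reference software): mapping the image-level
# class and the LCU-level stationarity to the five categories listed above.

def classify_lcu(image_class, lcu_is_locally_static):
    """image_class is one of 'full_match', 'basic_match', 'scene_cut'."""
    if lcu_is_locally_static:
        return {"full_match": 1, "basic_match": 2, "scene_cut": 3}[image_class]
    if image_class == "full_match":
        return 4          # locally moving LCU in a fully matched image
    return 5              # all other cases
```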
The corresponding relation comprises:
the chroma distortion weight factor of the intra prediction mode is set to α_m1, independent of the video content;
the chroma distortion weight factor of the string prediction mode is set to α_m2, independent of the video content;
the chroma distortion weight factor of each sub-mode m3k of the inter coding mode m3 is set according to the video content as follows:
if the current coding block is classified into category 1 (c1), the chroma distortion weight factors of the first group of sub-modes are all α_c1m31, those of the second group of sub-modes are all α_c1m32, and those of the third group of sub-modes are all α_c1m33;
if the current coding block is classified into category 2 (c2), the chroma distortion weight factors of the first group of sub-modes are all α_c2m31, those of the second group of sub-modes are all α_c2m32, and those of the third group of sub-modes are all α_c2m33;
if the current coding block is classified into category 3 (c3), the chroma distortion weight factors of the first group of sub-modes are all α_c3m31, those of the second group of sub-modes are all α_c3m32, and those of the third group of sub-modes are all α_c3m33;
if the current coding block is classified into category 4 (c4), the chroma distortion weight factors of the first group of sub-modes are all α_c4m31, those of the second group of sub-modes are all α_c4m32, and those of the third group of sub-modes are all α_c4m33;
if the current coding block is classified into category 5 (c5), the chroma distortion weight factors of the first group of sub-modes are all α_c5m31, those of the second group of sub-modes are all α_c5m32, and those of the third group of sub-modes are all α_c5m33.
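A minimal sketch of this correspondence (added for illustration; all numeric α values are placeholders, since the patent does not fix them):

```python
# Illustrative sketch: intra and string prediction use fixed factors, while the
# inter sub-mode group together with the content category selects the rest.

ALPHA_INTRA = 1.0          # alpha_m1, placeholder value
ALPHA_STRING = 1.0         # alpha_m2, placeholder value
ALPHA_INTER = {            # alpha_{ci, m3, group}, placeholder values
    (1, "f_skip_temporal"): 0.6, (1, "other_f_skip"): 0.8, (1, "other_inter"): 1.0,
    (2, "f_skip_temporal"): 0.7, (2, "other_f_skip"): 0.9, (2, "other_inter"): 1.0,
    # categories 3 to 5 would be filled in analogously
}

def chroma_weight(mode, category=None, inter_group=None):
    if mode == "intra":
        return ALPHA_INTRA
    if mode == "string_prediction":
        return ALPHA_STRING
    # inter mode m3: look up by (category, sub-mode group), default placeholder 1.0
    return ALPHA_INTER.get((category, inter_group), 1.0)
```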
Corresponding to this further embodiment, the way to classify coded images into complete matching, basic matching, and scene switching images according to zeroSAD8x8 is:
if the value of zeroSAD8x8 is greater than or equal to threshold 1, the encoded image is a fully matched image;
otherwise, if the value of the zeroSAD8x8 is greater than or equal to the threshold value 2, the coded image is a basic matching image;
otherwise, the coded image is a scene change image.
Corresponding to this further embodiment, the threshold and the chroma distortion weight factor are each one of the following types of data:
a decimal, a percentage, or an integer.
Further, the threshold is preferably a decimal. One example of threshold 1 is 0.99, and one example of threshold 2 is 0.5. However, the present invention is not limited thereto.
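A minimal sketch of this thresholding (added for illustration, assuming zeroSAD8x8 has been normalized to the fraction of 8x8 units with zero SAD so that the example thresholds 0.99 and 0.5 apply directly):

```python
# Illustrative sketch: image-level classification using the example thresholds
# given in the text (placeholder interpretation of zeroSAD8x8 as a ratio).

THRESHOLD_1 = 0.99   # example value given in the text
THRESHOLD_2 = 0.5    # example value given in the text

def classify_image(zero_sad_8x8_ratio):
    if zero_sad_8x8_ratio >= THRESHOLD_1:
        return "full_match"
    if zero_sad_8x8_ratio >= THRESHOLD_2:
        return "basic_match"
    return "scene_cut"
```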
Compared with the prior art, the video compression method that adaptively adjusts the chroma distortion weight factor according to the video content encodes with a distortion calculation that adapts the chroma distortion weight factor to the characteristics of the video content. Different chroma distortion weight factors are used for different modes and their sub-modes according to those characteristics. The RDCost-based coding mode selection mechanism can therefore select the optimal coding mode more reliably, and the overall coding efficiency is effectively improved.
In this specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (8)

1. A video compression method for adaptively adjusting chroma distortion weighting factors according to video contents is characterized by comprising the following steps:
dividing the video content into a plurality of categories according to the characteristics of the video content and preset classification conditions;
presetting a plurality of coding modes and coding sub-modes under each coding mode;
matching coding sub-modes for each category of the video content;
matching chroma distortion weight factors for each coding sub-mode, wherein different chroma distortion weight factors are adopted for different coding modes and coding sub-modes thereof;
respectively adopting a matched coding sub-mode and a matched chrominance distortion weight factor to code according to the category of the video content;
ci represents a category of the video content, 1 ≤ i ≤ n, n ≥ 2; α_cimjk represents the chroma distortion weight factor corresponding to category ci, coding mode mj (1 ≤ j ≤ J), and sub-mode mjk of mode mj (1 ≤ k ≤ Kj);
the method for dividing the video content into a plurality of categories according to the characteristics of the video content and the preset classification conditions comprises the following steps:
when the characteristics of the coding unit meet the predetermined ith condition, classifying the current coding unit into a category ci;
according to the category of video content, respectively adopting a matched coding sub-mode and a matched chroma distortion weight factor to carry out coding, wherein the coding comprises the following steps:
traversing all coding modes mj except one predetermined coding mode ms, wherein 1 ≤ j ≤ J and j ≠ s, with the chroma distortion weight factors of the coding mode mj and all of its coding sub-modes being α_mj;
dividing the sub-modes of the predetermined coding mode ms into three groups, with the chroma distortion weight factors of the sub-modes as follows:
if the current coding block is classified into category ci, setting the chroma distortion weight factor to α_cims1 if the sub-mode belongs to the first group;
setting the chroma distortion weight factor to α_cims2 if the sub-mode belongs to the second group;
setting the chroma distortion weight factor to α_cims3 if the sub-mode belongs to the third group.
2. The method of claim 1, wherein the data in the video comprises one of:
one-dimensional data; multidimensional data; a graph; an image; a sequence of images; video; a three-dimensional scene; a sequence of continuously changing three-dimensional scenes; a virtual reality scene; a sequence of scenes of continuously changing virtual reality; an image in the form of pixels; transform domain data of the image;
a set of bytes in two or more dimensions; a set of bits in two or more dimensions; a set of pixels;
when the data is an image or a video, the video content comprises an encoding block or a decoding block.
3. The method of video compression for adaptively adjusting chroma distortion weighting factors according to claim 1, wherein the characteristics of the video content comprise one of:
encoding pixel values of the blocks;
or the prediction direction of the coding block;
or a reference frequency of a coded block or pixel value;
or an average of the characteristics of the coded blocks;
or a maximum value of a characteristic of the encoded block;
or a minimum of a characteristic of the coded block;
or a sum of features of the encoded blocks;
or a difference in characteristics of the encoded blocks;
or a mean square error of a characteristic of the coded block;
or the eigenvalue or eigenvector of a two-dimensional matrix formed by the pixel values of the coding blocks;
or the number of encoded blocks that satisfy one or a set of predetermined conditions;
or the number of coding units with the difference value of the characteristics of the current coding image and the adjacent frame image being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding image and the previous frame image as a preset value;
or the number of the CUs of which the difference value of the characteristics of the coding tree units at the same position of the current coding tree unit and the previous frame image is a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the adjacent position coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the left coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the right coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the upper coding tree unit being a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the coding tree unit at the upper left as a preset value;
or the number of coding units with the difference value of the characteristics of the current coding tree unit and the coding tree unit at the upper right as a preset value;
or the difference between the maximum characteristic value and the minimum characteristic value of the current coding block;
or the characteristic value of the current coding block is the proportion of the number of preset values;
or the characteristic value of the current coding block is the number of the maximum n characteristic values or the proportion of the number to the total number;
or the characteristic value of the current coding block is the number of the minimum n characteristic values or the proportion of the number to the total number;
or the characteristic value of the current coding block is the number of n characteristic values with the maximum number or the proportion of the number to the total number;
or the number of the n characteristic values of the current coding block with the smallest count, or the proportion of this number to the total number.
4. The video compression method for adaptively adjusting chroma distortion weight factors according to video content of claim 1, wherein the predetermined condition is one or a combination of the following conditions:
the characteristic value of the video content is larger than a preset value;
or the characteristic value of the video content is less than a preset value;
or the characteristic value of the video content is equal to a preset value;
or the characteristic of the video content takes a value between two predetermined values.
5. The video compression method for adaptively adjusting chroma distortion weight factors according to video content of claim 1, wherein the coding modes comprise one or a combination of the following coding modes:
an intra-frame coding mode;
or an inter-frame coding mode;
the coding sub-modes comprise one or a combination of the following coding sub-modes:
an intra-frame coding sub-mode;
or an inter-frame coding sub-mode;
the inter-frame coding sub-modes comprise one or a combination of the following coding sub-modes:
a Skip sub-mode;
or a Direct sub-mode;
or a non-split sub-mode;
or a split sub-mode.
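For orientation, the mode taxonomy of claim 5 could be represented as plain enumerations; the identifiers below are assumptions made for the sketch and are not taken from any particular encoder implementation.

```cpp
// Illustrative taxonomy mirroring claim 5.
enum class CodingMode { Intra, Inter };

// Inter-frame coding sub-modes named in claim 5.
enum class InterSubMode { Skip, Direct, NonSplit, Split };
```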
6. The video compression method for adaptively adjusting chroma distortion weight factors according to video content of claim 1, wherein performing coding by respectively adopting a matched coding sub-mode and a matched chroma distortion weight factor according to the category of the video content comprises:
for all coding modes or all sub-modes of the coding modes, video contents of different categories correspond to different chroma distortion weight factors;
or for a predetermined part of the coding modes or sub-modes, video contents of different categories correspond to different chroma distortion weight factors, while for the other coding modes or sub-modes, video contents of different categories correspond to the same chroma distortion weight factor;
or for a predetermined part of the coding modes, the sub-modes of each coding mode are divided into several groups, and the sub-modes within the same group share the same chroma distortion weight factor;
or for a predetermined part of the coding modes, all sub-modes of the same coding mode share the same chroma distortion weight factor.
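One possible realization of the grouping described in claim 6 is a two-dimensional lookup table indexed by content category and sub-mode group. The table dimensions and every numeric value below are placeholders, not values taught by the patent.

```cpp
#include <array>

// Sketch of the per-category weight-factor lookup of claim 6, assuming
// five content categories and three sub-mode groups per coding mode.
constexpr int kNumCategories    = 5;
constexpr int kNumSubModeGroups = 3;

using WeightTable =
    std::array<std::array<double, kNumSubModeGroups>, kNumCategories>;

// Sub-modes in the same group share one chroma distortion weight factor.
double chromaWeight(const WeightTable& table, int category, int group) {
    return table.at(category).at(group);
}
```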
7. The video compression method for adaptively adjusting chroma distortion weight factors according to video content of claim 1, wherein the coding modes include an intra mode, an inter mode, and a string prediction mode;
the coding sub-modes of the inter mode are divided into three groups: the first group comprises the F_Skip_Temporal sub-mode, the second group comprises the other F_Skip sub-modes, and the third group comprises the other inter-frame sub-modes;
the manner of classifying the video content includes:
calculating the number of coding units of the current coding image whose difference from the previous frame image is zero, and recording this number as zeroSAD8x8;
dividing coded images into complete matching images, basic matching images and scene switching images according to zeroSAD8x8;
classifying the current maximum coding unit as a local static maximum coding unit or a local motion maximum coding unit according to whether the current maximum coding unit and its adjacent available maximum coding units are identical to the co-located maximum coding unit and its adjacent available maximum coding units in the previous frame image;
the video content is divided into the following categories:
the first category: the current coding image is a complete matching image and the current maximum coding unit is classified as a local static maximum coding unit;
the second category: the current coding image is a basic matching image and the current maximum coding unit is classified as a local static maximum coding unit;
the third category: the current coding image is a scene switching image and the current maximum coding unit is classified as a local static maximum coding unit;
the fourth category: the current coding image is a complete matching image and the current maximum coding unit is classified as a local motion maximum coding unit;
the fifth category: maximum coding units in all other cases;
the corresponding relationship between the prediction mode and the chroma distortion weighting factor comprises:
the chroma distortion weight factor of the intra prediction mode is set to α_m1, independent of the video content;
the chroma distortion weight factor of the string prediction mode is set to α_m2, independent of the video content;
the chroma distortion weight factor of each sub-mode m3k of the inter coding mode m3 is set according to the video content as follows:
if the current coding block is classified into category ci, the chroma distortion weight factors of the first group of sub-modes are all α_ci_m31;
the chroma distortion weight factors of the second group of sub-modes are all α_ci_m32; and the chroma distortion weight factors of the third group of sub-modes are all α_ci_m33.
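The selection rule of claim 7 can be sketched as follows. All α values in the table are placeholders, and the category indices and group encoding are assumptions made only for this example; the patent does not fix any of these numbers.

```cpp
// Sketch of the weight-factor selection in claim 7.
enum class PredMode  { Intra, StringPred, Inter };
enum class InterGroup { FSkipTemporal, OtherFSkip, OtherInter };

double selectChromaWeight(PredMode mode, int category /*0..4*/,
                          InterGroup group) {
    constexpr double kAlphaIntra  = 1.0;   // α_m1 (placeholder)
    constexpr double kAlphaString = 1.0;   // α_m2 (placeholder)
    // α_ci_m3k: rows = content category, columns = inter sub-mode group.
    constexpr double kAlphaInter[5][3] = {
        {1.2, 1.1, 1.0},   // category 1 (placeholders only)
        {1.1, 1.0, 1.0},   // category 2
        {1.0, 1.0, 1.0},   // category 3
        {1.1, 1.0, 0.9},   // category 4
        {1.0, 1.0, 1.0},   // category 5
    };
    switch (mode) {
        case PredMode::Intra:      return kAlphaIntra;
        case PredMode::StringPred: return kAlphaString;
        case PredMode::Inter:
            return kAlphaInter[category][static_cast<int>(group)];
    }
    return 1.0;
}
```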
8. The method of claim 7, wherein the manner of dividing coded images into complete matching images, basic matching images and scene switching images according to zeroSAD8x8 is:
if the value of zeroSAD8x8 is greater than or equal to threshold 1, the coded image is a complete matching image;
otherwise, if the value of zeroSAD8x8 is greater than or equal to threshold 2, the coded image is a basic matching image;
otherwise, the coded image is a scene switching image.
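A minimal sketch of the threshold test in claim 8, assuming integer thresholds whose concrete values (left unspecified by the claim) would be chosen by the encoder.

```cpp
// Image classification of claim 8: zeroSAD8x8 is compared against two
// encoder-chosen thresholds (threshold1 >= threshold2 assumed).
enum class ImageClass { CompleteMatch, BasicMatch, SceneSwitch };

ImageClass classifyImage(int zeroSAD8x8, int threshold1, int threshold2) {
    if (zeroSAD8x8 >= threshold1) return ImageClass::CompleteMatch;
    if (zeroSAD8x8 >= threshold2) return ImageClass::BasicMatch;
    return ImageClass::SceneSwitch;
}
```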
CN201810184812.4A 2018-03-07 2018-03-07 Video compression method for adaptively adjusting chroma distortion weight factor according to video content Active CN108366242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810184812.4A CN108366242B (en) 2018-03-07 2018-03-07 Video compression method for adaptively adjusting chroma distortion weight factor according to video content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810184812.4A CN108366242B (en) 2018-03-07 2018-03-07 Video compression method for adaptively adjusting chroma distortion weight factor according to video content

Publications (2)

Publication Number Publication Date
CN108366242A (en) 2018-08-03
CN108366242B (en) 2019-12-31

Family

ID=63003441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810184812.4A Active CN108366242B (en) 2018-03-07 2018-03-07 Video compression method for adaptively adjusting chroma distortion weight factor according to video content

Country Status (1)

Country Link
CN (1) CN108366242B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109547788B (en) * 2018-10-26 2021-01-26 西安科锐盛创新科技有限公司 Image compression method, equipment and image transmission system
CN117292003B (en) * 2023-11-27 2024-03-19 深圳对对科技有限公司 Picture cloud data storage method for computer network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SI3217667T1 (en) * 2011-10-17 2020-12-31 Kabushiki Kaisha Toshiba Encoding method and decoding method
US9998727B2 (en) * 2012-09-19 2018-06-12 Qualcomm Incorporated Advanced inter-view residual prediction in multiview or 3-dimensional video coding
CN102984521B * 2012-12-12 2015-04-08 Sichuan University High-efficiency video coding inter-frame mode decision method based on temporal correlation
CN105765976B * 2013-11-05 2019-10-25 Arris Enterprises LLC Simplified processing of weighted prediction syntax and semantics using a bit-depth variable for high-precision data
CN107197267B * 2017-06-28 2019-10-25 Shaanxi Normal University Method for allocating the criterion and quantization parameter of high efficiency video coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mixed Chroma Sampling-Rate High Efficiency Video Coding for Full-Chroma Screen Content; Tao Lin; IEEE Transactions on Circuits and Systems for Video Technology; 2013-01-31; Vol. 23, No. 1; full text *
Screen Image Coding Combining Full-Chroma HEVC and a Lossy Dictionary Algorithm; Zhang Peijun; Computer Science; 2014-03-31; Vol. 41, No. 3; full text *
Screen Image Coding Fusing Full-Chroma LZMA and Chroma-Subsampled HEVC; Zhang Peijun; Journal of Electronics & Information Technology; 2013-01-31; Vol. 35, No. 1; full text *

Also Published As

Publication number Publication date
CN108366242A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108495135B (en) Quick coding method for screen content video coding
CN101222644B (en) Moving image encoding/decoding device and moving image encoding/decoding method
CN109842799B (en) Intra-frame prediction method and device of color components and computer equipment
TW202005381A (en) Flexible tree
KR100906243B1 (en) Video coding method of rgb color space signal
CN105284112B (en) Method and apparatus for the value for determining quantization parameter
CN105325000A (en) Image encoding device, image encoding method, image decoding device, and image decoding method
CN108271024A (en) Image coding, coding/decoding method and device
Zhao et al. Pseudo 2D string matching technique for high efficiency screen content coding
CN113785576B (en) Use of secondary transforms in codec video
CN103647975B (en) Improved sample adaptive offset filtering method based on histogram analysis
WO2020119814A1 (en) Image reconstruction method and device
CN108366242B (en) Video compression method for adaptively adjusting chroma distortion weight factor according to video content
CN114287133A (en) Weighting factors for predictive sampling filtering in intra mode
CN110234010A (en) Method for video coding/device and video encoding/decoding method/device
CN116918335A (en) Residual and coefficient coding for video coding
CN116803077A (en) Residual and coefficient coding for video coding
CN115836525A (en) Method and system for prediction from multiple cross components
CN108174201A Video compression method for early termination of coding-block mode search according to video content
CN106878754A Intra prediction mode selection method for 3D video depth images
CN113365080B (en) Encoding and decoding method, device and storage medium for string coding technology
WO2022166462A1 (en) Encoding/decoding method and related device
WO2021196234A1 (en) Video encoding and decoding method and device, and storage medium
CN110855973B (en) Video intra-frame fast algorithm based on regional directional dispersion sum
CN110971897B (en) Method, apparatus and system for encoding and decoding intra prediction mode of chrominance component

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221221

Address after: 100193 303C, 304C, Floor 3, Building 2, East District, Yard 10, Northwest Wangdong Road, Haidian District, Beijing

Patentee after: Beijing Boya Ruishi Technology Co.,Ltd.

Address before: 312000 No. 508 West Ring Road, Zhejiang, Shaoxing

Patentee before: SHAOXING University