CN116781904A - Video coding method - Google Patents

Video coding method

Info

Publication number
CN116781904A
Authority
CN
China
Prior art keywords
rate distortion
distortion cost
control unit
central control
current
Prior art date
Legal status
Pending
Application number
CN202310863046.5A
Other languages
Chinese (zh)
Inventor
朱正辉
蔡文生
詹楚伟
Current Assignee
Guangdong Baolun Electronics Co ltd
Original Assignee
Guangdong Baolun Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Baolun Electronics Co ltd filed Critical Guangdong Baolun Electronics Co ltd
Priority to CN202310863046.5A priority Critical patent/CN116781904A/en
Publication of CN116781904A publication Critical patent/CN116781904A/en

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to the technical field of video coding, and in particular to a video coding method comprising the following steps: step 1, starting the CU division judgment at a certain CU depth; step 2, calculating the luminance variance value var1 of the CU; if var1 is larger than a first threshold TH1, executing step 3, otherwise executing step 4; step 3, performing DCT and quantization on the coding residual of the current CU and counting the number N1 of quantized coefficients greater than 0 in the current CU; if N1 is larger than a second threshold TH2, judging that the current CU should be divided, otherwise executing step 4; step 4, judging whether the current CU should be divided according to the Lagrangian rate-distortion CU size selection method; if so, judging that the current CU should be divided; if not, judging that the current CU should not be divided. The method provided by the embodiment of the invention makes the CU size of local flat blocks more reasonable, reduces local flat block noise, and improves the subjective quality of the video.

Description

Video coding method
Technical Field
The invention relates to the technical field of video coding, in particular to a video coding method.
Background
Video technology has been widely applied in fields such as mobile terminals, network live broadcast, home theater and remote monitoring, and video resolution has gradually evolved from Standard Definition (SD) to High Definition (HD) and Ultra High Definition (UHD). The video coding and decoding standards currently in international use include H.264, H.265/HEVC (High Efficiency Video Coding), the Chinese national AVS (Audio Video Coding Standard), AVS+, AVS2, and the like.
The HEVC encoder divides each frame of an image into several CTUs (Coding Tree Units) of the same size, and each CTU is further divided into CUs (Coding Units) of different sizes, such as 64x64, 32x32, 16x16 and 8x8, according to the texture and motion information of each region; the CU depths corresponding to these sizes are 0, 1, 2 and 3 respectively. A CU of larger size can generally save more code rate but has greater coding distortion, while a smaller CU size generally consumes more code rate but has less coding distortion.
In order to trade off code rate against distortion, the HEVC encoder recursively processes the CU in quadtree form, as shown in fig. 1: it compares the RDCost (Rate Distortion Cost) of the CUs of each size and selects the CU size with the smallest RDCost as the optimal CU size. The RDCost is calculated as RDCost = λ·R + SSD, where λ is a Lagrangian factor, R represents the code rate and SSD represents the coding distortion. This is called the Lagrangian rate-distortion CU size selection method; it selects CU sizes with low code-rate consumption and small coding distortion. Fig. 2 shows the CU size division result of a video using this method, where the black flat region generally selects a large-sized CU and regions with complex texture generally select smaller-sized CUs.
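As an illustrative, non-limiting sketch of the Lagrangian rate-distortion CU size selection described above (not code from the patent): a block is coded whole or split into four sub-blocks, whichever yields the smaller RDCost = λ·R + SSD. The placeholder encode_cu below only stands in for a real encoder pass; its rate and distortion estimates are assumptions made for demonstration.

```python
import numpy as np

def encode_cu(block):
    # Placeholder "encoder" (assumption): rate = per-CU overhead plus a per-sample
    # cost, SSD = residual energy of the block. A real encoder would return the
    # actual bit cost R and reconstruction distortion SSD.
    rate = 16 + 0.1 * block.size
    ssd = float(np.var(block)) * block.size
    return rate, ssd

def best_rd_cost(block, lam, min_size=8):
    """Minimum RDCost of coding `block`, either whole or recursively quad-split."""
    rate, ssd = encode_cu(block)
    cost_whole = lam * rate + ssd          # RDCost = lambda*R + SSD for the whole CU
    n = block.shape[0]
    if n <= min_size:                      # 8x8 CUs (depth 3) are not split further
        return cost_whole
    h = n // 2                             # quadtree split into four equal sub-CUs
    subs = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    cost_split = sum(best_rd_cost(s, lam, min_size) for s in subs)
    return min(cost_whole, cost_split)     # keep whichever alternative is cheaper

ctu = np.random.randint(0, 256, size=(64, 64)).astype(float)  # toy 64x64 CTU
print(best_rd_cost(ctu, lam=10.0))
```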
HEVC coding adopts efficient predictive coding and transform coding technologies. Predictive coding predicts the pixels of the current CU from the pixels of spatially correlated CUs, reducing the data information that the current CU needs to carry; transform coding subtracts the CU's predicted pixels from its original pixels to form the coding residual, and applying DCT (Discrete Cosine Transform) and quantization to the coding residual further compresses the residual information.
The DCT concentrates most of the energy of the coding residual in a small range of the frequency domain, so that few bits are needed to describe the unimportant components; in addition, the frequency-domain decomposition maps onto the processing of the human visual system and allows the subsequent quantization process to meet its sensitivity requirements. The DCT transform formula is:
Y = (C·X·C^T) ⊗ E
wherein X is the coding residual coefficient matrix, Y is the DCT coefficient matrix, C is the transformation matrix, E is the correction matrix, and ⊗ denotes element-wise multiplication.
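A minimal numerical sketch of the 2D DCT named above, under the assumption of an exact floating-point transform matrix C (in which case the correction matrix E reduces to all ones; in an integer-approximation transform E would carry the remaining scaling). This is illustrative only, not the encoder's integer transform.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix C of size n x n."""
    C = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            C[k, i] = scale * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    return C

def dct2d(X):
    """Y = C · X · C^T, applied to a square coding-residual block X."""
    C = dct_matrix(X.shape[0])
    return C @ X @ C.T

residual = np.add.outer(np.arange(8.0), np.arange(8.0)) - 7.0   # smooth toy 8x8 residual
Y = dct2d(residual)
print(np.round(Y, 2))  # energy concentrates in the low-frequency (top-left) coefficients
```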
The conventional Lagrangian rate-distortion CU size selection method selects the CU size with a small code rate and small objective coding distortion, but it cannot always select the CU size best suited to subjective perception by the human eye. For a CU that contains both a flat region and a textured region, the distribution of high-frequency information is not concentrated enough during the HEVC encoder's DCT process, so the quantization process has difficulty eliminating the high-frequency information; during decoding and reconstruction this high-frequency information is transferred to the flat region, where it appears as obvious noise. This noise is referred to as local flat block noise.
Disclosure of Invention
Therefore, the present invention provides a video coding method to overcome the problem of local flat block noise in the prior art.
To achieve the above object, the present invention provides a video encoding method, comprising:
step 1, starting CU division judgment of a certain CU depth;
step 2, calculating a luminance variance value var1 of the CU, if var1 is larger than a first threshold value TH1, executing step 3, otherwise executing step 4;
step 3, performing DCT and quantization on the coding residual of the current CU and counting the number N1 of quantized coefficients greater than 0 in the current CU; if N1 is larger than a second threshold TH2, judging that the current CU should be divided, otherwise executing step 4;
step 4, judging whether the current CU should be divided according to the Lagrangian rate-distortion CU size selection method; if so, judging that the current CU should be divided; if not, judging that the current CU should not be divided, and executing step 5;
step 5, the central control unit calculates the motion vectors of the coding unit and of the T adjacent coding units and obtains the minimum rate distortion cost corresponding to each of the motion vectors of the T coding units; the central control unit judges whether the next motion vector meets the standard; when it judges that the next motion vector does not meet the standard, it determines the reason for the non-compliance from the difference between the minimum rate distortion cost and a first preset rate distortion cost set in the central control unit; and according to the judgment result, the central control unit determines the next coding unit or adjusts the prediction model set in the central control unit.
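The following is a hedged sketch of the decision logic of steps 2 to 4 only (step 5's central-control-unit processing is sketched separately further below); the inputs var1, n1 and the Lagrangian decision are assumed to be computed elsewhere, for example as in the later sketches.

```python
def should_split_cu(var1, n1, lagrangian_says_split, th1=600, th2=8):
    """Steps 2-4: decide whether the current CU should be divided.
    var1  - luminance variance of the CU (step 2)
    n1    - number of quantized coefficients greater than 0 after DCT + quantization (step 3)
    lagrangian_says_split - result of the conventional RDCost-based decision (step 4)
    th1, th2 - thresholds TH1 and TH2 (default values taken from the description)."""
    if var1 > th1:                 # step 2: variance large enough to suspect local flat block noise
        if n1 > th2:               # step 3: many non-zero quantized coefficients -> divide
            return True
    return lagrangian_says_split   # step 4: otherwise fall back to the Lagrangian method

print(should_split_cu(var1=750.0, n1=12, lagrangian_says_split=False))  # -> True
```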
Further, the first threshold TH1 has a value in the range [1, 1000].
Further, the first threshold TH1 has a value of 600.
Further, the second threshold TH2 has a value in the range [1, 20].
Further, the second threshold TH2 has a value of 8.
Further, the luminance variance value var1 is specifically: var1 = (1/N)·Σ_t (yt − μ)², where N represents the number of pixels in the current CU, yt represents the luminance value of the t-th pixel in the current CU, and μ represents the average of the luminance values of all pixels in the current CU.
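A direct implementation of the variance formula above, as a sketch (the function name is illustrative):

```python
import numpy as np

def luma_variance(luma_block):
    """var1 = (1/N) * sum over t of (yt - mu)^2, over the CU's luma samples."""
    y = np.asarray(luma_block, dtype=float).ravel()
    mu = y.mean()                        # average luminance of all pixels in the CU
    return float(np.mean((y - mu) ** 2))

cu_luma = np.random.randint(0, 256, size=(16, 16))   # toy 16x16 luma block
print(luma_variance(cu_luma))
```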
Further, the quantization coefficients are: L(i, j) = floor(Y(i, j)/Qstep + f), where Y(i, j) represents the DCT coefficient at position (i, j) in the DCT coefficient matrix Y, L(i, j) is the quantization coefficient at position (i, j), Qstep represents the quantization step size, floor() is the rounding-down function, and f is the rounding offset.
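A sketch of the quantization formula above and of the count N1 used in step 3; the rounding-offset value f = 0.5 is only an example assumption, not a value specified by the description.

```python
import numpy as np

def quantize(Y, qstep, f=0.5):
    """L(i, j) = floor(Y(i, j) / Qstep + f), applied element-wise to the DCT matrix Y."""
    return np.floor(Y / qstep + f).astype(int)

def count_gt_zero(Y, qstep, f=0.5):
    """N1: number of quantized coefficients greater than 0 in the current CU."""
    return int(np.count_nonzero(quantize(Y, qstep, f) > 0))
```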
Further, the central control unit obtains, under a first preset condition, the minimum rate distortion costs Qmin corresponding to the motion vectors of the T coding units respectively, and determines according to the minimum rate distortion cost Qmin the mode for judging whether the next motion vector meets the standard, wherein:
the first judging mode is that the central control unit judges that the next motion vector accords with a standard, and the next coding unit is a coding unit with the minimum rate distortion cost of the current coding unit being Qmin; the first judging mode meets the condition that the minimum rate distortion cost Qmin is smaller than or equal to the first preset rate distortion cost Q1;
the second judging mode is that the central control unit judges that the next motion vector does not accord with the standard, and the central control unit judges whether the next motion vector accords with the standard or not according to the difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1; the second judging mode meets the condition that the minimum rate distortion cost Qmin is larger than the first preset rate distortion cost Q1 and smaller than or equal to the second preset rate distortion cost Q2 arranged in the central control unit;
the third judging mode is that the central control unit judges that the next motion vector does not accord with the standard, and the central control unit reestablishes a prediction model arranged in the central control unit according to the difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1; the third judging mode meets the condition that the minimum rate distortion cost Qmin is larger than the second preset rate distortion cost Q2;
the first preset condition is that the central control unit finishes the calculation of the motion vectors of the coding unit and the adjacent T coding units.
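The three judging modes reduce to comparing Qmin against the two preset costs, as in this sketch (Q1 ≤ Q2 is assumed; the return values are only labels):

```python
def judging_mode(q_min, q1, q2):
    """Which judging mode applies, given the minimum rate distortion cost Qmin
    and the first/second preset rate distortion costs Q1 and Q2."""
    if q_min <= q1:
        return 1   # first mode: next motion vector meets the standard
    if q_min <= q2:
        return 2   # second mode: does not meet the standard; decide the next unit from the cost difference
    return 3       # third mode: does not meet the standard; re-establish the prediction model
```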
Further, the central control unit marks a difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1 as a rate distortion cost difference value Δq in the second determination mode, and determines a vector determination mode of the next coding unit according to the rate distortion cost difference value Δq, wherein:
the first vector judgment mode is that the central control unit judges that the next coding unit is a coding unit with the minimum rate distortion cost of Qmin with the current coding unit; the first vector judgment mode meets the condition that the rate distortion cost difference DeltaQ is smaller than or equal to a preset rate distortion cost difference DeltaQ 0 set in the central control unit;
the second vector judgment mode is that the central control unit judges that the next coding unit is the coding unit with the smallest difference between the rate distortion cost and the minimum rate distortion cost Qmin; the second vector judgment mode satisfies that the rate distortion cost difference DeltaQ is larger than a preset rate distortion cost difference DeltaQ 0 arranged in the central control unit.
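A literal sketch of the two vector judgment modes; how the candidate costs are obtained, and the exact selection in the second mode, are left open by the text, so the mapping `costs` and the tie-breaking here are assumptions.

```python
def choose_next_unit(q_min, q1, dq0, costs):
    """costs: hypothetical mapping from a candidate coding unit's index to its
    rate distortion cost with the current coding unit."""
    delta_q = q_min - q1
    if delta_q <= dq0:
        # first mode: the coding unit whose rate distortion cost is the minimum Qmin
        return min(costs, key=costs.get)
    # second mode: the coding unit whose cost differs least from Qmin
    return min(costs, key=lambda k: abs(costs[k] - q_min))
```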
Further, the central control unit determines a model adjustment mode of the prediction model according to the rate-distortion cost difference Δq in the third determination mode, where:
the first model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a first preset rate distortion cost coefficient alpha 1; the first model adjusting mode meets the condition that the rate distortion cost difference DeltaQ is smaller than or equal to a first preset rate distortion cost difference DeltaQ 1 arranged in the central control unit;
the second model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a second preset rate distortion cost coefficient alpha 2; the second model adjustment mode meets the condition that the rate distortion cost difference DeltaQ is larger than the first preset rate distortion cost difference DeltaQ 1 and smaller than or equal to a second preset rate distortion cost difference DeltaQ 2 arranged in the central control unit;
the third model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a third preset rate distortion cost coefficient alpha 3; the third model adjustment mode satisfies that the rate distortion cost difference DeltaQ is larger than the second preset rate distortion cost difference DeltaQ 2.
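The three model adjustment modes likewise reduce to a threshold test on the cost difference, selecting which preset coefficient adjusts the prediction model; how the coefficient is then applied to the model is not specified, so this sketch only returns it.

```python
def adjustment_coefficient(delta_q, dq1, dq2, alpha1, alpha2, alpha3):
    """Pick the preset rate distortion cost coefficient per the three model adjustment modes."""
    if delta_q <= dq1:
        return alpha1   # first model adjustment mode
    if delta_q <= dq2:
        return alpha2   # second model adjustment mode
    return alpha3       # third model adjustment mode
```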
Compared with the prior art, the invention has the following beneficial effects: using the luminance variance value of the local flat block, if the luminance variance value is large, the local flat block is considered prone to noise and the CU tends to select a smaller size; and using the number of quantized non-zero coefficients, if that number is large, the coding distortion is considered large and the CU is made to select a smaller CU size. The method provided by the embodiment of the invention makes the CU size of local flat blocks more reasonable, reduces local flat block noise, and improves the subjective quality of the video.
Drawings
Fig. 1 is a schematic diagram of a quadtree partitioning structure of a CTU in the prior art;
FIG. 2 is a schematic view of CU size division based on Lagrangian rate distortion in the prior art;
FIG. 3 is a flowchart illustrating steps of a video encoding method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a picture encoded by the open-source x265 video encoder, which serves as the experimental and comparison platform;
fig. 5 is a schematic diagram of a video encoded by the method according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 3, a flowchart of steps of a video encoding method according to an embodiment of the present invention is shown, including:
step 1, starting CU division judgment of a certain CU depth;
step 2, calculating a luminance variance value var1 of the CU, if var1 is larger than a first threshold value TH1, executing step 3, otherwise executing step 4; the threshold TH1 is typically 600, and has a value range of [1, 1000 ].
Step 3, DCT and quantization are carried out on the coding residual of the current CU, and the number of quantized coefficients greater than 0 in the current CU is counted and marked as N1; if N1 is greater than a second threshold TH2, the current CU is judged to be divided, otherwise step 4 is executed; the threshold TH2 is typically 8, with a value range of [1, 20].
Step 4, judging whether the current CU is divided according to a CU size selection method of Lagrangian rate distortion, if so, judging that the current CU should be divided; if not, the current CU is judged not to be divided.
In a specific application example, the luminance variance value var1 is specifically: var1 = (1/N)·Σ_t (yt − μ)², where N represents the number of pixels in the current CU, yt represents the luminance value of the t-th pixel in the current CU, and μ represents the average of the luminance values of all pixels in the current CU.
The quantization process is essentially an optimization of the DCT coefficients: it exploits the insensitivity of the human eye to high-frequency components to greatly simplify the data, simply dividing each frequency-domain component by a constant for that component and then rounding to the nearest integer. In a specific application example, in step 3 the quantization coefficients are: L(i, j) = floor(Y(i, j)/Qstep + f), where Y(i, j) represents the DCT coefficient at position (i, j) in the DCT coefficient matrix Y, L(i, j) is the quantization coefficient at position (i, j), Qstep represents the quantization step size, floor() is the rounding-down function, and f is the rounding offset.
The open-source x265 video encoder is used as the experimental platform and the comparison platform. By using the luminance variance value of the local flat block and the number of non-zero coefficients of the CU after quantization, a small CU size is selected for noise-prone regions, reducing noise and improving subjective quality. Figures 4 and 5 show the coded output of the x265 method and of the method of the present invention respectively: in the x265 output the noise in the boundary area between flat and textured regions is very obvious, as in the black-box portion of fig. 4, while in the output of the present method the noise in that boundary area is very small, as in the black-box portion of fig. 5, showing that the present method has a significant effect in reducing local flat block noise. The method optimizes the subjective quality of the video by efficiently removing local flat block noise and can be applied to video compression standards such as H.265/HEVC and AVS2.
Specifically, the central control unit obtains minimum rate distortion costs Qmin corresponding to the motion vectors of the T coding units respectively under a first preset condition, and determines whether the next motion vector meets a standard according to the minimum rate distortion costs Qmin, wherein:
the first judging mode is that the central control unit judges that the next motion vector accords with a standard, and the next coding unit is a coding unit with the minimum rate distortion cost of the current coding unit being Qmin; the first judging mode meets the condition that the minimum rate distortion cost Qmin is smaller than or equal to the first preset rate distortion cost Q1;
the second judging mode is that the central control unit judges that the next motion vector does not accord with the standard, and the central control unit judges whether the next motion vector accords with the standard or not according to the difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1; the second judging mode meets the condition that the minimum rate distortion cost Qmin is larger than the first preset rate distortion cost Q1 and smaller than or equal to the second preset rate distortion cost Q2 arranged in the central control unit;
the third judging mode is that the central control unit judges that the next motion vector does not accord with the standard, and the central control unit reestablishes a prediction model arranged in the central control unit according to the difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1; the third judging mode meets the condition that the minimum rate distortion cost Qmin is larger than the second preset rate distortion cost Q2;
the first preset condition is that the central control unit finishes the calculation of the motion vectors of the coding unit and the adjacent T coding units.
Specifically, the central control unit marks the difference between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1 as a rate distortion cost difference Δq in the second determination mode, and determines the vector determination mode of the next coding unit according to the rate distortion cost difference Δq, where:
the first vector judgment mode is that the central control unit judges that the next coding unit is a coding unit with the minimum rate distortion cost of Qmin with the current coding unit; the first vector judgment mode meets the condition that the rate distortion cost difference DeltaQ is smaller than or equal to a preset rate distortion cost difference DeltaQ 0 set in the central control unit;
the second vector judgment mode is that the central control unit judges that the next coding unit is the coding unit with the smallest difference between the rate distortion cost and the minimum rate distortion cost Qmin; the second vector judgment mode satisfies that the rate distortion cost difference DeltaQ is larger than a preset rate distortion cost difference DeltaQ 0 arranged in the central control unit.
Specifically, the central control unit determines a model adjustment mode of the prediction model according to the rate-distortion cost difference Δq in the third determination mode, where:
the first model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a first preset rate distortion cost coefficient alpha 1; the first model adjusting mode meets the condition that the rate distortion cost difference DeltaQ is smaller than or equal to a first preset rate distortion cost difference DeltaQ 1 arranged in the central control unit;
the second model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a second preset rate distortion cost coefficient alpha 2; the second model adjustment mode meets the condition that the rate distortion cost difference DeltaQ is larger than the first preset rate distortion cost difference DeltaQ 1 and smaller than or equal to a second preset rate distortion cost difference DeltaQ 2 arranged in the central control unit;
the third model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a third preset rate distortion cost coefficient alpha 3; the third model adjustment mode satisfies that the rate distortion cost difference DeltaQ is larger than the second preset rate distortion cost difference DeltaQ 2.
It should be understood that the exemplary embodiments described herein are illustrative and not limiting. Although one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A video encoding method, comprising:
step 1, starting CU division judgment of a certain CU depth;
step 2, calculating a luminance variance value var1 of the CU, if var1 is larger than a first threshold value TH1, executing step 3, otherwise executing step 4;
step 3, DCT and quantization are carried out on the coding residual error of the current CU, the number of quantization coefficients larger than 0 in the current CU is counted to be N1, if N1 is larger than a second threshold value TH2, the current CU is judged to be divided, otherwise, step 4 is executed;
step 4, judging whether the current CU is divided according to a CU size selection method of Lagrangian rate distortion, if so, judging that the current CU should be divided; if not, judging that the current CU should not be divided, and executing the step 5;
and 5, calculating the motion vectors of the coding units and the adjacent T coding units by the central control unit, obtaining the minimum rate distortion cost corresponding to the motion vectors of the T coding units respectively, judging whether the next motion vector accords with the standard by the central control unit, judging the reason of the non-compliance of the standard according to the difference value between the minimum rate distortion cost and the first preset rate distortion cost arranged in the central control unit when the next motion vector is judged to be non-compliance with the standard, and determining the next coding unit or adjusting the prediction model arranged in the central control unit according to the judgment result by the central control unit.
2. The video encoding method according to claim 1, wherein the first threshold TH1 has a value in the range [1, 1000].
3. The video encoding method according to claim 1, wherein the first threshold TH1 has a value of 600.
4. The video coding method according to claim 1, wherein the second threshold TH2 has a value ranging between [1, 20 ].
5. The video encoding method according to claim 1, wherein the second threshold TH2 has a value of 8.
6. The video coding method according to any one of claims 1 to 5, wherein the luminance variance value var1 is specifically: var1 = (1/N)·Σ_t (yt − μ)², wherein N represents the number of pixels in the current CU, yt represents the luminance value of the t-th pixel in the current CU, and μ represents the average value of the luminance values of all pixels in the current CU.
7. The video coding method according to any one of claims 1 to 5, wherein the quantization coefficients are: L(i, j) = floor(Y(i, j)/Qstep + f), where Y(i, j) represents a DCT coefficient at a position (i, j) in the DCT coefficient matrix Y, L(i, j) is a quantization coefficient at a position (i, j), Qstep represents a quantization step size, floor() is a rounding-down function, and f is a rounding offset.
8. The video coding method according to claim 1, wherein the central control unit obtains minimum rate distortion costs Qmin corresponding to the motion vectors of the T coding units respectively under a first preset condition, and determines whether the next motion vector meets a standard according to the minimum rate distortion costs Qmin, wherein:
the first judging mode is that the central control unit judges that the next motion vector accords with a standard, and the next coding unit is a coding unit with the minimum rate distortion cost of the current coding unit being Qmin; the first judging mode meets the condition that the minimum rate distortion cost Qmin is smaller than or equal to a first preset rate distortion cost Q1 arranged in the central control unit;
the second judging mode is that the central control unit judges that the next motion vector does not accord with the standard, and the central control unit judges whether the next motion vector accords with the standard or not according to the difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1; the second judging mode meets the condition that the minimum rate distortion cost Qmin is larger than the first preset rate distortion cost Q1 and smaller than or equal to the second preset rate distortion cost Q2 arranged in the central control unit;
the third judging mode is that the central control unit judges that the next motion vector does not accord with the standard, and the central control unit reestablishes a prediction model arranged in the central control unit according to the difference value between the minimum rate distortion cost Qmin and the first preset rate distortion cost Q1; the third judging mode meets the condition that the minimum rate distortion cost Qmin is larger than the second preset rate distortion cost Q2;
the first preset condition is that the central control unit finishes the calculation of the motion vectors of the coding unit and the adjacent T coding units.
9. The video coding method according to claim 8, wherein the central control unit marks a difference between a minimum rate distortion cost Qmin and the first preset rate distortion cost Q1 as a rate distortion cost difference Δq in the second decision mode, and determines a vector decision mode of the next coding unit according to the rate distortion cost difference Δq, wherein:
the first vector judgment mode is that the central control unit judges that the next coding unit is a coding unit with the minimum rate distortion cost of Qmin with the current coding unit; the first vector judgment mode meets the condition that the rate distortion cost difference DeltaQ is smaller than or equal to a preset rate distortion cost difference DeltaQ 0 set in the central control unit;
the second vector judgment mode is that the central control unit judges that the next coding unit is the coding unit with the smallest difference between the rate distortion cost and the minimum rate distortion cost Qmin; the second vector judgment mode satisfies that the rate distortion cost difference DeltaQ is larger than the preset rate distortion cost difference DeltaQ 0.
10. The video coding method according to claim 8, wherein the central control unit determines a model adjustment mode of the prediction model according to the rate-distortion cost difference Δq in the third decision mode, wherein:
the first model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a first preset rate distortion cost coefficient alpha 1; the first model adjusting mode meets the condition that the rate distortion cost difference DeltaQ is smaller than or equal to a first preset rate distortion cost difference DeltaQ 1 arranged in the central control unit;
the second model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a second preset rate distortion cost coefficient alpha 2; the second model adjustment mode meets the condition that the rate distortion cost difference DeltaQ is larger than the first preset rate distortion cost difference DeltaQ 1 and smaller than or equal to a second preset rate distortion cost difference DeltaQ 2 arranged in the central control unit;
the third model adjusting mode is that the central control unit adjusts the prediction model to a corresponding value by using a third preset rate distortion cost coefficient alpha 3; the third model adjustment mode satisfies that the rate distortion cost difference DeltaQ is larger than the second preset rate distortion cost difference DeltaQ 2.
CN202310863046.5A 2023-07-13 2023-07-13 Video coding method Pending CN116781904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310863046.5A CN116781904A (en) 2023-07-13 2023-07-13 Video coding method


Publications (1)

Publication Number Publication Date
CN116781904A true CN116781904A (en) 2023-09-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination