CN108347603B - Moving image encoding device and moving image encoding method - Google Patents
- Publication number
- CN108347603B CN108347603B CN201710060951.1A CN201710060951A CN108347603B CN 108347603 B CN108347603 B CN 108347603B CN 201710060951 A CN201710060951 A CN 201710060951A CN 108347603 B CN108347603 B CN 108347603B
- Authority
- CN
- China
- Legal status
- Expired - Fee Related
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/134—characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/189—characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/19—using optimisation based on Lagrange multipliers
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a moving image encoding device and a moving image encoding method. A controller comprises a summing circuit, a data amount estimation circuit, and an evaluation circuit. Each of a plurality of intra prediction/motion compensation modes corresponds to a set of transformed quantized residual data. For each set of transformed quantized residual data, the summing circuit calculates the sum of the absolute values of the non-zero elements and the sum of the coordinate values of those elements relative to a reference point. For each intra prediction/motion compensation mode, the data amount estimation circuit generates a data amount estimate from the absolute-value sum and the coordinate-value sum of the corresponding set. The evaluation circuit selects a best mode from the plurality of intra prediction/motion compensation modes according to the data amount estimates.
Description
Technical Field
The present invention relates to an image processing technique, and more particularly, to a technique for selecting an optimal mode from a plurality of image processing modes according to the amount of data.
Background
In recent years, with the rapid development of electronics, multimedia systems such as home theaters have become popular. In most multimedia systems, the most important hardware component is the image display device. To meet viewers' demand for realistic images, display devices keep growing in frame size and resolution, which greatly increases the amount of image data per frame. Reducing the amount of image data through compression while maintaining good image quality, so as to save storage space and transmission resources, is therefore a significant issue.
FIG. 1 is a block diagram of a widely used motion picture coding system. Each frame is usually divided into a plurality of image blocks that serve as the basic unit of encoding. The data of a block to be encoded is input to an intra prediction/motion compensation circuit 101, which outputs a reference block. (The circuit 101 also includes a motion estimation function, which is not discussed in the present invention.) The residual generation circuit 102 then computes the difference between the block to be encoded and the reference block. This inter-block difference, commonly called residual data, undergoes a discrete cosine transform (DCT) in the transform circuit 103A and quantization in the quantization circuit 103B. The entropy encoding circuit 104 entropy-encodes the transformed quantized residual data together with the corresponding intermediate data (metadata) to generate an encoding result.
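As a rough illustration of the residual, transform, and quantization path (circuits 102, 103A, and 103B above), the following self-contained sketch applies an orthonormal 2-D DCT-II and uniform scalar quantization to a small block. The block values, reference values, and quantization step are invented for the example and are not taken from the patent:

```python
import math

def dct2d(block):
    """Orthonormal 2-D DCT-II of a square block (rows first, then columns)."""
    n = len(block)
    def dct1d(v):
        out = []
        for k in range(n):
            s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
            scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
            out.append(scale * s)
        return out
    rows = [dct1d(r) for r in block]
    cols = [dct1d([rows[r][c] for r in range(n)]) for c in range(n)]
    return [[cols[c][r] for c in range(n)] for r in range(n)]

def quantize(mat, qstep):
    """Uniform scalar quantization: round each coefficient to a multiple of qstep."""
    return [[round(v / qstep) for v in row] for row in mat]

# Residual = block to be encoded minus reference block (circuit 102),
# then transform (103A) and quantize (103B). Values are hypothetical.
block = [[52, 55, 61, 66], [70, 61, 64, 73], [63, 59, 55, 90], [67, 61, 68, 104]]
ref   = [[50, 50, 60, 60], [70, 60, 60, 70], [60, 60, 55, 85], [65, 60, 70, 100]]
residual = [[block[r][c] - ref[r][c] for c in range(4)] for r in range(4)]
tq_residual = quantize(dct2d(residual), qstep=2)
```

The DC coefficient of the transformed residual here is 8.5 (one quarter of the residual sum 34), which quantizes to 4 with a step of 2.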
To emulate the decoder side, the inverse quantization circuit 106A and the inverse transform circuit 106B receive the transformed quantized residual data and generate restored residual data. The addition circuit 107 adds the restored residual data to the reference block and stores the result in a buffer 108 as intra prediction/motion compensation reference data for the intra prediction/motion compensation circuit 101.
In the motion picture coding system 100 shown in FIG. 1, the control circuit 110A controls the intra prediction/motion compensation circuit 101 to try various intra prediction/motion compensation modes one by one. The controller 110 further includes a data amount calculation circuit 110B, a distortion amount calculation circuit 110C, and an evaluation circuit 110D. The data amount calculation circuit 110B calculates the data amount R of each encoding result. The distortion amount calculation circuit 110C receives the residual data generated by the residual generation circuit 102 and the restored residual data generated by the inverse transform circuit 106B, calculates the distortion amount D introduced by the transform circuit 103A and the quantization circuit 103B, and provides it to the evaluation circuit 110D. Based on R and D, the evaluation circuit 110D selects the intra prediction/motion compensation mode that best balances low data amount and low distortion.
Before the evaluation circuit 110D selects the best mode, each encoding result generated by the entropy encoding circuit 104 is stored in the temporary memory 109. Once the best mode is selected, the temporary memory 109 outputs the encoding result corresponding to it (marked as the best encoding result in the figure) as the output signal of the motion picture coding system 100.
Although calculating the exact data amount and distortion amount of the encoding result for every intra prediction/motion compensation mode lets the motion picture encoding system 100 select the best mode accurately, the process is time-consuming and requires considerable computing resources.
Disclosure of Invention
In order to solve the above problems, the present invention provides a new moving picture encoding apparatus and a new moving picture encoding method.
According to an embodiment of the present invention, a moving picture encoding apparatus includes an intra prediction/motion compensation circuit, a residual generation circuit, a transform circuit, a quantization circuit, and a controller. The intra prediction/motion compensation circuit applies a plurality of intra prediction/motion compensation modes to find a plurality of reference blocks for an image block to be encoded. The residual generation circuit generates corresponding sets of residual data from the image block to be encoded and the reference blocks. The transform circuit performs a discrete cosine transform procedure on each set of residual data to generate a transformed matrix. The quantization circuit performs a quantization procedure on each transformed matrix to generate a set of transformed quantized residual data. The controller includes a summing circuit, a data amount estimation circuit, and an evaluation circuit. For each set of transformed quantized residual data, the summing circuit calculates the sum of the absolute values of the non-zero elements and the sum of the coordinate values of those elements relative to a reference point. For each intra prediction/motion compensation mode, the data amount estimation circuit generates a data amount estimate from the absolute-value sum and the coordinate-value sum of the corresponding set. The evaluation circuit selects a best mode from the plurality of intra prediction/motion compensation modes according to the data amount estimates.
According to another embodiment of the present invention, a method for encoding a moving picture is provided. First, a plurality of intra prediction/motion compensation modes are respectively adopted, and a plurality of reference blocks of an image block to be coded are found. Then, according to the image block to be coded and the multiple reference blocks, corresponding multiple sets of residual data are generated. Then, each set of residual data is subjected to a discrete cosine transform process and a quantization process to generate a set of transform-quantized residual data. For each set of transformed quantized residual data, the sum of the absolute values of the non-zero elements therein and the sum of the coordinate values of the non-zero elements with respect to a reference point are calculated. For each intra prediction/motion compensation mode, a data amount estimate is generated based on the sum of the absolute values and the sum of the coordinate values of the corresponding transformed quantized residual data. A best mode among the plurality of intra prediction/motion compensation modes is selected according to the plurality of data volume estimates.
The advantages and spirit of the present invention can be further understood by the following detailed description and accompanying drawings.
Drawings
FIG. 1 is a block diagram of a currently widely used motion picture coding system.
FIG. 2 is a block diagram of a motion picture coding system according to an embodiment of the present invention.
Fig. 3 shows an example of a data matrix with a size of 4 x 4.
FIG. 4 is a block diagram of a motion picture coding system according to another embodiment of the present invention.
FIG. 5 is a block diagram of a motion picture coding system according to another embodiment of the present invention.
FIG. 6 is a flowchart illustrating an image processing method according to an embodiment of the invention.
FIG. 7 is a flowchart illustrating an image processing method according to another embodiment of the present invention.
Description of the symbols
100, 200: moving image coding system
101, 201: intra prediction/motion compensation circuit
102, 202: residual generation circuit
103A, 203A: transform circuit
103B, 203B: quantization circuit
104, 204: entropy encoding circuit
106A, 206A: inverse quantization circuit
106B, 206B: inverse transform circuit
107, 207: addition circuit
108, 208: buffer
109, 209: temporary memory
110, 210: controller
110A, 210A: control circuit
110B: data amount calculation circuit
210B: data amount estimation circuit
110C, 210C: distortion amount calculation circuit
110D, 210D: evaluation circuit
210E: summing circuit
210F: distortion amount estimation circuit
S61-S67: process steps
S71-S74: process steps
It should be noted that the drawings of the present invention are not detailed circuit diagrams; the connecting lines merely represent signal flows. Interactions between functional elements need not be implemented through direct electrical connections, the functions of individual elements need not be distributed as shown, and separate blocks need not be implemented by separate electronic elements.
Detailed Description
An embodiment according to the present invention is a motion picture coding system. Please refer to the functional block diagram shown in fig. 2. The motion picture coding system 200 comprises an intra prediction/motion compensation circuit 201, a residual generation circuit 202, a transform circuit 203A, a quantization circuit 203B, an entropy coding circuit 204, an inverse quantization circuit 206A, an inverse transform circuit 206B, an adding circuit 207, a buffer 208, a temporary memory 209, and a controller 210. The controller 210 further includes a control circuit 210A, a data amount estimation circuit 210B, a distortion amount calculation circuit 210C, an evaluation circuit 210D, and a summation circuit 210E.
In this embodiment, the intra prediction/motion compensation circuit 201, the residual generation circuit 202, the transform circuit 203A, the quantization circuit 203B, the inverse quantization circuit 206A, the inverse transform circuit 206B, the addition circuit 207, and the buffer 208 are known in the art; their operation corresponds to that of the respective circuits in FIG. 1. One main difference between the motion picture coding system 200 and the system 100 is that the controller 210 does not select the best mode according to the exact data amount of the encoding result. Instead, it refers to a data amount estimate generated from the transformed quantized residual data, as detailed below.
For each intra prediction/motion compensation mode, the quantization circuit 203B generates a set of transformed quantized residual data. The 4 x 4 data matrix shown in FIG. 3 is an example of such two-dimensional data, where each element has an abscissa x and an ordinate y. The summing circuit 210E calculates the sum of the absolute values of the non-zero elements, SUM_ABS, and the sum of the coordinate values of the non-zero elements relative to a reference point, SUM_CRD. Taking the data matrix of FIG. 3 as an example, the absolute-value sum SUM_ABS calculated by the summing circuit 210E is:

54+25+16+4+32+11+10+6+2+8+1 = 169.
In one embodiment, the summing circuit 210E adds the ordinate and abscissa values of all non-zero elements to obtain the coordinate-value sum SUM_CRD. For the data matrix of FIG. 3, the coordinate-value sum calculated this way is:

(0+0)+(1+0)+(2+0)+(3+0)+(0+1)+(1+1)+(0+2)+(1+2)+(2+2)+(0+3)+(3+3) = 27.
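The two sums above can be reproduced with a short sketch. The figure itself is not included here, so the matrix below is a layout reconstructed from the sums quoted in the text; the exact element placement is an assumption:

```python
# Transformed quantized residual matrix matching the FIG. 3 sums
# (rows indexed by ordinate y, columns by abscissa x; layout assumed).
FIG3 = [
    [54, 25, 16, 4],
    [32, 11,  0, 0],
    [10,  6,  2, 0],
    [ 8,  0,  0, 1],
]

def sum_abs_and_crd(mat):
    """SUM_ABS: sum of |value| over non-zero elements.
    SUM_CRD: sum of (x + y) over non-zero elements, with the origin (0, 0)
    as the reference point."""
    sum_abs = sum_crd = 0
    for y, row in enumerate(mat):
        for x, v in enumerate(row):
            if v != 0:
                sum_abs += abs(v)
                sum_crd += x + y
    return sum_abs, sum_crd

sum_abs, sum_crd = sum_abs_and_crd(FIG3)  # → (169, 27), matching the text
```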
For each intra prediction/motion compensation mode, the data amount estimation circuit 210B generates a data amount estimate from the absolute-value sum SUM_ABS and the coordinate-value sum SUM_CRD of the corresponding data matrix. In one embodiment, the data amount estimation circuit 210B assigns a specific weight to each of SUM_ABS and SUM_CRD and generates the estimate from the weighted sums. For example, the circuit 210B may use the following predetermined formula (formula one):

R_est = a·SUM_ABS + b·SUM_CRD + c

The symbols a, b, and c are fixed-value parameters (a and b being the specific weights), which can be obtained by linear regression or other methods. More specifically, the circuit designer may feed multiple sets of sample data through the data amount calculation circuit 110B of FIG. 1 beforehand to obtain the exact correspondence between SUM_ABS, SUM_CRD, and the actual data amount, and then use linear regression or a similar method to find suitable parameters a, b, and c describing that correspondence. In brief, the specific weights given to SUM_ABS and SUM_CRD may be determined by linear regression or the like. It should be noted that the formula describing the correspondence may also include terms in SUM_ABS and/or SUM_CRD of higher order; it is not limited to first-order and constant terms.
In another embodiment, for each set of two-dimensional transformed quantized residual data, the summing circuit 210E calculates the sum of the abscissa values of the non-zero elements, SUM_CRD_X, and the sum of the ordinate values, SUM_CRD_Y. The data amount estimation circuit 210B assigns a specific weight to each of SUM_ABS, SUM_CRD_X, and SUM_CRD_Y and generates the data amount estimate from the three weighted sums. For example, the circuit 210B may use the following predetermined formula:

R_est = a·SUM_ABS + b1·SUM_CRD_X + b2·SUM_CRD_Y + c

The parameters a, b1, b2, and c can likewise be obtained by linear regression.
As shown in FIG. 2, the data amount estimate generated by the data amount estimation circuit 210B is provided to the evaluation circuit 210D. For the same image block to be encoded, the control circuit 210A controls the intra prediction/motion compensation circuit 201 to try the various intra prediction/motion compensation modes one by one, and the data amount estimation circuit 210B generates a different data amount estimate for each mode.

Similarly, the distortion amount calculation circuit 210C generates a different distortion amount D for each intra prediction/motion compensation mode. More specifically, the inverse quantization circuit 206A and the inverse transform circuit 206B reconstruct the transformed quantized residual data of each mode to generate corresponding restored residual data, and the distortion amount calculation circuit 210C determines D from the difference between the restored residual data and the residual data generated by the residual generation circuit 202. Based on the data amount estimates and distortion amounts D of all intra prediction/motion compensation modes, the evaluation circuit 210D can use the Lagrangian method or a similar method to select, for the current image block, the best mode (i.e., the mode that best combines low data amount and low distortion) from the image processing modes of the intra prediction/motion compensation circuit 201.
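The Lagrangian selection performed by the evaluation circuit 210D can be sketched as minimizing a cost J = D + λ·R_est. The mode names, the multiplier value, and the per-mode numbers below are all hypothetical:

```python
def select_best_mode(modes, lam=0.85):
    """Pick the mode minimizing the Lagrangian cost J = D + lam * R_est.
    lam is a hypothetical Lagrange multiplier; in real codecs it is
    typically derived from the quantization step."""
    return min(modes, key=lambda m: m["D"] + lam * m["R_est"])

# Hypothetical per-mode results (distortion D, data amount estimate R_est).
modes = [
    {"name": "intra_dc",    "D": 120.0, "R_est": 410.0},
    {"name": "intra_plane", "D": 95.0,  "R_est": 520.0},
    {"name": "inter_16x16", "D": 60.0,  "R_est": 610.0},
]
best = select_best_mode(modes)  # lowest cost: 120 + 0.85*410 = 468.5
```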
In one embodiment, the transformed quantized residual data of every mode may be stored in the temporary memory 209 before the evaluation circuit 210D selects the best mode. After the best mode is selected, the temporary memory 209 provides the corresponding transformed quantized residual data to the entropy encoding circuit 204, which entropy-encodes it together with the corresponding intermediate data to generate the encoding result.
In another embodiment, the temporary memory 209 stores only the transformed quantized residual data of the best mode known so far. Whenever the evaluation circuit 210D finds that another intra prediction/motion compensation mode is better, the stored data is replaced by the new mode's transformed quantized residual data. After all intra prediction/motion compensation modes have been tried, the temporary memory 209 holds the transformed quantized residual data of the best mode. This saves hardware space in the temporary memory 209.
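This keep-only-the-best storage strategy amounts to a running minimum. In the sketch below, the mode names, costs, and multiplier are hypothetical, and the short strings stand in for transformed quantized residual buffers:

```python
def encode_block_keep_best(mode_results, lam=0.85):
    """Running-minimum variant of temporary storage 209: retain only the
    transformed quantized residual data of the best mode seen so far."""
    best_cost = float("inf")
    best_name = best_data = None
    for name, D, R_est, tq_data in mode_results:
        cost = D + lam * R_est
        if cost < best_cost:        # a better mode replaces the stored data
            best_cost, best_name, best_data = cost, name, tq_data
    return best_name, best_data

# Hypothetical (mode, D, R_est, buffer) tuples; only one buffer survives.
results = [
    ("intra_dc",    120.0, 410.0, "tq0"),
    ("inter_16x16",  60.0, 610.0, "tq1"),
    ("intra_plane",  95.0, 520.0, "tq2"),
]
name, data = encode_block_keep_best(results)
```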
In yet another embodiment, the temporary memory 209 does not store any transformed quantized residual data before the best mode is selected; it merely records (e.g., by an index) which mode is currently the best. After all intra prediction/motion compensation modes have been tried, the control circuit 210A controls the intra prediction/motion compensation circuit 201, the residual generation circuit 202, the transform circuit 203A, and the quantization circuit 203B to regenerate the transformed quantized residual data of the best mode for the entropy encoding circuit 204 to encode.
As the above description shows, in the moving picture coding system 200 the data amount estimation circuit 210B generates the data amount estimate from the transformed quantized residual data. Compared with the prior art of FIG. 1, the entropy encoding circuit 204 only needs to encode the transformed quantized residual data and intermediate data of the selected best mode, rather than those of every mode. The system 200 can therefore produce the reference data used by the evaluation circuit 210D to select the best mode in less time and with fewer computing resources.
Please refer to FIG. 4. In another embodiment, the data amount estimation circuit 210B also estimates the data amount of the intermediate data of the various intra prediction/motion compensation modes. Suppose a parameter in the intermediate data is represented by a plurality of bits. If some of those bits are classified as bypass data, they cannot be usefully predicted by occurrence probability, so the entropy encoding circuit 204 does not apply the entropy encoding procedure to them. Since it is known which parameter contents of each mode are bypass data and which are non-bypass data, the data amount estimation circuit 210B can calculate the bypass data amount B_P and the non-bypass data amount B_NP of the corresponding intermediate data and generate an intermediate data amount estimate B_est from them.
For example, the data amount estimation circuit 210B may use the following predetermined formula:

B_est = α·B_NP + B_P

where the symbol α is a weighting parameter whose value can be determined empirically by the circuit designer, for example set equal to 1 or slightly less than 1. In practice, if the content of the various intermediate data is fixed, the data amount estimation circuit 210B can obtain the bypass data amount B_P and the non-bypass data amount B_NP by table lookup.
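A sketch of the table-lookup estimation of B_P and B_NP follows. The parameter names, bit counts, and α value are invented for illustration; the formula B_est = α·B_NP + B_P reflects the description that only non-bypass bits are compressed by entropy coding (α at most 1):

```python
# Hypothetical per-parameter bit layout: for each intermediate-data
# parameter, how many of its bits are bypass vs. non-bypass.
BIT_TABLE = {
    "intra_mode": {"bypass": 3, "non_bypass": 2},
    "mvd_x":      {"bypass": 6, "non_bypass": 4},
    "mvd_y":      {"bypass": 6, "non_bypass": 4},
}

def intermediate_estimate(params, alpha=0.9):
    """B_est = alpha * B_NP + B_P: bypass bits pass through uncompressed,
    non-bypass bits are assumed to compress by a factor alpha <= 1."""
    b_p  = sum(BIT_TABLE[p]["bypass"] for p in params)
    b_np = sum(BIT_TABLE[p]["non_bypass"] for p in params)
    return alpha * b_np + b_p

est_intra = intermediate_estimate(["intra_mode"])       # 0.9*2 + 3
est_inter = intermediate_estimate(["mvd_x", "mvd_y"])   # 0.9*8 + 12
```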
In addition to the absolute-value sum SUM_ABS and coordinate-value sum SUM_CRD of each mode, the data amount estimation circuit 210B in FIG. 4 also takes the intermediate data amount estimate B_est into account when generating the data amount estimate for each mode. For example, the circuit 210B may modify formula one as follows:

R_est = a·SUM_ABS + b·SUM_CRD + c + B_est
FIG. 5 shows a variation of the motion picture coding system 200. In this embodiment, the distortion amount calculation circuit 210C is replaced by a distortion amount estimation circuit 210F, whose input signals are the transformed matrix generated by the transform circuit 203A and the inverse quantization result generated by the inverse quantization circuit 206A. For each intra prediction/motion compensation mode, the distortion amount estimation circuit 210F calculates the difference between the inverse quantization result and the transformed matrix as a distortion estimate for the evaluation circuit 210D to use when selecting the best mode. In contrast to the prior art of FIG. 1, the approach of FIG. 5 generates the distortion estimate without performing an inverse transform procedure, shortening the time required and saving further computing resources.
In addition, the distortion amount estimation circuit 210F may be designed to compare only the higher bits of the transformed matrix and its inverse quantization result, ignoring differences in the lower bits. For example, if each element of the transformed matrix and the inverse quantization result is represented by sixteen-bit binary data, the circuit 210F may calculate the difference between only the eight most significant bits of each pair of corresponding elements and ignore the eight least significant bits. This further saves computation time and resources.
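The truncated comparison can be sketched as follows. The coefficient values are hypothetical, and an arithmetic right shift by 8 stands in for comparing only the eight most significant bits of sixteen-bit data:

```python
def distortion_estimate(transformed, dequantized, high_bits_only=False):
    """Sum of absolute differences between the transformed matrix
    (circuit 203A) and its inverse-quantized version (circuit 206A).
    With high_bits_only, only the upper 8 bits of each 16-bit value are
    compared; differences in the lower 8 bits are ignored."""
    d = 0
    for row_t, row_d in zip(transformed, dequantized):
        for a, b in zip(row_t, row_d):
            if high_bits_only:
                a, b = a >> 8, b >> 8   # arithmetic shift keeps the sign
            d += abs(a - b)
    return d

T  = [[300, -120], [40, 7]]   # hypothetical transform coefficients
DQ = [[296, -128], [32, 0]]   # after quantization + inverse quantization
full   = distortion_estimate(T, DQ)                      # 4 + 8 + 8 + 7
coarse = distortion_estimate(T, DQ, high_bits_only=True) # small diffs vanish
```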
In practice, the summing circuit 210E, the data amount estimation circuit 210B, and the distortion amount estimation circuit 210F can be implemented as, but are not limited to, fixed and/or programmable digital logic circuits, including programmable gate arrays, application-specific integrated circuits, microcontrollers, microprocessors, digital signal processors, and other necessary circuits.
Another embodiment of the present invention is a method for encoding a moving image, and a flowchart thereof is shown in fig. 6. First, in step S61, a plurality of intra prediction/motion compensation modes are respectively employed to find a plurality of reference blocks for an image block to be encoded. Next, in step S62, a plurality of sets of residual data corresponding to the to-be-coded image block and the plurality of reference blocks are generated. Subsequently, in step S63, a discrete cosine transform procedure is performed on each set of residual data to generate a transformed matrix. In step S64, a quantization procedure is performed on each transformed matrix to generate a set of transformed quantized residual data. Next, step S65 is to calculate the sum of absolute values of the non-zero elements and the sum of coordinate values of the non-zero elements relative to a reference point for each set of transformed quantized residual data. Step S66 is to generate a data amount estimation value according to the sum of the absolute value and the coordinate value of the transformed and quantized residual data corresponding to each intra prediction/motion compensation mode. In step S67, a best mode is selected from the plurality of intra prediction/motion compensation modes according to the plurality of data volume estimates.
Another embodiment of the present invention is an image processing method, and a flowchart thereof is shown in fig. 7. First, in step S71, a discrete cosine transform process is performed on an image data to generate a transformed matrix. Next, in step S72, a quantization process is performed on the transformed matrix to generate transformed quantized data. Then, in step S73, an inverse quantization procedure is performed on the transformed and quantized data to generate an inverse quantization result. Subsequently, in step S74, a distortion estimation value is determined according to the difference between the transformed matrix and the dequantization result.
The foregoing detailed description of the preferred embodiments is intended to more clearly illustrate the features and spirit of the present invention, and not to limit the scope of the invention by the preferred embodiments disclosed above. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the scope of the claims.
Claims (12)
1. A motion picture encoding device includes:
an intra prediction/motion compensation circuit, which respectively adopts multiple intra prediction/motion compensation modes to find multiple reference blocks for an image block to be coded;
a residual error generating circuit for generating multiple sets of residual error data corresponding to the image block to be encoded and the multiple reference blocks;
a conversion circuit, which performs a discrete cosine conversion procedure for each set of residual data to generate a converted matrix;
a quantization circuit, performing a quantization procedure on each transformed matrix to generate a set of transformed quantized residual data; and
a controller, comprising:
a summation circuit, for each group of the transformed and quantized residual data, calculating the sum of absolute values of the non-zero elements therein and the sum of coordinate values of the non-zero elements with respect to a reference point;
a data amount estimation circuit which, for each intra prediction/motion compensation mode, assigns a specific weight to each of the absolute-value sum and the coordinate-value sum of the corresponding transformed quantized residual data, and then generates a data amount estimation value according to the weighted absolute-value sum and the weighted coordinate-value sum; and
an evaluation circuit selects a best mode from the plurality of intra prediction/motion compensation modes according to a plurality of data amount estimation values.
2. The apparatus of claim 1, wherein the data amount estimation circuit assigns a specific weight to the sum of absolute values and the sum of coordinate values, respectively, to generate a weighted sum of absolute values and a weighted sum of coordinate values; the data amount estimation circuit generates the data amount estimation value according to the weighted sum of absolute values and the weighted sum of coordinate values.
3. The apparatus of claim 1, wherein for each set of transformed quantized residual data, the summing circuit calculates a sum of ordinate values and a sum of abscissa values of non-zero elements therein; the data quantity estimation circuit respectively gives a specific weight to the absolute value sum, the longitudinal coordinate value sum and the transverse coordinate value sum so as to generate a weighted absolute value sum, a weighted longitudinal coordinate value sum and a weighted transverse coordinate value sum; the data amount estimation circuit generates the data amount estimation value according to the weighted sum of absolute values, the weighted sum of vertical coordinate values and the weighted sum of horizontal coordinate values.
4. The apparatus according to claim 1, wherein said data amount estimation circuit further:
for each intra-prediction/motion compensation mode, calculating a bypass data amount and a non-bypass data amount of the corresponding intermediate data, and generating an intermediate data amount estimation value according to the calculated bypass data amount and the non-bypass data amount; and
also takes into account the intermediate data amount estimation value when generating the data amount estimation value for each intra prediction/motion compensation mode.
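The split in claim 4 between bypass and non-bypass data mirrors CABAC-style entropy coding, where a bypass bin costs exactly one bit while a context-coded (non-bypass) bin typically costs less than one bit on average. A minimal sketch, in which the per-bin weights are illustrative assumptions rather than values from the patent:

```python
def estimate_intermediate_data_amount(n_bypass_bins, n_regular_bins,
                                      w_bypass=1.0, w_regular=0.8):
    """Estimate the coded size of a mode's intermediate data (motion
    vectors, mode flags, etc.) from its bypass and non-bypass bin counts.
    w_bypass reflects the fixed one-bit cost of a bypass bin; w_regular
    is an assumed average cost per context-coded bin."""
    return w_bypass * n_bypass_bins + w_regular * n_regular_bins

# Example: 10 bypass bins and 5 context-coded bins.
intermediate_estimate = estimate_intermediate_data_amount(10, 5)
```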
5. The moving image encoding device according to claim 1, further comprising:
an inverse quantization circuit, which performs an inverse quantization procedure on each group of transformed and quantized residual data to generate an inverse quantization result;
and the controller further comprises:
a distortion estimation circuit, which calculates the difference between the inverse quantization result and the transformed matrix as a distortion estimation value for each intra prediction/motion compensation mode;
wherein the evaluation circuit also takes into account the distortion estimate for each intra prediction/motion compensation mode when selecting the best mode.
6. The apparatus of claim 5, wherein, when generating the distortion estimation value for a set of transformed and quantized residual data, the distortion estimation circuit calculates an upper-bit difference between the transformed matrix and its inverse quantization result and ignores the lower-bit difference.
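The upper-bit distortion estimate of claims 5 and 6 can be sketched as a plain difference sum whose low-order bits are shifted away, trading a little precision for narrower arithmetic. The number of ignored low bits here is an illustrative assumption:

```python
def estimate_distortion(transformed, dequantized, ignored_lsbs=2):
    """Distortion estimate between a transformed matrix and its inverse
    quantization result: sum the element-wise absolute differences, but
    keep only the upper bits -- the lowest `ignored_lsbs` bits of each
    difference are discarded."""
    total = 0
    for t_row, d_row in zip(transformed, dequantized):
        for t, d in zip(t_row, d_row):
            total += abs(t - d) >> ignored_lsbs  # upper-bit difference only
    return total

# Example: a 2x2 transformed matrix against its dequantized version.
distortion = estimate_distortion([[100, 3], [0, -40]],
                                 [[96, 0], [0, -32]])
```

Dropping the low bits means small reconstruction errors contribute nothing, which is acceptable for mode comparison because all candidate modes are truncated the same way.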
7. A moving image encoding method, comprising:
(a) using a plurality of intra prediction/motion compensation modes to find a plurality of reference blocks for an image block to be encoded;
(b) generating a plurality of sets of corresponding residual data according to the image block to be encoded and the plurality of reference blocks;
(c) performing a discrete cosine transform procedure on each set of residual data to generate a transformed matrix;
(d) performing a quantization procedure on each transformed matrix to generate a set of transformed and quantized residual data;
(e) calculating the sum of absolute values of the non-zero elements in each set of transformed and quantized residual data and the sum of coordinate values of the non-zero elements relative to a reference point;
(f) for each intra prediction/motion compensation mode, assigning a specific weight to each of the sum of absolute values and the sum of coordinate values of the corresponding transformed and quantized residual data, and then generating a data amount estimation value according to the weighted sum of absolute values and the weighted sum of coordinate values; and
(g) selecting a best mode from the plurality of intra prediction/motion compensation modes according to the plurality of data amount estimation values.
8. The method of claim 7, wherein step (f) comprises, for each set of transform quantized residual data:
respectively assigning a specific weight to the sum of absolute values and the sum of coordinate values to generate a weighted sum of absolute values and a weighted sum of coordinate values; and
the data volume estimation value is generated according to the weighted sum of absolute values and the weighted sum of coordinate values.
9. The method of claim 7, wherein:
for each set of transformed quantized residual data, step (e) comprises calculating a sum of vertical coordinate values and a sum of horizontal coordinate values of non-zero elements therein;
step (f) comprises assigning a specific weight to the sum of absolute values, the sum of vertical coordinate values and the sum of horizontal coordinate values, respectively, to generate a weighted sum of absolute values, a weighted sum of vertical coordinate values and a weighted sum of horizontal coordinate values; and
step (f) includes generating the data volume estimate based on the weighted sum of absolute values, the weighted sum of vertical coordinate values, and the weighted sum of horizontal coordinate values.
10. The method of claim 7, further comprising:
for each intra-prediction/motion compensation mode, calculating a bypass data amount and a non-bypass data amount of the corresponding intermediate data, and generating an intermediate data amount estimation value according to the calculated bypass data amount and the non-bypass data amount;
wherein step (f) also takes into account the intermediate data amount estimation value when generating the data amount estimation value for each intra prediction/motion compensation mode.
11. The method of claim 7, further comprising:
performing an inverse quantization procedure on each set of transformed and quantized residual data to generate an inverse quantization result; and
calculating the difference between the inverse quantization result and the transformed matrix for each intra-frame prediction/motion compensation mode to be used as a distortion estimation value;
wherein step (g) also takes into account the distortion estimation value for each intra prediction/motion compensation mode when selecting the best mode.
12. The method of claim 11, wherein generating the distortion estimation value for a set of transformed and quantized residual data comprises calculating an upper-bit difference between the transformed matrix and its inverse quantization result and ignoring the lower-bit difference.
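Step (g), extended per claim 11 to weigh distortion alongside the data amount estimate, amounts to a rate-distortion style minimization over the candidate modes. The linear combination with a Lagrange-like multiplier below is an illustrative assumption; the patent only states that both estimates are taken into account:

```python
def select_best_mode(data_amount_estimates, distortion_estimates, lmbda=1.0):
    """Pick the index of the intra prediction/motion compensation mode
    minimizing a combined cost of its data amount estimation value and
    its distortion estimation value."""
    costs = [rate + lmbda * dist
             for rate, dist in zip(data_amount_estimates, distortion_estimates)]
    return costs.index(min(costs))  # index of the best mode

# Example: three candidate modes with per-mode rate and distortion estimates.
best = select_best_mode([10, 8, 12], [5, 6, 1])
```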
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710060951.1A CN108347603B (en) | 2017-01-25 | 2017-01-25 | Moving image encoding device and moving image encoding method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108347603A CN108347603A (en) | 2018-07-31 |
CN108347603B true CN108347603B (en) | 2020-08-04 |
Family
ID=62963165
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710060951.1A Expired - Fee Related CN108347603B (en) | 2017-01-25 | 2017-01-25 | Moving image encoding device and moving image encoding method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108347603B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101014128A (en) * | 2007-02-02 | 2007-08-08 | 清华大学 | Method for quick estimating rate and distortion in H.264/AVC video coding |
CN101409845A (en) * | 2008-10-31 | 2009-04-15 | 北京大学软件与微电子学院 | Method and apparatus for estimating video distortion in AVS video encode |
CN101466040A (en) * | 2009-01-09 | 2009-06-24 | 北京大学 | Code rate estimation method for video encoding mode decision |
CN106034235A (en) * | 2015-03-11 | 2016-10-19 | 杭州海康威视数字技术股份有限公司 | Method for calculating coding distortion degree and coding mode control and system thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5741092B2 (en) * | 2011-03-15 | 2015-07-01 | 富士通株式会社 | Image encoding method and image encoding apparatus |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20200413. Address after: No.1, Duhang 1st Road, Hsinchu City, Hsinchu Science Park, Taiwan, China. Applicant after: MEDIATEK Inc. Address before: 1/2, 4th floor, 26 Taiyuan Street, Zhubei City, Hsinchu County, Taiwan, China. Applicant before: MSTAR SEMICONDUCTOR Inc.
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200804. Termination date: 20220125.