CN103765901B - Method and apparatus for image encoding and decoding using intra prediction - Google Patents
Method and apparatus for image encoding and decoding using intra prediction
- Publication number
- CN103765901B CN103765901B CN201280042446.XA CN201280042446A CN103765901B CN 103765901 B CN103765901 B CN 103765901B CN 201280042446 A CN201280042446 A CN 201280042446A CN 103765901 B CN103765901 B CN 103765901B
- Authority
- CN
- China
- Prior art keywords
- unit
- coding
- depth
- coding unit
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
A method and apparatus for performing intra prediction on an image generate a prediction value via linear interpolation in both the horizontal and vertical directions of a current prediction unit. The method includes: generating a first virtual pixel and a second virtual pixel by using at least one neighboring pixel located on the upper-right and lower-left sides of the current prediction unit; obtaining a first prediction value of a current pixel via linear interpolation between the current pixel and a neighboring left pixel located on the same row as the first virtual pixel; obtaining a second prediction value of the current pixel via linear interpolation between the current pixel and a neighboring upper pixel located in the same column as the second virtual pixel; and obtaining a prediction value of the current pixel by using the first prediction value and the second prediction value.
Description
Technical field
The present invention relates to the encoding and decoding of images and, more particularly, to a method and apparatus for intra-prediction encoding and intra-prediction decoding of an image, in which the compression efficiency of the image is improved by using intra prediction modes having various directions and a new intra prediction mode.
Background technology
In image compression methods such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), a picture is divided into macroblocks in order to encode an image. Each macroblock is encoded in all coding modes available for inter prediction or intra prediction, and is then encoded in a coding mode selected according to the bit rate required for encoding the macroblock and the degree of distortion of the decoded macroblock relative to the original macroblock.
As hardware for reproducing and storing high-resolution or high-quality video content is developed and supplied, the need for a video codec capable of effectively encoding or decoding high-resolution or high-quality video content is increasing. In a conventional video codec, video is encoded in units of macroblocks, each having a predetermined size.
The content of the invention
The technical goal of the present invention
The present invention provides a method and apparatus for intra-prediction encoding and intra-prediction decoding of an image, in which coding efficiency is improved according to image characteristics via a new intra prediction method using pixels neighboring a current prediction unit.
Method of achieving the goal of the present invention
The present invention also provides a new intra prediction mode using pixels neighboring a current prediction unit.
Beneficial effect
According to one or more aspects of the present invention, the coding efficiency of an image can be improved by applying an optimal intra prediction method according to image characteristics, from among various intra prediction methods using neighboring pixels.
Brief description of the drawings
Fig. 1 is a block diagram of an apparatus for encoding video, according to an embodiment of the present invention;
Fig. 2 is a block diagram of an apparatus for decoding video, according to an embodiment of the present invention;
Fig. 3 is a diagram for describing the concept of coding units, according to an embodiment of the present invention;
Fig. 4 is a block diagram of an image encoder based on coding units, according to an embodiment of the present invention;
Fig. 5 is a block diagram of an image decoder based on coding units, according to an embodiment of the present invention;
Fig. 6 is a diagram illustrating deeper coding units according to depths, and partitions, according to an embodiment of the present invention;
Fig. 7 is a diagram for describing a relationship between a coding unit and transformation units, according to an embodiment of the present invention;
Fig. 8 is a diagram for describing encoding information of coding units corresponding to a coded depth, according to an embodiment of the present invention;
Fig. 9 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention;
Figs. 10 to 12 are diagrams for describing a relationship between coding units, prediction units, and transformation units, according to an embodiment of the present invention;
Fig. 13 is a diagram for describing a relationship between a coding unit, a prediction unit, and a transformation unit, according to the encoding mode information of Table 1;
Fig. 14 is a table showing the number of intra prediction modes according to the size of a prediction unit, according to an embodiment of the present invention;
Fig. 15 is a reference diagram for describing intra prediction modes having various directions, according to an embodiment of the present invention;
Fig. 16 is a diagram for describing a relationship between a current pixel and neighboring pixels located on an extension line having a directionality of (dx, dy), according to an embodiment of the present invention;
Figs. 17 and 18 are diagrams showing directions of intra prediction modes, according to an embodiment of the present invention;
Fig. 19 is a diagram showing directions of intra prediction modes having 33 directionalities, according to an embodiment of the present invention;
Figs. 20A and 20B are diagrams for describing a planar mode, according to an embodiment of the present invention;
Fig. 21 is a diagram showing filtered neighboring pixels around a current prediction unit, according to an embodiment of the present invention;
Fig. 22 is a reference diagram for describing a filtering process of neighboring pixels;
Fig. 23 is a flowchart illustrating an intra prediction method according to a planar mode, according to an embodiment of the present invention.
Optimal mode
According to an aspect of the present invention, there is provided a method of intra predicting an image, the method including: obtaining a first virtual pixel located on the same row as a current predicted pixel of a current prediction unit and corresponding to a pixel located farthest to the right of the current prediction unit, by using at least one neighboring pixel located on the upper-right side of the current prediction unit; obtaining a second virtual pixel located in the same column as the current predicted pixel and corresponding to a pixel located farthest to the bottom of the current prediction unit, by using at least one neighboring pixel located on the lower-left side of the current prediction unit; obtaining a first prediction value of the current predicted pixel via linear interpolation using the first virtual pixel and a neighboring left pixel located on the same row as the current predicted pixel; obtaining a second prediction value of the current predicted pixel via linear interpolation using the second virtual pixel and a neighboring upper pixel located in the same column as the current predicted pixel; and obtaining a prediction value of the current predicted pixel by using the first prediction value and the second prediction value.
According to another aspect of the present invention, there is provided an apparatus for intra predicting an image, the apparatus including an intra predictor which obtains a first virtual pixel located on the same row as a current predicted pixel of a current prediction unit and corresponding to a pixel located farthest to the right of the current prediction unit, by using at least one neighboring pixel located on the upper-right side of the current prediction unit; obtains a second virtual pixel located in the same column as the current predicted pixel and corresponding to a pixel located farthest to the bottom of the current prediction unit, by using at least one neighboring pixel located on the lower-left side of the current prediction unit; obtains a first prediction value of the current predicted pixel via linear interpolation using the first virtual pixel and a neighboring left pixel located on the same row as the current predicted pixel; obtains a second prediction value of the current predicted pixel via linear interpolation using the second virtual pixel and a neighboring upper pixel located in the same column as the current predicted pixel; and obtains a prediction value of the current predicted pixel by using the first prediction value and the second prediction value.
Embodiment
Hereinafter, the present invention will be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.
Fig. 1 is a block diagram of a video encoding apparatus 100, according to an embodiment of the present invention.
The video encoding apparatus 100 includes a maximum coding unit splitter 110, a coding unit determiner 120, and an output unit 130.
The maximum coding unit splitter 110 may split a current picture of an image based on a maximum coding unit for the current picture. If the current picture is larger than the maximum coding unit, image data of the current picture may be split into at least one maximum coding unit. The maximum coding unit according to an embodiment of the present invention may be a data unit having a size of 32×32, 64×64, 128×128, or 256×256, wherein the shape of the data unit is a square whose width and length are each a power of 2. The image data may be output to the coding unit determiner 120 in units of the at least one maximum coding unit.
A coding unit according to an embodiment of the present invention may be characterized by a maximum size and a depth. The depth denotes the number of times the coding unit is spatially split from the maximum coding unit, and as the depth deepens, deeper coding units according to depths may be split from the maximum coding unit down to a minimum coding unit. The depth of the maximum coding unit is the uppermost depth, and the depth of the minimum coding unit is the lowermost depth. Since the size of the coding unit corresponding to each depth decreases as the depth of the maximum coding unit deepens, a coding unit corresponding to an upper depth may include a plurality of coding units corresponding to lower depths.
As described above, the image data of the current picture is split into maximum coding units according to a maximum size of the coding unit, and each of the maximum coding units may include deeper coding units that are split according to depths. Since the maximum coding unit according to an embodiment of the present invention is split according to depths, the image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.
A maximum depth and a maximum size of a coding unit, which limit the total number of times the height and width of the maximum coding unit are hierarchically split, may be predetermined.
The coding unit determiner 120 encodes at least one split region obtained by splitting a region of the maximum coding unit according to depths, and determines a depth at which a final encoding result is to be output according to the at least one split region. In other words, the coding unit determiner 120 determines a coded depth by encoding the image data in the deeper coding units according to depths, for each maximum coding unit of the current picture, and selecting a depth having the least encoding error. The determined coded depth and the image data according to the determined coded depth are output to the output unit 130.
The image data in the maximum coding unit is encoded based on the deeper coding units corresponding to at least one depth equal to or below the maximum depth, and the results of encoding the image data are compared based on each of the deeper coding units. A depth having the least encoding error may be selected after comparing the encoding errors of the deeper coding units. At least one coded depth may be selected for each maximum coding unit.
The size of the maximum coding unit is split as coding units are hierarchically split according to depths and the number of coding units increases. Also, even if coding units in one maximum coding unit correspond to the same depth, whether to split each of the coding units corresponding to the same depth to a lower depth is determined by separately measuring the encoding error of the image data of each coding unit. Accordingly, even when image data is included in one maximum coding unit, the encoding errors may differ according to regions in the one maximum coding unit, and thus the coded depths may differ according to regions in the image data. Thus, one or more coded depths may be determined in one maximum coding unit, and the image data of the maximum coding unit may be divided according to coding units of at least one coded depth.
Accordingly, the coding unit determiner 120 may determine coding units having a tree structure included in the current maximum coding unit. "Coding units having a tree structure" according to an embodiment of the present invention include coding units corresponding to the depth determined to be the coded depth, from among all deeper coding units included in the current maximum coding unit. A coding unit of a coded depth may be hierarchically determined according to depths in the same region of the maximum coding unit. Similarly, a coded depth in a current region may be determined independently from a coded depth in another region.
A maximum depth according to an embodiment of the present invention is an index related to the number of times splitting is performed from a maximum coding unit to a minimum coding unit. A first maximum depth according to an embodiment of the present invention may denote the total number of times splitting is performed from the maximum coding unit to the minimum coding unit. A second maximum depth according to an embodiment of the present invention may denote the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit obtained by splitting the maximum coding unit once may be set to 1, and the depth of a coding unit obtained by splitting the maximum coding unit twice may be set to 2. Here, if the minimum coding unit is a coding unit obtained by splitting the maximum coding unit four times, five depth levels of depths 0, 1, 2, 3, and 4 exist, and thus the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
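The two depth counts in the example above can be computed as follows (an illustrative sketch; the function name is not from the patent, and square coding units are assumed):

```python
import math

def maximum_depths(max_cu_size, min_cu_size):
    """Return (first_maximum_depth, second_maximum_depth): the number
    of splits from the maximum to the minimum coding unit, and the
    total number of depth levels (one more than the split count)."""
    splits = int(math.log2(max_cu_size // min_cu_size))
    return splits, splits + 1
```

For a 64×64 maximum coding unit split four times down to 4×4, this gives (4, 5), matching the example.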
Prediction encoding and transformation may be performed according to the maximum coding unit. The prediction encoding and the transformation are also performed, for each maximum coding unit, based on the deeper coding units according to a depth equal to or below the maximum depth.
Since the number of deeper coding units increases whenever the maximum coding unit is split according to depths, encoding including the prediction encoding and the transformation is performed on all of the deeper coding units generated as the depth deepens. For convenience of description, the prediction encoding and the transformation will now be described based on a coding unit of a current depth in a maximum coding unit.
The video encoding apparatus 100 may variously select the size or shape of a data unit for encoding the image data. In order to encode the image data, operations such as prediction encoding, transformation, and entropy encoding are performed, and the same data unit may be used for all operations or different data units may be used for each operation.
For example, the video encoding apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit, so as to perform prediction encoding on the image data in the coding unit.
In order to perform prediction encoding in the maximum coding unit, the prediction encoding may be performed based on a coding unit corresponding to a coded depth, i.e., a coding unit that is no longer split into coding units corresponding to a lower depth. Hereinafter, the coding unit that is no longer split and becomes a basis unit for prediction encoding will be referred to as a "prediction unit". A partition obtained by splitting the prediction unit may include the prediction unit and a data unit obtained by splitting at least one of the height and width of the prediction unit.
For example, when a coding unit of 2N×2N (where N is a positive integer) is no longer split and becomes a prediction unit of 2N×2N, the size of a partition may be 2N×2N, 2N×N, N×2N, or N×N. Examples of a partition type include symmetric partitions obtained by symmetrically splitting the height or width of the prediction unit, partitions obtained by asymmetrically splitting the height or width of the prediction unit (such as in a ratio of 1:n or n:1), partitions obtained by geometrically splitting the prediction unit, and partitions having arbitrary shapes.
A prediction mode of the prediction unit may be at least one of an intra mode, an inter mode, and a skip mode. For example, the intra mode or the inter mode may be performed on a partition of 2N×2N, 2N×N, N×2N, or N×N. Also, the skip mode may be performed only on a partition of 2N×2N. The encoding is independently performed on one prediction unit in a coding unit, so that a prediction mode having a least encoding error is selected.
The video encoding apparatus 100 may also perform the transformation on the image data in a coding unit based not only on the coding unit for encoding the image data, but also on a data unit that is different from the coding unit.
In order to perform the transformation in the coding unit, the transformation may be performed based on a data unit having a size smaller than or equal to that of the coding unit. For example, the data unit for the transformation may include a data unit for an intra mode and a data unit for an inter mode.
A data unit used as a base of the transformation will now be referred to as a "transformation unit". Similarly to the coding unit, the transformation unit in the coding unit may be recursively split into smaller-sized regions, so that the transformation unit may be determined independently in units of regions. Thus, residual data in the coding unit may be divided according to transformation units having a tree structure based on transformation depths.
A transformation depth, which indicates the number of splits to reach the transformation unit by splitting the height and width of the coding unit, may also be set in the transformation unit. For example, in a current coding unit of 2N×2N, the transformation depth may be 0 when the size of the transformation unit is 2N×2N, may be 1 when the size of the transformation unit is N×N, and may be 2 when the size of the transformation unit is N/2×N/2. In other words, the transformation unit having a tree structure may be set according to transformation depths.
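The transformation-depth example above amounts to counting halvings from the coding-unit size to the transformation-unit size. A sketch (illustrative, assuming square units whose sizes are powers of two):

```python
import math

def transformation_depth(cu_size, tu_size):
    """Number of times the coding unit's height and width are halved
    to reach the transformation unit: 2Nx2N -> 0, NxN -> 1,
    N/2 x N/2 -> 2."""
    return int(math.log2(cu_size // tu_size))
```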
Encoding information according to coding units corresponding to a coded depth requires not only information about the coded depth, but also information related to prediction encoding and transformation. Accordingly, the coding unit determiner 120 determines not only a coded depth having a least encoding error, but also a partition type in a prediction unit, a prediction mode according to prediction units, and a size of a transformation unit for transformation.
Coding units according to a tree structure in a maximum coding unit and a method of determining a partition, according to embodiments of the present invention, will be described in detail later with reference to Figs. 3 through 12.
The coding unit determiner 120 may measure the encoding error of deeper coding units according to depths by using rate-distortion optimization based on Lagrangian multipliers.
The output unit 130 outputs, in a bitstream, the image data of the maximum coding unit, which is encoded based on the at least one coded depth determined by the coding unit determiner 120, and information about the encoding mode according to the coded depth.
The encoded image data may be obtained by encoding residual data of an image.
The information about the encoding mode according to the coded depth may include information about the coded depth, information about the partition type in the prediction unit, information about the prediction mode, and information about the size of the transformation unit.
The information about the coded depth may be defined by using split information according to depths, which indicates whether encoding is performed on coding units of a lower depth instead of a current depth. If the current depth of the current coding unit is the coded depth, the image data in the current coding unit is encoded and output, and thus the split information may be defined not to split the current coding unit to a lower depth. Alternatively, if the current depth of the current coding unit is not the coded depth, the encoding is performed on the coding units of the lower depth, and thus the split information may be defined to split the current coding unit to obtain the coding units of the lower depth.
If the current depth is not the coded depth, encoding is performed on the coding units split into coding units of the lower depth. Since at least one coding unit of the lower depth exists in one coding unit of the current depth, the encoding is repeatedly performed on each coding unit of the lower depth, and thus the encoding may be recursively performed on coding units having the same depth.
Since the coding units having a tree structure are determined for one maximum coding unit, and information about at least one encoding mode is determined for each coding unit of a coded depth, information about at least one encoding mode may be determined for one maximum coding unit. Also, since the image data of the maximum coding unit is hierarchically split according to depths, the coded depth of the image data may differ according to location, and thus information about the coded depth and the encoding mode may be set for the image data.
Accordingly, the output unit 130 may assign encoding information about a corresponding coded depth and an encoding mode to at least one of the coding unit, the prediction unit, and a minimum unit included in the maximum coding unit.
The minimum unit according to an embodiment of the present invention is a square data unit obtained by splitting the minimum coding unit constituting the lowermost depth by 4. Alternatively, the minimum unit may be a maximum square data unit that may be included in all of the coding units, prediction units, and transformation units included in the maximum coding unit.
For example, the encoding information output through the output unit 130 may be classified into encoding information according to coding units and encoding information according to prediction units. The encoding information according to coding units may include the information about the prediction mode and the information about the size of the partitions. The encoding information according to prediction units may include information about an estimated direction of an inter mode, information about a reference image index of the inter mode, information about a motion vector, information about a chroma component of an intra mode, and information about an interpolation method of the intra mode. Also, information about a maximum size of the coding unit defined according to pictures, slices, or GOPs, and information about a maximum depth, may be inserted into a header of the bitstream.
In the video encoding apparatus 100, the deeper coding unit may be a coding unit obtained by dividing the height or width of a coding unit of an upper depth, which is one layer above, by two. In other words, when the size of the coding unit of the current depth is 2N×2N, the size of the coding unit of the lower depth is N×N. Also, the coding unit of the current depth having the size of 2N×2N may include a maximum of 4 coding units of the lower depth.
Accordingly, the video encoding apparatus 100 may form the coding units having the tree structure by determining coding units having an optimum shape and an optimum size for each maximum coding unit, based on the size of the maximum coding unit and the maximum depth determined considering characteristics of the current picture. Also, since encoding may be performed on each maximum coding unit by using any of various prediction modes and transformations, an optimum encoding mode may be determined considering characteristics of coding units of various image sizes.
Thus, if an image having a high resolution or a large amount of data is encoded in units of conventional macroblocks, the number of macroblocks per picture excessively increases. Accordingly, the number of pieces of compressed information generated for each macroblock increases, and thus it is difficult to transmit the compressed information, and data compression efficiency decreases. However, by using the video encoding apparatus 100, image compression efficiency may be increased since the maximum size of a coding unit is increased while considering the size of the image, and the coding unit is adjusted while considering characteristics of the image.
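The macroblock-count claim above can be illustrated with a rough calculation for a 1920×1080 picture. This sketch is not from the patent; it assumes edge blocks are padded to full block size.

```python
import math

def block_count(width: int, height: int, block: int) -> int:
    """Number of block x block units covering a width x height picture,
    rounding partial edge blocks up."""
    return math.ceil(width / block) * math.ceil(height / block)

mb = block_count(1920, 1080, 16)   # conventional 16x16 macroblocks
lcu = block_count(1920, 1080, 64)  # 64x64 maximum coding units
print(mb, lcu)  # 8160 510
```

Per picture, 64×64 maximum coding units yield roughly 16× fewer top-level units than 16×16 macroblocks, which is the source of the compression-efficiency argument.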
Fig. 2 is a block diagram of a video decoding apparatus 200, according to an embodiment of the present invention.
The video decoding apparatus 200 includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230. Definitions of various terms, such as a coding unit, a depth, a prediction unit, a transformation unit, and information about various encoding modes, for various operations of the video decoding apparatus 200 are identical to those described with reference to Fig. 1 and the video encoding apparatus 100.
The receiver 210 receives and parses a bitstream of an encoded video. The image data and encoding information extractor 220 extracts encoded image data for each coding unit from the parsed bitstream, wherein the coding units have a tree structure according to each maximum coding unit, and outputs the extracted image data to the image data decoder 230. The image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of a current picture from a header about the current picture.
Also, the image data and encoding information extractor 220 extracts, from the parsed bitstream, information about a coded depth and an encoding mode for the coding units having the tree structure according to each maximum coding unit. The extracted information about the coded depth and the encoding mode is output to the image data decoder 230. In other words, the image data in the bitstream is split into the maximum coding units so that the image data decoder 230 decodes the image data for each maximum coding unit.
The information about the coded depth and the encoding mode according to the maximum coding unit may be set for information about at least one coding unit corresponding to the coded depth, and the information about the encoding mode may include information about a partition type of a corresponding coding unit corresponding to the coded depth, about a prediction mode, and about a size of a transformation unit. Also, splitting information according to depths may be extracted as the information about the coded depth.
The information about the coded depth and the encoding mode according to each maximum coding unit extracted by the image data and encoding information extractor 220 is information about a coded depth and an encoding mode determined to generate a minimum encoding error when an encoding side, such as the apparatus 100, repeatedly performs encoding for each deeper coding unit according to the maximum coding unit. Accordingly, the video decoding apparatus 200 may restore an image by decoding data according to the coded depth and the encoding mode that generates the minimum encoding error.
Since the encoding information about the coded depth and the encoding mode may be assigned to a predetermined data unit from among a corresponding coding unit, a prediction unit, and a minimum unit, the image data and encoding information extractor 220 may extract the information about the coded depth and the encoding mode according to the predetermined data units. If information about a coded depth and an encoding mode of a corresponding maximum coding unit is recorded according to predetermined data units, the predetermined data units to which the same information about the coded depth and the encoding mode is assigned may be inferred to be the data units included in the same maximum coding unit.
The image data decoder 230 restores the current picture by decoding the image data in each maximum coding unit based on the information about the coded depth and the encoding mode according to the maximum coding units. In other words, the image data decoder 230 may decode the encoded image data based on the extracted information about the partition type, the prediction mode, and the transformation unit for each coding unit from among the coding units having the tree structure included in each maximum coding unit. A decoding process may include prediction, including intra prediction and motion compensation, and inverse transformation.
The image data decoder 230 may perform intra prediction or motion compensation according to a partition and a prediction mode of each coding unit, based on the information about the partition type and the prediction mode of the prediction unit of the coding unit according to coded depths.
In addition, the image data decoder 230 may perform inverse transformation according to each transformation unit in the coding unit, based on the information about the size of the transformation unit of the coding unit according to coded depths, so as to perform the inverse transformation according to maximum coding units.
The image data decoder 230 may determine at least one coded depth of a current maximum coding unit by using split information according to depths. If the split information indicates that image data is no longer split at the current depth, the current depth is a coded depth. Accordingly, the image data decoder 230 may decode encoded data of at least one coding unit corresponding to each coded depth in the current maximum coding unit by using the information about the partition type of the prediction unit, the prediction mode, and the size of the transformation unit for each coding unit corresponding to the coded depth.
In other words, data units containing the encoding information including the same split information may be gathered by observing the encoding information assigned to the predetermined data unit from among the coding unit, the prediction unit, and the minimum unit, and the gathered data units may be considered to be one data unit to be decoded by the image data decoder 230 in the same encoding mode.
The video decoding apparatus 200 may obtain information about at least one coding unit that generates the minimum encoding error when encoding is recursively performed for each maximum coding unit, and may use the information to decode the current picture. In other words, the coding units having the tree structure determined to be the optimum coding units in each maximum coding unit may be decoded.
Accordingly, even if image data has a high resolution or a large amount of data, the image data may be efficiently decoded and restored by using a size of a coding unit and an encoding mode, which are adaptively determined according to characteristics of the image data, by using information about an optimum encoding mode received from an encoder.
A method of determining coding units having a tree structure, a prediction unit, and a transformation unit, according to an embodiment of the present invention, will now be described in detail with reference to Figs. 3 through 13.
Fig. 3 is a diagram for describing a concept of coding units according to an embodiment of the present invention.
A size of a coding unit may be expressed in width × height, and may be 64×64, 32×32, 16×16, and 8×8. A coding unit of 64×64 may be split into partitions of 64×64, 64×32, 32×64, or 32×32, a coding unit of 32×32 may be split into partitions of 32×32, 32×16, 16×32, or 16×16, a coding unit of 16×16 may be split into partitions of 16×16, 16×8, 8×16, or 8×8, and a coding unit of 8×8 may be split into partitions of 8×8, 8×4, 4×8, or 4×4.
In video data 310, a resolution is 1920×1080, a maximum size of a coding unit is 64, and a maximum depth is 2. In video data 320, a resolution is 1920×1080, a maximum size of a coding unit is 64, and a maximum depth is 3. In video data 330, a resolution is 352×288, a maximum size of a coding unit is 16, and a maximum depth is 1. The maximum depth shown in Fig. 3 denotes the total number of splits from a maximum coding unit to a minimum coding unit.
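Under the convention just stated (maximum depth = total number of splits), the long-axis sizes available in each of the three video data examples can be enumerated; this is an illustrative sketch, not patent text, and the function name is hypothetical.

```python
# Illustrative: enumerate deeper coding-unit sizes given a maximum
# coding-unit size and a maximum depth counted as the number of splits.

def deeper_sizes(max_size: int, max_depth: int) -> list[int]:
    """Long-axis sizes from the maximum coding unit down to the
    minimum coding unit, one halving per depth level."""
    return [max_size >> d for d in range(max_depth + 1)]

print(deeper_sizes(64, 2))  # [64, 32, 16]    -> video data 310
print(deeper_sizes(64, 3))  # [64, 32, 16, 8] -> video data 320
print(deeper_sizes(16, 1))  # [16, 8]         -> video data 330
```

These lists match the coding units 315, 325, and 335 described for Fig. 3.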
If a resolution is high or a data amount is large, a maximum size of a coding unit may be large so as to not only increase encoding efficiency but also to accurately reflect characteristics of an image. Accordingly, the maximum size of the coding unit of the video data 310 and 320 having a higher resolution than the video data 330 may be 64.
Since the maximum depth of the video data 310 is 2, coding units 315 of the video data 310 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32 and 16, since depths are deepened by two layers by splitting the maximum coding unit twice. Meanwhile, since the maximum depth of the video data 330 is 1, coding units 335 of the video data 330 may include a maximum coding unit having a long-axis size of 16, and coding units having a long-axis size of 8, since the depth is deepened by one layer by splitting the maximum coding unit once.
Since the maximum depth of the video data 320 is 3, coding units 325 of the video data 320 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32, 16, and 8, since the depths are deepened by 3 layers by splitting the maximum coding unit three times. As a depth deepens, detailed information may be expressed more precisely.
Fig. 4 is a block diagram of an image encoder 400 based on coding units, according to an embodiment of the present invention.
The image encoder 400 performs the operations of the coding unit determiner 120 of the video encoding apparatus 100 to encode image data. In other words, an intra predictor 410 performs intra prediction on coding units in an intra mode in a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter prediction and motion compensation on coding units in an inter mode by using the current frame 405 and a reference frame 495.
Data output from the intra predictor 410, the motion estimator 420, and the motion compensator 425 is output as a quantized transformation coefficient through a transformer 430 and a quantizer 440. The quantized transformation coefficient is restored to data in a spatial domain through an inverse quantizer 460 and an inverse transformer 470, and the restored data in the spatial domain is output as the reference frame 495 after being post-processed through a deblocking unit 480 and a loop filtering unit 490. The quantized transformation coefficient may be output as a bitstream 455 through an entropy encoder 450.
In order for the image encoder 400 to be applied in the video encoding apparatus 100, all elements of the image encoder 400, i.e., the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490, perform operations based on each coding unit from among the coding units having the tree structure while considering the maximum depth of each maximum coding unit.
Specifically, the intra predictor 410, the motion estimator 420, and the motion compensator 425 determine partitions and a prediction unit of each coding unit from among the coding units having the tree structure while considering the maximum size and the maximum depth of a current maximum coding unit, and the transformer 430 determines the size of the transformation unit in each coding unit from among the coding units having the tree structure.
Fig. 5 is a block diagram of an image decoder 500 based on coding units, according to an embodiment of the present invention.
A parser 510 parses, from a bitstream 505, encoded image data to be decoded and information about encoding required for decoding. The encoded image data is output as inverse-quantized data through an entropy decoder 520 and an inverse quantizer 530, and the inverse-quantized data is restored to image data in a spatial domain through an inverse transformer 540.
With respect to the image data in the spatial domain, an intra predictor 550 performs intra prediction on coding units in an intra mode, and a motion compensator 560 performs motion compensation on coding units in an inter mode by using a reference frame 585.
The image data in the spatial domain, which has passed through the intra predictor 550 and the motion compensator 560, may be output as a restored frame 595 after being post-processed through a deblocking unit 570 and a loop filtering unit 580. Also, the image data post-processed through the deblocking unit 570 and the loop filtering unit 580 is output as the reference frame 585.
In order to decode the image data in the image data decoder 230 of the video decoding apparatus 200, the image decoder 500 may perform the operations that are performed after the parser 510.
In order for the image decoder 500 to be applied in the video decoding apparatus 200, all elements of the image decoder 500 (i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580) perform operations based on coding units having a tree structure for each maximum coding unit.
Specifically, the intra predictor 550 and the motion compensator 560 perform operations based on partitions and a prediction mode for each of the coding units having the tree structure, and the inverse transformer 540 performs operations based on a size of a transformation unit for each coding unit.
Fig. 6 is a diagram illustrating deeper coding units according to depths, and partitions, according to an embodiment of the present invention.
The video encoding apparatus 100 and the video decoding apparatus 200 use hierarchical coding units so as to consider characteristics of an image. A maximum height, a maximum width, and a maximum depth of coding units may be adaptively determined according to the characteristics of the image, or may be set differently by a user. Sizes of deeper coding units according to depths may be determined according to the predetermined maximum size of the coding unit.
In a hierarchical structure 600 of coding units, according to an embodiment of the present invention, the maximum height and the maximum width of the coding units are each 64, and the maximum depth is 4. Since a depth deepens along a vertical axis of the hierarchical structure 600, a height and a width of each deeper coding unit are split. Also, a prediction unit and partitions, which are bases for prediction encoding of each deeper coding unit, are shown along a horizontal axis of the hierarchical structure 600.
In other words, a coding unit 610 is a maximum coding unit in the hierarchical structure 600, wherein a depth is 0 and a size (i.e., height by width) is 64×64. The depth deepens along the vertical axis, and there exist a coding unit 620 having a size of 32×32 and a depth of 1, a coding unit 630 having a size of 16×16 and a depth of 2, a coding unit 640 having a size of 8×8 and a depth of 3, and a coding unit 650 having a size of 4×4 and a depth of 4. The coding unit 650 having the size of 4×4 and the depth of 4 is a minimum coding unit.
The prediction unit and the partitions of a coding unit are arranged along the horizontal axis according to each depth. In other words, if the coding unit 610 having the size of 64×64 and the depth of 0 is a prediction unit, the prediction unit may be split into partitions included in the coding unit 610, i.e., a partition 610 having a size of 64×64, partitions 612 having a size of 64×32, partitions 614 having a size of 32×64, or partitions 616 having a size of 32×32.
Similarly, a prediction unit of the coding unit 620 having the size of 32×32 and the depth of 1 may be split into partitions included in the coding unit 620, i.e., a partition 620 having a size of 32×32, partitions 622 having a size of 32×16, partitions 624 having a size of 16×32, and partitions 626 having a size of 16×16.
Similarly, a prediction unit of the coding unit 630 having the size of 16×16 and the depth of 2 may be split into partitions included in the coding unit 630, i.e., a partition having a size of 16×16 included in the coding unit 630, partitions 632 having a size of 16×8, partitions 634 having a size of 8×16, and partitions 636 having a size of 8×8.
Similarly, a prediction unit of the coding unit 640 having the size of 8×8 and the depth of 3 may be split into partitions included in the coding unit 640, i.e., a partition having a size of 8×8 included in the coding unit 640, partitions 642 having a size of 8×4, partitions 644 having a size of 4×8, and partitions 646 having a size of 4×4.
The coding unit 650 having the size of 4×4 and the depth of 4 is the minimum coding unit and a coding unit of the lowermost depth. A prediction unit of the coding unit 650 is only assigned to a partition having a size of 4×4.
In order to determine the at least one coded depth of the coding units constituting the maximum coding unit 610, the coding unit determiner 120 of the video encoding apparatus 100 performs encoding for the coding units corresponding to each depth included in the maximum coding unit 610.
The number of deeper coding units according to depths that include data in the same range and of the same size increases as the depth deepens. For example, four coding units corresponding to a depth of 2 are required to cover data that is included in one coding unit corresponding to a depth of 1. Accordingly, in order to compare encoding results of the same data according to depths, the coding unit corresponding to the depth of 1 and the four coding units corresponding to the depth of 2 are each encoded.
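The growth in the number of units just described is quadratic per depth level; the following one-liner (an illustrative sketch with a hypothetical name, not patent text) makes the relationship explicit.

```python
# Illustrative: number of deeper coding units needed to cover the data
# of a single shallower coding unit, for a given depth difference.

def units_covering(depth_delta: int) -> int:
    """Each split replaces one unit with 4; depth_delta splits give 4^delta."""
    return 4 ** depth_delta

print(units_covering(1))  # 4: four depth-2 units cover one depth-1 unit
print(units_covering(2))  # 16
```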
In order to perform encoding for a current depth from among the depths, a least encoding error may be selected for the current depth by performing encoding for each prediction unit in the coding units corresponding to the current depth, along the horizontal axis of the hierarchical structure 600. Alternatively, the minimum encoding error may be searched for by comparing the least encoding errors according to depths, by performing encoding for each depth as the depth deepens along the vertical axis of the hierarchical structure 600. A depth and a partition having the minimum encoding error in the coding unit 610 may be selected as the coded depth and a partition type of the coding unit 610.
Fig. 7 is a diagram for describing a relationship between a coding unit 710 and transformation units 720, according to an embodiment of the present invention.
The video encoding apparatus 100 or the video decoding apparatus 200 encodes or decodes an image according to coding units having sizes smaller than or equal to a maximum coding unit, for each maximum coding unit. Sizes of transformation units for transformation during encoding may be selected based on data units that are not larger than a corresponding coding unit.
For example, in the video encoding apparatus 100 or the video decoding apparatus 200, if a size of the coding unit 710 is 64×64, transformation may be performed by using the transformation units 720 having a size of 32×32.
Also, data of the coding unit 710 having the size of 64×64 may be encoded by performing the transformation on each of the transformation units having sizes of 32×32, 16×16, 8×8, and 4×4, which are smaller than 64×64, and then a transformation unit having the least error may be selected.
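The selection step above (try each candidate transformation-unit size, keep the one with the least error) can be sketched as follows. This is not the patent's method in detail: the `error_of` cost function is a hypothetical stand-in for whatever error measure the encoder uses, and the toy model in the example is purely illustrative.

```python
# Illustrative sketch of least-error transform-unit selection.
# `error_of` is a hypothetical callback returning the encoding error
# obtained with a given transform-unit size.

def select_transform_unit(cu_size, candidate_sizes, error_of):
    """Pick the candidate size (no larger than the coding unit)
    that minimizes the supplied error measure."""
    valid = [s for s in candidate_sizes if s <= cu_size]
    return min(valid, key=error_of)

# Toy error model: pretend 16x16 happens to give the least error.
best = select_transform_unit(64, [32, 16, 8, 4], error_of=lambda s: abs(s - 16))
print(best)  # 16 under this toy model
```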
Fig. 8 is a diagram for describing encoding information of coding units corresponding to a coded depth, according to an embodiment of the present invention.
The output unit 130 of the video encoding apparatus 100 may encode and transmit information 800 about a partition type, information 810 about a prediction mode, and information 820 about a size of a transformation unit for each coding unit corresponding to a coded depth, as information about an encoding mode.
The information 800 indicates information about a shape of a partition obtained by splitting a prediction unit of a current coding unit, wherein the partition is a data unit for prediction-encoding the current coding unit. For example, a current coding unit CU_0 having a size of 2N×2N may be split into any one of a partition 802 having a size of 2N×2N, a partition 804 having a size of 2N×N, a partition 806 having a size of N×2N, and a partition 808 having a size of N×N. Here, the information 800 about the partition type is set to indicate one of the partition 804 having the size of 2N×N, the partition 806 having the size of N×2N, and the partition 808 having the size of N×N.
The information 810 indicates a prediction mode of each partition. For example, the information 810 may indicate a mode of prediction encoding performed on a partition indicated by the information 800, i.e., an intra mode 812, an inter mode 814, or a skip mode 816.
The information 820 indicates a transformation unit to be based on when transformation is performed on a current coding unit. For example, the transformation unit may be a first intra transformation unit 822, a second intra transformation unit 824, a first inter transformation unit 826, or a second inter transformation unit 828.
The image data and encoding information extractor 220 of the video decoding apparatus 200 may extract and use the information 800, 810, and 820 for decoding, according to each deeper coding unit.
Fig. 9 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention.
Split information may be used to indicate a change of a depth. The split information indicates whether a coding unit of a current depth is split into coding units of a lower depth.
A prediction unit 910 for prediction-encoding a coding unit 900 having a depth of 0 and a size of 2N_0×2N_0 may include partitions of a partition type 912 having a size of 2N_0×2N_0, a partition type 914 having a size of 2N_0×N_0, a partition type 916 having a size of N_0×2N_0, and a partition type 918 having a size of N_0×N_0. Fig. 9 only illustrates the partition types 912 through 918, which are obtained by symmetrically splitting the prediction unit 910, but a partition type is not limited thereto, and the partitions of the prediction unit 910 may include asymmetrical partitions, partitions having a predetermined shape, and partitions having a geometrical shape.
Prediction encoding is repeatedly performed on one partition having a size of 2N_0×2N_0, two partitions having a size of 2N_0×N_0, two partitions having a size of N_0×2N_0, and four partitions having a size of N_0×N_0, according to each partition type. The prediction encoding in an intra mode and an inter mode may be performed on the partitions having the sizes of 2N_0×2N_0, N_0×2N_0, 2N_0×N_0, and N_0×N_0. The prediction encoding in a skip mode is performed only on the partition having the size of 2N_0×2N_0.
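The partition counts per type just listed (1, 2, 2, and 4 partitions) can be checked with a short sketch; this is illustrative only, and the function and key names are hypothetical.

```python
# Illustrative: partition shapes (width, height) and counts per partition
# type for a coding unit of size 2N_0 x 2N_0; every type tiles the unit.

def partitions(n0: int):
    s = 2 * n0
    return {
        "2Nx2N": ((s, s), 1),
        "2NxN":  ((s, n0), 2),
        "Nx2N":  ((n0, s), 2),
        "NxN":   ((n0, n0), 4),
    }

for name, ((w, h), count) in partitions(16).items():
    # Each partition type exactly covers the 32x32 coding unit.
    assert w * h * count == 32 * 32
    print(name, (w, h), count)
```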
If an encoding error is smallest in one of the partition types 912 through 916, the prediction unit 910 may not be split into a lower depth.
If the encoding error is the smallest in the partition type 918, a depth is changed from 0 to 1 to split the partition type 918 in operation 920, and encoding is repeatedly performed on coding units 930 having a depth of 2 and a size of N_0×N_0 to search for a minimum encoding error.
A prediction unit 940 for prediction-encoding the coding unit 930 having a depth of 1 and a size of 2N_1×2N_1 (=N_0×N_0) may include partitions of a partition type 942 having a size of 2N_1×2N_1, a partition type 944 having a size of 2N_1×N_1, a partition type 946 having a size of N_1×2N_1, and a partition type 948 having a size of N_1×N_1.
If an encoding error is the smallest in the partition type 948, a depth is changed from 1 to 2 to split the partition type 948 in operation 950, and encoding is repeatedly performed on coding units 960 having a depth of 2 and a size of N_2×N_2 to search for a minimum encoding error.
When a maximum depth is d, split operations according to each depth may be performed up to when a depth becomes d-1, and split information may be encoded up to when a depth is one of 0 to d-2. In other words, when encoding is performed up to when the depth is d-1 after a coding unit corresponding to a depth of d-2 is split in operation 970, a prediction unit 990 for prediction-encoding a coding unit 980 having a depth of d-1 and a size of 2N_(d-1)×2N_(d-1) may include partitions of a partition type 992 having a size of 2N_(d-1)×2N_(d-1), a partition type 994 having a size of 2N_(d-1)×N_(d-1), a partition type 996 having a size of N_(d-1)×2N_(d-1), and a partition type 998 having a size of N_(d-1)×N_(d-1).
Prediction encoding may be repeatedly performed on one partition having a size of 2N_(d-1)×2N_(d-1), two partitions having a size of 2N_(d-1)×N_(d-1), two partitions having a size of N_(d-1)×2N_(d-1), and four partitions having a size of N_(d-1)×N_(d-1) from among the partition types 992 through 998, to search for a partition type having a minimum encoding error.
Even when the partition type 998 has the minimum encoding error, since a maximum depth is d, a coding unit CU_(d-1) having a depth of d-1 is no longer split to a lower depth, and a coded depth for the coding units constituting a current maximum coding unit 900 is determined to be d-1, and a partition type of the current maximum coding unit 900 may be determined to be N_(d-1)×N_(d-1). Also, since the maximum depth is d and a minimum coding unit 980 having a lowermost depth of d-1 is no longer split to a lower depth, split information for the minimum coding unit 980 is not set.
A data unit 999 may be a "minimum unit" for the current maximum coding unit. A minimum unit according to an embodiment of the present invention may be a square data unit obtained by splitting a minimum coding unit by 4. By performing the encoding repeatedly, the video encoding apparatus 100 may select a depth having the least encoding error by comparing encoding errors according to depths of the coding unit 900 to determine a coded depth, and set a corresponding partition type and a prediction mode as an encoding mode of the coded depth.
As such, the minimum encoding errors according to depths are compared in all of the depths of 1 through d, and a depth having the least encoding error may be determined as a coded depth. The coded depth, the partition type of the prediction unit, and the prediction mode may be encoded and transmitted as information about an encoding mode. Also, since a coding unit is split from a depth of 0 to a coded depth, only split information of the coded depth is set to 0, and split information of depths excluding the coded depth is set to 1.
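The split-information convention (split information 1 means "split further", and only the coded depth carries a 0) lets a decoder walk down the depths until the first 0. A minimal sketch, not patent text, with a hypothetical function name:

```python
# Illustrative: recover the coded depth from per-depth split information,
# where 1 = split into the lower depth and 0 = stop (coded depth).

def coded_depth(split_flags):
    for depth, flag in enumerate(split_flags):
        if flag == 0:
            return depth
    # All flags are 1: the unit is split down to the maximum depth,
    # where no further split information is set.
    return len(split_flags)

print(coded_depth([1, 1, 0]))  # 2: split twice, then stop
print(coded_depth([0]))        # 0: the maximum coding unit itself
```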
The image data and encoding information extractor 220 of the video decoding apparatus 200 may extract and use the information about the coded depth and the prediction unit of the coding unit 900 to decode the partition 912. The video decoding apparatus 200 may determine a depth in which split information is "0" as a coded depth by using split information according to depths, and use information about an encoding mode of the corresponding depth for decoding.
Figs. 10 through 12 are diagrams for describing a relationship between coding units 1010, prediction units 1060, and transformation units 1070, according to an embodiment of the present invention.
The coding units 1010 are coding units having a tree structure corresponding to coded depths determined by the video encoding apparatus 100, in a maximum coding unit 1000. The prediction units 1060 are partitions of prediction units of each of the coding units 1010, and the transformation units 1070 are transformation units of each of the coding units 1010.
When a depth of the maximum coding unit is 0 in the coding units 1010, depths of coding units 1012 and 1054 are 1, depths of coding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, depths of coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 are 3, and depths of coding units 1040, 1042, 1044, and 1046 are 4.
In the prediction units 1060, some coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are obtained by splitting the coding units in the coding units 1010. In other words, partition types in the coding units 1014, 1022, 1050, and 1054 have a size of 2N×N, partition types in the coding units 1016, 1048, and 1052 have a size of N×2N, and a partition type of the coding unit 1032 has a size of N×N. Prediction units and partitions of the coding units 1010 are smaller than or equal to each coding unit.
Transformation or inverse transformation is performed on image data of the coding unit 1052 in the transformation units 1070 in a data unit that is smaller than the coding unit 1052. Also, the transformation units 1014, 1016, 1022, 1032, 1048, 1050, and 1052 in the transformation units 1070 are different from those in the prediction units 1060 in terms of sizes and shapes. In other words, the video encoding apparatus 100 and the video decoding apparatus 200 may perform intra prediction, motion estimation, motion compensation, transformation, and inverse transformation individually on a data unit in the same coding unit.
Accordingly, since encoding is recursively performed on each of coding units having a hierarchical structure in each region of a maximum coding unit to determine an optimum coding unit, coding units having a recursive tree structure may be obtained. Encoding information may include split information about a coding unit, information about a partition type, information about a prediction mode, and information about a size of a transformation unit. Table 1 shows the encoding information that may be set by the video encoding apparatus 100 and the video decoding apparatus 200.
Table 1
The output unit 130 of the video encoding apparatus 100 may output the encoding information about the coding units having a tree structure, and the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract, from a received bitstream, the encoding information about the coding units having a tree structure.
Split information indicates whether a current coding unit is split into coding units of a lower depth. If split information of a current depth d is 0, a depth in which the current coding unit is no longer split into a lower coding unit is a coded depth, and thus information about a partition type, a prediction mode, and a size of a transformation unit may be defined for the coded depth. If the current coding unit is further split according to the split information, encoding is independently performed on four split coding units of the lower depth.
A prediction mode may be one of an intra mode, an inter mode, and a skip mode. The intra mode and the inter mode may be defined in all partition types, and the skip mode may be defined only in a partition type having a size of 2N×2N.
The information about the partition type may indicate symmetrical partition types having sizes of 2N×2N, 2N×N, N×2N, and N×N, which are obtained by symmetrically splitting a height or a width of a prediction unit, and asymmetrical partition types having sizes of 2N×nU, 2N×nD, nL×2N, and nR×2N, which are obtained by asymmetrically splitting the height or the width of the prediction unit. The asymmetrical partition types having the sizes of 2N×nU and 2N×nD may be respectively obtained by splitting the height of the prediction unit in ratios of 1:3 and 3:1, and the asymmetrical partition types having the sizes of nL×2N and nR×2N may be respectively obtained by splitting the width of the prediction unit in ratios of 1:3 and 3:1.
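The 1:3 and 3:1 splits just described can be made concrete by computing the partition dimensions for each asymmetrical type; this sketch is illustrative (hypothetical function name), not patent text.

```python
# Illustrative: partition (width, height) pairs for the asymmetrical
# partition types of a 2N x 2N prediction unit. 2NxnU/2NxnD split the
# height in 1:3 / 3:1; nLx2N/nRx2N split the width in 1:3 / 3:1.

def asymmetric_partitions(n: int):
    s = 2 * n
    quarter, three_quarter = s // 4, 3 * s // 4
    return {
        "2NxnU": [(s, quarter), (s, three_quarter)],
        "2NxnD": [(s, three_quarter), (s, quarter)],
        "nLx2N": [(quarter, s), (three_quarter, s)],
        "nRx2N": [(three_quarter, s), (quarter, s)],
    }

print(asymmetric_partitions(16)["2NxnU"])  # [(32, 8), (32, 24)]
```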
The size of the transformation unit may be set to be two types in the intra mode and two types in the inter mode. In other words, if split information of the transformation unit is 0, the size of the transformation unit may be 2N×2N, which is the size of the current coding unit. If the split information of the transformation unit is 1, the transformation units may be obtained by splitting the current coding unit. Also, if a partition type of the current coding unit having the size of 2N×2N is a symmetrical partition type, the size of the transformation unit may be N×N, and if the partition type of the current coding unit is an asymmetrical partition type, the size of the transformation unit may be N/2×N/2.
The coding information about coding units having a tree structure may include at least one of a coding unit corresponding to a coded depth, a prediction unit, and a minimum unit. The coding unit corresponding to the coded depth may include at least one of a prediction unit and a minimum unit containing the same coding information.
Accordingly, whether adjacent data units are included in the same coding unit corresponding to the coded depth is determined by comparing the coding information of the adjacent data units. Also, a corresponding coding unit of a coded depth is determined by using the coding information of a data unit, and thus a distribution of coded depths in a maximum coding unit may be determined.
Accordingly, if a current coding unit is predicted based on the coding information of adjacent data units, the coding information of data units in deeper coding units adjacent to the current coding unit may be directly referred to and used.
Alternatively, if a current coding unit is predicted based on the coding information of adjacent coding units, data units adjacent to the current coding unit are searched for by using the coding information of the data units, and the searched adjacent coding units may be referred to for predicting the current coding unit.
Figure 13 is a diagram for describing a relationship between a coding unit, a prediction unit or partition, and a transform unit, according to the coding mode information of Table 1.
A maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of coded depths. Here, since the coding unit 1318 is a coding unit of a coded depth, its split information may be set to 0. The partition type information of the coding unit 1318 having a size of 2N×2N may be set to one of a partition type 1322 having a size of 2N×2N, a partition type 1324 having a size of 2N×N, a partition type 1326 having a size of N×2N, a partition type 1328 having a size of N×N, a partition type 1332 having a size of 2N×nU, a partition type 1334 having a size of 2N×nD, a partition type 1336 having a size of nL×2N, and a partition type 1338 having a size of nR×2N.
When the partition type is set to be symmetric (that is, the partition type 1322, 1324, 1326, or 1328), a transform unit 1342 having a size of 2N×2N is set if the split information (TU size flag) of the transform unit is 0, and a transform unit 1344 having a size of N×N is set if the TU size flag is 1.
When the partition type is set to be asymmetric (that is, the partition type 1332, 1334, 1336, or 1338), a transform unit 1352 having a size of 2N×2N may be set if the TU size flag is 0, and a transform unit 1354 having a size of N/2×N/2 is set if the TU size flag is 1.
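The TU size flag behavior just described for Figure 13 can be sketched as a small lookup; this is an illustrative reading of the text, and the function name and flag encoding are assumptions, not taken from the embodiment:

```python
def transform_unit_size(cu_size, tu_size_flag, partition_is_symmetric):
    """Illustrative sketch of the transform-unit sizing rule above.

    cu_size: width of the current 2N x 2N coding unit (so cu_size == 2N).
    tu_size_flag: the split information (TU size flag) of the transform unit.
    partition_is_symmetric: True for the 2Nx2N, 2NxN, Nx2N, NxN types.
    """
    if tu_size_flag == 0:
        return cu_size          # transform unit spans the coding unit: 2N x 2N
    if partition_is_symmetric:
        return cu_size // 2     # symmetric partition type -> N x N
    return cu_size // 4         # asymmetric partition type -> N/2 x N/2
```

Under this reading, a 32×32 coding unit (N = 16) with TU size flag 1 and an asymmetric partition type would use an 8×8 transform unit.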
Hereinafter, intra prediction performed on a prediction unit by the intra predictor 410 of the video encoding apparatus 100 of Figure 4 and the intra predictor 550 of the video decoding apparatus 200 of Figure 5 will be described in detail.
The intra predictors 410 and 550 perform intra prediction for obtaining a predictor of a current prediction unit by using the neighboring pixels of the current prediction unit. Considering that a prediction unit has a size equal to or larger than 16×16, the intra predictors 410 and 550 additionally perform intra prediction modes having various directionalities using (dx, dy) parameters, as well as the intra prediction modes having limited directionality according to the related art. The intra prediction modes having various directionalities according to an embodiment of the present invention will be described in detail below.
Also, in order to obtain a predictor of a current pixel, the intra predictors 410 and 550 may generate a predictor P1 via linear interpolation in the horizontal direction of the current pixel, generate a predictor P2 via linear interpolation in the vertical direction of the current pixel, and use the average value of the predictors P1 and P2 as the predictor of the current pixel. An intra prediction mode that generates the predictor of a current pixel by combining the predictors obtained via linear interpolation in the horizontal direction and linear interpolation in the vertical direction is defined as a planar mode. In particular, in the planar mode, the intra predictors 410 and 550 generate a virtual pixel used for the linear interpolation in the horizontal direction by using at least one neighboring pixel located at the upper right side of the current prediction unit, and generate a virtual pixel used for the linear interpolation in the vertical direction by using at least one neighboring pixel located at the lower left side of the current prediction unit. The planar mode according to an embodiment of the present invention will be described later.
Figure 14 is a table showing the number of intra prediction modes according to the size of a prediction unit, according to an embodiment of the present invention.
The intra predictors 410 and 550 may variously set the number of intra prediction modes applied to a prediction unit according to the size of the prediction unit. For example, referring to Figure 14, in Example 2, when the size of the prediction unit to be intra predicted is N×N, the numbers of intra prediction modes actually performed on prediction units having sizes of 2×2, 4×4, 8×8, 16×16, 32×32, 64×64, and 128×128 may be respectively set to 5, 9, 9, 17, 33, 5, and 5. The number of intra prediction modes actually performed differs according to the size of the prediction unit because the overhead for encoding the prediction mode information differs according to the size of the prediction unit. In other words, even though a prediction unit occupies only a small part of an entire image, the overhead for transmitting additional information, such as the prediction mode of such a small prediction unit, may be large. Accordingly, when a prediction unit having a small size is encoded with many prediction modes, the number of bits may increase, and thus compression efficiency may be reduced. Also, since a prediction unit having a large size, for example a size equal to or larger than 64×64, is mainly selected as a prediction unit for a flat region of an image, encoding such a large prediction unit with many prediction modes may be insufficient in terms of compression efficiency. Accordingly, when the size of a prediction unit is too large or too small compared to a predetermined size, only a relatively small number of intra prediction modes may be applied. However, the number of intra prediction modes applied according to the size of a prediction unit is not limited to Figure 14 and may vary; the numbers shown in Figure 14 are only an example. Alternatively, the number of intra prediction modes applied to a prediction unit may always be uniform regardless of the size of the prediction unit.
Among the intra prediction modes applied to a prediction unit, the intra predictors 410 and 550 may include an intra prediction mode that determines a neighboring reference pixel by using a line having a predetermined angle based on a pixel in the prediction unit, and uses the determined neighboring reference pixel as a predictor of the pixel. The angle of such a line may be set by using a parameter (dx, dy), where dx and dy are each an integer. For example, when 33 prediction modes are respectively defined as modes N, where N is an integer from 0 to 32, mode 0 is set to a vertical mode, mode 1 to a horizontal mode, mode 2 to a DC mode, mode 3 to a plane mode, and mode 32 to a planar mode. Also, modes 4 through 31 may be defined as intra prediction modes that determine a neighboring reference pixel by using a line having a directionality of tan^-1(dy/dx) and use the determined neighboring reference pixel for intra prediction, where tan^-1(dy/dx) uses (dx, dy) respectively denoted as (1,-1), (1,1), (1,2), (2,1), (1,-2), (2,1), (1,-2), (2,-1), (2,-11), (5,-7), (10,-7), (11,3), (4,3), (1,11), (1,-1), (12,-3), (1,-11), (1,-7), (3,-10), (5,-6), (7,-6), (7,-4), (11,1), (6,1), (8,3), (5,3), (5,7), (2,7), (5,-7), and (4,-3) of Table 2.
Table 2
The number of intra prediction modes used by the intra predictors 410 and 550 is not limited to Table 2, and may vary based on whether the current prediction unit is a chrominance component or a luminance component, or based on the size of the current prediction unit. Also, each mode N may denote an intra prediction mode different from the above. For example, the number of intra prediction modes may be 36, where mode 0 is the planar mode described later, mode 1 is a DC mode, modes 2 through 34 are intra prediction modes having 33 directionalities as described later, and mode 35 is an intra prediction mode Intra_FromLuma that uses a prediction unit in a luminance component corresponding to a prediction unit in a chrominance component. Mode 35, that is, the intra prediction mode Intra_FromLuma using the prediction unit in the luminance component corresponding to the prediction unit in the chrominance component, is applied only to prediction units in a chrominance component and is not used to intra predict a prediction unit in a luminance component.
Figure 15 is a reference diagram for describing intra prediction modes having various directionalities, according to an embodiment of the present invention.
As described above, the intra predictors 410 and 550 may determine a neighboring reference pixel by using a line having an angle of tan^-1(dy/dx) determined by a plurality of (dx, dy) parameters, and perform intra prediction by using the determined neighboring reference pixel.
Referring to Figure 15, the neighboring pixels A and B located on an extension line 150 that passes through a current pixel P to be predicted in the current prediction unit may be used as predictors of the current pixel P, where the extension line 150 has an angle of tan^-1(dy/dx) determined according to the value of the intra prediction mode of Table 2. Here, the neighboring pixels used as predictors may be pixels of previous prediction units that have been pre-encoded and pre-restored, located above, to the left of, at the upper right of, or at the lower left of the current coding unit. As such, by performing predictive encoding according to intra prediction modes having various directionalities, compression may be effectively performed according to the characteristics of an image.
In Figure 15, when a predictor of the current pixel P is generated by using neighboring pixels located on or close to the extension line 150, the extension line 150 actually has a directionality of tan^-1(dy/dx), and a division of (dx, dy) is needed to determine the neighboring pixel using the extension line 150; hence the hardware or software may include a decimal-point operation, thereby increasing the amount of calculation. Accordingly, when a prediction direction for selecting a reference pixel is set by using (dx, dy) parameters, dx and dy may be set so as to reduce the amount of calculation.
Figure 16 is a diagram for describing a relationship between a current pixel and neighboring pixels located on an extension line having a directionality of (dx, dy), according to an embodiment of the present invention.
Referring to Figure 16, P 1610 denotes a current pixel located at (j, i), and A 1611 and B 1612 respectively denote a neighboring upper pixel and a neighboring left pixel located on an extension line having a directionality, that is, an angle of tan^-1(dy/dx), passing through the current pixel P 1610. Assuming that the size of the prediction unit including the current pixel P 1610 is nS×nS, where nS is a positive integer, the location of a pixel of the prediction unit is one of (0, 0) to (nS-1, nS-1), the location of the neighboring upper pixel A 1611 on the x-axis is (m, -1), where m is an integer, and the location of the neighboring left pixel B 1612 on the y-axis is (-1, n), where n is an integer. The location of the neighboring upper pixel A 1611 meeting the extension line passing through the current pixel P 1610 is (j + i×dx/dy, -1), and the location of the neighboring left pixel B 1612 is (-1, i + j×dy/dx). Accordingly, in order to determine the neighboring upper pixel A 1611 or the neighboring left pixel B 1612 for predicting the current pixel P 1610, a division operation such as dx/dy or dy/dx is required. As described above, since the computational complexity of division is high, the operation speed in software or hardware may be reduced. Accordingly, at least one of the values of dx and dy denoting the directionality of the prediction mode used to determine the neighboring pixel may be determined to be a power of 2. That is, when n and m are each an integer, dx and dy may be 2^n and 2^m, respectively.
When the neighboring left pixel B 1612 is used as a predictor of the current pixel P 1610 and dx has a value of 2^n, the j×dy/dx operation required to determine (-1, i + j×dy/dx), that is, the location of the neighboring left pixel B 1612, may be (j×dy)/(2^n), and a division using a power of 2 may be realized via a shift operation such as (j×dy)>>n, thereby reducing the amount of calculation.
Similarly, when the neighboring upper pixel A 1611 is used as a predictor of the current pixel P 1610 and dy has a value of 2^m, the i×dx/dy operation required to determine (j + i×dx/dy, -1), that is, the location of the neighboring upper pixel A 1611, may be (i×dx)/(2^m), and the division using a power of 2 may be realized via a shift operation such as (i×dx)>>m.
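Under the assumption that dx = 2^n (or dy = 2^m), the two position computations above reduce to right shifts; a minimal sketch (Python's floor semantics for `>>` and `//` are assumed, and the helper names are illustrative):

```python
def left_neighbor_y(i, j, dy, n):
    # y-coordinate of the neighboring left pixel (-1, i + j*dy/dx)
    # when dx = 2**n: the division by dx becomes a right shift.
    return i + ((j * dy) >> n)

def upper_neighbor_x(i, j, dx, m):
    # x-coordinate of the neighboring upper pixel (j + i*dx/dy, -1)
    # when dy = 2**m.
    return j + ((i * dx) >> m)

# With dx = 32 = 2**5, the shift agrees with the plain floor division:
# (j * dy) >> 5 == (j * dy) // 32
```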
Figures 17 and 18 are diagrams showing directions of intra prediction modes, according to embodiments of the present invention.
Generally, straight-line patterns in an image or video signal are mainly vertical or horizontal. Accordingly, when intra prediction modes having various directionalities are defined by using the (dx, dy) parameters, image coding efficiency may be improved by defining the values of dx and dy as follows.
In detail, when dy has a fixed value of 2^m, the absolute value of dx may be set such that the interval between prediction directions close to the vertical direction is narrow and the interval between prediction modes increases toward prediction directions closer to the horizontal direction. For example, referring to Figure 17, when dy is 2^5, that is, 32, dx may be set to 2, 5, 9, 13, 17, 21, 26, 32, -2, -5, -9, -13, -17, -21, -26, and -32, such that the interval between prediction directions close to the vertical direction is relatively narrow and the interval between prediction modes increases toward prediction directions closer to the horizontal direction.
Similarly, when dx has a fixed value of 2^n, the absolute value of dy may be set such that the interval between prediction directions close to the horizontal direction is narrow and the interval between prediction modes increases toward prediction directions closer to the vertical direction. For example, referring to Figure 18, when dx is 2^5, that is, 32, dy may be set to 2, 5, 9, 13, 17, 21, 26, 32, -2, -5, -9, -13, -17, -21, -26, and -32, such that the interval between prediction directions close to the horizontal direction is relatively narrow and the interval between prediction modes increases toward prediction directions closer to the vertical direction.
Also, when one of the values of dx and dy is fixed, the other value may be set to increase according to the prediction mode. For example, when the value of dy is fixed, the interval between values of dx may be set to increase by a predetermined value. Such an increment may be set according to angles divided between the horizontal direction and the vertical direction. For example, when dy is fixed, dx may have an increment a in a section where the angle with the vertical axis is less than 15°, an increment b in a section where the angle is between 15° and 30°, and an increment c in a section where the angle is greater than 30°.
For example, the prediction modes having directionalities of tan^-1(dy/dx) using (dx, dy) may be defined by the (dx, dy) parameters shown in Tables 3 through 5.
Table 3
dx | dy | dx | dy | dx | dy |
-32 | 32 | 21 | 32 | 32 | 13 |
-26 | 32 | 26 | 32 | 32 | 17 |
-21 | 32 | 32 | 32 | 32 | 21 |
-17 | 32 | 32 | -26 | 32 | 26 |
-13 | 32 | 32 | -21 | 32 | 32 |
-9 | 32 | 32 | -17 | ||
-5 | 32 | 32 | -13 | ||
-2 | 32 | 32 | -9 | ||
0 | 32 | 32 | -5 | ||
2 | 32 | 32 | -2 | ||
5 | 32 | 32 | 0 |
9 | 32 | 32 | 2 | ||
13 | 32 | 32 | 5 | ||
17 | 32 | 32 | 9 |
Table 4
dx | dy | dx | dy | dx | dy |
-32 | 32 | 19 | 32 | 32 | 10 |
-25 | 32 | 25 | 32 | 32 | 14 |
-19 | 32 | 32 | 32 | 32 | 19 |
-14 | 32 | 32 | -25 | 32 | 25 |
-10 | 32 | 32 | -19 | 32 | 32 |
-6 | 32 | 32 | -14 | ||
-3 | 32 | 32 | -10 | ||
-1 | 32 | 32 | -6 | ||
0 | 32 | 32 | -3 | ||
1 | 32 | 32 | -1 | ||
3 | 32 | 32 | 0 | ||
6 | 32 | 32 | 1 | ||
10 | 32 | 32 | 3 | ||
14 | 32 | 32 | 6 |
Table 5
dx | dy | dx | dy | dx | dy |
-32 | 32 | 23 | 32 | 32 | 15 |
-27 | 32 | 27 | 32 | 32 | 19 |
-23 | 32 | 32 | 32 | 32 | 23 |
-19 | 32 | 32 | -27 | 32 | 27 |
-15 | 32 | 32 | -23 | 32 | 32 |
-11 | 32 | 32 | -19 | ||
-7 | 32 | 32 | -15 |
-3 | 32 | 32 | -11 | ||
0 | 32 | 32 | -7 | ||
3 | 32 | 32 | -3 | ||
7 | 32 | 32 | 0 | ||
11 | 32 | 32 | 3 | ||
15 | 32 | 32 | 7 | ||
19 | 32 | 32 | 11 |
As described above, the intra prediction modes using (dx, dy) parameters use the neighboring left pixel (-1, i + j×dy/dx) or the neighboring upper pixel (j + i×dx/dy, -1) as a predictor of a pixel located at (j, i). As shown in Tables 3 through 5, when at least one of the values of dx and dy has a power of 2, the locations of the neighboring left pixel (-1, i + j×dy/dx) and the neighboring upper pixel (j + i×dx/dy, -1) may be obtained using only multiplication and shift operations, without a division operation. When dx in (dx, dy) is 2^n, that is, 32, as shown in Tables 3 through 5, the division using dx can be replaced by a right-shift operation, and thus the location of the neighboring left pixel may be obtained without a division, based on (j×dy)>>n. Similarly, when dy in (dx, dy) is 2^m, that is, 32, the division using 2^m can be replaced by a right-shift operation, and thus the location of the neighboring upper pixel may be obtained without a division, based on (i×dx)>>m.
Figure 19 is a diagram for describing directions of intra prediction modes having 33 directionalities, according to an embodiment of the present invention.
The intra predictors 410 and 550 may determine the neighboring pixel to be used as a predictor of a current pixel according to the intra prediction modes having 33 directionalities shown in Figure 19. As described above, the directions of the intra prediction modes may be set such that the interval between prediction modes decreases toward the horizontal or vertical direction and increases farther from the horizontal or vertical direction.
Figures 20A and 20B are diagrams for describing a planar mode, according to embodiments of the present invention.
As described above, in the planar mode, the intra predictors 410 and 550 generate a virtual pixel used for linear interpolation in the horizontal direction by using at least one neighboring pixel located at the upper right side of the current prediction unit, and generate a virtual pixel used for linear interpolation in the vertical direction by using at least one neighboring pixel located at the lower left side of the current prediction unit. Also, the intra predictors 410 and 550 generate the prediction value of a current pixel by using the average value of two predictors generated via the linear interpolations in the horizontal and vertical directions using the virtual pixels and neighboring pixels.
Referring to Figure 20A, the intra predictors 410 and 550 obtain, by using at least one neighboring pixel 2020 located at the upper right side of a current prediction unit 2010, a first virtual pixel 2012 located on the same row as a current predicted pixel 2011 in the current prediction unit 2010 and corresponding to a pixel located farthest to the right of the current prediction unit 2010. The number of neighboring pixels 2020 used to obtain the first virtual pixel 2012 may be predetermined. For example, the intra predictors 410 and 550 may determine a value generated by using the average value or a weighted average value of T1 2021 and T2 2022, the two initial upper-right neighboring pixels, as the first virtual pixel 2012.
Also, the intra predictors 410 and 550 may determine the number of neighboring pixels 2020 used to obtain the first virtual pixel 2012 based on the size of the current prediction unit 2010. For example, when the size of the current prediction unit 2010 is nS×nS, where nS is an integer, the intra predictors 410 and 550 may select nS/(2^m) upper-right neighboring pixels from among the neighboring pixels 2020 used to obtain the first virtual pixel 2012, where m is an integer satisfying the condition that 2^m is not higher than nS, and obtain the first virtual pixel 2012 via the average value or a weighted average value of the selected upper-right neighboring pixels. In other words, the intra predictors 410 and 550 may select nS/2, nS/4, nS/8, and so on, pixels from the neighboring pixels 2020. For example, when the size of the current prediction unit 2010 is 32×32, the intra predictors 410 and 550 may select 32/2, 32/4, 32/8, 32/16, or 32/32, that is, 1 to 16, upper-right neighboring pixels.
Similarly, referring to Figure 20B, the intra predictors 410 and 550 obtain, by using at least one neighboring pixel 2030 located at the lower left side of the current prediction unit 2010, a second virtual pixel 2014 located on the same column as the current predicted pixel in the current prediction unit 2010 and corresponding to a pixel located farthest below the current prediction unit 2010. The number of neighboring pixels 2030 used to obtain the second virtual pixel 2014 may be predetermined. For example, a value generated by using the average value or a weighted average value of L1 2031 and L2 2032, the two initial lower-left neighboring pixels, may be determined as the second virtual pixel 2014.
Also, the intra predictors 410 and 550 may determine the number of neighboring pixels 2030 used to obtain the second virtual pixel 2014 based on the size of the current prediction unit 2010. As described above, when the size of the current prediction unit 2010 is nS×nS, where nS is an integer, the intra predictors 410 and 550 may select nS/(2^m) lower-left neighboring pixels from among the neighboring pixels 2030 used to obtain the second virtual pixel 2014, where m is an integer satisfying the condition that 2^m is not higher than nS, and obtain the second virtual pixel 2014 via the average value or a weighted average value of the selected lower-left neighboring pixels.
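The virtual-pixel derivation just described can be sketched as follows, using the plain-average option (a weighted average is equally possible per the text); the helper name and rounding choice are illustrative assumptions:

```python
def virtual_pixel(neighbors, nS, m):
    """Average the first nS // (2**m) pixels of `neighbors`.

    `neighbors` holds the upper-right pixels (T1, T2, ...) for the first
    virtual pixel, or the lower-left pixels (L1, L2, ...) for the second;
    2**m must not exceed nS.  Returns a rounded integer mean.
    """
    count = max(1, nS >> m)
    chosen = neighbors[:count]
    return (sum(chosen) + len(chosen) // 2) // len(chosen)

# e.g. averaging the two initial upper-right pixels T1 = 100 and T2 = 104
# (nS = 8, m = 2 selects 8/4 = 2 pixels) gives 102
```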
Meanwhile the if predicting unit that neighborhood pixels 2020 are encoded afterwards due to being included in current prediction unit 2010
In and it is unusable, then intra predictor generator 410 and 550 can be used just the left side of neighborhood pixels 2020 pixel T0 as first
Virtual pixel 2012.On the other hand, if neighborhood pixels 2030 are encoded afterwards because being included in current prediction unit 2010
Predicting unit in and it is unusable, then intra predictor generator 410 and 550 can be used just in the top pixel of neighborhood pixels 2030
L0 is as the second virtual pixel 2014.
Referring back to Figure 20A, considering the distance between the current predicted pixel 2011 and the first virtual pixel 2012 obtained from the neighboring pixels 2020, and the distance between the current predicted pixel 2011 and the neighboring left pixel 2013 on the same row as the current predicted pixel 2011, the intra predictors 410 and 550 generate a first prediction value p1 of the current predicted pixel 2011 by performing linear interpolation using the geometric average considering these distances.
When the pixel value of the neighboring left pixel 2013 is rec(-1, y), the pixel value of the first virtual pixel 2012 located at (nS-1, y) is T, where T is a real number, and the prediction value of the current predicted pixel 2011 is p(x, y), where x, y = 0 to nS-1, where (x, y) denotes the location of the current predicted pixel 2011 of the current prediction unit 2010 and rec(x, y) denotes the neighboring pixels of the current prediction unit 2010, where x, y = -1 to 2×nS-1, the first prediction value p1(x, y) may be obtained according to the equation p1(x, y) = (nS-1-x)×rec(-1, y) + (x+1)×T. Here, (nS-1-x) corresponds to the distance between the current predicted pixel 2011 and the first virtual pixel 2012, and (x+1) corresponds to the distance between the current predicted pixel 2011 and the neighboring left pixel 2013. As such, the intra predictors 410 and 550 generate the first prediction value p1 via linear interpolation using the distance between the first virtual pixel 2012 and the current predicted pixel 2011, the distance between the current predicted pixel 2011 and the neighboring left pixel 2013 on the same row, the pixel value of the first virtual pixel 2012, and the pixel value of the neighboring left pixel 2013.
Referring back to Figure 20B, considering the distance between the current predicted pixel 2011 and the second virtual pixel 2014 obtained from the neighboring pixels 2030, and the distance between the current predicted pixel 2011 and the neighboring upper pixel 2015 on the same column as the current predicted pixel 2011, the intra predictors 410 and 550 generate a second prediction value p2 of the current predicted pixel 2011 by performing linear interpolation using the geometric average considering these distances.
When the pixel value of the neighboring upper pixel 2015 is rec(x, -1), the pixel value of the second virtual pixel 2014 located at (x, nS-1) is L, where L is a real number, and the prediction value of the current predicted pixel 2011 is p(x, y), where x, y = 0 to nS-1, where (x, y) denotes the location of the current predicted pixel 2011 of the current prediction unit 2010 and rec(x, y) denotes the neighboring pixels of the current prediction unit 2010, where x, y = -1 to 2×nS-1, the second prediction value p2(x, y) may be obtained according to the equation p2(x, y) = (nS-1-y)×rec(x, -1) + (y+1)×L. Here, (nS-1-y) corresponds to the distance between the current predicted pixel 2011 and the second virtual pixel 2014, and (y+1) corresponds to the distance between the current predicted pixel 2011 and the neighboring upper pixel 2015. As such, the intra predictors 410 and 550 generate the second prediction value p2 via linear interpolation using the distance between the second virtual pixel 2014 and the current predicted pixel 2011, the distance between the current predicted pixel 2011 and the neighboring upper pixel 2015 on the same column, the pixel value of the second virtual pixel 2014, and the pixel value of the neighboring upper pixel 2015.
As such, when the first prediction value p1(x, y) and the second prediction value p2(x, y) are obtained via the linear interpolations in the horizontal and vertical directions, the intra predictors 410 and 550 obtain the prediction value p(x, y) of the current predicted pixel 2011 by using the average value of the first prediction value p1(x, y) and the second prediction value p2(x, y). In detail, the intra predictors 410 and 550 may obtain the prediction value p(x, y) of the current predicted pixel 2011 by using the equation p(x, y) = {p1(x, y) + p2(x, y) + nS} >> (k+1), where k is log2 nS.
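Putting the two interpolations and the final averaging together, a minimal planar-mode sketch follows, assuming nS is a power of two and that T and L have already been derived from the upper-right and lower-left neighbors; the function and argument names are illustrative:

```python
def planar_predict(rec_left, rec_top, T, L, nS):
    """Sketch of the planar-mode combination described above.

    rec_left[y] = rec(-1, y), the left neighbor of row y;
    rec_top[x]  = rec(x, -1), the upper neighbor of column x;
    T, L        = first / second virtual pixels.
    Returns the nS x nS block of prediction values p(x, y).
    """
    k = nS.bit_length() - 1             # k = log2(nS), nS a power of two
    pred = [[0] * nS for _ in range(nS)]
    for y in range(nS):
        for x in range(nS):
            p1 = (nS - 1 - x) * rec_left[y] + (x + 1) * T   # horizontal
            p2 = (nS - 1 - y) * rec_top[x] + (y + 1) * L    # vertical
            pred[y][x] = (p1 + p2 + nS) >> (k + 1)          # rounded average
    return pred
```

For a uniform neighborhood (all neighbors and both virtual pixels equal), the prediction reproduces that value exactly, as expected of an interpolating mode.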
Alternatively, the intra predictors 410 and 550 may obtain the first virtual pixel and the second virtual pixel by using filtered neighboring upper-right pixels and filtered neighboring lower-left pixels, rather than the neighboring upper-right pixels and neighboring lower-left pixels themselves.
Figure 21 is a diagram showing filtered neighboring pixels 2110 and 2120 around a current prediction unit 2100, according to an embodiment of the present invention.
Referring to Figure 21, the intra predictors 410 and 550 generate filtered neighboring pixels by performing filtering at least once on the X neighboring pixels 2110 above the current prediction unit 2100, which is currently intra predicted, and the Y neighboring pixels 2120 to the left of the current prediction unit 2100. Here, when the size of the current prediction unit 2100 is nS×nS, X may be 2nS and Y may be 2nS.
When ContextOrg[n] denotes the X+Y original neighboring pixels above and to the left of the current prediction unit 2100 having a size of nS×nS, where n is an integer from 0 to X+Y-1, n is 0 at the lowermost neighboring pixel among the neighboring left pixels, that is, ContextOrg[0], and n is X+Y-1 at the rightmost neighboring pixel among the neighboring upper pixels, that is, ContextOrg[X+Y-1].
Figure 22 is a reference diagram for describing a filtering process of neighboring pixels.
Referring to Figure 22, when the original neighboring pixels above and to the left of the current prediction unit are denoted as ContextOrg[n], where n is an integer from 0 to 4nS-1, the original neighboring pixels may be filtered via a weighted average between the original neighboring pixels. When ContextFiltered1[n] denotes the once-filtered neighboring pixels, the neighboring pixels obtained by applying a 3-tap filter to the original neighboring pixels ContextOrg[n] may be obtained according to the equation ContextFiltered1[n] = (ContextOrg[n-1] + 2×ContextOrg[n] + ContextOrg[n+1])/4. Similarly, the twice-filtered neighboring pixels ContextFiltered2[n] may be generated by again calculating a weighted average between the once-filtered neighboring pixels ContextFiltered1[n]. For example, the neighboring pixels filtered by applying the 3-tap filter to the filtered neighboring pixels may be generated according to the equation ContextFiltered2[n] = (ContextFiltered1[n-1] + 2×ContextFiltered1[n] + ContextFiltered1[n+1])/4.
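One pass of the 3-tap [1, 2, 1]/4 filtering above can be sketched as follows; keeping the two end pixels unchanged is an assumption here, since the text does not spell out the boundary handling:

```python
def filter_neighbors(ctx):
    """Apply one pass of the 3-tap [1, 2, 1]/4 weighted average to the
    neighbor array ContextOrg[0..len-1]; end pixels are left unfiltered
    (an assumed boundary convention).  A second call gives ContextFiltered2."""
    out = list(ctx)
    for n in range(1, len(ctx) - 1):
        out[n] = (ctx[n - 1] + 2 * ctx[n] + ctx[n + 1]) // 4
    return out
```

Note that a linear ramp of pixel values passes through the filter unchanged, while an isolated spike is smoothed toward its neighbors.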
Alternatively, the neighboring pixels may be filtered by using any one of various methods, and then, as described above, the intra predictors 410 and 550 may obtain the first virtual pixel from at least one neighboring filtered upper-right pixel, obtain the second virtual pixel from at least one neighboring filtered lower-left pixel, and then generate the prediction value of the current pixel via linear interpolation as described above. The use of filtered neighboring pixels may be determined based on the size of the current prediction unit. For example, the filtered neighboring pixels may be used only when the size of the current prediction unit is equal to or larger than 16×16.
Figure 23 is a flowchart of an intra prediction method according to a planar mode, according to an embodiment of the present invention.
In operation 2310, the intra predictors 410 and 550 obtain, by using at least one neighbouring pixel located at the upper right side of the current prediction unit, a first virtual pixel that is located in the same row as a current prediction pixel of the current prediction unit and corresponds to a pixel located farthest to the right side of the current prediction unit. As described above, the number of neighbouring pixels used to obtain the first virtual pixel may be predetermined, or may be determined based on the size of the current prediction unit.
In operation 2320, the intra predictors 410 and 550 obtain, by using at least one neighbouring pixel located at the lower left side of the current prediction unit, a second virtual pixel that is located in the same column as the current prediction pixel and corresponds to a pixel located farthest to the lower side of the current prediction unit. As described above, the number of neighbouring pixels used to obtain the second virtual pixel may be predetermined, or may be determined based on the size of the current prediction unit.
In operation 2330, the intra predictors 410 and 550 obtain a first predicted value of the current prediction pixel via linear interpolation using the first virtual pixel and the neighbouring left pixel located in the same row as the current prediction pixel. As described above, when the position of the current prediction pixel is (x, y), where x and y are each from 0 to nS-1, the neighbouring pixels of the current prediction unit are rec(x, y), where x and y are each from -1 to 2×nS-1, the pixel value of the neighbouring left pixel is rec(-1, y), the pixel value of the first virtual pixel located at (nS-1, y) is T, where T is a real number, and the predicted value of the current prediction pixel is p(x, y), where x and y are each from 0 to nS-1, the first predicted value p1(x, y) may be obtained according to the equation p1(x, y) = (nS-1-x)×rec(-1, y) + (x+1)×T.
In operation 2340, the intra predictors 410 and 550 obtain a second predicted value of the current prediction pixel via linear interpolation using the second virtual pixel and the neighbouring upper pixel located in the same column as the current prediction pixel. When the pixel value of the neighbouring upper pixel is rec(x, -1) and the pixel value of the second virtual pixel located at (x, nS-1) is L, where L is a real number, the second predicted value p2(x, y) may be obtained according to the equation p2(x, y) = (nS-1-y)×rec(x, -1) + (y+1)×L.
In operation 2350, the intra predictors 410 and 550 obtain the predicted value of the current prediction pixel by using the first and second predicted values. As described above, when the first predicted value p1(x, y) and the second predicted value p2(x, y) are acquired via linear interpolation along the horizontal and vertical directions, the intra predictors 410 and 550 obtain the predicted value p(x, y) of the current prediction pixel by using the average of the first predicted value p1(x, y) and the second predicted value p2(x, y). In detail, the intra predictors 410 and 550 may obtain the predicted value according to the equation p(x, y) = (p1(x, y) + p2(x, y) + nS) >> (k+1), where k is log2(nS).
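Operations 2310 through 2350 can be sketched end to end as follows. This is an illustrative Python sketch under simplifying assumptions: the first virtual pixel T of every row is taken to be a single upper-right neighbouring pixel, and the second virtual pixel L of every column a single lower-left neighbouring pixel (the text above allows deriving them from several such neighbours), and nS is a power of two.

```python
def planar_predict(top, left, top_right, bottom_left, nS):
    """Planar-mode prediction following the equations p1, p2 and the final shift.

    top[x] = rec(x, -1) and left[y] = rec(-1, y).  Using the single
    upper-right neighbour as T and the single lower-left neighbour as
    L is an assumption of this sketch.
    """
    k = nS.bit_length() - 1                     # k = log2(nS), nS a power of two
    pred = [[0] * nS for _ in range(nS)]
    for y in range(nS):
        for x in range(nS):
            T = top_right                       # first virtual pixel at (nS-1, y)
            L = bottom_left                     # second virtual pixel at (x, nS-1)
            p1 = (nS - 1 - x) * left[y] + (x + 1) * T   # horizontal interpolation
            p2 = (nS - 1 - y) * top[x] + (y + 1) * L    # vertical interpolation
            pred[y][x] = (p1 + p2 + nS) >> (k + 1)      # rounded average
    return pred
```

For a flat reference (all neighbours equal to 10) the predicted block is uniformly 10, as expected of an interpolating predictor.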
According to one or more embodiments of the present invention, the coding efficiency of an image can be improved by applying an optimal intra prediction method according to image characteristics, via the various intra prediction schemes using neighbouring pixels.
Embodiments of the present invention can be written as computer programs and can be implemented in general-purpose digital computers that execute the programs using a computer-readable recording medium. Examples of the computer-readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs or DVDs), and other storage media.
While the present invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The preferred embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention.
Claims (1)
1. A method of performing intra prediction on an image, the method comprising:
obtaining reference samples including a first corner sample, a second corner sample, a first side sample and a second side sample, wherein the reference samples are used for prediction of a current sample; and
obtaining a predicted value of the current sample based on a weighted sum of the first corner sample, the second corner sample, the first side sample and the second side sample,
wherein weights of the first corner sample, the second corner sample, the first side sample and the second side sample are determined based on a position of the current sample and a size of a current block,
wherein the first corner sample is located at an intersection of the row adjacent to an upper side of the current block and the column adjacent to a right side of the current block,
the second corner sample is located at an intersection of the row adjacent to a lower side of the current block and the column adjacent to a left side of the current block,
the first side sample is located at an intersection of the row in which the current sample is located and the column adjacent to the left side of the current block, and
the second side sample is located at an intersection of the row adjacent to the upper side of the current block and the column in which the current sample is located.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510237934.1A CN104918055B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510449585.XA CN104954805B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201810167413.7A CN108282659B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for image encoding and decoding using intra prediction |
CN201510452250.3A CN105100809B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510450555.0A CN105100808B (en) | 2011-06-28 | 2012-06-28 | For the method and apparatus that intra prediction is used to carry out image coding and decoding |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161501969P | 2011-06-28 | 2011-06-28 | |
US61/501,969 | 2011-06-28 | ||
PCT/KR2012/005148 WO2013002586A2 (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for image encoding and decoding using intra prediction |
Related Child Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510452250.3A Division CN105100809B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510237934.1A Division CN104918055B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510449585.XA Division CN104954805B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510450555.0A Division CN105100808B (en) | 2011-06-28 | 2012-06-28 | For the method and apparatus that intra prediction is used to carry out image coding and decoding |
CN201810167413.7A Division CN108282659B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for image encoding and decoding using intra prediction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103765901A CN103765901A (en) | 2014-04-30 |
CN103765901B true CN103765901B (en) | 2018-03-30 |
Family
ID=47424690
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510449585.XA Active CN104954805B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201810167413.7A Active CN108282659B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for image encoding and decoding using intra prediction |
CN201280042446.XA Active CN103765901B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for carrying out image coding and decoding using infra-frame prediction |
CN201510452250.3A Active CN105100809B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510237934.1A Active CN104918055B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510450555.0A Active CN105100808B (en) | 2011-06-28 | 2012-06-28 | For the method and apparatus that intra prediction is used to carry out image coding and decoding |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510449585.XA Active CN104954805B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201810167413.7A Active CN108282659B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for image encoding and decoding using intra prediction |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510452250.3A Active CN105100809B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510237934.1A Active CN104918055B (en) | 2011-06-28 | 2012-06-28 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN201510450555.0A Active CN105100808B (en) | 2011-06-28 | 2012-06-28 | For the method and apparatus that intra prediction is used to carry out image coding and decoding |
Country Status (15)
Country | Link |
---|---|
US (7) | US9813727B2 (en) |
EP (6) | EP2728884A4 (en) |
JP (5) | JP5956572B2 (en) |
KR (7) | KR101654673B1 (en) |
CN (6) | CN104954805B (en) |
AU (3) | AU2012276407B2 (en) |
BR (1) | BR112013033710A2 (en) |
CA (2) | CA2840486C (en) |
MX (4) | MX337647B (en) |
MY (4) | MY173199A (en) |
PH (4) | PH12016500446B1 (en) |
RU (4) | RU2627033C1 (en) |
TW (4) | TWI552583B (en) |
WO (1) | WO2013002586A2 (en) |
ZA (1) | ZA201400651B (en) |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2728884A4 (en) * | 2011-06-28 | 2015-03-18 | Samsung Electronics Co Ltd | Method and apparatus for image encoding and decoding using intra prediction |
US9497485B2 (en) | 2013-04-12 | 2016-11-15 | Intel Corporation | Coding unit size dependent simplified depth coding for 3D video coding |
EP2984824A4 (en) * | 2013-04-12 | 2016-11-09 | Intel Corp | Coding unit size dependent simplified depth coding for 3d video coding |
US9571809B2 (en) | 2013-04-12 | 2017-02-14 | Intel Corporation | Simplified depth coding with modified intra-coding for 3D video coding |
US10602155B2 (en) | 2013-04-29 | 2020-03-24 | Intellectual Discovery Co., Ltd. | Intra prediction method and apparatus |
US11463689B2 (en) | 2015-06-18 | 2022-10-04 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US10841593B2 (en) | 2015-06-18 | 2020-11-17 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US20160373770A1 (en) * | 2015-06-18 | 2016-12-22 | Qualcomm Incorporated | Intra prediction and intra mode coding |
KR102160667B1 (en) * | 2015-09-10 | 2020-09-28 | 엘지전자 주식회사 | Intra prediction method and apparatus in video coding system |
US9743092B2 (en) * | 2015-10-13 | 2017-08-22 | Nokia Technologies Oy | Video coding with helper data for spatial intra-prediction |
WO2017069419A1 (en) * | 2015-10-22 | 2017-04-27 | 엘지전자 주식회사 | Intra-prediction method and apparatus in video coding system |
CN108293117A (en) | 2015-11-24 | 2018-07-17 | 三星电子株式会社 | The method and apparatus of the intraframe or interframe prediction block of post-processing based on pixel gradient |
KR20180075558A (en) * | 2015-11-24 | 2018-07-04 | 삼성전자주식회사 | Video decoding method and apparatus, coding method and apparatus thereof |
WO2017090957A1 (en) * | 2015-11-24 | 2017-06-01 | 삼성전자 주식회사 | Video encoding method and apparatus, and video decoding method and apparatus |
WO2017188565A1 (en) * | 2016-04-25 | 2017-11-02 | 엘지전자 주식회사 | Image decoding method and device in image coding system |
CN109417633B (en) * | 2016-04-29 | 2023-11-28 | 英迪股份有限公司 | Method and apparatus for encoding/decoding video signal |
CN116506605A (en) | 2016-08-01 | 2023-07-28 | 韩国电子通信研究院 | Image encoding/decoding method and apparatus, and recording medium storing bit stream |
US11438582B2 (en) | 2016-08-03 | 2022-09-06 | Kt Corporation | Video signal processing method and device for performing intra-prediction for an encoding/decoding target block |
CN107786874A (en) * | 2016-08-24 | 2018-03-09 | 浙江大学 | Directional prediction method and apparatus in two-way frame |
CN116405671A (en) | 2016-09-20 | 2023-07-07 | 株式会社Kt | Method for decoding and encoding video and transmission method |
US10721479B2 (en) | 2016-09-30 | 2020-07-21 | Lg Electronics Inc. | Intra prediction method and apparatus in image coding system |
CN109845263B (en) * | 2016-10-14 | 2021-07-16 | 华为技术有限公司 | Apparatus and method for video encoding |
US10681354B2 (en) | 2016-12-05 | 2020-06-09 | Lg Electronics Inc. | Image encoding/decoding method and apparatus therefor |
WO2018124653A1 (en) * | 2016-12-27 | 2018-07-05 | 삼성전자 주식회사 | Method and device for filtering reference sample in intra-prediction |
US10542275B2 (en) | 2016-12-28 | 2020-01-21 | Arris Enterprises Llc | Video bitstream coding |
DE112017006657B4 (en) | 2016-12-28 | 2021-03-11 | Arris Enterprises Llc | Adaptive planar prediction with unequal weights |
CN116437109A (en) * | 2017-01-31 | 2023-07-14 | 夏普株式会社 | System and method for performing planar intra-prediction video coding |
CN106791849B (en) * | 2017-03-01 | 2019-08-13 | 四川大学 | Based on the drop bit-rate algorithm staggeredly predicted in HEVC frame |
WO2018174371A1 (en) * | 2017-03-21 | 2018-09-27 | 엘지전자 주식회사 | Image decoding method and device according to intra prediction in image coding system |
WO2018174354A1 (en) * | 2017-03-21 | 2018-09-27 | 엘지전자 주식회사 | Image decoding method and device according to intra prediction in image coding system |
US10893267B2 (en) * | 2017-05-16 | 2021-01-12 | Lg Electronics Inc. | Method for processing image on basis of intra-prediction mode and apparatus therefor |
CN116634175A (en) | 2017-05-17 | 2023-08-22 | 株式会社Kt | Method for decoding image signal and method for encoding image signal |
EP3410708A1 (en) | 2017-05-31 | 2018-12-05 | Thomson Licensing | Method and apparatus for intra prediction with interpolation |
EP3410722A1 (en) * | 2017-05-31 | 2018-12-05 | Thomson Licensing | Method and apparatus for low-complexity bi-directional intra prediction in video encoding and decoding |
ES2927102T3 (en) | 2017-05-31 | 2022-11-02 | Lg Electronics Inc | Method and device for performing image decoding on the basis of intra-prediction in image coding systems |
EP3410721A1 (en) * | 2017-05-31 | 2018-12-05 | Thomson Licensing | Method and apparatus for bi-directional intra prediction in video coding |
WO2019009622A1 (en) * | 2017-07-04 | 2019-01-10 | 엘지전자 주식회사 | Intra-prediction mode-based image processing method and apparatus therefor |
CN111247796B (en) * | 2017-10-20 | 2022-11-04 | 韩国电子通信研究院 | Image encoding/decoding method and apparatus, and recording medium storing bit stream |
CN117176958A (en) * | 2017-11-28 | 2023-12-05 | Lx 半导体科技有限公司 | Image encoding/decoding method, image data transmission method, and storage medium |
DE112018005899T5 (en) | 2017-12-18 | 2020-08-13 | Arris Enterprises Llc | System and method for constructing a plane for planar prediction |
CN111837388B (en) * | 2018-03-09 | 2023-04-14 | 韩国电子通信研究院 | Image encoding/decoding method and apparatus using sampling point filtering |
WO2019199149A1 (en) * | 2018-04-14 | 2019-10-17 | 엘지전자 주식회사 | Intra-prediction mode-based image processing method and device therefor |
CN115988205B (en) | 2018-06-15 | 2024-03-01 | 华为技术有限公司 | Method and apparatus for intra prediction |
US11277644B2 (en) | 2018-07-02 | 2022-03-15 | Qualcomm Incorporated | Combining mode dependent intra smoothing (MDIS) with intra interpolation filter switching |
WO2020017910A1 (en) * | 2018-07-18 | 2020-01-23 | 한국전자통신연구원 | Method and device for effective video encoding/decoding via local lighting compensation |
KR20200028856A (en) * | 2018-09-07 | 2020-03-17 | 김기백 | A method and an apparatus for encoding/decoding video using intra prediction |
GB2577056B (en) * | 2018-09-11 | 2022-12-14 | British Broadcasting Corp | Bitstream decoder |
US11303885B2 (en) | 2018-10-25 | 2022-04-12 | Qualcomm Incorporated | Wide-angle intra prediction smoothing and interpolation |
CN116744008A (en) | 2018-12-15 | 2023-09-12 | 华为技术有限公司 | Image reconstruction method and device |
CN116546218A (en) | 2019-01-02 | 2023-08-04 | Oppo广东移动通信有限公司 | Decoding prediction method, device and computer storage medium |
EP3713235B1 (en) * | 2019-03-19 | 2023-08-02 | Axis AB | Methods and devices for encoding a video stream using a first and a second encoder |
US11363284B2 (en) * | 2019-05-09 | 2022-06-14 | Qualcomm Incorporated | Upsampling in affine linear weighted intra prediction |
WO2021061020A1 (en) * | 2019-09-23 | 2021-04-01 | Huawei Technologies Co., Ltd. | Method and apparatus of weighted prediction for non-rectangular partitioning modes |
JP2021057649A (en) * | 2019-09-27 | 2021-04-08 | マクセル株式会社 | Image coding method and image decoding method |
EP4029247A4 (en) * | 2019-10-07 | 2022-12-07 | Huawei Technologies Co., Ltd. | Method and apparatus of harmonizing weighted prediction and bi-prediction with coding-unit-level weight |
WO2023022530A1 (en) * | 2021-08-18 | 2023-02-23 | 엘지전자 주식회사 | Image encoding/decoding method and apparatus for performing reference sample filtering on basis of intra prediction mode, and method for transmitting bitstream |
WO2024080706A1 (en) * | 2022-10-10 | 2024-04-18 | 엘지전자 주식회사 | Image encoding/decoding method and apparatus, and recording medium on which bitstream is stored |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101361370A (en) * | 2005-11-30 | 2009-02-04 | 株式会社东芝 | Image encoding/image decoding method and image encoding/image decoding apparatus |
WO2010123056A1 (en) * | 2009-04-24 | 2010-10-28 | ソニー株式会社 | Image processing apparatus and method |
KR20110036401A (en) * | 2009-10-01 | 2011-04-07 | 삼성전자주식회사 | Method and apparatus for encoding video, and method and apparatus for decoding video |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ES2170744T3 (en) | 1996-05-28 | 2002-08-16 | Matsushita Electric Ind Co Ltd | PREDICTION AND DECODING DEVICE DEVICE. |
EP0895424B1 (en) * | 1997-07-31 | 2007-10-31 | Victor Company of Japan, Ltd. | digital video signal inter-block predictive encoding/decoding apparatus and method providing high efficiency of encoding. |
US6418166B1 (en) * | 1998-11-30 | 2002-07-09 | Microsoft Corporation | Motion estimation and block matching pattern |
US6882637B1 (en) * | 1999-10-14 | 2005-04-19 | Nokia Networks Oy | Method and system for transmitting and receiving packets |
WO2003021971A1 (en) | 2001-08-28 | 2003-03-13 | Ntt Docomo, Inc. | Moving picture encoding/transmission system, moving picture encoding/transmission method, and encoding apparatus, decoding apparatus, encoding method, decoding method, and program usable for the same |
JP2004088722A (en) | 2002-03-04 | 2004-03-18 | Matsushita Electric Ind Co Ltd | Motion picture encoding method and motion picture decoding method |
US7269730B2 (en) * | 2002-04-18 | 2007-09-11 | Nokia Corporation | Method and apparatus for providing peer authentication for an internet key exchange |
US7289672B2 (en) * | 2002-05-28 | 2007-10-30 | Sharp Laboratories Of America, Inc. | Methods and systems for image intra-prediction mode estimation |
BRPI0307197B1 (en) | 2002-11-25 | 2018-06-19 | Godo Kaisha Ip Bridge 1 | MOTION COMPENSATION METHOD, IMAGE CODING METHOD AND IMAGE DECODING METHOD |
US7680342B2 (en) * | 2004-08-16 | 2010-03-16 | Fotonation Vision Limited | Indoor/outdoor classification in digital images |
CN1224270C (en) * | 2003-09-30 | 2005-10-19 | 清华大学 | Frame coding method of inter-frame coding frame for two stage predicting coding of macro block group structure |
CN100534192C (en) | 2003-10-28 | 2009-08-26 | 松下电器产业株式会社 | Intra-picture prediction coding method |
CN100536573C (en) * | 2004-01-16 | 2009-09-02 | 北京工业大学 | Inframe prediction method used for video frequency coding |
CN100479527C (en) | 2004-02-26 | 2009-04-15 | 联合信源数字音视频技术(北京)有限公司 | Method for selecting predicting mode within frame |
CN100401789C (en) | 2004-06-11 | 2008-07-09 | 上海大学 | Quick selection of prediction modes in H.264/AVC frame |
CN1589028B (en) * | 2004-07-29 | 2010-05-05 | 展讯通信(上海)有限公司 | Predicting device and method based on pixel flowing frame |
KR100679035B1 (en) * | 2005-01-04 | 2007-02-06 | 삼성전자주식회사 | Deblocking filtering method considering intra BL mode, and video encoder/decoder based on multi-layer using the method |
CN100348051C (en) | 2005-03-31 | 2007-11-07 | 华中科技大学 | An enhanced in-frame predictive mode coding method |
KR100750128B1 (en) * | 2005-09-06 | 2007-08-21 | 삼성전자주식회사 | Method and apparatus for intra prediction of video |
JP4791129B2 (en) * | 2005-10-03 | 2011-10-12 | ルネサスエレクトロニクス株式会社 | Image coding apparatus, image coding method, and image editing apparatus |
JP2007116351A (en) | 2005-10-19 | 2007-05-10 | Ntt Docomo Inc | Image prediction coding apparatus, image prediction decoding apparatus, image prediction coding method, image prediction decoding method, image prediction coding program, and image prediction decoding program |
TW200808067A (en) | 2006-07-31 | 2008-02-01 | Univ Nat Cheng Kung | Prediction module |
US8582663B2 (en) * | 2006-08-08 | 2013-11-12 | Core Wireless Licensing S.A.R.L. | Method, device, and system for multiplexing of video streams |
EP2418854A3 (en) * | 2006-10-24 | 2012-06-06 | Thomson Licensing | Picture identification for multi-view video coding |
TWI327866B (en) | 2006-12-27 | 2010-07-21 | Realtek Semiconductor Corp | Apparatus and related method for decoding video blocks in video pictures |
KR101411315B1 (en) * | 2007-01-22 | 2014-06-26 | 삼성전자주식회사 | Method and apparatus for intra/inter prediction |
KR101365575B1 (en) * | 2007-02-05 | 2014-02-25 | 삼성전자주식회사 | Method and apparatus for encoding and decoding based on inter prediction |
CN101217663B (en) * | 2008-01-09 | 2010-09-08 | 上海华平信息技术股份有限公司 | A quick selecting method of the encode mode of image pixel block for the encoder |
CN101217669A (en) * | 2008-01-15 | 2008-07-09 | 北京中星微电子有限公司 | An intra-frame predictor method and device |
US20100084478A1 (en) * | 2008-10-02 | 2010-04-08 | Silverbrook Research Pty Ltd | Coding pattern comprising columns and rows of coordinate data |
US8634456B2 (en) * | 2008-10-03 | 2014-01-21 | Qualcomm Incorporated | Video coding with large macroblocks |
JP2012504924A (en) * | 2008-10-06 | 2012-02-23 | エルジー エレクトロニクス インコーポレイティド | Video signal processing method and apparatus |
US9113168B2 (en) * | 2009-05-12 | 2015-08-18 | Lg Electronics Inc. | Method and apparatus of processing a video signal |
TWI442777B (en) * | 2009-06-23 | 2014-06-21 | Acer Inc | Method for spatial error concealment |
KR101456498B1 (en) * | 2009-08-14 | 2014-10-31 | 삼성전자주식회사 | Method and apparatus for video encoding considering scanning order of coding units with hierarchical structure, and method and apparatus for video decoding considering scanning order of coding units with hierarchical structure |
KR101452860B1 (en) | 2009-08-17 | 2014-10-23 | 삼성전자주식회사 | Method and apparatus for image encoding, and method and apparatus for image decoding |
KR101510108B1 (en) * | 2009-08-17 | 2015-04-10 | 삼성전자주식회사 | Method and apparatus for encoding video, and method and apparatus for decoding video |
KR101457418B1 (en) | 2009-10-23 | 2014-11-04 | 삼성전자주식회사 | Method and apparatus for video encoding and decoding dependent on hierarchical structure of coding unit |
KR101772459B1 (en) * | 2010-05-17 | 2017-08-30 | 엘지전자 주식회사 | New intra prediction modes |
KR102043218B1 (en) * | 2010-05-25 | 2019-11-11 | 엘지전자 주식회사 | New planar prediction mode |
CN101895751B (en) * | 2010-07-06 | 2012-02-08 | 北京大学 | Method and device for intra-frame prediction and intra-frame prediction-based encoding/decoding method and system |
US8837577B2 (en) * | 2010-07-15 | 2014-09-16 | Sharp Laboratories Of America, Inc. | Method of parallel video coding based upon prediction type |
EP2728884A4 (en) | 2011-06-28 | 2015-03-18 | Samsung Electronics Co Ltd | Method and apparatus for image encoding and decoding using intra prediction |
-
2012
- 2012-06-28 EP EP12804848.5A patent/EP2728884A4/en not_active Ceased
- 2012-06-28 RU RU2016127510A patent/RU2627033C1/en active
- 2012-06-28 TW TW101123374A patent/TWI552583B/en active
- 2012-06-28 MX MX2015001316A patent/MX337647B/en unknown
- 2012-06-28 CN CN201510449585.XA patent/CN104954805B/en active Active
- 2012-06-28 MX MX2014000171A patent/MX2014000171A/en active IP Right Grant
- 2012-06-28 EP EP15164785.6A patent/EP2919466A3/en not_active Ceased
- 2012-06-28 MY MYPI2016001292A patent/MY173199A/en unknown
- 2012-06-28 CA CA2840486A patent/CA2840486C/en active Active
- 2012-06-28 WO PCT/KR2012/005148 patent/WO2013002586A2/en active Application Filing
- 2012-06-28 EP EP15164787.2A patent/EP2919468A3/en not_active Ceased
- 2012-06-28 CN CN201810167413.7A patent/CN108282659B/en active Active
- 2012-06-28 MY MYPI2016001290A patent/MY174172A/en unknown
- 2012-06-28 EP EP15164786.4A patent/EP2919467A3/en not_active Ceased
- 2012-06-28 CN CN201280042446.XA patent/CN103765901B/en active Active
- 2012-06-28 US US14/130,095 patent/US9813727B2/en active Active
- 2012-06-28 CA CA3017176A patent/CA3017176C/en active Active
- 2012-06-28 AU AU2012276407A patent/AU2012276407B2/en active Active
- 2012-06-28 MX MX2017009212A patent/MX368350B/en unknown
- 2012-06-28 CN CN201510452250.3A patent/CN105100809B/en active Active
- 2012-06-28 KR KR1020120070365A patent/KR101654673B1/en not_active Application Discontinuation
- 2012-06-28 MX MX2016003159A patent/MX349194B/en unknown
- 2012-06-28 TW TW107134133A patent/TWI685251B/en active
- 2012-06-28 BR BR112013033710A patent/BR112013033710A2/en not_active Application Discontinuation
- 2012-06-28 EP EP15164789.8A patent/EP2919469A3/en not_active Ceased
- 2012-06-28 RU RU2014102592/07A patent/RU2594291C2/en active
- 2012-06-28 TW TW106131245A patent/TWI642299B/en active
- 2012-06-28 JP JP2014518806A patent/JP5956572B2/en active Active
- 2012-06-28 CN CN201510237934.1A patent/CN104918055B/en active Active
- 2012-06-28 CN CN201510450555.0A patent/CN105100808B/en active Active
- 2012-06-28 EP EP17170932.2A patent/EP3247115A1/en not_active Withdrawn
- 2012-06-28 MY MYPI2016001291A patent/MY173195A/en unknown
- 2012-06-28 MY MYPI2013004680A patent/MY165859A/en unknown
- 2012-06-28 TW TW105126165A patent/TWI603613B/en active
-
2014
- 2014-01-27 ZA ZA2014/00651A patent/ZA201400651B/en unknown
- 2014-10-29 KR KR1020140148756A patent/KR101855293B1/en active IP Right Grant
-
2015
- 2015-04-17 KR KR1020150054499A patent/KR101600063B1/en active IP Right Grant
- 2015-04-17 KR KR1020150054497A patent/KR101564422B1/en active IP Right Grant
- 2015-04-17 KR KR1020150054500A patent/KR101564423B1/en active IP Right Grant
- 2015-04-17 KR KR1020150054498A patent/KR101600061B1/en active IP Right Grant
- 2015-05-11 JP JP2015096548A patent/JP6101736B2/en active Active
- 2015-05-11 JP JP2015096547A patent/JP6101735B2/en active Active
- 2015-05-11 JP JP2015096549A patent/JP6101737B2/en active Active
- 2015-05-11 JP JP2015096546A patent/JP6101734B2/en active Active
- 2015-05-28 US US14/724,209 patent/US10045042B2/en active Active
- 2015-05-28 US US14/724,050 patent/US9788006B2/en active Active
- 2015-05-28 US US14/723,992 patent/US10085037B2/en active Active
- 2015-05-28 US US14/724,333 patent/US10045043B2/en active Active
- 2015-05-28 US US14/724,117 patent/US10075730B2/en active Active
-
2016
- 2016-03-02 AU AU2016201361A patent/AU2016201361B2/en active Active
- 2016-03-08 PH PH12016500446A patent/PH12016500446B1/en unknown
- 2016-03-08 PH PH12016500450A patent/PH12016500450B1/en unknown
- 2016-03-08 PH PH12016500451A patent/PH12016500451B1/en unknown
- 2016-03-08 PH PH12016500448A patent/PH12016500448B1/en unknown
-
2017
- 2017-03-10 AU AU2017201660A patent/AU2017201660B2/en active Active
- 2017-07-12 RU RU2017124656A patent/RU2660956C1/en active
-
2018
- 2018-04-30 KR KR1020180050183A patent/KR102040317B1/en active IP Right Grant
- 2018-07-03 RU RU2018124326A patent/RU2687294C1/en active
- 2018-09-21 US US16/137,594 patent/US10506250B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101361370A (en) * | 2005-11-30 | 2009-02-04 | 株式会社东芝 | Image encoding/image decoding method and image encoding/image decoding apparatus |
WO2010123056A1 (en) * | 2009-04-24 | 2010-10-28 | ソニー株式会社 | Image processing apparatus and method |
KR20110036401A (en) * | 2009-10-01 | 2011-04-07 | 삼성전자주식회사 | Method and apparatus for encoding video, and method and apparatus for decoding video |
Non-Patent Citations (2)
Title |
---|
CE6.e/f: Planar mode experiments and results;Sandeep Kanumuri et al;《JCTVC-E321, JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11》;20110323;Section 2, Figure 3 *
WD3: Working Draft 3 of High-Efficiency Video Coding;Thomas Wiegand et al;《JCTVC-E603, JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11》;20110323;Section 8.3.3.1 *
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103765901B (en) | Method and apparatus for carrying out image coding and decoding using infra-frame prediction | |
CN105100797B (en) | To the decoded equipment of video | |
CN104702949B (en) | To the method and apparatus of Video coding and the method and apparatus to video decoding | |
CN104767996B (en) | Determine the intra prediction mode of image coding unit and image decoding unit | |
CN104811710B (en) | Method and apparatus to Video coding and to the decoded method and apparatus of video | |
CN104980739B (en) | The method and apparatus that video is decoded by using deblocking filtering | |
CN103392341A (en) | Method and device for encoding intra prediction mode for image prediction unit, and method and device for decoding intra prediction mode for image prediction unit | |
CN103782596A (en) | Prediction method and apparatus for chroma component of image using luma component of image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |