CN101345876B - Intra-frame prediction encoding equipment and method in video coding - Google Patents


Info

Publication number
CN101345876B
CN101345876B (application CN2008101268873A)
Authority
CN
China
Prior art keywords
prediction mode
mode
frame
optimum prediction
predicted value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008101268873A
Other languages
Chinese (zh)
Other versions
CN101345876A (en)
Inventor
中神央二
佐藤数史
矢崎阳一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CN101345876A
Application granted
Publication of CN101345876B

Classifications

    • H (ELECTRICITY); H04 (ELECTRIC COMMUNICATION TECHNIQUE); H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION); H04N19/00 (Methods or arrangements for coding, decoding, compressing or decompressing digital video signals)
    • H04N19/107: Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/152: Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/86: Pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness

Abstract

A coding device that subtracts a prediction value from input image data to generate differential data in an intra-frame prediction mode, and that encodes the input image data by processing the differential data, comprises: a first prediction mode detection section for detecting a first optimum prediction mode; a second prediction mode detection section for detecting a second optimum prediction mode; and a prediction value generation section for selecting an optimum prediction mode. The first prediction mode detection section detects the first optimum prediction mode by using processing performed in the second prediction mode detection section.

Description

Intra-frame predictive coding apparatus and coding method in video coding
Technical field
The present invention relates to an encoding apparatus, an encoding method, a program for the encoding method, and a recording medium on which the program is recorded, and is applicable, for example, to encoding apparatuses conforming to H.264/MPEG-4 Part 10 (Advanced Video Coding). When an optimum prediction mode is selected from a plurality of prediction modes to encode image data, the invention detects the prediction mode suitable for intra-frame prediction by detecting optimum prediction modes for different prediction value generation units, and reduces the computational complexity required to select the optimum prediction mode by reusing, in one optimum-prediction-mode detection process, processing performed for another optimum-prediction-mode detection process.
Background technology
In recent years, apparatuses that transmit and store image data efficiently by exploiting the redundancy of image data have become increasingly popular for the transmission and recording of moving images in broadcasting stations, ordinary households, and the like. Such apparatuses, conforming for example to MPEG (Moving Picture Experts Group), compress image data effectively by using orthogonal transforms, such as the discrete cosine transform, together with motion compensation.
MPEG-2 (ISO/IEC 13818-2), one such data compression scheme, is defined as a general-purpose image coding scheme. MPEG-2 supports both interlaced and progressive scanning, and both standard-resolution and high-definition images. At present, MPEG-2 is widely used in a broad range of applications, both professional and consumer. Specifically, with MPEG-2, image data of 720 × 480 pixels at standard resolution with interlaced scanning can be compressed to a bit rate of 4 to 8 Mbps, and high-resolution image data of 1920 × 1088 pixels with interlaced scanning can be compressed to 18 to 22 Mbps, so that high image quality and a high compression ratio can both be maintained.
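As a rough sanity check of the bit rates quoted above, the uncompressed bit rate of those two formats can be computed and compared; the sketch below assumes 4:2:0 chroma subsampling (1.5 samples per pixel), 8 bits per sample, and about 30 frames per second, none of which are stated in this text.

```python
# Rough check of the MPEG-2 compression ratios quoted above.
# Assumptions (not stated in the text): 4:2:0 chroma subsampling
# (1.5 samples per pixel), 8 bits per sample, ~30 frames per second.

def raw_bitrate_bps(width, height, fps=30, bits_per_sample=8, samples_per_pixel=1.5):
    """Uncompressed bit rate of a video signal in bits per second."""
    return width * height * samples_per_pixel * bits_per_sample * fps

sd_raw = raw_bitrate_bps(720, 480)     # ~124 Mbps uncompressed
hd_raw = raw_bitrate_bps(1920, 1088)   # ~752 Mbps uncompressed

# Compressing SD to 4-8 Mbps is roughly a 15x-31x reduction;
# HD at 18-22 Mbps is roughly a 34x-42x reduction.
print(round(sd_raw / 8e6, 1), round(sd_raw / 4e6, 1))
print(round(hd_raw / 22e6, 1), round(hd_raw / 18e6, 1))
```

Under these assumptions the quoted bit rates correspond to compression ratios on the order of 15:1 to 42:1, which is why the text can claim both high quality and high compressibility.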
However, MPEG-2 is a high-image-quality coding scheme intended for broadcast applications, and it does not support coding at higher compression ratios, that is, with fewer code bits than MPEG-1. Meanwhile, with the spread of portable terminals in recent years, demand has grown for coding schemes with fewer code bits than MPEG-1. Accordingly, a standard based on the MPEG-4 coding scheme was approved as an international standard by ISO/IEC (International Organization for Standardization / International Electrotechnical Commission).
In addition, as another such data compression scheme, H.26L (ITU-T Q6/16 VCEG), an image coding scheme originally intended for videoconferencing, was standardized first. Compared with MPEG-2 and MPEG-4, H.26L requires greater computational complexity, but it achieves coding efficiency higher than that of MPEG-2 and MPEG-4. Furthermore, as part of the MPEG-4 activities, standardization that builds on H.26L and incorporates various additional functions to secure still higher coding efficiency has been promoted as the Joint Model of Enhanced-Compression Video Coding. In March 2003 this scheme was established as an international standard under the name "H.264 and MPEG-4 Part 10 (Advanced Video Coding)". In this specification this standard is abbreviated as the "JVT standard".
Fig. 5 is a block diagram showing an encoding apparatus conforming to the JVT standard. The encoding apparatus 1 selects an optimum prediction mode from a plurality of intra prediction modes and a plurality of inter prediction modes, and generates differential data (prediction error data) D2 by subtracting, from the image data D1, prediction values based on the selected prediction mode. The encoding apparatus 1 further applies orthogonal transform processing, quantization processing, and variable-length coding processing to the differential data D2, and outputs coded data D5.
In the encoding apparatus 1, an analog-to-digital conversion circuit (A/D) 2 applies analog-to-digital conversion to a video signal S1 and outputs image data D1. A screen sorting buffer 3 receives the image data D1 output from the A/D conversion circuit 2, sorts the frames of the image data D1 according to the GOP ("Group of Pictures") structure used for the encoding process of the encoding apparatus 1, and outputs the sorted frames.
A subtraction circuit 4 receives the image data D1 output from the screen sorting buffer 3. In intra-frame coding, it subtracts from the image data D1 the prediction values output from an intra prediction circuit 5, and outputs differential data D2. In inter-frame coding, it receives prediction values from a motion prediction/compensation circuit 6, subtracts them from the image data D1, and outputs differential data D2.
An orthogonal transform circuit 7 processes the output data D2 of the subtraction circuit 4 by an orthogonal transform, such as the discrete cosine transform or the Karhunen-Loève transform, and outputs the resulting coefficient data D3. A quantization circuit 8 quantizes the coefficient data D3 according to a quantization scale, under rate control by a rate control circuit 9, and outputs the quantized coefficient data D3. A lossless coding circuit 10 applies lossless coding, by variable-length coding, arithmetic coding, or the like, to the data output from the quantization circuit 8, and generates output data D4. The lossless coding circuit 10 also obtains information on intra-frame coding from the intra prediction circuit 5, information on inter-frame coding from the motion prediction/compensation circuit 6, and so on, and sets the obtained information in the header of the output data D4.
An accumulation buffer 11 stores the output data D4 from the lossless coding circuit 10 and outputs it at the transmission rate of the subsequent transmission channel. The rate control circuit 9 monitors the amount of code generated by the encoding process by monitoring the remaining space in the accumulation buffer 11, switches the quantization scale of the quantization circuit 8 according to the monitoring result, and thereby controls the amount of code generated by the encoding apparatus 1.
An inverse quantization circuit 13 applies inverse quantization to the output data of the quantization circuit 8, thereby decoding the data that was input to the quantization circuit 8. An inverse orthogonal transform circuit 14 applies an inverse orthogonal transform to the output data of the inverse quantization circuit 13, thereby decoding the input data D2 of the orthogonal transform circuit 7. The encoding apparatus 1 adds the corresponding prediction values to the output data of the inverse orthogonal transform circuit 14 in an addition circuit (not shown), thereby decoding the input data of the subtraction circuit 4. A deblocking filter 15 removes block distortion from the output data of the addition circuit and outputs the resulting data. A frame memory 16 records and holds the output data of the deblocking filter 15 as reference image information.
In inter-frame coding, the motion prediction/compensation circuit 6 detects motion vectors MV of the image data output from the screen sorting buffer 3, based on the prediction frame, using the reference image information stored in the frame memory 16, and detects the optimum prediction mode among the inter prediction modes. Then, when inter-frame coding is selected, the motion prediction/compensation circuit 6 performs motion compensation on the reference image information stored in the frame memory 16 according to the optimum prediction mode, using the corresponding motion vectors MV, generates predicted image information, and outputs prediction values based on the predicted image information to the subtraction circuit 4.
In intra-frame coding, the intra prediction circuit 5 detects the optimum prediction mode among the intra prediction modes from the input data of the frame memory 16. Then, when intra-frame coding is selected, the intra prediction circuit 5 generates prediction values of predicted image information from the input data of the frame memory 16 according to the optimum prediction mode, and outputs the generated prediction values to the subtraction circuit 4.
In the encoding apparatus 1, according to the picture being processed in relation to the GOP structure, one of the optimum prediction modes detected by the intra prediction circuit 5 and the motion prediction/compensation circuit 6 is selected, and, based on the selection result, differential data D2 is generated in the subtraction circuit 4 using the prediction values from either the intra prediction circuit 5 or the motion prediction/compensation circuit 6.
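The differential-data path described above (subtraction circuit 4, the quantizer, and the local decoding loop) can be sketched in miniature. This is an illustration of the loop structure, not the apparatus itself: the toy scalar quantizer and block values below are hypothetical, and stand in for the orthogonal transform and quantization circuits 7-8 and the inverse circuits 13-14.

```python
# Illustrative sketch of the differential-data path: the residual D2 is the
# input minus the prediction, and the local decoding loop adds the same
# prediction back to the dequantized residual, so that the encoder holds
# exactly the reference data the decoder will reconstruct.

def encode_block(orig, pred, qstep):
    """Residual (orig - pred) followed by a toy scalar quantizer."""
    return [round((o - p) / qstep) for o, p in zip(orig, pred)]

def reconstruct_block(levels, pred, qstep):
    """Local decode: dequantize the residual and add the prediction back."""
    return [lvl * qstep + p for lvl, p in zip(levels, pred)]

orig = [100, 102, 98, 101]     # hypothetical source samples
pred = [99, 99, 99, 99]        # prediction values from circuit 5 or 6
levels = encode_block(orig, pred, qstep=2)
recon = reconstruct_block(levels, pred, qstep=2)

# The reconstruction matches the source to within the quantization error:
assert all(abs(r - o) <= 1 for r, o in zip(recon, orig))
```

The design point the figure makes is visible here: because the encoder reconstructs through the same inverse path as the decoder, prediction for later blocks is formed from data both sides agree on.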
Fig. 6 is a block diagram showing a decoding apparatus conforming to the JVT standard. In the decoding apparatus 20, an accumulation buffer 21 temporarily stores the coded data D5 input through the transmission channel, and outputs the data. A lossless decoding circuit 22 processes the output data of the accumulation buffer 21 by variable-length decoding, arithmetic decoding, or the like, thereby decoding the input data of the lossless coding circuit 10 in the encoding apparatus 1. If the output data is intra-coded data, the circuit 22 also decodes the intra prediction mode information stored in the header and outputs the decoded information to an intra prediction circuit 23. If the output data is inter-coded data, the circuit 22 decodes the motion vector information stored in the header and outputs the decoded information to a motion prediction/compensation circuit 24.
An inverse quantization circuit 25 applies inverse quantization to the output data of the lossless decoding circuit 22, thereby decoding the coefficient data D3 that was input to the quantization circuit 8 of the encoding apparatus 1. An inverse orthogonal transform circuit 26 receives the coefficient data output from the inverse quantization circuit 25, applies an inverse orthogonal transform to it, and thereby decodes the differential data D2 that was input to the orthogonal transform circuit 7 of the encoding apparatus 1.
In intra-frame coding, an addition circuit 27 receives the differential data D2 output from the inverse orthogonal transform circuit 26, adds to it the prediction values of the predicted image generated in the intra prediction circuit 23, and outputs the resulting data. In inter-frame coding, it instead adds the prediction values of the predicted image output from the motion prediction/compensation circuit 24, and outputs the resulting data. As a result, the addition circuit 27 decodes the input data of the subtraction circuit 4 in the encoding apparatus 1.
A deblocking filter 28 removes block distortion from the output data of the addition circuit 27 and outputs the resulting data. A screen sorting buffer 29 sorts the frames of the image data output from the deblocking filter 28 according to the GOP structure, and outputs the sorted frames. A digital-to-analog conversion circuit (D/A) 30 applies digital-to-analog conversion to the output data of the screen sorting buffer 29, and outputs the original video signal S1.
A frame memory 31 records and holds the output data of the deblocking filter 28 as reference image information. In inter-frame coding, the motion prediction/compensation circuit 24 performs motion compensation on the reference image information stored in the frame memory 31 according to the motion vector information notified from the lossless decoding circuit 22, generates prediction values, and outputs the prediction values to the addition circuit 27. In intra-frame coding, the intra prediction circuit 23 generates prediction values from the input data of the frame memory 31 according to the intra prediction mode notified from the lossless decoding circuit 22, and outputs the prediction values to the addition circuit 27.
In the JVT standard, three intra prediction modes are provided: an intra 4×4 mode, an intra 8×8 mode, and an intra 16×16 mode. In the intra 4×4 mode, a block of 4 × 4 pixels is the orthogonal transform processing unit, and one such orthogonal transform processing unit is also the prediction value generation unit of the intra prediction circuit 5. In the intra 16×16 mode, a block of 4 × 4 pixels is likewise the orthogonal transform processing unit, but 4 × 4 such transform units together (a block of 16 × 16 pixels) form the prediction value generation unit. In the intra 8×8 mode, similarly to MPEG-2, a block of 8 × 8 pixels is the orthogonal transform processing unit and also serves as the prediction value generation unit. Hereinafter, the prediction value generation units of the intra 4×4 mode, the intra 8×8 mode, and the intra 16×16 mode are simply called a "block", a "sub-macroblock", and a "macroblock", respectively.
According to the JVT standard, in the intra 4×4 mode, the intra 8×8 mode, and the intra 16×16 mode, the image data D1 is processed successively in units of 16 × 16 pixel blocks (the macroblocks of the intra 16×16 mode). Accordingly, the sub-macroblocks of the intra 8×8 mode are formed by dividing a 16 × 16 pixel block of the intra 16×16 mode equally in the vertical and horizontal directions. In the intra 8×8 mode, the sub-macroblocks within a 16 × 16 pixel macroblock are processed in raster scan order. Likewise, the blocks of the intra 4×4 mode are formed by dividing a 16 × 16 pixel block of the intra 16×16 mode into four parts in each of the vertical and horizontal directions. In the intra 4×4 mode, the blocks within a 16 × 16 pixel macroblock are processed successively in the raster scan order shown by the numerals in Fig. 7.
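The block addressing just described can be sketched as an index-to-offset mapping. The sketch assumes a plain raster scan of the sixteen 4×4 blocks and four 8×8 sub-macroblocks inside a 16×16 macroblock, which is the order this text attributes to Fig. 7; the function names are illustrative only.

```python
# Sketch of the block addressing above, assuming a plain raster scan of the
# prediction units inside a 16x16 macroblock. Index 0 is the top-left unit.

def block_offset_4x4(idx):
    """(x, y) pixel offset of 4x4 block `idx` within its macroblock."""
    return (idx % 4) * 4, (idx // 4) * 4

def submacroblock_offset_8x8(idx):
    """(x, y) pixel offset of 8x8 sub-macroblock `idx` within its macroblock."""
    return (idx % 2) * 8, (idx // 2) * 8

assert block_offset_4x4(0) == (0, 0)
assert block_offset_4x4(5) == (4, 4)       # second row, second column
assert submacroblock_offset_8x8(3) == (8, 8)
```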
According to the JVT standard, in the intra 4×4 mode and the intra 16×16 mode of the intra prediction modes, prediction values are generated using the pixel values of decoded adjacent blocks or decoded adjacent macroblocks that are yet to be processed by the deblocking filter 15. In the intra 8×8 mode, similarly to MPEG-2, prediction values are generated using the pixel values of decoded adjacent macroblocks that have been processed by the deblocking filter 15.
In the JVT standard, if the video signal S1 is an interlaced video signal, two vertically consecutive prediction value generation units can be formed into a pair (hereinafter simply called a "macroblock pair") and successively encoded in field mode or in frame mode, with the processing order shown by the arrows in Fig. 8.
In the intra prediction modes, the prediction value processing for the luminance signal differs among the intra 4×4 mode, the intra 8×8 mode, and the intra 16×16 mode. Accordingly, the processing in the intra 4×4 mode is described first below. Note that the processing in the intra 8×8 mode is omitted, since it is similar to the processing in MPEG-2.
The intra 4×4 mode applies when "mb part pred mode (mb type, 0)" equals "Intra4×4". In the prediction value generation process, the luminance prediction values for the block to be predicted are generated using, as input parameters, "prev_intra4x4_pred_mode_flag", "rem_intra4x4_pred_mode" (if available), and the pixel values of the neighboring luminance blocks prior to processing by the deblocking filter (if available).
Here, when the value of "adaptive frame field flag" is 0 in the intra 4×4 mode, the prediction value generation process includes a step of determining whether the intra prediction of a block is "available" or "not available". In this determination, as shown in Fig. 9, the luminance pixel values of the pixels A-M contained in the blocks adjacent to the block of pixels a-p to be processed are input, and it is checked whether the pixel values of the neighboring pixels A-M are "available" for the block.
Here, the 4 × 4 pixel block for which prediction values are generated is identified as "4×4LumaBlkLoc", and the position of the upper-left pixel of the block is indexed by "4×4LumaBlkIdx". Accordingly, the block of pixels a-p in Fig. 9 is "4×4LumaBlkLoc", and "4×4LumaBlkIdx" indicates the position of the pixel labeled a. Further, the position (x, y) of pixel d is expressed as "x: 4×4LumaBlkLoc[4×4LumaBlkIdx][0]+3, y: 4×4LumaBlkLoc[4×4LumaBlkIdx][1]", and the position (x, y) of pixel A is expressed as "x: 4×4LumaBlkLoc[4×4LumaBlkIdx][0], y: 4×4LumaBlkLoc[4×4LumaBlkIdx][1]-1".
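The two coordinate expressions above can be restated as a small sketch: given the upper-left position of the 4×4 block (the role of 4×4LumaBlkLoc[4×4LumaBlkIdx]), pixel d is the upper-right pixel of the block's top row and pixel A sits one line above pixel a. The block origin used below is hypothetical.

```python
# Sketch of the coordinate expressions for pixels d and A above.

def pixel_d(blk_loc):
    """Position of pixel d: x = loc[0] + 3, y = loc[1]."""
    x, y = blk_loc
    return x + 3, y

def pixel_A(blk_loc):
    """Position of pixel A: x = loc[0], y = loc[1] - 1."""
    x, y = blk_loc
    return x, y - 1

blk = (8, 4)                 # hypothetical 4x4 block origin in the picture
assert pixel_d(blk) == (11, 4)
assert pixel_A(blk) == (8, 3)
```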
The pixel values of the neighboring pixels A-M are judged "unavailable" when the pixel is not in the picture or not in the slice (condition 1), when the pixel belongs to a block processed later in the decoding process than the block for which prediction values are generated (condition 2), or when "constrained intra pred flag (constrained intra prediction flag) = 0" holds and the block belongs to an intra-frame macroblock (condition 3). Even if the pixels E-H satisfy any one of conditions 1 to 3, they are judged "available" when pixel D is "available", by using the pixel value of pixel D in their place.
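The availability judgment just described can be sketched as follows. The three conditions are passed in as booleans computed elsewhere (the sketch does not model the decoder state itself), and the special treatment of pixels E-H is shown separately.

```python
# Sketch of the availability judgment for neighboring pixels A-M:
# a pixel is unavailable if any of conditions 1-3 holds, and E-H may
# fall back to pixel D's value when D itself is available.

def pixel_available(outside_slice, decoded_later, constrained_intra_conflict):
    """Conditions 1-3 above, expressed as precomputed booleans."""
    return not (outside_slice or decoded_later or constrained_intra_conflict)

def availability_E_to_H(e_to_h_available, d_available):
    """E-H count as available, via pixel D's value, whenever D is available."""
    return e_to_h_available or d_available

assert pixel_available(False, False, False)
assert not pixel_available(False, True, False)
# E-H fail conditions 1-3 on their own, but D is available:
assert availability_E_to_H(False, True)
```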
The value of " if mb adaptive frame field flag (macro block adapt to frame field mark) " be 1 and the value of " field pic flag " be 0, then neighbor A-M defines in the identical mode of the mode when value as " mb adaptiveframe field flag " is 0.In addition, if block is " field decoding mode (decoding schema) " and be " top field macroblock (a top macro block) ", then by following formula definition neighbor.
[formula 1]
A:
x=4×4LumaBlkLoc[4×4LumaBlkIdx][0]
y=4×4LumaBlkLoc[4×4LumaBlkIdx][1]-2
I:
x=4×4LumaBlkLoc[4×4LumaBlkIdx][0]-1
y=4×4LumaBlkLoc[4×4LumaBlkIdx][1] ……(1)
In addition, when the block is in "field decoding mode" and belongs to a "bottom field macroblock", the neighboring pixels are defined by the following formulas.
[formula 2]
A:
x=4×4LumaBlkLoc[4×4LumaBlkIdx][0]
y=4×4LumaBlkLoc[4×4LumaBlkIdx][1]-1
I:
x=4×4LumaBlkLoc[4×4LumaBlkIdx][0]-1
y=4×4LumaBlkLoc[4×4LumaBlkIdx][1] ……(2)
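Formulas (1) and (2) differ only in the vertical offset of neighbor A: two lines above the block for a top-field macroblock, one line above for a bottom-field macroblock, with neighbor I one column to the left in both cases. A minimal sketch of that distinction, with a hypothetical block origin:

```python
# Sketch of formulas (1) and (2): positions of neighbors A and I for a
# field-decoded block, depending on field parity.

def neighbors_A_I(blk_loc, bottom_field):
    """Return positions of neighbor A and neighbor I.
    bottom_field=False -> formula (1) (top field, A two lines above);
    bottom_field=True  -> formula (2) (bottom field, A one line above)."""
    x, y = blk_loc
    a = (x, y - (1 if bottom_field else 2))
    i = (x - 1, y)
    return a, i

top_a, top_i = neighbors_A_I((8, 4), bottom_field=False)
bot_a, _ = neighbors_A_I((8, 4), bottom_field=True)
assert top_a == (8, 2) and top_i == (7, 4)
assert bot_a == (8, 3)
```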
For the intra 4×4 mode, a plurality of prediction modes are prepared as shown in Fig. 10, and the applicable prediction modes change according to the result of the above-described judgment of whether the neighboring pixels are "available" or "unavailable". Fig. 11 shows, in comparison with Fig. 10, the prediction directions when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 0 to 8. The concrete prediction values of the respective prediction modes are described in detail later with reference to the decoding process.
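For reference, the nine values of Intra4×4LumaPredMode correspond in H.264 to the conventional direction names listed below (taken from the H.264 specification rather than from this text). Mode 2 (DC) is the one mode with no direction; as a small example, it averages the available neighbors, which is sketched here with a conventional rounded mean and a mid-gray fallback.

```python
# The nine Intra4x4LumaPredMode values with their conventional H.264 names
# (the directions compared in Figs. 10 and 11), plus a sketch of DC mode.

INTRA_4X4_MODES = {
    0: "Vertical", 1: "Horizontal", 2: "DC",
    3: "Diagonal Down-Left", 4: "Diagonal Down-Right",
    5: "Vertical-Right", 6: "Horizontal-Down",
    7: "Vertical-Left", 8: "Horizontal-Up",
}

def dc_prediction(above, left):
    """DC mode sketch: rounded mean of the available neighboring samples,
    falling back to 128 (8-bit mid-gray) when no neighbor is available."""
    samples = list(above) + list(left)
    if not samples:
        return 128
    return (sum(samples) + len(samples) // 2) // len(samples)

assert INTRA_4X4_MODES[2] == "DC"
assert dc_prediction([100] * 4, [104] * 4) == 102
assert dc_prediction([], []) == 128
```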
As shown in Fig. 12, the macroblocks located above and to the left of the macroblock to be processed are highly correlated with the macroblock being processed. Therefore, the JVT standard adopts a technique of predicting the prediction direction of the target macroblock from the prediction directions of the decoded macroblocks located above it and to its left.
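The shape of that prediction can be sketched concisely. In H.264 the predicted 4×4 mode is conventionally the smaller of the two neighbors' modes, with DC (value 2) substituted when a neighbor is unavailable or not intra-coded; this mirrors the role that the default value 2 plays in formulas (4) and (6) below. A sketch under those assumptions:

```python
# Sketch of predicting the prediction direction from the two neighbors:
# each neighbor contributes its 4x4 mode (or DC = 2 when unusable), and
# the predicted mode is the minimum of the two contributions.

DC_MODE = 2

def predicted_intra4x4_mode(mode_left, mode_above):
    """mode_left / mode_above: the neighbor's mode, or None if unusable."""
    a = DC_MODE if mode_left is None else mode_left
    b = DC_MODE if mode_above is None else mode_above
    return min(a, b)

assert predicted_intra4x4_mode(0, 1) == 0       # vertical vs horizontal
assert predicted_intra4x4_mode(None, 7) == 2    # missing left -> DC caps it
assert predicted_intra4x4_mode(None, None) == 2
```

A coder then only needs to signal whether the actual mode equals this prediction, which is the saving the patent's background is describing.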
In the process of predicting the prediction direction from the prediction directions of adjacent macroblocks, first, if the value of "adaptive frame field flag" is 0, or if the value of "field pic flag" is 1, address detection is performed for the adjacent macroblocks, followed by "availability" verification. The address detection and verification are performed as follows. The address "MbAddress" of the target macroblock is input. The address "MbAddressA" of the macroblock located to the left of the target macroblock and the address "MbAddressB" of the macroblock located above it are detected. Then the "availability" of the macroblocks located above and to the left of the target macroblock is judged.
Here, a macroblock with address "MbAddress" is judged "not available" if any of conditions 1 to 4 is satisfied for that address: "MbAddress" < 0 (condition 1); "MbAddress" > "MaxMbAddress - 1" (condition 2); the macroblock specified by "MbAddress" belongs to a different slice (condition 3); or the macroblock specified by "MbAddress" has not yet been decoded (condition 4).
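Conditions 1 to 4 translate directly into a small predicate. The sketch below models the decoder state that the real apparatus would track (which slice each macroblock belongs to, and whether it has been decoded) as plain lists; those names are illustrative.

```python
# Sketch of conditions 1-4 for judging a macroblock 'not available'.
# `slice_of` and `decoded` stand in for decoder state tracked elsewhere.

def mb_available(mb_address, max_mb_address, slice_of, decoded, current_slice):
    if mb_address < 0:                          # condition 1
        return False
    if mb_address > max_mb_address - 1:         # condition 2
        return False
    if slice_of[mb_address] != current_slice:   # condition 3: other slice
        return False
    if not decoded[mb_address]:                 # condition 4: not yet decoded
        return False
    return True

slice_of = [0, 0, 0, 1]                 # hypothetical slice id per macroblock
decoded = [True, True, False, False]
assert mb_available(1, 4, slice_of, decoded, current_slice=0)
assert not mb_available(-1, 4, slice_of, decoded, current_slice=0)  # cond. 1
assert not mb_available(2, 4, slice_of, decoded, current_slice=0)   # cond. 4
assert not mb_available(3, 4, slice_of, decoded, current_slice=1)   # cond. 4
```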
This judgment is applied to the macroblocks located above and to the left, to determine whether they are available. In processing based on macroblock pairs, the address "MbPairAddressA" of the macroblock pair containing the left macroblock is "MbPairAddress - 1", where "MbPairAddress" is the address of the pair containing the macroblock being processed, and the address "MbPairAddressB" of the macroblock pair containing the macroblock above is "MbPairAddress - (frame width in mbs minus1 + 1)".
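The pair addressing above can be sketched in two lines: the pair to the left is one pair address lower, and the pair above is one pair-row of the frame lower, i.e. "frame width in mbs minus1 + 1" addresses lower. The frame width used in the example is hypothetical.

```python
# Sketch of the macroblock-pair neighbor addressing above.

def neighbor_pair_addresses(mb_pair_address, frame_width_in_mbs_minus1):
    pair_a = mb_pair_address - 1                                # left pair
    pair_b = mb_pair_address - (frame_width_in_mbs_minus1 + 1)  # pair above
    return pair_a, pair_b

# Hypothetical frame 5 macroblock pairs wide, current pair address 7:
assert neighbor_pair_addresses(7, frame_width_in_mbs_minus1=4) == (6, 2)
```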
Based on these macroblock judgment results, the encoding apparatus performs address detection and "availability" verification of the adjacent macroblocks as follows. The input parameter of the process is "MbAddressA", and the output is the values (four types) of the variable "ExtIntra4×4LumaPredMode". In the encoding apparatus, when the prediction direction has been predicted from the adjacent macroblocks, the variable "ExtIntra4×4LumaPredMode" is transmitted.
Here, when the macroblock located to the left (the macroblock indexed by "MbAddressA") is "available", and the "mb type" of the macroblock indexed by "MbAddressA" is "I 4×4" or "SI", the value of "ExtIntra4×4LumaPredMode" is obtained by the following process. Here, "Intra4×4LumaPredMode" is the value assigned to the macroblock indexed by "MbAddressA".
[formula 3]
for (i = 0; i < 4; i++)
    ExtIntra4×4LumaPredMode[(i+1)*5]
        = Intra4×4LumaPredMode[4×4LumaBlkScan(3, i)] ……(3)
In addition, when the macroblock indexed by "MbAddressA" is not "available", or its "mb type" is neither "I 4×4" nor "SI", the value of "ExtIntra4×4LumaPredMode" is obtained by the following process.
[formula 4]
for (i = 0; i < 4; i++)
    ExtIntra4×4LumaPredMode[(i+1)*5] = 2 ……(4)
When the macroblock located above (the macroblock indexed by "MbAddressB") is "available", and the "mb type" of the macroblock indexed by "MbAddressB" is "I 4×4" or "SI", the value of "ExtIntra4×4LumaPredMode" is obtained by the following process. Here, "Intra4×4LumaPredMode" is the value assigned to the macroblock indexed by "MbAddressB".
[formula 5]
for (i = 0; i < 4; i++)
    ExtIntra4×4LumaPredMode[i+1]
        = Intra4×4LumaPredMode[4×4LumaBlkScan(i, 3)] ……(5)
In addition, if the macroblock indexed by "MbAddressB" is not "available", or its "mb type" is neither "I 4×4" nor "SI", the value of "ExtIntra4×4LumaPredMode" is obtained by the following process.
[formula 6]
for (i = 0; i < 4; i++)
    ExtIntra4×4LumaPredMode[i+1] = 2 ……(6)
Further, in the macroblock-pair-based processing described above with reference to Fig. 7, the encoding apparatus extracts and transmits information on the intra prediction modes of the adjacent macroblock pairs. Here, if the value of "mb adaptive frame field flag" of the macroblock is 1 and the value of "field pic flag" is 0, the following processing is performed.
First, when the macro block pair is in "frame mode", for each of the "top macroblock" and the "bottom macroblock", "ExtIntra4×4LumaPredModeTop" and "ExtIntra4×4LumaPredModeBottom" (eight values in total) are obtained from "MbPairAddressA" and "MbPairAddressB".
Here, for the "top macroblock", "ExtIntra4×4LumaPredModeTop" is obtained from formulas (3) and (5). Likewise, for the "bottom macroblock", "ExtIntra4×4LumaPredModeBottom" is obtained from formulas (3) and (5). In both cases, "MbPairAddressA" and "MbPairAddressB" are substituted for "MbAddressA" and "MbAddressB" in formulas (3) and (5), respectively.
In addition, when the macro block pair is in "field mode", for each of the "top macroblock" and the "bottom macroblock", "ExtIntra4×4LumaPredModeTop" and "ExtIntra4×4LumaPredModeBottom" (eight values in total) are similarly obtained from "MbPairAddressA" and "MbPairAddressB".
Here, for "MbPairAddressA", "ExtIntra4×4LumaPredModeTop" and "ExtIntra4×4LumaPredModeBottom" of the adjacent macro block pair are obtained by formula (3), with "MbPairAddressA" substituted for "MbAddressA" in formula (3).
It should be noted that, if the adjacent macro block pair "MbPairAddressB" is in "field mode", or if the adjacent macro block pair "MbPairAddressB" is not "available" and the current half of the macro block pair is the "top macroblock", then "ExtIntra4×4LumaPredModeTop" is obtained by formula (5). In this case, the "top macroblock" of the macro block pair indexed by "MbPairAddressB" is substituted for "MbAddressB" in formula (5).
If the adjacent macro block pair "MbPairAddressB" is in "field mode", or if the adjacent macro block pair "MbPairAddressB" is not "available" and the current half of the macro block pair is the "bottom macroblock", then "ExtIntra4×4LumaPredModeBottom" is obtained by formula (5). In this case, the "top macroblock" of the macro block pair indexed by "MbPairAddressB" is substituted for "MbAddressB" in formula (5).
In addition, when the adjacent macro block pair "MbPairAddressB" is in "frame mode", "ExtIntra4×4LumaPredModeTop" for the "top macroblock" of the macro block pair is obtained by applying formula (5), with the "top macroblock" of the macro block pair indexed by "MbPairAddressB" substituted for "MbAddressB" in formula (5). Likewise, "ExtIntra4×4LumaPredModeBottom" for the "bottom macroblock" of the macro block pair is obtained by applying formula (5), with the "bottom macroblock" of the macro block pair indexed by "MbPairAddressB" substituted for "MbAddressB" in formula (5).
FIGS. 13A to 13D are diagrams showing the relation between macro blocks resulting from the processing performed when the macro block pair is in "field mode".
The decoding processing of "Intra4×4LumaPredMode" for a macro block is performed as follows. This processing is applied when the value of "adaptive frame field flag" is 1 or the value of "field pic flag" is 1, and is described by the following pseudo code, which uses "prev intra4×4 pred mode flag", "rem intra4×4 pred mode", and "ExtIntra4×4LumaPredMode".
[formula 7]
for(4×4LumaBlkIdx=0; 4×4LumaBlkIdx<16; 4×4LumaBlkIdx++){
i=4×4LumaBlkIdx+RasterTo4×4LumaBlkOffset(4×4LumaBlkIdx)
Ext4×4LumaBlkIdx=5*(i/4+1)+(i%4)+1
PredIntra4×4LumaPredMode
=Min(ExtIntra4×4LumaPredMode[Ext4×4LumaBlkIdx-1],
ExtIntra4×4LumaPredMode[Ext4×4LumaBlkIdx-5])
if(prev_intra4×4_pred_mode_flag[4×4LumaBlkIdx])
Intra4×4LumaPredMode[4×4LumaBlkIdx]=PredIntra4×4LumaPredMode
else{
if(rem_intra4×4_pred_mode[4×4LumaBlkIdx]<PredIntra4×4LumaPredMode)
Intra4×4LumaPredMode[4×4LumaBlkIdx]
=rem_intra4×4_pred_mode[4×4LumaBlkIdx]
else
Intra4×4LumaPredMode[4×4LumaBlkIdx]
=rem_intra4×4_pred_mode[4×4LumaBlkIdx]+1
}
ExtIntra4×4LumaPredMode[Ext4×4LumaBlkIdx]
=Intra4×4LumaPredMode[4×4LumaBlkIdx]
} ……(7)
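As an illustration, the pseudo code of formula (7) can be transcribed into Python roughly as follows. This is a non-authoritative sketch: "RasterTo4×4LumaBlkOffset" is taken as zero (i.e. raster block order is assumed), and the 5×5 extended array is represented as a flat 25-entry list whose border was pre-filled from the neighbours.

```python
def decode_4x4_modes(prev_flags, rem_modes, ext):
    """Decode the sixteen Intra4x4LumaPredMode values of one macro block.

    prev_flags: prev_intra4x4_pred_mode_flag per 4x4 block (16 entries)
    rem_modes:  rem_intra4x4_pred_mode per 4x4 block (16 entries)
    ext:        25-entry 5x5 grid whose top row and left column were
                pre-filled from the neighbouring macro blocks
    """
    modes = [0] * 16
    for blk in range(16):                      # raster order assumed
        ext_idx = 5 * (blk // 4 + 1) + (blk % 4) + 1
        # most probable mode: minimum of the left and upper neighbours
        pred = min(ext[ext_idx - 1], ext[ext_idx - 5])
        if prev_flags[blk]:
            modes[blk] = pred
        elif rem_modes[blk] < pred:
            modes[blk] = rem_modes[blk]
        else:
            modes[blk] = rem_modes[blk] + 1
        ext[ext_idx] = modes[blk]              # feed back for later blocks
    return modes
```

Writing each decoded mode back into the extended array is what lets blocks further right and further down use it as their predictor, exactly as the last two lines of formula (7) do.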
Meanwhile, the decoding processing of "Intra4×4LumaPredMode" for the macro block pair described above with reference to FIG. 7 is performed as follows. This processing is applied if the value of "mb adaptive frame field flag" is 1 and the value of "field pic flag" is 0. The processing is performed by the pseudo code in formula (7), using "prev intra4×4 pred mode flag", "rem intra4×4 pred mode", and "ExtIntra4×4LumaPredModeTop" of the "top macroblock" of the macro block pair; the resulting "Intra4×4LumaPredMode" is assigned to the "top macroblock" of the macro block pair. In addition, an "Intra4×4LumaPredMode" obtained by the pseudo code in formula (7) is assigned to the "bottom macroblock" of the macro block pair, the processing using "prev intra4×4 pred mode flag", "rem intra4×4 pred mode", and "ExtIntra4×4LumaPredModeBottom" of the "bottom macroblock" of the macro block pair.
In addition, in the decoding processing for intra prediction in the intra 4×4 mode, the predicted pixel values of the 4×4 block defined by "4×4LumaBlkIdx" and specified by "Intra4×4LumaPredMode[4×4LumaBlkIdx]" are obtained using the pixel values of the adjacent pixels A to M.
Here, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 0, "Vertical" prediction is used, and it is applied only when the adjacent pixels A to D are "available". The corresponding predicted values are represented by the following formula.
[formula 8]
a,e,i,m:A
b,f,j,n:B
c,g,k,o:C
d,h,l,p:D ……(8)
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 1, "Horizontal" prediction is used, and it is applied only when the adjacent pixels I to L are "available". The corresponding predicted values are represented by the following formula.
[formula 9]
a,b,c,d:I
e,f,g,h:J
i,j,k,l:K
m,n,o,p:L ……(9)
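For illustration, the vertical and horizontal predictions of formulas (8) and (9) can be sketched as follows (pixels a to p are laid out row by row in a 4×4 list of lists; the function names are our own):

```python
def predict_vertical(top):
    """Formula (8): every row repeats the neighbours A..D above the block."""
    return [list(top) for _ in range(4)]

def predict_horizontal(left):
    """Formula (9): every column repeats the neighbours I..L to the left."""
    return [[left[y]] * 4 for y in range(4)]
```

These two modes simply propagate the boundary samples straight down or straight across, which is why they need only their own side of the boundary to be "available".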
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 2, "DC" prediction is used; when all of the adjacent pixels A to L are "available", the predicted value is represented by the following formula.
[formula 10]
(A+B+C+D+I+J+K+L+4)>>3 ……(10)
On the other hand, if all of the adjacent pixels A to D are "unavailable", the predicted value is represented by the following formula.
[formula 11]
(I+J+K+L+2)>>2 ……(11)
In addition, if all of the adjacent pixels I to L are "unavailable", the predicted value is represented by the following formula.
[formula 12]
(A+B+C+D+2)>>2 ……(12)
In addition, when all of the adjacent pixels A to L are "unavailable", the predicted value is set to 128.
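The DC prediction and its fallbacks (formulas (10) to (12), plus the fixed value 128) can be sketched as follows; in this hedged illustration, `None` stands for an "unavailable" neighbour row or column:

```python
def predict_dc(top, left):
    """Mode 2 'DC': top = neighbours A..D, left = neighbours I..L,
    each either a list of four samples or None when unavailable."""
    if top is not None and left is not None:
        dc = (sum(top) + sum(left) + 4) >> 3   # formula (10)
    elif left is not None:
        dc = (sum(left) + 2) >> 2              # formula (11)
    elif top is not None:
        dc = (sum(top) + 2) >> 2               # formula (12)
    else:
        dc = 128                               # nothing available
    return [[dc] * 4 for _ in range(4)]
```

The added constants 4 and 2 before the shifts implement rounding to nearest, so the DC value is the rounded mean of whichever neighbours exist.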
On the other hand, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 3, "Diagonal Down Left" prediction is used, and it is applied only when the adjacent pixels A to M are "available". The predicted values are represented by the following formula.
[formula 13]
a: (A+2B+C+2)>>2
b,e: (B+2C+D+2)>>2
c,f,i: (C+2D+E+2)>>2
d,g,j,m: (D+2E+F+2)>>2
h,k,n: (E+2F+G+2)>>2
l,o: (F+2G+H+2)>>2
p: (G+3H+2)>>2 ……(13)
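The per-pixel rules of formula (13) follow a single pattern along the anti-diagonals, which can be sketched as follows (here `top8` holds the neighbours A..H, and the index arithmetic is our reading of the formula rather than the patent's own code):

```python
def predict_diag_down_left(top8):
    """Mode 3 'Diagonal Down Left' per formula (13)."""
    pred = [[0] * 4 for _ in range(4)]
    for y in range(4):
        for x in range(4):
            s = x + y
            if s < 6:
                # e.g. pixel a (s=0): (A + 2B + C + 2) >> 2
                pred[y][x] = (top8[s] + 2 * top8[s + 1] + top8[s + 2] + 2) >> 2
            else:
                # bottom-right pixel p: (G + 3H + 2) >> 2
                pred[y][x] = (top8[6] + 3 * top8[7] + 2) >> 2
    return pred
```

Every pixel on the same anti-diagonal (a; b,e; c,f,i; ...) shares one three-tap filtered value, which is exactly how formula (13) groups the pixels.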
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 4, "Diagonal Down Right" prediction is used, and it is applied only when the adjacent pixels A to M are "available". The predicted values are represented by the following formula.
[formula 14]
m: (J+2K+L+2)>>2
i,n: (I+2J+K+2)>>2
e,j,o: (M+2I+J+2)>>2
a,f,k,p: (A+2M+I+2)>>2
b,g,l: (M+2A+B+2)>>2
c,h: (A+2B+C+2)>>2
d: (B+2C+D+2)>>2 ……(14)
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 5, "Vertical Right" prediction is used, and it is applied only when the adjacent pixels A to M are "available". The predicted values are represented by the following formula.
[formula 15]
a,j: (M+A+1)>>1
b,k: (A+B+1)>>1
c,l: (B+C+1)>>1
d: (C+D+1)>>1
e,n: (I+2M+A+2)>>2
f,o: (M+2A+B+2)>>2
g,p: (A+2B+C+2)>>2
h: (B+2C+D+2)>>2
i: (M+2I+J+2)>>2
m: (I+2J+K+2)>>2 ……(15)
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 6, "Horizontal Down" prediction is used, and it is applied only when the adjacent pixels A to M are "available". The predicted values are represented by the following formula.
[formula 16]
a,g: (M+I+1)>>1
b,h: (I+2M+A+2)>>2
c: (M+2A+B+2)>>2
d: (A+2B+C+2)>>2
e,k: (I+J+1)>>1
f,l: (M+2I+J+2)>>2
i,o: (J+K+1)>>1
j,p: (I+2J+K+2)>>2
m: (K+L+1)>>1
n: (J+2K+L+2)>>2 ……(16)
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 7, "Vertical Left" prediction is used, and it is applied only when the adjacent pixels A to M are "available". The predicted values are represented by the following formula.
[formula 17]
a: (A+B+1)>>1
b,i: (B+C+1)>>1
c,j: (C+D+1)>>1
d,k: (D+E+1)>>1
l: (E+F+1)>>1
e: (A+2B+C+2)>>2
f,m: (B+2C+D+2)>>2
g,n: (C+2D+E+2)>>2
h,o: (D+2E+F+2)>>2
p: (E+2F+G+2)>>2 ……(17)
In addition, when the value of "Intra4×4LumaPredMode[4×4LumaBlkIdx]" is 8, "Horizontal Up" prediction is used, and it is applied only when the adjacent pixels A to M are "available". The predicted values are represented by the following formula.
[formula 18]
a: (I+J+1)>>1
b: (I+2J+K+2)>>2
c,e: (J+K+1)>>1
d,f: (J+2K+L+2)>>2
g,i: (K+L+1)>>1
h,j: (K+3L+2)>>2
k,l,m,n,o,p:L ……(18)
The processing for the intra 16×16 mode will be described below. The intra 16×16 mode is applied when "mb part pred mode(mb type, 0)" (the macro block partition prediction mode) equals "Intra16×16". In the predicted value generation processing, the "mb type" of the macro block and the pixel values of adjacent pixels not yet processed by the deblocking filter 15 are used as input parameters, and a predicted value is generated for the luminance signal of the macro block.
Here, the pixel values belonging to the macro block are represented as P(x,y), with x,y=0 to 15. In addition, the adjacent pixel values are represented as P(x,-1) and P(-1,y), with x,y=-1 to 15. If the following condition 1 or 2 is satisfied for a pixel value P(x,-1) or P(-1,y), that pixel value is judged to be "not available". Condition 1 is the case where no prediction source exists in the image or in the slice, and condition 2 is the case where the adjacent pixel value belongs to a non-intra macro block and the value of "constrained intra pred flag" is 1.
In the intra 16×16 mode, modes 0 to 3 are defined according to the availability of the pixel values P(x,y). Here, mode 0 is "vertical" prediction, applied only when the pixel values P(x,-1), x=-1 to 15, are "available", and the predicted values are represented by the following formula.
[formula 19]
Pred(x,y)=P(x,-1);x,y=0..15 ……(19)
In addition, mode 1 is "horizontal" prediction, applied only when the pixel values P(-1,y), y=-1 to 15, are "available". The predicted values are represented by the following formula.
[formula 20]
Pred(x,y)=P(-1,y);x,y=0..15 ……(20)
In addition, mode 2 is "DC" prediction, applied when all of the pixel values P(x,-1) and P(-1,y), x,y=-1 to 15, are "available"; the predicted value is represented by the following formula.
[formula 21]
Pred(x,y) = [ Σ_{x'=0..15} P(x',-1) + Σ_{y'=0..15} P(-1,y') + 16 ] >> 5
with x,y=0..15 ……(21)
In addition, in mode 2, if the pixel values P(x,-1), x=-1 to 15, are "not available", the predicted value is represented by the following formula.
[formula 22]
Pred(x,y) = [ Σ_{y'=0..15} P(-1,y') + 8 ] >> 4
with x,y=0..15 ……(22)
In addition, in mode 2, if the pixel values P(-1,y), y=-1 to 15, are "unavailable", the predicted value is represented by the following formula.
[formula 23]
Pred(x,y) = [ Σ_{x'=0..15} P(x',-1) + 8 ] >> 4
with x,y=0..15 ……(23)
In addition, if all of the pixel values P(x,-1) and P(-1,y), x,y=-1 to 15, are "unavailable", the predicted value is set to 128.
On the other hand, mode 3 is "plane" prediction, applied only when all of the pixel values P(x,-1) and P(-1,y), x,y=-1 to 15, are "available". The predicted values are represented by the following formula. Note that Clip1() represents a clipping operation into the range from 0 to 255.
[formula 24]
Pred(x,y)=Clip1((a+b·(x-7)+c·(y-7)+16)>>5)
a=16·(P(-1,15)+P(15,-1))
b=(5·H+32)>>6
c=(5·V+32)>>6
H = Σ_{x=1..8} x·(P(7+x,-1)-P(7-x,-1))
V = Σ_{y=1..8} y·(P(-1,7+y)-P(-1,7-y)) ……(24)
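The plane prediction of formula (24) can be sketched as follows. This is a hedged transcription: `top` holds P(x,-1) for x=0..15, `left` holds P(-1,y) for y=0..15, and `corner` is P(-1,-1), which the H and V sums reach at x=8 and y=8.

```python
def clip1(v):
    """Clip into the 0..255 sample range."""
    return max(0, min(255, v))

def predict_plane_16x16(top, left, corner):
    """Mode 3 'plane' per formula (24)."""
    t = lambda x: corner if x < 0 else top[x]    # P(x, -1)
    l = lambda y: corner if y < 0 else left[y]   # P(-1, y)
    h = sum(x * (t(7 + x) - t(7 - x)) for x in range(1, 9))
    v = sum(y * (l(7 + y) - l(7 - y)) for y in range(1, 9))
    a = 16 * (left[15] + top[15])
    b = (5 * h + 32) >> 6
    c = (5 * v + 32) >> 6
    return [[clip1((a + b * (x - 7) + c * (y - 7) + 16) >> 5)
             for x in range(16)] for y in range(16)]
```

The b and c terms estimate the horizontal and vertical gradients from the boundary samples, so the block is filled with a clipped linear ramp centred on pixel (7,7).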
The intra prediction of the chrominance signal will be described below. Here, the intra prediction of the chrominance signal is applied only to "I" macro blocks and "SI" macro blocks; using "intra chroma pred mode" (the intra chrominance prediction mode) and the adjacent pixel values not yet processed by the deblocking filter 15 as input parameters, a chrominance predicted value is generated for the macro block.
Here, the pixel values belonging to the macro block are represented as P(x,y), with x,y=0 to 7. In addition, the adjacent pixel values are represented as P(x,-1) and P(-1,y), with x,y=-1 to 7. Note that the intra prediction mode for the chrominance signal can be set independently of the prediction mode of the luminance signal. If condition 1 or 2 is satisfied for a pixel value P(x,-1) or P(-1,y), that pixel value is judged to be "not available". Note that condition 1 is the case where no prediction source exists in the image or in the slice, and condition 2 is the case where the adjacent pixel value belongs to a non-intra macro block and the value of "constrained intra pred flag" is 1.
In the chrominance intra prediction mode, modes 0 to 3 are defined according to the availability of the pixel values P(x,y). Here, mode 0 is "DC" prediction, applied when P(x,-1) and P(-1,y) are "available", and the predicted value is represented by the following formula.
[formula 25]
Pred(x,y) = ( Σ_{n=0..7} ( P(-1,n) + P(n,-1) ) + 8 ) >> 4
with x,y=0..7 ……(25)
Meanwhile, if the pixel values P(-1,y) are "unavailable", the predicted value is represented by the following formula.
[formula 26]
Pred(x,y) = [ ( Σ_{n=0..7} P(n,-1) ) + 4 ] >> 3
with x,y=0..7 ……(26)
In addition, if pixel value P (x ,-1) is " unavailable ", then predicted value is represented by following formula.
[formula 27]
Pred(x,y) = [ ( Σ_{n=0..7} P(-1,n) ) + 4 ] >> 3
with x,y=0..7 ……(27)
In addition, if the pixel values P(x,-1) and P(-1,y) are both "unavailable", the predicted value is set to 128.
In addition, mode 1 is "horizontal" prediction, applied only when the pixel values P(-1,y) are "available". The predicted values are represented by the following formula.
[formula 28]
Pred(x,y)=P(-1,y),x,y=0,...,7 ……(28)
In addition, mode 2 is "vertical" prediction, applied only when the pixel values P(x,-1) are "available". The predicted values are represented by the following formula.
[formula 29]
Pred(x,y)=P(x,-1),x,y=0,...,7 ……(29)
In addition, mode 3 is "plane" prediction, applied only when the pixel values P(x,-1) and P(-1,y) are "available". The predicted values are represented by the following formula.
[formula 30]
Pred(x,y)=Clip1((a+b·(x-3)+c·(y-3)+16)>>5); x,y=0,...,7
a=16·(P(-1,7)+P(7,-1))
b=(17·H+16)>>5
c=(17·V+16)>>5
H = Σ_{x=1..4} x·[P(3+x,-1)-P(3-x,-1)]
V = Σ_{y=1..4} y·[P(-1,3+y)-P(-1,3-y)] ……(30)
The encoding device selects the optimum prediction mode from the various prediction modes described above, and performs the encoding processing on the image data D1. Here, let Org(i,j) be a pixel value of the original image in a 4×4 block, and let Pred(mode,i,j) be the prediction indicated by the intra prediction mode value "mode". The encoding device sets as the optimum prediction mode the prediction mode that minimizes the prediction error in the computation based on the following formula.
[formula 31]
SAD(mode) = Σ_{i=0..3} Σ_{j=0..3} | Org(i,j) - Pred(mode,i,j) | ……(31)
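Formula (31) and the minimum-SAD selection can be sketched as follows; the callable `pred` stands in for the per-mode prediction of the text, and the function names are our own:

```python
def sad(org, pred, mode):
    """Formula (31): sum of absolute differences over the 4x4 block."""
    return sum(abs(org[i][j] - pred(mode, i, j))
               for i in range(4) for j in range(4))

def best_mode_by_sad(org, pred, modes):
    """Choose the prediction mode minimising the prediction error."""
    return min(modes, key=lambda m: sad(org, pred, m))
```

A mode whose prediction reproduces the original block exactly yields SAD = 0 and is therefore always selected.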
Here, the mode giving the minimum SAD value is selected. Alternatively, SATD(mode), shown in the following formula, may be used in place of SAD(mode).
[formula 32]
SATD(mode) = Σ_{i=0..3} Σ_{j=0..3} | Hadamard( Org(i,j) - Pred(mode,i,j) ) | ……(32)
Hadamard() represents the Hadamard transform operation, which is performed by multiplying the target matrix by the Hadamard transform matrix, as shown in the following formula.
[formula 33]
Hadamard(A)=H TAH ……(33)
Note that "H" is the Hadamard transform matrix, the fourth-order matrix represented by the following formula.
[formula 34]
H = | 1  1  1  1 |
    | 1 -1  1 -1 |
    | 1  1 -1 -1 |
    | 1 -1 -1  1 | ……(34)
"H^T" represents the transpose of the Hadamard transform matrix H.
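Formulas (32) to (34) amount to transforming the 4×4 difference block with H and summing the magnitudes of the result, which can be sketched as follows (a minimal illustration using plain lists; the function names are our own):

```python
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def satd(diff):
    """SATD of a 4x4 difference block: sum of |H^T * diff * H| entries."""
    ht = [list(col) for col in zip(*H)]        # transpose of H
    transformed = matmul(matmul(ht, diff), H)  # formula (33)
    return sum(abs(v) for row in transformed for v in row)
```

For a constant difference block the transform concentrates all the energy into the single DC coefficient, so SATD of an all-ones block is 16.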
In practice, in the encoding device 1, as shown in FIGS. 14A to 14C, SAD(mode) or SATD(mode) is obtained for each prediction mode on the basis of block units of 16 pixels × 16 pixels, for the intra 4×4 mode, the intra 8×8 mode, and the intra 16×16 mode. The mode giving the minimum value, either directly or through a cost function (in which a term based on the code generated by each mode is added), is selected as the optimum prediction mode. FIG. 14D also shows the relation of each prediction mode to the macro block.
Alternatively, the number of bits required to encode the 16 pixel × 16 pixel block in each intra prediction mode "mode" can be expressed as SAD0(mode) or SATD0(mode), and the prediction mode that minimizes the cost value "Cost" defined using SAD0(mode) or SATD0(mode) is taken as the optimum prediction mode. Note that the cost value "Cost" is represented by the following formula, where QP0(QP) is a function linking the quantization parameter QP to the quantization scale used for the actual quantization. The cost value is a value obtained for each prediction mode by the cost function used for judging the prediction mode, and is an index representing the amount of code to be generated. Specifically, the distortion term of the cost value is the value of the right-hand side of formula (31) or (32) for each prediction mode.
[formula 35]
Cost=SAD(mode)+QP0(QP)·SAD0(mode) ……(35)
Note that the following formula may be used in place of formula (35).
[formula 36]
Cost=SATD(mode)+QP0(QP)·SATD0(mode) ……(36)
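The mode decision of formulas (35)/(36) can be sketched as follows; the callables `distortion` and `header_cost` stand in for SAD/SATD(mode) and SAD0/SATD0(mode), which the encoder measures, and `qp0` stands in for the value of QP0(QP):

```python
def best_mode_by_cost(distortion, header_cost, qp0, modes):
    """Choose the mode minimising Cost = distortion + QP0(QP) * header term."""
    return min(modes, key=lambda m: distortion(m) + qp0 * header_cost(m))
```

The weighting by QP0(QP) trades distortion against bit cost: at larger quantization scales, a mode that costs fewer header bits can beat a mode with slightly lower distortion.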
Regarding such encoding processing based on various prediction modes, for example, Japanese Unexamined Patent Application Publication No. 2005-151017 proposes a method in which the processing of selecting the optimum mode is simplified by narrowing down the prediction modes based on, for example, the resolution of the image to be processed.
Meanwhile, the processing of selecting the optimum prediction mode from a plurality of prediction modes has the problem that its computational complexity is very large. In particular, in the intra 4×4 mode, the intra 8×8 mode, and the intra 16×16 mode, cost values must be calculated for 9, 9, and 4 prediction directions, respectively, for each block, so the computational complexity required to calculate the cost values is very high.
Summary of the invention
The present invention has been made in view of these problems, and it is desirable to provide an encoding device, an encoding method, a program for the encoding method, and a recording medium on which the program for the encoding method is recorded, which can reduce the computational complexity of selecting the optimum prediction mode when the image data is encoded by selecting the optimum prediction mode from a plurality of prediction modes.
According to an embodiment of the present invention, the present invention is applied to an encoding device which, in an intra prediction mode, generates differential data by subtracting a predicted value from input image data, and encodes the input image data by processing the differential data. The encoding device includes: a first prediction mode detecting section for detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting section for detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating section for selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The first prediction mode detecting section detects the first optimum prediction mode by utilizing the processing in the second prediction mode detecting section.
According to another embodiment of the present invention, the present invention is applied to an encoding device which, in an intra prediction mode, generates differential data by subtracting a predicted value from input image data, and encodes the input image data by processing the differential data. The encoding device includes: a first prediction mode detecting section for detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting section for detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating section for selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The second prediction mode detecting section detects the second optimum prediction mode by utilizing the processing in the first prediction mode detecting section.
According to still another embodiment of the present invention, there is provided an encoding method for, in an intra prediction mode, generating differential data by subtracting a predicted value from input image data, and encoding the input image data by processing the differential data. The encoding method includes: a first prediction mode detecting step of detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting step of detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating step of selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The first prediction mode detecting step detects the first optimum prediction mode by utilizing the processing in the second prediction mode detecting step.
In another embodiment of the present invention, there is provided an encoding method for, in an intra prediction mode, generating differential data by subtracting a predicted value from input image data, and encoding the input image data by processing the differential data. The encoding method includes: a first prediction mode detecting step of detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting step of detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating step of selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The second prediction mode detecting step detects the second optimum prediction mode by utilizing the processing in the first prediction mode detecting step.
In another embodiment of the present invention, there is provided a program for an encoding method, the encoding method being for, in an intra prediction mode, generating differential data by subtracting a predicted value from input image data, and encoding the input image data by processing the differential data. The program includes: a first prediction mode detecting step of detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting step of detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating step of selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The first prediction mode detecting step detects the first optimum prediction mode by utilizing the processing in the second prediction mode detecting step.
In another embodiment of the present invention, there is provided a program for an encoding method, the encoding method being for, in an intra prediction mode, generating differential data by subtracting a predicted value from input image data, and encoding the input image data by processing the differential data. The program includes: a first prediction mode detecting step of detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting step of detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating step of selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The second prediction mode detecting step detects the second optimum prediction mode by utilizing the processing in the first prediction mode detecting step.
In another embodiment of the present invention, there is provided a recording medium on which a program for an encoding method is recorded, the encoding method being for, in an intra prediction mode, generating differential data by subtracting a predicted value from input image data, and encoding the input image data by processing the differential data. The program includes: a first prediction mode detecting step of detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting step of detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating step of selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The first prediction mode detecting step detects the first optimum prediction mode by utilizing the processing in the second prediction mode detecting step.
In another embodiment of the present invention, there is provided a recording medium on which a program for an encoding method is recorded, the encoding method being for, in an intra prediction mode, generating differential data by subtracting a predicted value from input image data, and encoding the input image data by processing the differential data. The program includes: a first prediction mode detecting step of detecting, from a plurality of prediction modes in a first predicted value generation unit, a first optimum prediction mode most suited to the encoding processing in the first predicted value generation unit; a second prediction mode detecting step of detecting, from a plurality of prediction modes in a second predicted value generation unit (a unit obtained by dividing the first predicted value generation unit), a second optimum prediction mode most suited to the encoding processing in the second predicted value generation unit; and a predicted value generating step of selecting, from the first and second optimum prediction modes, the optimum prediction mode most suited to the encoding processing, and generating the predicted value according to the selected optimum prediction mode. The second prediction mode detecting step detects the second optimum prediction mode by utilizing the processing in the first prediction mode detecting step.
With the configurations of the first encoding device, encoding method, program for the encoding method, and recording medium on which the program for the encoding method is recorded, the first optimum prediction mode is detected by utilizing the processing procedure used to detect the second optimum prediction mode, so the computational complexity required to detect the first optimum prediction mode can be reduced. Accordingly, when the image data is encoded by selecting the optimum prediction mode from a plurality of prediction modes, the computational complexity required to select the optimum prediction mode is reduced.
Likewise, with the configurations of the second encoding device, encoding method, program for the encoding method, and recording medium on which the program for the encoding method is recorded, the second optimum prediction mode is detected by utilizing the processing procedure used to detect the first optimum prediction mode, so the computational complexity required to detect the second optimum prediction mode can be reduced. Accordingly, when the image data is encoded by selecting the optimum prediction mode from a plurality of prediction modes, the computational complexity required to select the optimum prediction mode is reduced.
According to the embodiments of the present invention, when the image data is encoded by selecting the optimum prediction mode from a plurality of prediction modes, the computational complexity required to select the optimum prediction mode is reduced.
Description of drawings
Fig. 1 is a flowchart showing the processing procedure of the intra prediction estimation circuit in the encoding device according to Embodiment 1 of the present invention;
Fig. 2 is a block diagram showing the encoding device according to Embodiment 1 of the present invention;
Fig. 3 is a flowchart showing the processing procedure of the intra prediction estimation circuit in the encoding device according to Embodiment 2 of the present invention;
Fig. 4 is a flowchart showing the processing procedure of the intra prediction estimation circuit in the encoding device according to Embodiment 3 of the present invention;
Fig. 5 is a block diagram showing a related-art encoding device;
Fig. 6 is a block diagram showing a related-art decoder;
Fig. 7 is a diagram (line drawing) showing the processing order of the intra 4×4 mode within a 16×16-pixel block;
Fig. 8 is a diagram showing the processing of a macroblock pair;
Fig. 9 is a diagram showing the predicted value generation process;
Fig. 10 is a table showing the prediction modes of the intra 4×4 mode;
Fig. 11 is a diagram showing the prediction directions of the intra 4×4 mode;
Fig. 12 is a diagram showing prediction of the "prediction direction" from adjacent macroblocks;
Figs. 13A to 13D are diagrams showing the processing of a macroblock pair; and
Figs. 14A to 14D are diagrams showing the relationship of the macroblocks in each prediction mode.
Embodiment
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
[Embodiment 1]
(1) Configuration of the Embodiment
Fig. 2 is a block diagram showing the encoding device of Embodiment 1 of the present invention, in comparison with the encoding device of Fig. 5. The configuration of this encoding device 41 is identical to that of the encoding device 1, except that an intra prediction circuit 42 and an intra prediction estimation circuit 43 are provided in place of the intra prediction circuit 5. The encoding device 41 selects the optimum prediction mode by means of the intra prediction circuit 42 and the intra prediction estimation circuit 43, and generates the predicted value used for intra prediction.
The intra prediction circuit 42 calculates the cost values of the respective prediction modes of the intra 4×4 mode, and outputs the calculated cost values to the intra prediction estimation circuit 43 in block units corresponding to an intra 16×16 mode macroblock. Various cost value calculation methods can be used, as long as they can indicate the amount of code generated.
The intra prediction estimation circuit 43 calculates the cost values of the respective prediction modes of the intra 8×8 mode and the intra 16×16 mode by using the cost values of the respective prediction modes of the intra 4×4 mode output from the intra prediction circuit 42. It then detects the minimum values among the calculated cost values, and thereby detects the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode. The intra prediction estimation circuit 43 notifies the intra prediction circuit 42 of the detected optimum prediction modes together with their cost values.
The intra prediction circuit 42 detects the optimum prediction mode of the intra 4×4 mode from the initially detected cost values of the respective prediction modes of the intra 4×4 mode. From this optimum prediction mode of the intra 4×4 mode and the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode notified by the intra prediction estimation circuit 43, the intra prediction circuit 42 detects the optimum prediction mode to be used for intra prediction, calculates a predicted value based on that optimum prediction mode, and outputs the calculated predicted value to the subtraction circuit 4. The optimum prediction mode for intra prediction is detected by, for example, detecting the lowest cost value.
Fig. 1 is a flowchart showing the processing procedure in the intra prediction estimation circuit 43. When the cost values of the respective prediction modes of the intra 4×4 mode (corresponding to a 16×16-pixel block) are input from the intra prediction circuit 42, the intra prediction estimation circuit 43 starts this processing procedure and proceeds from step SP1 to step SP2.
Suppose the cost value of each prediction mode of the intra 4×4 mode is Cost[x][K], where K is a variable representing the prediction direction. In the intra 4×4 mode and the intra 8×8 mode, the variable K thus specifies the nine prediction directions of modes 0 to 8, and in the intra 16×16 mode, the variable K specifies the four prediction directions of modes 0 to 3. Further, x is a variable specifying any of the blocks denoted 1 to 16 in Fig. 14A, that is, a variable specifying each intra 4×4 mode block within an intra 16×16 mode macroblock.
In step SP2, the intra prediction estimation circuit 43 initializes the variables α and K used to calculate the intra 8×8 mode cost value Cost[α][K]. Here, α is a variable specifying one of the sub-macroblocks denoted i to iv in Fig. 14B, that is, a variable specifying each intra 8×8 mode sub-macroblock within an intra 16×16 mode macroblock.
Specifically, in the example of Fig. 1, the variable K is initialized to the value representing mode 0, and α is initialized to the value i representing the first sub-macroblock.
Then, in step SP3, the intra prediction estimation circuit 43 carries out the computation of the following formula: for the prediction mode of the intra 8×8 mode specified by the variable K, it adds the intra 4×4 mode cost values Cost(x)[K] of the corresponding prediction direction and blocks, thereby calculating the cost value Cost(α)[K].
[formula 37]
Cost(α)[K]=∑Cost(x)[K] ……(37)
The range of "x" on the right side of formula (37) is shown in the table of Fig. 14D. Accordingly, for mode 0 of the first sub-macroblock in the intra 8×8 mode, the intra prediction estimation circuit 43 calculates the cost value by adding the mode-0 cost values "Cost" of blocks 1, 2, 5 and 6 in the intra 4×4 mode, as shown in the following formula.
[formula 38]
Cost(i)[K]=Cost(1)[K]+Cost(2)[K]+Cost(5)[K]+Cost(6)[K]……(38)
If there is a prediction direction u that cannot be used under the JVT standard, the cost value Cost(N)[u] is set to a value by which the mode of that prediction direction will not be selected, and the cost values are then added as described above, for example as shown in the following formula. In the following formula the cost value is set to infinity; as an alternative, however, any value larger than the other cost values, such as a value that overflows in the computation or a value exceeding the maximum expected cost value, is also applicable.
[formula 39]
Cost(N)[u]=∞ ……(39)
Then, the intra prediction estimation circuit 43 proceeds to step SP4, where it updates the variable K to change the prediction mode whose cost value is to be calculated. In the subsequent step SP5, the circuit 43 judges, by examining the variable K, whether the cost value calculation has proceeded to the last prediction mode. If a negative result is obtained, the intra prediction estimation circuit 43 returns to step SP3 and similarly calculates the cost value of the next prediction mode. As a result, the intra prediction estimation circuit 43 repeats the processing of steps SP3-SP4-SP5-SP3 and successively calculates the cost values of the first sub-macroblock i of the intra 8×8 mode from mode 0 to mode 8. When the cost values have been calculated up to mode 8, an affirmative result is obtained in step SP5, and the circuit 43 proceeds to step SP6.
In step SP6, the intra prediction estimation circuit 43 initializes the variable K. It also updates the variable α to switch the sub-macroblock for which the cost value calculation is performed. The intra prediction estimation circuit 43 then proceeds to step SP7 and judges, by examining the variable α, whether the cost values have been calculated up to the last intra 8×8 mode sub-macroblock iv. If a negative result is obtained, the intra prediction estimation circuit 43 returns to step SP3 and calculates the cost values of the next sub-macroblock.
Accordingly, in this case, the intra prediction estimation circuit 43 successively adds the cost values calculated in the intra 4×4 mode and calculates the cost values of the respective prediction modes of each of the subsequent sub-macroblocks ii, iii and iv, in the same manner as for sub-macroblock i. After the cost values of the last sub-macroblock iv have been calculated, the circuit 43 proceeds from step SP7 to step SP8.
In step SP8, the intra prediction estimation circuit 43 carries out the computation based on the following formula for each sub-macroblock: using the cost values of modes 0 to 8 detected for each of the sub-macroblocks i to iv, it detects the lowest cost value of each sub-macroblock. The circuit 43 further detects the prediction direction d(α) of the detected minimum value.
[formula 40]
d(α)=min(Cost(α)[K]) ……(40)
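As an illustrative sketch (not part of the disclosed apparatus), the computations of formulas (37) to (40) can be expressed as follows. The mapping of 4×4 blocks 1, 2, 5, 6 to sub-macroblock i follows formula (38); the mappings for sub-macroblocks iii and iv are assumed to follow the same pattern of the table of Fig. 14D, and all names are illustrative:

```python
# Mapping of intra 4x4 blocks (1..16, Fig. 14A) to intra 8x8
# sub-macroblocks i..iv; i and ii are given in the text, iii and iv
# are assumed to continue the same pattern (Fig. 14D).
SUB_MB_BLOCKS = {
    "i":   [1, 2, 5, 6],
    "ii":  [3, 4, 7, 8],
    "iii": [9, 10, 13, 14],
    "iv":  [11, 12, 15, 16],
}

def best_8x8_modes(cost4x4):
    """cost4x4[x][K]: cost of 4x4 block x (1..16) in mode K (0..8).

    Per formulas (37) and (40): Cost(a)[K] is the sum of Cost(x)[K]
    over the four 4x4 blocks inside sub-macroblock a, and the optimum
    mode d(a) is the K minimizing Cost(a)[K].
    """
    result = {}
    for sub, blocks in SUB_MB_BLOCKS.items():
        costs = [sum(cost4x4[x][K] for x in blocks) for K in range(9)]
        best_k = min(range(9), key=lambda K: costs[K])  # d(a)
        result[sub] = (best_k, costs[best_k])
    return result
```

A mode that is unavailable for a given block can be handled as in formula (39) by storing a very large cost for it, so the minimum search never selects it.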
By the processing of step SP8, the intra prediction estimation circuit 43 detects the optimum prediction mode of the intra 8×8 mode for each sub-macroblock. The intra prediction estimation circuit 43 then proceeds to step SP9 and initializes the variable K.
Further, in the subsequent step SP10, the circuit 43 carries out the computation based on the following formula: for the prediction mode of the intra 16×16 mode specified by the variable K, it adds the intra 4×4 mode cost values Cost(x)[K] of the corresponding prediction direction, thereby calculating the cost value Cost[K].
[formula 41]
Cost[K]=∑Cost(x)[K] ……(41)
Accordingly, for mode 0, the intra prediction estimation circuit 43 calculates the cost value by adding the mode-0 cost values "Cost" of the intra 4×4 mode blocks 1 to 16. Then, in step SP11, the intra prediction estimation circuit 43 changes the mode used for calculation by updating the variable K. In the subsequent step SP12, the circuit 43 judges, by examining the variable K, whether the cost values have been calculated up to the last mode of the intra 16×16 mode.
When a negative result is obtained in step SP12, the intra prediction estimation circuit 43 returns to step SP10 and calculates the cost value of the next mode, mode 1. When the cost value of mode 1 has been calculated, the circuit 43 updates the mode used for calculation in step SP11, then returns from step SP12 to step SP10 and calculates the cost value of mode 2. The circuit 43 similarly calculates the cost value of mode 3.
In the JVT standard, the third prediction direction, that of mode 3, differs between the intra 16×16 mode and the intra 4×4 mode. Therefore, the intra prediction estimation circuit 43 calculates the cost value of mode 3 of the intra 16×16 mode by adding intra 4×4 mode cost values of different prediction directions through taking their mean. Specifically, for this mode-3 cost calculation, the intra prediction estimation circuit 43 carries out the computation of the following formula, adding the means of the cost value Cost(x)[3] of the third prediction direction (the mode-3 cost value) and the cost value Cost(x)[8] of the eighth prediction direction (the mode-8 cost value), both in the intra 4×4 mode, so as to calculate the intra 16×16 mode cost value Cost[3].
[formula 42]
Cost[3]=∑{(Cost(x)[3]+Cost(x)[8])/2} ……(42)
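As an illustrative sketch of formula (42): the intra 16×16 mode-3 direction has no exact intra 4×4 counterpart, so its cost is estimated by averaging the two correlated intra 4×4 directions (modes 3 and 8) over all sixteen blocks. The function name and data layout are illustrative only:

```python
def cost_16x16_mode3(cost4x4):
    """cost4x4[x][K]: cost of intra 4x4 block x (1..16) in mode K (0..8).

    Formula (42): Cost[3] = sum over x of (Cost(x)[3] + Cost(x)[8]) / 2.
    """
    return sum((cost4x4[x][3] + cost4x4[x][8]) / 2 for x in range(1, 17))
```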
In the computation using formula (42), the intra prediction estimation circuit 43 utilizes the results of the modes that are highly correlated between the intra 16×16 mode and the intra 4×4 mode, and thereby calculates the intra 16×16 mode cost value Cost[3] for a prediction direction that is unavailable in the intra 4×4 mode.
When the mode-3 cost value has been calculated in step SP10, an affirmative result is obtained in the subsequent step SP12, and the intra prediction estimation circuit 43 proceeds from step SP12 to step SP13.
The intra prediction estimation circuit 43 detects the minimum value among the intra 16×16 mode cost values of modes 0 to 3 detected by repeating step SP10, and thereby detects the optimum prediction mode of the intra 16×16 mode. The intra prediction estimation circuit 43 notifies the intra prediction circuit 42 of the optimum prediction mode of the intra 16×16 mode detected in step SP13 and the optimum prediction modes of the intra 8×8 mode detected in step SP8, then proceeds to step SP14 and ends this processing procedure.
(2) Operation of the Embodiment
In this configuration (Fig. 2), the video signal S1 is converted into image data D1 by the analog-to-digital converter 2 and sorted in the screen sorting buffer 3 based on the GOP structure. Then, in the subsequent subtraction circuit 4, the predicted values of intra prediction and/or inter prediction generated by the intra prediction circuit 5 and/or the motion prediction/compensation circuit 6 are subtracted, thereby generating differential data D2. The differential data D2 of the video signal S1 is subjected to the orthogonal transform processing of the orthogonal transform circuit 7, the coefficient data obtained by the transform processing is quantized in the quantization circuit 8, and the quantized data is then subjected to the variable-length coding processing of the reversible coding circuit 10 so as to generate encoded data D5 (D4). The encoded data D5 is output to the transmission channel through the buffer 11. In addition, the output data of the quantization circuit 8 is subjected in order to the processing of the inverse quantization circuit 13 and the inverse orthogonal transform circuit 14 so as to decode the original image data D1, and this image data D1 is stored in the frame memory 16 as reference image information.
In this series of encoding processes, the encoding device 41 selects the optimum prediction mode from among a plurality of intra prediction modes and a plurality of inter prediction modes, generates the differential data D2 by subtracting the predicted value of the selected prediction mode from the image data D1, and thereby compresses and transmits the image data D1 efficiently.
However, in the process of selecting the optimum prediction mode from the plurality of prediction modes, the cost value indicating the amount of code generated is calculated for each mode and the minimum cost value is detected. A huge amount of computation is therefore needed to calculate the cost values.
In the encoding device 41, for the intra 4×4 mode among the intra prediction modes, the intra prediction circuit 42 calculates the cost value of each prediction mode, in a manner similar to the related art, and detects the minimum cost value among the calculated cost values so as to detect the optimum prediction mode.
On the other hand, for the intra 8×8 mode and the intra 16×16 mode, the intra prediction estimation circuit 43 detects the respective optimum prediction modes by utilizing the intra 4×4 mode processing in the intra prediction circuit 42. Further, the intra prediction circuit 42 judges the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode detected by the intra prediction estimation circuit 43, together with the optimum prediction mode of the intra 4×4 mode, so as to detect the optimum prediction mode to be used for intra prediction, and generates the predicted value for intra prediction by the detected optimum prediction mode.
As a result, in the encoding device 41, the intra 4×4 mode processing is utilized to detect the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode. Accordingly, compared with the related-art processing, the computation performed when detecting the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode can be reduced, which reduces the computation required to select the optimum prediction mode.
Specifically, in the intra prediction estimation circuit 43 of the encoding device 41, for each intra 8×8 mode prediction mode, the cost values of the corresponding prediction modes obtained in the intra 4×4 mode are summed, thereby obtaining the cost values of the respective prediction modes of the intra 8×8 mode. Similarly, for each intra 16×16 mode prediction mode, the cost values of the corresponding prediction modes obtained in the intra 4×4 mode are summed to obtain the cost values of the respective prediction modes of the intra 16×16 mode. Further, the minimum cost value is obtained for each of the intra 8×8 mode and the intra 16×16 mode, and the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode are thereby obtained.
Accordingly, in the encoding device 41, the processing of calculating the cost values of the intra 8×8 mode and the intra 16×16 mode can be carried out with a far smaller amount of computation than in the related art. As a result, compared with the related art, the computation required to select the optimum prediction mode is reduced.
As in this embodiment, when the cost values of the intra 8×8 mode and the intra 16×16 mode are obtained by using the cost values of the intra 4×4 mode, the accuracy may be lower than when the cost values are obtained by using formulas (31), (32) and the like, because of the difference in the positions of the prediction source pixels, whether or not deblocking filter processing is carried out, and so on. However, since the prediction direction and the spatial position with respect to the original image are the same in the intra 4×4 mode, the intra 8×8 mode and the like, cost values of practically sufficient accuracy can in fact be obtained compared with the case of obtaining the cost values by using formulas (31), (32) and the like. Accordingly, in the encoding device 41, the optimum prediction mode can be obtained with practically sufficient accuracy, and the image data D1 can be encoded efficiently.
Further, for an intra 16×16 mode prediction direction that does not exist in the intra 4×4 mode, the cost value is obtained by taking the means of the cost values of two highly related modes and summing those means. As a result, even for an intra 16×16 mode prediction direction that does not exist in the intra 4×4 mode, the encoding device 41 can detect the cost value by utilizing the results of the highly related modes of the intra 4×4 mode, and can judge whether that direction is the optimum prediction mode.
(3) Effects of the Embodiment
According to the above configuration, image data is encoded by selecting the optimum prediction mode from a plurality of prediction modes, where optimum prediction modes are detected based on different predicted-value generating units and the detection processing of one optimum prediction mode is utilized for the detection of the other optimum prediction modes; the prediction mode suitable for carrying out intra prediction in the encoding processing is then detected from the optimum prediction modes detected for the respective predicted-value generating units. The computational complexity required for selecting the optimum prediction mode is thereby reduced.
In addition, in the scheme in which the cost values of the respective prediction modes are calculated and compared so as to detect the optimum prediction mode, the cost values on the larger predicted-value generating unit side are calculated by summing the cost values detected on the smaller predicted-value generating unit side. The computational complexity required for the cost value calculation on the larger predicted-value generating unit side is thereby significantly reduced, and therefore the computational complexity required for selecting the optimum prediction mode is reduced.
In addition, by aggregating the means of the cost values having different prediction directions, the computational complexity required for the calculation can be significantly reduced even for the cost value of a prediction direction that does not exist on one side, and the cost values can therefore be calculated with practically sufficient accuracy.
In addition, in the case where the optimum prediction mode is selected from the plurality of prediction modes available in each of the intra 4×4 mode, the intra 8×8 mode and the intra 16×16 mode, the computational complexity required for selecting the optimum prediction mode can be reduced by calculating the cost values of the intra 8×8 mode and the intra 16×16 mode using the cost values of the intra 4×4 mode.
[Embodiment 2]
In this embodiment, the processing for detecting the optimum prediction mode of the intra 8×8 mode is utilized to detect the optimum prediction modes of the intra 4×4 mode and the intra 16×16 mode, instead of utilizing the processing for detecting the optimum prediction mode of the intra 4×4 mode to detect the optimum prediction modes of the intra 8×8 mode and the intra 16×16 mode. Specifically, the cost values of the intra 4×4 mode and the intra 16×16 mode are calculated by using the cost values calculated in the intra 8×8 mode. The configuration of the encoding device according to Embodiment 2 of the present invention is the same as that of the encoding device 41 according to Embodiment 1, except that the processing required to carry out the cost value calculation is different. Accordingly, the configuration is described below with reference to Fig. 2.
In the encoding device, the intra prediction circuit 42 calculates the cost values of the respective prediction modes of the intra 8×8 mode, and outputs the calculated cost values to the intra prediction estimation circuit 43 in block units corresponding to an intra 16×16 mode macroblock.
The intra prediction estimation circuit 43 calculates the cost values of the respective prediction modes of the intra 4×4 mode and the intra 16×16 mode by using the cost values of the respective prediction modes of the intra 8×8 mode output from the intra prediction circuit 42. Further, the circuit 43 detects the minimum values among the calculated cost values, and detects the optimum prediction modes of the intra 4×4 mode and the intra 16×16 mode. The intra prediction estimation circuit 43 notifies the intra prediction circuit 42 of the detected optimum prediction modes together with the cost values.
The intra prediction circuit 42 detects the optimum prediction mode of the intra 8×8 mode from the initially detected cost values of the respective prediction modes of the intra 8×8 mode. From this optimum prediction mode of the intra 8×8 mode and the optimum prediction modes of the intra 4×4 mode and the intra 16×16 mode notified by the intra prediction estimation circuit 43, the intra prediction circuit 42 detects the prediction mode that is optimum for intra prediction, calculates predicted values based on the optimum prediction mode for intra prediction, and outputs these values to the subtraction circuit 4.
Fig. 3, shown in comparison with Fig. 1, is a flowchart showing the processing procedure in the intra prediction estimation circuit 43. When the cost values of the respective prediction modes of the intra 8×8 mode corresponding to a 16×16-pixel block are input from the intra prediction circuit 42, the intra prediction estimation circuit 43 starts this processing procedure and proceeds from step SP21 to step SP22.
In step SP22, the intra prediction estimation circuit 43 initializes the variables x and K to be used for calculating the intra 4×4 mode cost values Cost[x][K]. Specifically, in the example of Fig. 3, the variable K is initialized to the value representing mode 0, and x is initialized to the value representing the first block 1 (see Fig. 14).
Then, in step SP23, the intra prediction estimation circuit 43 carries out the computation based on the following formula for the prediction mode of the intra 4×4 mode specified by the variable K: it divides the intra 8×8 mode cost value Cost(α)[K] of the corresponding prediction direction and sub-macroblock by the number of intra 4×4 mode blocks, so as to calculate the corresponding cost value Cost(x)[K].
[formula 43]
Cost(x)[K]=Cost(α)[K]/4 ……(43)
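As an illustrative sketch of formula (43) in Embodiment 2: the cost of each intra 4×4 block is approximated as one quarter of the cost of its enclosing intra 8×8 sub-macroblock in the same direction. The block-to-sub-macroblock mapping for sub-macroblocks i and ii is given in the text; the mapping for iii and iv is assumed to follow the same pattern of Fig. 14D, and all names are illustrative:

```python
# Assumed inverse of the Fig. 14D table: each intra 4x4 block index maps
# to the intra 8x8 sub-macroblock that encloses it.
BLOCK_TO_SUB_MB = {
    1: "i", 2: "i", 5: "i", 6: "i",
    3: "ii", 4: "ii", 7: "ii", 8: "ii",
    9: "iii", 10: "iii", 13: "iii", 14: "iii",
    11: "iv", 12: "iv", 15: "iv", 16: "iv",
}

def est_cost_4x4(cost8x8, x, K):
    """Formula (43): Cost(x)[K] = Cost(alpha)[K] / 4.

    cost8x8[sub][K]: cost of intra 8x8 sub-macroblock sub in mode K.
    """
    return cost8x8[BLOCK_TO_SUB_MB[x]][K] / 4
```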
Note that the range of α on the right side of formula (43) is shown in the table of Fig. 14D. If there is a prediction direction u that cannot be used in the JVT standard, the cost value is calculated by setting Cost(α)[K] to a value by which the mode of that prediction direction will not be selected, similarly to Embodiment 1.
Then, the intra prediction estimation circuit 43 proceeds to step SP24, updates the variable K, and changes the prediction mode whose cost value is to be calculated. In the subsequent step SP25, the circuit 43 judges the variable K to determine whether the cost values have been calculated up to the last prediction mode. If a negative result is obtained, the intra prediction estimation circuit 43 returns to step SP23 and similarly calculates the cost value of the next prediction mode. As a result, the intra prediction estimation circuit 43 repeats the processing of steps SP23-SP24-SP25-SP23 to successively calculate the cost values of the first block 1 of the intra 4×4 mode from mode 0 to mode 8; when the cost values have been calculated up to mode 8, an affirmative result is obtained in step SP25, and the circuit 43 proceeds to step SP26.
In step SP26, the intra prediction estimation circuit 43 initializes the variable K. The circuit 43 also updates the variable x to switch the block for which the cost value calculation is performed. The intra prediction estimation circuit 43 then proceeds to step SP27 and judges the variable x to determine whether the cost values have been calculated up to the last block 16 of the intra 4×4 mode. If a negative result is obtained, the intra prediction estimation circuit 43 returns to step SP23 to calculate the cost values of the next block.
Accordingly, in this case, the intra prediction estimation circuit 43 sequentially divides the cost values calculated in the intra 8×8 mode for the subsequent blocks 2, 3, ..., similarly to block 1, and calculates the cost values of the respective prediction modes. Further, when the cost values of the last block 16 have been calculated, the intra prediction estimation circuit 43 proceeds from step SP27 to step SP28. Accordingly, for each of the intra 4×4 mode blocks 1, 2, 5 and 6 corresponding to the first sub-macroblock i of the intra 8×8 mode, the intra prediction estimation circuit 43 sets a cost value that is 1/4 of the cost value Cost(i)[K] obtained for the first intra 8×8 mode sub-macroblock i. Likewise, for each of the intra 4×4 mode blocks 3, 4, 7 and 8 corresponding to the next sub-macroblock ii of the intra 8×8 mode, the intra prediction estimation circuit 43 sets a cost value that is 1/4 of the cost value Cost(ii)[K] obtained for the intra 8×8 mode sub-macroblock ii.
In step SP28, the intra prediction estimation circuit 43 detects the minimum cost value of each block from the cost values of modes 0 to 8 detected for each of the blocks 1 to 16. The circuit 43 further detects the prediction direction d(x) of the detected minimum value. As a result of the processing carried out in step SP28, the intra prediction estimation circuit 43 detects the optimum prediction mode of the intra 4×4 mode block by block.
Then, the intra prediction estimation circuit 43 proceeds to step SP29 and initializes the variable K. Further, in the subsequent step SP30, the circuit 43 carries out the computation based on the following formula: for the prediction mode of the intra 16×16 mode specified by the variable K, it adds the intra 8×8 mode cost values Cost(α)[K] of the corresponding prediction direction, so as to calculate the cost value Cost[K].
[formula 44]
Cost[K]=∑Cost(α)[K] ……(44)
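As an illustrative sketch of formula (44): the intra 16×16 cost for direction K is simply the sum of the intra 8×8 costs of the four sub-macroblocks i to iv in that direction. The function name and data layout are illustrative only:

```python
def cost_16x16(cost8x8, K):
    """Formula (44): Cost[K] = sum over alpha of Cost(alpha)[K].

    cost8x8[sub][K]: cost of intra 8x8 sub-macroblock sub in mode K.
    """
    return sum(cost8x8[sub][K] for sub in ("i", "ii", "iii", "iv"))
```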
Then, in step SP31, the intra prediction estimation circuit 43 updates the variable K to change the mode used for calculation. Further, in the subsequent step SP32, the circuit 43 judges this variable K to determine whether the cost values have been calculated up to the last mode of the intra 16×16 mode.
If a negative result is obtained in step SP32, the intra prediction estimation circuit 43 returns to step SP30 to calculate the cost value of the next mode, mode 1. When the cost value of mode 1 has been calculated, the circuit 43 updates the mode used for calculation in step SP31, then returns from step SP32 to step SP30 and calculates the cost value of mode 2. The circuit 43 similarly calculates the cost value of mode 3.
In this mode-3 cost calculation, the intra prediction estimation circuit 43 carries out the computation based on the following formula, adding the means of the cost value Cost(α)[3] of the third prediction direction (the mode-3 cost value) and the cost value Cost(α)[8] of the eighth prediction direction (the mode-8 cost value), both in the intra 8×8 mode, so as to calculate the intra 16×16 mode cost value Cost[3].
[formula 45]
Cost[3]=∑{(Cost(α)[3]+Cost(α)[8])/2} ……(45)
By using the computing of formula (45), infra-frame prediction estimating circuit 43 utilizes in the frame result of the pattern of the height correlation between 8 * 8 patterns in 16 * 16 patterns and frame to calculate in the frame the cost value Cost[3 of non-existent prediction direction in 8 * 8 patterns in frame in 16 * 16 patterns].
In step SP30, infra-frame prediction estimating circuit 43 computation schemas 3 cost value in the case, obtain definite results in step SP32 subsequently, and circuit 43 enters step SP33 from step SP32 thus.Infra-frame prediction estimating circuit 43 is from by detecting minimum value 16 * 16 pattern cost value in the frame of the detected pattern 0 to 3 of repeating step SP30, to detect the optimum prediction mode in 16 * 16 patterns in the frame.Infra-frame prediction estimating circuit 43 will be in this step SP33 in the detected frame optimum prediction mode in 16 * 16 patterns and in step SP28 the optimum prediction mode in the detected intra-frame 4 * 4 pattern be notified to intraframe prediction circuit 42, forward step SP34 then to, finish this processing procedure.
According to this embodiment, even when the intra 16×16 mode cost values are calculated by using the intra 8×8 mode cost values, effects similar to those of embodiment 1 can be obtained.
Further, effects similar to those of embodiment 1 can also be obtained if the processing on the larger prediction value generation unit side is utilized to detect the optimum prediction mode on the smaller prediction value generation unit side.
[Embodiment 3]
In this embodiment, the optimum prediction modes of the intra 4×4 mode and the intra 8×8 mode are detected by utilizing the processing that detects the optimum prediction mode in the intra 16×16 mode, rather than detecting the optimum prediction modes of the intra 4×4 mode and the intra 16×16 mode by utilizing the processing that detects the optimum prediction mode in the intra 8×8 mode. Specifically, the cost values of the intra 4×4 mode and the intra 8×8 mode are calculated by using the cost values calculated in the intra 16×16 mode. The configuration of the encoding device according to this embodiment is identical to that of the encoding device 41 according to embodiment 1; only the processing required for this cost value calculation differs. Accordingly, the following description refers to the configuration of Fig. 2.
In the encoding device, the intra prediction circuit 42 calculates the cost values of the respective prediction modes in the intra 16×16 mode and outputs the calculated cost values to the intra prediction estimating circuit 43.
The intra prediction estimating circuit 43 calculates the cost values of the respective prediction directions in the intra 4×4 mode and the intra 8×8 mode by using the cost values of the respective prediction modes in the intra 16×16 mode output from the intra prediction circuit 42. The circuit 43 then detects the minimum value from the calculated cost values, thereby detecting the optimum prediction modes in the intra 4×4 mode and the intra 8×8 mode, respectively. The intra prediction estimating circuit 43 notifies the intra prediction circuit 42 of the detected optimum prediction modes together with their cost values.
The intra prediction circuit 42 first compares the cost values of the respective prediction modes in the intra 16×16 mode to detect the optimum prediction mode in the intra 16×16 mode. From the optimum prediction mode in the intra 16×16 mode and the optimum prediction modes in the intra 4×4 mode and the intra 8×8 mode notified by the intra prediction estimating circuit 43, the intra prediction circuit 42 detects the optimum prediction mode to be used for intra prediction, calculates predicted values based on that optimum prediction mode, and outputs these predicted values to the subtraction circuit 4.
In contrast to Fig. 1, Fig. 4 is a flowchart showing the processing procedure in this intra prediction estimating circuit 43. When the cost values of the respective prediction modes in the intra 16×16 mode are input from the intra prediction circuit 42, the intra prediction estimating circuit 43 starts this processing procedure and proceeds from step SP41 to step SP42.
In step SP42, the intra prediction estimating circuit 43 initializes the variables x and K used for calculating the cost value Cost(x)[K] of the intra 4×4 mode. Specifically, the variable K is initialized to a value representing mode 0, and x is initialized to a value representing the starting block 1 (see Fig. 14).
Then, in step SP43, the intra prediction estimating circuit 43 performs the operation of the following formula, dividing the cost value Cost[K] of the corresponding prediction direction by the number of intra 4×4 mode blocks, so as to calculate the cost value Cost(x)[K] of the prediction mode in the intra 4×4 mode identified by the variable K.
[Formula 46]
Cost(x)[K]=Cost[K]/16 ……(46)
If there is a prediction direction that the JVT standard does not use, the cost value is calculated by setting the cost value Cost[K] to a value by which the mode of that prediction direction will not be selected, as in embodiment 1.
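Formula (46) amounts to spreading the macroblock-level cost evenly over the sixteen 4×4 blocks of the macroblock. The following is an illustrative Python sketch, not the patent's implementation; the dictionary layout and the `UNSELECTABLE` sentinel for directions the standard does not use are hypothetical assumptions:

```python
NUM_4X4_BLOCKS = 16  # intra-4x4 blocks per 16x16 macroblock
UNSELECTABLE = float('inf')  # hypothetical sentinel: mode never chosen

def cost4x4_from_16x16(cost16, mode_k):
    """Per formula (46): the cost value Cost(x)[K] of a 4x4 block for
    prediction mode K is the intra-16x16 cost Cost[K] divided by the
    number of 4x4 blocks in the macroblock; directions without a
    16x16 counterpart get a cost that prevents their selection."""
    c = cost16.get(mode_k, UNSELECTABLE)
    return c / NUM_4X4_BLOCKS if c != UNSELECTABLE else UNSELECTABLE

# Hypothetical intra-16x16 cost values for modes 0-3.
cost16 = {0: 320.0, 1: 480.0, 2: 160.0, 3: 640.0}
print(cost4x4_from_16x16(cost16, 2))  # 10.0
```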
Then, the intra prediction estimating circuit 43 proceeds to step SP44, updates the variable K, and changes the prediction mode for which the cost value is calculated. Further, in step SP45, the circuit 43 judges from the variable K whether cost values have been calculated up to the last prediction mode of the intra 16×16 mode. If a negative result is obtained, the intra prediction estimating circuit 43 returns to step SP43 to similarly calculate the cost value of the next prediction mode. As a result, the intra prediction estimating circuit 43 repeats the processing procedure of steps SP43-SP44-SP45-SP43, successively calculating the cost values of modes 0 to 3 of the starting block 1 in the intra 4×4 mode; when the cost values have been calculated up to mode 3, an affirmative result is obtained in step SP45 and the circuit 43 proceeds to step SP46.
In step SP46, the intra prediction estimating circuit 43 initializes the variable K. In addition, the circuit 43 also updates the variable x to switch the block for which cost values are calculated. The intra prediction estimating circuit 43 then proceeds to step SP47 and judges from the variable x whether cost values have been calculated up to the last block 16 in the intra 4×4 mode. If a negative result is obtained, the intra prediction estimating circuit 43 returns to step SP43 to calculate the cost values of the next block.
Accordingly, in this case, the intra prediction estimating circuit 43 repeats the processing of step SP43, dividing the cost values of modes 0 to 3 of the intra 16×16 mode by the number of intra 4×4 mode blocks contained in a single intra 16×16 mode macroblock, so as to calculate the cost values of modes 0 to 3 of the intra 4×4 mode. Further, after the cost values of the last block 16 have been calculated, the circuit 43 proceeds from step SP47 to step SP48.
In step SP48, the intra prediction estimating circuit 43 detects, for each of blocks 1 to 16, the minimum cost value from the cost values of modes 0 to 3 detected for that block. In addition, the circuit 43 also detects the prediction direction d(x) of the detected minimum value. As a result of step SP48, the intra prediction estimating circuit 43 detects the optimum prediction mode in the intra 4×4 mode block by block.
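The per-block minimum detection of step SP48 can be sketched as follows; this is an illustrative Python sketch, and the nested cost table is a hypothetical structure:

```python
def detect_optimum_4x4_modes(costs):
    """For each 4x4 block x, find the minimum cost value among its
    candidate modes and record the prediction direction d(x) that
    attains it, i.e. the block's optimum prediction mode."""
    d = {}
    for x, block_costs in costs.items():
        # key=block_costs.get picks the mode number with minimal cost
        d[x] = min(block_costs, key=block_costs.get)
    return d

# Two hypothetical blocks; block 1 is cheapest in mode 2, block 2 in mode 0.
costs = {1: {0: 12.0, 1: 9.0, 2: 4.0, 3: 11.0},
         2: {0: 3.0, 1: 8.0, 2: 6.0, 3: 7.0}}
print(detect_optimum_4x4_modes(costs))  # {1: 2, 2: 0}
```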
Then, the intra prediction estimating circuit 43 proceeds to step SP49 and initializes the variable K. In the subsequent step SP50, the circuit 43 divides the cost value of mode 0 of the intra 16×16 mode by the number of intra 8×8 mode sub-macroblocks contained in a single intra 16×16 macroblock, so as to calculate the cost value of mode 0 of the intra 8×8 mode.
Then, in step SP51, the intra prediction estimating circuit 43 updates the variable K to change the mode used for the calculation. In the subsequent step SP52, the circuit 43 judges from the variable K whether the cost value calculation has proceeded to the last mode of the intra 16×16 mode.
If a negative result is obtained in step SP52, the intra prediction estimating circuit 43 returns to step SP50 to calculate the cost value of mode 1 next. When the processing of step SP50 has been repeated up to the last mode of the intra 16×16 mode, an affirmative result is obtained in step SP52 and the intra prediction estimating circuit 43 proceeds from step SP52 to step SP53.
Then, in step SP53, the circuit 43 detects the optimum prediction mode of the intra 8×8 mode, notifies the intra prediction circuit 42 of the optimum prediction mode in the intra 8×8 mode and the optimum prediction mode in the intra 4×4 mode detected in step SP48, then proceeds to step SP54 and ends this processing procedure.
In the JVT standard, the prediction direction of mode 3 in the intra 8×8 mode and the intra 4×4 mode differs from the prediction direction of mode 3 in the intra 16×16 mode. Accordingly, as shown in the following formula, by assigning the cost value of mode 3 in the intra 16×16 mode not only to mode 3 but also to mode 8 of the intra 8×8 mode and the intra 4×4 mode, cost values for the differing prediction directions can be calculated from the mode-3 cost value in the intra 16×16 mode.
[Formula 47]
Cost(x)[8]=Cost[3]
Cost(α)[8]=Cost[3] ……(47)
Even when the cost values of the intra 8×8 mode and the intra 4×4 mode are calculated by using the cost values of the intra 16×16 mode as in this embodiment, effects similar to those of embodiment 1 can be obtained.
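The assignment of formula (47) can be sketched as follows (illustrative Python; the dictionary layout is a hypothetical assumption, and the single input value stands for the intra 16×16 mode-3 cost):

```python
def distribute_mode3(cost16_mode3_value, cost_small):
    """Per formula (47): because mode 3 of the intra-4x4/8x8 modes and
    mode 3 of the intra-16x16 mode point in different prediction
    directions, the single intra-16x16 mode-3 cost value is assigned
    to both mode 3 and mode 8 of the smaller block-size mode, so that
    costs for both directions are derived from the one available value."""
    cost_small[3] = cost16_mode3_value
    cost_small[8] = cost16_mode3_value
    return cost_small

print(distribute_mode3(96.0, {}))  # {3: 96.0, 8: 96.0}
```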
[Embodiment 4]
Meanwhile, in embodiment 1 described above, where the intra 8×8 mode cost values are calculated from the intra 4×4 mode cost values and the intra 8×8 mode optimum prediction mode is detected, if, for example, mode 0 is the optimum prediction mode in all four intra 4×4 mode blocks contained in a single intra 8×8 mode sub-macroblock, then mode 0 is also the optimum prediction mode in the intra 8×8 mode, and it is not necessary to calculate the intra 8×8 mode cost values.
Further, if, for example, mode 0 is the optimum prediction mode in three of the four intra 4×4 mode blocks contained in a single intra 8×8 mode sub-macroblock, the possibility that mode 0 is also the optimum prediction mode for the intra 8×8 mode is very high.
In addition, if, for example, mode 0 is the optimum prediction mode in two of the four intra 4×4 mode blocks contained in a single intra 8×8 mode sub-macroblock while, for example, mode 1 and mode 3 are the optimum prediction modes in the remaining two blocks, the possibility that mode 0 is the optimum prediction mode in the intra 8×8 mode is also very high.
Accordingly, in this embodiment, the optimum prediction modes detected for the smaller prediction value generation units are tallied, and the optimum prediction mode on the larger prediction value generation unit side is detected based on the so-called "majority rule". In addition, if no optimum prediction mode whose tally is not less than a predetermined value can be detected, the optimum prediction mode is detected by cost value calculation similar to embodiment 1 or embodiment 2 described above.
According to this embodiment, by tallying the optimum prediction modes detected on the smaller prediction value generation unit side and detecting the optimum prediction mode on the larger prediction value generation unit side based on the so-called "majority rule", the computational complexity can be further reduced.
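The majority-rule detection described in this embodiment can be sketched as follows (illustrative Python; the threshold value and the fall-back hook are assumptions, as the text fixes neither):

```python
from collections import Counter

def detect_8x8_mode_by_majority(modes_4x4, threshold=3, fallback=None):
    """Tally the optimum modes of the four intra-4x4 blocks in an
    8x8 sub-macroblock. If the most frequent mode reaches the
    threshold, adopt it as the intra-8x8 optimum mode without
    computing 8x8 cost values; otherwise fall back to cost-based
    detection as in embodiment 1 or 2."""
    mode, count = Counter(modes_4x4).most_common(1)[0]
    if count >= threshold:
        return mode
    return fallback() if fallback else None

# Mode 0 wins in three of the four 4x4 blocks -> adopted directly.
print(detect_8x8_mode_by_majority([0, 0, 0, 1]))  # 0
```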
[Embodiment 5]
Meanwhile, in embodiment 3 described above, where the intra 8×8 mode optimum prediction mode is detected by calculating the intra 8×8 mode cost values from the intra 16×16 mode cost values, if the intra 16×16 mode optimum prediction mode is mode 0, then mode 0 (identical to the optimum prediction mode in the intra 16×16 mode) becomes the optimum prediction mode in each of the four intra 8×8 mode sub-macroblocks contained in a single intra 16×16 mode macroblock.
Therefore, in this embodiment, the optimum prediction mode detected on the larger prediction value generation unit side is assigned as the optimum prediction mode on the smaller prediction value generation unit side.
According to this embodiment, by assigning the optimum prediction mode detected on the larger prediction value generation unit side as the optimum prediction mode on the smaller prediction value generation unit side, the computational complexity can be further reduced.
[Embodiment 6]
The embodiments described above concern cases in which the cost values or optimum prediction mode calculated in one of the intra 16×16 mode, the intra 8×8 mode, and the intra 4×4 mode are used to detect the cost values or optimum prediction modes of the remaining two modes. However, the present invention is not limited to these cases. It may also be configured such that cost values or optimum prediction modes are obtained for two of the three modes, and the cost values or optimum prediction mode of the remaining mode are obtained from them.
In addition, the embodiments described above concern cases in which the present invention is applied to an encoding device configured by hardware. However, the present invention is not limited to such cases and is widely applicable to encoding devices configured by software, based on computational processing executed by a computer or the like. In this case, the program may be provided by recording it on a recording medium such as an optical disc, a magnetic disk, or a memory card, or may be provided over a network such as the Internet.
In addition, the embodiments described above concern cases in which the present invention is applied to encoding processing conforming to the JVT encoding scheme. However, the present invention is not limited to such cases and is widely applicable to various encoding processes that perform intra prediction with prediction value generation units of different block sizes.
The present invention relates to an encoding device, an encoding method, a program of the encoding method, and a recording medium on which the program of the encoding method is recorded, and is applicable, for example, to encoding devices conforming to H.264/MPEG-4 Part 10.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
The present invention contains subject matter related to Japanese Patent Application No. 2007-184286 filed with the Japan Patent Office on July 13, 2007, the entire contents of which are hereby incorporated by reference.

Claims (13)

1. An encoding device for generating differential data in an intra prediction mode by subtracting predicted values from input image data, and performing encoding processing on the input image data by processing the differential data, the encoding device comprising:
a first prediction mode detection section for detecting, from a plurality of prediction modes of a first prediction value generation unit, a first optimum prediction mode most suitable for the encoding processing in the first prediction value generation unit;
a second prediction mode detection section for detecting, from a plurality of prediction modes of a second prediction value generation unit, a second optimum prediction mode most suitable for the encoding processing in the second prediction value generation unit, the second prediction value generation unit being a division of the first prediction value generation unit; and
a predicted value generation section for selecting, from the first optimum prediction mode and the second optimum prediction mode, an encoding-processing optimum prediction mode most suitable for the encoding processing, and generating predicted values according to the encoding-processing optimum prediction mode, wherein
the first prediction mode detection section detects the first optimum prediction mode by utilizing the processing in the second prediction mode detection section,
and wherein:
the first prediction mode detection section has a first cost value calculation section for calculating, for each of the plurality of prediction modes of the first prediction value generation unit, a cost value representing the amount of generated code, the first prediction mode detection section detecting the first optimum prediction mode by comparing the cost values calculated by the first cost value calculation section; and
the second prediction mode detection section has a second cost value calculation section for calculating, for each of the plurality of prediction modes of the second prediction value generation unit, a cost value representing the amount of generated code, the second prediction mode detection section detecting the second optimum prediction mode by comparing the cost values calculated by the second cost value calculation section; wherein
for each first prediction value generation unit, the first cost value calculation section calculates the cost value by adding together the cost values calculated by the second cost value calculation section for the corresponding prediction mode.
2. The encoding device according to claim 1, wherein:
for a prediction mode in the first prediction value generation unit based on a prediction direction that does not exist among the prediction modes in the second prediction value generation unit, the first cost value calculation section adds together the averages of the cost values of different prediction directions calculated by the second cost value calculation section.
3. The encoding device according to claim 1, wherein:
the first prediction mode detection section detects the first optimum prediction mode by tallying, for each first prediction value generation unit, the corresponding second optimum prediction modes in the second prediction value generation units.
4. The encoding device according to claim 1, wherein:
the first optimum prediction mode is the optimum prediction mode in the intra 16×16 mode, and
the second optimum prediction mode is the optimum prediction mode in the intra 4×4 mode or the optimum prediction mode in the intra 8×8 mode.
5. The encoding device according to claim 1, wherein:
the first optimum prediction mode is the optimum prediction mode in the intra 8×8 mode, and
the second optimum prediction mode is the optimum prediction mode in the intra 4×4 mode.
6. An encoding device for generating differential data in an intra prediction mode by subtracting predicted values from input image data, and performing encoding processing on the input image data by processing the differential data, the encoding device comprising:
a first prediction mode detection section for detecting, from a plurality of prediction modes of a first prediction value generation unit, a first optimum prediction mode most suitable for the encoding processing in the first prediction value generation unit;
a second prediction mode detection section for detecting, from a plurality of prediction modes of a second prediction value generation unit, a second optimum prediction mode most suitable for the encoding processing in the second prediction value generation unit, the second prediction value generation unit being a division of the first prediction value generation unit; and
a predicted value generation section for selecting, from the first optimum prediction mode and the second optimum prediction mode, an encoding-processing optimum prediction mode most suitable for the encoding processing, and generating predicted values according to the encoding-processing optimum prediction mode, wherein:
the second prediction mode detection section detects the second optimum prediction mode by utilizing the processing in the first prediction mode detection section.
7. The encoding device according to claim 6, wherein:
the first prediction mode detection section has a first cost value calculation section for calculating, for each of the plurality of prediction modes of the first prediction value generation unit, a cost value representing the amount of generated code, the first prediction mode detection section detecting the first optimum prediction mode by comparing the cost values calculated by the first cost value calculation section, and
the second prediction mode detection section has a second cost value calculation section for calculating, for each of the plurality of prediction modes of the second prediction value generation unit, a cost value representing the amount of generated code, the second prediction mode detection section detecting the second optimum prediction mode by comparing the cost values calculated by the second cost value calculation section, wherein
the second cost value calculation section calculates the cost values by distributing the cost values of the corresponding prediction modes calculated by the first cost value calculation section.
8. The encoding device according to claim 7, wherein:
for prediction modes in the second prediction value generation unit based on prediction directions that do not exist among the prediction modes in the first prediction value generation unit, the second cost value calculation section distributes the cost value of the corresponding prediction mode calculated by the first cost value calculation section to a plurality of prediction modes having different prediction directions.
9. The encoding device according to claim 6, wherein:
the second prediction mode detection section detects the prediction mode in the second prediction value generation unit that corresponds to the first optimum prediction mode, and sets it as the second optimum prediction mode.
10. The encoding device according to claim 6, wherein:
the first optimum prediction mode is the optimum prediction mode in the intra 8×8 mode or the optimum prediction mode in the intra 16×16 mode, and
the second optimum prediction mode is the optimum prediction mode in the intra 4×4 mode.
11. The encoding device according to claim 6, wherein:
the first optimum prediction mode is the optimum prediction mode in the intra 16×16 mode, and
the second optimum prediction mode is the optimum prediction mode in the intra 8×8 mode.
12. An encoding method for generating differential data in an intra prediction mode by subtracting predicted values from input image data, and performing encoding processing on the input image data by processing the differential data, the encoding method comprising:
a first prediction mode detection step of detecting, from a plurality of prediction modes of a first prediction value generation unit, a first optimum prediction mode most suitable for the encoding processing in the first prediction value generation unit;
a second prediction mode detection step of detecting, from a plurality of prediction modes of a second prediction value generation unit, a second optimum prediction mode most suitable for the encoding processing in the second prediction value generation unit, the second prediction value generation unit being a division of the first prediction value generation unit; and
a predicted value generation step of selecting, from the first optimum prediction mode and the second optimum prediction mode, an encoding-processing optimum prediction mode most suitable for the encoding processing, and generating predicted values according to the encoding-processing optimum prediction mode, wherein
the first prediction mode detection step detects the first optimum prediction mode by utilizing the processing in the second prediction mode detection step, and wherein:
the first prediction mode detection step includes a first cost value calculation substep of calculating, for each of the plurality of prediction modes of the first prediction value generation unit, a cost value representing the amount of generated code, the first prediction mode detection step detecting the first optimum prediction mode by comparing the cost values calculated in the first cost value calculation substep; and
the second prediction mode detection step includes a second cost value calculation substep of calculating, for each of the plurality of prediction modes of the second prediction value generation unit, a cost value representing the amount of generated code, the second prediction mode detection step detecting the second optimum prediction mode by comparing the cost values calculated in the second cost value calculation substep; wherein
for each first prediction value generation unit, the first cost value calculation substep calculates the cost value by adding together the cost values calculated in the second cost value calculation substep for the corresponding prediction mode.
13. An encoding method for generating differential data in an intra prediction mode by subtracting predicted values from input image data, and performing encoding processing on the input image data by processing the differential data, comprising:
a first prediction mode detection step of detecting, from a plurality of prediction modes of a first prediction value generation unit, a first optimum prediction mode most suitable for the encoding processing in the first prediction value generation unit;
a second prediction mode detection step of detecting, from a plurality of prediction modes of a second prediction value generation unit, a second optimum prediction mode most suitable for the encoding processing in the second prediction value generation unit, the second prediction value generation unit being a division of the first prediction value generation unit; and
a predicted value generation step of selecting, from the first optimum prediction mode and the second optimum prediction mode, an encoding-processing optimum prediction mode most suitable for the encoding processing, and generating predicted values according to the encoding-processing optimum prediction mode, wherein
the second prediction mode detection step detects the second optimum prediction mode by utilizing the processing in the first prediction mode detection step.
CN2008101268873A 2007-07-13 2008-07-10 In-frame predication encoding equipment and method in video coding Expired - Fee Related CN101345876B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2007184286A JP4650461B2 (en) 2007-07-13 2007-07-13 Encoding device, encoding method, program, and recording medium
JP2007184286 2007-07-13
JP2007-184286 2007-07-13

Publications (2)

Publication Number Publication Date
CN101345876A CN101345876A (en) 2009-01-14
CN101345876B true CN101345876B (en) 2010-11-10

Family

ID=40247745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101268873A Expired - Fee Related CN101345876B (en) 2007-07-13 2008-07-10 In-frame predication encoding equipment and method in video coding

Country Status (3)

Country Link
US (1) US8213505B2 (en)
JP (1) JP4650461B2 (en)
CN (1) CN101345876B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104519352A (en) * 2014-12-17 2015-04-15 北京中星微电子有限公司 Method and device for judging optimum prediction mode

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102239694A (en) * 2008-12-08 2011-11-09 夏普株式会社 Image encoder and image decoder
CA2751802A1 (en) 2009-02-13 2010-08-19 Research In Motion Limited In-loop deblocking for intra-coded images or frames
JP5158003B2 (en) * 2009-04-14 2013-03-06 ソニー株式会社 Image coding apparatus, image coding method, and computer program
JP5219089B2 (en) 2009-04-30 2013-06-26 株式会社メガチップス Image data generation method
MX337875B (en) * 2010-12-17 2016-03-22 Mitsubishi Electric Corp Moving image encoding device, moving image decoding device, moving image encoding method and moving image decoding method.
US9049455B2 (en) * 2010-12-28 2015-06-02 Panasonic Intellectual Property Corporation Of America Image coding method of coding a current picture with prediction using one or both of a first reference picture list including a first current reference picture for a current block and a second reference picture list including a second current reference picture for the current block
KR101643121B1 (en) * 2011-01-12 2016-07-26 미쓰비시덴키 가부시키가이샤 Image encoding device, image decoding device, image encoding method, image decoding method and recording medium
EP2664147B1 (en) 2011-01-13 2020-03-11 Canon Kabushiki Kaisha Image coding apparatus, image coding method, and program, and image decoding apparatus, image decoding method, and program
US9654785B2 (en) 2011-06-09 2017-05-16 Qualcomm Incorporated Enhanced intra-prediction mode signaling for video coding using neighboring mode
BR112013033697A2 (en) * 2011-07-01 2017-07-11 Samsung Electronics Co Ltd intra prediction video encoding method using unified referencing verification process, video decoding method and your device
CN107197309B (en) * 2011-10-07 2020-02-18 英迪股份有限公司 Method for decoding video signal
JP5798539B2 (en) * 2012-09-24 2015-10-21 株式会社Nttドコモ Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive decoding apparatus, and moving picture predictive decoding method
CN107079153B (en) * 2014-08-25 2020-05-15 英特尔公司 Encoding method, apparatus, system and storage medium
WO2016072611A1 (en) * 2014-11-04 2016-05-12 삼성전자 주식회사 Method and device for encoding/decoding video using intra prediction
WO2016125604A1 (en) 2015-02-06 2016-08-11 ソニー株式会社 Image encoding device and method
US10893276B2 (en) 2015-03-04 2021-01-12 Sony Corporation Image encoding device and method
KR20170089778A (en) * 2016-01-27 2017-08-04 한국전자통신연구원 Method and apparatus for encoding and decoding video using prediction
CN106911934B (en) * 2017-03-01 2020-03-03 北京奇艺世纪科技有限公司 Blocking effect removing method and deblocking filter
CN111178099B (en) * 2018-11-28 2023-03-10 腾讯科技(深圳)有限公司 Text translation method and related device
CN110881125B (en) * 2019-12-10 2021-10-29 中国科学院深圳先进技术研究院 Intra-frame prediction method, video coding method, video decoding method and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1642285A (en) * 2004-01-13 2005-07-20 索尼株式会社 Image encoding method and apparatus and image encoding program
CN1719906A (en) * 2005-06-14 2006-01-11 Beijing Vimicro Corporation Mirror processing method for YUY2 image
JP2007116351A (en) * 2005-10-19 2007-05-10 NTT Docomo, Inc. Image prediction coding apparatus, image prediction decoding apparatus, image prediction coding method, image prediction decoding method, image prediction coding program, and image prediction decoding program
EP1802125A1 (en) * 2004-09-28 2007-06-27 Sony Corporation Encoder, encoding method, program of encoding method and recording medium wherein program of encoding method is recorded

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6519005B2 (en) * 1999-04-30 2003-02-11 Koninklijke Philips Electronics N.V. Method of concurrent multiple-mode motion estimation for digital video
KR100750110B1 (en) * 2003-04-22 2007-08-17 Samsung Electronics Co., Ltd. 4x4 intra luma prediction mode determining method and apparatus
JP2005151017A (en) 2003-11-13 2005-06-09 Sharp Corp Image coding apparatus
JP4571069B2 (en) * 2005-11-28 2010-10-27 Mitsubishi Electric Corporation Video encoding device
JP4250638B2 (en) * 2006-06-30 2009-04-08 Toshiba Corporation Video encoding apparatus and method
US8416857B2 (en) * 2007-03-29 2013-04-09 James Au Parallel or pipelined macroblock processing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wang Peng et al., "A Fast Intra-Frame Coding Algorithm Based on the Prewitt Operator", Microcomputer Information, 2006, 22(7-1), 192-194. *
Xiao Guang et al., "A Fast Intra Prediction Mode Selection Algorithm Based on H.264/AVC", Journal of Image and Graphics, 2005, 10(11), 1375-1378. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104519352A (en) * 2014-12-17 2015-04-15 Beijing Vimicro Corporation Method and device for judging optimum prediction mode

Also Published As

Publication number Publication date
JP2009021927A (en) 2009-01-29
US8213505B2 (en) 2012-07-03
JP4650461B2 (en) 2011-03-16
US20090097556A1 (en) 2009-04-16
CN101345876A (en) 2009-01-14

Similar Documents

Publication Publication Date Title
CN101345876B (en) In-frame prediction encoding equipment and method in video coding
US11076175B2 (en) Video encoding method for encoding division block, video decoding method for decoding division block, and recording medium for implementing the same
KR100994524B1 (en) Image information encoding device and method, and image information decoding device and method
US7792193B2 (en) Image encoding/decoding method and apparatus therefor
CN100417229C (en) Coding apparatus, coding method, coding method program, and recording medium recording the coding method program
CN100586187C (en) Method and apparatus for image intra prediction encoding/decoding
CN1253008C (en) Spatial scalable compression
KR100739714B1 (en) Method and apparatus for intra prediction mode decision
US20140044367A1 (en) Apparatus and method for coding/decoding image selectively using discrete cosine/sine transform
US20050114093A1 (en) Method and apparatus for motion estimation using variable block size of hierarchy structure
KR20070007295A (en) Video encoding method and apparatus
JP5194119B2 (en) Image processing method and corresponding electronic device
CN104041035A (en) Lossless Coding and Associated Signaling Methods for Compound Video
CN103782598A (en) Fast encoding method for lossless coding
CN101273638A (en) Image processing device, image processing method, and program
JP2004241957A (en) Image processor and encoding device, and methods therefor
CN101742323B (en) Method and device for coding and decoding re-loss-free video
KR100359819B1 (en) An Efficient Edge Prediction Method In Spatial Domain Of Video Coding
KR20060085003A (en) A temporal error concealment method in the H.264/AVC standard
CN1722847B (en) Digital signal conversion method and digital signal conversion device
JP4100067B2 (en) Image information conversion method and image information conversion apparatus
CN112153385B (en) Encoding processing method, device, equipment and storage medium
JP5375935B2 (en) Encoding apparatus and method
CA2895857A1 (en) Video encoding and decoding apparatus and method using quantization in sub-blocks
Lee et al. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20101110

Termination date: 20150710

EXPY Termination of patent right or utility model