CN104662907A - Image processing device and method


Info

Publication number: CN104662907A
Authority: CN (China)
Prior art keywords: image, predicted, block, vector, picture
Legal status: Granted
Application number: CN201380049112.XA
Other languages: Chinese (zh)
Other versions: CN104662907B (en)
Inventor: 高桥良知 (Yoshitomo Takahashi)
Current Assignee: Sony Corp
Original Assignee: Sony Corp
Application filed by Sony Corp
Publication of CN104662907A
Application granted
Publication of CN104662907B
Legal status: Expired - Fee Related


Classifications

    • H04N19/513 Processing of motion vectors
    • H04N19/52 Processing of motion vectors by predictive encoding
    • H04N19/53 Multi-resolution motion estimation; hierarchical motion estimation
    • H04N19/58 Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one
    • H04N19/172 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/30 Coding using hierarchical techniques, e.g. scalability
    • H04N19/597 Predictive coding specially adapted for multi-view video sequence encoding

Abstract

The present disclosure relates to an image processing device and method that make it possible to improve the coding efficiency of encoding or decoding a motion vector in a multi-view image. When the Ref POC (Ref 0) of a current PU and the Ref POC (Ref 0) of a reference PU in a different view differ from each other, the motion vector of the reference PU is scaled and used as a candidate for the prediction vector of the current PU. That is, the prediction vector (PMV L0) of the current PU and the motion vector (MV L0) of the reference PU differ from each other in Ref POC. Therefore, the motion vector (MV L0) of the reference PU is scaled according to the Ref POCs, and the scaled motion vector (MV L0) is used as a candidate for the prediction vector of the current PU. The present disclosure is applicable to, for example, image processing devices.

Description

Image processing device and method
Technical field
The present disclosure relates to an image processing device and method, and more particularly to an image processing device and method configured to improve the coding efficiency of encoding or decoding a motion vector (MV) in a multi-view image.
Background Art
In recent years, devices that handle image information digitally and, in doing so, compress and encode images have become widespread; for efficient transmission and accumulation of information, such devices use encoding schemes that exploit redundancy peculiar to image information and perform compression through an orthogonal transform such as the discrete cosine transform and through motion compensation. Moving Picture Experts Group (MPEG), H.264, and MPEG-4 Part 10 (Advanced Video Coding) (hereinafter referred to as H.264/AVC) are examples of such encoding schemes.
Accordingly, for the purpose of improving coding efficiency over H.264/AVC, standardization of an encoding scheme called High Efficiency Video Coding (HEVC) is currently under way by the Joint Collaborative Team on Video Coding (JCT-VC), a joint standardization body of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
In the current draft of HEVC, a scheme that improves the coding performance of non-base views by modifying the coding unit (CU) level, as a three-dimensional (3D) extension, is under study (Non-Patent Literature 1).
As one tool of this scheme, there is inter-view motion prediction (IVMP), in which a coded vector of a different view serves as a candidate prediction vector for a non-base view.
Citation List
Non-Patent Literature
Non-Patent Literature 1: Gerhard Tech, Krzysztof Wegner, Ying Chen, Sehoon Yea, "3D-HEVC Test Model Description draft 1", JCT3V-A1005_d0, Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 1st Meeting: Stockholm, SE, 16-20 July 2012.
Summary of the invention
However, in IVMP, the MV of a reference PU in a view different from the current view can be set as a candidate prediction vector of the current prediction unit (PU) only when the reference picture order count (Ref POC) of the MV of the reference PU is identical to the Ref POC of the MV of the current PU.
The present disclosure has been made in view of such circumstances, and makes it possible to improve the coding efficiency of encoding or decoding an MV in a non-base view.
An image processing device according to a first aspect of the present disclosure includes: a prediction vector generating section configured to generate a prediction vector used for encoding the MV of a current block by scaling the MV of a reference block according to a reference destination of the current block and a reference destination of the reference block, the reference block being a block located in an image of a different view at a position shifted from the position of the current block by a disparity obtained from the periphery of the current block in an image of a non-base view; an MV encoding section configured to encode the MV of the current block using the prediction vector generated by the prediction vector generating section; and an encoding section configured to generate an encoded stream by encoding the image in units having a hierarchical structure.
The prediction vector generating section may generate the prediction vector by scaling the MV of the reference block according to the reference picture order count (POC) of the current block and the reference POC of the reference block, and adopting the scaled MV as a candidate prediction vector.
A transmitting section configured to transmit the MV of the current block encoded by the MV encoding section and the encoded stream generated by the encoding section may further be included.
An image processing method according to the first aspect of the present disclosure includes: generating, by an image processing device, a prediction vector used for encoding the MV of a current block by scaling the MV of a reference block according to a reference destination of the current block and a reference destination of the reference block, the reference block being a block located in an image of a different view at a position shifted from the position of the current block by a disparity obtained from the periphery of the current block in an image of a non-base view; encoding, by the image processing device, the MV of the current block using the generated prediction vector; and generating, by the image processing device, an encoded stream by encoding the image in units having a hierarchical structure.
An image processing device according to a second aspect of the present disclosure includes: a prediction vector generating section configured to generate a prediction vector used for decoding the MV of a current block by scaling the MV of a reference block according to a reference destination of the current block and a reference destination of the reference block, the reference block being a block located in an image of a different view at a position shifted from the position of the current block by a disparity obtained from the periphery of the current block in an image of a non-base view; an MV decoding section configured to decode the MV of the current block using the prediction vector generated by the prediction vector generating section; and a decoding section configured to generate an image by decoding an encoded stream encoded in units having a hierarchical structure.
The prediction vector generating section may generate the prediction vector by scaling the MV of the reference block according to the reference POC of the current block and the reference POC of the reference block, and adopting the scaled MV as a candidate prediction vector.
A receiving section configured to receive the encoded stream and the encoded MV of the current block may further be included.
An image processing method according to the second aspect of the present disclosure includes: generating, by an image processing device, a prediction vector used for decoding the MV of a current block by scaling the MV of a reference block according to a reference destination of the current block and a reference destination of the reference block, the reference block being a block located in an image of a different view at a position shifted from the position of the current block by a disparity obtained from the periphery of the current block in an image of a non-base view; decoding, by the image processing device, the MV of the current block using the generated prediction vector; and generating, by the image processing device, an image by decoding an encoded stream encoded in units having a hierarchical structure.
In the first aspect of the present disclosure, a prediction vector used for encoding the MV of a current block is generated by scaling the MV of a reference block according to a reference destination of the current block and a reference destination of the reference block, the reference block being a block located in an image of a different view at a position shifted from the position of the current block by a disparity obtained from the periphery of the current block in an image of a non-base view. Then, the MV of the current block is encoded using the generated prediction vector, and an encoded stream is generated by encoding the image in units having a hierarchical structure.
In the second aspect of the present disclosure, a prediction vector used for decoding the MV of a current block is generated by scaling the MV of a reference block according to a reference destination of the current block and a reference destination of the reference block, the reference block being a block located in an image of a different view at a position shifted from the position of the current block by a disparity obtained from the periphery of the current block in an image of a non-base view. Then, the MV of the current block is decoded using the generated prediction vector, and an image is generated by decoding an encoded stream encoded in units having a hierarchical structure.
In addition, each of the image processing devices described above may be an independent device, or may be an internal block constituting a single image encoding device or image decoding device.
According to the first aspect of the present disclosure, an image can be encoded. In particular, the coding efficiency of encoding or decoding an MV in a multi-view image can be improved.
According to the second aspect of the present disclosure, an image can be decoded. In particular, the coding efficiency of encoding or decoding an MV in a multi-view image can be improved.
Brief Description of Drawings
Fig. 1 is a diagram illustrating IVMP as a conventional technique.
Fig. 2 is a diagram illustrating IVMP as a conventional technique.
Fig. 3 is a diagram illustrating an overview of the present technique.
Fig. 4 is a block diagram illustrating a main configuration example of an encoder constituting a multi-view image encoding device to which the present technique is applied.
Fig. 5 is a block diagram illustrating a configuration example of a motion prediction/compensation section.
Fig. 6 is a block diagram illustrating a configuration example of an advanced MV prediction (AMVP) mode vector prediction section.
Fig. 7 is a block diagram illustrating a configuration example of a prediction vector generating section.
Fig. 8 is a flowchart illustrating an example of the flow of encoding processing.
Fig. 9 is a flowchart illustrating motion prediction/compensation processing.
Fig. 10 is a flowchart illustrating vector prediction processing in the AMVP mode.
Fig. 11 is a flowchart illustrating processing of generating a non-spatial prediction vector.
Fig. 12 is a flowchart illustrating processing of generating a prediction vector L0.
Fig. 13 is a flowchart illustrating processing of generating a prediction vector L1.
Fig. 14 is a block diagram illustrating a main configuration example of a decoder constituting a multi-view image decoding device to which the present technique is applied.
Fig. 15 is a block diagram illustrating a configuration example of a motion compensation section.
Fig. 16 is a block diagram illustrating a configuration example of an AMVP mode vector prediction section.
Fig. 17 is a block diagram illustrating a configuration example of a prediction vector generating section.
Fig. 18 is a flowchart illustrating an example of the flow of decoding processing.
Fig. 19 is a flowchart illustrating motion compensation processing.
Fig. 20 is a flowchart illustrating vector prediction processing in the AMVP mode.
Fig. 21 is a flowchart illustrating processing of generating a non-spatial prediction vector.
Fig. 22 is a block diagram illustrating a main configuration example of a computer.
Fig. 23 is a block diagram illustrating an example of a schematic configuration of a television set.
Fig. 24 is a block diagram illustrating an example of a schematic configuration of a mobile phone.
Fig. 25 is a block diagram illustrating an example of a schematic configuration of a recording/reproducing device.
Fig. 26 is a block diagram illustrating an example of a schematic configuration of an imaging device.
Fig. 27 is a block diagram illustrating an example of a scalable video coding application.
Fig. 28 is a block diagram illustrating another example of a scalable video coding application.
Fig. 29 is a block diagram illustrating still another example of a scalable video coding application.
Fig. 30 is a block diagram illustrating an example of a schematic configuration of a video set.
Fig. 31 is a block diagram illustrating an example of a schematic configuration of a video processor.
Fig. 32 is a block diagram illustrating another example of a schematic configuration of a video processor.
Fig. 33 is an explanatory diagram illustrating the configuration of a content reproduction system.
Fig. 34 is an explanatory diagram illustrating the flow of data in the content reproduction system.
Fig. 35 is an explanatory diagram illustrating a concrete example of a media presentation description (MPD).
Fig. 36 is a functional block diagram illustrating the configuration of a content server of the content reproduction system.
Fig. 37 is a functional block diagram illustrating the configuration of a content reproducing device of the content reproduction system.
Fig. 38 is a functional block diagram illustrating the configuration of a content server of the content reproduction system.
Fig. 39 is a sequence chart illustrating a communication processing example of devices of a wireless communication system.
Fig. 40 is a sequence chart illustrating a communication processing example of devices of the wireless communication system.
Fig. 41 is a diagram schematically illustrating a configuration example of a frame format of a frame transmitted and received in the communication processing of the devices of the wireless communication system.
Fig. 42 is a sequence chart illustrating a communication processing example of devices of the wireless communication system.
Embodiments
Modes for carrying out the present disclosure (hereinafter referred to as embodiments) will be described below. The description will be given in the following order.
1. Overview of conventional technique and present technique
2. First embodiment (multi-view image encoding device)
3. Second embodiment (multi-view image decoding device)
4. Third embodiment (computer)
5. Example applications
6. Example applications of scalable coding
7. Sixth embodiment (set/unit/module/processor)
8. Example application of a content reproduction system based on MPEG-Dynamic Adaptive Streaming over HTTP (Hypertext Transfer Protocol) (DASH)
9. Example application of a wireless communication system of the Wireless Fidelity (Wi-Fi) standard
<1. Overview of Conventional Technique and Present Technique>
[Description of conventional technique]
As one scheme for improving the coding performance of a non-base view, there is IVMP, in which a coded vector of a different view serves as a candidate prediction vector for the non-base view.
IVMP will be described below with reference to Fig. 1. In the example of Fig. 1, the vertical axis represents the view: view V0 represents the base view and view V1 represents a non-base view. The horizontal axis represents times T1 to T4.
The base view V0 has already been encoded. Then, motion prediction and compensation are performed on the current PU (Curr PU) of the image at time T3 of the non-base view V1. At this time, in the same view V1, the image at time T1 has Ref 1 (Ref POC = 1), the image at time T2 has Ref 0 (Ref POC = 0), and the image at time T4 has Ref 0 (Ref POC = 0).
Among the MVs of the current PU thus obtained, the MV in the L0 direction indicates the image of Ref 0 (Ref POC = 0) at time T2, and the MV in the L1 direction indicates the image of Ref 0 (Ref POC = 0) at time T4.
In IVMP, in addition to the MVs serving as candidates in conventional AMVP, an MV encoded in the base view can be added as a candidate for the prediction vector obtained when the MV of the current PU is encoded.
That is, since there is a correlation between the motion in the base view V0 and the motion in the non-base view V1, the MVs MV_L0 and MV_L1 of the reference PU (Cor PU) in the base view V0 at the same time as the current PU can each serve as a candidate for the prediction vector in the non-base view V1. Here, the reference PU in the base view V0 is the PU at a position shifted, by a disparity vector found from the MVs of the adjacent PUs around (neighboring) the current PU, from the position in the base-view image identical to the position of the current PU in the non-base-view image.
However, as illustrated in Fig. 1, this holds only in the case where times T2 and T4 of the images referred to by the MVs MV_L0 and MV_L1 of the reference PU in the base view V0 are identical to times T2 and T4 of the images referred to by the MVs of the current PU in the non-base view V1.
That is, only when the Ref POC (Ref 0) of the current PU is identical to the Ref POC (Ref 0) of the reference PU can the MV of the reference PU be designated as a candidate prediction MV of the current PU.
Accordingly, as illustrated in Fig. 2, consider a case where RefIdx L0 and RefIdx L1 of the MVs MV_L0 and MV_L1 of the reference PU at time T3 in the base view V0 are both 0.
In this case, when RefIdx L0 of the MV of the current PU at time T3 of the non-base view V1 is 1 and RefIdx L1 is 0, the Ref POC of the prediction vector PMV_L1 of the current PU is identical to the Ref POC of the MV MV_L1 of the reference PU. Therefore, the MV MV_L1 of the reference PU at time T3 in the base view V0 can be used as a candidate prediction vector of the current PU.
However, since the Ref POC of the prediction vector PMV_L0 of the current PU differs from the Ref POC of the MV MV_L0 of the reference PU, the MV MV_L0 of the reference PU is unavailable (false), and the MV MV_L0 of the reference PU is not designated as a prediction vector. That is, although a correlation exists between the base view and the non-base view as described above, a prediction vector with high correlation is difficult to generate, and the coding efficiency therefore decreases.
Accordingly, in the present technique, when the Ref POC (Ref 0) of the current PU differs from the Ref POC (Ref 0) of the reference PU in the different view, the MV of the reference PU is scaled, and the scaled MV serves as a candidate prediction vector of the current PU.
For example, in the example of Fig. 3, as in the example of Fig. 2, the Ref POC of the prediction vector PMV_L0 of the current PU differs from the Ref POC of the MV MV_L0 of the reference PU. Therefore, in the present technique, the MV MV_L0 of the reference PU is scaled according to the reference destinations of the current PU and the reference PU. That is, the MV MV_L0 of the reference PU is scaled according to the distance between the Ref POCs of the current PU and the reference PU, and the scaled MV_L0 is used as a candidate prediction vector of the current PU.
Thus, since a prediction vector having a high correlation can be generated, the coding efficiency of the MV can be improved.
That is, since camera characteristics differ slightly between views, the pictures to be referred to may differ even for MVs of the same object. In such a case, an MV having a high correlation can be scaled and used rather than being made unavailable, so the effect of improving the coding efficiency is significant.
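As an illustration of the scaling step, the following is a minimal C++ sketch of POC-distance-based MV scaling, modeled on the fixed-point scaling used for temporal MV prediction in HEVC; the structure and function names are illustrative and not taken from the patent.

```cpp
#include <algorithm>
#include <cstdlib>

struct MotionVector { int x; int y; };

static int clip3(int lo, int hi, int v) { return std::min(hi, std::max(lo, v)); }

// Scales one MV component by the ratio of POC distances tb/td using the
// fixed-point arithmetic of HEVC-style temporal MV scaling: tx approximates
// 1/td in Q14, distScaleFactor approximates tb/td in Q8.
static int scaleComponent(int mv, int tb, int td) {
    int tx = (16384 + (std::abs(td) >> 1)) / td;
    int distScaleFactor = clip3(-4096, 4095, (tb * tx + 32) >> 6);
    int scaled = distScaleFactor * mv;
    return clip3(-32768, 32767,
                 (scaled < 0 ? -1 : 1) * ((std::abs(scaled) + 127) >> 8));
}

// IVMP candidate: tb is the POC distance from the current picture to the
// current PU's reference picture; td is the POC distance from the reference
// PU's picture (in the other view) to its reference picture. When the two
// distances match, the MV is used as-is, as in conventional IVMP; otherwise
// it is scaled instead of being marked unavailable.
MotionVector scaleIvmpCandidate(MotionVector refMv,
                                int curPoc, int curRefPoc,
                                int refViewPoc, int refViewRefPoc) {
    int tb = clip3(-128, 127, curPoc - curRefPoc);
    int td = clip3(-128, 127, refViewPoc - refViewRefPoc);
    if (td == 0 || tb == td)
        return refMv;
    return { scaleComponent(refMv.x, tb, td), scaleComponent(refMv.y, tb, td) };
}
```

Under these assumptions, the same routine also covers the TMVP case described later, since that scaling likewise depends only on the two POC distances.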
<2. First Embodiment>
[Configuration example of multi-view image encoding device]
Fig. 4 illustrates the configuration of an embodiment of an encoder constituting a multi-view image encoding device serving as an image processing device to which the present disclosure is applied.
The multi-view image encoding device includes, for example, encoders 11-1 to 11-M for encoding multi-view images.
The encoder 11-1 encodes images of a multi-view image, such as captured images, in the HEVC scheme. For example, a color image of a non-base view in units of frames is input to the encoder 11-1 as an input image, and the encoder 11-1 encodes the color image of the non-base view.
Encoders 11-M and 11-N for encoding, in units of frames, color images of other views (including the base view), for example, are configured similarly to the encoder 11-1. Further, when there is also an encoder for encoding chrominance frames in addition to color images, that encoder is configured similarly to the encoder 11-1.
The encoder 11-1 is configured to include an analog/digital (A/D) conversion section 21, a picture reordering buffer 22, a calculating section 23, an orthogonal transform section 24, a quantization section 25, a lossless encoding section 26, an accumulation buffer 27, an inverse quantization section 28, an inverse orthogonal transform section 29, and a calculating section 30. The encoder 11-1 is further configured to include a loop filter 31, a decoded picture buffer (DPB) 32-1, an intra-picture prediction section 33, a motion prediction/compensation section 34, a predicted image selecting section 35, and an MV memory 36-1.
Pictures of the color image of the non-base view, which is the image to be encoded (a moving image), are sequentially supplied to the A/D conversion section 21.
When a picture supplied to the A/D conversion section 21 is an analog signal, the A/D conversion section 21 converts the analog signal by A/D conversion and supplies the converted picture to the picture reordering buffer 22.
The encoding order is supplied to the picture reordering buffer 22 as encoding information from, for example, a preceding syntax encoding section (not shown). The picture reordering buffer 22 temporarily stores the pictures from the A/D conversion section 21 and reads out the pictures according to the structure of the group of pictures (GOP) indicated by the supplied encoding order, thereby performing processing of rearranging the picture sequence from display order into encoding order (decoding order).
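As a simple illustration of this rearrangement, the following C++ sketch sorts buffered pictures by the coding order a GOP structure assigns to them; the structure and field names are assumptions of the example.

```cpp
#include <algorithm>
#include <vector>

// A buffered picture: the order in which it arrived (display order) and the
// order in which the GOP structure says to encode it (both illustrative).
struct QueuedPicture {
    int displayIdx;
    int codingIdx;
};

// Rearranges one GOP's worth of pictures from display order into
// encoding (decoding) order, as the picture reordering buffer 22 does.
void reorderForCoding(std::vector<QueuedPicture>& gop) {
    std::sort(gop.begin(), gop.end(),
              [](const QueuedPicture& a, const QueuedPicture& b) {
                  return a.codingIdx < b.codingIdx;
              });
}
```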
The pictures read from the picture reordering buffer 22 are supplied to the calculating section 23, the intra-picture prediction section 33, and the motion prediction/compensation section 34.
In addition to the pictures from the picture reordering buffer 22, the predicted image generated by the intra-picture prediction section 33 or the motion prediction/compensation section 34 is also supplied to the calculating section 23 from the predicted image selecting section 35.
The calculating section 23 designates a picture read from the picture reordering buffer 22 as the target picture, which is the picture to be encoded, and sequentially designates the macroblocks (largest coding units (LCUs)) constituting the target picture as target blocks to be encoded.
Then, after calculating a subtraction value, if necessary, by subtracting the pixel values of the predicted image supplied from the predicted image selecting section 35 from the pixel values of the target block, the calculating section 23 performs predictive encoding and supplies the predictive encoding result to the orthogonal transform section 24.
The orthogonal transform section 24 performs an orthogonal transform, such as a discrete cosine transform or a Karhunen-Loève transform, on the target block (its pixel values from the calculating section 23, or the residual obtained by subtracting the predicted image) in units of TUs, and supplies the resulting transform coefficients to the quantization section 25.
The quantization section 25 quantizes the transform coefficients supplied from the orthogonal transform section 24 and supplies the resulting quantized values to the lossless encoding section 26.
The lossless encoding section 26 performs lossless encoding, such as variable-length coding (for example, context-adaptive variable-length coding (CAVLC)) or arithmetic coding (for example, context-adaptive binary arithmetic coding (CABAC)), on the quantized values from the quantization section 25 and supplies the resulting encoded data to the accumulation buffer 27.
In addition to the quantized values from the quantization section 25, header information to be included in the header of the encoded data is also supplied to the lossless encoding section 26 from the intra-picture prediction section 33 or the motion prediction/compensation section 34.
The lossless encoding section 26 encodes the header information from the intra-picture prediction section 33 or the motion prediction/compensation section 34 and includes the encoded header information in the header of the encoded data.
The accumulation buffer 27 temporarily stores the encoded data from the lossless encoding section 26 and outputs the stored encoded data at a predetermined data rate. The accumulation buffer 27 also functions as a transmitting section.
The encoded data output from the accumulation buffer 27 is multiplexed with encoded data of other views encoded by the other encoders 11-M and the like, and the multiplexed encoded data is transmitted to a multi-view image decoding device described later.
The quantized values obtained by the quantization section 25 are supplied not only to the lossless encoding section 26 but also to the inverse quantization section 28, and local decoding is performed in the inverse quantization section 28, the inverse orthogonal transform section 29, and the calculating section 30.
That is, the inverse quantization section 28 inversely quantizes the quantized values from the quantization section 25 into transform coefficients and supplies the transform coefficients to the inverse orthogonal transform section 29.
The inverse orthogonal transform section 29 performs an inverse orthogonal transform on the transform coefficients from the inverse quantization section 28 and supplies the transformed data to the calculating section 30.
The calculating section 30 obtains a decoded image in which the target block has been decoded (locally decoded) by adding, if necessary, the pixel values of the predicted image supplied from the predicted image selecting section 35 to the data supplied from the inverse orthogonal transform section 29, and supplies the obtained decoded image to the loop filter 31.
The loop filter 31 is constituted by, for example, a deblocking filter. When the HEVC scheme is adopted, for example, the loop filter 31 is constituted by a deblocking filter and an adaptive offset filter (sample adaptive offset (SAO)). The loop filter 31 removes (reduces) block distortion occurring in the decoded image by filtering the decoded image from the calculating section 30, and supplies the decoded image after the distortion removal (reduction) to the DPB 32-1. The loop filter 31 also supplies the decoded image, unfiltered, to the intra-picture prediction section 33.
Here, the DPB 32-1 stores the decoded image from the loop filter 31, that is, the picture of the color image of the non-base view encoded and locally decoded in the encoder 11-1, as a (candidate) reference picture to be referred to when generating a predicted image used in the predictive encoding performed later (the encoding in which the calculating section 23 subtracts the predicted image). The DPB 32-1 is also shared by the encoders 11-M of other views.
The local decoding by the inverse quantization section 28, the inverse orthogonal transform section 29, and the calculating section 30 is performed on the I pictures and P pictures that can serve as reference pictures, and the decoded images of the I pictures and P pictures are stored in the DPB 32-1.
The intra-picture prediction section 33 and the motion prediction/compensation section 34 perform prediction processing in units of PUs of the target block.
When the target block belongs to an I picture, a P picture, or a B picture (including a Bs picture) subjected to intra prediction (intra-picture prediction), the intra-picture prediction section 33 reads the decoded portion (decoded image) of the target picture from the loop filter 31. Then, the intra-picture prediction section 33 designates part of the decoded image of the target picture read from the loop filter 31 as the predicted image of the target block of the target picture supplied from the picture reordering buffer 22.
Further, the intra-picture prediction section 33 obtains the encoding cost required to encode the target block using the predicted image, that is, the encoding cost required to encode the residual of the target block with respect to the predicted image and the like, and supplies the obtained encoding cost, together with the predicted image, to the predicted image selecting section 35.
When the target picture is a predictive (P) picture obtained by inter prediction or a bi-directionally predictive (B) picture, the motion prediction/compensation section 34 performs vector prediction processing in the AMVP mode and vector prediction processing in the M/S mode (merge/skip mode).
The motion prediction/compensation section 34 reads, from the DPB 32-1, one or more pictures encoded and locally decoded before the target picture, as candidate pictures (candidate inter-prediction reference pictures).
In addition, the motion prediction/compensation section 34 reads, from a DPB 32-N provided in an encoder of a different view (for example, the encoder 11-N), one or more pictures encoded and locally decoded before the target picture, as candidate pictures (candidate inter-view prediction reference pictures).
The DPB 32-N stores pictures of the color image of the different view, encoded and locally decoded in the encoder 11-N, as (candidate) reference pictures to be referred to when generating predicted images for the predictive encoding performed later.
In the AMVP mode, the motion prediction/compensation section 34 detects a shift vector (MV) by motion estimation (ME) (motion detection) using the target block of the target picture from the picture reordering buffer 22 and the candidate pictures, the shift vector representing the motion serving as the shift between the target block and the corresponding block of the candidate picture corresponding to the target block (the block minimizing the sum of absolute differences (SAD) with respect to the target block). The MVs detected at this time include inter MVs representing temporal shifts and inter-view MVs representing shifts between views.
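As an illustration of this ME step, here is a minimal, exhaustive SAD search in C++; the buffer layout, the search window, and all names are assumptions of the sketch, which also ignores the picture borders and the MV-cost term a real encoder would include.

```cpp
#include <climits>
#include <cstdint>
#include <cstdlib>

// SAD between the bw-by-bh target block at (bx, by) in the current picture
// and the block displaced by (mvx, mvy) in the reference picture; both
// buffers share the same stride, and the displaced block is assumed in-bounds.
static long blockSad(const uint8_t* cur, const uint8_t* ref, int stride,
                     int bx, int by, int bw, int bh, int mvx, int mvy) {
    long sad = 0;
    for (int y = 0; y < bh; ++y)
        for (int x = 0; x < bw; ++x)
            sad += std::abs(cur[(by + y) * stride + (bx + x)] -
                            ref[(by + y + mvy) * stride + (bx + x + mvx)]);
    return sad;
}

// Full search over a square +/-range window for the MV minimizing the SAD.
void searchBestMv(const uint8_t* cur, const uint8_t* ref, int stride,
                  int bx, int by, int bw, int bh, int range,
                  int& bestX, int& bestY) {
    long bestSad = LONG_MAX;
    for (int mvy = -range; mvy <= range; ++mvy)
        for (int mvx = -range; mvx <= range; ++mvx) {
            long sad = blockSad(cur, ref, stride, bx, by, bw, bh, mvx, mvy);
            if (sad < bestSad) { bestSad = sad; bestX = mvx; bestY = mvy; }
        }
}
```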
The motion prediction/compensation section 34 generates a predicted image by motion compensation that compensates the candidate picture from the DPB 32-1 or 32-N for the shift by the amount of motion according to the MV of the target block.
That is, the motion prediction/compensation section 34 obtains, as the predicted image, the corresponding block, which is the block (region) of the candidate picture at a position moved (shifted) from the position of the target block according to the MV of the target block.
Further, the motion prediction/compensation section 34 designates MVs serving as candidate prediction vectors for encoding, using the neighboring spatially adjacent blocks in the same picture. The motion prediction/compensation section 34 reads, from the MV memory 36-1, the MV of the corresponding block associated by the MV in a picture of the same view at a different time, and designates the read MV as a candidate prediction vector. The motion prediction/compensation section 34 reads, from the MV memory 36-N storing the MVs of the different view, the MV of the reference block in the different view at the same time, and designates the read MV as a candidate prediction vector.
Here, the reference block in the different view (the Cor PU of Fig. 1) is the block at a position shifted by a disparity vector, found from the MVs of the adjacent blocks neighboring the periphery of the target block (the Curr PU of Fig. 1), from the position in the image of the different view identical to the position of the target block.
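A minimal sketch of locating that reference block follows, under the assumption that each neighboring block exposes its MV with a flag marking inter-view (disparity) vectors; the scan order and all types are illustrative.

```cpp
// An MV of a neighboring block; inter-view (disparity) vectors are flagged.
struct Mv { int x; int y; bool isDisparity; };

struct BlockPos { int x; int y; };

// Returns the position of the reference block in the other view's picture:
// the target block's own position shifted by the first disparity vector
// found among the neighbors, or the co-located position if none is found.
BlockPos locateReferenceBlock(const BlockPos& curBlock,
                              const Mv* neighbors, int numNeighbors) {
    for (int i = 0; i < numNeighbors; ++i)
        if (neighbors[i].isDisparity)
            return { curBlock.x + neighbors[i].x, curBlock.y + neighbors[i].y };
    return curBlock;  // fall back to the co-located block
}
```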
In the M/S mode, on the other hand, the motion prediction/compensation section 34 designates candidate MVs using the neighboring spatially adjacent blocks in the same picture. The motion prediction/compensation section 34 reads, from the MV memory 36-1, the MV of the corresponding block associated by the MV in a picture of the same view at a different time, and designates the read MV as a candidate MV. The motion prediction/compensation section 34 reads, from the MV memory 36-N storing the MVs of the different view, the MV of the reference block in the different view at the same time, and designates the read MV as a candidate MV. The motion prediction/compensation section 34 generates candidate pictures using the candidate MVs.
Then, the motion prediction/compensation section 34 obtains, for each candidate picture used for generating the predicted image, each candidate MV, each candidate prediction vector, each inter prediction mode with variable block size (including inter-view prediction modes), or each M/S mode, the encoding cost required to encode the target block using the predicted image.
The motion prediction/compensation section 34 designates the inter prediction mode or inter-view prediction mode minimizing the encoding cost as the optimal inter prediction mode, and supplies the predicted image obtained in the optimal inter prediction mode and its encoding cost to the predicted image selecting section 35.
The motion prediction/compensation section 34 designates the candidate prediction vector of the optimal inter prediction mode as the prediction vector, obtains its difference from the MV, and supplies the obtained difference as MV information, together with the index of the prediction vector, to the lossless encoding section 26. The motion prediction/compensation section 34 also stores the MV of the optimal inter prediction mode in the MV memory 36-1.
The predicted image selecting section 35 selects, from the predicted images from the intra-picture prediction section 33 and the motion prediction/compensation section 34, the one with the smaller encoding cost, and supplies the selected predicted image to the calculating sections 23 and 30.
Here, the intra-picture prediction section 33 supplies information about intra prediction to the lossless encoding section 26 as header information. The motion prediction/compensation section 34 supplies information about inter prediction (MV information and the like) to the lossless encoding section 26 as header information.
The lossless encoding section 26 selects, of the header information from the intra-picture prediction section 33 and the motion prediction/compensation section 34, the header information from the section that generated the predicted image with the smaller encoding cost, and includes the selected header information in the header of the encoded data.
The MV memory 36-1 stores the MV determined in the motion prediction/compensation section 34 as a (candidate) MV to be referred to when generating a prediction vector for the MV encoding performed later. The MV memory 36-1 is also shared by the encoders 11-M of other views.
The MV memory 36-N is provided in the encoder 11-N of the different view, and stores the MV determined in the encoder 11-N as a (candidate) MV to be referred to when generating a prediction vector for the MV encoding performed later. The MV memory 36-N is shared by the motion prediction/compensation section 34 and the encoders 11-M of other views.
[Structure of motion prediction/compensation section]
Fig. 5 is a block diagram illustrating a configuration example of the motion prediction/compensation section of Fig. 4.
In the example of Fig. 5, the motion prediction/compensation section 34 is configured to include a motion prediction mode generating section 51, an automatic reference index generating section 52, an AMVP mode vector prediction section 53, an M/S mode vector prediction section 54, and a mode determining section 55.
The motion prediction mode generating section 51 generates motion prediction modes such as an inter prediction mode, a merge mode, and a skip mode. The motion prediction mode generating section 51 supplies information indicating inter prediction and a reference image index (Ref index) to the AMVP mode vector prediction section 53. The motion prediction mode generating section 51 supplies the merge mode or skip mode (M/S mode) to the automatic reference index generating section 52.
The automatic reference index generating section 52 automatically generates a reference image index and supplies the generated reference image index (Ref index), together with the merge mode or skip mode from the motion prediction mode generating section 51, to the M/S mode vector prediction section 54.
The AMVP mode vector prediction section 53 reads, from the DPB 32-1 or 32-N, one or more pictures encoded and locally decoded before the target picture as candidate pictures, according to the prediction mode and the reference image index from the motion prediction mode generating section 51.
The AMVP mode vector prediction section 53 detects an MV representing the motion serving as the shift between the target block and the corresponding block of the candidate picture corresponding to the target block, by motion detection using the target block of the target picture from the picture reordering buffer 22 and the candidate pictures. The AMVP mode vector prediction section 53 generates a predicted image by motion compensation that compensates the candidate picture from the DPB 32-1 or 32-N for the shift by the amount of motion according to the MV of the target block.
The AMVP mode vector prediction section 53 designates, using the neighboring spatially adjacent blocks in the same picture, MVs serving as candidate prediction vectors for encoding. The AMVP mode vector prediction section 53 reads, from the MV memory 36-1, the MV of the corresponding or temporally adjacent block in a picture of the same view at a different time, and designates the read MV as a candidate prediction vector. The AMVP mode vector prediction section 53 reads, from the MV memory 36-N storing the MVs of the different view, the MV of the reference block in the different view at the same time, and designates the read MV as a candidate prediction vector.
The AMVP mode vector prediction section 53 obtains, based on the original image from the picture reordering buffer 22, the encoding cost required to encode the target block using the predicted image, for each candidate picture used for generating the predicted image, each candidate MV, each candidate prediction vector, or each inter prediction mode with variable block size. The AMVP mode vector prediction section 53 supplies the best encoding cost among the obtained encoding costs to the mode determining section 55 as the mode cost. At this time, the AMVP mode vector prediction section 53 designates the candidate prediction vector of the best encoding cost as the prediction vector, obtains its difference from the MV, and encodes the MV difference Mvd and the index of the prediction vector (Mv index) as MV information.
The M/S mode vector prediction section 54 reads, from the DPB 32-1 or 32-N, one or more pictures encoded and locally decoded before the target picture as candidate pictures, according to the mode and the reference image index from the automatic reference index generating section 52.
Further, the M/S mode vector prediction section 54 designates candidate MVs using the neighboring spatially adjacent blocks in the same picture. The M/S mode vector prediction section 54 reads, from the MV memory 36-1, the MV of the corresponding or temporally adjacent block in a picture of the same view at a different time, and designates the read MV as a candidate MV. The M/S mode vector prediction section 54 reads, from the MV memory 36-N storing the MVs of the different view, the MV of the reference block in the different view at the same time, and designates the read MV as a candidate MV. The M/S mode vector prediction section 54 generates candidate pictures using the candidate MVs.
The M/S mode vector prediction section 54 obtains, based on the original image from the picture reordering buffer 22, the encoding cost required to encode the target block using the predicted image, for each candidate picture used for generating the predicted image, each candidate MV, or each M/S mode. The M/S mode vector prediction section 54 supplies the best encoding cost among the obtained encoding costs to the mode determining section 55 as the mode cost. The M/S mode vector prediction section 54 also encodes a merge index indicating the MV as MV information.
The mode determining section 55 refers to the encoding costs from the AMVP mode vector prediction section 53 and the M/S mode vector prediction section 54, and determines the inter prediction mode or inter-view prediction mode minimizing the encoding cost to be the optimal prediction mode, that is, the optimal motion prediction mode. The mode determining section 55 returns the determination result of the optimal prediction mode to the AMVP mode vector prediction section 53 and the M/S mode vector prediction section 54.
The AMVP mode vector prediction section 53 supplies, according to the determination result from the mode determining section 55, the predicted image obtained in the optimal prediction mode (pred. image) and its encoding cost to the predicted image selecting section 35. The AMVP mode vector prediction section 53 supplies the inter prediction mode determined to be the optimal prediction mode (inter mode), the reference image index (Ref index), and the encoded MV information to the lossless encoding section 26.
The M/S mode vector prediction section 54 supplies, according to the determination result from the mode determining section 55, the predicted image obtained in the optimal prediction mode (pred. image) and its encoding cost to the predicted image selecting section 35. The M/S mode vector prediction section 54 also supplies the prediction mode determined to be the optimal prediction mode (M/S mode) and the encoded MV information to the lossless encoding section 26. At this time, the information on the MV of the best encoding cost is temporarily stored in (overwrites) the spatial MV memory of Fig. 6 described later.
[Structure of AMVP mode vector prediction section]
Fig. 6 is a block diagram illustrating a configuration example of the AMVP mode vector prediction section of Fig. 5.
In the example of Fig. 6, the AMVP mode vector prediction section 53 is configured to include a vector search section 61, a predicted image generating section 62, a vector cost determining section 63, a spatial MV memory 64, prediction vector generating sections 65 and 66, a switch 67, a subtraction section 68, and a POC conversion section 69.
The reference image index from the motion prediction mode generating section 51 is supplied to the vector search section 61, the POC conversion section 69, and the lossless encoding section 26. The prediction mode from the motion prediction mode generating section 51 is also supplied to the vector search section 61.
The vector search section 61 reads, from the DPB 32-1 or 32-N, one or more pictures encoded and locally decoded before the target picture as candidate pictures, according to the prediction mode and the reference image index from the motion prediction mode generating section 51. The vector search section 61 detects an MV representing the motion serving as the shift between the target block and the corresponding block of the candidate picture corresponding to the target block, by motion detection using the target block of the target picture from the picture reordering buffer 22 and the candidate pictures. The vector search section 61 supplies the detected MV to the predicted image generating section 62 and the vector cost determining section 63.
The predicted image generating section 62 generates a predicted image by motion compensation that compensates the candidate picture from the DPB 32-1 or 32-N for the shift by the amount of motion according to the MV of the target block from the vector search section 61. The generated predicted image is supplied to the predicted image selecting section 35 and the vector cost determining section 63.
The vector cost determining section 63 obtains encoding costs using the original image from the picture reordering buffer 22, the MV from the vector search section 61, the predicted image from the predicted image generating section 62, and the prediction vectors and their MV indices from the prediction vector generating sections 65 and 66. Then, the vector cost determining section 63 determines the minimum encoding cost and supplies the minimum encoding cost (best cost) and its prediction mode to the mode determining section 55. The vector cost determining section 63 temporarily stores the MV of the minimum encoding cost in the spatial MV memory 64.
The spatial MV memory 64 stores the MV of the minimum encoding cost as a candidate for the generation of prediction vectors performed later. In the spatial MV memory 64, the MV is stored in units of the blocks (PUs) for which the MV was obtained. When the encoding cost of the M/S mode is the best, the MV in the spatial MV memory 64 is overwritten with the MV of the M/S mode case.
Further, when the MV of the minimum encoding cost is supplied from the vector cost determining section 63, the spatial MV memory 64 supplies that MV to the subtraction section 68 as the best MV.
The prediction vector generating section 65 generates spatial prediction vectors by reading the MVs of the neighboring spatially adjacent blocks in the same picture. The prediction vector generating section 65 supplies the generated spatial prediction vectors, together with MV indices indicating the prediction vectors, to the vector cost determining section 63 and the subtraction section 68 through the switch 67.
The prediction vector generating section 66 generates prediction vectors using temporal motion vector prediction (TMVP). That is, the prediction vector generating section 66 generates a prediction vector by reading, from the MV memory 36-1, the MV of the corresponding or temporally adjacent block in a picture of the same view at a different time. At this time, when the reference POC (Ref POC) of the target block differs from the reference POC (Ref POC) of the temporally adjacent block, scaling is performed according to the POC information from the POC conversion section 69; that is, the scaled MV serves as the prediction vector. The prediction vector generating section 66 supplies the generated temporal prediction vector, together with an MV index indicating the prediction vector, to the vector cost determining section 63 and the subtraction section 68 through the switch 67.
The prediction vector generating section 66 also generates prediction vectors using inter-view prediction (IVMP). The prediction vector generating section 66 obtains a disparity vector from the MVs of the adjacent blocks neighboring the target block in the spatial MV memory 64, and obtains the reference block in the different view at the same time according to the obtained disparity vector. Then, the prediction vector generating section 66 generates a prediction vector by reading the MV of the reference block in the different view at the same time from the MV memory 36-N storing the MVs of the different view.
At this time, when the reference POC (Ref POC) of the target block differs from the reference POC (Ref POC) of the reference block, scaling is performed; that is, the scaled MV serves as the prediction vector. The prediction vector generating section 66 supplies the generated inter-view prediction vector, together with an MV index indicating the prediction vector, to the vector cost determining section 63 and the subtraction section 68 through the switch 67.
The switch 67 selects the prediction vector from the prediction vector generating section 65 or the prediction vector from the prediction vector generating section 66, and supplies the selected prediction vector and its MV index to the vector cost determining section 63 and the subtraction section 68.
The subtraction section 68 encodes, as MV information, the difference MVd between the MV of the minimum cost (best MV) from the spatial MV memory 64 and the prediction vector from the switch 67, together with the MV index representing the index of the prediction vector. The subtraction section 68 supplies the encoded MV information to the lossless encoding section 26.
POC conversion fraction 69 is transformed into POC reference image index (cross index) of the object block from motion prediction mode generating portion 51, then instruction is supplied to predicted vector generating portion 66 by the POC information of the POC of described conversion acquisition.
[Configuration example of the non-spatial predicted vector generation section]
Fig. 7 is a block diagram illustrating a configuration example of the non-spatial predicted vector generation section of Fig. 6.
In the example of Fig. 7, predicted vector generation section 66 is configured to include a predicted vector index generation section 81, an intra-view reference vector generation section 82, and an inter-view reference vector generation section 83.
Predicted vector index generation section 81 generates the predicted vector index (MV index) for TMVP and supplies it to intra-view reference vector generation section 82. Section 81 likewise generates the predicted vector index (MV index) for IVMP and supplies it to inter-view reference vector generation section 83.
Intra-view reference vector generation section 82 generates a predicted vector using TMVP. That is, section 82 generates the predicted vector by reading, from MV memory 36-1, the MV of the corresponding block, associated through the MV, in a picture of the same view at a different time.
At this time, when the Ref POC of the target block differs from the Ref POC of the corresponding block, the MV of the corresponding block is scaled according to the POC information from POC conversion section 69; the scaled MV serves as the predicted vector. Section 82 supplies the generated temporal predicted vector (PMV), together with the MV index indicating it, to vector cost determination section 63 and subtraction section 68 via switch 67.
Inter-view reference vector generation section 83 generates a predicted vector using IVMP. Section 83 finds a disparity vector from the MVs of neighboring blocks adjacent to the target block, and uses it to locate the reference block in a different view at the same time. Section 83 then generates the predicted vector by reading the MV of that reference block from MV memory 36-N, which holds the MVs of the different view.
At this time, when the Ref POC of the target block differs from the Ref POC of the reference block, the MV of the reference block is scaled according to the POC information from POC conversion section 69; the scaled MV serves as the predicted vector. Section 83 supplies the generated inter-view predicted vector, together with the MV index indicating it, to vector cost determination section 63 and subtraction section 68 via switch 67.
[Operation of the encoder]
The encoding process of encoder 11-1 of Fig. 4 is described below with reference to the flowchart of Fig. 8. Encoders 11-N and 11-M, which encode the images of the other views, perform a similar encoding process.
Pictures of the color image of a non-base view, which is the image (moving image) to be encoded, are sequentially supplied to A/D conversion section 21. In step S11, when the supplied image is an analog signal, A/D conversion section 21 performs A/D conversion on the signal and supplies the result to picture reorder buffer 22.
Picture reorder buffer 22 temporarily stores the pictures from A/D conversion section 21 and reads them according to the GOP structure indicated by the supplied coding order, thereby rearranging the picture sequence from display order into coding order (decoding order). The pictures read from picture reorder buffer 22 are supplied to calculation section 23, intra prediction section 33, and motion prediction/compensation section 34.
In step S12, intra prediction section 33 performs intra prediction. That is, intra prediction section 33 reads the locally decoded part (decoded image) of the target picture from loop filter 31. Intra prediction section 33 then designates part of the read decoded image as the predicted image of the target block (PU) of the target picture supplied from picture reorder buffer 22.
Intra prediction section 33 obtains the coding cost required to encode the target block using the predicted image, that is, the cost required to encode the residual of the target block with respect to the predicted image, and supplies the obtained coding cost, together with the predicted image, to predicted image selection section 35.
In step S13, motion prediction/compensation section 34 performs motion prediction and compensation. The motion prediction and compensation process is described in detail with reference to Fig. 9.
In step S13, motion prediction, compensation, and predicted vector generation are performed for all inter prediction modes, merge/skip candidates are generated for the M/S modes, and predicted images are generated for all inter prediction modes (including the M/S modes). Then, for each candidate picture, each candidate MV, each candidate predicted vector, and each inter prediction mode of each variable block size (including the inter-view prediction mode) or each M/S mode used for generating a predicted image, the coding cost required to encode the target block (PU) using the predicted image is obtained, the best inter prediction mode is determined, and the coding cost is supplied, together with the predicted image, to predicted image selection section 35.
At this time, intra prediction section 33 supplies information about the intra prediction to lossless encoding section 26 as header information, and motion prediction/compensation section 34 supplies information about the inter prediction (MV information and the like) to lossless encoding section 26 as header information.
In step S14, predicted image selection section 35 selects, from the predicted images supplied by intra prediction section 33 and motion prediction/compensation section 34, the one with the lower coding cost, and supplies the selected predicted image to calculation sections 23 and 30.
In step S15, when motion prediction (rather than intra prediction) is selected in step S14, motion prediction/compensation section 34 (vector cost determination section 63 of Fig. 6) temporarily stores the MV of the best inter prediction mode in spatial MV memory 64 of Fig. 6. That is, the MV for the AMVP mode has already been stored in spatial MV memory 64 during the process of step S13; however, when the mode with the best coding cost in step S15 is an M/S mode, the content of spatial MV memory 64 of Fig. 6 is replaced with the MV of the M/S mode.
In step S16, calculation section 23 calculates the difference between the original image from picture reorder buffer 22 and the predicted image from predicted image selection section 35, and supplies the result to orthogonal transform section 24. That is, when necessary, calculation section 23 performs predictive coding by subtracting the pixel values of the predicted image supplied from predicted image selection section 35 from the pixel values of the target block, and supplies the predictive coding result to orthogonal transform section 24.
In step S17, orthogonal transform section 24 performs an orthogonal transform, such as a discrete cosine transform or a Karhunen-Loève transform, on the target block (the pixel values from calculation section 23, or the residual obtained by subtracting the predicted image), using transform units (TUs) as the unit, and supplies the resulting transform coefficients to quantization section 25.
In step S18, quantization section 25 quantizes the transform coefficients supplied from orthogonal transform section 24, and supplies the resulting quantized values to inverse quantization section 28 and lossless encoding section 26.
In step S19, inverse quantization section 28 inversely quantizes the quantized values from quantization section 25 into transform coefficients and supplies them to inverse orthogonal transform section 29.
In step S20, inverse orthogonal transform section 29 performs an inverse orthogonal transform on the transform coefficients from inverse quantization section 28 and supplies the result to calculation section 30.
In step S21, lossless encoding section 26 performs lossless coding on the residual coefficients, that is, the quantized values from quantization section 25, and supplies the resulting coded data to accumulation buffer 27. Lossless encoding section 26 also encodes the header information from intra prediction section 33 or motion prediction/compensation section 34, such as the prediction mode information and the MV information, and includes the encoded header information in the header of the coded data.
In step S22, calculation section 30 decodes (locally decodes) the target block by adding, where necessary, the pixel values of the predicted image supplied from predicted image selection section 35 to the data supplied from inverse orthogonal transform section 29, and supplies the resulting decoded image to loop filter 31.
In step S23, loop filter 31 determines whether the largest coding unit (LCU) has ended. When it is determined in step S23 that the LCU has not ended, the process returns to step S12 and the processing from step S12 onward is repeated.
When it is determined in step S23 that the LCU has ended, the process proceeds to step S24. In step S24, loop filter 31 filters the decoded image from calculation section 30, thereby removing (reducing) the block distortion occurring in the decoded image.
In step S25, loop filter 31 stores the filtered decoded image in DPB 32-1.
In step S26, motion prediction/compensation section 34 compresses the MVs stored in step S15. That is, although one MV is stored for each (4 × 4) block in spatial MV memory 64, the MVs are compressed so that only one MV is kept for each (16 × 16) block; for example, the MV of the top-left (4 × 4) block within each (16 × 16) block is selected.
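As an illustration of this 16 × 16 compression, the following sketch assumes the MV field is held as a dictionary keyed by 4 × 4 block coordinates; this data layout is an assumption made for the example only.

```python
def compress_mv_field(mv4x4, width, height):
    """Keep one MV per 16x16 block by selecting the MV of its top-left
    4x4 block (mv4x4 maps (x4, y4) block coordinates to (mvx, mvy))."""
    mv16x16 = {}
    for y16 in range(height // 16):
        for x16 in range(width // 16):
            tl = (x16 * 4, y16 * 4)  # top-left 4x4 block of this 16x16 block
            if tl in mv4x4:
                mv16x16[(x16, y16)] = mv4x4[tl]
    return mv16x16
```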
Then, in step S27, motion prediction/compensation section 34 stores the compressed MVs in MV memory 36-1.
In step S28, encoder 11-M for another view encodes the picture of that view. This encoding process is basically similar to the encoding process of Fig. 8.
The encoding process is performed in the manner described above.
[Example of the motion prediction/compensation process]
The motion prediction/compensation process of step S13 of Fig. 8 is described below with reference to the flowchart of Fig. 9.
In step S41, motion prediction mode generation section 51 generates the motion prediction modes, such as the inter prediction modes (including the inter-view prediction mode), the merge mode, and the skip mode.
In step S42, motion prediction mode generation section 51 determines whether the generated motion prediction mode is an inter prediction mode. When it is determined in step S42 that the generated motion prediction mode is an inter prediction mode, motion prediction mode generation section 51 supplies the inter prediction mode (inter mode) and the reference image index (Ref index) to AMVP mode vector prediction section 53, and the process proceeds to step S43.
In step S43, AMVP mode vector prediction section 53 performs the vector prediction of the AMVP mode. The details of the AMVP-mode vector prediction are described below with reference to Fig. 10.
In the process of step S43, the MV of the inter prediction mode is found, a predicted image is generated, a residual image is generated, and a spatial predicted vector and a non-spatial predicted vector are each generated. In particular, when a non-spatial predicted vector is generated and the Ref POC of the current PU differs from the Ref POC of the reference PU in the different view, the MV of the reference PU is scaled, and the scaled MV serves as a candidate predicted vector of the current PU. The differences between the candidate predicted vectors and the MV are then calculated so as to select the predicted vector with the minimum cost. The minimum cost of the selected predicted vector is supplied to mode determination section 55. In addition, the difference between the selected minimum-cost predicted vector and the MV, together with the index of that predicted vector, is encoded as MV information.
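The selection of the minimum-cost predicted vector can be pictured as follows. In this sketch the cost is reduced to an estimated bit count for MVd plus the predictor index, which is a deliberate simplification of the rate-distortion cost actually used by vector cost determination section 63; all names are illustrative.

```python
def mvd_bits(mvd):
    # crude exp-Golomb-like bit estimate per MV component (illustrative)
    return sum(1 + 2 * max(abs(c), 1).bit_length() for c in mvd)

def select_predictor(best_mv, candidates):
    """Choose the candidate predicted vector whose difference from the MV
    is cheapest to code; candidates is a list of (mv_index, pmv) pairs."""
    best = None
    for mv_index, pmv in candidates:
        mvd = (best_mv[0] - pmv[0], best_mv[1] - pmv[1])
        cost = mvd_bits(mvd) + 1  # one extra bit for the MV index
        if best is None or cost < best[0]:
            best = (cost, mv_index, mvd)
    _, mv_index, mvd = best
    return mv_index, mvd          # encoded together as the MV information
```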
On the other hand, when it is determined in step S42 that the mode is not an inter prediction mode, motion prediction mode generation section 51 supplies the merge mode or the skip mode (M/S mode) to automatic reference index generation section 52, and the process proceeds to step S44.
In step S44, automatic reference index generation section 52 automatically generates a reference image index and supplies the generated reference image index (Ref index), together with the information indicating the merge mode or the skip mode from motion prediction mode generation section 51, to M/S mode vector prediction section 54.
In step S45, M/S mode vector prediction section 54 performs the vector prediction of the merge or skip mode.
That is, according to the mode and the reference image index from automatic reference index generation section 52, M/S mode vector prediction section 54 reads from DPB 32-1 or 32-N, as candidate pictures, one or more pictures that were encoded and locally decoded before the target picture.
M/S mode vector prediction section 54 also designates candidate MVs using the spatially neighboring blocks in the same picture. It reads from MV memory 36-1 the MV of the co-located block, or of a temporally neighboring block, in a picture of the same view at a different time, and designates the read MV as a candidate MV. It further reads, from MV memory 36-N holding the MVs of the different view, the MV of the reference block in the different view at the same time, and designates the read MV as a candidate predicted vector. Using the candidate MVs, M/S mode vector prediction section 54 generates candidate images.
Based on the original image from picture reorder buffer 22, M/S mode vector prediction section 54 obtains, for each candidate picture, each candidate MV, and each M/S mode used for generating a predicted image, the coding cost required to encode the target block using the predicted image. M/S mode vector prediction section 54 supplies the best coding cost among the obtained costs to mode determination section 55 as the mode cost. At this time, M/S mode vector prediction section 54 encodes, as MV information, the merge index indicating the MV with the best coding cost.
In step S46, mode determination section 55 refers to the coding costs from AMVP mode vector prediction section 53 and M/S mode vector prediction section 54, and determines the inter prediction mode or inter-view prediction mode with the minimum coding cost as the best prediction mode, that is, the best motion prediction mode. Mode determination section 55 returns the determination result of the best inter prediction mode to AMVP mode vector prediction section 53 and M/S mode vector prediction section 54.
In step S47, AMVP mode vector prediction section 53 or M/S mode vector prediction section 54 selects, according to the determination result from mode determination section 55, the encoded motion information of the mode with the lower coding cost, and supplies the selected motion information to lossless encoding section 26.
[Example of the AMVP-mode vector prediction process]
The AMVP-mode vector prediction process of step S43 of Fig. 9 is described below with reference to the flowchart of Fig. 10.
The prediction mode from motion prediction mode generation section 51 is supplied, together with the reference image index, to vector search section 61.
In step S61, vector search section 61 performs a vector search according to the prediction mode and the reference image index from motion prediction mode generation section 51.
That is, according to the prediction mode and the reference image index, vector search section 61 reads from DPB 32-1 or 32-N, as candidate pictures, one or more pictures that were encoded and locally decoded before the target picture. By motion detection between the target block of the target picture from picture reorder buffer 22 and the candidate pictures, vector search section 61 detects the MV representing the motion, that is, the displacement between the target block and the corresponding block in the candidate picture. Vector search section 61 supplies the detected MV to predicted image generation section 62 and vector cost determination section 63.
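As a picture of what the vector search does, here is a toy integer-pel full search over a small window, assuming numpy arrays for the target block and the candidate picture. The embodiment does not specify the search strategy, so this is illustrative only.

```python
import numpy as np

def full_search(block, ref, bx, by, search_range=8):
    """Return the MV (dx, dy) minimizing the SAD between the target block
    at position (bx, by) and the displaced block in candidate picture `ref`."""
    h, w = block.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # displaced block would leave the picture
            sad = int(np.abs(ref[y:y + h, x:x + w].astype(np.int32)
                             - block.astype(np.int32)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv
```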
In step S62, predicted image generation section 62 generates a predicted image according to the MV of the target block from vector search section 61.
That is, predicted image generation section 62 generates the predicted image by performing, according to the MV of the target block from vector search section 61, motion compensation that compensates for the displacement corresponding to the amount of motion on the candidate picture from DPB 32-1 or 32-N. The generated predicted image is supplied to predicted image selection section 35 and vector cost determination section 63.
In step S63, vector cost determination section 63 generates a residual image using the original image from picture reorder buffer 22, the MV from vector search section 61, and the predicted image from predicted image generation section 62. The generated residual image is used in step S67, described later, to calculate the coding cost.
In step S64, predicted vector generation section 65 generates a spatial predicted vector. That is, section 65 generates the spatial predicted vector by reading, from spatial MV memory 64, the MVs of spatially neighboring blocks in the same picture. Section 65 supplies the generated spatial predicted vector, together with the MV index indicating it, to vector cost determination section 63 and subtraction section 68 via switch 67.
In step S65, predicted vector generation section 66 generates the non-spatial predicted vectors. That is, section 66 generates a predicted vector using TMVP and a predicted vector using IVMP. The process of generating the non-spatial predicted vectors is described below with reference to Fig. 11.
In the process of step S65, a predicted vector using TMVP and a predicted vector using IVMP are generated. When the predicted vector using IVMP is generated, a disparity vector is found from the MVs of neighboring blocks adjacent to the target block, and the reference block in the different view at the same time is located according to the found disparity vector. The MV of that reference block is then read from MV memory 36-N holding the MVs of the different view, and scaling is performed when the Ref POC of the target block differs from the Ref POC of the reference block. The POC information used here is converted from the Ref index by POC conversion section 69 and supplied to section 66.
The predicted vectors using TMVP and IVMP generated in the process of step S65 are supplied, via switch 67 and together with the MV indices indicating them, to vector cost determination section 63 and subtraction section 68.
In step S66, vector cost determination section 63 calculates the residuals between the MV of the target block and the predicted vectors of the target block supplied from predicted vector generation sections 65 and 66.
In step S67, vector cost determination section 63 obtains the coding costs using the residual image obtained in step S63 and the vector residuals obtained in step S66, selects the predicted vector with the minimum cost from among the obtained coding costs, and stores the MV corresponding to the selected predicted vector (the best MV) in spatial MV memory 64.
The best MV is supplied to subtraction section 68 through spatial MV memory 64.
In step S68, the difference MVd between the minimum-cost MV (best MV) from spatial MV memory 64 and the predicted vector corresponding to the best MV from switch 67 is encoded as MV information, together with the MV index identifying the predicted vector.
[Example of the process of generating the non-spatial predicted vectors]
The process of generating the non-spatial predicted vectors in step S65 of Fig. 10 is described below with reference to the flowchart of Fig. 11.
Predicted vector index generation section 81 generates the predicted vector index (MV index) for TMVP and supplies it to intra-view reference vector generation section 82, and generates the predicted vector index (MV index) for IVMP and supplies it to inter-view reference vector generation section 83.
In step S81, intra-view reference vector generation section 82 generates a predicted vector using TMVP.
That is, section 82 generates the predicted vector by reading, from MV memory 36-1, the MV of the co-located block, or of a temporally neighboring block, in a picture of the same view at a different time. Section 82 supplies the generated temporal predicted vector (PMV), together with the MV index indicating it, to vector cost determination section 63 and subtraction section 68 via switch 67.
In steps S82 to S84, inter-view reference vector generation section 83 generates a predicted vector using IVMP.
That is, in step S82, section 83 finds a disparity vector from the MVs, held in spatial MV memory 64, of neighboring blocks adjacent to the target block (PU), and calculates the disparity from the found disparity vector.
In step S83, section 83 selects, as the reference PU, the PU in the different view at the position displaced by the disparity obtained in step S82.
In step S84, section 83 generates a predicted vector from the MV of the selected reference PU by reading that MV from MV memory 36-N, which holds the MVs of the different view. This predicted vector generation process is described below with reference to Figs. 12 and 13.
In the process of step S84, the generated IVMP inter-view predicted vector is supplied, together with the MV index indicating the predicted vector, to vector cost determination section 63 and subtraction section 68 via switch 67.
[Example of the predicted vector generation process]
The predicted vector generation process of step S84 of Fig. 11 is described below with reference to the flowchart of Fig. 12. The example of Fig. 12 illustrates the predicted vector generation process for direction L0.
In step S101, inter-view reference vector generation section 83 searches MV memory 36-N holding the MVs of the different view, and determines whether the direction-L0 MV MVbase_l0 of the different view (base view) is available.
When it is determined in step S101 that MVbase_l0 is available, the process proceeds to step S102. In step S102, section 83 determines whether POCcurr_l0, the Ref POC of the target PU, equals POCbase_l0, the Ref POC of the reference PU.
When it is determined in step S102 that POCcurr_l0 equals POCbase_l0, the process proceeds to step S103. In step S103, section 83 designates MVbase_l0 as the predicted vector PMV_L0 of direction L0 of the target PU, and the predicted vector generation process ends.
When it is determined in step S101 that MVbase_l0 is unavailable, or when it is determined in step S102 that POCcurr_l0 is not equal to POCbase_l0, the process proceeds to step S104.
In step S104, section 83 searches MV memory 36-N and determines whether the direction-L1 MV MVbase_l1 of the base view is available.
When it is determined in step S104 that MVbase_l1 is available, the process proceeds to step S105. In step S105, section 83 determines whether POCcurr_l0, the Ref POC of the target PU, equals POCbase_l1, the Ref POC of the reference PU.
When it is determined in step S105 that POCcurr_l0 equals POCbase_l1, the process proceeds to step S106. In step S106, section 83 designates MVbase_l1 as the predicted vector PMV_L0 of direction L0 of the target PU, and the predicted vector generation process ends.
When it is determined in step S104 that MVbase_l1 is unavailable, or when it is determined in step S105 that POCcurr_l0 is not equal to POCbase_l1, the process proceeds to step S107.
In step S107, section 83 again determines whether MVbase_l0 of the base view is available.
When it is determined in step S107 that MVbase_l0 is available, the process proceeds to step S108. In step S108, section 83 scales MVbase_l0 according to POCcurr_l0, the Ref POC of the target PU, and POCbase_l0, the Ref POC of the reference PU. Section 83 then designates the scaled MVbase_l0 as the predicted vector PMV_L0 of direction L0 of the target PU, and the predicted vector generation process ends.
When it is determined in step S107 that MVbase_l0 is unavailable, the process proceeds to step S109.
In step S109, section 83 again determines whether MVbase_l1 of the base view is available.
When it is determined in step S109 that MVbase_l1 is available, the process proceeds to step S110. In step S110, section 83 scales MVbase_l1 according to POCcurr_l0 and POCbase_l1, the Ref POC of the reference PU, designates the scaled MVbase_l1 as the predicted vector PMV_L0 of direction L0 of the target PU, and ends the predicted vector generation process.
When it is determined in step S109 that MVbase_l1 is unavailable, the process proceeds to step S111. In step S111, section 83 determines that no predicted vector PMV_L0 of direction L0 exists for the target PU, and the predicted vector generation process ends.
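The flow of Fig. 12 amounts to the following decision cascade. The sketch assumes a helper mv_base(direction) that returns the reference PU's MV and its Ref POC for 'l0' or 'l1' (or None when unavailable), and reuses the scale_mv sketch shown earlier; since the reference PU lies in another view at the same time, its picture POC equals that of the current picture.

```python
def ivmp_pmv_l0(poc_curr, poc_curr_l0, mv_base):
    """Derive PMV_L0 of the target PU from the base-view reference PU,
    following steps S101-S111 of Fig. 12 (illustrative)."""
    # First pass: reuse the base-view MV directly when the Ref POCs match
    for d in ('l0', 'l1'):                # steps S101-S103 / S104-S106
        cand = mv_base(d)
        if cand is not None and cand[1] == poc_curr_l0:
            return cand[0]
    # Second pass: scale whichever base-view MV is available, L0 first
    for d in ('l0', 'l1'):                # steps S107-S108 / S109-S110
        cand = mv_base(d)
        if cand is not None:
            mv, poc_base_ref = cand
            # the reference PU's picture is at the same time as the current one
            return scale_mv(mv, poc_curr, poc_curr_l0, poc_curr, poc_base_ref)
    return None                           # step S111: PMV_L0 does not exist
```

The direction-L1 flow of Fig. 13, described next, is the same cascade with the roles of L1 and L0 exchanged.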
[Example of the predicted vector generation process]
The predicted vector generation process of step S84 of Fig. 11 for direction L1 is described below with reference to the flowchart of Fig. 13.
In step S131, inter-view reference vector generation section 83 searches MV memory 36-N holding the MVs of the different view, and determines whether the direction-L1 MV MVbase_l1 of the different view (base view) is available.
When it is determined in step S131 that MVbase_l1 is available, the process proceeds to step S132. In step S132, section 83 determines whether POCcurr_l1, the Ref POC of the target PU, equals POCbase_l1, the Ref POC of the reference PU.
When it is determined in step S132 that POCcurr_l1 equals POCbase_l1, the process proceeds to step S133. In step S133, section 83 designates MVbase_l1 as the predicted vector PMV_L1 of direction L1 of the target PU, and the predicted vector generation process ends.
When it is determined in step S131 that MVbase_l1 is unavailable, or when it is determined in step S132 that POCcurr_l1 is not equal to POCbase_l1, the process proceeds to step S134.
In step S134, section 83 searches MV memory 36-N and determines whether MVbase_l0 of the base view is available.
When it is determined in step S134 that MVbase_l0 is available, the process proceeds to step S135. In step S135, section 83 determines whether POCcurr_l1, the Ref POC of the target PU, equals POCbase_l0, the Ref POC of the reference PU.
When it is determined in step S135 that POCcurr_l1 equals POCbase_l0, the process proceeds to step S136. In step S136, section 83 designates MVbase_l0 as the predicted vector PMV_L1 of direction L1 of the target PU, and the predicted vector generation process ends.
When it is determined in step S134 that MVbase_l0 is unavailable, or when it is determined in step S135 that POCcurr_l1 is not equal to POCbase_l0, the process proceeds to step S137.
In step S137, section 83 again determines whether MVbase_l1 of the base view is available.
When it is determined in step S137 that MVbase_l1 is available, the process proceeds to step S138. In step S138, section 83 scales MVbase_l1 according to POCcurr_l1, the Ref POC of the target PU, and POCbase_l1, the Ref POC of the reference PU, designates the scaled MVbase_l1 as the predicted vector PMV_L1 of direction L1 of the target PU, and ends the predicted vector generation process.
When it is determined in step S137 that MVbase_l1 is unavailable, the process proceeds to step S139.
In step S139, section 83 again determines whether MVbase_l0 of the base view is available.
When it is determined in step S139 that MVbase_l0 is available, the process proceeds to step S140. In step S140, section 83 scales MVbase_l0 according to POCcurr_l1 and POCbase_l0, the Ref POC of the reference PU, designates the scaled MVbase_l0 as the predicted vector PMV_L1 of direction L1 of the target PU, and ends the predicted vector generation process.
When it is determined in step S139 that MVbase_l0 is unavailable, the process proceeds to step S141. In step S141, section 83 determines that no predicted vector PMV_L1 of direction L1 exists for the target PU, and the predicted vector generation process ends.
In this manner, when the Ref POC (Ref 0) of the current PU differs from the Ref POC (Ref 0) of the reference PU in the different view, the MV of the reference PU is scaled, and the scaled MV serves as a candidate predicted vector of the current PU.
Thus, since a predicted vector with high correlation can be generated, the coding efficiency of the MV can be improved.
<3. Second embodiment>
[Configuration example of the multi-view decoding device]
Fig. 14 illustrates the structure of an embodiment of a decoder forming the multi-view image decoding device serving as the image processing device to which the present disclosure is applied.
For example, the multi-view image decoding device includes decoders 211-1 to 211-M for decoding multi-view images.
Decoder 211-1 decodes, from the encoded stream encoded by encoder 11-1 according to the HEVC scheme, the coded data corresponding to the color image of the non-base view, thereby generating the color image of the non-base view.
For example, decoders 211-M and 211-N, which decode the corresponding coded data of the encoded streams encoded by encoders 11-M and 11-N and generate the color images of the other views (including the base view) in frame units, are configured similarly to decoder 211-1. In addition, when a decoder for generating an information image other than the color image is also present, it too is configured similarly to decoder 211-1.
In the example of Fig. 14, decoder 211-1 is configured to include accumulation buffer 221, lossless decoding section 222, inverse quantization section 223, inverse orthogonal transform section 224, calculation section 225, loop filter 226, picture reorder buffer 227, and D/A (digital-to-analog) conversion section 228. Decoder 211-1 is further configured to include DPB 229-1, intra prediction section 230, motion compensation section 231, predicted image selection section 232, and MV memory 233-1.
Accumulation buffer 221 is a receiving section that receives the corresponding coded data in the encoded stream from encoder 11-1. Accumulation buffer 221 temporarily stores the received coded data and supplies the stored data to lossless decoding section 222. The coded data includes not only the coded data of the color image of the non-base view (the quantized residual coefficients) but also the header information.
Lossless decoding section 222 restores the quantized residual coefficients and the header information by applying variable-length decoding to the coded data from accumulation buffer 221. Lossless decoding section 222 then supplies the quantized values to inverse quantization section 223, and supplies the corresponding header information to intra prediction section 230 and motion compensation section 231.
Inverse quantization section 223 inversely quantizes the quantized residual coefficients from lossless decoding section 222 and supplies the inversely quantized residual coefficients to inverse orthogonal transform section 224.
Inverse orthogonal transform section 224 performs an inverse orthogonal transform on the transform coefficients from inverse quantization section 223 in TU units, and supplies the result to calculation section 225 in block (for example, LCU) units.
Calculation section 225 designates the block supplied from inverse orthogonal transform section 224 as the target block to be decoded, and decodes it by adding, where necessary, the predicted image supplied from predicted image selection section 232 to the target block. Calculation section 225 supplies the resulting decoded image to loop filter 226.
Loop filter 226 is formed, for example, by a deblocking filter; when the HEVC scheme is adopted, it is formed by a deblocking filter and an adaptive offset filter. Loop filter 226 performs, on the decoded image from calculation section 225, filtering corresponding, for example, to that of loop filter 31 of Fig. 4, and supplies the filtered decoded image to picture reorder buffer 227.
Picture reorder buffer 227 rearranges the picture sequence into the original sequence (display order) by temporarily storing and reading the pictures of the decoded images from loop filter 226, and supplies the rearranged result to D/A conversion section 228.
When the pictures from picture reorder buffer 227 must be output as analog signals, D/A conversion section 228 performs D/A conversion on the pictures and outputs the conversion result.
Loop filter 226 also supplies, to DPB 229-1, the decoded images of the referable pictures among the filtered decoded images, namely the intra (I) pictures, P pictures, and B pictures. Loop filter 226 supplies the unfiltered decoded images to intra prediction section 230.
Here, DPB 229-1 stores the decoded images from loop filter 226, that is, the pictures of the color image of the non-base view decoded in decoder 211-1, as (candidate) reference pictures to be referred to when generating the predicted images used in the predictive decoding performed later (the decoding in which calculation section 225 adds the predicted image). DPB 229-1 is shared by decoder 211-M of another view.
Intra prediction section 230 identifies, based on the header information (intra prediction mode) from lossless decoding section 222, whether the target block (PU) was encoded using a predicted image generated by intra prediction (intra-screen prediction).
When the target block was encoded using a predicted image generated by intra prediction, intra prediction section 230 reads, as in intra prediction section 33 of Fig. 4, the decoded part (decoded image) of the picture containing the target block (the target picture) from loop filter 226. Intra prediction section 230 then supplies part of the decoded image of the target picture read from loop filter 226 to predicted image selection section 232 as the predicted image of the target block.
Motion compensation section 231 identifies, based on the header information from lossless decoding section 222, whether the target block was encoded using a predicted image generated by inter prediction.
When the target block was encoded using a predicted image generated by inter prediction, motion compensation section 231 identifies the best prediction mode of the target block based on the header information from lossless decoding section 222.
Motion compensation section 231 performs the AMVP-mode vector prediction process when the best prediction mode is an inter prediction mode, and performs the M/S-mode (merge/skip mode) vector prediction process when the best prediction mode is the merge or skip mode.
Motion compensation section 231 reads, from the candidate pictures stored in DPB 229-1 or 229-N, the candidate picture corresponding to the reference image index (the reference picture for inter prediction or for inter-view prediction).
Then, in the AMVP mode, motion compensation section 231 generates, according to the predicted vector index in the header information from lossless decoding section 222, the predicted vector used for decoding the MV.
For example, when the predicted vector index indicates a spatial predicted vector, motion compensation section 231 generates the predicted vector using the spatially neighboring blocks in the same picture. When the predicted vector index indicates a temporal predicted vector, motion compensation section 231 generates the predicted vector by reading, from MV memory 233-1, the MV of the co-located block, or of a temporally neighboring block, in a picture of the same view at a different time. When the predicted vector index indicates an inter-view predicted vector, motion compensation section 231 generates the predicted vector by reading, from MV memory 233-N holding the MVs of the different view, the MV of the reference block (Cor PU of Fig. 1) in the different view at the same time.
Motion compensation section 231 identifies the MV representing the motion used in generating the predicted image of the target block by adding the motion information in the header information from lossless decoding section 222 to the generated predicted vector. Then, similarly to motion prediction/compensation section 34 of Fig. 4, motion compensation section 231 generates the predicted image by performing motion compensation on the reference picture according to the MV.
That is, motion compensation section 231 obtains, as the predicted image, the block (corresponding block) in the candidate picture at the position moved (displaced) from the position of the target block according to the MV of the target block.
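In integer-pel terms, obtaining the corresponding block amounts to a displaced array slice. The sketch below assumes a numpy reference picture and ignores sub-pel interpolation and picture-boundary padding, which a real codec would need.

```python
def fetch_corresponding_block(ref, bx, by, mv, w, h):
    """Predicted image of a w x h target block at (bx, by): the block in the
    reference picture displaced by the MV (integer-pel, illustrative)."""
    x, y = bx + mv[0], by + mv[1]
    return ref[y:y + h, x:x + w]
```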
In the M/S mode, motion compensation section 231 generates the MV according to the merge index in the header information from lossless decoding section 222.
For example, when the merge index indicates a spatial predicted vector, motion compensation section 231 generates the MV using the spatially neighboring blocks in the same picture. When the merge index indicates a temporal predicted vector, motion compensation section 231 generates the MV by reading, from MV memory 233-1, the MV of the co-located block, or of a temporally neighboring block, in a picture of the same view at a different time. When the merge index indicates an inter-view predicted vector, motion compensation section 231 generates the MV by reading, from MV memory 233-N holding the MVs of the different view, the MV of the reference block (Cor PU of Fig. 1) in the different view at the same time.
Then, similarly to motion prediction/compensation section 34 of Fig. 4, motion compensation section 231 generates the predicted image by performing motion compensation on the reference picture according to the MV, and supplies the predicted image to predicted image selection section 232.
When a predicted image is supplied from intra prediction section 230, predicted image selection section 232 selects that predicted image and supplies it to calculation section 225; when a predicted image is supplied from motion compensation section 231, predicted image selection section 232 selects that predicted image and supplies it to calculation section 225.
MV memory 233-1 stores the MVs determined in motion compensation section 231 as (candidate) MVs to be referred to when generating the predicted vectors for decoding MVs later. MV memory 233-1 is shared by decoder 211-M of another view.
In addition, MV memory 233-N, provided in decoder 211-N of a different view, stores the MVs determined in decoder 211-N as (candidate) MVs to be referred to when generating the predicted vectors for decoding MVs later. MV memory 233-N is shared by motion compensation section 231 and by decoder 211-M of another view.
[Structure of the motion compensation section]
Fig. 15 is a block diagram illustrating a configuration example of the motion compensation section of Fig. 14.
In the example of Fig. 15, motion compensation section 231 is configured to include automatic reference index generation section 251, AMVP mode vector prediction section 252, and M/S mode vector prediction section 253.
When the prediction mode is the merge or skip mode, the merge mode or skip mode and the merge index in the header information are supplied from lossless decoding section 222 to automatic reference index generation section 251.
Automatic reference index generation section 251 automatically generates a reference image index, and supplies the generated reference image index (Ref index) and the merge index, together with the merge mode or skip mode from lossless decoding section 222, to M/S mode vector prediction section 253.
When the prediction mode is an inter prediction mode, the inter prediction mode (inter mode), the reference image index (Ref index), the MV difference information (MVd), and the predicted vector index (MV index) are supplied from lossless decoding section 222 to AMVP mode vector prediction section 252.
AMVP mode vector prediction section 252 reads, according to the inter prediction mode, the candidate picture corresponding to the reference image index (the reference picture for inter prediction or for inter-view prediction) from the candidate pictures stored in DPB 229-1 or 229-N.
AMVP mode vector prediction section 252 generates, according to the predicted vector index, the predicted vector used for decoding the MV.
For example, when the predicted vector index indicates a spatial predicted vector, AMVP mode vector prediction section 252 generates the predicted vector using the spatially neighboring blocks in the same picture. When the predicted vector index indicates a temporal predicted vector, section 252 generates the predicted vector by reading, from MV memory 233-1, the MV of the co-located block, or of a temporally neighboring block, in a picture of the same view at a different time. When the predicted vector index indicates an inter-view predicted vector, section 252 generates the predicted vector by reading, from MV memory 233-N holding the MVs of the different view, the MV of the reference block (Cor PU of Fig. 1) in the different view at the same time.
AMVP mode vector prediction section 252 identifies the MV representing the motion used for generating the predicted image of the target block by adding the MV difference information to the generated predicted vector. Section 252 then generates the predicted image (pred. image) by performing motion compensation on the reference picture according to the MV, and the generated predicted image is supplied to predicted image selection section 232.
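Decoder-side MV reconstruction is the inverse of the encoder's subtraction; a minimal sketch with illustrative names follows.

```python
def reconstruct_mv(mv_index, mvd, candidates):
    """Recover the MV from the signalled MV index and MV difference MVd;
    `candidates` lists the predicted vectors in index order."""
    pmv = candidates[mv_index]
    return (pmv[0] + mvd[0], pmv[1] + mvd[1])
```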
M/S mode vector prediction section 253 reads, from the candidate pictures stored in DPB 229-1 or 229-N, the candidate picture corresponding to the reference image index (the reference picture for inter prediction).
M/S mode vector prediction section 253 generates the MV according to the merge index in the header information from lossless decoding section 222.
For example, when the merge index indicates a spatial predicted vector, M/S mode vector prediction section 253 generates the MV using the spatially neighboring blocks in the same picture. When the merge index indicates a temporal predicted vector, section 253 generates the MV by reading, from MV memory 233-1, the MV of the corresponding block, associated through the MV, in a picture of the same view at a different time. When the merge index indicates an inter-view predicted vector, section 253 generates the MV by reading, from MV memory 233-N holding the MVs of the different view, the MV of the reference block (Cor PU of Fig. 1) in the different view at the same time. The information of the generated MV is temporarily stored in spatial MV memory 262 of Fig. 16, described later.
M/S mode vector prediction section 253 generates the predicted image by performing motion compensation on the reference picture according to the MV, and the generated predicted image is supplied to predicted image selection section 232.
[Structure of the AMVP mode vector prediction section]
Fig. 16 is a block diagram illustrating a configuration example of the AMVP mode vector prediction section of Fig. 15.
In the example of Fig. 16, AMVP mode vector prediction section 252 is configured to include predicted image generation section 261, spatial MV memory 262, addition section 263, predicted vector generation sections 264 and 265, switch 266, and POC conversion section 267.
Predicted image generation section 261 receives, through spatial MV memory 262, the MV generated by addition section 263 adding the predicted vector and the MV difference information. Predicted image generation section 261 reads, from DPB 229-1 or 229-N, the reference image corresponding to the reference image index (Ref index) from lossless decoding section 222, and generates the predicted image (pred. image) by performing motion compensation on the read reference image according to the MV. The generated predicted image is supplied to predicted image selection section 232.
Spatial MV memory 262 stores the MVs generated by addition section 263 as candidates for the generation of predicted vectors performed later. In spatial MV memory 262, each MV is stored in units of the block (PU) for which it was obtained. The MVs of the M/S mode are also stored in spatial MV memory 262.
Addition section 263 receives, through switch 266, the predicted vector generated by predicted vector generation section 264 or predicted vector generation section 265, and generates the MV by adding the input predicted vector to the MV difference information supplied from lossless decoding section 222. Addition section 263 stores the generated MV in spatial MV memory 262.
Predicted vector generation section 264 generates a spatial predicted vector by reading, from spatial MV memory 262, the MV indicated by the predicted vector index supplied from lossless decoding section 222. Section 264 supplies the generated predicted vector to addition section 263 through switch 266.
Predicted vector generation section 265 generates a non-spatial (that is, TMVP or IVMP) predicted vector by reading, from MV memory 233-1 or 233-N, the MV indicated by the predicted vector index supplied from lossless decoding section 222. Section 265 supplies the generated predicted vector to addition section 263 through switch 266.
That is, when the predicted vector index indicates a temporal predicted vector, predicted vector generation section 265 generates the predicted vector by reading, from MV memory 233-1, the MV of the corresponding block, associated through the MV, in a picture of the same view at a different time. At this time, when the Ref POC of the target block differs from the Ref POC of the corresponding block, the MV of the corresponding block is scaled according to the POC information from POC conversion section 267, and the scaled MV serves as the predicted vector.
When the predicted vector index indicates an inter-view predicted vector, section 265 generates the predicted vector by reading, from MV memory 233-N holding the MVs of the different view, the MV of the reference block (Cor PU of Fig. 1) in the different view at the same time. At this time, when the Ref POC of the target block differs from the Ref POC of the reference block, the MV of the reference block is scaled according to the POC information from POC conversion section 267, and the scaled MV serves as the predicted vector.
POC conversion section 267 converts the reference image index (Ref index) of the target block from lossless decoding section 222 into a POC, and supplies POC information indicating the resulting POC to predicted vector generation section 265.
[Configuration example of the non-spatial predicted vector generation section]
Fig. 17 is a block diagram illustrating a configuration example of the non-spatial predicted vector generation section of Fig. 16.
In the example of Fig. 17, predicted vector generation section 265 is configured to include intra-view reference vector generation section 281 and inter-view reference vector generation section 282.
When the predicted vector index indicates a temporal (TMVP) predicted vector, the predicted vector index (MV index) is supplied from lossless decoding section 222 to intra-view reference vector generation section 281.
Intra-view reference vector generation section 281 generates the predicted vector by reading, from MV memory 233-1, the MV of the corresponding block (that is, the block associated through the MV) indicated by the predicted vector index, in a picture of the same view at a different time.
At this time, when the Ref POC of the target block differs from the Ref POC of the corresponding block, the MV of the corresponding block is scaled according to the POC information from POC conversion section 267, and the scaled MV serves as the predicted vector.
Intra-view reference vector generation section 281 supplies the generated predicted vector to addition section 263 through switch 266.
When the predicted vector index indicates an inter-view (IVMP) predicted vector, the predicted vector index (MV index) is supplied from lossless decoding section 222 to inter-view reference vector generation section 282.
Inter-view reference vector generation section 282 generates the predicted vector using IVMP. Section 282 finds a disparity vector from the MVs, held in spatial MV memory 262, of neighboring blocks adjacent to the target block, and uses the found disparity vector to locate the reference block in the different view at the same time. Section 282 then generates the predicted vector by reading, from MV memory 233-N holding the MVs of the different view, the MV of the reference block indicated by the predicted vector index.
At this time, when the Ref POC of the target block differs from the Ref POC of the reference block, the MV of the reference block is scaled according to the POC information from POC conversion section 267, and the scaled MV serves as the predicted vector.
Inter-view reference vector generation section 282 supplies the generated predicted vector to addition section 263 through switch 266.
[Operation of the decoder]
The decoding process of the decoder 211-1 of Figure 14 will be described below with reference to the flowchart of Figure 18. In addition, the decoders 211-N and 211-M for decoding the images of the other views perform a similar decoding process.
The accumulation buffer 221 temporarily holds the received coded data corresponding to the color image of the non-base view, and supplies the held coded data to the lossless decoding section 222.
In step S211, the lossless decoding section 222 decodes the quantized residual coefficients from the coded data in the accumulation buffer 221.
In step S212, the inverse quantization section 223 inversely quantizes the quantized residual coefficients from the lossless decoding section 222 into transform coefficients, and supplies the transform coefficients to the inverse orthogonal transform section 224.
In step S213, the inverse orthogonal transform section 224 performs an inverse orthogonal transform on the transform coefficients from the inverse quantization section 223, and supplies the result to the calculation section 225.
In step S214, the intra prediction section 230 determines, according to the header information (intra prediction mode) from the lossless decoding section 222, whether the prediction for the target block (PU) is intra prediction. When it is determined in step S214 that the prediction is intra prediction, the process proceeds to step S215. In step S215, the intra prediction section 230 performs intra prediction.
When it is determined in step S214 that the prediction is not intra prediction, the process proceeds to step S216. In step S216, the motion compensation section 231 performs a motion compensation process. This motion compensation process will be described later with reference to Figure 19.
In the process of step S216, when the mode is the motion prediction mode, the predicted vector corresponding to the predicted vector index is generated, and the MV is generated. In addition, the reference image corresponding to the reference image index is read, and motion compensation is performed according to the generated MV, whereby a predicted image is generated.
In the case of the M/S mode, the MV corresponding to the merge index is generated, the reference image is read, and motion compensation is performed according to the generated MV, whereby a predicted image is generated. The generated predicted image is supplied to the predicted image selecting section 232.
In step S217, the motion compensation section 231 (the addition section 263) holds the generated MV in the spatial MV memory 262.
When the predicted image is supplied from the intra prediction section 230, the predicted image selecting section 232 selects this predicted image and supplies it to the calculation section 225. When the predicted image is supplied from the motion compensation section 231, the predicted image selecting section 232 selects this predicted image and supplies it to the calculation section 225.
In step S218, the calculation section 225 adds the block (difference) supplied from the inverse orthogonal transform section 224 and the predicted image supplied from the predicted image selecting section 232. The calculation section 225 supplies the resulting decoded image to the loop filter 226.
In step S219, the loop filter 226 determines whether the LCU has ended. When it is determined in step S219 that the LCU has not ended, the process returns to step S211, and the processes from step S211 onward are repeated.
When it is determined in step S219 that the LCU has ended, the process proceeds to step S220. In step S220, the loop filter 226 eliminates (reduces) the block distortion occurring in the decoded image by filtering the decoded image from the calculation section 225.
In step S221, the loop filter 226 holds the filtered decoded image in the DPB 229-1.
In step S222, the motion compensation section 231 compresses the MVs held in step S217. That is, for example, although one MV is held for each (4 × 4) block in the spatial MV memory 262, in the same way as in the spatial MV memory 64 of Fig. 6, the MVs are compressed so that one MV is held for each (16 × 16) block. For example, the MV of the top-left block in each (16 × 16) block is selected.
Then, the motion compensation section 231 holds the compressed MVs in the MV memory 233-1.
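A minimal sketch of this 16:1 MV compression follows. It assumes the MV field is a 2-D list holding one (x, y) entry per (4 × 4) block; keeping the top-left entry of each (16 × 16) region is the rule the text describes.

```python
def compress_mv_field(mv_field_4x4):
    """Compress a per-(4x4)-block MV field to one MV per (16x16) block.

    Every 4th entry in each direction is the top-left MV of a
    (16x16) pixel region, as selected in step S222. The list-of-lists
    layout is an assumption made for this sketch.
    """
    return [row[::4] for row in mv_field_4x4[::4]]
```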
In step S224, the decoder 211-M of another view decodes the picture of the other view. In addition, this decoding process is basically similar to the decoding process of Figure 18.
The decoding process is performed as described above.
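Read as pseudocode, steps S211 through S222 amount to the loop sketched below. This is only an orientation aid under assumed method names that mirror the section names of Figure 14; it is not the apparatus's actual implementation.

```python
def decode_lcu(dec):
    """Sketch of the per-LCU decoding loop of Figure 18; names are assumptions."""
    while not dec.lcu_done():                            # S219 loop condition
        coeffs = dec.lossless_decode()                   # S211
        tcoeffs = dec.inverse_quantize(coeffs)           # S212
        residual = dec.inverse_transform(tcoeffs)        # S213
        if dec.is_intra():                               # S214
            pred = dec.intra_predict()                   # S215
        else:
            pred = dec.motion_compensate()               # S216 (Figure 19)
            dec.spatial_mv_memory.store(dec.current_mv)  # S217, inter path
        dec.reconstruct(residual, pred)                  # S218
    dec.loop_filter()                                    # S220
    dec.dpb.store(dec.filtered_picture)                  # S221
    dec.compress_and_store_mvs()                         # S222
```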
[Example of the motion compensation process]
The motion compensation process of step S216 of Figure 18 will be described below with reference to the flowchart of Figure 19.
In step S241, the lossless decoding section 222 decodes the motion prediction mode in the header information, and then determines in step S242 whether the prediction mode is the inter prediction mode.
When it is determined in step S242 that the prediction mode is the inter prediction mode, the lossless decoding section 222 supplies the inter prediction mode (inter mode), the reference image index (Ref index), the MV difference information (Mvd), and the index (MV index) of the predicted vector to the AMVP mode vector prediction section 252. Then, the process proceeds to step S243.
In step S243, the AMVP mode vector prediction section 252 performs the vector prediction of the AMVP mode. The AMVP vector prediction process will be described later with reference to the flowchart of Figure 20.
In the process of step S243, the predicted vector is generated according to the index of the predicted vector, the MV of the target block is generated by adding the MV difference information and the generated predicted vector, and a predicted image is generated according to the generated MV. The generated predicted image is supplied to the predicted image selecting section 232.
On the other hand, when it is determined in step S242 that the mode is not the inter prediction mode, the lossless decoding section 222 supplies the merge mode or the skip mode and the merge index to the automatic reference index generating section 251. Then, the process proceeds to step S244.
In step S244, the automatic reference index generating section 251 automatically generates a reference image index, and supplies the generated reference image index (Ref index) and the merge index, together with the merge mode or the skip mode from the lossless decoding section 222, to the M/S mode vector prediction section 253.
In step S245, the M/S mode vector prediction section 253 performs the vector prediction process of the merge mode or the skip mode. That is, the M/S mode vector prediction section 253 reads the candidate picture (the inter prediction reference picture) corresponding to the reference image index from the candidate pictures held in the DPB 229-1 or 229-N.
The M/S mode vector prediction section 253 generates the MV according to the merge index in the header information from the lossless decoding section 222.
For example, when the merge index indicates a spatial prediction vector, the M/S mode vector prediction section 253 generates the MV by using the neighboring spatial adjacent blocks in the same picture. When the merge index indicates a temporal prediction vector, the M/S mode vector prediction section 253 generates the MV by reading, from the MV memory 233-1, the MV of the corresponding block associated by the MV in a picture of a different time of the same view. When the merge index indicates an inter-view prediction vector, the M/S mode vector prediction section 253 generates the MV by reading the MV of the reference block (the Cor PU of Fig. 1) in the different view at the same time from the MV memory 233-N that holds the MVs of the different view.
The M/S mode vector prediction section 253 generates the predicted image by performing motion compensation of the reference picture according to the MV. The generated predicted image is supplied to the predicted image selecting section 232.
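The three merge-index cases just listed reduce to a simple dispatch, as in the hedged sketch below; the document defines no such API, so every helper name is an assumption.

```python
def merge_mode_mv(merge_index, ctx):
    """Sketch of M/S-mode MV derivation for the three merge-index cases."""
    candidate = ctx.merge_candidates[merge_index]
    if candidate.kind == "spatial":
        # Spatial candidate: MV of an adjacent block in the same picture.
        return ctx.spatial_neighbor_mv(candidate)
    if candidate.kind == "temporal":
        # Temporal candidate: MV of the corresponding block at a different
        # time of the same view, read from MV memory 233-1.
        return ctx.same_view_mvs.corresponding_block_mv(ctx.target_pu)
    # Inter-view candidate: MV of the reference block (Cor PU) in the
    # different view at the same time, read from MV memory 233-N.
    return ctx.other_view_mvs.reference_block_mv(ctx.target_pu)
```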
[The vector prediction process of the AMVP mode]
The vector prediction process of the AMVP mode will be described below with reference to Figure 20.
In step S261, the lossless decoding section 222 decodes the MV difference information (MVd) of the header information, and supplies the decoded MV difference information to the addition section 263.
In step S262, the lossless decoding section 222 decodes the reference image index of the header information, and supplies the decoded reference image index (Ref index) to the predicted image generating section 261 and the POC conversion section 267.
In step S263, the lossless decoding section 222 decodes the index of the predicted vector of the header information.
In step S264, the lossless decoding section 222 refers to the predicted vector index decoded in step S263, and determines whether the predicted vector is a spatial prediction vector.
When it is determined in step S264 that the predicted vector is a spatial prediction vector, the lossless decoding section 222 supplies the decoded predicted vector index to the predicted vector generating section 264. Then, the process proceeds to step S265.
In step S265, the predicted vector generating section 264 generates a spatial prediction vector. That is, the predicted vector generating section 264 generates the spatial prediction vector by reading, from the spatial MV memory 262, the MV indicated by the index of the predicted vector supplied from the lossless decoding section 222. The predicted vector generating section 264 supplies the generated predicted vector to the addition section 263 through the switch 266.
When it is determined in step S264 that the predicted vector is not a spatial prediction vector, the process proceeds to step S266.
In step S266, the predicted vector generating section 265 generates a non-spatial prediction vector. The process of generating the non-spatial prediction vector will be described later with reference to Figure 21.
In the process of step S266, the index of the predicted vector is supplied from the lossless decoding section 222, the MV indicated by the index of the predicted vector is read from the MV memory 233-1 or 233-N, and the non-spatial (that is, TMVP or IVMP) prediction vector is generated. The generated predicted vector is supplied to the addition section 263 through the switch 266.
In step S267, the addition section 263 generates the MV. That is, the predicted vector generated in the predicted vector generating section 264 or 265 is input to the addition section 263 through the switch 266. The addition section 263 generates the MV by adding the input predicted vector and the MV difference information supplied from the lossless decoding section 222.
In step S268, the addition section 263 accumulates the generated MV in the spatial MV memory 262. In addition, at this time, the generated MV is also supplied to the predicted image generating section 261 through the spatial MV memory 262.
In step S269, the predicted image generating section 261 generates the predicted image (pred. image). That is, the predicted image generating section 261 reads, from the DPB 229-1 or 229-N, the reference image corresponding to the reference image index (Ref index) from the lossless decoding section 222. The predicted image generating section 261 generates the predicted image by performing motion compensation on the read reference image according to the MV from the spatial MV memory 262.
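Steps S261 through S269 thus reconstruct the MV as the predictor plus the decoded difference and then motion-compensate. A hedged sketch of that flow follows, with assumed helper names only.

```python
def amvp_decode_mv(bitstream, ctx):
    """Sketch of AMVP-mode MV reconstruction per Figure 20; names assumed."""
    mvd = bitstream.decode_mvd()                      # S261
    ref_index = bitstream.decode_ref_index()          # S262
    pv_index = bitstream.decode_pv_index()            # S263
    if ctx.is_spatial(pv_index):                      # S264
        pv = ctx.spatial_mv_memory.read(pv_index)     # S265
    else:
        pv = ctx.non_spatial_predictor(pv_index)      # S266 (Figure 21)
    mv = (pv[0] + mvd[0], pv[1] + mvd[1])             # S267
    ctx.spatial_mv_memory.store(ctx.target_pu, mv)    # S268
    return ctx.motion_compensate(ref_index, mv)       # S269: predicted image
```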
[The process of generating a non-spatial prediction vector]
The process of generating the non-spatial prediction vector in step S266 of Figure 20 will be described below with reference to the flowchart of Figure 21.
In step S281, the lossless decoding section 222 refers to the index of the predicted vector decoded in step S263 of Figure 20, and determines whether the predicted vector is a temporal prediction vector. When it is determined in step S281 that the predicted vector is a temporal prediction vector, the lossless decoding section 222 supplies the index of the predicted vector to the intra-view reference vector generating section 281. Then, the process proceeds to step S282.
In step S282, the intra-view reference vector generating section 281 generates the predicted vector using TMVP. That is, the intra-view reference vector generating section 281 generates the predicted vector by reading, from the MV memory 233-1, the MV of the corresponding block (that is, the block associated by the MV) indicated by the index of the predicted vector in a picture of a different time of the same view. The generated predicted vector is supplied to the addition section 263 through the switch 266.
When it is determined in step S281 that the predicted vector is not a temporal prediction vector, the lossless decoding section 222 supplies the index of the predicted vector to the inter-view reference vector generating section 282. Then, the process proceeds to step S283.
In steps S283 to S285, the inter-view reference vector generating section 282 generates the predicted vector using IVMP.
That is, in step S283, the inter-view reference vector generating section 282 finds a disparity vector among the MVs of the adjacent blocks neighboring the target block (PU) in the spatial MV memory 262, and calculates the disparity according to the found disparity vector.
In step S284, the inter-view reference vector generating section 282 selects, as the reference PU, the PU in the different view at the position shifted by the disparity obtained in step S283.
In step S285, the inter-view reference vector generating section 282 reads the MV of the selected reference PU from the MV memory 233-N that holds the MVs of the different view, and generates the predicted vector from the MV of the selected reference PU. Since this predicted vector generating process is basically the same as the predicted vector generating process described above with reference to Figures 12 and 13, a repeated description thereof is omitted.
That is, in step S285, it is determined, according to the POC information from the POC conversion section 267, whether the reference POC (Ref POC) of the target block differs from the reference POC (Ref POC) of the reference block. When it is determined that the Ref POCs differ, the MV of the reference block is scaled, whereby the predicted vector is generated.
In the process of step S285, the generated predicted vector of the IVMP, together with the MV index indicating this predicted vector, is supplied to the vector cost determination section 63 and the subtraction section 68 through the switch 67.
As described above, even when the reference POC (Ref POC) of the target block differs from the reference POC (Ref POC) of the reference block in the different view, the scaled MV can be designated as the predicted vector by scaling the MV of the reference block. That is, the MV of the reference block of the different view can also be designated as a candidate prediction vector. Accordingly, since an MV with high correlation can be scaled and used, the effect of improving the coding efficiency is significant.
In addition, although the case of the AMVP mode has been described above in detail, this technology is also applicable to the merge mode. In the case of the merge mode, as in the case of TMVP, the Ref index is fixed to 0, and when the Ref POC of the reference PU of the base view differs from the Ref POC of the current PU, the MV of the reference PU is scaled, and the scaled MV serves as the predicted vector.
In this case, the processing circuits for TMVP and IVMP can be made common.
In addition, the example described above is one in which, when the predicted vector for the inter MV of the target block is obtained, the inter MV of a reference block in a view different from that of the target block, located at the position shifted by the disparity indicated by the disparity vector of a block adjacent to the target block, is used after being scaled in the time direction according to the POC.
On the other hand, this technology is also applicable when an inter-view MV is used as the predicted vector. That is, when the MV of the corresponding block at a different time corresponding to the target block at a certain time is an inter-view MV indicating a view different from that of the target block, the MV of the corresponding block can be scaled according to the view IDs, and the scaled MV can be used as the predicted vector of the target block.
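In that variant the scaling ratio would come from view-ID distances rather than POC distances. Since the document states the idea but gives no formula, the sketch below simply mirrors the POC-distance rule with view IDs substituted; it is an assumption, not arithmetic specified by the patent.

```python
def scale_inter_view_mv(mv, cur_view_id, cur_ref_view_id,
                        cor_view_id, cor_ref_view_id):
    """Scale an inter-view MV by the ratio of view-ID distances.

    An assumed analogue of POC-distance scaling: the text says to scale
    "according to the view id" without giving a formula, so this simply
    substitutes view IDs into the same ratio.
    """
    tb = cur_view_id - cur_ref_view_id   # view distance of the target block
    td = cor_view_id - cor_ref_view_id   # view distance of the corresponding block
    if td == tb or td == 0:
        return mv
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))
```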
As described above, the HEVC scheme is used as the base coding scheme. However, the present disclosure is not limited thereto, and other coding/decoding schemes may be applied.
In addition, for example, the present disclosure is applicable to image encoding apparatuses and image decoding apparatuses used when image information (bit streams) compressed by an orthogonal transform such as the discrete cosine transform and by motion compensation, as in the HEVC scheme and the like, is received via a network medium such as satellite broadcasting, cable television, the Internet, or a mobile phone. In addition, the present disclosure is applicable to image encoding apparatuses and image decoding apparatuses used when processing is performed on storage media such as optical discs, magnetic disks, and flash memories.
Furthermore, for example, this technology is also applicable to HTTP streaming, such as MPEG DASH, in which appropriate coded data is selected and used in units of segments from among a plurality of pieces of prepared coded data that differ from each other in resolution and the like.
<4. Third embodiment>
[Configuration example of a computer]
The series of processes described above can be executed by hardware, or can be executed by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware, and a general-purpose personal computer (PC), for example, that can execute various functions by installing various programs therein.
Figure 22 is a block diagram illustrating a configuration example of the hardware of a computer that executes the series of processes described above by a program.
In the computer 800, a central processing unit (CPU) 801, a read-only memory (ROM) 802, and a random access memory (RAM) 803 are interconnected by a bus 804.
In addition, an input/output interface (I/F) 805 is connected to the bus 804. An input section 806, an output section 807, a storage section 808, a communication section 809, and a drive 810 are connected to the input/output I/F 805.
The input section 806 is constituted by a keyboard, a mouse, a microphone, and the like. The output section 807 is constituted by a display, a speaker, and the like. The storage section 808 is constituted by a hard disk, a nonvolatile memory, and the like. The communication section 809 is constituted by a network interface and the like. The drive 810 drives a removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 801 loads the program held in the storage section 808 into the RAM 803 through the input/output I/F 805 and the bus 804, and executes the program, whereby the series of processes described above is performed.
The program executed by the computer (CPU 801) can be provided by being recorded on the removable medium 811 as a packaged medium or the like. The program can also be provided via a wired or wireless transmission medium, such as a local area network (LAN), the Internet, or digital satellite broadcasting.
In the computer, the program can be installed in the storage section 808 through the input/output I/F 805 by loading the removable medium 811 into the drive 810. The program can also be received by the communication section 809 via a wired or wireless transmission medium and installed in the storage section 808. As another alternative, the program can be installed in advance in the ROM 802 or the storage section 808.
It should be noted that the program executed by the computer may be a program that is processed chronologically in the order described in this specification, or may be a program that is processed in parallel or at the necessary timing, such as when called.
In the present disclosure, the steps describing the program recorded on a recording medium may include not only processes performed chronologically in the described order, but also processes performed in parallel or individually rather than chronologically.
In this specification, a system refers to the entire apparatus constituted by a plurality of devices (apparatuses).
In addition, an element described above as a single device (or processing unit) may be divided and configured as a plurality of devices (or processing units). Conversely, elements described above as a plurality of devices (or processing units) may be collectively configured as a single device (or processing unit). In addition, elements other than those described above may be added to each device (or processing unit). Furthermore, a part of the elements of a given device (or processing unit) may be included in the elements of another device (or another processing unit) as long as the configuration and operation of the system as a whole are substantially the same. In other words, embodiments of the present disclosure are not limited to the embodiments described above, and various changes and modifications may be made without departing from the scope of the present disclosure.
The image encoding apparatus and the image decoding apparatus according to the embodiments are applicable to various electronic appliances, such as transmitters and receivers for satellite broadcasting, wired broadcasting such as cable television, distribution on the Internet, distribution to terminals via cellular communication, and the like, recording apparatuses that record images in media such as optical discs, magnetic disks, and flash memories, and reproducing apparatuses that reproduce images from such storage media. Four applications are described below.
<5. Applications>
[First application: television receiver]
Figure 23 illustrates an example of a schematic configuration of a television set to which the embodiment is applicable. The television set 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing section 905, a display section 906, an audio signal processing section 907, a speaker 908, an external I/F 909, a control section 910, a user I/F 911, and a bus 912.
The tuner 902 extracts the signal of a desired channel from broadcast signals received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the coded bit stream obtained by the demodulation to the demultiplexer 903. That is, the tuner 902 serves as a transmission means of the television set 900 that receives a coded stream in which an image is coded.
The demultiplexer 903 demultiplexes the coded bit stream to obtain the video stream and the audio stream of the program to be viewed, and outputs each of the demultiplexed streams to the decoder 904. The demultiplexer 903 also extracts auxiliary data, such as an electronic program guide (EPG), from the coded bit stream, and supplies the extracted data to the control section 910. In addition, when the coded bit stream is scrambled, the demultiplexer 903 may perform descrambling.
The decoder 904 decodes the video stream and the audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated in the decoding process to the video signal processing section 905. The decoder 904 also outputs the audio data generated in the decoding process to the audio signal processing section 907.
The video signal processing section 905 reproduces the video data input from the decoder 904, and causes the display section 906 to display the video. The video signal processing section 905 may also cause the display section 906 to display an application screen supplied via a network. In addition, the video signal processing section 905 may perform additional processing, such as noise elimination (suppression), on the video data according to the settings. Furthermore, the video signal processing section 905 may generate an image of a graphical user interface (GUI), such as a menu, buttons, and a cursor, and superimpose the generated image on the output image.
The display section 906 is driven by a drive signal supplied from the video signal processing section 905, and displays the video or image on the video screen of a display device (for example, a liquid crystal display, a plasma display, an organic electroluminescence display (OLED), or the like).
The audio signal processing section 907 performs reproduction processing, such as D/A conversion and amplification, on the audio data input from the decoder 904, and outputs the sound from the speaker 908. The audio signal processing section 907 may also perform additional processing, such as noise elimination (suppression), on the audio data.
The external I/F 909 is an I/F for connecting the television set 900 to an external appliance or a network. For example, a video stream or an audio stream received via the external I/F 909 may be decoded by the decoder 904. That is, the external I/F 909 also serves as a transmission means of the television set 900 that receives a coded stream in which an image is coded.
The control section 910 includes a processor, such as a central processing unit (CPU), and memories, such as a random access memory (RAM) and a read-only memory (ROM). The memories hold the program executed by the CPU, program data, EPG data, data acquired via the network, and the like. When the television set 900 is started, the CPU reads and executes the program held in the memory. By executing the program, the CPU controls the operation of the television set 900 according to operation signals input from the user I/F 911.
The user I/F 911 is connected to the control section 910. For example, the user I/F 911 includes buttons and switches for the user to operate the television set 900, and a receiving section for remote control signals. The user I/F 911 detects an operation by the user through these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 910.
The bus 912 interconnects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing section 905, the audio signal processing section 907, the external I/F 909, and the control section 910.
In the television set 900 configured in this manner, the decoder 904 has the function of the image decoding apparatus 60 according to the embodiment. The coding efficiency of the coding or decoding of MVs in multi-view images can thus be improved.
[Second application: mobile phone]
Figure 24 illustrates an example of a schematic configuration of a mobile phone to which the embodiment is applicable. The mobile phone 920 includes an antenna 921, a communication section 922, an audio codec 923, a speaker 924, a microphone 925, a camera section 926, an image processing section 927, a demultiplexing section 928, a recording/reproducing section 929, a display section 930, a control section 931, an operation section 932, and a bus 933.
The antenna 921 is connected to the communication section 922. The speaker 924 and the microphone 925 are connected to the audio codec 923. The operation section 932 is connected to the control section 931. The bus 933 interconnects the communication section 922, the audio codec 923, the camera section 926, the image processing section 927, the demultiplexing section 928, the recording/reproducing section 929, the display section 930, and the control section 931.
The mobile phone 920 performs operations such as transmission and reception of audio signals, transmission and reception of e-mails or image data, image shooting, and data recording in various operation modes, including a voice call mode, a data communication mode, an imaging mode, and a videophone mode.
In the voice call mode, the analog audio signal generated by the microphone 925 is supplied to the audio codec 923. The audio codec 923 converts the analog audio signal into audio data, subjects the converted audio data to A/D conversion, and compresses the converted data. Then, the audio codec 923 outputs the compressed audio data to the communication section 922. The communication section 922 codes and modulates the audio data to generate a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. The communication section 922 also amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal to acquire a received signal. Then, the communication section 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923. The audio codec 923 expands the audio data and subjects it to D/A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output the sound.
The control section 931 also generates text data constituting an e-mail, for example, according to an operation by the user through the operation section 932. In addition, the control section 931 causes the display section 930 to display the text. Furthermore, the control section 931 generates e-mail data according to a transmission instruction from the user through the operation section 932, and outputs the generated e-mail data to the communication section 922. The communication section 922 codes and modulates the e-mail data to generate a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. The communication section 922 also amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal to acquire a received signal. Then, the communication section 922 demodulates and decodes the received signal to restore the e-mail data, and outputs the restored e-mail data to the control section 931. The control section 931 causes the display section 930 to display the content of the e-mail, and also causes the storage medium of the recording/reproducing section 929 to hold the e-mail data.
The recording/reproducing section 929 includes a readable and writable storage medium. For example, the storage medium may be a built-in storage medium, such as a RAM or a flash memory, or may be an externally mounted storage medium, such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disc, a universal serial bus (USB) memory, or a memory card.
In the imaging mode, for example, the camera section 926 takes an image of a subject to generate image data, and outputs the generated image data to the image processing section 927. The image processing section 927 codes the image data input from the camera section 926, and causes the storage medium of the recording/reproducing section 929 to hold the coded stream.
In the videophone mode, for example, the demultiplexing section 928 multiplexes the video stream coded by the image processing section 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication section 922. The communication section 922 codes and modulates the stream to generate a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. The communication section 922 also amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal to acquire a received signal. These transmission signals and received signals may include coded bit streams. Then, the communication section 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing section 928. The demultiplexing section 928 demultiplexes the input stream to obtain a video stream and an audio stream, outputs the video stream to the image processing section 927, and outputs the audio stream to the audio codec 923. The image processing section 927 decodes the video stream to generate video data. The video data is supplied to the display section 930, and a series of images is displayed by the display section 930. The audio codec 923 expands the audio stream and subjects it to D/A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924, and the sound is output.
In the mobile phone 920 configured in this manner, the image processing section 927 has the functions of the image encoding apparatus and the image decoding apparatus according to the embodiment. The coding efficiency of the coding or decoding of MVs in multi-view images can thus be improved.
[Third application: recording/reproducing apparatus]
Figure 25 illustrates an example of a schematic configuration of a recording/reproducing apparatus to which the embodiment is applicable. For example, the recording/reproducing apparatus 940 codes the audio data and video data of a received broadcast program, and records the coded data on a recording medium. The recording/reproducing apparatus 940 may also code audio data and video data acquired from another apparatus, for example, and record the coded data on a recording medium. In addition, the recording/reproducing apparatus 940 reproduces the data recorded on the recording medium on a monitor and a speaker according to an instruction from the user. At this time, the recording/reproducing apparatus 940 decodes the audio data and the video data.
The recording/reproducing apparatus 940 includes a tuner 941, an external I/F 942, an encoder 943, a hard disk drive (HDD) 944, a disc drive 945, a selector 946, a decoder 947, an on-screen display (OSD) 948, a control section 949, and a user I/F 950.
The tuner 941 extracts the signal of a desired channel from broadcast signals received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the coded bit stream obtained by the demodulation process to the selector 946. That is, the tuner 941 serves as a transmission means of the recording/reproducing apparatus 940.
The external I/F 942 is an I/F for connecting the recording/reproducing apparatus 940 to an external appliance or a network. For example, the external I/F 942 may be an Institute of Electrical and Electronics Engineers (IEEE) 1394 I/F, a network I/F, a USB I/F, a flash memory I/F, or the like. For example, video data and audio data received via the external I/F 942 are input to the encoder 943. That is, the external I/F 942 serves as a transmission means of the recording/reproducing apparatus 940.
When the video data and audio data input from the external I/F 942 are not coded, the encoder 943 codes the video data and the audio data. Then, the encoder 943 outputs the coded bit stream to the selector 946.
The HDD 944 records coded bit streams in which content data such as video and sound is compressed, various programs, and other data on an internal hard disk. The HDD 944 also reads these data from the hard disk when reproducing video or sound.
The disc drive 945 records data on a recording medium loaded therein, and reads data from the loaded recording medium. The recording medium loaded in the disc drive 945 may be a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, or the like), a Blu-ray (registered trademark) disc, or the like.
When recording video or sound, the selector 946 selects the coded bit stream input from the tuner 941 or the encoder 943, and outputs the selected coded bit stream to the HDD 944 or the disc drive 945. In addition, when reproducing video or sound, the selector 946 outputs the coded bit stream input from the HDD 944 or the disc drive 945 to the decoder 947.
The decoder 947 decodes the coded bit stream to generate video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. In addition, the decoder 947 outputs the generated audio data to an external speaker.
The OSD 948 reproduces the video data input from the decoder 947 to display the video. The OSD 948 may also superimpose an image of a GUI, such as a menu, buttons, and a cursor, on the displayed video.
The control section 949 includes a processor, such as a CPU, and memories, such as a RAM and a ROM. The memories hold the program executed by the CPU, program data, and the like. When the recording/reproducing apparatus 940 is started, the CPU reads and executes the program held in the memory. By executing the program, the CPU controls the operation of the recording/reproducing apparatus 940 according to operation signals input from the user I/F 950.
The user I/F 950 is connected to the control section 949. For example, the user I/F 950 includes buttons and switches for the user to operate the recording/reproducing apparatus 940, and a receiving section for remote control signals. The user I/F 950 detects an operation by the user through these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 949.
In the recording/reproducing apparatus 940 configured in this manner, the encoder 943 has the function of the image encoding apparatus according to the embodiment. In addition, the decoder 947 has the function of the image decoding apparatus according to the embodiment. The coding efficiency of the coding or decoding of MVs in multi-view images can thus be improved.
[Fourth application: imaging apparatus]
Figure 26 illustrates an example of a schematic configuration of an imaging apparatus to which the embodiment is applicable. The imaging apparatus 960 takes an image of a subject to generate an image, codes the image data, and records the coded image data on a recording medium.
The imaging apparatus 960 includes an optical block 961, an imaging section 962, a signal processing section 963, an image processing section 964, a display section 965, an external I/F 966, a memory 967, a media drive 968, an OSD 969, a control section 970, a user I/F 971, and a bus 972.
The optical block 961 is connected to the imaging section 962. The imaging section 962 is connected to the signal processing section 963. The display section 965 is connected to the image processing section 964. The user I/F 971 is connected to the control section 970. The bus 972 interconnects the image processing section 964, the external I/F 966, the memory 967, the media drive 968, the OSD 969, and the control section 970.
The optical block 961 includes a focus lens, an aperture mechanism, and the like. The optical block 961 forms an optical image of the subject on the imaging surface of the imaging section 962. The imaging section 962 includes an image sensor, such as a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, and converts the optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging section 962 outputs the image signal to the signal processing section 963.
The signal processing section 963 performs various kinds of camera signal processing, such as knee correction, gamma correction, and color correction, on the image signal input from the imaging section 962. The signal processing section 963 outputs the image data after the camera signal processing to the image processing section 964.
The image processing section 964 codes the image data input from the signal processing section 963 to generate coded data. Then, the image processing section 964 outputs the generated coded data to the external I/F 966 or the media drive 968. The image processing section 964 also decodes coded data input from the external I/F 966 or the media drive 968 to generate image data. Then, the image processing section 964 outputs the generated image data to the display section 965. In addition, the image processing section 964 may output the image data input from the signal processing section 963 to the display section 965 to display the image. Furthermore, the image processing section 964 may superimpose display data acquired from the OSD 969 on the image output to the display section 965.
For example, the OSD 969 generates an image of a GUI, such as a menu, buttons, and a cursor, and outputs the generated image to the image processing section 964.
The external I/F 966 is configured, for example, as a USB input/output terminal. For example, the external I/F 966 connects the imaging apparatus 960 and a printer when an image is printed. In addition, a drive is connected to the external I/F 966 as needed. A removable medium, such as a magnetic disk or an optical disc, is loaded into the drive, and a program read from the removable medium can be installed in the imaging apparatus 960. Furthermore, the external I/F 966 may be configured as a network I/F connected to a network such as a LAN or the Internet. That is, the external I/F 966 serves as a transmission means of the imaging apparatus 960.
The recording medium loaded into the media drive 968 may be a readable and writable removable medium, such as a magnetic disk, a magneto-optical disk, an optical disc, or a semiconductor memory. A recording medium may also be fixedly mounted in the media drive 968 to form a non-portable storage section, such as a built-in hard disk drive or a solid-state drive (SSD).
The control section 970 includes a processor, such as a CPU, and memories, such as a RAM and a ROM. The memories hold the program executed by the CPU, program data, and the like. When the imaging apparatus 960 is started, the CPU reads and executes the program held in the memory. By executing the program, the CPU controls the operation of the imaging apparatus 960 according to operation signals input from the user I/F 971.
The user I/F 971 is connected to the control section 970. For example, the user I/F 971 includes buttons, switches, and the like for the user to operate the imaging apparatus 960. The user I/F 971 detects an operation by the user through these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 970.
In the imaging apparatus 960 configured in this manner, the image processing section 964 has the functions of the image encoding apparatus and the image decoding apparatus according to the embodiment. The coding efficiency of the coding or decoding of MVs in multi-view images can thus be improved.
<6. Example applications of scalable video coding>
[First system]
A specific example of the use of scalable coded data obtained by scalable video coding (hierarchical coding) will be described below. As in the example illustrated in Figure 27, scalable video coding is used, for example, for selecting the data to be transmitted.
In the data transmission system 1000 illustrated in Figure 27, the distribution server 1002 reads the scalable coded data held in the scalable coded data storage section 1001, and distributes the scalable coded data via the network 1003 to terminal apparatuses such as a personal computer 1004, an AV appliance 1005, a tablet device 1006, and a mobile phone 1007.
At this time, the distribution server 1002 selects coded data of appropriate quality according to the capability of the terminal apparatus, the communication environment, and the like. Even when the distribution server 1002 transmits data of unnecessarily high quality, a high-quality image is not necessarily obtained at the terminal apparatus, and the transmission may be a cause of delay or overflow. In addition, the communication bandwidth may be occupied unnecessarily, or the load on the terminal apparatus may be increased unnecessarily. Conversely, when the distribution server 1002 transmits data of unnecessarily low quality, an image of sufficient quality cannot be obtained. The distribution server 1002 therefore reads and transmits the scalable coded data held in the scalable coded data storage section 1001, as appropriate, as coded data of a quality appropriate to the capability of the terminal apparatus, the communication environment, and the like.
For example, the scalable coded data storage section 1001 is configured to hold scalable coded data (BL+EL) 1011 obtained by scalable video coding. The scalable coded data (BL+EL) 1011 is coded data including both a base layer and an enhancement layer, and is data from which both a base layer image and an enhancement layer image can be obtained by decoding.
The distribution server 1002 selects an appropriate layer according to the capability of the terminal apparatus to which the data is transmitted, the communication environment, and the like, and reads the data of the selected layer. For example, for the personal computer 1004 or the tablet device 1006 having high processing capability, the distribution server 1002 reads the scalable coded data (BL+EL) 1011 from the scalable coded data storage section 1001 and transmits the scalable coded data (BL+EL) 1011 as it is. On the other hand, for example, for the AV appliance 1005 or the mobile phone 1007 having low processing capability, the distribution server 1002 extracts the data of the base layer from the scalable coded data (BL+EL) 1011 and transmits the extracted base layer data as low-quality scalable coded data (BL) 1012, which is data whose content is the same as that of the scalable coded data (BL+EL) 1011 but whose quality is lower.
Since the amount of data can be easily adjusted by using scalable coded data in this way, the occurrence of delay or overflow can be suppressed, and an unnecessary increase in the load on the terminal apparatus or the communication medium can be suppressed. In addition, since the redundancy between layers is reduced in the scalable coded data (BL+EL) 1011, the amount of data can be made smaller than when the coded data of each layer is treated as individual data. Therefore, the storage area of the scalable coded data storage section 1001 can be used more efficiently.
Since various appliances, from the personal computer 1004 to the mobile phone 1007, can be used as the terminal apparatus, the hardware performance of the terminal apparatus differs from apparatus to apparatus. In addition, since there are various applications executed by the terminal apparatuses, their software capabilities also differ. Furthermore, since any communication network, wired and/or wireless, such as the Internet or a local area network (LAN), can be used as the network 1003 serving as the communication medium, the data transmission capability differs as well. Moreover, the data transmission capability may change depending on other communications and the like.
Therefore, before starting the data transmission, the distribution server 1002 may communicate with the terminal apparatus that is the data transmission destination to obtain information on the capability of the terminal apparatus, such as the hardware performance of the terminal apparatus or the performance of the application (software) executed by the terminal apparatus, as well as information on the communication environment, such as the available bandwidth of the network 1003. Then, the distribution server 1002 may select an appropriate layer according to the obtained information.
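As an illustration of such selection logic, a minimal sketch follows; the thresholds and attribute names are invented for the example and are not specified in this document.

```python
# Assumed example thresholds; not values from the document.
HIGH_CAPABILITY = 100   # arbitrary capability score
BL_EL_BITRATE = 8_000   # kbps assumed for base + enhancement layers

def select_layers(terminal, network, storage):
    """Sketch of distribution-server layer selection; names are assumptions."""
    # High processing capability and enough bandwidth: send base + enhancement.
    if (terminal.processing_capability >= HIGH_CAPABILITY
            and network.available_bandwidth >= BL_EL_BITRATE):
        return storage.read("BL+EL")     # scalable coded data (BL+EL) 1011
    # Otherwise extract and send only the base layer.
    return storage.extract_base_layer()  # scalable coded data (BL) 1012
```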
In addition, the extraction of layers may be performed in the terminal apparatus. For example, the personal computer 1004 may decode the transmitted scalable coded data (BL+EL) 1011 and display the image of the base layer, or may display the image of the enhancement layer. In addition, for example, the personal computer 1004 may be configured to extract the scalable coded data (BL) 1012 of the base layer from the transmitted scalable coded data (BL+EL) 1011, hold the extracted data, transfer it to another apparatus, or decode and display the image of the base layer.
Of course, the numbers of scalable coded data storage sections 1001, distribution servers 1002, networks 1003, and terminal apparatuses are all arbitrary. In addition, although the example in which the distribution server 1002 transmits data to the terminal apparatuses has been described above, the usage example is not limited thereto. The data transmission system 1000 is applicable to any system that, when transmitting scalably coded data to a terminal apparatus, selects and transmits an appropriate layer according to the capability of the terminal apparatus, the communication environment, and the like.
Even in the data transmission system 1000 of Figure 27, effects similar to those described above with reference to Figs. 1 to 21 can be obtained by applying this technology described above with reference to Figs. 1 to 21.
[Second system]
In addition, as in the example illustrated in Figure 28, scalable video coding is used for transmission via a plurality of communication media.
In the data transmission system 1100 illustrated in Figure 28, the broadcasting station 1101 transmits the scalable coded data (BL) 1121 of the base layer by terrestrial broadcasting 1111. The broadcasting station 1101 also transmits the scalable coded data (EL) 1122 of the enhancement layer via an arbitrary network 1112 constituted by a wired and/or wireless communication network (for example, the data is packetized and transmitted).
The terminal apparatus 1102 has a function of receiving the terrestrial broadcasting 1111 broadcast by the broadcasting station 1101, and receives the scalable coded data (BL) 1121 of the base layer transmitted by the terrestrial broadcasting 1111. The terminal apparatus 1102 also has a communication function of communicating via the network 1112, and receives the scalable coded data (EL) 1122 of the enhancement layer transmitted via the network 1112.
For example, according to a user instruction or the like, the terminal apparatus 1102 decodes the scalable coded data (BL) 1121 of the base layer acquired by the terrestrial broadcasting 1111 to obtain or hold the image of the base layer, or transfers the image of the base layer to another apparatus.
In addition, for example, according to a user instruction or the like, the terminal apparatus 1102 combines the scalable coded data (BL) 1121 of the base layer acquired by the terrestrial broadcasting 1111 and the scalable coded data (EL) 1122 of the enhancement layer acquired via the network 1112 to obtain scalable coded data (BL+EL), and decodes the scalable coded data (BL+EL) to obtain or hold the image of the enhancement layer, or transfers the image of the enhancement layer to another apparatus.
As described above, the scalable coded data can be transmitted via a communication medium that differs for each layer. Therefore, the load can be distributed, and the occurrence of delay or overflow can be suppressed.
In addition, the communication medium used for the transmission of each layer may be selected according to the situation. For example, the scalable coded data (BL) 1121 of the base layer, whose data amount is relatively large, may be transmitted via a communication medium having a wide bandwidth, and the scalable coded data (EL) 1122 of the enhancement layer, whose data amount is relatively small, may be transmitted via a communication medium having a narrow bandwidth. In addition, for example, whether the communication medium for transmitting the scalable coded data (EL) 1122 of the enhancement layer is the network 1112 or the terrestrial broadcasting 1111 may be switched according to the available bandwidth of the network 1112. Of course, the same applies to the data of an arbitrary layer.
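A sketch of that per-layer medium switch might look as follows; the bitrate attribute and the comparison are assumptions for illustration only.

```python
def medium_for_enhancement_layer(network, terrestrial, el_bitrate):
    """Choose the medium for the enhancement-layer data (EL) 1122.

    A sketch of the switching rule described above; the attribute
    names are assumptions, not an API defined by the document.
    """
    if network.available_bandwidth >= el_bitrate:
        return network      # network 1112 has room for the EL stream
    return terrestrial      # otherwise fall back to terrestrial broadcasting 1111
```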
By performing control in this manner, the increase in the load of the data transmission can be further suppressed.
Of course, the number of layers is arbitrary, and the number of communication media used for the transmission is also arbitrary. In addition, the number of terminal apparatuses 1102 serving as destinations of the data distribution is also arbitrary. Furthermore, although the example of broadcasting from the broadcasting station 1101 has been described above, the usage example is not limited thereto. The data transmission system 1100 is applicable to any system that divides scalably coded data in units of layers and transmits the data via a plurality of links.
Even in the data transmission system 1100 of Figure 28, effects similar to those described above with reference to Figs. 1 to 21 can be obtained by applying this technology described above with reference to Figs. 1 to 21.
[Third system]
In addition, as in the example illustrated in Figure 29, scalable video coding is used for the storage of coded data.
In the imaging system 1200 illustrated in Figure 29, the imaging apparatus 1201 performs scalable video coding on image data obtained by taking an image of a subject 1211, and supplies the result as scalable coded data (BL+EL) 1221 to the scalable coded data storage apparatus 1202.
The scalable coded data storage apparatus 1202 holds the scalable coded data (BL+EL) 1221 supplied from the imaging apparatus 1201 with a quality corresponding to the situation. For example, in the normal situation, the scalable coded data storage apparatus 1202 extracts the data of the base layer from the scalable coded data (BL+EL) 1221, and holds the extracted data as scalable coded data (BL) 1222 of the base layer, of low quality and small data amount. On the other hand, for example, in the attention situation, the scalable coded data storage apparatus 1202 holds the scalable coded data (BL+EL) 1221, of high quality and large data amount, as it is.
In this way, since the scalable coded data storage apparatus 1202 can hold the image in high quality only when necessary, the reduction in the value of the image caused by the reduction in image quality can be suppressed while the increase in the amount of data is suppressed, so that the use efficiency of the storage area can be improved.
For example, assume that the imaging apparatus 1201 is a monitoring camera. When the monitoring target (for example, an intruder) does not appear in the captured image (the normal situation), the content of the captured image is likely to be unimportant, so priority is given to reducing the amount of data, and the image data (scalable coded data) is held in low quality. On the other hand, when the monitoring target appears in the captured image as the subject 1211 (the attention situation), the content of the captured image is likely to be important, so priority is given to image quality, and the image data (scalable coded data) is held in high quality.
For example, the scalable coded data storage apparatus 1202 may determine whether the situation is the normal situation or the attention situation by analyzing the image. Alternatively, the imaging apparatus 1201 may be configured to make the determination and transmit the determination result to the scalable coded data storage apparatus 1202.
Here, the criterion for determining whether the situation is the normal situation or the attention situation is arbitrary, and the content of the image serving as the criterion is arbitrary. Of course, a condition other than the content of the image may be used as the criterion. For example, the switching may be performed according to the volume, waveform, or the like of recorded sound, may be performed at predetermined time intervals, or may be performed by an external instruction, such as a user instruction.
In addition, although the two states of the normal situation and the attention situation have been described above, the number of states is arbitrary. For example, the configuration may switch among three or more states, such as a normal situation, a low attention situation, an attention situation, and a high attention situation. However, the upper limit of the number of states that can be switched depends on the number of layers of the scalable coded data.
In addition, the imaging apparatus 1201 may determine the number of layers of the scalable video coding according to the state, as in the sketch below. For example, in the normal situation, the imaging apparatus 1201 may generate the scalable coded data (BL) 1222 of the base layer, of low quality and small data amount, and supply the data to the scalable coded data storage apparatus 1202. In the attention situation, for example, the imaging apparatus 1201 may generate the scalable coded data (BL+EL) 1221 of the base layer and the enhancement layer, of high quality and large data amount, and supply the data to the scalable coded data storage apparatus 1202.
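A brief sketch of this state-driven choice follows, assuming an is_attention flag produced by the image analysis mentioned above; the encoder method is an assumed name, not a defined API.

```python
def encode_for_storage(encoder, frame, is_attention):
    """Sketch of state-dependent scalable coding in the imaging apparatus 1201.

    is_attention would come from image analysis (for example, a monitoring
    target detected in the frame); encode_layers is an assumed method.
    """
    if is_attention:
        # Attention situation: code base + enhancement layers, (BL+EL) 1221.
        return encoder.encode_layers(frame, layers=("BL", "EL"))
    # Normal situation: base layer only, (BL) 1222, low quality, small size.
    return encoder.encode_layers(frame, layers=("BL",))
```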
Although the monitoring camera has been described above as an example, the application of the imaging system 1200 is arbitrary and is not limited to a monitoring camera.
Even in the imaging system 1200 illustrated in Figure 29, effects similar to those described above with reference to Figs. 1 to 21 can be obtained by applying this technology described above with reference to Figs. 1 to 21.
<7. Sixth embodiment>
[Other implementation examples]
Although examples of apparatuses and systems to which this technology is applicable have been described above, this technology is not limited thereto. For example, this technology can also be implemented as a processor serving as a system large-scale integration (LSI) or the like, a module using a plurality of such processors or the like, a unit using a plurality of such modules or the like, or a set obtained by further adding other functions to such a unit (that is, as a partial configuration of an apparatus).
<Video unit>
An example in which this technology is implemented as a unit will be described below with reference to Figure 30. Figure 30 illustrates an example of a schematic configuration of a video unit to which this technology is applicable.
In recent years, electronic appliances have become increasingly multifunctional. In their research, development, and manufacturing, there are not only cases in which a partial configuration of an appliance is implemented as a configuration having a single function, but also many cases in which a plurality of configurations having related functions are combined and implemented as a single unit having a plurality of functions.
Video unit 1300 in Figure 30 shown in diagram is multifunction structures, serve as the equipment by combination with the function relevant to image coding and decoding (encode and/or decode), with the video unit that the equipment with other function relevant to described function obtains.
As illustrated in Figure 30, the video unit 1300 has a module group including a video module 1311, an external memory 1312, a power management module 1313, and a front-end module 1314, and devices having related functions such as a link block 1321, a camera 1322, and a sensor 1323.
A module is a component having an integrated function obtained by assembling several mutually related component functions. Although the specific physical configuration is arbitrary, for example, a configuration in which a plurality of processors having respective functions, electronic circuit elements such as resistors and capacitors, and other devices are arranged and integrated on a wiring board or the like can be considered. In addition, a new module in which the module is combined with other modules, processors, or the like is also possible.
In the case of the example of Figure 30, the video module 1311 is a module in which configurations having functions related to image processing are combined, and has an application processor 1331, a video processor 1332, a broadband modem 1333, and a radio frequency (RF) module 1334.
A processor is one in which configurations having predetermined functions are integrated on a semiconductor chip by a system on a chip (SoC), and is also referred to as, for example, a system large-scale integration (LSI) or the like. The configuration having the predetermined function may be a logic circuit (hardware configuration), may be a CPU, a ROM, a RAM, and the like together with a program executed using them (software configuration), or may be a combination of both. For example, a processor may have logic circuits as well as a CPU, a ROM, a RAM, and the like, with some functions implemented by the logic circuits (hardware configuration) and the other functions implemented by a program executed in the CPU (software configuration).
The application processor 1331 of Figure 30 is a processor that executes an application related to image processing. The application executed in the application processor 1331 can not only perform arithmetic processing to implement a predetermined function, but can also control configurations inside and outside the video module 1311, such as the video processor 1332, as necessary.
The video processor 1332 is a processor having functions related to image encoding and decoding (one or both of them).
The broadband modem 1333 is a processor (or module) that performs processing related to wired or wireless (or both wired and wireless) broadband communication performed over a broadband link such as the Internet or a public telephone network. For example, the broadband modem 1333 converts data (a digital signal) to be transmitted into an analog signal by digital modulation or the like, or converts a received analog signal into data (a digital signal) by demodulating the received analog signal. For example, the broadband modem 1333 can digitally modulate and demodulate any information, such as image data processed by the video processor 1332, a stream in which the image data is encoded, an application program, and setting data.
The RF module 1334 is a module that performs frequency conversion, modulation/demodulation, amplification, filtering processing, and the like on an RF signal transmitted and received through an antenna. For example, the RF module 1334 generates an RF signal by performing frequency conversion and the like on a baseband signal generated by the broadband modem 1333. In addition, for example, the RF module 1334 generates a baseband signal by performing frequency conversion and the like on an RF signal received through the front-end module 1314.
In addition, as indicated by the dotted line 1341 in Figure 30, the application processor 1331 and the video processor 1332 may be integrated to form a single processor.
The external memory 1312 is a module provided outside the video module 1311 and having a storage device used by the video module 1311. Although the storage device of the external memory 1312 may be implemented by any physical configuration, it is desirable to implement the storage device with a relatively inexpensive large-capacity semiconductor memory such as a dynamic random access memory (DRAM), because in many cases the storage device is used to store a large amount of data such as image data in units of frames.
The power management module 1313 manages and controls the power supply to the video module 1311 (each configuration in the video module 1311).
The front-end module 1314 is a module that provides a front-end function (a circuit at the transmission/reception end on the antenna side) to the RF module 1334. As illustrated in Figure 30, the front-end module 1314 has, for example, an antenna section 1351, a filter 1352, and an amplifier section 1353.
The antenna section 1351 has an antenna that transmits and receives radio signals and a peripheral configuration thereof. The antenna section 1351 transmits the signal supplied from the amplifier section 1353 in the form of a radio signal, and supplies the received radio signal to the filter 1352 in the form of an electrical signal (RF signal). The filter 1352 performs filtering processing and the like on the RF signal received through the antenna section 1351, and supplies the processed RF signal to the RF module 1334. The amplifier section 1353 amplifies the RF signal supplied from the RF module 1334 and supplies the amplified RF signal to the antenna section 1351.
The link block 1321 is a module having functions related to connection with the outside. The physical configuration of the link block 1321 is arbitrary. For example, the link block 1321 includes a configuration having a communication function other than that of the communication standard to which the broadband modem 1333 corresponds, an external input/output port, and the like.
For example, the link block 1321 may be configured to include a module having a communication function based on a wireless communication standard such as Bluetooth (registered trademark), IEEE 802.11 (e.g., Wi-Fi (registered trademark)), near field communication (NFC), or Infrared Data Association (IrDA), an antenna that transmits and receives signals based on the standard, and the like. In addition, the link block 1321 may be configured to include a module having a communication function based on a wired communication standard such as Universal Serial Bus (USB) or High-Definition Multimedia Interface (HDMI) (registered trademark), and a port based on the standard. In addition, for example, the link block 1321 may be configured to have another data (signal) transmission function such as analog input/output ports.
In addition, the link block 1321 may be configured to include a device serving as a transmission destination of data (signals). For example, the link block 1321 may be configured to have a drive that reads and writes data from and to a recording medium such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory (including not only a drive for removable media but also a hard disk, a solid-state drive (SSD), network attached storage (NAS), and the like). In addition, the link block 1321 may be configured to have image and audio output devices (a monitor, a speaker, and the like).
The camera 1322 is a module having a function of capturing an image of a subject and obtaining image data of the subject. The image data obtained by the image capturing of the camera 1322 is, for example, supplied to the video processor 1332 and encoded.
The sensor 1323 is a module having an arbitrary sensor function, such as a sound sensor, an ultrasonic sensor, an optical sensor, an illuminance sensor, an infrared sensor, an image sensor, a rotation sensor, an angle sensor, an angular velocity sensor, a velocity sensor, an acceleration sensor, an inclination sensor, a magnetic identification sensor, a shock sensor, or a temperature sensor. The data detected by the sensor 1323 is, for example, supplied to the application processor 1331 and used by an application or the like.
The configuration described above as a module may be implemented as a processor, and conversely, the configuration described as a processor may be implemented as a module.
In the video unit 1300 configured as described above, the present technology can be applied to the video processor 1332 as described later. Accordingly, the video unit 1300 can be implemented as a unit to which the present technology is applied.
[Configuration example of the video processor]
Figure 31 illustrates an example of a schematic configuration of the video processor 1332 (Figure 30) to which the present technology is applicable.
In the case of the example of Figure 31, the video processor 1332 has a function of receiving inputs of a video signal and an audio signal and encoding these inputs according to a predetermined scheme, and a function of decoding the encoded video and audio data and reproducing and outputting a video signal and an audio signal.
As illustrated in Figure 31, the video processor 1332 has a video input processing section 1401, a first image enlargement/reduction section 1402, a second image enlargement/reduction section 1403, a video output processing section 1404, a frame memory 1405, and a memory control section 1406. In addition, the video processor 1332 has an encode/decode engine 1407, video elementary stream (ES) buffers 1408A and 1408B, and audio ES buffers 1409A and 1409B. In addition, the video processor 1332 has an audio encoder 1410, an audio decoder 1411, a multiplexer (MUX) 1412, a demultiplexer (DMUX) 1413, and a stream buffer 1414.
For example, the video input processing section 1401 acquires a video signal input from the link block 1321 (Figure 30) or the like, and converts the video signal into digital image data. The first image enlargement/reduction section 1402 performs format conversion processing, image enlargement/reduction processing, and the like on the image data. The second image enlargement/reduction section 1403 performs image enlargement/reduction processing on the image data according to the format at the destination to which the image data is output through the video output processing section 1404, or performs format conversion, image enlargement/reduction processing, and the like similar to those of the first image enlargement/reduction section 1402. The video output processing section 1404 performs format conversion, conversion into an analog signal, and the like on the image data, and outputs the result as a reproduced video signal to, for example, the link block 1321 (Figure 30) and the like.
The frame memory 1405 is a memory for image data shared by the video input processing section 1401, the first image enlargement/reduction section 1402, the second image enlargement/reduction section 1403, the video output processing section 1404, and the encode/decode engine 1407. The frame memory 1405 is implemented as a semiconductor memory such as a DRAM.
The memory control section 1406 receives a synchronization signal from the encode/decode engine 1407, and controls write/read access to the frame memory 1405 according to the access schedule for the frame memory written in an access management table 1406A. The access management table 1406A is updated by the memory control section 1406 according to the processing performed by the encode/decode engine 1407, the first image enlargement/reduction section 1402, the second image enlargement/reduction section 1403, and the like.
The encode/decode engine 1407 performs encoding processing of image data and decoding processing of a video stream, which is data in which image data is encoded. For example, the encode/decode engine 1407 encodes image data read from the frame memory 1405 and sequentially writes the encoded image data into the video ES buffer 1408A as a video stream. In addition, for example, video streams are sequentially read from the video ES buffer 1408B, decoded, and sequentially written into the frame memory 1405 as image data. The encode/decode engine 1407 uses the frame memory 1405 as a work area in the encoding and decoding of the image data. In addition, for example, the encode/decode engine 1407 outputs a synchronization signal to the memory control section 1406 at the timing of starting the processing of each macroblock.
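The read-encode-write loop described above can be sketched as follows; the interfaces are hypothetical stand-ins for the hardware blocks 1405 to 1408A, not the disclosed implementation:

```python
# Minimal sketch of the encode path of the encode/decode engine (illustrative).
# frame_memory, es_buffer, memory_control, and codec stand in for the
# blocks 1405, 1408A, 1406, and 1407 described in the text.

def encode_pass(frame_memory, es_buffer, memory_control, codec):
    for picture in frame_memory.pending_pictures():
        for macroblock in picture.macroblocks():
            # A synchronization signal is issued when each macroblock starts,
            # so the memory control section can schedule frame-memory access.
            memory_control.synchronize()
            es_buffer.write(codec.encode(macroblock))
```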
The video ES buffer 1408A buffers the video stream generated by the encode/decode engine 1407 and supplies the video stream to the multiplexer (MUX) 1412. The video ES buffer 1408B buffers the video stream supplied from the demultiplexer (DMUX) 1413 and supplies the video stream to the encode/decode engine 1407.
The audio ES buffer 1409A buffers the audio stream generated by the audio encoder 1410 and supplies the audio stream to the multiplexer (MUX) 1412. The audio ES buffer 1409B buffers the audio stream supplied from the demultiplexer (DMUX) 1413 and supplies the audio stream to the audio decoder 1411.
For example, the audio encoder 1410 digitally converts an audio signal input from the link block 1321 (Figure 30) or the like, and encodes the digitally converted audio signal according to a predetermined scheme such as an MPEG audio scheme or an Audio Code number 3 (AC3) scheme. The audio encoder 1410 sequentially writes audio streams, which are data in which audio signals are encoded, into the audio ES buffer 1409A. The audio decoder 1411 decodes the audio stream supplied from the audio ES buffer 1409B, performs conversion into an analog signal and the like, and supplies the result as a reproduced audio signal to, for example, the link block 1321 (Figure 30) and the like.
The multiplexer (MUX) 1412 multiplexes the video stream and the audio stream. The multiplexing method (that is, the format of the bit stream generated by the multiplexing) is arbitrary. In addition, at the time of multiplexing, the multiplexer (MUX) 1412 can also add predetermined header information and the like to the bit stream. That is, the multiplexer (MUX) 1412 can convert the format of the streams by multiplexing. For example, by multiplexing the video stream and the audio stream, the multiplexer (MUX) 1412 performs conversion into a transport stream, which is a bit stream of a transfer format. In addition, by multiplexing the video stream and the audio stream, the multiplexer (MUX) 1412 performs conversion into data of a recording file format (file data).
The demultiplexer (DMUX) 1413 demultiplexes a bit stream in which a video stream and an audio stream are multiplexed, using a method corresponding to the multiplexing by the multiplexer (MUX) 1412. That is, the demultiplexer (DMUX) 1413 extracts the video stream and the audio stream from the bit stream read from the stream buffer 1414 (separates the video stream and the audio stream). That is, the demultiplexer (DMUX) 1413 can convert the format of the stream by demultiplexing (the inverse conversion of the conversion by the multiplexer (MUX) 1412). For example, the demultiplexer (DMUX) 1413 acquires, through the stream buffer 1414, a transport stream supplied from the link block 1321, the broadband modem 1333, or the like (all in Figure 30), and demultiplexes the acquired transport stream, thereby converting the transport stream into a video stream and an audio stream. In addition, for example, the demultiplexer (DMUX) 1413 acquires, through the stream buffer 1414, file data read from various recording media by the link block 1321 (Figure 30), and demultiplexes the acquired file data, thereby performing conversion into a video stream and an audio stream.
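As a rough illustration of this multiplexing/demultiplexing round trip, the following sketch uses a deliberately simplified toy packet framing; the stream_id values and packet layout are illustrative assumptions, not the actual transport-stream syntax:

```python
# Illustrative mux/demux round trip for a video and an audio elementary stream.
# The framing below is a toy format, not MPEG-2 TS syntax.

VIDEO_ID, AUDIO_ID = 0xE0, 0xC0

def mux(video_chunks, audio_chunks):
    """Packetize both elementary streams into one byte stream."""
    out = bytearray()
    for sid, chunks in ((VIDEO_ID, video_chunks), (AUDIO_ID, audio_chunks)):
        for payload in chunks:
            out += bytes([sid]) + len(payload).to_bytes(2, "big") + payload
    return bytes(out)

def demux(bitstream):
    """Separate the multiplexed byte stream back into video and audio chunks."""
    streams, pos = {VIDEO_ID: [], AUDIO_ID: []}, 0
    while pos < len(bitstream):
        sid = bitstream[pos]
        size = int.from_bytes(bitstream[pos + 1:pos + 3], "big")
        streams[sid].append(bitstream[pos + 3:pos + 3 + size])
        pos += 3 + size
    return streams[VIDEO_ID], streams[AUDIO_ID]
```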
The stream buffer 1414 buffers bit streams. For example, the stream buffer 1414 buffers the transport stream supplied from the multiplexer (MUX) 1412, and supplies the transport stream to, for example, the link block 1321, the broadband modem 1333, or the like (all in Figure 30) at a predetermined timing or according to an external request or the like.
In addition, for example, the stream buffer 1414 buffers the file data supplied from the multiplexer (MUX) 1412, and supplies the buffered file data to, for example, the link block 1321 (Figure 30) or the like at a predetermined timing or according to an external request or the like, so that the file data is recorded on various recording media.
In addition, the stream buffer 1414 buffers, for example, a transport stream acquired through the link block 1321, the broadband modem 1333, or the like (all in Figure 30), and supplies the transport stream to the demultiplexer (DMUX) 1413 at a predetermined timing or according to an external request or the like.
In addition, the stream buffer 1414 buffers file data read from various recording media by, for example, the link block 1321 (Figure 30) or the like, and supplies the file data to the demultiplexer (DMUX) 1413 at a predetermined timing or according to an external request or the like.
Next, an example of the operation of the video processor 1332 with such a configuration will be described. For example, a video signal input to the video processor 1332 from the link block 1321 (Figure 30) or the like is converted in the video input processing section 1401 into digital image data of a predetermined scheme such as a 4:2:2 Y/Cb/Cr scheme, and the digital image data is sequentially written into the frame memory 1405. The digital image data is read by the first image enlargement/reduction section 1402 or the second image enlargement/reduction section 1403, subjected to format conversion into a predetermined scheme such as a 4:2:0 Y/Cb/Cr scheme and to enlargement/reduction processing, and written into the frame memory 1405 again. The image data is encoded by the encode/decode engine 1407 and written into the video ES buffer 1408A as a video stream.
In addition, an audio signal input to the video processor 1332 from the link block 1321 (Figure 30) or the like is encoded by the audio encoder 1410, and the encoded audio signal is written into the audio ES buffer 1409A as an audio stream.
The video stream of the video ES buffer 1408A and the audio stream of the audio ES buffer 1409A are read by the multiplexer (MUX) 1412 and multiplexed, and thereby converted into a transport stream, file data, or the like. After the transport stream generated by the multiplexer (MUX) 1412 is buffered in the stream buffer 1414, the transport stream is output to an external network through, for example, the link block 1321, the broadband modem 1333, or the like (any of them; Figure 30). In addition, after the file data generated by the multiplexer (MUX) 1412 is buffered in the stream buffer 1414, the file data is output to, for example, the link block 1321 (Figure 30) or the like, and then recorded on various recording media.
In addition, for example, a transport stream input to the video processor 1332 from an external network through the link block 1321, the broadband modem 1333, or the like (any of them; Figure 30) is buffered in the stream buffer 1414 and then demultiplexed by the demultiplexer (DMUX) 1413. In addition, for example, file data read from various recording media by the link block 1321 (Figure 30) or the like and input to the video processor 1332 is buffered in the stream buffer 1414 and then demultiplexed by the demultiplexer (DMUX) 1413. That is, the transport stream or the file data input to the video processor 1332 is separated into a video stream and an audio stream by the demultiplexer (DMUX) 1413.
The audio stream is supplied to the audio decoder 1411 through the audio ES buffer 1409B and decoded, so that an audio signal is reproduced. In addition, after the video stream is written into the video ES buffer 1408B, the video stream is sequentially read and decoded by the encode/decode engine 1407 and written into the frame memory 1405. The decoded image data is subjected to enlargement/reduction processing by the second image enlargement/reduction section 1403 and written into the frame memory 1405. Then, the decoded image data is read by the video output processing section 1404, subjected to format conversion into a predetermined scheme such as a 4:2:2 Y/Cb/Cr scheme, and further converted into an analog signal, so that a video signal is reproduced and output.
When the present technology is applied to the video processor 1332 configured as described above, it is only necessary to apply the present technology according to each of the above-described embodiments to the encode/decode engine 1407. That is, for example, the encode/decode engine 1407 only needs to have the functions of the image encoding device (Figure 4) according to the first embodiment and the image decoding device (Figure 14) according to the second embodiment. In this way, the video processor 1332 can obtain effects similar to those described above with reference to Figures 1 to 21.
In addition, in the encode/decode engine 1407, the present technology (that is, the functions of the image encoding device and the image decoding device according to the above-described embodiments) may be implemented by hardware such as a logic circuit, by software such as an embedded program, or by both of them.
[Other configuration example of the video processor]
Figure 32 illustrates another example of a schematic configuration of the video processor 1332 (Figure 30) to which the present technology is applicable. In the case of the example of Figure 32, the video processor 1332 has a function of encoding and decoding video data according to a predetermined scheme.
More specifically, as illustrated in Figure 32, the video processor 1332 has a control section 1511, a display interface (I/F) 1512, a display engine 1513, an image processing engine 1514, and an internal memory 1515. In addition, the video processor 1332 has a codec engine 1516, a memory I/F 1517, a multiplexer/demultiplexer (MUX/DMUX) 1518, a network I/F 1519, and a video I/F 1520.
The control section 1511 controls the operation of the processing sections in the video processor 1332, such as the display I/F 1512, the display engine 1513, the image processing engine 1514, and the codec engine 1516.
As illustrated in Figure 32, the control section 1511 has, for example, a host CPU 1531, a secondary CPU 1532, and a system controller 1533. The host CPU 1531 executes a program for controlling the operation of the processing sections in the video processor 1332. The host CPU 1531 generates control signals according to the program and the like, and supplies the control signals to each processing section (that is, controls the operation of each processing section). The secondary CPU 1532 plays an auxiliary role for the host CPU 1531. For example, the secondary CPU 1532 executes child processes, subroutines, and the like of the program executed by the host CPU 1531. The system controller 1533 controls the operations of the host CPU 1531 and the secondary CPU 1532, such as designating the programs to be executed by the host CPU 1531 and the secondary CPU 1532.
Under the control of the control section 1511, the display I/F 1512 outputs image data to, for example, the link block 1321 (Figure 30) and the like. For example, the display I/F 1512 converts image data of digital data into an analog signal and outputs the analog signal as a reproduced video signal, or outputs the image data of digital data as it is, to a monitor device or the like of the link block 1321 (Figure 30).
Under the control of the control section 1511, the display engine 1513 performs various conversion processing such as format conversion, size conversion, and color gamut conversion on the image data so that the image data conforms to the hardware specifications of the monitor device or the like on which its image is displayed.
Under the control of the control section 1511, the image processing engine 1514 performs predetermined image processing on the image data, such as filtering processing for improving image quality.
The internal memory 1515 is a memory provided in the video processor 1332 and shared by the display engine 1513, the image processing engine 1514, and the codec engine 1516. The internal memory 1515 is used, for example, for the exchange of data performed between the display engine 1513, the image processing engine 1514, and the codec engine 1516. For example, the internal memory 1515 stores data supplied from the display engine 1513, the image processing engine 1514, or the codec engine 1516, and supplies the data to the display engine 1513, the image processing engine 1514, or the codec engine 1516 as necessary (for example, according to a request). Although the internal memory 1515 may be implemented by any storage device, it is desirable to implement the internal memory 1515 with a semiconductor memory of relatively small capacity and high response speed (compared with the external memory 1312), such as a static random access memory (SRAM), because in many cases the internal memory 1515 is used to store a small amount of data, such as image data in units of blocks, or parameters.
The codec engine 1516 performs processing related to the encoding and decoding of image data. The encoding and decoding scheme to which the codec engine 1516 corresponds is arbitrary, and the number of such schemes may be one or more. For example, the codec engine 1516 may have codec functions of a plurality of encoding and decoding schemes, and may perform the encoding of image data or the decoding of encoded data by a scheme selected from among them.
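As a rough illustration of a codec engine that dispatches among multiple encoding/decoding schemes, the following is a minimal sketch under assumed interfaces; the registry keys and codec classes named in the usage comments are illustrative, not the disclosed design:

```python
# Illustrative dispatch among codec functional blocks such as 1541-1545;
# the registered codec objects are placeholders, not real implementations.

class CodecEngine:
    def __init__(self):
        self._codecs = {}          # scheme name -> codec functional block

    def register(self, scheme, codec):
        self._codecs[scheme] = codec

    def encode(self, scheme, image_data):
        return self._codecs[scheme].encode(image_data)

    def decode(self, scheme, coded_data):
        return self._codecs[scheme].decode(coded_data)

# engine = CodecEngine()
# engine.register("MPEG-2 Video", Mpeg2Video1541())
# engine.register("AVC/H.264", Avc1542())
# engine.register("HEVC/H.265", Hevc1543())
# coded = engine.encode("HEVC/H.265", picture)
```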
In the example illustrated in Figure 32, the codec engine 1516 has, as functional blocks of processing related to the codec, for example, MPEG-2 Video 1541, AVC/H.264 1542, HEVC/H.265 1543, HEVC/H.265 (scalable) 1544, HEVC/H.265 (multi-view) 1545, and MPEG-DASH 1551.
MPEG-2 Video 1541 is a functional block that encodes or decodes image data according to the MPEG-2 scheme. AVC/H.264 1542 is a functional block that encodes or decodes image data according to the AVC scheme. HEVC/H.265 1543 is a functional block that encodes or decodes image data according to the HEVC scheme. HEVC/H.265 (scalable) 1544 is a functional block that performs scalable video encoding or scalable video decoding on image data according to the HEVC scheme. HEVC/H.265 (multi-view) 1545 is a functional block that performs multi-view encoding or multi-view decoding on image data according to the HEVC scheme.
MPEG-DASH 1551 is a functional block that transmits and receives image data according to the MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) scheme. MPEG-DASH is a technology for streaming video using HTTP, and has a feature of selecting, in units of segments, appropriate encoded data from a plurality of pieces of prepared encoded data having resolutions and the like different from each other, and transmitting the selected encoded data. MPEG-DASH 1551 performs generation of a standard-compliant stream, transmission control of the stream, and the like, and uses the above-described MPEG-2 Video 1541 to HEVC/H.265 (multi-view) 1545 for the encoding and decoding of image data.
The memory I/F 1517 is an interface for the external memory 1312. Data supplied from the image processing engine 1514 or the codec engine 1516 is supplied to the external memory 1312 through the memory I/F 1517. In addition, data read from the external memory 1312 is supplied to the video processor 1332 (the image processing engine 1514 or the codec engine 1516) through the memory I/F 1517.
The multiplexer/demultiplexer (MUX/DMUX) 1518 multiplexes or demultiplexes various data related to images, such as a bit stream of encoded data, image data, and a video signal. The multiplexing/demultiplexing method is arbitrary. For example, at the time of multiplexing, the multiplexer/demultiplexer (MUX/DMUX) 1518 can not only combine a plurality of pieces of data into one piece, but can also add predetermined header information and the like to the data. In addition, at the time of demultiplexing, the multiplexer/demultiplexer (MUX/DMUX) 1518 can not only divide one piece of data into a plurality of pieces, but can also add predetermined header information and the like to each divided piece of data. That is, the multiplexer/demultiplexer (MUX/DMUX) 1518 can convert the format of data by multiplexing/demultiplexing. For example, the multiplexer/demultiplexer (MUX/DMUX) 1518 can perform, by multiplexing bit streams, conversion into a transport stream, which is a bit stream of a transfer format, or into data of a recording file format (file data). Of course, the inverse conversion can also be performed by demultiplexing.
The network I/F 1519 is an I/F for the broadband modem 1333, the link block 1321 (both in Figure 30), and the like. The video I/F 1520 is an I/F for the link block 1321, the camera 1322 (both in Figure 30), and the like.
Next, an example of the operation of this video processor 1332 will be described. For example, when a transport stream is received from an external network through the link block 1321, the broadband modem 1333 (both in Figure 30), or the like, the transport stream is supplied to the multiplexer/demultiplexer (MUX/DMUX) 1518 through the network I/F 1519, demultiplexed, and decoded by the codec engine 1516. The image data obtained by the decoding processing of the codec engine 1516 is subjected to, for example, predetermined image processing by the image processing engine 1514 and predetermined conversion by the display engine 1513, the converted image data is supplied to, for example, the link block 1321 (Figure 30) or the like through the display I/F 1512, and its image is displayed on a monitor. In addition, for example, the image data obtained by the decoding processing of the codec engine 1516 is re-encoded by the codec engine 1516, multiplexed by the multiplexer/demultiplexer (MUX/DMUX) 1518 to be converted into file data, output to, for example, the link block 1321 (Figure 30) or the like through the video I/F 1520, and recorded on various recording media.
In addition, for example, file data of encoded data in which image data is encoded, read from a recording medium (not shown) by the link block 1321 (Figure 30) or the like, is supplied to the multiplexer/demultiplexer (MUX/DMUX) 1518 through the video I/F 1520, demultiplexed, and decoded by the codec engine 1516. The image data obtained by the decoding processing of the codec engine 1516 is subjected to predetermined image processing by the image processing engine 1514 and predetermined conversion by the display engine 1513, the converted image data is supplied to, for example, the link block 1321 (Figure 30) or the like through the display I/F 1512, and its image is displayed on a monitor. In addition, for example, the image data obtained by the decoding processing of the codec engine 1516 is re-encoded by the codec engine 1516, multiplexed by the multiplexer/demultiplexer (MUX/DMUX) 1518 to be converted into a transport stream, supplied to, for example, the link block 1321, the broadband modem 1333 (both in Figure 30), or the like through the network I/F 1519, and transmitted to another device (not shown).
In addition, the exchange of image data and other data between the processing sections in the video processor 1332 is performed using, for example, the internal memory 1515 or the external memory 1312. In addition, the power management module 1313 controls, for example, the power supply to the control section 1511.
When the present technology is applied to the video processor 1332 configured as described above, it is only necessary to apply the present technology according to each of the above-described embodiments to the codec engine 1516. That is, for example, the codec engine 1516 only needs to have functional blocks that implement the image encoding device (Figure 4) according to the first embodiment and the image decoding device (Figure 14) according to the second embodiment. In this way, the video processor 1332 can obtain effects similar to those described above with reference to Figures 1 to 21.
In addition, in the codec engine 1516, the present technology (that is, the functions of the image encoding device and the image decoding device according to the above-described embodiments) may be implemented by hardware such as a logic circuit, by software such as an embedded program, or by both of them.
Although two examples of the configuration of the video processor 1332 have been described above, the configuration of the video processor 1332 is arbitrary and may be different from the above-described two examples. In addition, although the video processor 1332 may be configured as one semiconductor chip, the video processor 1332 may also be configured as a plurality of semiconductor chips. For example, the video processor 1332 may be configured as a three-dimensional stacked LSI in which a plurality of semiconductors are stacked. In addition, the video processor 1332 may be implemented by a plurality of LSIs.
[Application examples for devices]
The video unit 1300 can be incorporated into various devices that process image data. For example, the video unit 1300 can be incorporated into the television set 900 (Figure 23), the mobile phone 920 (Figure 24), the recording/reproducing device 940 (Figure 25), the imaging device 960 (Figure 26), and the like. By incorporating the video unit 1300, the devices can obtain effects similar to those described above with reference to Figures 1 to 21.
In addition, for example, the video unit 1300 can also be incorporated into terminal devices such as the PC 1004, the AV device 1005, the tablet device 1006, and the mobile phone 1007 in the data transmission system 1000 of Figure 27, the broadcasting station 1101 and the terminal device 1102 in the data transmission system 1100 of Figure 28, and the imaging device 1201 and the scalable encoded data storage device 1202 in the imaging system 1200 of Figure 29. By incorporating the video unit 1300, the devices can obtain effects similar to those described above with reference to Figures 1 to 21. In addition, the video unit 1300 can be incorporated into each device of the content reproduction system of Figure 33 or the wireless communication system of Figure 39.
In addition, if a part of each configuration of the above-described video unit 1300 includes the video processor 1332, the part can be implemented as a configuration to which the present technology is applied. For example, only the video processor 1332 can be implemented as a video processor to which the present technology is applied. In addition, for example, the processor indicated by the dotted line 1341 described above, the video module 1311, and the like can be implemented as a processor, a module, and the like to which the present technology is applied. In addition, for example, the video module 1311, the external memory 1312, the power management module 1313, and the front-end module 1314 can be combined and implemented as a video unit 1361 to which the present technology is applied. Any of these configurations can obtain effects similar to those described above with reference to Figures 1 to 21.
That is, as in the case of the video unit 1300, any configuration including the video processor 1332 can be incorporated into various devices that process image data. For example, the video processor 1332, the processor indicated by the dotted line 1341, the video module 1311, or the video unit 1361 can be incorporated into the television set 900 (Figure 23), the mobile phone 920 (Figure 24), the recording/reproducing device 940 (Figure 25), the imaging device 960 (Figure 26), terminal devices such as the PC 1004, the AV device 1005, the tablet device 1006, and the mobile phone 1007 in the data transmission system 1000 of Figure 27, the broadcasting station 1101 and the terminal device 1102 in the data transmission system 1100 of Figure 28, the imaging device 1201 and the scalable encoded data storage device 1202 in the imaging system 1200 of Figure 29, and the like. In addition, the video processor 1332 can be incorporated into each device of the content reproduction system of Figure 33 or the wireless communication system of Figure 39. By incorporating any configuration to which the present technology is applied, as in the case of the video unit 1300, the devices can obtain effects similar to those described above with reference to Figures 1 to 21.
In addition, the present technology is also applicable to, for example, a content reproduction system of HTTP streaming, such as MPEG-DASH described below, in which appropriate encoded data is selected and used in units of segments from among a plurality of pieces of prepared encoded data having resolutions and the like different from each other, and to a wireless communication system of the Wi-Fi standard.
<8. Application examples of MPEG-DASH>
[Overview of the content reproduction system]
First, a content reproduction system to which the present technology is applicable will be schematically described with reference to Figures 33 to 35.
Hereinafter, a basic configuration common to the embodiments will first be described with reference to Figures 33 and 34.
Figure 33 is an explanatory diagram illustrating the configuration of the content reproduction system. As illustrated in Figure 33, the content reproduction system includes content servers 1610 and 1611, a network 1612, and a content reproducing device 1620 (client device).
The content servers 1610 and 1611 are connected to the content reproducing device 1620 through the network 1612. The network 1612 is a wired or wireless transmission path for information transmitted from devices connected to the network 1612.
For example, the network 1612 may include a public network such as the Internet, a telephone network, or a satellite communication network, or various local area networks (LANs) including Ethernet (registered trademark), a wide area network (WAN), and the like. In addition, the network 1612 may include a dedicated network such as an Internet Protocol Virtual Private Network (IP-VPN).
The content server 1610 encodes content data, and generates and stores a data file including the encoded data and meta-information of the encoded data. When the content server 1610 generates a data file of the MP4 format, the encoded data corresponds to "mdat" and the meta-information corresponds to "moov".
The content data may be music data such as music, lectures, and radio programs, video data such as movies, television programs, video programs, photographs, documents, drawings, and charts, games, software, and the like.
Here, for a reproduction request for content from the content reproducing device 1620, the content server 1610 generates a plurality of data files with different bit rates for the same content. In addition, for a reproduction request for content from the content reproducing device 1620, the content server 1611 transmits the information of the uniform resource locator (URL) of the content server 1610 to the content reproducing device 1620 by including, in the URL information, information of a parameter to be added to the URL in the content reproducing device 1620. This will be described in detail below with reference to Figure 34.
Figure 34 is an explanatory diagram illustrating the flow of data in the content reproduction system of Figure 33. The content server 1610 encodes the same content data at different bit rates, and generates, for example, a file A of 2 Mbps, a file B of 1.5 Mbps, and a file C of 1 Mbps, as illustrated in Figure 34. In relative terms, the file A has a high bit rate, the file B has a standard bit rate, and the file C has a low bit rate.
In addition, as illustrated in Figure 34, the encoded data of each file is divided into a plurality of segments. For example, the encoded data of the file A is divided into segments "A1", "A2", "A3", ..., "An", the encoded data of the file B is divided into segments "B1", "B2", "B3", ..., "Bn", and the encoded data of the file C is divided into segments "C1", "C2", "C3", ..., "Cn".
In addition, each segment may be composed of one or more pieces of encoded video data and encoded audio data that can be reproduced independently, starting with a sync sample of MP4 (for example, an Instantaneous Decoding Refresh (IDR) picture in the video encoding of AVC/H.264). For example, when video data of 30 frames per second is encoded with a group of pictures (GOP) of a fixed length of 15 frames, each segment may be the encoded video and audio data of 2 seconds corresponding to 4 GOPs, or the encoded video and audio data of 10 seconds corresponding to 20 GOPs.
In addition, the reproduction ranges (the ranges of time positions from the beginning of the content) of the segments with the same order of arrangement in the respective files are the same. For example, when the reproduction ranges of the segment "A2", the segment "B2", and the segment "C2" are the same and each segment is encoded data of 2 seconds, the reproduction range of each of the segment "A2", the segment "B2", and the segment "C2" is from 2 seconds to 4 seconds of the content.
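The alignment of reproduction ranges across bit rates can be shown with a small worked sketch; the 2-second segment duration follows the example above, and the helper name is illustrative:

```python
# Reproduction range of the k-th segment (1-indexed), assuming the 2-second
# segments of the example above. Segments with the same index cover the same
# time range in every file, which is what makes bit-rate switching seamless.

SEGMENT_SECONDS = 2

def reproduction_range(index):
    start = (index - 1) * SEGMENT_SECONDS
    return start, start + SEGMENT_SECONDS

# reproduction_range(2) -> (2, 4): segments "A2", "B2", and "C2" all cover
# seconds 2 to 4 of the content, regardless of their bit rate.
```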
When generating the files A to C composed of such a plurality of segments, the content server 1610 stores the files A to C. Then, as illustrated in Figure 34, the content server 1610 sequentially transmits the segments constituting different files to the content reproducing device 1620, and the content reproducing device 1620 reproduces the received segments by streaming reproduction.
Here, the content server 1610 according to the present embodiment transmits a playlist file (hereinafter referred to as a media presentation description (MPD)) including bit-rate information and access information of the pieces of encoded data to the content reproducing device 1620, and the content reproducing device 1620 selects one of the plurality of bit rates according to the MPD and then requests the content server 1610 to transmit the segments corresponding to the selected bit rate.
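The MPD-driven selection can be sketched as follows; the MPD is simplified to a list of bandwidth/URL pairs, and the selection rule (highest bandwidth not exceeding the measured throughput) is one reasonable policy, not necessarily the one the embodiment uses:

```python
# Illustrative bit-rate selection from a (simplified) MPD. A real MPD is an
# XML document; here it is reduced to access information per representation.

MPD = [
    {"bandwidth": 256_000,   "url": "http://example.com/256k/"},
    {"bandwidth": 1_024_000, "url": "http://example.com/1024k/"},
    {"bandwidth": 2_048_000, "url": "http://example.com/2048k/"},
]

def select_representation(mpd, measured_bps):
    """Pick the highest bit rate that the measured bandwidth can sustain."""
    feasible = [r for r in mpd if r["bandwidth"] <= measured_bps]
    return min(mpd, key=lambda r: r["bandwidth"]) if not feasible \
        else max(feasible, key=lambda r: r["bandwidth"])

# select_representation(MPD, 1_500_000) -> the 1.024 Mbps representation
```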
Although only one content server 1610 is illustrated in Figure 33, it goes without saying that the present disclosure is not limited to this example.
Figure 35 is an explanatory diagram illustrating a specific example of the MPD. As illustrated in Figure 35, the MPD includes access information related to a plurality of pieces of encoded data having different bit rates (bandwidths). For example, the MPD illustrated in Figure 35 indicates the existence of encoded data of 256 Kbps, 1.024 Mbps, 1.384 Mbps, 1.536 Mbps, and 2.048 Mbps, and includes access information related to each piece of encoded data. The content reproducing device 1620 can dynamically change the bit rate of the encoded data to be reproduced by streaming reproduction according to the MPD.
In addition, although a portable terminal is illustrated in Figure 33 as an example of the content reproducing device 1620, the content reproducing device 1620 is not limited to this example. For example, the content reproducing device 1620 may be an information processing device such as a PC, a home video processing device (a digital versatile disc (DVD) recorder, a video cassette recorder, or the like), a personal digital assistant (PDA), a home game machine, or a household electrical appliance. In addition, the content reproducing device 1620 may be an information processing device such as a mobile phone, a Personal Handy-phone System (PHS) terminal, a mobile music player, a mobile video processing device, or a portable game machine.
[Configuration of the content server 1610]
The overview of the content reproduction system has been described above with reference to Figures 33 to 35. Next, the configuration of the content server 1610 will be described with reference to Figure 36.
Figure 36 is a functional block diagram illustrating the configuration of the content server 1610. As illustrated in Figure 36, the content server 1610 includes a file generation section 1631, a storage section 1632, and a communication section 1633.
The file generation section 1631 includes an encoder 1641 that encodes content data, and generates a plurality of pieces of encoded data with mutually different bit rates for the same content, as well as the above-described MPD. For example, when generating encoded data of 256 Kbps, 1.024 Mbps, 1.384 Mbps, 1.536 Mbps, and 2.048 Mbps, the file generation section 1631 generates an MPD as illustrated in Figure 35.
The storage section 1632 stores the plurality of pieces of encoded data with different bit rates and the MPD generated by the file generation section 1631. The storage section 1632 may be a storage medium such as a nonvolatile memory, a magnetic disk, an optical disc, or a magneto-optical (MO) disc. Examples of the nonvolatile memory include an electrically erasable programmable read-only memory (EEPROM) and an erasable programmable ROM (EPROM). Examples of the magnetic disk include a hard disk and a disc-shaped magnetic disk. Examples of the optical disc include a compact disc (CD), a recordable DVD (DVD-R), and a Blu-ray Disc (BD) (registered trademark).
The communication section 1633 is an I/F with the content reproducing device 1620, and communicates with the content reproducing device 1620 through the network 1612. More specifically, the communication section 1633 has a function of serving as an HTTP server that communicates with the content reproducing device 1620 according to HTTP. For example, the communication section 1633 transmits the MPD to the content reproducing device 1620, extracts the encoded data requested according to HTTP from the content reproducing device 1620 based on the MPD, and transmits the encoded data to the content reproducing device 1620 as an HTTP response.
[Configuration of the content reproducing device 1620]
The configuration of the content server 1610 according to the present embodiment has been described above. Next, the configuration of the content reproducing device 1620 will be described with reference to Figure 37.
Figure 37 is a functional block diagram illustrating the configuration of the content reproducing device 1620. As illustrated in Figure 37, the content reproducing device 1620 includes a communication section 1651, a storage section 1652, a reproduction section 1653, a selection section 1654, and a current location acquisition section 1656.
The communication section 1651 is an I/F with the content server 1610, requests data from the content server 1610, and acquires data from the content server 1610. More specifically, the communication section 1651 has a function of serving as an HTTP client that communicates with the content server 1610 according to HTTP. For example, by using HTTP Range, the communication section 1651 can selectively acquire the MPD and the segments of the encoded data from the content server 1610.
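The selective acquisition using HTTP Range can be sketched with the standard library as follows; the host, path, and byte range are illustrative placeholders:

```python
# Illustrative HTTP Range request for part of a segment, using only the
# Python standard library. Host and path are placeholder values.

import http.client

def fetch_range(host, path, first_byte, last_byte):
    conn = http.client.HTTPConnection(host)
    # The Range header lets the client fetch only part of the resource,
    # e.g., a single segment out of a larger file.
    conn.request("GET", path, headers={"Range": f"bytes={first_byte}-{last_byte}"})
    response = conn.getresponse()        # expect 206 Partial Content
    body = response.read()
    conn.close()
    return response.status, body

# status, data = fetch_range("example.com", "/content/fileA.mp4", 0, 999_999)
```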
The storage section 1652 stores various kinds of information related to the reproduction of content. For example, the storage section 1652 sequentially buffers the segments acquired from the content server 1610 by the communication section 1651. The segments of the encoded data buffered in the storage section 1652 are sequentially supplied to the reproduction section 1653 in a first-in first-out (FIFO) manner.
In addition, the storage section 1652 stores a definition for accessing URLs, and the communication section 1651 adds a parameter to the URL of the content written in the MPD according to an instruction to add the parameter, requested from the content server 1611 described later.
The reproduction section 1653 sequentially reproduces the segments supplied from the storage section 1652. Specifically, the reproduction section 1653 performs decoding, D/A conversion, rendering, and the like of the segments.
The selection section 1654 sequentially selects, within the same content, which of the segments of the encoded data corresponding to the bit rates included in the MPD is to be acquired next. For example, when the selection section 1654 sequentially selects the segments "A1", "B2", and "A3" according to the bandwidth of the network 1612, the communication section 1651 sequentially acquires the segments "A1", "B2", and "A3" from the content server 1610, as illustrated in Figure 34.
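A minimal sketch of this per-segment switching follows, reusing the illustrative select_representation helper from the earlier MPD sketch; the bandwidth estimator and fetch/play callbacks are placeholder assumptions:

```python
# Illustrative per-segment adaptive selection, producing a sequence such as
# "A1", "B2", "A3" when the measured bandwidth dips for the second segment.
# estimate_bandwidth() is a placeholder for a real throughput estimator.

def stream(mpd, segment_count, estimate_bandwidth, fetch_segment, play):
    for index in range(1, segment_count + 1):
        representation = select_representation(mpd, estimate_bandwidth())
        segment = fetch_segment(representation["url"], index)
        play(segment)   # buffered FIFO-style and handed to the reproducer
```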
The current location acquisition section 1656 may be a section that acquires the current location of the content reproducing device 1620, and may be composed of, for example, a module that acquires the current location, such as a Global Positioning System (GPS) receiver. In addition, the current location acquisition section 1656 may be a section that acquires the current location of the content reproducing device 1620 using a wireless network.
[Configuration of the content server 1611]
Figure 38 is an explanatory diagram illustrating a configuration example of the content server 1611. As illustrated in Figure 38, the content server 1611 includes a storage section 1671 and a communication section 1672.
The storage section 1671 stores the information of the URL of the MPD. The information of the URL of the MPD is transmitted from the content server 1611 to the content reproducing device 1620 according to a request from the content reproducing device 1620 requesting the reproduction of content. In addition, when the information of the URL of the MPD is provided to the content reproducing device 1620, the storage section 1671 stores definition information for when the content reproducing device 1620 adds a parameter to the URL written in the MPD.
The communication section 1672 is an I/F with the content reproducing device 1620, and communicates with the content reproducing device 1620 through the network 1612. That is, the communication section 1672 receives a request for the information of the URL of the MPD from the content reproducing device 1620 requesting the reproduction of content, and transmits the information of the URL of the MPD to the content reproducing device 1620. The URL of the MPD transmitted from the communication section 1672 includes information for adding a parameter in the content reproducing device 1620.
The parameter to be added to the URL of the MPD in the content reproducing device 1620 can be set in various ways in the definition information shared by the content server 1611 and the content reproducing device 1620. For example, information of the current location of the content reproducing device 1620, the user ID of the user using the content reproducing device 1620, the memory size of the content reproducing device 1620, the storage capacity of the content reproducing device 1620, and the like can be added to the URL of the MPD in the content reproducing device 1620.
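A minimal sketch of this parameter addition follows; the parameter names (location, user_id, and so on) are illustrative stand-ins for whatever the shared definition information specifies:

```python
# Illustrative addition of client-side parameters to the MPD URL according to
# shared definition information. Parameter names here are assumed examples.

from urllib.parse import urlencode, urlparse, urlunparse

def add_parameters(mpd_url, current_location, user_id):
    parts = urlparse(mpd_url)
    query = urlencode({"location": current_location, "user_id": user_id})
    combined = f"{parts.query}&{query}" if parts.query else query
    return urlunparse(parts._replace(query=combined))

# add_parameters("http://example.com/content.mpd", "35.68,139.69", "user42")
# -> "http://example.com/content.mpd?location=35.68%2C139.69&user_id=user42"
```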
In the content reproduction system with the above-described configuration, effects similar to those described above with reference to Figures 1 to 21 can be obtained by applying the present technology described above with reference to Figures 1 to 21.
That is, the encoder 1641 of the content server 1610 has the function of the image encoding device (Figure 4) according to the above-described embodiment. In addition, the reproduction section 1653 of the content reproducing device 1620 has the function of the image decoding device (Figure 14) according to the above-described embodiment. Thus, the coding efficiency of the encoding or decoding of motion vectors (MVs) in multi-view images can be improved.
In addition, since the V direction of the inter-view MV can be restricted when encoded data is transmitted and received according to the present technology in the content reproduction system, the coding efficiency of the encoding or decoding of MVs in multi-view images can be improved.
<9. Application examples of the wireless communication system of the Wi-Fi standard>
[Basic operation example of the wireless communication device]
A basic operation example of the wireless communication devices in a wireless communication system to which the present technology is applicable will be described below.
First, radio packet transmission and reception are performed until a peer-to-peer (P2P) connection is established and a specific application is operated.
Next, before a connection is established in the second layer, radio packet transmission and reception are performed from the time the specific application to be used is designated until the P2P connection is established and the specific application is operated. Thereafter, after the connection in the second layer, radio packet transmission and reception are performed for the case in which the specific application is activated.
[Communication example at the start of the operation of a specific application]
Figures 39 and 40 are sequence charts illustrating an example of communication processing performed by the devices serving as the basis of wireless communication, as an example of the radio packet transmission and reception performed until the above-described P2P connection is established and a specific application is operated. Specifically, an example of a direct connection establishment procedure leading to a connection according to the Wi-Fi Direct standard (also referred to as Wi-Fi P2P), standardized by the Wi-Fi Alliance, is illustrated.
Here, in Wi-Fi Direct, a plurality of wireless communication devices detect the existence of each other (device discovery and service discovery). Then, when a connection device is selected, a direct connection is established by performing device authentication by Wi-Fi Protected Setup (WPS) with the selected device. In addition, in Wi-Fi Direct, a communication group is formed by determining the role of each of the plurality of wireless communication devices as a parent device (group owner) or a child device (client).
However, in this communication processing example, some packet transmission and reception are omitted. For example, at the time of the initial connection, as described above, packet exchange for using WPS is necessary, and packet exchange is also necessary in the exchange of an authentication request/response and the like. However, in Figures 39 and 40, illustration of these packet exchanges is omitted, and only the connection from the second time onward is illustrated.
In addition, although a communication processing example between a first wireless communication device 1701 and a second wireless communication device 1702 is illustrated in Figures 39 and 40, the same applies to communication processing between other wireless communication devices.
First, device discovery is performed between the first wireless communication device 1701 and the second wireless communication device 1702 (1711). For example, the first wireless communication device 1701 transmits a probe request (a response request signal), and receives a probe response (a response signal) to the probe request from the second wireless communication device 1702. Thus, the first wireless communication device 1701 and the second wireless communication device 1702 can discover the existence of each other. In addition, by the device discovery, the device name and type (TV, PC, smartphone, or the like) of the partner device can be acquired.
Next, service discovery is performed between the first wireless communication device 1701 and the second wireless communication device 1702 (1712). For example, the first wireless communication device 1701 transmits a service discovery query inquiring about the services to which the second wireless communication device 1702 discovered by the device discovery corresponds. Then, the first wireless communication device 1701 acquires the services to which the second wireless communication device 1702 corresponds by receiving a service discovery response from the second wireless communication device 1702. That is, by the service discovery, the services that the partner device can execute and the like can be acquired. The services that the partner device can execute are, for example, services and protocols (Digital Living Network Alliance (DLNA), Digital Media Renderer (DMR), and the like).
Next, the user performs an operation of selecting a connection partner (connection partner selection operation) (1713). This connection partner selection operation may occur in only one of the first wireless communication device 1701 and the second wireless communication device 1702. For example, a connection partner selection screen is displayed on the display section of the first wireless communication device 1701, and the second wireless communication device 1702 is selected as the connection partner on the connection partner selection screen by a user operation.
When the user performs the connection partner selection operation (1713), group owner negotiation is performed between the first wireless communication device 1701 and the second wireless communication device 1702 (1714). Figures 39 and 40 illustrate an example in which the first wireless communication device 1701 becomes a group owner 1715 and the second wireless communication device 1702 becomes a client 1716 as a result of the group owner negotiation.
Next, processes 1717 to 1720 are performed between the first wireless communication device 1701 and the second wireless communication device 1702, so that a direct connection is established. That is, association (L2 (second layer) link establishment) 1717 and secure link establishment 1718 are sequentially performed. In addition, IP address assignment 1719 and L4 setup 1720 on L3 by the Simple Service Discovery Protocol (SSDP) or the like are sequentially performed. Here, L2 (layer 2) refers to the second layer (data link layer), L3 (layer 3) refers to the third layer (network layer), and L4 (layer 4) refers to the fourth layer (transport layer).
Next, the user performs a designation or activation operation for a specific application (application designation/activation operation) (1721). This application designation/activation operation may occur in only one of the first wireless communication device 1701 and the second wireless communication device 1702. For example, an application designation/activation operation screen is displayed on the display section of the first wireless communication device 1701, and a specific application is selected on the application designation/activation operation screen by a user operation.
When the user performs the application designation/activation operation (1721), the specific application corresponding to the application designation/activation operation is executed between the first wireless communication device 1701 and the second wireless communication device 1702 (1722).
Here, a case is assumed in which a connection is established between an access point (AP) and a station (STA) within the scope of specifications earlier than the Wi-Fi Direct standard (specifications standardized by IEEE 802.11). In this case, it is difficult to know in advance, before a connection is established in the second layer (before association in IEEE 802.11 terminology), what kind of device the connection partner is.
On the other hand, as illustrated in Figures 39 and 40, in Wi-Fi Direct, information on the connection partner can be acquired when a candidate connection partner is found in the device discovery or the service discovery (option). The information on the connection partner is, for example, the basic type of the device, the specific applications to which the device corresponds, and the like. Then, the user can be allowed to select the connection partner based on the acquired information on the connection partner.
By extending this mechanism, it is also possible to implement a wireless communication system in which a specific application is designated before the connection is established in the second layer, a connection partner is selected, and the specific application is automatically activated after the selection. An example of a sequence leading to a connection in this case is illustrated in Figure 42. In addition, a configuration example of the format of the frames transmitted and received in this communication processing is illustrated in Figure 41.
[Configuration example of frame format]
Figure 41 is a diagram schematically illustrating a configuration example of the format of the frames transmitted and received in the communication process by each device serving as the basis of this technology. That is, Figure 41 illustrates a configuration example of a media access control (MAC) frame for establishing the connection in the second layer. Specifically, it is an example of the frame format of the association request/response 1787 for realizing the sequence illustrated in Figure 42.
The fields from the frame control 1751 to the sequence control 1756 constitute the MAC header. When an association request is transmitted, B3B2="0b00" and B7B6B5B4="0b0000" are set in the frame control 1751. When an association response is encapsulated, B3B2="0b00" and B7B6B5B4="0b0001" are set in the frame control 1751. Note that "0b00" denotes the binary value "00", "0b0000" denotes the binary value "0000", and "0b0001" denotes the binary value "0001".
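As a minimal sketch in Python of the frame control setting described above (the helper name is an illustrative assumption; the bit positions follow the IEEE 802.11 frame control layout, with B1B0 the protocol version, B3B2 the frame type, and B7B6B5B4 the subtype):

def frame_control(subtype: int) -> int:
    # Build a 16-bit IEEE 802.11 frame control field for a management frame.
    protocol_version = 0b00       # B1B0
    frame_type = 0b00             # B3B2: management frame ("0b00" above)
    return (protocol_version
            | (frame_type << 2)   # occupies B3B2
            | (subtype << 4))     # occupies B7B6B5B4

ASSOC_REQUEST = frame_control(0b0000)   # B7B6B5B4 = "0b0000"
ASSOC_RESPONSE = frame_control(0b0001)  # B7B6B5B4 = "0b0001"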
Here, the MAC frame illustrated in Figure 41 basically follows the association request/response frame format described in sections 7.2.3.4 and 7.2.3.5 of the IEEE 802.11-2007 specification document. It differs, however, in that it includes not only the information elements (hereinafter abbreviated as IEs) defined in that specification but also independently extended IEs.
Further, to indicate the vendor-specific IE 1760, the decimal value 127 is set in the IE type 1761. In this case, according to section 7.3.2.26 of the IEEE 802.11-2007 specification document, the length field 1762 and the organizationally unique identifier (OUI) field 1763 follow, after which the vendor-specific content 1764 is arranged.
As the content of the vendor-specific content 1764, a field indicating the type of the vendor-specific IE (IE type 1765) is provided first. After that, it is configured so that a plurality of subelements 1766 can be stored.
As the content of a subelement 1766, the name 1767 of the specific application to be used, or the device role 1768 for when the specific application operates, may be included. In addition, information on the specific application, such as the port number used for its control (information for L4 setup) 1769, and information on its capabilities (capability information) may be included. Here, for example, when the designated specific application is DLNA, the capability information is information specifying whether audio transmission/reproduction is supported, whether video transmission/reproduction is supported, and so on.
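The following Python sketch serializes the vendor-specific IE structure described above (the subelement identifiers, OUI, and vendor IE type values below are placeholder assumptions, not values defined by this technology or by IEEE 802.11):

import struct

def subelement(sub_id: int, payload: bytes) -> bytes:
    # One subelement 1766: id (1 byte) | length (1 byte) | payload.
    return bytes([sub_id, len(payload)]) + payload

def vendor_specific_ie(oui: bytes, vendor_ie_type: int, subelements: bytes) -> bytes:
    # IE type 1761 = 127 (vendor specific), then the length 1762 and OUI 1763,
    # then the vendor-specific content 1764 (IE type 1765 + subelements 1766).
    body = oui + bytes([vendor_ie_type]) + subelements
    return bytes([127, len(body)]) + body

# Hypothetical identifiers for the fields 1767-1769 described above.
APP_NAME, DEVICE_ROLE, L4_SETUP_INFO = 0, 1, 2

ie = vendor_specific_ie(
    oui=b'\x00\x00\x00',   # placeholder OUI
    vendor_ie_type=0,      # placeholder value for IE type 1765
    subelements=(subelement(APP_NAME, b'DLNA')
                 + subelement(DEVICE_ROLE, bytes([1]))
                 + subelement(L4_SETUP_INFO, struct.pack('>H', 8080))))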
In the wireless communication system having the above structure, by applying this technology as described above with reference to Figures 1 to 21, effects similar to those described with reference to Figures 1 to 21 can be obtained. That is, the coding efficiency of encoding or decoding the MVs in multi-view images can be improved. Further, in the above wireless communication system, by transmitting and receiving data encoded by this technology, the coding efficiency of encoding or decoding the MVs in multi-view images can be improved.
In addition, in this specification, an example has been described in which various kinds of information (such as the parameters of the deblocking filter or the parameters of the adaptive offset filter) are multiplexed into the encoded stream and transmitted from the encoding side to the decoding side. However, the technique for transmitting such information is not limited to this example. For example, the information may be transmitted or recorded as separate data associated with the encoded bit stream, without being multiplexed into the encoded stream. Here, the term "association" means that an image included in the bit stream (which may be a part of an image, such as a slice or a block) and the information corresponding to that image are configured so that they can be linked at the time of decoding. That is, the information may be transmitted on a transmission path separate from that of the image (or bit stream). Further, the information may be recorded on a recording medium separate from that of the image (or bit stream) (or in a separate recording area of the same recording medium). Furthermore, the information and the image (or bit stream) may be associated with each other in arbitrary units, for example, in units of multiple frames, one frame, or a part of a frame.
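As a minimal sketch of such an association (the record layout and field names are assumptions for illustration; this technology does not prescribe a particular container), the side information could be keyed to the pictures it applies to, for example by a POC range, so that the decoder can link the two even when they arrive on separate paths:

from dataclasses import dataclass

@dataclass
class AssociatedInfo:
    # Side data associated with coded pictures without being multiplexed
    # into the stream; the POC range identifies the pictures it applies to.
    poc_first: int         # first picture covered by this record
    poc_last: int          # last picture covered (equals poc_first for one frame)
    deblock_params: bytes  # e.g. deblocking filter parameters
    sao_params: bytes      # e.g. adaptive offset filter parameters

# Such records can travel on a transmission path separate from the bit
# stream, or be stored on a separate recording medium or recording area.
side_channel = [AssociatedInfo(poc_first=0, poc_last=7,
                               deblock_params=b'\x01', sao_params=b'\x02')]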
Preferred embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above examples. Those skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that such alterations and modifications naturally fall within the technical scope of the present disclosure.
Additionally, the present technology may also be configured as follows.
(1) An image processing device, including:
a predicted vector generating section configured to generate a predicted vector for encoding the motion vector (MV) of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
an MV encoding section configured to encode the MV of the current block using the predicted vector generated by the predicted vector generating section; and
an encoding section configured to generate an encoded stream by encoding the image in units having a hierarchical structure.
(2) The image processing device according to (1), wherein the predicted vector generating section generates the predicted vector by scaling the MV of the reference block according to the picture order count (POC) of the reference image of the current block and the POC of the reference image of the reference block, and adopting the scaled MV as a candidate predicted vector (a worked sketch of this scaling follows this enumeration).
(3) The image processing device according to (1) or (2), further including:
a transmission section configured to transmit the MV of the current block encoded by the MV encoding section and the encoded stream generated by the encoding section.
(4) An image processing method, including:
generating, by an image processing device, a predicted vector for encoding the MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
encoding, by the image processing device, the MV of the current block using the generated predicted vector; and
generating, by the image processing device, an encoded stream by encoding the image in units having a hierarchical structure.
(5) An image processing device, including:
a predicted vector generating section configured to generate a predicted vector for decoding the MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
an MV decoding section configured to decode the MV of the current block using the predicted vector generated by the predicted vector generating section; and
a decoding section configured to generate an image by decoding an encoded stream that has been encoded in units having a hierarchical structure.
(6) The image processing device according to (5), wherein the predicted vector generating section generates the predicted vector by scaling the MV of the reference block according to the POC of the reference image of the current block and the POC of the reference image of the reference block, and adopting the scaled MV as a candidate predicted vector.
(7) The image processing device according to (5) or (6), further including:
a receiving section configured to receive the encoded stream and the encoded MV of the current block.
(8) An image processing method, including:
generating, by an image processing device, a predicted vector for decoding the MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
decoding, by the image processing device, the MV of the current block using the generated predicted vector; and
generating, by the image processing device, an image by decoding an encoded stream that has been encoded in units having a hierarchical structure.
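As a minimal sketch of the POC-based scaling referred to in (2) and (6) above (the simple linear form below follows HEVC-style temporal MV scaling and is an assumption, not a quotation of this technology; the standard itself uses a fixed-point approximation with clipping):

def scale_mv(mv, poc_cur, poc_cur_ref, poc_ref, poc_ref_ref):
    # tb: POC distance from the current picture to its reference picture.
    # td: POC distance from the reference block's picture to its reference
    #     picture.  The scaled MV serves as the candidate predicted vector.
    tb = poc_cur - poc_cur_ref
    td = poc_ref - poc_ref_ref
    if td == 0:
        return mv  # equal POCs: no temporal distance to scale by
    scale = tb / td
    return (round(mv[0] * scale), round(mv[1] * scale))

# Example: the current picture (POC 8) refers to POC 4 (tb = 4), while the
# reference block's picture (POC 8) refers to POC 0 (td = 8), so the MV halves.
assert scale_mv((16, -8), 8, 4, 8, 0) == (8, -4)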
Reference numerals list
11-1, 11-N, 11-M encoders
26 lossless encoding section
32-1, 32-N, 32-M DPB
34 motion prediction/compensation section
36-1, 36-N, 36-M MV memories
51 motion prediction mode generating section
52 automatic reference index generating section
53 AMVP mode vector prediction section
54 M/S mode vector prediction section
55 mode determination section
61 vector search section
62 predicted image generating section
63 vector cost determination section
64 spatial MV memory
65, 66 predicted vector generating sections
67 switch
68 subtraction section
69 POC conversion section
81 predicted vector index generating section
82 intra-view reference vector generating section
83 inter-view reference vector generating section
211-1, 211-N, 211-M decoders
222 lossless decoding section
233-1, 233-N, 233-M DPB
231 motion compensation section
229-1, 229-N, 229-M MV memories
251 automatic reference index generating section
252 AMVP mode vector prediction section
253 M/S mode vector prediction section
261 predicted image generating section
262 spatial MV memory
263 addition section
264, 265 predicted vector generating sections
266 switch
267 POC conversion section
281 intra-view reference vector generating section
282 inter-view reference vector generating section

Claims (8)

1. An image processing device, comprising:
a predicted vector generating section configured to generate a predicted vector for encoding the motion vector MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
an MV encoding section configured to encode the MV of the current block using the predicted vector generated by the predicted vector generating section; and
an encoding section configured to generate an encoded stream by encoding the image in units having a hierarchical structure.
2. The image processing device according to claim 1, wherein the predicted vector generating section generates the predicted vector by scaling the MV of the reference block according to the picture order count POC of the reference image of the current block and the POC of the reference image of the reference block, and adopting the scaled MV as a candidate predicted vector.
3. The image processing device according to claim 1, further comprising:
a transmission section configured to transmit the MV of the current block encoded by the MV encoding section and the encoded stream generated by the encoding section.
4. An image processing method, comprising:
generating, by an image processing device, a predicted vector for encoding the MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
encoding, by the image processing device, the MV of the current block using the generated predicted vector; and
generating, by the image processing device, an encoded stream by encoding the image in units having a hierarchical structure.
5. An image processing device, comprising:
a predicted vector generating section configured to generate a predicted vector for decoding the MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
an MV decoding section configured to decode the MV of the current block using the predicted vector generated by the predicted vector generating section; and
a decoding section configured to generate an image by decoding an encoded stream that has been encoded in units having a hierarchical structure.
6. The image processing device according to claim 5, wherein the predicted vector generating section generates the predicted vector by scaling the MV of the reference block according to the POC of the reference image of the current block and the POC of the reference image of the reference block, and adopting the scaled MV as a candidate predicted vector.
7. The image processing device according to claim 5, further comprising:
a receiving section configured to receive the encoded stream and the encoded MV of the current block.
8. An image processing method, comprising:
generating, by an image processing device, a predicted vector for decoding the MV of a current block by scaling the MV of a reference block according to the reference destination of the current block and the reference destination of the reference block, the reference block being a block in an image of a different viewpoint located at a position shifted, relative to the position of the current block, by a disparity obtained from the periphery of the current block in an image of a non-base view;
decoding, by the image processing device, the MV of the current block using the generated predicted vector; and
generating, by the image processing device, an image by decoding an encoded stream that has been encoded in units having a hierarchical structure.
CN201380049112.XA 2012-09-28 2013-09-19 Image processing device and method Expired - Fee Related CN104662907B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2012-218304 2012-09-28
JP2012218304 2012-09-28
PCT/JP2013/075226 WO2014050675A1 (en) 2012-09-28 2013-09-19 Image processing device and method

Publications (2)

Publication Number Publication Date
CN104662907A true CN104662907A (en) 2015-05-27
CN104662907B CN104662907B (en) 2019-05-03

Family

ID=50388080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380049112.XA Expired - Fee Related CN104662907B (en) 2012-09-28 2013-09-19 Image processing equipment and method

Country Status (13)

Country Link
US (2) US10516894B2 (en)
EP (1) EP2903286A4 (en)
JP (1) JP6172155B2 (en)
KR (1) KR20150065674A (en)
CN (1) CN104662907B (en)
AU (1) AU2013321313B2 (en)
BR (1) BR112015006317A2 (en)
CA (1) CA2885642A1 (en)
MX (1) MX339432B (en)
MY (1) MY186413A (en)
RU (1) RU2639647C2 (en)
SG (1) SG11201502200RA (en)
WO (1) WO2014050675A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3907999B1 (en) 2010-09-02 2023-11-22 LG Electronics, Inc. Inter prediction
EP2899984B1 (en) * 2012-09-28 2019-11-20 Sony Corporation Image processing device and method
TWI652936B 2013-04-02 2019-03-01 Vid Scale, Inc. Enhanced temporal motion vector prediction for scalable video coding
US10218644B1 (en) * 2016-09-21 2019-02-26 Apple Inc. Redundant communication path transmission
KR20230169429A (en) * 2018-04-13 2023-12-15 엘지전자 주식회사 Method and apparatus for inter prediction in video processing system
US11418793B2 (en) * 2018-10-04 2022-08-16 Qualcomm Incorporated Adaptive affine motion vector coding
KR102192980B1 (en) * 2018-12-13 2020-12-18 주식회사 픽스트리 Image processing device of learning parameter based on machine Learning and method of the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102257818A (en) * 2008-10-17 2011-11-23 诺基亚公司 Sharing of motion vector in 3d video coding
US20120026970A1 (en) * 2000-01-11 2012-02-02 At&T Intellectual Property Ii, L.P. System and method for selecting a transmission channel in a wireless communication system that includes an adaptive array
US20120224634A1 (en) * 2011-03-01 2012-09-06 Fujitsu Limited Video decoding method, video coding method, video decoding device, and computer-readable recording medium storing video decoding program

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7710462B2 (en) * 2004-12-17 2010-05-04 Mitsubishi Electric Research Laboratories, Inc. Method for randomly accessing multiview videos
US8644386B2 (en) * 2005-09-22 2014-02-04 Samsung Electronics Co., Ltd. Method of estimating disparity vector, and method and apparatus for encoding and decoding multi-view moving picture using the disparity vector estimation method
ZA200805337B (en) * 2006-01-09 2009-11-25 Thomson Licensing Method and apparatus for providing reduced resolution update mode for multiview video coding
KR20080066522A (en) 2007-01-11 2008-07-16 삼성전자주식회사 Method and apparatus for encoding and decoding multi-view image
CN101690220B (en) * 2007-04-25 2013-09-25 Lg电子株式会社 A method and an apparatus for decoding/encoding a video signal
CN101785317B (en) * 2007-08-15 2013-10-16 汤姆逊许可证公司 Methods and apparatus for motion skip mode in multi-view coded video using regional disparity vectors
US9124898B2 (en) * 2010-07-12 2015-09-01 Mediatek Inc. Method and apparatus of temporal motion vector prediction
US9398308B2 (en) * 2010-07-28 2016-07-19 Qualcomm Incorporated Coding motion prediction direction in video coding
US8755437B2 (en) * 2011-03-17 2014-06-17 Mediatek Inc. Method and apparatus for derivation of spatial motion vector candidate and motion vector prediction candidate
WO2012124121A1 (en) * 2011-03-17 2012-09-20 富士通株式会社 Moving image decoding method, moving image coding method, moving image decoding device and moving image decoding program
US9485517B2 (en) 2011-04-20 2016-11-01 Qualcomm Incorporated Motion vector prediction with motion vectors from multiple views in multi-view video coding
JP5786478B2 (en) * 2011-06-15 2015-09-30 富士通株式会社 Moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program
US20130188718A1 (en) * 2012-01-20 2013-07-25 Qualcomm Incorporated Motion prediction in svc without including a temporally neighboring block motion vector in a candidate list
US9445076B2 (en) * 2012-03-14 2016-09-13 Qualcomm Incorporated Disparity vector construction method for 3D-HEVC
US20130343459A1 (en) * 2012-06-22 2013-12-26 Nokia Corporation Method and apparatus for video coding
US9635356B2 (en) * 2012-08-07 2017-04-25 Qualcomm Incorporated Multi-hypothesis motion compensation for scalable video coding and 3D video coding
US20140085415A1 (en) * 2012-09-27 2014-03-27 Nokia Corporation Method and apparatus for video coding
WO2014049196A1 (en) * 2012-09-27 2014-04-03 Nokia Corporation Method and techniqal equipment for scalable video coding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120026970A1 (en) * 2000-01-11 2012-02-02 At&T Intellectual Property Ii, L.P. System and method for selecting a transmission channel in a wireless communication system that includes an adaptive array
CN102257818A (en) * 2008-10-17 2011-11-23 诺基亚公司 Sharing of motion vector in 3d video coding
US20120224634A1 (en) * 2011-03-01 2012-09-06 Fujitsu Limited Video decoding method, video coding method, video decoding device, and computer-readable recording medium storing video decoding program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GERHARD TECH ET AL.: "3D-HEVC Test Model 1", Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, Document: JCT3V-A1005_d0 *

Also Published As

Publication number Publication date
AU2013321313B2 (en) 2016-10-27
EP2903286A4 (en) 2016-05-25
MY186413A (en) 2021-07-22
WO2014050675A1 (en) 2014-04-03
US10516894B2 (en) 2019-12-24
JPWO2014050675A1 (en) 2016-08-22
EP2903286A1 (en) 2015-08-05
JP6172155B2 (en) 2017-08-02
MX2015003590A (en) 2015-06-22
RU2639647C2 (en) 2017-12-21
CN104662907B (en) 2019-05-03
MX339432B (en) 2016-05-26
AU2013321313A1 (en) 2015-03-26
SG11201502200RA (en) 2015-07-30
BR112015006317A2 (en) 2017-07-04
US10917656B2 (en) 2021-02-09
CA2885642A1 (en) 2014-04-03
KR20150065674A (en) 2015-06-15
US20200007887A1 (en) 2020-01-02
US20150237365A1 (en) 2015-08-20
RU2015109932A (en) 2016-10-10

Similar Documents

Publication Publication Date Title
RU2667719C1 (en) Image processing device and image processing method
KR102179087B1 (en) Decoding device, and decoding method
RU2662922C2 (en) Image encoding method and device
KR102245026B1 (en) Image processing device and method
CN104662907B (en) Image processing equipment and method
CN104798374A (en) Image processing device and method
KR20170020746A (en) Image encoding apparatus and method, and image decoding apparatus and method
CN104380739A (en) Image processing device and image processing method
CN104838659A (en) Image processing device and method
CN105230017A (en) Picture coding device and method and picture decoding apparatus and method
CN104584551A (en) Decoding device, decoding method, encoding device, and encoding method
CN104854868A (en) Image processing device and method
RU2649758C2 (en) Image processing device and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190503