CN103533361B - Determination method, encoding device and the decoding device of multi-view disparity vector - Google Patents
Publication number: CN103533361B (application CN201310497296.8A) · Authority: CN (China) · Legal status: Expired - Fee Related
Abstract
Embodiments provide a method for determining a multi-view disparity vector, used during the encoding or decoding of an image or video. The method includes: determining, according to the neighboring-block information of a prediction unit (PU) of the current View, the reference View of the PU, and determining the NBDV of the PU, where the current View is a View other than the base View; and determining the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU. By determining the reference View of the PU from the neighboring-block information of the PU of the current View, the embodiments can obtain a more suitable reference View and can therefore improve the accuracy of disparity vector (DV) calculation.
Description
Technical field
Embodiments of the present invention relate to the field of image processing, and more particularly to a method for determining a multi-view disparity vector, an encoding device, and a decoding device.
Background
In the encoding and decoding of multi-view images, inter-view prediction is included among the inter-frame prediction techniques in order to further eliminate information redundancy between images of different viewpoints. A disparity vector (Disparity Vector, DV) represents the vector correspondence between similar macroblocks in images of different viewpoints at the same moment. The DV can determine the position of the corresponding macroblock between viewpoints for predictive encoding/decoding, and the similarity of the corresponding macroblocks determines the degree to which inter-view image redundancy can be eliminated. Whether the DV is accurate therefore directly affects the performance of predictive encoding/decoding.
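As an illustrative sketch (the function name and the simple coordinate model are assumptions of this note, not part of the patent), a DV maps a block position in the current view to the corresponding block in a reference view:

```python
# Illustrative sketch (not from the patent): a disparity vector maps a block
# position in the current view to the corresponding block in a reference view.
# For rectified horizontal camera setups the DV is mostly horizontal.

def corresponding_block(x, y, dv):
    """Shift block coordinates (x, y) by disparity vector dv = (dx, dy)."""
    dx, dy = dv
    return (x + dx, y + dy)

# A block at (128, 64) in the current view with DV (-17, 0) corresponds to
# the block at (111, 64) in the reference view.
print(corresponding_block(128, 64, (-17, 0)))  # (111, 64)
```

The more accurately the DV is estimated, the closer this corresponding block is to the true match, and the more inter-view redundancy can be removed.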
In existing DV calculation methods based on the prediction unit (Prediction Unit, PU), the reference View of the PUs of different viewpoints (Views) is fixed, which results in inaccurate calculated DVs.
Summary of the invention
Embodiments of the present invention provide a method for determining a multi-view disparity vector, used during the encoding or decoding of an image or video, which can improve the accuracy of DV calculation.
According to a first aspect, a method for determining a multi-view disparity vector is provided, used during the encoding or decoding of an image or video. The method includes: determining, according to the neighboring-block information of a prediction unit (PU) of the current viewpoint (View), the reference View of the PU, and determining the neighboring-block disparity vector (Neighborhood Block Disparity Vector, NBDV) of the PU, where the current View is a View other than the first View on which multi-view encoding/decoding is first performed; and determining, according to the reference View of the PU and the NBDV of the PU, the depth-oriented neighboring-block disparity vector (Depth-oriented Neighborhood Block Disparity Vector, Do-NBDV) of the PU.
With reference to the first aspect, in a first possible implementation, the method further includes: performing multi-view encoding or decoding according to the Do-NBDV of the PU.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation, the determining, according to the neighboring-block information of the PU of the current View, the reference View of the PU, and determining the NBDV of the PU includes: determining the temporal neighboring block of the PU; obtaining a first motion vector (Motion Vector, MV) of the temporal neighboring block and a first reference View of the temporal neighboring block; and determining that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the determining, according to the neighboring-block information of the PU of the current View, the reference View of the PU, and determining the NBDV of the PU further includes: when the temporal neighboring block of the PU is unavailable, determining a first spatial neighboring block of the PU, where the first spatial neighboring block is the left neighboring block of the PU; obtaining a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block; and determining that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the determining, according to the neighboring-block information of the PU of the current View, the reference View of the PU, and determining the NBDV of the PU further includes: when the first spatial neighboring block of the PU is unavailable, determining a second spatial neighboring block of the PU, where the second spatial neighboring block is the above neighboring block of the PU; obtaining a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block; and determining that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation, the determining, according to the neighboring-block information of the PU of the current View, the reference View of the PU, and determining the NBDV of the PU further includes: when the second spatial neighboring block of the PU is unavailable, determining the motion-compensated prediction block of the PU; obtaining a fourth MV of the motion-compensated prediction block and a fourth reference View of the motion-compensated prediction block; and determining that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.
With reference to the fifth possible implementation of the first aspect, in a sixth possible implementation, the determining, according to the neighboring-block information of the PU of the current View, the reference View of the PU, and determining the NBDV of the PU further includes: when the PU has no motion-compensated prediction block, determining that the NBDV of the PU is zero; and when the reference frame of the PU and the reference frame of the corresponding PU of a fifth reference View are reference frames at the same time instant, determining that the reference View of the PU is the fifth reference View.
With reference to the first aspect or any of the foregoing possible implementations of the first aspect, in a seventh possible implementation, the determining the Do-NBDV of the PU according to the NBDV of the PU and the reference View of the PU includes: obtaining texture-map information of the reference View of the PU and depth-map information of the reference View of the PU corresponding to the texture-map information; and determining the Do-NBDV of the PU according to the NBDV of the PU, the texture-map information of the reference View of the PU, and the depth-map information of the reference View of the PU.
According to a second aspect, an encoding device is provided, including: a first determining unit, configured to determine, according to the neighboring-block information of the prediction unit PU of the current viewpoint View, the reference View of the PU, and to determine the neighboring-block disparity vector NBDV of the PU, where the current View is a View other than the first View on which multi-view encoding/decoding is first performed; and a second determining unit, configured to determine the depth-oriented neighboring-block disparity vector Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
With reference to the second aspect, in a first possible implementation, the encoding device further includes an encoding unit, configured to encode according to the Do-NBDV of the PU.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation, the first determining unit includes: a third determining unit, configured to determine the temporal neighboring block of the PU; and a first obtaining unit, configured to obtain the first motion vector MV of the temporal neighboring block and the first reference View of the temporal neighboring block. The third determining unit is further configured to determine that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the third determining unit is further configured to determine, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU, where the first spatial neighboring block is the left neighboring block of the PU; the first obtaining unit is further configured to obtain the second MV of the first spatial neighboring block and the second reference View of the first spatial neighboring block; and the third determining unit is further configured to determine that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the third determining unit is further configured to determine, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU, where the second spatial neighboring block is the above neighboring block of the PU; the first obtaining unit is further configured to obtain the third MV of the second spatial neighboring block and the third reference View of the second spatial neighboring block; and the third determining unit is further configured to determine that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation, the third determining unit is further configured to determine, when the second spatial neighboring block of the PU is unavailable, the motion-compensated prediction block of the PU; the first obtaining unit is further configured to obtain the fourth MV of the motion-compensated prediction block and the fourth reference View of the motion-compensated prediction block; and the third determining unit is further configured to determine that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.
With reference to the fifth possible implementation of the second aspect, in a sixth possible implementation, the first determining unit is further configured to determine, when the PU has no motion-compensated prediction block, that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames at the same time instant, that the reference View of the PU is the fifth reference View.
With reference to the second aspect or any of the foregoing possible implementations of the second aspect, in a seventh possible implementation, the second determining unit includes: a second obtaining unit, configured to obtain the texture-map information of the reference View of the PU and the depth-map information of the reference View of the PU corresponding to the texture-map information; and a fourth determining unit, configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture-map information of the reference View of the PU, and the depth-map information of the reference View of the PU.
According to a third aspect, a decoding device is provided, including: a first determining unit, configured to determine, according to the neighboring-block information of the prediction unit PU of the current viewpoint View, the reference viewpoint View of the PU, and to determine the neighboring-block disparity vector NBDV of the PU, where the current viewpoint View is a viewpoint View other than the base viewpoint View; and a second determining unit, configured to determine the depth-oriented neighboring-block disparity vector Do-NBDV of the PU according to the reference viewpoint View of the PU and the neighboring-block disparity vector NBDV of the PU.
With reference to the third aspect or the first possible implementation of the third aspect, in a second possible implementation, the first determining unit includes: a third determining unit, configured to determine the temporal neighboring block of the PU; and a first obtaining unit, configured to obtain the first motion vector MV of the temporal neighboring block and the first reference View of the temporal neighboring block. The third determining unit is further configured to determine that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.
With reference to the second possible implementation of the third aspect, in a third possible implementation, the third determining unit is further configured to determine, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU, where the first spatial neighboring block is the left neighboring block of the PU; the first obtaining unit is further configured to obtain the second MV of the first spatial neighboring block and the second reference View of the first spatial neighboring block; and the third determining unit is further configured to determine that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the third determining unit is further configured to determine, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU, where the second spatial neighboring block is the above neighboring block of the PU; the first obtaining unit is further configured to obtain the third MV of the second spatial neighboring block and the third reference View of the second spatial neighboring block; and the third determining unit is further configured to determine that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.
With reference to the fourth possible implementation of the third aspect, in a fifth possible implementation, the third determining unit is further configured to determine, when the second spatial neighboring block of the PU is unavailable, the motion-compensated prediction block of the PU; the first obtaining unit is further configured to obtain the fourth MV of the motion-compensated prediction block and the fourth reference View of the motion-compensated prediction block; and the third determining unit is further configured to determine that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.
With reference to the fifth possible implementation of the third aspect, in a sixth possible implementation, the first determining unit is further configured to determine, when the PU has no motion-compensated prediction block, that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames at the same time instant, that the reference View of the PU is the fifth reference View.
With reference to the third aspect or any of the foregoing possible implementations of the third aspect, in a seventh possible implementation, the second determining unit includes: a second obtaining unit, configured to obtain the texture-map information of the reference View of the PU and the depth-map information of the reference View of the PU corresponding to the texture-map information; and a fourth determining unit, configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture-map information of the reference View of the PU, and the depth-map information of the reference View of the PU.
By determining the reference View of the PU from the neighboring-block information of the PU of the current View, the embodiments of the present invention can obtain a more suitable reference View and can therefore improve the accuracy of DV calculation.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Clearly, the accompanying drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a 3D encoding/decoding process.
Fig. 2 is a flowchart of the method for determining a multi-view disparity vector according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the temporal neighboring block and spatial neighboring blocks used in multi-view disparity vector determination according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of the method for determining a multi-view disparity vector according to an embodiment of the present invention.
Fig. 5 is a block diagram of an encoding device according to an embodiment of the present invention.
Fig. 6 is a block diagram of a decoding device according to another embodiment of the present invention.
Fig. 7 is a block diagram of an encoding device according to another embodiment of the present invention.
Fig. 8 is a block diagram of a decoding device according to another embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Clearly, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a 3D encoding/decoding process.
The video encoding/decoding here is multi-view encoding/decoding plus depth maps. In the multi-view process, one viewpoint serves as the base view (Base View), also called the independent view (Independent View). For example, during encoding, this base (independent) view uses a coding scheme different from that of the other views, and its corresponding depth map is coded with the conventional HEVC scheme. The other views are called dependent views (Dependent View). To make use of the coding information of the independent view and to improve coding efficiency, the coding scheme of a dependent view may apply, in addition to conventional High Efficiency Video Coding (HEVC), further coding tools such as disparity-compensated prediction, disparity-vector derivation, inter-view motion prediction, inter-view residual prediction, illumination compensation, and view-synthesis prediction.
As shown in Fig. 1, when three-dimensional (3D) video is encoded and decoded, the encoder first encodes the 3D video and then sends the bitstream to the decoders. After the 3D video decoder, the stereo decoder, and the 2D video decoder decode the bitstream, intermediate-view synthesis can generate N-view display, stereo display, and 2D display, respectively.
Fig. 2 is a flowchart of the method for determining a multi-view disparity vector according to an embodiment of the present invention. The method shown in Fig. 2 is used during the encoding or decoding of an image or video: in the encoding process it is performed by an encoding device, and in the decoding process it is performed by a decoding device. The method shown in Fig. 2 includes:
201: Determine, according to the neighboring-block information of the PU of the current View, the reference View of the PU, and determine the NBDV of the PU, where the current View is a View other than the base View.
202: Determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
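The two steps above can be sketched as follows; the function names and the injected callables are illustrative assumptions of this note, not part of the patent:

```python
# Sketch of the two-step flow (names are illustrative, not from the patent):
# step 201 derives the NBDV and the reference view from neighboring-block
# information; step 202 refines it into the Do-NBDV using that reference view.

def determine_do_nbdv(pu, derive_nbdv, refine_with_depth):
    # Step 201: reference view and NBDV from the PU's neighboring blocks.
    ref_view, nbdv = derive_nbdv(pu)
    # Step 202: Do-NBDV from the reference view and the NBDV.
    do_nbdv = refine_with_depth(pu, ref_view, nbdv)
    return ref_view, do_nbdv

# Trivial stand-ins for the two sub-procedures, just to show the data flow.
ref_view, do_nbdv = determine_do_nbdv(
    "pu",
    lambda pu: (1, (-4, 0)),                 # step 201 stand-in
    lambda pu, rv, nbdv: nbdv,               # step 202 stand-in (identity)
)
print(ref_view, do_nbdv)  # 1 (-4, 0)
```

The point of the design is that step 201 chooses the reference view per PU rather than keeping it fixed, so step 202 refines the disparity against a better-matched view.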
By determining the reference View of the PU from the neighboring-block information of the PU of the current View, the embodiment of the present invention can obtain a more suitable reference View and can therefore improve the accuracy of DV calculation.
In the embodiment of the present invention, the base View in step 201 may also be referred to as the independent View, and the current View is a dependent View.
In the embodiment of the present invention, a neighboring block may be a temporal neighboring block, a spatial neighboring block, a motion-compensated prediction block, or a neighboring block of another form; the present invention does not limit this. The neighboring-block information may include the size, position, and shape of the neighboring block, and may also include the MV of the neighboring block, the reference View of the neighboring block, and so on; the present invention does not limit this.
It should be noted that a neighboring block in the embodiment of the present invention may have the same size and shape as the PU, or a different size and/or shape from the PU; the present invention does not limit this.
Optionally, as an embodiment, the determination method shown in Fig. 2 may further include: performing multi-view encoding or decoding according to the Do-NBDV of the PU.
Optionally, as an embodiment, in step 201, the temporal neighboring block of the PU may be determined, and the first MV of the temporal neighboring block and the first reference View of the temporal neighboring block may be obtained. It is then determined that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.
Specifically, in step 201, it is first determined whether the PU has a temporal neighboring block; only when the temporal neighboring block exists are the first MV and the first reference View obtained.
Optionally, as another embodiment, in step 201, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU may be determined, where the first spatial neighboring block is the left neighboring block of the PU. The second MV of the first spatial neighboring block and the second reference View of the first spatial neighboring block are obtained. It is then determined that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.
In the embodiment of the present invention, the priority of the first spatial neighboring block is lower than that of the temporal neighboring block: only when the temporal neighboring block is unavailable is it determined whether the PU has a first spatial neighboring block. In the embodiment of the present invention, the temporal neighboring block being unavailable means that the temporal neighboring block does not exist, or that it exists but has no first MV, or that it exists but has no first reference View, or any other unavailable case; the present invention does not limit this.
Optionally, as another embodiment, in step 201, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU may be determined, where the second spatial neighboring block is the above neighboring block of the PU. The third MV of the second spatial neighboring block and the third reference View of the second spatial neighboring block are obtained. It is then determined that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.
In the embodiment of the present invention, the priority of the second spatial neighboring block is lower than that of the first spatial neighboring block: only when the first spatial neighboring block is unavailable is it determined whether the PU has a second spatial neighboring block. In the embodiment of the present invention, the first spatial neighboring block being unavailable means that the first spatial neighboring block does not exist, or that it exists but has no second MV, or that it exists but has no second reference View, or any other unavailable case; the present invention does not limit this.
Fig. 3 is a schematic diagram of the temporal neighboring block and spatial neighboring blocks used in multi-view disparity vector determination according to an embodiment of the present invention. In Fig. 3, T0 is the temporal neighboring block, A1 is the first spatial neighboring block, and A2 is the second spatial neighboring block. Moreover, the priority of T0 is higher than that of A1, and the priority of A1 is higher than that of A2.
It should be noted that the temporal neighboring block and spatial neighboring blocks in Fig. 3 are shown only schematically. The PU and T0 shown in Fig. 3 are not in a containment or overlap relationship; the two are independent. Although T0, A1, and A2 are all drawn smaller than the PU in Fig. 3, there is in fact no fixed relationship between their respective sizes and the size of the PU: any of T0, A1, and A2 may be the same size as the PU, larger than the PU, or smaller than the PU. Similarly, although the shapes of the PU, T0, A1, and A2 all look square in Fig. 3, their shapes may in fact differ, or two or three of them may share the same shape. The shape may be a square or a rectangle, or another shape; the present invention does not limit this. In addition, although A1 is drawn at the lower left of the PU in Fig. 3, A1 may in fact be directly to the left of the PU or at its upper left; similarly, A2 may be at the upper right of the PU, directly above it, or at its upper left; the present invention does not limit this.
It should be noted that although Fig. 3 shows the temporal neighboring block T0, the first spatial neighboring block A1, and the second spatial neighboring block A2, in step 201 shown in Fig. 2 only T0 may be determined; or, if T0 does not exist, only A1 may be determined; or, if neither T0 nor A1 exists, only A2 may be determined. The present invention does not limit this.
In the embodiment of the present invention, at most one temporal neighboring block and two spatial neighboring blocks are selected. Selecting a smaller number of spatial neighboring blocks in this way reduces the complexity of the DV calculation.
Optionally, as another embodiment, in step 201, when the second spatial neighboring block of the PU is unavailable, the motion-compensated prediction block of the PU may be determined, and the fourth MV of the motion-compensated prediction block and the fourth reference View of the motion-compensated prediction block are obtained. It is then determined that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.
Specifically, only when the temporal neighboring block, the first spatial neighboring block, and the second spatial neighboring block of the PU are all unavailable is it determined whether the PU has a motion-compensated prediction block.
Alternatively, as another embodiment, in step 202., when this PU does not has motion prediction to mend
When repaying block, it may be determined that the NBDV of this PU is zero.Further, when reference frame and the 5th reference of this PU
When the reference frame of the PU of View is synchronization reference frame, determine that the reference View of this PU is the 5th
With reference to View.
Specifically, when the NBDV of the PU is determined to be zero, there may be multiple Views that could serve as the reference View of the PU. That is to say, one of the multiple possible reference Views of the PU needs to be selected as the reference View of the PU. In this case, the reference frame of the PU may be obtained, and the reference frame of the PU corresponding to each of the multiple possible reference Views may be obtained. If the reference frame of the PU and the reference frame of the PU corresponding to a fifth reference View among the multiple possible reference Views are reference frames at the same time instant, the fifth reference View is determined to be the reference View of the PU.
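Purely as an illustrative sketch (not part of the embodiment's text), the selection of the fifth reference View when the NBDV is zero can be modelled in Python. The use of a picture order count (POC) to stand for "the same time instant", and all names below, are assumptions of this sketch:

```python
def select_reference_view(current_ref_poc, candidate_views):
    """Pick, among the multiple possible reference Views, the one whose
    corresponding PU uses a reference frame at the same time instant
    (modelled here as an equal POC) as the current PU's reference frame.

    candidate_views: list of (view_id, ref_poc) pairs, a hypothetical
    representation of the candidate Views and the POC of the reference
    frame of their corresponding PU.
    Returns the first matching view_id (the "fifth reference View"),
    or None if no candidate qualifies.
    """
    for view_id, ref_poc in candidate_views:
        if ref_poc == current_ref_poc:
            return view_id
    return None
```

For example, with a current reference frame at POC 8 and candidates [(1, 4), (2, 8), (3, 8)], the sketch selects View 2, the first candidate at the same time instant.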
Alternatively, as an embodiment, in step 202, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU corresponding to the texture map information, may be obtained. Then, the Do-NBDV of the PU is determined according to the NBDV of the PU, the texture map information of the reference View of the PU and the depth map information of the reference View of the PU.
Specifically, the texture map information and the depth map information of the reference View may first be obtained, where the texture map information corresponds to the depth map information. Then, the Do-NBDV of the PU may be determined according to the NBDV of the PU, the relation between the NBDV of the PU and the texture map information, and the correspondence between the texture map information and the depth map information.
The depth map information here includes information such as the disparity parameters of the corresponding reference View. In this case, the depth map information of the reference View, which has already been completely encoded and is available, may be used to determine the Do-NBDV of the PU of the current View, and the texture map of the current View may further be encoded according to the Do-NBDV of the PU, so that the accuracy of the DV of the texture map of the current View can be improved.
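As an illustration only, the refinement of the NBDV into a Do-NBDV using the reference View's depth map can be sketched in Python. The linear scale/offset/shift conversion and the use of the located block's maximum depth sample are assumptions borrowed from common depth-oriented disparity derivation practice, not statements of this method; the NBDV is assumed to have already been used to locate `depth_block` in the reference View's depth map:

```python
def depth_to_disparity(depth_block, scale, offset, shift):
    """Convert the maximum depth sample of the depth-map block located
    by the NBDV in the reference View into a refined horizontal
    disparity (the Do-NBDV). scale, offset and shift stand in for
    constants derived from camera parameters (hypothetical here).
    """
    max_depth = max(max(row) for row in depth_block)  # dominant depth
    return (max_depth * scale + offset) >> shift
```

For example, a 2x2 block with maximum depth 40, scale 2, offset 1 and shift 0 yields a refined disparity of 81 under this model.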
In this way, the embodiment of the present invention uses a PU-based DV determination method in which the reference Views of the PUs of different Views are not fixed, so that the accuracy of the DV can be improved. Moreover, this determination method is simple in procedure, can optimize the performance of subsequent encoding or decoding performed according to the DV, and can reduce redundancy in the subsequent encoding or decoding process.
Fig. 4 is a schematic flowchart of a determination method of a multi-view disparity vector according to an embodiment of the present invention. The determination method shown in Fig. 4 is used during the encoding or decoding of an image or a video; in the encoding process the determination method is performed by an encoding device, and in the decoding process the determination method is performed by a decoding device. The determination method shown in Fig. 4 includes:
401, start encoding or decoding.
Specifically, an image or a video may be encoded, or an image or a video may be decoded.
402, complete the encoding or decoding of the basic View.
Specifically, the basic View may also be referred to as an independent View, and is the viewpoint that is encoded or decoded first. The basic View may be encoded or decoded in the conventional HEVC manner, also referred to as the H.265 manner.
403, judge whether the time domain adjacent block of the PU of the current View is available. If it is available, perform step 404; if not, perform step 405.
Specifically, that the time domain adjacent block is available may mean that the PU of the current View has a time domain adjacent block, and that the first MV of the time domain adjacent block and the first reference View of the time domain adjacent block can be obtained.
404, obtain the first MV of the time domain adjacent block and the first reference View of the time domain adjacent block.
405, judge whether the first spatial domain adjacent block of the PU of the current View is available. If it is available, perform step 406; if not, perform step 407.
Specifically, the first spatial domain adjacent block is the left adjacent block of the PU. That the first spatial domain adjacent block is available may mean that the PU of the current View has a first spatial domain adjacent block, and that the second MV of the first spatial domain adjacent block and the second reference View of the first spatial domain adjacent block can be obtained.
406, obtain the second MV of the first spatial domain adjacent block and the second reference View of the first spatial domain adjacent block.
407, judge whether the second spatial domain adjacent block of the PU of the current View is available. If it is available, perform step 408; if not, perform step 409.
Specifically, the second spatial domain adjacent block is the top adjacent block of the PU. That the second spatial domain adjacent block is available may mean that the PU of the current View has a second spatial domain adjacent block, and that the third MV of the second spatial domain adjacent block and the third reference View of the second spatial domain adjacent block can be obtained.
408, obtain the third MV of the second spatial domain adjacent block and the third reference View of the second spatial domain adjacent block.
409, judge whether the PU of the current View has a motion predicted compensation block. If it has, perform step 410; if not, perform step 411.
410, obtain the fourth MV of the motion predicted compensation block and the fourth reference View of the motion predicted compensation block.
411, determine that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of the fifth reference View are reference frames at the same time instant, determine that the reference View of the PU is the fifth reference View.
412, determine the reference View of the PU of the current View, and determine the NBDV of the PU of the current View.
Specifically, if step 404 is performed, the reference View of the PU of the current View is the first reference View, and the NBDV of the PU of the current View is the first MV. If step 406 is performed, the reference View of the PU of the current View is the second reference View, and the NBDV of the PU of the current View is the second MV. If step 408 is performed, the reference View of the PU of the current View is the third reference View, and the NBDV of the PU of the current View is the third MV. If step 410 is performed, the reference View of the PU of the current View is the fourth reference View, and the NBDV of the PU of the current View is the fourth MV. If step 411 is performed, the reference View of the PU of the current View is the fifth reference View determined in step 411, and the NBDV of the PU of the current View is zero.
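For illustration only, the branch logic of steps 403 to 412 can be sketched as the following Python function; the dictionary representation of the PU's candidate blocks and all names below are assumptions of this sketch, not part of the embodiment:

```python
def derive_nbdv(pu):
    """Try the time domain adjacent block, then the first (left) and
    second (top) spatial domain adjacent blocks, then the motion
    predicted compensation block; otherwise fall back to a zero NBDV.

    pu: hypothetical dict mapping candidate names to (mv, ref_view)
    pairs, with None (or an absent key) marking an unavailable block.
    Returns (nbdv, reference_view).
    """
    for candidate in ("temporal", "spatial_left", "spatial_above", "mcp"):
        entry = pu.get(candidate)
        if entry is not None:
            mv, ref_view = entry      # steps 404 / 406 / 408 / 410
            return mv, ref_view
    # step 411: NBDV is zero; the fifth reference View is chosen by
    # matching reference frames at the same time instant (precomputed here)
    return (0, 0), pu.get("fifth_ref_view")
```

For example, if the time domain adjacent block is unavailable but the left spatial domain adjacent block carries MV (2, 0) toward View 1, the sketch returns ((2, 0), 1).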
413, determine the Do-NBDV of the PU of the current View.
Specifically, the texture map information of the reference View of the PU of the current View, and the depth map information of the reference View of the PU of the current View corresponding to the texture map information, may be obtained. Then, the Do-NBDV of the PU of the current View is determined according to the NBDV of the PU of the current View, the texture map information of the reference View of the PU of the current View and the depth map information of the reference View of the PU of the current View.
414, complete the encoding or decoding process.
Specifically, prediction encoding processing is performed according to the Do-NBDV, or prediction decoding processing is performed according to the Do-NBDV.
In this way, the embodiment of the present invention uses a PU-based DV determination method in which the reference Views of the PUs of different Views are not fixed, so that a more suitable reference View can be obtained and the accuracy of the DV can be improved. Moreover, this determination method is simple in procedure, can optimize the performance of subsequent encoding or decoding performed according to the DV, and can reduce redundancy in the subsequent encoding or decoding process.
Fig. 5 is a block diagram of an encoding device according to an embodiment of the present invention. The encoding device 500 shown in Fig. 5 includes a first determining unit 501 and a second determining unit 502.
The first determining unit 501 is configured to determine, according to the adjacent block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, where the current View is a View other than the basic View. The second determining unit 502 is configured to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
In the embodiment of the present invention, the reference View of the PU is determined according to the adjacent block information of the PU of the current View, so that a more suitable reference View can be obtained, and the accuracy of the DV calculation in the encoding process can be improved.
The encoding device 500 can implement each process implemented by the encoding device in the embodiments of Fig. 2 and Fig. 4; to avoid repetition, details are not described here again. The encoding device 500 may be used in the encoding process of an image and/or a video.
Alternatively, as an embodiment, the encoding device 500 may further include an encoding unit. The encoding unit is configured to perform encoding according to the Do-NBDV of the PU.
Alternatively, as another embodiment, the first determining unit 501 includes a third determining unit 503 and a first obtaining unit 504. The third determining unit 503 may be configured to determine the time domain adjacent block of the PU. The first obtaining unit 504 may be configured to obtain the first MV of the time domain adjacent block and the first reference View of the time domain adjacent block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the first MV, and to determine that the reference View of the PU is the first reference View.
Alternatively, as another embodiment, the third determining unit 503 may further be configured to: when the time domain adjacent block of the PU is unavailable, determine the first spatial domain adjacent block of the PU, where the first spatial domain adjacent block is the left adjacent block of the PU. The first obtaining unit 504 may further be configured to obtain the second MV of the first spatial domain adjacent block and the second reference View of the first spatial domain adjacent block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the second MV, and to determine that the reference View of the PU is the second reference View.
Alternatively, as another embodiment, the third determining unit 503 may further be configured to: when the first spatial domain adjacent block of the PU is unavailable, determine the second spatial domain adjacent block of the PU, where the second spatial domain adjacent block is the top adjacent block of the PU. The first obtaining unit 504 may further be configured to obtain the third MV of the second spatial domain adjacent block and the third reference View of the second spatial domain adjacent block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the third MV, and to determine that the reference View of the PU is the third reference View.
Alternatively, as another embodiment, the third determining unit 503 may further be configured to: when the second spatial domain adjacent block of the PU is unavailable, determine the motion predicted compensation block of the PU. The first obtaining unit 504 may further be configured to obtain the fourth MV of the motion predicted compensation block and the fourth reference View of the motion predicted compensation block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the fourth MV, and to determine that the reference View of the PU is the fourth reference View.
Alternatively, as another embodiment, the first determining unit 501 may further be configured to: when the PU has no motion predicted compensation block, determine that the NBDV of the PU is zero; and, when the reference frame of the PU and the reference frame of the PU of the fifth reference View are reference frames at the same time instant, determine that the reference View of the PU is the fifth reference View.
Alternatively, as another embodiment, the second determining unit 502 includes a second obtaining unit 505 and a fourth determining unit 506. The second obtaining unit 505 may be configured to obtain the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU corresponding to the texture map information. The fourth determining unit 506 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU and the depth map information of the reference View of the PU.
Fig. 6 is a block diagram of a decoding device according to an embodiment of the present invention. The decoding device 600 shown in Fig. 6 includes a first determining unit 601 and a second determining unit 602.
The first determining unit 601 is configured to determine, according to the adjacent block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, where the current View is a View other than the basic View. The second determining unit 602 is configured to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
In the embodiment of the present invention, the reference View of the PU is determined according to the adjacent block information of the PU of the current View, so that a more suitable reference View can be obtained, and the accuracy of the DV calculation in the decoding process can be improved.
The decoding device 600 can implement each process implemented by the decoding device in the embodiments of Fig. 2 and Fig. 4; to avoid repetition, details are not described here again. The decoding device 600 may be used in the decoding process of an image and/or a video.
Alternatively, as an embodiment, the decoding device 600 may further include a decoding unit. The decoding unit is configured to perform decoding according to the Do-NBDV of the PU.
Alternatively, as another embodiment, the first determining unit 601 includes a third determining unit 603 and a first obtaining unit 604. The third determining unit 603 may be configured to determine the time domain adjacent block of the PU. The first obtaining unit 604 may be configured to obtain the first MV of the time domain adjacent block and the first reference View of the time domain adjacent block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the first MV, and to determine that the reference View of the PU is the first reference View.
Alternatively, as another embodiment, the third determining unit 603 may further be configured to: when the time domain adjacent block of the PU is unavailable, determine the first spatial domain adjacent block of the PU, where the first spatial domain adjacent block is the left adjacent block of the PU. The first obtaining unit 604 may further be configured to obtain the second MV of the first spatial domain adjacent block and the second reference View of the first spatial domain adjacent block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the second MV, and to determine that the reference View of the PU is the second reference View.
Alternatively, as another embodiment, the third determining unit 603 may further be configured to: when the first spatial domain adjacent block of the PU is unavailable, determine the second spatial domain adjacent block of the PU, where the second spatial domain adjacent block is the top adjacent block of the PU. The first obtaining unit 604 may further be configured to obtain the third MV of the second spatial domain adjacent block and the third reference View of the second spatial domain adjacent block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the third MV, and to determine that the reference View of the PU is the third reference View.
Alternatively, as another embodiment, the third determining unit 603 may further be configured to: when the second spatial domain adjacent block of the PU is unavailable, determine the motion predicted compensation block of the PU. The first obtaining unit 604 may further be configured to obtain the fourth MV of the motion predicted compensation block and the fourth reference View of the motion predicted compensation block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the fourth MV, and to determine that the reference View of the PU is the fourth reference View.
Alternatively, as another embodiment, the first determining unit 601 may further be configured to: when the PU has no motion predicted compensation block, determine that the NBDV of the PU is zero; and, when the reference frame of the PU and the reference frame of the PU of the fifth reference View are reference frames at the same time instant, determine that the reference View of the PU is the fifth reference View.
Alternatively, as another embodiment, the second determining unit 602 includes a second obtaining unit 605 and a fourth determining unit 606. The second obtaining unit 605 may be configured to obtain the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU corresponding to the texture map information. The fourth determining unit 606 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU and the depth map information of the reference View of the PU.
Fig. 7 is a block diagram of an encoding device according to another embodiment of the present invention. The encoding device 700 shown in Fig. 7 includes a processor 701, a memory 702 and a transmission circuit 703.
The processor 701 is configured to determine, according to the adjacent block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, where the current View is a View other than the basic View; and to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
In the embodiment of the present invention, the reference View of the PU is determined according to the adjacent block information of the PU of the current View, so that a more suitable reference View can be obtained, and the accuracy of the DV calculation in the encoding process can be improved.
The components of the encoding device 700 are coupled together through a bus system 704, where the bus system 704 includes, in addition to a data bus, a power bus, a control bus and a status signal bus. However, for clarity of description, the various buses are all denoted as the bus system 704 in the figure.
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform each method, step and logical block diagram disclosed in the embodiments of the present invention. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being performed and completed by a hardware decoding processor, or performed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
The encoding device 700 can implement each process implemented by the encoding device in the embodiments of Fig. 1 and Fig. 3; to avoid repetition, details are not described here again. The encoding device 700 may be used in the encoding process of an image and/or a video.
Alternatively, as an embodiment, the processor 701 may further be configured to perform encoding according to the Do-NBDV of the PU.
Alternatively, as another embodiment, the processor 701 may further be configured to determine the time domain adjacent block of the PU. The transmission circuit 703 may be configured to obtain the first MV of the time domain adjacent block and the first reference View of the time domain adjacent block. The processor 701 may further be configured to determine that the NBDV of the PU is the first MV, and to determine that the reference View of the PU is the first reference View.
Alternatively, as another embodiment, the processor 701 may further be configured to: when the time domain adjacent block of the PU is unavailable, determine the first spatial domain adjacent block of the PU, where the first spatial domain adjacent block is the left adjacent block of the PU. The transmission circuit 703 may further be configured to obtain the second MV of the first spatial domain adjacent block and the second reference View of the first spatial domain adjacent block. The processor 701 may further be configured to determine that the NBDV of the PU is the second MV, and to determine that the reference View of the PU is the second reference View.
Alternatively, as another embodiment, the processor 701 may further be configured to: when the first spatial domain adjacent block of the PU is unavailable, determine the second spatial domain adjacent block of the PU, where the second spatial domain adjacent block is the top adjacent block of the PU. The transmission circuit 703 may further be configured to obtain the third MV of the second spatial domain adjacent block and the third reference View of the second spatial domain adjacent block. The processor 701 may further be configured to determine that the NBDV of the PU is the third MV, and to determine that the reference View of the PU is the third reference View.
Alternatively, as another embodiment, the processor 701 may further be configured to: when the second spatial domain adjacent block of the PU is unavailable, determine the motion predicted compensation block of the PU. The transmission circuit 703 may further be configured to obtain the fourth MV of the motion predicted compensation block and the fourth reference View of the motion predicted compensation block. The processor 701 may further be configured to determine that the NBDV of the PU is the fourth MV, and to determine that the reference View of the PU is the fourth reference View.
Alternatively, as another embodiment, the processor 701 may further be configured to: when the PU has no motion predicted compensation block, determine that the NBDV of the PU is zero; and, when the reference frame of the PU and the reference frame of the PU of the fifth reference View are reference frames at the same time instant, determine that the reference View of the PU is the fifth reference View.
Alternatively, as another embodiment, the transmission circuit 703 may be configured to obtain the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU corresponding to the texture map information. The processor 701 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU and the depth map information of the reference View of the PU.
Fig. 8 is a block diagram of a decoding device according to another embodiment of the present invention. The decoding device 800 shown in Fig. 8 includes a processor 801, a memory 802 and a transmission circuit 803.
The processor 801 is configured to determine, according to the adjacent block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, where the current View is a View other than the basic View; and to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
In the embodiment of the present invention, the reference View of the PU is determined according to the adjacent block information of the PU of the current View, so that a more suitable reference View can be obtained, and the accuracy of the DV calculation in the decoding process can be improved.
The components of the decoding device 800 are coupled together through a bus system 804, where the bus system 804 includes, in addition to a data bus, a power bus, a control bus and a status signal bus. However, for clarity of description, the various buses are all denoted as the bus system 804 in the figure.
The method disclosed in the above embodiments of the present invention may be applied to the processor 801, or implemented by the processor 801. The processor 801 may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 801 or by instructions in the form of software. The processor 801 may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform each method, step and logical block diagram disclosed in the embodiments of the present invention. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being performed and completed by a hardware decoding processor, or performed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 802, and the processor 801 reads the information in the memory 802 and completes the steps of the above method in combination with its hardware.
The decoding device 800 can implement each process implemented by the decoding device in the embodiments of Fig. 1 and Fig. 3; to avoid repetition, details are not described here again. The decoding device 800 may be used in the decoding process of an image and/or a video.
Alternatively, as an embodiment, the processor 801 may further be configured to perform decoding according to the Do-NBDV of the PU.
Alternatively, as another embodiment, the processor 801 may further be configured to determine the time domain adjacent block of the PU. The transmission circuit 803 may be configured to obtain the first MV of the time domain adjacent block and the first reference View of the time domain adjacent block. The processor 801 may further be configured to determine that the NBDV of the PU is the first MV, and to determine that the reference View of the PU is the first reference View.
Alternatively, as another embodiment, the processor 801 may further be configured to: when the time domain adjacent block of the PU is unavailable, determine the first spatial domain adjacent block of the PU, where the first spatial domain adjacent block is the left adjacent block of the PU. The transmission circuit 803 may further be configured to obtain the second MV of the first spatial domain adjacent block and the second reference View of the first spatial domain adjacent block. The processor 801 may further be configured to determine that the NBDV of the PU is the second MV, and to determine that the reference View of the PU is the second reference View.
Alternatively, as another embodiment, the processor 801 may further be configured to: when the first spatial domain adjacent block of the PU is unavailable, determine the second spatial domain adjacent block of the PU, where the second spatial domain adjacent block is the top adjacent block of the PU. The transmission circuit 803 may further be configured to obtain the third MV of the second spatial domain adjacent block and the third reference View of the second spatial domain adjacent block. The processor 801 may further be configured to determine that the NBDV of the PU is the third MV, and to determine that the reference View of the PU is the third reference View.
Alternatively, as another embodiment, the processor 801 may further be configured to: when the second spatial domain adjacent block of the PU is unavailable, determine the motion predicted compensation block of the PU. The transmission circuit 803 may further be configured to obtain the fourth MV of the motion predicted compensation block and the fourth reference View of the motion predicted compensation block. The processor 801 may further be configured to determine that the NBDV of the PU is the fourth MV, and to determine that the reference View of the PU is the fourth reference View.
Alternatively, as another embodiment, the processor 801 may further be configured to: when the PU has no motion predicted compensation block, determine that the NBDV of the PU is zero; and, when the reference frame of the PU and the reference frame of the PU of the fifth reference View are reference frames at the same time instant, determine that the reference View of the PU is the fifth reference View.
Alternatively, as another embodiment, the transmission circuit 803 may be configured to obtain the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU corresponding to the texture map information. The processor 801 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU and the depth map information of the reference View of the PU.
A person of ordinary skill in the art may be aware that the units and algorithm steps of each example described in combination with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process of the system, device and unit described above, reference may be made to the corresponding process in the foregoing method embodiments; details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed system, device and method may be implemented in other manners. For example, the device embodiments described above are merely schematic; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation, such as combining multiple units or components or integrating them into another system, or ignoring or not performing some features. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (24)
1. A determination method for a multi-view disparity vector, the determination method being used in a process of encoding or decoding an image or a video, characterized in that the determination method comprises:
determining, according to adjacent block information of a prediction unit (PU) of a current viewpoint (View), a reference View of the prediction unit PU, and determining an adjacent-block disparity vector (NBDV) of the prediction unit PU, the current View being a View other than a base View;
determining, according to the reference View of the prediction unit PU and the adjacent-block disparity vector NBDV of the prediction unit PU, a depth-map-based adjacent-block disparity vector (Do-NBDV) of the prediction unit PU.
2. The determination method according to claim 1, characterized in that the determination method further comprises:
performing multi-view encoding or decoding according to the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU.
3. The determination method according to claim 1, characterized in that the determining, according to the adjacent block information of the prediction unit PU of the current View, the reference View of the prediction unit PU, and determining the adjacent-block disparity vector NBDV of the prediction unit PU comprises:
determining a temporal adjacent block of the prediction unit PU;
obtaining a first motion vector (MV) of the temporal adjacent block, and a first reference View of the temporal adjacent block;
determining that the adjacent-block disparity vector NBDV of the prediction unit PU is the first motion vector MV, and determining that the reference View of the prediction unit PU is the first reference View.
4. The determination method according to claim 3, characterized in that the determining, according to the adjacent block information of the prediction unit PU of the current View, the reference View of the prediction unit PU, and determining the adjacent-block disparity vector NBDV of the prediction unit PU comprises:
when the temporal adjacent block of the prediction unit PU is unavailable, determining a first spatial adjacent block of the prediction unit PU, the first spatial adjacent block being a left adjacent block of the prediction unit PU;
obtaining a second motion vector MV of the first spatial adjacent block, and a second reference View of the first spatial adjacent block;
determining that the adjacent-block disparity vector NBDV of the prediction unit PU is the second motion vector MV, and determining that the reference View of the prediction unit PU is the second reference View.
5. The determination method according to claim 4, characterized in that the determining, according to the adjacent block information of the prediction unit PU of the current View, the reference View of the prediction unit PU, and determining the adjacent-block disparity vector NBDV of the prediction unit PU comprises:
when the first spatial adjacent block of the prediction unit PU is unavailable, determining a second spatial adjacent block of the prediction unit PU, the second spatial adjacent block being an above adjacent block of the prediction unit PU;
obtaining a third motion vector MV of the second spatial adjacent block, and a third reference View of the second spatial adjacent block;
determining that the adjacent-block disparity vector NBDV of the prediction unit PU is the third motion vector MV, and determining that the reference View of the prediction unit PU is the third reference View.
6. The determination method according to claim 5, characterized in that the determining, according to the adjacent block information of the prediction unit PU of the current View, the reference View of the prediction unit PU, and determining the adjacent-block disparity vector NBDV of the prediction unit PU comprises:
when the second spatial adjacent block of the prediction unit PU is unavailable, determining a motion predicted compensation block of the prediction unit PU;
obtaining a fourth motion vector MV of the motion predicted compensation block, and a fourth reference View of the motion predicted compensation block;
determining that the adjacent-block disparity vector NBDV of the prediction unit PU is the fourth motion vector MV, and determining that the reference View of the prediction unit PU is the fourth reference View.
7. The determination method according to claim 6, characterized in that the determining, according to the adjacent block information of the prediction unit PU of the current View, the reference View of the prediction unit PU, and determining the adjacent-block disparity vector NBDV of the prediction unit PU comprises:
when the prediction unit PU has no motion predicted compensation block, determining that the adjacent-block disparity vector NBDV of the prediction unit PU is zero;
when the reference frame of the prediction unit PU and the reference frame of a corresponding prediction unit PU of a fifth reference View are reference frames at the same moment, determining that the reference View of the prediction unit PU is the fifth reference View.
8. The determination method according to any one of claims 1 to 7, characterized in that the determining, according to the adjacent-block disparity vector NBDV of the prediction unit PU and the reference View of the prediction unit PU, the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU comprises:
obtaining texture map information of the reference View of the prediction unit PU, and depth map information, corresponding to the texture map information, of the reference View of the prediction unit PU;
determining, according to the adjacent-block disparity vector NBDV of the prediction unit PU, the texture map information of the reference View of the prediction unit PU and the depth map information of the reference View of the prediction unit PU, the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU.
9. An encoding device, characterized in that the encoding device comprises:
a first determining unit, configured to determine, according to adjacent block information of a prediction unit (PU) of a current viewpoint (View), a reference View of the prediction unit PU, and determine an adjacent-block disparity vector (NBDV) of the prediction unit PU, the current View being a View other than a base View;
a second determining unit, configured to determine, according to the reference View of the prediction unit PU and the adjacent-block disparity vector NBDV of the prediction unit PU, a depth-map-based adjacent-block disparity vector (Do-NBDV) of the prediction unit PU.
10. The encoding device according to claim 9, characterized in that the encoding device further comprises:
an encoding unit, configured to perform encoding according to the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU.
11. The encoding device according to claim 9, characterized in that the first determining unit comprises:
a third determining unit, configured to determine a temporal adjacent block of the prediction unit PU;
a first obtaining unit, configured to obtain a first motion vector (MV) of the temporal adjacent block, and a first reference View of the temporal adjacent block;
the third determining unit being further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the first motion vector MV, and determine that the reference View of the prediction unit PU is the first reference View.
12. The encoding device according to claim 11, characterized in that:
the third determining unit is further configured to, when the temporal adjacent block of the prediction unit PU is unavailable, determine a first spatial adjacent block of the prediction unit PU, the first spatial adjacent block being a left adjacent block of the prediction unit PU;
the first obtaining unit is further configured to obtain a second motion vector MV of the first spatial adjacent block, and a second reference View of the first spatial adjacent block;
the third determining unit is further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the second motion vector MV, and determine that the reference View of the prediction unit PU is the second reference View.
13. The encoding device according to claim 12, characterized in that:
the third determining unit is further configured to, when the first spatial adjacent block of the prediction unit PU is unavailable, determine a second spatial adjacent block of the prediction unit PU, the second spatial adjacent block being an above adjacent block of the prediction unit PU;
the first obtaining unit is further configured to obtain a third motion vector MV of the second spatial adjacent block, and a third reference View of the second spatial adjacent block;
the third determining unit is further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the third motion vector MV, and determine that the reference View of the prediction unit PU is the third reference View.
14. The encoding device according to claim 13, characterized in that:
the third determining unit is further configured to, when the second spatial adjacent block of the prediction unit PU is unavailable, determine a motion predicted compensation block of the prediction unit PU;
the first obtaining unit is further configured to obtain a fourth motion vector MV of the motion predicted compensation block, and a fourth reference View of the motion predicted compensation block;
the third determining unit is further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the fourth motion vector MV, and determine that the reference View of the prediction unit PU is the fourth reference View.
15. The encoding device according to claim 14, characterized in that the first determining unit is further configured to, when the prediction unit PU has no motion predicted compensation block, determine that the adjacent-block disparity vector NBDV of the prediction unit PU is zero, and, when the reference frame of the prediction unit PU and the reference frame of a corresponding prediction unit PU of a fifth reference View are reference frames at the same moment, determine that the reference View of the prediction unit PU is the fifth reference View.
16. The encoding device according to any one of claims 9 to 15, characterized in that the second determining unit comprises:
a second obtaining unit, configured to obtain texture map information of the reference View of the prediction unit PU, and depth map information, corresponding to the texture map information, of the reference View of the prediction unit PU;
a fourth determining unit, configured to determine, according to the adjacent-block disparity vector NBDV of the prediction unit PU, the texture map information of the reference View of the prediction unit PU and the depth map information of the reference View of the prediction unit PU, the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU.
17. A decoding device, characterized in that the decoding device comprises:
a first determining unit, configured to determine, according to adjacent block information of a prediction unit (PU) of a current viewpoint (View), a reference View of the prediction unit PU, and determine an adjacent-block disparity vector (NBDV) of the prediction unit PU, the current View being a View other than a base View;
a second determining unit, configured to determine, according to the reference View of the prediction unit PU and the adjacent-block disparity vector NBDV of the prediction unit PU, a depth-map-based adjacent-block disparity vector (Do-NBDV) of the prediction unit PU.
18. The decoding device according to claim 17, characterized in that the decoding device further comprises:
a decoding unit, configured to perform decoding according to the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU.
19. The decoding device according to claim 17, characterized in that the first determining unit comprises:
a third determining unit, configured to determine a temporal adjacent block of the prediction unit PU;
a first obtaining unit, configured to obtain a first motion vector (MV) of the temporal adjacent block, and a first reference View of the temporal adjacent block;
the third determining unit being further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the first motion vector MV, and determine that the reference View of the prediction unit PU is the first reference View.
20. The decoding device according to claim 19, characterized in that:
the third determining unit is further configured to, when the temporal adjacent block of the prediction unit PU is unavailable, determine a first spatial adjacent block of the prediction unit PU, the first spatial adjacent block being a left adjacent block of the prediction unit PU;
the first obtaining unit is further configured to obtain a second motion vector MV of the first spatial adjacent block, and a second reference View of the first spatial adjacent block;
the third determining unit is further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the second motion vector MV, and determine that the reference View of the prediction unit PU is the second reference View.
21. The decoding device according to claim 20, characterized in that:
the third determining unit is further configured to, when the first spatial adjacent block of the prediction unit PU is unavailable, determine a second spatial adjacent block of the prediction unit PU, the second spatial adjacent block being an above adjacent block of the prediction unit PU;
the first obtaining unit is further configured to obtain a third motion vector MV of the second spatial adjacent block, and a third reference View of the second spatial adjacent block;
the third determining unit is further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the third motion vector MV, and determine that the reference View of the prediction unit PU is the third reference View.
22. The decoding device according to claim 21, characterized in that:
the third determining unit is further configured to, when the second spatial adjacent block of the prediction unit PU is unavailable, determine a motion predicted compensation block of the prediction unit PU;
the first obtaining unit is further configured to obtain a fourth motion vector MV of the motion predicted compensation block, and a fourth reference View of the motion predicted compensation block;
the third determining unit is further configured to determine that the adjacent-block disparity vector NBDV of the prediction unit PU is the fourth motion vector MV, and determine that the reference View of the prediction unit PU is the fourth reference View.
23. The decoding device according to claim 22, characterized in that the first determining unit is further configured to, when the prediction unit PU has no motion predicted compensation block, determine that the adjacent-block disparity vector NBDV of the prediction unit PU is zero, and, when the reference frame of the prediction unit PU and the reference frame of a corresponding prediction unit PU of a fifth reference View are reference frames at the same moment, determine that the reference View of the prediction unit PU is the fifth reference View.
24. The decoding device according to any one of claims 17 to 23, characterized in that the second determining unit comprises:
a second obtaining unit, configured to obtain texture map information of the reference View of the prediction unit PU, and depth map information, corresponding to the texture map information, of the reference View of the prediction unit PU;
a fourth determining unit, configured to determine, according to the adjacent-block disparity vector NBDV of the prediction unit PU, the texture map information of the reference View of the prediction unit PU and the depth map information of the reference View of the prediction unit PU, the depth-map-based adjacent-block disparity vector Do-NBDV of the prediction unit PU.
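The NBDV fallback chain recited in claims 3 to 7 (temporal adjacent block, then left spatial adjacent block, then above spatial adjacent block, then motion predicted compensation block, else zero with the fifth reference View) can be sketched as follows. This is an illustrative Python sketch, not part of the claimed method; the `Neighbor` record and all names are hypothetical.

```python
# Hypothetical sketch of the candidate order in claims 3-7: each available
# neighboring block supplies both the NBDV and the reference View of the PU.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Neighbor:
    mv: Tuple[int, int]   # motion (disparity) vector stored in the neighbor
    ref_view: int         # reference View that the vector points into


def derive_nbdv(temporal: Optional[Neighbor],
                left: Optional[Neighbor],
                above: Optional[Neighbor],
                motion_comp: Optional[Neighbor],
                fifth_ref_view: int) -> Tuple[Tuple[int, int], int]:
    """Check the candidates in the claimed order; the first available one
    supplies both the NBDV and the reference View."""
    for candidate in (temporal, left, above, motion_comp):
        if candidate is not None:
            return candidate.mv, candidate.ref_view
    # Claim 7: no candidate available -> NBDV is zero, and the reference View
    # is the fifth reference View whose reference frame is at the same moment.
    return (0, 0), fifth_ref_view
```

The key point of the scheme is visible here: the reference View is not fixed in advance but is taken from the same neighboring block that supplies the vector, which is why a more suitable reference View (and hence a more accurate disparity vector) can be obtained.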
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310497296.8A CN103533361B (en) | 2013-10-21 | 2013-10-21 | Determination method, encoding device and the decoding device of multi-view disparity vector |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103533361A CN103533361A (en) | 2014-01-22 |
CN103533361B true CN103533361B (en) | 2017-01-04 |
Family
ID=49934976
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310497296.8A Expired - Fee Related CN103533361B (en) | 2013-10-21 | 2013-10-21 | Determination method, encoding device and the decoding device of multi-view disparity vector |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103533361B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104394417B (en) * | 2014-12-15 | 2017-07-28 | 哈尔滨工业大学 | A kind of difference vector acquisition methods in multiple view video coding |
CN104768019B (en) * | 2015-04-01 | 2017-08-11 | 北京工业大学 | A kind of adjacent parallax vector acquisition methods towards many deep videos of multi-texturing |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101170692A (en) * | 2006-10-24 | 2008-04-30 | 华为技术有限公司 | Multi-view image encoding and decoding method and encoder and decoder |
CN101222639A (en) * | 2007-01-09 | 2008-07-16 | 华为技术有限公司 | Inter-view prediction method, encoder and decoder of multi-viewpoint video technology |
WO2008108566A1 (en) * | 2007-03-02 | 2008-09-12 | Lg Electronics Inc. | A method and an apparatus for decoding/encoding a video signal |
WO2008085876A3 (en) * | 2007-01-04 | 2008-10-09 | Thomson Licensing | Method and apparatus for video error concealment using high level syntax reference views in multi-view coded video |
- 2013-10-21 CN CN201310497296.8A patent/CN103533361B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN103533361A (en) | 2014-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104662910B (en) | The method and apparatus of virtual depth value in 3D Video codings | |
CN100576934C (en) | Virtual visual point synthesizing method based on the degree of depth and block information | |
CN103686165B (en) | Decoding method and Video Codec in depth image frame | |
CN102609974B (en) | Virtual viewpoint image generation process on basis of depth map segmentation and rendering | |
CN103996220A (en) | Three-dimensional reconstruction method and system in intelligent transportation | |
CN103686139B (en) | Two field picture conversion method, frame video conversion method and device | |
CN105469386B (en) | A kind of method and device of determining stereoscopic camera height and pitch angle | |
CN102821293A (en) | Image processing method and apparatus | |
CN103533361B (en) | Determination method, encoding device and the decoding device of multi-view disparity vector | |
CN103384343A (en) | Image cavity filling method and device thereof | |
US20130071008A1 (en) | Image conversion system using edge information | |
CN103747264B (en) | Method, encoding device and the decoding device of predicted motion vector | |
EP3552396A1 (en) | Lightfield compression using disparity predicted replacement | |
Wu et al. | HeightFormer: Explicit Height Modeling without Extra Data for Camera-only 3D Object Detection in Bird's Eye View | |
CN103139569B (en) | The coding of multi-view point video, coding/decoding method, device and codec | |
CN105574926A (en) | Method and device for generating three-dimensional image | |
EP2966867A1 (en) | Methods and devices for encoding and decoding a sequence of frames representing a 3D scene, and corresponding computer program products and computer-readable medium | |
CN104768012B (en) | The method and encoding device of asymmetrical movement partitioning scheme coding | |
US10354400B2 (en) | Method and system for matching stereo images | |
CN104284194B (en) | Utilize View synthesis predictive coding or the method and device of decoding three-dimensional or multi-view video | |
Kuo et al. | Depth map estimation from a single video sequence | |
CN110245553A (en) | Road surface distance measuring method and device | |
Cho et al. | Occlusion detection and filling in disparity map for multiple view synthesis | |
CN103763557B (en) | A kind of Do NBDV acquisition methods and video decoder | |
Worby et al. | Establishing visual correspondence from multi-resolution graph cuts for stereo-motion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20170104 Termination date: 20181021 |