CN103533361A - Determination method of multi-view disparity vector, coding equipment and decoding equipment - Google Patents

Publication number: CN103533361A (granted as CN103533361B)
Application number: CN201310497296.8A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 陈旭, 郑萧桢
Applicant and current assignee: Huawei Technologies Co., Ltd.
Legal status: Granted; Expired - Fee Related
Classification: Compression Or Coding Systems Of TV Signals

Abstract

An embodiment of the invention provides a method for determining a multi-view disparity vector, for use in the encoding or decoding of an image or video. The method comprises: determining the reference view of a prediction unit (PU) of the current view according to neighbor-block information of the PU, and determining the neighbor-block disparity vector (NBDV) of the PU; and determining the depth-oriented NBDV (Do-NBDV) of the PU according to the reference view and the NBDV of the PU, where the current view is a view selected from the views other than the base view. Because the reference view of the PU is derived from the PU's own neighbor-block information, a more suitable reference view can be obtained, which in turn improves the accuracy of the disparity-vector (DV) calculation.

Description

Method for determining a multi-view disparity vector, encoding device, and decoding device
Technical field
Embodiments of the present invention relate to the field of image processing, and more particularly, to a method for determining a multi-view disparity vector, an encoding device, and a decoding device.
Background
In the encoding and decoding of multi-view images, inter-prediction techniques include inter-view prediction, which further removes the information redundancy between images of different views. The disparity vector (Disparity Vector, DV) represents the displacement between corresponding similar macroblocks in images of different views at the same time instant. The DV locates corresponding macroblocks across views for predictive coding, and the degree of similarity between the corresponding macroblocks determines how much inter-view redundancy can be removed; whether the DV is accurate therefore directly affects the performance of predictive encoding and decoding.
In existing DV calculation methods based on the prediction unit (Prediction Unit, PU), the reference view (View) of a PU is fixed regardless of the view, which makes the calculated DV inaccurate.
Summary of the invention
Embodiments of the present invention provide a method for determining a multi-view disparity vector, for use in encoding or decoding an image or video, which can improve the accuracy of DV calculation.
According to a first aspect, a method for determining a multi-view disparity vector is provided, for use in encoding or decoding an image or video. The method comprises: determining, according to neighbor-block information of a prediction unit (PU) of the current view, the reference view of the PU, and determining the neighbor-block disparity vector (Neighborhood Block Disparity Vector, NBDV) of the PU, where the current view is any view other than the view that is encoded/decoded first in multi-view coding; and determining, according to the reference view of the PU and the NBDV of the PU, the depth-oriented neighbor-block disparity vector (Depth-oriented Neighborhood Block Disparity Vector, Do-NBDV) of the PU.
With reference to the first aspect, in a first possible implementation, the method further comprises: performing multi-view encoding or decoding according to the Do-NBDV of the PU.
With reference to the first aspect or its first possible implementation, in a second possible implementation, determining the reference view of the PU according to the neighbor-block information of the PU of the current view and determining the NBDV of the PU comprise: determining a temporal neighbor block of the PU; obtaining a first motion vector (Motion Vector, MV) of the temporal neighbor block and a first reference view of the temporal neighbor block; and determining the NBDV of the PU to be the first MV and the reference view of the PU to be the first reference view.
With reference to the second possible implementation of the first aspect, in a third possible implementation, determining the reference view of the PU and the NBDV of the PU further comprises: when the temporal neighbor block of the PU is unavailable, determining a first spatial neighbor block of the PU, the first spatial neighbor block being the left neighbor block of the PU; obtaining a second MV of the first spatial neighbor block and a second reference view of the first spatial neighbor block; and determining the NBDV of the PU to be the second MV and the reference view of the PU to be the second reference view.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, determining the reference view of the PU and the NBDV of the PU further comprises: when the first spatial neighbor block of the PU is unavailable, determining a second spatial neighbor block of the PU, the second spatial neighbor block being the above neighbor block of the PU; obtaining a third MV of the second spatial neighbor block and a third reference view of the second spatial neighbor block; and determining the NBDV of the PU to be the third MV and the reference view of the PU to be the third reference view.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation, determining the reference view of the PU and the NBDV of the PU further comprises: when the second spatial neighbor block of the PU is unavailable, determining the motion-prediction compensation block of the PU; obtaining a fourth MV of the motion-prediction compensation block and a fourth reference view of the motion-prediction compensation block; and determining the NBDV of the PU to be the fourth MV and the reference view of the PU to be the fourth reference view.
With reference to the fifth possible implementation of the first aspect, in a sixth possible implementation, determining the reference view of the PU and the NBDV of the PU further comprises: when the PU has no motion-prediction compensation block, determining the NBDV of the PU to be zero; and when the reference frame of the PU and the reference frame of the corresponding PU in a fifth reference view are reference frames of the same time instant, determining the reference view of the PU to be the fifth reference view.
With reference to the first aspect or any of its foregoing possible implementations, in a seventh possible implementation, determining the Do-NBDV of the PU according to the NBDV of the PU and the reference view of the PU comprises: obtaining texture-map information of the reference view of the PU and the depth-map information of the reference view of the PU corresponding to the texture-map information; and determining the Do-NBDV of the PU according to the NBDV of the PU, the texture-map information of the reference view of the PU, and the depth-map information of the reference view of the PU.
According to a second aspect, an encoding device is provided, comprising: a first determining unit, configured to determine, according to neighbor-block information of a prediction unit (PU) of the current view, the reference view of the PU, and to determine the neighbor-block disparity vector (NBDV) of the PU, where the current view is any view other than the view that is encoded/decoded first in multi-view coding; and a second determining unit, configured to determine, according to the reference view of the PU and the NBDV of the PU, the depth-oriented neighbor-block disparity vector (Do-NBDV) of the PU.
With reference to the second aspect, in a first possible implementation, the encoding device further comprises: a coding unit, configured to encode according to the Do-NBDV of the PU.
With reference to the second aspect or its first possible implementation, in a second possible implementation, the first determining unit comprises: a third determining unit, configured to determine a temporal neighbor block of the PU; and a first acquiring unit, configured to obtain a first motion vector (MV) of the temporal neighbor block and a first reference view of the temporal neighbor block. The third determining unit is further configured to determine the NBDV of the PU to be the first MV and the reference view of the PU to be the first reference view.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the third determining unit is further configured to determine, when the temporal neighbor block of the PU is unavailable, a first spatial neighbor block of the PU, the first spatial neighbor block being the left neighbor block of the PU; the first acquiring unit is further configured to obtain a second MV of the first spatial neighbor block and a second reference view of the first spatial neighbor block; and the third determining unit is further configured to determine the NBDV of the PU to be the second MV and the reference view of the PU to be the second reference view.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the third determining unit is further configured to determine, when the first spatial neighbor block of the PU is unavailable, a second spatial neighbor block of the PU, the second spatial neighbor block being the above neighbor block of the PU; the first acquiring unit is further configured to obtain a third MV of the second spatial neighbor block and a third reference view of the second spatial neighbor block; and the third determining unit is further configured to determine the NBDV of the PU to be the third MV and the reference view of the PU to be the third reference view.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation, the third determining unit is further configured to determine, when the second spatial neighbor block of the PU is unavailable, the motion-prediction compensation block of the PU; the first acquiring unit is further configured to obtain a fourth MV of the motion-prediction compensation block and a fourth reference view of the motion-prediction compensation block; and the third determining unit is further configured to determine the NBDV of the PU to be the fourth MV and the reference view of the PU to be the fourth reference view.
With reference to the fifth possible implementation of the second aspect, in a sixth possible implementation, the first determining unit is further configured to determine the NBDV of the PU to be zero when the PU has no motion-prediction compensation block, and to determine the reference view of the PU to be a fifth reference view when the reference frame of the PU and the reference frame of the corresponding PU in the fifth reference view are reference frames of the same time instant.
With reference to the second aspect or any of its foregoing possible implementations, in a seventh possible implementation, the second determining unit comprises: a second acquiring unit, configured to obtain texture-map information of the reference view of the PU and the depth-map information of the reference view of the PU corresponding to the texture-map information; and a fourth determining unit, configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture-map information of the reference view of the PU, and the depth-map information of the reference view of the PU.
According to a third aspect, a decoding device is provided, comprising: a first determining unit, configured to determine, according to neighbor-block information of a prediction unit (PU) of the current view, the reference view of the PU, and to determine the neighbor-block disparity vector (NBDV) of the PU, where the current view is any view other than the base view; and a second determining unit, configured to determine, according to the reference view of the PU and the NBDV of the PU, the depth-oriented neighbor-block disparity vector (Do-NBDV) of the PU.
With reference to the third aspect or its first possible implementation, in a second possible implementation, the first determining unit comprises: a third determining unit, configured to determine a temporal neighbor block of the PU; and a first acquiring unit, configured to obtain a first motion vector (MV) of the temporal neighbor block and a first reference view of the temporal neighbor block. The third determining unit is further configured to determine the NBDV of the PU to be the first MV and the reference view of the PU to be the first reference view.
With reference to the second possible implementation of the third aspect, in a third possible implementation, the third determining unit is further configured to determine, when the temporal neighbor block of the PU is unavailable, a first spatial neighbor block of the PU, the first spatial neighbor block being the left neighbor block of the PU; the first acquiring unit is further configured to obtain a second MV of the first spatial neighbor block and a second reference view of the first spatial neighbor block; and the third determining unit is further configured to determine the NBDV of the PU to be the second MV and the reference view of the PU to be the second reference view.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the third determining unit is further configured to determine, when the first spatial neighbor block of the PU is unavailable, a second spatial neighbor block of the PU, the second spatial neighbor block being the above neighbor block of the PU; the first acquiring unit is further configured to obtain a third MV of the second spatial neighbor block and a third reference view of the second spatial neighbor block; and the third determining unit is further configured to determine the NBDV of the PU to be the third MV and the reference view of the PU to be the third reference view.
With reference to the fourth possible implementation of the third aspect, in a fifth possible implementation, the third determining unit is further configured to determine, when the second spatial neighbor block of the PU is unavailable, the motion-prediction compensation block of the PU; the first acquiring unit is further configured to obtain a fourth MV of the motion-prediction compensation block and a fourth reference view of the motion-prediction compensation block; and the third determining unit is further configured to determine the NBDV of the PU to be the fourth MV and the reference view of the PU to be the fourth reference view.
With reference to the fifth possible implementation of the third aspect, in a sixth possible implementation, the first determining unit is further configured to determine the NBDV of the PU to be zero when the PU has no motion-prediction compensation block, and to determine the reference view of the PU to be a fifth reference view when the reference frame of the PU and the reference frame of the corresponding PU in the fifth reference view are reference frames of the same time instant.
With reference to the third aspect or any of its foregoing possible implementations, in a seventh possible implementation, the second determining unit comprises: a second acquiring unit, configured to obtain texture-map information of the reference view of the PU and the depth-map information of the reference view of the PU corresponding to the texture-map information; and a fourth determining unit, configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture-map information of the reference view of the PU, and the depth-map information of the reference view of the PU.
By determining the reference view of the PU according to the neighbor-block information of the PU of the current view, the embodiments of the present invention can obtain a more suitable reference view and thereby improve the accuracy of the DV calculation.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a 3D encoding/decoding process.
Fig. 2 is a flowchart of a method for determining a multi-view disparity vector according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the temporal and spatial neighbor blocks used for the multi-view disparity vector according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of a method for determining a multi-view disparity vector according to an embodiment of the present invention.
Fig. 5 is a block diagram of an encoding device according to an embodiment of the present invention.
Fig. 6 is a block diagram of a decoding device according to another embodiment of the present invention.
Fig. 7 is a block diagram of an encoding device according to another embodiment of the present invention.
Fig. 8 is a block diagram of a decoding device according to another embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a 3D encoding/decoding process.
The video encoding/decoding process codes multi-view video together with depth maps. In multi-view coding, one of the views serves as the base view (Base View), also called the independent view (Independent View). For example, during encoding the base view is coded differently from the other views, and its depth map is coded with the conventional HEVC scheme. The other views, called dependent views (Dependent View), can exploit the coded information of the independent view; to improve coding efficiency, their coding applies, in addition to the conventional High Efficiency Video Coding (HEVC) scheme, further coding tools such as disparity-compensated prediction, disparity-vector derivation, inter-view motion prediction, inter-view residual prediction, illumination compensation, and view-synthesis prediction.
As shown in Fig. 1, when a three-dimensional (3D) video is encoded and decoded, the encoder first encodes the 3D video and then sends the bitstream to the decoder. After a 3D video decoder, a stereo decoder, and a 2D video decoder respectively decode the bitstream and intermediate-view synthesis is applied, an N-view display, a stereo display, and a 2D display can be generated, respectively.
Fig. 2 is a flowchart of a method for determining a multi-view disparity vector according to an embodiment of the present invention. The method shown in Fig. 2 is used in the encoding or decoding of an image or video; when used for encoding it is performed by an encoding device, and when used for decoding it is performed by a decoding device. The method shown in Fig. 2 comprises:
201. Determine, according to the neighbor-block information of a PU of the current view, the reference view of the PU, and determine the NBDV of the PU, where the current view is any view other than the base view.
202. Determine the Do-NBDV of the PU according to the reference view of the PU and the NBDV of the PU.
By determining the reference view of the PU according to the neighbor-block information of the PU of the current view, the embodiment of the present invention can obtain a more suitable reference view and thereby improve the accuracy of the DV calculation.
In this embodiment of the present invention, the base view in step 201 may also be referred to as the independent view, and the current view is a dependent view.
In the embodiments of the present invention, a neighbor block may be a temporal neighbor block, a spatial neighbor block, a motion-prediction compensation block, or a neighbor block of another form; the present invention is not limited in this respect. The neighbor-block information may include the size, position, and shape of the neighbor block, and may further include the MV of the neighbor block and the reference view of the neighbor block; the present invention is not limited in this respect either.
It should be noted that a neighbor block in the embodiments of the present invention may have the same size and shape as the PU, or a different size and/or shape; the present invention is not limited in this respect.
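The neighbor-block information just described can be pictured as a small record carrying the block's MV and reference view, together with the availability test used by the later embodiments. The following is a minimal sketch; the class and field names are illustrative and do not come from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class NeighborBlock:
    # Motion vector (x, y) of the neighbor block, if it has one.
    mv: Optional[Tuple[int, int]] = None
    # Index of the view the neighbor block references, if any.
    ref_view: Optional[int] = None

    def available(self) -> bool:
        # A neighbor block is usable for NBDV derivation only if it
        # carries both a motion vector and a reference view.
        return self.mv is not None and self.ref_view is not None
```

This matches the notion of "unavailable" used below: a block that does not exist, or exists without an MV, or exists without a reference view, all fail the `available()` check.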
Optionally, in an embodiment, the method shown in Fig. 2 may further comprise: performing multi-view encoding or decoding according to the Do-NBDV of the PU.
Optionally, in an embodiment, in step 201 a temporal neighbor block of the PU may be determined, and a first MV of the temporal neighbor block and a first reference view of the temporal neighbor block are obtained. The NBDV of the PU is then determined to be the first MV, and the reference view of the PU is determined to be the first reference view.
Specifically, in step 201 it is first determined whether the PU has a temporal neighbor block; only when the temporal neighbor block exists are the first MV and the first reference view obtained.
Optionally, in another embodiment, in step 201, when the temporal neighbor block of the PU is unavailable, a first spatial neighbor block of the PU may be determined, the first spatial neighbor block being the left neighbor block of the PU. A second MV of the first spatial neighbor block and a second reference view of the first spatial neighbor block are obtained. The NBDV of the PU is then determined to be the second MV, and the reference view of the PU is determined to be the second reference view.
In the embodiments of the present invention, the first spatial neighbor block has a lower priority than the temporal neighbor block: only when the temporal neighbor block is unavailable is it checked whether the PU has a first spatial neighbor block. In the embodiments of the present invention, the temporal neighbor block being unavailable means that it does not exist, or that it exists but has no first MV, or that it exists but has no first reference view, or another case in which it cannot be used; the present invention is not limited in this respect.
Optionally, in another embodiment, in step 201, when the first spatial neighbor block of the PU is unavailable, a second spatial neighbor block of the PU may be determined, the second spatial neighbor block being the above neighbor block of the PU. A third MV of the second spatial neighbor block and a third reference view of the second spatial neighbor block are obtained. The NBDV of the PU is then determined to be the third MV, and the reference view of the PU is determined to be the third reference view.
In the embodiments of the present invention, the second spatial neighbor block has a lower priority than the first spatial neighbor block: only when the first spatial neighbor block is unavailable is it checked whether the PU has a second spatial neighbor block. In the embodiments of the present invention, the first spatial neighbor block being unavailable means that it does not exist, or that it exists but has no second MV, or that it exists but has no second reference view, or another case in which it cannot be used; the present invention is not limited in this respect.
Fig. 3 is a schematic diagram of the temporal and spatial neighbor blocks used for the multi-view disparity vector according to an embodiment of the present invention. In Fig. 3, T0 is the temporal neighbor block, A1 is the first spatial neighbor block, and A2 is the second spatial neighbor block. T0 has a higher priority than A1, and A1 has a higher priority than A2.
It should be noted that the temporal and spatial neighbor blocks in Fig. 3 are merely schematic. The PU and T0 shown in Fig. 3 are independent of each other and need not contain or overlap one another. Although T0, A1, and A2 are all drawn smaller than the PU in Fig. 3, there is in fact no necessary relation between their sizes and the size of the PU: any of T0, A1, and A2 may be the same size as the PU, larger than the PU, or smaller than the PU. Similarly, although the PU, T0, A1, and A2 all look square in Fig. 3, their shapes may in fact differ, or two or three of them may share the same shape; a shape may be square, rectangular, or of another form, and the present invention is not limited in this respect. In addition, although A1 is drawn at the lower left of the PU in Fig. 3, A1 may in fact be directly to the left of the PU or at its upper left; similarly, A2 may be at the upper right of the PU, directly above the PU, or at the upper left of the PU, and the present invention is not limited in this respect.
It should also be noted that although Fig. 3 shows the temporal neighbor block T0, the first spatial neighbor block A1, and the second spatial neighbor block A2, in step 201 shown in Fig. 2 only T0 may be determined; or, if T0 does not exist, only A1 may be determined; or, if neither T0 nor A1 exists, only A2 may be determined. The present invention is not limited in this respect.
In the embodiments of the present invention, at most one temporal neighbor block and two spatial neighbor blocks are selected. Selecting such a small number of temporal/spatial neighbor blocks reduces the complexity of the DV calculation.
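The prioritized fallback built up over the embodiments above amounts to a linear scan over at most four candidates: temporal block T0, then left neighbor A1, then above neighbor A2, and finally the PU's motion-prediction compensation block (described next), with a zero-vector fallback when every candidate fails. A minimal sketch, assuming each candidate is represented as either None (unavailable) or an ((x, y), ref_view) pair; the function and argument names are illustrative:

```python
def derive_nbdv(t0, a1, a2, mcp):
    """Return (nbdv, ref_view) for a PU by scanning its candidate
    neighbor blocks in priority order.

    Each argument is None when that block is unavailable, otherwise an
    ((x, y), ref_view) pair. When every candidate fails, the NBDV falls
    back to the zero vector and the reference view is left undecided
    (None); it is then resolved by the same-time reference-frame rule.
    """
    for candidate in (t0, a1, a2, mcp):
        if candidate is not None:
            mv, ref_view = candidate
            # The first available candidate supplies BOTH the NBDV and
            # the reference view of the PU.
            return mv, ref_view
    return (0, 0), None
```

The early return is what keeps the complexity low: higher-priority candidates short-circuit the scan, so at most one MV and one reference view are ever read per PU.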
Alternatively, as another embodiment, in step 202, when the second spatial domain adjacent block of this PU is when unavailable, can determine the motion prediction compensation block of this PU.And obtain the 4th MV of this motion prediction compensation block, and this motion prediction compensation block the 4th with reference to View.The NBDV that further determines this PU be the 4th MV, and the reference View of definite this PU is the 4th with reference to View.
Particularly, when time domain adjacent block, the first spatial domain adjacent block and the second spatial domain adjacent block of this PU are all unavailable, can judge whether this PU has motion prediction compensation block.
Optionally, as another embodiment, in step 202, when the PU has no motion prediction compensation block, the NBDV of the PU may be determined to be zero; and, when the reference frame of the PU and the reference frame of a PU of a fifth reference View are reference frames of the same time instant, the reference View of the PU is determined to be the fifth reference View.

Specifically, when the NBDV of the PU is determined to be zero, multiple Views may be able to serve as the reference View of the PU. That is, one of the multiple possible reference Views of the PU needs to be selected as its reference View. To do so, the reference frame of the PU may be obtained, together with the reference frame of the corresponding PU in each of the multiple possible reference Views. If the reference frame of the PU and the reference frame of the corresponding PU in a fifth reference View among the multiple possible reference Views are reference frames of the same time instant, the fifth reference View is determined to be the reference View of the PU.
Optionally, as an embodiment, in step 202, the texture map information of the reference View of the PU may be obtained, together with the depth map information of the reference View of the PU corresponding to that texture map information. The Do-NBDV of the PU is then determined according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.

Specifically, the texture map information and the depth map information of the reference View may first be obtained, the two corresponding to each other. The Do-NBDV of the PU can then be determined according to the NBDV of the PU, the relation between the NBDV of the PU and the texture map information, and the correspondence between the texture map information and the depth map information.

The depth map information here includes information such as the corresponding disparity parameters of the reference View. The already-encoded depth map information of the reference View can thus be used to determine the Do-NBDV of the PU of the current View, and the texture map of the current View can further be encoded according to that Do-NBDV, which improves the DV accuracy for the texture map of the current View.
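To make the texture/depth relation concrete, the following sketch shows one way the refinement could proceed: the coarse NBDV locates a block in the reference View's depth map, and the depth there is converted back into a refined disparity. The linear depth-to-disparity conversion and the four-corner sampling of the depth block are assumptions of this sketch, borrowed from common practice in multi-view coding, not details fixed by the embodiment:

```python
# Illustrative Do-NBDV refinement sketch. The coarse NBDV points into the
# reference View's already-coded depth map; the sampled depth is converted
# to a refined horizontal disparity. The linear conversion below
# (disparity = (depth * scale + offset) >> shift) and the 4-point block
# sampling are assumptions of this sketch, not taken from the embodiment.

def depth_to_disparity(depth_value: int, scale: int, offset: int, shift: int = 8) -> int:
    # Linear model with scale/offset typically derived from camera parameters.
    return (depth_value * scale + offset) >> shift

def refine_do_nbdv(nbdv, pu_pos, depth_map, scale, offset):
    """Refine a coarse NBDV using the depth block it points at (hypothetical API)."""
    x = pu_pos[0] + nbdv[0]
    y = pu_pos[1] + nbdv[1]
    # Sample four corners of a small depth block and take the maximum,
    # since the largest depth sample tends to dominate the true disparity.
    block = [depth_map[y + dy][x + dx] for dy in (0, 3) for dx in (0, 3)]
    horizontal = depth_to_disparity(max(block), scale, offset)
    return (horizontal, 0)  # the vertical component is kept at zero
```

Under this sketch, a more accurate disparity is recovered because the depth map of the reference View was coded before the texture map of the current View, exactly the dependency the paragraph above relies on.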
In this way, with the PU-based DV determination method of the embodiment of the present invention, the reference View of a PU is not fixed across different Views, which improves the accuracy of the DV. Moreover, the determination procedure is simple, the subsequent encoding or decoding performance can be optimized according to this DV, and the redundancy of the subsequent encoding or decoding process can be reduced.
Fig. 4 is a schematic flowchart of a method for determining a multi-view disparity vector according to an embodiment of the present invention. The method shown in Fig. 4 is used in an encoding or decoding process of an image or video; when used for encoding it is performed by an encoding device, and when used for decoding it is performed by a decoding device. The method shown in Fig. 4 comprises:
401. Start encoding or decoding.

Specifically, an image or a video may be encoded, or an image or a video may be decoded.

402. Complete the encoding or decoding of the basic View.

Specifically, the basic View, which may also be called the independent View, is the View that is encoded or decoded first. The basic View may be encoded or decoded in the conventional HEVC manner, also known as H.265.
403. Judge whether the temporal neighboring block of the PU of the current View is available. If it is available, perform step 404; if it is unavailable, perform step 405.

Specifically, the temporal neighboring block being available means that the PU of the current View has a temporal neighboring block, and that a first MV of the temporal neighboring block and a first reference View of the temporal neighboring block can be obtained.

404. Obtain the first MV of the temporal neighboring block and the first reference View of the temporal neighboring block.

405. Judge whether the first spatial neighboring block of the PU of the current View is available. If it is available, perform step 406; if it is unavailable, perform step 407.

Specifically, the first spatial neighboring block is the left neighboring block of the PU. The first spatial neighboring block being available means that the PU of the current View has a first spatial neighboring block, and that a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block can be obtained.

406. Obtain the second MV of the first spatial neighboring block and the second reference View of the first spatial neighboring block.

407. Judge whether the second spatial neighboring block of the PU of the current View is available. If it is available, perform step 408; if it is unavailable, perform step 409.

Specifically, the second spatial neighboring block is the top neighboring block of the PU. The second spatial neighboring block being available means that the PU of the current View has a second spatial neighboring block, and that a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block can be obtained.

408. Obtain the third MV of the second spatial neighboring block and the third reference View of the second spatial neighboring block.
409. Judge whether the PU of the current View has a motion prediction compensation block. If it does, perform step 410; if it does not, perform step 411.

410. Obtain the fourth MV of the motion prediction compensation block and the fourth reference View of the motion prediction compensation block.
411. Determine that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames of the same time instant, determine that the reference View of the PU is the fifth reference View.

412. Determine the reference View of the PU of the current View, and determine the NBDV of the PU of the current View.

Specifically, if step 404 was performed, the reference View of the PU of the current View is the first reference View and its NBDV is the first MV. If step 406 was performed, the reference View is the second reference View and the NBDV is the second MV. If step 408 was performed, the reference View is the third reference View and the NBDV is the third MV. If step 410 was performed, the reference View is the fourth reference View and the NBDV is the fourth MV. If step 411 was performed, the reference View of the PU of the current View is the fifth reference View determined in step 411, and its NBDV is zero.
413. Determine the Do-NBDV of the PU of the current View.

Specifically, the texture map information of the reference View of the PU of the current View may be obtained, together with the depth map information of that reference View corresponding to the texture map information. The Do-NBDV of the PU of the current View is then determined according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.

414. Complete the encoding or decoding processing.

Specifically, predictive encoding processing, or predictive decoding processing, is performed according to the Do-NBDV.
In this way, with the PU-based DV determination method of the embodiment of the present invention, the reference View of a PU is not fixed across different Views, so a more suitable reference View can be obtained and the accuracy of the DV can be improved. Moreover, the determination procedure is simple, the subsequent encoding or decoding performance can be optimized according to this DV, and the redundancy of the subsequent encoding or decoding process can be reduced.
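The priority-ordered derivation of steps 403 through 412 can be sketched as follows. The data model (`Block`, `View`, `PU` and their field names) is a hypothetical illustration, since the embodiment fixes only the checking order: temporal neighbor, then left spatial neighbor, then top spatial neighbor, then motion prediction compensation block, then the zero-NBDV fallback:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical data model; the embodiment does not fix an API, so these
# class and field names are illustrative assumptions.

@dataclass
class Block:
    available: bool
    mv: Tuple[int, int] = (0, 0)
    ref_view: int = -1

@dataclass
class View:
    view_id: int
    colocated_ref_time: int  # time instant of the co-located PU's reference frame

@dataclass
class PU:
    temporal: Optional[Block]      # T0
    left: Optional[Block]          # A1
    top: Optional[Block]           # A2
    motion_comp: Optional[Block]   # motion prediction compensation block
    ref_frame_time: int = 0
    candidate_views: List[View] = field(default_factory=list)

def derive_nbdv(pu: PU):
    """Return (NBDV, reference View id) following the order of steps 403-411."""
    for blk in (pu.temporal, pu.left, pu.top, pu.motion_comp):
        if blk is not None and blk.available:
            return blk.mv, blk.ref_view
    # Step 411 fallback: the NBDV is zero, and the reference View is the
    # candidate whose co-located PU uses a reference frame of the same
    # time instant as the current PU's reference frame.
    for v in pu.candidate_views:
        if v.colocated_ref_time == pu.ref_frame_time:
            return (0, 0), v.view_id
    return (0, 0), -1
```

For instance, when T0 is unavailable but A1 carries an MV, the sketch returns A1's MV and reference View, matching the branch from step 405 to step 406; when no candidate block exists, it returns a zero NBDV and the View selected by the same-time-instant reference-frame check.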
Fig. 5 is a block diagram of an encoding device according to an embodiment of the present invention. The encoding device 500 shown in Fig. 5 comprises a first determining unit 501 and a second determining unit 502.

The first determining unit 501 is configured to determine, according to neighboring block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, the current View being a View other than the basic View. The second determining unit 502 is configured to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
In the embodiment of the present invention, by determining the reference View of the PU according to the neighboring block information of the PU of the current View, a more suitable reference View can be obtained, and the accuracy of the DV computation in the encoding process can thereby be improved.

The encoding device 500 can implement each process implemented by the encoding device in the embodiments of Fig. 2 and Fig. 4; to avoid repetition, details are not repeated here. The encoding device 500 can be used in an encoding process of an image and/or a video.

Optionally, as an embodiment, the encoding device 500 may further comprise an encoding unit. The encoding unit is configured to perform encoding according to the Do-NBDV of the PU.
Optionally, as another embodiment, the first determining unit 501 comprises a third determining unit 503 and a first acquiring unit 504. The third determining unit 503 may be configured to determine the temporal neighboring block of the PU. The first acquiring unit 504 may be configured to obtain a first MV of the temporal neighboring block and a first reference View of the temporal neighboring block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.

Optionally, as another embodiment, the third determining unit 503 may further be configured to determine, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU, the first spatial neighboring block being the left neighboring block of the PU. The first acquiring unit 504 may further be configured to obtain a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.

Optionally, as another embodiment, the third determining unit 503 may further be configured to determine, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU, the second spatial neighboring block being the top neighboring block of the PU. The first acquiring unit 504 may further be configured to obtain a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.

Optionally, as another embodiment, the third determining unit 503 may further be configured to determine, when the second spatial neighboring block of the PU is unavailable, the motion prediction compensation block of the PU. The first acquiring unit 504 may further be configured to obtain a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block. The third determining unit 503 may further be configured to determine that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.

Optionally, as another embodiment, the first determining unit 501 may further be configured to determine, when the PU has no motion prediction compensation block, that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames of the same time instant, that the reference View of the PU is the fifth reference View.

Optionally, as another embodiment, the second determining unit 502 comprises a second acquiring unit 505 and a fourth determining unit 506. The second acquiring unit 505 may be configured to obtain the texture map information of the reference View of the PU and the depth map information of the reference View of the PU corresponding to the texture map information. The fourth determining unit 506 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
Fig. 6 is a block diagram of a decoding device according to an embodiment of the present invention. The decoding device 600 shown in Fig. 6 comprises a first determining unit 601 and a second determining unit 602.

The first determining unit 601 is configured to determine, according to neighboring block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, the current View being a View other than the basic View. The second determining unit 602 is configured to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
In the embodiment of the present invention, by determining the reference View of the PU according to the neighboring block information of the PU of the current View, a more suitable reference View can be obtained, and the accuracy of the DV computation in the decoding process can thereby be improved.

The decoding device 600 can implement each process implemented by the decoding device in the embodiments of Fig. 2 and Fig. 4; to avoid repetition, details are not repeated here. The decoding device 600 can be used in a decoding process of an image and/or a video.

Optionally, as an embodiment, the decoding device 600 may further comprise a decoding unit. The decoding unit is configured to perform decoding according to the Do-NBDV of the PU.
Optionally, as another embodiment, the first determining unit 601 comprises a third determining unit 603 and a first acquiring unit 604. The third determining unit 603 may be configured to determine the temporal neighboring block of the PU. The first acquiring unit 604 may be configured to obtain a first MV of the temporal neighboring block and a first reference View of the temporal neighboring block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.

Optionally, as another embodiment, the third determining unit 603 may further be configured to determine, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU, the first spatial neighboring block being the left neighboring block of the PU. The first acquiring unit 604 may further be configured to obtain a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.

Optionally, as another embodiment, the third determining unit 603 may further be configured to determine, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU, the second spatial neighboring block being the top neighboring block of the PU. The first acquiring unit 604 may further be configured to obtain a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.

Optionally, as another embodiment, the third determining unit 603 may further be configured to determine, when the second spatial neighboring block of the PU is unavailable, the motion prediction compensation block of the PU. The first acquiring unit 604 may further be configured to obtain a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block. The third determining unit 603 may further be configured to determine that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.

Optionally, as another embodiment, the first determining unit 601 may further be configured to determine, when the PU has no motion prediction compensation block, that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames of the same time instant, that the reference View of the PU is the fifth reference View.

Optionally, as another embodiment, the second determining unit 602 comprises a second acquiring unit 605 and a fourth determining unit 606. The second acquiring unit 605 may be configured to obtain the texture map information of the reference View of the PU and the depth map information of the reference View of the PU corresponding to the texture map information. The fourth determining unit 606 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
Fig. 7 is a block diagram of an encoding device according to another embodiment of the present invention. The encoding device 700 shown in Fig. 7 comprises a processor 701, a memory 702 and a transmission circuit 703.

The processor 701 is configured to determine, according to neighboring block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, the current View being a View other than the basic View; and to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.

In the embodiment of the present invention, by determining the reference View of the PU according to the neighboring block information of the PU of the current View, a more suitable reference View can be obtained, and the accuracy of the DV computation in the encoding process can thereby be improved.
The components of the encoding device 700 are coupled through a bus system 704, where the bus system 704 comprises, in addition to a data bus, a power bus, a control bus and a status signal bus. For clarity of illustration, however, the various buses are all denoted as the bus system 704 in Fig. 7.
The methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 701. The processor 701 may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. A software module may reside in a storage medium mature in the art, such as a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium resides in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above methods in combination with its hardware.
The encoding device 700 can implement each process implemented by the encoding device in the embodiments of Fig. 2 and Fig. 4; to avoid repetition, details are not repeated here. The encoding device 700 can be used in an encoding process of an image and/or a video.

Optionally, as an embodiment, the processor 701 may further be configured to perform encoding according to the Do-NBDV of the PU.
Optionally, as another embodiment, the processor 701 may further be configured to determine the temporal neighboring block of the PU. The transmission circuit 703 may be configured to obtain a first MV of the temporal neighboring block and a first reference View of the temporal neighboring block. The processor 701 may further be configured to determine that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.

Optionally, as another embodiment, the processor 701 may further be configured to determine, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU, the first spatial neighboring block being the left neighboring block of the PU. The transmission circuit 703 may further be configured to obtain a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block. The processor 701 may further be configured to determine that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.

Optionally, as another embodiment, the processor 701 may further be configured to determine, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU, the second spatial neighboring block being the top neighboring block of the PU. The transmission circuit 703 may further be configured to obtain a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block. The processor 701 may further be configured to determine that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.

Optionally, as another embodiment, the processor 701 may further be configured to determine, when the second spatial neighboring block of the PU is unavailable, the motion prediction compensation block of the PU. The transmission circuit 703 may further be configured to obtain a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block. The processor 701 may further be configured to determine that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.

Optionally, as another embodiment, the processor 701 may further be configured to determine, when the PU has no motion prediction compensation block, that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames of the same time instant, that the reference View of the PU is the fifth reference View.

Optionally, as another embodiment, the transmission circuit 703 may be configured to obtain the texture map information of the reference View of the PU and the depth map information of the reference View of the PU corresponding to the texture map information. The processor 701 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
Fig. 8 is a block diagram of a decoding device according to another embodiment of the present invention. The decoding device 800 shown in Fig. 8 comprises a processor 801, a memory 802 and a transmission circuit 803.

The processor 801 is configured to determine, according to neighboring block information of a PU of the current View, the reference View of the PU, and to determine the NBDV of the PU, the current View being a View other than the basic View; and to determine the Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.

In the embodiment of the present invention, by determining the reference View of the PU according to the neighboring block information of the PU of the current View, a more suitable reference View can be obtained, and the accuracy of the DV computation in the decoding process can thereby be improved.
The components of the decoding device 800 are coupled through a bus system 804, where the bus system 804 comprises, in addition to a data bus, a power bus, a control bus and a status signal bus. For clarity of illustration, however, the various buses are all denoted as the bus system 804 in Fig. 8.
The methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 801. The processor 801 may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 801 or by instructions in the form of software. The processor 801 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. A software module may reside in a storage medium mature in the art, such as a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium resides in the memory 802, and the processor 801 reads the information in the memory 802 and completes the steps of the above methods in combination with its hardware.
The decoding device 800 can implement each process implemented by the decoding device in the embodiments of Fig. 2 and Fig. 4; to avoid repetition, details are not repeated here. The decoding device 800 can be used in a decoding process of an image and/or a video.

Optionally, as an embodiment, the processor 801 may further be configured to perform decoding according to the Do-NBDV of the PU.
Optionally, as another embodiment, the processor 801 may further be configured to determine the temporal neighboring block of the PU. The transmission circuit 803 may be configured to obtain a first MV of the temporal neighboring block and a first reference View of the temporal neighboring block. The processor 801 may further be configured to determine that the NBDV of the PU is the first MV and that the reference View of the PU is the first reference View.

Optionally, as another embodiment, the processor 801 may further be configured to determine, when the temporal neighboring block of the PU is unavailable, the first spatial neighboring block of the PU, the first spatial neighboring block being the left neighboring block of the PU. The transmission circuit 803 may further be configured to obtain a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block. The processor 801 may further be configured to determine that the NBDV of the PU is the second MV and that the reference View of the PU is the second reference View.

Optionally, as another embodiment, the processor 801 may further be configured to determine, when the first spatial neighboring block of the PU is unavailable, the second spatial neighboring block of the PU, the second spatial neighboring block being the top neighboring block of the PU. The transmission circuit 803 may further be configured to obtain a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block. The processor 801 may further be configured to determine that the NBDV of the PU is the third MV and that the reference View of the PU is the third reference View.

Optionally, as another embodiment, the processor 801 may further be configured to determine, when the second spatial neighboring block of the PU is unavailable, the motion prediction compensation block of the PU. The transmission circuit 803 may further be configured to obtain a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block. The processor 801 may further be configured to determine that the NBDV of the PU is the fourth MV and that the reference View of the PU is the fourth reference View.

Optionally, as another embodiment, the processor 801 may further be configured to determine, when the PU has no motion prediction compensation block, that the NBDV of the PU is zero, and, when the reference frame of the PU and the reference frame of the PU of a fifth reference View are reference frames of the same time instant, that the reference View of the PU is the fifth reference View.

Optionally, as another embodiment, the transmission circuit 803 may be configured to obtain the texture map information of the reference View of the PU and the depth map information of the reference View of the PU corresponding to the texture map information. The processor 801 may be configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
Those of ordinary skill in the art may recognize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present invention.

It may be clearly understood by persons skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiment that provide in the application, should be understood that disclosed system, apparatus and method can realize by another way.For example, device embodiment described above is only schematic, for example, the division of described unit, be only that a kind of logic function is divided, during actual realization, can have other dividing mode, for example a plurality of unit or assembly can in conjunction with or can be integrated into another system, or some features can ignore, or do not carry out.Another point, shown or discussed coupling each other or direct-coupling or communication connection can be by some interfaces, indirect coupling or the communication connection of device or unit can be electrically, machinery or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one position, or may be distributed on a plurality of network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or some of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (24)

1. A method for determining a multi-view disparity vector, the method being used in a process of encoding or decoding an image or a video, wherein the method comprises:
determining a reference viewpoint View of a prediction unit PU of a current viewpoint View according to neighboring block information of the PU, and determining a neighboring block disparity vector NBDV of the PU, wherein the current View is a View other than a base View; and
determining a depth-map-based neighboring block disparity vector Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
2. The method according to claim 1, wherein the method further comprises:
performing multi-view encoding or decoding according to the Do-NBDV of the PU.
3. The method according to claim 1 or 2, wherein the determining a reference View of the PU according to neighboring block information of the PU of the current View, and determining an NBDV of the PU comprises:
determining a temporal neighboring block of the PU;
obtaining a first motion vector MV of the temporal neighboring block and a first reference View of the temporal neighboring block; and
determining that the NBDV of the PU is the first MV, and that the reference View of the PU is the first reference View.
4. The method according to claim 3, wherein the determining a reference View of the PU according to neighboring block information of the PU of the current View, and determining an NBDV of the PU comprises:
when the temporal neighboring block of the PU is unavailable, determining a first spatial neighboring block of the PU, wherein the first spatial neighboring block is a left neighboring block of the PU;
obtaining a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block; and
determining that the NBDV of the PU is the second MV, and that the reference View of the PU is the second reference View.
5. The method according to claim 4, wherein the determining a reference View of the PU according to neighboring block information of the PU of the current View, and determining an NBDV of the PU comprises:
when the first spatial neighboring block of the PU is unavailable, determining a second spatial neighboring block of the PU, wherein the second spatial neighboring block is an above neighboring block of the PU;
obtaining a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block; and
determining that the NBDV of the PU is the third MV, and that the reference View of the PU is the third reference View.
6. The method according to claim 5, wherein the determining a reference View of the PU according to neighboring block information of the PU of the current View, and determining an NBDV of the PU comprises:
when the second spatial neighboring block of the PU is unavailable, determining a motion prediction compensation block of the PU;
obtaining a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block; and
determining that the NBDV of the PU is the fourth MV, and that the reference View of the PU is the fourth reference View.
7. The method according to claim 6, wherein the determining a reference View of the PU according to neighboring block information of the PU of the current View, and determining an NBDV of the PU comprises:
when the PU has no motion prediction compensation block, determining that the NBDV of the PU is zero; and
when a reference frame of the PU and a reference frame of a corresponding PU of a fifth reference View are reference frames at a same moment, determining that the reference View of the PU is the fifth reference View.
8. The method according to any one of claims 1 to 7, wherein the determining a Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU comprises:
obtaining texture map information of the reference View of the PU, and depth map information, corresponding to the texture map information, of the reference View of the PU; and
determining the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
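The availability-checking order recited in claims 3 to 7 can be summarized by the following illustrative sketch. The data model is hypothetical (each candidate block is reduced to a simple `(MV, reference View)` pair, or `None` when unavailable); a real codec derives these from coded neighbor data, and this sketch is only a summary of the claimed selection order, not an implementation of it.

```python
# Illustrative sketch of the NBDV / reference-View selection order of
# claims 3-7 (hypothetical data model, non-normative).

def derive_nbdv(temporal, left, above, mcp, fifth_ref_view=None):
    """Each candidate is either None (unavailable) or an (mv, ref_view) pair.

    Candidates are checked in the claimed order: temporal neighboring
    block, left spatial neighboring block, above spatial neighboring
    block, then the motion prediction compensation block. If none is
    available, the NBDV is zero, and the reference View falls back to
    the fifth reference View (assumed here to have a reference frame at
    the same moment as the PU's, per claim 7).
    """
    for block in (temporal, left, above, mcp):
        if block is not None:
            mv, ref_view = block
            return mv, ref_view       # NBDV := MV, reference View := block's View
    return (0, 0), fifth_ref_view     # zero NBDV, fifth reference View fallback
```

For example, `derive_nbdv(None, ((4, 0), 1), None, None)` selects the left spatial neighbor's MV and View because the temporal candidate is unavailable.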
9. An encoding device, wherein the encoding device comprises:
a first determining unit, configured to determine a reference viewpoint View of a prediction unit PU of a current viewpoint View according to neighboring block information of the PU, and determine a neighboring block disparity vector NBDV of the PU, wherein the current View is a View other than a base View; and
a second determining unit, configured to determine a depth-map-based neighboring block disparity vector Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
10. The encoding device according to claim 9, wherein the encoding device further comprises:
an encoding unit, configured to perform encoding according to the Do-NBDV of the PU.
11. The encoding device according to claim 9 or 10, wherein the first determining unit comprises:
a third determining unit, configured to determine a temporal neighboring block of the PU; and
a first obtaining unit, configured to obtain a first motion vector MV of the temporal neighboring block and a first reference View of the temporal neighboring block; wherein
the third determining unit is further configured to determine that the NBDV of the PU is the first MV, and that the reference View of the PU is the first reference View.
12. The encoding device according to claim 11, wherein:
the third determining unit is further configured to: when the temporal neighboring block of the PU is unavailable, determine a first spatial neighboring block of the PU, wherein the first spatial neighboring block is a left neighboring block of the PU;
the first obtaining unit is further configured to obtain a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block; and
the third determining unit is further configured to determine that the NBDV of the PU is the second MV, and that the reference View of the PU is the second reference View.
13. The encoding device according to claim 12, wherein:
the third determining unit is further configured to: when the first spatial neighboring block of the PU is unavailable, determine a second spatial neighboring block of the PU, wherein the second spatial neighboring block is an above neighboring block of the PU;
the first obtaining unit is further configured to obtain a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block; and
the third determining unit is further configured to determine that the NBDV of the PU is the third MV, and that the reference View of the PU is the third reference View.
14. The encoding device according to claim 13, wherein:
the third determining unit is further configured to: when the second spatial neighboring block of the PU is unavailable, determine a motion prediction compensation block of the PU;
the first obtaining unit is further configured to obtain a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block; and
the third determining unit is further configured to determine that the NBDV of the PU is the fourth MV, and that the reference View of the PU is the fourth reference View.
15. The encoding device according to claim 14, wherein the first determining unit is further configured to: when the PU has no motion prediction compensation block, determine that the NBDV of the PU is zero; and when a reference frame of the PU and a reference frame of a PU of a fifth reference View are reference frames at a same moment, determine that the reference View of the PU is the fifth reference View.
16. The encoding device according to any one of claims 9 to 15, wherein the second determining unit comprises:
a second obtaining unit, configured to obtain texture map information of the reference View of the PU, and depth map information, corresponding to the texture map information, of the reference View of the PU; and
a fourth determining unit, configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
17. A decoding device, wherein the decoding device comprises:
a first determining unit, configured to determine a reference viewpoint View of a prediction unit PU of a current viewpoint View according to neighboring block information of the PU, and determine a neighboring block disparity vector NBDV of the PU, wherein the current View is a View other than a base View; and
a second determining unit, configured to determine a depth-map-based neighboring block disparity vector Do-NBDV of the PU according to the reference View of the PU and the NBDV of the PU.
18. The decoding device according to claim 17, wherein the decoding device further comprises:
a decoding unit, configured to perform decoding according to the Do-NBDV of the PU.
19. The decoding device according to claim 17 or 18, wherein the first determining unit comprises:
a third determining unit, configured to determine a temporal neighboring block of the PU; and
a first obtaining unit, configured to obtain a first motion vector MV of the temporal neighboring block and a first reference View of the temporal neighboring block; wherein
the third determining unit is further configured to determine that the NBDV of the PU is the first MV, and that the reference View of the PU is the first reference View.
20. The decoding device according to claim 19, wherein:
the third determining unit is further configured to: when the temporal neighboring block of the PU is unavailable, determine a first spatial neighboring block of the PU, wherein the first spatial neighboring block is a left neighboring block of the PU;
the first obtaining unit is further configured to obtain a second MV of the first spatial neighboring block and a second reference View of the first spatial neighboring block; and
the third determining unit is further configured to determine that the NBDV of the PU is the second MV, and that the reference View of the PU is the second reference View.
21. The decoding device according to claim 20, wherein:
the third determining unit is further configured to: when the first spatial neighboring block of the PU is unavailable, determine a second spatial neighboring block of the PU, wherein the second spatial neighboring block is an above neighboring block of the PU;
the first obtaining unit is further configured to obtain a third MV of the second spatial neighboring block and a third reference View of the second spatial neighboring block; and
the third determining unit is further configured to determine that the NBDV of the PU is the third MV, and that the reference View of the PU is the third reference View.
22. The decoding device according to claim 21, wherein:
the third determining unit is further configured to: when the second spatial neighboring block of the PU is unavailable, determine a motion prediction compensation block of the PU;
the first obtaining unit is further configured to obtain a fourth MV of the motion prediction compensation block and a fourth reference View of the motion prediction compensation block; and
the third determining unit is further configured to determine that the NBDV of the PU is the fourth MV, and that the reference View of the PU is the fourth reference View.
23. The decoding device according to claim 22, wherein the first determining unit is further configured to: when the PU has no motion prediction compensation block, determine that the NBDV of the PU is zero; and when a reference frame of the PU and a reference frame of a PU of a fifth reference View are reference frames at a same moment, determine that the reference View of the PU is the fifth reference View.
24. The decoding device according to any one of claims 17 to 23, wherein the second determining unit comprises:
a second obtaining unit, configured to obtain texture map information of the reference View of the PU, and depth map information, corresponding to the texture map information, of the reference View of the PU; and
a fourth determining unit, configured to determine the Do-NBDV of the PU according to the NBDV of the PU, the texture map information of the reference View of the PU, and the depth map information of the reference View of the PU.
CN201310497296.8A 2013-10-21 2013-10-21 Determination method, encoding device and the decoding device of multi-view disparity vector Expired - Fee Related CN103533361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310497296.8A CN103533361B (en) 2013-10-21 2013-10-21 Determination method, encoding device and the decoding device of multi-view disparity vector

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310497296.8A CN103533361B (en) 2013-10-21 2013-10-21 Determination method, encoding device and the decoding device of multi-view disparity vector

Publications (2)

Publication Number Publication Date
CN103533361A true CN103533361A (en) 2014-01-22
CN103533361B CN103533361B (en) 2017-01-04

Family

ID=49934976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310497296.8A Expired - Fee Related CN103533361B (en) 2013-10-21 2013-10-21 Determination method, encoding device and the decoding device of multi-view disparity vector

Country Status (1)

Country Link
CN (1) CN103533361B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394417A (en) * 2014-12-15 2015-03-04 哈尔滨工业大学 Method for acquiring parallax vector in multi-view video coding
WO2016155070A1 (en) * 2015-04-01 2016-10-06 北京工业大学 Method for acquiring adjacent disparity vectors in multi-texture multi-depth video

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170692A (en) * 2006-10-24 2008-04-30 华为技术有限公司 Multi-view image encoding and decoding method and encoder and decoder
CN101222639A (en) * 2007-01-09 2008-07-16 华为技术有限公司 Inter-view prediction method, encoder and decoder of multi-viewpoint video technology
WO2008085876A2 (en) * 2007-01-04 2008-07-17 Thomson Licensing Method and apparatus for video error concealment using high level syntax reference views in multi-view coded video
WO2008108566A1 (en) * 2007-03-02 2008-09-12 Lg Electronics Inc. A method and an apparatus for decoding/encoding a video signal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170692A (en) * 2006-10-24 2008-04-30 华为技术有限公司 Multi-view image encoding and decoding method and encoder and decoder
WO2008085876A2 (en) * 2007-01-04 2008-07-17 Thomson Licensing Method and apparatus for video error concealment using high level syntax reference views in multi-view coded video
WO2008085876A3 (en) * 2007-01-04 2008-10-09 Thomson Licensing Method and apparatus for video error concealment using high level syntax reference views in multi-view coded video
CN101222639A (en) * 2007-01-09 2008-07-16 华为技术有限公司 Inter-view prediction method, encoder and decoder of multi-viewpoint video technology
WO2008108566A1 (en) * 2007-03-02 2008-09-12 Lg Electronics Inc. A method and an apparatus for decoding/encoding a video signal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394417A (en) * 2014-12-15 2015-03-04 哈尔滨工业大学 Method for acquiring parallax vector in multi-view video coding
CN104394417B (en) * 2014-12-15 2017-07-28 哈尔滨工业大学 Method for acquiring a disparity vector in multi-view video coding
WO2016155070A1 (en) * 2015-04-01 2016-10-06 北京工业大学 Method for acquiring adjacent disparity vectors in multi-texture multi-depth video
US9883200B2 (en) 2015-04-01 2018-01-30 Beijing University Of Technology Method of acquiring neighboring disparity vectors for multi-texture and multi-depth video

Also Published As

Publication number Publication date
CN103533361B (en) 2017-01-04

Similar Documents

Publication Publication Date Title
EP2898688B1 (en) Method and apparatus for deriving virtual depth values in 3d video coding
US11317074B2 (en) Method and apparatus for processing data for three-dimensional image
CN103621093A (en) Method and apparatus of texture image compression in 3D video coding
CN104079944A (en) Video coding motion vector list establishing method and system
JP2015533038A5 (en)
CN104333760A (en) Three-dimensional image coding method, three-dimensional image decoding method and related devices
US9191677B2 (en) Method and apparatus for encoding image and method and appartus for decoding image
CA2692250A1 (en) Video encoding and decoding methods using residual prediction, and corresponding apparatuses
CN104284188A (en) Partitioning method for prediction block, encoding device and decoding device
CN102685505B (en) Intra-frame prediction method and prediction device
JP2017526268A (en) Arrangement of adaptive search windows for video coding
US20150264356A1 (en) Method of Simplified Depth Based Block Partitioning
US20200036996A1 (en) Method and apparatus for determining a motion vector
CN103636198A (en) Image processing method and apparatus for 3d video
EP3552396A1 (en) Lightfield compression using disparity predicted replacement
CN103533361A (en) Determination method of multi-view disparity vector, coding equipment and decoding equipment
CN103747264A (en) Motion vector prediction method, coding equipment and decoding equipment
US20150358643A1 (en) Method of Depth Coding Compatible with Arbitrary Bit-Depth
US9848205B2 (en) Method for predictive coding of depth maps with plane points
CN105122808B (en) Three-dimensional or multi-view video coding or decoded method and device
CN103139569B (en) The coding of multi-view point video, coding/decoding method, device and codec
US20170013265A1 (en) Encoding/Decoding Processing Method and Apparatus for Image Segmentation Manner
EP2966867A1 (en) Methods and devices for encoding and decoding a sequence of frames representing a 3D scene, and corresponding computer program products and computer-readable medium
CN103916672A (en) Data coding and decoding method, related device and system
CN103997635A (en) Synthesis viewpoint distortion prediction method and coding method of free viewpoint video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170104

Termination date: 20181021