CN104995915A - Coding-decoding method, and coder-decoder - Google Patents


Info

Publication number
CN104995915A
CN104995915A (application CN201580000242.3A)
Authority
CN
China
Prior art keywords
coordinate, point, block, difference vector, upper left
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201580000242.3A
Other languages
Chinese (zh)
Other versions
CN104995915B (en)
Inventor
陈旭
郑萧桢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN104995915A
Application granted
Publication of CN104995915B
Legal status: Active (granted)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/174: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Abstract

Embodiments of the present invention provide an encoding/decoding method and a codec. The encoding method includes the following steps: determining a current luma coding block from a texture map; determining the coordinates of the top-left luma sample of the luma coding block, where the coordinates indicate the position of the top-left luma sample of the luma coding block relative to the top-left luma sample of the texture map; obtaining a disparity vector between the current viewpoint corresponding to the texture map and a reference viewpoint; determining the X coordinate of a target sample in the depth map corresponding to the reference viewpoint according to the X coordinate of the top-left luma sample of the luma coding block and the disparity vector; determining the Y coordinate of the top-left luma sample of the luma coding block as the Y coordinate of the target sample in the depth map; determining the depth information corresponding to each sample of the luma coding block according to the X and Y coordinates of the target sample in the depth map and the size of the luma coding block; obtaining a block partitioning mode for the luma coding block according to the depth information, and partitioning the luma coding block; and encoding the partitioned luma coding block. The method can improve coding efficiency.

Description

Encoding/decoding method and codec
Technical field
Embodiments of the present invention relate to the field of video image encoding and decoding and, more specifically, to an encoding/decoding method and a codec.
Background
In video encoding and decoding frameworks, a hybrid coding structure is generally used to encode and decode a video sequence. The encoder side of the hybrid coding structure generally includes a prediction module, a transform module, a quantization module, and an entropy coding module; the decoder side generally includes an entropy decoding module, an inverse quantization module, an inverse transform module, and a prediction compensation module. The combination of these encoding and decoding modules can effectively remove redundant information from the video sequence while ensuring that the decoder side obtains the coded image of the video sequence.
In video encoding and decoding frameworks, an image of a video sequence is usually divided into image blocks for coding: an image is divided into several image blocks, and these image blocks are encoded and decoded using the above modules.
Among the above modules, the prediction module is used at the encoder side to obtain prediction block information for an image block of the coded image of the video sequence, from which the residual of the image block is obtained; the prediction compensation module is used at the decoder side to obtain prediction block information for the image block currently being decoded, which is combined with the decoded image block residual to obtain the decoded image block. The prediction module usually comprises two techniques: intra prediction and inter prediction. Intra prediction uses spatial pixel information of the current image block to remove its redundant information and obtain the residual; inter prediction uses pixel information of encoded or decoded images neighboring the current image to remove redundant information of the current image block and obtain the residual. In inter prediction, a neighboring image used to predict the current image is called a reference image.
Both intra prediction and inter prediction involve block partitioning, in which an image block is divided into more than one region (partition), and intra or inter prediction is then performed in units of these regions. A common block partitioning method divides a square image block into two rectangular regions along the horizontal or vertical direction (rectangular partition), as shown in A and B of Fig. 1, where a square image block is divided into two rectangular regions along the horizontal and vertical directions, respectively. In addition, a square image block can also be divided into two non-rectangular regions (non-rectangular partition), as shown in Fig. 2.
The above block partitioning techniques can also be used in 3D video coding. In texture-map coding for 3D video, depth-based block partitioning is a commonly used method. Its principle is to use the depth value corresponding to each sample in a luma coding block to generate a binary partition template, and then to partition the luma coding block using that binary template. This method is referred to as depth-based block partitioning (DBBP).
In the prior art, to partition the current luma coding block using the depth values of its samples, the depth value corresponding to each sample in the current luma coding block must first be determined. However, because depth coding of the current viewpoint has not yet started, these depth values cannot be obtained directly from the depth map corresponding to the texture map of the current viewpoint. Instead, a disparity vector (DV) must be used to obtain the depth value of each sample in the current luma coding block from the depth map corresponding to an already-coded reference viewpoint (as shown in Fig. 3). Owing to the parallax between viewpoints, locating the depth values of the samples of the current viewpoint's luma coding block in the reference viewpoint's depth map requires a large number of clip and shift operations, which reduces coding efficiency.
Summary of the invention
Embodiments of the present invention provide an encoding/decoding method and a codec, to improve coding efficiency.
According to a first aspect, an encoding method is provided, including: determining a current luma coding block from a texture map; determining the coordinates of the top-left luma sample of the luma coding block, where the coordinates indicate the position of the top-left luma sample of the luma coding block relative to the top-left luma sample of the texture map and include an X coordinate and a Y coordinate; obtaining a disparity vector between the current viewpoint corresponding to the texture map and a reference viewpoint; determining, according to the X coordinate of the top-left luma sample of the luma coding block and the disparity vector, the X coordinate of a target sample in the depth map corresponding to the reference viewpoint, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma coding block; determining the Y coordinate of the top-left luma sample of the luma coding block as the Y coordinate of the target sample in the depth map; determining, according to the X and Y coordinates of the target sample in the depth map and the size of the luma coding block, the depth value information corresponding to each sample in the luma coding block; obtaining, according to the depth value information, a block partitioning mode for the luma coding block, and partitioning the luma coding block; and encoding the partitioned luma coding block.
With reference to the first aspect, in an implementation of the first aspect, determining the X coordinate of the target sample in the depth map corresponding to the reference viewpoint according to the X coordinate of the top-left luma sample of the luma coding block and the disparity vector includes: determining, according to the disparity vector, an offset between the X coordinate of the top-left luma sample of the luma coding block and the X coordinate of the target sample in the depth map; and determining the X coordinate of the target sample in the depth map according to the X coordinate of the top-left luma sample of the luma coding block and the offset.
With reference to the first aspect or any of its foregoing implementations, in another implementation of the first aspect, determining the offset between the X coordinate of the top-left luma sample of the luma coding block and the X coordinate of the target sample in the depth map according to the disparity vector includes: adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
With reference to the first aspect or any of its foregoing implementations, in another implementation of the first aspect, obtaining the disparity vector between the current viewpoint corresponding to the texture map and the reference viewpoint includes: determining the disparity vector according to a depth refinement flag.
With reference to the first aspect or any of its foregoing implementations, in another implementation of the first aspect, determining the disparity vector according to the depth refinement flag includes: when the depth refinement flag is 0, determining the neighboring block disparity vector (NBDV) as the disparity vector; when the depth refinement flag is 1, determining the depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
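As an illustrative sketch (the function and parameter names are our own; the NBDV and DoNBDV derivations themselves are prior-art procedures assumed to run elsewhere, so the two candidate vectors are passed in already derived), the flag-controlled selection reads:

```python
def select_disparity_vector(depth_refinement_flag, nbdv, donbdv):
    """Select the disparity vector according to the depth refinement flag:
    0 -> the neighboring block disparity vector (NBDV),
    1 -> its depth-oriented refinement (DoNBDV)."""
    return donbdv if depth_refinement_flag == 1 else nbdv
```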
According to a second aspect, a decoding method is provided, including: determining a current luma decoding block from a texture map; determining the coordinates of the top-left luma sample of the luma decoding block, where the coordinates indicate the position of the top-left luma sample of the luma decoding block relative to the top-left luma sample of the texture map and include an X coordinate and a Y coordinate; obtaining a disparity vector between the current viewpoint corresponding to the texture map and a reference viewpoint; determining, according to the X coordinate of the top-left luma sample of the luma decoding block and the disparity vector, the X coordinate of a target sample in the depth map corresponding to the reference viewpoint, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma decoding block; determining the Y coordinate of the top-left luma sample of the luma decoding block as the Y coordinate of the target sample in the depth map; determining, according to the X and Y coordinates of the target sample in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample in the luma decoding block; obtaining, according to the depth value information, a block partitioning mode for the luma decoding block, and partitioning the luma decoding block; and decoding the partitioned luma decoding block.
With reference to the second aspect, in an implementation of the second aspect, determining the X coordinate of the target sample in the depth map corresponding to the reference viewpoint according to the X coordinate of the top-left luma sample of the luma decoding block and the disparity vector includes: determining, according to the disparity vector, an offset between the X coordinate of the top-left luma sample of the luma decoding block and the X coordinate of the target sample in the depth map; and determining the X coordinate of the target sample in the depth map according to the X coordinate of the top-left luma sample of the luma decoding block and the offset.
With reference to the second aspect or any of its foregoing implementations, in another implementation of the second aspect, determining the offset between the X coordinate of the top-left luma sample of the luma decoding block and the X coordinate of the target sample in the depth map according to the disparity vector includes: adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
With reference to the second aspect or any of its foregoing implementations, in another implementation of the second aspect, obtaining the disparity vector between the current viewpoint corresponding to the texture map and the reference viewpoint includes: determining the disparity vector according to a depth refinement flag.
With reference to the second aspect or any of its foregoing implementations, in another implementation of the second aspect, determining the disparity vector according to the depth refinement flag includes: when the depth refinement flag is 0, determining the neighboring block disparity vector (NBDV) as the disparity vector; when the depth refinement flag is 1, determining the depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector.
According to a third aspect, an encoder is provided, including: a first determining unit configured to determine a current luma coding block from a texture map; a second determining unit configured to determine the coordinates of the top-left luma sample of the luma coding block, where the coordinates indicate the position of the top-left luma sample of the luma coding block relative to the top-left luma sample of the texture map and include an X coordinate and a Y coordinate; an obtaining unit configured to obtain a disparity vector between the current viewpoint corresponding to the texture map and a reference viewpoint; a third determining unit configured to determine, according to the X coordinate of the top-left luma sample of the luma coding block and the disparity vector, the X coordinate of a target sample in the depth map corresponding to the reference viewpoint, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma coding block; a fourth determining unit configured to determine the Y coordinate of the top-left luma sample of the luma coding block as the Y coordinate of the target sample in the depth map; a fifth determining unit configured to determine, according to the X and Y coordinates of the target sample in the depth map and the size of the luma coding block, the depth value information corresponding to each sample in the luma coding block; a block partitioning unit configured to obtain, according to the depth value information, a block partitioning mode for the luma coding block and partition the luma coding block; and an encoding unit configured to encode the partitioned luma coding block.
With reference to the third aspect, in an implementation of the third aspect, the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the top-left luma sample of the luma coding block and the X coordinate of the target sample in the depth map, and to determine the X coordinate of the target sample in the depth map according to the X coordinate of the top-left luma sample of the luma coding block and the offset.
With reference to the third aspect or any of its foregoing implementations, in another implementation of the third aspect, the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down to obtain the offset.
With reference to the third aspect or any of its foregoing implementations, in another implementation of the third aspect, the obtaining unit is specifically configured to determine the disparity vector according to a depth refinement flag.
With reference to the third aspect or any of its foregoing implementations, in another implementation of the third aspect, the obtaining unit is specifically configured to determine the neighboring block disparity vector (NBDV) as the disparity vector when the depth refinement flag is 0, and to determine the depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector when the depth refinement flag is 1.
According to a fourth aspect, a decoder is provided, including: a first determining unit configured to determine a current luma decoding block from a texture map; a second determining unit configured to determine the coordinates of the top-left luma sample of the luma decoding block, where the coordinates indicate the position of the top-left luma sample of the luma decoding block relative to the top-left luma sample of the texture map and include an X coordinate and a Y coordinate; an obtaining unit configured to obtain a disparity vector between the current viewpoint corresponding to the texture map and a reference viewpoint; a third determining unit configured to determine, according to the X coordinate of the top-left luma sample of the luma decoding block and the disparity vector, the X coordinate of a target sample in the depth map corresponding to the reference viewpoint, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma decoding block; a fourth determining unit configured to determine the Y coordinate of the top-left luma sample of the luma decoding block as the Y coordinate of the target sample in the depth map; a fifth determining unit configured to determine, according to the X and Y coordinates of the target sample in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample in the luma decoding block; a block partitioning unit configured to obtain, according to the depth value information, a block partitioning mode for the luma decoding block and partition the luma decoding block; and a decoding unit configured to decode the partitioned luma decoding block.
With reference to the fourth aspect, in an implementation of the fourth aspect, the third determining unit is specifically configured to determine, according to the disparity vector, an offset between the X coordinate of the top-left luma sample of the luma decoding block and the X coordinate of the target sample in the depth map, and to determine the X coordinate of the target sample in the depth map according to the X coordinate of the top-left luma sample of the luma decoding block and the offset.
With reference to the fourth aspect or any of its foregoing implementations, in another implementation of the fourth aspect, the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down to obtain the offset.
With reference to the fourth aspect or any of its foregoing implementations, in another implementation of the fourth aspect, the obtaining unit is specifically configured to determine the disparity vector according to a depth refinement flag.
With reference to the fourth aspect or any of its foregoing implementations, in another implementation of the fourth aspect, the obtaining unit is specifically configured to determine the neighboring block disparity vector (NBDV) as the disparity vector when the depth refinement flag is 0, and to determine the depth-oriented neighboring block disparity vector (DoNBDV) as the disparity vector when the depth refinement flag is 1.
In the embodiments of the present invention, the Y coordinate of the top-left luma sample of the luma coding block is directly determined as the Y coordinate of the corresponding sample in the depth map of the reference viewpoint. This eliminates the computational overhead of calculating the Y coordinate of the corresponding sample in the depth map from the Y coordinate of the top-left luma sample of the luma coding block and the disparity vector, thereby improving coding efficiency.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments are briefly described below. Apparently, the accompanying drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is an example diagram of a block partitioning mode.
Fig. 2 is an example diagram of a block partitioning mode.
Fig. 3 is a schematic diagram of the principle of DBBP.
Fig. 4 is a schematic flowchart of an encoding method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the coordinates of the top-left sample of a luma coding block.
Fig. 6 is a schematic flowchart of a decoding method according to an embodiment of the present invention.
Fig. 7 is an example diagram of an encoding method according to an embodiment of the present invention.
Fig. 8 is an example diagram of an encoding method according to an embodiment of the present invention.
Fig. 9 is a schematic block diagram of an encoder according to an embodiment of the present invention.
Fig. 10 is a schematic block diagram of a decoder according to an embodiment of the present invention.
Fig. 11 is a schematic block diagram of an encoder according to an embodiment of the present invention.
Fig. 12 is a schematic block diagram of a decoder according to an embodiment of the present invention.
Description of embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
At present, in 3D High Efficiency Video Coding (3D-HEVC), the texture camera and the depth camera are arranged in horizontal alignment. Exploiting this property, embodiments of the present invention simplify the calculation of the depth value information corresponding to each sample in a luma coding block. The encoding method according to an embodiment of the present invention is described in detail below with reference to Fig. 4.
Fig. 4 is a schematic flowchart of the encoding method of an embodiment of the present invention. The method of Fig. 4 includes:
410. Determine a current luma coding block from a texture map.
420. Determine the coordinates of the top-left luma sample of the luma coding block, where the coordinates indicate the position of the top-left luma sample of the luma coding block relative to the top-left luma sample of the texture map and include an X coordinate and a Y coordinate.
Taking Fig. 5 as an example, the resolution of the texture map is 168 × 1024 and the size of each luma coding block is 32 × 32. In Fig. 5, the coordinates of the top-left luma sample of block 1 are (0, 0), those of block 2 are (32, 0), those of block 3 are (0, 32), and those of block 4 are (32, 32).
430. Obtain a disparity vector between the current viewpoint corresponding to the texture map and a reference viewpoint.
It should be noted that the depth map corresponding to the reference viewpoint has already been coded. The disparity vector may be vector information that uses the already-coded information of other viewpoints at the current time to locate, in another coded viewpoint, the position of the block corresponding to the current prediction unit (PU) or coding unit (CU).
440. Determine, according to the X coordinate of the top-left luma sample of the luma coding block and the disparity vector, the X coordinate of a target sample in the depth map corresponding to the reference viewpoint, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma coding block.
Specifically, step 440 may include: determining, according to the disparity vector, an offset between the X coordinate of the top-left luma sample of the luma coding block and the X coordinate of the target sample in the depth map; and determining the X coordinate of the target sample in the depth map according to the X coordinate of the top-left luma sample and the offset. Either of the following two constraints may be used to limit the X coordinate of the target sample in the depth map: the X coordinate is not less than 0 and not greater than the picture width of the texture map minus the width of the luma coding block; or the X coordinate is not less than 0 and not greater than the picture width of the texture map minus 1.
Determining the offset between the X coordinate of the top-left luma sample of the luma coding block and the X coordinate of the corresponding sample in the depth map according to the disparity vector may include: adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down to obtain the offset.
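As a sketch under our own naming (not the normative specification text), the offset derivation and either clipping constraint of step 440 can be written as follows; the arithmetic right shift by 2 is equivalent to dividing by 4 and rounding down, which rounds a quarter-sample horizontal DV component to whole samples:

```python
def target_depth_x(block_x, dv_x, pic_width, block_width, clip_to_block=True):
    """Compute the X coordinate of the target sample in the reference
    viewpoint's depth map (sketch of step 440; names are our own).

    block_x: X coordinate of the top-left luma sample of the coding block
    dv_x:    horizontal component of the disparity vector
    """
    # Offset = floor((dv_x + 2) / 4); the arithmetic shift keeps this
    # exact for negative components as well.
    offset = (dv_x + 2) >> 2
    x = block_x + offset
    # Apply one of the two constraints described above.
    upper = pic_width - block_width if clip_to_block else pic_width - 1
    return max(0, min(x, upper))
```

For instance, with dv_x = 13 the offset is 3, so a block at X = 32 maps to X = 35 in the reference depth map; a result that would fall outside the picture is clipped.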
450. Determine the Y-coordinate of the upper-left luma sample of the luma coding block as the Y-coordinate of the target sample in the depth map.
Specifically, the X-coordinate and the Y-coordinate of the target sample in the depth map may be used to indicate the position of the target sample relative to the upper-left sample of the depth map.
460. Determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma coding block, the depth value information corresponding to each sample in the luma coding block.
Specifically, a block region of the same size as the luma coding block may be delimited in the depth map with the X-coordinate and the Y-coordinate of the target sample as its upper-left corner, and the depth value information in this block region is determined as the depth information corresponding to the luma coding block.
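The delimiting of this co-located depth block can be sketched as below, under the assumption that the depth map is held as a 2-D array indexed [row, column]; the function name is illustrative.

```python
import numpy as np

def colocated_depth_block(depth_map, target_x, target_y, block_size):
    """Sketch of step 460: take the block of depth values whose upper-left
    corner is the target sample (target_x, target_y) and whose size equals
    the luma coding block.  Rows correspond to Y, columns to X."""
    return depth_map[target_y:target_y + block_size,
                     target_x:target_x + block_size]
```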
470. Obtain, according to the depth value information, the block partitioning mode of the luma coding block, and partition the luma coding block.
Here, the depth value information may include the depth value corresponding to each sample in the luma coding block. Step 470 may include: comparing the depth value of each sample recorded in the depth value information with a depth threshold to generate a binarized partition template, and then partitioning the luma coding block according to the binarized partition template. For example, the mean of the depth values of the four corner samples of the luma coding block is first used as the depth threshold; the relationship between the depth value of each sample in the luma coding block and the threshold is then determined, where a sample whose depth value is greater than the threshold is marked 1 and a sample whose depth value is less than the threshold is marked 0, producing a binarized partition template consisting of 0s and 1s; finally, the samples marked 0 are assigned to one partition and the samples marked 1 to the other, thereby achieving the segmentation of the luma coding block. For details, reference may be made to the prior art.
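The template generation described above can be sketched as follows, under the assumption that the corresponding depth block has already been extracted as a 2-D integer array; the function name is illustrative, and how the template drives the subsequent segmentation is left to the prior art referenced in the text.

```python
import numpy as np

def dbbp_partition_template(depth_block):
    """Sketch of step 470: derive a binarized partition template from a depth
    block, using the mean of its four corner samples as the depth threshold.
    Samples whose depth value exceeds the threshold are marked 1, others 0."""
    h, w = depth_block.shape
    threshold = (int(depth_block[0, 0]) + int(depth_block[0, w - 1]) +
                 int(depth_block[h - 1, 0]) + int(depth_block[h - 1, w - 1])) / 4.0
    return (depth_block > threshold).astype(np.uint8)
```

For a block whose left half holds depth 10 and right half depth 90, the corner mean is 50 and the template splits the block into a 0-region and a 1-region down the middle.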
480. Encode the partitioned luma coding block.
For example, subsequent encoding operations such as motion compensation and filter merging are performed on the partitioned luma coding block.
In the prior art, when the depth value information corresponding to each sample in the luma coding block is determined, the X-coordinate and the Y-coordinate of the upper-left luma sample of the luma coding block are combined with the horizontal and vertical components of the disparity vector, respectively, in arithmetic operations such as clipping and shifting, so as to obtain the X-coordinate and the Y-coordinate of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y-coordinates of corresponding samples in the texture maps and depth maps collected from different views should be identical; that is, the operation of computing the Y-coordinate of the corresponding sample in the depth map of the reference view from the Y-coordinate of the upper-left luma sample of the luma coding block and the vertical component of the disparity vector is redundant. In the embodiment of the present invention, the Y-coordinate of the upper-left luma sample of the luma coding block is directly determined as the Y-coordinate of the corresponding sample in the depth map of the reference view, which eliminates the computational overhead of deriving that Y-coordinate from the Y-coordinate of the upper-left luma sample and the disparity vector, thereby improving encoding efficiency.
Optionally, in an embodiment, step 430 may include: determining the disparity vector according to a depth refinement flag.
It should be noted that the disparity vector may be of two kinds: the neighboring block disparity vector (NBDV, Neighboring Block Disparity Vector) and the depth-oriented neighboring block disparity vector (DoNBDV, Depth-oriented NBDV). Specifically, NBDV is the disparity vector derived from spatially or temporally neighboring blocks or from motion-compensated prediction (MCP, Motion Compensated Prediction); DoNBDV is the disparity vector obtained by using NBDV to locate the corresponding depth block in the reference view and converting the information of that depth block. The depth refinement flag, i.e. DepthRefinementFlag, indicates whether NBDV or DoNBDV is used for the current encoding.
Specifically, when the depth refinement flag is 1, DoNBDV may be determined as the disparity vector; when the depth refinement flag is 0, NBDV may be determined as the disparity vector.
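The flag-based selection amounts to a one-line choice, sketched below; the function and parameter names are assumptions, not names from the standard.

```python
def select_disparity_vector(depth_refinement_flag, nbdv, do_nbdv):
    """Sketch: choose DoNBDV when DepthRefinementFlag is 1, else NBDV."""
    return do_nbdv if depth_refinement_flag == 1 else nbdv
```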
The encoding method of the embodiment of the present invention has been described above in detail from the encoder side with reference to Fig. 4; the decoding method of the embodiment of the present invention is described below in detail from the decoder side with reference to Fig. 6. It should be understood that the steps and operations of the encoder side and the decoder side correspond to each other; to avoid repetition, they are not described in detail again here.
Fig. 6 is a schematic flowchart of the decoding method of the embodiment of the present invention. The method of Fig. 6 includes:
610. Determine the current luma decoding block from a texture map.
620. Determine the coordinates of the upper-left luma sample of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample of the luma decoding block relative to the upper-left luma sample of the texture map and include an X-coordinate and a Y-coordinate.
630. Obtain the disparity vector between the current view corresponding to the texture map and a reference view.
640. Determine, according to the X-coordinate of the upper-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of the target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the upper-left luma sample of the luma decoding block.
650. Determine the Y-coordinate of the upper-left luma sample of the luma decoding block as the Y-coordinate of the target sample in the depth map.
660. Determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample in the luma decoding block.
670. Obtain, according to the depth value information, the block partitioning mode of the luma decoding block, and partition the luma decoding block.
680. Decode the partitioned luma decoding block.
In the prior art, when the depth value information corresponding to each sample in the luma decoding block is determined, the X-coordinate and the Y-coordinate of the upper-left luma sample of the luma decoding block are combined with the horizontal and vertical components of the disparity vector, respectively, in arithmetic operations such as clipping and shifting, so as to obtain the X-coordinate and the Y-coordinate of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y-coordinates of corresponding samples in the texture maps and depth maps collected from different views should be identical; that is, the operation of computing the Y-coordinate of the corresponding sample in the depth map of the reference view from the Y-coordinate of the upper-left luma sample of the luma decoding block and the vertical component of the disparity vector is redundant. In the embodiment of the present invention, the Y-coordinate of the upper-left luma sample of the luma decoding block is directly determined as the Y-coordinate of the corresponding sample in the depth map of the reference view, which eliminates the computational overhead of deriving that Y-coordinate from the Y-coordinate of the upper-left luma sample and the disparity vector, thereby improving decoding efficiency.
Optionally, in an embodiment, the foregoing determining, according to the X-coordinate of the upper-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of the target sample in the depth map corresponding to the reference view may include: determining, according to the disparity vector, an offset between the X-coordinate of the upper-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map; and determining the X-coordinate of the target sample in the depth map according to the X-coordinate of the upper-left luma sample of the luma decoding block and the offset.
Optionally, in an embodiment, the foregoing determining the offset between the X-coordinate of the upper-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map may include: adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding down, to obtain the offset.
Optionally, in an embodiment, the foregoing obtaining the disparity vector between the current view corresponding to the texture map and the reference view may include: determining the disparity vector according to a depth refinement flag.
Optionally, in an embodiment, the foregoing determining the disparity vector according to the depth refinement flag may include: when the depth refinement flag is 0, determining the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determining the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
The embodiment of the present invention is described below in further detail with reference to specific examples. It should be noted that the examples of Fig. 7 and Fig. 8 are only intended to help a person skilled in the art understand the embodiment of the present invention, and the embodiment of the present invention is not limited to the specific numerical values or specific scenarios illustrated. A person skilled in the art can obviously make various equivalent modifications or changes according to the examples of Fig. 7 and Fig. 8, and such modifications or changes also fall within the scope of the embodiment of the present invention.
Fig. 7 describes the flow of determining, in a DBBP process, the depth value information corresponding to each pixel in a luma coding block of size 16 × 16. Suppose the coordinates of the upper-left luma sample of the luma coding block are (368, 64), DepthRefinementFlag is 1, DoNBDV is (-250, 0), and NBDV is (-156, 0). First, DoNBDV is chosen as the disparity vector according to DepthRefinementFlag, and the coordinate calculation is performed: the X-coordinate of the target sample in the depth map corresponding to the reference view is 368 + ((-250 + 2) >> 2) = 306, where >> 2 denotes a right shift by 2 bits, equivalent to division by 4; the Y-coordinate of the target sample in the depth map is 64. That is, the coordinates of the target sample in the depth map are (306, 64). Based on the coordinates of the target sample in the depth map and the size of the luma coding block, a 16 × 16 depth block whose upper-left corner is the target sample is delimited in the depth map. The depth value information in this depth block is determined as the depth value information corresponding to the luma coding block; that is, because the depth block and the luma coding block are of the same size, the samples of the two blocks are in one-to-one correspondence, and the depth value of a luma sample in the luma coding block is the depth value of the sample corresponding to that luma sample in the depth block.
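The arithmetic of the Fig. 7 example can be reproduced as follows; this is a sketch, and the right shift of a negative value is an arithmetic shift, i.e. rounding down after division by 4.

```python
block_x, block_y = 368, 64    # upper-left luma sample of the 16x16 block
depth_refinement_flag = 1     # DepthRefinementFlag
do_nbdv = (-250, 0)
nbdv = (-156, 0)

# Choose the disparity vector; its vertical component is ignored.
dv_x, _ = do_nbdv if depth_refinement_flag == 1 else nbdv

target_x = block_x + ((dv_x + 2) >> 2)  # 368 + (-248 >> 2) = 368 - 62 = 306
target_y = block_y                      # Y-coordinate is copied directly
print((target_x, target_y))             # (306, 64)
```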
The flow process of the depth value information that in the luminance coding block determining 32 × 32 sizes in DBBP process, each pixel is corresponding that what Fig. 8 described is.The coordinate supposing the upper left luma samples point of luminance coding block is (320,96), and DepthRefinementFlag is 0, DoNBDV is (-248,0), and NBDV is (-179,4).First, choose NBDV according to DepthRefinementFlag and carry out coordinate calculating as difference vector, the X-coordinate obtaining the corresponding sampled point in depth map is: 320+ ((-179+2) >>2)=276, the Y-coordinate of the pixel in the upper left corner of depth block is 96, namely the coordinate of the pixel in the upper left corner of depth block is (276,96).Based on the coordinate of the corresponding sampled point in depth map and the size of luminance coding block, in depth map, mark off the depth block of a piece 32 × 32, this depth block with this destination sample point for upper left angle point.Depth value information in this depth block is defined as depth value information corresponding to this luminance coding block, that is, because this depth block is identical with luminance coding block size, sampled point between two blocks has one-to-one relationship, and the depth value of the luma samples point in luminance coding block is exactly the depth value of sampled point corresponding with this luma samples point in this depth block.
It should be understood that Fig. 7 and Fig. 8 are specific embodiments described from the encoding perspective; the embodiments described in Fig. 7 and Fig. 8 can equally be applied at the decoder side.
The encoding and decoding methods of the embodiments of the present invention have been described above in detail with reference to Fig. 1 to Fig. 8; the codecs of the embodiments of the present invention are described below in detail with reference to Fig. 9 to Fig. 12.
Fig. 9 is a schematic block diagram of the encoder of the embodiment of the present invention. It should be understood that the encoder 900 of Fig. 9 can implement each step in Fig. 4; to avoid repetition, the steps are not described in detail again here. The encoder 900 includes:
a first determining unit 910, configured to determine the current luma coding block from a texture map;
a second determining unit 920, configured to determine the coordinates of the upper-left luma sample of the luma coding block, where the coordinates indicate the position of the upper-left luma sample of the luma coding block relative to the upper-left luma sample of the texture map and include an X-coordinate and a Y-coordinate;
an acquiring unit 930, configured to obtain the disparity vector between the current view corresponding to the texture map and a reference view;
a third determining unit 940, configured to determine, according to the X-coordinate of the upper-left luma sample of the luma coding block and the disparity vector, the X-coordinate of the target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the upper-left luma sample of the luma coding block;
a fourth determining unit 950, configured to determine the Y-coordinate of the upper-left luma sample of the luma coding block as the Y-coordinate of the target sample in the depth map;
a fifth determining unit 960, configured to determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma coding block, the depth value information corresponding to each sample in the luma coding block;
a block partitioning unit 970, configured to obtain, according to the depth value information, the block partitioning mode of the luma coding block, and to partition the luma coding block; and
a coding unit 980, configured to encode the partitioned luma coding block.
In the prior art, when the depth value information corresponding to each sample in the luma coding block is determined, the X-coordinate and the Y-coordinate of the upper-left luma sample of the luma coding block are combined with the horizontal and vertical components of the disparity vector, respectively, in arithmetic operations such as clipping and shifting, so as to obtain the X-coordinate and the Y-coordinate of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y-coordinates of corresponding samples in the texture maps and depth maps collected from different views should be identical; that is, the operation of computing the Y-coordinate of the corresponding sample in the depth map of the reference view from the Y-coordinate of the upper-left luma sample of the luma coding block and the vertical component of the disparity vector is redundant. In the embodiment of the present invention, the Y-coordinate of the upper-left luma sample of the luma coding block is directly determined as the Y-coordinate of the corresponding sample in the depth map of the reference view, which eliminates the computational overhead of deriving that Y-coordinate from the Y-coordinate of the upper-left luma sample and the disparity vector, thereby improving encoding efficiency.
Optionally, in an embodiment, the third determining unit 940 is specifically configured to determine, according to the disparity vector, an offset between the X-coordinate of the upper-left luma sample of the luma coding block and the X-coordinate of the target sample in the depth map, and to determine the X-coordinate of the target sample in the depth map according to the X-coordinate of the upper-left luma sample of the luma coding block and the offset.
Optionally, in an embodiment, the third determining unit 940 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down, to obtain the offset.
Optionally, in an embodiment, the acquiring unit 930 is specifically configured to determine the disparity vector according to a depth refinement flag.
Optionally, in an embodiment, the acquiring unit 930 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
Fig. 10 is a schematic block diagram of the decoder of the embodiment of the present invention. It should be understood that the decoder 1000 of Fig. 10 can implement each step in Fig. 6; to avoid repetition, the steps are not described in detail again here. The decoder 1000 may include:
a first determining unit 1010, configured to determine the current luma decoding block from a texture map;
a second determining unit 1020, configured to determine the coordinates of the upper-left luma sample of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample of the luma decoding block relative to the upper-left luma sample of the texture map and include an X-coordinate and a Y-coordinate;
an acquiring unit 1030, configured to obtain the disparity vector between the current view corresponding to the texture map and a reference view;
a third determining unit 1040, configured to determine, according to the X-coordinate of the upper-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of the target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the upper-left luma sample of the luma decoding block;
a fourth determining unit 1050, configured to determine the Y-coordinate of the upper-left luma sample of the luma decoding block as the Y-coordinate of the target sample in the depth map;
a fifth determining unit 1060, configured to determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample in the luma decoding block;
a block partitioning unit 1070, configured to obtain, according to the depth value information, the block partitioning mode of the luma decoding block, and to partition the luma decoding block; and
a decoding unit 1080, configured to decode the partitioned luma decoding block.
In the prior art, when the depth value information corresponding to each sample in the luma decoding block is determined, the X-coordinate and the Y-coordinate of the upper-left luma sample of the luma decoding block are combined with the horizontal and vertical components of the disparity vector, respectively, in arithmetic operations such as clipping and shifting, so as to obtain the X-coordinate and the Y-coordinate of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y-coordinates of corresponding samples in the texture maps and depth maps collected from different views should be identical; that is, the operation of computing the Y-coordinate of the corresponding sample in the depth map of the reference view from the Y-coordinate of the upper-left luma sample of the luma decoding block and the vertical component of the disparity vector is redundant. In the embodiment of the present invention, the Y-coordinate of the upper-left luma sample of the luma decoding block is directly determined as the Y-coordinate of the corresponding sample in the depth map of the reference view, which eliminates the computational overhead of deriving that Y-coordinate from the Y-coordinate of the upper-left luma sample and the disparity vector, thereby improving decoding efficiency.
Optionally, in an embodiment, the third determining unit 1040 is specifically configured to determine, according to the disparity vector, an offset between the X-coordinate of the upper-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map, and to determine the X-coordinate of the target sample in the depth map according to the X-coordinate of the upper-left luma sample of the luma decoding block and the offset.
Optionally, in an embodiment, the third determining unit 1040 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down, to obtain the offset.
Optionally, in an embodiment, the acquiring unit 1030 is specifically configured to determine the disparity vector according to a depth refinement flag.
Optionally, in an embodiment, the acquiring unit 1030 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
Fig. 11 is a schematic block diagram of the encoder of the embodiment of the present invention. It should be understood that the encoder 1100 of Fig. 11 can implement each step in Fig. 4; to avoid repetition, the steps are not described in detail again here. The encoder 1100 includes:
a memory 1110, configured to store instructions; and
a processor 1120, configured to execute the instructions. When the instructions are executed, the processor 1120 is specifically configured to: determine the current luma coding block from a texture map; determine the coordinates of the upper-left luma sample of the luma coding block, where the coordinates indicate the position of the upper-left luma sample of the luma coding block relative to the upper-left luma sample of the texture map and include an X-coordinate and a Y-coordinate; obtain the disparity vector between the current view corresponding to the texture map and a reference view;
determine, according to the X-coordinate of the upper-left luma sample of the luma coding block and the disparity vector, the X-coordinate of the target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the upper-left luma sample of the luma coding block; determine the Y-coordinate of the upper-left luma sample of the luma coding block as the Y-coordinate of the target sample in the depth map; determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma coding block, the depth value information corresponding to each sample in the luma coding block; obtain, according to the depth value information, the block partitioning mode of the luma coding block, and partition the luma coding block; and encode the partitioned luma coding block.
In the prior art, when the depth value information corresponding to each sample in the luma coding block is determined, the X-coordinate and the Y-coordinate of the upper-left luma sample of the luma coding block are combined with the horizontal and vertical components of the disparity vector, respectively, in arithmetic operations such as clipping and shifting, so as to obtain the X-coordinate and the Y-coordinate of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y-coordinates of corresponding samples in the texture maps and depth maps collected from different views should be identical; that is, the operation of computing the Y-coordinate of the corresponding sample in the depth map of the reference view from the Y-coordinate of the upper-left luma sample of the luma coding block and the vertical component of the disparity vector is redundant. In the embodiment of the present invention, the Y-coordinate of the upper-left luma sample of the luma coding block is directly determined as the Y-coordinate of the corresponding sample in the depth map of the reference view, which eliminates the computational overhead of deriving that Y-coordinate from the Y-coordinate of the upper-left luma sample and the disparity vector, thereby improving encoding efficiency.
Optionally, in an embodiment, the processor 1120 is specifically configured to determine, according to the disparity vector, an offset between the X-coordinate of the upper-left luma sample of the luma coding block and the X-coordinate of the target sample in the depth map, and to determine the X-coordinate of the target sample in the depth map according to the X-coordinate of the upper-left luma sample of the luma coding block and the offset.
Optionally, in an embodiment, the processor 1120 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round down, to obtain the offset.
Optionally, in an embodiment, the processor 1120 is specifically configured to determine the disparity vector according to a depth refinement flag.
Optionally, in an embodiment, the processor 1120 is specifically configured to: when the depth refinement flag is 0, determine the neighboring block disparity vector NBDV as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring block disparity vector DoNBDV as the disparity vector.
Fig. 12 is a schematic block diagram of the decoder of the embodiment of the present invention. It should be understood that the decoder 1200 of Fig. 12 can implement each step in Fig. 6; to avoid repetition, the steps are not described in detail again here. The decoder 1200 may include:
a memory 1210, configured to store instructions; and
a processor 1220, configured to execute the instructions. When the instructions are executed, the processor 1220 is specifically configured to: determine the current luma decoding block from a texture map; determine the coordinates of the upper-left luma sample of the luma decoding block, where the coordinates indicate the position of the upper-left luma sample of the luma decoding block relative to the upper-left luma sample of the texture map and include an X-coordinate and a Y-coordinate;
obtain the disparity vector between the current view corresponding to the texture map and a reference view; determine, according to the X-coordinate of the upper-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of the target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the upper-left luma sample of the luma decoding block; determine the Y-coordinate of the upper-left luma sample of the luma decoding block as the Y-coordinate of the target sample in the depth map; determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma decoding block, the depth value information corresponding to each sample in the luma decoding block; obtain, according to the depth value information, the block partitioning mode of the luma decoding block, and partition the luma decoding block; and decode the partitioned luma decoding block.
In the prior art, when the depth value information corresponding to each sample in the luma decoding block is determined, the X-coordinate and the Y-coordinate of the upper-left luma sample of the luma decoding block are combined with the horizontal and vertical components of the disparity vector, respectively, in arithmetic operations such as clipping and shifting, so as to obtain the X-coordinate and the Y-coordinate of the target pixel in the depth map corresponding to the reference view. However, because the texture camera and the depth camera are arranged horizontally, the Y-coordinates of corresponding samples in the texture maps and depth maps collected from different views should be identical; that is, the operation of computing the Y-coordinate of the corresponding sample in the depth map of the reference view from the Y-coordinate of the upper-left luma sample of the luma decoding block and the vertical component of the disparity vector is redundant. In the embodiment of the present invention, the Y-coordinate of the upper-left luma sample of the luma decoding block is directly determined as the Y-coordinate of the corresponding sample in the depth map of the reference view, which eliminates the computational overhead of deriving that Y-coordinate from the Y-coordinate of the upper-left luma sample and the disparity vector, thereby improving decoding efficiency.
Optionally, in an embodiment, the processor 1220 is specifically configured to: determine, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map; and determine, according to the X-coordinate of the top-left luma sample of the luma decoding block and the offset, the X-coordinate of the target sample in the depth map.
Optionally, in an embodiment, the processor 1220 is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round the quotient down to obtain the offset.
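The offset derivation in the embodiment above is a floor division, which on integers can be computed with a single arithmetic right shift. A minimal sketch (the interpretation of the horizontal component as a quarter-sample value, which the "+2, /4" rounding suggests, is an assumption):

```python
def derive_offset(dv_x: int) -> int:
    # floor((dv_x + 2) / 4): add 2 to the horizontal disparity
    # component, divide by 4, round down. An arithmetic shift by 2
    # yields the same floor result, including for negative components.
    return (dv_x + 2) >> 2

# A few values of the horizontal disparity component:
# dv_x =  5 -> floor(7/4)  =  1
# dv_x =  1 -> floor(3/4)  =  0
# dv_x = -6 -> floor(-4/4) = -1
```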
Optionally, in an embodiment, the processor 1220 is specifically configured to determine the disparity vector according to a depth refinement flag.
Optionally, in an embodiment, the processor 1220 is specifically configured to: when the depth refinement flag is 0, determine the neighboring-block disparity vector (NBDV) as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring-block disparity vector (DoNBDV) as the disparity vector.
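The flag-controlled selection amounts to a simple conditional. The NBDV and DoNBDV derivation processes themselves are outside this excerpt; `nbdv` and `donbdv` below stand in for their already-derived results:

```python
def select_disparity_vector(depth_refinement_flag: int, nbdv, donbdv):
    """Pick the disparity vector used to locate the depth block:
    NBDV when the depth refinement flag is 0, DoNBDV when it is 1."""
    return donbdv if depth_refinement_flag == 1 else nbdv
```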
Those of ordinary skill in the art may appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

1. An encoding method, characterized by comprising:
determining a current luma coding block from a texture picture;
determining the coordinates of the top-left luma sample of the luma coding block, where the coordinates indicate the position of the top-left luma sample of the luma coding block relative to the top-left luma sample of the texture picture, and the coordinates comprise an X-coordinate and a Y-coordinate;
obtaining a disparity vector between the current view corresponding to the texture picture and a reference view;
determining, according to the X-coordinate of the top-left luma sample of the luma coding block and the disparity vector, the X-coordinate of a target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma coding block;
determining the Y-coordinate of the top-left luma sample of the luma coding block as the Y-coordinate of the target sample in the depth map;
determining, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma coding block, depth value information corresponding to each sample in the luma coding block;
obtaining, according to the depth value information, a block partitioning mode of the luma coding block, and partitioning the luma coding block; and
encoding the partitioned luma coding block.
2. The method according to claim 1, characterized in that the determining, according to the X-coordinate of the top-left luma sample of the luma coding block and the disparity vector, the X-coordinate of a target sample in the depth map corresponding to the reference view comprises:
determining, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma coding block and the X-coordinate of the target sample in the depth map; and
determining, according to the X-coordinate of the top-left luma sample of the luma coding block and the offset, the X-coordinate of the target sample in the depth map.
3. The method according to claim 2, characterized in that the determining, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma coding block and the X-coordinate of the target sample in the depth map comprises:
adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding the quotient down to obtain the offset.
4. The method according to any one of claims 1 to 3, characterized in that the obtaining a disparity vector between the current view corresponding to the texture picture and a reference view comprises:
determining the disparity vector according to a depth refinement flag.
5. The method according to claim 4, characterized in that the determining the disparity vector according to a depth refinement flag comprises:
when the depth refinement flag is 0, determining the neighboring-block disparity vector (NBDV) as the disparity vector; and
when the depth refinement flag is 1, determining the depth-oriented neighboring-block disparity vector (DoNBDV) as the disparity vector.
6. A decoding method, characterized by comprising:
determining a current luma decoding block from a texture picture;
determining the coordinates of the top-left luma sample of the luma decoding block, where the coordinates indicate the position of the top-left luma sample of the luma decoding block relative to the top-left luma sample of the texture picture, and the coordinates comprise an X-coordinate and a Y-coordinate;
obtaining a disparity vector between the current view corresponding to the texture picture and a reference view;
determining, according to the X-coordinate of the top-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of a target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma decoding block;
determining the Y-coordinate of the top-left luma sample of the luma decoding block as the Y-coordinate of the target sample in the depth map;
determining, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma decoding block, depth value information corresponding to each sample in the luma decoding block;
obtaining, according to the depth value information, a block partitioning mode of the luma decoding block, and partitioning the luma decoding block; and
decoding the partitioned luma decoding block.
7. The method according to claim 6, characterized in that the determining, according to the X-coordinate of the top-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of a target sample in the depth map corresponding to the reference view comprises:
determining, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map; and
determining, according to the X-coordinate of the top-left luma sample of the luma decoding block and the offset, the X-coordinate of the target sample in the depth map.
8. The method according to claim 7, characterized in that the determining, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map comprises:
adding 2 to the horizontal component of the disparity vector, dividing the result by 4, and rounding the quotient down to obtain the offset.
9. The method according to any one of claims 6 to 8, characterized in that the obtaining a disparity vector between the current view corresponding to the texture picture and a reference view comprises:
determining the disparity vector according to a depth refinement flag.
10. The method according to claim 9, characterized in that the determining the disparity vector according to a depth refinement flag comprises:
when the depth refinement flag is 0, determining the neighboring-block disparity vector (NBDV) as the disparity vector; and
when the depth refinement flag is 1, determining the depth-oriented neighboring-block disparity vector (DoNBDV) as the disparity vector.
11. An encoder, characterized by comprising:
a first determining unit, configured to determine a current luma coding block from a texture picture;
a second determining unit, configured to determine the coordinates of the top-left luma sample of the luma coding block, where the coordinates indicate the position of the top-left luma sample of the luma coding block relative to the top-left luma sample of the texture picture, and the coordinates comprise an X-coordinate and a Y-coordinate;
an obtaining unit, configured to obtain a disparity vector between the current view corresponding to the texture picture and a reference view;
a third determining unit, configured to determine, according to the X-coordinate of the top-left luma sample of the luma coding block and the disparity vector, the X-coordinate of a target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma coding block;
a fourth determining unit, configured to determine the Y-coordinate of the top-left luma sample of the luma coding block as the Y-coordinate of the target sample in the depth map;
a fifth determining unit, configured to determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma coding block, depth value information corresponding to each sample in the luma coding block;
a block partitioning unit, configured to obtain, according to the depth value information, a block partitioning mode of the luma coding block and to partition the luma coding block; and
an encoding unit, configured to encode the partitioned luma coding block.
12. The encoder according to claim 11, characterized in that the third determining unit is specifically configured to: determine, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma coding block and the X-coordinate of the target sample in the depth map; and determine, according to the X-coordinate of the top-left luma sample of the luma coding block and the offset, the X-coordinate of the target sample in the depth map.
13. The encoder according to claim 12, characterized in that the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round the quotient down to obtain the offset.
14. The encoder according to any one of claims 11 to 13, characterized in that the obtaining unit is specifically configured to determine the disparity vector according to a depth refinement flag.
15. The encoder according to claim 14, characterized in that the obtaining unit is specifically configured to: when the depth refinement flag is 0, determine the neighboring-block disparity vector (NBDV) as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring-block disparity vector (DoNBDV) as the disparity vector.
16. A decoder, characterized by comprising:
a first determining unit, configured to determine a current luma decoding block from a texture picture;
a second determining unit, configured to determine the coordinates of the top-left luma sample of the luma decoding block, where the coordinates indicate the position of the top-left luma sample of the luma decoding block relative to the top-left luma sample of the texture picture, and the coordinates comprise an X-coordinate and a Y-coordinate;
an obtaining unit, configured to obtain a disparity vector between the current view corresponding to the texture picture and a reference view;
a third determining unit, configured to determine, according to the X-coordinate of the top-left luma sample of the luma decoding block and the disparity vector, the X-coordinate of a target sample in the depth map corresponding to the reference view, where the target sample is the sample in the depth map that corresponds to the top-left luma sample of the luma decoding block;
a fourth determining unit, configured to determine the Y-coordinate of the top-left luma sample of the luma decoding block as the Y-coordinate of the target sample in the depth map;
a fifth determining unit, configured to determine, according to the X-coordinate and the Y-coordinate of the target sample in the depth map and the size of the luma decoding block, depth value information corresponding to each sample in the luma decoding block;
a block partitioning unit, configured to obtain, according to the depth value information, a block partitioning mode of the luma decoding block and to partition the luma decoding block; and
a decoding unit, configured to decode the partitioned luma decoding block.
17. The decoder according to claim 16, characterized in that the third determining unit is specifically configured to: determine, according to the disparity vector, an offset between the X-coordinate of the top-left luma sample of the luma decoding block and the X-coordinate of the target sample in the depth map; and determine, according to the X-coordinate of the top-left luma sample of the luma decoding block and the offset, the X-coordinate of the target sample in the depth map.
18. The decoder according to claim 17, characterized in that the third determining unit is specifically configured to add 2 to the horizontal component of the disparity vector, divide the result by 4, and round the quotient down to obtain the offset.
19. The decoder according to any one of claims 16 to 18, characterized in that the obtaining unit is specifically configured to determine the disparity vector according to a depth refinement flag.
20. The decoder according to claim 19, characterized in that the obtaining unit is specifically configured to: when the depth refinement flag is 0, determine the neighboring-block disparity vector (NBDV) as the disparity vector; and when the depth refinement flag is 1, determine the depth-oriented neighboring-block disparity vector (DoNBDV) as the disparity vector.
CN201580000242.3A 2015-02-05 2015-02-05 Decoding method and codec Active CN104995915B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/072301 WO2016123774A1 (en) 2015-02-05 2015-02-05 Method and device for encoding and decoding

Publications (2)

Publication Number Publication Date
CN104995915A true CN104995915A (en) 2015-10-21
CN104995915B CN104995915B (en) 2018-11-30

Family

ID=54306447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580000242.3A Active CN104995915B (en) 2015-02-05 2015-02-05 Decoding method and codec

Country Status (2)

Country Link
CN (1) CN104995915B (en)
WO (1) WO2016123774A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101483770A (en) * 2008-01-08 2009-07-15 华为技术有限公司 Method and apparatus for encoding and decoding
US20130342644A1 (en) * 2012-06-20 2013-12-26 Nokia Corporation Apparatus, a method and a computer program for video coding and decoding
CN103916652A (en) * 2013-01-09 2014-07-09 浙江大学 Method and device for generating disparity vector
CN104104933A (en) * 2013-04-12 2014-10-15 浙江大学 Disparity vector generation method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110462733A (en) * 2017-03-31 2019-11-15 华为技术有限公司 The decoding method and codec of multi-channel signal
US11386907B2 (en) 2017-03-31 2022-07-12 Huawei Technologies Co., Ltd. Multi-channel signal encoding method, multi-channel signal decoding method, encoder, and decoder
US11894001B2 (en) 2017-03-31 2024-02-06 Huawei Technologies Co., Ltd. Multi-channel signal encoding method, multi-channel signal decoding method, encoder, and decoder

Also Published As

Publication number Publication date
CN104995915B (en) 2018-11-30
WO2016123774A1 (en) 2016-08-11

Similar Documents

Publication Publication Date Title
KR102447241B1 (en) Image encoding/decoding method and device
US11350129B2 (en) Method and device for encoding and decoding a video bitstream using a selected motion vector
KR20160003334A (en) Method and apparatus of texture image compression in 3d video coding
WO2014036848A1 (en) Depth picture intra coding /decoding method and video coder/decoder
CN111295881B (en) Method and apparatus for intra prediction fusion of image and video codecs
CN103067715A (en) Encoding and decoding methods and encoding and decoding device of range image
US20150264356A1 (en) Method of Simplified Depth Based Block Partitioning
US20200036996A1 (en) Method and apparatus for determining a motion vector
EP2843952A1 (en) Methods and apparatuses for predicting depth quadtree in three-dimensional video
US11240512B2 (en) Intra-prediction for video coding using perspective information
CN103079072A (en) Inter-frame prediction method, encoding equipment and decoding equipment
CN104995915A (en) Coding-decoding method, and coder-decoder
CN104333758A (en) Depth map prediction method, pixel detection method and related devices
CN102447894B (en) Video image coding method and device as well as video image decoding method and device
US9848205B2 (en) Method for predictive coding of depth maps with plane points
CN103533361A (en) Determination method of multi-view disparity vector, coding equipment and decoding equipment
US20170359575A1 (en) Non-Uniform Digital Image Fidelity and Video Coding
CN105637865A (en) Image prediction method and related equipment
CN104768012A (en) Asymmetric motion partition coding method and coding equipment
WO2015188332A1 (en) A default vector selection method for block copy vector prediction in video compression
CN103763557B (en) A kind of Do NBDV acquisition methods and video decoder
CN112601094A (en) Video coding and decoding method and device
CN103747265A (en) NBDV (Disparity Vector from Neighboring Block) acquisition method and video decoding device
WO2013077638A1 (en) Device and method for coding/decoding multi-view depth image by using color image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant