CN103916652A - Method and device for generating disparity vector - Google Patents

Method and device for generating disparity vector

Info

Publication number
CN103916652A
CN103916652A
Authority
CN
China
Prior art keywords
depth
image
disparity vector
viewpoint
value
Prior art date
Legal status
Granted
Application number
CN201310007164.2A
Other languages
Chinese (zh)
Other versions
CN103916652B (en)
Inventor
虞露
赵寅
张熠辰
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201310007164.2A
Publication of CN103916652A
Application granted
Publication of CN103916652B
Active legal status
Anticipated expiration

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method and device for generating a disparity vector. The method for generating the disparity vector comprises the steps that a first depth value of a basic depth block is obtained according to the depth pixel values of the basic depth block in a first viewpoint depth image, and the first disparity vector of a basic image block in a second viewpoint image is generated according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image. With the method and device for generating the disparity vector, the complexity of disparity vector calculation is lowered, the number of projections is reduced, and little storage space is required.

Description

Disparity vector generation method and device
Technical field
The present invention relates to the field of communications, and in particular to a disparity vector generation method and device.
Background technology
Three-dimensional video (3D video) comprises multiple (usually 2 or 3) texture image sequences (each frame being an image representing the color or brightness of the photographed object) and depth image sequences (each frame being an image representing the distance between the photographed object and the camera). Usually one texture image sequence corresponds to one depth image sequence, which is called the multi-view video plus depth (Multi-View Video Plus Depth, abbreviated MVD) format; sometimes a texture image sequence may have no corresponding depth image sequence, for example only one of two texture image sequences has a corresponding depth image sequence, which is called unpaired MVD. 3D video generates virtual viewpoint video sequences by view synthesis. In addition, the resolutions of the texture images of different viewpoints are usually equal, and the resolutions of the depth images of different viewpoints are usually equal as well; the resolution of the texture image and the depth image of one viewpoint may be equal, or the resolution of the depth image may be smaller than that of the texture image, for example the horizontal and vertical resolutions of the depth image are each half of the corresponding values of the texture image, in which case the resolution (total number of pixels) of the texture image is also said to be 4 times the resolution of the depth image.
3D video coding can exploit the correlation between viewpoints: for example, the texture images of two viewpoints have a certain similarity (including similarity of pixel values, motion information, and so on), and the depth images of two viewpoints also have a certain similarity. However, there is parallax between viewpoints, i.e. a positional offset between corresponding points (correspondences) of different viewpoints; the corresponding points between two viewpoints are usually indicated by a disparity vector. Referring to Fig. 1, which is a schematic diagram of the relationship between a disparity vector and its starting position according to the related art: for example, the pixel region A (a pixel region being, for example, a pixel or a rectangular block of pixels) whose center coordinate in the viewpoint 1 image is P1 (150, 100) corresponds to the pixel region B whose center coordinate in the viewpoint 2 image is P2 (180, 95). For A and B, the disparity vector DV1 pointing from P1 in the viewpoint 1 image to P2 in the viewpoint 2 image is (30, -5), where 30 is the horizontal component and -5 is the vertical component; there is P2 = P1 + DV1 and P1 = P2 - DV1, and the direction of the vector can be briefly described as "pointing from the viewpoint 1 image to the viewpoint 2 image", or "pointing from viewpoint 1 to viewpoint 2". The disparity vector DV2 pointing from P2 in the viewpoint 2 image to P1 in the viewpoint 1 image is (-30, 5), with P1 = P2 + DV2 and P2 = P1 - DV2, so that DV2 = -DV1. Without distinguishing the direction, DV1 and DV2 are both disparity vectors between viewpoint 1 and viewpoint 2, only pointing in opposite directions. In particular, when the two viewpoint images are captured in a parallel camera arrangement (1D parallel camera arrangement, i.e. the optical axes are parallel, the focal lengths are equal, the optical centers lie on the same horizontal line, and the image resolutions are identical), the vertical component of the disparity vector of any pixel between the two viewpoint images is 0 (i.e. the vertical parallax is 0); in this case only the horizontal component of the disparity vector needs to be indicated. In this case the horizontal parallax, i.e. the horizontal component of the parallax, is also simply referred to as the parallax, and coordinate computations such as P2 = P1 + DV1 above degenerate into scalar operations in the horizontal direction.
There is a certain geometric relationship between parallax and depth. The disparity of a pixel region A between the viewpoint 1 image and the viewpoint 2 image can be converted from the depth value of this pixel region and the camera parameters of the two viewpoints. For example, when the two viewpoint images are captured in the so-called parallel camera arrangement, the vertical parallax is 0, and the horizontal disparity value DV of each image region (for example a pixel) can be obtained by DV = (f × L / Z) + du, where f is the focal length of the camera corresponding to the viewpoint 1 image, L is the baseline distance between viewpoint 1 and viewpoint 2, Z is the distance between the object at the pixel indicated by the depth value D of the image region and the corresponding camera, and du is the horizontal offset between the principal points of the viewpoint 1 and viewpoint 2 images. In this case, image regions with the same depth value have the same horizontal disparity value, and the horizontal disparity value is independent of the position of the image region; usually a look-up table mapping depth values to horizontal disparity values can be established first, and for each region the corresponding position of the region in the other viewpoint is found from its position and the disparity corresponding to its depth. It should be noted that the disparity of the same object between images of two different viewpoints increases as the baseline of the two viewpoints increases; therefore, at least the two viewpoints to which a disparity vector corresponds must be specified in order for the magnitude of the disparity vector to be meaningful.
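A minimal sketch of the look-up table mentioned above, assuming 8-bit depth values that encode 1/Z linearly between 1/Znear and 1/Zfar (a common MVD convention) and a parallel camera arrangement; the parameter names f, L, du, z_near, z_far are illustrative assumptions, not values from the patent.

```python
def build_disparity_lut(f, L, du, z_near, z_far, num_levels=256):
    """Map each quantized depth value d (0..255) to a horizontal disparity DV = f*L/Z + du."""
    lut = []
    for d in range(num_levels):
        inv_z = (d / (num_levels - 1)) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
        z = 1.0 / inv_z
        lut.append(f * L / z + du)
    return lut

# Usage: every image region with depth value d gets the same horizontal disparity,
# independent of its position, so a single table lookup replaces a projection.
lut = build_disparity_lut(f=1000.0, L=5.0, du=0.0, z_near=50.0, z_far=500.0)
disparity = lut[128]
```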
When the images of two viewpoints do not satisfy the parallel camera arrangement, the projected position of each image region in the other viewpoint can be obtained from the position and depth (and camera parameters) of each image region by a more complicated three-dimensional projection (3D warping) equation, thereby obtaining the disparity. In this case both the horizontal parallax and the vertical parallax are related to the position of the image region where the parallax is located. The commonly used disparity computation under the above parallel camera arrangement is a simple special case of this situation.
Some coding tools use the reconstructed depth image of the currently coded viewpoint to improve the coding efficiency of the texture image of the currently coded viewpoint, for example the view synthesis prediction (View Synthesis Prediction, abbreviated VSP) based on backward warping and the depth-based motion vector prediction (Depth-based Motion Vector Prediction, abbreviated DMVP) in the 3D-ATM platform. Both of them need to obtain the disparity vector of the currently coded texture block from the depth pixel values of the reconstructed depth image of the currently coded viewpoint (for example, for a 4x4 block, the disparity vector of the block is converted from the depth pixel value of its central point). Therefore, if the texture image of the currently coded viewpoint is coded before the depth image, the depth image has not yet been reconstructed when the texture image is coded, and the above two coding tools cannot obtain the disparity vector and cannot work, which degrades the coding performance of the texture image of the currently coded viewpoint. For these coding tools to still work when the texture image is coded before the depth image, another disparity vector generation method that does not depend on the depth of the current viewpoint is needed.
In coding, disparity vector generation methods include the following two:
1) If the depth image of the current viewpoint (the viewpoint being coded/decoded) is available, the disparity vector of a target block (usually a texture block) in the current viewpoint is converted from the depth value of the depth image corresponding to the target block. However, this method has the following drawback: if the texture image of the currently coded viewpoint is coded before the depth image, the depth image has not yet been reconstructed when the texture image is coded, and this method cannot obtain the disparity vector;
2) If the depth image of the current viewpoint is not available, S. Shimizu et al. proposed another disparity vector derivation method in JCT3V-B0103: the depth image of the current viewpoint is synthesized from the depth image of another viewpoint (an already coded/decoded viewpoint) by forward warping, and is used to replace the depth image of the currently coded viewpoint; the disparity vector is then converted from the synthesized depth image. However, this method has the following drawbacks: all depth pixels are projected, so the number of projections is large and the complexity is very high, and a large data storage space is also needed to store the synthesized depth image; in addition, the processing of converting the synthesized depth image into the disparity vector of the target region is still needed (a rough sketch of this forward-warping approach is given after this list).
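The sketch below is an assumption about how such a forward-warping synthesis proceeds (it is not code from JCT3V-B0103); it only illustrates why the projection count and the storage requirement criticized above are high: every depth pixel of the coded viewpoint is projected, and a full extra depth buffer is kept.

```python
def synthesize_depth_forward_warping(depth_view1, depth_to_disparity):
    """depth_view1: 2D list of depth values; depth_to_disparity: per-depth-value LUT."""
    h, w = len(depth_view1), len(depth_view1[0])
    synthesized = [[None] * w for _ in range(h)]      # full extra depth buffer
    for y in range(h):
        for x in range(w):                            # one projection per depth pixel
            d = depth_view1[y][x]
            dv = int(round(depth_to_disparity[d]))
            xp = x + dv                               # horizontal-only warp (parallel cameras)
            if 0 <= xp < w:
                # keep the closer object (larger depth value) on collisions
                if synthesized[y][xp] is None or d > synthesized[y][xp]:
                    synthesized[y][xp] = d
    return synthesized                                # still needs hole filling and a later
                                                      # depth-to-disparity conversion pass
```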
For the problems in the related art that the disparity vector derivation method is of high complexity, occupies a large data storage space and requires image conversion processing, no effective solution has yet been proposed.
Summary of the invention
The present invention provides a disparity vector generation method and device, so as at least to solve the above problems.
According to one aspect of the present invention, a disparity vector generation method is provided, comprising: obtaining a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; and generating a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
Preferably, obtaining the first depth value of the basic depth block according to the depth pixel values of the basic depth block in the first viewpoint depth image comprises one of the following: taking the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; taking a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, the minimum or the median of the depth values of the depth pixels at the multiple predetermined positions; or taking the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
Preferably, generating the first disparity vector of the basic image block in the second viewpoint image according to the first depth value comprises: converting the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and obtaining the corresponding position of the basic depth block in the second viewpoint image, wherein, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is the first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and taking the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the basic image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of this ratio.
Preferably, after generating the first disparity vector of the basic image block in the second viewpoint image according to the first depth value, the method further comprises: generating a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple basic image blocks.
Preferably, generating the third disparity vector of the target image block according to the first disparity vector comprises: determining the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; and choosing the value of one first disparity vector from all the determined first disparity vectors as the third disparity vector, or taking the weighted average of all the determined first disparity vectors as the third disparity vector.
According to another aspect of the present invention, a disparity vector generation device is provided, comprising: an acquisition module, configured to obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; and a first generation module, configured to generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
Preferably, the acquisition module comprises one of the following: a first setting unit, configured to take the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; a selecting unit, configured to take a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, the minimum or the median of the depth values of the depth pixels at the multiple predetermined positions; and a second setting unit, configured to take the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
Preferably, the first generation module comprises: a converting unit, configured to convert the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and to obtain the corresponding position of the basic depth block in the second viewpoint image, wherein, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is the first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and a third setting unit, configured to take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the basic image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of this ratio.
Preferably, the device further comprises: a second generation module, configured to generate a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple basic image blocks.
Preferably, the second generation module comprises: a determining unit, configured to determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; and a fourth setting unit, configured to choose the value of one first disparity vector from all the determined first disparity vectors as the third disparity vector, or to take the weighted average of all the determined first disparity vectors as the third disparity vector.
With the present invention, the disparity vector of each basic image block of the currently coded viewpoint is obtained from a basic depth block of an already coded viewpoint, and the disparity vector of each target image block is generated from the disparity vectors of the basic image blocks. This solves the problems in the related art that the disparity vector derivation method is of high complexity, occupies a large data storage space and requires image conversion processing, thereby achieving the effects of reduced computational complexity of the disparity vector, a small number of projections and small storage space.
Brief description of the drawings
The accompanying drawings described herein are used to provide a further understanding of the present invention and form a part of the present application; the schematic embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the accompanying drawings:
Fig. 1 is a schematic diagram of the relationship between a disparity vector and its starting position according to the related art;
Fig. 2 is a flowchart of a disparity vector generation method according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of a disparity vector generation device according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a disparity vector generation device according to a preferred embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a disparity vector generation device according to a preferred implementation of an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a disparity vector generation device according to another preferred implementation of an embodiment of the present invention.
Embodiment
The present invention will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, as long as there is no conflict, the embodiments in the present application and the features in the embodiments may be combined with each other.
Fig. 2 is a flowchart of a disparity vector generation method according to an embodiment of the present invention. As shown in Fig. 2, the method mainly comprises the following steps (step S202 to step S204):
Step S202: obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image;
Step S204: generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
In this embodiment, step S202 can be implemented in one of the following ways: (1) taking the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; (2) taking a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, the minimum or the median of the depth values of the depth pixels at the multiple predetermined positions; (3) taking the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
In this embodiment, step S204 can be implemented in the following way: converting the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and obtaining the corresponding position of the basic depth block in the second viewpoint image, wherein, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is the first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and taking the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the basic image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of this ratio.
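A minimal sketch, under assumed helper names, of steps S202 and S204 above for a parallel camera arrangement: representative_depth() stands for any of the options listed for step S202, and depth_to_dv1x() stands for the depth-to-disparity conversion (e.g. the look-up table shown earlier). Neither name is defined by the patent.

```python
def generate_first_disparity_vector(basic_depth_block, pos1,
                                    representative_depth, depth_to_dv1x,
                                    predetermined_real=1.0):
    # Step S202: first depth value of the basic depth block.
    d = representative_depth(basic_depth_block)
    # Step S204: second disparity vector DV1 between the first and second viewpoint images
    # (vertical component 0 for parallel cameras) and corresponding position Pos2.
    dv1 = (depth_to_dv1x(d), 0)
    pos2 = (pos1[0] + dv1[0], pos1[1] + dv1[1])
    # First disparity vector of the basic image block located at Pos2: DV1 itself,
    # or DV1 multiplied by a predetermined real number (constant or scaling factor).
    dv2 = (dv1[0] * predetermined_real, dv1[1] * predetermined_real)
    return pos2, dv2
```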
In a preferred embodiment of the present invention, after step S204 has been performed, a third disparity vector of a target image block may further be generated according to the first disparity vector, wherein the target image block comprises multiple basic image blocks.
Preferably, generating the third disparity vector of the target image block according to the first disparity vector may be achieved as follows: first determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; then choose the value of one first disparity vector from all the determined first disparity vectors as the third disparity vector, or take the weighted average of all the determined first disparity vectors as the third disparity vector.
To facilitate understanding of the disparity vector generation method provided by the above embodiment, an example with specific parameters is first given here.
For example, in practical applications, the disparity vector generation method provided by the above embodiment can be implemented in the following way:
The process of obtaining the disparity vector of a basic image block of size M × N in the viewpoint 2 image (i.e. the above second viewpoint image) from a basic depth block of size E × F in the viewpoint 1 depth image (i.e. the above first viewpoint depth image), where E × F > 1 and M × N > 1, can comprise the following steps:
1. Obtain a depth value D of the basic depth block from the depth values of X (1 ≤ X ≤ E × F) depth pixels of the basic depth block;
2. Convert the depth value D into a disparity vector DV1 (i.e. the above second disparity vector) between the viewpoint 1 image and the viewpoint 2 image, and obtain the corresponding position Pos2 of the basic depth block in the viewpoint 2 image;
3. The disparity vector DV2 of the basic image block (i.e. the above first disparity vector) is the disparity vector DV1, or the product of DV1 and a real number, wherein the basic image block is located at the corresponding position Pos2 in the viewpoint 2 image, and the real number may be a constant such as -1, 1/2, -1/2, 2 or -2, or may be a scaling factor whose absolute value is the ratio of the depth image resolution of viewpoint 1 to the image resolution of viewpoint 2, or the reciprocal of this ratio.
It should be noted that, in this example, one of the following relations holds between E, F and M, N:
Relation 1: E = M × S1, F = N × S2, wherein S1 and S2 are constants, for example S1 = S2 = 1, S1 = S2 = 2, S1 = S2 = 1/2, or S1 = 1, S2 = 2;
Relation 2: E and F are obtained by multiplying M and N respectively by the ratio of the viewpoint 1 depth image resolution to the viewpoint 2 image resolution.
The process of obtaining the depth value D of the basic depth block from the depth values of X (1 ≤ X ≤ E × F) depth pixels of the basic depth block can adopt one of the following processing methods (a sketch of these methods follows the list):
Method 1: take the depth value of the depth pixel at a fixed position in the basic depth block as the depth value D;
Method 2: select one of the depth values of the depth pixels at multiple fixed positions in the basic depth block as the depth value D, wherein the selection method comprises selecting the maximum, the minimum or the median from the multiple depth values;
Method 3: take the weighted average of the depth values of the depth pixels at multiple fixed positions in the basic depth block as the depth value D.
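A small sketch, with hypothetical helper names, of the three ways listed above to derive the representative depth value D from depth pixels of a basic depth block; the choice of positions and weights is illustrative.

```python
from statistics import median

def depth_value_method1(block, pos=(0, 0)):
    # Method 1: depth pixel at a single fixed position (here: top-left corner).
    return block[pos[1]][pos[0]]

def depth_value_method2(block, positions, mode="max"):
    # Method 2: maximum, minimum or median over several fixed positions.
    values = [block[y][x] for (x, y) in positions]
    return {"max": max, "min": min, "median": median}[mode](values)

def depth_value_method3(block, positions, weights):
    # Method 3: weighted average over several fixed positions (e.g. weights 1/4, 1/2, 1/4).
    return sum(w * block[y][x] for (x, y), w in zip(positions, weights))
```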
Of course, in practical applications, after the disparity vector of the basic image block of size M × N in the viewpoint 2 image has been obtained, the above operations can be repeated to further obtain the disparity vector of a target image block of size J × K in the viewpoint 2 image (i.e. the above third disparity vector) from the basic depth blocks of size E × F in the viewpoint 1 depth image, wherein the target image block comprises Q (Q ≥ 2) basic image blocks of size M × N; specifically, the following processing can be performed:
Repeat steps 1 to 3 for one or more basic depth blocks, determine the disparity vectors of Q1 (1 ≤ Q1 ≤ Q) basic image blocks among the Q basic image blocks comprised by the target image block, and after the Q1 disparity vectors are obtained, assign the disparity vector of the target image block to one of the Q1 disparity vectors, or to the weighted average of the Q1 disparity vectors.
Fig. 3 is a structural block diagram of a disparity vector generation device according to an embodiment of the present invention. This device is used to implement the disparity vector generation method provided by the above embodiment. As shown in Fig. 3, the device comprises: an acquisition module 10 and a first generation module 20. The acquisition module 10 is configured to obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; the first generation module 20 is connected to the acquisition module 10 and is configured to generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
Fig. 4 is a structural block diagram of a disparity vector generation device according to a preferred embodiment of the present invention. As shown in Fig. 4, in the disparity vector generation device provided by this preferred embodiment, the acquisition module 10 may comprise one of the following: a first setting unit 12, configured to take the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; a selecting unit 14, configured to take a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, the minimum or the median of the depth values of the depth pixels at the multiple predetermined positions; and a second setting unit 16, configured to take the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
In this preferred embodiment, the first generation module 20 comprises: a converting unit 22, configured to convert the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and to obtain the corresponding position of the basic depth block in the second viewpoint image; and a third setting unit 24, connected to the converting unit 22 and configured to take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the basic image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of this ratio.
In this preferred embodiment, the disparity vector generation device may further comprise: a second generation module 30, configured to generate a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple basic image blocks.
Preferably, the second generation module 30 may comprise: a determining unit 32, configured to determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; and a fourth setting unit 34, connected to the determining unit 32 and configured to choose the value of one first disparity vector from all the determined first disparity vectors as the third disparity vector, or to take the weighted average of all the determined first disparity vectors as the third disparity vector.
The disparity vector generation method and the disparity vector generation device provided by the above embodiments are further described in more detail below in conjunction with Fig. 5, Fig. 6 and preferred embodiments 1 to 15.
Before the description, some parameters to be used in the following preferred embodiments are briefly introduced: the viewpoint 1 depth image resolution is Wd1 × Hd1 (i.e. the width is Wd1 depth pixels and the height is Hd1 depth pixels); the viewpoint 1 texture image resolution is Wt1 × Ht1 (i.e. the width is Wt1 texture pixels and the height is Ht1 texture pixels); the viewpoint 2 depth image resolution is Wd2 × Hd2 (i.e. the width is Wd2 depth pixels and the height is Hd2 depth pixels); and the viewpoint 2 texture image resolution is Wt2 × Ht2 (i.e. the width is Wt2 texture pixels and the height is Ht2 texture pixels).
In addition, the "viewpoint 2 image" defined here refers to the viewpoint 2 texture image (an image representing the color or brightness of the photographed object) or the viewpoint 2 depth image (an image representing the distance between the photographed object and the camera), and its resolution is W2 × H2. That is, when the viewpoint 2 image refers to the viewpoint 2 texture image, the resolution W2 × H2 is the resolution of the viewpoint 2 texture image, with W2 = Wt2, H2 = Ht2, and a viewpoint 2 image pixel refers to a texture pixel in the viewpoint 2 texture image; when the viewpoint 2 image refers to the viewpoint 2 depth image, the resolution W2 × H2 is the resolution of the viewpoint 2 depth image, with W2 = Wd2, H2 = Hd2, and a viewpoint 2 image pixel refers to a depth pixel in the viewpoint 2 depth image. Usually, the ratio of the texture image resolution of a viewpoint to its depth image resolution is a constant (for example 1, 2 or 4 times), the texture image resolutions of different viewpoints are identical, and the depth image resolutions of different viewpoints are identical.
The horizontal and vertical parallax components of a disparity vector DV are denoted by DVx and DVy respectively; the horizontal and vertical coordinates of a position coordinate P(x, y) are denoted by Px and Py respectively.
Preferred embodiment 1
This preferred embodiment relates to a disparity vector generation method:
The viewpoint 1 depth image comprises Nd basic depth blocks, and a basic depth block comprises E × F depth pixels (i.e. the width of each basic depth block is E pixels and the height is F pixels); for example, the viewpoint 1 depth image comprises Wd1/E basic depth blocks in the horizontal direction and Hd1/F basic depth blocks in the vertical direction, in total Nd = (Wd1/E) × (Hd1/F) basic depth blocks.
The viewpoint 2 image comprises Nt basic image blocks, and a basic image block comprises M × N image pixels; for example, the viewpoint 2 image has W2/M basic image blocks in the horizontal direction and H2/N basic image blocks in the vertical direction, in total Nt = (W2/M) × (H2/N) basic image blocks. As mentioned above, the viewpoint 2 image may be the viewpoint 2 texture image or the viewpoint 2 depth image. Each basic image block corresponds to a disparity vector between the viewpoint 1 image and the viewpoint 2 image, specifically a disparity vector pointing from the viewpoint 2 image to the viewpoint 1 image (or, alternatively, a disparity vector pointing from the viewpoint 1 image to the viewpoint 2 image), referred to as the disparity vector of the basic image block for short. It should be noted that the viewpoint 1 image refers to the viewpoint 1 texture image or depth image, with the following relation: when the viewpoint 2 image refers to the viewpoint 2 texture image, the viewpoint 1 image refers to the viewpoint 1 texture image; when the viewpoint 2 image refers to the viewpoint 2 depth image, the viewpoint 1 image refers to the viewpoint 1 depth image.
The size of the basic depth block and the size of the basic image block can be determined in several ways, one of which can be selected, for example:
Way 1: both the basic image block size and the basic depth block size are set to M × N (for example 2 × 2, 4 × 4, 8 × 8, 16 × 16, 4 × 2, 8 × 4, 16 × 8, 2 × 4, 4 × 8, 8 × 16, 3 × 5, etc.), i.e. E = M, F = N;
Way 2: the basic image block size is set to M × N (for example 2 × 2, 4 × 4, 8 × 8, 16 × 16, 4 × 2, 8 × 4, 16 × 8, 2 × 4, 4 × 8, 8 × 16, 3 × 5, etc.); the basic depth block size is set to (M × S1) × (N × S2), wherein S1 and S2 are constants, for example S1 = S2 = 1/2, or S1 = 1, S2 = 1/2, or S1 = S2 = 2, or S1 = 2, S2 = 1, etc.;
Way 3: the basic image block size is set to M × N (for example 2 × 2, 4 × 4, 8 × 8, 16 × 16, 4 × 2, 8 × 4, 16 × 8, 2 × 4, 4 × 8, 8 × 16, 3 × 5, etc.), and E and F are obtained by multiplying M and N respectively by the ratio of the viewpoint 1 depth image resolution to the viewpoint 2 image resolution, i.e. E, F and M, N satisfy E = M × Wd1/W2 and F = N × Hd1/H2. When Wd1/W2 = Hd1/H2 = S, then E = M × S and F = N × S.
It should be noted that, in the above way 2 and way 3, the basic depth block size may equivalently be set to E × F first, and then M and N are obtained by multiplying E and F by the reciprocals of the coefficients in way 2 and way 3.
The disparity vectors of Nta (Nta ≤ Nt) of the Nt basic image blocks can be stored as a disparity vector field comprising these Nta disparity vectors. In particular, when the vertical components of all the disparity vectors are 0, only the disparity vector field formed by the horizontal components of the parallax needs to be stored. In addition, the disparity vector of a basic image block can also be stored in another form, namely the depth value corresponding to the disparity vector; when the disparity vector of the basic image block needs to be accessed, the depth value of the basic image block is converted into the disparity vector of the basic image block.
Obtaining the disparity vector of a basic image block of size M × N in the viewpoint 2 image from a basic depth block of size E × F in the viewpoint 1 depth image, where E × F > 1 and M × N > 1, comprises the following processing:
(1) For any basic depth block, obtain a depth value D from X (1 ≤ X ≤ E × F) of its depth pixels; there are several methods for this, one of which can be selected, for example:
Method 1: take the pixel value of the depth pixel at a predetermined position in the basic depth block as the depth value D, for example the top-left corner depth pixel, the bottom-left corner depth pixel, the top-right corner depth pixel, the bottom-right corner depth pixel or the central depth pixel, etc.;
Method 2: select one of the pixel values of the depth pixels at multiple predetermined positions in the basic depth block as the depth value D, wherein the multiple predetermined positions comprise, for example, two or more of the positions such as the top-left corner, the top-right corner, the bottom-left corner, the bottom-right corner and the central point of the basic depth block, or, for example, all the depth pixel positions in the basic depth block; the selection method is, for example, selecting the maximum, the minimum or the median (medium value) from the pixel values of the depth pixels at the multiple predetermined positions;
Method 3: take the weighted average of the pixel values of the depth pixels at multiple predetermined positions in the basic depth block as the depth value D, wherein the multiple predetermined positions comprise, for example, two or more of the positions such as the top-left corner, the top-right corner, the bottom-left corner, the bottom-right corner and the central point of the basic depth block, or, for example, all the depth pixel positions in the basic depth block; the weighted average is computed, for example, as the mean value (i.e. all weights are equal), or, for example, when three predetermined positions are used, with weights 1/4, 1/2 and 1/4 for the three positions respectively.
(2) Convert the depth value D into the disparity vector DV1 between the viewpoint 1 image and the viewpoint 2 image, and obtain the corresponding position Pos2 of the basic depth block in the viewpoint 2 image, wherein the disparity vector DV1 points from the corresponding position Pos2 to the position Pos1 of the basic depth block in the viewpoint 1 image (or points from the position Pos1 of the basic depth block in the viewpoint 1 image to the corresponding position Pos2); when the viewpoint 2 image is the viewpoint 2 texture image, the viewpoint 1 image refers to the viewpoint 1 texture image, and when the viewpoint 2 image is the viewpoint 2 depth image, the viewpoint 1 image refers to the viewpoint 1 depth image.
Converting the depth value D into the disparity vector DV1 of the basic depth block between the viewpoint 1 image and the viewpoint 2 image is a mature and commonly used method. For example, when the viewpoint 1 image and the viewpoint 2 image are captured in (or approximately in) a parallel camera arrangement, the vertical component of the disparity vector DV1 is 0, and the value of the horizontal component DV1x can be obtained with the formula DV1x = (f × L / Z) + du, where f is the focal length of the camera corresponding to the viewpoint 1 image, L is the baseline distance between viewpoint 1 and viewpoint 2, Z is the physical distance from the pixel indicated by the depth value D to the corresponding camera, and du is the difference in principal point offset between the viewpoint 1 and viewpoint 2 images. Usually, L is a signed number whose absolute value represents the distance between viewpoint 1 and viewpoint 2, and whose sign is related to the direction of the disparity vector and the left-right positional relationship between viewpoint 1 and viewpoint 2, for example: when the disparity vector DV1 points from viewpoint 2 to viewpoint 1, L is negative if viewpoint 2 is on the left of viewpoint 1 (the corresponding value of f × L / Z is negative), and L is positive if viewpoint 2 is on the right of viewpoint 1; when the disparity vector DV1 points from viewpoint 1 to viewpoint 2, L is positive if viewpoint 2 is on the left of viewpoint 1, and negative if viewpoint 2 is on the right of viewpoint 1. The disparity vector DV1 may point from the corresponding position Pos2 of the basic depth block in the viewpoint 2 image to the corresponding position Pos1 of the basic depth block in the viewpoint 1 image, and the direction of the disparity vector can be briefly described as "the viewpoint 2 image points to the viewpoint 1 image", or "pointing from viewpoint 2 to viewpoint 1"; in this case Pos2 = Pos1 - DV1 (in particular, Pos2x = Pos1x - DV1x for the horizontal component). Alternatively, the disparity vector DV1 points from Pos1 to Pos2, and the direction of the disparity vector can be briefly described as "the viewpoint 1 image points to the viewpoint 2 image", or "pointing from viewpoint 1 to viewpoint 2"; in this case Pos2 = Pos1 + DV1 (in particular, Pos2x = Pos1x + DV1x for the horizontal component). The disparity vector DV1 may be of integer pixel precision or of sub-pixel precision, for example 1/2 or 1/4 pixel precision. When the viewpoint 1 image and the viewpoint 2 image are captured in a non-parallel camera arrangement, the conventional three-dimensional projection (3D warping) equation is solved to obtain the corresponding position Pos2 from Pos1, and at the same time DV1 is obtained from DV1 = Pos1 - Pos2 (DV1 points from Pos2 to Pos1) or DV1 = Pos2 - Pos1 (DV1 points from Pos1 to Pos2). For ease of understanding, reference may also be made to Fig. 1.
It should be added that the position of a block can usually be represented by the coordinate in the image of a certain pixel in the block (for example the central point, the top-left corner point, a point on the vertical center line, or another point agreed in advance); combined with the size of the block, the region occupied by the block in the image is then known. In this embodiment, the position of a block is agreed to be represented by the central point of the block; other conventions may also be adopted. In addition, for convenience of explanation, the position of the basic depth block in viewpoint 1 is denoted by Pos0.
The corresponding position Pos1 of the basic depth block in the viewpoint 1 image can be determined as follows: when the viewpoint 1 image refers to the viewpoint 1 depth image, the depth image block at Pos1 is the basic depth block itself, and Pos1 = Pos0; when the viewpoint 1 image refers to the viewpoint 1 texture image, the texture image block at Pos1 and the basic depth block correspond to the same spatial region, for example Pos1 = Pos0 (i.e. Pos1x = Pos0x, Pos1y = Pos0y) when the texture image and the depth image have equal resolution, or Pos1 = Pos0 × 2 (i.e. Pos1x = Pos0x × 2, Pos1y = Pos0y × 2) when the horizontal and vertical resolutions of the texture image are 2 times those of the depth image.
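The mapping just described, from the basic depth block position Pos0 to Pos1 and Pos2, can be sketched as follows under assumed names (scale_x/scale_y for the texture/depth resolution ratio, depth_to_dv1x for the conversion formula above) and a parallel camera arrangement; it is an illustration, not the patent's notation.

```python
def corresponding_positions(pos0, d, depth_to_dv1x, scale_x=2, scale_y=2,
                            dv1_points_from_view2=True):
    # Pos1: same spatial region as the basic depth block, in viewpoint 1 image coordinates.
    pos1 = (pos0[0] * scale_x, pos0[1] * scale_y)
    dv1x = depth_to_dv1x(d)                      # parallel cameras: vertical component is 0
    if dv1_points_from_view2:
        pos2 = (pos1[0] - dv1x, pos1[1])         # Pos2 = Pos1 - DV1
    else:
        pos2 = (pos1[0] + dv1x, pos1[1])         # Pos2 = Pos1 + DV1
    return pos1, pos2, (dv1x, 0)
```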
(3) The disparity vector DV2 of the basic image block is the disparity vector DV1, or the product of DV1 and a real number, wherein the basic image block is located at the corresponding position Pos2, and the real number is, for example, a constant such as -1, 1/2, -1/2, 2 or -2, or a scaling factor whose absolute value is the ratio of the viewpoint 1 depth image resolution to the viewpoint 2 image resolution, or the reciprocal of this ratio.
The basic image block being located at the corresponding position Pos2 means that the basic image block is an image block covering Pos2 in the viewpoint 2 image, for example an image block comprising M × N image pixels centered at Pos2. In particular, if the viewpoint 2 image has been divided in advance into multiple image blocks each comprising M × N image pixels according to a certain rule, the basic image block is the image block comprising M × N image pixels that covers Pos2 among these pre-divided image blocks; it should be noted that the central point Pos2' of this basic image block may not be equal to Pos2.
Usually DV2 is DV1; it may also be stored as -DV1 (same magnitude, opposite direction), or stored as the value of DV1 after scaling.
Preferred embodiment 2
This preferred embodiment relates to an inter-view prediction image generation method, which is one of the applications of the disparity vector generation method provided by the present invention. First, the disparity vector DV2 of a basic image block in the viewpoint 2 image is obtained by the method described in preferred embodiment 1; the position of the basic image block is Pos2', and DV2 points from the viewpoint 2 image to the viewpoint 1 image (or DV2 points from the viewpoint 1 image to the viewpoint 2 image). Then, the image block at a corresponding position Pos1' in the viewpoint 1 image is obtained by Pos1' = Pos2 + DV2 (or Pos1' = Pos2 - DV2); it should be noted that Pos1' may not be equal to the corresponding position Pos1, in the viewpoint 1 image, of the basic depth block used to generate DV2 in preferred embodiment 1.
The image block at Pos1' is taken as a prediction image of the basic image block in viewpoint 2. It should be noted that when Pos1' is a sub-pixel position (i.e. when DV2 is of sub-pixel precision), a sub-pixel interpolation filter (such as the sub-pixel interpolation filter in H.264/AVC, etc.) can be used to obtain the image pixels at sub-pixel positions, producing the sub-pixel precision pixel values of the image block at Pos1'. It should be added that, in this embodiment, the texture image and the depth image of viewpoint 1 are generally reconstructed images, rather than original images.
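A simplified sketch of the prediction described above, with assumed helper names: the block of the reconstructed viewpoint 1 image at Pos1' = Pos2 + DV2 is fetched and used as the prediction of the basic image block. Sub-pixel handling is reduced to bilinear interpolation for illustration only; a codec would use its own interpolation filter (e.g. the one specified in H.264/AVC), and border clipping is omitted.

```python
def predict_block(recon_view1, pos2, dv2, block_w, block_h):
    x0, y0 = pos2[0] + dv2[0], pos2[1] + dv2[1]     # Pos1' (may be fractional)
    pred = [[sample_bilinear(recon_view1, x0 + dx, y0 + dy)
             for dx in range(block_w)]
            for dy in range(block_h)]
    return pred

def sample_bilinear(img, x, y):
    xi, yi = int(x), int(y)
    fx, fy = x - xi, y - yi
    p00, p01 = img[yi][xi], img[yi][xi + 1]
    p10, p11 = img[yi + 1][xi], img[yi + 1][xi + 1]
    top = p00 * (1 - fx) + p01 * fx
    bot = p10 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bot * fy
```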
Preferred embodiment 3
This preferred embodiment relates to a disparity vector generation method. In this preferred embodiment, the texture image resolutions of viewpoint 1 and viewpoint 2 are identical, the depth image resolutions of viewpoint 1 and viewpoint 2 are identical, and the horizontal and vertical resolutions of the texture image of viewpoint 1 are 2 times those of its depth image. The texture images of viewpoint 1 and viewpoint 2 are captured in a parallel camera arrangement. This embodiment is used to generate the disparity vectors of the viewpoint 2 texture image. It should be added that when this embodiment is applied to video coding, the texture images and the depth images are generally reconstructed images.
The viewpoint 2 texture image is divided into M × N blocks (blocks comprising M × N pixels, with a width of M pixels and a height of N pixels), M = 4, N = 4, and each M × N block is a basic image block, which for clarity is called a basic texture block in this embodiment; the viewpoint 1 depth image is divided into E × F blocks, E = 2, F = 2, and each E × F block is a basic depth block.
For all the basic depth blocks, or a part of them (for example all the basic depth blocks in a rectangular window comprising one or more basic depth blocks), the following processing is performed to obtain the disparity vectors of one or more basic texture blocks in the viewpoint 2 texture image:
(1) For any basic depth block, take the maximum of the depth values of its four depth pixels at the top-left corner, the top-right corner, the bottom-left corner and the bottom-right corner as the depth value D of this basic depth block.
(2) Obtain the horizontal component DV1x of the disparity vector DV1 of the basic depth block by the commonly used formula DV1x = (f × L / Z) + du, with the vertical component being 0, where f is the focal length of the camera corresponding to the viewpoint 1 image, L is the baseline distance between viewpoint 1 and viewpoint 2, Z is the physical distance from the pixel indicated by the depth value D to the corresponding camera, and du is the difference in principal point offset between the viewpoint 1 and viewpoint 2 images, whose value is generally 0. The direction of DV1 is from viewpoint 2 to viewpoint 1, and its horizontal component is of 1/4 pixel precision (for example the value 5 represents 1.25 pixels, i.e. 5 quarter-pixels). The corresponding position Pos2 of the basic depth block on the viewpoint 2 texture image is obtained by Pos2 = Pos1 - DV1, where Pos1 is the corresponding position of the basic depth block on the viewpoint 1 texture image; the horizontal components of Pos1 and Pos2 are also of 1/4 pixel precision, and the vertical components are of integer pixel precision (i.e. the value 5 represents 5 pixels). Denoting the coordinate of the top-left corner pixel of the basic depth block in the viewpoint 1 depth image by (x1, y1), where x1 and y1 are of integer pixel precision, there are:
Pos1x = x1 × Sc1 × Fa + offset1, Pos1y = y1 × Sc2 + offset2;
Pos2x = x1 × Sc1 × Fa + offset1 - DV1x, Pos2y = y1 × Sc2 + offset3;
where Sc1 equals the ratio of the texture image horizontal resolution to the depth image horizontal resolution, Sc2 equals the ratio of the texture image vertical resolution to the depth image vertical resolution, and the "× Fa" operation converts a horizontal coordinate of integer pixel precision into 1/Fa pixel precision; in this embodiment, Sc1 = 2, Sc2 = 2, Fa = 4. offset1 = Fa × E × Sc1 / 2; offset2 and offset3 are numbers between 0 and F × Sc2 - 1, for example offset2 = offset3 = 0.
(3) Assign the disparity vector DV2 of the basic texture block covering Pos2 to DV1. Whether a basic texture block covers Pos2 can be judged, for example, by the following method: denote the coordinate of the top-left corner pixel of the basic texture block in the viewpoint 2 texture image by (x2, y2), where x2 and y2 are of integer pixel precision; if x2/M rounded down (i.e. the integer part of x2 divided by M) equals Pos2x/(Fa × M) rounded down, and y2/N rounded down equals Pos2y/N rounded down, this basic texture block covers Pos2. It should be noted that when Y is a power of 2, the "round down X/Y" operation can also be implemented by the operation "right-shift X by log2(Y) bits".
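A compact sketch of step (3) above using the shift-based integer arithmetic the text mentions, with this embodiment's values M = N = 4, Sc1 = Sc2 = 2 and Fa = 4 (so Fa × M = 16); the grid layout of the stored DV2 field is an assumed storage choice.

```python
M, N, FA = 4, 4, 4
LOG2_M, LOG2_N, LOG2_FAM = 2, 2, 4            # log2(M), log2(N), log2(Fa * M)

def assign_dv2(dv2_field, pos2x, pos2y, dv1x):
    # Index of the pre-divided basic texture block that covers Pos2:
    # x2/M rounded down == Pos2x/(Fa*M) rounded down, y2/N rounded down == Pos2y/N rounded down.
    block_col = pos2x >> LOG2_FAM             # Pos2x is in 1/4-pel units
    block_row = pos2y >> LOG2_N               # Pos2y is in integer-pel units
    dv2_field[block_row][block_col] = (dv1x, 0)   # DV2 := DV1 (vertical component 0)
```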
Preferred embodiment 4
This preferred embodiment 4 relates to a disparity vector generation method. In this preferred embodiment, the texture image resolutions of viewpoint 1 and viewpoint 2 are identical, the depth image resolutions of viewpoint 1 and viewpoint 2 are identical, and the texture image resolution of viewpoint 1 is identical to its depth image resolution. The texture images of viewpoint 1 and viewpoint 2 are captured in a parallel camera arrangement. The difference in principal point offset between the viewpoint 1 and viewpoint 2 texture images is 0. This preferred embodiment is used to generate the disparity vectors of the viewpoint 2 texture image (i.e. the disparity vectors of the basic image blocks of the viewpoint 2 texture image).
The viewpoint 2 texture image is divided into M × N blocks, for example with M = 4, N = 4; each 4 × 4 block is a basic image block, which for clarity of description is called a basic texture block in this embodiment. The viewpoint 1 depth image is divided into E × F blocks, where E and F are determined according to the ratio of the viewpoint 1 depth image resolution to the viewpoint 2 texture image resolution: E = M × (Wd1/Wt2) = 4, F = N × (Hd1/Ht2) = 4; each E × F block is a basic depth block.
For all the basic depth blocks, or a part of them, the following processing is performed to obtain the disparity vectors of one or more basic texture blocks in the viewpoint 2 texture image:
(1) For any basic depth block, take the depth value of the depth pixel at its central point Cen as the depth value D of this basic depth block. Denoting the coordinate of the top-left corner pixel of the basic depth block in the viewpoint 1 depth image by (x1, y1), where x1 and y1 are of integer pixel precision, the horizontal and vertical coordinates of the central point Cen may be defined as:
Cenx = x1 + E/2; Ceny = y1 + F/2;
or may also be defined as:
Cenx = x1 + E/2 - 1; Ceny = y1 + F/2 - 1.
(2) Obtain the horizontal component of the disparity vector DV1 of the basic depth block by the commonly used formula DV1 = f × l / Z = C1 × D + C2, with the vertical component being 0, where
C1 = (f × l / 255) × (1/Znear - 1/Zfar), C2 = f × l / Zfar
(these express the correspondence between the depth value D of a depth pixel and the physical distance Z from the pixel to the corresponding camera);
where f is the focal length of the camera corresponding to the viewpoint 1 image, l is the baseline distance between viewpoint 1 and viewpoint 2, Z is the physical distance from the pixel indicated by the depth value D to the corresponding camera, and Znear and Zfar are respectively the nearest and farthest depth planes. The direction of DV1 is from viewpoint 1 to viewpoint 2, and its horizontal component is of 1/2 pixel precision (for example the value 5 represents 2.5 pixels, i.e. 5 half-pixels). The corresponding position of the basic depth block on the viewpoint 2 texture image is obtained by Pos2 = Pos1 + DV1, where Pos1 is the corresponding position of the basic depth block on the viewpoint 1 texture image; the horizontal components of Pos1 and Pos2 are also of 1/2 pixel precision, and the vertical components are of integer pixel precision. Denoting the coordinate of the top-left corner pixel of the basic depth block in the viewpoint 1 depth image by (x1, y1), where x1 and y1 are of integer pixel precision, there are:
Pos1x = x1 × Fa + offset1, Pos1y = y1 + offset2;
Pos2x = x1 × Fa + offset1 + DV1x, Pos2y = y1 + offset3;
offset1 = Fa × E/2; offset2 and offset3 are numbers between 0 and F - 1, for example offset2 = offset3 = F/2. In this embodiment, Fa = 2, and the "× Fa" operation converts a horizontal coordinate of integer pixel precision into 1/2 pixel precision.
(3) Assign the disparity vector DV2 of the basic texture block covering Pos2 to DV1. Whether a basic texture block covers Pos2 (i.e. whether Pos2 falls within the basic texture block) can be judged, for example, by the following method: denote the coordinate of the top-left corner pixel of the basic texture block in the viewpoint 2 texture image by (x2, y2), where x2 and y2 are of integer pixel precision; if x2 ≤ Pos2x/Fa < x2 + M and y2 ≤ Pos2y < y2 + N are satisfied, this basic texture block covers Pos2. It should be added that in the image coordinate system the horizontal coordinate is generally positive from left to right, and the vertical coordinate is positive from top to bottom.
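A short sketch of this embodiment under its stated assumptions (parallel cameras, 1/2-pel precision, Fa = 2): the depth value D is converted directly to DV1x via the linear form C1 × D + C2, and a range test decides whether a basic texture block covers Pos2. c1 and c2 are assumed to be precomputed from f, l, Znear and Zfar as in the formulas above.

```python
def dv1x_from_depth(d, c1, c2):
    return c1 * d + c2                        # DV1x = f*l/Z expressed in the depth value D

def block_covers_pos2(x2, y2, pos2x, pos2y, M=4, N=4, Fa=2):
    # (x2, y2): integer-pel top-left corner of the basic texture block;
    # Pos2x is in 1/Fa-pel units, Pos2y in integer-pel units.
    return x2 <= pos2x / Fa < x2 + M and y2 <= pos2y < y2 + N
```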
Preferred embodiment 5
This preferred embodiment relates to a disparity vector generation method. In this embodiment, the depth image resolutions of viewpoint 1 and viewpoint 2 are identical. The depth images of viewpoint 1 and viewpoint 2 are captured in a parallel camera arrangement. The difference in principal point offset between the viewpoint 1 and viewpoint 2 depth images is 0. This embodiment is used to generate the disparity vectors of the viewpoint 2 depth image.
The viewpoint 2 depth image is divided in advance into M × N blocks, for example with M = 8, N = 4, and each M × N block is a basic image block; the viewpoint 1 depth image is divided into E × F blocks, for example E = 4, F = 4, and each E × F block is a basic depth block.
For all the basic depth blocks, the following processing is performed to obtain the disparity vectors of multiple basic image blocks in the viewpoint 2 depth image:
(1) For any basic depth block, take the depth value of the depth pixel at its top-left corner as the depth value D of this basic depth block.
(2) Obtain the horizontal component DV1x of the disparity vector DV1 of the basic depth block by the commonly used formula DV1x = f × l / Z = C1 × D + C2; the vertical component is 0. DV1 points from the viewpoint 1 image to the viewpoint 2 image, and its horizontal component is expressed in quarter-pixel precision.
The corresponding position Pos2 of the basic depth block on the viewpoint 2 depth image is obtained by Pos2 = Pos1 + DV1, where Pos1 is the corresponding position of the basic depth block on the viewpoint 1 depth image; the horizontal components of Pos1 and Pos2 are also in quarter-pixel precision, and the vertical components are in integer-pixel precision. The coordinates of the top-left pixel of the basic depth block in the viewpoint 1 depth image are denoted (x1, y1), where x1 and y1 are in integer-pixel precision, and we have:
Pos1x = x1 × 4 + 8, Pos1y = y1;
Pos2x = x1 × 4 + 8 + DV1x, Pos2y = y1; where the "× 4" operation converts the integer-pixel horizontal coordinate to quarter-pixel precision. Multiplication by an integer power of 2 (such as 2, 4, 8, etc.) can also be implemented as a bit shift.
(3) Assign -DV1 as the disparity vector DV2 of the basic image block that covers point Pos2. Whether a basic image block covers Pos2 can be judged, for example, as follows: denote the coordinates of the top-left pixel of the basic image block in the viewpoint 2 image (the viewpoint 2 depth image in this embodiment) as (x2, y2), where x2 and y2 are in integer-pixel precision; if x2 ≤ Pos2x/4 < x2+M and y2 ≤ Pos2y < y2+N, this basic image block covers Pos2.
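Purely as an illustration of steps (1) to (3) above, the following Python sketch processes every basic depth block of the viewpoint 1 depth image under this embodiment's example values (E = F = 4, M = 8, N = 4, quarter-pixel precision, vertical disparity 0); the names depth1, C1 and C2, and the dictionary used to store the results, are our assumptions for the sketch.

```python
import numpy as np

E, F = 4, 4   # basic depth block size in viewpoint 1 (example values above)
M, N = 8, 4   # basic image block size in viewpoint 2 (example values above)

def derive_dv2(depth1, C1, C2):
    """depth1: viewpoint 1 depth image as an H x W array; C1, C2: constants of the
    depth-to-disparity formula DV1x = C1 * D + C2, with DV1x in quarter-pixel units.
    Returns a dict mapping a basic image block's top-left (x2, y2) to DV2 = -DV1."""
    H, W = depth1.shape
    dv2 = {}
    for y1 in range(0, H, F):
        for x1 in range(0, W, E):
            D = int(depth1[y1, x1])            # step (1): top-left depth pixel
            dv1x = C1 * D + C2                 # step (2): depth -> disparity (1/4 pel)
            pos2x = x1 * 4 + 8 + dv1x          # quarter-pixel horizontal position
            pos2y = y1                         # integer-pixel vertical position
            x2 = int(pos2x // 4) // M * M      # step (3): covering basic image block
            y2 = pos2y // N * N
            dv2[(x2, y2)] = (-dv1x, 0)         # DV2 = -DV1 (no bounds check here)
    return dv2
```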
Preferred embodiment 6
This preferred embodiment relates to a disparity vector generation method. In this embodiment the texture images of viewpoint 1 and viewpoint 2 have the same resolution, the depth images of viewpoint 1 and viewpoint 2 have the same resolution, and the horizontal and vertical resolution of the viewpoint 1 texture image is twice that of its depth image. The viewpoint 1 and viewpoint 2 images are captured with a non-parallel camera arrangement. This embodiment generates disparity vectors for the viewpoint 2 depth image.
The viewpoint 2 depth image is divided into M × N blocks (blocks of M × N pixels), for example M = 3, N = 3, each M × N block being a basic image block; the viewpoint 1 depth image is divided into E × F blocks, E = 3, F = 3, each E × F block being a basic depth block.
All basic depth blocks, or a subset of them (for example one row of basic depth blocks), are processed as follows to obtain the disparity vectors of one or more basic image blocks in the viewpoint 2 depth image:
(1) For any basic depth block at position Pos1, take the mean of the depth values of its top-left and top-right depth pixels as the depth value D of this basic depth block.
(2) Obtain the corresponding position Pos2 of the basic depth block on the viewpoint 2 depth image by the 3D warping formula; the horizontal and vertical components of Pos1 and Pos2 are in integer-pixel precision. The coordinates of the center pixel of the basic depth block in the viewpoint 1 depth image are denoted (x1, y1), where x1 and y1 are in integer-pixel precision, and we have:
Pos1x = x1, Pos1y = y1; (the position of the basic depth block is represented by its center point)
DV1x = Pos2x - Pos1x, DV1y = Pos2y - Pos1y;
where DV1 points from viewpoint 1 to viewpoint 2.
(3) Assign S5 × DV1 as the disparity vector DV2 of the basic image block centered at Pos2, where S5 is a fixed constant, for example S5 = -1 or S5 = 2.
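A minimal sketch of this embodiment's flow, assuming a caller-supplied warp_to_view2 function that performs the 3D warping of a viewpoint 1 pixel into viewpoint 2 (the warping formula itself is not reproduced here, so that helper and its signature are hypothetical, as are the block-geometry details in the comments):

```python
def derive_dv2_warping(depth1, block_centers, warp_to_view2, S5=-1, E=3, F=3):
    """depth1: viewpoint 1 depth image (2D array); block_centers: iterable of
    (x1, y1) basic depth block centers; warp_to_view2(x1, y1, D) -> (x2, y2) is a
    hypothetical 3D-warping helper. Returns {block center in viewpoint 2: DV2}."""
    dv2 = {}
    for (x1, y1) in block_centers:
        # step (1): mean of the block's top-left and top-right depth pixels
        # (assuming the center of an E x F block is offset by E//2, F//2)
        xl, yt = x1 - E // 2, y1 - F // 2
        D = (int(depth1[yt][xl]) + int(depth1[yt][xl + E - 1])) / 2
        # step (2): warp the block center into viewpoint 2 and form DV1
        pos2x, pos2y = warp_to_view2(x1, y1, D)
        dv1 = (pos2x - x1, pos2y - y1)
        # step (3): the basic image block centered at Pos2 gets DV2 = S5 * DV1
        dv2[(pos2x, pos2y)] = (S5 * dv1[0], S5 * dv1[1])
    return dv2
```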
Preferred embodiment 7
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in embodiment 1 (or in one of embodiments 3, 4, 5 and 6) is applied in turn to the basic depth blocks in a certain region of viewpoint 1 (for example one or more macroblock rows, a rectangular region of size R × S, or the entire image), obtaining the disparity vectors of one or more basic image blocks in the viewpoint 2 image.
For a target image block of size J × K in viewpoint 2, take the disparity vectors of the basic image blocks at Q1 (1 ≤ Q1 ≤ Q) predetermined positions among the Q basic image blocks it contains, obtaining Q1 disparity vectors, and assign the disparity vector DV2' of the target image block either as one of these Q1 disparity vectors or as their weighted average (for example, when Q1 = 5, a weighted average with weight coefficients 1/8, 1/8, 1/8, 1/8, 1/2).
Assigning DV2' as one of the Q1 disparity vectors can be done, for example, in one of the following ways (a sketch follows the list):
Method 1: take the disparity vector of the basic image block at one predetermined position in the target image block as the disparity vector of the target image block, the predetermined position being, for example, the top-left, bottom-left, top-right or bottom-right corner or the center point of the target image block;
Method 2: select one disparity vector, as the disparity vector of the target image block, from the disparity vectors of the basic image blocks at multiple predetermined positions in the target image block, where the multiple predetermined positions include, for example, two or more of the top-left, top-right, bottom-left and bottom-right corners and the center point of the target image block, or the positions of all basic image blocks in the target image block; the selection rule is, for example, to take the maximum, the minimum or the median of the disparity vectors of the basic image blocks at the multiple predetermined positions.
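As a non-authoritative sketch of this aggregation step, the helper below derives a target image block's DV2' from already-generated basic image block disparity vectors; the selection keys (maximum, minimum or median taken on the horizontal component) and the default uniform weights are our assumptions for illustration.

```python
from statistics import median

def target_block_dv(basic_dvs, mode="weighted", weights=None):
    """basic_dvs: list of (dvx, dvy) pairs from the Q1 predetermined positions.
    mode: 'first', 'max', 'min', 'median' (keyed on the horizontal component)
    or 'weighted' (weighted average, e.g. weights 1/8,1/8,1/8,1/8,1/2 for Q1=5)."""
    if mode == "first":
        return basic_dvs[0]
    if mode in ("max", "min"):
        pick = max if mode == "max" else min
        return pick(basic_dvs, key=lambda dv: dv[0])
    if mode == "median":
        return (median(dv[0] for dv in basic_dvs),
                median(dv[1] for dv in basic_dvs))
    weights = weights or [1.0 / len(basic_dvs)] * len(basic_dvs)
    return (sum(w * dv[0] for w, dv in zip(weights, basic_dvs)),
            sum(w * dv[1] for w, dv in zip(weights, basic_dvs)))
```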
Preferred embodiment 8
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in embodiment 3 is applied in turn to the basic depth blocks in a rectangular region Reg1 of size R × S in viewpoint 1, obtaining the disparity vectors of one or more basic image blocks of another region Reg2 in the viewpoint 2 image. For a target image block of size J × K in Reg2, the disparity vector with the largest absolute horizontal component among the disparity vectors of the four basic image blocks at the target image block's top-left, top-right, bottom-left and bottom-right corners is taken as the disparity vector DV2' of the target image block.
It should be noted that some basic image blocks in Reg2 may not obtain a disparity vector, for example basic image blocks lying in a dis-occlusion region; no disparity vector is generated for them from the depth of any basic depth block. The disparity vector of such a basic image block can be set to a fixed value, or to the disparity vector of a neighboring basic image block, or to the weighted average of the disparity vectors of several neighboring basic image blocks.
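As an illustration of this corner rule, the sketch below gathers the disparity vectors of the four corner blocks (assuming, as in the earlier sketches, that per-block disparity vectors are stored in a dictionary keyed by block top-left coordinates) and keeps the one with the largest absolute horizontal component.

```python
def corner_dvs(dv_map, x0, y0, J, K, M, N):
    """Disparity vectors of the basic image blocks at the four corners of the
    J x K target block with top-left pixel (x0, y0); blocks are M x N pixels."""
    corners = [(x0, y0), (x0 + J - 1, y0), (x0, y0 + K - 1), (x0 + J - 1, y0 + K - 1)]
    return [dv_map[(cx // M * M, cy // N * N)] for cx, cy in corners]

def dv2_max_abs_horizontal(dvs):
    """Embodiment 8: keep the vector with the largest |horizontal component|."""
    return max(dvs, key=lambda dv: abs(dv[0]))
```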
Preferred embodiment 9
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in embodiment 4 is applied in turn to the basic depth blocks in a rectangular region Reg1 of size R × S in viewpoint 1, obtaining the disparity vectors of one or more basic image blocks of another region Reg2 (possibly non-rectangular) in the viewpoint 2 image. For a target image block of size J × K in Reg2, the disparity vector whose horizontal-component absolute value is the median among the disparity vectors of the four basic image blocks at the target image block's top-left, top-right, bottom-left and bottom-right corners is taken as the disparity vector DV2' of the target image block.
In the above process of deriving disparity vectors from the basic depth blocks one by one, if the basic image blocks in viewpoint 2 corresponding to two horizontally adjacent basic depth blocks are not horizontally adjacent and there are N basic image blocks (N a positive integer) between them, the disparity vector with the smaller (or larger) absolute value of the two blocks' disparity vectors is assigned to these N basic image blocks.
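Embodiment 9 differs only in the rule applied to the four corner vectors. Assuming they have been gathered as in the previous sketch, a hedged version of the median rule is shown below; for four vectors we take the upper median of the sorted absolute horizontal components, which is our convention rather than one fixed by the patent.

```python
def dv2_median_abs_horizontal(dvs):
    """Keep the vector whose |horizontal component| is the median of the corner
    vectors (upper median when the count is even, by our convention)."""
    return sorted(dvs, key=lambda dv: abs(dv[0]))[len(dvs) // 2]
```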
Preferred embodiment 10
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in preferred embodiment 5 is applied in turn to the basic depth blocks in a rectangular region Reg1 of size R × S (R ≥ E, S ≥ F) in viewpoint 1, obtaining the disparity vectors of one or more basic image blocks of another region Reg2 in the viewpoint 2 image. For a target image block of size J × K in Reg2, the disparity vector of the basic image block at the target image block's center point is taken as the disparity vector DV2' of the target image block. If the disparity vector of the basic image block at the center point does not exist, DV2' is set to the mean of the disparity vectors of the two basic image blocks at the target image block's top-left and top-right corners.
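A small sketch of embodiment 10's center-point rule and its two-corner fallback, using the same hypothetical dictionary of per-block disparity vectors keyed by block top-left coordinates as the earlier sketches.

```python
def dv2_center_with_fallback(dv_map, x0, y0, J, K, M, N):
    """Target image block: top-left pixel (x0, y0), size J x K; basic image
    blocks are M x N pixels, keyed in dv_map by their top-left coordinates."""
    key = lambda px, py: (px // M * M, py // N * N)
    center = key(x0 + J // 2, y0 + K // 2)
    if center in dv_map:                        # DV of the center basic image block
        return dv_map[center]
    tl, tr = key(x0, y0), key(x0 + J - 1, y0)   # fallback: mean of the two top corners
    return ((dv_map[tl][0] + dv_map[tr][0]) / 2,
            (dv_map[tl][1] + dv_map[tr][1]) / 2)
```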
Preferred embodiment 11
This preferred embodiment relates to a disparity vector generation method. First, a rectangular region Reg1 of size R × S in the viewpoint 1 image is determined from a target image block of size J × K in the viewpoint 2 image; the disparity vector generation method described in preferred embodiment 1 is applied in turn to the basic depth blocks in Reg1, generating at least one disparity vector for a basic image block located inside the target image block. For the target image block, the disparity vectors of the basic image blocks at multiple predetermined candidate positions inside the target image block (such as the center point and the top-left, bottom-right, bottom-left and top-right corners) are queried in turn for existence (that is, whether a disparity vector has already been generated for that basic image block by applying the above disparity vector generation method to the basic depth blocks in Reg1), and the first disparity vector found to exist is taken as the disparity vector DV2' of the target image block. When none of the disparity vectors of the basic image blocks at the candidate positions exists, one of the following methods may be used to obtain the disparity vector (a sketch of this query-with-fallback procedure follows the list):
Method one: visit in turn (for example in Zig-Zag scanning order) the basic image blocks other than those at the candidate positions, and take the disparity vector of the first visited basic image block that has a disparity vector as the disparity vector DV2' of the target image block;
Method two: set the disparity vector DV2' of the target image block to a fixed value;
Method three: set the disparity vector DV2' of the target image block to the disparity vector of a basic image block adjacent to the target image block.
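A hedged sketch of the candidate-position query with the method-two fallback; the candidate ordering, the optional Zig-Zag fallback list and the zero default are assumptions for illustration, and dv_map is the same hypothetical block-to-vector store used in the earlier sketches.

```python
def query_target_dv(dv_map, candidate_keys, fallback_keys=None, default=(0, 0)):
    """candidate_keys: block keys at the predetermined positions (center, corners,
    ...) in query order. fallback_keys: optional Zig-Zag-ordered keys (method one).
    Returns the first existing disparity vector, else the method-two fixed value."""
    for key in candidate_keys:
        if key in dv_map:                 # first candidate whose DV exists wins
            return dv_map[key]
    for key in (fallback_keys or []):     # method one: scan the remaining blocks
        if key in dv_map:
            return dv_map[key]
    return default                        # method two: a fixed value
```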
Preferred embodiment 12
This preferred embodiment relates to an inter-view motion information prediction method, which is one application of the disparity vector generation method provided by the invention. First, the disparity vector DV2' of a target image block (pointing from viewpoint 2 to viewpoint 1) is obtained according to the disparity vector generation method described in preferred embodiment 7. From DV2' and the position Pos2' of the target image block, the corresponding point Pos1' in the viewpoint 1 image is found as Pos1' = Pos2' + DV2'. The motion information of the pixel at Pos1', such as its motion vector and reference index, is taken as the motion information predictor of the target image block.
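For illustration only, a small sketch of this motion-information lookup; the MotionInfo container and the motion_field accessor are hypothetical stand-ins for whatever per-pixel motion storage a codec would use.

```python
from typing import Callable, NamedTuple, Tuple

class MotionInfo(NamedTuple):        # hypothetical container
    mv: Tuple[int, int]              # motion vector
    ref_idx: int                     # reference index

def predict_motion(pos2, dv2p, motion_field: Callable[[int, int], MotionInfo]):
    """pos2: target block position Pos2' in viewpoint 2; dv2p: DV2' (view 2 -> view 1).
    motion_field(x, y) returns the viewpoint 1 motion info at pixel (x, y)."""
    pos1 = (pos2[0] + dv2p[0], pos2[1] + dv2p[1])   # Pos1' = Pos2' + DV2'
    return motion_field(pos1[0], pos1[1])
```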
Preferred embodiment 13
This preferred embodiment relates to a motion vector candidate list construction method, which is one application of the disparity vector method provided by the invention. First, the disparity vector DV2' of a target image block (pointing from viewpoint 2 to viewpoint 1) is obtained according to the disparity vector generation method described in preferred embodiment 8. DV2' is then added, as a vector whose reference index indicates the viewpoint 1 texture image corresponding to DV2', to the motion vector candidate list of the target image block.
Preferred embodiment 14
This preferred embodiment relates to a disparity vector generating apparatus. Fig. 5 is a structural diagram of the disparity vector generating apparatus according to a preferred implementation of the embodiment of the present invention. As shown in Fig. 5, the apparatus comprises two units: a block depth generation unit and a disparity vector generation unit. The block depth generation unit generates the depth value of a basic depth block; the disparity vector generation unit generates, from the depth value of the basic depth block, the disparity vector of a basic image block in another viewpoint. These two functional units are described below.
Block depth generation unit: its input comprises a basic depth block in the viewpoint 1 depth image, and its output comprises the depth value of the basic depth block, where the basic depth block comprises E × F depth pixels (E × F > 1) of the viewpoint 1 depth image. The function and implementation of the block depth generation unit are the same as those of obtaining the depth value D of a basic depth block from the depth values of X (1 ≤ X ≤ E × F) of its depth pixels in the disparity vector generation method described above.
Disparity vector generation unit: its input comprises the depth value and the position of a basic depth block, and its output is the disparity vector of a basic image block in the viewpoint 2 image, where the basic image block comprises M × N image pixels (M × N > 1) of the viewpoint 2 image. The function and implementation of the disparity vector generation unit are the same as the following processing in the disparity vector generation method described above:
(1) convert the depth value D into the disparity vector DV1 between the viewpoint 1 image and the viewpoint 2 image, and obtain the corresponding position Pos2 of the basic depth block in the viewpoint 2 image;
(2) set the disparity vector DV2 of the basic image block located at the corresponding position Pos2 to the disparity vector DV1, or to the product of DV1 and a real number, where the real number is a constant such as -1, 1/2, -1/2, 2 or -2, or is a scaling factor whose absolute value is the ratio of the viewpoint 1 depth image resolution to the viewpoint 2 image resolution or the reciprocal of that ratio;
Here the viewpoint 2 image may be the viewpoint 2 texture image, in which case the viewpoint 1 image refers to the viewpoint 1 texture image; the viewpoint 2 image may also be the viewpoint 2 depth image, in which case the viewpoint 1 image refers to the viewpoint 1 depth image.
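A minimal object-oriented sketch of how the two units of Fig. 5 could be chained; the class and parameter names, and the linear depth-to-disparity model C1 × D + C2 carried over from the earlier embodiments, are assumptions for illustration rather than a fixed specification of the apparatus.

```python
class BlockDepthUnit:
    """Produces one depth value D per basic depth block (top-left pixel here)."""
    def depth_of_block(self, depth_image, x1, y1):
        return int(depth_image[y1][x1])

class DisparityVectorUnit:
    """Converts D into DV1 and places DV2 at the corresponding position Pos2."""
    def __init__(self, C1, C2, scale=-1.0):      # scale: the real-number factor
        self.C1, self.C2, self.scale = C1, C2, scale

    def dv_of_block(self, D, x1, y1):
        dv1x = self.C1 * D + self.C2             # horizontal disparity, vertical = 0
        pos2 = (x1 + dv1x, y1)                   # corresponding position in viewpoint 2
        dv2 = (self.scale * dv1x, 0.0)           # DV2 = scale * DV1
        return pos2, dv2
```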
Preferred embodiment 15
This preferred embodiment relates to a disparity vector generating apparatus. Fig. 6 is a structural diagram of the disparity vector generating apparatus according to another preferred implementation of the embodiment of the present invention. As shown in Fig. 6, the apparatus comprises three units: a block depth generation unit, a disparity vector generation unit and a target block disparity vector calculation unit. The block depth generation unit generates the depth value of a basic depth block; the disparity vector generation unit generates, from the depth value of the basic depth block, the disparity vector of a basic image block in another viewpoint; the target block disparity vector calculation unit calculates the disparity vector of a target region from the disparity vectors of one or more basic image blocks covered by that region. These three functional units are described below.
Block depth generation unit: its input comprises a basic depth block in the viewpoint 1 depth image, and its output comprises the depth value of the basic depth block, where the basic depth block comprises E × F depth pixels (E × F > 1) of the viewpoint 1 depth image. The function and implementation of the block depth generation unit are the same as those of obtaining the depth value D of a basic depth block from the depth values of X (1 ≤ X ≤ E × F) of its depth pixels in the disparity vector generation method described above.
Disparity vector generation unit: its input comprises the depth value and the position of a basic depth block, and its output is the disparity vector of a basic image block in the viewpoint 2 image, where the basic image block comprises M × N image pixels (M × N > 1) of the viewpoint 2 image. The function and implementation of the disparity vector generation unit are the same as the following processing in the disparity vector generation method described above:
(1) convert the depth value D into the disparity vector DV1 between the viewpoint 1 image and the viewpoint 2 image, and obtain the corresponding position Pos2 of the basic depth block in the viewpoint 2 image;
(2) set the disparity vector DV2 of the basic image block located at the corresponding position Pos2 to the disparity vector DV1, or to the product of DV1 and a real number, where the real number is a constant such as -1, 1/2, -1/2, 2 or -2, or is a scaling factor whose absolute value is the ratio of the viewpoint 1 depth image resolution to the viewpoint 2 image resolution or the reciprocal of that ratio.
Target block disparity vector calculation unit: its input comprises the size and the position of a target block in the viewpoint 2 image and the disparity vectors of one or more basic image blocks, where the target block comprises J × K image pixels (J × K > 1), and its output comprises one disparity vector. The function and implementation of the target block disparity vector calculation unit are the same as those in the disparity vector generation method described above of taking the disparity vectors of Q1 (1 ≤ Q1 ≤ Q) of the Q (Q ≥ 2) basic image blocks contained in the target block and assigning the disparity vector of the target image block either as one of the Q1 disparity vectors or as their weighted average.
Here the viewpoint 2 image may be the viewpoint 2 texture image, in which case the viewpoint 1 image refers to the viewpoint 1 texture image; the viewpoint 2 image may also be the viewpoint 2 depth image, in which case the viewpoint 1 image refers to the viewpoint 1 depth image.
The disparity vector generating apparatus can be realized in various ways, for example:
Method one: implemented on a general-purpose computer as the hardware, together with a software program that is functionally identical to the disparity vector generation method.
Method two: implemented on a single-chip microcomputer as the hardware, together with a software program that is functionally identical to the disparity vector generation method.
Method three: implemented on a digital signal processor as the hardware, together with a software program that is functionally identical to the disparity vector generation method.
Method four: implemented as a purpose-designed circuit that is functionally identical to the disparity vector generation method.
Of course, in practical applications the disparity vector generating apparatus can also be implemented in many other ways; it is not limited to the above four.
The disparity vector generation method and disparity vector generating apparatus provided by the above embodiments do not depend on the reconstructed depth image of the currently coded viewpoint. They can therefore produce the disparity vector between the coded viewpoint (viewpoint 1) image and the currently coded viewpoint (viewpoint 2) image even when the texture image of the currently coded viewpoint is coded before its depth image, thereby supporting coding tools (such as VSP and DMVP) that would otherwise depend on the reconstructed depth image of the currently coded viewpoint. This solves the problems of the related art, in which disparity vector generation has higher complexity, occupies more data storage space and requires more repeated depth-to-disparity conversion processing, and thus achieves the effects of lower computational complexity of the disparity vectors, fewer projection operations and smaller storage space.
From the above description it can be seen that the present invention achieves the following technical effects. (1) Compared with the prior art, the disparity vector derivation method of the present invention does not depend on the reconstructed depth image of the currently coded viewpoint; it can produce the disparity vector between the coded viewpoint (viewpoint 1) image and the currently coded viewpoint (viewpoint 2) image when the texture image of the currently coded viewpoint is coded before its depth image, thereby supporting coding tools that depend on the reconstructed depth image of the currently coded viewpoint, such as VSP and DMVP. In addition, the disparity vector generation method provided by the invention can also derive disparity vectors for an image region of the depth image while that depth image is being coded (when the depth image of the region is necessarily unavailable). (2) Compared with the method proposed by S. Shimizu et al., the present invention obtains the disparity vector of each basic image block of the currently coded viewpoint from the basic depth blocks of the coded viewpoint, and derives the disparity vector of each target image block from the disparity vectors of the basic image blocks, so no further depth-to-disparity conversion is needed and the computational complexity is lower. In addition, a basic image block contains multiple depth pixels but corresponds to only one disparity vector, so the number of disparity vectors is much smaller than the number of pixels of the depth image; compared with synthesizing a depth image, producing the disparity vectors therefore requires fewer projection operations and less storage space.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with general-purpose computing devices. They can be concentrated in a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases the steps shown or described can be performed in an order different from that given here. Alternatively, they can be made into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. The present invention is thus not restricted to any specific combination of hardware and software.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any amendment, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A disparity vector generation method, characterized by comprising:
obtaining a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image;
generating a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
2. The method according to claim 1, characterized in that obtaining the first depth value of the basic depth block according to the depth pixel values of the basic depth block in the first viewpoint depth image comprises one of the following:
taking the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value;
taking one depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, the minimum or the median of the depth values of the depth pixels at the multiple predetermined positions;
taking the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
3. The method according to claim 1, characterized in that generating the first disparity vector of the basic image block in the second viewpoint image according to the first depth value comprises:
converting the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and obtaining the corresponding position of the basic depth block in the second viewpoint image, wherein when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is a first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image;
taking the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the basic image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a scaling factor whose absolute value is the ratio of a first resolution of the first viewpoint depth image to a second resolution of the second viewpoint image or the reciprocal of the ratio.
4. The method according to any one of claims 1 to 3, characterized in that, after generating the first disparity vector of the basic image block in the second viewpoint image according to the first depth value, the method further comprises:
generating a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple said basic image blocks.
5. The method according to claim 4, characterized in that generating the third disparity vector of the target image block according to the first disparity vector comprises:
determining the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block;
choosing the value of one first disparity vector from all the determined first disparity vectors as the third disparity vector, or taking the weighted average of all the determined first disparity vectors as the third disparity vector.
6. A disparity vector generating apparatus, characterized by comprising:
an acquisition module, configured to obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image;
a first generation module, configured to generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
7. The apparatus according to claim 6, characterized in that the acquisition module comprises one of the following:
a first setting unit, configured to take the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value;
a selection unit, configured to take one depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, the minimum or the median of the depth values of the depth pixels at the multiple predetermined positions;
a second setting unit, configured to take the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
8. The apparatus according to claim 6, characterized in that the first generation module comprises:
a conversion unit, configured to convert the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and to obtain the corresponding position of the basic depth block in the second viewpoint image, wherein when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is a first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image;
a third setting unit, configured to take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the basic image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a scaling factor whose absolute value is the ratio of a first resolution of the first viewpoint depth image to a second resolution of the second viewpoint image or the reciprocal of the ratio.
9. The apparatus according to any one of claims 6 to 8, characterized in that the apparatus further comprises:
a second generation module, configured to generate a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple said basic image blocks.
10. The apparatus according to claim 9, characterized in that the second generation module comprises:
a determining unit, configured to determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block;
a fourth setting unit, configured to choose the value of one first disparity vector from all the determined first disparity vectors as the third disparity vector, or to take the weighted average of all the determined first disparity vectors as the third disparity vector.
CN201310007164.2A 2013-01-09 2013-01-09 Difference vector generation method and device Active CN103916652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310007164.2A CN103916652B (en) 2013-01-09 2013-01-09 Difference vector generation method and device

Publications (2)

Publication Number Publication Date
CN103916652A true CN103916652A (en) 2014-07-09
CN103916652B CN103916652B (en) 2018-01-09

Family

ID=51042000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310007164.2A Active CN103916652B (en) 2013-01-09 2013-01-09 Difference vector generation method and device

Country Status (1)

Country Link
CN (1) CN103916652B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101248670A (en) * 2005-09-22 2008-08-20 三星电子株式会社 Method of estimating disparity vector, and method and apparatus for encoding and decoding multi-view moving picture using the disparity vector estimation method
WO2012128068A1 (en) * 2011-03-18 2012-09-27 ソニー株式会社 Image processing device, image processing method, and program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HEIKO SCHWARZ: "Test Model under Consideration for HEVC based 3D video coding", 《ISO/IEC JTC1/SC29/WG11 MPEG2011/N12559》 *
JIAN-LIANG LIN等: "3D-CE5.a related: Simplification on the disparity vector derivation for AVC-based 3D video coding", 《JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSION DEVELOPMENT OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104995915A (en) * 2015-02-05 2015-10-21 华为技术有限公司 Coding-decoding method, and coder-decoder
WO2016123774A1 (en) * 2015-02-05 2016-08-11 华为技术有限公司 Method and device for encoding and decoding
CN110336942A (en) * 2019-06-28 2019-10-15 Oppo广东移动通信有限公司 A kind of virtualization image acquiring method and terminal, computer readable storage medium
WO2022002181A1 (en) * 2020-07-03 2022-01-06 阿里巴巴集团控股有限公司 Free viewpoint video reconstruction method and playing processing method, and device and storage medium

Also Published As

Publication number Publication date
CN103916652B (en) 2018-01-09

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant