WO2007077942A1 - Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage medium recording the programs - Google Patents
Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage medium recording the programs
- Publication number
- WO2007077942A1 (PCT/JP2006/326297)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- parallax
- reference image
- information
- decoding
- encoding
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/19—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding using optimisation based on Lagrange multipliers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- The present invention relates to techniques for encoding and decoding multi-view video.
- A multi-view video is a set of videos obtained by capturing the same subject and background with cameras placed at various positions.
- A video captured by a single camera is called a "two-dimensional video".
- A set of two-dimensional videos of the same subject and background is called a multi-view video.
- The two-dimensional videos of the individual cameras contained in a multi-view video are strongly correlated in the time direction.
- When the cameras are synchronized, the frames of the cameras corresponding to the same time capture the subject and the background in exactly the same state, so there is also a strong correlation between the cameras.
- In an I frame, the frame is divided into blocks (each called a macroblock, with a block size of 16×16 pixels), and intra prediction is performed in each macroblock.
- each macroblock can be divided into smaller blocks (hereinafter referred to as sub-blocks), and different intra-prediction methods can be used for each sub-block.
- intra prediction or inter prediction can be performed in each macroblock.
- Intra prediction for P frames is the same as for I frames.
- motion compensation is performed during inter prediction.
- a macroblock can be divided into smaller blocks, and each subblock can have a different motion vector and reference image.
- In a B frame, intra prediction and inter prediction can also be performed.
- In inter prediction for a B frame, future frames can be used as reference images for motion compensation in addition to past frames.
- encoding can be performed in the order of I ⁇ P ⁇ B ⁇ B.
- motion compensation can be performed with reference to the I and P frames.
- each sub-block obtained by dividing a macro block can have a different motion vector.
- For each macroblock, the prediction residual block is subjected to DCT (discrete cosine transform) and quantization. Variable-length coding is then applied to the quantized values of the DCT coefficients obtained in this way.
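The residual coding pipeline described above (transform, quantize, entropy-code) can be sketched as follows. This is a minimal illustration, not the H.264 integer transform: it uses a small orthonormal floating-point DCT-II and a single uniform quantization step `Q`, both chosen here for clarity.

```python
import math

N = 4  # block size for this illustrative sketch (H.264 uses 4x4/8x8 transforms)

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix C, so that Y = C X C^T
    m = []
    for k in range(n):
        s = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([s * math.cos(math.pi * (2 * x + 1) * k / (2 * n)) for x in range(n)])
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

C = dct_matrix(N)
Ct = transpose(C)

def forward_dct(block):
    return matmul(matmul(C, block), Ct)

def inverse_dct(coeffs):
    return matmul(matmul(Ct, coeffs), C)

def quantize(coeffs, q):
    return [[int(round(c / q)) for c in row] for row in coeffs]

def dequantize(levels, q):
    return [[v * q for v in row] for row in levels]

# A toy prediction-residual block
residual = [[2, -3, 1, 0],
            [0, 4, -1, 2],
            [1, 0, 0, -2],
            [3, -1, 2, 0]]

Q = 2  # quantization step (coarser steps give fewer bits, larger distortion)
levels = quantize(forward_dct(residual), Q)
recon = inverse_dct(dequantize(levels, Q))
err = max(abs(recon[i][j] - residual[i][j]) for i in range(N) for j in range(N))
# The integer levels would then be variable-length (entropy) coded; since the
# transform is orthonormal, the spatial reconstruction error stays bounded by
# the quantization error of the coefficients.
```

In a real codec the quantized levels feed a variable-length coder (e.g. CAVLC), and the quantization step varies per macroblock via the quantization parameter.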
- The reference image, which can be selected for each sub-block, is identified by a numerical value called the reference image index, which is variable-length encoded.
- For multi-view video encoding, there have conventionally been methods that encode multi-view video with high efficiency by means of "parallax compensation", in which motion compensation is applied to images of different cameras at the same time.
- Here, parallax is the difference between the positions at which the same point on the subject is projected onto the image planes of cameras placed at different positions.
- FIG. 13 shows a conceptual diagram of parallax generated between the cameras.
- In this figure, the image planes of cameras whose optical axes are parallel are viewed from directly above.
- the position where the same position on the subject is projected on the image plane of different cameras is generally called a corresponding point.
- In parallax compensation, the corresponding point on a reference camera image for a target pixel on the image of the encoding target camera is estimated from the reference image, and the pixel value of the target pixel is predicted from the pixel value at that corresponding point.
- Hereinafter, such estimated parallax is also referred to simply as "parallax".
- the disparity information and the prediction residual are encoded.
- parallax is expressed as a vector (parallax vector) on an image plane.
- In methods that include a mechanism for performing parallax compensation in units of blocks, the parallax per block is represented by a two-dimensional vector, that is, by two parameters (an x component and a y component).
- A conceptual diagram of this disparity vector is shown in FIG. 14. In such methods, disparity information consisting of two parameters and the prediction residual are encoded. Since camera parameters are not used for encoding, this approach is effective when the camera parameters are unknown.
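Block-based disparity compensation with a two-parameter vector can be sketched as an exhaustive SAD (sum of absolute differences) search over a small window. This is a minimal illustration with toy image data; the images, block position, and search range are all assumptions for the example.

```python
def sad(target, ref, bx, by, dx, dy, n):
    # Sum of absolute differences between the target block at (bx, by)
    # and the reference block displaced by the candidate vector (dx, dy)
    total = 0
    for j in range(n):
        for i in range(n):
            total += abs(target[by + j][bx + i] - ref[by + j + dy][bx + i + dx])
    return total

def search_disparity(target, ref, bx, by, n, search):
    best, best_sad = (0, 0), float('inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # skip candidates that fall outside the reference image
            if not (0 <= bx + dx and bx + dx + n <= len(ref[0]) and
                    0 <= by + dy and by + dy + n <= len(ref)):
                continue
            s = sad(target, ref, bx, by, dx, dy, n)
            if s < best_sad:
                best_sad, best = s, (dx, dy)
    return best, best_sad

# Toy images: the target view equals the reference shifted left by 2 pixels,
# so the true disparity vector is (2, 0)
W, H = 12, 8
ref = [[(x * 7 + y * 13) % 50 for x in range(W)] for y in range(H)]
target = [[ref[y][min(x + 2, W - 1)] for x in range(W)] for y in range(H)]

vec, cost = search_disparity(target, ref, 4, 2, 4, 3)
```

An encoder would then transmit `vec` (the two disparity parameters) plus the residual block; rate-distortion-optimized encoders add an estimated vector-coding cost to the SAD before comparing candidates.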
- Non-Patent Document 3 describes a method for encoding a multi-viewpoint image (still image).
- This method efficiently encodes multi-viewpoint images by using camera parameters for encoding and expressing disparity vectors as one-dimensional information based on epipolar geometric constraints.
- FIG. 15 shows a conceptual diagram of the epipolar geometric constraint.
- Under the epipolar geometric constraint, given two images from two cameras (camera 1 and camera 2), the point m′ on one image that corresponds to a point m on the other image, for a position M on the subject, is constrained to lie on a straight line called the epipolar line.
- In the method of Non-Patent Document 3, the parallax with respect to a reference image is expressed by one parameter, namely the position on the one-dimensional epipolar line; that is, disparity information expressed by one parameter and the prediction residual are encoded.
- Even when parallax compensation uses two or more reference images, the parallax for each reference image can be expressed with this single parameter using the epipolar geometric constraint. For example, if the parallax on the epipolar line for one reference image is known, the parallax for the reference image of another camera can also be restored from it.
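The restoration of per-camera parallax from a single parameter can be illustrated for the simplest geometry. Assuming cameras with parallel optical axes on a common baseline, the horizontal disparity of a point is d = f·B/Z (f: focal length in pixels, B: baseline to the reference camera, Z: depth), so disparity scales linearly with baseline and one parameter fixes it for every reference camera. The numbers and names below are illustrative assumptions, not values from the patent.

```python
# Assumed setup: parallel optical axes, cameras on a common baseline.
# The single encoded parameter is the disparity per unit baseline, f / Z.

def disparity_for_camera(d_per_unit_baseline, baseline):
    # signed baseline: negative if the reference camera lies on the other side
    return d_per_unit_baseline * baseline

f = 500.0        # focal length in pixels (assumed)
Z = 1000.0       # depth of the scene point (assumed)
d_unit = f / Z   # the single parameter shared by all reference images

# Reference cameras A and B at baselines -10 and +10 units from the target camera
d_A = disparity_for_camera(d_unit, -10.0)
d_B = disparity_for_camera(d_unit, +10.0)
```

With general camera parameters the same idea holds along each epipolar line: one scalar (effectively a depth) determines the corresponding point in every calibrated reference view.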
- In Non-Patent Document 4, disparity compensation is performed using arbitrary-viewpoint image generation technology. Specifically, a pixel value of the image of the encoding target camera is predicted by interpolating between the pixel values of the corresponding points in the images of different cameras.
- Figure 16 shows a conceptual diagram of this interpolation. Here, the value of pixel m of the encoding target image is predicted by interpolating the values of the corresponding pixels of reference images 1 and 2.
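The interpolation in Fig. 16 reduces, per pixel, to averaging the two corresponding reference samples. A minimal sketch follows; the integer rounding convention (`+1` before halving) is an assumption typical of codecs, not stated in the source.

```python
def predict_by_interpolation(ref1_val, ref2_val):
    # Predict the target pixel as the average of its corresponding points
    # in the two reference images (integer arithmetic with rounding)
    return (ref1_val + ref2_val + 1) // 2

pred = predict_by_interpolation(100, 103)
residual = 104 - pred  # the encoder would code this residual
```

Weighted interpolation (weights depending on the viewpoint distances) is a natural generalization of the plain average shown here.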
- Non-Patent Document 1: ITU-T Rec. H.264 / ISO/IEC 14496-10, "Advanced Video Coding", Final Committee Draft, Document JVT-E022, September 2002
- Non-Patent Document 2: Hideaki Kimata and Masaki Kitahara, "Preliminary results on multiple view video coding (3DAV)", document M10976, MPEG Redmond Meeting, July 2004
- Non-Patent Document 3: High-efficiency coding, IEICE Transactions, Vol. J82-D-II, No. 11, pp. 1921-1929 (1999)
- Non-Patent Document 4: Masayuki Tanimoto, Toshiaki Fujii, "Response to Call for Evidence on Multi-View Video Coding", document Mxxxxx, MPEG Hong Kong Meeting, January 2005
- Using the epipolar geometric constraint, the disparity information for each reference image can be expressed by one parameter regardless of the number of reference images, so the disparity information can be encoded efficiently.
- The present invention solves the above problems. Its purpose is to control the degree of freedom of parallax compensation according to the nature of the reference images in multi-view video encoding, thereby improving the accuracy of parallax compensation and achieving higher coding efficiency than before, even in the presence of encoding distortion in the reference images and measurement errors in the camera parameters.
- The point on which the present invention differs most from the prior art is that the number of parameters of the disparity information is made variable so that the degree of freedom of parallax compensation can be controlled according to the nature of the reference images, and parallax parameter number information or index information indicating that number of parameters is encoded and included in the encoded data.
- In addition to the number of disparity parameters, the index information can include information indicating the reference images used for parallax compensation, and can further include other information.
- That is, a process is performed that encodes and decodes parallax parameter number information, which specifies the number of disparity information parameters used for parallax compensation according to the nature of the video.
- The parallax parameter number information specifies, for example, the dimension of the disparity vector for each reference image. For example, when there are two reference images (reference images A and B), the following configurations are conceivable:
- the disparities for reference images A and B are together expressed by one parameter (a position on the epipolar line), or the disparity vectors for reference images A and B are each two-dimensional.
- An index pNum identifying the configuration can then be defined as the parallax parameter number information.
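The two configurations above can be captured by a small lookup from pNum to parameter counts. This is a hypothetical sketch; the dictionary layout and field names are illustrative, not part of the patent.

```python
# Hypothetical encoding of the parallax-parameter-number index pNum for two
# reference images A and B, following the two configurations described above.
PNUM_MODES = {
    0: {"total_params": 1,
        "desc": "one position on the epipolar line, shared via the constraint"},
    1: {"total_params": 4,
        "desc": "independent 2D disparity vector per reference image"},
}

def num_disparity_params(pnum):
    # Number of disparity parameters the decoder must read for this block
    return PNUM_MODES[pnum]["total_params"]
```

A decoder would read pNum first and then parse exactly `num_disparity_params(pNum)` disparity values, which is the mechanism the encoding/decoding steps below rely on.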
- On the video encoding side, the number of parameters for expressing the disparity information is set in a parallax parameter number setting step.
- Next, the parallax parameter number information, which is information on the number of parameters set in the parallax parameter number setting step, is encoded in a parallax parameter number information encoding step.
- The disparity information expressed with the number of parameters set in the parallax parameter number setting step is encoded in a disparity information encoding step.
- On the decoding side, the parallax parameter number information is first decoded in a parallax parameter number information decoding step. Then, disparity information with the number of parameters specified by the decoded parallax parameter number information is decoded in a disparity information decoding step.
- A reference image that can be used for parallax compensation is assigned to each reference image index.
- For example, when two reference images are used at a time and three reference images (A, B, C) are available in the reference image memory, combinations of pairs are assigned to indices,
- e.g. refIdx 2: reference images A and C, where refIdx denotes the reference image index.
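The index-to-pair assignment can be written as a small table. Only the pair for refIdx 2 is given in the text; the assignments for indices 0 and 1 below are hypothetical placeholders for the remaining pairs of {A, B, C}.

```python
# Illustrative assignment of reference-image pairs to reference image indices
# when two of three stored reference images (A, B, C) are used at a time.
REF_PAIRS = {
    0: ("A", "B"),  # assumed
    1: ("B", "C"),  # assumed
    2: ("A", "C"),  # as given in the text
}

def references_for(ref_idx):
    # Which two stored reference images a block with this index uses
    return REF_PAIRS[ref_idx]
```

Since the index is entropy-coded, reordering this table so that frequently chosen pairs get small indices (and hence short codes) is what the H.264-style reordering mechanism mentioned below exploits.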
- A reference image index associated with a decoded image of the encoding target camera may also be set.
- In this case, the decoding side includes a step of decoding the reference image index.
- When combined with the H.264 reference image index reordering mechanism mentioned above, a small index value can be assigned to a reference image that can generate a high-quality predicted image, according to the nature of the video, so that coding efficiency can be improved.
- The available parallax parameter number information is associated with each reference image index.
- In this case, the video encoding side executes a reference image index encoding step that encodes the reference image index, and the parallax parameter number information is thereby encoded in this step.
- The decoding side executes a reference image index decoding step that decodes the reference image index, and the parallax parameter number information is thereby decoded in this step.
- In this way, the code length of the variable-length code assigned to the parallax parameter number information can be changed according to the nature of the video, so the parallax parameter number information can be encoded efficiently.
- When prediction according to the epipolar geometric constraint is not effective because of measurement errors in the camera parameters or coding distortion in the reference images, highly flexible prediction obtained by increasing the number of parameters can be applied; when prediction using the epipolar geometric constraint works well, prediction expressing the disparity with one parameter can be applied. Since this choice is made per frame or per block according to the characteristics of the decoded images, the coding efficiency can be controlled and higher coding efficiency than before can be realized.
- FIG. 1 is a diagram showing a video encoding device according to a first embodiment of the present invention.
- FIG. 2 is a diagram illustrating a camera reference relationship in the first embodiment.
- FIG. 3 is a diagram showing a camera arrangement in Example 1.
- FIG. 4 is an encoding flowchart in the first embodiment.
- FIG. 5 is a diagram illustrating a video decoding apparatus according to Embodiment 1.
- FIG. 6 is a decoding flowchart in the first embodiment.
- FIG. 7 is a diagram showing a reference relationship of cameras in Embodiment 2 of the present invention.
- FIG. 8 is a diagram showing a video encoding device according to the second embodiment.
- FIG. 9 is an encoding flowchart in the second embodiment.
- FIG. 10 is a detailed flowchart of the process of step S304 in FIG. 9.
- FIG. 11 is a diagram showing a video decoding apparatus according to Embodiment 2.
- FIG. 12 is a video decoding flowchart in the second embodiment.
- FIG. 13 is a conceptual diagram of parallax generated between cameras.
- FIG. 14 is a conceptual diagram of a disparity vector.
- FIG. 15 is a conceptual diagram of the epipolar geometric constraint.
- FIG. 16 is a conceptual diagram of pixel value interpolation.
- FIG. 1 shows a block diagram of a video encoding apparatus according to Embodiment 1 of the present invention.
- The video encoding apparatus 100 includes an image input unit 101 that inputs an original image of camera C (the encoding target image), a reference image input unit 102 that inputs decoded images of cameras A and B (the reference images), a reference image memory 103 that stores the reference images, a parallax parameter number setting unit 104 that sets the number of parameters expressing the disparity information used for parallax compensation, a parallax parameter number information encoding unit 105 that encodes the parallax parameter number information, a disparity information encoding unit 106 that encodes the disparity information, and a prediction residual encoding unit 107 that encodes the residual signal generated by parallax compensation.
- FIG. 2 is a diagram illustrating a camera reference relationship according to the first embodiment.
- This shows the case where the moving image of camera C is encoded using the decoded images of cameras A and B as reference images.
- The arrows in the figure indicate the reference relationships during parallax compensation: when an image of camera C is encoded, the decoded images of cameras A and B with the same display time are used as reference images. In that case, the predicted image is created from the average of the pixel values at the corresponding points of cameras A and B.
- FIG. 3 is a diagram illustrating a camera arrangement in the first embodiment.
- The viewpoint positions of the three cameras are arranged at equal intervals on a straight line, and each optical axis is perpendicular to that straight line.
- The optical axes of the three cameras are assumed to be parallel.
- The xy coordinate system of each image plane is obtained by translation (without rotation or the like) along the straight line on which the cameras are arranged, and pixels are formed by dividing the x and y axes of the image plane at equal intervals in each camera.
- The resolution is the same for each camera, and a parallax of P pixels between cameras C and A corresponds to a parallax of P pixels between cameras C and B.
- FIG. 4 shows the encoding flow in the first embodiment.
- This flowchart shows processing performed when one image of the camera C is encoded, and it is assumed that moving image encoding is performed by repeating this processing for each image.
- Two ways of expressing the disparity information are used: the parallax for each of the reference images of cameras A and B is represented by one parameter giving the position on the epipolar line (index pNum value 0), or the parallax for each of the reference images of cameras A and B is expressed as a two-dimensional vector, giving a total of four parameters (index pNum value 1).
- Parallax compensation is performed while adaptively switching between the two.
- Here, pNum is the index representing the parallax parameter number information.
- The number of parallax parameters is switched per block of N×N pixels obtained by dividing the image.
- First, an image of camera C is input by the image input unit 101 (step S101).
- The reference image input unit 102 inputs, into the reference image memory 103, the decoded images of cameras A and B whose display time is the same as that of the input image of camera C.
- The index of each N×N block obtained by dividing the image is denoted blk, and the total number of blocks in one image is denoted maxBlk.
- The disparity search minimizes the rate-distortion cost, which is obtained from SAD, the sum of absolute differences over the N×N block of the prediction residual from parallax compensation, and R, an estimate of the code amount of the disparity information.
- The cost is calculated by the following equation: cost = SAD + λ·R (1)
- Here, λ is a Lagrange multiplier, and a preset value is used.
- The code amount R is obtained by variable-length coding the disparity information.
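The Lagrangian cost above can be sketched directly. The variable-length code length used for R below is a simple signed Exp-Golomb-style length, an assumption chosen for illustration; the patent does not fix a particular VLC.

```python
# Minimal sketch of the rate-distortion cost: cost = SAD + lambda * R,
# where R counts the bits of the variable-length codes for the candidate
# disparity parameters.

def vlc_bits(value):
    # signed Exp-Golomb code length: map signed to unsigned codeNum n,
    # then the code length is 2*floor(log2(n + 1)) + 1
    n = 2 * abs(value) - (1 if value > 0 else 0)
    return 2 * (n + 1).bit_length() - 1

def rd_cost(sad, disparity_params, lam):
    rate = sum(vlc_bits(p) for p in disparity_params)
    return sad + lam * rate

# Two candidates: lower SAD but four parameters vs slightly higher SAD with one
lam = 4.0
cost_4param = rd_cost(120, [3, -2, 1, 0], lam)  # pNum = 1 style
cost_1param = rd_cost(130, [3], lam)            # pNum = 0 style
```

This is exactly the trade-off the pNum switch exploits: the one-parameter mode can win even with a worse SAD because its disparity rate R is much smaller.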
- minPCost in the flow of Fig. 4 is a variable for storing the minimum value of pCost; when processing block blk, it is initialized to an arbitrary value (maxPCost) larger than the maximum value that pCost can take.
- the parallax is searched in a preset range.
- Note that, for a pixel (x, y) of camera C, the corresponding point in camera A is (x + d, y), where d ≤ 0.
- SAD[d] = Σ_i Σ_j ABS(DEC_A[x + i + d, y + j]/2 + DEC_B[x + i − d, y + j]/2 − IMG_C[x + i, y + j]) (2), where Σ_i is the sum over i from 0 to N−1, Σ_j is the sum over j from 0 to N−1, ABS() takes the absolute value of its argument, DEC_A[x, y] and DEC_B[x, y] are the decoded pixel values of cameras A and B, and IMG_C[x, y] is the pixel value of the encoding target image.
- The rate-distortion cost cost[d] for the parallax d is obtained from Equation (1).
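The one-parameter SAD of Equation (2) can be sketched as follows. It assumes camera C lies midway between A and B, so a disparity d toward A corresponds to −d toward B (consistent with the symmetric camera arrangement of Fig. 3); the toy images and block position are illustrative.

```python
# Sketch of the SAD of Equation (2) for the one-parameter (pNum = 0) search:
# the predictor is the average of the two displaced reference blocks.

def sad_one_param(dec_a, dec_b, img_c, x, y, d, n):
    total = 0
    for j in range(n):
        for i in range(n):
            pred = dec_a[y + j][x + i + d] / 2 + dec_b[y + j][x + i - d] / 2
            total += abs(pred - img_c[y + j][x + i])
    return total

# Toy data: a scene whose true disparity toward camera A is d = -2
W, H, N, D = 16, 8, 4, -2
base = [[(x * x + 3 * y) % 23 for x in range(W)] for y in range(H)]
img_c = base
dec_a = [[base[y][max(0, min(W - 1, x - D))] for x in range(W)] for y in range(H)]
dec_b = [[base[y][max(0, min(W - 1, x + D))] for x in range(W)] for y in range(H)]

# Search d over a preset range (d <= 0 toward camera A) for the block at (6, 2)
best_d = min(range(-3, 1), key=lambda d: sad_one_param(dec_a, dec_b, img_c, 6, 2, d, N))
```

In the actual flow, each candidate d's SAD feeds Equation (1) and the d minimizing cost[d] is kept; here the rate term is omitted so the SAD minimum alone picks the disparity.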
- When pNum is 1, the parallax search is performed in two dimensions without considering the epipolar geometric constraint; the search on the x axis is carried out independently for each of cameras A and B within a preset range.
- If pCost is smaller than minPCost (S107), minPCost is set to pCost and 1 is assigned to bestVER, the variable that stores the optimum pNum (S108).
- The parallax parameter number information encoding unit 105 then variable-length encodes bestVER (S111).
- Next, the disparity information encoding unit 106 encodes the disparity information.
- When bestVER is 0, the parameter d is variable-length encoded; when bestVER is 1, (d_x,A, d_x,B, d_y,A, d_y,B) is variable-length encoded.
- Further, the prediction residual is encoded in the prediction residual encoding unit 107 (S112 to S114).
- FIG. 5 shows a video decoding apparatus according to the first embodiment.
- The video decoding apparatus 200 includes a parallax parameter number information decoding unit 201 that decodes the parallax parameter number information, a disparity information decoding unit 202 that decodes the disparity information according to the parallax parameter number information, a prediction residual decoding unit 203 that decodes the prediction residual, a parallax compensation unit 204, and a reference image memory 205.
- FIG. 6 shows the decoding flow of the present embodiment, that is, the flow for decoding one frame of camera C. The flow is described in detail below.
- First, the parallax parameter number information decoding unit 201 decodes the parallax parameter number information bestVER (S202). The following processing is performed according to the value of bestVER (S203).
- When bestVER is 0, the disparity information decoding unit 202 decodes the disparity information d.
- The parallax compensation unit 204 receives the parallax parameter number information bestVER and the disparity information d, and reads from the reference image memory 205 the N×N blocks of cameras A and B corresponding to the disparity information d. Then, with the pixel position of the N×N block to be decoded expressed as (x, y), a predicted image PRED[x + i, y + j] is generated by the following equation (S204).
- When bestVER is 1, the disparity information decoding unit 202 decodes the disparity information (d_x,A, d_x,B, d_y,A, d_y,B).
- The parallax compensation unit 204 receives the parallax parameter number information bestVER and the disparity information (d_x,A, d_x,B, d_y,A, d_y,B), and the corresponding N×N blocks are input from the reference image memory 205.
- A predicted image PRED[x + i, y + j] is generated by the following equation (S205).
- Next, the prediction residual decoding unit 203, to which the encoded prediction residual is input, decodes the N×N prediction residual block RES[x + i, y + j].
- The prediction residual block is input to the parallax compensation unit 204, and the sum of the prediction residual block and the predicted image is calculated as in the following equation to obtain the decoded image DEC[x + i, y + j] (S206):
- DEC[x + i, y + j] = RES[x + i, y + j] + PRED[x + i, y + j] (6)
- While adding 1 to the index blk (S207), this process is repeated until blk reaches the number of blocks per frame, maxBlk, whereby the decoded image for camera C is obtained.
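The block reconstruction of Equation (6) is a per-pixel addition of residual and prediction. A minimal sketch with toy 2×2 values (illustrative, not from the patent):

```python
# Equation (6): the decoded block is the decoded prediction residual plus
# the disparity-compensated predicted block.

def reconstruct_block(res, pred, n):
    return [[res[j][i] + pred[j][i] for i in range(n)] for j in range(n)]

pred = [[100, 101], [102, 103]]  # predicted image block (assumed values)
res = [[-2, 0], [1, 3]]          # decoded prediction residual block
dec = reconstruct_block(res, pred, 2)
```

A complete decoder would additionally clip the result to the valid sample range (e.g. 0-255 for 8-bit video) before storing it in the reference memory.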
- Next, a second example (hereinafter, Embodiment 2) will be described.
- In Embodiment 2, the decoded images of cameras A, B, D, and E are used as reference images, and the case where the moving image of camera C is encoded is shown.
- Whereas in Embodiment 1 the image of camera C is encoded using only parallax compensation, in Embodiment 2 encoding is performed while switching between motion compensation and parallax compensation in units of blocks.
- The arrows in the figure indicate the reference relationships for parallax/motion compensation.
- For parallax compensation, a predicted image is generated using one of a plurality of pairs of two cameras set among cameras A, B, D, and E (three pairs: A and B, A and D, and B and E).
- The predicted image generation method is the same as in Embodiment 1: the predicted image is created from the average of the pixel values at the corresponding points of the two cameras.
- the viewpoint positions of the five cameras are arranged on the straight line at equal intervals, and the optical axis is perpendicular to the straight line on which the cameras are arranged.
- the relationship shown in Fig. 3 applies to five cameras, and the optical axes of the cameras are parallel.
- FIG. 8 shows a configuration diagram of the video encoding device in the second embodiment.
- the video encoding apparatus 300 includes an image input unit 301 that inputs an original image of the camera C, a reference image input unit 302 that inputs decoded images of the cameras A, B, D, and E, and a reference image memory 303 that stores a reference image. , A disparity compensation unit 304 that performs disparity compensation, a motion compensation unit 305 that performs motion compensation, a reference image setting unit 306, a reference image index encoding unit 307, a motion information encoding unit 308, a disparity information encoding unit 309, and a prediction residual An encoding unit 310 and a local decoding unit 311 are provided.
- FIG. 9 shows the encoding flow in the present embodiment, and FIG. 10 shows the detailed flow of step S304 in that flow.
- This flowchart shows processing performed when one image of the camera C is encoded, and it is assumed that moving image encoding is performed by repeating this processing for each image.
- Here, encoding is performed while adaptively switching among the following processes in units of N×N blocks:
- refIdx 2: parallax compensation using the reference images of cameras A and B
- refIdx 3: parallax compensation using the reference images of cameras A and D
- refIdx 4: parallax compensation using the reference images of cameras A and D
- refIdx 5: parallax compensation using the reference images of cameras B and E
- refIdx 0 and 1: motion compensation using the decoded image of camera C one frame before or two frames before
- The encoding side encodes, for each block, the reference image index corresponding to the method and reference images used, and the decoding side decodes the pixel values of each block using the reference image index.
- The encoding process will be described along the flow of FIG. 9; this processing is assumed to be the encoding of the third and subsequent frames of camera C.
- First, an image of camera C is input by the image input unit 301 (S301). Decoded images of cameras A, B, D, and E whose display time is the same as that of the input image of camera C are input to the reference image memory 303 by the reference image input unit 302. In addition, the decoded images of camera C one frame before and two frames before are decoded by the local decoding unit 311 and input to the reference image memory 303.
- the index of each N X N block obtained by dividing an image is represented by blk, and the total number of blocks for one image is represented by maxBlk.
- Next, the index blk of the N×N block is initialized to 0 (S302),
- and the following processing is repeated for each N×N block while adding 1 to blk (S311) until blk reaches the total number of blocks maxBlk (S312).
- First, the reference image index refIdx is initialized to 0, and minRefCost, the variable that stores the minimum value of the cost value refCost, is initialized to a value (maxRefCost) larger than the maximum value that refCost can take when processing block blk (S303).
- Prediction processing corresponding to each reference image index refIdx is performed on the N×N block indicated by blk (S304). The cost value refCost corresponding to each refIdx is calculated, and the reference image index bestRefIdx that minimizes refCost is obtained for the N×N block.
- Next, the processing corresponding to each reference image index refIdx in step S304 will be described according to the flow in FIG. 10.
- Motion compensation or parallax compensation is performed; in either case, the motion/parallax information is obtained by minimizing the cost given by the following equation: cost = SAD + λ·R (7)
- Here, R is the estimated code amount of the motion or disparity information,
- and SAD is the sum of absolute differences of the prediction residual.
- When refIdx is 2 or more, it corresponds to parallax compensation (S3041), and the parallax compensation unit 304 reads the decoded images of the two cameras corresponding to refIdx as reference images and performs parallax compensation.
- When pNum is 0, the parallax on the epipolar line is searched for the two reference images corresponding to the reference image index refIdx so as to minimize the rate-distortion cost,
- and the minimum cost value is set as refCost (S3043).
- When pNum is 1, the parallax on the image plane is searched for the two reference images corresponding to the reference image index refIdx so as to minimize the rate-distortion cost, and the minimum cost value is set as refCost (S3044).
- Then, refCost is obtained by adding the estimated code amount for encoding the reference image index refIdx to the calculated minimum cost value.
- When refIdx is 0 or 1, it corresponds to motion compensation, and the process proceeds to step S3045.
- The motion compensation unit 305 reads the decoded image of camera C corresponding to the value of refIdx as a reference image and performs motion compensation.
- The motion information is obtained by minimizing the cost calculated by Equation (7).
- A value obtained by adding the estimated code amount for encoding the reference image index refIdx to the minimum cost value is set as refCost (S3045).
- After the above processing, the reference image setting unit 306 obtains the reference image index bestRefIdx with the minimum refCost, thereby determining the reference image index used for encoding.
- bestRefIdx is encoded by the reference image index encoding unit 307 (S309), the motion information or disparity information is encoded by the motion information encoding unit 308 or the disparity information encoding unit 309,
- and the prediction residual is encoded by the prediction residual encoding unit 310 (S310). 1 is added to the index blk (S311), and this is repeated until the total number of blocks maxBlk is reached (S312), whereby one frame of camera C is encoded.
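The per-block decision of steps S303-S305 can be sketched as a loop over reference image indices that keeps the one with the smallest refCost, i.e. prediction cost plus the estimated bits for coding the index. The index-bit model and the stand-in cost numbers below are assumptions for illustration only.

```python
# Hypothetical sketch of the per-block mode decision of Embodiment 2:
# each reference image index (motion compensation for 0-1, parallax
# compensation for 2 and above) is evaluated and the cheapest is kept.

def index_bits(ref_idx):
    # assumed: smaller indices get shorter variable-length codes
    return ref_idx + 1

def choose_reference(costs_by_idx, lam):
    best_idx, best_cost = None, float('inf')  # plays the role of minRefCost
    for ref_idx, pred_cost in costs_by_idx.items():
        ref_cost = pred_cost + lam * index_bits(ref_idx)
        if ref_cost < best_cost:
            best_idx, best_cost = ref_idx, ref_cost
    return best_idx, best_cost

# Stand-in minimum prediction costs per index (0-1: motion, 2-4: parallax)
costs = {0: 260.0, 1: 275.0, 2: 240.0, 3: 252.0, 4: 255.0}
best_idx, best_cost = choose_reference(costs, lam=8.0)
```

Here a parallax-compensated index wins despite the larger index-coding overhead, mirroring how the encoder trades prediction quality against the bits spent on bestRefIdx.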
- FIG. 11 shows a video decoding apparatus according to the second embodiment.
- The video decoding device 400 includes a reference image index decoding unit 401 that decodes a reference image index, a disparity information decoding unit 402 that decodes disparity information, a motion information decoding unit 403 that decodes motion information, a prediction residual decoding unit 404 that decodes a prediction residual,
- a reference image memory 405 for storing reference images, a parallax compensation unit 406 for performing parallax compensation, and a motion compensation unit 407 for performing motion compensation.
- FIG. 12 shows a decoding flow of the present embodiment. This shows the flow for decoding one frame of camera C. The flow is described in detail below.
- The reference image index bestRefIdx is decoded by the reference image index decoding unit 401 (S402).
- The following processing is performed according to the value of the reference image index bestRefIdx (S403, S404).
- When bestRefIdx is 0 or 1, it is a reference image index corresponding to motion compensation. The prediction residual decoding unit 404 decodes the prediction residual, and the motion compensation unit 407 adds the prediction image to the prediction residual (S408), generating a decoded image of the N×N block.
- When bestRefIdx is 2 or more, it is a reference image index corresponding to parallax compensation; the reference images of the two cameras corresponding to the reference image index bestRefIdx are read, and decoding by parallax compensation is performed.
- Since this reference image index bestRefIdx is also associated with the value of the parallax parameter number information pNum, processing according to pNum is performed.
- The parallax compensation process is the same as that in the first embodiment (S404 to S406). Then, the prediction residual decoding unit 404 decodes the prediction residual, and the parallax compensation unit 406 adds the prediction image to the prediction residual (S408), thereby generating a decoded image of the N×N block.
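The decoder's branch on bestRefIdx can be sketched as a simple dispatch; the predictor callables and the flat-list representation of a block are illustrative assumptions, not structures defined in the patent:

```python
def decode_block(best_ref_idx, residual, predict_motion, predict_parallax):
    """Dispatch on bestRefIdx: 0 or 1 selects motion compensation,
    2 or more selects parallax compensation. The decoded block is
    prediction image + prediction residual (S408)."""
    if best_ref_idx in (0, 1):
        prediction = predict_motion(best_ref_idx)
    else:
        prediction = predict_parallax(best_ref_idx)
    return [p + r for p, r in zip(prediction, residual)]
```

Because the branch is driven entirely by the decoded bestRefIdx, no separate mode flag needs to be transmitted, which is the point of folding the motion/parallax choice into the reference image index.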
- The above video encoding process and video decoding process can be realized by a computer and a software program, and it is also possible to provide the program by recording it on a computer-readable storage medium.
- Prediction of the disparity information according to the epipolar geometric constraint is not effective when that constraint holds poorly due to measurement errors in the camera parameters or coding distortion of the reference image.
- Increasing the number of parameters makes the prediction highly flexible, while, when the epipolar geometric constraint holds well, prediction expressing the disparity with a single parameter is efficient. By switching between these in units of frames or blocks according to the characteristics of the decoded image, the coding efficiency can be controlled, and higher coding efficiency than the conventional method can be realized.
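This per-frame or per-block switching reduces to picking whichever disparity representation yields the lower rate-distortion cost, which the pNum signaling then conveys to the decoder. A minimal sketch under that reading, with illustrative names:

```python
def choose_disparity_mode(cost_epipolar_1param, cost_free_2param):
    """Select the disparity representation with the lower RD cost.
    Returns ('epipolar', cost) when the 1-parameter search along the
    epipolar line wins, ('free', cost) when the 2-parameter
    image-plane search wins; the choice maps to pNum."""
    if cost_epipolar_1param <= cost_free_2param:
        return 'epipolar', cost_epipolar_1param
    return 'free', cost_free_2param
```

When the camera parameters are accurate the constrained one-parameter mode tends to win on rate; when they are noisy the free two-parameter mode wins on distortion, which is exactly the trade-off the passage describes.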
Abstract
Description
Claims
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BRPI0620645A BRPI0620645B8 (pt) | 2006-01-05 | 2006-12-29 | Método e aparelho de codificação de vídeo, e método e aparelho de decodificação de vídeo |
CN2006800491986A CN101346998B (zh) | 2006-01-05 | 2006-12-29 | 视频编码方法及解码方法、其装置 |
US12/087,040 US8548064B2 (en) | 2006-01-05 | 2006-12-29 | Video encoding method and decoding method by using selected parallax for parallax compensation, apparatuses therefor, programs therefor, and storage media for storing the programs |
JP2007552992A JP5234586B2 (ja) | 2006-01-05 | 2006-12-29 | 映像符号化方法及び復号方法、それらの装置、及びそれらのプログラム並びにプログラムを記録した記憶媒体 |
CA 2633637 CA2633637C (en) | 2006-01-05 | 2006-12-29 | Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage media for storing the programs |
EP06843675A EP1971154A4 (en) | 2006-01-05 | 2006-12-29 | VIDEO CODING METHOD AND DECODING METHOD, DEVICE THEREFOR, DEVICE THEREFOR AND STORAGE MEDIUM WITH THE PROGRAM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-000394 | 2006-01-05 | ||
JP2006000394 | 2006-01-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007077942A1 true WO2007077942A1 (ja) | 2007-07-12 |
Family
ID=38228291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/326297 WO2007077942A1 (ja) | 2006-01-05 | 2006-12-29 | 映像符号化方法及び復号方法、それらの装置、及びそれらのプログラム並びにプログラムを記録した記憶媒体 |
Country Status (10)
Country | Link |
---|---|
US (1) | US8548064B2 (ja) |
EP (1) | EP1971154A4 (ja) |
JP (1) | JP5234586B2 (ja) |
KR (1) | KR100968920B1 (ja) |
CN (1) | CN101346998B (ja) |
BR (1) | BRPI0620645B8 (ja) |
CA (2) | CA2845591C (ja) |
RU (1) | RU2374786C1 (ja) |
TW (1) | TW200737990A (ja) |
WO (1) | WO2007077942A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008035654A1 (fr) * | 2006-09-20 | 2008-03-27 | Nippon Telegraph And Telephone Corporation | Procédés et dispositifs de codage et de décodage d'image, dispositif et programmes de décodage d'image, et support de stockage desdits programmes |
WO2008035665A1 (fr) * | 2006-09-20 | 2008-03-27 | Nippon Telegraph And Telephone Corporation | procédé DE CODAGE D'IMAGE, PROCÉDÉ DE DÉCODAGE, DISPOSITIF associÉ, DISPOSITIF DE DÉCODAGE D'IMAGE, programme associÉ, et support de stockage contenant le programme |
WO2013136365A1 (ja) * | 2012-03-14 | 2013-09-19 | 株式会社 東芝 | 多視点画像符号化装置及び方法、並びに、多視点画像復号装置及び方法 |
JPWO2013136365A1 (ja) * | 2012-03-14 | 2015-07-30 | 株式会社東芝 | 多視点画像符号化装置及び方法、並びに、多視点画像復号装置及び方法 |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101595899B1 (ko) * | 2008-04-15 | 2016-02-19 | 오렌지 | 선형 형태의 픽셀들의 파티션들로 슬라이스 된 이미지 또는 이미지들의 시퀀스의 코딩 및 디코딩 |
US20120212579A1 (en) * | 2009-10-20 | 2012-08-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and Arrangement for Multi-View Video Compression |
JP4927928B2 (ja) * | 2009-11-30 | 2012-05-09 | パナソニック株式会社 | 多視点動画像復号装置及び多視点動画像復号方法 |
JP4837772B2 (ja) * | 2009-12-15 | 2011-12-14 | パナソニック株式会社 | 多視点動画像復号装置、多視点動画像復号方法、プログラム及び集積回路 |
JP2011199396A (ja) * | 2010-03-17 | 2011-10-06 | Ntt Docomo Inc | 動画像予測符号化装置、動画像予測符号化方法、動画像予測符号化プログラム、動画像予測復号装置、動画像予測復号方法、及び動画像予測復号プログラム |
US9008175B2 (en) * | 2010-10-01 | 2015-04-14 | Qualcomm Incorporated | Intra smoothing filter for video coding |
US8284307B1 (en) * | 2010-11-01 | 2012-10-09 | Marseille Networks, Inc. | Method for processing digital video fields |
US20120163457A1 (en) * | 2010-12-28 | 2012-06-28 | Viktor Wahadaniah | Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus |
JPWO2012131895A1 (ja) * | 2011-03-29 | 2014-07-24 | 株式会社東芝 | 画像符号化装置、方法及びプログラム、画像復号化装置、方法及びプログラム |
JP2012257198A (ja) * | 2011-05-17 | 2012-12-27 | Canon Inc | 立体画像符号化装置、その方法、および立体画像符号化装置を有する撮像装置 |
KR101677003B1 (ko) * | 2011-06-17 | 2016-11-16 | 가부시키가이샤 제이브이씨 켄우드 | 화상 부호화 장치, 화상 부호화 방법 및 화상 부호화 프로그램, 및 화상 복호 장치, 화상 복호 방법 및 화상 복호 프로그램 |
WO2012176405A1 (ja) * | 2011-06-20 | 2012-12-27 | 株式会社Jvcケンウッド | 画像符号化装置、画像符号化方法及び画像符号化プログラム、並びに画像復号装置、画像復号方法及び画像復号プログラム |
MX341889B (es) * | 2011-06-30 | 2016-09-07 | Sony Corp | Dispositivo de procesamiento de imagenes y metodo de procesamiento de imagenes. |
US9635355B2 (en) | 2011-07-28 | 2017-04-25 | Qualcomm Incorporated | Multiview video coding |
US9674525B2 (en) | 2011-07-28 | 2017-06-06 | Qualcomm Incorporated | Multiview video coding |
JP5706264B2 (ja) | 2011-08-01 | 2015-04-22 | 日本電信電話株式会社 | 画像符号化方法,画像復号方法,画像符号化装置,画像復号装置,画像符号化プログラムおよび画像復号プログラム |
US9451232B2 (en) | 2011-09-29 | 2016-09-20 | Dolby Laboratories Licensing Corporation | Representation and coding of multi-view images using tapestry encoding |
JP5485969B2 (ja) * | 2011-11-07 | 2014-05-07 | 株式会社Nttドコモ | 動画像予測符号化装置、動画像予測符号化方法、動画像予測符号化プログラム、動画像予測復号装置、動画像予測復号方法及び動画像予測復号プログラム |
BR122020007529B1 (pt) | 2012-01-20 | 2021-09-21 | Ge Video Compression, Llc | Conceito de codificação que permite o processamento paralelo, desmultiplexador de transporte e fluxo de bites de vídeo |
LT3793200T (lt) | 2012-04-13 | 2023-02-27 | Ge Video Compression, Llc | Vaizdo kodavimas su maža delsa |
JP2013258577A (ja) * | 2012-06-13 | 2013-12-26 | Canon Inc | 撮像装置、撮像方法及びプログラム、画像符号化装置、画像符号化方法及びプログラム |
AU2013283173B2 (en) | 2012-06-29 | 2016-03-24 | Ge Video Compression, Llc | Video data stream concept |
PL4033764T3 (pl) * | 2012-09-26 | 2023-12-27 | Sun Patent Trust | Sposób dekodowania obrazów, sposób kodowania obrazów, urządzenie do dekodowania obrazów, urządzenie do kodowania obrazów oraz urządzenie do kodowania/dekodowania obrazów |
JP2014082541A (ja) * | 2012-10-12 | 2014-05-08 | National Institute Of Information & Communication Technology | 互いに類似した情報を含む複数画像のデータサイズを低減する方法、プログラムおよび装置 |
JP6150277B2 (ja) * | 2013-01-07 | 2017-06-21 | 国立研究開発法人情報通信研究機構 | 立体映像符号化装置、立体映像復号化装置、立体映像符号化方法、立体映像復号化方法、立体映像符号化プログラム及び立体映像復号化プログラム |
CN105052148B (zh) * | 2013-04-12 | 2018-07-10 | 日本电信电话株式会社 | 视频编码装置和方法、视频解码装置和方法、以及其记录介质 |
JP6551743B2 (ja) * | 2013-06-05 | 2019-07-31 | ソニー株式会社 | 画像処理装置および画像処理方法 |
RU2679566C1 (ru) * | 2013-12-10 | 2019-02-11 | Кэнон Кабусики Кайся | Улучшенный палитровый режим в hevc |
EP3926955A1 (en) | 2013-12-10 | 2021-12-22 | Canon Kabushiki Kaisha | Method and apparatus for encoding or decoding blocks of pixel |
EP3171598A1 (en) * | 2015-11-19 | 2017-05-24 | Thomson Licensing | Methods and devices for encoding and decoding a matrix of views obtained from light-field data, corresponding computer program and non-transitory program storage device |
WO2018199792A1 (en) | 2017-04-26 | 2018-11-01 | Huawei Technologies Co., Ltd | Apparatuses and methods for encoding and decoding a panoramic video signal |
EP3639517B1 (en) | 2017-06-14 | 2021-02-24 | Huawei Technologies Co., Ltd. | Intra-prediction for video coding using perspective information |
CN110070564B (zh) * | 2019-05-08 | 2021-05-11 | 广州市百果园信息技术有限公司 | 一种特征点匹配方法、装置、设备及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09261653A (ja) * | 1996-03-18 | 1997-10-03 | Sharp Corp | 多視点画像符号化装置 |
JPH10271511A (ja) * | 1997-01-22 | 1998-10-09 | Matsushita Electric Ind Co Ltd | 画像符号化装置と画像復号化装置 |
JP2004007377A (ja) * | 2002-04-18 | 2004-01-08 | Toshiba Corp | 動画像符号化/復号化方法及び装置 |
JP2006000394A (ja) | 2004-06-17 | 2006-01-05 | Tokai Kiki Kogyo Co Ltd | 畳側面の縫着方法及び畳用縫着装置 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SU1665545A1 (ru) | 1988-07-21 | 1991-07-23 | Винницкий политехнический институт | Телевизионное устройство селекции изображений объектов |
RU2030119C1 (ru) | 1991-04-19 | 1995-02-27 | Смирнов Александр Иванович | Устройство формирования стереотелевизионного изображения подвижного объекта |
US5625408A (en) * | 1993-06-24 | 1997-04-29 | Canon Kabushiki Kaisha | Three-dimensional image recording/reconstructing method and apparatus therefor |
JPH11239351A (ja) | 1998-02-23 | 1999-08-31 | Nippon Telegr & Teleph Corp <Ntt> | 動画像符号化方法、復号方法、符号化器、復号器、動画像符号化プログラムおよび動画像復号プログラムを記録した記録媒体 |
JP3519594B2 (ja) * | 1998-03-03 | 2004-04-19 | Kddi株式会社 | ステレオ動画像用符号化装置 |
US6519358B1 (en) * | 1998-10-07 | 2003-02-11 | Sony Corporation | Parallax calculating apparatus, distance calculating apparatus, methods of the same, and information providing media |
US7085409B2 (en) * | 2000-10-18 | 2006-08-01 | Sarnoff Corporation | Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery |
JP4608136B2 (ja) | 2001-06-22 | 2011-01-05 | オリンパス株式会社 | 動きベクトル及び視差ベクトル検出装置 |
JP4213646B2 (ja) * | 2003-12-26 | 2009-01-21 | 株式会社エヌ・ティ・ティ・ドコモ | 画像符号化装置、画像符号化方法、画像符号化プログラム、画像復号装置、画像復号方法、及び画像復号プログラム。 |
KR100679740B1 (ko) | 2004-06-25 | 2007-02-07 | 학교법인연세대학교 | 시점 선택이 가능한 다시점 동영상 부호화/복호화 방법 |
JP4363295B2 (ja) * | 2004-10-01 | 2009-11-11 | オムロン株式会社 | ステレオ画像による平面推定方法 |
KR100738867B1 (ko) * | 2005-04-13 | 2007-07-12 | 연세대학교 산학협력단 | 다시점 동영상 부호화/복호화 시스템의 부호화 방법 및시점간 보정 변이 추정 방법 |
-
2006
- 2006-12-29 KR KR20087015483A patent/KR100968920B1/ko active IP Right Grant
- 2006-12-29 RU RU2008125846A patent/RU2374786C1/ru active
- 2006-12-29 CA CA2845591A patent/CA2845591C/en active Active
- 2006-12-29 US US12/087,040 patent/US8548064B2/en active Active
- 2006-12-29 CN CN2006800491986A patent/CN101346998B/zh active Active
- 2006-12-29 WO PCT/JP2006/326297 patent/WO2007077942A1/ja active Application Filing
- 2006-12-29 JP JP2007552992A patent/JP5234586B2/ja active Active
- 2006-12-29 EP EP06843675A patent/EP1971154A4/en not_active Withdrawn
- 2006-12-29 BR BRPI0620645A patent/BRPI0620645B8/pt active IP Right Grant
- 2006-12-29 CA CA 2633637 patent/CA2633637C/en active Active
-
2007
- 2007-01-02 TW TW096100017A patent/TW200737990A/zh unknown
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09261653A (ja) * | 1996-03-18 | 1997-10-03 | Sharp Corp | 多視点画像符号化装置 |
JPH10271511A (ja) * | 1997-01-22 | 1998-10-09 | Matsushita Electric Ind Co Ltd | 画像符号化装置と画像復号化装置 |
JP2004007377A (ja) * | 2002-04-18 | 2004-01-08 | Toshiba Corp | 動画像符号化/復号化方法及び装置 |
JP2006000394A (ja) | 2004-06-17 | 2006-01-05 | Tokai Kiki Kogyo Co Ltd | 畳側面の縫着方法及び畳用縫着装置 |
Non-Patent Citations (6)
Title |
---|
"ITU-T Rec.H.264/ISO/IEC 11496-10, "Advanced Video Coding"", FINAL COMMITTEE DRAFT, DOCUMENT JVT-E022, September 2002 (2002-09-01) |
HATA K. ET AL.: "Tashiten Gazo no Ko Noritsu Fugoka", THE TRANASACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J82-D-II, no. 11, November 1999 (1999-11-01), pages 1921 - 1929, XP008031096 * |
HIDEAKI KIMATA; MASAKI KITAHARA: "Preliminary results on multiple view video coding (3DA V", M10976 MPEG REDMOND MEETING, July 2004 (2004-07-01) |
KOICHI HATA; MINORU ETOH; KUNIHIRO CHIHARA: "Coding of Multi-Viewpoint Images", IEICE TRANSACTIONS, vol. J82-D-II, no. 1 1, 1999, pages 1921 - 1929 |
MASAYUKI TANIMOTO; TOSHIAKI FUJII: "Response to Call for Evidence on Multi-View Video Coding", MXXXXX MPEG HONG KONG MEETING, January 2005 (2005-01-01) |
See also references of EP1971154A4 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008035654A1 (fr) * | 2006-09-20 | 2008-03-27 | Nippon Telegraph And Telephone Corporation | Procédés et dispositifs de codage et de décodage d'image, dispositif et programmes de décodage d'image, et support de stockage desdits programmes |
WO2008035665A1 (fr) * | 2006-09-20 | 2008-03-27 | Nippon Telegraph And Telephone Corporation | procédé DE CODAGE D'IMAGE, PROCÉDÉ DE DÉCODAGE, DISPOSITIF associÉ, DISPOSITIF DE DÉCODAGE D'IMAGE, programme associÉ, et support de stockage contenant le programme |
EP2066132A1 (en) * | 2006-09-20 | 2009-06-03 | Nippon Telegraph and Telephone Corporation | Image encoding and decoding methods, their devices, image decoding device, their programs, and storage medium in which programs are recorded |
JP4999854B2 (ja) * | 2006-09-20 | 2012-08-15 | 日本電信電話株式会社 | 画像符号化方法及び復号方法、それらの装置、及びそれらのプログラム並びにプログラムを記録した記憶媒体 |
JP4999853B2 (ja) * | 2006-09-20 | 2012-08-15 | 日本電信電話株式会社 | 画像符号化方法及び復号方法、それらの装置、及びそれらのプログラム並びにプログラムを記録した記憶媒体 |
US8290289B2 (en) | 2006-09-20 | 2012-10-16 | Nippon Telegraph And Telephone Corporation | Image encoding and decoding for multi-viewpoint images |
EP2066132A4 (en) * | 2006-09-20 | 2012-11-07 | Nippon Telegraph & Telephone | IMAGE ENCODING AND DECODING METHODS AND DEVICES, IMAGE DECODING DEVICE AND PROGRAMS, AND STORAGE MEDIUM OF SAID PROGRAMS |
US8385628B2 (en) | 2006-09-20 | 2013-02-26 | Nippon Telegraph And Telephone Corporation | Image encoding and decoding method, apparatuses therefor, programs therefor, and storage media for storing the programs |
WO2013136365A1 (ja) * | 2012-03-14 | 2013-09-19 | 株式会社 東芝 | 多視点画像符号化装置及び方法、並びに、多視点画像復号装置及び方法 |
JPWO2013136365A1 (ja) * | 2012-03-14 | 2015-07-30 | 株式会社東芝 | 多視点画像符号化装置及び方法、並びに、多視点画像復号装置及び方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2007077942A1 (ja) | 2009-06-11 |
KR100968920B1 (ko) | 2010-07-14 |
CA2845591C (en) | 2015-12-08 |
CN101346998A (zh) | 2009-01-14 |
CA2845591A1 (en) | 2007-07-12 |
BRPI0620645B8 (pt) | 2022-06-14 |
TWI335185B (ja) | 2010-12-21 |
EP1971154A1 (en) | 2008-09-17 |
RU2374786C1 (ru) | 2009-11-27 |
CA2633637C (en) | 2014-06-17 |
EP1971154A4 (en) | 2010-10-27 |
CA2633637A1 (en) | 2007-07-12 |
BRPI0620645B1 (pt) | 2020-09-15 |
US8548064B2 (en) | 2013-10-01 |
KR20080076974A (ko) | 2008-08-20 |
TW200737990A (en) | 2007-10-01 |
JP5234586B2 (ja) | 2013-07-10 |
CN101346998B (zh) | 2012-01-11 |
BRPI0620645A2 (pt) | 2011-11-16 |
US20090028248A1 (en) | 2009-01-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007077942A1 (ja) | 映像符号化方法及び復号方法、それらの装置、及びそれらのプログラム並びにプログラムを記録した記憶媒体 | |
JP5234587B2 (ja) | 映像符号化方法及び復号方法、それらの装置、及びそれらのプログラム並びにプログラムを記録した記憶媒体 | |
US9088802B2 (en) | Video encoding method and apparatus, video decoding method and apparatus, programs therefor, and storage media for storing the programs | |
JP7279154B2 (ja) | アフィン動きモデルに基づく動きベクトル予測方法および装置 | |
JP2007329693A (ja) | 画像符号化装置、及び画像符号化方法 | |
CN112703735B (zh) | 视频编/解码方法及相关设备和计算机可读存储介质 | |
CN111107354A (zh) | 一种视频图像预测方法及装置 | |
JP5560009B2 (ja) | 動画像符号化装置 | |
WO2020088482A1 (zh) | 基于仿射预测模式的帧间预测的方法及相关装置 | |
CN112740663B (zh) | 图像预测方法、装置以及相应的编码器和解码器 | |
TW201328362A (zh) | 影像編碼方法、裝置、影像解碼方法、裝置及該等之程式 | |
JP5841395B2 (ja) | イントラ予測装置、符号化装置、及びプログラム | |
Ahmmed et al. | A Two-Step Discrete Cosine Basis Oriented Motion Modeling Approach for Enhanced Motion Compensation | |
Kim et al. | Multilevel Residual Motion Compensation for High Efficiency Video Coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200680049198.6 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 5103/DELNP/2008 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2633637 Country of ref document: CA Ref document number: 2006843675 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007552992 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12087040 Country of ref document: US Ref document number: 2008125846 Country of ref document: RU |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: PI0620645 Country of ref document: BR Kind code of ref document: A2 Effective date: 20080625 |