MX2008008865A - Method and apparatus for providing reduced resolution update mode for multi-view video coding - Google Patents

Method and apparatus for providing reduced resolution update mode for multi-view video coding

Info

Publication number
MX2008008865A
MX2008008865A (application MX/A/2008/008865A)
Authority
MX
Mexico
Prior art keywords
image
compensation
block
color
level
Application number
MX/A/2008/008865A
Other languages
Spanish (es)
Inventor
Su Yeping
Hoon Kim Jae
Gomila Cristina
Original Assignee
Gomila Cristina
Hoon Kim Jae
Su Yeping
Thomson Licensing
Application filed by Gomila Cristina, Hoon Kim Jae, Su Yeping, Thomson Licensing
Publication of MX2008008865A

Abstract

A method and apparatus are provided for illumination and color compensation for multi-view video coding. A video encoder includes an encoder (100) for encoding a picture by enabling color compensation of at least one color component in a prediction of the picture based upon a correlation factor relating to color data between the picture and another picture. The picture and the other picture have different view points, and both correspond to multi-view content for a same or similar scene.

Description

METHOD AND APPARATUS FOR PROVIDING A REDUCED RESOLUTION UPDATE MODE FOR MULTI-VIEW VIDEO CODING

Cross-Reference to Related Applications
This application claims the benefit of U.S. Provisional Application Serial No. 60/757,372, entitled "Illumination and Color Compensation System for Multi-View Video Coding," filed January 9, 2006, which is incorporated herein by reference in its entirety. This application also claims the benefit of U.S. Provisional Application Serial No. 60/757,289, entitled "Multi-View Video Coding System," filed June 9, 2006, which is incorporated herein by reference in its entirety. Furthermore, this application is related to the non-provisional application, Attorney Docket No. PU060004, entitled "Methods and Apparatus for Multi-View Video Coding," which is commonly assigned and incorporated herein by reference in its entirety.

Field of the Invention
The present invention relates to video encoding and decoding and, more particularly, to methods and apparatus for illumination compensation and color compensation for Multi-view Video Coding (MVC). Color compensation can be applied to at least one color component.

Background of the Invention
A Multi-view Video Coding (MVC) sequence is a set of two or more video sequences that capture the same scene from different view points. Multi-view Video Coding is widely recognized as a key technology serving a wide variety of applications, including free-viewpoint and 3D video applications, home entertainment, and surveillance. Such multi-view applications often involve a very large amount of video data.
In practical scenarios, Multi-view Video Coding systems involving a large number of cameras are built using heterogeneous cameras, or cameras that have not been perfectly calibrated. This leads to differences in luminance and chrominance when the same parts of a scene are viewed with different cameras. Moreover, camera distance and positioning also affect illumination, in the sense that the same surface may reflect light differently when perceived from different angles. Under these scenarios, luminance and chrominance differences will decrease the efficiency of cross-view prediction.
Several prior-art methods have been developed to solve the illumination mismatch problem between pairs of images. In a first prior-art method, it is decided, based on cross-entropy values, whether to apply a local brightness variation model. If the cross entropy is larger than a threshold, global and local brightness compensation is applied using a multiplier (scale) and an offset field. However, the local parameters are selected only after the best-matching block has been found, which can be disadvantageous when illumination mismatches are significant. Similarly, a second prior-art method proposes a modified motion estimation approach, but uses a global illumination compensation model. The second prior-art method also proposes a block-by-block on/off control method; however, that method is based on MSE. A third prior-art method addresses the illumination mismatch problem in video sequences. It proposes a scale/offset parameter for a 16x16 macroblock and predictive coding of the parameter, as well as a rate-distortion cost based enabling switch. However, the third prior-art method focuses primarily on temporal video sequences, where the illumination mismatch problem does not occur as consistently as it does in cross-view prediction.

Brief Description of the Invention
These and other drawbacks and disadvantages of the prior art are addressed by the present invention, which is directed to methods and apparatus for illumination compensation and color compensation for Multi-view Video Coding (MVC). Color compensation can be applied to at least one color component. According to one aspect of the present invention, a video encoder is provided.
The video encoder includes an encoder for encoding a picture by enabling color compensation of at least one color component in a prediction of the picture based upon a correlation factor relating to color data between the picture and another picture. The picture and the other picture have different view points, and both correspond to multi-view content for a same or similar scene.

According to another aspect of the present invention, a video encoding method is provided. The method includes encoding a picture by enabling color compensation of at least one color component in a prediction of the picture based upon a correlation factor relating to color data between the picture and another picture. The picture and the other picture have different view points, and both correspond to multi-view content for a same or similar scene.

According to yet another aspect of the present invention, a video decoder is provided. The video decoder includes a decoder for decoding a picture by enabling color compensation of at least one color component in a prediction of the picture based upon a correlation factor relating to color data between the picture and another picture. The picture and the other picture have different view points, and both correspond to multi-view content for a same or similar scene.

According to a further aspect of the present invention, a video decoding method is provided. The method includes decoding a picture by enabling color compensation of at least one color component in a prediction of the picture based upon a correlation factor relating to color data between the picture and another picture. The picture and the other picture have different view points, and both correspond to multi-view content for a same or similar scene.
These and other aspects, features, and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.

Brief Description of the Drawings
The present invention may be better understood in accordance with the following exemplary figures, in which:
Figure 1 is a block diagram of an exemplary Multi-view Video Coding (MVC) encoder to which the principles of the present invention may be applied, in accordance with an embodiment thereof;
Figure 2 is a block diagram of an exemplary Multi-view Video Coding (MVC) decoder to which the principles of the present invention may be applied, in accordance with an embodiment thereof;
Figure 3 is a flow diagram of an exemplary video encoding method with illumination compensation for multi-view video content, in accordance with an embodiment of the present principles;
Figure 4 is a flow diagram of an exemplary video decoding method with illumination compensation for multi-view video content, in accordance with an embodiment of the present principles;
Figure 5 is a block diagram of an exemplary apparatus for reference block generation with illumination compensation for multi-view video content, to which the principles of the present invention may be applied, in accordance with an embodiment thereof.

Detailed Description of the Invention
The present invention is directed to methods and apparatus for illumination compensation and color compensation for Multi-view Video Coding (MVC). Color compensation can be applied to at least one color component. Advantageously, embodiments of the present invention provide improved compression of simultaneous multi-view video data. As used herein, a multi-view sequence is a set of two or more video sequences that capture the same scene from different view points.
It is to be appreciated that the teachings of illumination compensation and color compensation described herein may be used jointly or separately in various embodiments of the present principles, while maintaining the scope of the present principles.

The present description illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents and equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, those skilled in the art will appreciate that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the invention.
Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes that may be substantially represented in computer-readable media and so executed by a computer or processor, whether or not such a computer or processor is explicitly shown. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context. In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function, or b) software in any form, including, therefore, firmware, microcode, or the like, combined with appropriate circuitry for executing that software to perform the function.
The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein. Reference in the specification to "one embodiment" or "an embodiment" of the present principles means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, appearances of the phrase "in one embodiment" or "in an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment.

Turning now to FIG. 1, an exemplary Multi-view Video Coding (MVC) encoder to which the principles of the present invention may be applied is indicated generally by the reference numeral 100. The encoder 100 includes a combiner 105 having an output connected in signal communication with an input of a transformer 110. An output of the transformer 110 is connected in signal communication with an input of a quantizer 115. An output of the quantizer 115 is connected in signal communication with an input of an entropy coder 120 and an input of an inverse quantizer 125. An output of the inverse quantizer 125 is connected in signal communication with an input of an inverse transformer 130. An output of the inverse transformer 130 is connected in signal communication with a first non-inverting input of a combiner 135. An output of the combiner 135 is connected in signal communication with an input of an intra predictor 145 and an input of a deblocking filter 150. An output of the deblocking filter 150 is connected in signal communication with an input of a reference picture store 155 (for view i).
An output of the reference picture store 155 is connected in signal communication with a first input of a motion compensator 175 and a first input of a motion estimator 180. An output of the motion estimator 180 is connected in signal communication with a second input of the motion compensator 175. An output of a reference picture store 160 (for other views) is connected in signal communication with a first input of a disparity/illumination estimator 170 and a first input of a disparity/illumination compensator 165. An output of the disparity/illumination estimator 170 is connected in signal communication with a second input of the disparity/illumination compensator 165. An output of the entropy coder 120 is available as an output of the encoder 100. A non-inverting input of the combiner 105 is available as an input of the encoder 100, and is connected in signal communication with a second input of the disparity/illumination estimator 170 and a second input of the motion estimator 180. An output of a switch 185 is connected in signal communication with a second non-inverting input of the combiner 135 and with an inverting input of the combiner 105. The switch 185 includes a first input connected in signal communication with an output of the motion compensator 175, a second input connected in signal communication with an output of the disparity/illumination compensator 165, and a third input connected in signal communication with an output of the intra predictor 145. A mode decision module 140 has an output connected to the switch 185 for controlling which input is selected by the switch 185.

Turning now to FIG. 2, an exemplary Multi-view Video Coding (MVC) decoder to which the principles of the present invention may be applied is indicated generally by the reference numeral 200. The decoder 200 includes an entropy decoder 205 having an output connected in signal communication with an input of an inverse quantizer 210.
An output of the inverse quantizer is connected in signal communication with an input of an inverse transformer 215. An output of the inverse transformer 215 is connected in signal communication with a first non-inverting input of a combiner 220. An output of the combiner 220 is connected in signal communication with an input of a deblocking filter 225 and an input of an intra predictor 230. An output of the deblocking filter 225 is connected in signal communication with an input of a reference picture store 240 (for view i). An output of the reference picture store 240 is connected in signal communication with a first input of a motion compensator 235. An output of a reference picture store 245 (for other views) is connected in signal communication with a first input of a disparity/illumination compensator 250.

An input of the entropy decoder 205 is available as an input of the decoder 200, for receiving a residue bitstream. Moreover, an input of a mode module 260 is also available as an input of the decoder 200, for receiving control syntax to control which input is selected by the switch 255. Further, a second input of the motion compensator 235 is available as an input of the decoder 200, for receiving motion vectors. Also, a second input of the disparity/illumination compensator 250 is available as an input of the decoder 200, for receiving disparity vectors and illumination compensation syntax. An output of a switch 255 is connected in signal communication with a second non-inverting input of the combiner 220. A first input of the switch 255 is connected in signal communication with an output of the disparity/illumination compensator 250. A second input of the switch 255 is connected in signal communication with an output of the motion compensator 235. A third input of the switch 255 is connected in signal communication with an output of the intra predictor 230.
An output of the mode module 260 is connected in signal communication with the switch 255 for controlling which input is selected by the switch 255. An output of the deblocking filter 225 is available as an output of the decoder 200.

Embodiments of the present invention are directed to the efficient coding of multi-view video sequences. A multi-view video sequence is a set of two or more video sequences that capture the same scene from different view points. In particular, various embodiments in accordance with the present principles are directed to illumination compensation and/or color compensation for encoding and decoding multi-view video sequences. The present principles take into account the fact that, since a multi-view source involves multiple views of the same scene, there exists a high degree of correlation between the multiple view images. Therefore, view redundancy can be exploited in addition to temporal redundancy, by performing view prediction across the different views (cross-view prediction).

For illustrative purposes, the description given herein is directed to a Multi-view Video Coding extension of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation, and extensions thereof (hereinafter the "MPEG-4 AVC standard"). However, it is to be appreciated that the present principles are also applicable to other video coding standards, as would be readily determined by one of ordinary skill in this and related arts.
That is, given the teachings of the present principles provided herein, one of ordinary skill in this and related arts would readily be able to apply these principles to various video coding standards, including the MPEG-4 AVC standard and other video coding standards, while maintaining the scope of the present principles.

In the framework of the MPEG-4 AVC standard, illumination compensation can be considered part of the disparity compensation process, where cross-view prediction (prediction across the different views of a multi-view sequence) includes an offset to address illumination differences across different camera views. Due to the strong correlation between spatially neighboring blocks, the offset can be differentially coded before being quantized and entropy coded. Illumination compensation can be implemented to be switchable on a block basis, since different blocks of the signal suffer from different levels of illumination mismatch. In addition to illumination compensation, a color compensation design is also proposed to address color discrepancies between different camera views.

In an illustrative embodiment of the present principles involving illumination compensation and color compensation, directed to the Multi-view Video Coding (MVC) extension of the MPEG-4 AVC standard, an exemplary framework is set out as follows. At the slice level, a new syntax element (ic_prediction_flag) is introduced to indicate whether illumination compensation is enabled for the current slice. At the macroblock level, two new syntax elements are introduced: one (ic_enable) is introduced to indicate the use of illumination compensation for each block; the other (ic_sym) is introduced to convey the illumination offset parameter. Turning now to FIG.
3, an exemplary video encoding method with illumination compensation for multi-view video content is indicated generally by the reference numeral 300. The method 300 includes a start block 305 that passes control to a loop limit block 310. The loop limit block 310 begins a loop over each macroblock in the current slice, including setting a range for the loop using a variable mb = 0 to MacroBlocksInPic - 1, and passes control to a decision block 315. The decision block 315 determines whether or not illumination compensation (IC) is enabled for the current slice. If so, control passes to a function block 320. Otherwise, control passes to a function block 350. The function block 320 performs motion estimation with illumination compensation, and passes control to a function block 325. The function block 325 forms an IC predictor ic_offset_p, and passes control to a function block 330. The function block 330 performs differential coding of ic_offset, quantizes ic_offset into ic_sym, and passes control to a function block 335. The function block 335 performs an illumination compensation mode decision, deciding ic_prediction_flag, and passes control to a function block 340. The function block 340 writes the syntax, and passes control to a loop limit block 345. The loop limit block 345 ends the loop over each macroblock in the current slice, and passes control to an end block 355. The function block 350 performs motion estimation and makes a motion decision without illumination compensation, and then passes control to the function block 340.

Turning now to FIG. 4, an exemplary video decoding method with illumination compensation for multi-view video content is indicated generally by the reference numeral 400. The method 400 includes a start block 405 that passes control to a loop limit block 410.
The loop limit block 410 begins a loop over each macroblock in the current slice, including setting a range for the loop using a variable mb = 0 to MacroBlocksInPic - 1, and passes control to a function block 415. The function block 415 reads the syntax, and passes control to a decision block 420. The decision block 420 determines whether or not illumination compensation is enabled for the current slice. If so, control passes to a decision block 425. Otherwise, control passes to a function block 450. The decision block 425 determines whether or not ic_prediction_flag is equal to 1. If so, control passes to a function block 430. Otherwise, control passes to the function block 450. The function block 430 forms an IC predictor ic_offset_p, and passes control to a function block 435. The function block 435 inverse quantizes ic_sym, differentially decodes ic_offset, and passes control to a function block 440. The function block 440 performs motion compensation with illumination compensation, and passes control to a loop limit block 445. The loop limit block 445 ends the loop over each macroblock in the current slice, and passes control to an end block 455. The function block 450 performs motion compensation without illumination compensation, and passes control to the loop limit block 445.

A description will now be given of the use of illumination compensation as part of the cross-view prediction process, in accordance with an exemplary embodiment of the present principles. Illumination compensation is performed in the context of cross-view prediction for Multi-view Video Coding. In this scenario, cross-view prediction typically involves the computation of a disparity field between pictures from different views. The disparity field is to cross-view prediction what the motion field is to temporal prediction. When applied to a coding scheme, cross-view prediction is an effective tool for exploiting view redundancy.
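The decoder-side flow of method 400 can be sketched in a few lines. This is a minimal illustration, not the codec itself: the dict-based macroblock representation, the helper name decode_macroblock, the fixed quantization step, and the flat list of prediction samples are all assumptions made for the sketch.

```python
# Sketch of the per-macroblock decoding path of method 400 (Fig. 4).
# The macroblock is modeled as a dict with ic_prediction_flag, ic_sym,
# and the disparity-compensated prediction samples -- an assumed layout.

STEP = 2  # quantization step size mu, assumed fixed for this sketch

def decode_macroblock(mb, ic_enabled_for_slice, left_ic_offset):
    """Reconstruct a macroblock's IC offset and apply it (blocks 425-440)."""
    if ic_enabled_for_slice and mb["ic_prediction_flag"] == 1:
        # block 430: form the IC predictor from the left neighbor (default 0)
        ic_offset_p = left_ic_offset if left_ic_offset is not None else 0
        # block 435: inverse-quantize ic_sym and differentially decode ic_offset
        ic_offset = ic_offset_p + mb["ic_sym"] * STEP
        # block 440: compensation with illumination compensation
        return [s + ic_offset for s in mb["pred_samples"]], ic_offset
    # block 450: compensation without illumination compensation
    return list(mb["pred_samples"]), None
```

With ic_sym = 3, a left-neighbor offset of 4, and STEP = 2, the reconstructed ic_offset is 4 + 3*2 = 10, which is added to every prediction sample.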
For simplicity, it is assumed hereinafter that cross-view prediction, and likewise disparity estimation, are performed on a block basis. However, it is to be appreciated that, given the teachings of the present principles provided herein, the extension of those teachings to other sample groupings could be readily determined and implemented by one of ordinary skill in this and related arts, while maintaining the scope of the present principles. It is to be further appreciated that, while some embodiments of the present principles are described herein as applied to a Multi-view Video Coding extension of the MPEG-4 AVC standard, for which motion compensation and disparity compensation are enabled, given the teachings of the present principles provided herein, implementations of the present principles may also be directed to any other multi-view video coding scheme for which disparity compensation is enabled, as can be readily determined and implemented by those of ordinary skill in this and related arts, while maintaining the scope of the present principles. Moreover, it is to be appreciated that, while some embodiments of the present principles directed to illumination compensation are described herein with respect to Multi-view Video Coding, given the teachings of the present principles provided herein, one of ordinary skill in this and related arts will readily contemplate other video-related scenarios to which the present principles may be applied, while maintaining the scope of those principles. For example, the present principles may be applied, but are not limited, to image registration and camera calibration.
A description will now be given of the transmission of illumination compensation syntax elements, in accordance with an exemplary embodiment of the present principles. In exemplary embodiments of the present principles applied to a Multi-view Video Coding extension of the MPEG-4 AVC standard, a new syntax element called ic_prediction_flag is introduced in the slice header to indicate whether illumination compensation (IC) is used for that slice. If cross-view prediction is disabled for the whole slice, ic_prediction_flag will be equal to zero and there will be no further IC-related syntax in the slice.

The degree of illumination mismatch varies from one part of the view image to another. Therefore, sending IC parameters for all blocks using disparity compensation would not be efficient. To serve that purpose, a new block-based syntax flag called ic_enable is introduced at the macroblock/sub-macroblock level to indicate whether IC is used for one specific block.

The MPEG-4 AVC standard supports variable-block-size motion compensation, with block sizes ranging from 16x16 down to 4x4. To reduce the overhead of sending too many ic_enable and ic_sym flags, IC switching need not be applied to all block sizes. In a particular embodiment, IC switching applies only to blocks with a size larger than or equal to 8x8. The context design of context-adaptive binary arithmetic coding (CABAC) for ic_enable is as follows: (1) for block sizes in {16x16, 16x8, 8x16}, three contexts are used, depending on the ic_enable flags of the upper and left macroblocks; and (2) for the 8x8 block size, a separate CABAC context is assigned without referring to neighboring blocks. For illustrative purposes, the IC-related syntax tables are shown in Table 1 through Table 3. Table 1 illustrates the slice header syntax for Multi-view Video Coding (MVC).
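The two-part context rule for ic_enable can be sketched as follows. The patent only states that three neighbor-dependent contexts plus one dedicated 8x8 context are used; the specific 0/1/2 index mapping from the left and upper flags below is an assumed convention for illustration.

```python
def ic_enable_context(block_size, left_ic_enable, up_ic_enable):
    """Select a CABAC context index for the ic_enable flag.

    block_size is a (width, height) tuple; left_ic_enable / up_ic_enable are
    the ic_enable flags of the left and upper neighboring macroblocks
    (0 or 1, with 0 assumed when the neighbor is unavailable).
    """
    if block_size in {(16, 16), (16, 8), (8, 16)}:
        # Three contexts: 0, 1, or 2 depending on how many neighbors use IC
        # (assumed mapping -- the patent only fixes the count of contexts).
        return int(bool(left_ic_enable)) + int(bool(up_ic_enable))
    # 8x8 blocks get one separate context, independent of neighbors.
    return 3
```

The neighbor-dependent contexts let the arithmetic coder exploit the spatial correlation of IC usage, while 8x8 blocks, which are more numerous and less correlated, share a single context.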
Table 2 illustrates a macroblock layer syntax. Table 3 illustrates a sub-macroblock prediction syntax.

[Table 1: slice header syntax for MVC]
[Table 2: macroblock layer syntax]
[Table 3: sub-macroblock prediction syntax]

A description will now be given of disparity estimation with illumination compensation, in accordance with an exemplary embodiment of the present principles.
In the particular scenario of a coding application, illumination compensation (IC) is considered part of the disparity compensation process. More specifically, when IC is enabled in the disparity compensation of a block, the illumination-compensated reference block, Br, is calculated as follows:

Br(x, y) = R(x + Δx, y + Δy) + ic_offset

where R(x, y) is the cross-view reference picture, and (Δx, Δy) is the disparity vector (DV). As shown in Figure 5, DV/ic_offset/ic_enable are used together in the disparity compensation process.

Turning now to FIG. 5, an exemplary apparatus for reference block generation with illumination compensation for multi-view video content is indicated generally by the reference numeral 500. The apparatus 500 includes an illumination compensation quantizer 505 having an output connected in signal communication with a first non-inverting input of a combiner 515. An output of the combiner 515 is connected in signal communication with a second non-inverting input of the combiner 515 and with a first non-inverting input of a combiner 520. An output of a reference picture store 510 (for other views) is connected in signal communication with a second non-inverting input of the combiner 520 and with a first input of a switch 525. A second input of the switch 525 is connected in signal communication with an output of the combiner 520. An input of the illumination compensation quantizer 505 is available as an input of the apparatus 500, for receiving the ic_sym syntax. Moreover, an input of the reference picture store 510 is available as an input of the apparatus 500, for receiving disparity vectors. Further, the apparatus 500 includes an input for receiving the ic_enable syntax, for controlling which input is selected by the switch 525. An output of the switch 525 is available as an output of the apparatus 500. At the block level, the illumination compensation parameter, ic_offset, is obtained through differential coding and uniform quantization.
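The formula Br(x, y) = R(x + Δx, y + Δy) + ic_offset, together with the ic_enable switch of Figure 5, can be illustrated directly. This is a sketch under simplifying assumptions: the reference picture is a plain 2-D list, coordinates are assumed in range, and the function name is hypothetical.

```python
def compensated_block(ref, dv, ic_offset, ic_enable, x0, y0, size):
    """Form the reference block Br(x, y) = R(x + dx, y + dy) + ic_offset.

    ref: cross-view reference picture R as rows of samples (ref[y][x]).
    dv: disparity vector (dx, dy); (x0, y0): top-left of the current block.
    When ic_enable is 0, the plain disparity-compensated block is returned,
    mirroring switch 525 in Fig. 5.
    """
    dx, dy = dv
    offset = ic_offset if ic_enable else 0
    return [[ref[y0 + dy + j][x0 + dx + i] + offset
             for i in range(size)]
            for j in range(size)]
```

The decoder applies exactly the same addition, so encoder and decoder stay in sync as long as ic_offset is transmitted (via ic_sym) for every block with ic_enable set.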
Next, a description will be given of the differential coding of ic_offset according to an exemplary embodiment of the principles of the present invention. There is a strong correlation between the ic_offset values of neighboring blocks. To exploit this property, ic_offset is differentiated before quantization as follows: ic_offset = ic_offset_p + ic_offset_d, where ic_offset_d is the differential illumination compensation offset, and the illumination compensation predictor ic_offset_p is formed using the ic_offset values of neighboring blocks. ic_offset_p is computed according to the following rules. In a first rule, the default value of ic_offset_p is 0. The default value is used when there is no neighboring block with IC available. In a second rule, ic_offset_p is set depending on the block size of the macroblock to the left, as follows: (1) if the block size is 16x16, then the ic_offset of the left block is used; (2) if the block size is 16x8 or 8x16, then the ic_offset of the second block is used; and (3) if the block size is 8x8 or smaller, then the first available ic_offset among the 8x8 block indices 3, 1, 2, 0 (in that order) is used. In a third rule, if there is no neighboring macroblock to the left, then the ic_offset of the block above is used instead. Next, a description of the quantization of ic_sym according to an exemplary embodiment of the principles of the present invention is given. Uniform quantization is applied to the differentiated ic_offset: ic_offset = ic_offset_p + ic_sym * μ. If a fixed-step-size quantization method is used, there is no need for extra syntax to signal μ. In the case where fixed-step-size quantization is not used, the transmission of that syntax should be considered. Next, a description of the entropy coding of ic_sym according to an exemplary embodiment of the principles of the present invention is given. For ic_sym, unary binarization is used in CABAC.
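The differential coding, uniform quantization, and unary binarization steps just described can be sketched end to end as follows. The step size μ = 2, the rounding rule, and the reduction of the predictor rules to a single neighbor are illustrative assumptions, not values mandated by the text.

```python
def predict_ic_offset(neighbor_ic_offset=None):
    # First rule: the predictor defaults to 0 when no neighboring block
    # with IC is available (the full left/upper-block selection rules
    # are omitted from this sketch).
    return 0 if neighbor_ic_offset is None else neighbor_ic_offset

def encode_ic_offset(ic_offset, ic_offset_p, mu=2):
    # Quantize the differential offset to the transmitted symbol ic_sym.
    return round((ic_offset - ic_offset_p) / mu)

def decode_ic_offset(ic_sym, ic_offset_p, mu=2):
    # ic_offset = ic_offset_p + ic_sym * mu
    return ic_offset_p + ic_sym * mu

def unary_binarize(magnitude):
    # Unary binarization of |ic_sym|: 1 -> "10", 3 -> "1110".
    return "1" * magnitude + "0"

p = predict_ic_offset(4)         # predictor taken from a neighboring block
sym = encode_ic_offset(10, p)    # differential of 6, step 2 -> ic_sym = 3
rec = decode_ic_offset(sym, p)   # reconstructs ic_offset = 10
bits = unary_binarize(abs(sym))  # "1110"
```

Note that with a fixed step size the decoder needs only ic_sym and the predictor to reconstruct ic_offset, which is why no extra syntax for μ is required in that case.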
For example, if |ic_sym| is 1, then it is binarized as "10", and if |ic_sym| is 3, then it is binarized as "1110". Since ic_sym is coded differentially, a value of ic_sym close to 0 is most likely to occur. To exploit this property, four different contexts are assigned to the binarized symbols. After binarization, a sign bit may be added at the end, which is encoded without context. A description of color compensation according to an exemplary embodiment of the principles of the present invention is now given. Poor camera calibration can cause color mismatches in addition to illumination mismatches. Some embodiments of the principles of the present invention address this problem by extending the illumination compensation (IC) method described above to color compensation (CC). For simplicity, it is presumed that color compensation is applied to the UV color components of the YUV color space. Nevertheless, it should be appreciated that, given the teachings of the principles of the present invention provided herein, one of ordinary skill in this and related arts will readily contemplate and implement the principles of the present invention with respect to other color spaces, while maintaining the scope of the principles of the present invention. Next, two exemplary methods of color compensation according to the principles of the present invention will be described. The first is a local color compensation method and the second is a global color compensation method. Of course, given the teachings of the principles of the present invention provided herein, variations and extensions of the two methods described herein are readily contemplated by one skilled in this and related arts, while maintaining the scope of the present invention. In the local color compensation method, similar to illumination compensation (IC), a local color compensation parameter, cc_offset, is introduced.
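To illustrate how such a local cc_offset might be applied, the sketch below adds it to a disparity-compensated chroma reference block. The 8x8 block size matches the text that follows, but the list-of-lists representation and the 8-bit clipping are assumptions of this example, not the patent's implementation.

```python
def cc_compensated_block(chroma_ref_block, cc_offset):
    # Add the local color-compensation offset to every sample of the
    # chroma reference block, clipping to the 8-bit range [0, 255].
    return [[min(255, max(0, s + cc_offset)) for s in row]
            for row in chroma_ref_block]

# An 8x8 disparity-compensated U-channel reference block; the V channel
# would use its own cc_offset while sharing the same disparity vector.
u_block = [[120] * 8 for _ in range(8)]
u_comp = cc_compensated_block(u_block, cc_offset=-7)
```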
Two different cc_offset parameters, one for the U channel and one for the V channel, share the same ic_enable flag and the same disparity vector. For the YUV420 chroma sampling format, the width and height of a chroma block are half those of the luma block. To avoid spending excessive bits on the color compensation syntax, the block size for color compensation is fixed at 8x8. The cc_enable flag can either be signaled independently of ic_enable or be derived from ic_enable. With respect to the global color compensation method, the chroma channels are generally much smoother than the luma channel. Thus, a more economical method for color compensation uses a global compensation parameter: cc_offset_global. The global cc_offset can be calculated at the slice or frame level and applied to every block in the same slice or frame. A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is a video encoder that includes an encoder for encoding a picture by enabling color compensation of at least one color component in a prediction of the picture based upon a correlation factor relating to the color data between the picture and another picture. The picture and the other picture have different viewpoints and both correspond to multi-view content for the same or a similar scene.
Another advantage/feature is the video encoder as described above, wherein the encoder encodes the picture to provide a resulting bitstream compliant with at least one of the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector recommendation H.264 and extensions thereof. Yet another advantage/feature is the video encoder as described above, wherein the encoder uses a high-level syntax to enable color compensation. Further, another advantage/feature is the video encoder that uses a high-level syntax as described above, wherein the high-level syntax includes a slice-level syntax. Also, another advantage/feature is the video encoder as described above, wherein the encoder uses a block-level syntax to indicate whether color compensation is used in the prediction of the picture. Moreover, another advantage/feature is the video encoder that uses a block-level syntax as described above, wherein the encoder uses context-adaptive binary arithmetic coding contexts to code the block-level syntax, the context-adaptive binary arithmetic coding contexts being selected based on block size. Further, another advantage/feature is the video encoder as described above, wherein the encoder uses a block-level syntax to signal color compensation information. Additionally, another advantage/feature is the video encoder that uses the block-level syntax as described above, wherein the encoder uses context-adaptive binary arithmetic coding contexts to code the block-level syntax, the context-adaptive binary arithmetic coding contexts being selected based on block size. Also, another advantage/feature is the video encoder that uses the block-level syntax as described above, wherein the color compensation information includes a color compensation parameter.
Further, another advantage/feature is the video encoder as described above, wherein the encoder uses a slice-level syntax to signal an amount of color compensation applied to the chrominance channels of an entire slice corresponding to the picture. Further, another advantage/feature is the video encoder as described above, wherein the encoder encodes the picture by also enabling illumination compensation in the prediction of the picture based upon a correlation factor relating to illumination data between the picture and the other picture. Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation as described above, wherein the encoder uses a slice-level syntax to enable illumination compensation. Another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation as described above, wherein the encoder uses a block-level syntax to indicate whether illumination compensation is used in the prediction of the picture. Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation as described above, wherein different block-level syntaxes are used to indicate illumination compensation and color compensation, respectively, and the different block-level syntaxes are signaled independently. Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation as described above, wherein different block-level syntaxes are used to indicate illumination compensation and color compensation, respectively, and one of the different block-level syntaxes is derived from another of the different block-level syntaxes. Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation as described above, wherein the encoder uses a block-level syntax to signal illumination compensation information.
Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation and using a block-level syntax as described above, wherein the encoder uses context-adaptive binary arithmetic coding contexts to code the block-level syntax, the context-adaptive binary arithmetic coding contexts being selected based on block size. Also, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation and using a block-level syntax as described above, wherein the illumination compensation information includes an illumination compensation parameter. Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation as described above, wherein the encoder uses differential coding of at least one of the illumination compensation parameters and the color compensation parameters at a block level. Further, another advantage/feature is the video encoder that encodes the picture by also enabling illumination compensation and using differential coding as described above, wherein the encoder applies uniform quantization to at least one of the differentially coded illumination compensation parameters and the differentially coded color compensation parameters. These and other features and advantages of the present invention may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present invention may be implemented in various forms of hardware, software, firmware, special-purpose processors, or combinations thereof. Most preferably, the teachings of the present invention are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit.
The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform, such as an additional data storage unit and a printing unit. It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present invention. Although illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.

Claims (82)

CLAIMS 1. A video encoder comprising: an encoder (100) for encoding an image enabling the color compensation of at least one color component in an anticipation of the image based on a correlation factor that is related to color data between the image and another image, the image and the other image having different points of view and corresponding both to Multiple View content for the same scene or a similar scene. 2. The video encoder as described in claim 1, characterized in that the encoder (100) encodes the image to provide a resulting bit stream that complies with at least one of the Advanced Video Coding Standard Part 10 of the Group-4 of Film Film Experts of the International Organization for Standardization / International Electrotechnical Commission / Recommendation H.264 of the International Telecommunication Union, Telecommunication Sector, and extensions thereof. 3. The video encoder as described in claim 1, characterized in that the video encoder (100) uses a high level syntax to allow color compensation. 4. The video encoder as described in claim 3, characterized in that the high-level syntax comprises a cut-level syntax. 5. The video encoder as described in claim 1, characterized in that the encoder (100) uses a block-level syntax to indicate whether the color compensation is used in anticipation of the image. 6. The video encoder as described in claim 5, characterized in that the encoder (100) uses a binary context adaptation arithmetic that encodes contexts to encode the block level syntax, encoding the context adaptation binary arithmetic, contexts selected based on the block size. 7. The video encoder as described in claim 1, characterized in that the encoder (100) uses a block level syntax to signal the color compensation information.
The video encoder as described in claim 7, characterized in that the encoder (100) uses context-adaptive binary arithmetic coding contexts for encoding the block-level syntax, the contexts being selected based on the block size. 9. The video encoder as described in claim 7, characterized in that the color compensation information includes a color compensation parameter. 10. The video encoder as described in claim 1, characterized in that the encoder (100) uses a cut-level syntax to indicate a quantity of color compensation applied in chrominance channels of a complete cut corresponding to the image. 11. The video encoder as described in claim 1, characterized in that the encoder (100) encodes the image also enabling illumination compensation in the anticipation of the image based on a correlation factor that is related to illumination data between the image and the other image. 12. The video encoder as described in claim 11, characterized in that the encoder (100) uses a cut level syntax to allow illumination compensation. 13. The video encoder as described in claim 11, characterized in that the encoder (100) uses a block-level syntax to indicate whether the illumination compensation is used in anticipation of the image. 14. The video encoder as described in claim 11, characterized in that different block-level syntaxes are used to indicate illumination compensation and color compensation, respectively, and the different block-level syntaxes are signaled independently. 15. The video encoder as described in claim 11, characterized in that different block-level syntaxes are used to indicate lighting compensation and color compensation, respectively, and one of the different block-level syntaxes is derived from another of the different block-level syntaxes. 16. The video encoder as described in claim 11, characterized in that the encoder (100) uses a block level syntax to signal the lighting compensation information.
The video encoder as described in claim 16, characterized in that the encoder (100) uses context-adaptive binary arithmetic coding contexts for encoding the block level syntax, the contexts being selected based on the block size. 18. The video encoder as described in claim 16, characterized in that the lighting compensation information includes a lighting compensation parameter. 19. The video encoder as described in claim 11, characterized in that the encoder (100) uses differential coding of at least the lighting compensation parameters and the color compensation parameters at a block level. 20. The video encoder as described in claim 19, characterized in that the encoder (100) applies uniform quantization to at least one of the differentially encoded lighting compensation parameters and the differentially encoded color compensation parameters. 21. A method for encoding video, characterized in that it comprises: encoding an image enabling color compensation of at least one color component in an anticipation of the image based on a correlation factor that is related to color data between the image and another image, the image and the other image having different points of view and corresponding both to Multiple View content for the same scene or a similar scene. 22. The method as described in claim 21, characterized in that the coding step encodes the image to provide a resulting bit stream that complies with at least one of the Advanced Video Coding Standard Part 10 of the Expert Group-4 of Film Films of the International Organization for Standardization / International Electrotechnical Commission / recommendation H.264 of the International Telecommunication Union, Telecommunication Sector, and extensions thereof (300). 23. The method as described in claim 21, characterized in that the coding step uses high level syntax to enable color compensation (320).
The method as described in claim 23, characterized in that the high level syntax comprises cut level syntax (320). 25. The method as described in claim 22, characterized in that the coding step uses block-level syntax to indicate whether the color compensation is used in the anticipation of the image (340) 26. The method as set forth in FIG. described in claim 25, characterized in that the coding step uses context-adaptive binary arithmetic that encodes contexts for encoding the block-level syntax, encoding the context adaptation binary arithmetic, selected contexts based on the block size ( 3. 4. 5). 27. The method as described in the claim 22, characterized in that the coding step uses block level syntax to signal the color compensation information (335). 28. The method as described in claim 27, characterized in that the coding step uses binary arithmetic of context adaptation that encodes contexts to encode the block-level syntax, coding the binary arithmetic of context adaptation, selected contexts based on the block size (345). 29. The method as described in claim 27, characterized in that the compensation information includes a color compensation parameter (335). 30. The method as described in claim 22, characterized in that the coding step uses cut-level syntax to signal a quantity of color compensation applied in chrominance channels of an entire cut corresponding to the image (345). . 31. The method as described in claim 22, characterized in that the coding step encodes the image also enabling illumination compensation in the anticipation of the image based on a correlation factor that is related to illumination data between the image and the other image (300). 32. The method as described in the claim 31, characterized in that the coding step uses a level of cut syntax to allow lighting compensation (320). 33. 
The method as described in claim 31, characterized in that the coding step uses a block level syntax to indicate whether the lighting compensation is used in the anticipation of the image (340). 34. The method as described in claim 31, characterized in that different block-level syntaxes are used to indicate lighting compensation and color compensation, respectively, and the different block-level syntaxes are signaled independently (335). 35. The method as described in claim 31, characterized in that the different block level syntaxes are used to indicate the lighting compensation and the color compensation, respectively, and one of the different block level syntaxes is derived from another of the different block-level syntaxes (335). 36. The method as described in claim 31, characterized in that the coding step uses a block level syntax for signaling the lighting compensation information (335). 37. The method as described in claim 36, characterized in that the coding step uses context-adaptive binary arithmetic coding contexts for encoding the block level syntax, the contexts being selected based on the block size. 38. The method as described in claim 36, characterized in that the lighting compensation information includes a lighting compensation parameter (335). 39. The method as described in claim 31, characterized in that the coding step uses differential coding of at least the lighting compensation parameters and the color compensation parameters at a block level (335). 40. The method as described in claim 39, characterized in that the coding step applies uniform quantization to at least one of the differentially encoded lighting compensation parameters and the differentially encoded color compensation parameters. 41.
A video decoder, characterized in that it comprises: a decoder (200) for decoding an image enabling color compensation of at least one color component in an anticipation of the image based on a correlation factor that is related to data of color between the image and another image, the image and the other image having different points of view and corresponding both to Multiple View content for the same scene or a similar scene. 42. The video decoder as described in claim 41, characterized in that the decoder (200) decodes the image to provide a resulting bitstream that complies with at least one of the Advanced Video Coding Standard Part 10 of Group-4 of Film Film Experts of the International Organization for Standardization / International Commission Electrotechnical / Recommendation H.264 of the International Telecommunication Union, Telecommunication Sector, and extensions thereof. 43. The video decoder as described in claim 41, characterized in that the video decoder (200) reads a high level syntax to allow color compensation. 44. The video decoder as described in claim 43, characterized in that the high-level syntax comprises a cut-level syntax. 45. The video decoder as described in claim 42, characterized in that the decoder (200) reads a block-level syntax to indicate whether the color compensation is used in anticipation of the image. 46. The video decoder as described in claim 44, characterized in that the decoder (200) uses a binary arithmetic of context adaptation that decodes contexts to decode the block-level syntax, encoding the binary arithmetic of context adaptation, selected contexts based on the block size. 47. The video decoder as described in claim 42, characterized in that the decoder (200) reads a block-level syntax for signaling the color compensation information. 48. 
The video decoder as described in claim 47, characterized in that the decoder (200) uses context-adaptive binary arithmetic that decodes contexts to decode the block level syntax, encoding the context adaptation binary arithmetic, Contexts selected based on the block size. 49. The video decoder as described in claim 47, characterized in that the color compensation information includes a color compensation parameter. 50. The video decoder as described in claim 42, characterized in that the decoder (200) reads a cut-level syntax to indicate a quantity of color compensation applied in chrominance channels of a complete cut corresponding to the image. 51. The video decoder as described in the claim 42, characterized in that the decoder (200) decodes the image also enabling illumination compensation in the anticipation of the image based on a correlation factor that is related to illumination data between the image and the other image. 52. The video decoder as described in claim 51, characterized in that the decoder (200) reads a cut syntax level to allow illumination compensation. 53. The video decoder as described in claim 51, characterized in that the decoder (200) reads a block-level syntax to indicate whether the illumination compensation is used in anticipation of the image. 54. The video decoder as described in claim 51, characterized in that different block level syntaxes are read to indicate lighting compensation and color compensation, respectively, and the different level syntaxes are signaled independently of block. 55. The video decoder as described in claim 51, characterized in that the different block-level syntaxes are read to indicate lighting compensation and color compensation, respectively, and one of the different block-level syntaxes. they are derived from another of the different block-level syntaxes. 56. 
The video decoder as described in claim 51, characterized in that the decoder (200) reads a block-level syntax for signaling the lighting compensation information. 57. The video decoder as described in claim 56, characterized in that the decoder (200) uses context-adaptive binary arithmetic coding contexts to decode the block-level syntax, the contexts being selected based on the block size. 58. The video decoder as described in claim 56, characterized in that the lighting compensation information includes a lighting compensation parameter. 59. The video decoder as described in claim 51, characterized in that the decoder (200) uses differential decoding of at least the lighting compensation parameters and the color compensation parameters at a block level. 60. The video decoder as described in claim 59, characterized in that the decoder (200) applies uniform quantization to at least one of the differentially encoded lighting compensation parameters and the differentially encoded color compensation parameters. 61. A video decoding method, characterized in that it comprises: decoding an image enabling color compensation of at least one color component in an anticipation of the image based on a correlation factor that is related to the color data between the image and the other image, the image and the other image having different points of view and corresponding both to Multiple Views content for the same scene or a similar scene (400). 62. The method as described in claim 61, characterized in that the decoding step decodes the image to provide a resulting bitstream that complies with at least one of the Advanced Video Coding Standard Part 10 of the Expert Group-4 of Film Films of the International Organization for Standardization / International Electrotechnical Commission / Recommendation H.264 of the International Telecommunication Union, Telecommunication Sector, and extensions thereof (400). 63.
The method as described in claim 61, characterized in that the decoding step reads the high-level syntax to enable color compensation (415). 64. The method as described in the claim 63, characterized in that the high level syntax comprises cut level syntax (415). 65. The method as described in claim 61, characterized in that the decoding step reads block-level syntax to indicate whether the color compensation is used in anticipation of the image (415) 66. The method as set forth in FIG. described in claim 65, characterized in that the decoding step uses binary context adaptation arithmetic which decodes contexts to decode the block level syntax, encoding the context adaptation binary arithmetic, selected contexts based on the block size ( 415). 67. The method as described in claim 61, characterized in that the decoding step reads block level syntax to signal the color compensation information (415). 68. The method as described in claim 67, characterized in that the decoding step uses context-adaptive binary arithmetic that encodes contexts to decode the block-level syntax, encoding the binary arithmetic of context adaptation, selected contexts with base, in block size (415). 69. The method as described in the claim 67, characterized in that the compensation information includes a color compensation parameter (435). 70. The method as described in claim 61, characterized in that the decoding step reads cut-level syntax to indicate a quantity of color compensation applied in chrominance channels of a whole cut corresponding to the image (415) . 71. The method as described in claim 61, characterized in that the decoding step decodes the image also enabling illumination compensation in the anticipation of the image based on a correlation factor that is related to illumination data between the image and the other image (440). 72. The method as described in claim 71, characterized in that the decoding step reads a cut syntax level to allow illumination compensation (415). 
73. The method as described in claim 71, characterized in that the decoding step reads a block-level syntax to indicate whether the lighting compensation is used in anticipation of the image (415). 74. The method as described in claim 71, characterized in that different block-level syntaxes are read to indicate lighting compensation and color compensation, respectively, and the different block-level syntaxes are signaled independently (415). 75. The method as described in claim 71, characterized in that the different block-level syntaxes are read to indicate lighting compensation and color compensation, respectively, and one of the different block level syntaxes is derived from another of the different block-level syntaxes (415). 76. The method as described in claim 71, characterized in that the decoding step reads a block-level syntax to signal the lighting compensation information (415). 77. The method as described in claim 76, characterized in that the decoding step uses context-adaptive binary arithmetic coding contexts to decode the block-level syntax, the contexts being selected based on the block size (415). 78. The method as described in claim 76, characterized in that the lighting compensation information includes a lighting compensation parameter (435). 79. The method as described in claim 71, characterized in that the decoding step uses differential decoding of at least the lighting compensation parameters and the color compensation parameters at a block level (435). 80. The method as described in claim 79, characterized in that the decoding step applies uniform quantization to at least one of the differentially encoded lighting compensation parameters and the differentially encoded color compensation parameters (435). 81.
A video signal structure for encoding video, characterized in that it comprises: an encoded image enabling color compensation of at least one color component in an anticipation of the image based on a correlation factor that is related to color data enter the image and the other image, the image and the other image having different points of view and corresponding both to the content of Multiple Views of the same scene or a similar scene. 82. A storage medium having video signal data encoded therein, characterized in that it comprises: an encoded image enabling the color compensation of at least one color component in a preview of the image based on a correlation factor which is related to color data enters the image and the other image, having the image and the other image different points of view and corresponding both to the content of Multiple Views for the same scene or a similar scene.
MX/A/2008/008865A 2006-01-09 2008-07-09 Method and apparatus for providing reduced resolution update mode for multi-view video coding MX2008008865A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US60/757,289 2006-01-09
US60/757,372 2006-01-09

Publications (1)

Publication Number Publication Date
MX2008008865A true MX2008008865A (en) 2008-09-26
