CN100581265C - Processing method for multi-view point video - Google Patents

Processing method for multi-view point video

Info

Publication number
CN100581265C
CN100581265C CN200810059283A CN200810059283
Authority
CN
China
Prior art keywords
macro block
video
component
viewpoint
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200810059283
Other languages
Chinese (zh)
Other versions
CN101262606A (en)
Inventor
邵枫
郁梅
蒋刚毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Guizhi Intellectual Property Service Co.,Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN 200810059283 priority Critical patent/CN100581265C/en
Publication of CN101262606A publication Critical patent/CN101262606A/en
Application granted granted Critical
Publication of CN100581265C publication Critical patent/CN100581265C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention discloses a processing method for multi-view video. Because the color correction process is embedded in the multi-view video coding process, the method markedly improves coding performance compared with the typical approach of correcting color first and then coding the multi-view video. In addition, because macroblock coding mode information is used for foreground/background separation, the method separates foreground from background quickly and accurately compared with existing methods that rely on disparity estimation or region segmentation. Furthermore, the method uses background information to perform color correction; compared with methods that transfer color between whole images or between the most similar regions, this helps to obtain a consistent reference plane and better conforms to the imaging principle of a camera, thereby improving the precision of multi-view video color correction.

Description

Processing method for multi-view video
Technical field
The present invention relates to a video processing method, and more particularly to a processing method for multi-view video.
Background technology
In the real world, the visual content an observer sees depends on the observer's position relative to the observed object, and the observer can freely choose different angles from which to observe and analyse things. In a traditional video system, the view of a real scene from a single viewpoint is chosen by the cameraman or director; the user can only passively watch the video sequence produced by the camera at that single viewpoint and cannot freely choose another viewpoint from which to observe the real scene. Such one-directional video sequences can only reflect one side of a real-world scene. A free-viewpoint video system allows the user to freely select a viewpoint within a certain range and watch any side of the real-world scene, and has been identified by MPEG (Moving Picture Experts Group) of the international standards organization as a development direction of next-generation video systems.
Multi-view video technology is a core link of free-viewpoint video technology: it provides video information of the captured scene from different angles. Fig. 1 is a schematic diagram of the imaging of a multi-view parallel camera system, in which n+1 cameras are placed in parallel to capture multi-view video. Using the information of the multiple viewpoints in the multi-view video signal, the video of any viewpoint selected by the user can be synthesized, so that the user can switch freely between viewpoint videos. However, the data volume of the multi-view video signal grows multiplicatively with the number of viewpoints, so corresponding multi-view video coding compression techniques are needed to reduce this huge data volume and to save transmission bandwidth and storage space. On the other hand, because factors such as scene illumination, camera CCD noise, shutter speed and exposure are not consistent across the cameras during acquisition, the color values of the video images captured by different cameras can differ greatly, which degrades the performance of subsequent multi-view video coding and the quality of virtual view rendering.
To address the above problems, a typical multi-view video processing method has been proposed, as shown in Fig. 2. At the server side, the multi-view video collected by the cameras is first color-corrected, the corrected video is then coded with a multi-view video encoder, and the coded video is transmitted over the network; at the client side, the received coded video is decoded and virtual-viewpoint video images are rendered between the decoded viewpoints.
Existing multi-view color correction methods usually establish a color mapping relation between whole images or between the most similar regions of images. Region-based color mapping requires clustering-based segmentation of the target image and the source image, establishes the mapping between the most similar regions, and corrects the source image with these mapping relations. However, the precision of such color correction methods is low. Moreover, the foreground/background information used in existing color correction is usually obtained by disparity estimation or region segmentation; the accuracy of the separation depends on the computational effort spent on disparity estimation or region segmentation, and for both approaches improving the accuracy requires a large amount of computation.
For multi-view video coding, the JMVM (Joint Multiview Video Model) developed by the JVT (Joint Video Team) currently recommends a luminance compensation method. This method applies weighted prediction to coded macroblocks to compensate the disparity estimation and compensation prediction residual, thereby improving coding compression efficiency; however, with this method the colors of the decoded multi-view video images still do not reach consistency, and the improvement in coding performance brought by luminance compensation is not large.
Summary of the invention
The technical problem to be solved by the present invention is to provide a multi-view video processing method that can effectively guarantee the color consistency of the decoded multi-view video images while improving the coding performance of multi-view video.
The technical solution adopted by the present invention to solve the above technical problem is a processing method for multi-view video whose processing flow is as follows: video color correction is performed within the multi-view video coding process applied to the multi-view video captured at the same moment by a multi-view camera system with n+1 cameras; the processed multi-view video is then transmitted over the network, and virtual-viewpoint video images are finally rendered between the decoded viewpoints.
The concrete steps of performing video color correction within the multi-view video coding process are:
(1) Process the multi-view video captured at the same moment by the multi-view camera system with n+1 cameras in the temporal domain using the group of frames (GOP) as the coding unit; according to the set coding prediction structure, define the viewpoint containing the I frame in a GOP as the reference viewpoint, denoted R, and define the other viewpoints as source viewpoints, denoted S; the set coding prediction structure is the hierarchical B-picture coding prediction structure recommended by JVT;
(2) Judge whether the current GOP to be coded is the first GOP; if so, perform coding prediction in the temporal domain and between viewpoints on the reference-viewpoint video and the source-viewpoint videos according to the set coding prediction structure, and go to step (5); otherwise, continue;
(3) Use the foreground/background separation information of the reference viewpoint and the source viewpoints from the previous GOP to perform color correction on the source-viewpoint videos of the current GOP;
(4) According to the set coding prediction structure, perform coding prediction in the temporal domain and between viewpoints on the reference-viewpoint video and the color-corrected source-viewpoint videos;
(5) According to the coding mode information adopted by each macroblock in every frame of the current GOP, separate foreground and background for the reference viewpoint and the source viewpoints respectively, obtaining the foreground/background separation information of the reference viewpoint and the source viewpoints for the current GOP;
(6) Judge whether the current GOP is the last GOP; if so, finish; otherwise, jump to step (3) to process the next GOP.
The separation of foreground and background in step (5) comprises the following steps (a minimal code sketch is given after step B):
A. For the macroblocks at each position of the same viewpoint, examine the coding mode used for P-frame or B-frame coding in every frame of the GOP; if the coding mode of a macroblock is the SKIP mode in all frames, the macroblock is determined to belong to the background; otherwise it is determined to belong to the foreground, which yields the initial foreground/background separation information;
B. Smooth the initial foreground/background separation information: if the current macroblock belongs to the foreground and at least three of its left, right, upper and lower neighbouring macroblocks belong to the background, the current macroblock is determined to be an isolated foreground macroblock and is revised to belong to the background; if the current macroblock belongs to the background and at least three of its left, right, upper and lower neighbouring macroblocks belong to the foreground, the current macroblock is determined to be an isolated background macroblock and is revised to belong to the foreground, which yields the smoothed foreground/background separation information.
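The following minimal Python/NumPy sketch illustrates steps A and B. The per-frame macroblock mode maps and the numeric code used for the SKIP mode are hypothetical inputs that an encoder would have to supply; they are assumptions of the sketch, not defined by the method itself.

```python
import numpy as np

SKIP = 0  # hypothetical numeric code for the SKIP macroblock mode

def separate_foreground_background(mode_maps):
    """Step A. mode_maps: list of 2-D integer arrays, one per P/B frame of the
    GOP, holding the coding mode of each macroblock of one viewpoint.
    Returns a boolean macroblock map: True = foreground, False = background."""
    modes = np.stack(mode_maps)                 # shape: (frames, rows, cols)
    background = np.all(modes == SKIP, axis=0)  # SKIP in every frame -> background
    return ~background

def smooth_separation(foreground):
    """Step B. Flip isolated macroblocks: if at least three of the four
    neighbours (left, right, up, down) disagree with the current macroblock,
    its label is revised to the opposite class."""
    smoothed = foreground.copy()
    rows, cols = foreground.shape
    for r in range(rows):
        for c in range(cols):
            neighbours = []
            if c > 0:
                neighbours.append(foreground[r, c - 1])
            if c < cols - 1:
                neighbours.append(foreground[r, c + 1])
            if r > 0:
                neighbours.append(foreground[r - 1, c])
            if r < rows - 1:
                neighbours.append(foreground[r + 1, c])
            if sum(1 for n in neighbours if n != foreground[r, c]) >= 3:
                smoothed[r, c] = not foreground[r, c]
    return smoothed
```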
The detailed process of performing color correction on the source-viewpoint videos of the current GOP in step (3) is as follows (illustrated by the sketch after this paragraph): using the foreground/background separation information of the reference viewpoint and the source viewpoints from the previous GOP, obtain the mean and standard deviation in the background region of the 1st component Y, the 2nd component U and the 3rd component V of each reference image of the reference viewpoint, and the mean and standard deviation in the background region of the 1st component Y, the 2nd component U and the 3rd component V of each source image of the source viewpoint; then perform color correction on each of the Y, U and V components of the source image as
I_i^C(x, y) = (σ_i^R / σ_i^S) · (I_i^S(x, y) − μ_i^S) + μ_i^R,
where I_i^S(x, y) is the color value of the i-th component of the source image, I_i^C(x, y) is the color value of the i-th component of the corrected image after color correction, μ_i^R and σ_i^R are the mean and standard deviation of the i-th component of the reference image in the background region, μ_i^S and σ_i^S are the mean and standard deviation of the i-th component of the source image in the background region, and i = 1, 2, 3.
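A minimal sketch of this correction in the same Python/NumPy setting is shown below. It assumes the frames are given as (H, W, 3) arrays of Y, U, V values at full resolution (the 4:2:0 chroma subsampling is ignored for simplicity) and that the macroblock-level background map from the previous GOP has already been expanded to a pixel-level boolean mask; these conventions are assumptions of the sketch, not requirements of the method.

```python
import numpy as np

def background_stats(image_yuv, background_mask):
    """Mean and standard deviation of each Y/U/V component over the background
    region. image_yuv: (H, W, 3) float array; background_mask: (H, W) boolean
    array, True on background pixels."""
    background_pixels = image_yuv[background_mask]      # shape (N, 3)
    return background_pixels.mean(axis=0), background_pixels.std(axis=0)

def color_correct(source_yuv, mu_ref, sigma_ref, mu_src, sigma_src):
    """Apply I_c = (sigma_ref / sigma_src) * (I_s - mu_src) + mu_ref to every
    pixel and every component of the source image."""
    corrected = (sigma_ref / sigma_src) * (source_yuv - mu_src) + mu_ref
    return np.clip(corrected, 0.0, 255.0)

# Usage (statistics taken from reference- and source-viewpoint frames,
# restricted to the background regions separated in the previous GOP):
#   mu_r, sg_r = background_stats(reference_frame, reference_background_mask)
#   mu_s, sg_s = background_stats(source_frame, source_background_mask)
#   corrected_frame = color_correct(source_frame, mu_r, sg_r, mu_s, sg_s)
```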
Compared with the prior art, the advantages of the multi-view video processing method provided by the present invention are:
1) The method embeds the color correction process in the multi-view video coding process; compared with the typical approach of correcting color first and then coding the viewpoint videos, it greatly improves the coding performance of multi-view video;
2) The method uses macroblock coding mode information to perform foreground/background separation; compared with existing methods that use disparity estimation or region segmentation for the separation, the foreground/background separation of the method is fast and accurate;
3) The method uses background information to perform color correction; compared with methods that transfer color between whole images or between the most similar regions, it helps to obtain consistent reference planes and better conforms to the imaging principle of a camera, improving the precision of multi-view video color correction.
Description of drawings
Fig. 1 is a schematic diagram of the imaging of a multi-view parallel camera system;
Fig. 2 is a schematic diagram of a typical multi-view video processing flow;
Fig. 3 is a flow chart of the multi-view video processing method of the present invention;
Fig. 4 is a schematic diagram of the HBP coding prediction structure adopted by the present invention;
Fig. 5a is a schematic diagram of the background marking of the reference-viewpoint video image of the "flamenco1" multi-view test set;
Fig. 5b is a schematic diagram of the background marking of the source-viewpoint video image of the "flamenco1" multi-view test set;
Fig. 6a is a schematic diagram of the background marking of the reference-viewpoint video image of the "golf2" multi-view test set;
Fig. 6b is a schematic diagram of the background marking of the source-viewpoint video image of the "golf2" multi-view test set;
Fig. 7a compares the Y-component rate-distortion curves of the source-viewpoint video of the "flamenco1" multi-view test set coded by JMVM without luminance compensation, by JMVM with luminance compensation, and by JMVM without luminance compensation after the color correction of the present invention;
Fig. 7b shows the same comparison for the U component of the "flamenco1" multi-view test set;
Fig. 7c shows the same comparison for the V component of the "flamenco1" multi-view test set;
Fig. 8a compares the Y-component rate-distortion curves of the source-viewpoint video of the "golf2" multi-view test set under the same three processing schemes;
Fig. 8b shows the same comparison for the U component of the "golf2" multi-view test set;
Fig. 8c shows the same comparison for the V component of the "golf2" multi-view test set;
Fig. 9a is the decoded reference-viewpoint video image of the "flamenco1" multi-view test set after coding with JMVM without luminance compensation;
Fig. 9b is the decoded source-viewpoint video image of the "flamenco1" multi-view test set after coding with JMVM without luminance compensation;
Fig. 9c is the decoded source-viewpoint video image of the "flamenco1" multi-view test set after coding with JMVM with luminance compensation;
Fig. 9d is the decoded source-viewpoint video image of the "flamenco1" multi-view test set after coding with the method of the present invention;
Fig. 10a is the decoded reference-viewpoint video image of the "golf2" multi-view test set after coding with JMVM without luminance compensation;
Fig. 10b is the decoded source-viewpoint video image of the "golf2" multi-view test set after coding with JMVM without luminance compensation;
Fig. 10c is the decoded source-viewpoint video image of the "golf2" multi-view test set after coding with JMVM with luminance compensation;
Fig. 10d is the decoded source-viewpoint video image of the "golf2" multi-view test set after coding with the method of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
First, the concept of foreground/background separation used in the present invention and the process of performing color correction from background information are described.
Multi-view video coding currently mainly uses the JMVM (Joint Multiview Video Model) developed by the JVT (Joint Video Team). The macroblock coding modes of JMVM mainly include SKIP, Motion SKIP, 16×16, 16×8, 8×16, 8×8, Intra16, Intra8 and Intra4. The SKIP coding mode is characterized as follows: if the motion vector of the current macroblock is 0 and the pixel residual is also 0, the macroblock is confirmed as a SKIP-type macroblock, and its reconstructed pixel values can be obtained directly by copying the co-located macroblock of the previous frame. SKIP-type macroblocks are the static macroblocks of a moving video and can be regarded as macroblocks of the background region, which provides the theoretical basis for fast separation of foreground and background.
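As a toy illustration of the SKIP condition just described (this is not the actual JMVM mode-decision logic, which also involves predicted motion vectors and mode signalling; the function names and the 16×16 block-size convention are assumptions of the sketch):

```python
import numpy as np

def is_skip_macroblock(motion_vector, residual):
    """A macroblock is treated as SKIP-type when its motion vector is zero and
    its prediction residual is zero everywhere."""
    return np.all(motion_vector == 0) and not np.any(residual)

def reconstruct_skip(previous_frame, row, col, mb_size=16):
    """Reconstruct a SKIP macroblock by copying the co-located block of the
    previous reconstructed frame (row, col are the pixel coordinates of the
    macroblock's top-left corner)."""
    return previous_frame[row:row + mb_size, col:col + mb_size].copy()
```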
The foreground/background separation adopted by the present invention comprises the following steps:
A. For the macroblocks at each position of the same viewpoint, examine the coding mode used for P-frame or B-frame coding in every frame of the GOP; if the coding mode of a macroblock is the SKIP mode in all frames, the macroblock is determined to belong to the background; otherwise it is determined to belong to the foreground, which yields the initial foreground/background separation information;
B. Smooth the initial foreground/background separation information: if the current macroblock belongs to the foreground and at least three of its left, right, upper and lower neighbouring macroblocks belong to the background, i.e. disagree with the class of the current macroblock, the current macroblock is determined to be an isolated foreground macroblock and is revised to belong to the background; if the current macroblock belongs to the background and at least three of its left, right, upper and lower neighbouring macroblocks belong to the foreground, i.e. disagree with the class of the current macroblock, the current macroblock is determined to be an isolated background macroblock and is revised to belong to the foreground. This eliminates isolated foreground or background macroblocks and yields the smoothed foreground/background separation information.
According to the principle of camera imaging, the color value captured by a camera is the joint result of three factors: the optical characteristics of the objects in the scene, the scene illumination, and the camera sensor. The difference between multi-view imaging and single-view imaging is that, as the number of viewpoints increases, it becomes harder to keep these three factors consistent: the shutter speed, exposure time and camera noise of different cameras are difficult to adjust to be exactly the same, the same light source can act differently on viewpoints at different positions, and for irregular object surfaces the spectral reflectance can change greatly with small changes of spatial position. According to the background imaging consistency principle, however, the influence of illumination and spectral reflectance on the background is relatively stable during imaging, so separating the background from the foreground is equivalent to obtaining reference planes that are consistent across viewpoints. The detailed process of the color correction adopted in the method of the present invention can therefore be described as follows: using the foreground/background separation information of the reference viewpoint and the source viewpoints from the previous GOP, obtain the mean and standard deviation in the background region of the 1st component Y, the 2nd component U and the 3rd component V of each reference image of the reference viewpoint, and the mean and standard deviation in the background region of the 1st component Y, the 2nd component U and the 3rd component V of each source image of the source viewpoint; then perform color correction on each of the Y, U and V components of the source image as I_i^C(x, y) = (σ_i^R / σ_i^S) · (I_i^S(x, y) − μ_i^S) + μ_i^R, where I_i^S(x, y) is the color value of the i-th component of the source image, I_i^C(x, y) is the color value of the i-th component of the corrected image after color correction, μ_i^R and σ_i^R are the mean and standard deviation of the i-th component of the reference image in the background region, μ_i^S and σ_i^S are the mean and standard deviation of the i-th component of the source image in the background region, and i = 1, 2, 3.
On the basis of the above foreground/background separation and background-based color correction, and with reference to Fig. 3, the concrete steps of the multi-view video processing method of the present invention are as follows (a sketch of the overall loop is given after the steps):
(1) First, process the multi-view video captured at the same moment by the multi-view camera system with n+1 cameras in the temporal domain using the GOP as the coding unit; according to the set coding prediction structure, define the viewpoint containing the I frame in a GOP as the reference viewpoint, denoted R, and define the other viewpoints as source viewpoints, denoted S. In this embodiment the set coding prediction structure is the hierarchical B-picture (Hierarchical B-Picture, HBP) coding prediction structure recommended by JVT, shown in Fig. 4; the HBP structure strikes a reasonable balance between temporal references and inter-view references, so that it achieves high coding performance both for sequences with strong temporal correlation and for sequences with strong inter-view correlation;
(2) Judge whether the current GOP to be coded is the first GOP; if so, perform coding prediction in the temporal domain and between viewpoints on the reference-viewpoint video and the source-viewpoint videos according to the set coding prediction structure, and go to step (5); otherwise, continue;
(3) Use the foreground/background separation information of the reference viewpoint and the source viewpoints from the previous GOP to perform color correction on the source-viewpoint videos of the current GOP;
(4) According to the set coding prediction structure, perform coding prediction in the temporal domain and between viewpoints on the reference-viewpoint video and the color-corrected source-viewpoint videos;
(5) According to the coding mode information adopted by each macroblock in every frame of the current GOP, separate foreground and background for the reference viewpoint and the source viewpoints respectively, obtaining the foreground/background separation information of the reference viewpoint and the source viewpoints for the current GOP;
(6) Judge whether the current GOP is the last GOP; if so, finish; otherwise, jump to step (3) to process the next GOP;
(7) Finally, transmit the multi-view video processed by the above color correction and multi-view video coding over the network, and render virtual-viewpoint video images between the decoded viewpoints.
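The control flow of steps (1) to (7) can be summarised by the hypothetical driver below, which reuses the functions of the two earlier sketches. The encoder is injected as a callable because the patent does not specify the HBP encoder at code level; `encode_gop` is assumed to return the bitstream together with the per-view macroblock mode maps it produced, and using each GOP's first frame for the background statistics is one plausible reading of the description, not something the patent fixes.

```python
import numpy as np

def macroblock_mask_to_pixels(mb_mask, mb_size=16):
    """Expand a macroblock-level boolean mask to pixel resolution
    (frame dimensions are assumed to be multiples of the macroblock size)."""
    block = np.ones((mb_size, mb_size), dtype=np.uint8)
    return np.kron(mb_mask.astype(np.uint8), block).astype(bool)

def process_multiview_video(gops, reference_view, source_views, encode_gop):
    """gops: list of GOPs, each a dict {view_id: list of (H, W, 3) YUV frames}.
    encode_gop({view: frames}) -> (bitstream_bytes, {view: list of mode maps})."""
    prev_bg_masks = None                    # per-view pixel-level background masks
    bitstream = []
    for index, gop in enumerate(gops):
        frames = gop
        if index > 0:                       # steps (3)-(4): correct, then encode
            ref = gop[reference_view][0]
            mu_r, sg_r = background_stats(ref, prev_bg_masks[reference_view])
            frames = dict(gop)
            for view in source_views:
                mu_s, sg_s = background_stats(gop[view][0], prev_bg_masks[view])
                frames[view] = [color_correct(f, mu_r, sg_r, mu_s, sg_s)
                                for f in gop[view]]
        bits, mode_maps = encode_gop(frames)  # steps (2)/(4): HBP coding
        bitstream.append(bits)
        # Step (5): derive background masks from the macroblock coding modes.
        prev_bg_masks = {
            view: macroblock_mask_to_pixels(
                ~smooth_separation(separate_foreground_background(mode_maps[view])))
            for view in [reference_view] + source_views
        }
    return bitstream                        # step (7): transmit and decode downstream
```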
The multi-view video coding performance and the subjective quality of the decoded images obtained with the present invention are compared below.
The processing method of the present invention is applied to the two multi-view video test sets "flamenco1" and "golf2" provided by KDDI Corporation. Fig. 5a and Fig. 5b show the background marking of the reference-viewpoint and source-viewpoint video images of the "flamenco1" multi-view test set, and Fig. 6a and Fig. 6b show the background marking of the reference-viewpoint and source-viewpoint video images of the "golf2" multi-view test set; the image size of the reference-viewpoint and source-viewpoint video images of the "flamenco1" and "golf2" multi-view test sets is 320×240, in YUV (4:2:0) format. As can be seen from Fig. 5a and Fig. 5b, and from Fig. 6a and Fig. 6b, the colors of the reference-viewpoint video images shown in Fig. 5a and Fig. 6a and of the source-viewpoint video images shown in Fig. 5b and Fig. 6b are obviously inconsistent, so color-correcting the source-viewpoint videos shown in Fig. 5b and Fig. 6b is clearly necessary, and processing with the foreground/background separation method of the present invention allows the background information to be extracted more accurately.
The coding performance of the method of the present invention is compared with that of JMVM without luminance compensation and that of JMVM with luminance compensation. The basic quantization parameter is set to baseQP = 22, 27, 32 and 37, and the GOP size is 15, i.e. 15 frames are coded in the temporal domain. Fig. 7a, Fig. 7b and Fig. 7c respectively compare the Y-, U- and V-component rate-distortion curves of the source-viewpoint video of the "flamenco1" multi-view test set coded by JMVM with luminance compensation, by JMVM without luminance compensation, and by JMVM without luminance compensation after the color correction of the present invention; Fig. 8a, Fig. 8b and Fig. 8c give the corresponding comparisons for the "golf2" multi-view test set; the coded data format is YUV (4:2:0). For the "flamenco1" multi-view test set, the Y-component rate-distortion performance of the method of the present invention is basically consistent with that of JMVM with luminance compensation and is 0.1 dB higher than that of JMVM without luminance compensation at the same bit rate; the U-component rate-distortion performance of the method is 0.2-0.3 dB higher than those of JMVM with and without luminance compensation at the same bit rate; and the V-component rate-distortion performance of the method is 0.25 dB higher at the same bit rate. For the "golf2" multi-view test set, the Y-component rate-distortion performance of the method is basically consistent with those of JMVM with and without luminance compensation; the U-component rate-distortion performance of the method is 0.6-0.7 dB higher than those of JMVM with and without luminance compensation at the same bit rate, and the V-component rate-distortion performance is 0.3 dB higher at the same bit rate. In summary, after processing with the method of the present invention the coding performance of multi-view video is greatly improved, which shows that the color correction method adopted in the method is effective.
Fig. 9a, Fig. 9b and Fig. 10a, Fig. 10b show the decoded reference-viewpoint and source-viewpoint video images of the "flamenco1" and "golf2" multi-view test sets after coding with JMVM without luminance compensation; Fig. 9c and Fig. 10c show the decoded source-viewpoint video images of the two test sets after coding with JMVM with luminance compensation; and Fig. 9d and Fig. 10d show the decoded source-viewpoint video images of the two test sets after coding with the method of the present invention, here with baseQP = 22. As can be seen from the figures, with the color correction method of the present invention the colors of the decoded source-viewpoint video images are very close to those of the decoded reference-viewpoint video images, which is more suitable for subsequent virtual view rendering.

Claims (2)

1. A processing method for multi-view video, characterized in that its processing flow is as follows: video color correction is performed within the multi-view video coding process applied to the multi-view video captured at the same moment by a multi-view camera system with n+1 cameras; the processed multi-view video is then transmitted over the network, and virtual-viewpoint video images are finally rendered between the decoded viewpoints; the concrete steps of performing video color correction within the multi-view video coding process are:
(1) process the multi-view video captured at the same moment by the multi-view camera system with n+1 cameras in the temporal domain using the group of frames (GOP) as the coding unit; according to the set coding prediction structure, define the viewpoint containing the I frame in a GOP as the reference viewpoint, denoted R, and define the other viewpoints as source viewpoints, denoted S; the set coding prediction structure is the hierarchical B-picture coding prediction structure recommended by JVT;
(2) judge whether the current GOP to be coded is the first GOP; if so, perform coding prediction in the temporal domain and between viewpoints on the reference-viewpoint video and the source-viewpoint videos according to the set coding prediction structure, and go to step (5); otherwise, continue;
(3) use the foreground/background separation information of the reference viewpoint and the source viewpoints from the previous GOP to perform color correction on the source-viewpoint videos of the current GOP; the detailed process of performing color correction on the source-viewpoint videos of the current GOP is: using the foreground/background separation information of the reference viewpoint and the source viewpoints from the previous GOP, obtain the mean and standard deviation in the background region of the 1st component Y, the 2nd component U and the 3rd component V of each reference image of the reference viewpoint, and the mean and standard deviation in the background region of the 1st component Y, the 2nd component U and the 3rd component V of each source image of the source viewpoint; then perform color correction on each of the Y, U and V components of the source image as I_i^C(x, y) = (σ_i^R / σ_i^S) · (I_i^S(x, y) − μ_i^S) + μ_i^R, where I_i^S(x, y) is the color value of the i-th component of the source image, I_i^C(x, y) is the color value of the i-th component of the corrected image after color correction, μ_i^R and σ_i^R are the mean and standard deviation of the i-th component of the reference image in the background region, μ_i^S and σ_i^S are the mean and standard deviation of the i-th component of the source image in the background region, and i = 1, 2, 3;
(4) according to the set coding prediction structure, perform coding prediction in the temporal domain and between viewpoints on the reference-viewpoint video and the color-corrected source-viewpoint videos;
(5) according to the coding mode information adopted by each macroblock in every frame of the current GOP, separate foreground and background for the reference viewpoint and the source viewpoints respectively, obtaining the foreground/background separation information of the reference viewpoint and the source viewpoints for the current GOP;
(6) judge whether the current GOP is the last GOP; if so, finish; otherwise, jump to step (3) to process the next GOP.
2. The processing method for multi-view video according to claim 1, characterized in that the separation of foreground and background in step (5) comprises the following steps:
A. for the macroblocks at each position of the same viewpoint, examine the coding mode used for P-frame or B-frame coding in every frame of the GOP; if the coding mode of a macroblock is the SKIP mode in all frames, the macroblock is determined to belong to the background; otherwise it is determined to belong to the foreground, which yields the initial foreground/background separation information;
B. smooth the initial foreground/background separation information: if the current macroblock belongs to the foreground and at least three of its left, right, upper and lower neighbouring macroblocks belong to the background, the current macroblock is determined to be an isolated foreground macroblock and is revised to belong to the background; if the current macroblock belongs to the background and at least three of its left, right, upper and lower neighbouring macroblocks belong to the foreground, the current macroblock is determined to be an isolated background macroblock and is revised to belong to the foreground, which yields the smoothed foreground/background separation information.
CN 200810059283 2008-01-16 2008-01-16 Processing method for multi-view point video Expired - Fee Related CN100581265C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810059283 CN100581265C (en) 2008-01-16 2008-01-16 Processing method for multi-view point video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200810059283 CN100581265C (en) 2008-01-16 2008-01-16 Processing method for multi-view point video

Publications (2)

Publication Number Publication Date
CN101262606A CN101262606A (en) 2008-09-10
CN100581265C true CN100581265C (en) 2010-01-13

Family

ID=39962765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810059283 Expired - Fee Related CN100581265C (en) 2008-01-16 2008-01-16 Processing method for multi-view point video

Country Status (1)

Country Link
CN (1) CN100581265C (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101820550B (en) * 2009-02-26 2011-11-23 华为终端有限公司 Multi-viewpoint video image correction method, device and system
KR20140023953A (en) * 2011-03-30 2014-02-27 가부시키가이샤 니콘 Image processing device, imaging device, and image processing program
US8824791B2 (en) * 2011-04-29 2014-09-02 International Business Machine Corporation Color correction for static cameras
US8866876B2 (en) 2011-12-07 2014-10-21 Futurewei Technologies, Inc. Color correction for multiple video objects in telepresence applications
CN104378617B (en) * 2014-10-30 2016-04-20 宁波大学 The acquisition methods of pixel in a kind of virtual view
CN104851100B (en) * 2015-05-22 2018-01-16 清华大学深圳研究生院 Binocular view solid matching method under variable light source
CN109640089B (en) * 2018-11-02 2023-03-24 西安万像电子科技有限公司 Image coding and decoding method and device
CN113271464B (en) * 2021-05-11 2022-11-18 北京奇艺世纪科技有限公司 Video encoding method, decoding method and related devices
CN116528065B (en) * 2023-06-30 2023-09-26 深圳臻像科技有限公司 Efficient virtual scene content light field acquisition and generation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A multi-view video automatic color correction system. 邵枫 (Shao Feng), 蒋刚毅 (Jiang Gangyi), et al. Acta Optica Sinica (光学学报), Vol. 27, No. 5, 2007 *

Also Published As

Publication number Publication date
CN101262606A (en) 2008-09-10

Similar Documents

Publication Publication Date Title
CN100581265C (en) Processing method for multi-view point video
JP6788699B2 (en) Effective partition coding with high partitioning degrees of freedom
JP6814783B2 (en) Valid predictions using partition coding
CN103493483B (en) Decoding multi-view video plus depth content
CN100496121C (en) Image signal processing method of the interactive multi-view video system
CN103703777B (en) Motion depth map to changing with depth bounds enters row decoding
CN106134191B (en) For the processing of low latency luminance compensation and the method for the coding based on depth look-up table
JP2021168479A (en) Efficient multi-view coding using depth-map estimation and update
KR101844705B1 (en) Depth aware enhancement for stereo video
CN100527842C (en) Background-based motion estimation coding method
CN104471941B (en) The method and apparatus of son segmentation prediction between view in 3D Video codings
CN105191317B (en) The predictive interpretation of depth look-up table in view and across view
CN105874788B (en) The simplification decoded to the piecewise DC of larger prediction block in 3D video codings
CN102685532B (en) Coding method for free view point four-dimensional space video coding system
CN102598674A (en) Depth map generation techniques for conversion of 2d video data to 3d video data
CN105308956A (en) Predictor for depth map intra coding
CN102438147B (en) Intra-frame synchronous stereo video multi-reference frame mode inter-view predictive coding and decoding method
CN104412587A (en) Method and apparatus of inter-view candidate derivation in 3d video coding
CN103338370B (en) A kind of multi-view depth video fast encoding method
CN101888566A (en) Estimation method of distortion performance of stereo video encoding rate
CN104429074A (en) Method and apparatus of disparity vector derivation in 3D video coding
CN101729891A (en) Method for encoding multi-view depth video
Gu et al. Fast bi-partition mode selection for 3D HEVC depth intra coding
WO2016155070A1 (en) Method for acquiring adjacent disparity vectors in multi-texture multi-depth video
CN101404765B (en) Interactive multi-view point video encoding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI SILICON INTELLECTUAL PROPERTY EXCHANGE CENTER CO., LTD.

Free format text: FORMER OWNER: NINGBO UNIVERSITY

Effective date: 20120105

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 315211 NINGBO, ZHEJIANG PROVINCE TO: 200030 XUHUI, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20120105

Address after: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1704

Patentee after: Shanghai Silicon Intellectual Property Exchange Co.,Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

ASS Succession or assignment of patent right

Owner name: SHANGHAI SIPAI KESI TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHANGHAI SILICON INTELLECTUAL PROPERTY EXCHANGE CENTER CO., LTD.

Effective date: 20120217

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200030 XUHUI, SHANGHAI TO: 201203 PUDONG NEW AREA, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20120217

Address after: 201203 Shanghai Chunxiao Road No. 350 South Building Room 207

Patentee after: Shanghai spparks Technology Co.,Ltd.

Address before: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1704

Patentee before: Shanghai Silicon Intellectual Property Exchange Co.,Ltd.

ASS Succession or assignment of patent right

Owner name: SHANGHAI GUIZHI INTELLECTUAL PROPERTY SERVICE CO., LTD.

Free format text: FORMER OWNER: SHANGHAI SIPAI KESI TECHNOLOGY CO., LTD.

Effective date: 20120606

C41 Transfer of patent application or patent right or utility model
C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1706

Patentee after: Shanghai spparks Technology Co.,Ltd.

Address before: 201203 Shanghai Chunxiao Road No. 350 South Building Room 207

Patentee before: Shanghai spparks Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20120606

Address after: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1704

Patentee after: Shanghai Guizhi Intellectual Property Service Co.,Ltd.

Address before: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1706

Patentee before: Shanghai spparks Technology Co.,Ltd.

DD01 Delivery of document by public notice

Addressee: Shi Lingling

Document name: Notification of Passing Examination on Formalities

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100113

Termination date: 20200116