CN104010180B - Method and device for filtering three-dimensional video - Google Patents
- Publication number: CN104010180B (application CN201410265360.4)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
Abstract
An embodiment of the invention provides a method and device for filtering a three-dimensional video. The method comprises: projecting pixels of the image plane into three-dimensional space; calculating the spatial proximity of a pixel to be filtered and a reference pixel in the three-dimensional space according to their coordinate values in that space; calculating the texture-pixel-value similarity of the pixel to be filtered and the reference pixel according to their texture pixel values; calculating the motion-feature consistency of the pixel to be filtered and the reference pixel according to their texture pixel values and the texture pixel values of the co-located pixels in the previous frame; and determining filtering weights according to the spatial proximity, the texture-pixel-value similarity and the motion-feature consistency, then taking the weighted average of the pixel values of all reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered. The method and device improve the filtering accuracy of three-dimensional video.
Description
Technical field
Embodiments of the present invention relate to image processing techniques, and in particular to a three-dimensional video filtering method and device.
Background technology
With the continuous development of visual multimedia technology, three-dimensional video, with its unique depth effect, has gradually entered people's lives and is applied in many fields such as education, the military, entertainment and medical treatment. According to video content, current three-dimensional video falls broadly into two classes: pure color three-dimensional video and depth-based three-dimensional video. Pure color three-dimensional video presents multi-channel color video directly to the user; its viewpoint positions and parallax are fixed, which places certain limitations on viewing. Compared with pure color three-dimensional video, depth-based three-dimensional video can, thanks to the introduction of depth maps, synthesize a virtual image of an arbitrary viewpoint through depth-image-based rendering, so people can select the viewpoint and adjust the parallax according to personal preference and thus better enjoy three-dimensional video. This freedom and flexibility has made depth-based three-dimensional video the most widely accepted form of three-dimensional video at present.
Depth-based three-dimensional video content consists of a texture map sequence and a depth map sequence: the texture maps intuitively present the texture features of object surfaces, while the depth maps reflect the distance between objects and the camera. Using this video content together with depth-image-based rendering, the texture image of a specified virtual view can be synthesized. However, considerable noise can be introduced into the depth maps and texture maps during acquisition, encoding, transmission and other processes. In the view synthesis stage, the noise in the depth maps and texture maps causes geometric distortion and texture distortion of the synthesized image respectively, severely affecting the visual experience. Filtering can effectively remove this noise and thereby improve three-dimensional video quality.
In the prior art, the main denoising method for texture maps is the bilateral filter, which takes the pixels surrounding the pixel to be filtered as reference pixels and obtains the filtering result as their weighted average. The weights mainly depend on the proximity of the pixel positions in the image and on the similarity of the pixel values. This filtering method assumes that the closer two pixels are in the image plane, the stronger their correlation, and the more similar their pixel values, the stronger their correlation.
Fig. 1 is a schematic diagram of the proximity computation of the prior-art bilateral filter. The problem with the prior art is that a pixel in an image is the reproduction on the two-dimensional image plane of a point in true three-dimensional space, yet the bilateral filter does not start from the true three-dimensional scene when considering pixel proximity, so its result is inaccurate. As shown in Fig. 1, a', b' and c' are three points in the real scene whose positions in the image plane captured by the camera are a, b and c; the in-plane distance between a and c equals the in-plane distance between b and c. Suppose c is to be filtered with a and b as reference pixels. As Fig. 1 clearly shows, in three-dimensional space the proximity of b and c is higher, whereas the bilateral filter treats the proximity of a and c as identical to that of b and c, so the accuracy of its filtering result is not high.
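The limitation can be made concrete with a small sketch of the classic bilateral weight (function name and sigma values are illustrative, not taken from the patent): because the weight is computed purely from image-plane distance and pixel values, two reference pixels equidistant from the pixel to be filtered in the plane receive identical weights even when their true 3-D depths differ.

```python
import math

def bilateral_weight(p, q, i_p, i_q, sigma_s=2.0, sigma_r=10.0):
    """Classic bilateral filter weight: image-plane closeness times
    pixel-value similarity (no knowledge of 3-D scene geometry)."""
    dist2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    closeness = math.exp(-dist2 / (2 * sigma_s ** 2))
    similarity = math.exp(-((i_p - i_q) ** 2) / (2 * sigma_r ** 2))
    return closeness * similarity

# a and b lie at the same image-plane distance from c, so with equal
# pixel values they receive identical weights, even if a' is much
# farther from c' in the real scene than b' is.
c, a, b = (5, 5), (5, 8), (8, 5)
w_a = bilateral_weight(c, a, 100, 100)
w_b = bilateral_weight(c, b, 100, 100)
```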
Summary of the invention
The embodiments of the present invention provide a three-dimensional video filtering method and device to overcome the low accuracy of the filtering results of the prior art.
In a first aspect, an embodiment of the present invention provides a three-dimensional video filtering method, comprising:
projecting the pixels in the image plane into three-dimensional space, the pixels including a pixel to be filtered and a reference pixel set;
calculating the spatial proximity of the pixel to be filtered and each reference pixel in the reference pixel set in the three-dimensional space according to their coordinate values in that space, wherein the reference pixel set and the pixel to be filtered are located in the same frame;
calculating the texture-pixel-value similarity of the pixel to be filtered and each reference pixel in the reference pixel set according to their texture pixel values;
calculating the motion-feature consistency of the pixel to be filtered and each reference pixel according to the texture pixel values of the pixel to be filtered, of the reference pixel, and of the co-located pixels in the previous frame of the frame containing the pixel to be filtered; and
determining filtering weights according to the spatial proximity, the texture-pixel-value similarity and the motion-feature consistency, and taking the weighted average of the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered.
With reference to the first aspect, in a first implementation of the first aspect, when a depth image is filtered, the filtering weights are determined according to the spatial proximity, the texture-pixel-value similarity corresponding to the depth pixels, and the motion-feature consistency, and the weighted average of the depth pixel values of the reference pixels in the reference pixel set is taken to obtain the filtering result of the depth pixel value of the pixel to be filtered in the depth image; or,
when a texture image is filtered, the filtering weights are determined according to the spatial proximity, the texture-pixel-value similarity and the motion-feature consistency, and the weighted average of the texture pixel values of the reference pixels in the reference pixel set is taken to obtain the filtering result of the texture pixel value of the pixel to be filtered in the texture image.
With reference to the first aspect or its first implementation, in a second implementation of the first aspect, projecting the pixels in the image plane into three-dimensional space comprises:
projecting the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video, the depth image information including the depth pixel values of the pixels.
With reference to the second implementation of the first aspect, in a third implementation of the first aspect, projecting the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video comprises:
calculating the coordinate value of the pixel after projection into three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
wherein R and T are the rotation matrix and translation vector of the reference camera, and A is the reference camera parameter matrix
A = [ f_x  r   o_x ]
    [ 0    f_y  o_y ]
    [ 0    0    1   ],
p = (x, y, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (X, Y, Z)ᵀ is the coordinate value of the pixel in three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are the normalized focal lengths in the horizontal and vertical directions respectively, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value in the image plane of the datum point, i.e. the intersection of the optical axis of the reference camera with the image plane.
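As a sketch of this back-projection (numpy-based; the identity camera below is an illustrative assumption, not a parameter set from the patent):

```python
import numpy as np

def project_to_3d(x, y, d, A, R, T):
    """Back-project image pixel (x, y) with depth pixel value d into
    3-D space via P = R^-1 (d * A^-1 * p - T)."""
    p = np.array([x, y, 1.0])  # homogeneous image coordinates
    return np.linalg.inv(R) @ (d * np.linalg.inv(A) @ p - T)

# Illustrative camera: identity intrinsics (A would normally hold
# f_x, f_y, r and (o_x, o_y)), identity rotation, zero translation.
A = np.eye(3)
R = np.eye(3)
T = np.zeros(3)
P = project_to_3d(2.0, 3.0, 4.0, A, R, T)   # -> [8., 12., 4.]
```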
With reference to the first aspect or any one of its first to third implementations, in a fourth implementation of the first aspect, the spatial proximity is calculated by taking the distance between the pixel to be filtered and the reference pixel in three-dimensional space as the input value of a function whose output value increases as the input value decreases;
the texture-pixel-value similarity is calculated by taking the difference between the texture pixel values of the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases;
the motion-feature consistency is obtained by calculating whether the motion features of the pixel to be filtered and the reference pixel are consistent, comprising:
when the difference between the texture pixel value of the pixel to be filtered and that of the co-located pixel in the previous frame, and the difference between the texture pixel value of the reference pixel and that of the co-located pixel in the previous frame, are simultaneously greater than or simultaneously less than a preset threshold, determining that the motion states of the pixel to be filtered and the reference pixel are consistent; otherwise, determining that the motion states of the pixel to be filtered and the reference pixel are inconsistent.
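This consistency test amounts to checking whether the two frame differences fall on the same side of the threshold. A minimal sketch (function name and default threshold are illustrative):

```python
def motion_consistent(t_p, t_p_prev, t_q, t_q_prev, th=5.0):
    """True when p and q are in the same motion state: both frame
    differences exceed th (both moving) or both stay below it (both static)."""
    return (abs(t_p - t_p_prev) > th) == (abs(t_q - t_q_prev) > th)

motion_consistent(100, 101, 50, 52)   # both static
motion_consistent(100, 140, 50, 52)   # one moving, one static
```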
With reference to the first aspect or any one of its first to fourth implementations, in a fifth implementation of the first aspect, determining the filtering weights according to the spatial proximity, the texture-pixel-value similarity and the motion-feature consistency and taking the weighted average of the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered comprises:
calculating the filtering result of the depth pixel value of the pixel to be filtered according to formula (1):
d'_p = ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q)·d_q ) / ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q) )    (1)
or calculating the filtering result of the texture pixel value of the pixel to be filtered according to formula (2):
t'_p = ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q)·t_q ) / ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q) )    (2)
wherein f_s(P,Q) = f_s(||P − Q||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p,t_q) = f_t(||t_p − t_q||) is used to calculate the texture-pixel-value similarity of the pixel to be filtered and the reference pixel;
f_m(p,q) is the motion-feature consistency term, equal to 1 when |t_p − t̃_p| and |t_q − t̃_q| are simultaneously greater than or simultaneously less than the threshold th, and 0 otherwise;
wherein p is the pixel to be filtered, q is a reference pixel, K is the reference pixel set, d'_p is the filtered depth pixel value of p, d_q is the depth pixel value of q, P and Q are the coordinate values of p and q in three-dimensional space, t_p and t_q are the texture pixel values of p and q, t̃_p and t̃_q are the texture pixel values at the same positions as p and q in the previous frame, t'_p is the filtered texture pixel value of p, and th is the preset texture-pixel difference threshold.
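Formulas (1) and (2) can be sketched as a single weighted-average loop. Gaussian kernels and the sigma values below are illustrative choices (the text only requires kernels that decrease with their input), and the helper names are hypothetical:

```python
import math

def gauss(x, sigma):
    # Decreasing kernel: output grows as the input value shrinks.
    return math.exp(-(x * x) / (2 * sigma * sigma))

def filter_depth(p3d, t_p, t_p_prev, refs, sigma_s=1.0, sigma_t=10.0, th=5.0):
    """refs: list of (q3d, t_q, t_q_prev, d_q) for the reference set K.
    Computes d'_p = sum(w_q * d_q) / sum(w_q) with
    w_q = f_s(||P-Q||) * f_t(|t_p - t_q|) * f_m(p, q)."""
    num = den = 0.0
    p_moving = abs(t_p - t_p_prev) > th
    for q3d, t_q, t_q_prev, d_q in refs:
        f_s = gauss(math.dist(p3d, q3d), sigma_s)   # true 3-D distance
        f_t = gauss(abs(t_p - t_q), sigma_t)
        f_m = 1.0 if (abs(t_q - t_q_prev) > th) == p_moving else 0.0
        w = f_s * f_t * f_m
        num += w * d_q
        den += w
    return num / den if den else None

# Two static references with equal texture and equal 3-D distance:
# the result is the plain mean of their depth values.
refs = [((0, 0, 1), 100, 100, 10.0), ((0, 0, 1), 100, 100, 20.0)]
filter_depth((0, 0, 0), 100, 100, refs)   # -> 15.0
```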
In a second aspect, an embodiment of the present invention provides a three-dimensional video filtering method, comprising:
projecting the pixels in the image plane into three-dimensional space, the pixels including a pixel to be filtered and a reference pixel set;
calculating the spatial proximity of the pixel to be filtered and each reference pixel in the reference pixel set in the three-dimensional space according to their coordinate values in that space, wherein the reference pixel set is located in the same frame as the pixel to be filtered and in adjacent frames;
calculating the texture-pixel-value similarity of the pixel to be filtered and each reference pixel in the reference pixel set according to their texture pixel values;
calculating the temporal proximity of the pixel to be filtered and each reference pixel according to the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel; and
determining filtering weights according to the spatial proximity, the texture-pixel-value similarity and the temporal proximity, and taking the weighted average of the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered.
With reference to the second aspect, in a first implementation of the second aspect, when a depth image is filtered, the filtering weights are determined according to the spatial proximity, the texture-pixel-value similarity corresponding to the depth pixels, and the temporal proximity, and the weighted average of the depth pixel values of the reference pixels in the reference pixel set is taken to obtain the filtering result of the depth pixel value of the pixel to be filtered in the depth image; or,
when a texture image is filtered, the filtering weights are determined according to the spatial proximity, the texture-pixel-value similarity and the temporal proximity, and the weighted average of the texture pixel values of the reference pixels in the reference pixel set is taken to obtain the filtering result of the texture pixel value of the pixel to be filtered in the texture image.
With reference to the second aspect or its first implementation, in a second implementation of the second aspect, projecting the pixels in the image plane into three-dimensional space comprises:
projecting the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video, the depth image information including the depth pixel values of the pixels.
With reference to the second implementation of the second aspect, in a third implementation of the second aspect, projecting the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video comprises:
calculating the coordinate value of the pixel after projection into three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
wherein R and T are the rotation matrix and translation vector of the reference camera, and A is the reference camera parameter matrix
A = [ f_x  r   o_x ]
    [ 0    f_y  o_y ]
    [ 0    0    1   ],
p = (x, y, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (X, Y, Z)ᵀ is the coordinate value of the pixel in three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are the normalized focal lengths in the horizontal and vertical directions respectively, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value in the image plane of the datum point, i.e. the intersection of the optical axis of the reference camera with the image plane.
With reference to the second aspect or any one of its first to third implementations, in a fourth implementation of the second aspect, the spatial proximity is calculated by taking the distance between the pixel to be filtered and the reference pixel in three-dimensional space as the input value of a function whose output value increases as the input value decreases;
the texture-pixel-value similarity is calculated by taking the difference between the texture pixel values of the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases;
the temporal proximity is calculated by taking the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel as the input value of a function whose output value increases as the input value decreases.
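All three terms share the same shape: a function whose output rises as its input falls. A Gaussian is one common choice for such a kernel (the function name and sigma are illustrative assumptions):

```python
import math

def decreasing_kernel(x, sigma=1.0):
    """Output value increases as the input value decreases, as required
    for f_s, f_t and f_tem; temporal proximity uses x = |i - n|."""
    return math.exp(-(x * x) / (2 * sigma * sigma))

# Nearer frames get larger temporal weights:
w0 = decreasing_kernel(abs(10 - 10))   # reference in the same frame
w2 = decreasing_kernel(abs(12 - 10))   # reference two frames away
```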
With reference to the second aspect or any one of its first to fourth implementations, in a fifth implementation of the second aspect, determining the filtering weights according to the spatial proximity, the texture-pixel-value similarity and the temporal proximity and taking the weighted average of the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered comprises:
calculating the filtering result of the depth pixel value of the pixel to be filtered according to formula (3):
d'_p = ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n)·d_{q_i} ) / ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n) )    (3)
or calculating the filtering result of the texture pixel value of the pixel to be filtered according to formula (4):
t'_p = ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n)·t_{q_i} ) / ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n) )    (4)
wherein f_s(P,Q_i) = f_s(||P − Q_i||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p,t_{q_i}) = f_t(||t_p − t_{q_i}||) is used to calculate the texture-pixel-value similarity of the pixel to be filtered and the reference pixel;
f_tem(i,n) = f_tem(||i − n||) is used to calculate the temporal proximity of the pixel to be filtered and the reference pixel;
wherein n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel and takes integer values in the interval [n − m, n + N], m and N are respectively the numbers of reference frames before and after the frame containing the pixel to be filtered, both nonnegative integers; p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, K_i is the reference pixel set in the i-th frame, d'_p is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, P and Q_i are the coordinate values in three-dimensional space of p and of q_i in the i-th frame, t_p and t_{q_i} are the texture pixel values of p and of q_i in the i-th frame, and t'_p is the filtered texture pixel value of p.
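Formula (3) extends the weighted average over a window of frames. A sketch under the same illustrative assumptions as before (Gaussian kernels, hypothetical helper names, arbitrary sigma values):

```python
import math

def gauss(x, sigma):
    return math.exp(-(x * x) / (2 * sigma * sigma))

def filter_depth_temporal(p3d, t_p, n, frames,
                          sigma_s=1.0, sigma_t=10.0, sigma_tem=2.0):
    """frames: dict mapping frame number i (n-m .. n+N) to its reference
    set K_i, each entry a (q3d, t_q, d_q) tuple.  Computes
    d'_p = sum_i sum_q w * d_q / sum_i sum_q w with
    w = f_s(||P - Q_i||) * f_t(|t_p - t_q|) * f_tem(|i - n|)."""
    num = den = 0.0
    for i, refs in frames.items():
        f_tem = gauss(abs(i - n), sigma_tem)   # temporal proximity
        for q3d, t_q, d_q in refs:
            w = (gauss(math.dist(p3d, q3d), sigma_s)
                 * gauss(abs(t_p - t_q), sigma_t) * f_tem)
            num += w * d_q
            den += w
    return num / den if den else None

# One reference per frame in a 3-frame window around n = 1; the
# current frame's reference dominates the average.
frames = {0: [((0, 0, 1), 100, 10.0)],
          1: [((0, 0, 1), 100, 14.0)],
          2: [((0, 0, 1), 100, 10.0)]}
```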
In a third aspect, an embodiment of the present invention provides a three-dimensional video filtering device, comprising:
a projection module, configured to project the pixels in the image plane into three-dimensional space, the pixels including a pixel to be filtered and a reference pixel set;
a computing module, configured to calculate the spatial proximity of the pixel to be filtered and each reference pixel in the reference pixel set in the three-dimensional space according to their coordinate values in that space, wherein the reference pixel set and the pixel to be filtered are located in the same frame;
the computing module being further configured to calculate the texture-pixel-value similarity of the pixel to be filtered and each reference pixel in the reference pixel set according to their texture pixel values;
the computing module being further configured to calculate the motion-feature consistency of the pixel to be filtered and each reference pixel according to the texture pixel values of the pixel to be filtered, of the reference pixel, and of the co-located pixels in the previous frame of the frame containing the pixel to be filtered; and
a filtering module, configured to determine filtering weights according to the spatial proximity, the texture-pixel-value similarity and the motion-feature consistency, and to take the weighted average of the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered.
With reference to the third aspect, in a first implementation of the third aspect, the filtering module is specifically configured to:
when a depth image is filtered, determine the filtering weights according to the spatial proximity, the texture-pixel-value similarity corresponding to the depth pixels and the motion-feature consistency, and take the weighted average of the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the pixel to be filtered in the depth image; or,
when a texture image is filtered, determine the filtering weights according to the spatial proximity, the texture-pixel-value similarity and the motion-feature consistency, and take the weighted average of the texture pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the texture pixel value of the pixel to be filtered in the texture image.
With reference to the third aspect or its first implementation, in a second implementation of the third aspect, the projection module is specifically configured to:
project the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video, the depth image information including the depth pixel values of the pixels.
With reference to the second implementation of the third aspect, in a third implementation of the third aspect, the projection module is specifically configured to:
calculate the coordinate value of the pixel after projection into three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
wherein R and T are the rotation matrix and translation vector of the reference camera, and A is the reference camera parameter matrix
A = [ f_x  r   o_x ]
    [ 0    f_y  o_y ]
    [ 0    0    1   ],
p = (x, y, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (X, Y, Z)ᵀ is the coordinate value of the pixel in three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are the normalized focal lengths in the horizontal and vertical directions respectively, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value in the image plane of the datum point, i.e. the intersection of the optical axis of the reference camera with the image plane.
With reference to the third aspect or any one of its first to third implementations, in a fourth implementation of the third aspect, the spatial proximity is calculated by taking the distance between the pixel to be filtered and the reference pixel in three-dimensional space as the input value of a function whose output value increases as the input value decreases;
the texture-pixel-value similarity is calculated by taking the difference between the texture pixel values of the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases;
the motion-feature consistency is obtained by calculating whether the motion features of the pixel to be filtered and the reference pixel are consistent, comprising:
when the difference between the texture pixel value of the pixel to be filtered and that of the co-located pixel in the previous frame, and the difference between the texture pixel value of the reference pixel and that of the co-located pixel in the previous frame, are simultaneously greater than or simultaneously less than a preset threshold, determining that the motion states of the pixel to be filtered and the reference pixel are consistent; otherwise, determining that the motion states of the pixel to be filtered and the reference pixel are inconsistent.
With reference to the third aspect or any one of its first to fourth implementations, in a fifth implementation of the third aspect, the filtering module is specifically configured to:
calculate the filtering result of the depth pixel value of the pixel to be filtered according to formula (1):
d'_p = ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q)·d_q ) / ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q) )    (1)
or calculate the filtering result of the texture pixel value of the pixel to be filtered according to formula (2):
t'_p = ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q)·t_q ) / ( Σ_{q∈K} f_s(P,Q)·f_t(t_p,t_q)·f_m(p,q) )    (2)
wherein f_s(P,Q) = f_s(||P − Q||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p,t_q) = f_t(||t_p − t_q||) is used to calculate the texture-pixel-value similarity of the pixel to be filtered and the reference pixel;
f_m(p,q) is the motion-feature consistency term, equal to 1 when |t_p − t̃_p| and |t_q − t̃_q| are simultaneously greater than or simultaneously less than the threshold th, and 0 otherwise;
wherein p is the pixel to be filtered, q is a reference pixel, K is the reference pixel set, d'_p is the filtered depth pixel value of p, d_q is the depth pixel value of q, P and Q are the coordinate values of p and q in three-dimensional space, t_p and t_q are the texture pixel values of p and q, t̃_p and t̃_q are the texture pixel values at the same positions as p and q in the previous frame, t'_p is the filtered texture pixel value of p, and th is the preset texture-pixel difference threshold.
In a fourth aspect, an embodiment of the present invention provides a three-dimensional video filtering device, comprising:
a projection module, configured to project the pixels in the image plane into three-dimensional space, the pixels including a pixel to be filtered and a reference pixel set;
a computing module, configured to calculate the spatial proximity of the pixel to be filtered and each reference pixel in the reference pixel set in the three-dimensional space according to their coordinate values in that space, wherein the reference pixel set is located in the same frame as the pixel to be filtered and in adjacent frames;
the computing module being further configured to calculate the texture-pixel-value similarity of the pixel to be filtered and each reference pixel in the reference pixel set according to their texture pixel values;
the computing module being further configured to calculate the temporal proximity of the pixel to be filtered and each reference pixel according to the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel; and
a filtering module, configured to determine filtering weights according to the spatial proximity, the texture-pixel-value similarity and the temporal proximity, and to take the weighted average of the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered.
With reference to the fourth aspect, in a first implementation of the fourth aspect, the filtering module is specifically configured to:
when a depth image is filtered, determine the filtering weights according to the spatial proximity, the texture-pixel-value similarity corresponding to the depth pixels and the temporal proximity, and take the weighted average of the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the pixel to be filtered in the depth image; or,
when a texture image is filtered, determine the filtering weights according to the spatial proximity, the texture-pixel-value similarity and the temporal proximity, and take the weighted average of the texture pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the texture pixel value of the pixel to be filtered in the texture image.
With reference to the fourth aspect or its first implementation, in a second implementation of the fourth aspect, the projection module is specifically configured to:
project the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video, the depth image information including the depth pixel values of the pixels.
With reference to the second implementation of the fourth aspect, in a third implementation of the fourth aspect, the projection module is specifically configured to:
calculate the coordinate value of the pixel after projection into three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
wherein R and T are the rotation matrix and translation vector of the reference camera, and A is the reference camera parameter matrix
A = [ f_x  r   o_x ]
    [ 0    f_y  o_y ]
    [ 0    0    1   ],
p = (x, y, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (X, Y, Z)ᵀ is the coordinate value of the pixel in three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are the normalized focal lengths in the horizontal and vertical directions respectively, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value in the image plane of the datum point, i.e. the intersection of the optical axis of the reference camera with the image plane.
With reference to the fourth aspect or any one of its first to third implementations, in a fourth implementation of the fourth aspect, the spatial proximity is calculated by taking the distance between the pixel to be filtered and the reference pixel in three-dimensional space as the input value of a function whose output value increases as the input value decreases;
the texture-pixel-value similarity is calculated by taking the difference between the texture pixel values of the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases;
the temporal proximity is calculated by taking the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel as the input value of a function whose output value increases as the input value decreases.
With reference to the fourth aspect or any one of its first to fourth implementations, in a fifth implementation of the fourth aspect, the filtering module is specifically configured to:
calculate the filtering result of the depth pixel value of the pixel to be filtered according to formula (3):
d'_p = ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n)·d_{q_i} ) / ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n) )    (3)
or calculate the filtering result of the texture pixel value of the pixel to be filtered according to formula (4):
t'_p = ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n)·t_{q_i} ) / ( Σ_{i=n−m}^{n+N} Σ_{q_i∈K_i} f_s(P,Q_i)·f_t(t_p,t_{q_i})·f_tem(i,n) )    (4)
wherein f_s(P,Q_i) = f_s(||P − Q_i||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p,t_{q_i}) = f_t(||t_p − t_{q_i}||) is used to calculate the texture-pixel-value similarity of the pixel to be filtered and the reference pixel;
f_tem(i,n) = f_tem(||i − n||) is used to calculate the temporal proximity of the pixel to be filtered and the reference pixel;
wherein n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel and takes integer values in the interval [n − m, n + N], m and N are respectively the numbers of reference frames before and after the frame containing the pixel to be filtered, both nonnegative integers; p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, K_i is the reference pixel set in the i-th frame, d'_p is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, P and Q_i are the coordinate values in three-dimensional space of p and of q_i in the i-th frame, t_p and t_{q_i} are the texture pixel values of p and of q_i in the i-th frame, and t'_p is the filtered texture pixel value of p.
The three-dimensional video filtering method and device of the embodiments of the present invention use the relationship between the pixel to be filtered and the reference pixel in real three-dimensional space to calculate the spatial proximity of the pixel to be filtered and the reference pixel in the three-dimensional space, the texture pixel value similarity, the motion feature consistency and the temporal proximity; the filtering weights are determined according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, or according to the spatial proximity, the texture pixel value similarity and the temporal proximity, and the pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the pixel to be filtered. Spatial proximity, texture pixel value similarity, motion feature consistency and temporal proximity are jointly considered when calculating the weights. Because the positions used in calculating the spatial proximity are positions in real three-dimensional space, and because a three-dimensional video is composed of a series of images collected at different moments so that pixels in different frames are also correlated, the continuity between frames is stronger after filtering with weights that take temporal proximity into account; in addition, considering the motion feature consistency between pixels improves the accuracy of the filter result and solves the problem in the prior art that the accuracy of the filter result is not high.
Brief description
In order to illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the proximity calculation of a prior-art bilateral filter;
Fig. 2 is a flow chart of embodiment one of the three-dimensional video filtering method of the present invention;
Fig. 3 is a pixel projection schematic diagram of method embodiment one of the present invention;
Fig. 4 is a flow chart of embodiment two of the three-dimensional video filtering method of the present invention;
Fig. 5 is a reference pixel selection schematic diagram of method embodiment two of the present invention;
Fig. 6 is a structural schematic diagram of an embodiment of the three-dimensional video filter of the present invention;
Fig. 7 is a structural schematic diagram of an embodiment of the three-dimensional video filtering apparatus of the present invention.
Specific embodiment
To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Fig. 2 is a flow chart of embodiment one of the three-dimensional video filtering method of the present invention, and Fig. 3 is a pixel projection schematic diagram of method embodiment one of the present invention. As shown in Fig. 2, the method of the present embodiment may include:
Step 201: project the pixels in the image plane into three-dimensional space; the pixels include the pixel to be filtered and the reference pixel set.
Optionally, projecting the pixels in the image plane into three-dimensional space includes:
projecting the pixels from the image plane into three-dimensional space using the depth image information, the viewpoint position information and the reference camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
Optionally, projecting the pixels from the image plane into three-dimensional space using the depth image information, the viewpoint position information and the reference camera parameter information provided by the three-dimensional video includes:
according to the formula P = R⁻¹(dA⁻¹p − T), calculating the coordinate value of the pixel after projection into three-dimensional space;
wherein R and T are the rotation matrix and translation vector of the reference camera, A is the reference camera parameter matrix, p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; fx and fy are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (ox, oy) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
As shown in Fig. 3, the plane in which the uv coordinates lie is the image plane, and pixel positions in three-dimensional space are expressed as coordinates in the world coordinate system. p is a pixel in the image plane, and the coordinate value of the pixel in the image plane is p = (u, v, 1)ᵀ. Applying the three-dimensional projection technique, the pixel is projected to the point P in the world coordinate system, and the coordinate value of the pixel in the world coordinate system is P = (x, y, z)ᵀ. This coordinate value can be calculated by the formula P = R⁻¹(dA⁻¹p − T), where R and T are the rotation matrix and translation vector of the reference camera, d is the depth pixel value of the pixel, which can be obtained from the depth map information provided by the three-dimensional video, and A is the reference camera parameter matrix, in which fx and fy are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (ox, oy) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
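As an illustrative sketch (not part of the patent text), the back-projection P = R⁻¹(dA⁻¹p − T) can be written as follows; the camera parameter values below are assumed for the example only:

```python
import numpy as np

def project_to_3d(u, v, d, A, R, T):
    """Back-project image pixel (u, v) with depth d into world space:
    P = R^-1 (d * A^-1 * p - T)."""
    p = np.array([u, v, 1.0])                      # homogeneous image point
    return np.linalg.inv(R) @ (d * np.linalg.inv(A) @ p - T)

# Assumed example intrinsics: fx, fy normalized focal lengths,
# r radial distortion coefficient, (ox, oy) reference (principal) point.
fx, fy, r, ox, oy = 1000.0, 1000.0, 0.0, 320.0, 240.0
A = np.array([[fx, r,  ox],
              [0., fy, oy],
              [0., 0., 1.]])
R = np.eye(3)          # reference camera rotation matrix
T = np.zeros(3)        # reference camera translation vector

# The reference point itself, at depth 2.5, lands on the optical axis.
P = project_to_3d(320.0, 240.0, 2.5, A, R, T)      # -> [0, 0, 2.5]
```

With a non-identity R and a non-zero T, the same call expresses the pixel's position in the world coordinate system of Fig. 3.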
Step 202: according to the coordinate values of the pixel to be filtered and the reference pixels in the reference pixel set in the three-dimensional space, calculate the spatial proximity of the pixel to be filtered and each reference pixel in the three-dimensional space; wherein the reference pixel set and the pixel to be filtered are in the same frame image.
Optionally, the spatial proximity is calculated by taking the distance in three-dimensional space between the pixel to be filtered and the reference pixel as the input value of a function; the output value of the function increases as the input value decreases.
Specifically, the spatial distance between two points reflects their degree of spatial proximity: the smaller the distance, the stronger the correlation and the greater the spatial proximity. Therefore, the spatial distance can be calculated from the coordinate values of the pixel to be filtered and the reference pixels in the reference pixel set in the three-dimensional space, and this spatial distance can be used as the input value of a function, for example a Gaussian function, to calculate the spatial proximity. The function used to calculate the spatial proximity may also be another function, provided that its output value increases as its input value decreases. The reference pixel set in the present embodiment is in the same frame as the pixel to be filtered.
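For illustration only, a Gaussian spatial-proximity function of the kind described in step 202 might look like this (sigma_s is an assumed tuning parameter, not specified in the text):

```python
import numpy as np

def spatial_proximity(P, Q, sigma_s=1.0):
    """Output increases as the 3-D distance between P and Q decreases."""
    dist = np.linalg.norm(np.asarray(P, float) - np.asarray(Q, float))
    return np.exp(-dist ** 2 / (2.0 * sigma_s ** 2))

w_near = spatial_proximity([0, 0, 2.0], [0.1, 0, 2.0])
w_far = spatial_proximity([0, 0, 2.0], [1.0, 0, 2.0])
# A closer reference pixel receives the larger weight: w_near > w_far.
```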
Step 203: according to the texture pixel values of the pixel to be filtered and the reference pixels in the reference pixel set, calculate the texture pixel value similarity of the pixel to be filtered and each reference pixel.
Optionally, the texture pixel value similarity is calculated by taking the difference between the texture pixel value of the pixel to be filtered and that of the reference pixel as the input value of a function; the output value of the function increases as the input value decreases.
Specifically, the degree of difference between the texture features of two points reflects their degree of similarity: the more similar the textures, the stronger the correlation and the greater the texture pixel value similarity. Therefore, the difference between the texture pixel values of the pixel to be filtered and the reference pixel can be calculated and used as the input value of a function, for example a Gaussian function, to calculate the texture pixel value similarity. The function used to calculate the texture pixel value similarity may also be another function, provided that its output value increases as its input value decreases.
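A minimal sketch of such a similarity function, again using an assumed Gaussian kernel and an assumed sigma_t:

```python
import math

def texture_similarity(t_p, t_q, sigma_t=10.0):
    """Output increases as the texture pixel value difference decreases."""
    return math.exp(-(float(t_p) - float(t_q)) ** 2 / (2.0 * sigma_t ** 2))

w_similar = texture_similarity(128, 130)    # small texture difference
w_distinct = texture_similarity(128, 200)   # large texture difference
```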
Step 204: according to the texture pixel values of the pixel to be filtered, the reference pixels in the reference pixel set, and the pixels at the same positions in the previous frame image of the frame containing the pixel to be filtered, calculate the motion feature consistency of the pixel to be filtered and each reference pixel.
Optionally, the motion feature consistency is obtained by calculating whether the motion features of the pixel to be filtered and of the reference pixel are consistent, including:
when the difference between the texture pixel value of the pixel to be filtered and that of the pixel at the corresponding position in the previous frame, and the difference between the texture pixel value of the reference pixel and that of the pixel at the corresponding position in the previous frame, are both greater than, or both less than, a preset threshold, determining that the motion features of the pixel to be filtered and the reference pixel are consistent; otherwise, determining that the motion features of the pixel to be filtered and the reference pixel are inconsistent.
Specifically, the relative motion relationship between two points also reflects their motion similarity: the more similar the motions, the stronger the correlation. Since it is difficult to obtain per-pixel motion information from a three-dimensional video sequence, the embodiment of the present invention uses the difference between the texture pixel values at the same image-plane position in two consecutive frames to judge whether a pixel is moving: when the difference is greater than a preset threshold, the motion feature of the pixel is considered to be "moving"; otherwise, the motion feature of the pixel is considered to be "not moving". Further, the differences, over two consecutive frames, of the pixel to be filtered and of the reference pixel relative to the texture pixels at the same image-plane positions are used to judge whether the motion features of the pixel to be filtered and the reference pixel are consistent: when the two differences are both greater than, or both less than, the preset threshold, the motion features of the pixel to be filtered and the reference pixel are considered consistent; otherwise, the motion features are inconsistent. If the motion features of the pixels are consistent, they are considered correlated; otherwise, they are considered uncorrelated.
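The consistency test of step 204 can be sketched as follows; th = 10 is an assumed value inside the 6–20 range the text later suggests for the threshold:

```python
def motion_consistent(t_p, t_p_prev, t_q, t_q_prev, th=10):
    """Two pixels are motion-consistent when their frame-to-frame texture
    differences are both greater than th, or both not greater than th."""
    p_moving = abs(t_p - t_p_prev) > th
    q_moving = abs(t_q - t_q_prev) > th
    return p_moving == q_moving

both_static = motion_consistent(100, 101, 50, 52)   # neither pixel moved
mixed = motion_consistent(100, 150, 50, 52)         # only p moved
```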
Step 205: determine the filtering weights according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, and weight and average the pixel values of the reference pixels in the reference pixel set to obtain the filter result of the pixel to be filtered.
Optionally, when a depth image is filtered, the filtering weights are determined according to the spatial proximity, the texture pixel value similarity corresponding to the depth pixel, and the motion feature consistency, and the depth pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the depth pixel value of the depth image pixel to be filtered; or,
when a texture image is filtered, the filtering weights are determined according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, and the texture pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the texture pixel value of the texture image pixel to be filtered.
Optionally, determining the filtering weights according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, and weighting and averaging the pixel values of the reference pixels in the reference pixel set to obtain the filter result of the pixel to be filtered, includes:
according to formula (1):
d_p' = (1/k_p) · Σ_{q ∈ K} f_s(p, q) · f_t(t_p, t_q) · f_m(p, q) · d_q,
where k_p is the sum of the weights f_s(p, q) · f_t(t_p, t_q) · f_m(p, q), calculating and obtaining the filter result of the depth pixel value of the pixel to be filtered; or,
according to formula (2):
t_p'' = (1/k_p) · Σ_{q ∈ K} f_s(p, q) · f_t(t_p, t_q) · f_m(p, q) · t_q,
calculating and obtaining the filter result of the texture pixel value of the pixel to be filtered;
wherein f_s(p, q) = f_s(||p − q||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_q) = f_t(||t_p − t_q||) is used to calculate the texture pixel value similarity of the pixel to be filtered and the reference pixel;
f_m(p, q) is the motion feature consistency term, determined by comparing |t_p − t_p'| and |t_q − t_q'| with the threshold th;
wherein p is the pixel to be filtered, q is a reference pixel, K is the reference pixel set, d_p' is the filtered depth pixel value of p, d_q is the depth pixel value of q, p and q are the coordinate values of p and q in three-dimensional space, t_p and t_q are the texture pixel values of p and q, t_p' and t_q' are the texture pixel values at the same positions as p and q in the previous frame, t_p'' is the filtered texture pixel value of p, and th is the preset texture pixel difference threshold.
Specifically, according to formula (1), the reference pixels in the reference pixel set can be weighted and averaged to calculate the filter result of the depth pixel value of the pixel to be filtered; according to formula (2), the reference pixels in the reference pixel set can be weighted and averaged to calculate the filter result of the texture pixel value of the pixel to be filtered;
wherein f_s(p, q) = f_s(||p − q||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel; the input value of this function is the spatial distance between the pixel to be filtered and the reference pixel, and the output value of the function increases as the input value decreases;
f_t(t_p, t_q) = f_t(||t_p − t_q||) is used to calculate the texture pixel value similarity of the pixel to be filtered and the reference pixel; the input value of this function is the difference between the texture pixel values of the pixel to be filtered and the reference pixel, and the output value of the function increases as the input value decreases;
wherein p is the pixel to be filtered, q is a reference pixel, and K is the reference pixel set, which is generally taken to be a square region centered on the pixel to be filtered, with a size of 5×5 or 7×7; d_p' is the filtered depth pixel value of p, d_q is the depth pixel value of q, p and q are the coordinate values of p and q in three-dimensional space, t_p and t_q are the texture pixel values of p and q, and t_p' and t_q' are the texture pixel values at the same positions as p and q in the previous frame, where "the same position" refers to the corresponding identical position in the image plane; th is the preset texture pixel difference threshold. th is the threshold for judging whether the motion features of pixels are consistent; it can be chosen according to the content of the three-dimensional video sequence and is generally 6 to 20. When it is chosen appropriately, the boundaries of moving objects can be better distinguished, so that object boundaries are clearer after filtering.
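Putting steps 202 to 205 together, a single-frame depth filter in the spirit of formulas (1) and (2) might be sketched as below; the Gaussian kernels, the sigma values and the binary (0/1) form of the motion-consistency term are illustrative assumptions, not the patent's exact definitions:

```python
import numpy as np

def filter_depth(p3d, t_p, t_p_prev, refs, sigma_s=1.0, sigma_t=10.0, th=10):
    """Normalized weighted average of reference depth values.
    refs: iterable of (P_q, d_q, t_q, t_q_prev) tuples for the set K."""
    num = den = 0.0
    for P_q, d_q, t_q, t_q_prev in refs:
        # spatial proximity in 3-D space
        f_s = np.exp(-np.sum((np.asarray(p3d, float) - np.asarray(P_q, float)) ** 2)
                     / (2 * sigma_s ** 2))
        # texture pixel value similarity
        f_t = np.exp(-(t_p - t_q) ** 2 / (2 * sigma_t ** 2))
        # motion features consistent when both frame-to-frame differences
        # lie on the same side of the threshold th
        f_m = 1.0 if (abs(t_p - t_p_prev) > th) == (abs(t_q - t_q_prev) > th) else 0.0
        w = f_s * f_t * f_m
        num += w * d_q
        den += w
    return num / den if den > 0 else None

refs = [([0, 0, 0], 10.0, 100, 100),      # reference at the same 3-D point
        ([0.1, 0, 0], 20.0, 100, 100)]    # slightly farther reference
d_p = filter_depth([0, 0, 0], 100, 100, refs)
```

Replacing d_q by t_q in the accumulation gives the texture-image variant of formula (2).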
It should be noted that in the present embodiment, steps 202, 203 and 204 may be performed in any order.
The present embodiment uses the relationship between the pixel to be filtered and the reference pixel in real three-dimensional space to calculate the spatial proximity of the pixel to be filtered and the reference pixel in the three-dimensional space, the texture pixel value similarity and the motion feature consistency; the filtering weights are determined according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, and the pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the pixel to be filtered. Spatial proximity, texture pixel value similarity and motion feature consistency are jointly considered when calculating the weights. Because the positions used in calculating the spatial proximity are positions in real three-dimensional space, and because the motion feature consistency between pixels is additionally considered, the accuracy of the filter result is improved, solving the problem in the prior art that the accuracy of the filter result is not high.
Fig. 4 is a flow chart of embodiment two of the three-dimensional video filtering method of the present invention, and Fig. 5 is a reference pixel selection schematic diagram of method embodiment two of the present invention. As shown in Fig. 4, the method of the present embodiment may include:
Step 401: project the pixels in the image plane into three-dimensional space; the pixels include the pixel to be filtered and the reference pixel set.
Optionally, projecting the pixels in the image plane into three-dimensional space includes:
projecting the pixels from the image plane into three-dimensional space using the depth image information, the viewpoint position information and the reference camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
Optionally, projecting the pixels from the image plane into three-dimensional space using the depth image information, the viewpoint position information and the reference camera parameter information provided by the three-dimensional video includes:
according to the formula P = R⁻¹(dA⁻¹p − T), calculating the coordinate value of the pixel after projection into three-dimensional space;
wherein R and T are the rotation matrix and translation vector of the reference camera, A is the reference camera parameter matrix, p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; fx and fy are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (ox, oy) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
Step 402: according to the coordinate values of the pixel to be filtered and the reference pixels in the reference pixel set in the three-dimensional space, calculate the spatial proximity of the pixel to be filtered and each reference pixel in the three-dimensional space; wherein the reference pixel set is located in the same frame image as the pixel to be filtered and in adjacent frame images.
Optionally, the spatial proximity is calculated by taking the distance in three-dimensional space between the pixel to be filtered and the reference pixel as the input value of a function; the output value of the function increases as the input value decreases.
Step 403: according to the texture pixel values of the pixel to be filtered and the reference pixels in the reference pixel set, calculate the texture pixel value similarity of the pixel to be filtered and each reference pixel.
Optionally, the texture pixel value similarity is calculated by taking the difference between the texture pixel value of the pixel to be filtered and that of the reference pixel as the input value of a function; the output value of the function increases as the input value decreases.
The implementation principles of step 401, step 402 and step 403 are similar to those in embodiment one and are not repeated here.
Step 404: according to the time interval between the frame containing the pixel to be filtered and the frames containing the reference pixels in the reference pixel set, calculate the temporal proximity of the pixel to be filtered and each reference pixel.
Optionally, the temporal proximity is calculated by taking the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel as the input value of a function; the output value of the function increases as the input value decreases.
Specifically, because the pixels in different frames jointly reflect the state of an object over a period of time, a certain correlation exists between them, and this kind of correlation can be used in the filtering method for images in a three-dimensional video. Therefore, on the basis of the weight calculation method of embodiment one above, the present invention expands the selection range of the reference pixels from the frame currently containing the pixel to be filtered to its adjacent frames (filter reference frames), so as to increase the continuity between frames after filtering. As shown in Fig. 5, in each filter reference frame, the selection range of the reference pixels is consistent with the selection range of the reference pixels in the frame to be filtered. The n-th frame is the frame currently containing the pixel to be filtered; its preceding m frames and following N frames are selected as filter reference frames, and the reference pixel window of each frame is identical in coordinates and size to the reference pixel window of the n-th frame (the coordinates here refer to coordinates on the image plane). K_i is the reference pixel set in the i-th frame, i takes integer values in the interval [n − m, n + N], and m and N are non-negative integers; N = 0 indicates that the reference is obtained only from frames encoded or decoded before the frame to be filtered.
The distance between two points in the time domain reflects their degree of temporal proximity: the smaller the temporal distance, the stronger the correlation and the greater the temporal proximity. Therefore, the time interval can be calculated from the frames in which the pixel to be filtered and the reference pixel are located, and this time interval can be used as the input value of a function, for example a Gaussian function, to calculate the temporal proximity. The function used to calculate the temporal proximity may also be another function, provided that its output value increases as its input value decreases.
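A sketch of one possible temporal-proximity function, assuming a Gaussian of the frame interval with an assumed sigma_tem:

```python
import math

def temporal_proximity(i, n, sigma_tem=1.5):
    """Output increases as the frame interval |i - n| decreases."""
    return math.exp(-float(i - n) ** 2 / (2.0 * sigma_tem ** 2))

w_same = temporal_proximity(7, 7)    # reference in the current frame
w_prev = temporal_proximity(5, 7)    # reference two frames earlier
```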
Step 405: determine the filtering weights according to the spatial proximity, the texture pixel value similarity and the temporal proximity, and weight and average the pixel values of the reference pixels in the reference pixel set to obtain the filter result of the pixel to be filtered.
Optionally, when a depth image is filtered, the filtering weights are determined according to the spatial proximity, the texture pixel value similarity corresponding to the depth pixel, and the temporal proximity, and the depth pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the depth pixel value of the depth image pixel to be filtered; or,
when a texture image is filtered, the filtering weights are determined according to the spatial proximity, the texture pixel value similarity and the temporal proximity, and the texture pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the texture pixel value of the texture image pixel to be filtered.
Optionally, determining the filtering weights according to the spatial proximity, the texture pixel value similarity and the temporal proximity, and weighting and averaging the pixel values of the reference pixels in the reference pixel set to obtain the filter result of the pixel to be filtered, includes:
according to formula (3):
d_p' = (1/k_p) · Σ_{i=n−m..n+N} Σ_{q_i ∈ K_i} f_s(p, q_i) · f_t(t_p, t_{q_i}) · f_tem(i, n) · d_{q_i},
where k_p is the sum of the weights f_s(p, q_i) · f_t(t_p, t_{q_i}) · f_tem(i, n), calculating and obtaining the filter result of the depth pixel value of the pixel to be filtered; or,
according to formula (4):
t_p' = (1/k_p) · Σ_{i=n−m..n+N} Σ_{q_i ∈ K_i} f_s(p, q_i) · f_t(t_p, t_{q_i}) · f_tem(i, n) · t_{q_i},
calculating and obtaining the filter result of the texture pixel value of the pixel to be filtered;
wherein f_s(p, q_i) = f_s(||p − q_i||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_{q_i}) = f_t(||t_p − t_{q_i}||) is used to calculate the texture pixel value similarity of the pixel to be filtered and the reference pixel;
f_tem(i, n) = f_tem(||i − n||) is used to calculate the temporal proximity of the pixel to be filtered and the reference pixel;
wherein n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel, i takes integer values in the interval [n − m, n + N], m and N are respectively the numbers of reference frames before and after the frame containing the pixel to be filtered, and m and N are non-negative integers; p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, K_i is the reference pixel set in the i-th frame, d_p' is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, p and q_i are the coordinate values of p and q_i in three-dimensional space, t_p and t_{q_i} are respectively the texture pixel values of p and of q_i in the i-th frame, and t_p' is the filtered texture pixel value of p.
Specifically, according to formula (3), the reference pixels in the reference pixel set can be weighted and averaged to calculate the filter result of the depth pixel value of the pixel to be filtered; according to formula (4), the reference pixels in the reference pixel set can be weighted and averaged to calculate the filter result of the texture pixel value of the pixel to be filtered;
wherein f_s(p, q_i) = f_s(||p − q_i||) is used to calculate the spatial proximity of the pixel to be filtered and the reference pixel; the input value of this function is the spatial distance between the pixel to be filtered and the reference pixel, and the output value of the function increases as the input value decreases;
f_t(t_p, t_{q_i}) = f_t(||t_p − t_{q_i}||) is used to calculate the texture pixel value similarity of the pixel to be filtered and the reference pixel; the input value of this function is the difference between the texture pixel values of the pixel to be filtered and the reference pixel, and the output value of the function increases as the input value decreases;
f_tem(i, n) = f_tem(||i − n||) is used to calculate the temporal proximity of the pixel to be filtered and the reference pixel; the input value of this function is the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel, and the output value of the function increases as the input value decreases;
wherein n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel, and m and N are respectively the numbers of reference frames before and after the frame containing the pixel to be filtered. Usually m and N can each be 1 to 3, because as the time interval increases, the correlation between frames becomes very small and can be ignored. p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, and K_i is the reference pixel set in the i-th frame; this set is generally taken to be a square region centered on the pixel to be filtered, with a size of 5×5 or 7×7. d_p' is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, p and q_i are the coordinate values of p and q_i in three-dimensional space, t_p and t_{q_i} are respectively the texture pixel values of p and of q_i in the i-th frame, and t_p' is the filtered texture pixel value of p.
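Extending the single-frame idea to the multi-frame case of formulas (3) and (4), each reference pixel carries its frame number i, and a temporal factor f_tem(|i − n|) enters the weight. The Gaussian kernels and sigma values below are illustrative assumptions:

```python
import numpy as np

def filter_depth_multiframe(p3d, t_p, n, refs,
                            sigma_s=1.0, sigma_t=10.0, sigma_tem=1.5):
    """Normalized weighted average over references from frames [n-m, n+N].
    refs: iterable of (i, P_q, d_q, t_q) tuples drawn from the sets K_i."""
    num = den = 0.0
    for i, P_q, d_q, t_q in refs:
        # spatial proximity in 3-D space
        f_s = np.exp(-np.sum((np.asarray(p3d, float) - np.asarray(P_q, float)) ** 2)
                     / (2 * sigma_s ** 2))
        # texture pixel value similarity
        f_t = np.exp(-(t_p - t_q) ** 2 / (2 * sigma_t ** 2))
        # temporal proximity of the reference frame i to the current frame n
        f_tem = np.exp(-(i - n) ** 2 / (2 * sigma_tem ** 2))
        w = f_s * f_t * f_tem
        num += w * d_q
        den += w
    return num / den

refs = [(7, [0, 0, 0], 10.0, 100),   # reference in the current frame n = 7
        (6, [0, 0, 0], 20.0, 100)]   # reference one frame earlier
d_p = filter_depth_multiframe([0, 0, 0], 100, 7, refs)
# The older reference is down-weighted, so d_p lies below the midpoint 15.
```

Accumulating t_q instead of d_q gives the texture-image variant of formula (4).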
It should be noted that in the present embodiment, steps 402, 403 and 404 may be performed in any order.
The present embodiment uses the relationship between the pixel to be filtered and the reference pixel in real three-dimensional space to calculate the spatial proximity of the pixel to be filtered and the reference pixel in the three-dimensional space, the texture pixel value similarity and the temporal proximity; the filtering weights are determined according to the spatial proximity, the texture pixel value similarity and the temporal proximity, and the pixel values of the reference pixels in the reference pixel set are weighted and averaged to obtain the filter result of the pixel to be filtered. Spatial proximity, texture pixel value similarity and temporal proximity are jointly considered when calculating the weights. Because the positions used in calculating the spatial proximity are positions in real three-dimensional space, and because a three-dimensional video is composed of a series of images collected at different moments so that pixels in different frames are also correlated, the continuity between frames is stronger after filtering with weights that take temporal proximity into account, which improves the accuracy of the filter result and solves the problem in the prior art that the accuracy of the filter result is not high.
Fig. 6 is a structural schematic diagram of an embodiment of the three-dimensional video filter of the present invention. As shown in Fig. 6, the three-dimensional video filtering device 60 of the present embodiment may include a projection module 601, a calculation module 602 and a filtering module 603;
wherein the projection module 601 is configured to project the pixels in the image plane into three-dimensional space; the pixels include the pixel to be filtered and the reference pixel set;
the calculation module 602 is configured to calculate, according to the coordinate values of the pixel to be filtered and the reference pixels in the reference pixel set in the three-dimensional space, the spatial proximity of the pixel to be filtered and each reference pixel in the three-dimensional space; wherein the reference pixel set and the pixel to be filtered are in the same frame image;
the calculation module 602 is further configured to calculate, according to the texture pixel values of the pixel to be filtered and the reference pixels in the reference pixel set, the texture pixel value similarity of the pixel to be filtered and each reference pixel;
the calculation module 602 is further configured to calculate, according to the texture pixel values of the pixel to be filtered, the reference pixels in the reference pixel set, and the pixels at the same positions in the previous frame image of the frame containing the pixel to be filtered, the motion feature consistency of the pixel to be filtered and each reference pixel;
the filtering module 603 is configured to determine the filtering weights according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, and to weight and average the pixel values of the reference pixels in the reference pixel set to obtain the filter result of the pixel to be filtered.
Optionally, the filtering module 603 is specifically configured to:
when a depth image is filtered, determine the filtering weights according to the spatial proximity, the texture pixel value similarity corresponding to the depth pixel, and the motion feature consistency, and weight and average the depth pixel values of the reference pixels in the reference pixel set to obtain the filter result of the depth pixel value of the depth image pixel to be filtered; or,
when a texture image is filtered, determine the filtering weights according to the spatial proximity, the texture pixel value similarity and the motion feature consistency, and weight and average the texture pixel values of the reference pixels in the reference pixel set to obtain the filter result of the texture pixel value of the texture image pixel to be filtered.
Optionally, the projection module 601 is specifically configured to:
project the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
Optionally, the projection module 601 is specifically configured to:
calculate the coordinate value of the pixel after projection into the three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
where R and T are the rotation matrix and the translation vector of the reference camera, and A is the reference-camera parameter matrix

    A = | f_x  r    o_x |
        | 0    f_y  o_y |
        | 0    0    1   |

p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
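The back-projection step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the matrix values in the example (identity extrinsics and intrinsics) are placeholder assumptions, and R, T, A are capitalized for readability.

```python
import numpy as np

def project_to_3d(u, v, d, R, T, A):
    """Back-project an image-plane pixel (u, v) with depth pixel value d
    into 3-D space: P = R^-1 (d * A^-1 * p - T), with p homogeneous."""
    p = np.array([u, v, 1.0])                 # homogeneous image coordinates
    P = np.linalg.inv(R) @ (d * np.linalg.inv(A) @ p - T)
    return P                                  # (x, y, z) in 3-D space

# With identity camera parameters the pixel maps to depth-scaled coordinates.
R, T, A = np.eye(3), np.zeros(3), np.eye(3)
print(project_to_3d(2.0, 3.0, 4.0, R, T, A))
```

In practice R, T and A would be read from the reference-camera parameter information carried by the three-dimensional video stream.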
Optionally, the spatial proximity is calculated by using the distance between the pixel to be filtered and the reference pixel in the three-dimensional space as the input value of a function, the output value of the function increasing as the input value decreases.
The texel-value similarity is calculated by using the difference between the texel values of the pixel to be filtered and the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases.
The motion-feature consistency is obtained by calculating whether the motion features of the pixel to be filtered and the reference pixel are consistent, comprising:
when the difference between the texel value of the pixel to be filtered and that of the pixel at the corresponding position in the previous frame, and the difference between the texel value of the reference pixel and that of the pixel at the corresponding position in the previous frame, are both greater than or both less than a preset threshold, determining that the motion states of the pixel to be filtered and the reference pixel are consistent; otherwise, determining that the motion states of the pixel to be filtered and the reference pixel are inconsistent.
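The three weight terms just described can be sketched as below. The Gaussian kernel shapes, the sigma values, and the consistent/inconsistent weights (1.0 and 0.1) are illustrative assumptions; the text only requires the output to increase as the input decreases.

```python
import math

def f_s(dist, sigma_s=1.0):
    """Spatial proximity: decreases with the 3-D distance between p and q."""
    return math.exp(-dist**2 / (2 * sigma_s**2))

def f_t(diff, sigma_t=10.0):
    """Texel-value similarity: decreases with the texel-value difference."""
    return math.exp(-diff**2 / (2 * sigma_t**2))

def f_m(tp, tp_prev, tq, tq_prev, th=5.0, w_same=1.0, w_diff=0.1):
    """Motion-feature consistency: p and q are consistent when their
    frame-to-frame texel differences are both above or both below th."""
    moving_p = abs(tp - tp_prev) > th
    moving_q = abs(tq - tq_prev) > th
    return w_same if moving_p == moving_q else w_diff

# A nearby, similar, co-moving reference pixel receives a large combined weight.
print(f_s(0.5) * f_t(2.0) * f_m(100, 100, 98, 99))
```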
Optionally, the filtering module 603 is specifically configured to:
calculate the filtering result of the depth pixel value of the pixel to be filtered according to formula (1):

d_p' = ( Σ_{q∈K} f_s(p, q)·f_t(t_p, t_q)·f_m(t_p, t_p', t_q, t_q')·d_q ) / ( Σ_{q∈K} f_s(p, q)·f_t(t_p, t_q)·f_m(t_p, t_p', t_q, t_q') )    (1)

or, calculate the filtering result of the texel value of the pixel to be filtered according to formula (2), which is obtained from formula (1) by replacing the depth pixel values d_q with the texel values t_q;
where f_s(p, q) = f_s(||p − q||) calculates the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_q) = f_t(||t_p − t_q||) calculates the texel-value similarity of the pixel to be filtered and the reference pixel;
f_m(t_p, t_p', t_q, t_q') calculates the motion-feature consistency of the pixel to be filtered and the reference pixel, taking the larger value when |t_p − t_p'| and |t_q − t_q'| are both greater than, or both not greater than, the threshold th;
where p is the pixel to be filtered, q is a reference pixel, K is the reference pixel set, d_p' is the filtered depth pixel value of p, d_q is the depth pixel value of q, p and q in f_s denote the coordinate values of p and q in the three-dimensional space, t_p and t_q are the texel values of p and q, t_p' and t_q' are the texel values at the same positions in the previous frame, and th is the preset texel difference threshold.
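A minimal sketch of the normalized weighted average of formula (1) follows. Only the three-factor weight structure is taken from the text; the Gaussian kernels, sigmas, and the 0.1 inconsistency weight are assumptions of this sketch.

```python
import math

def filter_depth(p, refs, tp, tp_prev, th=5.0, sigma_s=1.0, sigma_t=10.0):
    """p: 3-D coords of the pixel to be filtered; tp/tp_prev its texel values
    in the current and previous frame.
    refs: list of (q_coords, d_q, t_q, t_q_prev) reference-pixel tuples."""
    num = den = 0.0
    for q, dq, tq, tq_prev in refs:
        w = math.exp(-math.dist(p, q)**2 / (2 * sigma_s**2))   # f_s
        w *= math.exp(-(tp - tq)**2 / (2 * sigma_t**2))        # f_t
        same_motion = (abs(tp - tp_prev) > th) == (abs(tq - tq_prev) > th)
        w *= 1.0 if same_motion else 0.1                       # f_m
        num += w * dq
        den += w
    return num / den                                           # d_p'

# Two co-moving references with depths 10 and 12 yield a value in between.
refs = [((0, 0, 0), 10.0, 100, 100), ((1, 0, 0), 12.0, 101, 101)]
print(filter_depth((0.2, 0, 0), refs, 100, 100))
```

Formula (2) would be the same loop accumulating `w * tq` instead of `w * dq`.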
The device of this embodiment may be used to execute the technical solution of the method embodiment shown in Fig. 2; its implementation principle and technical effect are similar and are not repeated here.
In embodiment two of the three-dimensional video filtering device of the present invention, the device of this embodiment is based on the device structure shown in Fig. 6. Further, in the three-dimensional video filtering device 60 of this embodiment, the projection module 601 is configured to project pixels in the image plane into three-dimensional space; the pixels include a pixel to be filtered and a reference pixel set.
The computing module 602 is configured to calculate the spatial proximity, in the three-dimensional space, of the pixel to be filtered and a reference pixel in the reference pixel set according to their coordinate values in the three-dimensional space; the reference pixel set is located in the same frame image as the pixel to be filtered and in adjacent frame images.
The computing module 602 is further configured to calculate the texel-value similarity of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered and of the reference pixel in the reference pixel set.
The computing module 602 is further configured to calculate the time-domain proximity of the pixel to be filtered and the reference pixel according to the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel in the reference pixel set.
The filtering module 603 is configured to determine the filtering weights according to the spatial proximity, the texel-value similarity and the time-domain proximity, and to weighted-average the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered.
Optionally, the filtering module 603 is specifically configured to:
when a depth image is filtered, determine the filtering weights according to the spatial proximity, the texel-value similarity corresponding to the depth pixel, and the time-domain proximity, and weighted-average the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the depth-image pixel to be filtered; or,
when a texture image is filtered, determine the filtering weights according to the spatial proximity, the texel-value similarity and the time-domain proximity, and weighted-average the texel values of the reference pixels in the reference pixel set to obtain the filtering result of the texel value of the texture-image pixel to be filtered.
Optionally, the projection module 601 is specifically configured to:
project the pixels from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
Optionally, the projection module 601 is specifically configured to:
calculate the coordinate value of the pixel after projection into the three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
where R and T are the rotation matrix and the translation vector of the reference camera, and A is the reference-camera parameter matrix

    A = | f_x  r    o_x |
        | 0    f_y  o_y |
        | 0    0    1   |

p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
Optionally, the spatial proximity is calculated by using the distance between the pixel to be filtered and the reference pixel in the three-dimensional space as the input value of a function, the output value of the function increasing as the input value decreases.
The texel-value similarity is calculated by using the difference between the texel values of the pixel to be filtered and the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases.
The time-domain proximity is calculated by using the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases.
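The time-domain proximity term depends only on the frame-number gap |i − n| and, like the other weights, must grow as its input shrinks. A sketch, with an assumed Gaussian form and sigma:

```python
import math

def f_tem(i, n, sigma_tem=2.0):
    """Time-domain proximity: largest for the current frame (i == n),
    falling off as the frame gap |i - n| grows."""
    return math.exp(-abs(i - n)**2 / (2 * sigma_tem**2))

# The current frame gets full weight; neighbouring frames get less.
print([round(f_tem(i, n=5), 3) for i in range(3, 8)])
```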
Optionally, the filtering module 603 is specifically configured to:
calculate the filtering result of the depth pixel value of the pixel to be filtered according to formula (3):

d_p' = ( Σ_i Σ_{q_i∈K_i} f_s(p, q_i)·f_t(t_p, t_{q_i})·f_tem(i, n)·d_{q_i} ) / ( Σ_i Σ_{q_i∈K_i} f_s(p, q_i)·f_t(t_p, t_{q_i})·f_tem(i, n) )    (3)

or, calculate the filtering result of the texel value of the pixel to be filtered according to formula (4), which is obtained from formula (3) by replacing the depth pixel values d_{q_i} with the texel values t_{q_i};
where f_s(p, q_i) = f_s(||p − q_i||) calculates the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_{q_i}) = f_t(||t_p − t_{q_i}||) calculates the texel-value similarity of the pixel to be filtered and the reference pixel;
f_tem(i, n) = f_tem(||i − n||) calculates the time-domain proximity of the pixel to be filtered and the reference pixel;
where n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel, and i takes integer values in the interval [n − m, n + m'], m and m' being respectively the numbers of reference frames before and after the frame containing the pixel to be filtered; m and m' are non-negative integers; p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, K_i is the reference pixel set in the i-th frame, d_p' is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, p and q_i in f_s denote the coordinate values in the three-dimensional space of p and of q_i in the i-th frame, and t_p and t_{q_i} are respectively the texel values of p and of q_i in the i-th frame.
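The spatio-temporal double sum of formula (3) can be sketched as below. Only the f_s · f_t · f_tem weight structure is from the text; the kernel shapes and sigmas are assumptions of this sketch.

```python
import math

def filter_depth_temporal(p, tp, frames, n,
                          sigma_s=1.0, sigma_t=10.0, sigma_tem=2.0):
    """p, tp: 3-D coords and texel value of the pixel to be filtered; n its
    frame number. frames: dict mapping frame number i to a list of
    (q_coords, d_q, t_q) reference tuples drawn from that frame."""
    num = den = 0.0
    for i, refs in frames.items():
        w_tem = math.exp(-abs(i - n)**2 / (2 * sigma_tem**2))      # f_tem
        for q, dq, tq in refs:
            w = w_tem
            w *= math.exp(-math.dist(p, q)**2 / (2 * sigma_s**2))  # f_s
            w *= math.exp(-(tp - tq)**2 / (2 * sigma_t**2))        # f_t
            num += w * dq
            den += w
    return num / den                                               # d_p'

# A reference in the current frame (i = 5) outweighs one in the previous frame,
# so the result lands between the two depths, closer to the current frame's.
frames = {4: [((0, 0, 0), 10.0, 100)], 5: [((0, 0, 0), 11.0, 100)]}
print(filter_depth_temporal((0, 0, 0), 100, frames, n=5))
```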
The device of this embodiment may be used to execute the technical solution of the method embodiment shown in Fig. 4; its implementation principle and technical effect are similar and are not repeated here.
Fig. 7 is a schematic structural diagram of an embodiment of the three-dimensional video filtering apparatus of the present invention. As shown in Fig. 7, the three-dimensional video filtering apparatus 70 provided by this embodiment includes a processor 701 and a memory 702, the memory 702 being configured to store execution instructions. When the three-dimensional video filtering apparatus 70 runs, the processor 701 communicates with the memory 702 and invokes the execution instructions in the memory 702 to execute the technical solution described in any of the method embodiments; its implementation principle and technical effect are similar and are not repeated here.
It should be understood that, in the several embodiments provided in this application, the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units or modules is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or modules may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or modules may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disk or optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein, and that such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (20)
1. A three-dimensional video filtering method, characterized by comprising:
projecting pixels in an image plane into three-dimensional space, the pixels comprising a pixel to be filtered and a reference pixel set;
calculating the spatial proximity, in the three-dimensional space, of the pixel to be filtered and a reference pixel in the reference pixel set according to their coordinate values in the three-dimensional space, wherein the reference pixel set and the pixel to be filtered are located in the same frame image;
calculating the texel-value similarity of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered and of the reference pixel in the reference pixel set;
calculating the motion-feature consistency of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered, of the reference pixel in the reference pixel set, and of the pixel at the same position in the previous frame image of the frame containing the pixel to be filtered;
when a depth image is filtered, determining filtering weights according to the spatial proximity, the texel-value similarity corresponding to the depth pixel and the motion-feature consistency, and weighted-averaging the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the depth-image pixel to be filtered.
2. The method according to claim 1, characterized in that projecting the pixels in the image plane into the three-dimensional space comprises:
projecting the pixels from the image plane into the three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
3. The method according to claim 2, characterized in that projecting the pixels from the image plane into the three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video comprises:
calculating the coordinate value of the pixel after projection into the three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
where R and T are the rotation matrix and the translation vector of the reference camera, and A is the reference-camera parameter matrix

    A = | f_x  r    o_x |
        | 0    f_y  o_y |
        | 0    0    1   |

p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
4. The method according to any one of claims 1-3, characterized in that the spatial proximity is calculated by using the distance between the pixel to be filtered and the reference pixel in the three-dimensional space as the input value of a function, the output value of the function increasing as the input value decreases;
the texel-value similarity is calculated by using the difference between the texel values of the pixel to be filtered and the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases;
the motion-feature consistency is obtained by calculating whether the motion features of the pixel to be filtered and the reference pixel are consistent, comprising:
when the difference between the texel value of the pixel to be filtered and that of the pixel at the corresponding position in the previous frame, and the difference between the texel value of the reference pixel and that of the pixel at the corresponding position in the previous frame, are both greater than or both less than a preset threshold, determining that the motion states of the pixel to be filtered and the reference pixel are consistent; otherwise, determining that the motion states of the pixel to be filtered and the reference pixel are inconsistent.
5. The method according to claim 1, characterized in that determining the filtering weights according to the spatial proximity, the texel-value similarity and the motion-feature consistency and weighted-averaging the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered comprises:
calculating the filtering result of the depth pixel value of the pixel to be filtered according to formula (1):

d_p' = ( Σ_{q∈K} f_s(p, q)·f_t(t_p, t_q)·f_m(t_p, t_p', t_q, t_q')·d_q ) / ( Σ_{q∈K} f_s(p, q)·f_t(t_p, t_q)·f_m(t_p, t_p', t_q, t_q') )    (1)

where f_s(p, q) = f_s(||p − q||) calculates the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_q) = f_t(||t_p − t_q||) calculates the texel-value similarity of the pixel to be filtered and the reference pixel;
f_m(t_p, t_p', t_q, t_q') calculates the motion-feature consistency of the pixel to be filtered and the reference pixel, taking the larger value when |t_p − t_p'| and |t_q − t_q'| are both greater than, or both not greater than, the threshold th;
where p is the pixel to be filtered, q is a reference pixel, K is the reference pixel set, d_p' is the filtered depth pixel value of p, d_q is the depth pixel value of q, p and q in f_s denote the coordinate values of p and q in the three-dimensional space, t_p and t_q are the texel values of p and q, t_p' and t_q' are the texel values of p and q at the same positions in the previous frame, and th is the preset texel difference threshold.
6. A three-dimensional video filtering method, characterized by comprising:
projecting pixels in an image plane into three-dimensional space, the pixels comprising a pixel to be filtered and a reference pixel set;
calculating the spatial proximity, in the three-dimensional space, of the pixel to be filtered and a reference pixel in the reference pixel set according to their coordinate values in the three-dimensional space, wherein the reference pixel set is located in the same frame image as the pixel to be filtered and in adjacent frame images;
calculating the texel-value similarity of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered and of the reference pixel in the reference pixel set;
calculating the time-domain proximity of the pixel to be filtered and the reference pixel according to the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel in the reference pixel set;
when a depth image is filtered, determining filtering weights according to the spatial proximity, the texel-value similarity corresponding to the depth pixel and the time-domain proximity, and weighted-averaging the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the depth-image pixel to be filtered.
7. The method according to claim 6, characterized in that projecting the pixels in the image plane into the three-dimensional space comprises:
projecting the pixels from the image plane into the three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
8. The method according to claim 7, characterized in that projecting the pixels from the image plane into the three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video comprises:
calculating the coordinate value of the pixel after projection into the three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
where R and T are the rotation matrix and the translation vector of the reference camera, and A is the reference-camera parameter matrix

    A = | f_x  r    o_x |
        | 0    f_y  o_y |
        | 0    0    1   |

p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
9. The method according to any one of claims 6-8, characterized in that the spatial proximity is calculated by using the distance between the pixel to be filtered and the reference pixel in the three-dimensional space as the input value of a function, the output value of the function increasing as the input value decreases;
the texel-value similarity is calculated by using the difference between the texel values of the pixel to be filtered and the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases;
the time-domain proximity is calculated by using the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases.
10. The method according to claim 6, characterized in that determining the filtering weights according to the spatial proximity, the texel-value similarity and the time-domain proximity and weighted-averaging the pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the pixel to be filtered comprises:
calculating the filtering result of the depth pixel value of the pixel to be filtered according to formula (3):

d_p' = ( Σ_i Σ_{q_i∈K_i} f_s(p, q_i)·f_t(t_p, t_{q_i})·f_tem(i, n)·d_{q_i} ) / ( Σ_i Σ_{q_i∈K_i} f_s(p, q_i)·f_t(t_p, t_{q_i})·f_tem(i, n) )    (3)

where f_s(p, q_i) = f_s(||p − q_i||) calculates the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_{q_i}) = f_t(||t_p − t_{q_i}||) calculates the texel-value similarity of the pixel to be filtered and the reference pixel;
f_tem(i, n) = f_tem(||i − n||) calculates the time-domain proximity of the pixel to be filtered and the reference pixel;
where n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel, and i takes integer values in the interval [n − m, n + m'], m and m' being respectively the numbers of reference frames before and after the frame containing the pixel to be filtered; m and m' are non-negative integers; p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, K_i is the reference pixel set in the i-th frame, d_p' is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, p and q_i in f_s denote the coordinate values in the three-dimensional space of p and of q_i in the i-th frame, and t_p and t_{q_i} are respectively the texel values of p and of q_i in the i-th frame.
11. A three-dimensional video filtering device, characterized by comprising:
a projection module, configured to project pixels in an image plane into three-dimensional space, the pixels comprising a pixel to be filtered and a reference pixel set;
a computing module, configured to calculate the spatial proximity, in the three-dimensional space, of the pixel to be filtered and a reference pixel in the reference pixel set according to their coordinate values in the three-dimensional space, wherein the reference pixel set and the pixel to be filtered are located in the same frame image;
the computing module being further configured to calculate the texel-value similarity of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered and of the reference pixel in the reference pixel set;
the computing module being further configured to calculate the motion-feature consistency of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered, of the reference pixel in the reference pixel set, and of the pixel at the same position in the previous frame image of the frame containing the pixel to be filtered; and
a filtering module, configured to, when a depth image is filtered, determine filtering weights according to the spatial proximity, the texel-value similarity corresponding to the depth pixel and the motion-feature consistency, and weighted-average the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the depth-image pixel to be filtered.
12. The device according to claim 11, characterized in that the projection module is specifically configured to:
project the pixels from the image plane into the three-dimensional space using the depth image information, viewpoint position information and reference-camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel values of the pixels.
13. The device according to claim 12, characterized in that the projection module is specifically configured to:
calculate the coordinate value of the pixel after projection into the three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
where R and T are the rotation matrix and the translation vector of the reference camera, and A is the reference-camera parameter matrix

    A = | f_x  r    o_x |
        | 0    f_y  o_y |
        | 0    0    1   |

p = (u, v, 1)ᵀ is the coordinate value of the pixel in the image plane, P = (x, y, z)ᵀ is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; f_x and f_y are respectively the normalized focal lengths in the horizontal and vertical directions, r is the radial distortion coefficient, and (o_x, o_y) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera with the image plane.
14. The device according to any one of claims 11-13, characterized in that the spatial proximity is calculated by using the distance between the pixel to be filtered and the reference pixel in the three-dimensional space as the input value of a function, the output value of the function increasing as the input value decreases;
the texel-value similarity is calculated by using the difference between the texel values of the pixel to be filtered and the reference pixel as the input value of a function, the output value of the function increasing as the input value decreases;
the motion-feature consistency is obtained by calculating whether the motion features of the pixel to be filtered and the reference pixel are consistent, comprising:
when the difference between the texel value of the pixel to be filtered and that of the pixel at the corresponding position in the previous frame, and the difference between the texel value of the reference pixel and that of the pixel at the corresponding position in the previous frame, are both greater than or both less than a preset threshold, determining that the motion states of the pixel to be filtered and the reference pixel are consistent; otherwise, determining that the motion states of the pixel to be filtered and the reference pixel are inconsistent.
15. The device according to claim 11, characterized in that the filtering module is specifically configured to:
calculate the filtering result of the depth pixel value of the pixel to be filtered according to formula (1):

d_p' = ( Σ_{q∈K} f_s(p, q)·f_t(t_p, t_q)·f_m(t_p, t_p', t_q, t_q')·d_q ) / ( Σ_{q∈K} f_s(p, q)·f_t(t_p, t_q)·f_m(t_p, t_p', t_q, t_q') )    (1)

where f_s(p, q) = f_s(||p − q||) calculates the spatial proximity of the pixel to be filtered and the reference pixel;
f_t(t_p, t_q) = f_t(||t_p − t_q||) calculates the texel-value similarity of the pixel to be filtered and the reference pixel;
f_m(t_p, t_p', t_q, t_q') calculates the motion-feature consistency of the pixel to be filtered and the reference pixel, taking the larger value when |t_p − t_p'| and |t_q − t_q'| are both greater than, or both not greater than, the threshold th;
where p is the pixel to be filtered, q is a reference pixel, K is the reference pixel set, d_p' is the filtered depth pixel value of p, d_q is the depth pixel value of q, p and q in f_s denote the coordinate values of p and q in the three-dimensional space, t_p and t_q are the texel values of p and q, t_p' and t_q' are the texel values of p and q at the same positions in the previous frame, and th is the preset texel difference threshold.
16. A three-dimensional video filtering device, characterized by comprising:
a projection module, configured to project pixels in an image plane into three-dimensional space, the pixels comprising a pixel to be filtered and a reference pixel set;
a computing module, configured to calculate the spatial proximity, in the three-dimensional space, of the pixel to be filtered and a reference pixel in the reference pixel set according to their coordinate values in the three-dimensional space, wherein the reference pixel set is located in the same frame image as the pixel to be filtered and in adjacent frame images;
the computing module being further configured to calculate the texel-value similarity of the pixel to be filtered and the reference pixel according to the texel values of the pixel to be filtered and of the reference pixel in the reference pixel set;
the computing module being further configured to calculate the time-domain proximity of the pixel to be filtered and the reference pixel according to the time interval between the frame containing the pixel to be filtered and the frame containing the reference pixel in the reference pixel set; and
a filtering module, configured to, when a depth image is filtered, determine filtering weights according to the spatial proximity, the texel-value similarity corresponding to the depth pixel and the time-domain proximity, and weighted-average the depth pixel values of the reference pixels in the reference pixel set to obtain the filtering result of the depth pixel value of the depth-image pixel to be filtered.
17. The device according to claim 16, characterized in that the projection module is specifically configured to:
project the pixel from the image plane into three-dimensional space using the depth image information, viewpoint position information and reference camera parameter information provided by the three-dimensional video; the depth image information includes the depth pixel value of the pixel.
18. The device according to claim 17, characterized in that the projection module is specifically configured to:
calculate the coordinate value of the pixel after projection into three-dimensional space according to the formula P = R⁻¹(d·A⁻¹·p − T);
wherein R and T are the rotation matrix and translation vector of the reference camera, A is the reference camera parameter matrix A = [fx, r, ox; 0, fy, oy; 0, 0, 1], p is the coordinate value of the pixel in the image plane, P is the coordinate value of the pixel in the three-dimensional space, and d is the depth pixel value of the pixel; fx and fy are the normalized focal lengths in the horizontal and vertical directions respectively, r is the radial distortion coefficient, and (ox, oy) is the coordinate value of the reference point in the image plane; the reference point is the intersection of the optical axis of the reference camera and the image plane.
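Under a standard pinhole-camera reading of the projection formula in claim 18, the back-projection can be sketched with NumPy. Function and parameter names are illustrative, and the intrinsic matrix A is assembled from the parameters the claim defines (with the coefficient r assumed to occupy the off-diagonal slot):

```python
import numpy as np

def back_project(px, py, d, fx, fy, ox, oy, r, R, T):
    """Sketch of P = R^-1 (d * A^-1 * p - T): lift the image-plane pixel
    (px, py) with depth pixel value d into 3-D space."""
    # Reference camera parameter matrix built from the claim's parameters:
    # fx, fy: normalized focal lengths; (ox, oy): reference point (principal
    # point); r: distortion coefficient, assumed in the off-diagonal slot.
    A = np.array([[fx, r,  ox],
                  [0., fy, oy],
                  [0., 0., 1.]])
    p_h = np.array([px, py, 1.0])  # homogeneous image-plane coordinates
    return np.linalg.inv(R) @ (d * (np.linalg.inv(A) @ p_h) - T)
```

With R equal to the identity and T zero, this reduces to scaling the normalized viewing ray A⁻¹p by the depth value d.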
19. The device according to any one of claims 16-18, characterized in that:
the spatial proximity is calculated by taking the distance, in the three-dimensional space, between the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases;
the texel value similarity is calculated by taking the difference between the texel values of the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases;
the temporal proximity is calculated by taking the time interval between the frames containing the pixel to be filtered and the reference pixel as the input value of a function whose output value increases as the input value decreases.
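Claim 19 constrains the three weight functions only to be decreasing in their inputs. Gaussian kernels, as in conventional bilateral filtering, are one common choice; a sketch with illustrative sigma values (the patent does not fix specific functions or parameters):

```python
import math

def f_spa(dist, sigma_s=1.0):
    """Spatial proximity from the 3-D distance between p and q."""
    return math.exp(-dist ** 2 / (2 * sigma_s ** 2))

def f_sim(diff, sigma_r=10.0):
    """Texel value similarity from the texel difference |t_p - t_q|."""
    return math.exp(-diff ** 2 / (2 * sigma_r ** 2))

def f_tem(interval, sigma_f=2.0):
    """Temporal proximity from the frame interval |i - n|."""
    return math.exp(-interval ** 2 / (2 * sigma_f ** 2))
```

Each function peaks at 1.0 for a zero input and decays monotonically, which satisfies the "output increases as input decreases" requirement.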
20. The device according to claim 16, characterized in that the filtering module is specifically configured to:
calculate the filtering result of the depth pixel value of the pixel to be filtered according to formula (3):
d_p' = ( Σ_{i=n−M}^{n+N} Σ_{q_i ∈ K_i} f_spa(‖p − q_i‖) · f_sim(|t_p − t_{q_i}|) · f_tem(|i − n|) · d_{q_i} ) / ( Σ_{i=n−M}^{n+N} Σ_{q_i ∈ K_i} f_spa(‖p − q_i‖) · f_sim(|t_p − t_{q_i}|) · f_tem(|i − n|) )   (3)
wherein f_spa(p, q_i) = f_spa(‖p − q_i‖) calculates the spatial proximity of the pixel to be filtered and the reference pixel;
f_sim(t_p, t_{q_i}) = f_sim(|t_p − t_{q_i}|) calculates the texel value similarity of the pixel to be filtered and the reference pixel;
f_tem(i, n) = f_tem(|i − n|) calculates the temporal proximity of the pixel to be filtered and the reference pixel;
wherein n is the frame number of the frame containing the pixel to be filtered, i is the frame number of the frame containing the reference pixel, and i takes integer values in the interval [n − M, n + N]; M and N are the numbers of reference frames before and after the frame containing the pixel to be filtered, respectively, both nonnegative integers; p is the pixel to be filtered, q_i is a reference pixel in the i-th frame, K_i is the reference pixel set in the i-th frame, d_p' is the filtered depth pixel value of p, d_{q_i} is the depth pixel value of q_i in the i-th frame, and t_p and t_{q_i} are the texel values of p and of q_i in the i-th frame, respectively.
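Formula (3) is a normalized weighted average over all reference pixels in all reference frames. An illustrative rendering (names are not from the patent): the three weight functions are passed in, and `refs` flattens the double sum over frames i and pixels q_i in K_i:

```python
import math

def filter_depth(p_3d, t_p, n, refs, f_spa, f_sim, f_tem):
    """Weighted average of formula (3).

    p_3d : 3-D coordinates of the pixel to be filtered; t_p its texel value;
    n    : frame number of the frame containing the pixel to be filtered;
    refs : iterable of (i, q_3d, t_q, d_q) tuples, one per reference pixel
           q_i in K_i (frame number, 3-D coordinates, texel value, depth).
    """
    num = den = 0.0
    for i, q_3d, t_q, d_q in refs:
        w = (f_spa(math.dist(p_3d, q_3d))  # spatial proximity, ||p - q_i||
             * f_sim(abs(t_p - t_q))       # texel similarity, |t_p - t_qi|
             * f_tem(abs(i - n)))          # temporal proximity, |i - n|
        num += w * d_q
        den += w
    return num / den  # filtered depth pixel value d_p'
```

With all three weight functions constant the result degenerates to a plain average of the reference depth values, which is a convenient sanity check.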
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410265360.4A CN104010180B (en) | 2014-06-13 | 2014-06-13 | Method and device for filtering three-dimensional video |
PCT/CN2015/077707 WO2015188666A1 (en) | 2014-06-13 | 2015-04-28 | Three-dimensional video filtering method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410265360.4A CN104010180B (en) | 2014-06-13 | 2014-06-13 | Method and device for filtering three-dimensional video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104010180A CN104010180A (en) | 2014-08-27 |
CN104010180B true CN104010180B (en) | 2017-01-25 |
Family
ID=51370655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410265360.4A Active CN104010180B (en) | 2014-06-13 | 2014-06-13 | Method and device for filtering three-dimensional video |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104010180B (en) |
WO (1) | WO2015188666A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104010180B (en) * | 2014-06-13 | 2017-01-25 | 华为技术有限公司 | Method and device for filtering three-dimensional video |
CN104683783B * | 2015-01-08 | 2017-03-15 | 电子科技大学 | Adaptive depth map filtering method |
CN105959663B * | 2016-05-24 | 2018-09-21 | 厦门美图之家科技有限公司 | Optimization method and system for video inter-frame signal continuity, and camera terminal |
CN107959855B (en) | 2016-10-16 | 2020-02-14 | 华为技术有限公司 | Motion compensated prediction method and apparatus |
CN108111851B (en) * | 2016-11-25 | 2020-12-22 | 华为技术有限公司 | Deblocking filtering method and terminal |
CN108833879A (en) * | 2018-06-29 | 2018-11-16 | 东南大学 | Virtual view synthesis method with temporal and spatial continuity |
CN109191506B (en) * | 2018-08-06 | 2021-01-29 | 深圳看到科技有限公司 | Depth map processing method, system and computer readable storage medium |
CN115187491B (en) * | 2022-09-08 | 2023-02-17 | 阿里巴巴(中国)有限公司 | Image denoising processing method, image filtering processing method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1474358A (en) * | 2002-08-08 | 2004-02-11 | GE医疗系统环球技术有限公司 | Three-dimensional spatial filter device and method |
CN1836634A (en) * | 2004-12-27 | 2006-09-27 | Ge医疗系统环球技术有限公司 | Four-dimensional labeling apparatus, n-dimensional labeling apparatus, four-dimensional spatial filter apparatus, and n-dimensional spatial filter apparatus |
CN102238316A (en) * | 2010-04-29 | 2011-11-09 | 北京科迪讯通科技有限公司 | Self-adaptive real-time denoising scheme for 3D digital video image |
CN103369209A (en) * | 2013-07-31 | 2013-10-23 | 上海通途半导体科技有限公司 | Video noise reduction device and video noise reduction method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101651772B (en) * | 2009-09-11 | 2011-03-16 | 宁波大学 | Method for extracting video region of interest based on visual attention |
CN102271262B (en) * | 2010-06-04 | 2015-05-13 | 三星电子株式会社 | Multithread-based video processing method for 3D (Three-Dimensional) display |
TWI439119B (en) * | 2010-09-20 | 2014-05-21 | Nat Univ Chung Cheng | Method for depth information processing and application device thereof |
JP2013059016A (en) * | 2011-08-12 | 2013-03-28 | Sony Corp | Image processing device, method, and program |
CN104010180B (en) * | 2014-06-13 | 2017-01-25 | 华为技术有限公司 | Method and device for filtering three-dimensional video |
Also Published As
Publication number | Publication date |
---|---|
CN104010180A (en) | 2014-08-27 |
WO2015188666A1 (en) | 2015-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104010180B (en) | Method and device for filtering three-dimensional video | |
US20210012466A1 (en) | Denoising Filter | |
CN105574827B (en) | Image defogging method and apparatus | |
CN103945208B (en) | Parallel synchronous zooming engine and method for multi-view naked-eye 3D display | |
CN104869387B (en) | Method for acquiring binocular image maximum parallax based on optical flow method | |
EP1494174B1 (en) | Method of generating blur | |
CN109215123B (en) | Method, system, storage medium and terminal for generating infinite terrain based on cGAN | |
CN104937927B (en) | Real-time automatic conversion of 2-dimensional images or video to 3-dimensional stereo images or video | |
US6791540B1 (en) | Image processing apparatus | |
CN108537871A (en) | Information processing equipment and information processing method | |
CN109255831A (en) | Method for single-view three-dimensional face reconstruction and texture generation based on multi-task learning | |
CN106910242A (en) | Method and system for indoor full-scene three-dimensional reconstruction based on a depth camera | |
CN102271262B (en) | Multithread-based video processing method for 3D (Three-Dimensional) display | |
CN102968814B (en) | Image rendering method and apparatus | |
CN111179189B (en) | Image processing method and device based on generation of countermeasure network GAN, electronic equipment and storage medium | |
EP1728208A1 (en) | Creating a depth map | |
CN104077808A (en) | Real-time three-dimensional face modeling method based on depth information for computer graphics and image processing | |
CN109003297A (en) | Monocular depth estimation method, device, terminal and storage medium | |
CN102098528A (en) | Method and device for converting planar image into stereoscopic image | |
CN110033483A (en) | DCNN-based depth map generation method and system | |
JP2006284704A (en) | Three-dimensional map simplification device and three-dimensional map simplification method | |
CN110889868A (en) | Monocular image depth estimation method combining gradient and texture features | |
CN110335275A (en) | Spatio-temporal vectorization method for flow surfaces based on trivariate biharmonic B-splines | |
CN101742088B (en) | Non-local means spatial-domain time-varying video filtering method | |
CN103733221B (en) | Image blurring using decomposition of non-separable FIR filters
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |