CN106791845A - A fast disparity estimation method for multi-view image coding - Google Patents

A fast disparity estimation method for multi-view image coding

Info

Publication number
CN106791845A
Authority
CN
China
Prior art keywords
image
motion vector
pixel
image block
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710034540.5A
Other languages
Chinese (zh)
Other versions
CN106791845B (en)
Inventor
向北海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201710034540.5A priority Critical patent/CN106791845B/en
Publication of CN106791845A publication Critical patent/CN106791845A/en
Application granted granted Critical
Publication of CN106791845B publication Critical patent/CN106791845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation

Abstract

The present invention proposes a fast disparity estimation method for multi-view image coding. First, a two-step block matching method quickly computes the matching results of all image blocks to obtain a motion vector set. Then, considering both each image block itself and the information of its surrounding blocks, a feature-point-detection-based method evaluates the motion vector set and excludes the distortion points in it. The proposed method effectively excludes mismatches while performing fast block matching, substantially improving the accuracy of disparity estimation. The method is simple and practical, has low algorithmic complexity, can process all kinds of multi-view images in real time, and therefore has good practical value.

Description

A fast disparity estimation method for multi-view image coding
Technical field
The invention belongs to the technical fields of digital image processing and computer vision, and in particular relates to a fast disparity estimation method for multi-view image coding.
Background technology
Multi-view video is a new kind of video with stereoscopic perception and interactive functionality: multiple cameras synchronously shoot the same scene from different angles to obtain the video signals of different viewpoints. It is an effective 3D video representation that can reconstruct scenes more vividly, providing stereoscopic perception and interactivity. Multi-view video can be widely applied to emerging multimedia services such as free-viewpoint video, 3D stereoscopic television, immersive video conferencing, and video surveillance systems.
Compared with single-view video, the data volume of multi-view video grows linearly with the number of cameras, and this huge data volume has become the bottleneck restricting its wide application. In current multi-view video systems, data compression, data transmission, and the rendering of arbitrary viewpoints are all problems that urgently need to be solved, and disparity estimation between viewpoints is a key technology for solving them. Disparity refers to the difference between the images of a pair of different viewpoints. Along the disparity vector, the image pair exhibits high similarity, so a reference image can be used to perform disparity-compensated prediction of the target image, and the target image then only needs to be represented by disparity vectors and disparity-compensation residuals.
The disparity vector estimation methods traditionally used for coding are block-matching-based estimation methods, whose advantages are simplicity, practicality, and ease of hardware implementation; in terms of actual performance, however, their disparity estimation accuracy is not high and they produce many mismatches.
Content of the invention
In view of the defects of existing methods, the object of the present invention is to propose a fast disparity estimation method for multi-view image coding.
The technical scheme of the invention is as follows:
A fast disparity estimation method for multi-view image coding, comprising the following steps:
S1. Denote the images of the same scene shot at the same moment from two different viewpoints as F1(x, y) and F2(x, y). Perform block-wise block matching on the two multi-view images; each image block yields one motion vector, finally giving a motion vector set.
S2. Evaluate the motion vector set obtained in S1 and exclude the distortion points in it, i.e. reject the mismatches in the block-matching results, which yields the disparity estimation result.
In the present invention, step S1 is implemented as follows:
S1.1 Denote the images of the same scene shot at the same moment from two different viewpoints as F1(x, y) and F2(x, y); both images have size Wd × Hd. First partition the image F1(x, y) into blocks of fixed size 15 × 15, obtaining a total of Num = (Wd × Hd)/(15 × 15) image blocks, denoted {kn(x, y) | n = 1, …, Num}.
S1.2 For each image block of F1(x, y), perform block matching in the other viewpoint image F2(x, y) to find the most similar block.
S1.2.1 For any image block kn(x, y) of F1(x, y), denote the centre coordinate of kn(x, y) in F1(x, y) as dot(x0, y0). Set the search range cth = 10 according to prior knowledge; then in F2(x, y), every image block of size 15 × 15 whose centre coordinate (x, y) satisfies |x − x0| < 10 and |y − y0| < 10 lies within the search range, i.e. is a candidate image block.
S1.2.2 Compute the mean μ0 of the gray values of all pixels in the image block kn(x, y).
S1.2.3 When searching in F2(x, y), compute for each candidate image block fm(x, y) the mean μm of the gray values of all its pixels. If |μm − μ0| > 20, this candidate block does not match kn(x, y) and needs no further processing; if |μm − μ0| ≤ 20, proceed to the next step, i.e. compute by formula 1 the mean absolute difference between this candidate block and kn(x, y): MAD(m) = (1/(15 × 15)) Σ(x, y) |kn(x, y) − fm(x, y)|  (formula 1)
S1.2.4 After all candidate blocks have been processed according to S1.2.2 to S1.2.3, select the candidate block fm(x, y) with the smallest MAD(m) value as the matching result of kn(x, y). Denote the centre coordinate of fm(x, y) as dot′(x1, y1); the motion vector (Vn_x, Vn_y) from kn(x, y) to fm(x, y) is then (x1 − x0, y1 − y0).
S1.2.5 Perform block matching on all image blocks of F1(x, y) as above; each block yields one motion vector, finally giving a motion vector set {(Vn_x, Vn_y) | n = 1, …, Num} with Num elements.
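As a minimal illustration, the two-step matching of S1.2 (the cheap mean pre-check followed by MAD minimisation) could be sketched as below. The function name, raster scan order, and boundary handling are assumptions for the sketch, not part of the patent:

```python
import numpy as np

def block_match(F1, F2, bs=15, cth=10, mean_th=20):
    """Sketch of two-step block matching: for each bs x bs block of F1,
    search F2 within +/-cth of the block position; candidates whose mean
    gray level differs by more than mean_th are skipped, and the
    minimum-MAD candidate wins. Returns one (Vx, Vy) vector per block."""
    H, W = F1.shape
    vectors = []
    for by in range(0, H - bs + 1, bs):
        for bx in range(0, W - bs + 1, bs):
            blk = F1[by:by + bs, bx:bx + bs].astype(np.float64)
            mu0 = blk.mean()
            best, best_mad = (0, 0), np.inf
            for dy in range(-cth + 1, cth):
                for dx in range(-cth + 1, cth):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + bs > H or x + bs > W:
                        continue  # candidate window leaves the image
                    cand = F2[y:y + bs, x:x + bs].astype(np.float64)
                    if abs(cand.mean() - mu0) > mean_th:
                        continue  # mean pre-check rejects this candidate
                    mad = np.abs(blk - cand).mean()
                    if mad < best_mad:
                        best_mad, best = mad, (dx, dy)
            vectors.append(best)
    return vectors
```

With F2 a copy of F1 shifted 3 pixels to the right, the first block's vector comes out as (3, 0), matching the (x1 − x0, y1 − y0) convention of S1.2.4.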
In the present invention, step S2 is implemented as follows:
S2.1 Split the motion vector set {(Vn_x, Vn_y) | n = 1, …, Num} obtained in S1 into two parts, {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num}. Each part can be regarded as a motion vector map of size Wd/15 × Hd/15. Perform SUSAN feature point detection on {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num} respectively, and merge the two detected feature point sets to obtain the set of all distortion points {(i, j) | Rx(i, j) > 0 || Ry(i, j) > 0}.
S2.2 Image blocks and motion vectors are in one-to-one correspondence, so the distortion points found by the SUSAN feature point detection correspond to the image blocks whose matching results are not necessarily accurate, {kn(x, y) | Rx(i, j) > 0 || Ry(i, j) > 0, n = i × Hd/15 + j}. Deleting the matching results of these image blocks yields an accurate and efficient disparity estimation result.
In the present invention, the SUSAN feature point detection on {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num} in step S2.1 is carried out as follows:
S2.1.1 Write {Vn_x | n = 1, …, Num} in image form as {Vx(i, j) | i = 1, …, Wd/15; j = 1, …, Hd/15}, where Vx(i, j) = Vn_x with n = i × Hd/15 + j.
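The rearrangement of the flat vector set into a small map, on which a corner detector can then be run, can be sketched as below; the row-major block order and the dummy vector values are assumptions for illustration:

```python
import numpy as np

# Arrange per-block motion vectors as two small "images" Vx and Vy of
# size (Hd/15) x (Wd/15), one entry per 15 x 15 image block.
Wd, Hd, bs = 60, 45, 15                                 # 4 x 3 = 12 blocks
vectors = [(n % 7 - 3, n % 5 - 2) for n in range(12)]   # dummy (Vx, Vy) pairs
cols, rows = Wd // bs, Hd // bs
Vx = np.array([v[0] for v in vectors]).reshape(rows, cols)
Vy = np.array([v[1] for v in vectors]).reshape(rows, cols)
```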
S2.1.2 Perform SUSAN feature point detection on the motion vector map Vx(i, j); the steps are as follows:
(1) Traverse the map Vx(i, j) with a circular template and compute the USAN (univalue segment assimilating nucleus) value at every position.
Define the circular template: denote the centre pixel as (x1, y1); the region formed by all pixels (x, y) satisfying (x − x1)² + (y − y1)² ≤ 10 is the extent of the circular template. The template fits in a 7 × 7 square and contains 37 pixels in total. For any pixel (i0, j0) of the motion vector map Vx(i, j), place the centre pixel of the circular template at (i0, j0), then compute the difference between the value of each pixel of Vx(i, j) inside the template and the value of the centre pixel (i0, j0). If the difference between a pixel's value and that of the centre pixel (i0, j0) is less than or equal to the set similarity threshold th1, the pixel belongs to the USAN region; otherwise it does not. Whether a pixel belongs to the USAN region is thus decided by the following formula: c(i, j) = 1 if |Vx(i, j) − Vx(i0, j0)| ≤ th1, c(i, j) = 0 otherwise  (formula 2)
where th1 is the similarity threshold, set to 20, and c(i, j) indicates whether the pixel belongs to the USAN region.
Then count all pixels of Vx(i, j) lying inside the template: u(i0, j0) = Σ(i, j)∈Ω c(i, j)  (formula 3)
where Ω is the set of pixels of Vx(i, j) lying inside the template, and u(i0, j0) is the USAN value of the pixel (i0, j0).
Traversing all pixels of Vx(i, j) with the circular template yields the USAN value u(i, j) of every pixel.
(2) After the USAN values of all pixels have been computed, a preliminary feature point response Rx(i, j) is obtained by thresholding:
Rx(i, j) = max(0, th2 − u(i, j))  (formula 4)
where th2 is a threshold, set to 28. Only when u(i, j) < th2 can Rx(i, j) be greater than 0, i.e. only then is the point a preliminarily judged feature point. The set of pixels with Rx(i, j) > 0 is the preliminary feature point set.
(3) Process the preliminary feature point set with non-maximum suppression to obtain the final feature point set.
For each preliminarily judged feature point (i1, j1) in the preliminary feature point set, examine the 5 × 5 neighbourhood centred on it and check whether any other pixel has a larger Rx(i, j) value than Rx(i1, j1). If not, i.e. Rx(i1, j1) is the maximum, retain the feature point (i1, j1); otherwise delete it, i.e. reset Rx(i1, j1) to 0. Processing all preliminarily judged feature points in this way gives the final feature point set {(i, j) | Rx(i, j) > 0} of the motion vector map Vx(i, j).
(4) Apply steps (1) to (3) in the same way to the motion vector map Vy(i, j) to obtain its final feature point set {(i, j) | Ry(i, j) > 0}.
The present invention first quickly computes the matching results of all image blocks by a two-step block matching method to obtain a motion vector set, then, considering both each image block itself and the information of its surrounding blocks, evaluates the motion vector set with a feature-point-detection-based method and excludes the distortion points in it. The proposed method effectively excludes mismatches while performing fast block matching, substantially improving the accuracy of disparity estimation. The method is simple and practical, has low algorithmic complexity, can process all kinds of multi-view images in real time, and therefore has good practical value.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is the circular template of the SUSAN operator.
Specific embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the fast disparity estimation method for multi-view image coding of the present invention comprises the following steps:
S1. Denote the images of the same scene shot at the same moment from two different viewpoints as F1(x, y) and F2(x, y). Perform block-wise block matching on the two multi-view images; each image block yields one motion vector, finally giving a motion vector set.
S1.1 Denote the images of the same scene shot at the same moment from two different viewpoints as F1(x, y) and F2(x, y); both images have size Wd × Hd. First partition the image F1(x, y) into blocks of fixed size 15 × 15, obtaining a total of Num = (Wd × Hd)/(15 × 15) image blocks, denoted {kn(x, y) | n = 1, …, Num}.
S1.2 For each image block of F1(x, y), perform block matching in the other viewpoint image F2(x, y) to find the most similar block.
S1.2.1 For any image block kn(x, y) of F1(x, y), denote the centre coordinate of kn(x, y) in F1(x, y) as dot(x0, y0). Set the search range cth = 10 according to prior knowledge; then in F2(x, y), every image block of size 15 × 15 whose centre coordinate (x, y) satisfies |x − x0| < 10 and |y − y0| < 10 lies within the search range, i.e. is a candidate image block.
S1.2.2 Compute the mean μ0 of the gray values of all pixels in the image block kn(x, y).
S1.2.3 When searching in F2(x, y), compute for each candidate image block fm(x, y) the mean μm of the gray values of all its pixels. If |μm − μ0| > 20, this candidate block does not match kn(x, y) and needs no further processing. If |μm − μ0| ≤ 20, proceed to the next step, i.e. compute by formula 1 the mean absolute difference between this candidate block and kn(x, y): MAD(m) = (1/(15 × 15)) Σ(x, y) |kn(x, y) − fm(x, y)|  (formula 1)
S1.2.4 After all candidate blocks have been processed according to S1.2.2 to S1.2.3, select the candidate block fm(x, y) with the smallest MAD(m) value as the matching result of kn(x, y). Denote the centre coordinate of fm(x, y) as dot′(x1, y1); the motion vector (Vn_x, Vn_y) from kn(x, y) to fm(x, y) is then (x1 − x0, y1 − y0).
S1.2.5 Perform block matching on all image blocks of F1(x, y) as above; each block yields one motion vector, finally giving a motion vector set {(Vn_x, Vn_y) | n = 1, …, Num} with Num elements.
The above matching procedure only considers the information of each image block itself and does not take the information of the surrounding blocks into account, so the matching results are not accurate enough. The motion of image blocks should change continuously, so the motion vectors should also vary gradually; a distortion occurring in some motion vector in between is unreasonable. Based on this analysis, in order to further improve the matching accuracy by considering both each image block itself and the information of its surrounding blocks, the present invention proposes a feature-point-detection-based method to evaluate the motion vector set and exclude the distortion points in it.
Corner detection is a method for obtaining image feature points in computer vision systems; it can detect points where the image gray level changes sharply, or points of large curvature on image edges. In step S2 the present invention selects the SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator, a corner detector with a simple algorithm, accurate localization, and strong noise immunity, to perform the feature point detection, as follows:
S2. Evaluate the motion vector set obtained in S1 and exclude the distortion points in it, i.e. reject the mismatches in the block-matching results, which yields the disparity estimation result.
S2.1 Split the motion vector set {(Vn_x, Vn_y) | n = 1, …, Num} obtained in S1 into two parts, {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num}. Each part can be regarded as a motion vector map of size Wd/15 × Hd/15.
Perform SUSAN feature point detection on {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num} respectively, and merge the two detected feature point sets to obtain the set of all distortion points {(i, j) | Rx(i, j) > 0 || Ry(i, j) > 0}.
Taking {Vn_x | n = 1, …, Num} as an example, the SUSAN feature point detection is performed as follows:
S2.1.1 Write {Vn_x | n = 1, …, Num} in image form as {Vx(i, j) | i = 1, …, Wd/15; j = 1, …, Hd/15}, where Vx(i, j) = Vn_x with n = i × Hd/15 + j.
S2.1.2 Perform SUSAN feature point detection on the motion vector map Vx(i, j); the steps are as follows:
(1) Traverse the map Vx(i, j) with a circular template and compute the USAN (univalue segment assimilating nucleus) value at every position.
The circular template is defined as follows: denote the centre pixel as (x1, y1); the region formed by all pixels (x, y) satisfying (x − x1)² + (y − y1)² ≤ 10 is the extent of the circular template. The circular template is shown in Fig. 2; it fits in a 7 × 7 square and contains 37 pixels in total, the pixel labelled 19 being the centre pixel. For any pixel (i0, j0) of the motion vector map Vx(i, j), place the centre pixel of the circular template at (i0, j0), then compute the difference between the value of each pixel of Vx(i, j) inside the template and the value of the centre pixel (i0, j0). If the difference between a pixel's value and that of the centre pixel (i0, j0) is less than or equal to the set similarity threshold th1, the pixel belongs to the USAN region; otherwise it does not. Whether a pixel belongs to the USAN region is thus decided by the following formula: c(i, j) = 1 if |Vx(i, j) − Vx(i0, j0)| ≤ th1, c(i, j) = 0 otherwise  (formula 2)
where th1 is the similarity threshold, here set to 20, and c(i, j) indicates whether the pixel belongs to the USAN region.
Then count all pixels of Vx(i, j) lying inside the template: u(i0, j0) = Σ(i, j)∈Ω c(i, j)  (formula 3)
where Ω is the set of pixels of Vx(i, j) lying inside the template, and u(i0, j0) is the USAN value of the pixel (i0, j0).
Traversing all pixels of Vx(i, j) with the circular template yields the USAN value u(i, j) of every pixel.
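The USAN count of step (1) could be sketched as below, assuming the 37-pixel circular template (offsets with dx² + dy² ≤ 10) and the similarity threshold th1 given above; out-of-map offsets are simply skipped, which is an assumption, since the text does not specify border handling:

```python
import numpy as np

# The 37 template offsets: all (dy, dx) with dy^2 + dx^2 <= 10,
# i.e. the circular template inscribed in a 7 x 7 square.
OFFSETS = [(dy, dx) for dy in range(-3, 4) for dx in range(-3, 4)
           if dx * dx + dy * dy <= 10]

def usan(Vx, i0, j0, th1=20):
    """USAN value at (i0, j0): count of template pixels whose value
    differs from the centre value by at most th1 (formulas 2 and 3)."""
    centre = int(Vx[i0, j0])
    count = 0
    for dy, dx in OFFSETS:
        i, j = i0 + dy, j0 + dx
        if 0 <= i < Vx.shape[0] and 0 <= j < Vx.shape[1]:
            if abs(int(Vx[i, j]) - centre) <= th1:
                count += 1  # this pixel belongs to the USAN region
    return count
```

On a uniform map every interior point has the maximum USAN value 37; an isolated spike has USAN value 1 (only the centre assimilates itself), which is what the response of formula 4 later turns into a feature point.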
(2) After the USAN values of all pixels have been computed, a preliminary feature point response Rx(i, j) is obtained by thresholding:
Rx(i, j) = max(0, th2 − u(i, j))  (formula 4)
where th2 is a threshold, here set to 28. Only when u(i, j) < th2 can Rx(i, j) be greater than 0, i.e. only then is the point a preliminarily judged feature point. The set of pixels with Rx(i, j) > 0 is the preliminary feature point set.
(3) Process the preliminary feature point set with non-maximum suppression to obtain the final feature point set.
For each preliminarily judged feature point (i1, j1) in the preliminary feature point set, examine the 5 × 5 neighbourhood centred on it and check whether any other pixel has a larger Rx(i, j) value than Rx(i1, j1). If not, i.e. Rx(i1, j1) is the maximum, retain the feature point (i1, j1); otherwise delete it, i.e. reset Rx(i1, j1) to 0. Processing all preliminarily judged feature points in this way gives the final feature point set {(i, j) | Rx(i, j) > 0} of the motion vector map Vx(i, j).
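Steps (2) and (3) — the thresholded response of formula 4 followed by 5 × 5 non-maximum suppression — might be sketched as follows; the function name and the clipping of the window at the map border are assumptions:

```python
import numpy as np

def corner_points(u, th2=28):
    """From a USAN-count map u, compute R = max(0, th2 - u) and keep
    only points that are maximal in their 5 x 5 neighbourhood."""
    R = np.maximum(0, th2 - u.astype(np.int64))
    H, W = R.shape
    points = []
    for i in range(H):
        for j in range(W):
            if R[i, j] <= 0:
                continue  # not a preliminary feature point
            i0, i1 = max(0, i - 2), min(H, i + 3)
            j0, j1 = max(0, j - 2), min(W, j + 3)
            if R[i, j] >= R[i0:i1, j0:j1].max():
                points.append((i, j))  # local maximum survives suppression
    return points
```

A point with a small USAN value (an isolated motion vector) yields a large response and survives suppression, marking its block as a distortion point.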
(4) Apply steps (1) to (3) in the same way to the motion vector map Vy(i, j) to obtain its final feature point set {(i, j) | Ry(i, j) > 0}. Merging the two sets gives all feature points, i.e. the distortion point set {(i, j) | Rx(i, j) > 0 || Ry(i, j) > 0}.
S2.2 Image blocks and motion vectors are in one-to-one correspondence, so the detected feature points, i.e. the distortion point set, correspond to the image blocks whose matching results are not necessarily accurate, {kn(x, y) | Rx(i, j) > 0 || Ry(i, j) > 0, n = i × Hd/15 + j}. Deleting the matching results of these image blocks yields an accurate and efficient disparity estimation result.
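Step S2.2 — dropping the matches of blocks flagged as distortion points via the index mapping n = i × Hd/15 + j — can be illustrated as below; the dictionary representation of the match results and the dummy values are assumptions:

```python
# Map each distortion point (i, j) on the vector maps back to its block
# index n and delete that block's matching result.
Hd = 45                                    # 3 block rows of height 15
vectors = {n: (n, -n) for n in range(12)}  # dummy per-block match results
distortion = [(0, 1), (2, 2)]              # feature points (i, j) on the maps
bad = {i * (Hd // 15) + j for i, j in distortion}
kept = {n: v for n, v in vectors.items() if n not in bad}
```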
The above describes preferred embodiments of the present invention in order to explain its technical features in detail; the invention is not limited to the concrete forms described in the embodiments, and other modifications and variations made according to the spirit of the invention are also protected by this patent. The scope of the invention is defined by the claims rather than by the specific description of the embodiments.

Claims (4)

1. A fast disparity estimation method for multi-view image coding, characterised by comprising the following steps:
S1. denoting the images of the same scene shot at the same moment from two different viewpoints as F1(x, y) and F2(x, y), and performing block-wise block matching on the two multi-view images, each image block yielding one motion vector, finally giving a motion vector set;
S2. evaluating the motion vector set obtained in S1 and excluding the distortion points in it, i.e. rejecting the mismatches in the block-matching results, which yields the disparity estimation result.
2. The fast disparity estimation method for multi-view image coding according to claim 1, characterised in that step S1 is implemented as follows:
S1.1 denote the images of the same scene shot at the same moment from two different viewpoints as F1(x, y) and F2(x, y), both of size Wd × Hd; first partition the image F1(x, y) into blocks of fixed size 15 × 15, obtaining a total of Num = (Wd × Hd)/(15 × 15) image blocks, denoted {kn(x, y) | n = 1, …, Num};
S1.2 for each image block of F1(x, y), perform block matching in the other viewpoint image F2(x, y) to find the most similar block;
S1.2.1 for any image block kn(x, y) of F1(x, y), denote the centre coordinate of kn(x, y) in F1(x, y) as dot(x0, y0); set the search range cth = 10 according to prior knowledge; then in F2(x, y), every image block of size 15 × 15 whose centre coordinate (x, y) satisfies |x − x0| < 10 and |y − y0| < 10 lies within the search range, i.e. is a candidate image block;
S1.2.2 compute the mean μ0 of the gray values of all pixels in the image block kn(x, y);
S1.2.3 when searching in F2(x, y), compute for each candidate image block fm(x, y) the mean μm of the gray values of all its pixels; if |μm − μ0| > 20, this candidate block does not match kn(x, y) and needs no further processing; if |μm − μ0| ≤ 20, proceed to the next step, i.e. compute by formula 1 the mean absolute difference MAD(m) = (1/(15 × 15)) Σ(x, y) |kn(x, y) − fm(x, y)| between this candidate block and kn(x, y);
S1.2.4 after all candidate blocks have been processed according to S1.2.2 to S1.2.3, select the candidate block fm(x, y) with the smallest MAD(m) value as the matching result of kn(x, y); denote the centre coordinate of fm(x, y) as dot′(x1, y1); the motion vector (Vn_x, Vn_y) from kn(x, y) to fm(x, y) is then (x1 − x0, y1 − y0);
S1.2.5 perform block matching on all image blocks of F1(x, y) as above, each block yielding one motion vector, finally giving a motion vector set {(Vn_x, Vn_y) | n = 1, …, Num} with Num elements.
3. The fast disparity estimation method for multi-view image coding according to claim 2, characterised in that step S2 is implemented as follows:
S2.1 split the motion vector set {(Vn_x, Vn_y) | n = 1, …, Num} obtained in S1 into two parts, {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num}; each part can be regarded as a motion vector map of size Wd/15 × Hd/15; perform SUSAN feature point detection on {Vn_x | n = 1, …, Num} and {Vn_y | n = 1, …, Num} respectively, and merge the two detected feature point sets to obtain the set of all distortion points {(i, j) | Rx(i, j) > 0 || Ry(i, j) > 0};
S2.2 image blocks and motion vectors are in one-to-one correspondence, so the distortion points found by the SUSAN feature point detection correspond to the image blocks whose matching results are not necessarily accurate, {kn(x, y) | Rx(i, j) > 0 || Ry(i, j) > 0, n = i × Hd/15 + j}; deleting the matching results of these image blocks yields an accurate and efficient disparity estimation result.
4. it is according to claim 3 for multi-view image coding quick parallax method of estimation, it is characterised in that in step To { V in rapid S2.1n_ x | n=1 ..., Num } and { Vn_ y | n=1 ..., Num } SUSAN operator feature point detections are carried out, method is such as Under:
S2.1.1 is by { Vn_ x | n=1 ..., Num } write as image format Vx (i, j) | i=1 ..., Wd/15;J=1 ..., Hd/ 15 }, wherein
S2.1.2 carries out SUSAN operator feature point detections to motion vector figure Vx (i, j), and step is as follows:
(1) using circular shuttering traversing graph as Vx (i, j), every value in the absorption He Tongzhi areas at place is calculated;
Define circular shuttering:Center pixel is made to be designated as (x1, y1), it is all to meet condition (x-x1)2+(y-y1)2≤ 10 pixel The region of (x, y) composition is the scope of circular shuttering;Circular shuttering size is 7 × 7, altogether 37 pixels;Sweared for motion Any pixel (i0, j0) of spirogram shape Vx (i, j), (i0, j0) is placed on by the center pixel of circular shuttering, then calculates motion arrow The grey scale pixel value and the ash of center pixel (i0, j0) of each pixel in spirogram shape Vx (i, j) inside circular shuttering position The difference of angle value, if the difference between the gray value of the grey scale pixel value and center pixel (i0, j0) of a pixel is less than or waits In setting similarity degree threshold value th1, then the pixel belong to USAN regions;Otherwise the pixel is not belonging to USAN regions; By the above method so as to judge whether pixel belongs to USAN regions, specific formula is as follows:
Wherein th1 represents the threshold value of similarity degree, value 20;C (i, j) represents whether the pixel belongs to USAN regions;
Then all pixels point in motion vector figure Vx (i, j) in circular shuttering position is counted:
Wherein Ω represents the pixel point set being in circular shuttering position in motion vector figure Vx (i, j), and u (i0, j0) is It is the USAN values of pixel (i0, j0);
Using circular shuttering traversing graph as all pixels of Vx (i, j), USAN values u (i, j) of all pixels can be obtained;
(2) after calculating the USAN values of all pixels, a preliminary characteristic point is obtained by thresholding and responds Rx (i, j);
Rx (i, j)=max (0, th2-u (i, j)) formula 4
Wherein th2 represents threshold value, value 28, and only as u (i, j) < th2, Rx (i, j) is only possible to more than 0, namely shows this Point is a characteristic point for preliminary judgement;Pixel set of all Rx (i, j) more than 0 is preliminary set of characteristic points;
(3) Process the preliminary feature point set with non-maximum suppression to obtain the final feature point set.
For each preliminarily judged feature point (i1, j1) in the preliminary set, examine the 5 × 5 neighborhood centered on it and check whether any other pixel in it has a response Rx(i, j) larger than Rx(i1, j1). If not, i.e. Rx(i1, j1) is the maximum, the feature point (i1, j1) is retained; otherwise it is deleted, i.e. Rx(i1, j1) is reset to 0. Processing all preliminarily judged feature points in this way yields the final feature point set of the motion vector map Vx(i, j): {(i, j) | Rx(i, j) > 0}.
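A sketch of the 5 × 5 non-maximum suppression of step (3). Again illustrative: the handling of ties between equal responses follows my reading of "no other pixel has a larger value", and zero-padding outside the map is an assumption.

```python
import numpy as np

def non_max_suppress(r, size=5):
    """Keep a preliminary feature point only when its response is not
    exceeded anywhere in the size x size neighborhood centered on it;
    otherwise reset its response to 0."""
    r = np.asarray(r, dtype=np.float64)
    half = size // 2
    padded = np.pad(r, half, mode='constant')  # zeros outside the map
    out = np.zeros_like(r)
    h, w = r.shape
    for i in range(h):
        for j in range(w):
            if r[i, j] > 0:
                window = padded[i:i + size, j:j + size]
                if r[i, j] >= window.max():  # no larger response nearby
                    out[i, j] = r[i, j]
    return out

def feature_points(r):
    """Final feature point set {(i, j) | R(i, j) > 0}."""
    return list(zip(*np.nonzero(non_max_suppress(r))))
```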
(4) In the same way, apply steps (1) to (3) to the motion vector map Vy(i, j) to perform SUSAN operator feature point detection, yielding the final feature point set {(i, j) | Ry(i, j) > 0} of Vy(i, j).
CN201710034540.5A 2017-01-17 2017-01-17 A kind of quick parallax estimation method for multi-view image coding Active CN106791845B (en)


Publications (2)

Publication Number Publication Date
CN106791845A true CN106791845A (en) 2017-05-31
CN106791845B CN106791845B (en) 2019-06-14



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344968A (en) * 2008-09-02 2009-01-14 西北工业大学 Movement compensation method for star sky background image
CN101833768A (en) * 2009-03-12 2010-09-15 索尼株式会社 Method and system for carrying out reliability classification on motion vector in video
CN102118561A (en) * 2010-05-27 2011-07-06 周渝斌 Camera movement detection system in monitoring system and method
WO2012074852A1 (en) * 2010-11-23 2012-06-07 Qualcomm Incorporated Depth estimation based on global motion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Shan: "Optimization and Implementation of Motion and Disparity Estimation Algorithms for Multi-view Video Coding", China Master's Theses Full-text Database, Information Science and Technology, 2015, No. 3 *
HUANG Junting: "Research on Fast Algorithms for Multi-view Video Coding Based on H.264", China Master's Theses Full-text Database, Information Science and Technology, 2014, No. 8 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921212A (en) * 2018-06-27 2018-11-30 努比亚技术有限公司 A kind of image matching method, mobile terminal and computer readable storage medium
CN108921212B (en) * 2018-06-27 2021-11-19 努比亚技术有限公司 Image matching method, mobile terminal and computer readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant