CN102740096A - Space-time combination based dynamic scene stereo video matching method - Google Patents


Info

Publication number
CN102740096A
CN102740096A CN2012102434612A CN201210243461A
Authority
CN
China
Prior art keywords
parallax
video
color
time domain
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102434612A
Other languages
Chinese (zh)
Inventor
朱云芳
杜歆
陈国赟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN2012102434612A priority Critical patent/CN102740096A/en
Publication of CN102740096A publication Critical patent/CN102740096A/en
Pending legal-status Critical Current

Abstract

The invention discloses a space-time combined stereo video matching method for dynamic scenes. The method models the motion between consecutive video frames as an affine transformation, establishing a temporal relation between them. On the basis of color segmentation, a smoothness constraint is imposed on the disparity variation of temporally overlapping segments across consecutive frames, and disparity values are selectively optimized using the geometric correlation between the left and right views. This avoids the mismatches and disparity jumps caused by the instability of color segments in conventional stereo video matching, so that the resulting disparity video is more accurate and more stable along the time axis. The method effectively reduces jumps in the disparity video, corrects the mismatches introduced by color segmentation, and yields a more accurate disparity video result.

Description

A space-time combined stereo video matching method for dynamic scenes
Technical field
The present invention relates to a method for computing a disparity-map video by stereo matching of a stereo video, and in particular to a method that uses the temporal constraints between consecutive frames of a stereo video to correct the spatial stereo matching result, thereby improving the accuracy of the stereo video matching result.
Background art
Obtaining disparity information through stereo matching and then performing three-dimensional reconstruction has long been a focus, and a difficulty, of computer vision research. With the rise of 3DTV, the obtained disparity maps are used not only in traditional fields such as 3D modeling and robot navigation, but also widely in stereo video coding, view synthesis, and other areas. Many researchers have worked on improving the accuracy and efficiency of stereo matching.
Conventional stereo matching methods generally take a binocular stereo image pair as the object of study, adopting local or global matching algorithms that compute disparity by selecting a particular cost function and optimization strategy. As applications of stereo matching deepen, however, the data to be processed is often a stereo video; in 3DTV, for example, the disparity of an entire stereo video sequence is usually required. A straightforward approach is to process each frame of the stereo video independently, obtaining disparity maps frame by frame. Yet, because of the inherent difficulty of stereo matching and the variations of illumination, noise, motion, texture, and occlusion in video, the disparity results are often unstable, manifesting as mismatches in the disparity video and disparity jumps along the time axis. The consequences of using disparity results containing such mismatches can be serious, e.g. providing wrong obstacle information to a moving robot, or producing visible artifacts in stereoscopic video rendering. Therefore, for stereo video one must attend not only to the accuracy of the single-frame disparity map but also to the stability of the disparity sequence along the time axis.
Researchers have begun to address the extraction of temporally stable disparity videos. The space-time stereo method extends the local matching window of a single frame into the temporal domain, forming a temporal window of span T over the video sequence, and estimates the disparity transformation between consecutive frames by motion modeling so as to increase the temporal correlation between them. Other work assumes that pixels within the same color segment have similar disparities and essentially consistent motion, so as to guarantee disparity stability within the overlapping segments of consecutive frames. Still other work exploits the similarity of disparity distributions between consecutive frames of the same scene, optimizing the matching cost through a likelihood function built on the disparity distribution to reduce jumps in the disparity results of consecutive frames.
Summary of the invention
The object of the invention is to provide a space-time combined stereo video matching method for dynamic scenes that overcomes the deficiencies of the prior art. On the basis of single-frame stereo matching, the method performs motion modeling with an affine transformation to establish a temporal relation between consecutive frames; subject to a smoothness constraint on the disparity variation of temporally overlapping segments across consecutive frames, it adopts a disparity likelihood optimization strategy to update the disparities of unstable regions, finally obtaining a stable disparity video.
The object of the invention is achieved through the following technical scheme: a space-time combined stereo video matching method for dynamic scenes, comprising the following steps:
(1) Color segmentation: perform color segmentation separately on the left and right views of the current frame of the stereo video, grouping pixels of similar color into the same color segment.
(2) Spatial stereo matching: using the color segmentation result, perform spatial stereo matching between the left and right views of the current frame of the stereo video to obtain an initial disparity map.
(3) Affine transformation modeling: perform feature point matching between the current frame and the previous frame of the left-view video of the stereo pair; under the affine transformation model, reject outliers with the RANSAC algorithm and solve for the affine transformation matrix.
(4) Temporal overlapping-segment disparity optimization: after the affine modeling, if two segments of similar color in the current and previous frames of the left-view video have a large overlapping region, the two color segments are considered temporally overlapping; for temporally overlapping color segments, the disparity can be expected to vary smoothly, and the disparity of the temporally overlapping segments is optimized accordingly.
(5) Sub-pixel interpolation: apply sub-pixel interpolation to eliminate the discontinuities introduced by disparity quantization.
Further, step (2) specifically comprises the following sub-steps:
(a) Compute the matching cost: perform window matching with adaptively weighted grayscale and gradient terms; within a predetermined search range, select the pixel with the minimum matching cost as the matched pixel.
(b) Variable-window matching: if the proportion of matched pixels in a color segment that pass the left-right consistency check is below 50%, enlarge the match window and recompute the matching cost of sub-step (a) to obtain new matched pixels.
(c) Disparity plane fitting: fit a disparity plane to the matched pixels within the same color segment to obtain the spatial matching result.
Further, step (4) specifically comprises the following sub-steps:
(a) According to the affine transformation model, track the color segments over time and find the temporally overlapping segments between consecutive frames.
(b) Examine the matched pixels between temporally overlapping segments: if the disparity values of a matched pixel pair differ greatly, the pixel is considered unstable; unstable points also include pixels whose values differ greatly from those of their affine-projected counterparts.
(c) Perform disparity likelihood optimization on the unstable points: for each color segment, analyze its disparity histogram, and reassign the disparities of the unstable points by disparity likelihood optimization.
The beneficial effects of the invention are as follows: by establishing an effective temporal relation between consecutive video frames through the affine transformation, the invention can effectively reduce the mismatches and disparity jumps produced by inconsistent color segments during stereo video matching, making the final disparity video more accurate and more stable along the time axis.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is a schematic diagram of temporally overlapping segments in the method of the invention;
Fig. 3 is a schematic diagram of the approximate disparity peak in the method of the invention.
Detailed description of the embodiments
The invention is described in further detail below with reference to the drawings and embodiments, from which its objects and effects will become more apparent.
Fig. 1 shows the flow chart of stereo video matching according to the invention.
The invention is directed at the matching of stereo video: the input stereo video consists of the video streams of the left and right viewpoints, and a disparity video is finally obtained through stereo video matching. With the processing of the method of the invention, the resulting disparity video remains stable over time, effectively eliminating mismatches, disparity jumps, and similar phenomena.
As shown in Fig. 1, in step 101, color segmentation is performed separately on the left and right views of the current frame of the input stereo video, grouping pixels of similar color into the same color segment.
For the color segmentation algorithm, see: Comaniciu, D. and P. Meer. Mean shift: a robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(5): 603-619.
As shown in Fig. 1, in step 102, spatial stereo matching is performed between the left and right views of the current frame of the stereo video within a predefined disparity search range, yielding the spatial stereo matching result. This comprises the following steps:
(a) Compute the initial matching cost: the invention combines grayscale and gradient information to measure the dissimilarity between pixels comprehensively, as shown in formula (1):
C(x, y, d) = (1 − w(c, g)) × C_sad(x, y, d)/n² + w(c, g) × C_grad(x, y, d)/n²;  (1)
In formula (1), C(x, y, d) is the matching cost at position (x, y) with disparity d, taking the left view as the reference view. w(c, g) denotes the weight given to the gradient information; it is set dynamically on the principle of maximizing the number of matched points that pass the left-right consistency check. C_sad(x, y, d) and C_grad(x, y, d) are the dissimilarity measures of the grayscale and gradient information respectively, defined as follows:
C_sad(x, y, d) = Σ_{(i,j)∈N(x,y)} |I_L(i, j) − I_R(i−d, j)|;  (2)
C_grad(x, y, d) = Σ_{(i,j)∈N(x,y)} |∂I_L(i, j)/∂x − ∂I_R(i−d, j)/∂x| + Σ_{(i,j)∈N(x,y)} |∂I_L(i, j)/∂y − ∂I_R(i−d, j)/∂y|;  (3)
In formulas (2) and (3), I_L(i, j) and I_R(i, j) denote the pixel values at position (i, j) in the left and right views respectively. N(x, y) is the match window, centered at (x, y), with side length n.
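As an illustration, formulas (1)-(3) can be sketched in Python with NumPy. This is not the patent's implementation: the weight w stands in for w(c, g) as a fixed value rather than being set dynamically from the left-right consistency check, and the window and search-range handling are simplified assumptions.

```python
import numpy as np

def matching_cost(I_L, I_R, x, y, d, n=3, w=0.5):
    """Cost of formula (1): weighted sum of the grayscale SAD (formula 2)
    and the gradient SAD (formula 3) over an n x n window N(x, y)."""
    r = n // 2
    I_L, I_R = I_L.astype(float), I_R.astype(float)
    gy_L, gx_L = np.gradient(I_L)   # axis 0 is y, axis 1 is x
    gy_R, gx_R = np.gradient(I_R)
    sl = np.s_[y - r:y + r + 1, x - r:x + r + 1]          # window in the left view
    sr = np.s_[y - r:y + r + 1, x - d - r:x - d + r + 1]  # shifted window in the right view
    c_sad = np.abs(I_L[sl] - I_R[sr]).sum()                          # formula (2)
    c_grad = (np.abs(gx_L[sl] - gx_R[sr]).sum()
              + np.abs(gy_L[sl] - gy_R[sr]).sum())                   # formula (3)
    return (1 - w) * c_sad / n**2 + w * c_grad / n**2                # formula (1)

def best_disparity(I_L, I_R, x, y, d_max, n=3, w=0.5):
    """Select the disparity with minimum cost in the predefined range [0, d_max]."""
    costs = [matching_cost(I_L, I_R, x, y, d, n, w) for d in range(d_max + 1)]
    return int(np.argmin(costs))
```

For example, if the right view equals the left view shifted horizontally by two pixels, `best_disparity` recovers a disparity of 2 at interior pixels.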
For the left-right consistency check, see: Scharstein, D. and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 2002, 47(1): 7-42.
(b) Variable-window matching: if the proportion of matched pixels in a color segment that pass the left-right consistency check is below 50%, enlarge the match window and recompute formula (1) to obtain a new matching cost. The window size is adjustable from 3 × 3 to 9 × 9. A small window helps preserve accuracy at edges and in narrow regions, while a large window helps in weakly textured regions.
(c) Disparity plane fitting: based on the assumption that disparity is smooth within a color segment, fit a disparity plane to the disparity values of the pixels belonging to the same color segment, obtaining the spatial matching result.
For the disparity plane fitting algorithm, see: Tao, H. and H.S. Sawhney. Global matching criterion and color segmentation based stereo. Fifth IEEE Workshop on Applications of Computer Vision, 2000, p. 246-253.
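A minimal sketch of sub-step (c): a least-squares fit of the plane d = a·x + b·y + c over the matched pixels of one color segment. The cited Tao and Sawhney paper describes the full segment-based algorithm; the robust and iterative parts are omitted here.

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares disparity plane d = a*x + b*y + c for the matched
    pixels (xs, ys, ds) of one color segment; returns (a, b, c)."""
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(ds, dtype=float), rcond=None)
    return coeffs

def plane_disparity(coeffs, x, y):
    """Disparity assigned to pixel (x, y) by the fitted plane."""
    a, b, c = coeffs
    return a * x + b * y + c
```

After fitting, every pixel of the segment is assigned the plane disparity, which enforces the intra-segment smoothness assumption.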
As shown in Fig. 1, in step 103, feature point matching is performed between the current frame (frame t) and the previous frame (frame t−1) of the left-view video of the stereo pair; under the affine transformation model, outliers are rejected with the RANSAC algorithm and the affine transformation matrix is solved.
In a typical dynamic stereo video scene, the motion in the video can generally be divided into two types: (1) the global motion of the scene (including background and foreground) caused by camera motion; (2) object motion, i.e. moving foreground exhibiting motion independent of the background. These two kinds of motion may coexist but differ in nature. Motion modeling generally targets the global scene motion caused by camera motion.
Among common motion modeling methods, those based on the fundamental matrix and those based on the affine matrix are the two most widely used. Because the affine matrix has fewer degrees of freedom and can be solved simply and quickly, the invention adopts affine transformation modeling, as shown in formula (4):
(x′, y′, 1)ᵀ = A · (x, y, 1)ᵀ,  (4)
In formula (4), (x′, y′, 1)ᵀ are the homogeneous pixel coordinates of a feature point in the previous frame (frame t−1) of the left-view video, (x, y, 1)ᵀ are the homogeneous pixel coordinates of the corresponding matched feature point in the current frame (frame t), and
A = [a11  a12  t_x; a21  a22  t_y; 0  0  1]
is the affine transformation matrix.
In the computation, feature point matching is first performed between the consecutive frames of the left-view video; then, under the affine transformation model of formula (4), outliers are rejected with the RANSAC algorithm and the affine transformation matrix A is solved.
For the feature point matching algorithm, see: Bay, H., A. Ess, et al. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 2008, 110(3): 346-359.
For the RANSAC algorithm, see: Fischler, M. and R. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981, 24(6): 381-395.
By computing the affine transformation matrix between any two consecutive frames with overlapping content, a temporal relation can be established between them; for example, pixels in the current frame (frame t) of the video can be projected into the previous frame (frame t−1) to obtain temporally related matched pixels.
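The following sketch solves the affine matrix A of formula (4) from already-matched point pairs by least squares. In the patent the correspondences come from SURF feature matching and outliers are first rejected with RANSAC; both steps are omitted here, so the code assumes clean matches.

```python
import numpy as np

def estimate_affine(pts_t, pts_t1):
    """Solve the matrix A of formula (4) mapping points of the current
    frame t to the previous frame t-1, by least squares."""
    pts_t = np.asarray(pts_t, dtype=float)
    pts_t1 = np.asarray(pts_t1, dtype=float)
    # Homogeneous coordinates of the frame-t points: rows (x, y, 1).
    H = np.column_stack([pts_t, np.ones(len(pts_t))])
    # Solve H @ M.T ~= pts_t1 for M = [[a11 a12 tx], [a21 a22 ty]].
    M, *_ = np.linalg.lstsq(H, pts_t1, rcond=None)
    A = np.eye(3)
    A[:2, :] = M.T
    return A

def project(A, pts):
    """Project frame-t pixels into frame t-1 with the estimated A."""
    pts = np.asarray(pts, dtype=float)
    H = np.column_stack([pts, np.ones(len(pts))])
    return (A @ H.T).T[:, :2]
```

`project` implements the temporal link described above: it carries frame-t pixel positions into frame t−1, where temporally related matched pixels can be read off.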
As shown in Fig. 1, in step 104, disparity optimization is performed for the temporally overlapping color segments.
After the affine projection, if color segments of similar color in the two consecutive frames of the video have a large spatial overlap, the two color segments are considered temporally overlapping. Temporally overlapping segments between consecutive frames should also vary smoothly in disparity.
According to the affine transformation model, the color segments can be tracked over time to find the matching segments between consecutive frames. Temporal tracking can be performed on either the left-view or the right-view video of the stereo pair; in general the left-view video is chosen. As shown in Fig. 2, according to the affine model, the color segment S in frame t of the left-view video is projected into frame t−1 to obtain S_f (drawn with a dotted line in Fig. 2). Denote by {s_i | i = 1, 2, ..., n} the color segments in frame t−1 that overlap the position of S_f. (Note: because of over-segmentation, several color segments in frame t−1 overlap S_f.) If the following conditions are satisfied, s_i and S are considered temporally overlapping color segments:
(a) 80% of the pixels of s_i fall within the projected region S_f (or 80% of the pixels of S_f fall within s_i);
(b) the color of s_i is close to that of the color segment S in frame t.
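Conditions (a) and (b) can be sketched as a predicate over boolean segment masks. The 80% overlap threshold follows the text; the mean-RGB color distance and its threshold are assumptions made for illustration.

```python
import numpy as np

def temporally_overlapping(seg_prev, seg_proj, color_prev, color_cur,
                           overlap_thresh=0.8, color_thresh=30.0):
    """Check conditions (a) and (b) for one candidate pair: seg_prev is
    the boolean mask of segment s_i in frame t-1, seg_proj the mask of
    the projected region S_f, and color_prev / color_cur the mean RGB
    colors of s_i and S."""
    inter = np.logical_and(seg_prev, seg_proj).sum()
    frac_a = inter / max(seg_prev.sum(), 1)   # share of s_i inside S_f
    frac_b = inter / max(seg_proj.sum(), 1)   # share of S_f inside s_i
    cond_a = max(frac_a, frac_b) >= overlap_thresh
    cond_b = np.abs(np.asarray(color_prev, dtype=float)
                    - np.asarray(color_cur, dtype=float)).sum() <= color_thresh
    return cond_a and cond_b
```

In practice this predicate would be evaluated for every candidate segment s_i overlapping S_f, collecting all temporally overlapping pairs.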
After the temporally overlapping color segments are determined, the corresponding matched pixels within them are further examined according to the affine transformation model. If the disparity values of a matched pixel pair are close, the pixel is considered stable; otherwise it is considered unstable. Unstable points also include those pixels whose values differ greatly after the affine projection. For these unstable points, the probability that the disparity value is wrong is high.
To decide whether the unstable points should be reassigned, a disparity histogram analysis is performed for each color segment, and the candidate disparity values are compared by disparity likelihood so as to determine whether the current disparity needs reassignment.
The invention uses the approximate peak disparity value to represent the disparity with the largest distribution probability within a color segment. The approximate peak disparity value of color segment S is computed according to formula (5):
D_S = argmax_l { h_{l−1} + h_l + h_{l+1} },  (5)
In formula (5), D_S is the approximate peak disparity value, l ranges over all disparity values in the color segment S, and h_l is the frequency of disparity value l in the disparity histogram of segment S.
Using the approximate peak disparity value avoids the influence of noise and other factors, which can scatter the most probable disparity over several adjacent bins of the histogram; in that case the raw histogram peak no longer represents the disparity with the largest distribution probability. As shown in Fig. 3, the peak disparity in the figure is the detected histogram peak, but the reasonable most-probable disparity is the approximate peak disparity obtained by formula (5).
For the unstable disparity points, the invention reassigns values by disparity likelihood optimization. For each pixel to be revised, the candidate disparity values comprise: the current disparity value; the approximate peak disparity value of its color segment in the current frame (frame t); the approximate peak disparity value of the matched segment in the previous frame (frame t−1); and the disparity value of the affine-projected point in the previous frame (frame t−1).
The disparity likelihood L(x, y, d), which measures how well the different candidate disparity values agree with the geometric correlation, is computed from two parts, as shown in formula (6):
L(x, y, d) = p_c(x, y, d) × p_v(x, y, d, d′),  (6)
where p_c(x, y, d) is the dissimilarity measure between the left-view pixel I_L(x, y) and its matched pixel I_R(x−d, y) in the right view when the disparity value is d, expressed by formula (7):
p_c(x, y, d) = σ_c / (σ_c + Σ_{i∈{R,G,B}} |I_{L,i}(x, y) − I_{R,i}(x−d, y)|),  (7)
In formula (7), i ∈ {R, G, B} indicates that the pixel dissimilarity is accumulated over the R, G, and B channels computed separately. σ_c is a preset parameter.
For a pixel I_L(x, y) in the left view, its matched pixel I_R(x−d, y) in the right view is found according to the disparity value d of that point; then, according to the disparity value d′ of the right-view point I_R(x−d, y), the point is projected back into the left view, giving the back-projected point I_L(x−d+d′, y). p_v(x, y, d, d′) measures the correlation between I_L(x, y) and I_L(x−d+d′, y), as shown in formula (8):
p_v(x, y, d, d′) = exp( − Σ_{i∈{R,G,B}} (I_{L,i}(x, y) − I_{L,i}(x−d+d′, y))² / (2σ_d²) ),  (8)
In formula (8), i ∈ {R, G, B} has the same meaning as in formula (7), and σ_d is a preset parameter.
If there is neither occlusion nor matching error, I_L(x, y) and I_L(x−d+d′, y) correspond to the same pixel in the left view, i.e. d = d′. In real matching, however, many-to-one or one-to-many correspondences may occur; p_v is therefore defined as a geometric correlation measure that better captures the similarity between the original point and the back-projected point.
According to the computed disparity likelihoods, the WTA (winner-take-all) principle selects, among the multiple candidate disparity values, the optimal one with the maximum likelihood, giving the optimized disparity result.
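Formulas (6)-(8) and the WTA selection can be sketched as follows. The right-view disparity map d_R supplies d′; the values of the preset parameters σ_c and σ_d are illustrative assumptions.

```python
import numpy as np

def p_c(I_L, I_R, x, y, d, sigma_c=10.0):
    """Formula (7): color agreement between I_L(x, y) and I_R(x-d, y),
    accumulated over the R, G, B channels."""
    diff = np.abs(I_L[y, x].astype(float) - I_R[y, x - d].astype(float)).sum()
    return sigma_c / (sigma_c + diff)

def p_v(I_L, d_R, x, y, d, sigma_d=10.0):
    """Formula (8): geometric correlation between I_L(x, y) and the
    back-projected point I_L(x-d+d', y), with d' = d_R[y, x-d]."""
    d_prime = d_R[y, x - d]
    diff2 = ((I_L[y, x].astype(float)
              - I_L[y, x - d + d_prime].astype(float)) ** 2).sum()
    return np.exp(-diff2 / (2 * sigma_d ** 2))

def best_candidate(I_L, I_R, d_R, x, y, candidates, sigma_c=10.0, sigma_d=10.0):
    """Formula (6) plus winner-take-all over the candidate disparities."""
    scores = [p_c(I_L, I_R, x, y, d, sigma_c) * p_v(I_L, d_R, x, y, d, sigma_d)
              for d in candidates]
    return candidates[int(np.argmax(scores))]
```

On a synthetic color ramp whose true disparity is 2, the likelihood is maximal for the candidate d = 2, and WTA selects it.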
To eliminate the discontinuities in the final disparity map caused by quantization, in step 105 the invention applies an interpolation method for sub-pixel refinement.
Let d be the disparity value confirmed by the disparity likelihood optimization, and let d⁻ = d − 1 and d⁺ = d + 1, with corresponding matching costs E(d), E(d⁻), and E(d⁺); the refined disparity is computed according to formula (9):
d_refine = d − (E(d⁺) − E(d⁻)) / (2(E(d⁺) + E(d⁻) − 2E(d))),  (9)
Sub-pixel refinement effectively alleviates the discontinuity of the disparity map, making the disparity result look smoother.
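Formula (9) is the standard parabola-vertex refinement: a parabola is fitted through the costs at d − 1, d, and d + 1, and its vertex gives the sub-pixel disparity. A sketch, with E any function returning the matching cost at an integer disparity:

```python
def refine_disparity(d, E):
    """Formula (9): sub-pixel disparity from the parabola through the
    matching costs at d-1, d, and d+1."""
    e0, em, ep = E(d), E(d - 1), E(d + 1)
    denom = 2.0 * (ep + em - 2.0 * e0)
    if denom == 0:          # flat cost: keep the integer disparity
        return float(d)
    return d - (ep - em) / denom
```

For a purely quadratic cost the vertex is recovered exactly; for example, with E(x) = (x − 2.3)² the refined disparity at d = 2 is 2.3.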
The above embodiment serves to illustrate the invention, not to limit it; any modification or change made to the invention within the spirit of the invention and the protection scope of the claims falls within the protection scope of the invention.

Claims (3)

1. A space-time combined stereo video matching method for dynamic scenes, characterized in that the method comprises the following steps:
(1) Color segmentation: perform color segmentation separately on the left and right views of the current frame of the stereo video, grouping pixels of similar color into the same color segment;
(2) Spatial stereo matching: using the color segmentation result, perform spatial stereo matching between the left and right views of the current frame of the stereo video to obtain an initial disparity map;
(3) Affine transformation modeling: perform feature point matching between the current frame and the previous frame of the left-view video of the stereo pair; under the affine transformation model, reject outliers with the RANSAC algorithm and solve for the affine transformation matrix;
(4) Temporal overlapping-segment disparity optimization: after the affine modeling, if two segments of similar color in the current and previous frames of the left-view video have a large overlapping region, the two color segments are considered temporally overlapping; for temporally overlapping color segments, the disparity can be expected to vary smoothly, and the disparity of the temporally overlapping segments is optimized accordingly;
(5) Sub-pixel interpolation: apply sub-pixel interpolation to eliminate the discontinuities introduced by disparity quantization.
2. The space-time combined stereo video matching method for dynamic scenes according to claim 1, characterized in that step (2) specifically comprises the following sub-steps:
(a) Compute the matching cost: perform window matching with adaptively weighted grayscale and gradient terms; within a predetermined search range, select the pixel with the minimum matching cost as the matched pixel;
(b) Variable-window matching: if the proportion of matched pixels in a color segment that pass the left-right consistency check is below 50%, enlarge the match window and recompute the matching cost of sub-step (a) to obtain new matched pixels;
(c) Disparity plane fitting: fit a disparity plane to the matched pixels within the same color segment to obtain the spatial matching result.
3. The space-time combined stereo video matching method for dynamic scenes according to claim 1, characterized in that step (4) specifically comprises the following sub-steps:
(a) According to the affine transformation model, track the color segments over time and find the temporally overlapping segments between consecutive frames;
(b) Examine the matched pixels between temporally overlapping segments: if the disparity values of a matched pixel pair differ greatly, the pixel is considered unstable; unstable points also include pixels whose values differ greatly from those of their affine-projected counterparts;
(c) Perform disparity likelihood optimization on the unstable points: for each color segment, analyze its disparity histogram, and reassign the disparities of the unstable points by disparity likelihood optimization.
CN2012102434612A 2012-07-13 2012-07-13 Space-time combination based dynamic scene stereo video matching method Pending CN102740096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102434612A CN102740096A (en) 2012-07-13 2012-07-13 Space-time combination based dynamic scene stereo video matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012102434612A CN102740096A (en) 2012-07-13 2012-07-13 Space-time combination based dynamic scene stereo video matching method

Publications (1)

Publication Number Publication Date
CN102740096A (en) 2012-10-17

Family

ID=46994768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102434612A Pending CN102740096A (en) 2012-07-13 2012-07-13 Space-time combination based dynamic scene stereo video matching method

Country Status (1)

Country Link
CN (1) CN102740096A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈国赟 (Chen Guoyun): "Space-time combined depth video estimation and related research" (时空结合的深度视频估计及相关研究), China Master's Theses Full-Text Database *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337064A (en) * 2013-04-28 2013-10-02 四川大学 Method for removing mismatching point in image stereo matching
CN104268880A (en) * 2014-09-29 2015-01-07 沈阳工业大学 Depth information obtaining method based on combination of features and region matching
CN105657401B (en) * 2016-01-13 2017-10-24 深圳创维-Rgb电子有限公司 A kind of bore hole 3D display methods, system and bore hole 3D display device
CN105657401A (en) * 2016-01-13 2016-06-08 深圳创维-Rgb电子有限公司 Naked eye 3D display method and system and naked eye 3D display device
CN106643562B (en) * 2016-10-27 2019-05-03 天津大学 Structural light stripes projective techniques based on time domain airspace hybrid coding
CN106643562A (en) * 2016-10-27 2017-05-10 天津大学 Time domain and space domain hybrid coding based structured light fringe projection method
CN109191506A (en) * 2018-08-06 2019-01-11 深圳看到科技有限公司 Processing method, system and the computer readable storage medium of depth map
CN109191506B (en) * 2018-08-06 2021-01-29 深圳看到科技有限公司 Depth map processing method, system and computer readable storage medium
CN110188754A (en) * 2019-05-29 2019-08-30 腾讯科技(深圳)有限公司 Image partition method and device, model training method and device
CN110188754B (en) * 2019-05-29 2021-07-13 腾讯科技(深圳)有限公司 Image segmentation method and device and model training method and device
US11900613B2 (en) 2019-05-29 2024-02-13 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, model training method and apparatus, device, and storage medium
CN113286163A (en) * 2021-05-21 2021-08-20 成都威爱新经济技术研究院有限公司 Timestamp error calibration method and system for virtual shooting live broadcast
CN113286163B (en) * 2021-05-21 2022-07-08 成都威爱新经济技术研究院有限公司 Timestamp error calibration method and system for virtual shooting live broadcast
CN114187556A (en) * 2021-12-14 2022-03-15 养哇(南京)科技有限公司 High-definition video intelligent segmentation method based on picture features
CN114187556B (en) * 2021-12-14 2023-12-15 华策影视(北京)有限公司 Intelligent high-definition video segmentation method based on picture characteristics

Similar Documents

Publication Publication Date Title
CN102740096A (en) Space-time combination based dynamic scene stereo video matching method
CN108520554B (en) Binocular three-dimensional dense mapping method based on ORB-SLAM2
Hamzah et al. Stereo matching algorithm based on per pixel difference adjustment, iterative guided filter and graph segmentation
CN102184540B (en) Sub-pixel level stereo matching method based on scale space
CN102510506B (en) Virtual and real occlusion handling method based on binocular image and range information
CN103310421B Fast stereo matching and disparity map acquisition method for high-definition image pairs
CN103383776B A hierarchical stereo matching algorithm based on two-stage cultivation and Bayesian estimation
CN110189339A Depth-map-assisted active contour matting method and system
CN103996202A (en) Stereo matching method based on hybrid matching cost and adaptive window
CN103384343B Method and device for filling image holes
CN103996201A (en) Stereo matching method based on improved gradient and adaptive window
CN101765019B Stereo matching algorithm for motion-blurred and illumination-varying images
CN103268604B (en) Binocular video depth map acquiring method
CN107862735A An RGBD three-dimensional scene reconstruction method based on structural information
CN106875443A Integer-pixel search method and device for three-dimensional digital speckle based on grayscale constraint
CN103955945A Adaptive color image segmentation method based on binocular disparity and active contours
CN102903111B Stereo matching algorithm for large low-texture areas based on image segmentation
CN108629809B (en) Accurate and efficient stereo matching method
Ma et al. Segmentation-based stereo matching using combinatorial similarity measurement and adaptive support region
CN107610148A A foreground segmentation method based on a binocular stereo vision system
Jung et al. Boundary-preserving stereo matching with certain region detection and adaptive disparity adjustment
El Ansari et al. A new regions matching for color stereo images
Antunes et al. Piecewise-planar reconstruction using two views
Park et al. Shape-indifferent stereo disparity based on disparity gradient estimation
Xiao et al. A segment-based stereo matching method with ground control points

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121017