CN103761725A - Video plane detection method based on improved algorithm - Google Patents

Video plane detection method based on improved algorithm

Info

Publication number
CN103761725A
CN103761725A (application CN201310450214.4A; granted as CN103761725B)
Authority
CN
China
Prior art keywords
plane
frame
video
track
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310450214.4A
Other languages
Chinese (zh)
Other versions
CN103761725B (en)
Inventor
Huang Hua (黄华)
Tao Lei (陶蕾)
Zhang Lei (张磊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Help You Electronic Technology Co ltd
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201310450214.4A priority Critical patent/CN103761725B/en
Publication of CN103761725A publication Critical patent/CN103761725A/en
Application granted granted Critical
Publication of CN103761725B publication Critical patent/CN103761725B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a video plane extraction method based on an improved algorithm. The method first tracks the feature points of a given video sequence with the standard KLT method and records the resulting feature-point tracks. By computing the reprojection error between adjacent frames, it judges whether each track belongs to the plane corresponding to the current homography matrix, and thereby tracks and extracts planes over the whole video sequence. Finally, using the obtained plane models and their corresponding homography sets, the video is segmented to produce the final plane extraction result. The method extracts planes directly from the video sequence; by choosing tracks as the objects of operation, it guarantees the global consistency of the extraction result while converging significantly faster and more accurately in its tests.

Description

A video plane detection method based on an improved algorithm
Technical field
The present invention relates to a video plane detection method based on an improved algorithm, and in particular to a method for extracting planes from video sequences based on feature point tracking, belonging to the field of image processing.
Background technology
Plane extraction has long been an important research direction in computer vision and graphics. Planar structures are among the most common basic geometric features in real scenes: they encode a large amount of two-dimensional structural information about objects and impose strong geometric constraints on the points lying on them. Many practical applications, such as building scene spaces or modeling obstacles and target objects, require planar structures in the scene to be extracted in advance for processing or interaction. Much work exists in this area, for example space-plane extraction based on J-Linkage clustering (Toldo R, Fusiello A. Robust multiple structures estimation with J-linkage. Computer Vision - ECCV 2008. Springer Berlin Heidelberg, 2008: 537-547) and plane extraction based on line matching (Kim H, Lee S. Multiple planar region extraction based on the coplanar line pairs. Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011: 2059-2064). These methods can generate attractive results, but one class of them must first recover the three-dimensional information and camera parameters of the scene, a process whose computational cost is very high; if only the planar structures in an image or video are needed, such algorithms are prohibitively expensive. Another class models the transformation between images with affine transformations, ignoring nonlinear effects that may exist in the transformation process. It is therefore difficult for these traditional plane extraction algorithms to extract planar structures from a video sequence quickly and accurately.
Summary of the invention
To solve the problem that conventional plane extraction methods have difficulty extracting planar structures from video sequences quickly and accurately, the present invention proposes a video plane extraction method based on an improved algorithm, which provides better plane extraction for video sequences. The method need not recover the camera's three-dimensional information; under the epipolar constraint, it directly computes plane-induced homographies from a projective transformation model established between adjacent video frames, each homography corresponding to one plane in the scene. Each frame of the video is then segmented according to the computed homographies, extracting the planar structures in the video.
The concrete implementation of the present invention is as follows:
A video plane extraction method based on an improved algorithm comprises the following steps:
Step 1, video reading:
Read the input video and decode it frame by frame into images for later processing.
Step 2, feature point tracking:
Use the KLT algorithm to extract and track the feature points in each video frame, and record the resulting feature-point tracks spanning the whole video sequence.
Step 3, plane tracking and extraction:
For the N-frame video sequence read in step 1, suppose step 2 yields a set of M feature-point tracks $T = \{T_j\}_{j=1}^{M}$. For each track $T_j$, let $p_j$ and $q_j$ ($1 \le p_j < q_j \le N$) denote its starting and ending frame numbers, so that $T_j = \{x_j^i\}_{i=p_j}^{q_j}$, where $x_j^i$ denotes the homogeneous coordinates of the $j$-th feature point in frame $i$. The set of all tracks spanning frames $a$ and $b$ is denoted $T_{ab} = \{T_j \in T : p_j \le a,\ q_j \ge b\}$. For a track point $x_j^i$, if it lies on a plane, then its coordinates in frame $i$ and frame $k$ are related by the homography $H_{ik}$ induced by that plane: $x_j^k \simeq H_{ik} x_j^i$. To measure how well a plane model fits a track $T_j$, its reprojection error with respect to frame $k$ ($p_j \le k \le q_j$) is defined as:

$$e = \sum_{i=p_j,\ i \ne k}^{q_j} \left( \|x_j^k - H_{ki} x_j^i\|^2 + \|x_j^i - H_{ki}^{-1} x_j^k\|^2 \right)$$
According to the reprojection error, all tracks are divided into different sets, each set corresponding to one plane. This proceeds in two steps:
(1) Random sampling: for the video sequence $\{F_i\}_{i=1}^{N}$ read in step 1, construct the $N-1$ adjacent frame pairs $C = \{(F_1, F_2), (F_2, F_3), \ldots, (F_{N-1}, F_N)\}$. First choose any frame pair $(F_{i-1}, F_i)$ from $C$, take the tracks $T_{(i-1)i}$ spanning these two frames, and perform Delaunay triangulation on their corresponding points in $F_{i-1}$, obtaining a set of Delaunay triangles $T_{tri}$. Then take any triangle $t$ in $T_{tri}$ and use the three tracks corresponding to $t$ to compute a homography $H_{(i-1)i}$ between $F_{i-1}$ and $F_i$, by the following formula:
Given the fundamental matrix $F$ and three image point correspondences $x_i \leftrightarrow x_i'$, the homography induced by the plane through the three 3D points is:

$$H = A - e'(M^{-1} b)^T$$

where $A = [e']_{\times} F$, and $e'$ is the epipole in the second image, satisfying $Fe = 0$ and $F^T e' = 0$. $[e']_{\times}$ denotes the skew-symmetric matrix of $e' = (e_1, e_2, e_3)^T$:

$$[e']_{\times} = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}$$

$b$ is a 3-vector with elements

$$b_i = \frac{(x_i' \times (A x_i))^T (x_i' \times e')}{\|x_i' \times e'\|^2}$$

and $M$ is the $3 \times 3$ matrix whose rows are the vectors $x_i^T$.
Given a threshold $\varepsilon$, the tracks in $T_{(i-1)i}$ are divided into an inlier set $T_{in}$ and an outlier set $T_{out}$ by testing whether the reprojection error $e = \|x_i - H_{(i-1)i} x_{i-1}\|^2 + \|x_{i-1} - H_{(i-1)i}^{-1} x_i\|^2$ is less than $\varepsilon$.
(2) Consistency computation: select the frame pair $(F_i, F_{i+1})$ adjacent to $(F_{i-1}, F_i)$, take the tracks in $T_{i(i+1)}$ already marked as inliers, i.e. $T_{in} \cap T_{i(i+1)}$, and perform Delaunay triangulation on their corresponding points in $F_i$, obtaining a new set of Delaunay triangles $T_{tri}$. Then, as in the previous step, use the three tracks of a triangle $t$ to compute the homography $H_{i(i+1)}$ between $F_i$ and $F_{i+1}$, and divide the tracks in $T_{i(i+1)}$ into $T_{in}$ and $T_{out}$ by testing whether the reprojection error $e$ is less than $\varepsilon$. Repeat this step until all frames have been processed or $T_{in} \cap T_{i(i+1)}$ contains fewer than 3 tracks.
Steps (1) and (2) are repeated many times; the plane whose inlier set $T_{in}$ is largest, together with its corresponding set of homographies $\{H\}$, is output as the extraction result for this plane.
Step 4, plane segmentation:
Given the plane models obtained in step 3 and their corresponding homography sets $\{H\}$, the video is segmented according to $\{H\}$ to obtain the final result; the present invention selects the Graph Cuts algorithm to segment the video:
The goal of segmenting an N-frame video sequence containing P planes is to assign each pixel $p \in F = \{F_1, F_2, \ldots, F_N\}$ a label $f_p \in \{0, 1, \ldots, P-1\}$, obtaining for each frame the set of pixels belonging to the same plane, $f = \{p : f_p = i\}$, $i \in \{0, 1, \ldots, P-1\}$. The label set $\{f_p\}$ is obtained by optimizing an energy function. First, the input image is represented by a graph $G = (N, E)$, where $N$ is the set of nodes, each node $n_i \in N$ corresponding to a pixel of the input image, and $E$ is the set of edges between nodes, each edge corresponding to a pair of related nodes $\langle n_p, n_q \rangle$. The objective function of the Graph Cuts algorithm is:

$$E(f) = \sum_{n \in N} D_n(f_n) + \sum_{n_p < n_q \in N} V_{n_p, n_q}(f_{n_p}, f_{n_q})$$

where $D_n(f_n)$ is the data penalty incurred when the label of node $n$ is set to $f_n$; $n_p < n_q$ denotes a one-directional pairing of neighboring nodes in the graph, where "neighboring" means 4-connected or 8-connected; and $V_{n_p, n_q}(f_{n_p}, f_{n_q})$ is the smoothness penalty, based on spatial continuity, incurred when the two neighboring nodes $\langle n_p, n_q \rangle$ are assigned labels $f_{n_p}$ and $f_{n_q}$ respectively. The label $\{f_n\}$ of each node is obtained by minimizing this objective function.
The data penalty is defined as:

$$D_n(f_n) = \left( (R(H[f_n]n) - R(n))^2 + (G(H[f_n]n) - G(n))^2 + (B(H[f_n]n) - B(n))^2 \right)^{1/2}$$

where $H[f_n]$ is the homography induced by the $f_n$-th plane, and $R(\cdot)$, $G(\cdot)$, $B(\cdot)$ denote the RGB components of the current pixel. The smoothness penalty is the connection weight between node $n_p$ and its neighbor $n_q$, defined as:

$$V_{n_p, n_q}(f_{n_p}, f_{n_q}) = \lambda (f_{n_p} - f_{n_q})^2$$

where $n_p, n_q \in N$ and $\lambda$ is a constant used to balance the data penalty term and the smoothness penalty term, typically in the range 80 to 100.
Beneficial effect:
(1) traditional plane extracting method one class need to recover three-dimensional information and the camera parameters of scene in advance, and this process computation amount is very big, iff the planar structure that need to extract in image or video, and this class methods cost prohibitive; And another kind of algorithm generally with affined transformation set up image between transformation relation model, and ignored the nonlinear effect that may exist in conversion process, this is not accurate enough often.In addition, current existing Equations of The Second Kind plane extraction algorithm substantially all carries out for image, the plane of image is extracted and normally between two width views, is carried out, and the plane of video is extracted except will extract plane between consecutive frame, also will guarantee the global coherency of this plane in whole video, the plane scene structure each frame of video finally being extracted should be consistent.And the inventive method is by the projective transformation model setting up between video consecutive frame under polar curve constraint, needn't recover video camera three-dimensional information, directly can from video sequence, extract plane; And select track as operand, guaranteed the global coherency of plane extraction result.
(2) the present invention has adopted improved RANSAC plane extracting method, get at random four unique points from traditional RANSAC algorithm tests different at every turn, the present invention utilizes not collinear three points can determine this axiom of plane, the unique point tracing into is carried out to Delaunay trigonometric ratio, and three points that each test is got on a triangle at random calculate in conjunction with fundamental matrix.And three points on a triangle belong to conplane probability obviously will be higher than four unique points of getting at random probability at grade, therefore, algorithm of the present invention has not only been accelerated the speed of test convergence greatly, and accuracy is higher.
Embodiment
The embodiment of the method of the invention is described in detail below.
In this video plane extraction method based on an improved algorithm, the feature points of a given video sequence are first tracked with the standard KLT method and the resulting feature-point tracks are recorded; by computing the reprojection error between adjacent frames, the method judges whether each track belongs to the plane corresponding to the current homography matrix, and thereby tracks and extracts planes over the whole video sequence; using the obtained plane models and their corresponding homography sets, the video is segmented to produce the final plane extraction result.
The specific implementation process of the method is as follows:
Step 1, video reading:
Read the input video and decode it frame by frame into images for later processing.
Step 2, feature point tracking:
Use the standard KLT algorithm (Shi J, Tomasi C. Good features to track. Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94, 1994) to extract and track the feature points in each video frame, and record the resulting feature-point tracks spanning the whole video sequence.
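The patent does not spell out the KLT tracker itself. As an illustration, the core of KLT, a single translational Lucas-Kanade update at one feature point, can be sketched in NumPy as below; the window size and the use of a single non-pyramidal iteration are simplifying assumptions, not part of the patented method:

```python
import numpy as np

def lucas_kanade_step(I0, I1, x, y, win=15):
    """One translational Lucas-Kanade update at integer pixel (x, y).

    Builds the least-squares system A d = b from the spatial gradients of I0
    and the temporal difference I1 - I0 over a square window, and returns the
    estimated displacement (dx, dy) of the point from I0 to I1.
    """
    h = win // 2
    p0 = I0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = I1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p0)          # spatial gradients (row, column)
    It = p1 - p0                      # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                          # (dx, dy)
```

A full KLT implementation iterates this update over an image pyramid and selects well-textured features (Shi-Tomasi corners) before tracking; OpenCV's `cv2.calcOpticalFlowPyrLK` provides a production version.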
Step 3, plane tracking and extraction:
For the N-frame video sequence read in step 1, suppose step 2 yields a set of M feature-point tracks $T = \{T_j\}_{j=1}^{M}$. For each track $T_j$, let $p_j$ and $q_j$ ($1 \le p_j < q_j \le N$) denote its starting and ending frame numbers, so that $T_j$ can be written as $T_j = \{x_j^i\}_{i=p_j}^{q_j}$, where $x_j^i$ denotes the homogeneous coordinates of the $j$-th feature point in frame $i$. The set of all tracks spanning frames $a$ and $b$ can be denoted $T_{ab} = \{T_j \in T : p_j \le a,\ q_j \ge b\}$. For a track point $x_j^i$, if it lies on a plane, then its coordinates in frame $i$ and frame $k$ are related by the homography $H_{ik}$ induced by that plane: $x_j^k \simeq H_{ik} x_j^i$. The present invention measures the fitness of a plane model to a track with the reprojection error: for a track $T_j$, its reprojection error with respect to frame $k$ ($p_j \le k \le q_j$) is defined as:

$$e = \sum_{i=p_j,\ i \ne k}^{q_j} \left( \|x_j^k - H_{ki} x_j^i\|^2 + \|x_j^i - H_{ki}^{-1} x_j^k\|^2 \right)$$
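The reprojection error above can be evaluated directly once the per-frame homographies are available. A minimal sketch follows, assuming the homographies $H_{ki}$ are stored in a dictionary keyed by frame pair, and taking the norms on inhomogeneous (dehomogenized) coordinates, which is the usual reading of such symmetric transfer errors:

```python
import numpy as np

def track_reproj_error(track, k, H):
    """Symmetric reprojection error of one track w.r.t. reference frame k.

    track: dict {frame i: homogeneous coordinates x_j^i (3-vector)}
    H:     dict {(k, i): 3x3 homography H_ki mapping frame i into frame k}
    """
    def dehom(v):
        return v[:2] / v[2]

    xk = track[k]
    e = 0.0
    for i, xi in track.items():
        if i == k:
            continue
        Hki = H[(k, i)]
        # forward transfer i -> k and backward transfer k -> i
        e += np.sum((dehom(xk) - dehom(Hki @ xi)) ** 2)
        e += np.sum((dehom(xi) - dehom(np.linalg.inv(Hki) @ xk)) ** 2)
    return e
```

A track consistent with the plane's homographies scores near zero; the threshold test against $\varepsilon$ then classifies it as inlier or outlier.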
The final purpose of the algorithm is to divide all tracks into different sets according to the reprojection error, each set corresponding to one plane. This proceeds in two steps:
1. Random sampling: for the video sequence $\{F_i\}_{i=1}^{N}$ read in step 1, construct the $N-1$ adjacent frame pairs $C = \{(F_1, F_2), (F_2, F_3), \ldots, (F_{N-1}, F_N)\}$. First choose any frame pair $(F_{i-1}, F_i)$ from $C$ (in practice the first adjacent pair is generally chosen), take the tracks $T_{(i-1)i}$ spanning these two frames, and perform Delaunay triangulation on their corresponding points in $F_{i-1}$, obtaining a set of Delaunay triangles $T_{tri}$. Then take any triangle $t$ in $T_{tri}$; the three tracks corresponding to $t$ suffice to compute a homography $H_{(i-1)i}$ between $F_{i-1}$ and $F_i$, by the following formula:
Given the fundamental matrix $F$ and three image point correspondences $x_i \leftrightarrow x_i'$, the homography induced by the plane through the three 3D points is:

$$H = A - e'(M^{-1} b)^T$$

where $A = [e']_{\times} F$, and $e'$ is the epipole in the second image, satisfying $Fe = 0$ and $F^T e' = 0$. $[e']_{\times}$ denotes the skew-symmetric matrix of $e' = (e_1, e_2, e_3)^T$:

$$[e']_{\times} = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}$$

$b$ is a 3-vector with elements

$$b_i = \frac{(x_i' \times (A x_i))^T (x_i' \times e')}{\|x_i' \times e'\|^2}$$

and $M$ is the $3 \times 3$ matrix whose rows are the vectors $x_i^T$.
Given a threshold $\varepsilon$ (here we take $\varepsilon = 4$), the tracks in $T_{(i-1)i}$ can be divided into $T_{in}$ and $T_{out}$ by testing whether the reprojection error $e = \|x_i - H_{(i-1)i} x_{i-1}\|^2 + \|x_{i-1} - H_{(i-1)i}^{-1} x_i\|^2$ is less than $\varepsilon$.
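The three-correspondence construction above matches the classical plane-induction formula from multi-view geometry texts and can be sketched as follows; the epipole $e'$ is recovered here from the right null space of $F^T$ via SVD, which is a standard choice rather than something the patent prescribes:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that [v]_x w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def homography_from_F(F, xs, xps):
    """Homography induced by the plane through three point correspondences.

    F:   3x3 fundamental matrix
    xs:  three homogeneous points x_i in the first image
    xps: the corresponding points x_i' in the second image
    Implements H = A - e'(M^{-1} b)^T with A = [e']_x F.
    """
    _, _, Vt = np.linalg.svd(F.T)          # epipole e': F^T e' = 0
    ep = Vt[-1]
    A = skew(ep) @ F
    M = np.stack(xs)                       # 3x3 matrix whose rows are x_i^T
    b = np.array([np.cross(xp, A @ x) @ np.cross(xp, ep)
                  / np.sum(np.cross(xp, ep) ** 2)
                  for x, xp in zip(xs, xps)])
    return A - np.outer(ep, np.linalg.inv(M) @ b)
```

The construction is exact for noise-free correspondences (up to projective scale); with tracked points, each candidate $H$ would then be verified by the reprojection test against $\varepsilon$.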
2. Consistency computation: select the frame pair $(F_i, F_{i+1})$ adjacent to $(F_{i-1}, F_i)$, take the tracks in $T_{i(i+1)}$ already marked as inliers, i.e. $T_{in} \cap T_{i(i+1)}$, and perform Delaunay triangulation on their corresponding points in $F_i$, obtaining a new set of Delaunay triangles $T_{tri}$. Then, as in the previous step, use the three tracks of a triangle $t$ to compute the homography $H_{i(i+1)}$ between $F_i$ and $F_{i+1}$, and divide the tracks in $T_{i(i+1)}$ into $T_{in}$ and $T_{out}$ by testing whether the reprojection error $e$ is less than $\varepsilon$. Repeat this step until all frames have been processed or $T_{in} \cap T_{i(i+1)}$ contains fewer than 3 tracks.
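The alternation of sampling and consistency propagation across adjacent frame pairs can be sketched as the following skeleton. Two simplifications are assumed for illustration: the homography fit is abstracted into a caller-supplied `fit_h`, standing in for the fundamental-matrix construction, and the Delaunay triangle is replaced by a plain random 3-track sample, so this is an illustrative loop rather than the patented procedure itself:

```python
import random
import numpy as np

def propagate_plane(n_frames, tracks, fit_h, eps=4.0, seed=0):
    """Classify tracks as inliers of one plane by propagating a homography
    test across consecutive frame pairs, as in steps 1-2.

    tracks: {track_id: {frame: (x, y)}}  feature-point tracks
    fit_h:  callable taking three correspondences ([(x,y)]*3, [(x,y)]*3)
            and returning a 3x3 homography
    """
    rng = random.Random(seed)
    inliers = set(tracks)
    for a in range(n_frames - 1):
        b = a + 1
        span = sorted(t for t in inliers
                      if a in tracks[t] and b in tracks[t])
        if len(span) < 3:
            break
        tri = rng.sample(span, 3)      # stand-in for one Delaunay triangle
        H = fit_h([tracks[t][a] for t in tri], [tracks[t][b] for t in tri])
        Hinv = np.linalg.inv(H)
        keep = set()
        for t in span:                 # symmetric reprojection test vs eps
            xa = np.array([*tracks[t][a], 1.0])
            xb = np.array([*tracks[t][b], 1.0])
            fwd = H @ xa
            bwd = Hinv @ xb
            e = (np.sum((xb[:2] - fwd[:2] / fwd[2]) ** 2)
                 + np.sum((xa[:2] - bwd[:2] / bwd[2]) ** 2))
            if e < eps:
                keep.add(t)
        inliers = keep
    return inliers
```

Running this whole loop many times with different samples, and keeping the largest surviving inlier set, corresponds to the repeated application of steps 1 and 2 described next.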
Steps 1 and 2 are repeated many times; the plane whose inlier set $T_{in}$ is largest, together with its corresponding set of homographies $\{H\}$, is output as the extraction result for this plane.
Step 4, plane segmentation:
Given the plane models obtained in step 3 and their corresponding homography sets $\{H\}$, the video can be segmented according to $\{H\}$ to obtain the final result. Here the Graph Cuts algorithm (Narayanan P J, Vineet V, Stich T. Fast graph cuts for computer vision. GPU Computing Gems Emerald Edition, 2011: 439) is selected to segment the video.
The goal of segmenting an N-frame video sequence containing P planes is to assign each pixel $p \in F = \{F_1, F_2, \ldots, F_N\}$ a label $f_p \in \{0, 1, \ldots, P-1\}$, obtaining for each frame the set of pixels belonging to the same plane, $f = \{p : f_p = i\}$, $i \in \{0, 1, \ldots, P-1\}$. The label set $\{f_p\}$ is normally obtained by optimizing an energy function. First, the input image is represented by a graph $G = (N, E)$, where $N$ is the set of nodes, each node $n_i \in N$ corresponding to a pixel of the input image, and $E$ is the set of edges between nodes, each edge corresponding to a pair of related nodes $\langle n_p, n_q \rangle$. The objective function of the Graph Cuts algorithm is:

$$E(f) = \sum_{n \in N} D_n(f_n) + \sum_{n_p < n_q \in N} V_{n_p, n_q}(f_{n_p}, f_{n_q})$$

where $D_n(f_n)$ is the data penalty incurred when the label of node $n$ is set to $f_n$; $n_p < n_q$ denotes a one-directional pairing of neighboring nodes in the graph, where "neighboring" may mean 4-connected or 8-connected; and $V_{n_p, n_q}(f_{n_p}, f_{n_q})$ is the smoothness penalty, based on spatial continuity, incurred when the two neighboring nodes $\langle n_p, n_q \rangle$ are assigned labels $f_{n_p}$ and $f_{n_q}$ respectively. The label $\{f_n\}$ of each node is obtained by minimizing this objective function.
The data penalty is defined as:

$$D_n(f_n) = \left( (R(H[f_n]n) - R(n))^2 + (G(H[f_n]n) - G(n))^2 + (B(H[f_n]n) - B(n))^2 \right)^{1/2}$$

where $H[f_n]$ is the homography induced by the $f_n$-th plane, and $R(\cdot)$, $G(\cdot)$, $B(\cdot)$ denote the RGB components of the current pixel. The smoothness penalty is the connection weight between node $n_p$ and its neighbor $n_q$, defined as:

$$V_{n_p, n_q}(f_{n_p}, f_{n_q}) = \lambda (f_{n_p} - f_{n_q})^2$$

where $n_p, n_q \in N$ and $\lambda$ is a constant used to balance the data penalty term and the smoothness penalty term; here we take $\lambda = 80$.
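The data penalty can be evaluated densely for every pixel and every candidate plane before the graph-cut step. A sketch follows, assuming nearest-neighbour sampling and homographies that map pixel coordinates of the current frame into a neighbouring reference frame (a real implementation would interpolate, and would then pass these costs to a multi-label graph-cut solver):

```python
import numpy as np

def data_term(cur, ref, homogs):
    """Per-pixel data penalties D_n(f_n) for each candidate plane label.

    cur, ref: H x W x 3 float RGB frames
    homogs:   list of 3x3 homographies; homogs[f] maps pixel coordinates
              of `cur` into `ref` for plane label f
    Returns an H x W x P array of RGB distances.
    """
    h, w, _ = cur.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    D = np.empty((h, w, len(homogs)))
    for f, Hm in enumerate(homogs):
        q = Hm @ pts                             # warp all pixels by H[f]
        qx = np.clip(np.rint(q[0] / q[2]).astype(int), 0, w - 1)
        qy = np.clip(np.rint(q[1] / q[2]).astype(int), 0, h - 1)
        diff = ref[qy, qx] - cur[ys.ravel(), xs.ravel()]
        D[:, :, f] = np.sqrt(np.sum(diff ** 2, axis=1)).reshape(h, w)
    return D
```

Minimizing the full energy additionally requires the smoothness term $\lambda(f_{n_p} - f_{n_q})^2$ over the 4- or 8-neighbourhood; libraries such as PyMaxflow provide the graph construction and min-cut step.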
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (3)

1. A video plane extraction method based on an improved algorithm, the method comprising the steps of:
Step 1, video reading:
reading the input video and decoding it frame by frame into images;
Step 2, feature point tracking:
using the KLT algorithm to extract and track the feature points in each video frame, and recording the resulting feature-point tracks spanning the whole video sequence;
Step 3, plane tracking and extraction:
for the N-frame video sequence read in step 1, suppose step 2 yields a set of M feature-point tracks $T = \{T_j\}_{j=1}^{M}$; for each track $T_j$, let $p_j$ and $q_j$ ($1 \le p_j < q_j \le N$) denote its starting and ending frame numbers, so that $T_j = \{x_j^i\}_{i=p_j}^{q_j}$, where $x_j^i$ denotes the homogeneous coordinates of the $j$-th feature point in frame $i$; the set of all tracks spanning frames $a$ and $b$ is denoted $T_{ab} = \{T_j \in T : p_j \le a,\ q_j \ge b\}$; for a track point $x_j^i$, if it lies on a plane, then its coordinates in frame $i$ and frame $k$ are related by the homography $H_{ik}$ induced by that plane: $x_j^k \simeq H_{ik} x_j^i$; to measure how well a plane model fits a track $T_j$, its reprojection error with respect to frame $k$ ($p_j \le k \le q_j$) is defined as:

$$e = \sum_{i=p_j,\ i \ne k}^{q_j} \left( \|x_j^k - H_{ki} x_j^i\|^2 + \|x_j^i - H_{ki}^{-1} x_j^k\|^2 \right)$$
According to the reprojection error, all tracks are divided into different sets, each set corresponding to one plane, in two steps:
(1) Random sampling: for the video sequence $\{F_i\}_{i=1}^{N}$ read in step 1, construct the $N-1$ adjacent frame pairs $C = \{(F_1, F_2), (F_2, F_3), \ldots, (F_{N-1}, F_N)\}$; first choose any frame pair $(F_{i-1}, F_i)$ from $C$, take the tracks $T_{(i-1)i}$ spanning these two frames, and perform Delaunay triangulation on their corresponding points in $F_{i-1}$, obtaining a set of Delaunay triangles $T_{tri}$; then take any triangle $t$ in $T_{tri}$ and use the three tracks corresponding to $t$ to compute a homography $H_{(i-1)i}$ between $F_{i-1}$ and $F_i$, by the following formula:
given the fundamental matrix $F$ and three image point correspondences $x_i \leftrightarrow x_i'$, the homography induced by the plane through the three 3D points is:

$$H = A - e'(M^{-1} b)^T$$

where $A = [e']_{\times} F$, and $e'$ is the epipole in the second image, satisfying $Fe = 0$ and $F^T e' = 0$; $[e']_{\times}$ denotes the skew-symmetric matrix of $e' = (e_1, e_2, e_3)^T$:

$$[e']_{\times} = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}$$

$b$ is a 3-vector with elements

$$b_i = \frac{(x_i' \times (A x_i))^T (x_i' \times e')}{\|x_i' \times e'\|^2}$$

and $M$ is the $3 \times 3$ matrix whose rows are the vectors $x_i^T$;
given a threshold $\varepsilon$, the tracks in $T_{(i-1)i}$ are divided into $T_{in}$ and $T_{out}$ by testing whether the reprojection error $e = \|x_i - H_{(i-1)i} x_{i-1}\|^2 + \|x_{i-1} - H_{(i-1)i}^{-1} x_i\|^2$ is less than $\varepsilon$;
(2) Consistency computation: select the frame pair $(F_i, F_{i+1})$ adjacent to $(F_{i-1}, F_i)$, take the tracks in $T_{i(i+1)}$ already marked as inliers, i.e. $T_{in} \cap T_{i(i+1)}$, and perform Delaunay triangulation on their corresponding points in $F_i$, obtaining a new set of Delaunay triangles $T_{tri}$; then, as in the previous step, use the three tracks of a triangle $t$ to compute the homography $H_{i(i+1)}$ between $F_i$ and $F_{i+1}$, and divide the tracks in $T_{i(i+1)}$ into $T_{in}$ and $T_{out}$ by testing whether the reprojection error $e$ is less than $\varepsilon$; repeat this step until all frames have been processed or $T_{in} \cap T_{i(i+1)}$ contains fewer than 3 tracks;
steps (1) and (2) are repeated many times, and the plane whose inlier set $T_{in}$ is largest, together with its corresponding set of homographies $\{H\}$, is output as the extraction result for this plane;
Step 4, plane segmentation:
given the plane models obtained in step 3 and their corresponding homography sets $\{H\}$, the video is segmented according to $\{H\}$ to obtain the final result, the Graph Cuts algorithm being selected to segment the video:
the goal of segmenting an N-frame video sequence containing P planes is to assign each pixel $p \in F = \{F_1, F_2, \ldots, F_N\}$ a label $f_p \in \{0, 1, \ldots, P-1\}$, obtaining for each frame the set of pixels belonging to the same plane, $f = \{p : f_p = i\}$, $i \in \{0, 1, \ldots, P-1\}$; the label set $\{f_p\}$ is obtained by optimizing an energy function; first, the input image is represented by a graph $G = (N, E)$, where $N$ is the set of nodes, each node $n_i \in N$ corresponding to a pixel of the input image, and $E$ is the set of edges between nodes, each edge corresponding to a pair of related nodes $\langle n_p, n_q \rangle$; the objective function of the Graph Cuts algorithm is:

$$E(f) = \sum_{n \in N} D_n(f_n) + \sum_{n_p < n_q \in N} V_{n_p, n_q}(f_{n_p}, f_{n_q})$$

where $D_n(f_n)$ is the data penalty incurred when the label of node $n$ is set to $f_n$; $n_p < n_q$ denotes a one-directional pairing of neighboring nodes in the graph, where "neighboring" means 4-connected or 8-connected; and $V_{n_p, n_q}(f_{n_p}, f_{n_q})$ is the smoothness penalty, based on spatial continuity, incurred when the two neighboring nodes $\langle n_p, n_q \rangle$ are assigned labels $f_{n_p}$ and $f_{n_q}$ respectively; the label $\{f_n\}$ of each node is obtained by minimizing this objective function;
the data penalty is defined as:

$$D_n(f_n) = \left( (R(H[f_n]n) - R(n))^2 + (G(H[f_n]n) - G(n))^2 + (B(H[f_n]n) - B(n))^2 \right)^{1/2}$$

where $H[f_n]$ is the homography induced by the $f_n$-th plane, and $R(\cdot)$, $G(\cdot)$, $B(\cdot)$ denote the RGB components of the current pixel; the smoothness penalty is the connection weight between node $n_p$ and its neighbor $n_q$, defined as:

$$V_{n_p, n_q}(f_{n_p}, f_{n_q}) = \lambda (f_{n_p} - f_{n_q})^2$$

where $n_p, n_q \in N$ and $\lambda$ is a constant used to balance the data penalty term and the smoothness penalty term.
2. The video plane extraction method based on an improved algorithm as claimed in claim 1, characterized in that in step 3 the threshold $\varepsilon = 4$.
3. The video plane extraction method based on an improved algorithm as claimed in claim 1 or 2, characterized in that in step 4 the weight $\lambda = 80$.
CN201310450214.4A 2013-09-27 2013-09-27 A kind of video plane detection method based on innovatory algorithm Expired - Fee Related CN103761725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310450214.4A CN103761725B (en) 2013-09-27 2013-09-27 A kind of video plane detection method based on innovatory algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310450214.4A CN103761725B (en) 2013-09-27 2013-09-27 A kind of video plane detection method based on innovatory algorithm

Publications (2)

Publication Number Publication Date
CN103761725A true CN103761725A (en) 2014-04-30
CN103761725B CN103761725B (en) 2016-08-17

Family

ID=50528958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310450214.4A Expired - Fee Related CN103761725B (en) 2013-09-27 2013-09-27 A kind of video plane detection method based on innovatory algorithm

Country Status (1)

Country Link
CN (1) CN103761725B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046688A (en) * 2015-06-23 2015-11-11 北京工业大学 Method for automatically identifying multiple planes in three-dimensional point cloud
CN108961182A (en) * 2018-06-25 2018-12-07 北京大学 Vertical direction vanishing point detection method and video positive twist method for video image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010186217A (en) * 2009-02-10 2010-08-26 Epson Imaging Devices Corp Position detection device, electro-optical device and electronic apparatus
CN102945551A (en) * 2012-10-16 2013-02-27 同济大学 Graph theory based three-dimensional point cloud data plane extracting method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010186217A (en) * 2009-02-10 2010-08-26 Epson Imaging Devices Corp Position detection device, electro-optical device and electronic apparatus
CN102945551A (en) * 2012-10-16 2013-02-27 同济大学 Graph theory based three-dimensional point cloud data plane extracting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HYUNWOO KIM等: "Multiple Planar Region Extraction Based on the Coplanar Line Pairs", 《2011 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION》, 13 May 2011 (2011-05-13), pages 2059 - 2064, XP032033425, DOI: doi:10.1109/ICRA.2011.5979548 *
TANG Li et al.: "Contour-based video object plane extraction method", Video Engineering (《电视技术》), no. 5, 31 May 2001 (2001-05-31) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046688A (en) * 2015-06-23 2015-11-11 北京工业大学 Method for automatically identifying multiple planes in three-dimensional point cloud
CN105046688B (en) * 2015-06-23 2017-10-10 北京工业大学 A kind of many plane automatic identifying methods in three-dimensional point cloud
CN108961182A (en) * 2018-06-25 2018-12-07 北京大学 Vertical direction vanishing point detection method and video positive twist method for video image
CN108961182B (en) * 2018-06-25 2021-06-01 北京大学 Vertical direction vanishing point detection method and video correction method for video image

Also Published As

Publication number Publication date
CN103761725B (en) 2016-08-17

Similar Documents

Publication Publication Date Title
CA2826534C (en) Backfilling points in a point cloud
CN109447121B (en) Multi-target tracking method, device and system for visual sensor network
CN103700099B (en) Rotation and dimension unchanged wide baseline stereo matching method
CN111199564A (en) Indoor positioning method and device of intelligent mobile terminal and electronic equipment
CN104537355B (en) It is a kind of to utilize image boundary information and the notable method for checking object of the connectivity of region
CN102054166B (en) A kind of scene recognition method for Outdoor Augmented Reality System newly
CN109087394A (en) A kind of real-time indoor three-dimensional rebuilding method based on inexpensive RGB-D sensor
CN103942774A (en) Multi-target collaborative salient-region detection method based on similarity propagation
Yan et al. Continuous mapping convolution for large-scale point clouds semantic segmentation
CN111881804A (en) Attitude estimation model training method, system, medium and terminal based on joint training
Gao et al. Pose refinement with joint optimization of visual points and lines
Zhang et al. Real-Time object detection for 360-degree panoramic image using CNN
CN116385660A (en) Indoor single view scene semantic reconstruction method and system
CN106023317B (en) A kind of weighted Voronoi diagrams drawing generating method for big data test
CN103854271A (en) Plane type camera calibration method
CN104021395A (en) Target tracing algorithm based on high-order partial least square method
CN103761725B (en) A kind of video plane detection method based on innovatory algorithm
Kawanishi et al. Parallel line-based structure from motion by using omnidirectional camera in textureless scene
CN117367404A (en) Visual positioning mapping method and system based on SLAM (sequential localization and mapping) in dynamic scene
CN104657985A (en) Occlusion avoidance method for static visual target based on depth image occlusion information
CN111914809A (en) Target object positioning method, image processing method, device and computer equipment
Sandström et al. Learning online multi-sensor depth fusion
CN102724530A (en) Three-dimensional method for plane videos based on feedback control
CN104156952B (en) A kind of image matching method for resisting deformation
Kikuchi et al. A data structure for triangular dissection of multi-resolution images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SHENZHEN RESEARCH INSTITUTE, BEIJING INSTITUTE OF

Effective date: 20140918

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20140918

Address after: 100081 No. 5, Zhongguancun South Street, Haidian District, Beijing

Applicant after: BEIJING INSTITUTE OF TECHNOLOGY

Applicant after: Shenzhen Institute of Beijing Institute of Technology

Address before: 100081 No. 5, Zhongguancun South Street, Haidian District, Beijing

Applicant before: BEIJING INSTITUTE OF TECHNOLOGY

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191125

Address after: 710014 floor 18, Xinyuan center, 251 Fenghe Road, Lianhu District, Xi'an City, Shaanxi Province

Patentee after: Shaanxi Help You Electronic Technology Co.,Ltd.

Address before: 100081 No. 5, Zhongguancun South Street, Haidian District, Beijing

Co-patentee before: Shenzhen Institute of Beijing Institute of Technology

Patentee before: BEIJING INSTITUTE OF TECHNOLOGY

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160817

Termination date: 20210927