CN103761725B - Video plane detection method based on an improved algorithm - Google Patents

Video plane detection method based on an improved algorithm

Info

Publication number
CN103761725B
CN103761725B CN201310450214.4A
Authority
CN
China
Prior art keywords
plane
video
frame
track
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310450214.4A
Other languages
Chinese (zh)
Other versions
CN103761725A (en)
Inventor
黄华
陶蕾
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Help You Electronic Technology Co ltd
Original Assignee
Shenzhen Research Institute Beijing Institute Of Technology
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute Beijing Institute Of Technology and Beijing Institute of Technology BIT
Priority to CN201310450214.4A
Publication of CN103761725A
Application granted
Publication of CN103761725B
Expired - Fee Related
Anticipated expiration


Abstract

The present invention relates to a video plane extraction method based on an improved algorithm. For a given video sequence, the method first tracks feature points with the standard KLT method and records the resulting feature point trajectories. It then determines whether each trajectory belongs to the plane corresponding to the current homography matrix by computing re-projection errors between adjacent frames, thereby tracking and extracting planes over the whole video sequence. Finally, using the obtained plane models and their corresponding homography sets, the video is segmented to produce the final plane extraction result. The invention can extract planes directly from a video sequence, and by taking trajectories as the objects of operation it guarantees the global consistency of the extraction result while greatly improving both the convergence speed and the accuracy of the tests.

Description

Video plane detection method based on an improved algorithm
Technical field
The present invention relates to a video plane detection method based on an improved algorithm, and more specifically to a method for extracting planes from a video sequence based on feature point tracking, belonging to the field of image processing.
Background art
Plane extraction has long been an important research direction in computer vision and graphics. Planar structures are among the most common basic geometric features in real scenes: they carry a large amount of two-dimensional structural information about objects, and points lying on a plane satisfy strong geometric constraints. Many practical applications, such as building models of scene space, obstacles or target objects, require the planar structures in a scene to be extracted beforehand for processing or interaction. There has been considerable work in this area, for example space plane extraction based on J-Linkage clustering (Toldo R, Fusiello A. Robust multiple structures estimation with J-linkage. Computer Vision - ECCV 2008. Springer Berlin Heidelberg, 2008: 537-547) and plane extraction based on line matching (Kim H, Lee S. Multiple planar region extraction based on the coplanar line pairs. Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011: 2059-2064). These methods can produce attractive results, but they either need to recover the three-dimensional information of the scene and the camera parameters in advance, a computation that is very expensive and prohibitive if only the planar structures in an image or video are required, or they model the transformation between images with an affine transformation, ignoring nonlinear effects that may be present in the transformation. It is therefore difficult for these traditional plane extraction algorithms to extract planar structures from a video sequence quickly and accurately.
Summary of the invention
To overcome the difficulty that conventional plane extraction methods have in extracting planar structures from a video sequence quickly and accurately, the present invention proposes a video plane extraction method based on an improved algorithm that provides a better plane extraction result for video sequences. The method does not need to recover the camera's three-dimensional information; it directly establishes a projective transformation model under the epipolar constraint between adjacent video frames and computes the homographies induced by planes, each homography corresponding to one plane in the scene. Each frame of the video is then segmented according to the computed homographies, and the planar structures in the video are extracted.
The specific implementation process of the present invention is as follows:
A video plane extraction method based on an improved algorithm, the method comprising the following steps:
Step 1: video reading;
Read the input video and decode it frame by frame into images for recording;
Step 2: feature point tracking;
Use the KLT algorithm to extract and track feature points in every frame of the video, and record the obtained feature point trajectories spanning the whole video sequence;
Step 3: plane tracking and extraction;
For the N-frame video sequence read in step 1, suppose step 2 yields a set of M feature point trajectories T = {T_j}, j = 1, ..., M. For each trajectory T_j, let p_j and q_j (1 ≤ p_j < q_j ≤ N) denote its starting and ending frame indices respectively; trajectory T_j is then written in the form {x_j^i}, i = p_j, ..., q_j, where x_j^i denotes the homogeneous coordinates of the j-th feature point in the i-th frame. The set of all trajectories spanning frames a and b is denoted T_ab = {T_j ∈ T : p_j ≤ a, q_j ≥ b}. For a trajectory point x_j^i, if it lies on a plane, its coordinates in the i-th frame and the k-th frame are related by the homography induced by that plane, i.e. x_j^k ≅ H_ki x_j^i. To measure how well a plane model fits a trajectory, the re-projection error of a trajectory T_j = {x_j^i} with respect to the k-th frame (p_j ≤ k ≤ q_j) is defined as follows:

e = \sum_{i = p_j,\, i \neq k}^{q_j} \left( \left\| x_j^k - H_{ki} x_j^i \right\|^2 + \left\| x_j^i - H_{ki}^{-1} x_j^k \right\|^2 \right)
According to the re-projection error, all trajectories are divided into different sets, each set corresponding to one plane. This is done in two steps:
(1) Random sampling: for the video sequence {F_1, F_2, ..., F_N} read in step 1, construct the N-1 adjacent frame pairs C = {(F_1, F_2), (F_2, F_3), ..., (F_{N-1}, F_N)}. First choose any frame pair (F_{i-1}, F_i) from C, take the trajectories T_{(i-1)i} spanning these two frames, perform Delaunay triangulation on their corresponding points in F_{i-1}, and obtain a set T_tri of Delaunay triangles. Then take any triangle t in T_tri and use the three trajectories corresponding to t to compute a homography matrix H_{(i-1)i} between F_{i-1} and F_i, with the following formula:
Given the fundamental matrix F and three point correspondences x_i ↔ x_i', i = 1, 2, 3, the homography induced by the plane through the corresponding 3D points is:

H = A - e'(M^{-1} b)^T

where A = [e']_× F, and e' is the epipole in the second image, satisfying F e = 0 and F^T e' = 0; [e']_× denotes the skew-symmetric matrix of e'. Writing e' = (e_1, e_2, e_3)^T, we have:

[e']_\times = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}

b is a 3-dimensional vector whose elements are:

b_i = \frac{(x_i' \times (A x_i))^T (x_i' \times e')}{\| x_i' \times e' \|^2}

and M is the 3 × 3 matrix whose rows are x_i^T.
Given a threshold ε, the trajectories in T_{(i-1)i} are divided into T_in and T_out by testing whether the re-projection error e = \| x_i - H_{(i-1)i} x_{i-1} \|^2 + \| x_{i-1} - H_{(i-1)i}^{-1} x_i \|^2 is less than ε.
(2) Consistency calculation: select the frame pair (F_i, F_{i+1}) adjacent to (F_{i-1}, F_i), take the trajectories in T_{i(i+1)} that have already been labelled as belonging to T_in, i.e. T_in ∩ T_{i(i+1)}, perform Delaunay triangulation on their corresponding points in F_i, and obtain a new set T_tri of Delaunay triangles; then, as in the previous step, use the three trajectories corresponding to a triangle t to compute the homography matrix H_{i(i+1)} between F_i and F_{i+1}, and divide the trajectories in T_{i(i+1)} into T_in and T_out by testing whether the re-projection error e is less than the threshold ε. Repeat this step until all frames have been processed or the number of trajectories in T_in ∩ T_{i(i+1)} is less than 3;
Steps (1) and (2) are repeated several times; the plane whose T_in is largest, together with the corresponding set of homographies, is selected and output as the result of this plane extraction;
Step 4: plane segmentation;
After obtaining the plane models and their corresponding homography sets from step 3, the video is segmented according to these homographies to obtain the final result. The present invention uses the Graph Cuts algorithm to segment the video:
The goal of segmenting an N-frame video sequence containing P planes is to assign to each pixel p ∈ F = {F_1, F_2, ..., F_N} a label f_p ∈ {0, 1, ..., P-1}, obtaining for each frame the sets of pixels belonging to the same plane, f = {p : f_p = i}, i ∈ {0, 1, ..., P-1}. The label set {f_p} is obtained by optimizing an energy function. The input image is first represented by a graph G = (N, E), where N denotes the set of nodes in the graph, each node n_i ∈ N corresponding to one pixel of the input image, and E denotes the set of edges between nodes, each edge corresponding to a pair of nodes <n_p, n_q> with a particular relationship. The optimization objective function of the Graph Cuts algorithm is as follows:

E(f) = \sum_{n \in N} D_n(f_n) + \sum_{n_p < n_q \in N} V_{n_p, n_q}(f_{n_p}, f_{n_q})

where D_n(f_n) denotes the data penalty incurred when the label of node n is set to f_n; n_p < n_q denotes the one-directional pairing of neighbouring nodes in the graph, "neighbouring" here meaning 4-neighbourhood or 8-neighbourhood adjacency; V_{n_p,n_q}(f_{n_p}, f_{n_q}) denotes the smoothness penalty, based on spatial continuity, incurred when the labels of the two neighbouring nodes <n_p, n_q> are set to f_{n_p} and f_{n_q} respectively. The labels {f_n} of the nodes are obtained by minimizing the above objective function;
Here, the data penalty is defined as follows:

D_n(f_n) = \left( (R(H[f_n] n) - R(n))^2 + (G(H[f_n] n) - G(n))^2 + (B(H[f_n] n) - B(n))^2 \right)^{1/2}

where H[f_n] is the homography induced by the f_n-th plane, and R(·), G(·), B(·) denote the RGB components of the pixel. The smoothness penalty is the connection weight between node n_p and its neighbouring node n_q, defined as follows:

V_{n_p, n_q}(f_{n_p}, f_{n_q}) = \lambda (f_{n_p} - f_{n_q})^2

where n_p, n_q ∈ N and λ is a constant used to balance the data penalty term and the smoothness penalty term, typically taken as 80 to 100.
Beneficial effects:
(1) One class of traditional plane extraction methods needs to recover the three-dimensional information of the scene and the camera parameters in advance; this computation is very expensive, and prohibitively so if only the planar structures in an image or video are needed. Another class of algorithms generally uses affine transformations to model the transformation relation between images, ignoring nonlinear effects that may be present in the transformation, which is often not accurate enough. In addition, existing plane extraction algorithms of the second class essentially operate on images: plane extraction for images is typically carried out between two views, whereas plane extraction for video must not only extract planes between adjacent frames but also guarantee the global consistency of each plane over the whole video, i.e. the plane scene structure finally extracted from every frame should be consistent. The method of the invention establishes a projective transformation model under the epipolar constraint between adjacent video frames, does not need to recover the camera's three-dimensional information, and can extract planes directly from the video sequence; by taking trajectories as the objects of operation, it guarantees the global consistency of the plane extraction result.
(2) The present invention adopts an improved RANSAC plane extraction method. Unlike the traditional RANSAC algorithm, which in each test samples four feature points completely at random, the present invention exploits the axiom that three non-collinear points determine a plane: the tracked feature points are Delaunay-triangulated, and each test randomly takes the three points of one triangle and combines them with the fundamental matrix for the computation. Since the three points of a triangle are considerably more likely to lie on the same plane than four points sampled at random, the algorithm of the invention not only greatly accelerates the convergence of the tests but also achieves higher accuracy.
Detailed description of the invention
The embodiments of the method of the present invention are described in detail below.
A video plane extraction method based on an improved algorithm: for a given video sequence, the method first tracks feature points with the standard KLT method and records the resulting feature point trajectories; it then determines whether each trajectory belongs to the plane corresponding to the current homography matrix by computing re-projection errors between adjacent frames, thereby tracking and extracting planes over the whole video sequence; finally, using the obtained plane models and their corresponding homography sets, the video is segmented to produce the final plane extraction result.
The specific implementation process of the method of the present invention is as follows:
Step 1: video reading;
Read the input video and decode it frame by frame into images for recording;
Step 2: feature point tracking;
Use the standard KLT algorithm (Shi J, Tomasi C. Good features to track. Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94, 1994) to extract and track feature points in every frame of the video, and record the obtained feature point trajectories spanning the whole video sequence;
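As an illustration of steps 1 and 2, the following minimal sketch reads a video with OpenCV, detects Shi-Tomasi corners in the first frame and tracks them frame by frame with the pyramidal KLT tracker. The function name track_klt and all parameter values are illustrative assumptions rather than part of the invention, and re-detection of new features in later frames is omitted for brevity.

import cv2
import numpy as np

def track_klt(video_path):
    # Step 1: decode the video frame by frame and convert to grayscale for tracking.
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        ok, frame = cap.read()
    cap.release()

    # Step 2: detect Shi-Tomasi corners in the first frame ...
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    # One trajectory per feature: a list of (frame_index, (x, y)) samples.
    tracks = [[(0, (float(p[0][0]), float(p[0][1])))] for p in pts]
    alive = list(range(len(tracks)))

    # ... and track them through the sequence with the pyramidal Lucas-Kanade (KLT) tracker.
    for i in range(1, len(frames)):
        if not alive:
            break
        prev = np.float32([tracks[j][-1][1] for j in alive]).reshape(-1, 1, 2)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(frames[i - 1], frames[i], prev, None)
        still_alive = []
        for j, s, p in zip(alive, status.ravel(), nxt.reshape(-1, 2)):
            if s:  # the point was successfully tracked into frame i
                tracks[j].append((i, (float(p[0]), float(p[1]))))
                still_alive.append(j)
        alive = still_alive
    return frames, tracks

Each element of tracks then corresponds to one trajectory T_j, recording its frame indices p_j to q_j together with the tracked image coordinates.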
Step 3: plane tracking and extraction;
For the N-frame video sequence read in step 1, suppose step 2 yields a set of M feature point trajectories T = {T_j}, j = 1, ..., M. For each trajectory T_j, let p_j and q_j (1 ≤ p_j < q_j ≤ N) denote its starting and ending frame indices respectively; trajectory T_j can then be written in the form {x_j^i}, i = p_j, ..., q_j, where x_j^i denotes the homogeneous coordinates of the j-th feature point in the i-th frame. The set of all trajectories spanning frames a and b is denoted T_ab = {T_j ∈ T : p_j ≤ a, q_j ≥ b}. For a trajectory point x_j^i, if it lies on a plane, its coordinates in the i-th frame and the k-th frame are related by the homography induced by that plane, i.e. x_j^k ≅ H_ki x_j^i. The present invention uses the re-projection error to measure how well a plane model fits a trajectory. For a trajectory T_j = {x_j^i}, its re-projection error with respect to the k-th frame (p_j ≤ k ≤ q_j) is defined as follows:

e = \sum_{i = p_j,\, i \neq k}^{q_j} \left( \left\| x_j^k - H_{ki} x_j^i \right\|^2 + \left\| x_j^i - H_{ki}^{-1} x_j^k \right\|^2 \right)
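The following sketch evaluates this re-projection error for one trajectory, assuming the homographies H_ki between each frame i and the reference frame k are already available. The data layout (a dict from frame index to image coordinates) and the dehomogenisation before taking distances are assumptions made for the sake of a self-contained example.

import numpy as np

def to_homog(p):
    # (x, y) -> homogeneous coordinates (x, y, 1)
    return np.array([p[0], p[1], 1.0])

def dehomog(x):
    return x[:2] / x[2]

def reprojection_error(track, k, H_to_k):
    """Re-projection error of one trajectory T_j with respect to reference frame k.

    track  : dict mapping frame index i (p_j..q_j) to image coordinates (x, y)
    k      : index of the reference frame
    H_to_k : dict mapping frame index i to the 3x3 homography H_ki (frame i -> frame k)
    """
    xk = to_homog(track[k])
    e = 0.0
    for i, p in track.items():
        if i == k:
            continue
        xi = to_homog(p)
        H = np.asarray(H_to_k[i], dtype=float)
        # distances are taken after dehomogenisation (an assumption of this sketch)
        e += np.sum((dehomog(xk) - dehomog(H @ xi)) ** 2)
        e += np.sum((dehomog(xi) - dehomog(np.linalg.inv(H) @ xk)) ** 2)
    return e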
The final goal of the algorithm of the invention is to divide all trajectories into different sets according to the re-projection error, each set corresponding to one plane. This is done in the following two steps (a code sketch of this sampling and consistency procedure follows the description of step 2 below):
1. Random sampling: for the video sequence {F_1, F_2, ..., F_N} read in step 1, N-1 adjacent frame pairs C = {(F_1, F_2), (F_2, F_3), ..., (F_{N-1}, F_N)} can be constructed. First choose any frame pair (F_{i-1}, F_i) from C; in practice the first pair (F_1, F_2) is typically taken. Take the trajectories T_{(i-1)i} spanning these two frames, perform Delaunay triangulation on their corresponding points in F_{i-1}, and obtain a set T_tri of Delaunay triangles. Then take any triangle t in T_tri; the three trajectories corresponding to t suffice to compute a homography matrix H_{(i-1)i} between F_{i-1} and F_i. The computing formula is as follows:
Given the fundamental matrix F and three point correspondences x_i ↔ x_i', i = 1, 2, 3, the homography induced by the plane through the corresponding 3D points is:

H = A - e'(M^{-1} b)^T

where A = [e']_× F, and e' is the epipole in the second image, satisfying F e = 0 and F^T e' = 0. [e']_× denotes the skew-symmetric matrix of e'; writing e' = (e_1, e_2, e_3)^T, we have:

[e']_\times = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}

b is a 3-dimensional vector whose elements are:

b_i = \frac{(x_i' \times (A x_i))^T (x_i' \times e')}{\| x_i' \times e' \|^2}

and M is the 3 × 3 matrix whose rows are x_i^T.
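A minimal numerical sketch of this formula is given below. It assumes the fundamental matrix F between the two frames is already known (for example estimated from all tracked correspondences with cv2.findFundamentalMat) and that the three correspondences are supplied as homogeneous row vectors; the function name plane_homography is an illustrative assumption.

import numpy as np

def skew(v):
    # Skew-symmetric (cross-product) matrix [v]_x
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def plane_homography(F, x, xp):
    """Homography induced by the plane through three 3D points.

    F  : 3x3 fundamental matrix between the two frames
    x  : 3x3 array, rows are homogeneous points x_i in the first frame
    xp : 3x3 array, rows are corresponding homogeneous points x_i' in the second frame
    Returns H = A - e' (M^{-1} b)^T with A = [e']_x F.
    """
    # Epipole e' in the second image: F^T e' = 0, i.e. the right null vector of F^T.
    _, _, Vt = np.linalg.svd(F.T)
    e_prime = Vt[-1]
    A = skew(e_prime) @ F

    M = np.asarray(x, dtype=float)   # 3x3 matrix whose rows are x_i^T (must be non-singular)
    b = np.zeros(3)
    for i in range(3):
        c = np.cross(xp[i], A @ x[i])
        d = np.cross(xp[i], e_prime)
        # b_i = (x_i' x (A x_i))^T (x_i' x e') / ||x_i' x e'||^2
        b[i] = (c @ d) / (d @ d)
    return A - np.outer(e_prime, np.linalg.solve(M, b))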
Given a threshold ε (we take ε = 4 here), the trajectories in T_{(i-1)i} can be divided into T_in and T_out by testing whether the re-projection error e = \| x_i - H_{(i-1)i} x_{i-1} \|^2 + \| x_{i-1} - H_{(i-1)i}^{-1} x_i \|^2 is less than ε.
2. Consistency calculation: select the frame pair (F_i, F_{i+1}) adjacent to (F_{i-1}, F_i), take the trajectories in T_{i(i+1)} that have already been labelled as belonging to T_in, i.e. T_in ∩ T_{i(i+1)}, perform Delaunay triangulation on their corresponding points in F_i, and obtain a new set T_tri of Delaunay triangles. Then, as in the previous step, use the three trajectories corresponding to a triangle t to compute the homography matrix H_{i(i+1)} between F_i and F_{i+1}, and divide the trajectories in T_{i(i+1)} into T_in and T_out by testing whether the re-projection error e is less than the threshold ε. Repeat this step until all frames have been processed or the number of trajectories in T_in ∩ T_{i(i+1)} is less than 3.
Steps 1 and 2 are repeated several times; the plane whose T_in is largest, together with the corresponding set of homographies, is selected and output as the result of this plane extraction.
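The sketch below strings together steps 1 and 2 for a single plane hypothesis: it triangulates the tracked points of a frame pair with SciPy's Delaunay triangulation, samples one triangle, computes the plane-induced homography with the plane_homography helper sketched above, classifies the spanning trajectories into T_in/T_out with the threshold ε, and then propagates T_in to the next frame pair. The per-pair estimation of F with cv2.findFundamentalMat (which needs at least 8 correspondences), the data layout and all function names are assumptions for illustration only.

import random
import numpy as np
import cv2
from scipy.spatial import Delaunay

def symmetric_error(H, x_prev, x_cur):
    # Symmetric transfer error of one trajectory over the frame pair, matching the
    # thresholding test above; points are homogeneous 3-vectors.
    def dh(v):
        return v[:2] / v[2]
    return (np.sum((dh(x_cur) - dh(H @ x_prev)) ** 2)
            + np.sum((dh(x_prev) - dh(np.linalg.inv(H) @ x_cur)) ** 2))

def extract_one_plane(points, eps=4.0):
    """One pass of random sampling (step 1) followed by consistency propagation (step 2).

    points : list over frames; points[i] is a dict {track_id: (x, y)}
    Returns the ids of the trajectories labelled T_in for the sampled plane.
    """
    homog = lambda i, j: np.array([*points[i][j], 1.0])
    t_in = set(points[0]) & set(points[1])   # all trajectories spanning the first pair
    for i in range(1, len(points)):
        span = t_in & set(points[i - 1]) & set(points[i])
        if len(span) < 3:
            break
        ids = sorted(span)
        # Delaunay triangulation of the corresponding points in frame i-1,
        # then pick one triangle at random: its three trajectories define the plane.
        tri = Delaunay(np.array([points[i - 1][j] for j in ids]))
        a, b, c = tri.simplices[random.randrange(len(tri.simplices))]

        # Fundamental matrix of the frame pair, estimated from all common tracks.
        common = sorted(set(points[i - 1]) & set(points[i]))
        p0 = np.float32([points[i - 1][j] for j in common])
        p1 = np.float32([points[i][j] for j in common])
        F, _ = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC)

        x = np.stack([homog(i - 1, ids[a]), homog(i - 1, ids[b]), homog(i - 1, ids[c])])
        xp = np.stack([homog(i, ids[a]), homog(i, ids[b]), homog(i, ids[c])])
        H = plane_homography(F, x, xp)   # as sketched above

        # Keep only the trajectories whose re-projection error is below epsilon.
        t_in = {j for j in span
                if symmetric_error(H, homog(i - 1, j), homog(i, j)) < eps}
    return t_in

Running extract_one_plane several times and keeping the largest returned set corresponds to the repetition of steps 1 and 2 described above.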
Step 4: plane segmentation;
After obtaining the plane models and their corresponding homography sets from step 3, the video can be segmented according to these homographies to obtain the final result. Here the Graph Cuts algorithm (Narayanan P J, Vineet V, Stich T. Fast graph cuts for computer vision. GPU Computing Gems Emerald Edition, 2011: 439) is selected to segment the video.
The goal of segmenting an N-frame video sequence containing P planes is to assign to each pixel p ∈ F = {F_1, F_2, ..., F_N} a label f_p ∈ {0, 1, ..., P-1}, obtaining for each frame the sets of pixels belonging to the same plane, f = {p : f_p = i}, i ∈ {0, 1, ..., P-1}. The label set {f_p} is generally obtained by optimizing an energy function. First, the input image is represented by a graph G = (N, E), where N denotes the set of nodes in the graph, each node n_i ∈ N corresponding to one pixel of the input image, and E denotes the set of edges between nodes, each edge corresponding to a pair of nodes <n_p, n_q> with a particular relationship. The optimization objective function of the Graph Cuts algorithm is as follows:

E(f) = \sum_{n \in N} D_n(f_n) + \sum_{n_p < n_q \in N} V_{n_p, n_q}(f_{n_p}, f_{n_q})

where D_n(f_n) denotes the data penalty incurred when the label of node n is set to f_n; n_p < n_q denotes the one-directional pairing of neighbouring nodes in the graph, "neighbouring" here meaning 4-neighbourhood or 8-neighbourhood adjacency; V_{n_p,n_q}(f_{n_p}, f_{n_q}) denotes the smoothness penalty, based on spatial continuity, incurred when the labels of the two neighbouring nodes <n_p, n_q> are set to f_{n_p} and f_{n_q} respectively. The labels {f_n} of the nodes are obtained by minimizing the above objective function.
Here, the data penalty is defined as follows:

D_n(f_n) = \left( (R(H[f_n] n) - R(n))^2 + (G(H[f_n] n) - G(n))^2 + (B(H[f_n] n) - B(n))^2 \right)^{1/2}

where H[f_n] is the homography induced by the f_n-th plane, and R(·), G(·), B(·) denote the RGB components of the pixel. The smoothness penalty is the connection weight between node n_p and its neighbouring node n_q, defined as follows:

V_{n_p, n_q}(f_{n_p}, f_{n_q}) = \lambda (f_{n_p} - f_{n_q})^2

where n_p, n_q ∈ N and λ is a constant used to balance the data penalty term and the smoothness penalty term; here we take λ = 80.
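To make the data term concrete, the sketch below computes D_n(f_n) for every pixel of one frame by warping an adjacent reference frame through each plane's homography and measuring the RGB difference. The interpretation that H[f_n] maps the coordinates of the current frame into that adjacent frame is an assumption of this example, as is every function name; a graph cut or alpha-expansion solver would then minimize the full energy with the smoothness term, while the plain per-pixel argmin shown here corresponds to the degenerate case λ = 0.

import numpy as np
import cv2

def data_costs(frame, ref_frame, homographies):
    """Data term D_n(f_n) for every pixel n and plane label f_n of one frame.

    frame        : HxWx3 image being labelled
    ref_frame    : HxWx3 adjacent frame that the plane homographies map into (assumption)
    homographies : list of 3x3 arrays H[f_n], one per plane label
    Returns an array of shape (P, H, W).
    """
    h, w = frame.shape[:2]
    costs = np.empty((len(homographies), h, w), dtype=np.float32)
    for label, H in enumerate(homographies):
        # Fetch, for every pixel n, the colour of ref_frame at H[f_n] * n:
        # with WARP_INVERSE_MAP the destination pixel n samples the source at H * n.
        warped = cv2.warpPerspective(ref_frame, np.asarray(H, dtype=np.float64), (w, h),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        diff = frame.astype(np.float32) - warped.astype(np.float32)
        # square root of the summed squared RGB differences, as in D_n(f_n) above
        costs[label] = np.sqrt(np.sum(diff ** 2, axis=2))
    return costs

def label_frame(frame, ref_frame, homographies):
    # Per-pixel argmin over the data costs; the full method would instead minimise
    # E(f) with the smoothness term lambda*(f_p - f_q)^2 via a graph cut.
    return np.argmin(data_costs(frame, ref_frame, homographies), axis=0)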
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Accordingly, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass such changes and modifications.

Claims (3)

1. A video plane extraction method based on an improved algorithm, the method comprising the following steps:
Step 1: video reading;
Read the input video and decode it frame by frame into images for recording;
Step 2: feature point tracking;
Use the KLT algorithm to extract and track feature points in every frame of the video, and record the obtained feature point trajectories spanning the whole video sequence;
Step 3: plane tracking and extraction;
For the N-frame video sequence read in step 1, suppose step 2 yields a set of M feature point trajectories T = {T_j}, j = 1, ..., M; for each trajectory T_j, let p_j and q_j, 1 ≤ p_j < q_j ≤ N, denote its starting and ending frame indices respectively; trajectory T_j is written in the form {x_j^i}, i = p_j, ..., q_j, where x_j^i denotes the homogeneous coordinates of the j-th feature point in the i-th frame; the set of all trajectories spanning frames a and b is denoted T_ab = {T_j ∈ T : p_j ≤ a, q_j ≥ b}; for a trajectory point x_j^i, if it lies on a plane, its coordinates in the i-th frame and the k-th frame are related by the homography induced by that plane, i.e. x_j^k ≅ H_ki x_j^i; to measure how well a plane model fits a trajectory, the re-projection error of a trajectory T_j = {x_j^i} with respect to the k-th frame, where p_j ≤ k ≤ q_j, is defined as follows:

e = \sum_{i = p_j,\, i \neq k}^{q_j} \left( \left\| x_j^k - H_{ki} x_j^i \right\|^2 + \left\| x_j^i - H_{ki}^{-1} x_j^k \right\|^2 \right)
According to the re-projection error, all trajectories are divided into different sets, each set corresponding to one plane; this is done in two steps:
(1) Random sampling: for the video sequence {F_1, F_2, ..., F_N} read in step 1, construct the N-1 adjacent frame pairs C = {(F_1, F_2), (F_2, F_3), ..., (F_{N-1}, F_N)}; first choose any frame pair (F_{i-1}, F_i) from C, take the trajectories T_{(i-1)i} spanning these two frames, perform Delaunay triangulation on their corresponding points in F_{i-1}, and obtain a set T_tri of Delaunay triangles; then take any triangle t in T_tri and use the three trajectories corresponding to t to compute a homography matrix H_{(i-1)i} between F_{i-1} and F_i, with the following formula:
Given the fundamental matrix F and three point correspondences x_i ↔ x_i', i = 1, 2, 3, the homography induced by the plane through the corresponding 3D points is:

H = A - e'(M^{-1} b)^T

where A = [e']_× F, and e' is the epipole in the second image, satisfying F e = 0 and F^T e' = 0; [e']_× denotes the skew-symmetric matrix of e'; writing e' = (e_1, e_2, e_3)^T, we have:

[e']_\times = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}

b is a 3-dimensional vector whose elements are:

b_i = \frac{(x_i' \times (A x_i))^T (x_i' \times e')}{\| x_i' \times e' \|^2}

and M is the 3 × 3 matrix whose rows are x_i^T;
given a threshold ε, the trajectories in T_{(i-1)i} are divided into T_in and T_out by testing whether the re-projection error e = \| x_i - H_{(i-1)i} x_{i-1} \|^2 + \| x_{i-1} - H_{(i-1)i}^{-1} x_i \|^2 is less than ε;
(2) Consistency calculation: select the frame pair (F_i, F_{i+1}) adjacent to (F_{i-1}, F_i), take the trajectories in T_{i(i+1)} that have already been labelled as belonging to T_in, i.e. T_in ∩ T_{i(i+1)}, perform Delaunay triangulation on their corresponding points in F_i, and obtain a new set T_tri of Delaunay triangles; then, as in the previous step, use the three trajectories corresponding to a triangle t to compute the homography matrix H_{i(i+1)} between F_i and F_{i+1}, and divide the trajectories in T_{i(i+1)} into T_in and T_out by testing whether the re-projection error e is less than the threshold ε; repeat this step until all frames have been processed or the number of trajectories in T_in ∩ T_{i(i+1)} is less than 3;
Repeat steps (1) and (2) several times, and select the plane whose T_in is largest, together with the corresponding set of homographies, as the output of this plane extraction;
Step 4: plane segmentation;
After obtaining the plane models and their corresponding homography sets from step 3, the video is segmented according to these homographies to obtain the final result, and the Graph Cuts algorithm is selected to segment the video:
The goal of segmenting an N-frame video sequence containing P planes is to assign to each pixel q ∈ F = {F_1, F_2, ..., F_N} a label f_q ∈ {0, 1, ..., P-1}, obtaining for each frame the sets of pixels belonging to the same plane, f = {q : f_q = i}, i ∈ {0, 1, ..., P-1}; the label set {f_q} is obtained by optimizing an energy function; the input image is first represented by a graph G = (R, E), where R denotes the set of nodes in the graph, each node n_i ∈ R corresponding to one pixel of the input image, and E denotes the set of edges between nodes, each edge corresponding to a pair of nodes <n_p, n_q> with a particular relationship; the optimization objective function of the Graph Cuts algorithm is as follows:

E(f) = \sum_{n \in R} D_n(f_n) + \sum_{n_p < n_q \in R} V_{n_p, n_q}(f_{n_p}, f_{n_q})

where D_n(f_n) denotes the data penalty incurred when the label of node n is set to f_n; n_p < n_q denotes the one-directional pairing of neighbouring nodes in the graph, "neighbouring" here meaning 4-neighbourhood or 8-neighbourhood adjacency; V_{n_p,n_q}(f_{n_p}, f_{n_q}) denotes the smoothness penalty, based on spatial continuity, incurred when the labels of the two neighbouring nodes <n_p, n_q> are set to f_{n_p} and f_{n_q} respectively; the labels {f_n} of the nodes are obtained by minimizing the above objective function;
Here, the data penalty is defined as follows:

D_n(f_n) = \left( (R(H[f_n] n) - R(n))^2 + (G(H[f_n] n) - G(n))^2 + (B(H[f_n] n) - B(n))^2 \right)^{1/2}

where H[f_n] is the homography induced by the f_n-th plane, and R(·), G(·), B(·) denote the RGB components of the pixel; the smoothness penalty is the connection weight between node n_p and its neighbouring node n_q, defined as follows:

V_{n_p, n_q}(f_{n_p}, f_{n_q}) = \lambda (f_{n_p} - f_{n_q})^2

where n_p, n_q ∈ R and λ is a constant used to balance the data penalty term and the smoothness penalty term.
2. The video plane extraction method based on an improved algorithm according to claim 1, characterized in that: in step 3, the threshold ε = 4.
3. The video plane extraction method based on an improved algorithm according to claim 1, characterized in that: in step 4, the weight λ = 80.
CN201310450214.4A 2013-09-27 2013-09-27 Video plane detection method based on an improved algorithm Expired - Fee Related CN103761725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310450214.4A CN103761725B (en) 2013-09-27 2013-09-27 Video plane detection method based on an improved algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310450214.4A CN103761725B (en) 2013-09-27 2013-09-27 Video plane detection method based on an improved algorithm

Publications (2)

Publication Number Publication Date
CN103761725A CN103761725A (en) 2014-04-30
CN103761725B true CN103761725B (en) 2016-08-17

Family

ID=50528958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310450214.4A Expired - Fee Related CN103761725B (en) 2013-09-27 2013-09-27 Video plane detection method based on an improved algorithm

Country Status (1)

Country Link
CN (1) CN103761725B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046688B (en) * 2015-06-23 2017-10-10 北京工业大学 A kind of many plane automatic identifying methods in three-dimensional point cloud
CN108961182B (en) * 2018-06-25 2021-06-01 北京大学 Vertical direction vanishing point detection method and video correction method for video image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945551A (en) * 2012-10-16 2013-02-27 同济大学 Graph theory based three-dimensional point cloud data plane extracting method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4952726B2 (en) * 2009-02-10 2012-06-13 エプソンイメージングデバイス株式会社 POSITION DETECTION DEVICE, ELECTRO-OPTICAL DEVICE, AND ELECTRONIC DEVICE

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945551A (en) * 2012-10-16 2013-02-27 同济大学 Graph theory based three-dimensional point cloud data plane extracting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multiple Planar Region Extraction Based on the Coplanar Line Pairs; Hyunwoo Kim et al.; 2011 IEEE International Conference on Robotics and Automation; 2011-05-13; pp. 2059-2064 *
Contour-based video object plane extraction method; Tang Li et al.; Video Engineering (电视技术); 2001-05-31 (No. 5); pp. 10-11, 24 *

Also Published As

Publication number Publication date
CN103761725A (en) 2014-04-30

Similar Documents

Publication Publication Date Title
CN102646275B (en) The method of virtual three-dimensional superposition is realized by tracking and location algorithm
CN109190508A (en) A kind of multi-cam data fusion method based on space coordinates
CN106803270A (en) Unmanned aerial vehicle platform is based on many key frames collaboration ground target localization method of monocular SLAM
CN104268866B (en) The video sequence method for registering being combined with background information based on movable information
CN104376552A (en) Virtual-real registering algorithm of 3D model and two-dimensional image
CN103839277A (en) Mobile augmented reality registration method of outdoor wide-range natural scene
CN108492017B (en) Product quality information transmission method based on augmented reality
CN104616247B (en) A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT
CN104794737A (en) Depth-information-aided particle filter tracking method
CN112907573B (en) Depth completion method based on 3D convolution
CN103903238A (en) Method for fusing significant structure and relevant structure of characteristics of image
CN111881804A (en) Attitude estimation model training method, system, medium and terminal based on joint training
CN111914615A (en) Fire-fighting area passability analysis system based on stereoscopic vision
CN108961385A (en) A kind of SLAM patterning process and device
CN104318552A (en) Convex hull projection graph matching based model registration method
CN103761725B (en) Video plane detection method based on an improved algorithm
CN103854271B (en) A kind of planar pickup machine scaling method
CN104021395A (en) Target tracing algorithm based on high-order partial least square method
CN104657985A (en) Occlusion avoidance method for static visual target based on depth image occlusion information
Kiyokawa et al. Efficient collection and automatic annotation of real-world object images by taking advantage of post-diminished multiple visual markers
CN107424194A (en) The detection method of keyboard profile tolerance
CN104156952B (en) A kind of image matching method for resisting deformation
Fu et al. Interior dense 3D reconstruction system with RGB-D camera for complex large scenes
Kikuchi et al. A data structure for triangular dissection of multi-resolution images
CN103077523A (en) Method for shooting and taking evidence through handheld camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SHENZHEN RESEARCH INSTITUTE, BEIJING INSTITUTE OF TECHNOLOGY

Effective date: 20140918

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20140918

Address after: 100081 No. 5, Zhongguancun South Street, Haidian District, Beijing

Applicant after: BEIJING INSTITUTE OF TECHNOLOGY

Applicant after: Shenzhen Institute of Beijing Institute of Technology

Address before: 100081 No. 5, Zhongguancun South Street, Haidian District, Beijing

Applicant before: BEIJING INSTITUTE OF TECHNOLOGY

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191125

Address after: 710014 floor 18, Xinyuan center, 251 Fenghe Road, Lianhu District, Xi'an City, Shaanxi Province

Patentee after: Shaanxi Help You Electronic Technology Co.,Ltd.

Address before: 100081 No. 5, Zhongguancun South Street, Haidian District, Beijing

Co-patentee before: Shenzhen Institute of Beijing Institute of Technology

Patentee before: BEIJING INSTITUTE OF TECHNOLOGY

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160817

Termination date: 20210927