CN105513094A - Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation - Google Patents

Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation

Info

Publication number
CN105513094A
Authority
CN
China
Prior art keywords
point
sift
target
matching
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510952595.5A
Other languages
Chinese (zh)
Inventor
李建勋 (Li Jianxun)
刘国栋 (Liu Guodong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201510952595.5A priority Critical patent/CN105513094A/en
Publication of CN105513094A publication Critical patent/CN105513094A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Abstract

Disclosed are a stereo vision tracking method and a stereo vision tracking system based on 3D Delaunay triangulation. The method comprises the following steps: extracting 3D SIFT feature points from the original image using a 3D SIFT algorithm; coarsely matching the 3D SIFT feature points by computing the Euclidean distance between the candidate model and the target model; performing 3D Delaunay triangulation on the matched 3D SIFT feature points and using the spatial geometric constraints formed by the triangulation to finely match the 3D SIFT feature points of the candidate model and the target model; and tracking the target within a particle filter target tracking framework. Because matching exploits the topological similarity between 3D SIFT feature points, mismatched points are removed effectively, feature point matching is more accurate, and tracking performance improves.

Description

Stereo vision tracking method and system based on 3D Delaunay triangulation
Technical field
The present invention relates to the field of image processing and stereo vision tracking, and specifically to a stereo vision tracking method and system based on 3D Delaunay triangulation.
Background technology
Visual target tracking is an important research direction in computer vision. It combines knowledge from many related areas, such as image processing, machine vision, pattern recognition, artificial intelligence, and mechanical control, and is widely used in intelligent surveillance, target recognition, traffic monitoring, and other applications.
Traditional moving target tracking is mostly based on monocular vision. Monocular tracking algorithms are now relatively mature and have developed well in many fields. However, monocular vision has serious limitations: it carries little information, and the image loses the three-dimensional structure of the actual scene during projection, so the spatial structure of the tracking target cannot be fully exploited. When tracking moving targets with monocular methods, problems such as target occlusion and interference from illumination changes and shadows in the surrounding scene commonly arise. Stereo vision can recover the three-dimensional information of the scene and use it to track the target, which effectively addresses problems, such as illumination change and occlusion, that are hard to solve with monocular vision.
In recent years, with the development and progress of 3D technology, three-dimensional models have become increasingly easy to acquire, and research on stereo vision tracking algorithms has advanced accordingly. Stereo vision compensates for the insufficient target information obtained by monocular vision and can make full use of the target's spatial structure and position information. RGB-D based target tracking, which uses color and depth information simultaneously, has recently become popular. RGB-D HOG, for example, computes histograms of oriented gradients from the RGB image and the depth image at the same time, describing local color texture and 3D shape. However, it does not fully exploit the target's spatial structure, so tracking is unstable for severely deformed targets.
Summary of the invention
To address the above shortcomings of the prior art, the present invention proposes a stereo vision tracking method and system based on 3D Delaunay triangulation, in which 3D SIFT feature points describe the target features and 3D Delaunay triangulation establishes three-dimensional spatial constraints on them. Because feature points in three-dimensional space carry richer information than in two dimensions, matching based on the topological similarity of 3D SIFT feature points removes mismatched points effectively; feature point matching is more precise, and tracking performance improves accordingly.
The present invention is achieved by the following technical solutions:
The present invention relates to a stereo vision tracking method based on 3D Delaunay triangulation: a 3D SIFT algorithm extracts 3D SIFT feature points from the original image; coarse matching of the 3D SIFT feature points is then achieved by computing the Euclidean distance between the candidate model and the target model; 3D Delaunay triangulation is performed on the matched feature points, and the spatial geometric constraints formed by the triangulation finely match the 3D SIFT feature points of the candidate model and the target model; the target is tracked within a particle filter target tracking framework.
The method specifically comprises the following steps:
Step A: extract 3D SIFT feature points, starting from the first frame of the original image. SIFT (Scale-Invariant Feature Transform) is an algorithm for detecting local features that is invariant to rotation, scaling, and brightness change, and is stable to some degree under viewpoint change, affine transformation, and noise. Here the two-dimensional SIFT feature extraction is extended to three dimensions and applied to feature extraction on a 3D point cloud. The concrete steps are as follows:
Step A1: detect key points, which are local extrema of the DoG (difference of Gaussians) space: to find DoG extrema, each sample point is compared with its neighboring points; when the DoG value at the point is an extremum within its neighborhood, i.e. a DoG extremum, the point is taken as a key point.
Step A2: generate the 3D SIFT feature vector of each key point: using the gradient orientation distribution in the key point's neighborhood, accumulate the gradient orientations of the neighborhood points in a histogram to generate the 3D SIFT feature vector.
Step B: coarsely match the 3D SIFT feature points extracted from the candidate model and the target model.
Suppose the target contains $N_1$ 3D SIFT feature points. The target model is denoted $P = \{p_i \mid i = 1, \dots, N_1\}$, where $p_i$ is a 3D SIFT feature point of the target model.
At frame $t$, the candidate target model has $N_2$ 3D SIFT feature points and is denoted $Q = \{q_i \mid i = 1, \dots, N_2\}$, where $q_i$ is a 3D SIFT feature point of the candidate model.
For any feature point $p_i \in P$, compute the shortest and the second-shortest Euclidean distance between $p_i$ and the feature points in $Q$. When the ratio of the shortest to the second-shortest distance is below a threshold, $p_i$ is considered to have a matching feature point in $Q$.
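As an illustration, here is a minimal sketch of this ratio test in Python with NumPy; the array names are ours, descriptors are assumed to be stored row-wise, and the default threshold of 0.6 is taken from the embodiment below.

```python
import numpy as np

def coarse_match(target_desc, cand_desc, ratio=0.6):
    """Keep p_i when its nearest neighbour in Q is markedly closer
    than its second-nearest neighbour (Lowe-style ratio test)."""
    matches = []
    for i, p in enumerate(target_desc):
        d = np.linalg.norm(cand_desc - p, axis=1)  # Euclidean distances to Q
        j1, j2 = np.argsort(d)[:2]                 # nearest, second nearest
        if d[j1] < ratio * d[j2]:
            matches.append((i, j1))                # p_i matched to q_{j1}
    return matches
```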
Step C: perform 3D Delaunay triangulation on the matched 3D SIFT feature points and establish the three-dimensional appearance model of the target:
Suppose $N$ point pairs remain after coarse matching, and let point sets PS and QS denote the target model and candidate model formed by the coarsely matched feature points $ps_i$ and $qs_i$; 3D Delaunay triangulation is then performed on the point sets PS and QS.
Spatial topological similarity is denoted $G(ps_i, N_{ps_i}, qs_i, N_{qs_i})$, where $N_{ps_i}$ is the set of points adjacent to $ps_i$ and $N_{qs_i}$ is the set of points adjacent to $qs_i$ in the triangulation. Each point is expressed as a weighted combination of its neighbors: $ps_i = \sum_{ps_j \in N_{ps_i},\, j \neq i} w_{ij}^1\, ps_j$ and $qs_i = \sum_{qs_j \in N_{qs_i},\, j \neq i} w_{ij}^2\, qs_j$, where $W_{ps_i} = \{w_{ij}^1 \mid ps_j \in N_{ps_i}, j \neq i\}$ is the weight coefficient vector of the neighborhood of $ps_i$, $W_{qs_i} = \{w_{ij}^2 \mid qs_j \in N_{qs_i}, j \neq i\}$ is the weight coefficient vector of the neighborhood of $qs_i$, and the two vectors have equal length. Computing $W_{ps_i}$ and $W_{qs_i}$ allows the spatial topological similarity of $ps_i$ and $qs_i$ to be compared, namely $G(ps_i, N_{ps_i}, qs_i, N_{qs_i}) = \|W_{ps_i} - W_{qs_i}\|_2$. When this value is below the matching threshold, $ps_i$ and $qs_i$ are considered a correct match.
The weight coefficient vectors $W_{ps_i}$ and $W_{qs_i}$ are computed by the least squares method, which attains the minimum error under the $L_2$ norm. In addition, least squares generally yields nonzero weight coefficients, meaning each point is described by all of its neighborhood points.
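The weight computation admits a short sketch under the stated least-squares criterion; the function names are ours, and aligning the two weight vectors so that they have equal length (e.g. over a common neighbor ordering) is left abstract, since the text only asserts that they do.

```python
import numpy as np

def neighborhood_weights(point, neighbors):
    """Solve min_w || point - sum_j w_j * neighbor_j ||_2 by least
    squares; the solution is generally dense, so every neighbour
    receives a nonzero weight, as noted above."""
    A = np.asarray(neighbors, dtype=float).T       # 3 x k coordinate matrix
    w, *_ = np.linalg.lstsq(A, np.asarray(point, dtype=float), rcond=None)
    return w

def topology_distance(w_ps, w_qs):
    """G(ps_i, N_ps_i, qs_i, N_qs_i) = ||W_ps_i - W_qs_i||_2."""
    return np.linalg.norm(np.asarray(w_ps) - np.asarray(w_qs))
```

A pair $(ps_i, qs_i)$ is then accepted when topology_distance falls below the matching threshold (0.1 in the embodiment below).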
Step D: track the target within the particle filter framework. After fine matching, the number of matched feature points between the target model and the candidate model describes the similarity between the models, and this similarity defines the observation model of the particle filter tracking framework, namely $Sim(P, Q) = \sum_{p_i \in P} d(p_i, map(p_i))$, where $p_i \in P$, $map(p_i) \in Q$ denotes the candidate feature point matched to $p_i$, and $d(\cdot)$ scores a pair as matched (1 for a correctly matched pair, 0 otherwise).
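Since the text defines the similarity through the count of matched feature points, a hedged sketch of the observation model treats $d(\cdot)$ as an indicator; mapping the similarity to a particle weight through a Gaussian kernel is our assumption, not a formula given by the patent.

```python
import numpy as np

def similarity(matches, n_target):
    """Sim(P, Q) = sum_i d(p_i, map(p_i)) with d(.) = 1 for a finely
    matched pair, i.e. the matched-point count, normalised by |P|."""
    return len(matches) / float(n_target)

def observation_weight(matches, n_target, sigma=0.2):
    """Turn the similarity into a particle likelihood (assumed kernel)."""
    s = similarity(matches, n_target)
    return float(np.exp(-(1.0 - s) ** 2 / (2.0 * sigma ** 2)))
```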
The present invention further relates to a system implementing the above method, comprising, connected in sequence: a 3D SIFT feature point detection module, a feature point coarse matching module, a triangulation space constraint fine matching module, and a particle filter tracking module, wherein: the 3D SIFT feature point detection module extracts 3D SIFT feature point information from the raw information of the target's 3D point cloud model; the feature point coarse matching module coarsely matches the 3D SIFT feature point information and outputs the coarse matching result to the triangulation space constraint fine matching module, which performs fine matching based on the 3D Delaunay triangulation space constraint; and the particle filter tracking module builds the particle filter framework from the fine matching information and obtains the target tracking information from it.
Brief description of the drawings
Fig. 1 shows the 3D SIFT feature extraction;
In the figure: a is the 3D SIFT feature points extracted for the ballet dancer; b is the 3D SIFT feature points extracted for the breakdancer; c is the local 3D SIFT feature points of the ballet dancer; d is the local 3D SIFT feature points of the breakdancer;
Fig. 2 shows the 3D Delaunay triangulation;
In the figure: a is the 3D Delaunay triangulation formed by the ballet dancer's 3D SIFT feature points; b is the 3D Delaunay triangulation formed by the breakdancer's 3D SIFT feature points; c is the 3D Delaunay triangulation of the ballet dancer's local 3D SIFT feature points; d is the 3D Delaunay triangulation of the breakdancer's local 3D SIFT feature points;
Fig. 3 is a schematic flow diagram of the method;
Fig. 4 shows the Ballet tracking results;
In the figure: a is frame 1; b is frame 5; c is frame 10; d is frame 20; e is frame 30;
Fig. 5 shows the Breakdance tracking results;
In the figure: a is frame 1; b is frame 5; c is frame 10; d is frame 20; e is frame 30.
Embodiment
This embodiment relates to a stereo vision tracking method based on 3D Delaunay triangulation. Candidate target features are first extracted and coarsely matched against the target model; 3D Delaunay triangulation is then built on the matched feature points, and the spatial geometric constraints of the triangulation are used to finely match the feature points. The target is tracked within a particle filter framework.
Step 1: 3D SIFT feature extraction, starting from the first frame of the video. The main steps are as follows:
Step 1.1: detect key points, which are local extrema of the DoG space: to find DoG extrema, each sample point is compared with its neighboring points; when the DoG value at the point is an extremum within its neighborhood, i.e. a DoG extremum, the point is taken as a key point.
The DoG extrema are obtained as follows. The original image is first convolved with Gaussian functions of different kernel values to form a Gaussian scale space; the Gaussian scale space is sampled to build a Gaussian pyramid; adjacent layers of the Gaussian pyramid are then differenced to obtain the DoG pyramid:
$D(x, y, z, k_j\sigma) = (G(x, y, z, k_i\sigma) - G(x, y, z, k_j\sigma)) * P(x, y, z)$, i.e.
$D(x, y, z, k_j\sigma) = L(x, y, z, k_i\sigma) - L(x, y, z, k_j\sigma)$,
where $P(x, y, z)$ are the point coordinates of the object's point cloud model and the scale space of the 3D point cloud is $L(x, y, z, k\sigma) = G(x, y, z, k\sigma) * P(x, y, z)$, in which $\sigma$ is the scale space factor, $k$ is the Gaussian kernel value of the scale space, and $G(x, y, z, k\sigma)$ is the variable-scale 3D Gaussian kernel $G(x, y, z, k\sigma) = \frac{1}{2\pi k^2\sigma^2}\, e^{-(x^2 + y^2 + z^2)/2k^2\sigma^2}$.
For the neighbor comparison, preferably 80 neighboring points are used: the 26 adjacent points at the same scale plus the 2 × 27 points at the two adjacent scales.
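A minimal sketch of this extremum search, assuming the point cloud has been rasterised into a voxel grid `volume`; the scale parameters are illustrative. A size-3 window over the (scale, x, y, z) axes covers exactly the 80 neighbors described above (26 at the same scale plus 2 × 27 at the adjacent scales).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(volume, sigma=1.6, k=2 ** (1.0 / 3.0), levels=5):
    """Detect local extrema of a 3D difference-of-Gaussians pyramid."""
    blurred = [gaussian_filter(volume, sigma * k ** i) for i in range(levels)]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(levels - 1)])
    keypoints = []
    for s in range(1, dog.shape[0] - 1):
        cube = dog[s - 1:s + 2]                    # three adjacent DoG levels
        is_max = dog[s] == maximum_filter(cube, size=3)[1]
        is_min = dog[s] == minimum_filter(cube, size=3)[1]
        strong = np.abs(dog[s]) > 1e-6             # skip flat regions
        keypoints += [(s, *v) for v in np.argwhere((is_max | is_min) & strong)]
    return keypoints
```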
Step 1.2: generate the key point features, i.e. the 3D SIFT feature vectors. Each detected key point is assigned an orientation parameter based on the gradient orientation distribution in its neighborhood: the gradient orientations of the neighborhood points are accumulated in a histogram, the histogram peak gives the dominant orientation of the neighborhood, and this orientation is taken as the key point's principal orientation. The local spatial relations of the key point's neighborhood are then accumulated: orientation histograms are computed over the neighborhood and assembled into the 3D SIFT feature vector.
Figure 1 shows two 3D point cloud images, in which the bright points are the extracted 3D SIFT feature points.
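As a sketch of step 1.2, one orientation-histogram block can be computed as follows, assuming per-point 3D gradients are available as an (n, 3) array; binning by azimuth and elevation, and the bin counts, are our choices, since the patent text does not fix them.

```python
import numpy as np

def orientation_histogram(grads, az_bins=8, el_bins=4):
    """Accumulate gradient directions, weighted by magnitude, into an
    azimuth x elevation histogram; blocks like this, computed over the
    key point's neighbourhood, are concatenated into the feature vector."""
    gx, gy, gz = grads[:, 0], grads[:, 1], grads[:, 2]
    mag = np.linalg.norm(grads, axis=1)
    az = np.arctan2(gy, gx)                        # in [-pi, pi]
    el = np.arctan2(gz, np.hypot(gx, gy))          # in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el, bins=(az_bins, el_bins),
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]], weights=mag)
    return hist.ravel()
```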
Step 2: coarsely match the 3D SIFT feature points extracted from the candidate model and the target model:
Suppose the target contains $N_1$ 3D SIFT feature points; the target model is $P = \{p_i \mid i = 1, \dots, N_1\}$, where $p_i$ is a 3D SIFT feature point of the target model. At frame $t$, the candidate target model has $N_2$ 3D SIFT feature points and is denoted $Q = \{q_i \mid i = 1, \dots, N_2\}$, where $q_i$ is a 3D SIFT feature point of the candidate model.
For any feature point $p_i \in P$, compute the shortest and the second-shortest Euclidean distance between $p_i$ and the feature points in $Q$. When the ratio of the shortest to the second-shortest distance is below the distance threshold, $p_i$ is considered to have a matching feature point in $Q$.
The distance threshold in this embodiment is 0.6.
Step 3: finely match the 3D SIFT feature points:
Step 3.1: suppose $N$ point pairs remain after coarse matching, and let point sets PS and QS denote the target model and candidate model composed of the coarsely matched feature points $ps_i$ and $qs_i$. 3D Delaunay triangulation is performed on PS and QS, as shown in Figure 2.
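Step 3.1 maps directly onto SciPy, whose Delaunay class computes the 3D triangulation (a tetrahedralisation); a sketch that extracts each point's adjacency set for use in steps 3.2 and 3.3:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_adjacency(points):
    """Return {i: indices adjacent to point i} from the 3D Delaunay
    triangulation of an (N, 3) array of matched feature points."""
    tri = Delaunay(np.asarray(points, dtype=float))
    indptr, indices = tri.vertex_neighbor_vertices  # CSR-style adjacency
    return {i: set(indices[indptr[i]:indptr[i + 1]])
            for i in range(len(points))}
```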
Step 3.2: let $N_{ps_i}$ denote the set of points adjacent to $ps_i$ and $N_{qs_i}$ the set of points adjacent to $qs_i$; then $ps_i = \sum_{ps_j \in N_{ps_i},\, j \neq i} w_{ij}^1\, ps_j$ and $qs_i = \sum_{qs_j \in N_{qs_i},\, j \neq i} w_{ij}^2\, qs_j$.
Step 3.3: let $W_{ps_i} = \{w_{ij}^1 \mid ps_j \in N_{ps_i}, j \neq i\}$ denote the weight coefficient vector of the neighborhood of $ps_i$ and $W_{qs_i} = \{w_{ij}^2 \mid qs_j \in N_{qs_i}, j \neq i\}$ that of $qs_i$; the two vectors have equal length and are computed by the least squares method.
Step 3.4: the spatial topological similarity is $G(ps_i, N_{ps_i}, qs_i, N_{qs_i}) = \|W_{ps_i} - W_{qs_i}\|_2$; when this value is below the matching threshold, $ps_i$ and $qs_i$ are considered a correct match.
The matching threshold in this embodiment is 0.1.
Step 4: particle filter target tracking:
This embodiment tracks and predicts the target within the framework of a particle filter target tracking algorithm. The particle filter framework mainly comprises a dynamic model and an observation model: the dynamic model is $X_k = f(X_{k-1}) + V_k$, where $V_k$ is Gaussian noise; the observation model is the similarity $Sim(P, Q) = \sum_{p_i \in P} d(p_i, map(p_i))$, where $p_i \in P$ and $map(p_i) \in Q$ is the candidate feature point matched to $p_i$. The dynamic model updates the particle positions, the observation model then updates the particle weights, and the position of the maximum-weight particle is taken as the target position, thereby tracking the target.
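One tracking iteration under these models can be sketched as follows; taking $f$ as the identity (a random-walk motion assumption) and the form of the `observe` callback are our choices, standing in for the patent's unspecified dynamic function and the matched-point observation model above.

```python
import numpy as np

def particle_filter_step(particles, observe, noise_std=0.05):
    """Propagate particles with X_k = f(X_{k-1}) + V_k (Gaussian V_k),
    reweight them with the observation model, and report the
    maximum-weight particle as the target position."""
    particles = particles + np.random.normal(0.0, noise_std, particles.shape)
    weights = np.array([observe(x) for x in particles])
    weights = weights / weights.sum()              # normalise weights
    estimate = particles[np.argmax(weights)]       # max-weight particle
    return particles, weights, estimate
```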
Tracking results are shown in Figure 4 and Figure 5, with the tracked target enclosed in a 3D bounding box. In tests, the hit rate is 95% on the Ballet sequence and 90% on the Breakdance sequence, where a hit means the target occupies at least two-thirds of the 3D bounding box.
Those skilled in the art may make local adjustments to the specific implementation above in different ways without departing from the principle and spirit of the present invention. The protection scope of the present invention is defined by the appended claims and is not limited by the specific implementation above; every implementation within that scope is bound by the present invention.

Claims (8)

1. A stereo vision tracking method based on 3D Delaunay triangulation, characterized in that a 3D SIFT algorithm extracts 3D SIFT feature points from the original image; coarse matching of the 3D SIFT feature points is then achieved by computing the Euclidean distance between the candidate model and the target model; 3D Delaunay triangulation is performed on the matched 3D SIFT feature points, and the spatial geometric constraints formed by the triangulation are used to finely match the 3D SIFT feature points of the candidate model and the target model; the target is tracked within a particle filter target tracking framework.
2. The method according to claim 1, characterized in that it specifically comprises the following steps:
Step A: extract 3D SIFT feature points, starting from the first frame of the original image;
Step B: coarsely match the 3D SIFT feature points extracted from the candidate model and the target model;
Step C: perform 3D Delaunay triangulation on the matched 3D SIFT feature points and establish the three-dimensional appearance model of the target;
Step D: after fine matching, describe the similarity between the target model and the candidate model by the number of matched feature points, and use this similarity as the observation model of the particle filter tracking framework to track the target.
3. The method according to claim 2, characterized in that step A comprises:
Step A1: detect key points, which are local extrema of the DoG space: each sample point is compared with its neighboring points, and when the DoG value at the point is an extremum within its neighborhood, i.e. a DoG extremum, the point is taken as a key point;
Step A2: generate the 3D SIFT feature vector of each key point: using the gradient orientation distribution in the key point's neighborhood, accumulate the gradient orientations of the neighborhood points in a histogram to generate the 3D SIFT feature vector.
4. The method according to claim 2, characterized in that step B specifically refers to: suppose the target contains $N_1$ 3D SIFT feature points; the target model is $P = \{p_i \mid i = 1, \dots, N_1\}$, where $p_i$ is a 3D SIFT feature point of the target model; at frame $t$, the candidate target model has $N_2$ 3D SIFT feature points and is denoted $Q = \{q_i \mid i = 1, \dots, N_2\}$, where $q_i$ is a 3D SIFT feature point of the candidate model;
for any feature point $p_i \in P$, compute the shortest and the second-shortest Euclidean distance between $p_i$ and the feature points in $Q$; when the ratio of the shortest to the second-shortest distance is below a threshold, $p_i$ is considered to have a matching feature point in $Q$.
5. The method according to claim 2, characterized in that step C specifically refers to: suppose $N$ point pairs remain after coarse matching, and let point sets PS and QS denote the target model and candidate model formed by the coarsely matched feature points $ps_i$ and $qs_i$; 3D Delaunay triangulation is performed on the point sets PS and QS;
spatial topological similarity is denoted $G(ps_i, N_{ps_i}, qs_i, N_{qs_i})$, where $N_{ps_i}$ is the set of points adjacent to $ps_i$ and $N_{qs_i}$ is the set of points adjacent to $qs_i$; $ps_i = \sum_{ps_j \in N_{ps_i},\, j \neq i} w_{ij}^1\, ps_j$ and $qs_i = \sum_{qs_j \in N_{qs_i},\, j \neq i} w_{ij}^2\, qs_j$, where $W_{ps_i} = \{w_{ij}^1 \mid ps_j \in N_{ps_i}, j \neq i\}$ is the weight coefficient vector of the neighborhood of $ps_i$, $W_{qs_i} = \{w_{ij}^2 \mid qs_j \in N_{qs_i}, j \neq i\}$ is the weight coefficient vector of the neighborhood of $qs_i$, and the two vectors have equal length; computing $W_{ps_i}$ and $W_{qs_i}$ allows the spatial topological similarity of $ps_i$ and $qs_i$ to be compared, namely $G(ps_i, N_{ps_i}, qs_i, N_{qs_i}) = \|W_{ps_i} - W_{qs_i}\|_2$; when this value is below the matching threshold, $ps_i$ and $qs_i$ are a correct matching relationship.
6. The method according to claim 5, characterized in that the weight coefficient vectors $W_{ps_i}$ and $W_{qs_i}$ are computed by the least squares method, and each point is described by all of its neighborhood points.
7. The method according to claim 5, characterized in that the similarity described in step D refers to: $Sim(P, Q) = \sum_{p_i \in P} d(p_i, map(p_i))$, where $p_i \in P$ and $map(p_i) \in Q$ denotes the matched feature point of $p_i$.
8. A stereo vision tracking system implementing the method of any one of the preceding claims, characterized in that it comprises, connected in sequence: a 3D SIFT feature point detection module, a feature point coarse matching module, a triangulation space constraint fine matching module, and a particle filter tracking module, wherein: the 3D SIFT feature point detection module extracts 3D SIFT feature point information from the raw information of the target's 3D point cloud model; the feature point coarse matching module coarsely matches the 3D SIFT feature point information and outputs the coarse matching result to the triangulation space constraint fine matching module, which performs fine matching based on the 3D Delaunay triangulation space constraint; and the particle filter tracking module builds the particle filter framework from the fine matching information and obtains the target tracking information from it.
CN201510952595.5A 2015-12-17 2015-12-17 Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation Pending CN105513094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510952595.5A CN105513094A (en) 2015-12-17 2015-12-17 Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510952595.5A CN105513094A (en) 2015-12-17 2015-12-17 Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation

Publications (1)

Publication Number Publication Date
CN105513094A true CN105513094A (en) 2016-04-20

Family

ID=55721051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510952595.5A Pending CN105513094A (en) 2015-12-17 2015-12-17 Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation

Country Status (1)

Country Link
CN (1) CN105513094A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100169576A1 (en) * 2008-12-31 2010-07-01 Yurong Chen System and method for sift implementation and optimization
CN101655982A (en) * 2009-09-04 2010-02-24 上海交通大学 Image registration method based on improved Harris angular point
US20110216939A1 (en) * 2010-03-03 2011-09-08 Gwangju Institute Of Science And Technology Apparatus and method for tracking target
CN102157017A (en) * 2011-04-28 2011-08-17 上海交通大学 Method for rapidly obtaining object three-dimensional geometric invariant based on image
CN104021577A (en) * 2014-06-19 2014-09-03 上海交通大学 Video tracking method based on local background learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DAVID G. LOWE: "Distinctive Image Features from Scale-Invariant Keypoints", Computer Vision *
DONG NI et al.: "Volumetric ultrasound panorama based on 3D SIFT", Medical Image Computing and Computer-Assisted Intervention *
JIANFANG DOU, JIANXUN LI: "Image matching based local Delaunay triangulation and affine invariant geometric constraint", Optik *
NIU Changfeng et al.: "Target tracking method based on SIFT features and particle filter" (基于SIFT特征和粒子滤波的目标跟踪方法), Robot (机器人) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108828524A (en) * 2018-06-03 2018-11-16 桂林电子科技大学 Particle filter audio source tracking localization method based on Delaunay Triangulation
CN108828524B (en) * 2018-06-03 2021-04-06 桂林电子科技大学 Particle filter sound source tracking and positioning method based on Delaunay triangulation
CN110399931A (en) * 2019-07-30 2019-11-01 燕山大学 A kind of fish eye images matching process and system
CN110399931B (en) * 2019-07-30 2021-07-06 燕山大学 Fisheye image matching method and system
CN110706259A (en) * 2019-10-12 2020-01-17 四川航天神坤科技有限公司 Space constraint-based cross-shot tracking method and device for suspicious people
CN110706259B (en) * 2019-10-12 2022-11-29 四川航天神坤科技有限公司 Space constraint-based cross-shot tracking method and device for suspicious people
CN111046906A (en) * 2019-10-31 2020-04-21 中国资源卫星应用中心 Reliable encryption matching method and system for planar feature points
CN111046906B (en) * 2019-10-31 2023-10-31 中国资源卫星应用中心 Reliable encryption matching method and system for planar feature points
CN111007565A (en) * 2019-12-24 2020-04-14 清华大学 Three-dimensional frequency domain full-acoustic wave imaging method and device
CN111461196A (en) * 2020-03-27 2020-07-28 上海大学 Method and device for identifying and tracking fast robust image based on structural features
CN111461196B (en) * 2020-03-27 2023-07-21 上海大学 Rapid robust image identification tracking method and device based on structural features
WO2022156652A1 (en) * 2021-01-25 2022-07-28 腾讯科技(深圳)有限公司 Vehicle motion state evaluation method and apparatus, device, and medium

Similar Documents

Publication Publication Date Title
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
CN107481315A (en) A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
CN104751465A (en) ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN103593832A (en) Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN104346608A (en) Sparse depth map densing method and device
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN104867126A (en) Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN103735269B (en) A kind of height measurement method followed the tracks of based on video multi-target
CN103413352A (en) Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN106709950A (en) Binocular-vision-based cross-obstacle lead positioning method of line patrol robot
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN104036524A (en) Fast target tracking method with improved SIFT algorithm
CN105389774A (en) Method and device for aligning images
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
CN106485737A (en) Cloud data based on line feature and the autoregistration fusion method of optical image
CN112163622B (en) Global and local fusion constrained aviation wide-baseline stereopair line segment matching method
CN104794737A (en) Depth-information-aided particle filter tracking method
CN111998862B (en) BNN-based dense binocular SLAM method
CN104599286A (en) Optical flow based feature tracking method and device
Hofer et al. Line-based 3D reconstruction of wiry objects

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160420