CN102221358B - Monocular visual positioning method based on inverse perspective projection transformation - Google Patents

Monocular visual positioning method based on inverse perspective projection transformation

Info

Publication number
CN102221358B
CN102221358B (application CN 201110070941 / CN201110070941A)
Authority
CN
China
Prior art keywords
image
camera
wheeled vehicle
loc
perspective projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110070941
Other languages
Chinese (zh)
Other versions
CN102221358A (en)
Inventor
冯莹
曹毓
魏立安
雷兵
陈运锦
杨云涛
赵立双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN 201110070941 priority Critical patent/CN102221358B/en
Publication of CN102221358A publication Critical patent/CN102221358A/en
Application granted granted Critical
Publication of CN102221358B publication Critical patent/CN102221358B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a monocular visual positioning method based on inverse perspective projection transformation. The technical scheme is as follows: an attitude sensor and a camera are fixed together and mounted on a wheeled vehicle, and the images captured while driving are processed as follows: step 1, inverse perspective projection transformation is applied to the image sequence; step 2, the transformation matrices between adjacent images are computed; step 3, the driving trajectory curve of the wheeled vehicle is determined. By using the real-time attitude information from the attitude sensor to assist the positioning of the wheeled vehicle, the method obtains highly accurate positioning results, and because the inverse perspective projection transformation removes the perspective effect from the images, the positioning accuracy of the wheeled vehicle is further improved.

Description

Monocular visual positioning method based on inverse perspective projection transformation
Technical field
The present invention relates to the fields of videometrics and image processing, and in particular to a method for monocular visual positioning of a motion platform using an inverse perspective projection transformation algorithm.
Background technology
Autonomous driving technology has been a hot research topic at home and abroad in recent years, and one of its key problems is how to achieve accurate, real-time self-positioning of a wheeled vehicle. At present, the commonly used positioning scheme is an integrated navigation system combining GPS (Global Positioning System) with an IMU (Inertial Measurement Unit). Such systems are generally expensive, and when the environment prevents the motion platform from receiving GPS signals, this scheme cannot perform its positioning function.
Existing visual positioning methods fall into two broad classes: stereo visual positioning and monocular visual positioning. Stereo visual positioning methods detect environmental feature points in three-dimensional space and estimate the motion of the wheeled vehicle on that basis. They have the following shortcomings: first, the algorithms are complex and time-consuming, making it difficult to meet real-time requirements; second, when the environmental background lacks obvious texture, the limited number of extracted feature points can cause large measurement errors. Stereo visual positioning therefore still has some way to go before reaching a practical engineering level.
By comparison, monocular visual positioning methods assume a relatively flat road surface and recover the driving trajectory of the wheeled vehicle by solving the simple displacement relationship between successive images. The algorithms are simple, highly responsive, and easy to install. However, traditional monocular methods require application conditions such as a flat road surface and a camera attitude that does not change as the wheeled vehicle moves. Lv Qiang et al. studied monocular visual positioning in "Implementation of a monocular visual odometer in a navigation system based on SIFT feature extraction" (Chinese Journal of Sensors and Actuators, Vol. 20, No. 5, pp. 1148-1152). They perform SIFT feature matching directly on the perspective images taken by the camera and compute the motion state of the wheeled vehicle from the matched points according to a derived theoretical model. Because a perspective image deforms the real scene with a near-large, far-small effect, the farther a feature point's corresponding world point lies from the wheeled vehicle, the larger the error it induces when resolving the image mapping relationship. If the environment close to the wheeled vehicle lacks texture, feature points are difficult to extract, and the algorithm suffers large measurement errors or even fails. Furthermore, the theoretical model of that paper involves a degree of approximation and does not account for the attitude changes that inevitably occur as the camera moves with the wheeled vehicle; matching features directly on perspective images also has inherent limitations. As a result, the final positioning accuracy is not ideal.
Summary of the invention
The technical problem solved by the present invention is: addressing the deficiencies of existing visual positioning technology, a monocular visual positioning method based on inverse perspective projection transformation is proposed. The algorithm of the present invention is simpler than stereo visual positioning methods and easy to implement; compared with existing monocular visual positioning methods, the present invention offers higher positioning accuracy.
The specific technical scheme adopted by the present invention is as follows:
The attitude sensor and the camera are fixed together, and the camera is placed so that it can image the road surface in any one direction around the vehicle body. The frame rate and resolution of the camera are set as required; the frame rate must guarantee that, while the wheeled vehicle travels normally, the fields of view corresponding to any two adjacent images taken by the camera overlap. Let the initial moment of recording the driving trajectory curve of the wheeled vehicle be t_1, and let the image taken by the camera at that moment be P_1. At moment t_i (i = 1, 2, ..., n, where n is the total number of images), the camera captures the i-th image P_i, and the camera attitude angle obtained by the attitude sensor is a_i. The following steps are then performed:
Step 1: apply inverse perspective projection transformation to the image sequence.
At moment t_i, the camera extrinsic matrix A_i is obtained from the camera attitude angle a_i; combined with the camera intrinsic matrix, this yields the inverse perspective projection transformation matrix B_i of the image. Using B_i, the image P_i taken at moment t_i is subjected to inverse perspective projection transformation, producing the top-down (bird's-eye) road view P'_i.
The top-down road views P'_i at all moments form the top-down view sequence P'_1, P'_2, ..., P'_n.
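The construction of B_i is not spelled out beyond "extrinsics from the attitude angles combined with the intrinsics". A minimal sketch of one standard way to build such a ground-plane homography, assuming a pinhole model, a flat road at Z = 0, and an assumed output layout (origin at the bottom-centre of the bird's-eye image); `px_per_m` and the axis conventions are illustrative choices, not values from the patent:

```python
import cv2
import numpy as np

def ipm_homography(K, R, t, px_per_m, out_w, out_h):
    """Inverse perspective projection matrix for the ground plane Z = 0.

    A ground point (X, Y, 0) projects to the image as
    s*p = K @ [r1 r2 t] @ (X, Y, 1)^T, so H = K @ [r1 r2 t] maps ground
    coordinates to image pixels and its inverse undoes the perspective effect.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # ground -> image pixels
    # S converts ground metres to output pixels; bottom-centre origin is an
    # assumed layout, with the vertical axis pointing away from the vehicle.
    S = np.array([[px_per_m, 0.0,       out_w / 2.0],
                  [0.0,     -px_per_m,  float(out_h)],
                  [0.0,      0.0,       1.0]])
    return S @ np.linalg.inv(H)

# Warp frame P_i into the top-down view P'_i (output size chosen freely):
# B_i = ipm_homography(K, R_i, t_i, px_per_m=50, out_w=640, out_h=480)
# birdseye_i = cv2.warpPerspective(frame_i, B_i, (640, 480))
```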
Step 2: compute the transformation matrix between adjacent images.
For any adjacent images P'_q and P'_{q+1} in the top-down view sequence P'_1, P'_2, ..., P'_n, where q = 1, 2, ..., n-1, carry out the following processing:
Step (1): extract feature points.
The SURF feature descriptor is applied to the adjacent images P'_q and P'_{q+1}. After feature extraction, the feature point set of image P'_q is F_q and that of image P'_{q+1} is F_{q+1}. The number of feature points in each set depends on the image resolution, the texture complexity, and the parameter settings of the SURF operator, and can be chosen according to need.
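As an illustration, SURF extraction and matching on a pair of top-down views could be done with OpenCV; note that SURF ships in the opencv-contrib package (cv2.xfeatures2d), and the Hessian threshold and ratio-test constant below are assumed values, echoing the remark above that the feature count depends on the operator's parameters:

```python
import cv2

# SURF detector/descriptor from opencv-contrib; hessianThreshold tunes how
# many keypoints survive (image resolution and texture also matter).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

kp_q,  desc_q  = surf.detectAndCompute(birdseye_q,  None)   # feature set F_q
kp_q1, desc_q1 = surf.detectAndCompute(birdseye_q1, None)   # feature set F_{q+1}

# Brute-force matching with Lowe's ratio test pairs points of P'_q and P'_{q+1}.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in bf.knnMatch(desc_q, desc_q1, k=2)
           if m.distance < 0.7 * n.distance]
```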
Step (2): obtain the parameters of the rigid-body transformation model between the images.
For the adjacent images P'_q and P'_{q+1}, a rigid-body transformation model is established to describe the transformation relationship between the two images. The rigid-body transformation model is as follows:
$$\begin{pmatrix} x_j \\ y_j \end{pmatrix} = \begin{pmatrix} \cos\theta_q & -\sin\theta_q \\ \sin\theta_q & \cos\theta_q \end{pmatrix} \begin{pmatrix} x'_k \\ y'_k \end{pmatrix} + \begin{pmatrix} dx_q \\ dy_q \end{pmatrix}, \qquad q = 1, 2, \ldots, n-1$$
Here the image pixel coordinate set corresponding to feature point set F_q of image P'_q is {(x_j, y_j)}, j = 1...m, and the image pixel coordinate set corresponding to feature point set F_{q+1} of image P'_{q+1} is {(x'_k, y'_k)}, k = 1...m', where m and m' are respectively the numbers of feature points contained in F_q and F_{q+1}; θ_q denotes the rotation angle of image P'_{q+1} with respect to P'_q, and (dx_q, dy_q) is the pixel-coordinate translation of image P'_{q+1} with respect to P'_q.
Using the matched image pixel coordinate sets of P'_q and P'_{q+1}, the rigid-body transformation model is solved with the RANSAC estimation algorithm, yielding the model parameters θ_q and (dx_q, dy_q).
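The patent does not fix a particular RANSAC implementation. One convenient stand-in is OpenCV's estimateAffinePartial2D, which RANSAC-fits a rotation + translation + uniform scale; on metric bird's-eye views the scale should stay near 1, so this approximates the pure rigid model above (the reprojection threshold is an assumed value):

```python
import cv2
import numpy as np

# Matched pixel coordinates: pts_q in P'_q, pts_q1 in P'_{q+1}.
pts_q  = np.float32([kp_q[m.queryIdx].pt  for m in matches])
pts_q1 = np.float32([kp_q1[m.trainIdx].pt for m in matches])

# Fit the map taking P'_{q+1} points onto P'_q, matching the model above:
# M = [[s*cosθ, -s*sinθ, dx], [s*sinθ, s*cosθ, dy]], with s expected ~1.
M, inliers = cv2.estimateAffinePartial2D(pts_q1, pts_q,
                                         method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
theta_q = np.arctan2(M[1, 0], M[0, 0])   # rotation angle θ_q
dx_q, dy_q = M[0, 2], M[1, 2]            # pixel translation (dx_q, dy_q)
```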
Step 3: determine the driving trajectory curve of the wheeled vehicle.
From the rigid-body model parameters θ_q and (dx_q, dy_q), the displacement dL_q of the wheeled vehicle between adjacent moments t_q and t_{q+1} is computed as

$$dL_q = \sqrt{dx_q^2 + dy_q^2} \times \frac{D}{M}, \qquad q = 1, 2, \ldots, n-1$$

where M is the vertical resolution of the top-down view (identical for every image in the top-down view sequence) and D is the actual vertical field-of-view extent on the ground corresponding to the M pixels; the values of M and D can be chosen as required. Combining this with the course information θ_q, the position coordinate Loc_{q+1} of the wheeled vehicle in the world coordinate system at moment t_{q+1} can be inferred. Loc_1 lies at the coordinate origin, i.e. $X_{Loc_1} = 0,\ Y_{Loc_1} = 0$; the abscissa and ordinate of the remaining Loc_{q+1} (q = 1, 2, ..., n-1) follow by recursion:

$$\begin{cases} X_{Loc_{q+1}} = X_{Loc_q} + dL_q \times \cos\!\left(\sum_{l=1}^{q} \theta_l\right) \\[4pt] Y_{Loc_{q+1}} = Y_{Loc_q} + dL_q \times \sin\!\left(\sum_{l=1}^{q} \theta_l\right) \end{cases}$$
Connecting the position coordinate points Loc_i (i = 1, 2, ..., n) of each moment in sequence yields the driving trajectory curve of the wheeled vehicle.
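Once the per-pair parameters are known, steps 1-3 reduce to a short dead-reckoning loop. A minimal sketch of the recursion above (function and variable names are illustrative):

```python
import numpy as np

def integrate_trajectory(thetas, dxs, dys, D, M):
    """Dead-reckon Loc_1..Loc_n from the rigid-model parameters.

    thetas, dxs, dys : θ_q, dx_q, dy_q for q = 1..n-1
    D / M            : metres of ground per pixel along the vertical image axis
    """
    scale = D / M
    xs, ys = [0.0], [0.0]                   # Loc_1 sits at the origin
    heading = 0.0
    for theta, dx, dy in zip(thetas, dxs, dys):
        dL = np.hypot(dx, dy) * scale       # displacement between t_q and t_{q+1}
        heading += theta                    # accumulated course Σ θ_l
        xs.append(xs[-1] + dL * np.cos(heading))
        ys.append(ys[-1] + dL * np.sin(heading))
    return np.array(xs), np.array(ys)       # connect the points to get the track
```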
The beneficial effects of the present invention are:
1. On the basis of the traditional monocular visual positioning method, the real-time attitude information obtained by the attitude sensor is used to assist the positioning of the wheeled vehicle, yielding higher-accuracy positioning results.
2. Applying inverse perspective projection transformation to the images eliminates the perspective effect; feature extraction and model-parameter computation are then carried out on the transformed images, avoiding the large mapping-resolution errors of the traditional method and improving the positioning accuracy of the wheeled vehicle.
3. The present invention uses only one camera and one attitude sensor to compute the driving trajectory of the wheeled vehicle, so its structure is relatively simple, installation is convenient, and complicated operations such as calibration are not required.
Description of drawings
Fig. 1 is the flow chart of a specific implementation of the present invention;
Fig. 2 is a schematic diagram of the computation principle of the wheeled-vehicle trajectory curve;
Fig. 3 is a schematic diagram of the working principle of monocular visual positioning;
Fig. 4 is the 34th road-surface image taken by the camera in the experiment;
Fig. 5 is the top-down road view of the 34th image of Fig. 4 after inverse perspective transformation;
Fig. 6 is the driving trajectory curve of the wheeled vehicle obtained in the experiment.
Embodiment
Fig. 1 gives the flow chart of a specific implementation of the present invention. Step 1 applies inverse perspective projection transformation to the image sequence; for the method of solving the camera intrinsic and extrinsic matrices, see pages 22 to 33 of the book "Videometrics: Principles and Applications Research" (Science Press, by Yu Qifeng and Shang Yang). Step 2 computes the transformation matrix between adjacent images. In sub-step (1), feature points are extracted using the SURF detection operator; for the properties and usage of the SIFT and SURF feature operators, see "Distinctive image features from scale-invariant keypoints" (International Journal of Computer Vision, 2004, 60(2), pp. 91-110, by David G. Lowe) and "SURF: Speeded up robust features" (Proceedings of the 9th European Conference on Computer Vision, 2006, 3951(1), pp. 404-417, by Herbert Bay, Tinne Tuytelaars and Luc Van Gool). In sub-step (2), the parameters of the rigid-body transformation model between the images are obtained using the RANSAC estimation algorithm; RANSAC is fast, produces accurate parameter estimates, and is currently one of the most commonly used estimation algorithms. For a detailed introduction to the RANSAC principle, see "Preemptive RANSAC for Live Structure and Motion Estimation" (Proceedings of the Ninth IEEE International Conference on Computer Vision, ICCV 2003, by David Nistér, Sarnoff Corporation, Princeton). Step 3 determines the driving trajectory curve of the wheeled vehicle.
Fig. 2 gives the computation schematic of the wheeled-vehicle trajectory curve. The XY axes in the figure represent the world coordinate system. The origin of the coordinate system corresponds to the position coordinate Loc_1 of the wheeled vehicle in the world coordinate system when the camera takes the 1st image, i.e. the initial moment t_1 of positioning. θ_1 is the rotation angle between the 1st and 2nd images, dL_1 is the translation distance between them, and Loc_2 is the position coordinate of the wheeled vehicle in the world coordinate system when the camera takes the 2nd image. Likewise, θ_2 and dL_2 are the rotation angle and translation distance between the 2nd and 3rd images, giving Loc_3; in general, θ_{n-1} and dL_{n-1} are the rotation angle and translation distance between the (n-1)-th and n-th images, where n is the total number of images taken, and Loc_{n-1} and Loc_n are the corresponding position coordinates of the wheeled vehicle. Connecting the world-coordinate positions Loc_1, Loc_2, ..., Loc_n yields the driving trajectory curve of the wheeled vehicle.
An experiment was carried out with an embodiment of the present invention. The experiment was conducted on a flat outdoor site. The camera was mounted at the front of the wheeled vehicle; in practical applications it may also be mounted on either side or at the rear, as long as the field of view imaged by the camera contains only the road surface. In this experiment the camera acquired pictures at 3 frames per second, collecting 93 images in total, i.e. n = 93; the attitude sensor (model MTI) and the camera (model FLE2-14S3) acquired data synchronously. The working principle of the experiment is shown in Fig. 3. Numeral 1 in Fig. 3 indicates the position of the wheeled vehicle at moment t_q (q = 1, 2, ..., n-1, with n the total number of images taken); the camera field of view at this moment is the trapezoidal area 3, and the image taken is P_q. Numeral 2 indicates the position of the wheeled vehicle at moment t_{q+1}, with the corresponding field of view shown as trapezoidal area 4 and the image taken being P_{q+1}. The black arrow represents the distance travelled by the wheeled vehicle between moments t_q and t_{q+1}, and the overlapping field-of-view region of images P_q and P_{q+1} is shown as numeral 5. The displacement and direction of the wheeled vehicle are obtained precisely through the relative position relationship of the overlapping region within the two fields of view.
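As a rough plausibility check on the overlap requirement (the speed figure below is an assumption, not stated in the patent): if the bird's-eye view covers D metres of road along the driving direction and the vehicle moves at v m/s, consecutive frames overlap only when the frame period 1/f is below D/v:

```python
def min_frame_rate(speed_mps: float, coverage_m: float) -> float:
    """Lowest frame rate that still leaves some field-of-view overlap."""
    return speed_mps / coverage_m

# E.g. at an assumed 2 m/s with 3 m of ground coverage, anything above
# ~0.67 fps overlaps; the experiment's 3 frames/second leaves a wide margin.
print(min_frame_rate(2.0, 3.0))
```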
In the experiment, the vehicle drove a certain distance along a serpentine curve. Fig. 4 shows the 34th image P_34 of the sequence taken by the camera, and Fig. 5 shows the top-down road view P'_34 of image P_34 after inverse perspective transformation. Fig. 6 gives the driving trajectory curve of the wheeled vehicle obtained with the present invention; XY denotes the world coordinate system, in metres, and each green square point represents the vehicle position coordinate Loc_i corresponding to one image. Connecting all the points yields the driving trajectory curve of the wheeled vehicle. From Fig. 6, the travelled distance obtained by the method of the present invention is 22.76 metres, while the actual travelled distance measured on site with a tape measure is 22.63 metres, an error of about 6‰ ((22.76 − 22.63)/22.63 ≈ 0.6%).

Claims (1)

1. A monocular visual positioning method based on inverse perspective projection transformation, characterized in that an attitude sensor and a camera are fixed together, and the camera is placed so that it can image the road surface in any one direction around the vehicle body; the frame rate and resolution of the camera are set as required; the frame rate must guarantee that, while the wheeled vehicle travels normally, the fields of view corresponding to any two adjacent images taken by the camera overlap; the initial moment of recording the driving trajectory curve of the wheeled vehicle is t_1, and the image taken by the camera at that moment is P_1; at moment t_i the camera captures the i-th image P_i, and the camera attitude angle obtained by the attitude sensor is a_i, i = 1, 2, ..., n, where n is the total number of images; the following steps are performed:
Step 1: apply inverse perspective projection transformation to the image sequence;
at moment t_i, the camera extrinsic matrix A_i is obtained from the camera attitude angle a_i, and combined with the camera intrinsic matrix this yields the inverse perspective projection transformation matrix B_i of the image; using B_i, the image P_i taken at moment t_i is subjected to inverse perspective projection transformation to obtain the top-down road view P'_i; the top-down road views P'_i at all moments form the top-down view sequence P'_1, P'_2, ..., P'_n;
Step 2: compute the transformation matrix between adjacent images;
for any adjacent images P'_q and P'_{q+1} of the top-down view sequence, q = 1, 2, ..., n-1, carry out the following processing:
Step (1): extract feature points;
the SURF feature descriptor is applied to the adjacent images P'_q and P'_{q+1}; after feature extraction, the feature point set of image P'_q is F_q and that of image P'_{q+1} is F_{q+1}; the number of feature points of each image's feature point set depends on the image resolution, the texture complexity and the parameter settings of the SURF operator, and can be determined according to need;
Step (2): obtain the parameters of the rigid-body transformation model between the images;
for the adjacent images P'_q and P'_{q+1}, a rigid-body transformation model is established to describe the transformation relationship between the two images; the rigid-body transformation model is as follows:

$$\begin{pmatrix} x_j \\ y_j \end{pmatrix} = \begin{pmatrix} \cos\theta_q & -\sin\theta_q \\ \sin\theta_q & \cos\theta_q \end{pmatrix} \begin{pmatrix} x'_k \\ y'_k \end{pmatrix} + \begin{pmatrix} dx_q \\ dy_q \end{pmatrix}, \qquad q = 1, 2, \ldots, n-1$$

wherein the image pixel coordinate set corresponding to feature point set F_q of image P'_q is {(x_j, y_j)}, j = 1...m, and the image pixel coordinate set corresponding to feature point set F_{q+1} of image P'_{q+1} is {(x'_k, y'_k)}, k = 1...m', where m and m' are respectively the numbers of feature points contained in F_q and F_{q+1}; θ_q denotes the rotation angle of image P'_{q+1} with respect to P'_q, and (dx_q, dy_q) is the pixel-coordinate translation of image P'_{q+1} with respect to P'_q;
using the image pixel coordinate sets corresponding to P'_q and P'_{q+1}, the rigid-body transformation model is solved with the RANSAC estimation algorithm, yielding the model parameters θ_q and (dx_q, dy_q);
Step 3: determine the driving trajectory curve of the wheeled vehicle;
from the rigid-body model parameters θ_q and (dx_q, dy_q), the displacement dL_q of the wheeled vehicle between adjacent moments t_q and t_{q+1} is computed as

$$dL_q = \sqrt{dx_q^2 + dy_q^2} \times \frac{D}{M}$$

where M is the vertical resolution of the top-down view and D is the actual vertical field-of-view extent corresponding to the M pixels; the values of M and D can be determined as required; the position coordinate of the wheeled vehicle in the world coordinate system is Loc_{q+1}, where Loc_1 lies at the coordinate origin, i.e. $X_{Loc_1} = 0,\ Y_{Loc_1} = 0$, and the abscissa and ordinate of the remaining Loc_{q+1} are, for q = 1, 2, ..., n-1:

$$\begin{cases} X_{Loc_{q+1}} = X_{Loc_q} + dL_q \times \cos\!\left(\sum_{l=1}^{q} \theta_l\right) \\[4pt] Y_{Loc_{q+1}} = Y_{Loc_q} + dL_q \times \sin\!\left(\sum_{l=1}^{q} \theta_l\right) \end{cases}$$
connecting the position coordinate points Loc_i of each moment in sequence yields the driving trajectory curve of the wheeled vehicle.
CN 201110070941 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation Expired - Fee Related CN102221358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110070941 CN102221358B (en) 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110070941 CN102221358B (en) 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation

Publications (2)

Publication Number Publication Date
CN102221358A CN102221358A (en) 2011-10-19
CN102221358B true CN102221358B (en) 2012-12-12

Family

ID=44777980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110070941 Expired - Fee Related CN102221358B (en) 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation

Country Status (1)

Country Link
CN (1) CN102221358B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103292807B (en) * 2012-03-02 2016-04-20 江阴中科矿业安全科技有限公司 Drill carriage attitude measurement method based on monocular vision
CN102829763B (en) * 2012-07-30 2014-12-24 中国人民解放军国防科学技术大学 Pavement image collecting method and system based on monocular vision location
CN104180818B (en) * 2014-08-12 2017-08-11 北京理工大学 A kind of monocular vision mileage calculation device
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN105976402A (en) * 2016-05-26 2016-09-28 同济大学 Real scale obtaining method of monocular vision odometer
CN106462762B (en) * 2016-09-16 2019-07-19 香港应用科技研究院有限公司 Based on vehicle detection, tracking and the positioning for enhancing anti-perspective transform
CN108051012A (en) * 2017-12-06 2018-05-18 爱易成技术(天津)有限公司 Mobile object space coordinate setting display methods, apparatus and system
CN108921060A (en) * 2018-06-20 2018-11-30 安徽金赛弗信息技术有限公司 Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method
CN110567728B (en) * 2018-09-03 2021-08-20 创新先进技术有限公司 Method, device and equipment for identifying shooting intention of user
CN109242907A (en) * 2018-09-29 2019-01-18 武汉光庭信息技术股份有限公司 A kind of vehicle positioning method and device based on according to ground high speed camera
US20220018658A1 (en) * 2018-12-12 2022-01-20 The University Of Tokyo Measuring system, measuring method, and measuring program
CN110335317B (en) * 2019-07-02 2022-03-25 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and medium based on terminal equipment positioning
CN112212873B (en) * 2019-07-09 2022-12-02 北京地平线机器人技术研发有限公司 Construction method and device of high-precision map
CN112710308B (en) * 2019-10-25 2024-05-31 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN110986890B (en) * 2019-11-26 2022-03-25 北京经纬恒润科技股份有限公司 Height detection method and device
CN112927306B (en) * 2021-02-24 2024-01-16 深圳市优必选科技股份有限公司 Calibration method and device of shooting device and terminal equipment
CN114782549B (en) * 2022-04-22 2023-11-24 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100391442B1 (en) * 2000-12-27 2003-07-12 현대자동차주식회사 Image processing method for preventing a vehicle from running off the line
JP2006101816A (en) * 2004-10-08 2006-04-20 Univ Of Tokyo Method and apparatus for controlling steering

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cao Yu et al., "Monocular Visual Odometry based on Inverse Perspective Mapping", Proc. of SPIE, 2011, Vol. 8194, pp. 1-7. *
JP特开2006-101816A 2006.04.20
Gao Dezhi et al., "Intelligent vehicle positioning technology based on inverse perspective transformation", Computer Measurement & Control (计算机测量与控制), 2009, Vol. 17, No. 9, pp. 1810-1812. *

Also Published As

Publication number Publication date
CN102221358A (en) 2011-10-19

Similar Documents

Publication Publication Date Title
CN102221358B (en) Monocular visual positioning method based on inverse perspective projection transformation
CN107505644B (en) Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion
Baboud et al. Automatic photo-to-terrain alignment for the annotation of mountain pictures
CN109842756A (en) A kind of method and system of lens distortion correction and feature extraction
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
CN102721409B (en) Measuring method of three-dimensional movement track of moving vehicle based on vehicle body control point
CN103337068B (en) The multiple subarea matching process of spatial relation constraint
CN102692236A (en) Visual milemeter method based on RGB-D camera
Goecke et al. Visual vehicle egomotion estimation using the fourier-mellin transform
JP2012159506A5 (en)
CN103411587B (en) Positioning and orientation method and system
Jende et al. A fully automatic approach to register mobile mapping and airborne imagery to support the correction of platform trajectories in GNSS-denied urban areas
Dawood et al. Harris, SIFT and SURF features comparison for vehicle localization based on virtual 3D model and camera
CN110715646B (en) Map trimming measurement method and device
Dawood et al. Vehicle geo-localization based on IMM-UKF data fusion using a GPS receiver, a video camera and a 3D city model
CN108613675B (en) Low-cost unmanned aerial vehicle movement measurement method and system
CN102483881B (en) Pedestrian-crossing marking detecting method and pedestrian-crossing marking detecting device
CN112446915A (en) Picture-establishing method and device based on image group
Shahbazi et al. Unmanned aerial image dataset: Ready for 3D reconstruction
Wan et al. The P2L method of mismatch detection for push broom high-resolution satellite images
CN116907469A (en) Synchronous positioning and mapping method and system for multi-mode data combined optimization
CN103489165A (en) Decimal lookup table generation method for video stitching
Jiang et al. Precise vehicle ego-localization using feature matching of pavement images
CN110033493B (en) Camera 3D calibration method and terminal
De Agostino et al. GIMPHI: a new integration approach for early impact assessment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121212

Termination date: 20150323

EXPY Termination of patent right or utility model