CN102221358A - Monocular visual positioning method based on inverse perspective projection transformation - Google Patents

Monocular visual positioning method based on inverse perspective projection transformation Download PDF

Info

Publication number
CN102221358A
CN102221358A (application CN 201110070941 / CN201110070941A; granted as CN102221358B)
Authority
CN
China
Prior art keywords
image
camera
wheeled vehicle
loc
perspective projection
Prior art date
Legal status
Granted
Application number
CN 201110070941
Other languages
Chinese (zh)
Other versions
CN102221358B (en)
Inventor
冯莹
曹毓
魏立安
雷兵
陈运锦
杨云涛
赵立双
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority: CN 201110070941 (granted as CN102221358B)
Publication of CN102221358A
Application granted
Publication of CN102221358B
Legal status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a monocular visual positioning method based on inverse perspective projection transformation. In the adopted technical scheme, an attitude sensor and a camera are fixed together and mounted on a wheeled vehicle, and the images shot while driving are processed as follows: step 1, apply an inverse perspective projection transformation to the image sequence; step 2, compute the transformation matrices between adjacent images; step 3, determine the driving trajectory curve of the wheeled vehicle. By using the real-time attitude information obtained by the attitude sensor to assist the localization of the wheeled vehicle, the method achieves highly accurate positioning results; and by applying the inverse perspective projection transformation to the images it eliminates the perspective effect, so the positioning precision of the wheeled vehicle is further improved.

Description

Monocular vision localization method based on inverse perspective projection transformation
Technical field
The present invention relates to the technical fields of videometrics and image processing, and in particular to a method for monocular visual localization of a motion platform using an inverse perspective projection transformation algorithm.
Background technology
Autonomous driving technology has been a hot research topic at home and abroad in recent years, and one of its key problems is how to achieve accurate, real-time self-localization of a wheeled vehicle. At present, the commonly used positioning scheme is an integrated navigation system combining GPS (Global Positioning System) with an IMU (Inertial Measurement Unit), but such systems are generally expensive, and when the environment prevents the motion platform from receiving GPS signals, this scheme cannot perform its positioning function.
Existing visual localization methods fall into two broad classes: stereo visual localization and monocular visual localization. Stereo visual localization methods detect environmental feature points in three-dimensional space and estimate the motion of the wheeled vehicle on that basis. They have the following shortcomings: first, the algorithms are complex and time-consuming, making real-time requirements difficult to meet; second, when the environmental background lacks distinct texture, the limited number of extracted feature points can cause large measurement errors. Stereo visual localization methods therefore still have some way to go before reaching a practical engineering level.
By comparison, monocular visual localization methods take a relatively flat road surface as a prerequisite and recover the driving trajectory of the wheeled vehicle by solving the simple displacement relations between images in a sequence. The algorithms are simple, responsive, and easy to install. However, traditional monocular visual localization methods require conditions such as a flat road surface and a camera attitude that does not change with the motion of the wheeled vehicle. Lv Qiang et al. studied the monocular visual localization technique in "Implementation of a monocular visual odometer in a navigation system based on SIFT feature extraction" (Chinese Journal of Sensors and Actuators, Vol. 20, No. 5, pp. 1148-1152): the perspective images taken by the camera are directly matched with SIFT features, and the motion state of the wheeled vehicle is computed from the matched points according to a derived theoretical model. Because a perspective image deforms the real scene with a near-large, far-small effect, the farther a point corresponding to an extracted image feature lies from the wheeled vehicle in the world coordinate system, the larger the error in resolving the image mapping relation. If the environmental texture near the wheeled vehicle is too uniform, feature points are difficult to extract, and the algorithm suffers large measurement errors or even fails. Moreover, the theoretical model of that paper involves a certain degree of approximation, does not account for the attitude changes that inevitably occur as the camera moves with the wheeled vehicle, and directly matching features on perspective images has inherent limitations, so the final positioning accuracy is not ideal.
Summary of the invention
The technical problem solved by the present invention is: addressing the deficiencies of existing visual localization techniques, a monocular visual localization method based on inverse perspective projection transformation is proposed. The algorithm of the present invention is simpler than stereo visual localization methods and easy to implement; compared with existing monocular visual localization methods, the present invention achieves higher positioning precision.
The concrete technical scheme of the present invention is as follows:
The attitude sensor and the camera are fixed together, and the camera is placed so that it can photograph the road surface in any one direction around the vehicle body. The frame rate and resolution of the camera are set as required. The frame rate must guarantee that, while the wheeled vehicle travels at normal speed, the fields of view of any two adjacent images taken by the camera have an overlapping part. Suppose the initial time for recording the driving trajectory curve of the wheeled vehicle is t_1 and the image taken by the camera at that moment is P_1. At time t_i (i = 1, 2, …, n, where n is the total number of images), the camera takes the i-th image P_i, and the camera attitude angle obtained by the attitude sensor is a_i. The following steps are then carried out:
Step 1: apply the inverse perspective projection transformation to the image sequence.
At time t_i, the camera extrinsic matrix A_i is obtained from the camera attitude angle a_i, and combined with the camera intrinsic matrix it yields the inverse perspective projection transformation matrix B_i of the image. Applying B_i to the image P_i taken at time t_i gives the top-down view P′_i of the road surface.
The top-down views P′_i at all times form the top-down view sequence P′_1, P′_2, …, P′_n.
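The mapping in step 1 can be sketched as a planar homography: for a camera with intrinsic matrix K and pose (R, t) relative to the ground plane z = 0, image points of ground points satisfy x_img ~ K [r1 r2 t] (X, Y, 1)^T, so the inverse of that 3×3 matrix plays the role of B_i. A minimal numpy sketch, assuming a Z-Y-X attitude-angle convention (the patent does not specify one, so this convention and the function names are illustrative):

```python
import numpy as np

def rotation_from_attitude(roll, pitch, yaw):
    """Rotation matrix from attitude angles (radians), Z-Y-X convention.
    The convention is an assumption; the patent only states that the
    extrinsic matrix A_i is derived from the attitude angle a_i."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def inverse_perspective_matrix(K, R, t):
    """Homography mapping image pixels to ground-plane (z = 0) coordinates.
    For ground points, x_img ~ K [r1 r2 t] (X, Y, 1)^T, so the inverse of
    H = K [r1 r2 t] plays the role of the matrix B_i."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return np.linalg.inv(H)

def warp_point(B, u, v):
    """Apply the inverse perspective transformation to one pixel (u, v)."""
    p = B @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Warping every pixel of P_i this way (or, in practice, every pixel of the output grid through the forward homography) produces the top-down view P′_i.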
Step 2: compute the transformation matrix between adjacent images.
For any adjacent images P′_q and P′_{q+1} in the top-down view sequence P′_1, P′_2, …, P′_n (q = 1, 2, …, n-1), carry out the following processing:
(1) Extract feature points.
Apply the SURF feature descriptor to the adjacent images P′_q and P′_{q+1}; after feature extraction, the feature point set of image P′_q is F_q and that of image P′_{q+1} is F_{q+1}. The number of feature points in each set depends on the image resolution, the texture complexity, and the SURF parameter settings, and can be determined according to the application.
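Pairing the two feature sets requires a descriptor matcher. As a generic illustration (not part of the patent text), nearest-neighbour matching with a ratio test can be sketched as follows; real SURF descriptors are 64- or 128-dimensional, but the logic is the same for any dimension, and the ratio value is an ordinary choice:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Pair rows of desc_a (features of P'_q) with rows of desc_b
    (features of P'_{q+1}) by nearest neighbour, keeping a match only
    when the best distance is clearly smaller than the second best."""
    # pairwise Euclidean distances, shape (len(desc_a), len(desc_b))
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(len(desc_a)):
        order = np.argsort(d[i])
        if len(order) > 1 and d[i, order[0]] < ratio * d[i, order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The matched index pairs give the corresponding pixel coordinates fed into the rigid-body model of sub-step (2).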
(2) Obtain the parameters of the rigid-body transformation model between images.
For the adjacent images P′_q and P′_{q+1}, a rigid-body transformation model is set up to describe the relation between the two images:

$$\begin{pmatrix} x_j \\ y_j \end{pmatrix} = \begin{pmatrix} \cos\theta_q & -\sin\theta_q \\ \sin\theta_q & \cos\theta_q \end{pmatrix} \begin{pmatrix} x'_k \\ y'_k \end{pmatrix} + \begin{pmatrix} dx_q \\ dy_q \end{pmatrix}, \qquad q = 1, 2, \ldots, n-1$$

where the image-feature pixel coordinate set corresponding to the feature point set F_q of image P′_q is {(x_j, y_j)}, j = 1…m, and that corresponding to the feature point set F_{q+1} of image P′_{q+1} is {(x′_k, y′_k)}, k = 1…m′, with m and m′ the numbers of feature points contained in F_q and F_{q+1} respectively; θ_q denotes the rotation angle of image P′_{q+1} relative to P′_q, and (dx_q, dy_q) is the pixel-coordinate translation of P′_{q+1} relative to P′_q.
Using the corresponding image-feature pixel coordinate sets of P′_q and P′_{q+1}, the RANSAC estimation algorithm is used to solve the rigid-body transformation model, yielding its parameters θ_q and (dx_q, dy_q).
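The RANSAC stage can be sketched with a two-point minimal solver for the rotation-plus-translation model above. This is a generic illustration (the iteration count, inlier tolerance, and two-point sampling are ordinary choices, not taken from the patent); src holds points of P′_{q+1} and dst their matches in P′_q, per the model's direction:

```python
import numpy as np

def rigid_from_two_pairs(p, q):
    """Closed-form rotation angle and translation from two point pairs,
    assuming q = R(theta) p + t."""
    dp = p[1] - p[0]
    dq = q[1] - q[0]
    theta = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    theta = (theta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = q[0] - R @ p[0]
    return theta, R, t

def ransac_rigid(src, dst, iters=300, tol=1.0, rng=None):
    """Estimate (theta_q, (dx_q, dy_q)) with dst = R(theta) src + t,
    robust to mismatched feature pairs."""
    rng = np.random.default_rng(rng)
    best, best_inliers = (0.0, np.zeros(2)), -1
    n = len(src)
    for _ in range(iters):
        idx = rng.choice(n, 2, replace=False)
        theta, R, t = rigid_from_two_pairs(src[idx], dst[idx])
        residual = dst - (src @ R.T + t)
        inliers = int(np.sum(np.hypot(residual[:, 0], residual[:, 1]) < tol))
        if inliers > best_inliers:
            best_inliers, best = inliers, (theta, t)
    return best  # (theta_q, (dx_q, dy_q))
```

A hypothesis fitted to two correct matches explains all other correct matches, so mismatches are voted out by the inlier count.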
Step 3: determine the driving trajectory curve of the wheeled vehicle.
According to the obtained rigid-body model parameters θ_q and (dx_q, dy_q), the displacement dL_q of the wheeled vehicle between adjacent times t_q and t_{q+1} is computed as

$$dL_q = \frac{D}{M}\sqrt{dx_q^2 + dy_q^2}, \qquad q = 1, 2, \ldots, n-1$$

where M is the longitudinal resolution of the top-down view (identical for every image in the top-down view sequence) and D is the actual longitudinal field-of-view extent corresponding to a longitudinal resolution of M; the values of M and D can be chosen as required. Combined with the heading information θ_q, the position coordinate Loc_{q+1} of the wheeled vehicle in the world coordinate system at time t_{q+1} can be deduced. Loc_1 lies at the coordinate origin, i.e.

$$(X_{Loc_1}, Y_{Loc_1}) = (0, 0)$$

and the abscissae and ordinates of the remaining Loc_{q+1} (q = 1, 2, …, n-1) follow by recursion:

$$X_{Loc_{q+1}} = X_{Loc_q} + dL_q \cos\Bigl(\sum_{l=1}^{q} \theta_l\Bigr), \qquad Y_{Loc_{q+1}} = Y_{Loc_q} + dL_q \sin\Bigl(\sum_{l=1}^{q} \theta_l\Bigr)$$

Connecting the position coordinate points Loc_i (i = 1, 2, …, n) of successive moments in order yields the driving trajectory curve of the wheeled vehicle.
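The recursion of step 3 is plain dead reckoning. A compact sketch (function names are illustrative):

```python
import numpy as np

def step_length(dx, dy, D, M):
    """Pixel displacement -> metres: dL_q = (D / M) * hypot(dx_q, dy_q),
    with D the field extent and M the resolution named in the text."""
    return (D / M) * np.hypot(dx, dy)

def trajectory(thetas, dLs):
    """Accumulate positions Loc_1..Loc_n from per-step rotation angles
    theta_q and displacements dL_q, with Loc_1 at the origin."""
    xs, ys = [0.0], [0.0]
    heading = 0.0
    for theta, dL in zip(thetas, dLs):
        heading += theta  # running sum of theta_l, l = 1..q
        xs.append(xs[-1] + dL * np.cos(heading))
        ys.append(ys[-1] + dL * np.sin(heading))
    return np.array(xs), np.array(ys)
```

Plotting the returned (xs, ys) points in order reproduces the driving trajectory curve.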
The beneficial effects of the present invention are:
1. On the basis of the traditional monocular visual localization method, the real-time attitude information obtained by the attitude sensor assists the localization of the wheeled vehicle, yielding positioning results of higher precision.
2. Applying the inverse perspective projection transformation to the images eliminates the perspective effect; feature extraction and model parameter computation are then carried out on this basis, avoiding the large errors in resolving the image mapping relation caused by traditional methods and improving the positioning precision of the wheeled vehicle.
3. The present invention uses only one camera and one attitude sensor to compute the driving trajectory of the wheeled vehicle, so its structure is relatively simple, installation is easy, and no complicated operations such as calibration are required.
Description of drawings
Fig. 1 is the flow chart of a concrete implementation of the present invention;
Fig. 2 is a schematic diagram of the computation principle of the wheeled-vehicle trajectory curve;
Fig. 3 is a schematic diagram of the working principle of monocular visual localization;
Fig. 4 is the 34th road-surface image taken by the camera in the experiment;
Fig. 5 is the top-down road-surface view of the 34th image of Fig. 4 after the inverse perspective transformation;
Fig. 6 is the driving trajectory curve of the wheeled vehicle obtained in the experiment.
Embodiment
Fig. 1 gives the flow chart of a concrete implementation of the present invention. Step 1 applies the inverse perspective projection transformation to the image sequence; for methods of solving the camera intrinsic and extrinsic matrices, see pages 22-33 of the book "Principles and Applications of Videometrics" (Science Press, by Yu Qifeng and Shang Yang). Step 2 computes the transformation matrix between adjacent images. Its sub-step (1) is feature extraction; the feature detection operator used is SURF. For the properties and usage of such feature operators, see "Distinctive image features from scale-invariant keypoints" (International Journal of Computer Vision, 2004, 60(2), pp. 91-110, David G. Lowe) and "SURF: Speeded Up Robust Features" (Proceedings of the 9th European Conference on Computer Vision, 2006, LNCS 3951, pp. 404-417, Herbert Bay, Tinne Tuytelaars and Luc Van Gool). Sub-step (2) obtains the parameters of the rigid-body transformation model between images; the model is solved with the RANSAC estimation algorithm, which is fast, estimates parameters accurately, and is currently a commonly used estimation algorithm. For a detailed introduction to the RANSAC principle, see "Preemptive RANSAC for Live Structure and Motion Estimation" (Proceedings of the Ninth IEEE International Conference on Computer Vision, ICCV 2003, David Nistér, Sarnoff Corporation, Princeton). Step 3 determines the driving trajectory curve of the wheeled vehicle.
Fig. 2 gives the schematic diagram of the computation of the wheeled-vehicle trajectory curve. The XY axes in the figure represent the world coordinate system; the origin of the coordinate system corresponds to the position coordinate Loc_1 of the wheeled vehicle in the world coordinate system when the camera takes the 1st image, which corresponds to the initial localization time t_1. θ_1 is the rotation angle between the 1st and 2nd images, dL_1 the translation distance between them, and Loc_2 the position coordinate of the wheeled vehicle in the world coordinate system when the camera takes the 2nd image. θ_2 is the rotation angle between the 2nd and 3rd images, dL_2 the translation distance between them, and Loc_3 the position coordinate when the camera takes the 3rd image. θ_{n-1} is the rotation angle between the (n-1)-th and n-th images, where n is the total number of images taken by the camera, dL_{n-1} is the translation distance between them, Loc_{n-1} is the position coordinate when the camera takes the (n-1)-th image, and Loc_n the position coordinate when the camera takes the n-th image. Connecting the position coordinates Loc_1, Loc_2, …, Loc_n in the world coordinate system yields the driving trajectory curve of the wheeled vehicle.
An experiment was carried out with a specific embodiment of the present invention in an outdoor flat area. The camera was mounted in front of the wheeled vehicle; in practical applications it may also be mounted on either side or at the rear of the vehicle, as long as the field of view photographed by the camera contains only the road surface. In this experiment the camera acquired images at a rate of 3 frames per second, 93 images in total, i.e. n = 93, and the attitude sensor (model MTI) and camera (model FLE2-14S3) acquired data synchronously. The working principle of the experiment is shown in Fig. 3. The numeral 1 in Fig. 3 marks the position of the wheeled vehicle at time t_q (q = 1, 2, …, n-1, with n the total number of images taken); the field of view of the camera at that moment is the trapezoidal area 3, and the image taken is P_q. The numeral 2 marks the position of the wheeled vehicle at time t_{q+1}; the corresponding field of view is the trapezoidal area 4, and the image taken is P_{q+1}. The black arrow represents the distance travelled by the wheeled vehicle between times t_q and t_{q+1}, and the overlapping region of the fields of view of images P_q and P_{q+1} is marked 5. The displacement and direction of the wheeled vehicle are obtained precisely from the relative position of the overlapping region within the two fields of view.
In the experiment, the vehicle drove a certain distance along an S-shaped curve. Fig. 4 shows the 34th image P_34 in the sequence taken by the camera, and Fig. 5 shows the top-down road-surface view P′_34 of image P_34 after the inverse perspective transformation. Fig. 6 gives the driving trajectory curve of the wheeled vehicle obtained with the present invention; XY in the figure denotes the world coordinate system, the unit is metres, and each green square point represents the position coordinate Loc_i of the wheeled vehicle at the moment corresponding to one image. Connecting all the points yields the driving trajectory curve of the wheeled vehicle. As can be seen from Fig. 6, the travelled distance obtained by the method of the present invention is 22.76 metres, while the actual travelled distance measured in the field with a tape measure is 22.63 metres, an error of about 6‰.

Claims (1)

1. A monocular visual localization method based on inverse perspective projection transformation, characterized in that an attitude sensor and a camera are fixed together, and the camera is placed so that it can photograph the road surface in any one direction around the vehicle body; the frame rate and resolution of the camera are set as required; the frame rate must guarantee that, while the wheeled vehicle travels at normal speed, the fields of view of any two adjacent images taken by the camera have an overlapping part; suppose the initial time for recording the driving trajectory curve of the wheeled vehicle is t_1 and the image taken by the camera at that moment is P_1; at time t_i (i = 1, 2, …, n, where n is the total number of images), the camera takes the i-th image P_i, and the camera attitude angle obtained by the attitude sensor is a_i; the following steps are carried out:
Step 1: apply the inverse perspective projection transformation to the image sequence;
at time t_i, the camera extrinsic matrix A_i is obtained from the camera attitude angle a_i, and combined with the camera intrinsic matrix it yields the inverse perspective projection transformation matrix B_i of the image; applying B_i to the image P_i taken at time t_i gives the top-down view P′_i of the road surface;
the top-down views P′_i at all times form the top-down view sequence P′_1, P′_2, …, P′_n;
Step 2: compute the transformation matrix between adjacent images;
for any adjacent images P′_q and P′_{q+1} in the top-down view sequence P′_1, P′_2, …, P′_n (q = 1, 2, …, n-1), carry out the following processing:
(1) extract feature points;
apply the SURF feature descriptor to the adjacent images P′_q and P′_{q+1}; after feature extraction, the feature point set of image P′_q is F_q and that of image P′_{q+1} is F_{q+1}; the number of feature points in each set depends on the image resolution, the texture complexity and the SURF parameter settings, and can be determined according to the application;
(2) obtain the parameters of the rigid-body transformation model between images;
for the adjacent images P′_q and P′_{q+1}, a rigid-body transformation model is set up to describe the relation between the two images:

$$\begin{pmatrix} x_j \\ y_j \end{pmatrix} = \begin{pmatrix} \cos\theta_q & -\sin\theta_q \\ \sin\theta_q & \cos\theta_q \end{pmatrix} \begin{pmatrix} x'_k \\ y'_k \end{pmatrix} + \begin{pmatrix} dx_q \\ dy_q \end{pmatrix}, \qquad q = 1, 2, \ldots, n-1$$

where the image-feature pixel coordinate set corresponding to the feature point set F_q of image P′_q is {(x_j, y_j)}, j = 1…m, and that corresponding to the feature point set F_{q+1} of image P′_{q+1} is {(x′_k, y′_k)}, k = 1…m′, with m and m′ the numbers of feature points contained in F_q and F_{q+1} respectively; θ_q denotes the rotation angle of image P′_{q+1} relative to P′_q, and (dx_q, dy_q) is the pixel-coordinate translation of P′_{q+1} relative to P′_q;
using the corresponding image-feature pixel coordinate sets of P′_q and P′_{q+1}, the RANSAC estimation algorithm is used to solve the rigid-body transformation model, yielding its parameters θ_q and (dx_q, dy_q);
Step 3: determine the driving trajectory curve of the wheeled vehicle;
according to the obtained rigid-body model parameters θ_q and (dx_q, dy_q), the displacement dL_q of the wheeled vehicle between adjacent times t_q and t_{q+1} is computed as

$$dL_q = \frac{D}{M}\sqrt{dx_q^2 + dy_q^2}, \qquad q = 1, 2, \ldots, n-1$$

where M is the longitudinal resolution of the top-down view, D is the actual longitudinal field-of-view extent corresponding to a longitudinal resolution of M, and the values of M and D can be chosen as required; the position coordinate of the wheeled vehicle in the world coordinate system is Loc_{q+1}, with Loc_1 at the coordinate origin, i.e. (X_{Loc_1}, Y_{Loc_1}) = (0, 0), and the abscissae and ordinates of the remaining Loc_{q+1} (q = 1, 2, …, n-1) are:

$$X_{Loc_{q+1}} = X_{Loc_q} + dL_q \cos\Bigl(\sum_{l=1}^{q} \theta_l\Bigr), \qquad Y_{Loc_{q+1}} = Y_{Loc_q} + dL_q \sin\Bigl(\sum_{l=1}^{q} \theta_l\Bigr)$$

connecting the position coordinate points Loc_i (i = 1, 2, …, n) of successive moments in order yields the driving trajectory curve of the wheeled vehicle.
CN 201110070941 (filed 2011-03-23): Monocular visual positioning method based on inverse perspective projection transformation — granted as CN102221358B, Expired - Fee Related

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110070941 CN102221358B (en) 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110070941 CN102221358B (en) 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation

Publications (2)

Publication Number Publication Date
CN102221358A true CN102221358A (en) 2011-10-19
CN102221358B CN102221358B (en) 2012-12-12

Family

ID=44777980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110070941 Expired - Fee Related CN102221358B (en) 2011-03-23 2011-03-23 Monocular visual positioning method based on inverse perspective projection transformation

Country Status (1)

Country Link
CN (1) CN102221358B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102829763A (en) * 2012-07-30 2012-12-19 中国人民解放军国防科学技术大学 Pavement image collecting method and system based on monocular vision location
CN103292807A (en) * 2012-03-02 2013-09-11 江阴中科矿业安全科技有限公司 Drill carriage posture measurement method based on monocular vision
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN105976402A (en) * 2016-05-26 2016-09-28 同济大学 Real scale obtaining method of monocular vision odometer
CN106462762A (en) * 2016-09-16 2017-02-22 香港应用科技研究院有限公司 Detection, tracking and positioning of vehicle based on enhanced inverse perspective mapping
CN104180818B (en) * 2014-08-12 2017-08-11 北京理工大学 A kind of monocular vision mileage calculation device
CN108051012A (en) * 2017-12-06 2018-05-18 爱易成技术(天津)有限公司 Mobile object space coordinate setting display methods, apparatus and system
CN108921060A (en) * 2018-06-20 2018-11-30 安徽金赛弗信息技术有限公司 Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method
CN109242907A (en) * 2018-09-29 2019-01-18 武汉光庭信息技术股份有限公司 A kind of vehicle positioning method and device based on according to ground high speed camera
CN110335317A (en) * 2019-07-02 2019-10-15 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and medium based on terminal device positioning
CN110567728A (en) * 2018-09-03 2019-12-13 阿里巴巴集团控股有限公司 Method, device and equipment for identifying shooting intention of user
CN110986890A (en) * 2019-11-26 2020-04-10 北京经纬恒润科技有限公司 Height detection method and device
CN112212873A (en) * 2019-07-09 2021-01-12 北京地平线机器人技术研发有限公司 High-precision map construction method and device
CN112710308A (en) * 2019-10-25 2021-04-27 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN112927306A (en) * 2021-02-24 2021-06-08 深圳市优必选科技股份有限公司 Calibration method and device of shooting device and terminal equipment
CN113167579A (en) * 2018-12-12 2021-07-23 国立大学法人东京大学 Measurement system, measurement method, and measurement program
CN114782549A (en) * 2022-04-22 2022-07-22 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020080235A1 (en) * 2000-12-27 2002-06-27 Yong-Won Jeon Image processing method for preventing lane deviation
JP2006101816A (en) * 2004-10-08 2006-04-20 Univ Of Tokyo Method and apparatus for controlling steering

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020080235A1 (en) * 2000-12-27 2002-06-27 Yong-Won Jeon Image processing method for preventing lane deviation
JP2006101816A (en) * 2004-10-08 2006-04-20 Univ Of Tokyo Method and apparatus for controlling steering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Proc. of SPIE, 2011-05-24, Vol. 8194: Cao Yu et al., "Monocular Visual Odometry based on Inverse Perspective Mapping", pp. 1-7 *
Computer Measurement & Control, 2009-09-30, Vol. 17, No. 9: Gao Dezhi et al., "Intelligent vehicle localization technology based on inverse perspective transformation", pp. 1810-1812 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103292807A (en) * 2012-03-02 2013-09-11 江阴中科矿业安全科技有限公司 Drill carriage posture measurement method based on monocular vision
CN103292807B (en) * 2012-03-02 2016-04-20 江阴中科矿业安全科技有限公司 Drill carriage attitude measurement method based on monocular vision
CN102829763B (en) * 2012-07-30 2014-12-24 中国人民解放军国防科学技术大学 Pavement image collecting method and system based on monocular vision location
CN102829763A (en) * 2012-07-30 2012-12-19 中国人民解放军国防科学技术大学 Pavement image collecting method and system based on monocular vision location
CN104180818B (en) * 2014-08-12 2017-08-11 北京理工大学 A kind of monocular vision mileage calculation device
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN105976402A (en) * 2016-05-26 2016-09-28 同济大学 Real scale obtaining method of monocular vision odometer
CN106462762A (en) * 2016-09-16 2017-02-22 香港应用科技研究院有限公司 Detection, tracking and positioning of vehicle based on enhanced inverse perspective mapping
CN108051012A (en) * 2017-12-06 2018-05-18 爱易成技术(天津)有限公司 Mobile object space coordinate setting display methods, apparatus and system
CN108921060A (en) * 2018-06-20 2018-11-30 安徽金赛弗信息技术有限公司 Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method
CN110567728A (en) * 2018-09-03 2019-12-13 阿里巴巴集团控股有限公司 Method, device and equipment for identifying shooting intention of user
CN110567728B (en) * 2018-09-03 2021-08-20 创新先进技术有限公司 Method, device and equipment for identifying shooting intention of user
CN109242907A (en) * 2018-09-29 2019-01-18 武汉光庭信息技术股份有限公司 A kind of vehicle positioning method and device based on according to ground high speed camera
CN113167579A (en) * 2018-12-12 2021-07-23 国立大学法人东京大学 Measurement system, measurement method, and measurement program
CN110335317A (en) * 2019-07-02 2019-10-15 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and medium based on terminal device positioning
CN112212873A (en) * 2019-07-09 2021-01-12 北京地平线机器人技术研发有限公司 High-precision map construction method and device
CN112710308A (en) * 2019-10-25 2021-04-27 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN112710308B (en) * 2019-10-25 2024-05-31 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN110986890A (en) * 2019-11-26 2020-04-10 北京经纬恒润科技有限公司 Height detection method and device
CN112927306A (en) * 2021-02-24 2021-06-08 深圳市优必选科技股份有限公司 Calibration method and device of shooting device and terminal equipment
CN112927306B (en) * 2021-02-24 2024-01-16 深圳市优必选科技股份有限公司 Calibration method and device of shooting device and terminal equipment
CN114782549A (en) * 2022-04-22 2022-07-22 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification
CN114782549B (en) * 2022-04-22 2023-11-24 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification

Also Published As

Publication number Publication date
CN102221358B (en) 2012-12-12

Similar Documents

Publication Publication Date Title
CN102221358B (en) Monocular visual positioning method based on inverse perspective projection transformation
CN107505644B (en) Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion
CN102646275B (en) The method of virtual three-dimensional superposition is realized by tracking and location algorithm
CN109842756A (en) A kind of method and system of lens distortion correction and feature extraction
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
CN103345630B (en) A kind of traffic signs localization method based on spherical panoramic video
CN107728175A (en) The automatic driving vehicle navigation and positioning accuracy antidote merged based on GNSS and VO
CN103337068B (en) The multiple subarea matching process of spatial relation constraint
CN102692236A (en) Visual milemeter method based on RGB-D camera
Goecke et al. Visual vehicle egomotion estimation using the fourier-mellin transform
CN103605978A (en) Urban illegal building identification system and method based on three-dimensional live-action data
CN102721409B (en) Measuring method of three-dimensional movement track of moving vehicle based on vehicle body control point
CN103411587B (en) Positioning and orientation method and system
CN104021588A (en) System and method for recovering three-dimensional true vehicle model in real time
Jende et al. A fully automatic approach to register mobile mapping and airborne imagery to support the correction of platform trajectories in GNSS-denied urban areas
Dawood et al. Harris, SIFT and SURF features comparison for vehicle localization based on virtual 3D model and camera
CN110715646B (en) Map trimming measurement method and device
Dawood et al. Vehicle geo-localization based on IMM-UKF data fusion using a GPS receiver, a video camera and a 3D city model
CN102483881B (en) Pedestrian-crossing marking detecting method and pedestrian-crossing marking detecting device
CN116907469A (en) Synchronous positioning and mapping method and system for multi-mode data combined optimization
CN102999895B (en) Method for linearly solving intrinsic parameters of camera by aid of two concentric circles
CN103700082A (en) Image splicing method based on dual quaterion relative orientation
CN112446915A (en) Picture-establishing method and device based on image group
Shahbazi et al. Unmanned aerial image dataset: Ready for 3D reconstruction
Wan et al. The P2L method of mismatch detection for push broom high-resolution satellite images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121212

Termination date: 20150323

EXPY Termination of patent right or utility model