CN108955683A - Localization method based on overall Vision - Google Patents

Localization method based on overall Vision

Info

Publication number
CN108955683A
CN108955683A
Authority
CN
China
Prior art keywords
target
camera
posture
vision
calculated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810391663.9A
Other languages
Chinese (zh)
Inventor
罗胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University
Original Assignee
Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University filed Critical Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University
Priority to CN201810391663.9A priority Critical patent/CN108955683A/en
Publication of CN108955683A publication Critical patent/CN108955683A/en
Withdrawn legal-status Critical Current


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The present invention provides a localization method based on overall vision: (1) obtain the exact position of the camera; (2) obtain the attitude of the camera; (3) image the target: the whole system is put into operation and images the target; (4) detect the target in the image; (5) calculate the direction ray; (6) calculate the target position; (7) calculate the target attitude: from the attitude of the target in image coordinates and the attitude of the camera, the attitude of the target can be determined by integrated navigation that fuses the information of vision with IMU, OD and geomagnetism. The beneficial effects of the invention are that, once the position and orientation of the camera and a model of the surrounding geography are known, the position of every target within the field of view can be calculated easily, and that by combining vision with positioning devices such as GPS, IMU, OD and geomagnetism, high-precision navigation and positioning can be obtained.

Description

Localization method based on overall Vision
Technical field
The invention belongs to the field of positioning technology, and in particular relates to a localization method based on overall vision.
Background technique
Positioning is the precondition of navigation and is widely used in fields such as industry, elderly care, medical treatment, exhibitions and automation. However, current positioning technologies have shortcomings in application: GPS is easily blocked, cannot be used indoors, and has low precision in mountainous areas and forests; Wi-Fi has low precision and cannot penetrate walls; Bluetooth has poor stability and suffers heavy interference from noise signals; ZigBee requires densely arranged signal sources; RFID has a short operating distance, generally at most tens of meters, and is not easily integrated into mobile devices. IMU and OD can measure acceleration, velocity and attitude angle at high frequency, but are strongly affected by noise and accumulate error over time.
Yet the surveillance cameras deployed for smart-city projects are densely distributed at key locations. If the position and orientation of a camera, and the geography it faces, are known, the position of every target within its field of view can be calculated easily. If these cameras cooperate with positioning devices such as GPS, IMU, OD and geomagnetism, positioning accuracy can be improved.
Summary of the invention
The object of the present invention is to provide a localization method based on overall vision, which overcomes deficiencies of common localization methods such as inaccurate positioning, vulnerability to interference and high installation cost, improves positioning accuracy, and is suitable for location and navigation in fields such as industry, automation, medical treatment, exhibitions, elderly care and hotels.
The technical scheme of the invention is a localization method based on overall vision, comprising the following steps:
If the exact position (longitude O, latitude A, height H) and attitude (αc, βc, γc) of the camera are known, together with the geometrical model of the site, then after the target is found in the camera image, the azimuth (αO, βO, γO) of the line joining the target and the camera can be determined from the position of the target in the image, and the position and attitude of the target can be calculated from this azimuth and the geometrical model of the site.
(1) Obtain the exact position of the camera: in the open field, use a high-precision differential GPS device; indoors, position the whole building with a high-precision differential GPS device and then calculate the exact position of the camera from the dimensions of the building's internal structure. The origin of the world coordinate system is placed at the camera's focal point; one axis points in the longitude O (east) direction, another in the latitude A (north) direction, and the third in the height H direction;
(2) Obtain the attitude of the camera: calibrate the camera with a calibration template fitted with a level and a compass. The calibration board is placed horizontally with one direction pointing along longitude O (east) and another along latitude A (north), consistent with the world coordinate system at the camera. After calibration, the transform between the camera coordinate system and the world coordinate system is R1|T, and the three attitude angles (αc, βc, γc) of the camera can be determined from the rotation matrix R1,
(3) Image the target: the whole system is put into operation and images the target;
(4) Detect the target in the image: the position of the target in the image can be determined either by target detection or by a preset marker on the target, giving the target size λ, the offset (r, c) relative to the image center, and the attitude θ of the target in image coordinates;
(5) Calculate the direction ray: because the viewpoint is monocular, the height and distance of the target cannot be determined directly; but in concrete applications the target is usually on the ground and is usually of a determined type, such as a person, vehicle or AGV, so its size and height are fixed. After the target is found in the image, from the offset (r, c) of the target relative to the image center, and after correcting for camera distortion, the deflection angle between the target and the camera's optical axis can be determined, and the rotation matrix R2 of the target relative to camera coordinates can be calculated; from these, the angles (αO, βO, γO) of the direction ray in the world coordinate system can be determined;
(6) Calculate the target position: once the target ray is known, the target position can be determined in two ways. (a) If the geometrical model of the site is known: if the ground is not horizontal, translate the geometrical model S of the site upward by the target height; the intersection of this three-dimensional surface with the direction ray is the target position. If the ground is horizontal, the geometrical model of the site is not needed, and the intersection calculation alone determines the target position. (b) From the target size: from the size λ of the target in the image, estimate the distance between the target and the camera, and thereby determine the position coordinates of the target.
(7) Calculate the target attitude: from the attitude θ of the target in image coordinates and the attitude of the camera, the attitude of the target can be determined by integrated navigation that fuses the information of vision with IMU, OD and geomagnetism.
An information-fusion integrated navigation method used within the overall-vision localization method is as follows:
1) On the basis of the system error equations, the position error equation, attitude error equation and inertial-instrument error equation are combined as the observations of the integrated-navigation Kalman filter. The general form of the INS system error state equation can be written as
X(k) = F(k-1) X(k-1) + G(k-1) W(k-1)
where the state variable is X = [δO, δA, δH, φN, φE, φD, εrx, εry, εrz, Δx, Δy, Δz]; δO, δA, δH are the longitude, latitude and altitude errors; φN, φE, φD are the platform error angles; εrx, εry, εrz are the first-order Markov drifts of the gyroscopes; Δx, Δy, Δz are the first-order Markov drifts of the accelerometers; F is the state transition matrix, G is the noise transition matrix, and W is the system noise.
2) The difference between the vision measurement and the fused IMU/OD/geomagnetic value is used as the measurement; the observation equation is
Z(k) = H(k) X(k) + V(k)
where Z = [δO, δA, δH, φN, φE, φD]^T, H is the observation matrix, and V is the measurement noise.
3) Once the state equation and observation equation of the system have been established, Kalman filtering can be carried out. The state-noise matrix Q is chosen from the parameters of the fused IMU/OD/geomagnetic information, and the observation-noise matrix R is chosen from the performance of the vision measurement.
4) The Kalman filter estimates the errors of the inertial navigation system, which is then corrected accordingly.
The advantages and positive effects of the present invention are: owing to the above technical scheme, once the position and orientation of the camera and a model of the surrounding geography are known, the position of every target within the field of view can be calculated easily; and by combining vision with positioning devices such as GPS, IMU, OD and geomagnetism, high-precision navigation and positioning can be obtained.
Detailed description of the invention
Fig. 1 is the system layout.
Fig. 2 is the vision positioning process flow of the invention.
Fig. 3 is the vision positioning principle of the invention.
Fig. 4 shows the camera position, attitude and imaging-plane coordinates.
Fig. 5 shows the direction ray from the camera.
Fig. 6 shows the calculation of the target position from the camera and the direction ray.
Fig. 7 is the process flow of integrated navigation fusing vision with IMU, OD and geomagnetic information.
Fig. 8 shows the Kalman filter correcting the INS.
Fig. 9 is an application schematic of the indoor positioning technology based on overall vision of embodiment 1.
Fig. 10 is an application schematic of the sweeping robot based on overall vision of embodiment 2.
In figure:
1. camera A; 2. upright bar A; 3. field of view of camera A;
4. camera B; 5. upright bar B; 6. field of view of camera B;
7. target; 8. camera C; 9. field of view of camera C;
10. upright bar C.
Specific embodiment
As shown in Figure 1, camera A 1, camera B 4 and camera C 8 are arranged along the road, mounted respectively on upright bar A 2, upright bar B 5 and upright bar C 10 beside the road; their fields of view are field of view 3 of camera A, field of view 6 of camera B and field of view 9 of camera C. The cameras' fields of view do not cover the entire road. The target trolley 7 travels on the road and may lie within the field of view of 0, 1 or 2 cameras. When the trolley is within the field of view of no camera, it navigates by IMU, OD and geomagnetism; when it is within the field of view of 1 or 2 cameras, it navigates by the fused information of vision and IMU, OD and geomagnetism.
A localization method based on overall vision comprises the following steps:
As shown in Figures 2 and 3, if the exact position (longitude O, latitude A, height H) and attitude (αc, βc, γc) of the camera are known, together with the geometrical model of the site, then after the target is found in the camera image, the azimuth (αO, βO, γO) of the line joining the target and the camera can be determined from the position of the target in the image, and the position and attitude of the target can be calculated from this azimuth and the geometrical model of the site. The specific steps are as follows:
(1) Obtain the exact position of the camera: in the open field, use a high-precision differential GPS device; indoors, position the whole building with a high-precision differential GPS device and then calculate the exact position of the camera from the dimensions of the building's internal structure. The origin of the world coordinate system is placed at the camera's focal point; one axis points in the longitude O (east) direction, another in the latitude A (north) direction, and the third in the height H direction;
(2) Obtain the attitude of the camera: calibrate the camera with a calibration template fitted with a level and a compass. The calibration board is placed horizontally with one direction pointing along longitude O (east) and another along latitude A (north), consistent with the world coordinate system at the camera. After calibration, the transform between the camera coordinate system and the world coordinate system is R1|T, and the three attitude angles (αc, βc, γc) of the camera can be determined from the rotation matrix R1, as shown in Figure 4;
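The decomposition of R1 into the three attitude angles in step (2) can be sketched as follows. This is a minimal illustration assuming a Z-Y-X (yaw-pitch-roll) Euler convention; the patent does not state which convention it uses, so the function and variable names here are illustrative:

```python
import numpy as np

def attitude_from_rotation(R1):
    """Extract three attitude angles (alpha_c, beta_c, gamma_c) from a
    rotation matrix R1, assuming a Z-Y-X (yaw-pitch-roll) Euler
    convention. Illustrative only; the patent does not fix the convention."""
    beta = np.arcsin(-R1[2, 0])               # pitch
    alpha = np.arctan2(R1[1, 0], R1[0, 0])    # yaw
    gamma = np.arctan2(R1[2, 1], R1[2, 2])    # roll
    return alpha, beta, gamma

# Sanity check: a pure 30-degree yaw rotation should be recovered exactly.
a = np.deg2rad(30.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
alpha, beta, gamma = attitude_from_rotation(Rz)
```

In practice the gimbal-lock case (pitch near ±90°) would need separate handling, which is omitted here for brevity.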
(3) Image the target: the whole system is put into operation and images the target;
(4) Detect the target in the image: the position of the target in the image can be determined either by target detection or by a preset marker on the target, giving the target size λ, the offset (r, c) relative to the image center, and the attitude θ of the target in image coordinates;
(5) Calculate the direction ray: because the viewpoint is monocular, the height and distance of the target cannot be determined directly; but in concrete applications the target is usually on the ground and is usually of a determined type, such as a person, vehicle or AGV, so its size and height are fixed. After the target is found in the image, from the offset (r, c) of the target relative to the image center, and after correcting for camera distortion, the deflection angle between the target and the camera's optical axis can be determined, as shown in Figure 5, and the rotation matrix R2 of the target relative to camera coordinates can be calculated; from these, the angles (αO, βO, γO) of the direction ray in the world coordinate system can be determined;
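A minimal sketch of step (5) under a pinhole-camera assumption: the pixel offset (r, c) from the image center and a focal length expressed in pixels define a ray in camera coordinates, which the camera's rotation then carries into the world frame. The function name and the focal-length parameter are illustrative, not from the patent, and (r, c) is assumed already corrected for lens distortion:

```python
import numpy as np

def direction_ray(r, c, f_pixels, R1):
    """Build the unit direction ray through an image point (pinhole model).
    (r, c): target offset from the image center in pixels (row, column);
    f_pixels: focal length in pixels; R1: camera-to-world rotation.
    Illustrative sketch, not the patent's exact formulation."""
    # Ray in camera coordinates: optical axis is +Z, x right, y down.
    ray_cam = np.array([c, r, f_pixels], dtype=float)
    ray_cam /= np.linalg.norm(ray_cam)
    # Rotate into the world frame; world-space angles follow from this vector.
    return R1 @ ray_cam

# An on-axis image point maps to the optical axis itself.
ray = direction_ray(0.0, 0.0, 1000.0, np.eye(3))
```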
(6) Calculate the target position: once the target ray is known, the target position can be determined in two ways. (a) If the geometrical model of the site is known: if the ground is not horizontal, translate the geometrical model S of the site upward by the target height; the intersection of this three-dimensional surface with the direction ray is the target position. If the ground is horizontal, the geometrical model of the site is not needed, and the intersection calculation alone determines the target position. (b) From the target size: from the size λ of the target in the image, estimate the distance between the target and the camera, and thereby determine the position coordinates of the target, as shown in Figure 6;
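For a horizontal ground, way (a) of step (6) reduces to intersecting the direction ray from the camera position with a horizontal plane at the target height. The sketch below uses illustrative names and assumes world coordinates in meters with the z-axis vertical:

```python
import numpy as np

def target_position_on_plane(cam_pos, ray_world, target_height=0.0):
    """Intersect the direction ray from the camera with the horizontal
    plane z = target_height (way (a) of step (6), horizontal ground).
    cam_pos: camera position in world coordinates; ray_world: unit ray
    from the camera toward the target. Illustrative sketch."""
    dz = ray_world[2]
    if abs(dz) < 1e-12:
        raise ValueError("ray is parallel to the ground plane")
    t = (target_height - cam_pos[2]) / dz   # ray parameter at the plane
    if t <= 0:
        raise ValueError("plane is behind the camera along this ray")
    return cam_pos + t * ray_world

# Camera 5 m up, ray pointing 45 degrees downward along +x.
p = target_position_on_plane(np.array([0.0, 0.0, 5.0]),
                             np.array([1.0, 0.0, -1.0]) / np.sqrt(2))
```

For non-horizontal ground, the same idea applies with the site's geometrical model S (translated up by the target height) in place of the plane, which generally requires a numerical ray-surface intersection.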
(7) Calculate the target attitude: from the attitude θ of the target in image coordinates and the attitude of the camera, the attitude of the target can be determined by integrated navigation that fuses the information of vision with IMU, OD and geomagnetism.
As shown in Fig. 7, an information-fusion integrated navigation method used within the overall-vision localization method is as follows:
Vision, IMU, OD and geomagnetism are among the most common sensors on an AGV trolley and can each determine the position and attitude of the target. But every one of these sensors has defects, so information fusion is used: the information of several sensors is combined to obtain a comparatively accurate position and attitude. At present there are methods for GPS/IMU/OD/geomagnetic integrated navigation, but not yet a method for vision/IMU/OD/geomagnetic integrated navigation.
The fusion of IMU, OD and geomagnetic information has ready-made methods and is not described in detail here.
1) On the basis of the system error equations, the position error equation, attitude error equation and inertial-instrument error equation are combined as the observations of the integrated-navigation Kalman filter. The general form of the INS system error state equation can be written as
X(k) = F(k-1) X(k-1) + G(k-1) W(k-1)
where the state variable is X = [δO, δA, δH, φN, φE, φD, εrx, εry, εrz, Δx, Δy, Δz]; δO, δA, δH are the longitude, latitude and altitude errors; φN, φE, φD are the platform error angles; εrx, εry, εrz are the first-order Markov drifts of the gyroscopes; Δx, Δy, Δz are the first-order Markov drifts of the accelerometers; F is the state transition matrix, G is the noise transition matrix, and W is the system noise.
2) The difference between the vision measurement and the fused IMU/OD/geomagnetic value is used as the measurement; the observation equation is
Z(k) = H(k) X(k) + V(k)
where Z = [δO, δA, δH, φN, φE, φD]^T, H is the observation matrix, and V is the measurement noise.
3) Once the state equation and observation equation of the system have been established, Kalman filtering can be carried out. The state-noise matrix Q is chosen from the parameters of the fused IMU/OD/geomagnetic information, and the observation-noise matrix R is chosen from the performance of the vision measurement, as shown in Figure 8;
4) The Kalman filter estimates the errors of the inertial navigation system, which is then corrected accordingly.
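Steps 1) through 4) can be sketched as one predict/update cycle of a standard Kalman filter over the 12-dimensional error state and 6-dimensional measurement defined above. The matrices below (identity dynamics, hand-picked Q and R) are illustrative placeholders, not values from the patent:

```python
import numpy as np

def kalman_step(x, P, F, G, Q, z, H, R):
    """One predict/update cycle of the error-state Kalman filter.
    x: 12-dim error state [dO, dA, dH, phiN, phiE, phiD,
       eps_rx, eps_ry, eps_rz, dx, dy, dz];
    z: 6-dim measurement (vision minus IMU/OD/geomagnetic fusion).
    Shapes follow the patent's equations; values are illustrative."""
    # Predict: X(k) = F X(k-1), P(k) = F P F' + G Q G'
    x_pred = F @ x
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # Update with the vision-vs-fusion difference, Z = H X + V
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

n, m = 12, 6
F = np.eye(n); G = np.eye(n); Q = 1e-4 * np.eye(n)      # placeholder dynamics/noise
H = np.hstack([np.eye(m), np.zeros((m, n - m))])        # Z observes the first 6 states
R = 1e-2 * np.eye(m)
x, P = np.zeros(n), np.eye(n)
x, P = kalman_step(x, P, F, G, Q, np.ones(m), H, R)
```

With an uninformative prior and small R, the update pulls the observed error components close to the measurement; the estimated errors would then be fed back to correct the INS as in step 4).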
Embodiment 1: indoor positioning technology based on overall vision
The overall-vision localization method of the invention is applied to indoor positioning. As shown in Figure 9, indoor positioning has important value, but the current technical level has become a bottleneck hindering its application. With overall vision, the target issues a visual positioning request signal, and the indoor positioning system provides an accurate location information service to the target, solving the current indoor positioning problem.
Overall vision: an overhead view, i.e. a camera that can see a large area.
Visual positioning request signal: a visual signal that the camera can detect, such as a flashing light. Its functions are: (1) to tell the camera to detect the position of the target; (2) to tell the camera who the target is; (3) to synchronize the clocks of the camera and the target.
Steps:
(1) the target issues a visual positioning request signal;
(2) the position and attitude of the target are detected;
(3) the target is identified;
(4) the camera and the target establish a radio communication link;
(5) the camera notifies the target of its position and attitude over the radio communication link.
Embodiment 2: sweeping robot based on overall vision
The overall-vision localization method of the invention is applied to a sweeping robot. As shown in Figure 10, without knowledge of the whole environment, a sweeping robot cannot establish an optimized cruising strategy; more importantly, without feedback on the sweeping effect, it cannot know which places need sweeping and which do not. Even a sweeping robot capable of modeling its environment cannot establish an accurate model of the whole environment, especially one that changes dynamically.
Overall vision refers to an overhead view, i.e. a camera that can see a large area. This camera has three functions: (1) establishing an accurate model of the whole environment to help the sweeping robot cruise; (2) detecting where it is dirty and where sweeping is needed, and assigning cleaning tasks to the sweeping robot; (3) detecting the cleaning effect of the sweeping robot and adjusting its cleaning parameters to improve the cleaning effect. However, the global camera can only see from above and cannot see occluded places.
Therefore, a sweeping robot based on overall vision can build a complete model of the whole environment, and the robot's laser sensor can also build a local model of the travel plane, especially of places occluded from the global camera. More importantly, through overall vision the camera can notify the sweeping robot by wireless communication where to sweep and where sweeping is not needed, assign cleaning tasks to it, detect its cleaning effect, and adjust its cleaning parameters to improve the cleaning effect.
A preferred embodiment of the present invention has been described in detail above, but this content is only a preferred embodiment and should not be considered as limiting the scope of the invention. All equivalent changes and improvements made within the scope of the present application should still fall within the patent scope of the present invention.

Claims (2)

1. A vision-based localization method, characterized by comprising:
(1) obtaining the exact position of the camera;
(2) obtaining the attitude of the camera;
(3) imaging the target: the whole system is put into operation and images the target;
(4) detecting the target in the image;
(5) calculating the direction ray;
(6) calculating the target position;
(7) calculating the target attitude: from the attitude of the target in image coordinates and the attitude of the camera, the attitude of the target can be determined by integrated navigation that fuses the information of vision with IMU, OD and geomagnetism;
In step 1), in the open field a high-precision differential GPS device is used; indoors, the whole building is positioned with a high-precision differential GPS device, and the exact position of the camera is calculated from the dimensions of the building's internal structure; the origin of the world coordinate system is placed at the camera's focal point, with one axis pointing in the longitude O (east) direction, another in the latitude A (north) direction, and the third in the height H direction;
In step 2), the camera is calibrated with a calibration template fitted with a level and a compass; the calibration board is placed horizontally with one direction pointing along longitude O (east) and another along latitude A (north), consistent with the world coordinate system at the camera; after calibration, the transform between the camera coordinate system and the world coordinate system is R1|T, and the three attitude angles (αc, βc, γc) of the camera can be determined from the rotation matrix R1;
In step 4), the position of the target in the image can be determined either by target detection or by a preset marker on the target, giving the target size λ, the offset (r, c) relative to the image center, and the attitude θ of the target in image coordinates.
2. The vision-based localization method according to claim 1, characterized in that: in step 5), because the viewpoint is monocular, the height and distance of the target cannot be determined directly; but in concrete applications the target is usually on the ground and of a determined type, so its size and height are fixed; after the target is found in the image, from the offset (r, c) of the target relative to the image center, and after correcting for camera distortion, the deflection angle between the target and the camera's optical axis can be determined, and the rotation matrix R2 of the target relative to camera coordinates can be calculated, from which the angles (αO, βO, γO) of the direction ray in the world coordinate system can be determined;
In step 6), once the direction ray is known, the target position can be determined in two ways: (a) if the geometrical model of the site is known: if the ground is not horizontal, translate the geometrical model S of the site upward by the target height; the intersection of this three-dimensional surface with the direction ray is the target position; if the ground is horizontal, the geometrical model of the site is not needed, and the intersection calculation alone determines the target position; (b) from the target size: from the size λ of the target in the image, estimate the distance between the target and the camera, and thereby determine the position coordinates of the target.
CN201810391663.9A 2018-04-28 2018-04-28 Localization method based on overall Vision Withdrawn CN108955683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810391663.9A CN108955683A (en) 2018-04-28 2018-04-28 Localization method based on overall Vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810391663.9A CN108955683A (en) 2018-04-28 2018-04-28 Localization method based on overall Vision

Publications (1)

Publication Number Publication Date
CN108955683A true CN108955683A (en) 2018-12-07

Family

ID=64499644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810391663.9A Withdrawn CN108955683A (en) 2018-04-28 2018-04-28 Localization method based on overall Vision

Country Status (1)

Country Link
CN (1) CN108955683A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110980084A (en) * 2019-12-13 2020-04-10 灵动科技(北京)有限公司 Warehousing system and related method
CN111105455A (en) * 2019-12-13 2020-05-05 灵动科技(北京)有限公司 Warehousing system and related method
WO2021115189A1 (en) * 2019-12-13 2021-06-17 灵动科技(北京)有限公司 Warehouse system and related method
WO2021115185A1 (en) * 2019-12-13 2021-06-17 灵动科技(北京)有限公司 Warehousing system and related method
CN111105455B (en) * 2019-12-13 2024-04-16 灵动科技(北京)有限公司 Warehouse system and related method
CN111487889A (en) * 2020-05-08 2020-08-04 北京金山云网络技术有限公司 Method, device and equipment for controlling intelligent equipment, control system and storage medium
CN116108873A (en) * 2022-12-12 2023-05-12 天津大学 Motion posture assessment system based on RFID/IMU fusion
CN116108873B (en) * 2022-12-12 2024-04-19 天津大学 Motion posture assessment system based on RFID/IMU fusion

Similar Documents

Publication Publication Date Title
CN108759834A (en) A kind of localization method based on overall Vision
CN108759815A (en) A kind of information in overall Vision localization method merges Combinated navigation method
Atia et al. Integrated indoor navigation system for ground vehicles with automatic 3-D alignment and position initialization
CN110501024A (en) A kind of error in measurement compensation method of vehicle-mounted INS/ laser radar integrated navigation system
CN108955683A (en) Localization method based on overall Vision
Brenner Extraction of features from mobile laser scanning data for future driver assistance systems
CN106197406B (en) A kind of fusion method based on inertial navigation and RSSI wireless location
US10704902B2 (en) Surveying pole
KR20180101717A (en) Vehicle component control using maps
KR20110043538A (en) Method and systems for the building up of a roadmap and for the determination of the position of a vehicle
CN109186597B (en) Positioning method of indoor wheeled robot based on double MEMS-IMU
US20140118536A1 (en) Visual positioning system
WO2014134710A1 (en) Method and apparatus for fast magnetometer calibration
CN106705962B (en) A kind of method and system obtaining navigation data
CN103175524A (en) Visual-sense-based aircraft position and attitude determination method under mark-free environment
CN111025366B (en) Grid SLAM navigation system and method based on INS and GNSS
CN110617795B (en) Method for realizing outdoor elevation measurement by using sensor of intelligent terminal
CN110631579A (en) Combined positioning method for agricultural machine navigation
CN110763238A (en) High-precision indoor three-dimensional positioning method based on UWB (ultra wide band), optical flow and inertial navigation
CN110388939A (en) One kind being based on the matched vehicle-mounted inertial navigation position error modification method of Aerial Images
CN111221020A (en) Indoor and outdoor positioning method, device and system
CN109883416A (en) A kind of localization method and device of the positioning of combination visible light communication and inertial navigation positioning
JP5355443B2 (en) Position correction system
CN109932707A (en) Take the traverse measurement system calibrating method of radar arrangement into account
CN102128618B (en) Active dynamic positioning method

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20181207