CN106556395A  Navigation method for a monocular vision system based on quaternions  Google Patents
Navigation method for a monocular vision system based on quaternions
 Publication number
 CN106556395A CN106556395A CN201611010993.6A CN201611010993A CN106556395A CN 106556395 A CN106556395 A CN 106556395A CN 201611010993 A CN201611010993 A CN 201611010993A CN 106556395 A CN106556395 A CN 106556395A
 Authority
 CN
 China
 Prior art keywords
 directions
 degree
 point
 turn
 quaternion
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Pending
Classifications

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
 G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
Abstract
The present invention discloses a navigation method for a monocular vision system based on quaternions, comprising: Step 1, constructing an environment map; Step 2, saving the map data and track-point coordinate information; Step 3, loading the map and track-point coordinates and computing the turn angle β from the yaw angle θ; Step 4, sending the angle β obtained in Step 3 and the turn direction as a command to the vehicle's bottom-level controller. Using this technical scheme, quaternion-based navigation with a monocular vision system is realized on the ROS platform under Linux, reducing research cost.
Description
Technical field
The invention belongs to the field of autonomous navigation for low-speed unmanned patrol vehicles, and relates to a method for indoor and outdoor positioning and navigation using only a monocular camera.
Background technology
Research and development of unmanned intelligent vehicles in China is in full swing, attracting wide attention from researchers at numerous universities and research institutes as well as from automobile manufacturers. Low-speed patrol vehicles work in environments such as zoos, where vegetation grows densely; along parts of the patrol route the GPS signal is weak, so the vehicle cannot travel by GPS navigation alone. It is therefore important to build maps and navigate from external information gathered by on-board sensors. In recent years the SLAM problem has drawn sustained attention in the intelligent-vehicle field: using only the robot's own sensors, it incrementally builds a map of the environment and then localizes the vehicle within the built map, without any external reference system (such as GPS) or additional sensors. It thus has potential economic value and broad application prospects.
In research on unmanned intelligent vehicles, map construction and localization are key technologies for driver-assistance systems and for the unmanned-vehicle field. The various SLAM algorithms have by now matured considerably; they mainly use one or more sensors to let the robot build a map and localize autonomously. The solution methods divide into approaches based on Kalman filtering, on particle filtering, and on graph optimization. Graph optimization, owing to its superior mapping performance in large-scale environments, has become the main research direction at home and abroad. However, current work in this area is largely limited to localization and mapping; for navigation it generally combines information from additional sensors such as ultrasonic sensors, laser rangefinders, radar, or stereo vision, and the use of multiple sensors increases the financial cost.
The content of the invention
In view of the above problems with the prior art, the present invention proposes a navigation method for a monocular vision system based on quaternions, running on the ROS platform under Linux. First, keyframe-based map construction is performed, the key point being to save the trajectory coordinates of the camera center during mapping. Second, the track-point coordinates are screened and densified according to fixed rules. Then the map and track-point data are reloaded. Finally, the navigation algorithm of the invention computes the travel angle and route from the current frame's coordinates to the next camera-center position, completing the navigation function.
To achieve the above object, the present invention adopts the following technical scheme:
A navigation method for a monocular vision system based on quaternions, comprising the following steps:
Step 1, construct the environment map, which comprises:
1) calibrate the camera;
2) fix the camera directly in front of the vehicle;
3) convert the input image from the RGB color space to grayscale;
4) extract and match features in the image;
5) initialize the map;
6) perform closed-loop detection and relocalization;
7) obtain the current camera pose and convert its quaternion into Euler angles ψ, θ, φ in the range -180° to 180°, where ψ, θ and φ are the rotation angles about the Z-, Y- and X-axes respectively, and the yaw angle θ is the heading of the current frame.
Step 2, save the map data and track-point coordinate information.
Step 3, load the map and track-point coordinates and compute the turn angle β from the yaw angle θ.
Step 4, send the angle β obtained in Step 3 and the turn direction as a command to the vehicle's bottom-level controller.
Preferably, ψ, θ and φ range over -90° to 90°, with the corresponding solution formulas:
When θ ranges over -180° to +180°, the corresponding partial solution formula is:
Preferably, the process by which Step 3 computes the turn angle β is:
Let (dx, dy) and (gx, gy) be the coordinates of the current point and of the point to be reached, and let θ be the heading of the current frame. Let α be the acute angle (0° to 90°) between the line from the current point to the target point and the x-axis. β is the turn angle; the bottom-level vehicle controller maps it from left to right onto 0° to 180°. γ is an intermediate decision angle, taking different values for different relative positions of the target point and the current point. The relative position of the current location and the target location falls into the following four cases:
When the direction of motion is E to F, i.e. when (dx > gx && dy < gy):
if 0 <= θ <= 90 (the L1 direction), turn left with β = |θ| + 90 - α; if -180 <= θ < -90 (the L4 direction), turn right with β = |θ| - 90 + α; if -90 <= θ < 0 (the L2 and L3 directions), evaluate γ = 90 - α - |θ|: if γ > 0 (the L2 direction), turn left with β = γ; otherwise (the L3 direction), turn right with β = -γ.
When the direction of motion is F' to E', i.e. when (dx < gx && dy > gy):
if 0 <= θ <= 90 (the L1 direction), turn right with β = 90 - |θ| + α; if -180 <= θ <= -90 (the L4 direction), turn left with β = 180 - |θ| + 90 - α; if 90 < θ <= 180 (the L2 and L3 directions), evaluate γ = 90 - α - (180 - |θ|): if γ > 0 (the L3 direction), turn left with β = γ; otherwise (the L2 direction), turn right with β = -γ.
When the direction of motion is P to Q, i.e. when (dx < gx && dy < gy):
if -90 <= θ <= 0 (the L1 direction), turn right with β = 90 + |θ| - α; if 90 <= θ <= 180 (the L4 direction), turn left with β = |θ| - 90 + α; if 0 < θ < 90 (the L2 and L3 directions), evaluate γ = 90 - α - |θ|: if γ > 0 (the L2 direction), turn right with β = γ; otherwise (the L3 direction), turn left with β = -γ.
When the direction of motion is Q' to P', i.e. when (dx > gx && dy > gy):
if -90 <= θ <= 0 (the L1 direction), turn left with β = 90 - |θ| + α; if 90 <= θ <= 180 (the L4 direction), turn right with β = 180 - |θ| + 90 - α; if -180 <= θ < -90 (the L2 and L3 directions), evaluate γ = 90 + α - |θ|: if γ > 0 (the L2 direction), turn left with β = γ; otherwise (the L3 direction), turn right with β = -γ.
By adopting the above technical scheme, the present invention has the following advantages: 1. The invention uses a monocular visual SLAM system to compute the camera-center pose coordinates during mapping and uses them as driving-trajectory points. 2. The invention proposes a method that, from the monocular track-point coordinates after mapping and the relocalized camera-center pose, computes the vehicle's navigation angle through quaternion and Euler-angle conversion. The method uses no external sensors of any kind, completing navigation with the camera alone; it reduces research cost and has potential application value.
Description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the checkerboard pattern used by Zhang Zhengyou's calibration method;
Fig. 3 is a schematic diagram of FAST corner detection;
Fig. 4 is the flow chart of closed-loop detection based on the bag-of-words model;
Fig. 5 is the flow chart of the yaw-angle computation;
Fig. 6 is the line chart of the Euler angle corresponding to each frame;
Fig. 7 is a schematic diagram of the turn-angle and direction computation;
Fig. 8 is the navigation-trajectory result on the non-densified map;
Fig. 9 is the trajectory-deviation analysis for Fig. 8;
Fig. 10 is the navigation-trajectory result on the densified map;
Fig. 11 is the trajectory-deviation analysis for Fig. 10;
Fig. 12 is the comparison of GPS tracks recorded during visual mapping and during navigation;
Table 1 gives the Euler-angle conversion results for different quaternions;
Table 2 is the turn-angle decision table.
Specific embodiment
The invention is further described below with reference to the accompanying drawings and embodiments.
The flow chart of the method of the invention is shown in Fig. 1; it comprises the following steps:
Step 1, map structuring
Step 1.1, camera calibration
Camera calibration uses Zhang Zhengyou's method. As shown in Fig. 2, the pattern is a 7×7 grid of alternating black and white squares with a side length of 0.02 m. Pointing the camera at the hand-held planar 7×7 chessboard, more than 30 checkerboard images are captured at various orientations; from these the intrinsic parameters and distortion parameters of the camera are computed.
Ultimate principle is as follows：
where k is the camera's intrinsic matrix, [X Y 1]^{T} is the homogeneous coordinate of a point on the planar chessboard, [u v 1]^{T} is the homogeneous coordinate of that point's projection on the image plane, [r_{1} r_{2} r_{3}] and t are respectively the rotation and the translation vector in the camera coordinate system, and k[r_{1} r_{2} t] is the homography matrix H. By the matrix solution method, k has a unique solution once the number of images is at least 3.
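In code, the plane-to-image mapping via the homography H = k[r1 r2 t] can be sketched like this (an illustrative sketch, not the patent's implementation; the function names and example values are assumptions, and plain lists stand in for matrices):

```python
def project(k, r1, r2, t, X, Y):
    """Project chessboard-plane point (X, Y) to pixel coordinates
    through the homography H = k @ [r1 r2 t]."""
    # build H: column m of H is k applied to (r1, r2, t)[m]
    H = [[sum(k[i][m] * col[m] for m in range(3)) for col in (r1, r2, t)]
         for i in range(3)]
    # homogeneous image point k[r1 r2 t] @ [X, Y, 1]^T
    u, v, s = (H[i][0] * X + H[i][1] * Y + H[i][2] for i in range(3))
    return (u / s, v / s)  # divide out the homogeneous scale

# toy example: focal length 100 px, principal point at the origin,
# chessboard parallel to the image plane at depth 2
k = [[100, 0, 0], [0, 100, 0], [0, 0, 1]]
print(project(k, [1, 0, 0], [0, 1, 0], [0, 0, 2], 1, 1))  # (50.0, 50.0)
```

With at least three such views, each contributing one homography, the intrinsic matrix k can be recovered uniquely, which is the core of Zhang's method.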
Step 1.2: fix the camera directly in front of the vehicle, ensuring that its position and orientation stay the same during map construction and during navigation.
Step 1.3: convert the input image from the RGB color space to grayscale, with the conversion formula: Gray = R*0.299 + G*0.587 + B*0.114;
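As a minimal sketch of this step (the function name is illustrative):

```python
def rgb_to_gray(r, g, b):
    """Luminance conversion of step 1.3: Gray = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(rgb_to_gray(255, 0, 0)))  # pure red maps to gray level 76
```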
Step 1.4: extract and match features in the image.
Feature extraction and matching uses the ORB (Oriented FAST and Rotated BRIEF) method. Feature points are first detected with the FAST-12 algorithm, as shown in Fig. 3:
I(x) is the gray value of a point on the circle around the candidate, I(p) is the gray value of the center, and ε_{d} is the threshold on the gray-value difference; if the difference exceeds the threshold, p is considered a feature point.
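A minimal sketch of such a segment test (an illustrative reading of FAST-12, not the patent's code; the function name and the contiguity handling are assumptions):

```python
def is_fast_corner(center, circle, eps, n=12):
    """Simplified FAST-n test: p is a corner if at least n contiguous
    pixels on the 16-pixel circle are all brighter than I(p)+eps or
    all darker than I(p)-eps.  `circle` holds the 16 gray values."""
    for sign in (1, -1):            # check brighter, then darker, arcs
        run, best = 0, 0
        for gray in circle * 2:     # doubled list handles wrap-around runs
            if sign * (gray - center) > eps:
                run += 1
                best = max(best, run)
            else:
                run = 0
        if best >= n:
            return True
    return False

print(is_fast_corner(50, [200] * 16, 20))  # uniformly bright ring: True
```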
Then the descriptor of each feature point is computed with the BRIEF algorithm: after smoothing the image, a patch is selected around the feature point, and 256 point pairs are picked inside this patch by a chosen sampling method. For each point pair (p, q), the brightness values are compared: if I(p) > I(q), the corresponding bit in the binary string is 1, otherwise it is 0. Comparing all 256 pairs yields a binary string of length 256, and this binary string is used as the descriptor.
Step 1.5, map initialization
Monocular SLAM map initialization proceeds as follows: the camera is moved a first short distance to obtain two keyframes, and from the feature matches between the keyframes a Homography model or a Fundamental model is computed to obtain the initial map. The Homography model is computed with the normalized DLT method, and the Fundamental matrix with the normalized 8-point method. In the visual-odometry part, features are first extracted and matched between the two image frames P_{K-2} and P_{K-1}; the matched features of P_{K-2} and P_{K-1} are then triangulated, and features of the new image P_{K} are extracted and matched against P_{K-1}; finally the camera pose is estimated from the 3D-to-2D matches with a PnP algorithm and refined by Bundle Adjustment; map optimization can be done with the graph-optimization tool g2o. The model of the PnP (Perspective-n-Point) problem is:
where R is the camera's pose, C is the camera's calibration matrix, (u, v) is the two-dimensional pixel coordinate, and (x, y, z) is the three-dimensional coordinate in the world coordinate system.
Step 1.6, closed-loop detection and relocalization
The current frame is converted into an image bag-of-words vector and the image database is then queried. The image bag-of-words database uses the classical DBoW2 model; global relocalization searches for keyframes by computing similarity between the monocular camera's input image and scenes in the bag-of-words database, as shown in Fig. 4. The keyframe-insertion criterion is: when the feature point cloud detected in a new frame overlaps the reference keyframe's feature point cloud by less than 90%, the current frame is declared a keyframe and saved into the map data. We compute the correspondence between ORB features and the map cloud points of each keyframe, then run RANSAC iterations on each keyframe and use the PnP algorithm to find the camera's position in the world coordinate system, i.e. the camera pose represented as a quaternion. A quaternion is the combination of a scalar (w) and a 3D vector (x, y, z), defined by:
q=[w x y z]^{T}
|q|^{2}=w^{2}+x^{2}+y^{2}+z^{2}=1
Step 1.7, obtains Current camera pose
The pose of the current camera computed by PnP + RANSAC is obtained, i.e. the quaternion representing the three-dimensional rotation. The quaternion is converted into Euler angles ψ, θ, φ, the rotation angles about the Z-, Y- and X-axes respectively; the yaw angle θ is the heading of the current frame. Conventionally ψ, θ and φ are defined over -90° to 90°, with the corresponding solution formulas:
The Euler-angles-to-quaternion formula is:
A yaw range of -90° to 90° cannot uniquely represent the current heading, so the yaw range must be extended to -180° to 180°. The partial solution formula for θ over -180° to +180° is as follows. The complete algorithm for the yaw angle alone, combining formulas 1, 2 and 3, is flow-charted in Fig. 5; Table 1 gives Euler-angle conversion results for different quaternions, and Fig. 6 plots the Euler angle of each frame position over two full turns.
Table 1. Quaternion-to-Euler-angle conversion output comparison (unit: degrees)
Current-frame quaternion  True Euler angle θ  Output of this algorithm
(0.0297, 0.9994, 0.0152, 0.0046)  180  180
(-0.0275, -0.8640, 0.0126, 0.5025)  -120  -120
(-0.0214, -0.5709, 0.0086, 0.8207)  -70  -70
(0.3214, 0.117, 0.3214, 0.883)  0  0
(0.0049, 0.5010, -0.0087, 0.8654)  60  60
(0.0190, 0.8360, -0.1951, 0.5124)  120  120
(0.0348, 0.9713, -0.2353, -0.0075)  -180  -180
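A minimal sketch of the full-range yaw recovery (an assumption-laden reading, not the patent's formula: quaternion order [w x y z] as defined in step 1.6, yaw about the Y-axis as stated in step 1.7, and an atan2 form that keeps both half-plane components so the ±180° range survives):

```python
import math

def yaw_from_quaternion(w, x, y, z):
    """Yaw (heading) about the Y-axis over the full -180..180 degree
    range.  Using atan2 instead of asin avoids clamping to +/-90 deg,
    which is the range extension the text describes."""
    n = math.sqrt(w*w + x*x + y*y + z*z)   # normalize defensively
    w, x, y, z = w/n, x/n, y/n, z/n
    return math.degrees(math.atan2(2 * (w*y + x*z),
                                   w*w - x*x - y*y + z*z))

# agrees with Table 1 rows such as (-0.0214, -0.5709, 0.0086, 0.8207) -> -70 deg
print(round(yaw_from_quaternion(-0.0214, -0.5709, 0.0086, 0.8207)))
```

An asin-based extraction would fold 120° and 60° onto the same value; the two-argument atan2 is what lets a single yaw angle identify the heading uniquely, as required for the turn decision in Step 3.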
Step 2: save the map data and track-point coordinate information.
Using a data stream, the point-cloud data (three-dimensional coordinates in the world coordinate system and ORB feature descriptors) and the keyframe data (camera pose information) are saved as .bin files, and the point-coordinate information, comprising the x- and z-direction data, is saved into a txt file.
Step 3: load the map and track-point coordinates and compute the turn angle.
Step 3.1: load the completed map and execute the relocalization function, thereby computing the current camera pose. A navigation algorithm based on camera track points can be derived from GPS-based navigation algorithms. A navigation decision algorithm based on visual SLAM is accordingly proposed, as shown in Fig. 7. The algorithm has two parts: selecting the pre-target point, and computing the turn angle β.
Step 3.2: densify the track points, then select and filter the pre-target point. The track-point data obtained from keyframes are sparse in places, so they must be densified before the next step; mean filtering is used to densify the sparse points.
When selecting the pre-target point, let the current point be B. Search, within the threshold 0.35 both ahead and behind, for all points whose distance to B lies within 0.35, and save them into the array M[n] = {A, B, C, D, ...}; then sort the array in ascending order with sort(M, M+n) and choose the maximum M[n] as the pre-target point. Different map track data need different pre-target selection thresholds, and the parameter must be chosen according to the actual map data. For the start and end of a closed path (as opposed to intermediate positions): on a closed path of 200 track points, with the start labeled 0 and the end labeled 199, when the maximum of the sorted searched points is point 199, point 0 is chosen as the next target, i.e. the pre-target point; this requires the distance from point 0 to point 199 to exceed the search threshold.
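The pre-target selection can be sketched as follows (a simplified reading: sorting the in-radius points and taking the maximum amounts to picking the farthest point within the search radius; the function name and tuple representation are assumptions, and the closed-path wrap-around case is omitted):

```python
import math

def choose_pretarget(points, cur, radius=0.35):
    """Among track points within `radius` of the current point, pick
    the farthest one as the pre-target (look-ahead) point.  The 0.35
    default follows the text's threshold; it is map-dependent."""
    near = [p for p in points if 0 < math.dist(p, cur) <= radius]
    if not near:
        return None  # no track point inside the search radius
    return max(near, key=lambda p: math.dist(p, cur))

track = [(0, 0), (0.1, 0), (0.3, 0), (0.5, 0)]
print(choose_pretarget(track, (0, 0)))  # (0.3, 0): farthest within 0.35
```

Picking the farthest in-radius point gives a look-ahead target, so the vehicle steers toward a point some distance down the densified track rather than oscillating between adjacent samples.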
Step 3.3: compute the turn angle β.
Let (dx, dy) and (gx, gy) be the coordinates of the current point and of the point to be reached. With the angle θ defined over ±180° in the coordinate system of Fig. 7, let α be the acute angle (0° to 90°) between the line from the current point to the target point and the x-axis. β is the turn angle; the bottom-level vehicle controller maps it from left to right onto 0° to 180°. γ is an intermediate decision angle, taking different values for different relative positions of the target point and the current point. Table 2 is the turn-angle decision table corresponding to Fig. 7; the relative position of the current location and the target location falls into the four cases shown in the table.
Table 2. Turn-angle decision table
When the direction of motion is E to F, i.e. when (dx > gx && dy < gy):
if 0 <= θ <= 90 (the L1 direction), turn left with β = |θ| + 90 - α; if -180 <= θ < -90 (the L4 direction), turn right with β = |θ| - 90 + α; if -90 <= θ < 0 (the L2 and L3 directions), evaluate γ = 90 - α - |θ|: if γ > 0 (the L2 direction), turn left with β = γ; otherwise (the L3 direction), turn right with β = -γ.
When the direction of motion is F' to E', i.e. when (dx < gx && dy > gy):
if 0 <= θ <= 90 (the L1 direction), turn right with β = 90 - |θ| + α; if -180 <= θ <= -90 (the L4 direction), turn left with β = 180 - |θ| + 90 - α; if 90 < θ <= 180 (the L2 and L3 directions), evaluate γ = 90 - α - (180 - |θ|): if γ > 0 (the L3 direction), turn left with β = γ; otherwise (the L2 direction), turn right with β = -γ.
When the direction of motion is P to Q, i.e. when (dx < gx && dy < gy):
if -90 <= θ <= 0 (the L1 direction), turn right with β = 90 + |θ| - α; if 90 <= θ <= 180 (the L4 direction), turn left with β = |θ| - 90 + α; if 0 < θ < 90 (the L2 and L3 directions), evaluate γ = 90 - α - |θ|: if γ > 0 (the L2 direction), turn right with β = γ; otherwise (the L3 direction), turn left with β = -γ.
When the direction of motion is Q' to P', i.e. when (dx > gx && dy > gy):
if -90 <= θ <= 0 (the L1 direction), turn left with β = 90 - |θ| + α; if 90 <= θ <= 180 (the L4 direction), turn right with β = 180 - |θ| + 90 - α; if -180 <= θ < -90 (the L2 and L3 directions), evaluate γ = 90 + α - |θ|: if γ > 0 (the L2 direction), turn left with β = γ; otherwise (the L3 direction), turn right with β = -γ.
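For illustration, the first case (E to F) reads as code like this (a hedged sketch: it assumes α is the acute current-to-target line angle and that the bars stripped from the source formulas were absolute values; the function name is invented):

```python
def turn_angle_EF(theta, alpha):
    """Turn decision for the E-to-F motion direction (dx > gx, dy < gy).
    Returns (direction, beta), with beta in the bottom controller's
    0..180-degree convention."""
    a = abs(theta)
    if 0 <= theta <= 90:            # L1 direction
        return ("left", a + 90 - alpha)
    if -180 <= theta < -90:         # L4 direction
        return ("right", a - 90 + alpha)
    # -90 <= theta < 0: L2 / L3 directions, decided by gamma
    gamma = 90 - alpha - a
    return ("left", gamma) if gamma > 0 else ("right", -gamma)

print(turn_angle_EF(45, 30))   # heading 45 deg, target line at 30 deg
```

The other three cases follow the same shape with the signs and ranges from Table 2; γ only arises in the quadrant where a small heading error can fall on either side of the target line.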
Step 4: send the angle β obtained in Step 3 and the turn direction as a command to the vehicle's bottom-level controller.
Conclusion: Fig. 8 shows the original camera trajectory and the autonomous-navigation trajectory, where the mapping trajectory has not been densified. Analysis shows that the navigation line and the map line essentially coincide. The trajectory-deviation data for Fig. 8, analyzed in Fig. 9, show a maximum deviation of 0.029, about 0.1 m, which is very small. Fig. 10 shows the densified mapping trajectory and the corresponding three-lap navigation trajectory; Fig. 11 gives the trajectory-deviation data, with a maximum of 0.017, so the routes can be considered essentially coincident. Fig. 12 compares the GPS track collected during monocular-vision mapping with the GPS track collected while navigating by the above method; the maximum deviation between the two tracks is about 0.0013 km. The deviation is larger there because the turn angle at that corner is large; where the turn angles are small, the vehicle's turns coincide completely during autonomous navigation.
In summary, the present invention uses only a monocular camera, requires no other external sensors, and realizes a navigation system based on monocular visual SLAM within a small error range; it reduces research and application cost and has potential application value.
Claims (3)
1. A navigation method for a monocular vision system based on quaternions, characterized in that its steps include:
Step 1, construct the environment map, which comprises:
1) calibrate the camera;
2) fix the camera directly in front of the vehicle;
3) convert the input image from the RGB color space to grayscale;
4) extract and match features in the image;
5) initialize the map;
6) perform closed-loop detection and relocalization;
7) obtain the current camera pose and convert its quaternion into Euler angles ψ, θ, φ in the range -180° to 180°, where ψ, θ and φ are the rotation angles about the Z-, Y- and X-axes respectively, and the yaw angle θ is the heading of the current frame;
Step 2, save the map data and track-point coordinate information;
Step 3, load the map and track-point coordinates and compute the turn angle β from the yaw angle θ;
Step 4, send the angle β obtained in Step 3 and the turn direction as a command to the vehicle's bottom-level controller.
2. The navigation method for a monocular vision system based on quaternions of claim 1, characterized in that ψ, θ and φ range over -90° to 90°, with the corresponding solution formulas:
When θ ranges over -180° to +180°, the corresponding partial solution formula is:
3. The navigation method for a monocular vision system based on quaternions of claim 2, characterized in that the process by which Step 3 computes the turn angle β is:
Let (dx, dy) and (gx, gy) be the coordinates of the current point and of the point to be reached, and let α be the acute angle (0° to 90°) between the line from the current point to the target point and the x-axis. β is the turn angle; the bottom-level vehicle controller maps it from left to right onto 0° to 180°. γ is an intermediate decision angle, taking different values for different relative positions of the target point and the current point. The relative position of the current location and the target location falls into the following four cases:
When the direction of motion is E to F, i.e. when (dx > gx && dy < gy):
if 0 <= θ <= 90 (the L1 direction), turn left with β = |θ| + 90 - α; if -180 <= θ < -90 (the L4 direction), turn right with β = |θ| - 90 + α; if -90 <= θ < 0 (the L2 and L3 directions), evaluate γ = 90 - α - |θ|: if γ > 0 (the L2 direction), turn left with β = γ; otherwise (the L3 direction), turn right with β = -γ.
When the direction of motion is F' to E', i.e. when (dx < gx && dy > gy):
if 0 <= θ <= 90 (the L1 direction), turn right with β = 90 - |θ| + α; if -180 <= θ <= -90 (the L4 direction), turn left with β = 180 - |θ| + 90 - α; if 90 < θ <= 180 (the L2 and L3 directions), evaluate γ = 90 - α - (180 - |θ|): if γ > 0 (the L3 direction), turn left with β = γ; otherwise (the L2 direction), turn right with β = -γ.
When the direction of motion is P to Q, i.e. when (dx < gx && dy < gy):
if -90 <= θ <= 0 (the L1 direction), turn right with β = 90 + |θ| - α; if 90 <= θ <= 180 (the L4 direction), turn left with β = |θ| - 90 + α; if 0 < θ < 90 (the L2 and L3 directions), evaluate γ = 90 - α - |θ|: if γ > 0 (the L2 direction), turn right with β = γ; otherwise (the L3 direction), turn left with β = -γ.
When the direction of motion is Q' to P', i.e. when (dx > gx && dy > gy):
if -90 <= θ <= 0 (the L1 direction), turn left with β = 90 - |θ| + α; if 90 <= θ <= 180 (the L4 direction), turn right with β = 180 - |θ| + 90 - α; if -180 <= θ < -90 (the L2 and L3 directions), evaluate γ = 90 + α - |θ|: if γ > 0 (the L2 direction), turn left with β = γ; otherwise (the L3 direction), turn right with β = -γ.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201611010993.6A CN106556395A (en)  2016-11-17  2016-11-17  Navigation method for a monocular vision system based on quaternions 
Applications Claiming Priority (1)
Application Number  Priority Date  Filing Date  Title 

CN201611010993.6A CN106556395A (en)  2016-11-17  2016-11-17  Navigation method for a monocular vision system based on quaternions 
Publications (1)
Publication Number  Publication Date 

CN106556395A true CN106556395A (en)  2017-04-05 
Family
ID=58443365
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201611010993.6A Pending CN106556395A (en)  2016-11-17  2016-11-17  Navigation method for a monocular vision system based on quaternions 
Country Status (1)
Country  Link 

CN (1)  CN106556395A (en) 
Cited By (7)
Publication number  Priority date  Publication date  Assignee  Title 

CN107369181A (en) *  2017-06-13  2017-11-21  华南理工大学  Point cloud data acquisition and processing method based on a dual-processor architecture 
CN107680133A (en) *  2017-09-15  2018-02-09  重庆邮电大学  Mobile robot visual SLAM method based on an improved closed-loop detection algorithm 
CN108051002A (en) *  2017-12-04  2018-05-18  上海文什数据科技有限公司  Transport-vehicle spatial localization method and system based on inertial-measurement-aided vision 
CN109900272A (en) *  2019-02-25  2019-06-18  浙江大学  Visual positioning and mapping method, device and electronic equipment 
CN110174892A (en) *  2019-04-08  2019-08-27  北京百度网讯科技有限公司  Vehicle-heading processing method, device, equipment and computer-readable storage medium 
CN111353941A (en) *  2018-12-21  2020-06-30  广州幻境科技有限公司  Spatial coordinate conversion method 
CN111798574A (en) *  2020-06-11  2020-10-20  广州恒沙数字科技有限公司  Corner positioning method for three-dimensional fields 
Citations (3)
Publication number  Priority date  Publication date  Assignee  Title 

CN101067557A (en) *  2007-07-03  2007-11-07  北京控制工程研究所  Environment-sensing monocular visual navigation method for autonomous mobile vehicles 
CN101441769A (en) *  2008-12-11  2009-05-27  上海交通大学  Real-time visual localization method for a monocular camera 
CN101598556B (en) *  2009-07-15  2011-05-04  北京航空航天大学  Vision/inertia integrated navigation method for unmanned aerial vehicles in unknown environments 

2016
 2016-11-17 CN CN201611010993.6A patent/CN106556395A/en active Pending
Patent Citations (3)
Publication number  Priority date  Publication date  Assignee  Title 

CN101067557A (en) *  2007-07-03  2007-11-07  北京控制工程研究所  Environment-sensing monocular visual navigation method for autonomous mobile vehicles 
CN101441769A (en) *  2008-12-11  2009-05-27  上海交通大学  Real-time visual localization method for a monocular camera 
CN101598556B (en) *  2009-07-15  2011-05-04  北京航空航天大学  Vision/inertia integrated navigation method for unmanned aerial vehicles in unknown environments 
NonPatent Citations (2)
Title 

Liu Wei et al., "Smooth flight path planning for unmanned aerial vehicles in unknown complex environments", Control Theory & Applications *
Zhang Fan et al., "A new full-angle conversion algorithm between quaternions and Euler angles", Journal of Nanjing University of Science and Technology *
Cited By (9)
Publication number  Priority date  Publication date  Assignee  Title 

CN107369181A (en) *  2017-06-13  2017-11-21  华南理工大学  Point cloud data acquisition and processing method based on a dual-processor architecture 
CN107369181B (en) *  2017-06-13  2020-12-22  华南理工大学  Point cloud data acquisition and processing method based on a dual-processor structure 
CN107680133A (en) *  2017-09-15  2018-02-09  重庆邮电大学  Mobile robot visual SLAM method based on an improved closed-loop detection algorithm 
CN108051002A (en) *  2017-12-04  2018-05-18  上海文什数据科技有限公司  Transport-vehicle spatial localization method and system based on inertial-measurement-aided vision 
CN111353941A (en) *  2018-12-21  2020-06-30  广州幻境科技有限公司  Spatial coordinate conversion method 
CN109900272A (en) *  2019-02-25  2019-06-18  浙江大学  Visual positioning and mapping method, device and electronic equipment 
CN109900272B (en) *  2019-02-25  2021-07-13  浙江大学  Visual positioning and mapping method and device and electronic equipment 
CN110174892A (en) *  2019-04-08  2019-08-27  北京百度网讯科技有限公司  Vehicle-heading processing method, device, equipment and computer-readable storage medium 
CN111798574A (en) *  2020-06-11  2020-10-20  广州恒沙数字科技有限公司  Corner positioning method for three-dimensional fields 
Similar Documents
Publication  Publication Date  Title 

Filipenko et al.  Comparison of various SLAM systems for mobile robot in an indoor environment  
US11530924B2 (en)  Apparatus and method for updating high definition map for autonomous driving  
CN106556395A (en)  Navigation method for a monocular vision system based on quaternions  
Poddar et al.  Evolution of visual odometry techniques  
CN111263960B (en)  Apparatus and method for updating high definition map  
CN111273312B (en)  Intelligent vehicle positioning and loop detection method  
CN112734765A (en)  Mobile robot positioning method, system and medium based on example segmentation and multisensor fusion  
CN111812978B (en)  Cooperative SLAM method and system for multiple unmanned aerial vehicles  
Han et al.  Robust ego-motion estimation and map matching technique for autonomous vehicle localization with high definition digital map  
Sun et al.  Autonomous state estimation and mapping in unknown environments with onboard stereo camera for micro aerial vehicles  
Jun et al.  Autonomous driving system design for formula student driverless racecar  
Rehder et al.  Submap-based SLAM for road markings  
Manivannan et al.  Vision based intelligent vehicle steering control using single camera for automated highway system  
Krejsa et al.  Fusion of local and global sensory information in mobile robot outdoor localization task  
Roggeman et al.  Embedded vision-based localization and model predictive control for autonomous exploration  
Atsuzawa et al.  Robot navigation in outdoor environments using odometry and convolutional neural network  
Zong et al.  Vehicle model based visualtag monocular ORBSLAM  
Luo et al.  Stereo Vision-based Autonomous Target Detection and Tracking on an Omnidirectional Mobile Robot.  
Velat et al.  Vision based vehicle localization for autonomous navigation  
Klappstein et al.  Applying kalman filtering to road homography estimation  
CN113554705A (en)  Robust positioning method for laser radar in changing scene  
Zhang et al.  A Robust Lidar SLAM System Based on MultiSensor Fusion  
Pan  Challenges in visual navigation of agv and comparison study of potential solutions  
Uehara et al.  Line-based SLAM Considering Directional Distribution of Line Features in an Urban Environment.  
Du et al.  Hierarchical path planning and obstacle avoidance control for unmanned surface vehicle 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
SE01  Entry into force of request for substantive examination  
RJ01  Rejection of invention patent application after publication  Application publication date: 2017-04-05