CN103424112B - Laser-plane-assisted visual navigation method for a moving carrier - Google Patents

Laser-plane-assisted visual navigation method for a moving carrier

Info

Publication number
CN103424112B
Authority
CN
China
Prior art keywords
light spot
laser
ranging
line-shaped
moving carrier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310323658.1A
Other languages
Chinese (zh)
Other versions
CN103424112A (en)
Inventor
曾庆化
邓孝逸
王云舒
赵伟
刘建业
熊智
赖际洲
李荣冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN201310323658.1A
Publication of CN103424112A
Application granted
Publication of CN103424112B
Legal status: Active (anticipated expiration)

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention discloses a laser-plane-assisted visual navigation method for a moving carrier, comprising the steps of building a ranging and environment-construction system, determining the position of the rectangular light spot to be measured in the image, determining the center coordinates of the rectangular light spot, deriving a geometric ranging model, correcting the ranging model, segmenting the visual image, determining a local vector environment space with range information, and locating the carrier itself and building a global environment space. Unlike conventional distance-acquisition methods in autonomous moving-carrier navigation, the method introduces a laser plane as an auxiliary means and, combined with one or more vision sensors, makes full use of the line-shaped light spot that the laser plane forms by reflection on illuminated objects as a cooperative ranging target. By effectively exploiting the geometric relationship between the vision sensor and the laser source, the method meets the requirements of ranging accuracy, success rate, and real-time performance; it is a reliable navigation method applicable to autonomous navigation systems for moving carriers.

Description

Laser-plane-assisted visual navigation method for a moving carrier
Technical field
The present invention relates to a visual navigation method for moving carriers, and in particular to a method that uses a laser plane to assist visual ranging and navigation; it belongs to the field of autonomous navigation of moving carriers.
Background technology
When high-precision real-time external environment information is unavailable, moving carriers of all kinds have difficulty achieving autonomous navigation with inertial/GNSS systems alone. Using vision techniques to give a moving carrier high-precision perception and avoidance of unknown obstacles can effectively improve its applicability. Monocular and multi-camera vision, owing to their simple structure, low power consumption, and small volume, have broad application prospects in indoor and outdoor navigation. Studying laser-assisted ranging from a moving carrier to obstacles ahead, and estimating time to collision, not only serves obstacle avoidance effectively but also provides information for self-localization and reconstruction of the external three-dimensional environment. A fast and accurate visual navigation method is therefore significant for the autonomous navigation of moving carriers.
Visual navigation algorithms have been studied extensively at home and abroad. Some papers and documents propose achieving autonomous carrier navigation by pure vision, but pure visual navigation is strongly affected by the environment and yields limited information. Others propose combining multiple measurement means such as vision/inertia/ultrasound, but such methods are of limited effectiveness when no cooperative target is available. Some researchers, building on vision sensors, use lidar as the main ranging means to achieve navigation; supplemented by other sensors such as inertia, odometer, ultrasound, or sonar, the results are outstanding, but the cost and energy consumption are high. Moreover, lidar in general can only perform two-dimensional ranging, which limits environment construction.
Summary of the invention
Technical problem
The technical problem to be solved by the present invention is the spatial ambiguity that arises when conventional autonomous visual navigation algorithms for moving carriers derive three-dimensional world coordinates from two-dimensional image coordinates, the insufficiency of range information, and the real-time performance and ranging accuracy of such algorithms. To simplify the installation conditions of the laser source and the camera (no particular requirement such as parallelism or perpendicularity), the invention provides a laser-plane-assisted navigation method for visual ranging, self-positioning, and environment construction. The hardware the method requires is simple in structure, small in volume, and low in energy consumption, overcoming the restriction that a moving carrier cannot carry bulky, power-hungry devices. The acquired range information is stored in matrix form; combined with image processing, a vector map is obtained, and fusion with inertial measurement unit information achieves self-localization of the carrier, avoidance of surrounding obstacles, and acquisition of environment information.
Technical scheme
To solve the above technical problem, the laser-plane-assisted visual navigation method for a moving carrier of the present invention comprises the following steps:
Step one: use a laser light source and a vision sensor mounted on the moving carrier as the ranging and environment-construction system. The laser source emits a laser plane, which forms a line-shaped light spot by reflection on the measured object in front of the carrier; the vision sensor then captures a visual image containing this line-shaped light spot;
Step two: determine a group of light spots to be measured and their coordinate ranges in the visual image obtained in step one, and use template matching to search the image for the position of the rectangular light spot to be measured, where:
$$R(i,j) = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)\times T(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^{2}}$$
The template and the subimage are both rectangles of M × N pixels; (i, j) is the reference-point coordinate, denoting the subimage position; $\sum_{m=1}^{M}\sum_{n=1}^{N} T(m,n)^2$ belongs to the template and is constant; $\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^2$ is variable and changes with the reference point; R(i, j) represents the degree of correlation between the template and the subimage. When the similarity between the subimage and the template is maximal, the subimage position whose reference point is (I, J) is obtained, i.e., the position of the rectangular light spot to be measured;
Step three: take the brightness of the rectangular light spot to be measured as weights and, by the gray-level weighting method, compute the weighted sum over all pixels of the rectangular spot; after averaging, the center coordinates of the rectangular light spot are obtained:
$$x_c = \frac{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)\,x}{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)},\qquad y_c = \frac{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)\,y}{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)},\qquad \big(S(x,y)\ge T\big)$$
where (I, J) is the reference-point coordinate of the rectangular light spot obtained in step two, S(x, y) is the weight, and T is the weight threshold; if S(x, y) is less than T it is set to 0. The center coordinates of the rectangular light spot are obtained after weighted averaging;
Step four: according to the relative position of the vision sensor and the laser source on the carrier in step one, obtain the geometric ranging model from the pinhole-imaging principle; the geometric ranging model is as follows:
$$S = \frac{d}{m - \cos\beta\,(\tan\theta + \tan\alpha\tan\beta)},\qquad m = \frac{(v_{B0} - v_{B20})\cos\beta}{a_y}$$
where d is the distance between the laser source and the camera optical center; θ, α, β are the installation angles; a_y is the ratio of the focal length to the physical pixel size; and v_{B0}, v_{B20} are the image ordinates of the auxiliary points;
Step five: according to the parameters and known conditions of the geometric ranging model in step four, analyze the causes affecting ranging accuracy using the error-synthesis formula and, combined with experimental data, compensate the error-producing parameters in the ranging model, obtaining the corrected ranging model;
Step six: each time laser ranging is performed, substitute the center coordinates of the light spot to be measured obtained in step three into the corrected ranging model to obtain the ranging result; or, before navigation, substitute image coordinates into the ranging model at a fixed step size and store the results in matrix form, so that during navigation the distance value corresponding to a detected spot position can be extracted directly, quickly yielding the actual distance of the point hit by the laser;
Step seven: using the laser line segments of known distance, segment the visual image by region growing, thereby obtaining an image segmentation result with range information;
Step eight: vectorize the image segmentation result with range information and reconstruct the environment with a vectorization algorithm. A local vector environment space with range information is obtained after each laser scan;
Step nine: combine the inertial measurement unit on the moving carrier with the local environment space obtained in step eight, perform multi-information fusion by filtering, locate the carrier itself, and build the global environment space, achieving real-time autonomous navigation.
In the method for the present invention, the first step adopts the method for laser assisted, introduce laser plane as supplementary means, and in conjunction with vision sensor, make full use of the bright spot of wire that laser plane reflects to form on irradiated object as cooperation measuring distance of target, can solve in some other visual token algorithms, drawn the space ambiguity problem existing for three-dimensional world coordinate or the less problem being difficult to range finding of quantity of information by two dimensional image coordinate, LASER Light Source is waited particular requirement with the installation position of pick up camera without the need to parallel by this distance-finding method. The range information obtained is carried out the efficiency that matrixing storage can improve the actual range obtaining laser beam place point by the 6th step. 7th step and the 8th step obtain vector environment space by region growing and vector, can effectively reduce local environmental information amount, can realize the independent navigation of motion carrier after the 9th step information fusion.
Beneficial effects
The method of the present invention has the following beneficial effects:
(1) Unlike conventional distance-acquisition methods in autonomous moving-carrier navigation, it introduces a laser plane as an auxiliary means and, combined with one or more vision sensors, makes full use of the line-shaped light spot formed by the laser plane on illuminated objects as a cooperative ranging target. The acquired range information can be stored in matrix form, so no computation is needed when a spot is detected: the distance value corresponding to the image coordinates is extracted directly from the matrix. The computational load is small, the solution time is short, the method is simple to implement, the acquired range information is stable, and there is no spatial ambiguity.
(2) When installing the devices, there is no particular requirement on the installation positions of the camera and the light source: the emitted laser plane need not be parallel to the camera optical axis or perpendicular to the horizontal plane of the carrier, which reduces installation and setup requirements.
(3) The geometric relationship between the vision sensor and the laser source is used effectively: the derived ranging model improves the detection efficiency of the spot to be measured, the error analysis effectively improves ranging accuracy and suppresses divergence of the ranging results, region growing makes effective use of the information in the visual image, and a local vector environment space with accurate range information is constructed. After the local spaces are fused, a global space can be built, within which autonomous navigation is realized.
(4) By detecting the laser spots and combining region growing, all kinds of obstacles in front of the carrier can be detected effectively, which helps realize obstacle avoidance and environment reconstruction more simply and effectively than pure-vision methods.
In general, the present method meets the requirements of ranging accuracy, success rate, and real-time performance; it is a reliable navigation method applicable to autonomous navigation systems for moving carriers.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method of the invention;
Fig. 2 is the installation schematic of the horizontal field of view for laser-plane ranging;
Fig. 3 is a perspective view of the laser-plane ranging model;
Fig. 4 is the image-plane projection view for laser-plane ranging.
Embodiment
The laser-plane-assisted visual navigation method for a moving carrier of this embodiment comprises steps one to nine as set forth in the Technical Scheme above.
The main implementation steps involved in the method of this embodiment are described further below:
(1) Determination of the laser spot position
After the light plane emitted by the laser source strikes the objects in front, reflection produces line-shaped light spots. Because their brightness is high, if the image is converted to an 8-bit grayscale map the pixel values at the spot centers reach the maximum of 255; from the standpoint of color, assuming a red laser is emitted, the red component inside the laser spot is higher than the green and blue components. With this kind of prior knowledge available, template matching is a feasible method. Preparation before template matching includes filtering and smoothing the image to remove part of the noise interference.
It should be noted that after the laser plane is reflected by the objects in front, a line-shaped light spot is formed. However, because the positions of the illuminated objects differ, in most cases the line-shaped spot is not a complete straight line but is interrupted, and after projection onto the image plane it is likewise not a continuous straight line. According to actual demand, several rectangular points to be measured and their image abscissa (ordinate) ranges can be determined on each interrupted line, after which the accurate positions of the rectangular measured points are extracted.
Assume the template is T with size M × N, and the searched image is S (whose length and width are both no smaller than the template's). The part of the search image covered by the template is the subimage S_{i,j}, whose size equals the template's, where (i, j) is the top-left coordinate of the subimage and serves as the reference point.
Similarity can be checked by taking the difference between the template and the corresponding subimage pixels and summing:
$$\mathrm{sum}(i,j) = \sum_{m=1}^{M}\sum_{n=1}^{N}\left[S_{i,j}(m,n) - T(m,n)\right]^{2} \qquad (1)$$
After expansion:
$$\mathrm{sum}(i,j) = \sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^{2} - 2\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)\times T(m,n) + \sum_{m=1}^{M}\sum_{n=1}^{N} T(m,n)^{2} \qquad (2)$$
In formula (2), $\sum_{m=1}^{M}\sum_{n=1}^{N} T(m,n)^2$ belongs to the template and is constant; $\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^2$ is variable, represents the energy of the subimage, and changes with the reference point; $\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)\times T(m,n)$ represents the correlation between the template and the subimage and is largest when the two match. From this the correlation function is obtained:
$$R(i,j) = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)\times T(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^{2}} \qquad (3)$$
The greater the similarity between the template and the subimage, the closer R(i, j) is to 1. When the similarity between the subimage and the template is maximal, the subimage position whose reference point is (I, J) is obtained, i.e., the position of the rectangular light spot to be measured.
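To make the search concrete, the following minimal Python sketch (illustrative, not from the patent) scans every subimage position, evaluates R(i, j) of equation (3), and keeps the reference point whose R is closest to 1; the synthetic image, template values, and selection rule are assumptions for illustration.

```python
import numpy as np

def match_template(img: np.ndarray, tpl: np.ndarray):
    """Return the reference point (I, J) whose R(i, j) of eq. (3) is closest to 1."""
    M, N = tpl.shape
    H, W = img.shape
    best, best_ij = np.inf, (0, 0)
    for i in range(H - M + 1):
        for j in range(W - N + 1):
            sub = img[i:i + M, j:j + N].astype(np.float64)
            energy = (sub * sub).sum()          # sum of S_ij(m, n)^2, varies with (i, j)
            if energy == 0.0:
                continue                        # skip empty subimages
            r = (sub * tpl).sum() / energy      # R(i, j) of eq. (3)
            if abs(r - 1.0) < best:
                best, best_ij = abs(r - 1.0), (i, j)
    return best_ij

# Synthetic test: a bright 3x5 line-spot segment on a dim background.
img = np.full((40, 60), 10.0)
img[20:23, 10:15] = 255.0
tpl = np.full((3, 5), 255.0)                    # constant-brightness template
print(match_template(img, tpl))                 # -> (20, 10)
```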
(2) Spot center location
After the position of the rectangular light spot formed by the laser on the measured object is determined, the center coordinates of the rectangular spot in image coordinates can be extracted on this basis. In image processing, coordinates determined only to pixel level generally meet accuracy requirements; but when the measured target is far from the carrier, a change of a single pixel can produce a large range deviation. Sub-pixel accuracy must therefore be considered when locating the spot center, to improve ranging accuracy.
Considering complexity and real-time performance, the gray centroid method is adopted. According to the imaging characteristics of the spot region, the gray value is largest at the spot center and lower at the boundary, so taking the pixel value at each coordinate as a weight, the center coordinates obtained after weighted averaging achieve sub-pixel solution accuracy.
$$x_c = \frac{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)\,x}{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)},\qquad y_c = \frac{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)\,y}{\sum_{x=I}^{I+m}\sum_{y=J}^{J+n} S(x,y)},\qquad \big(S(x,y)\ge T\big) \qquad (4)$$
In formula (4), the rectangle formed by vertex (I, J) with length and width m and n must be slightly larger than the matched subimage (the number of pixels used to compute the spot center must exceed a certain value to obtain a good result), and a threshold test is added to the computation: if the pixel value at a point is less than the threshold T, its weight is set to 0.
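As an illustration of equation (4), a minimal Python sketch of the gray-weighted centroid with threshold test follows; the window bounds, threshold value, and test image are assumed values, not from the patent.

```python
import numpy as np

def spot_center(img: np.ndarray, I: int, J: int, m: int, n: int, T: float = 128.0):
    """Sub-pixel spot center (x_c, y_c) by gray-weighted averaging, eq. (4)."""
    win = img[I:I + m + 1, J:J + n + 1].astype(np.float64)
    win[win < T] = 0.0                        # weights below threshold T set to 0
    xs = np.arange(I, I + m + 1)[:, None]     # row coordinates x
    ys = np.arange(J, J + n + 1)[None, :]     # column coordinates y
    s = win.sum()                             # assumes some pixel exceeds T
    return (win * xs).sum() / s, (win * ys).sum() / s

img = np.zeros((40, 60))
img[20:23, 10:15] = [[100, 200, 255, 200, 100]] * 3   # graded line spot
print(spot_center(img, 19, 9, 5, 7))                  # -> (21.0, 12.0)
```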
(3) Geometric ranging model based on pinhole imaging
The laser-assisted ranging method ranges on the straight-line light spot that the laser plane emitted by the light source forms on the measured object. The ranging model is explained in detail below.
As shown in Figs. 3 and 4: establish the world coordinate system XwYwZw; CC₁ is the camera optical axis; uov is the imaging plane; the optical axis meets the imaging plane at the principal point C₀, whose image coordinates are (u₀, v₀); the relative horizontal distance between the camera and the measured point is the quantity to be measured.
L is the laser source. Assume the optical axis CC₁ is parallel to the Yw axis, the imaging plane uov is perpendicular to the horizontal plane XwOwYw, and ou, ov are parallel to the axes Xw and Zw respectively. As shown in Fig. 2, the laser source L lies above the camera at distance d; let the angle between the emitted laser plane and the optical axis CC₁ be θ₀, with the requirement that the angle between the intersection line of the laser plane with the horizontal plane XwOwYw and the axis Xw must not be 0.
Suppose the laser plane strikes vertical surfaces at two different positions, forming two line-shaped light spots. On one of them take a point A whose projection A₀(u_{A0}, v_{A0}) onto the image plane has the same abscissa as the principal point C₀, and in the laser plane draw L′ through A perpendicular to the Yw axis. On the other spot take a point B whose projection onto the image plane is B₀(u_{B0}, v_{B0}); LB meets L′ at B′, and through B draw L″ parallel to L′. Let the angles of L′ and L″ with the axis Xw both be α.
Through L and C draw two planes parallel to the horizontal plane XwOwYw; the projections of the points A, B, B′ onto these two planes are A₁ and C₁, B₁ and B₂, B₁′ and B₂′ respectively, and B₂₀(u_{B20}, v_{B20}) is the projection of B₂ onto the image plane. Let ∠ALA₁ be θ, ∠BLB₁ (= ∠B′LB₁′) be θ′, and ∠B₁LA₁ (= ∠B₂CC₁) be β.
LB₁, the relative horizontal distance between the camera and the measured point B, is the quantity to be measured; the concrete derivation follows.
First find the distance to the measured point A; LC (= A₁C₁), θ and α can be taken as quantities known from equipment installation. In triangle ΔACC₁:
$$\frac{A_0C_0}{AC_1} = \frac{CC_0}{CC_1} \qquad (5)$$
where the values of AC₁, A₀C₀ and CC₀ can be obtained directly:
$$AC_1 = AA_1 + A_1C_1 = CC_1\tan\theta + d,\qquad A_0C_0 = (v_{A0} - v_{C0})\,dy,\qquad CC_0 = f \qquad (6)$$
Substituting the three equations of (6) into (5) gives:
$$CC_1 = \frac{d}{m - \tan\theta},\qquad m = \frac{v_{A0} - v_{C0}}{a_y} \qquad (7)$$
Through the above steps, all the information about the measured point A is obtainable, and it can serve as a condition for solving the more complicated case of the measured point B.
In triangle ΔB₂CC₁, with β = ∠B₂CC₁:
$$\tan\beta = \frac{B_{20}C_0}{CC_0} = \frac{(u_{B0} - u_{C0})\,dx}{f} = \frac{u_{B0} - u_{C0}}{a_x} \qquad (8)$$
In triangle ΔBCB₂:
$$\frac{B_0B_{20}}{BB_2} = \frac{CB_{20}}{CB_2} \qquad (9)$$
Of the four quantities in (9), once the three values other than CB₂ are obtained, or expressions containing only CB₂, the result for CB₂ follows.
Here B₀B₂₀ is the physical distance between the two pixel coordinates:
$$B_0B_{20} = (v_{B0} - v_{B20})\,dy \qquad (10)$$
BB₂ in (9) can be expressed as:
$$BB_2 = BB_1 + B_1B_2 = d + BB_1 \qquad (11)$$
BB₁ in (11) is obtained by the following steps.
In triangle ΔLBB₁:
$$\frac{B'B_1'}{BB_1} = \frac{LB_1'}{LB_1} \qquad (12)$$
Wherein:
$$B'B_1' = AA_1 + A_1B_1'\tan\alpha = LA_1\tan\theta + A_1B_1'\tan\alpha,\qquad A_1B_1' = C_1B_2' = CC_1\tan\beta = LA_1\tan\beta \qquad (13)$$
$$LB_1' = \frac{LA_1}{\cos\beta} = \frac{CC_1}{\cos\beta} \qquad (14)$$
Substituting (13) and (14) into (12) gives:
$$BB_1 = LB_1\cos\beta\,(\tan\theta + \tan\alpha\tan\beta) \qquad (15)$$
For CB₂₀ in (9):
$$CB_{20} = \frac{CC_0}{\cos\beta} = \frac{f}{\cos\beta} \qquad (16)$$
Finally, substituting (10), (11), (15) and (16) into (9) yields:
$$CB_2 = \frac{d}{m - \cos\beta\,(\tan\theta + \tan\alpha\tan\beta)},\qquad m = \frac{(v_{B0} - v_{B20})\cos\beta}{a_y} \qquad (17)$$
From equation (17) it follows that, besides the installation parameters, only the camera focal length and the physical pixel size need to be known to obtain the relative distance between the spot to be measured and the camera.
According to equation (17), when β tends toward 0 the formula becomes:
$$CB_2 = \frac{d}{m - \tan\theta},\qquad m = \frac{v_{B0} - v_{B20}}{a_y} \qquad (18)$$
Equation (18) is very similar to equation (7); only the ordinate of the measured point's image coordinates differs. From this feature it can be seen that as β approaches 0 the emitted laser plane contracts into a laser beam, and the planar ranging model degenerates into the ranging model for the single imaging point formed by the laser beam.
The camera used must be calibrated to obtain the following intrinsic parameters:
$$K = \begin{bmatrix} a_x & 0 & u_{C0} \\ 0 & a_y & v_{C0} \\ 0 & 0 & 1 \end{bmatrix}$$
When the laser-plane ranging method is adopted, the camera and the laser source are fixed on the carrier and installed as shown in Fig. 2, with distance d between the optical center and the light source.
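A minimal Python sketch of the ranging model of equation (17) follows; the installation parameters and pixel ordinates are illustrative placeholders, not calibrated values. Calling it with beta = 0 reproduces the degenerate single-point model of equation (18).

```python
import math

def range_from_spot(v_B0: float, v_B20: float,
                    d: float, theta: float, alpha: float, beta: float,
                    ay: float) -> float:
    """Horizontal distance CB2 from the laser-spot ordinates, eq. (17)."""
    m = (v_B0 - v_B20) * math.cos(beta) / ay
    return d / (m - math.cos(beta) * (math.tan(theta) + math.tan(alpha) * math.tan(beta)))

# Illustrative call with assumed installation parameters.
d, theta, alpha, ay = 0.10, math.radians(5.0), math.radians(2.0), 800.0
print(range_from_spot(300.0, 120.0, d, theta, alpha, beta=math.radians(10.0), ay=ay))
print(range_from_spot(300.0, 120.0, d, theta, alpha, beta=0.0, ay=ay))  # eq. (18)
```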
(4) Error analysis and correction
Because systematic errors exist, the errors of the system must be compensated. Compensation of the installation errors is taken as an example below, i.e., compensating the spacing d between the camera and the laser source and the angles θ and α.
4.1 Error analysis of laser-plane ranging
Take the partial derivatives of CB₂ in equation (17) with respect to d, θ and α respectively:
$$\frac{\partial CB_2}{\partial d} = \frac{1}{m - \cos\beta(\tan\theta + \tan\alpha\tan\beta)},\quad
\frac{\partial CB_2}{\partial \theta} = \frac{d\cos\beta}{\left[m - \cos\beta(\tan\theta + \tan\alpha\tan\beta)\right]^{2}}\,\frac{1}{\cos^{2}\theta},\quad
\frac{\partial CB_2}{\partial \alpha} = \frac{d\cos\beta\tan\beta}{\left[m - \cos\beta(\tan\theta + \tan\alpha\tan\beta)\right]^{2}}\,\frac{1}{\cos^{2}\alpha} \qquad (19)$$
Since d, θ and α are mutually independent, by the systematic-error synthesis principle:
$$\Delta CB_2 = \Delta d\,\frac{\partial CB_2}{\partial d} + \Delta\theta\,\frac{\partial CB_2}{\partial \theta} + \Delta\alpha\,\frac{\partial CB_2}{\partial \alpha} \qquad (20)$$
4.2 Error correction
Taking Δd, Δθ and Δα as unknown parameters in the measurement model, perform nonlinear fitting with true data to solve for the corrected values of d, θ and α; substituting the corrected values into the ranging model gives the corrected ranging model.
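The correction might be sketched as below, assuming SciPy is available: synthetic calibration pairs are fitted by nonlinear least squares to recover corrected d, θ and α. Letting β vary across samples (as it does across image columns) keeps θ and α separable; all numbers are illustrative, not the patent's calibration data.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, d, theta, alpha):
    """Ranging model of eq. (17) with d, theta, alpha as the parameters to correct."""
    m, beta = x
    return d / (m - np.cos(beta) * (np.tan(theta) + np.tan(alpha) * np.tan(beta)))

# Synthetic calibration data from assumed "true" parameters plus measurement noise.
rng = np.random.default_rng(0)
d0, th0, al0 = 0.10, np.radians(5.0), np.radians(2.0)
m_obs = np.linspace(0.15, 0.40, 40)
beta_obs = np.linspace(-0.2, 0.2, 40)            # beta varies across image columns
s_true = model((m_obs, beta_obs), d0, th0, al0) + rng.normal(0.0, 1e-4, m_obs.size)

# Start from the nominal (uncorrected) installation values and fit.
p0 = [0.09, np.radians(4.0), np.radians(1.0)]
(d_c, th_c, al_c), _ = curve_fit(model, (m_obs, beta_obs), s_true, p0=p0)
print(d_c, np.degrees(th_c), np.degrees(al_c))   # corrected d, theta, alpha
```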
(5) Matrixed storage
From the above steps it can be seen that once the parameters of the ranging model are determined, the result of substituting each image coordinate into the ranging model no longer changes and corresponds one-to-one with the image coordinates. To obtain range information quickly, after the accurate ranging model is obtained, the distance values of all image coordinates can be computed once (image coordinates are substituted into the ranging model at a fixed step size) and stored as a matrix. Subsequently, once the coordinates of a spot to be measured are detected, the distance value is obtained directly.
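A minimal sketch of this one-time precomputation follows, assuming a 480-row image and illustrative model parameters; at run time the detected spot ordinate simply indexes the stored table.

```python
import math
import numpy as np

HEIGHT = 480                                    # image height in pixels (assumed)
V_B20 = 60.0                                    # auxiliary reference ordinate (assumed)
d, theta, alpha, beta, ay = 0.10, math.radians(5.0), math.radians(2.0), math.radians(10.0), 800.0

# One-time precomputation over every image ordinate with unit pixel step;
# ordinates for which the model denominator is non-positive get NaN.
table = np.full(HEIGHT, np.nan)
for v in range(HEIGHT):
    den = (v - V_B20) * math.cos(beta) / ay - math.cos(beta) * (
        math.tan(theta) + math.tan(alpha) * math.tan(beta))
    if den > 0.0:
        table[v] = d / den                      # eq. (17) evaluated once, stored

v_spot = 300                                    # detected spot ordinate (rounded)
print(table[v_spot])                            # run-time lookup, no recomputation
```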
(6) Environment map vectorization
Using the laser line segments of known distance, the image obtained by the vision sensor is segmented by region growing, yielding an image segmentation result with range information. The boundary line segments in the image with range information are vectorized, and the indoor environment is reconstructed with a vectorization algorithm, giving a local vector environment space with range information.
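A minimal Python sketch of seeded region growing follows; 4-connectivity, the intensity tolerance, and the synthetic scene are illustrative assumptions. Each region grown from a laser-spot seed inherits that seed's known distance.

```python
import numpy as np
from collections import deque

def grow_region(img: np.ndarray, seed, tol: float = 12.0) -> np.ndarray:
    """Grow one region from `seed` while neighbors stay within `tol` of the seed value."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        x, y = q.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < h and 0 <= ny < w and not mask[nx, ny] \
                    and abs(float(img[nx, ny]) - ref) <= tol:
                mask[nx, ny] = True
                q.append((nx, ny))
    return mask

# Seeds are laser-spot pixels whose distance is already known; the grown
# region is one obstacle surface carrying that distance.
img = np.zeros((40, 60)); img[10:30, 20:40] = 50    # one flat obstacle surface
region = grow_region(img, seed=(20, 30))
print(region.sum())                                  # -> 400 pixels
```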
(7) information fusion
After the local vector environment space is obtained, it is combined with the motion information output by the inertial measurement unit on the moving carrier; information fusion by filtering locates the carrier itself and yields accurate carrier pose-change information, after which the local spaces are fused accordingly to build the global environment space, achieving real-time autonomous navigation.
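The patent does not fix a particular filter; purely as an illustration, a scalar Kalman update fusing an IMU-propagated carrier position with a position fix derived from matching against the local vector space might look like this (all noise values assumed):

```python
def kalman_update(x_pred: float, p_pred: float, z: float, r: float):
    """Scalar Kalman measurement update: fuse prediction with measurement z."""
    k = p_pred / (p_pred + r)                 # Kalman gain
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

x, p = 0.0, 1.0                               # carrier position estimate and variance
steps = [(0.50, 0.48), (0.52, 1.01), (0.49, 1.52)]
for imu_dx, z in steps:                       # (IMU displacement, map-based fix)
    x, p = x + imu_dx, p + 0.02               # propagate with IMU, inflate variance
    x, p = kalman_update(x, p, z, r=0.05)     # correct with local-map position fix
print(x, p)                                   # fused position after three scans
```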
This concludes the specific flow of the laser-plane-assisted visual navigation method for a moving carrier.

Claims (2)

1. A laser-plane-assisted visual navigation method for a moving carrier, characterized by comprising the following steps:
Step one: use a laser light source and a vision sensor mounted on the moving carrier as the ranging and environment-construction system; the laser source emits a laser plane, which forms a line-shaped light spot by reflection on the measured object in front of the moving carrier, and the vision sensor then captures an image containing this line-shaped light spot;
Step two: determine a segment of the line-shaped light spot and its coordinate range in the image obtained in step one, and use template matching to search the image for the position of this segment of the line-shaped spot, where:
$$R(i,j) = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)\times T(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^{2}}$$
said template and subimage are both rectangles of M × N pixels, the subimage being the part of the image covered by the template, and the rectangle contains only one segment of the above line-shaped spot; (i, j) is the reference-point coordinate, denoting the subimage position; $\sum_{m=1}^{M}\sum_{n=1}^{N} T(m,n)^2$ belongs to the template and is constant; $\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)^2$ is variable and changes with the reference point; R(i, j) represents the degree of correlation between the template and the subimage; when the similarity between the subimage and the template is maximal, the subimage position whose reference point is (I, J) is obtained, i.e., the position of the line-shaped light spot to be measured;
Step three: take the brightness of the line-shaped light spot to be measured as weights and, by the gray-level weighting method, compute the weighted sum over the pixels of the line-shaped spot; after averaging, the center coordinates of the line-shaped spot are obtained;
Step four: according to the relative position of the vision sensor and the laser source on the moving carrier in step one, obtain the geometric ranging model from the pinhole-imaging principle; the geometric ranging model is as follows:
$$S = \frac{d}{m - \cos\beta\,(\tan\theta + \tan\alpha\tan\beta)},\qquad m = \frac{(v_{B0} - v_{B20})\cos\beta}{a_y}$$
where d is the distance between the laser source and the camera optical center; θ, α, β are the installation angles; a_y is the ratio of the focal length to the physical pixel size; and v_{B0}, v_{B20} are the image ordinates of the auxiliary points;
Step five: according to the parameters and known conditions of the geometric ranging model in step four, analyze the causes affecting ranging accuracy using the error-synthesis formula and, combined with experimental data, compensate the error-producing parameters in the ranging model, obtaining the corrected ranging model;
Step six: each time laser ranging is performed, substitute the center coordinates of the line-shaped spot obtained in step three into the corrected ranging model to obtain the ranging result;
Step seven: using the laser line segments of known distance, segment the visual image by region growing, thereby obtaining an image segmentation result with range information;
Step eight: vectorize the image segmentation result with range information and reconstruct the environment with a vectorization algorithm; a local vector environment space with range information is obtained after each laser scan;
Step nine: combine the inertial measurement unit on the moving carrier with the local vector environment space obtained in step eight, perform multi-information fusion by filtering, locate the moving carrier itself, and build the global environment space, achieving real-time autonomous navigation.
2. The laser-plane-assisted visual navigation method for a moving carrier according to claim 1, characterized in that in said step six, before navigation, image coordinates are substituted into the ranging model at a fixed step size and the results are stored in matrix form; during navigation, after the line-shaped spot coordinates are obtained, the corresponding distance value is extracted directly, quickly yielding the actual distance of the point hit by the laser as the ranging result.
CN201310323658.1A 2013-07-29 2013-07-29 Laser-plane-assisted visual navigation method for a moving carrier Active CN103424112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310323658.1A CN103424112B (en) 2013-07-29 2013-07-29 Laser-plane-assisted visual navigation method for a moving carrier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310323658.1A CN103424112B (en) 2013-07-29 2013-07-29 Laser-plane-assisted visual navigation method for a moving carrier

Publications (2)

Publication Number Publication Date
CN103424112A CN103424112A (en) 2013-12-04
CN103424112B true CN103424112B (en) 2016-06-01

Family

ID=49649214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310323658.1A Active CN103424112B (en) 2013-07-29 2013-07-29 Laser-plane-assisted visual navigation method for a moving carrier

Country Status (1)

Country Link
CN (1) CN103424112B (en)



Also Published As

Publication number Publication date
CN103424112A (en) 2013-12-04


Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant