CN102168973B - Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof - Google Patents


Info

Publication number
CN102168973B
CN102168973B (application CN201110006181A)
Authority
CN
China
Prior art keywords
vision sensor
theta
tan
omnibearing vision
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110006181A
Other languages
Chinese (zh)
Other versions
CN102168973A (en)
Inventor
李明 (Li Ming)
刘仲华 (Liu Zhonghua)
黄建安 (Huang Jian'an)
戴思慧 (Dai Sihui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Agricultural University
Original Assignee
Hunan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Agricultural University filed Critical Hunan Agricultural University
Priority to CN201110006181A
Publication of CN102168973A
Application granted
Publication of CN102168973B
Active legal status
Anticipated expiration legal status

Landscapes

  • Navigation (AREA)

Abstract

The invention discloses an automatic-navigation Z-axis positioning method for an omni-directional vision sensor, and a positioning system thereof. The method comprises the following steps: placing four colored markers, forming a rectangle, at the outer corners of the operation area; mounting the omni-directional vision sensor on an operation vehicle; acquiring with the sensor an on-site omnidirectional image containing the images of the four colored markers; calculating the height H of the current omnidirectional vision system according to a formula; and finally calculating the difference between the height H and a calibrated height Hs, which is the positioning coordinate value along the Z-axis, thereby completing the Z-axis positioning of the operation machinery. The method and system position quickly, apply widely, are highly practical, and are competitive in cost.

Description

Omnidirectional vision sensor self-navigation Z-axis positioning method and positioning system thereof
Technical field
The present invention relates to a Z-axis positioning method for self-navigation with an omnidirectional vision sensor, and to a positioning system implementing it.
Background art
At present, ever higher degrees of automation are demanded of machinery, and the automated operation of machines and robots is the direction in which mechanization is developing. Positioning is of the utmost importance for automatic operation vehicles and robots: a vehicle or robot must know its own position during automatic operation before it can carry out its next task as required. GPS is at present the only commercial and most widely used positioning system, but its precision depends on the working location: among high-rise city streets or in mountainous areas, obstacles such as buildings, woods and ridges block the microwave signal and the precision drops markedly, and GPS cannot be used indoors at all. Like mammals, robots can localize themselves through vision, which is one of the current focuses of robot localization research. Vision-sensor localization methods divide into geometric localization and topological localization. Typical geometric localization finds the absolute position of the robot in the 2D plane, either by identifying marker positions through image processing or by image matching. Topological localization finds an abstract position of the robot on a previously built topological map; it applies widely and needs no manually placed markers, but building the topological map is generally complicated, the image-processing workload is large, and it runs slowly. Geometric localization with manually placed markers is simple and is widely used in robot localization; however, geometric absolute positioning with an ordinary vision sensor generally yields only a planar result, whereas GPS gives the three-dimensional position of the robot in space. In realistic rough operating environments, 3D spatial localization is necessary and allows the ground conditions around the robot to be judged effectively. RTK-GPS, a recent development of GPS technology, offers higher precision but is expensive. For ordinary vehicles and working machines, in particular agricultural machinery and farm transport vehicles, the operating environment is harsh, and for indoor operation in particular GPS may be unable to provide self-navigation positioning.
The omnidirectional vision sensor is a new type of vision sensor. It provides rich information over a 360-degree range, its image does not change as the sensor rotates, it helps reduce some of the negative effects of wheel slip and vibration, and it is inexpensive, so it is widely used in robotics. The invention patent "An automatic navigating and positioning device and method" (application number 200910044412.4) discloses the computation of the two planar dimensions; to achieve three-dimensional localization, the Z-axis position must also be determined.
Summary of the invention
The technical problem to be solved by this invention is to propose an omnidirectional vision sensor self-navigation Z-axis positioning method and a positioning system implementing it, which achieve Z-axis localization and thereby complete three-dimensional spatial localization on the basis of the existing two-dimensional localization. The method and apparatus position quickly, apply widely, are practical, and have a price advantage.
The technical solution of the present invention is as follows:
An omnidirectional vision sensor self-navigation Z-axis positioning method:

Four colored markers forming a rectangle, L1, L2, L3, L4, are placed at the outer corners of the operation area, and the omnidirectional vision sensor is mounted on the operation vehicle; the sensor acquires an on-site omnidirectional image containing the images of the four colored markers.

The relation between the height of the omnidirectional vision system, the sensor-to-marker distance, and the imaging distance is:

$$\left(c - \frac{2 c x_i H}{X_0 f + x_i H}\right)^2 \frac{1}{b_1^2} - \left(\frac{2 c X_0 x_i}{X_0 f + x_i H}\right)^2 \frac{1}{a_1^2} = 1, \qquad X_0,\ x_i > 0;$$

where $a_1$, $b_1$, $c$ are the structural parameters of the hyperboloid surface, $f$ is the focal length, and $H$ is the computed height of the current omnidirectional vision system;

$X_0$ is the distance on the XOY plane between the omnidirectional vision sensor and a spatial point $P(X_0, Z_0)$, i.e. the distance between the projection of $P(X_0, Z_0)$ onto the XOY plane and the projection of the sensor onto the XOY plane;

$x_i$ is the imaging distance, i.e. the mean of the four actual imaging distances from the colored markers to the projection center of the omnidirectional vision sensor; $x_i$ is obtained by computing the imaging pixel distance from each marker to the projection center and converting pixel distance to actual distance through the pixel calibration.

The computed height $H$ of the current omnidirectional vision system is calculated; finally, the difference between the computed height $H$ and the calibrated height $H_s$ is the positioning coordinate value along the Z-axis, which completes the Z-axis positioning of the operation machinery.
The four colored markers are located at the corner positions of the bounding rectangle of the operation area.
The position of the sensor's projection center in the omnidirectional image is fixed.
$x_i$ is obtained as follows: with a pixel calibration of 0.01389 inch/pixel, the actual imaging distances from the four markers to the sensor projection center are computed first, and their mean is taken as $x_i$.
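A minimal Python sketch of this step, assuming the per-marker pixel distances have already been measured from the image (the numeric values below are hypothetical, chosen only for illustration):

```python
CAL = 0.01389  # calibration: inches of imaging distance per pixel (from the text)

def imaging_distance(pixel_dists):
    """Mean actual imaging distance x_i, in inches, of the four markers.

    pixel_dists: pixel distances from each marker's image centroid to the
    (fixed) projection centre of the omnidirectional vision sensor.
    """
    return CAL * sum(pixel_dists) / len(pixel_dists)

x_i = imaging_distance([310.2, 295.8, 301.4, 318.9])  # hypothetical values
```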
$X_0$ is obtained as follows: from the four image positions of the colored markers and the image position of the sensor projection center, compute the azimuth angles $\theta_1$–$\theta_4$ subtended at the projection center, in the omnidirectional image, by each pair of adjacent colored markers.
The rectangle formed by the four colored markers L1–L4 around the operation area has length $a$ and width $b$. By the geometric construction of circular arcs, the points L1 and L2 together with the angle $\theta_1$ define an arc $S_1$; likewise L2 and L3 with $\theta_2$ define $S_2$, L3 and L4 with $\theta_3$ define $S_3$, and L4 and L1 with $\theta_4$ define $S_4$. The arcs $S_1$–$S_4$ intersect pairwise, giving four intersection points with coordinates:

$$X_{I1} = \frac{b\,(c_1 + 1/\tan\theta_1)}{1 + c_1^2}, \quad Y_{I1} = \frac{b\,c_1\,(c_1 + 1/\tan\theta_1)}{1 + c_1^2}, \qquad \text{where } c_1 = \frac{b/\tan\theta_1 - a}{a/\tan\theta_2 - b};$$

$$X_{I2} = \frac{b\,(c_2 + 1/\tan\theta_2)}{1 + c_2^2}, \quad Y_{I2} = \frac{b\,c_2\,(c_2 + 1/\tan\theta_2)}{1 + c_2^2}, \qquad \text{where } c_2 = \frac{b/\tan\theta_2 - a}{a/\tan\theta_3 - b};$$

$$X_{I3} = \frac{b\,(c_3 + 1/\tan\theta_3)}{1 + c_3^2}, \quad Y_{I3} = \frac{b\,c_3\,(c_3 + 1/\tan\theta_3)}{1 + c_3^2}, \qquad \text{where } c_3 = \frac{b/\tan\theta_3 - a}{a/\tan\theta_4 - b};$$

$$X_{I4} = \frac{b\,(c_4 + 1/\tan\theta_4)}{1 + c_4^2}, \quad Y_{I4} = \frac{b\,c_4\,(c_4 + 1/\tan\theta_4)}{1 + c_4^2}, \qquad \text{where } c_4 = \frac{b/\tan\theta_4 - a}{a/\tan\theta_1 - b};$$

The centroid $P(X_1, Y_1)$ of the intersection points follows:

$$X_1 = (X_{I1} + X_{I2} + X_{I3} + X_{I4})/4, \qquad Y_1 = (Y_{I1} + Y_{I2} + Y_{I3} + Y_{I4})/4;$$

The centroid $P(X_1, Y_1)$ is the planar location of the omnidirectional vision sensor.

Using the ordinary two-point distance formula on the XOY plane, compute the distances $l_1$, $l_2$, $l_3$, $l_4$ from $P(X_1, Y_1)$ to the four markers L1, L2, L3, L4; finally

$$X_0 = \frac{l_1 + l_2 + l_3 + l_4}{4}.$$
An omnidirectional vision sensor self-navigation Z-axis positioning system: four colored markers forming a rectangle, L1, L2, L3, L4, are placed at the outer corners of the operation area; the omnidirectional vision sensor used to acquire the on-site omnidirectional image containing the images of the four colored markers is mounted on the operation vehicle; a microprocessor is provided on the operation vehicle; the microprocessor contains an image-processing unit that processes the image acquired by the omnidirectional vision sensor to obtain the position coordinates of the colored markers, and a computing unit that applies the omnidirectional vision sensor self-navigation Z-axis positioning method above and finally obtains the positioning coordinate value along the Z-axis.
The omnidirectional vision sensor has a hyperbolic mirror.
Beneficial effect:
In the omnidirectional vision sensor self-navigation Z-axis positioning method and positioning system of the present invention, the four markers around the operation area form a rectangle, and the length and width of the rectangle are measured. Image processing extracts the positions of the markers in the image, and from these the four azimuth angles formed in the image by each marker with the sensor projection center are obtained. A geometric algorithm then gives the absolute two-dimensional coordinates (X, Y) of the working machine in the spatial XY plane formed by the four markers, in a Cartesian coordinate system with one of the four markers as origin; in practice the marker at the lower-left corner of the image is taken as the origin.
The invention provides a Z-axis computation and positioning method consistent with the invention patent "An automatic navigating and positioning device and method" (application number 200910044412.4). It comprises deriving the relation between the device system height, the sensor-to-marker spatial distance, and the imaging distance; computing the imaging distance from a marker to the sensor projection center; computing the spatial distance between the sensor and a marker; and computing the Z-axis positioning value. It is characterized in that:

Deriving the relation between the device system height, the sensor-to-marker spatial distance, and the imaging distance comprises setting up the hyperbola equation of the sensor's hyperbolic mirror, applying the imaging principle of the omnidirectional vision sensor, and applying mathematical geometry. Setting up the hyperbola equation comprises transforming the plane hyperbola equation into a hyperbola equation involving the device system height. Applying the imaging principle comprises using the fact that incident rays from a spatial point, reflected by the hyperboloid, intersect at one point, the projection center, and then image on the imaging surface, and that a spatial straight line images as a circular arc on the imaging surface. Applying mathematical geometry comprises representing the imaging of a spatial point as a geometric figure and using the triangle relations within it to derive the relation between the device system height, the sensor-to-marker distance, and the imaging distance.

Computing the imaging distance from a marker to the sensor projection center comprises extracting the markers as in patent application 200910044412.4: the marker features are extracted through image processing, the centroid of the marker feature pixels represents the marker's position in the image, the imaging pixel distance from the marker to the projection center is computed, and the actual imaging distance is obtained from the pixel distance through the pixel-distance calibration. The pixel-distance calibration relates actual distance to pixel count experimentally, giving the actual distance represented by a single pixel; pixel distance multiplied by the single-pixel calibration distance gives the actual distance.

Computing the spatial distance between the sensor and a marker comprises obtaining the (X, Y) coordinate value on the XY plane as in patent application 200910044412.4 and applying the straight-line distance formula to the computed sensor plane position to obtain the spatial distance between the sensor and the marker.

Computing the Z-axis positioning value comprises using the obtained relation between the device system height and the sensor-to-marker spatial distance and imaging distance, together with the imaging distance from the markers to the projection center and the sensor-to-marker spatial distance, to compute the Z-axis positioning value.
In summary, the omnidirectional vision sensor self-navigation Z-axis computation and positioning method of the invention is consistent with the method of patent application 200910044412.4 ("An automatic navigating and positioning device and method"). As soon as a real-time image containing the markers is available, the marker features are extracted through image processing, and together with the planar positioning value obtained by the method of application 200910044412.4 the three-dimensional spatial localization is completed, achieving the same positioning function as GPS and truly realizing self-navigation positioning for automated machines and robots. The method is simple, runs fast, and is practical; the automatic navigating and positioning device is simple in structure and inexpensive, and is easy to popularize and apply.
Description of drawings
Fig. 1 is the point-imaging principle of the omnidirectional vision sensor of the invention;
Fig. 2 is a schematic of the relation between the system height of the invention and the sensor-to-marker distance and imaging distance;
Fig. 3 is a schematic of the calibrated height $H_s$ of the omnidirectional vision sensor in the embodiment;
Fig. 4 is the flow chart of the method of the invention;
Fig. 5 is a schematic of the computation of the spatial distance between the omnidirectional vision sensor and a marker in the embodiment.
Reference numerals: 1: omnidirectional vision sensor; 2: working machine; 3: imaging plane; 4: hyperboloid.
Embodiment
The present invention is explained in further detail below with reference to the drawings and a specific embodiment:
Embodiment 1:
As shown in Fig. 1, when the omnidirectional vision sensor images a spatial point P(X, Y, Z), the plane formed by P, its image point p(x, y), and the projection center is perpendicular to the horizontal plane XOY. The azimuth, relative to the X axis, of the projection onto the XOY plane of the line joining P(X, Y, Z) to the coordinate origin is α1, and the azimuth, relative to the x axis of the imaging plane, of the line joining p(x, y) to the imaging-plane origin is α2; α1 and α2 are identical, and both equal the angle θ in Fig. 1.
As shown in Fig. 2, define the coordinate system XZ. By the imaging principle of the omnidirectional vision sensor, for a ground point $P(X_0, Z_0)$ the incident ray i passes through the focus $O_M(0, H)$, is reflected at the point $M(x_m, z_m)$ on the hyperboloid, converges at $O_c$, and forms the image point $p(x_i, z_i)$ on the imaging surface; $O_c$ is the projection center and $f$ is the focal length. From the general hyperbola equation, equation (1) of the invention is derived:

$$\frac{(z_m - H + c)^2}{b^2} - \frac{x_m^2}{a^2} = 1; \qquad c = \sqrt{a^2 + b^2} \tag{1}$$
$a$, $b$ and $c$ are the structural parameters of the hyperboloid surface, for which the manufacturer normally supplies standard values; $H$ is the height of the omnidirectional vision sensor system above the ground. From the imaging principle, the cross-section through the spatial point and its image point can be represented in the coordinate system shown in Fig. 2. The distance from the sensor to $P(X_0, Z_0)$, projected onto the XOY plane, is defined as the sensor-to-marker distance $X_0$. (Note: Fig. 2 is a schematic for the derivation and shows only one marker; when there are four markers, the mean of the four marker distances, i.e. the mean of the four distance values from the markers to the omnidirectional vision sensor, is used.) The distance on the xoy plane from the image point $p(x_i, z_i)$ to the center of the sensor's imaging surface is defined as the imaging distance $x_i$, as shown in Fig. 2.
The points $O_M(0, H)$, $M(x_m, z_m)$ and $P(X_0, Z_0)$ lie on one straight line, which can be written as (2):

$$z_m = H - \frac{H}{X_0}\, x_m \tag{2}$$

From the similarity of the triangles in the geometry, (3) follows:

$$\frac{x_i}{x_m} = \frac{f}{z_m - H + 2c} \tag{3}$$

From (2) and (3), (4) is obtained:

$$x_m = \frac{2 c X_0 x_i}{X_0 f + x_i H} \tag{4}$$

Substituting (4) into (3) gives $z_m$:

$$z_m = \left(1 - \frac{2 c x_i}{X_0 f + x_i H}\right) H \tag{5}$$

Substituting (4) and (5) into (1) yields the following relation between the system height of the omnidirectional vision sensor and the sensor-to-marker distance and imaging distance:

$$\left(c - \frac{2 c x_i H}{X_0 f + x_i H}\right)^2 \frac{1}{b^2} - \left(\frac{2 c X_0 x_i}{X_0 f + x_i H}\right)^2 \frac{1}{a^2} = 1, \qquad (X_0,\ x_i > 0) \tag{6}$$
Therefore, by formula (6), $a$, $b$, $c$ and $f$ are fixed values; once the values of $x_i$ and $X_0$ are obtained, substituting them into (6) allows the computed height $H$ of the omnidirectional vision system at that moment to be solved for inversely. When the omnidirectional vision sensor is installed, its height above the ground is a fixed constant, defined on the Z axis of the coordinate system of Fig. 3 as the calibrated height $H_s$. The difference between the computed height and the calibrated height is the positioning coordinate value along the Z-axis, completing the Z-axis positioning of the working machine.
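Equation (6) is in fact a quadratic in $H$ once $a$, $b$, $c$, $f$, $x_i$ and $X_0$ are fixed: multiplying through by $(X_0 f + x_i H)^2$ and simplifying $c(X_0 f + x_i H) - 2 c x_i H = c(X_0 f - x_i H)$ gives $\frac{c^2}{b^2}(X_0 f - x_i H)^2 - \frac{4 c^2 X_0^2 x_i^2}{a^2} = (X_0 f + x_i H)^2$. The Python sketch below solves this quadratic; the closed-form rearrangement and the handling of the two roots are not spelled out in the patent, and consistent length units are assumed for all inputs.

```python
import math

def solve_height(a, b, f, x_i, X0):
    """Candidate heights H from relation (6), via the quadratic rearrangement above.

    a, b: structural parameters of the hyperboloid mirror (c = sqrt(a^2 + b^2));
    f: focal length; x_i: imaging distance; X0: sensor-to-marker plane distance.
    All inputs must share consistent length units.
    """
    c = math.sqrt(a * a + b * b)
    k = (c * c) / (b * b)               # c^2/b^2 = 1 + a^2/b^2 > 1
    u = X0 * f
    # (k - 1)*x_i^2*H^2 - 2*u*x_i*(k + 1)*H + (k - 1)*u^2 - 4*c^2*X0^2*x_i^2/a^2 = 0
    A = (k - 1.0) * x_i * x_i
    B = -2.0 * u * x_i * (k + 1.0)
    C = (k - 1.0) * u * u - 4.0 * c * c * X0 * X0 * x_i * x_i / (a * a)
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        raise ValueError("relation (6) has no real height for these inputs")
    r = math.sqrt(disc)
    return [h for h in ((-B - r) / (2.0 * A), (-B + r) / (2.0 * A)) if h > 0.0]
```

The quadratic can have two positive roots; a reasonable rule, used in the usage sketch at the end of the embodiment, is to keep the root nearest the calibrated mounting height.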
By convention, capital letters such as X, Y, Z denote spatial coordinate quantities, and lowercase letters such as x, y, z denote quantities related to the imaging surface.
The computation of the values $x_i$ and $X_0$ is explained next:
As shown in Fig. 4, the method of this embodiment for computing the marker positions in the image comprises setting up the artificial markers, switching on the marker power supply, image data input, image processing and computation, and storing the result. In detail:
Marker setup: artificial markers are made of light-transmitting, bright red glass material with a powered lamp fixture installed inside. Four red markers are placed at the outermost corners around the operation area, markers L1, L2, L3, L4, forming a rectangle, and the power switch is turned on to enhance the color brightness.
Image data input: the camera of the omnidirectional vision sensor captures in real time the full 360-degree image containing markers L1, L2, L3 and L4, which is input in real time through a USB interface into storage in the computer.
Image processing and computation: the stored omnidirectional image is loaded in the computer and the image coordinate system is defined, with the upper-left corner of the image as the origin (0, 0) and the pixel as the unit. Since the markers are red, the image processing is carried out on the red markers according to the marker color.
The red strength of the image pixels is computed with formula (7) in a program loop, and the maximum red strength $r_{max}$ over the image pixels is stored in memory.

According to formula (8), the threshold $t_1$ for extracting the red marker color feature is set in the computer. The image is scanned line by line; when the red strength of a pixel computed with formula (7) is greater than $t_1$, the pixel is extracted, so that all red feature pixels meeting this condition are extracted from the image:

$$r = R - (B + G)/2 - |B - G| \tag{7}$$

$$t_1 = r_{max} - N_1 \tag{8}$$

R, G, B: red, green and blue brightness; $r$: extracted red pixel strength; $r_{max}$: maximum red strength of the pixels in the image; $t_1$: threshold for extracting the red marker color feature; $N_1$: a test constant, $10 < N_1 < 50$.
A maximum red-pixel distance is set according to the largest pixel extent any single marker can occupy in the image. The Euclidean distances between the extracted red pixels are computed; all red pixels whose mutual Euclidean distances are less than the set maximum are judged to belong to one red feature-pixel region, and the pixels are thereby divided into four feature-pixel regions.
For the computation over a single feature region, the abscissas and ordinates of the region's pixels are extracted; the image coordinates of the i-th pixel are denoted $R_{ix}$ and $R_{iy}$, and the total number of pixels in the feature region is counted as $n_1$.

The mean coordinates of all pixels of the feature region, computed with formula (9), give the centroid of the region; the abscissa and ordinate of the centroid are denoted $X_r$ and $Y_r$. The coordinates $(X_r, Y_r)$ represent the position in the image of the corresponding red marker:

$$[X_r, Y_r] = \left[\ \sum_{i=1}^{n_1} R_{ix}/n_1,\ \ \sum_{i=1}^{n_1} R_{iy}/n_1\ \right] \tag{9}$$

where $R_{ix}$, $R_{iy}$ are the image coordinates of the i-th extracted red marker feature pixel.
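As a concrete illustration of formulas (7)-(9), the following sketch extracts the red feature pixels and their centroids. Connected-component labelling from scipy stands in here for the pairwise Euclidean-distance grouping described above, and the constant N1 = 30 is just one admissible choice in the stated range; neither substitution comes from the patent itself.

```python
import numpy as np
from scipy import ndimage  # connected-component labelling; stands in for the
                           # pairwise Euclidean-distance grouping in the text

N1 = 30  # test constant of formula (8); the text requires 10 < N1 < 50

def marker_centroids(img):
    """Centroids (x, y) of the four red markers in an H x W x 3 RGB image.

    Implements formulas (7)-(9): red strength r = R - (B+G)/2 - |B-G|,
    threshold t1 = r_max - N1, then per-region centroids. Returns None when
    the number of detected regions differs from 4, so the caller can
    re-acquire an image, as in the flow of Fig. 4.
    """
    R = img[..., 0].astype(float)
    G = img[..., 1].astype(float)
    B = img[..., 2].astype(float)
    r = R - (B + G) / 2.0 - np.abs(B - G)      # formula (7)
    t1 = r.max() - N1                          # formula (8)
    labels, n = ndimage.label(r > t1)          # group red pixels into regions
    if n != 4:
        return None
    cents = []
    for kk in range(1, n + 1):
        ys, xs = np.nonzero(labels == kk)      # pixel coordinates of region kk
        cents.append((xs.mean(), ys.mean()))   # formula (9): coordinate means
    return cents
```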
If the number of markers recognized in the image is not equal to 4, the process returns to the image input step. Through image processing, the azimuth angles $\theta_1$–$\theta_4$ formed in the omnidirectional image by every two adjacent marker image points with the sensor projection center are obtained. (In practice, as long as the sensor parameters are unchanged, the projection center of the omnidirectional vision sensor has a unique, fixed position in the image, which is entered into the program manually.)
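The patent does not spell out how the angles are computed from the centroids; a straightforward reading, sketched below with a hypothetical helper not named in the patent, takes differences of the polar angles of adjacent marker centroids about the fixed projection centre, assuming the centroids are ordered consistently (e.g. counterclockwise) around the rectangle:

```python
import math

def azimuth_angles(centroids, centre):
    """Angles theta1..theta4 (radians) subtended at the projection centre by
    adjacent marker pairs, from the marker centroids in image coordinates.

    centroids: [(x, y), ...] for L1..L4 in counterclockwise order;
    centre: (x, y) of the fixed projection centre in the image.
    """
    cx, cy = centre
    phis = [math.atan2(y - cy, x - cx) for x, y in centroids]  # polar angles
    thetas = []
    for i in range(4):
        d = phis[(i + 1) % 4] - phis[i]
        d = (d + 2.0 * math.pi) % (2.0 * math.pi)  # wrap into [0, 2*pi)
        thetas.append(d)
    return thetas  # for a sensor inside the rectangle these sum to 2*pi
```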
Once the azimuth angles $\theta_1$–$\theta_4$ between the markers and the sensor have been obtained, then, as in Fig. 5 and following the invention patent "An automatic navigating and positioning device and method" (application number 200910044412.4, publication number CN101660912), the rectangle formed by the markers L1–L4 has length $a$ and width $b$. The points L1 and L2 together with the angle $\theta_1$ define an arc $S_1$; in turn L2 and L3 with $\theta_2$ define $S_2$, L3 and L4 with $\theta_3$ define $S_3$, and L4 and L1 with $\theta_4$ define $S_4$. The arcs $S_1$–$S_4$ intersect pairwise, giving four intersection points; geometric analysis of one intersection point yields formula (10):

$$\tan\theta_1 = \frac{b X_{I1}}{X_{I1}^2 + Y_{I1}^2 - b Y_{I1}}, \qquad \tan\theta_2 = \frac{a Y_{I1}}{X_{I1}^2 + Y_{I1}^2 - a X_{I1}} \tag{10}$$

Transforming formula (10) gives:

$$X_{I1} = \frac{b\,(c + 1/\tan\theta_1)}{1 + c^2}, \qquad Y_{I1} = \frac{b\,c\,(c + 1/\tan\theta_1)}{1 + c^2} \tag{11}$$

where $c = \dfrac{b/\tan\theta_1 - a}{a/\tan\theta_2 - b}$ (here $c$ is the auxiliary coefficient of formula (11), not the mirror parameter of formula (1)).

The coordinates of the other three intersection points, $(X_{I2}, Y_{I2})$, $(X_{I3}, Y_{I3})$ and $(X_{I4}, Y_{I4})$, are obtained in the same way:

$$X_{I2} = \frac{b\,(c + 1/\tan\theta_2)}{1 + c^2}, \quad Y_{I2} = \frac{b\,c\,(c + 1/\tan\theta_2)}{1 + c^2}, \qquad c = \frac{b/\tan\theta_2 - a}{a/\tan\theta_3 - b}$$

$$X_{I3} = \frac{b\,(c + 1/\tan\theta_3)}{1 + c^2}, \quad Y_{I3} = \frac{b\,c\,(c + 1/\tan\theta_3)}{1 + c^2}, \qquad c = \frac{b/\tan\theta_3 - a}{a/\tan\theta_4 - b}$$

$$X_{I4} = \frac{b\,(c + 1/\tan\theta_4)}{1 + c^2}, \quad Y_{I4} = \frac{b\,c\,(c + 1/\tan\theta_4)}{1 + c^2}, \qquad c = \frac{b/\tan\theta_4 - a}{a/\tan\theta_1 - b}$$

The centroid $P(X_1, Y_1)$ of the intersection points follows:

$$X_1 = (X_{I1} + X_{I2} + X_{I3} + X_{I4})/4, \qquad Y_1 = (Y_{I1} + Y_{I2} + Y_{I3} + Y_{I4})/4 \tag{12}$$
The centroid $P(X_1, Y_1)$ is the planar location of the omnidirectional vision sensor.
After the planar location $P(X_1, Y_1)$ of the camera relative to the markers has been computed, the distances $l_1$, $l_2$, $l_3$ and $l_4$ from $P(X_1, Y_1)$ to the four markers L1, L2, L3 and L4 on the XOY plane are computed with the ordinary two-point distance formula; the coordinates of the markers are determined when the markers are set up. Then

$$X_0 = \frac{l_1 + l_2 + l_3 + l_4}{4}.$$
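The following sketch implements formulas (10)-(12) and the distance step. The marker coordinates assume the layout under which formula (10) holds, with L2 at the origin, L1 = (0, b), L3 = (a, 0) and L4 = (a, b); this layout is our reading of the construction, since the patent only states that one marker (the lower-left one in the image) is taken as origin.

```python
import math

def plane_position(thetas, a, b):
    """Planar location P(X1, Y1) and mean distance X0, per formulas (10)-(12).

    thetas: angles theta1..theta4 (radians) subtended at the sensor by the
    marker pairs (L1,L2), (L2,L3), (L3,L4), (L4,L1);
    a, b: length and width of the marker rectangle.
    """
    pts = []
    for i in range(4):
        t_i = thetas[i]
        t_j = thetas[(i + 1) % 4]
        c = (b / math.tan(t_i) - a) / (a / math.tan(t_j) - b)  # coefficient c_k
        s = (c + 1.0 / math.tan(t_i)) / (1.0 + c * c)
        pts.append((b * s, b * c * s))                         # formula (11)
    X1 = sum(x for x, _ in pts) / 4.0                          # formula (12)
    Y1 = sum(y for _, y in pts) / 4.0
    # Assumed marker layout consistent with formula (10): L2 at the origin.
    markers = [(0.0, b), (0.0, 0.0), (a, 0.0), (a, b)]         # L1, L2, L3, L4
    dists = [math.hypot(X1 - mx, Y1 - my) for mx, my in markers]
    return (X1, Y1), sum(dists) / 4.0                          # X0 = mean of l1..l4
```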
Then, with the centroid of the marker feature pixels obtained from formula (9) representing the position of each marker in the image, the imaging pixel distance from each marker to the sensor projection center is computed (in practice the projection center is a constant value, entered manually). With the calibration of 0.01389 inch/pixel used in this embodiment of the invention, the mean actual imaging distance $x_i$ from the markers to the projection center is computed.
With $x_i$ and $X_0$ obtained, formula (6) gives the actual computed height of the omnidirectional vision sensor of a working machine such as a robot or vehicle; the difference between the computed height and the calibrated height is the positioning coordinate value along the Z-axis, completing the Z-axis positioning of the working machine. Combined with the invention patent "An automatic navigating and positioning device and method" (application number 200910044412.4), the full three-dimensional (X, Y, Z) self-navigation localization of the omnidirectional vision sensor of working machines such as robots and vehicles is accomplished.
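Tying the sketches above together, a hypothetical end-to-end computation of the Z-axis value might look as follows; every numeric input here is illustrative only, not taken from the patent.

```python
import math

# Hypothetical inputs: azimuths from image processing (summing to 360 degrees
# for a sensor inside the rectangle), a 10 m x 6 m marker rectangle, and pixel
# distances measured from an omnidirectional image.
thetas = [math.radians(t) for t in (95.0, 85.0, 92.0, 88.0)]
(X1, Y1), X0 = plane_position(thetas, a=10.0, b=6.0)
x_i = imaging_distance([310.2, 295.8, 301.4, 318.9]) * 0.0254  # inches -> metres

Hs = 1.20                                      # calibrated mounting height, metres
roots = solve_height(a=0.03, b=0.04,           # mirror parameters, not the rectangle's
                     f=0.012, x_i=x_i, X0=X0)
H = min(roots, key=lambda h: abs(h - Hs))      # keep the physically plausible root
Z = H - Hs                                     # positioning coordinate along the Z-axis
```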

Claims (2)

1. An omnidirectional vision sensor self-navigation Z-axis positioning method, characterized in that:

four colored markers forming a rectangle, L1, L2, L3, L4, are placed at the outer corners of the operation area, and the omnidirectional vision sensor is mounted on the operation vehicle; the sensor acquires an on-site omnidirectional image containing the images of the four colored markers; the omnidirectional vision sensor has a hyperbolic mirror;

the relation between the computed height of the omnidirectional vision system, the distance between the sensor and a spatial point, and the imaging distance is:

$$\left(c - \frac{2 c x_i H}{X_0 f + x_i H}\right)^2 \frac{1}{b_1^2} - \left(\frac{2 c X_0 x_i}{X_0 f + x_i H}\right)^2 \frac{1}{a_1^2} = 1, \qquad X_0,\ x_i > 0;$$

where $a_1$, $b_1$, $c$ are the structural parameters of the hyperboloid surface, $f$ is the focal length, and $H$ is the computed height of the current omnidirectional vision system;

$X_0$ is the distance on the XOY plane between the omnidirectional vision sensor and the spatial point $P(X_0, Z_0)$, specifically the distance between the projection of $P(X_0, Z_0)$ onto the XOY plane and the projection of the sensor onto the XOY plane;

$x_i$ is the imaging distance, i.e. the mean of the four actual imaging distances from the colored markers to the projection center of the sensor; $x_i$ is obtained by computing the imaging pixel distance from each marker to the projection center and converting pixel distance to actual distance through the pixel calibration;

the computed height $H$ of the current omnidirectional vision system is calculated; finally, the difference between the computed height $H$ and the calibrated height $H_s$ is the positioning coordinate value along the Z-axis, which completes the Z-axis positioning of the operation machinery; the calibrated height $H_s$ is the height of the omnidirectional vision sensor above the ground when it is installed.
2. The omnidirectional vision sensor self-navigation Z-axis positioning method according to claim 1, characterized in that:

$x_i$ is obtained as follows: with a pixel calibration of 0.01389 inch/pixel, the actual imaging distances from the four markers to the sensor projection center are computed first, and their mean is taken as $x_i$;

$X_0$ is obtained as follows: from the four image positions of the colored markers and the image position of the sensor projection center, compute the azimuth angles $\theta_1$–$\theta_4$ subtended at the projection center, in the omnidirectional image, by each pair of adjacent colored markers;

the rectangle formed by the four colored markers L1–L4 around the operation area has length $a$ and width $b$; by the geometric construction of circular arcs, the points L1 and L2 together with the angle $\theta_1$ define an arc $S_1$; likewise L2 and L3 with $\theta_2$ define $S_2$, L3 and L4 with $\theta_3$ define $S_3$, and L4 and L1 with $\theta_4$ define $S_4$; the arcs $S_1$–$S_4$ intersect pairwise in the order $S_1$ with $S_2$, $S_2$ with $S_3$, $S_3$ with $S_4$, and $S_4$ with $S_1$, giving four intersection points with coordinates:

$$X_{I1} = \frac{b\,(c_1 + 1/\tan\theta_1)}{1 + c_1^2}, \quad Y_{I1} = \frac{b\,c_1\,(c_1 + 1/\tan\theta_1)}{1 + c_1^2}, \qquad \text{where } c_1 = \frac{b/\tan\theta_1 - a}{a/\tan\theta_2 - b};$$

$$X_{I2} = \frac{b\,(c_2 + 1/\tan\theta_2)}{1 + c_2^2}, \quad Y_{I2} = \frac{b\,c_2\,(c_2 + 1/\tan\theta_2)}{1 + c_2^2}, \qquad \text{where } c_2 = \frac{b/\tan\theta_2 - a}{a/\tan\theta_3 - b};$$

$$X_{I3} = \frac{b\,(c_3 + 1/\tan\theta_3)}{1 + c_3^2}, \quad Y_{I3} = \frac{b\,c_3\,(c_3 + 1/\tan\theta_3)}{1 + c_3^2}, \qquad \text{where } c_3 = \frac{b/\tan\theta_3 - a}{a/\tan\theta_4 - b};$$

$$X_{I4} = \frac{b\,(c_4 + 1/\tan\theta_4)}{1 + c_4^2}, \quad Y_{I4} = \frac{b\,c_4\,(c_4 + 1/\tan\theta_4)}{1 + c_4^2}, \qquad \text{where } c_4 = \frac{b/\tan\theta_4 - a}{a/\tan\theta_1 - b};$$

the centroid $P(X_1, Y_1)$ of the intersection points follows:

$$X_1 = (X_{I1} + X_{I2} + X_{I3} + X_{I4})/4, \qquad Y_1 = (Y_{I1} + Y_{I2} + Y_{I3} + Y_{I4})/4;$$

the centroid $P(X_1, Y_1)$ is the planar location of the omnidirectional vision sensor;

using the ordinary two-point distance formula on the XOY plane, compute the distances $l_1$, $l_2$, $l_3$, $l_4$ from $P(X_1, Y_1)$ to the four markers L1, L2, L3, L4; finally

$$X_0 = \frac{l_1 + l_2 + l_3 + l_4}{4}.$$
CN201110006181A 2011-01-12 2011-01-12 Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof Active CN102168973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110006181A CN102168973B (en) 2011-01-12 2011-01-12 Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110006181A CN102168973B (en) 2011-01-12 2011-01-12 Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof

Publications (2)

Publication Number Publication Date
CN102168973A CN102168973A (en) 2011-08-31
CN102168973B true CN102168973B (en) 2012-10-03

Family

ID=44490225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110006181A Active CN102168973B (en) 2011-01-12 2011-01-12 Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof

Country Status (1)

Country Link
CN (1) CN102168973B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123682B (en) * 2013-01-17 2015-09-16 无锡普智联科高新技术有限公司 The mobile robot positioning system of rule-based graphic code composite label and method
CN106949881B (en) * 2017-02-24 2019-04-30 浙江大学 A kind of mobile robot fast vision localization method
CN107252785B (en) * 2017-06-29 2019-05-10 顺丰速运有限公司 A kind of express mail grasping means applied to quick despatch robot piece supplying
CN110045365B (en) * 2019-03-26 2023-03-14 西北工业大学 Image target positioning method based on radar information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4459155B2 (en) * 2005-11-14 2010-04-28 株式会社東芝 Optical position measuring device
CN101408422B (en) * 2008-10-16 2010-09-15 浙江工业大学 Traffic accident on-site mapper based on binocular tridimensional all-directional vision
CN101660912B (en) * 2009-09-25 2012-12-05 湖南农业大学 Automatic navigating and positioning device and method

Also Published As

Publication number Publication date
CN102168973A (en) 2011-08-31

Similar Documents

Publication Publication Date Title
CN104200086B (en) Wide-baseline visible light camera pose estimation method
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN105678785B (en) A kind of laser and the scaling method of camera relative pose relation
CN109685855B (en) Camera calibration optimization method under road cloud monitoring platform
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
CN104217439B (en) Indoor visual positioning system and method
CN108594245A (en) A kind of object movement monitoring system and method
CN102183250B (en) Automatic navigation and positioning device and method for field road of agricultural machinery
CN103196370B (en) Measuring method and measuring device of conduit connector space pose parameters
CN103994762B (en) Method for positioning mobile robot based on data matrix code
CN108388244A (en) Mobile-robot system, parking scheme based on artificial landmark and storage medium
CN106651990A (en) Indoor map construction method and indoor map-based indoor locating method
CN103971404A (en) 3D real-scene copying device having high cost performance
CN103759669A (en) Monocular vision measuring method for large parts
CN101660912B (en) Automatic navigating and positioning device and method
CN102168973B (en) Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN104318566B (en) Can return to the new multi-view images plumb line path matching method of multiple height values
CN115984766A (en) Rapid monocular vision three-dimensional target detection method for underground coal mine
CN102129679B (en) Local positioning system and method
CN101620672A (en) Method for positioning and identifying three-dimensional buildings on the ground by using three-dimensional landmarks
CN208350997U (en) A kind of object movement monitoring system
CN105424059B (en) Wide baseline near infrared camera position and orientation estimation method
CN110415299B (en) Vehicle position estimation method based on set guideboard under motion constraint

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant