CN102168973A - Automatic navigating Z-axis positioning method for omni-directional vision sensor and positioning system thereof


Info

Publication number
CN102168973A
Authority
CN
China
Prior art keywords
vision sensor
theta
omnibearing vision
tan
coloured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110006181
Other languages
Chinese (zh)
Other versions
CN102168973B (en)
Inventor
Li Ming
Liu Zhonghua
Huang Jian'an
Dai Sihui
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Agricultural University
Original Assignee
Hunan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Agricultural University filed Critical Hunan Agricultural University
Priority to CN201110006181A priority Critical patent/CN102168973B/en
Publication of CN102168973A publication Critical patent/CN102168973A/en
Application granted granted Critical
Publication of CN102168973B publication Critical patent/CN102168973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Navigation (AREA)

Abstract

The invention discloses an automatic navigating Z-axis positioning method for an omni-directional vision sensor and a positioning system thereof. The method comprises the following steps: placing four colored markers, which together form a rectangle, at the outer corners around the work area; mounting the omni-directional vision sensor on a work vehicle; acquiring with the sensor an on-site omnidirectional image containing the images of the four colored markers; calculating the height H of the current omnidirectional vision system according to a formula; and finally calculating the difference between the height H and a standard height H_s, which is the positioning coordinate along the Z-axis, thereby completing the Z-axis positioning of the work machine. The device and method position quickly, apply over a wide range, are highly practical, and are competitive in cost.

Description

Automatic navigating Z-axis positioning method for an omnidirectional vision sensor and positioning system thereof
Technical field
The present invention relates to an automatic-navigation Z-axis positioning method for an omnidirectional vision sensor and to a positioning system thereof.
Background art
At present, ever higher degrees of automation are demanded of machinery, and automated operation and its robots are the direction of future mechanization. Positioning is extremely important for automatic work vehicles and robots: only when a vehicle or robot knows its own position during automatic operation can it decide how to execute the next task as required. GPS is at present the only commercialized and most widely used positioning system, but its accuracy depends on the work location. In streets among high-rise buildings or in mountainous areas, obstacles such as houses, woods and mountains block the microwave signals and accuracy drops markedly; in particular, GPS cannot be used indoors. Localization by vision, as in mammals, is one of the current focuses of robot localization research. Vision-sensor localization methods divide mainly into geometric localization and topological localization. Typical geometric localization computes the absolute position of the robot by recognizing marker positions in the 2D plane through image processing, or by image matching. Topological localization finds an abstract position of the robot on the basis of a topological map; it has a wide range of application and needs no artificial markers, but building the topological map is generally complex, the image computation load is large, and the running speed is slow. Geometric localization with artificial markers is simple and is widely used in robot localization. Geometric absolute positioning with an ordinary vision sensor, however, generally yields only a planar result, whereas GPS can give the robot's position in three-dimensional space. In real, rough working environments, 3D positioning is necessary and allows the ground conditions around the robot to be judged effectively. In recent years RTK-GPS, a higher-precision development of GPS technology, has appeared, but it is expensive. Ordinary vehicles and work machines, especially agricultural machinery and farm transport vehicles, operate in rather harsh environments, above all indoors, where GPS technology may not achieve automatic-navigation positioning.
The omnidirectional vision sensor is a novel type of vision sensor. It provides rich information over a 360-degree range, its image does not change with the rotation of the sensor, it helps reduce some of the negative effects of wheel slip and vibration, and it is inexpensive, so it is widely used in the robotics field. Invention patent application No. 200910044412.4, "Automatic navigating and positioning device and method", discloses the computation of the two plane dimensions; to achieve full three-dimensional positioning, the Z-axis positioning must also be completed.
Summary of the invention
The technical problem to be solved by the present invention is to propose an automatic-navigation Z-axis positioning method for an omnidirectional vision sensor, and a positioning system thereof, which achieve positioning along the Z-axis and thereby complete three-dimensional positioning in space on the basis of the existing two-dimensional positioning. The method and apparatus are fast in positioning, wide in application range, highly practical, and competitive in price.
The technical solution of the present invention is as follows:
An automatic-navigation Z-axis positioning method for an omnidirectional vision sensor:
Four colored markers forming a rectangle, L1, L2, L3 and L4, are placed at the outer corners of the work area, and the omnidirectional vision sensor is mounted on the work vehicle; the omnidirectional vision sensor acquires an on-site omnidirectional image containing the images of the four colored markers.
The relation between the omnidirectional vision system height and the sensor-to-marker distance and imaging distance is:

$$\left(c-\frac{2c\,x_iH}{X_0f+x_iH}\right)^{2}\!\Big/\,b_1^{2}-\left(\frac{2c\,X_0x_i}{X_0f+x_iH}\right)^{2}\!\Big/\,a_1^{2}=1,\qquad X_0,\;x_i>0;$$
where a_1, b_1 and c are the structural parameters of the hyperboloid surface; f is the focal length; H is the computed height of the current omnidirectional vision system;
X_0 is the distance on the XOY plane from the omnidirectional vision sensor to the space point P(X_0, Z_0), i.e. the distance between the projection of P(X_0, Z_0) onto the XOY plane and the projection of the omnidirectional vision sensor onto the XOY plane;
x_i is the imaging distance, i.e. the mean of the actual imaging distances from the four colored markers to the projection center of the omnidirectional vision sensor; x_i is obtained by computing the imaging pixel distance from each marker to the projection center and converting it to an actual distance with the calibrated pixel pitch.
The computed height H of the current omnidirectional vision system is calculated; finally, the difference between the computed height H and the calibrated height H_s is the positioning coordinate along the Z-axis, which completes the Z-axis positioning of the work machine.
The four colored markers are located at the corner positions of the bounding rectangle of the work area.
The position of the projection center of the omnidirectional vision sensor in the omnidirectional image is fixed.
x_i is obtained as follows: with a calibrated pixel pitch of 0.01389 inch/pixel, the actual imaging distance from each of the four markers to the projection center of the omnidirectional vision sensor is computed, and the mean of the four is taken as x_i.
X_0 is obtained as follows:
From the four positions of the colored markers in the image and the position of the sensor projection center in the image, the azimuth angles θ_1–θ_4 formed in the omnidirectional image between every two adjacent colored markers and the projection center are computed.
The rectangle formed by the four colored markers L1–L4 around the work area has length a and width b. By the arc-construction principle of geometry, taking θ_1 as the angle subtended by the marker points L1 and L2, an arc S_1 is constructed; likewise L2 and L3 with θ_2 give arc S_2, L3 and L4 with θ_3 give arc S_3, and L4 and L1 with θ_4 give arc S_4. The arcs S_1–S_4 intersect pairwise, giving four intersection points with coordinates:
$$\tan\theta_1=\frac{bX_{I1}}{X_{I1}^2+Y_{I1}^2-bY_{I1}},\qquad \tan\theta_2=\frac{aY_{I1}}{X_{I1}^2+Y_{I1}^2-aX_{I1}};$$

$$X_{I1}=\frac{b\left(c_1+1/\tan\theta_1\right)}{1+c_1^2},\qquad Y_{I1}=\frac{b\,c_1\left(c_1+1/\tan\theta_1\right)}{1+c_1^2},\qquad c_1=\frac{b/\tan\theta_1-a}{a/\tan\theta_2-b};$$

$$X_{I2}=\frac{b\left(c_2+1/\tan\theta_2\right)}{1+c_2^2},\qquad Y_{I2}=\frac{b\,c_2\left(c_2+1/\tan\theta_2\right)}{1+c_2^2},\qquad c_2=\frac{b/\tan\theta_2-a}{a/\tan\theta_3-b};$$

$$X_{I3}=\frac{b\left(c_3+1/\tan\theta_3\right)}{1+c_3^2},\qquad Y_{I3}=\frac{b\,c_3\left(c_3+1/\tan\theta_3\right)}{1+c_3^2},\qquad c_3=\frac{b/\tan\theta_3-a}{a/\tan\theta_4-b};$$

$$X_{I4}=\frac{b\left(c_4+1/\tan\theta_4\right)}{1+c_4^2},\qquad Y_{I4}=\frac{b\,c_4\left(c_4+1/\tan\theta_4\right)}{1+c_4^2},\qquad c_4=\frac{b/\tan\theta_4-a}{a/\tan\theta_1-b};$$
The centroid P(X_1, Y_1) of the intersection points is:

$$X_1=(X_{I1}+X_{I2}+X_{I3}+X_{I4})/4,\qquad Y_1=(Y_{I1}+Y_{I2}+Y_{I3}+Y_{I4})/4;$$

The centroid P(X_1, Y_1) is the planar position of the omnidirectional vision sensor.
The distances l_1, l_2, l_3 and l_4 from the point P(X_1, Y_1) to the four markers L1, L2, L3 and L4 in the XOY plane are computed with the ordinary two-point distance formula; finally

$$X_0=\frac{l_1+l_2+l_3+l_4}{4}.$$
An automatic-navigation Z-axis positioning system for an omnidirectional vision sensor: four colored markers forming a rectangle, L1, L2, L3 and L4, are placed at the outer corners of the work area; an omnidirectional vision sensor for acquiring the on-site omnidirectional image containing the images of the four colored markers is mounted on the work vehicle. A microprocessor is provided on the work vehicle; the microprocessor contains an image-processing unit for processing the image acquired by the omnidirectional vision sensor to obtain the colored-marker position coordinates, and a computing unit for finally obtaining the Z-axis positioning coordinate according to the aforementioned automatic-navigation Z-axis positioning method.
The omnidirectional vision sensor has a hyperbolic mirror.
Beneficial effects:
In the automatic-navigation Z-axis positioning method and system of the present invention, the work area is marked out as a rectangle with four markers and the length and width of the rectangle are measured. The marker positions in the image are extracted by image processing, the four azimuth angles formed in the image by each marker pair and the sensor projection center are obtained, and the absolute two-dimensional coordinates (X, Y) of the work machine in the plane XY spanned by the four markers are obtained by a geometric algorithm. A Cartesian coordinate system is adopted, with one of the four markers as the origin; in practice, the marker at the lower-left corner of the image is taken as the origin.
The invention provides a Z-axis positioning calculation method consistent with invention patent application No. 200910044412.4, "Automatic navigating and positioning device and method", comprising: deriving the relation between the device system height, the sensor-to-marker spatial distance, and the imaging distance; computing the imaging distance from the markers to the sensor projection center; computing the spatial distance between the sensor and the markers; and computing the Z-axis positioning value. It is characterized in that:
Said derivation of the relation between the device system height and the sensor-to-marker distance and imaging distance comprises establishing the hyperbola equation of the sensor's hyperbolic mirror, applying the imaging principle of the omnidirectional vision sensor, and using mathematical geometry. Establishing the hyperbola equation comprises transforming the plane hyperbola equation into a hyperbola equation related to the device system height. Applying the imaging principle comprises the fact that incident rays from space points are reflected by the hyperboloid onto the imaging plane and, after reflection, intersect at one point, the projection center; a straight line in space is imaged as a circular arc on the imaging plane. The mathematical geometry comprises deriving, via the triangle relations in the geometric representation of the imaging of a space point, the relation between the device system height, the sensor-to-marker distance, and the imaging distance;
Said computing of the imaging distance from the markers to the sensor projection center comprises extracting the markers according to patent application No. 200910044412.4, "Automatic navigating and positioning device and method": the marker features are extracted by image processing, the centroid of the feature pixels represents the marker position in the image, the imaging pixel distance from the marker to the projection center is computed, and the actual imaging distance is obtained from the pixel distance by calibration. The pixel-distance calibration experimentally establishes the relation between actual distance and pixel count, yielding the calibrated actual distance of a single pixel, so that the pixel distance multiplied by the single-pixel calibrated distance gives the actual distance.
Said computing of the spatial distance between the sensor and the markers comprises obtaining the (X, Y) coordinates on the XY plane according to patent application No. 200910044412.4, and then computing the spatial distance between the sensor and the markers from the obtained sensor plane position using the straight-line distance equation;
Said computing of the Z-axis positioning value comprises using the derived relation between the device system height and the sensor-to-marker distance and imaging distance, together with the imaging distance from the markers to the projection center and the sensor-to-marker spatial distance, to calculate the Z-axis positioning value.
In summary, the automatic-navigation Z-axis positioning method of the present invention is consistent with patent application No. 200910044412.4, "Automatic navigating and positioning device and method". As long as a real-time image containing the markers is provided, the marker features are extracted by image processing and combined with the planar positioning values obtained per application No. 200910044412.4, the three-dimensional positioning in space is completed, achieving a positioning function equivalent to GPS and truly realizing automatic-navigation positioning for automated machines or robots. The method is simple, fast in operation, and practical; the positioning device is simple in structure and low in cost, and is easy to popularize and apply.
Description of drawings
Fig. 1 shows the point-imaging principle of the omnidirectional vision sensor of the present invention;
Fig. 2 is a schematic diagram of the relation between the system height of the present invention and the sensor-to-marker distance and imaging distance;
Fig. 3 is a schematic diagram of the calibrated height H_s of the omnidirectional vision sensor in the embodiment of the present invention;
Fig. 4 is the flowchart of the method of the present invention;
Fig. 5 is a schematic diagram of the computation of the spatial distance between the omnidirectional vision sensor and the markers in the embodiment of the present invention.
Reference numerals: 1 - omnidirectional vision sensor; 2 - work machine; 3 - imaging plane; 4 - hyperboloid.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments:
Embodiment 1:
As shown in Fig. 1, during imaging by the omnidirectional vision sensor, a space point P(X, Y, Z), its image point p(x, y) and the projection center lie in a plane perpendicular to the horizontal plane (XOY). The azimuth α1 of the projection onto the XOY plane of the line from P(X, Y, Z) to the coordinate origin, measured from the X-axis, equals the azimuth α2 of the line from p(x, y) to the imaging-plane origin, measured from the imaging-plane x-axis; both are the angle θ in Fig. 1.
As shown in Fig. 2, define the coordinate system XZ. By the imaging principle of the omnidirectional vision sensor, an incident ray i from a ground point P(X_0, Z_0) toward the focus O_M(0, H) is reflected at the point M(x_m, z_m) on the hyperboloid, converges at O_c, and forms the image point p(x_i, z_i) on the imaging plane; O_c is the projection center and f is the focal length. Starting from the general hyperbola equation

$$\frac{z^2}{b^2}-\frac{x^2}{a^2}=1,$$

the equation (1) used by the invention is derived by shifting the curve so that the upper focus lies at the height H:

$$\frac{(z_m-H+c)^2}{b^2}-\frac{x_m^2}{a^2}=1;\qquad c=\sqrt{a^2+b^2}\tag{1}$$
Here a, b, c are the structural parameters of the hyperboloid surface, for which manufacturers generally provide standard values; H is the height of the omnidirectional vision sensor system above the ground. From the imaging principle, the cross-section through a space point and its image point can be represented in the coordinate system shown in Fig. 2. Define X_0 as the distance, projected onto the XOY plane, between the omnidirectional vision sensor and P(X_0, Z_0), i.e. between the sensor and the marker. (Note: Fig. 2 is a schematic for the derivation and shows only a single marker; if there are four markers, the mean of the four marker distances is used, i.e. the mean of the four marker-to-sensor distance values.) Define the imaging distance x_i as the distance in the xoy plane from the image point p(x_i, z_i) to the center of the sensor imaging plane, as shown in Fig. 2.
The points O_M(0, H), M(x_m, z_m) and P(X_0, Z_0) lie on one straight line, which can be written as (2):

$$z_m=H-\frac{H}{X_0}\,x_m\tag{2}$$
From the similarity of the triangles in the geometry, (3) follows:

$$\frac{x_i}{x_m}=\frac{f}{z_m-H+2c}\tag{3}$$
Combining (2) and (3) gives (4):

$$x_m=\frac{2c\,X_0x_i}{X_0f+x_iH}\tag{4}$$
Substituting (4) back into (2) gives z_m:

$$z_m=\left(1-\frac{2c\,x_i}{X_0f+x_iH}\right)H\tag{5}$$
Substituting (4) and (5) into (1) yields the following relation between the omnidirectional vision system height and the sensor-to-marker distance and imaging distance:

$$\left(c-\frac{2c\,x_iH}{X_0f+x_iH}\right)^{2}\!\Big/\,b^{2}-\left(\frac{2c\,X_0x_i}{X_0f+x_iH}\right)^{2}\!\Big/\,a^{2}=1,\qquad(X_0,\;x_i>0)\tag{6}$$
Therefore, in (6) a, b, c and f are fixed values; once the values of x_i and X_0 are obtained and substituted into (6), solving inversely gives the computed height H of the omnidirectional vision system at that moment. When the omnidirectional vision sensor is installed, its height above the ground is a fixed constant, defined on the Z-axis of the coordinate system of Fig. 3 as the calibrated height H_s. The difference between the computed height and the calibrated height is the positioning coordinate along the Z-axis, which completes the Z-axis positioning of the work machine.
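For illustration, relation (6) can be inverted in closed form: clearing the common denominator D = X_0 f + x_i H turns (6) into a quadratic in H. The Python sketch below is not part of the patent; the parameter names follow the text, and taking the smaller positive root as the physical mounting height is this sketch's assumption (it recovered a forward-simulated test geometry in a quick numerical check, but the patent does not specify root selection).

```python
import math

def solve_height(a, b, c, f, X0, xi):
    """Invert relation (6) for the system height H.

    Clearing D = X0*f + xi*H from (6), with k = c^2/b^2, gives the quadratic
      xi^2 (k-1) H^2 - 2 X0 f xi (k+1) H
        + X0^2 f^2 (k-1) - 4 c^2 X0^2 xi^2 / a^2 = 0.
    """
    k = (c * c) / (b * b)
    A = xi * xi * (k - 1.0)
    B = -2.0 * X0 * f * xi * (k + 1.0)
    C = X0 * X0 * f * f * (k - 1.0) - 4.0 * (c * X0 * xi / a) ** 2
    disc = B * B - 4.0 * A * C
    roots = [(-B + s * math.sqrt(disc)) / (2.0 * A) for s in (1.0, -1.0)]
    # Assumption: the smaller positive root is the physical mounting height.
    return min(r for r in roots if r > 0.0)

def z_coordinate(H, Hs):
    """Z-axis positioning coordinate: computed height minus calibrated height."""
    return H - Hs
```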
Convention: capital letters such as X, Y, Z denote spatial coordinate quantities; lowercase letters such as x, y, z denote quantities on the imaging plane.
The computation of the values x_i and X_0 is illustrated next:
As shown in Fig. 4, the method of this embodiment for computing the marker positions in the image comprises setting up the artificial markers, switching on the marker power supply, inputting the image data, image processing and computation, and storing the results. The details are as follows:
Marker setup: the artificial markers are made of translucent, bright red colored glass with a powered lamp fitted inside. Four red markers, L1, L2, L3 and L4, are placed at the outermost corners around the work area so that they form a rectangle, and the power switch is turned on to increase the color brightness.
Image data input: the camera of the omnidirectional vision sensor captures in real time the full 360-degree image containing markers L1, L2, L3 and L4, which is input in real time through a USB interface into the memory of the computer.
Image processing and computation: the stored omnidirectional image is loaded in the computer and the image coordinate system is defined, with the upper-left corner of the image as the origin (0, 0) and the pixel as the unit. Since the markers are red, the image processing is carried out on the red marker color.
Formula 7 is used to compute the red strength of each image pixel; the program loops over the image, and the maximum red strength among the image pixels, r_max, is stored in memory.
According to formula 8, the threshold t_1 for extracting the red marker feature color is set in the computer; the image is scanned line by line, and every pixel whose red strength computed by formula 7 exceeds t_1 is extracted from the image;
$$r=R-(B+G)/2-|B-G|\tag{7}$$

$$t_1=r_{\max}-N_1\tag{8}$$
R, G, B: red, green and blue brightness; r: red strength of an extracted pixel; r_max: maximum red strength among the image pixels; t_1: threshold for extracting the red marker feature color; N_1: an experimentally determined constant, 10 < N_1 < 50;
Region splitting: according to the largest red-pixel distance that any single marker can span in the image, a red-pixel distance maximum is set; the Euclidean distances between the extracted red pixels are computed, and all red pixels whose mutual Euclidean distances are smaller than the set maximum are judged to form one red feature-pixel region, so that the pixels are split into 4 feature-pixel regions;
Per-region computation: the abscissas and ordinates of the pixels of a feature region are extracted; the image coordinates of the i-th pixel are denoted Rix and Riy, and the total number of pixels in the feature-pixel region is counted as n_1;
Formula 9 computes the mean coordinates of all pixels of the feature-pixel region, i.e. the centroid of the region; the abscissa and ordinate of the centroid are denoted Xr and Yr, and the coordinates (Xr, Yr) represent the position in the image of the corresponding red marker:

$$[Xr,\;Yr]=\left[\sum_{i=1}^{n_1}Rix/n_1,\;\sum_{i=1}^{n_1}Riy/n_1\right]\tag{9}$$

where Rix, Riy are the image coordinates of the i-th extracted red feature pixel.
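As an illustration of formulas 7–9, the following sketch (not from the patent; it assumes the image is an H x W x 3 RGB numpy array, and the default N1=30 is just a value inside the stated 10–50 range) computes the red strength, thresholds with t_1 = r_max - N_1, splits the extracted pixels into four regions, and returns the region centroids. The greedy farthest-point grouping is a stand-in for the Euclidean-distance rule described above.

```python
import numpy as np

def red_strength(img):
    """Formula 7: r = R - (B + G)/2 - |B - G| for every pixel."""
    R = img[..., 0].astype(float)
    G = img[..., 1].astype(float)
    B = img[..., 2].astype(float)
    return R - (B + G) / 2.0 - np.abs(B - G)

def marker_centroids(img, N1=30, n_markers=4):
    """Threshold with t1 = r_max - N1 (formula 8), group the red pixels into
    marker regions, and return each region's centroid (formula 9)."""
    r = red_strength(img)
    t1 = r.max() - N1                        # formula 8
    ys, xs = np.nonzero(r > t1)              # extracted red feature pixels
    pts = np.stack([xs, ys], axis=1).astype(float)
    # Pick n_markers seed pixels that are mutually far apart, then assign
    # every red pixel to its nearest seed (stand-in for the distance rule).
    seeds = [pts[0]]
    while len(seeds) < n_markers:
        d = np.min([np.linalg.norm(pts - s, axis=1) for s in seeds], axis=0)
        seeds.append(pts[int(np.argmax(d))])
    labels = np.argmin([np.linalg.norm(pts - s, axis=1) for s in seeds], axis=0)
    # Formula 9: per-region mean coordinates (Xr, Yr).
    return [pts[labels == k].mean(axis=0) for k in range(n_markers)]
```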
If the number of markers recognized in the image is not equal to 4, the procedure returns to image input. Image processing then yields the azimuth angles θ_1–θ_4 formed in the omnidirectional image between every two adjacent marker image points and the projection center of the omnidirectional vision sensor. (In practice, as long as the sensor parameters are unchanged, the position of the projection center in the image is uniquely fixed; it is entered into the program manually.)
After the azimuths θ_1–θ_4 of the markers relative to the sensor have been obtained, as shown in Fig. 5 and following invention patent application No. 200910044412.4 (publication No. CN101660912), "Automatic navigating and positioning device and method": the rectangle formed by the markers L1–L4 has length a and width b; taking θ_1 as the angle subtended by the marker points L1 and L2, an arc S_1 is constructed; likewise L2 and L3 with θ_2 give arc S_2, L3 and L4 with θ_3 give arc S_3, and L4 and L1 with θ_4 give arc S_4. The arcs S_1–S_4 intersect pairwise, giving four intersection points; geometric analysis of an intersection point yields formula (10):
$$\tan\theta_1=\frac{bX_{I1}}{X_{I1}^2+Y_{I1}^2-bY_{I1}},\qquad \tan\theta_2=\frac{aY_{I1}}{X_{I1}^2+Y_{I1}^2-aX_{I1}}\tag{10}$$
Transforming formula 10 gives:
$$X_{I1}=\frac{b\left(c_1+1/\tan\theta_1\right)}{1+c_1^2},\qquad Y_{I1}=\frac{b\,c_1\left(c_1+1/\tan\theta_1\right)}{1+c_1^2}\tag{11}$$

where

$$c_1=\frac{b/\tan\theta_1-a}{a/\tan\theta_2-b}.$$
The coordinates of the other three intersection points, (X_I2, Y_I2), (X_I3, Y_I3) and (X_I4, Y_I4), are obtained in the same way:
$$X_{I2}=\frac{b\left(c_2+1/\tan\theta_2\right)}{1+c_2^2},\qquad Y_{I2}=\frac{b\,c_2\left(c_2+1/\tan\theta_2\right)}{1+c_2^2},\qquad c_2=\frac{b/\tan\theta_2-a}{a/\tan\theta_3-b};$$

$$X_{I3}=\frac{b\left(c_3+1/\tan\theta_3\right)}{1+c_3^2},\qquad Y_{I3}=\frac{b\,c_3\left(c_3+1/\tan\theta_3\right)}{1+c_3^2},\qquad c_3=\frac{b/\tan\theta_3-a}{a/\tan\theta_4-b};$$

$$X_{I4}=\frac{b\left(c_4+1/\tan\theta_4\right)}{1+c_4^2},\qquad Y_{I4}=\frac{b\,c_4\left(c_4+1/\tan\theta_4\right)}{1+c_4^2},\qquad c_4=\frac{b/\tan\theta_4-a}{a/\tan\theta_1-b}.$$
The centroid P(X_1, Y_1) of the intersection points is then:

$$X_1=(X_{I1}+X_{I2}+X_{I3}+X_{I4})/4,\qquad Y_1=(Y_{I1}+Y_{I2}+Y_{I3}+Y_{I4})/4\tag{12}$$
The centroid P(X_1, Y_1) is the planar position of the omnidirectional vision sensor.
After the planar position P(X_1, Y_1) of the camera relative to the markers has been computed, the distances l_1, l_2, l_3 and l_4 from P(X_1, Y_1) to the four markers L1, L2, L3 and L4 in the XOY plane are computed with the ordinary two-point distance formula; the coordinates of the markers are known from how they were set out. Then

$$X_0=\frac{l_1+l_2+l_3+l_4}{4}.$$
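A compact sketch of this plane-fix step (formulas 10–12 and the mean distance X_0). It assumes the markers L1–L4 sit at the corners (0,0), (a,0), (a,b), (0,b) of the work rectangle and that the marker image points are ordered so adjacent markers are consecutive; the corner layout and all function names are illustrative, not from the patent.

```python
import math

def azimuth_gaps(marker_px, centre_px):
    """theta_1..theta_4: angles subtended at the projection centre by each
    pair of adjacent markers, measured in the omnidirectional image."""
    phi = [math.atan2(y - centre_px[1], x - centre_px[0]) for x, y in marker_px]
    return [(phi[(k + 1) % 4] - phi[k]) % (2.0 * math.pi) for k in range(4)]

def plane_fix(theta, a, b):
    """Centroid P(X1, Y1) of the four arc intersections (formulas 11 and 12)."""
    t = [1.0 / math.tan(th) for th in theta]          # 1/tan(theta_k)
    pts = []
    for k in range(4):
        ck = (b * t[k] - a) / (a * t[(k + 1) % 4] - b)
        x = b * (ck + t[k]) / (1.0 + ck * ck)
        y = b * ck * (ck + t[k]) / (1.0 + ck * ck)
        pts.append((x, y))
    return (sum(p[0] for p in pts) / 4.0, sum(p[1] for p in pts) / 4.0)

def mean_marker_distance(P, a, b):
    """X0: mean of the plane distances l1..l4 from P to the four markers,
    with the assumed corner coordinates."""
    corners = [(0.0, 0.0), (a, 0.0), (a, b), (0.0, b)]
    return sum(math.hypot(P[0] - cx, P[1] - cy) for cx, cy in corners) / 4.0
```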
Next, the centroids of the feature pixels obtained from formula 9 represent the marker positions in the image, and the imaging pixel distance from each marker to the projection center of the omnidirectional vision sensor is computed (in practice the projection center of the sensor is a constant and is entered manually). With the calibrated pixel pitch of 0.01389 inch/pixel used in this embodiment, the mean actual imaging distance x_i from the markers to the projection center is computed.
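A one-function sketch of this imaging-distance step, using the 0.01389 inch/pixel calibration quoted above; the fixed projection-centre pixel is passed in, as the text notes it is entered manually. The function name is illustrative.

```python
import math

INCH_PER_PIXEL = 0.01389  # calibrated pixel pitch used in this embodiment

def imaging_distance(centroids_px, centre_px, scale=INCH_PER_PIXEL):
    """x_i: mean actual imaging distance from the marker centroids to the
    (fixed) projection-centre pixel."""
    ds = [math.hypot(cx - centre_px[0], cy - centre_px[1]) * scale
          for cx, cy in centroids_px]
    return sum(ds) / len(ds)
```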
With x_i and X_0 obtained, formula 6 is used to calculate the computed height of the omnidirectional vision sensor on a work machine such as a robot or vehicle; the difference between the computed height and the calibrated height is the positioning coordinate along the Z-axis, completing the Z-axis positioning of the work machine. Combined with invention patent application No. 200910044412.4, "Automatic navigating and positioning device and method", the complete three-dimensional (X, Y, Z) automatic-navigation positioning of work machines such as robots and vehicles can be accomplished.

Claims (4)

1. An automatic-navigation Z-axis positioning method for an omnidirectional vision sensor, characterized in that:
four colored markers forming a rectangle, L1, L2, L3 and L4, are placed at the outer corners of the work area, and the omnidirectional vision sensor is mounted on the work vehicle; the omnidirectional vision sensor acquires an on-site omnidirectional image containing the images of the four colored markers;
the relation between the omnidirectional vision system height and the sensor-to-marker distance and imaging distance is:

$$\left(c-\frac{2c\,x_iH}{X_0f+x_iH}\right)^{2}\!\Big/\,b_1^{2}-\left(\frac{2c\,X_0x_i}{X_0f+x_iH}\right)^{2}\!\Big/\,a_1^{2}=1,\qquad X_0,\;x_i>0;$$

where a_1, b_1 and c are the structural parameters of the hyperboloid surface; f is the focal length; H is the computed height of the current omnidirectional vision system;
X_0 is the distance on the XOY plane from the omnidirectional vision sensor to the space point P(X_0, Z_0), i.e. the distance between the projection of P(X_0, Z_0) onto the XOY plane and the projection of the omnidirectional vision sensor onto the XOY plane;
x_i is the imaging distance, i.e. the mean of the actual imaging distances from the four colored markers to the projection center of the omnidirectional vision sensor; x_i is obtained by computing the imaging pixel distance from each marker to the projection center and converting it to an actual distance with the calibrated pixel pitch;
the computed height H of the current omnidirectional vision system is calculated; finally, the difference between the computed height H and the calibrated height H_s is the positioning coordinate along the Z-axis, which completes the Z-axis positioning of the work machine.
2. The automatic-navigation Z-axis positioning method for an omnidirectional vision sensor according to claim 1, characterized in that:
x_i is obtained as follows: with a calibrated pixel pitch of 0.01389 inch/pixel, the actual imaging distance from each of the four markers to the projection center of the omnidirectional vision sensor is computed, and the mean of the four is taken as x_i;
X_0 is obtained as follows:
from the four positions of the colored markers in the image and the position of the sensor projection center in the image, the azimuth angles θ_1–θ_4 formed in the omnidirectional image between every two adjacent colored markers and the projection center are computed;
the rectangle formed by the four colored markers L1–L4 around the work area has length a and width b; by the arc-construction principle of geometry, taking θ_1 as the angle subtended by the marker points L1 and L2, an arc S_1 is constructed; likewise L2 and L3 with θ_2 give arc S_2, L3 and L4 with θ_3 give arc S_3, and L4 and L1 with θ_4 give arc S_4; the arcs S_1–S_4 intersect pairwise, giving four intersection points with coordinates:
$$\tan\theta_1=\frac{bX_{I1}}{X_{I1}^2+Y_{I1}^2-bY_{I1}},\qquad \tan\theta_2=\frac{aY_{I1}}{X_{I1}^2+Y_{I1}^2-aX_{I1}};$$

$$X_{I1}=\frac{b\left(c_1+1/\tan\theta_1\right)}{1+c_1^2},\qquad Y_{I1}=\frac{b\,c_1\left(c_1+1/\tan\theta_1\right)}{1+c_1^2},\qquad c_1=\frac{b/\tan\theta_1-a}{a/\tan\theta_2-b};$$

$$X_{I2}=\frac{b\left(c_2+1/\tan\theta_2\right)}{1+c_2^2},\qquad Y_{I2}=\frac{b\,c_2\left(c_2+1/\tan\theta_2\right)}{1+c_2^2},\qquad c_2=\frac{b/\tan\theta_2-a}{a/\tan\theta_3-b};$$

$$X_{I3}=\frac{b\left(c_3+1/\tan\theta_3\right)}{1+c_3^2},\qquad Y_{I3}=\frac{b\,c_3\left(c_3+1/\tan\theta_3\right)}{1+c_3^2},\qquad c_3=\frac{b/\tan\theta_3-a}{a/\tan\theta_4-b};$$

$$X_{I4}=\frac{b\left(c_4+1/\tan\theta_4\right)}{1+c_4^2},\qquad Y_{I4}=\frac{b\,c_4\left(c_4+1/\tan\theta_4\right)}{1+c_4^2},\qquad c_4=\frac{b/\tan\theta_4-a}{a/\tan\theta_1-b};$$
the centroid P(X_1, Y_1) of the intersection points is:

$$X_1=(X_{I1}+X_{I2}+X_{I3}+X_{I4})/4,\qquad Y_1=(Y_{I1}+Y_{I2}+Y_{I3}+Y_{I4})/4;$$

the centroid P(X_1, Y_1) is the planar position of the omnidirectional vision sensor;
the distances l_1, l_2, l_3 and l_4 from the point P(X_1, Y_1) to the four markers L1, L2, L3 and L4 in the XOY plane are computed with the ordinary two-point distance formula; finally

$$X_0=\frac{l_1+l_2+l_3+l_4}{4}.$$
3. An automatic-navigation Z-axis positioning system for an omnidirectional vision sensor, characterized in that four colored markers forming a rectangle, L1, L2, L3 and L4, are placed at the outer corners of the work area; an omnidirectional vision sensor for acquiring the on-site omnidirectional image containing the images of the four colored markers is mounted on the work vehicle; a microprocessor is provided on the work vehicle, the microprocessor containing an image-processing unit for processing the image acquired by the omnidirectional vision sensor to obtain the colored-marker position coordinates, and a computing unit for finally obtaining the Z-axis positioning coordinate according to the automatic-navigation Z-axis positioning method of claim 1 or 2.
4. The automatic-navigation Z-axis positioning system for an omnidirectional vision sensor according to claim 3, characterized in that the omnidirectional vision sensor has a hyperbolic mirror.
CN201110006181A 2011-01-12 2011-01-12 Automatic navigating Z-axis positioning method for omni-directional vision sensor and positioning system thereof Active CN102168973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110006181A CN102168973B (en) 2011-01-12 2011-01-12 Automatic navigating Z-axis positioning method for omni-directional vision sensor and positioning system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110006181A CN102168973B (en) 2011-01-12 2011-01-12 Automatic navigating Z-axis positioning method for omni-directional vision sensor and positioning system thereof

Publications (2)

Publication Number Publication Date
CN102168973A true CN102168973A (en) 2011-08-31
CN102168973B CN102168973B (en) 2012-10-03

Family

ID=44490225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110006181A Active CN102168973B (en) Automatic navigating Z-axis positioning method for omni-directional vision sensor and positioning system thereof

Country Status (1)

Country Link
CN (1) CN102168973B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123682A (en) * 2013-01-17 2013-05-29 无锡普智联科高新技术有限公司 System and method for positioning mobile robot based on regular graphic code composite tags
CN106949881A (en) * 2017-02-24 2017-07-14 浙江大学 A kind of mobile robot fast vision localization method
CN107252785A (en) * 2017-06-29 2017-10-17 顺丰速运有限公司 A kind of express mail grasping means applied to quick despatch robot piece supplying
CN110045365A (en) * 2019-03-26 2019-07-23 西北工业大学 A kind of image target positioning method based on radar information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007130398A (en) * 2005-11-14 2007-05-31 Toshiba Corp Optical position measuring unit
CN101408422A (en) * 2008-10-16 2009-04-15 浙江工业大学 Traffic accident on-site mapper based on binocular tridimensional all-directional vision
CN101660912A (en) * 2009-09-25 2010-03-03 湖南农业大学 Automatic navigating and positioning device and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007130398A (en) * 2005-11-14 2007-05-31 Toshiba Corp Optical position measuring unit
CN101408422A (en) * 2008-10-16 2009-04-15 浙江工业大学 Traffic accident on-site mapper based on binocular tridimensional all-directional vision
CN101660912A (en) * 2009-09-25 2010-03-03 湖南农业大学 Automatic navigating and positioning device and method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123682A (en) * 2013-01-17 2013-05-29 无锡普智联科高新技术有限公司 System and method for positioning mobile robot based on regular graphic code composite tags
CN103123682B (en) * 2013-01-17 2015-09-16 无锡普智联科高新技术有限公司 The mobile robot positioning system of rule-based graphic code composite label and method
CN106949881A (en) * 2017-02-24 2017-07-14 浙江大学 A kind of mobile robot fast vision localization method
CN107252785A (en) * 2017-06-29 2017-10-17 顺丰速运有限公司 A kind of express mail grasping means applied to quick despatch robot piece supplying
CN110045365A (en) * 2019-03-26 2019-07-23 西北工业大学 A kind of image target positioning method based on radar information

Also Published As

Publication number Publication date
CN102168973B (en) 2012-10-03

Similar Documents

Publication Publication Date Title
CN104200086B (en) Wide-baseline visible light camera pose estimation method
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN108594245A (en) A kind of object movement monitoring system and method
CN109685855B (en) Camera calibration optimization method under road cloud monitoring platform
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
CN109993793B (en) Visual positioning method and device
CN102183250B (en) Automatic navigation and positioning device and method for field road of agricultural machinery
CN103196370B (en) Measuring method and measuring device of conduit connector space pose parameters
CN108388244A (en) Mobile-robot system, parking scheme based on artificial landmark and storage medium
CN106651990A (en) Indoor map construction method and indoor map-based indoor locating method
CN103971404A (en) 3D real-scene copying device having high cost performance
CN103759669A (en) Monocular vision measuring method for large parts
CN105574812B (en) Multi-angle three-dimensional data method for registering and device
CN104835173A (en) Positioning method based on machine vision
CN103994762A (en) Mobile robot localization method based on data matrix code
CN102168973B (en) Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof
CN106908064A (en) A kind of indoor night vision navigation method based on Kinect2 sensors
CN206441237U (en) A kind of caliberating device of three-dimensional camera module
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN115984766A (en) Rapid monocular vision three-dimensional target detection method for underground coal mine
CN104318566B (en) Can return to the new multi-view images plumb line path matching method of multiple height values
CN102129679A (en) Local positioning system and method
CN101620672A (en) Method for positioning and identifying three-dimensional buildings on the ground by using three-dimensional landmarks
CN208350997U (en) A kind of object movement monitoring system
CN105424059B (en) Wide baseline near infrared camera position and orientation estimation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant