CN102650886A - Vision system based on active panoramic vision sensor for robot - Google Patents

Vision system based on active panoramic vision sensor for robot

Info

Publication number
CN102650886A
CN102650886A · CN2012101335959A · CN201210133595A
Authority
CN
China
Prior art keywords
barrier
point
prime
laser
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101335959A
Other languages
Chinese (zh)
Other versions
CN102650886B (en
Inventor
汤一平
周静恺
王鼎
徐海涛
刘康
林璐璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201210133595.9A priority Critical patent/CN102650886B/en
Publication of CN102650886A publication Critical patent/CN102650886A/en
Application granted granted Critical
Publication of CN102650886B publication Critical patent/CN102650886B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a robot vision system based on an active panoramic vision sensor. The vision system comprises an omnidirectional vision sensor, a key-plane laser light source, and a microprocessor for performing three-dimensional stereoscopic measurement, obstacle detection, obstacle avoidance and navigation on the omnidirectional image; the omnidirectional vision sensor and the key-plane laser light source are arranged on the same axis. The microprocessor contains a video image reading module, an omnidirectional vision sensor calibration module, a Bird-View transformation module, an omnidirectional laser information reading module, an obstacle feature point computation module, a module for estimating the spatial distribution of obstacles between key planes, an obstacle contour line generation module and a storage unit. The spatial data points of the key planes scanned by the lasers and the corresponding pixel points in the panoramic image are fused, so that each spatial point carries geometric information and colour information at the same time, and a quasi-three-dimensional omnidirectional map of the unknown environment is finally built. The system reduces computer resource consumption, completes measurement quickly, and facilitates obstacle avoidance and navigation of the robot.

Description

Robot vision system based on an active panoramic vision sensor
Technical field
The present invention relates to the application of laser light sources, omnidirectional vision sensors and computer vision technology to the navigation of walking robots.
Background technology
An autonomous mobile robot system needs to perceive the external environment and its own state through sensors, realise goal-oriented autonomous motion in an environment containing obstacles, and thereby complete certain operational functions. It should be able to recognise its task environment and targets on its own, work independently according to instructions given by a person and its own understanding of the external world, and use its manipulators and locomotion mechanisms to complete complex operational tasks. For a mobile robot to possess such intelligence, it must first have multiple perceptual functions, and on that basis perform complex logical reasoning, planning and decision-making, and act autonomously in its working environment. While moving, a mobile robot usually has to answer three questions: (1) where is the robot now? (2) where is the robot going? (3) how does it reach the destination? The first question is the localisation and tracking problem of the navigation system; the second and third are its path-planning problem. The task of mobile robot navigation and localisation technology is precisely to solve these three problems. The first problem can be solved reasonably well with GPS positioning outdoors and WLAN positioning indoors.
For a mobile robot, the vision system is an important "organ" with which the robot perceives its local environment, much like human eyes, and the environmental information it perceives is used to navigate the robot. Whether visual information can be processed correctly and in real time directly affects the robot's travelling speed, path tracking and collision avoidance, and has a decisive influence on the real-time performance and robustness of the system. Visual information processing is one of the most critical technologies in mobile robot research.
In many mobile robot applications, obstacle detection is a very important task. Most approaches rely on range data acquired from sensors. Sensors commonly used in range-based obstacle detection systems include ultrasonic sensors, laser rangefinders, radar and stereo vision. Because these sensors can measure the distance from an obstacle to the robot, they can be used for obstacle detection and hence obstacle avoidance. Ultrasonic sensors are cheap but suffer from problems such as specular reflection and low angular resolution; laser rangefinders and radar offer much better resolution but are more complex and very expensive. Moreover, these range sensors can provide accurate distance and bearing information about obstacles, but they can only detect obstacles that lie in the scanning plane. A vision system can provide rich, real-time environmental information for the mobile robot, but the computational load of current stereo vision and optical flow methods is very large, and real-time performance becomes a bottleneck.
Chinese invention patent application No. 201010189865.9 discloses a ground obstacle detection method for robots based on binocular stereo vision. From the baseline length and focal length of the binocular system, the ground disparity value of each row in the image is derived using the known imaging geometry; on the basis of the ground disparity values, the three-dimensional coordinates of the scene point corresponding to each pixel are computed through the projection model, so as to judge preliminarily whether the pixel belongs to an obstacle or to the ground; obstacle and ground points are then given different colours; the results are post-processed to remove false obstacles and eliminate stereo errors, and a grid map is built. However, this ground obstacle detection method consumes a large amount of computer resources, has poor real-time performance and a limited sensing range, and is difficult to calibrate online in real time.
Because omnidirectional vision has the advantage of a large field of view, it has gradually found wide use in mobile robot research in recent years. The Japanese scholar Yoshiro Negishi and colleagues used a laser scanner and panoramic vision to realise mobile robot navigation in an unknown environment (Yoshiro Negishi, Jun Miura, Yoshiaki Shirai, Mobile Robot Navigation in Unknown Environments using Omnidirectional Stereo and Laser Range Finder, in Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2737–2742, 2004). However, what that robot vision obtains is the obstacle information on a single horizontal plane at a certain height, whereas obstacles in an unknown environment do not all have a regular planar shape but are genuinely three-dimensional; therefore, the absence of an obstacle on one horizontal plane in a given direction does not allow the simple inference that no obstacle exists on other horizontal planes in that direction.
Summary of the invention
To overcome the shortcomings of existing robot vision, namely large computer resource consumption, poor real-time performance, limited sensing range and difficult online real-time calibration, the present invention provides a robot vision system based on an active panoramic vision sensor that reduces computer resource consumption, completes measurement quickly, has good real-time performance and performs omnidirectional stereoscopic detection.
The technical solution adopted by the present invention to solve this technical problem is:
A robot vision system based on an active panoramic vision sensor, comprising an omnidirectional vision sensor, a key-plane laser light source, and a microprocessor for performing three-dimensional stereoscopic measurement, obstacle detection, obstacle avoidance and navigation on the omnidirectional image; the omnidirectional vision sensor and the key-plane laser light source are arranged on the same axis;
The omnidirectional vision sensor comprises a hyperboloid mirror, an upper cover, a transparent semicircular outer cover, a lower fixing seat, a camera unit holder, a camera unit, a connecting unit and an upper housing; the hyperboloid mirror is fixed on the upper cover; the connecting unit joins the lower fixing seat and the transparent semicircular outer cover into one piece; the transparent semicircular outer cover is fixed together with the upper cover and the upper housing; the camera unit is fixed on the camera unit holder, the camera unit holder is fixed on the lower fixing seat, and the camera unit is connected with the microprocessor;
The key-plane laser light source comprises a red line-laser generating unit, a line-laser combination unit, a green line-laser generating unit, a support rod and a chassis; the green line-laser generating units are fixed in the holes of the line-laser combination unit to form an omnidirectional laser light source unit emitting green light; the red line-laser generating units are fixed in the holes of the line-laser combination unit to form an omnidirectional laser light source unit emitting red light; the support rod is fixed vertically on the chassis, and the green-light and red-light omnidirectional laser light source units are fixed at the two ends of the support rod;
Said microprocessor comprises:
a video image reading module, used to read the video image of the omnidirectional vision sensor and store it in the storage unit, its output being connected with the omnidirectional vision sensor calibration module and the Bird-View transformation module;
an omnidirectional vision sensor calibration module, used to determine the parameters of the mapping between points in three-dimensional space and the corresponding two-dimensional image points on the camera imaging plane, the calibrated parameters being stored in the storage unit;
a Bird-View transformation module, used to read the calibration parameter values of the omnidirectional vision sensor from the storage unit and to correct, through the Bird-View transformation, the severe distortion introduced by omnidirectional imaging; the omnidirectional image is transformed into a Bird-View view, i.e. the image a bird would see looking down at the ground, and the Bird-View view obtained after the transformation is stored in the storage unit;
an omnidirectional laser information reading module, used to process the Bird-View view to obtain quickly the geometric information of the three-dimensional data points of the space or of object surfaces, its computation results being submitted to the obstacle feature point computation module;
an obstacle feature point computation module, used to compute and mark the distribution of obstacle feature points around the mobile robot, its computation results being submitted to the module for estimating the spatial distribution of obstacles between the key planes;
a module for estimating the spatial distribution of obstacles between the key planes, used to assess, within the omnidirectional field of view, obstacles that exist between the two key planes, the estimation results being submitted to the obstacle contour line generation module;
an obstacle contour line generation module, used to generate automatically a contour line of the surrounding obstacles centred on the mobile robot;
an obstacle-avoidance path estimation module, used to judge whether the robot can pass through a non-obstacle region; the width of the non-obstacle region is first estimated, from the azimuth information of the region in which no obstacle exists and the obstacle distances bounding that region, with formula (13),
P_path = 2 · R_m · tan((β₂ − β₁)/2)    (13)
where P_path is the width of the region in which no obstacle exists, β₁ is the azimuth of the edge between the obstacle on the left and the obstacle-free region, β₂ is the azimuth of the edge between the obstacle on the right and the obstacle-free region, and R_m is the mean of the obstacle distances at azimuths β₁ and β₂;
finally, the maximum width of the robot is multiplied by 1.1 to obtain Robot_W; if Robot_W is smaller than the width P_path of the region in which no obstacle exists, it is judged that the robot can pass through this region, otherwise it is judged that it cannot.
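The passability check above can be written out as a short sketch; everything here (the function names, the example distances 2.0 and 2.2, the robot width 0.6) is illustrative and not taken from the patent.

```python
import math

def passable_width(beta1_deg, beta2_deg, r_left, r_right):
    """Width of an obstacle-free region, per formula (13).

    beta1_deg, beta2_deg -- edge azimuths (degrees) between the obstacles and the free region
    r_left, r_right     -- obstacle distances measured at those two edge azimuths;
                           R_m is taken here as their mean value.
    """
    r_m = (r_left + r_right) / 2.0
    half_angle = math.radians(beta2_deg - beta1_deg) / 2.0
    return 2.0 * r_m * math.tan(half_angle)

def can_pass(robot_width, beta1_deg, beta2_deg, r_left, r_right, margin=1.1):
    """True if 1.1 x the robot's maximum width fits into the obstacle-free region."""
    return margin * robot_width < passable_width(beta1_deg, beta2_deg, r_left, r_right)

# Example with the azimuths used later in the description (140 deg and 168 deg):
print(passable_width(140.0, 168.0, 2.0, 2.2))  # free-region width, same unit as the distances
print(can_pass(0.6, 140.0, 168.0, 2.0, 2.2))   # e.g. a robot 0.6 units wide
```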
Further, in the omnidirectional vision sensor calibration module, the process of determining the parameters of the mapping between points in three-dimensional space and the two-dimensional image points on the camera imaging plane is to establish the relation between an image-plane point and its incident ray, i.e. the correspondence with the incidence angle, expressed by formula (6);
tan α = ||u″|| / f(||u″||) = ||u″|| / (a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N)    (6)
where α is the incidence angle of the space object point, ||u″|| is the distance from a sensor-plane point to the centre of that plane, and a₀, a₁, a₂, …, a_N are the calibrated intrinsic and extrinsic parameters of the omnidirectional vision sensor; formula (6) is used to build a mapping table between every pixel of the imaging plane and its incidence angle; Table 1 gives the calibration results of the single-viewpoint omnidirectional vision sensor used in the present invention.
Table 1. Calibration results of the ODVS.
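As an illustration of formula (6), a minimal sketch of how the pixel-to-incidence-angle mapping table might be built; the polynomial coefficients below are placeholders, not the calibration values of Table 1.

```python
import math

def incident_angle(dist_px, coeffs):
    """Incidence angle alpha of the ray imaged at distance ||u''|| (pixels) from the
    image centre, per formula (6): tan(alpha) = ||u''|| / f(||u''||), where
    f(||u''||) = a0 + a1*||u''|| + ... + aN*||u''||^N with calibrated coefficients."""
    f = sum(a * dist_px ** i for i, a in enumerate(coeffs))
    return math.atan2(dist_px, f)

# Look-up table from pixel radius to incidence angle (placeholder coefficients a0..a3).
coeffs = [-75.0, 0.0, 2.2e-3, -1.0e-6]
angle_table = {r: incident_angle(float(r), coeffs) for r in range(0, 301)}
```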
Further, in the Bird-View transformation module, the Bird-View transformation can be regarded as an inverse transformation of the imaging process of the omnidirectional vision sensor; the calibration result of formula (6) is used to perform the Bird-View transformation and convert the omnidirectional image into a Bird-View image, in which the distance R from the projection of a space object point P onto the horizontal ground to the projection of the viewpoint Om is computed with formula (7),
R = h · cos β / tan α = h · cos β · f(||u″||) / ||u″||
  = h · cos β · (a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N) / ||u″||    (7)
where R is the distance on the horizontal ground between the projections of the space object point P and of the viewpoint Om, h is the vertical distance between the space object point P and the viewpoint Om, α is the incidence angle of the space object point P at the viewpoint Om, and β is the azimuth of the space object point P with respect to the viewpoint Om.
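A small sketch of formula (7), reusing the calibration polynomial of formula (6); the function name and parameter layout are ours, not the patent's.

```python
import math

def ground_distance(h, beta, dist_px, coeffs):
    """Distance R between the ground projections of the space point P and the viewpoint Om,
    per formula (7): R = h * cos(beta) / tan(alpha) = h * cos(beta) * f(||u''||) / ||u''||.

    h       -- vertical distance from P to Om (e.g. the height of the laser key plane)
    beta    -- azimuth of P (radians)
    dist_px -- ||u''||, distance of the image point from the image centre (pixels)
    coeffs  -- calibrated polynomial coefficients a0..aN of formula (6)
    """
    f = sum(a * dist_px ** i for i, a in enumerate(coeffs))
    return h * math.cos(beta) * f / dist_px
```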
Further, in the omnidirectional laser information reading module, the key-plane laser light source projects onto two different key planes: a planar green laser projection is produced on a horizontal plane close to the ground, and a planar red laser projection is produced on a horizontal plane at a certain height above the ground; the green and red laser projection points are then resolved on the Bird-View view.
The green and red laser projection points are resolved on the Bird-View view on the basis that the brightness of their pixels is greater than the mean brightness of the imaging plane: the RGB colour space of the Bird-View view is first converted into the HIS colour space, and 1.2 times the mean brightness of the imaging plane is then used as the threshold for extracting the green and red laser projection points; after extraction, the green and red laser projection points still have to be distinguished, which in the present invention is done according to the hue value H in the HIS colour space: if the hue value H lies between (0, 30) the point is judged to be a red laser projection point, if it lies between (105, 135) it is judged to be a green laser projection point, and the remaining pixels are judged to be interference.
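A sketch of the extraction and classification rule just described; the hue computation follows the standard HSI model with hue in degrees (0–360), and the function name and structure are ours.

```python
import numpy as np

def classify_laser_points(rgb, brightness_factor=1.2):
    """Label each pixel of an RGB Bird-View image: 1 = red laser point, 2 = green laser
    point, 0 = background/interference, following the rule in the text: intensity above
    1.2 x the image mean, then hue in (0, 30) -> red, hue in (105, 135) -> green."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0

    # Hue of the HSI colour model, in degrees.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)

    bright = intensity > brightness_factor * intensity.mean()
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    labels[bright & (hue > 0) & (hue < 30)] = 1      # red laser projection point
    labels[bright & (hue > 105) & (hue < 135)] = 2   # green laser projection point
    return labels
```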
Preferably, in the obstacle feature point computation module, the obstacle feature points are determined first; in the present invention an obstacle feature point is determined as the intersection of a key-plane laser projection line with a near-vertical edge line on the Bird-View transformed image.
The near-vertical edge lines on the Bird-View transformed image are obtained by traversing the Bird-View transformed image with the Sobel vertical-direction template, which is given by formula (8);
W_v = (1/4) · | −1  −2  −1 |
              |  0   0   0 |
              |  1   2   1 |    (8)
For the two intersection points Pa and Pb thus obtained, i.e. the obstacle feature points, the distances R_a and R_b from the viewpoint Om to their projections on the horizontal ground can be computed with formula (9),
R_a = h_greenline · cos β / tan α_a    (9)
R_b = h_redline · cos β / tan α_b
where R_a is the distance on the horizontal ground from the projection of feature point Pa to the projection of the viewpoint Om, h_greenline is the distance from feature point Pa to the viewpoint Om, β is the azimuth of feature point Pa and α_a its incidence angle; R_b is the distance on the horizontal ground from the projection of feature point Pb to the projection of the viewpoint Om, h_redline is the distance from feature point Pb to the viewpoint Om, β is the azimuth of feature point Pb and α_b its incidence angle.
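The two steps above, edge detection with the template of formula (8) and the distance computation of formula (9), can be sketched as follows; the threshold value and the names are ours, and the laser-plane heights would come from the physical mounting of the light source.

```python
import math
import numpy as np
from scipy.ndimage import convolve

# Sobel vertical-direction template of formula (8), including the 1/4 factor.
SOBEL_V = 0.25 * np.array([[-1, -2, -1],
                           [ 0,  0,  0],
                           [ 1,  2,  1]], dtype=np.float32)

def near_vertical_edges(bird_view_gray, threshold=40.0):
    """Binary map of near-vertical edge lines on the Bird-View image."""
    response = convolve(bird_view_gray.astype(np.float32), SOBEL_V, mode="nearest")
    return np.abs(response) > threshold

def feature_point_distance(h_laser_plane, beta, alpha):
    """Formula (9): ground distance of a feature point (Pa or Pb) from the viewpoint Om,
    given the height h of its laser plane, its azimuth beta and incidence angle alpha."""
    return h_laser_plane * math.cos(beta) / math.tan(alpha)
```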
In the module for estimating the spatial distribution of obstacles between the key planes, for an azimuth β at which feature points of both the red and the green laser projections exist, the distance between the walking robot and the obstacle is taken as the distance of the nearer projection point and computed with formula (10),
R_β = min(R_a, R_b)    (10)
where R_β is the distance between the walking robot and the obstacle at azimuth β; R_a is the distance between the walking robot and the obstacle measured on the horizontal plane close to the ground at azimuth β, and R_b is the distance between the walking robot and the obstacle measured on the horizontal plane at the robot's own height at azimuth β.
For an azimuth β at which only a feature point of the red laser projection exists, only a boss-type (raised) obstacle is present; the distance of the red laser projection point is taken as the obstacle distance, and the distance between the walking robot and the obstacle at azimuth β is computed with formula (11),
R_β = R_b    (11)
For an azimuth β at which only a feature point of the green laser projection exists, only a roadside-type (low) obstacle is present; the distance of the green laser projection point is taken as the obstacle distance, and the distance between the walking robot and the obstacle at azimuth β is computed with formula (12),
R_β = R_a    (12)
For an azimuth β at which no feature point of either the red or the green laser projection exists, it is judged that there is no obstacle in that direction.
In the obstacle contour line generation module, after the module for estimating the spatial distribution of obstacles between the key planes has finished processing, the obstacle distribution at every azimuth has been obtained; from these data an obstacle contour curve is drawn with the azimuth β as the abscissa and the obstacle distance as the ordinate.
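A sketch of the per-azimuth decision of formulas (10)–(12) and of the data behind the contour curve; Ra and Rb stand for the green-plane and red-plane feature-point distances at one azimuth (None where no projection point exists), and all names are ours.

```python
def obstacle_distance(Ra, Rb):
    """Obstacle distance R_beta for a single azimuth.
    Ra -- distance from the green (near-ground) laser feature point, or None
    Rb -- distance from the red (elevated) laser feature point, or None"""
    if Ra is not None and Rb is not None:
        return min(Ra, Rb)   # formula (10): both planes hit, take the nearer point
    if Rb is not None:
        return Rb            # formula (11): boss-type (raised) obstacle only
    if Ra is not None:
        return Ra            # formula (12): roadside-type (low) obstacle only
    return None              # no obstacle detected in this direction

def obstacle_contour(per_azimuth):
    """Contour-curve data: sorted (beta, R_beta) pairs, skipping obstacle-free azimuths.
    per_azimuth maps an azimuth in degrees to a pair (Ra, Rb)."""
    curve = []
    for beta in sorted(per_azimuth):
        r = obstacle_distance(*per_azimuth[beta])
        if r is not None:
            curve.append((beta, r))
    return curve
```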
The optical system formed by the hyperboloid mirror is described by the following five equations;
((X² + Y²)/a²) − ((Z − c)²/b²) = −1, for Z > 0    (14)
c = √(a² + b²)    (15)
β = tan⁻¹(Y/X)    (16)
α = tan⁻¹{[(b² + c²)·sin γ − 2bc] / [(b² + c²)·cos γ]}    (17)
γ = tan⁻¹[f / √(x² + y²)]    (18)
where X, Y, Z are space coordinates; c denotes the focus of the hyperbolic mirror, and 2c is the distance between the two foci; a and b are the lengths of the real and imaginary axes of the hyperbolic mirror; β is the angle between the projection of the incident ray on the XY plane and the X axis, i.e. the azimuth; α is the angle between the projection of the incident ray on the XZ plane and the X axis, called the incidence angle here (the angle of depression when α ≥ 0 and the angle of elevation when α < 0); f is the distance from the image plane to the virtual focus of the hyperbolic mirror; γ is the angle between the catadioptric ray and the Z axis; and x, y denote a point on the image plane.
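For illustration, equations (15)–(18) can be chained to map an image-plane point (x, y) to the azimuth and incidence angle of its incident ray; the mirror parameters a, b and the focal distance f below would come from the actual mirror design and are placeholders here.

```python
import math

def ray_angles(x, y, a, b, f):
    """Azimuth beta and incidence angle alpha of the incident ray imaged at the
    image-plane point (x, y), per equations (15)-(18)."""
    c = math.sqrt(a * a + b * b)                       # equation (15)
    beta = math.atan2(y, x)                            # equation (16)
    gamma = math.atan2(f, math.hypot(x, y))            # equation (18)
    alpha = math.atan(((b * b + c * c) * math.sin(gamma) - 2.0 * b * c)
                      / ((b * b + c * c) * math.cos(gamma)))   # equation (17)
    return beta, alpha

# Placeholder mirror parameters, not the patent's design values.
print(ray_angles(120.0, 80.0, a=28.0, b=23.0, f=4.0))
```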
The scheme proposed by the present invention solves three key problems: (1) realising a laser light source that scans the key planes; (2) realising an active panoramic vision sensor that can obtain the depth information of real objects quickly; (3) a three-dimensional reconstruction method based on the omnidirectional-vision Bird-View transformation.
The beneficial effects of the present invention are mainly:
1) the spatial data points scanned by the lasers on the key planes and the corresponding pixel points in the panoramic image are fused, so that each spatial point carries geometric information and colour information at the same time, and a three-dimensional model close to the real environment can finally be obtained;
2) tedious steps such as camera calibration, feature extraction and stereo image matching are no longer required, providing a new means for fast panoramic stereo photogrammetry;
3) the two-dimensional global maps obtained by Bird-View transformation of the panoramic image at different heights are used, with reference to the Bird-View views, to infer the spatial distribution of obstacles between the two key planes;
4) the laser scanning of the key planes and the obstacle feature point computation method reduce computer resource consumption, complete the measurement quickly, and facilitate obstacle avoidance and navigation of the robot.
Description of drawings
Fig. 1 is a structural drawing of the omnidirectional vision sensor;
Fig. 2 is a structural drawing of the key-plane laser light source;
Fig. 3 is a schematic diagram of the robot vision system based on an active panoramic vision sensor;
Fig. 4 is a partial illustration of the key-plane laser light source;
Fig. 5 is an imaging schematic of the omnidirectional vision sensor;
Fig. 6 shows a mobile robot equipped with the robot vision system based on an active panoramic vision sensor;
Fig. 7 is a front-view illustration of how the robot vision system detects a planar obstacle and computes the obstacle distance;
Fig. 8 is a top-view illustration of how the robot vision system detects a planar obstacle and computes the obstacle distance;
Fig. 9 is the imaging model of a single-viewpoint catadioptric omnidirectional vision sensor: Fig. 9(a) the perspective imaging process, Fig. 9(b) the sensor plane, Fig. 9(c) the image plane;
Fig. 10 is a three-dimensional plot of the calibration result of the single-viewpoint omnidirectional vision sensor;
Fig. 11 shows the obstacle contour line generated automatically by the robot vision system based on an active panoramic vision sensor;
Fig. 12 illustrates the relation between the mobile robot's direction of travel and the azimuth;
Fig. 13 illustrates detection of a roadside-type obstacle by the robot vision system based on an active panoramic vision sensor;
Fig. 14 illustrates detection of a boss-type obstacle by the robot vision system based on an active panoramic vision sensor;
Fig. 15 illustrates detection of obstacles on planes of different heights by the robot vision system based on an active panoramic vision sensor.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
Referring to Figs. 1 to 15, a robot vision system based on an active panoramic vision sensor comprises an omnidirectional vision sensor, a key-plane laser light source, and a microprocessor for performing three-dimensional stereoscopic measurement, obstacle detection, obstacle avoidance and navigation on the omnidirectional image; the centre of the omnidirectional vision sensor and the centre of the key-plane laser light source are arranged on the same axis. The omnidirectional vision sensor comprises a hyperboloid mirror 2, an upper cover 1, a transparent semicircular outer cover 3, a lower fixing seat 4, a camera unit holder 5, a camera unit 6, a connecting unit 7 and an upper housing 8, as shown in Fig. 1; the hyperboloid mirror 2 is fixed on the upper cover 1; the connecting unit 7 joins the lower fixing seat 4 and the transparent semicircular outer cover 3 into one piece; the transparent semicircular outer cover 3 is fixed together with the upper cover 1 and the upper housing 8 by screws; the camera unit 6 is fixed on the camera unit holder 5 by screws; the camera unit holder 5 is fixed on the lower fixing seat 4 by screws, and the output of the camera unit 6 in the omnidirectional vision sensor is connected with the microprocessor.
The key-plane laser light source comprises a red line-laser generating unit 2-1, a line-laser combination unit 2-2, a green line-laser generating unit 2-3, a support rod 2-4 and a chassis 2-5, as shown in Fig. 2; the green line-laser generating units 2-3 are fixed in the holes of the line-laser combination unit 2-2, as shown in Fig. 4, so that the combined line lasers form an omnidirectional laser light source unit emitting green light; the red line-laser generating units 2-1 are fixed in the holes of the line-laser combination unit 2-2 so that the combined line lasers form an omnidirectional laser light source unit emitting red light; the support rod 2-4 is fixed vertically on the chassis 2-5, and the green-light and red-light omnidirectional laser light source units are fixed at the two ends of the support rod 2-4.
The omnidirectional vision sensor is mounted on the support rod 2-4 through a connecting plate, as shown in Fig. 3, and is connected with the microprocessor through a USB interface.
The microprocessor comprises: a video image reading module, an omnidirectional vision sensor calibration module, a Bird-View transformation module, an omnidirectional laser information reading module, an obstacle feature point computation module, a module for estimating the spatial distribution of obstacles between the key planes, an obstacle contour line generation module, an obstacle-avoidance path estimation module and a storage unit.
The video image reading module is used to read the video image of the omnidirectional vision sensor and store it in the storage unit; its output is connected with the omnidirectional vision sensor calibration module and the Bird-View transformation module.
The omnidirectional vision sensor calibration module is used to determine the parameters of the mapping between points in three-dimensional space and the corresponding two-dimensional image points on the camera imaging plane. The present invention uses a single-viewpoint omnidirectional vision sensor: an omnidirectional vision sensor built on the catadioptric principle with a hyperboloid mirror has the single-viewpoint imaging property, and its imaging principle is shown in Fig. 5. To establish the mapping between three-dimensional space points and image-plane points, the perspective projection imaging model of Micusík is adopted here, as shown in Fig. 9. This imaging model considers two different reference planes, the image plane (u′, v′) and the sensor plane (u″, v″); the image plane is associated with the CCD of the camera and is expressed in pixel coordinates. The sensor plane is a hypothetical plane orthogonal to the mirror optical axis, with its origin at the intersection of the optical axis with this plane. A coordinate system is set up with the focus of the hyperboloid mirror, i.e. the single viewpoint Om, as origin, the z″ axis being aligned with the mirror optical axis. Let X = [X, Y, Z]ᵀ be a point in space, u″ = [u″, v″]ᵀ its projection on the sensor plane, and u′ = [u′, v′]ᵀ the corresponding pixel on the image plane. The space point X is first projected through the projection matrix to a point A on the mirror surface; point A is reflected by the mirror towards the optical centre C of the camera, intersecting the sensor plane at the point u″ = [u″, v″]ᵀ, and the point u″ is mapped to the image-plane point u′ = [u′, v′]ᵀ by an affine transformation. The whole single-viewpoint catadioptric camera imaging model is thus the process from space point to mirror point, from mirror point to sensor-plane point, and from sensor-plane point to the image-plane pixel that forms the image.
The transformation from the catadioptric mirror to the sensor plane is expressed by formula (1);
λ·p″ = λ·[x″ᵀ, z″]ᵀ = λ·[h(||u″||)·u″ᵀ, g(||u″||)]ᵀ = P·X,  λ > 0    (1)
where X ∈ R⁴ is the homogeneous coordinate of the space point X, P = [R|T] ∈ R^(3×4) is the projection matrix, R ∈ R^(3×3) is the rotation matrix from the space point to the mirror point, and T ∈ R^(3×1) is the translation matrix from the space point to the mirror point;
The transformation from the sensor plane to the image plane is expressed by formula (2):
u″ = A·u′ + t    (2)
where A ∈ R^(2×2) and t ∈ R^(2×1).
On the basis of the Micusík perspective projection model, Scaramuzza replaces the two functions g and h in formula (1) with a single function f = g/h, i.e. the relation between a three-dimensional space point and a two-dimensional sensor-plane point is characterised by the function f, giving formula (3),
λ·p″ = λ·[u″ᵀ, f(||u″||)]ᵀ = λ·[(A·u′ + t)ᵀ, f(||A·u′ + t||)]ᵀ = P·X,  λ > 0    (3)
Because of the rotational symmetry of the hyperboloid, Scaramuzza describes the function f with a Taylor polynomial expansion, expressed by formula (4):
f(||u″||) = a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N    (4)
where ||u″|| is the distance from a sensor-plane point to the centre of that plane;
The models of Scaramuzza and Micusík both presuppose an ideal catadioptric camera model, whereas some errors are inevitably introduced when an omnidirectional vision sensor is actually machined and assembled. Here the calibrated omnidirectional vision sensor is assumed to satisfy the ideal model, and substituting the imperfect model, which contains a certain error, into the simplified model transformation proposed by Scaramuzza yields formula (5);
λ·p″ = λ·[u″ᵀ, f(||u″||)]ᵀ = λ·[(A·u′ + t)ᵀ, f(||A·u′ + t||)]ᵀ = P·R·X,  λ > 0    (5)
The concrete calibration procedure is to move a calibration board once around the omnidirectional vision sensor, take several groups of panoramic images, set up a number of equations relating space points to pixels on the imaging plane, and obtain the optimal solution with an optimisation algorithm; the computed calibration parameters of the omnidirectional vision sensor used in the present invention are given in Table 2.
Table 2. Calibration results of the ODVS.
After the intrinsic and extrinsic parameters of the omnidirectional vision sensor have been calibrated, the relation between an image-plane point and its incident ray, i.e. the correspondence with the incidence angle, can be established, expressed by formula (6);
tan α = ||u″|| / f(||u″||) = ||u″|| / (a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N)    (6)
where α is the incidence angle of the space object point, ||u″|| is the distance from a sensor-plane point to the centre of that plane, and a₀, a₁, a₂, …, a_N are the calibrated intrinsic and extrinsic parameters of the omnidirectional vision sensor; formula (6) is used to build a mapping table between every pixel of the imaging plane and its incidence angle. For the detailed derivation of the calibration formula and its implementation, see Yi-ping Tang, Qing Wang, Ming-li Zong, Jun Jiang, and Yi-hua Zhu, Design of Vertically Aligned Binocular Omnistereo Vision Sensor, EURASIP Journal on Image and Video Processing, 2010, pp. 1–24. The calibrated result establishes the mapping between image coordinates and ground positions, as shown in Fig. 10.
The Bird-View transformation module is used to correct the severe distortion introduced by omnidirectional imaging and to transform the omnidirectional image into a Bird-View view, which resembles the image a bird would see looking down at the ground; through this transformation the panoramic video image is restored to an undistorted panoramic video image in the horizontal direction. In the panoramic image obtained by the Bird-View transformation, objects grow larger with height but remain unchanged in the horizontal direction, and the closer an object is to the central region of the omnidirectional image, the larger it appears; these properties are of crucial significance for mobile robot navigation. The key-plane laser light source projects onto two different key planes, i.e. onto two different horizontal planes: the green laser source produces a planar green laser projection on a horizontal plane close to the ground, and the red laser source produces a planar red laser projection on a horizontal plane at a certain height above the ground, as shown in Fig. 6. The Bird-View transformation corresponds to observing the captured image from a fixed height; the physical shapes of objects on the ground plane and on the horizontal plane at a certain height remain unchanged, and this invariance of physical shape on the two key planes makes it possible to judge effectively the existence of obstacles within 360° around the robot and their distance from the mobile robot. Through the laser scanning of the two key planes and the Bird-View transformation, a two-dimensional map of the scene close to the ground and a two-dimensional map of the scene at a certain height are obtained.
The Bird-View transformation can be regarded as an inverse transformation of the imaging process of the omnidirectional vision sensor; the calibration result of formula (6) can therefore be used to perform the Bird-View transformation and convert the omnidirectional image into a Bird-View image, in which the distance R from the projection of a space object point P onto the horizontal ground to the projection of the viewpoint Om is computed with formula (7),
R = h · cos β / tan α = h · cos β · f(||u″||) / ||u″||
  = h · cos β · (a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N) / ||u″||    (7)
where R is the distance on the horizontal ground between the projections of the space object point P and of the viewpoint Om, h is the vertical distance between the space object point P and the viewpoint Om, α is the incidence angle of the space object point P at the viewpoint Om, and β is the azimuth of the space object point P with respect to the viewpoint Om.
The azimuth β of a space object point on the Bird-View transformed image is the same as its azimuth β on the omnidirectional image; from one omnidirectional image, two-dimensional global maps on horizontal planes of several different heights can thus be constructed. The Bird-View transformed image not only provides a global view centred on the robot, but also carries depth information on horizontal planes of different heights.
The omnidirectional laser information reading module is used to obtain quickly the precise geometric information of the space or of object surfaces. In the present invention the key-plane laser light source projects onto two different key planes: a planar green laser projection is produced on a horizontal plane close to the ground, and a planar red laser projection is produced on a horizontal plane at a certain height above the ground. By resolving the green and red laser projection points on the Bird-View view, the pixel coordinates of the green and red laser projection object points on the omnidirectional image are obtained; from these pixel coordinates, the distance between the projections of the space object point and of the viewpoint Om on the horizontal ground and the azimuth β can be computed with formula (7), so that the three-dimensional spatial data points on the key planes, here called the point cloud data of the key planes, are obtained.
The green and red laser projection points are resolved on the Bird-View view on the basis that the brightness of their pixels is greater than the mean brightness of the imaging plane: the RGB colour space of the Bird-View view is first converted into the HIS colour space, and 1.2 times the mean brightness of the imaging plane is then used as the threshold for extracting the green and red laser projection points; after extraction, the green and red laser projection points still have to be distinguished, which in the present invention is done according to the hue value H in the HIS colour space: if the hue value H lies between (0, 30) the point is judged to be a red laser projection point, if it lies between (105, 135) it is judged to be a green laser projection point, and the remaining pixels are judged to be interference.
The obstacle feature point computation module is used to compute and mark the distribution of obstacles around the mobile robot. For fast computation, the obstacle feature points are determined first; in the present invention an obstacle feature point is determined as the intersection of a key-plane laser projection line with a near-vertical edge line on the Bird-View transformed image. The near-vertical edge lines on the Bird-View transformed image are obtained by traversing the Bird-View transformed image with the Sobel vertical-direction template, which is given by formula (8);
W_v = (1/4) · | −1  −2  −1 |
              |  0   0   0 |
              |  1   2   1 |    (8)
For the obstacle feature points thus obtained, the two feature points Pa and Pb shown in Fig. 7, the distances R_a and R_b from the viewpoint Om to their projections on the horizontal ground can be computed with formula (9),
R_a = h_greenline · cos β / tan α_a    (9)
R_b = h_redline · cos β / tan α_b
where R_a is the distance on the horizontal ground from the projection of feature point Pa to the projection of the viewpoint Om, h_greenline is the distance from feature point Pa to the viewpoint Om, β is the azimuth of feature point Pa and α_a its incidence angle; R_b is the distance on the horizontal ground from the projection of feature point Pb to the projection of the viewpoint Om, h_redline is the distance from feature point Pb to the viewpoint Om, β is the azimuth of feature point Pb and α_b its incidence angle. For the convenience of navigation calculations, the positive direction is defined counter-clockwise starting from the X axis, and the azimuth of the robot's forward direction of travel is 90°, as shown in Fig. 12.
The module for estimating the spatial distribution of obstacles between the key planes is used to assess, within the omnidirectional field of view, obstacles that exist between the two key planes. The obstacles a walking robot meets while moving do not all appear on the two key planes. The Bird-View transformed image provides a way of observing, from a top-down viewpoint, the obstacle information over the omnidirectional range on different horizontal planes. The concrete approach exploits the property that straight lines in the real environment remain straight lines in the Bird-View image of the key-plane point cloud data: the point cloud data of the two key planes at the same azimuth are examined, and the obstacle between the key planes is judged according to Table 2.
Table 2. Judgement of the obstacle between the two key planes, and of its distance, for a given azimuth β.
The judgement rules of Table 2 are explained as follows. For an azimuth β at which feature points of both the red and the green laser projections exist, the situation shown in Figs. 7 and 15, the distance between the walking robot and the obstacle is taken as the distance of the nearer projection point and computed with formula (10),
R_β = min(R_a, R_b)    (10)
where R_β is the distance between the walking robot and the obstacle at azimuth β; R_a is the distance between the walking robot and the obstacle measured on the horizontal plane close to the ground at azimuth β, and R_b is the distance between the walking robot and the obstacle measured on the horizontal plane at the robot's own height at azimuth β.
For an azimuth β at which only a feature point of the red laser projection exists, the situation shown in Fig. 14, only a boss-type (raised) obstacle is present; the distance of the red laser projection point is taken as the obstacle distance, and the distance between the walking robot and the obstacle at azimuth β is computed with formula (11),
R_β = R_b    (11)
For an azimuth β at which only a feature point of the green laser projection exists, the situation shown in Fig. 13, only a roadside-type (low) obstacle is present; the distance of the green laser projection point is taken as the obstacle distance, and the distance between the walking robot and the obstacle at azimuth β is computed with formula (12),
R_β = R_a    (12)
For an azimuth β at which no feature point of either the red or the green laser projection exists, it is judged that there is no obstacle in that direction; the Bird-View transformed image can of course be checked further with an edge detection method to improve the detection accuracy.
The obstacle contour line generation module is used to generate automatically a contour line of the surrounding obstacles centred on the mobile robot, to facilitate obstacle avoidance and navigation. After the module for estimating the spatial distribution of obstacles between the key planes has finished processing, the obstacle distribution at every azimuth has been obtained; from these data an obstacle contour curve is drawn with the azimuth β as the abscissa and the obstacle distance as the ordinate, as shown in Fig. 12. As can be seen from Fig. 12, no obstacle exists between azimuths 140° and 168°; since the obstacle feature point computation module defines the azimuth of the robot's forward direction of travel as 90°, the robot's direction of travel needs to be adjusted: the target heading is 140 + (168 − 140)/2 = 154°, so the robot must turn its direction of travel by 64°.
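The heading adjustment in this example can be written out as a short sketch (the 90° forward-direction convention comes from the obstacle feature point module; the function name is ours):

```python
def heading_adjustment(beta1_deg, beta2_deg, forward_deg=90.0):
    """Target heading = centre of the obstacle-free sector [beta1, beta2];
    the second value returned is how far the robot must turn from its
    current forward direction (defined as azimuth 90 degrees)."""
    target = beta1_deg + (beta2_deg - beta1_deg) / 2.0
    return target, target - forward_deg

print(heading_adjustment(140.0, 168.0))  # (154.0, 64.0): turn 64 degrees toward azimuth 154
```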
The obstacle-avoidance path estimation module is used to judge whether the robot can pass through a non-obstacle region. From the obstacle contour line of Fig. 12 it can be estimated whether the non-obstacle region is passable. The width of the non-obstacle region is estimated first: from Fig. 12 the region with no obstacle spans the azimuths 140° to 168°, i.e. 28°, and with the obstacle distances at azimuths below 140° and above 168° the width is estimated with formula (13),
P_path = 2 · R_m · tan((β₂ − β₁)/2)    (13)
where P_path is the width of the region in which no obstacle exists, β₁ is the azimuth of the edge between the obstacle on the left and the obstacle-free region, 140° in Fig. 12, β₂ is the azimuth of the edge between the obstacle on the right and the obstacle-free region, 168° in Fig. 12, and R_m is the mean of the obstacle distances at azimuths β₁ and β₂;
finally, the maximum width of the robot is multiplied by 1.1 to obtain Robot_W; if Robot_W is smaller than the width P_path of the region in which no obstacle exists, it is judged that the robot can pass through this region, otherwise it is judged that it cannot. It should be noted that every time the mobile robot moves a certain distance it automatically generates an obstacle contour line corresponding to its new position, and the mobile robot automatically revises its obstacle-avoidance and navigation route according to the current obstacle contour line.
The working principle of the omnidirectional vision sensor is as follows: light directed at the centre of the hyperbolic mirror is reflected towards its virtual focus according to the mirror properties of the hyperboloid. The real object is reflected by the hyperbolic mirror into the collecting lens, where it is imaged; a point P(x, y) on this imaging plane corresponds to the coordinates A(X, Y, Z) of a point on the real object in space.
In Fig. 5: 2-1 is the hyperbolic mirror, 12 the incident ray, 13 the real focus Om (0, 0, c) of the hyperbolic mirror, 14 the virtual focus of the hyperbolic mirror, i.e. the centre Oc (0, 0, −c) of the camera unit 6, 15 the reflected ray, 16 the imaging plane, 17 the space coordinates A(X, Y, Z) of the real object, 18 the space coordinates of the image incident on the hyperboloid mirror, and 19 the point P(x, y) reflected onto the imaging plane.
The optical system formed by the hyperbolic mirror shown in Fig. 5 can be described by the following five equations;
((X² + Y²)/a²) − ((Z − c)²/b²) = −1, for Z > 0    (14)
c = √(a² + b²)    (15)
β = tan⁻¹(Y/X)    (16)
α = tan⁻¹{[(b² + c²)·sin γ − 2bc] / [(b² + c²)·cos γ]}    (17)
γ = tan⁻¹[f / √(x² + y²)]    (18)
In these equations, X, Y, Z are space coordinates; c denotes the focus of the hyperbolic mirror, and 2c is the distance between the two foci; a and b are the lengths of the real and imaginary axes of the hyperbolic mirror; β is the angle between the projection of the incident ray on the XY plane and the X axis, i.e. the azimuth; α is the angle between the projection of the incident ray on the XZ plane and the X axis, called the incidence angle here (the angle of depression when α ≥ 0 and the angle of elevation when α < 0); f is the distance from the image plane to the virtual focus of the hyperbolic mirror; γ is the angle between the catadioptric ray and the Z axis; and x, y denote a point on the image plane.

Claims (8)

1. A robot vision system based on an active panoramic vision sensor, characterised in that it comprises an omnidirectional vision sensor, a key-plane laser light source, and a microprocessor for performing three-dimensional stereoscopic measurement, obstacle detection, obstacle avoidance and navigation on the omnidirectional image, the omnidirectional vision sensor and the key-plane laser light source being arranged on the same axis;
the omnidirectional vision sensor comprises a hyperboloid mirror, an upper cover, a transparent semicircular outer cover, a lower fixing seat, a camera unit holder, a camera unit, a connecting unit and an upper housing; the hyperboloid mirror is fixed on the upper cover; the connecting unit joins the lower fixing seat and the transparent semicircular outer cover into one piece; the transparent semicircular outer cover is fixed together with the upper cover and the upper housing; the camera unit is fixed on the camera unit holder, the camera unit holder is fixed on the lower fixing seat, and the camera unit is connected with the microprocessor;
the key-plane laser light source comprises a red line-laser generating unit, a line-laser combination unit, a green line-laser generating unit, a support rod and a chassis; the green line-laser generating units are fixed in the holes of the line-laser combination unit to form an omnidirectional laser light source unit emitting green light; the red line-laser generating units are fixed in the holes of the line-laser combination unit to form an omnidirectional laser light source unit emitting red light; the support rod is fixed vertically on the chassis, and the green-light and red-light omnidirectional laser light source units are fixed at the two ends of the support rod;
the microprocessor comprises:
a video image reading module, used to read the video image of the omnidirectional vision sensor and store it in the storage unit, its output being connected with the omnidirectional vision sensor calibration module and the Bird-View transformation module;
an omnidirectional vision sensor calibration module, used to determine the parameters of the mapping between points in three-dimensional space and the corresponding two-dimensional image points on the camera imaging plane, the calibrated parameters being stored in the storage unit;
a Bird-View transformation module, used to read the calibration parameter values of the omnidirectional vision sensor from the storage unit and to correct, through the Bird-View transformation, the severe distortion introduced by omnidirectional imaging; the omnidirectional image is transformed into a Bird-View view, i.e. the image a bird would see looking down at the ground, and the Bird-View view obtained after the transformation is stored in the storage unit;
an omnidirectional laser information reading module, used to process the Bird-View view to obtain quickly the geometric information of the three-dimensional data points of the space or of object surfaces, its computation results being submitted to the obstacle feature point computation module;
an obstacle feature point computation module, used to compute and mark the distribution of obstacle feature points around the mobile robot, its computation results being submitted to the module for estimating the spatial distribution of obstacles between the key planes;
a module for estimating the spatial distribution of obstacles between the key planes, used to assess, within the omnidirectional field of view, obstacles that exist between the two key planes, the estimation results being submitted to the obstacle contour line generation module;
an obstacle contour line generation module, used to generate automatically a contour line of the surrounding obstacles centred on the mobile robot;
an obstacle-avoidance path estimation module, used to judge whether the robot can pass through a non-obstacle region; the width of the non-obstacle region is first estimated, from the azimuth information of the region in which no obstacle exists and the obstacle distances bounding that region, with formula (13),
P_path = 2 · R_m · tan((β₂ − β₁)/2)    (13)
where P_path is the width of the region in which no obstacle exists, β₁ is the azimuth of the edge between the obstacle on the left and the obstacle-free region, β₂ is the azimuth of the edge between the obstacle on the right and the obstacle-free region, and R_m is the mean of the obstacle distances at azimuths β₁ and β₂;
finally, the maximum width of the robot is multiplied by 1.1 to obtain Robot_W; if Robot_W is smaller than the width P_path of the region in which no obstacle exists, it is judged that the robot can pass through this region, otherwise it is judged that it cannot.
2. The robot vision system based on an active panoramic vision sensor according to claim 1, characterised in that: in the omnidirectional vision sensor calibration module, the process of determining the parameters of the mapping between points in three-dimensional space and the two-dimensional image points on the camera imaging plane is to establish the relation between an image-plane point and its incident ray, i.e. the correspondence with the incidence angle, expressed by formula (6);
tan α = ||u″|| / f(||u″||) = ||u″|| / (a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N)    (6)
where α is the incidence angle of the space object point, ||u″|| is the distance from a sensor-plane point to the centre of that plane, and a₀, a₁, a₂, …, a_N are the calibrated intrinsic and extrinsic parameters of the omnidirectional vision sensor; formula (6) is used to build a mapping table between every pixel of the imaging plane and its incidence angle; Table 1 gives the calibration results of the single-viewpoint omnidirectional vision sensor used in the present invention.
Table 1. Calibration results of the ODVS.
3. The robot vision system based on an active panoramic vision sensor according to claim 2, characterised in that: in the Bird-View transformation module, the Bird-View transformation can be regarded as an inverse transformation of the imaging process of the omnidirectional vision sensor; the calibration result of formula (6) is used to perform the Bird-View transformation and convert the omnidirectional image into a Bird-View image, in which the distance R from the projection of a space object point P onto the horizontal ground to the projection of the viewpoint Om is computed with formula (7),
R = h · cos β / tan α = h · cos β · f(||u″||) / ||u″||
  = h · cos β · (a₀ + a₁·||u″|| + a₂·||u″||² + … + a_N·||u″||^N) / ||u″||    (7)
where R is the distance on the horizontal ground between the projections of the space object point P and of the viewpoint Om, h is the vertical distance between the space object point P and the viewpoint Om, α is the incidence angle of the space object point P at the viewpoint Om, and β is the azimuth of the space object point P with respect to the viewpoint Om.
4. like claim 1 or 2 or 3 described robotic vision systems based on the active panoramic vision sensor; It is characterized in that: in described comprehensive the laser intelligence read module; Be incident upon on two different crucial faces by said crucial face LASER Light Source; Produce a planar projective green laser near the ground surface level, on the surface level of certain altitude overhead, produce a planar projective red laser; Through resolving green laser and red laser incident point on the Bird-View view;
The green and red laser projection points on the Bird-View view are resolved on the basis that their pixel brightness exceeds the mean brightness of the imaging plane: first the RGB colour space of the Bird-View view is converted into the HSI colour space, then 1.2 times the mean brightness of the imaging plane is used as the threshold for extracting the green and red laser projection points; after extraction, the green and red laser projection points still need to be distinguished, which in the present invention is done according to the hue value H of the HSI colour space: a pixel whose hue value H lies in (0, 30) is judged to be a red laser projection point, a pixel whose hue value H lies in (105, 135) is judged to be a green laser projection point, and the remaining pixels are judged to be interference.
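The brightness-and-hue segmentation described above could look roughly like the NumPy sketch below; the HSI hue conversion used here is a standard textbook formulation and the 30/105/135-degree hue bounds follow the claim, while everything else (array layout, epsilon, function name) is an assumption of this sketch.

import numpy as np

def extract_laser_points(rgb):
    # rgb: H x W x 3 array. Keep pixels brighter than 1.2x the mean
    # intensity, then split them into red (hue 0-30 deg) and green
    # (hue 105-135 deg) laser projection points.
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    bright = intensity > 1.2 * intensity.mean()

    # Hue in degrees computed from RGB (standard HSI formulation).
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)

    red_mask = bright & (hue >= 0.0) & (hue <= 30.0)
    green_mask = bright & (hue >= 105.0) & (hue <= 135.0)
    return red_mask, green_mask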
5. The robotic vision system based on the active panoramic vision sensor as claimed in claim 1, 2 or 3, characterized in that: in the obstacle feature point computing module, the obstacle feature points are determined first; in the present invention an obstacle feature point is determined as the intersection of a laser projection line of a key surface with a near-vertical edge line on the Bird-View conversion image;
The near-vertical edge lines on the Bird-View conversion image are obtained by traversing the Bird-View conversion image with the Sobel vertical-direction template, which is given by formula (8);
W_v = \frac{1}{4}\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad (8)
For the two intersection points Pa and Pb thus obtained, i.e. the obstacle feature points, the distances R_a and R_b between their projections and the projection of the observation point O_m on the horizontal ground plane are computed with formula (9),
R_a = h_{greenline} \times \cos\beta / \tan\alpha_a \qquad (9)
R_b = h_{redline} \times \cos\beta / \tan\alpha_b
In the formula, R_a is the distance between the projections of feature point Pa and of the observation point O_m on the horizontal ground plane, h_greenline is the distance from feature point Pa to the observation point O_m, β is the azimuth of feature point Pa and α_a is its incident angle; R_b is the distance between the projections of feature point Pb and of the observation point O_m on the horizontal ground plane, h_redline is the distance from feature point Pb to the observation point O_m, β is the azimuth of feature point Pb and α_b is its incident angle.
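The edge-template traversal of formula (8) and the feature-point distances of formula (9) might be combined as in this sketch; the edge threshold of 30 and the use of SciPy's convolution are assumptions for illustration only.

import math
import numpy as np
from scipy.ndimage import convolve

# Sobel vertical-direction template of formula (8).
W_V = 0.25 * np.array([[-1.0, -2.0, -1.0],
                       [ 0.0,  0.0,  0.0],
                       [ 1.0,  2.0,  1.0]])

def edge_candidates(bird_view_gray, threshold=30.0):
    # Traverse the Bird-View image with the template and keep strong
    # responses as candidate edge pixels (threshold is illustrative).
    response = convolve(bird_view_gray.astype(np.float64), W_V)
    return np.abs(response) > threshold

def feature_point_distance(h_line, beta_rad, alpha_rad):
    # Formula (9): R = h_line * cos(beta) / tan(alpha),
    # with h_line = h_greenline for Pa and h_redline for Pb.
    return h_line * math.cos(beta_rad) / math.tan(alpha_rad)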
6. The robotic vision system based on the active panoramic vision sensor as claimed in claim 1, 2 or 3, characterized in that: in the obstacle spatial distribution estimation module between the key surfaces, for an azimuth β at which feature points of both the red laser and the green laser projections exist, the distance between the walking robot and the obstacle is taken as the distance of the nearer projection point, computed with formula (10),
R_\beta = \min(R_a, R_b) \qquad (10)
In the formula, R_β is the distance between the walking robot and the obstacle at azimuth β; R_a is the distance between the walking robot and the obstacle measured on the near-ground horizontal plane at azimuth β, and R_b is the distance between the walking robot and the obstacle measured on the horizontal plane at the robot's own height at azimuth β;
For an azimuth β at which only feature points of the red laser projection exist, only a raised (boss-type) obstacle is present; the distance of the red laser projection point is then taken as the obstacle distance, and the distance between the walking robot and the obstacle at azimuth β is computed with formula (11),
R_\beta = R_b \qquad (11)
For an azimuth β at which only feature points of the green laser projection exist, only a roadside-type obstacle is present; the distance of the green laser projection point is then taken as the obstacle distance, and the distance between the walking robot and the obstacle at azimuth β is computed with formula (12),
R_\beta = R_a \qquad (12)
If for an azimuth β there are no feature points of either the red or the green laser projection, that azimuth direction is judged to contain no obstacle.
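The per-azimuth selection rule of formulas (10)-(12) reduces to a small function; the use of None to denote a missing projection point is an assumption of this sketch.

def obstacle_distance(r_a, r_b):
    # r_a: distance from the green (near-ground key surface) laser point, or None.
    # r_b: distance from the red (upper key surface) laser point, or None.
    if r_a is not None and r_b is not None:
        return min(r_a, r_b)   # formula (10): both key surfaces hit
    if r_b is not None:
        return r_b             # formula (11): raised (boss-type) obstacle
    if r_a is not None:
        return r_a             # formula (12): roadside-type obstacle
    return None                # no obstacle in this azimuth direction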
7. The robotic vision system based on the active panoramic vision sensor as claimed in claim 1, 2 or 3, characterized in that: in the obstacle contour line generation module, after the processing of the obstacle spatial distribution estimation module between the key surfaces, the obstacle distribution at every azimuth has been obtained; from these data the obstacle contour line curve is drawn with the azimuth β as the abscissa and the obstacle distance as the ordinate.
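Drawing the contour line described in claim 7 amounts to plotting azimuth against obstacle distance; the matplotlib sketch below is illustrative, with labels and units chosen by assumption.

import matplotlib.pyplot as plt

def plot_obstacle_contour(betas_deg, distances):
    # Obstacle contour line: azimuth on the abscissa, obstacle distance
    # on the ordinate; azimuths without an obstacle (None) are skipped.
    pts = [(b, d) for b, d in zip(betas_deg, distances) if d is not None]
    xs, ys = zip(*pts)
    plt.plot(xs, ys, marker=".")
    plt.xlabel("azimuth beta (deg)")
    plt.ylabel("obstacle distance")
    plt.title("Obstacle contour line")
    plt.show()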
8. The robotic vision system based on the active panoramic vision sensor as claimed in claim 1, 2 or 3, characterized in that: the optical system constituted by the hyperbolic mirror is expressed by the following five equations;
\frac{X^2 + Y^2}{a^2} - \frac{(Z - c)^2}{b^2} = -1, \quad Z > 0 \qquad (14)
c = \sqrt{a^2 + b^2} \qquad (15)
\beta = \tan^{-1}(Y/X) \qquad (16)
\alpha = \tan^{-1}\!\left[\frac{(b^2 + c^2)\sin\gamma - 2bc}{(b^2 + c^2)\cos\gamma}\right] \qquad (17)
\gamma = \tan^{-1}\!\left[\frac{f}{\sqrt{x^2 + y^2}}\right] \qquad (18)
In the formulas, X, Y, Z are space coordinates, c denotes the focal parameter of the hyperbolic mirror and 2c is the distance between its two foci; a and b are respectively the lengths of the real axis and the imaginary axis of the hyperbolic mirror; β is the angle between the projection of the incident ray on the XY plane and the X axis, i.e. the azimuth; α is the angle between the projection of the incident ray on the XZ plane and the X axis, called here the incident angle, a depression angle when α is greater than or equal to 0 and an elevation angle when α is less than 0; f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror, γ is the angle between the catadioptric ray and the Z axis, and x, y denote a point on the imaging plane.
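A compact sketch of formulas (15)-(18) for the hyperbolic-mirror geometry; the mirror parameters in the example are hypothetical, and the azimuth is computed here from the image point (x, y), which for this catadioptric geometry coincides with the azimuth of the space point.

import math

def mirror_angles(x, y, a, b, f):
    # a, b: real and imaginary axis lengths of the hyperbolic mirror;
    # f: distance from the imaging plane to the virtual focus.
    c = math.sqrt(a * a + b * b)                      # formula (15)
    beta = math.atan2(y, x)                           # formula (16)
    gamma = math.atan2(f, math.hypot(x, y))           # formula (18)
    alpha = math.atan2((b * b + c * c) * math.sin(gamma) - 2.0 * b * c,
                       (b * b + c * c) * math.cos(gamma))  # formula (17)
    return beta, gamma, alpha

# Hypothetical parameters, for illustration only.
print(mirror_angles(50.0, 30.0, 20.0, 40.0, 12.0))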
CN201210133595.9A 2012-04-28 2012-04-28 Vision system based on active panoramic vision sensor for robot Expired - Fee Related CN102650886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210133595.9A CN102650886B (en) 2012-04-28 2012-04-28 Vision system based on active panoramic vision sensor for robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210133595.9A CN102650886B (en) 2012-04-28 2012-04-28 Vision system based on active panoramic vision sensor for robot

Publications (2)

Publication Number Publication Date
CN102650886A true CN102650886A (en) 2012-08-29
CN102650886B CN102650886B (en) 2014-03-26

Family

ID=46692899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210133595.9A Expired - Fee Related CN102650886B (en) 2012-04-28 2012-04-28 Vision system based on active panoramic vision sensor for robot

Country Status (1)

Country Link
CN (1) CN102650886B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11305833A (en) * 1998-04-20 1999-11-05 Yaskawa Electric Corp Device for generating land mark information
EP2256667A1 (en) * 2009-05-28 2010-12-01 Honda Research Institute Europe GmbH Driver assistance system or robot with dynamic attention module
JP2011008320A (en) * 2009-06-23 2011-01-13 Toyota Motor Corp Autonomous mobile unit, self position estimating device, and program
CN101655347A (en) * 2009-08-20 2010-02-24 浙江工业大学 Driving three-dimensional omni-directional vision sensor based on laser diode light source
CN101958154A (en) * 2010-02-11 2011-01-26 浙江工业大学 Platform automatic leveling device based on machine vision
CN102289144A (en) * 2011-06-30 2011-12-21 浙江工业大学 Intelligent three-dimensional (3D) video camera equipment based on all-around vision sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TANG YIPING ET AL.: "Design of a vehicle violation detection system based on an omnidirectional vision sensor", 《计算机测量与控制》 (Computer Measurement & Control) *
TANG YIPING ET AL.: "Design of an omnidirectional vision sensor without blind spots", 《仪器仪表学报》 (Chinese Journal of Scientific Instrument) *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102980513B (en) * 2012-11-02 2016-01-20 浙江工业大学 Monocular full-view stereo vision sensor centered by thing
CN102980513A (en) * 2012-11-02 2013-03-20 浙江工业大学 Monocular panoramic three-dimensional vision sensor focusing on objects
CN103048049B (en) * 2012-12-19 2014-12-24 安徽国防科技职业学院 Obstacle avoidance detecting device of intelligent assembling robot
CN103048049A (en) * 2012-12-19 2013-04-17 安徽国防科技职业学院 Obstacle avoidance detecting device of intelligent assembling robot
CN104634328A (en) * 2013-11-14 2015-05-20 沈阳新松机器人自动化股份有限公司 Spatial measurement robot and spatial measurement method thereof
CN104216408A (en) * 2014-09-04 2014-12-17 兰州理工大学 Wireless remote control terrain exploration trolley
CN105023270A (en) * 2015-05-29 2015-11-04 汤一平 Proactive 3D stereoscopic panorama visual sensor for monitoring underground infrastructure structure
CN107305380A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 A kind of automatic obstacle-avoiding method and apparatus
CN107004298B (en) * 2016-04-25 2020-11-10 深圳前海达闼云端智能科技有限公司 Method and device for establishing three-dimensional model of robot and electronic equipment
CN107004298A (en) * 2016-04-25 2017-08-01 深圳前海达闼云端智能科技有限公司 Method and device for establishing three-dimensional model of robot and electronic equipment
CN105953787A (en) * 2016-04-27 2016-09-21 河北德普电器有限公司 Robot navigation map generation system
CN107037813A (en) * 2017-04-28 2017-08-11 成都科力夫科技有限公司 A kind of system for projecting control machine people motion
CN107966989A (en) * 2017-12-25 2018-04-27 北京工业大学 A kind of robot autonomous navigation system
CN108415413A (en) * 2018-03-28 2018-08-17 华南农业大学 A kind of intelligent forklift part obstacle-avoiding route planning method based on round region of interest
CN108415413B (en) * 2018-03-28 2021-03-30 华南农业大学 Intelligent forklift local obstacle avoidance path planning method based on circular useful domain
CN111880166A (en) * 2018-05-03 2020-11-03 联发科技股份有限公司 Object detection method and device
CN112601496A (en) * 2018-08-22 2021-04-02 皇家飞利浦有限公司 3D tracking of interventional medical devices
CN109916408A (en) * 2019-02-28 2019-06-21 深圳市鑫益嘉科技股份有限公司 Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN111862302A (en) * 2019-04-12 2020-10-30 北京城市网邻信息技术有限公司 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
CN112155487A (en) * 2019-08-21 2021-01-01 追创科技(苏州)有限公司 Sweeping robot, control method of sweeping robot and storage medium
CN110989642A (en) * 2019-11-27 2020-04-10 中国民用航空总局第二研究所 Aircraft ground traction intelligent auxiliary method and system based on three-dimensional path tracking
CN110989642B (en) * 2019-11-27 2023-09-01 中国民用航空总局第二研究所 Intelligent aircraft ground traction auxiliary method and system based on three-dimensional path tracking
WO2021179448A1 (en) * 2020-03-13 2021-09-16 厦门理工学院 Obstacle detector, method, storage medium, and mobile machine
CN113955136A (en) * 2021-09-02 2022-01-21 浙江图盛输变电工程有限公司温州科技分公司 Unmanned aerial vehicle target hanging point calibration transfer station for automatic power grid inspection
CN113955136B (en) * 2021-09-02 2024-04-05 浙江图盛输变电工程有限公司温州科技分公司 Automatic unmanned aerial vehicle target hanging point calibration transfer station that patrols and examines of electric wire netting
CN118331282A (en) * 2024-06-13 2024-07-12 四川大学 Barrier avoiding method, device and system for desert tree planting robot
CN118331282B (en) * 2024-06-13 2024-08-20 四川大学 Barrier avoiding method, device and system for desert tree planting robot

Also Published As

Publication number Publication date
CN102650886B (en) 2014-03-26

Similar Documents

Publication Publication Date Title
CN102650886B (en) Vision system based on active panoramic vision sensor for robot
Zhang et al. Intelligent collaborative localization among air-ground robots for industrial environment perception
CN106681353B (en) The unmanned plane barrier-avoiding method and system merged based on binocular vision with light stream
De Silva et al. Fusion of LiDAR and camera sensor data for environment sensing in driverless vehicles
Scaramuzza et al. Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes
CN111914715B (en) Intelligent vehicle target real-time detection and positioning method based on bionic vision
CN112785702A (en) SLAM method based on tight coupling of 2D laser radar and binocular camera
JP6589926B2 (en) Object detection device
CN102005039B (en) Fish-eye camera stereo vision depth measuring method based on Taylor series model
CN102679959B (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN105389543A (en) Mobile robot obstacle avoidance device based on all-dimensional binocular vision depth information fusion
JP7502440B2 (en) Method for measuring the topography of an environment - Patents.com
Nienaber et al. A comparison of low-cost monocular vision techniques for pothole distance estimation
CN106228538A (en) Binocular vision indoor orientation method based on logo
CN103424112A (en) Vision navigating method for movement carrier based on laser plane assistance
Heng et al. Real-time photo-realistic 3d mapping for micro aerial vehicles
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
Bok et al. Extrinsic calibration of non-overlapping camera-laser system using structured environment
CN102081798A (en) Epipolar rectification method for fish-eye stereo camera pair
John et al. Automatic calibration and registration of lidar and stereo camera without calibration objects
Hillemann et al. UCalMiCeL–Unified intrinsic and extrinsic calibration of a multi-camera-system and a laserscanner
Zhang et al. An overlap-free calibration method for LiDAR-camera platforms based on environmental perception
Jensen et al. Laser range imaging using mobile robots: From pose estimation to 3d-models
Kim et al. LiDAR configuration comparison for urban mapping system
Gehrig et al. 6D vision goes fisheye for intersection assistance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140326