CN105014675B - Narrow-space intelligent mobile robot vision navigation system and method - Google Patents

Narrow-space intelligent mobile robot vision navigation system and method

Info

Publication number: CN105014675B (application published as CN105014675A)
Application number: CN201410281773.1A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: image, cross, location, point, analyzing
Inventors: 娄小平, 董明利, 吕乃光, 林义闽, 王达, 姚艳彬, 邹方, 魏志强
Assignees (current and original): AVIC Beijing Aeronautical Manufacturing Technology Research Institute; Beijing Information Science and Technology University
Application filed 2014-06-20 by AVIC Beijing Aeronautical Manufacturing Technology Research Institute and Beijing Information Science and Technology University, with priority to CN201410281773.1A


Abstract

The invention provides a narrow-space intelligent mobile robot vision navigation system and method. The system includes an image acquisition terminal and an analysis and processing terminal; the image acquisition terminal includes a cross-line laser projector, a first miniature camera, a second miniature camera, and a control unit. The method comprises the following steps: the cross-line laser projector projects a cross light-stripe image; the first miniature camera and the second miniature camera each capture the cross light-stripe image and output it to the analysis and processing terminal; the analysis and processing terminal analyzes the images and derives the three-dimensional coordinate set of the light stripe; from that coordinate set, the analysis and processing terminal determines the region in which an obstacle appears ahead; finally, the analysis and processing terminal returns the region location to the control unit of the image acquisition terminal, which controls the robot to avoid the obstacle. The invention enables accurate autonomous obstacle-avoidance navigation of a robot in complex, unknown, narrow working environments.

Description

Narrow-space intelligent mobile robot vision navigation system and method
Technical field
The present invention relates to the technical field of visual navigation, and in particular to a narrow-space intelligent mobile robot vision navigation system and method.
Background art
An intelligent mobile robot is a robot system that acquires environmental information and its own state through on-board sensors, detects and identifies obstacles and targets in the environment, and thereby moves autonomously from a starting position to a target position and performs tasks on target objects. It possesses complete perception, analysis, decision-making, and execution modules, and can work autonomously in its environment much as a human would. Visual navigation of intelligent robots in unknown environments draws on artificial intelligence, pattern recognition theory, automatic control theory, sensor technology, computer technology, and image analysis and processing; it has become a frontier problem of artificial intelligence and advanced robotics, and is one of the current focuses and difficulties of intelligent robot research.
In known or structured environments, research on autonomous navigation control theory and methods for mobile robots has already produced many research and application results. In present and future practical applications, however, mobile robots often need to operate in unknown environments, and the problem of autonomous robot navigation in unknown or unstructured environments has not yet been solved well. Sensors commonly used in autonomous mobile robot navigation include ultrasonic sensors, infrared sensors, laser range-finding radar, chemical sensors, and vision sensors. Compared with the others, vision sensors offer higher intelligence and processing speed, and can overcome factors such as the scarcity of prior knowledge and the uncertainty of unknown environments, so their application in mobile robots is receiving growing attention.
Existing vision navigation systems, however, cannot accurately identify the obstacles and target objects appearing ahead of the robot in narrow, complex environments.
Therefore, a narrow-space intelligent mobile robot vision navigation system and method are needed that suit narrow working environments and accurately detect and identify unknown obstacles and target objects.
Summary of the invention
It is an object of the present invention to provide a narrow-space intelligent mobile robot vision navigation system and method.
According to one aspect of the invention, a narrow-space intelligent mobile robot vision navigation system is provided, comprising: an image acquisition terminal, for collecting a cross light-stripe image and outputting it to an analysis and processing terminal, the image acquisition terminal including: a cross-line laser projector, for projecting the cross light-stripe image; a first miniature camera and a second miniature camera, for respectively photographing the cross light-stripe image projected onto an obstacle to obtain a first location image and a second location image, and transmitting the cross light-stripe image to the analysis and processing terminal; and a control unit, for controlling the travel of the image acquisition terminal according to data representing an obstacle position received from the analysis and processing terminal; and the analysis and processing terminal, for completing, according to the cross light-stripe images input respectively from the first miniature camera and the second miniature camera, matching of the cross light stripe between the first location image and the second location image; obtaining, with the cross point of the cross light stripe as boundary, the stripe-center coordinate sets located respectively in the first and second location images on the same side of the cross point, and the stripe-center coordinate sets located respectively in the two images on the other side of the cross point; computing the three-dimensional coordinate set of the cross light-stripe image; judging the region in which an obstacle appears; and finally returning the region to the control unit, to control the travel of the image acquisition terminal.
Preferably, the algorithm by which the analysis and processing terminal obtains, with the cross point of the cross light stripe as boundary, the stripe-center coordinate sets located respectively in the first and second location images on the same side of the cross point, and those located on the other side, is: a) extract the cross light-stripe image from the first location image and the second location image respectively, and choose a gray threshold to binarize it; b) scan the cross light-stripe image in the first and second location images and, with the cross point as boundary, obtain all stripe pixel sets located respectively in the two images on the same side of the cross point, and all stripe pixel sets located respectively in the two images on the other side; and c) compute the stripe pixel center coordinates, thereby extracting the stripe-center coordinate sets located respectively in the two images on the same side of the cross point, and those located on the other side.
Preferably, the analysis and processing terminal determines the three-dimensional coordinates (x, y, z) of any point of the cross light-stripe image according to formula (1):
$$x = \frac{b\,(u_{Li} - u_0)}{u_{Li} - u_{Ri}}, \qquad y = \frac{b\,\alpha_x\,(v_{Li} - v_0)}{\alpha_y\,(u_{Li} - u_{Ri})}, \qquad z = \frac{b\,\alpha_x}{u_{Li} - u_{Ri}} \qquad (1)$$
where α_x and α_y are respectively the effective focal lengths of the camera along the x- and y-axes, (u_0, v_0) is the principal-point image coordinate of the camera, b is the baseline distance, and i ∈ {1, 2}, where 1 and 2 denote, respectively, the index of the stripe-center coordinate sets located in the first and second location images on the same side of the cross point, and the index of those located on the other side.
Preferably, the algorithm by which the analysis and processing terminal determines the region in which an obstacle appears is: first divide the direction of travel into four regions (upper-left, upper-right, lower-right, and lower-left); then count, for each of the four regions, the number of coordinates within the range z_P ≤ H, giving {N_A, N_B, N_C, N_D}, where H is the distance alarm threshold. When N_i ≥ TH, with i ∈ {A, B, C, D}, an obstacle is considered to have appeared in the i-th region, where TH is a threshold on the number of coordinates.
Preferably, the size of the image acquisition terminal is less than 50 mm³.
Preferably, the first miniature camera and the second miniature camera are fixed on the same horizontal plane.
Preferably, the first miniature camera and the second miniature camera are provided with LED light sources.
According to a further aspect of the invention, a narrow-space intelligent mobile robot vision navigation method is provided. The system according to the method includes an image acquisition terminal and an analysis and processing terminal, the image acquisition terminal including a cross-line laser projector, a first miniature camera, a second miniature camera, and a control unit. The method comprises the following steps: a) the cross-line laser projector projects the cross-line light-stripe image; b) the first and second miniature cameras each capture the cross light-stripe image and output it to the analysis and processing terminal; c) the analysis and processing terminal analyzes the images and derives the three-dimensional coordinate set {x_P, y_P, z_P} of the light stripe; d) from {x_P, y_P, z_P}, the analysis and processing terminal determines the region location of any obstacle appearing ahead; and e) the analysis and processing terminal returns the region location to the control unit of the image acquisition terminal, which controls the robot to avoid the obstacle.
Preferably, in step c, the method by which the analysis and processing terminal analyzes the images and derives the three-dimensional coordinate set {x_P, y_P, z_P} of the cross light-stripe image comprises the steps: c1) extract the cross light-stripe image from the first location image and the second location image respectively, and choose a gray threshold to binarize it; c2) scan the cross light-stripe image in the first and second location images and, with the cross point as boundary, obtain all stripe pixel sets located respectively in the two images on the same side of the cross point, and all stripe pixel sets located respectively in the two images on the other side; c3) compute the stripe pixel center coordinates, thereby extracting the stripe-center coordinate sets located respectively in the two images on the same side of the cross point, and those located on the other side; and c4) determine the three-dimensional coordinates (x, y, z) of any point of the cross light-stripe image according to formula (1), thereby obtaining the three-dimensional coordinate set {x_P, y_P, z_P} of the cross light-stripe image:
$$x = \frac{b\,(u_{Li} - u_0)}{u_{Li} - u_{Ri}}, \qquad y = \frac{b\,\alpha_x\,(v_{Li} - v_0)}{\alpha_y\,(u_{Li} - u_{Ri})}, \qquad z = \frac{b\,\alpha_x}{u_{Li} - u_{Ri}} \qquad (1)$$
where α_x and α_y are respectively the effective focal lengths of the camera along the x- and y-axes, (u_0, v_0) is the principal-point image coordinate of the camera, b is the baseline distance, and i ∈ {1, 2}, where 1 and 2 denote, respectively, the index of the stripe-center coordinate sets located in the first and second location images on the same side of the cross point, and the index of those located on the other side.
Preferably, in step d, the algorithm by which the analysis and processing terminal determines the region in which an obstacle appears is: first divide the direction of travel into four regions, namely upper-left region A {x_P ∈ [0, b/2], y_P ≥ 0}, upper-right region B {x_P ∈ (b/2, b], y_P ≥ 0}, lower-right region C {x_P ∈ (b/2, b], y_P < 0}, and lower-left region D {x_P ∈ [0, b/2], y_P < 0}; then count, for each of the four regions, the number of coordinates within the range z_P ≤ H, giving {N_A, N_B, N_C, N_D}, where H is the distance alarm threshold. When N_i ≥ TH, an obstacle is considered to have appeared in the i-th region, where i ∈ {A, B, C, D} and TH is a threshold on the number of coordinates.
The narrow-space intelligent mobile robot vision navigation system and method of the present invention are tailored to the particular accuracy requirements of autonomous navigation and obstacle avoidance for robots moving in unknown, narrow assembly spaces. In complex, unknown, narrow working environments, they accurately detect and identify unknown obstacles and target objects, and achieve autonomous control and autonomous obstacle-avoidance navigation of the robot.
Brief description of the drawings
With reference to the accompanying drawings, the objects, functions, and advantages of the present invention will be illustrated by the following description of its embodiments, in which:
Fig. 1 schematically shows a block diagram of the narrow-space intelligent mobile robot vision navigation system and method of the present invention.
Fig. 2 schematically shows a perspective view of the image acquisition terminal.
Fig. 3 schematically shows a flow chart of the analysis performed by the analysis and processing terminal.
Fig. 4 schematically shows the coordinate systems of the first miniature camera and the second miniature camera.
Fig. 5 schematically shows the storage of light-stripe regions.
Fig. 6 schematically shows the coordinate system used to represent obstacle regions.
Fig. 7 schematically shows a flow chart of the narrow-space intelligent mobile robot vision navigation method according to the present invention.
Detailed description of the invention
The objects and functions of the present invention, and methods for achieving them, will be illustrated with reference to exemplary embodiments. The present invention is not, however, limited to the exemplary embodiments disclosed below; it can be realized in different forms. The essence of the description is only to help those skilled in the relevant art comprehensively understand the specific details of the invention.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the drawings, identical reference numerals denote identical or similar parts, or identical or similar steps.
Fig. 1 schematically shows a block diagram of the narrow-space intelligent mobile robot vision navigation system of the present invention. As shown in Fig. 1, the narrow-space intelligent mobile robot vision navigation system 100 of the present invention includes an image acquisition terminal 110 and an analysis and processing terminal 120. The image acquisition terminal 110 can be installed on a mobile robot and includes at least a first miniature camera 111a, a second miniature camera 111b, a cross-line laser projector 112, and a control unit 113.
The image acquisition terminal 110 collects the light-stripe image that the laser projector 112 projects onto an object, and outputs the image data to the analysis and processing terminal 120 in a wired or wireless manner: for example, wired transmission using an analog video signal BNC connector, video cable, and video capture card, or wireless transmission via technologies such as Bluetooth, WiFi, or NFC. Notably, the image acquisition terminal 110 is self-powered. Preferably, the size of the image acquisition terminal 110 is less than 50 mm³.
The image acquisition terminal 110 specifically includes:
The first miniature camera 111a and the second miniature camera 111b, which photograph the cross light-stripe image projected onto the object. The cross light-stripe image is emitted by the cross-line laser projector 112. The cross light-stripe image of the present invention need not be a perpendicular cross; it may also be another cross, for example a non-perpendicular intersecting stripe image.
Preferably, the two cameras are fixed on the same horizontal plane. Their optical axes are kept parallel, and the baseline distance between them is between 30 and 50 mm, preferably 40 mm. More preferably, the overall dimensions of each camera are 12 × 12 × 12 mm³.
The cross-line laser projector 112 projects the cross light-stripe image. Preferably, the cross-line laser projector is installed midway between the first miniature camera 111a and the second miniature camera 111b; in particular, the cross-line laser projector 112 lies on the straight line that is perpendicular to, and passes through the midpoint of, the line connecting the two miniature cameras 111a and 111b. Its overall dimensions are preferably 10 × 20 × 10 mm³. The angle between each stripe emitted by the cross-line laser projector 112 and the horizontal is preferably 45 degrees. The overall dimensions of the whole assembly are preferably no more than 50 × 25 × 30 mm³. Fig. 2 schematically shows a perspective view of the image acquisition terminal. As shown in Fig. 2, an illuminating LED light source 114 is further preferably installed on the outer rim of the first miniature camera 111a and the second miniature camera 111b, ensuring that the robot can work in dark environments; for example, a ring-shaped LED light source 114 can be mounted on the outer rim of each of the two cameras.
The control unit 113 controls the travel of the image acquisition terminal 110 to avoid obstacles, according to data representing an obstacle position received from the analysis and processing terminal 120.
Compared with existing vision navigation systems, the image acquisition terminal 110 of the present invention has the advantages of small volume and light weight, and can easily be installed on the front end of a mobile robot working in a narrow space. Through the analysis and processing terminal 120 described later, the image acquisition terminal 110 can obtain three-dimensional information along the direction of travel, so that the robot can smoothly avoid obstacles and reach its destination in complex, unknown, non-planar environments.
The analysis and processing terminal 120 matches the cross light stripe between the left and right images according to the cross light-stripe images input from the first and second miniature cameras, obtaining the coordinate sets {u_L, v_L} and {u_R, v_R} of the cross light-stripe pixels in the left and right camera coordinate systems; it then computes the three-dimensional coordinate set {x_P, y_P, z_P} of the cross light-stripe image and determines the region in which an obstacle appears, finally returning the obstacle's region location to the control unit to control the travel of the image acquisition terminal.
Preferably, the analysis and processing terminal 120 includes a display for showing the image analysis results.
For convenience of description hereinafter, the first miniature camera 111a is also called the left camera, and the cross light-stripe image it captures the left image; the second miniature camera 111b is also called the right camera, and the cross light-stripe image it captures the right image.
Fig. 3 schematically shows the flow of the analysis performed by the analysis and processing terminal 120. As shown in Fig. 3:
Step 301: complete matching of the cross light stripe between the left and right images. With the cross point W (see Fig. 5) as boundary, obtain the stripe-center coordinate sets located on the same side of the cross point in the left and right images (hereinafter the 1# fine sets), (u_L1, v_L1) and (u_R1, v_R1), and the stripe-center coordinate sets located on the other side of the cross point in the left and right images (hereinafter the 2# fine sets), (u_L2, v_L2) and (u_R2, v_R2).
The algorithm for obtaining the 1# fine sets (u_L1, v_L1) and (u_R1, v_R1) and the 2# fine sets (u_L2, v_L2) and (u_R2, v_R2) is explained below. Fig. 4 schematically shows the left and right camera coordinate systems. As shown in Fig. 4, O_L-X_L Y_L Z_L is the coordinate system of the left camera and O_R-X_R Y_R Z_R that of the right camera. In this model the two cameras are idealized as having identical internal parameters, a common x-axis, and parallel y- and z-axes. P is any point projected by the cross-line projector onto the measured object; after capture by the cameras, its corresponding coordinates on the left and right images are P_L and P_R respectively.
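The text states the triangulation formula of step 302 without derivation. For completeness, the following is a sketch of the standard parallel-axis stereo derivation under the idealized model of Fig. 4; it is supplied here for clarity and is not part of the original description.

```latex
% Pinhole projection in the left camera frame, point P = (x, y, z):
%   u_L = u_0 + \alpha_x x / z,   v_L = v_0 + \alpha_y y / z.
% The right camera is displaced by the baseline b along x:
%   u_R = u_0 + \alpha_x (x - b) / z.
\begin{align*}
u_L - u_R = \frac{\alpha_x b}{z}
  \;&\Rightarrow\; z = \frac{b\,\alpha_x}{u_L - u_R},\\
x = \frac{z\,(u_L - u_0)}{\alpha_x}
  \;&\Rightarrow\; x = \frac{b\,(u_L - u_0)}{u_L - u_R},\\
y = \frac{z\,(v_L - v_0)}{\alpha_y}
  \;&\Rightarrow\; y = \frac{b\,\alpha_x\,(v_L - v_0)}{\alpha_y\,(u_L - u_R)},
\end{align*}
% which is formula (1) with the 1#/2# set index i attached to u and v.
```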
First, extract the cross light-stripe image from the left image and the right image respectively, and choose a suitable gray threshold to binarize it.
Second, scan the stripe pixels row by row in the left or right image, taking the cross point W (see Fig. 5) as boundary, to obtain all stripe pixel sets located on the same side of the cross point in the left and right images (hereinafter the 1# rough sets), and all stripe pixel sets located on the other side (hereinafter the 2# rough sets). Fig. 5 schematically shows the storage of stripe regions. As shown in Fig. 5, each row of image pixels is searched from left to right for stripe regions; with the cross point W as boundary, the starting coordinate (x_b1, y) and ending coordinate (x_e1, y) of the first stripe region encountered, on the left side, are stored in the 1# rough set, while the starting coordinate (x_b2, y) and ending coordinate (x_e2, y) of the remaining stripe region, on the right side, are stored in the 2# rough set.
In particular, although here the 1# and 2# rough sets store the left and right halves of the cross light-stripe image, divided at the cross point W, the invention is not limited to this: the upper and lower halves of the cross light-stripe image may be stored instead, i.e. the image may be searched and scanned column by column, from top to bottom or from bottom to top. For example, when searching column by column from top to bottom, the starting and ending coordinates of the first stripe region encountered, on the upper side, are stored in the 1# rough set, and the starting and ending coordinates of the stripe region on the lower side are stored in the 2# rough set. A code sketch of the row-wise variant follows.
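As a concrete illustration of the binarization and row scan just described, the following Python sketch splits each row's stripe runs at the cross point W into the 1# and 2# rough sets. It is a minimal sketch, not the patent's implementation: the function name, the Otsu fallback for the "suitable gray threshold", and passing W as an (x, y) tuple are assumptions.

```python
import cv2
import numpy as np

def prestore_stripe_regions(gray, cross_w, thresh=None):
    """Binarize a stripe image, then scan each row and split the stripe
    runs at the cross point W into the 1# (left) and 2# (right) rough
    sets, stored as (x_begin, x_end, y) triples."""
    if thresh is None:
        # Otsu stands in for the patent's 'suitable gray threshold'
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    rough1, rough2 = [], []
    wx = cross_w[0]                            # x-coordinate of the cross point W
    for y in range(binary.shape[0]):
        row = binary[y] > 0
        edges = np.diff(row.astype(np.int8))
        starts = np.where(edges == 1)[0] + 1   # rising edges: run starts
        ends = np.where(edges == -1)[0]        # falling edges: run ends
        if row[0]:
            starts = np.r_[0, starts]
        if row[-1]:
            ends = np.r_[ends, len(row) - 1]
        for x_b, x_e in zip(starts, ends):
            # runs entirely left of W go to 1#, the rest to 2#
            (rough1 if x_e < wx else rough2).append((int(x_b), int(x_e), y))
    return binary, rough1, rough2
```

A column-by-column variant for the upper/lower split described above would run the same loop over columns instead of rows.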
Third, determine the parameter a_3 by formula (1), giving the stripe pixel center coordinate (a_3, y_i), and thereby extract the 1# and 2# fine sets. For each entry of the 1# set, traverse from (x_b1, y_i) to (x_e1, y_i), where i ∈ [1, image height H]; record the gray values corresponding to the x-coordinates as I, and fit a Gaussian curve by the following equation (1):
$$I = a_1 + a_2 \exp\!\left[-\frac{(x - a_3)^2}{a_4}\right] \qquad (1)$$
where a_1, a_2, a_3, and a_4 are the parameters of the curve. The fitted result a_3 is the center of the Gaussian curve, i.e. the center coordinate (a_3, y_i) of the laser stripe. In this way the center coordinate sets of all stripe regions in the 1# and 2# rough sets are obtained; these are the 1# and 2# fine sets, denoted (u_L1, v_L1), (u_R1, v_R1), (u_L2, v_L2), and (u_R2, v_R2). Here (u_L1, v_L1) denotes the 1# fine set in the left image, (u_R1, v_R1) the 1# fine set in the right image, (u_L2, v_L2) the 2# fine set in the left image, and (u_R2, v_R2) the 2# fine set in the right image. Fig. 5 schematically shows the storage of stripe regions (Fig. 5 does not specify the left or right image; here it is taken to be the left image). As shown in Fig. 5, G(L1) and G(L2) denote the 1# and 2# fine sets in the left image respectively: G(L1) is the set of stripe-center coordinate points on the left of the cross point W, and G(L2) the set on its right.
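The sub-pixel center extraction of equation (1) can be sketched with an off-the-shelf least-squares fit. Here scipy.optimize.curve_fit stands in for whatever fitting routine the original implementation uses, and the initial-guess heuristics are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a1, a2, a3, a4):
    # I = a1 + a2 * exp(-(x - a3)^2 / a4), equation (1) of the text
    return a1 + a2 * np.exp(-((x - a3) ** 2) / a4)

def stripe_center(gray, x_b, x_e, y):
    """Fit the gray-value profile of one stripe run (row y, columns
    x_b..x_e inclusive) and return its sub-pixel center (a3, y).
    Runs narrower than the four fit parameters need special handling,
    which this sketch omits."""
    xs = np.arange(x_b, x_e + 1, dtype=float)
    vals = gray[y, x_b:x_e + 1].astype(float)
    p0 = [float(vals.min()),                          # background a1
          max(float(vals.max() - vals.min()), 1.0),   # peak height a2
          float(xs.mean()),                           # center a3
          max(((x_e - x_b) / 2.0) ** 2, 1.0)]         # width term a4
    params, _ = curve_fit(gaussian, xs, vals, p0=p0, maxfev=2000)
    return params[2], y                               # a3 is the center
```

Applying stripe_center to every (x_b, x_e, y) triple of the 1# and 2# rough sets yields the 1# and 2# fine sets.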
Step 302: obtain the three-dimensional coordinate set of the light stripe. Specifically, from the internal and external parameters of the cameras, the three-dimensional coordinates (x, y, z) of any point of the stripe in space are:
$$x = \frac{b\,(u_{Li} - u_0)}{u_{Li} - u_{Ri}}, \qquad y = \frac{b\,\alpha_x\,(v_{Li} - v_0)}{\alpha_y\,(u_{Li} - u_{Ri})}, \qquad z = \frac{b\,\alpha_x}{u_{Li} - u_{Ri}}$$
where α_x and α_y are respectively the effective focal lengths of the camera along the x- and y-axes, (u_0, v_0) is the principal-point image coordinate of the camera, b is the baseline distance, and i ∈ {1, 2} is the index of the 1# or 2# coordinate set. Traverse all stripe coordinate sets {u_L, v_L} and {u_R, v_R}, where
{u_L, v_L} = {u_L1, v_L1} ∪ {u_L2, v_L2}
{u_R, v_R} = {u_R1, v_R1} ∪ {u_R2, v_R2}
Here u denotes the abscissa (image column) and v the ordinate (image row).
The resulting three-dimensional coordinate set of the light stripe is denoted {x_P, y_P, z_P}.
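A direct transcription of the triangulation into code might look like the following sketch; the vectorized form and the way the calibration parameters are passed are assumptions, while the arithmetic is the formula above:

```python
import numpy as np

def triangulate_stripe(uL, vL, uR, b, alpha_x, alpha_y, u0, v0):
    """Recover (x, y, z) for matched stripe centers:
    x = b(uL - u0)/(uL - uR), y = b*ax*(vL - v0)/(ay*(uL - uR)),
    z = b*ax/(uL - uR). uL, vL, uR are parallel arrays of matched
    left/right image coordinates; matched pairs are assumed to have
    nonzero disparity, as points at finite depth do."""
    uL, vL, uR = map(np.asarray, (uL, vL, uR))
    d = uL - uR                       # disparity along u
    x = b * (uL - u0) / d
    y = b * alpha_x * (vL - v0) / (alpha_y * d)
    z = b * alpha_x / d
    return np.stack([x, y, z], axis=-1)   # shape (N, 3): {x_P, y_P, z_P}
```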
Step 303: from the three-dimensional coordinate set {x_P, y_P, z_P} of the light stripe, judge the region of any obstacle appearing ahead. Specifically, Fig. 6 is a plan view of the O-x_p y_p plane of the three-dimensional coordinate system, with the z_p axis perpendicular to the paper and pointing outward. The solid thick lines are the two mutually perpendicular laser stripes, whose intersection has three-dimensional coordinates (b/2, 0, 0). The direction of travel of the vision system can be divided into four regions: A denotes the upper-left region {x_P ∈ [0, b/2], y_P ≥ 0}, B the upper-right region {x_P ∈ (b/2, b], y_P ≥ 0}, C the lower-right region {x_P ∈ (b/2, b], y_P < 0}, and D the lower-left region {x_P ∈ [0, b/2], y_P < 0}, as shown in the figure. In addition, straight ahead and above corresponds to regions A and B, straight ahead and below to regions C and D, the left side to regions A and D, and the right side to regions B and C. For each of the four regions, count the number of coordinates within the range z_P ≤ H (H being the distance alarm threshold), giving {N_A, N_B, N_C, N_D}. When N_i ≥ TH (i ∈ {A, B, C, D}, TH being a threshold on the number of coordinates), an obstacle is considered to have appeared in the i-th region. The different N_i form 16 distinct codes. Suppose, for example, that N_A, N_B, N_C, and N_D are all greater than TH; the code value is then (1111), indicating an obstacle ahead and no passable direction. The remaining 15 codes and the correspondingly passable, obstacle-free directions are shown in the following table; a code sketch follows the table.
Table 1. Schematic of the distribution of obstacle positions ahead (the table body is not reproduced in this text).
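Since the table body is not reproduced here, the coding it tabulates can be sketched directly: count the near-range stripe points per quadrant and threshold the counts, yielding one of the 16 codes. The function shape and the (N, 3) point layout are assumptions:

```python
import numpy as np

def obstacle_code(points, b, H, TH):
    """Count stripe points with z_P <= H in the four quadrants
    A (upper-left), B (upper-right), C (lower-right), D (lower-left)
    around (b/2, 0), and return the 4-bit code (1 = obstacle present)
    plus the per-region counts {N_A, N_B, N_C, N_D}."""
    pts = np.asarray(points)              # columns: x_P, y_P, z_P
    near = pts[pts[:, 2] <= H]            # inside the alarm distance H
    x, y = near[:, 0], near[:, 1]
    counts = {
        "A": int(np.sum((x <= b / 2) & (y >= 0))),
        "B": int(np.sum((x >  b / 2) & (y >= 0))),
        "C": int(np.sum((x >  b / 2) & (y <  0))),
        "D": int(np.sum((x <= b / 2) & (y <  0))),
    }
    code = "".join("1" if counts[r] >= TH else "0" for r in "ABCD")
    return code, counts
```

For example, the code "0011" marks obstacles in regions C and D only, so by the region mapping above the passable direction is straight ahead and upward (regions A and B).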
After the analysis and processing terminal 120 has judged the position of the obstacle ahead, it returns the position information to the control unit 113 of the image acquisition terminal 110, which controls the robot carrying the image acquisition terminal 110 so that it successfully avoids the obstacle. The analysis and processing terminal 120 may return this position information to the control unit 113 of the image acquisition terminal 110 by wire, using an analog video signal BNC connector, video cable, and video capture card, or wirelessly, via technologies such as Bluetooth, WiFi, or NFC.
Fig. 7 schematically shows the flow of the narrow-space intelligent mobile robot vision navigation method according to the present invention. As shown in Fig. 7:
Step 710: project the cross-line light-stripe image. The cross-line laser projector 112 of the image acquisition terminal 110 emits the cross light-stripe image. Preferably, the cross-line laser projector is installed midway between the left camera and the right camera. Its overall dimensions are preferably 10 × 20 × 10 mm³. The angle between each stripe emitted by the cross-line laser projector 112 and the horizontal is preferably 45 degrees. The overall dimensions of the whole assembly are preferably no more than 50 × 25 × 30 mm³. Fig. 2 schematically shows a perspective view of the image acquisition terminal. As shown in Fig. 2, a ring-shaped illuminating LED light source is further preferably installed on the outer rim of each of the left and right cameras, ensuring that the robot can work in dark environments.
Step 720: the left camera and the right camera each capture the cross light-stripe image and output it to the analysis and processing terminal 120. That is, the left and right cameras each photograph the cross light-stripe image projected by the cross-line laser projector 112 onto the object ahead. Preferably, the two cameras are fixed on the same horizontal plane with their optical axes kept parallel and a baseline distance of 40 mm between them. More preferably, the overall dimensions of each camera are 12 × 12 × 12 mm³.
Step 730: the analysis and processing terminal 120 analyzes the images and derives the three-dimensional coordinate set of the light stripe. Specifically, this comprises the following steps:
Step a: extract the cross light-stripe images from the left image and the right image (also referred to as the first and second cross-line laser stripe images), and choose a suitable gray threshold to binarize them.
Step b: scan the stripe pixels row by row in the left or right image, taking the cross point W (see Fig. 5) as boundary, to obtain all stripe pixel sets located on the same side of the cross point in the left and right images (the 1# rough sets), and all stripe pixel sets located on the other side (the 2# rough sets). Fig. 5 schematically shows the storage of stripe regions. As shown in Fig. 5, each row of image pixels is searched from left to right for stripe regions; the starting coordinate (x_b1, y) and ending coordinate (x_e1, y) of the first stripe region encountered, on the left side, are stored in the 1# rough set, while the starting coordinate (x_b2, y) and ending coordinate (x_e2, y) of the remaining stripe region, on the right side, are stored in the 2# rough set.
Step c: determine the stripe pixel center coordinates by Gaussian fitting, thereby extracting the 1# and 2# fine sets. For each entry of the 1# set, traverse from (x_b1, y_i) to (x_e1, y_i), where i ∈ [1, image height H]; record the gray values corresponding to the x-coordinates as I, and fit a Gaussian curve by the following equation (2):
$$I = a_1 + a_2 \exp\!\left[-\frac{(x - a_3)^2}{a_4}\right] \qquad (2)$$
where a_1, a_2, a_3, and a_4 are the parameters of the curve. The fitted result a_3 is the center of the Gaussian curve, i.e. the center coordinate (a_3, y_i) of the laser stripe. In this way the center coordinate sets of all stripe regions in the 1# and 2# rough sets are obtained; these are the 1# and 2# fine sets, denoted (u_L1, v_L1), (u_R1, v_R1), (u_L2, v_L2), and (u_R2, v_R2), where (u_L1, v_L1) denotes the 1# fine set in the left image, (u_R1, v_R1) the 1# fine set in the right image, (u_L2, v_L2) the 2# fine set in the left image, and (u_R2, v_R2) the 2# fine set in the right image. Fig. 5 schematically shows the storage of stripe regions (Fig. 5 does not specify the left or right image; here it is taken to be the left image). As shown in Fig. 5, G(L1) and G(L2) denote the 1# and 2# fine sets in the left image respectively: G(L1) is the set of stripe-center coordinate points on the left of the cross point W, and G(L2) the set on its right.
Step d: obtain the three-dimensional coordinate set of the light stripe. Specifically, from the internal and external parameters of the cameras, the three-dimensional coordinates (x, y, z) of any point of the stripe in space are:
$$x = \frac{b\,(u_{Li} - u_0)}{u_{Li} - u_{Ri}}, \qquad y = \frac{b\,\alpha_x\,(v_{Li} - v_0)}{\alpha_y\,(u_{Li} - u_{Ri})}, \qquad z = \frac{b\,\alpha_x}{u_{Li} - u_{Ri}}$$
where α_x and α_y are respectively the effective focal lengths of the camera along the x- and y-axes, (u_0, v_0) is the principal-point image coordinate of the camera, b is the baseline distance, and i ∈ {1, 2} is the index of the 1# or 2# coordinate set. Traverse all stripe coordinate sets {u_L, v_L} and {u_R, v_R}, where
{u_L, v_L} = {u_L1, v_L1} ∪ {u_L2, v_L2}
{u_R, v_R} = {u_R1, v_R1} ∪ {u_R2, v_R2}
The resulting three-dimensional coordinate set of the light stripe is denoted {x_P, y_P, z_P}.
Step 740: the analysis and processing terminal 120 judges, from the three-dimensional coordinate set {x_P, y_P, z_P} of the light stripe, the region location of any obstacle appearing ahead. The algorithm is: first divide the direction of travel into four regions, namely upper-left region A {x_P ∈ [0, b/2], y_P ≥ 0}, upper-right region B {x_P ∈ (b/2, b], y_P ≥ 0}, lower-right region C {x_P ∈ (b/2, b], y_P < 0}, and lower-left region D {x_P ∈ [0, b/2], y_P < 0}; then count, for each of the four regions, the number of coordinates within the range z_P ≤ H, giving {N_A, N_B, N_C, N_D}, where H is the distance alarm threshold. When N_i ≥ TH, an obstacle is considered to have appeared in the i-th region, where i ∈ {A, B, C, D} and TH is a threshold on the number of coordinates.
Step 750: the analysis and processing terminal 120 returns this position data to the control unit 113 of the image acquisition terminal 110, so that the robot carrying the image acquisition terminal can be controlled to avoid the obstacle, achieving autonomous navigation.
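Tying steps 710 to 750 together, one pass of the Fig. 7 pipeline could be composed from the sketches above as follows. find_cross_point and match_rows are hypothetical helpers (the text describes detecting the cross point W and pairing left/right centers, but their interfaces here are assumptions), as are the cam calibration bundle and the placeholder thresholds:

```python
def navigation_step(left_gray, right_gray, cam, H=300.0, TH=50):
    """One pass of the Fig. 7 loop, composed from the sketches above.
    cam is an assumed calibration bundle with fields b, alpha_x,
    alpha_y, u0, v0; H and TH are the alarm-distance and point-count
    thresholds of step 740 (the values here are placeholders)."""
    # hypothetical: detect the cross point W in each image
    wL, wR = find_cross_point(left_gray), find_cross_point(right_gray)
    centers = {}
    for tag, gray, w in (("L", left_gray, wL), ("R", right_gray, wR)):
        _, rough1, rough2 = prestore_stripe_regions(gray, w)   # steps a-b
        centers[tag] = [stripe_center(gray, x_b, x_e, y)       # step c
                        for (x_b, x_e, y) in rough1 + rough2]
    # hypothetical: pair left/right centers lying on the same image row
    uL, vL, uR = match_rows(centers["L"], centers["R"])
    pts = triangulate_stripe(uL, vL, uR, cam.b, cam.alpha_x,   # step d
                             cam.alpha_y, cam.u0, cam.v0)
    code, _ = obstacle_code(pts, cam.b, H, TH)                 # step 740
    return code   # e.g. "0000" = path clear, "1111" = blocked ahead
```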
The narrow-space intelligent mobile robot vision navigation system and method of the present invention are tailored to the particular accuracy requirements of autonomous navigation and obstacle avoidance for robots moving in unknown, narrow assembly spaces. In complex, unknown, narrow working environments, they accurately detect and identify unknown obstacles and target objects, and achieve autonomous control and autonomous obstacle-avoidance navigation of the robot.
From the description and practice of the invention disclosed here, other embodiments of the present invention will be readily apparent to those skilled in the art. The description and embodiments are to be considered exemplary only, with the true scope and spirit of the invention being defined by the claims.

Claims (8)

1. A narrow-space intelligent mobile robot vision navigation system, comprising:
an image acquisition terminal, for collecting a cross light-stripe image and outputting the cross light-stripe image to an analysis and processing terminal, the image acquisition terminal comprising:
a cross-line laser projector, for projecting the cross light-stripe image;
a first miniature camera and a second miniature camera, for respectively photographing the cross light-stripe image projected onto an obstacle, to obtain a first location image and a second location image, and transmitting the cross light-stripe image to the analysis and processing terminal; and
a control unit, for controlling the travel of the image acquisition terminal according to data representing an obstacle position received from the analysis and processing terminal;
the analysis and processing terminal, for completing, according to the cross light-stripe images input respectively from the first miniature camera and the second miniature camera, matching of the cross light stripe between the first location image and the second location image; obtaining, with the cross point of the cross light stripe as boundary, the stripe-center coordinate sets located respectively in the first and second location images on the same side of the cross point, and the stripe-center coordinate sets located respectively in the first and second location images on the other side of the cross point; computing the three-dimensional coordinate set of the cross light-stripe image; judging the region in which an obstacle appears; and finally returning the region to the control unit, to control the travel of the image acquisition terminal;
wherein the algorithm by which the analysis and processing terminal obtains, with the cross point of the cross light stripe as boundary, the stripe-center coordinate sets located respectively in the first and second location images on the same side of the cross point, and those located on the other side, is:
a) extracting the cross light-stripe image from the first location image and the second location image respectively, and choosing a gray threshold to binarize the cross light-stripe image;
b) scanning the cross light-stripe image in the first location image and the second location image and, with the cross point as boundary, obtaining all stripe pixel sets located respectively in the first and second location images on the same side of the cross point, and all stripe pixel sets located respectively in the first and second location images on the other side of the cross point; and
c) computing the stripe pixel center coordinates, thereby extracting the stripe-center coordinate sets located respectively in the first and second location images on the same side of the cross point, and those located on the other side.
2. The system according to claim 1, characterized in that the analysis and processing terminal determines the three-dimensional coordinates (x, y, z) of any point of the cross light-stripe image according to formula (1):
$$x = \frac{b\,(u_{Li} - u_0)}{u_{Li} - u_{Ri}}, \qquad y = \frac{b\,\alpha_x\,(v_{Li} - v_0)}{\alpha_y\,(u_{Li} - u_{Ri})}, \qquad z = \frac{b\,\alpha_x}{u_{Li} - u_{Ri}} \qquad (1)$$
wherein α_x and α_y are respectively the effective focal lengths of the camera along the x- and y-axes, (u_0, v_0) is the principal-point image coordinate of the camera, b is the baseline distance, and i ∈ {1, 2}, where 1 and 2 denote, respectively, the index of the stripe-center coordinate sets located in the first and second location images on the same side of the cross point, and the index of those located on the other side.
3. The system according to claim 1, characterized in that the algorithm by which the analysis and processing terminal determines the region in which an obstacle appears is: first dividing the direction of travel into four regions (upper-left, upper-right, lower-right, and lower-left), then counting, for each of the four regions, the number of coordinates within the range z_P ≤ H, giving {N_A, N_B, N_C, N_D}, where H is the distance alarm threshold; when N_i ≥ TH, with i ∈ {A, B, C, D}, an obstacle is considered to have appeared in the i-th region, where TH is a threshold on the number of coordinates.
4. The system according to claim 1, characterized in that the size of the image acquisition terminal is less than 50 mm³.
5. The system according to claim 1, characterized in that the first miniature camera and the second miniature camera are fixed on the same horizontal plane.
6. The system according to claim 1, characterized in that LED light sources are installed on the first miniature camera and the second miniature camera.
7. A narrow-space intelligent mobile robot vision navigation method, the system according to the method including an image acquisition terminal and an analysis and processing terminal, the image acquisition terminal including a cross-line laser projector, a first miniature camera, a second miniature camera, and a control unit, the method comprising the steps of:
a) the cross-line laser projector projecting a cross-line light-stripe image;
b) the first miniature camera and the second miniature camera respectively capturing the cross light-stripe image, and outputting the cross light-stripe image to the analysis and processing terminal;
c) the analysis and processing terminal analyzing the images and deriving the three-dimensional coordinate set {x_P, y_P, z_P} of the light stripe;
wherein the method by which the analysis and processing terminal analyzes the images and derives the three-dimensional coordinate set {x_P, y_P, z_P} of the cross light-stripe image comprises the steps of:
c1) extracting the cross light-stripe image from the first location image and the second location image respectively, and choosing a gray threshold to binarize the cross light-stripe image;
c2) scanning the cross light-stripe image in the first location image and the second location image and, with the cross point as boundary, obtaining all stripe pixel sets located respectively in the first and second location images on the same side of the cross point, and all stripe pixel sets located respectively in the first and second location images on the other side of the cross point;
c3) computing the stripe pixel center coordinates, thereby extracting the stripe-center coordinate sets located respectively in the first and second location images on the same side of the cross point, and those located on the other side; and
c4) determining the three-dimensional coordinates (x, y, z) of any point of the cross light-stripe image according to formula (1), thereby obtaining the three-dimensional coordinate set {x_P, y_P, z_P} of the cross light-stripe image:
$$x = \frac{b\,(u_{Li} - u_0)}{u_{Li} - u_{Ri}}, \qquad y = \frac{b\,\alpha_x\,(v_{Li} - v_0)}{\alpha_y\,(u_{Li} - u_{Ri})}, \qquad z = \frac{b\,\alpha_x}{u_{Li} - u_{Ri}} \qquad (1)$$
wherein α_x and α_y are respectively the effective focal lengths of the camera along the x- and y-axes, (u_0, v_0) is the principal-point image coordinate of the camera, b is the baseline distance, and i ∈ {1, 2}, where 1 and 2 denote, respectively, the index of the stripe-center coordinate sets located in the first and second location images on the same side of the cross point, and the index of those located on the other side;
d) the analysis and processing terminal analyzing, from the three-dimensional coordinate set {x_P, y_P, z_P} of the light stripe, the region location of an obstacle appearing ahead; and
e) the analysis and processing terminal returning the region location to the control unit of the image acquisition terminal, to control the robot to avoid the obstacle.
8. The method according to claim 7, characterized in that, in step d, the algorithm by which the analysis and processing terminal determines the region in which an obstacle appears is: first dividing the direction of travel into four regions, namely upper-left region A {x_P ∈ [0, b/2], y_P ≥ 0}, upper-right region B {x_P ∈ (b/2, b], y_P ≥ 0}, lower-right region C {x_P ∈ (b/2, b], y_P < 0}, and lower-left region D {x_P ∈ [0, b/2], y_P < 0}, then counting, for each of the four regions, the number of coordinates within the range z_P ≤ H, giving {N_A, N_B, N_C, N_D}, where H is the distance alarm threshold; when N_i ≥ TH, an obstacle is considered to have appeared in the i-th region, where i ∈ {A, B, C, D} and TH is a threshold on the number of coordinates.
CN201410281773.1A, filed 2014-06-20 (priority date 2014-06-20): Narrow-space intelligent mobile robot vision navigation system and method. Active. CN105014675B (en).

Priority Applications (1)

CN201410281773.1A, priority and filing date 2014-06-20: A narrow-space intelligent mobile robot vision navigation system and method (granted as CN105014675B).

Publications (2)

CN105014675A (application publication): 2015-11-04
CN105014675B (granted patent): 2016-08-17

Family

ID=54405157; country: CN.

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105856227A (en) * 2016-04-18 2016-08-17 呼洪强 Robot vision navigation technology based on feature recognition
CN107305380A * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 Automatic obstacle-avoidance method and apparatus
CN106383518A (en) * 2016-09-29 2017-02-08 国网重庆市电力公司电力科学研究院 Multi-sensor tunnel robot obstacle avoidance control system and method
CN107020640A * 2017-04-28 2017-08-08 成都科力夫科技有限公司 Interactive robot game system
CN107806857A * 2017-11-08 2018-03-16 沈阳上博智像科技有限公司 Unmanned movable equipment
CN109872558A * 2017-12-04 2019-06-11 广州市捷众智能科技有限公司 Indoor parking-space state detection method based on cross-hair laser projection
CN113639748B (en) * 2020-04-26 2024-04-05 苏州北美国际高级中学 Pipeline trolley navigation method based on cross-shaped laser and monocular vision system
CN111765849B (en) * 2020-07-31 2021-08-27 南京航空航天大学 Device and method for measuring assembly quality of airplane in narrow space

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141863A (en) * 1996-10-24 2000-11-07 Fanuc Ltd. Force-controlled robot system with visual sensor for performing fitting operation
CN1161600C * 2001-04-30 2004-08-11 北京航空航天大学 Structure-light 3D double-visual calibrating point generating method and device
JP2004030445A (en) * 2002-06-27 2004-01-29 National Institute Of Advanced Industrial & Technology Method, system, and program for estimating self-position of moving robot
CN102183216A (en) * 2011-03-14 2011-09-14 沈阳飞机工业(集团)有限公司 Three-dimensional measurement method and device based on linear structured light
CN102313536B (en) * 2011-07-21 2014-02-19 清华大学 Method for barrier perception based on airborne binocular vision


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant