CN102853830A - Robot vision navigation method based on general object recognition - Google Patents

Robot vision navigation method based on general object recognition Download PDF

Info

Publication number
CN102853830A
CN102853830A CN2012103211664A CN201210321166A
Authority
CN
China
Prior art keywords
robot
general object
method based
vision
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012103211664A
Other languages
Chinese (zh)
Inventor
李新德
张晓�
朱博
金晓彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN2012103211664A priority Critical patent/CN102853830A/en
Publication of CN102853830A publication Critical patent/CN102853830A/en
Pending legal-status Critical Current

Links

Images

Abstract

The invention discloses a robot vision navigation method based on general object recognition. The method comprises the following steps: a hand-drawn map of the actual environment is drawn; an onboard camera reads in pictures of the actual environment; a discriminant function is used to judge each picture read in and recognize natural landmarks; and the robot completes self-localization through the relationship, measured by a sensor, between its own position and the recognized natural landmarks. The invention solves the problem of autonomous navigation of a robot in an unknown indoor environment.

Description

A robot vision navigation method based on general object recognition
Technical field
The invention belongs to the field of artificial intelligence, and particularly relates to a robot vision navigation method based on general object recognition.
Background technology
With the progress of science and technology and the development of society, more and more intelligent mobile robots are entering daily life; household service robots, companion robots and the like will gradually take over tedious, repetitive tasks from humans. Within mobile robot research, navigation is one of the key technologies. Its goal is to make the robot move purposefully to a target area without human intervention, in order to carry out specific operations and complete particular tasks. In the study of mobile robot navigation control theory and methods, navigation control in deterministic environments has accumulated a large body of research and application results. Navigation of mobile robots in unknown environments has also received some study and several methods have been proposed, but no unified and complete system has yet formed, and many key theories and technologies remain to be solved and perfected. These problems mainly include environment modeling, localization, on-line learning and optimization of the navigation controller, fault diagnosis, and motion planning and control.
The navigation problem of a robot can be summed up as three questions: "Where am I?", "Where am I going?" and "How do I get there?". To complete a navigation task the robot must solve four problems: motion control, world modeling (map building), path planning and localization. Among these, localization, path planning and the robot's own position form the basis and key links of navigation. Mobile robot path planning can be divided into global path planning based on a map and local path planning based on the robot's on-board sensors. For off-line global planning under known-environment conditions there are already many research results. Local path planning based on the robot's on-board sensors is the key technology for mobile robot navigation in unknown environments, and is also a research hotspot in the field of mobile robot navigation.
By analogy with the human way of asking for directions in an unknown environment, Li Xinde, Wu Xuejian et al. (Li Xinde, Wu Xuejian, Zhu Bo, Dai Xianzhong. A hand-drawn-map-based vision navigation method for dynamic environments [J]. Robot, 2011, 33(4): 490-501.) proposed a vision navigation method for dynamic environments based on hand-drawn maps. The method provides the robot's start and end information through a hand-drawn map, together with the rough relative positions of the significant reference objects encountered while travelling; it does not need accurate measurement of the actual environment, only visually estimated rough bearings and distances, and can therefore handle an unfamiliar environment quickly and conveniently.
Because indoor environments are diverse and changeable, if the exact image of each object the camera should search for must be given to the robot in advance so that the robot can match and recognize it while travelling, then whenever the environment changes the object information must be issued to the robot again, which greatly restricts the applicability of the navigation method. To solve the problem of general object recognition in changing environments, some researchers attach artificial marks to the surfaces of natural objects, such as red or green slips of paper, and let the robot find natural objects by searching for the labels pasted in advance. On the surface this method finds natural objects, but it is really a process of recognizing artificially defined labels, and so does not achieve general object recognition in a real environment; moreover, adding labels to object surfaces changes the true appearance of the objects. In addition, in practical applications the computational load of the image processing greatly affects the mobile robot's real-time performance.
Summary of the invention
Object of the invention: in view of the problems and deficiencies of the above prior art, the purpose of the invention is to provide a robot vision navigation method based on general object recognition, solving the autonomous navigation problem of a robot in an unknown indoor environment.
Technical scheme: to achieve the above object, the technical solution adopted by the invention is a robot vision navigation method based on general object recognition, comprising the following steps:
(1) drawing a hand-drawn map of the actual environment;
(2) reading in pictures of the actual environment with an onboard camera;
(3) judging each picture read in with a discriminant function and recognizing natural landmarks;
(4) the robot completing self-localization through the positional relationship, measured by a sensor, between itself and the recognized natural landmarks.
Further, in step (1), the information the hand-drawn map presents to the robot comprises: the robot's starting point and direction; the start and end points of the path; the rough actual physical distance between the start and end points; and the rough locations of the key landmarks the robot may encounter on the way from start to end. The robot is guided to the vicinity of one key landmark and then runs towards the next.
Further, step (2) also comprises:
1. establishing a vision dictionary: carrying out feature point detection on pictures and generating a vision dictionary for each object class;
2. image preprocessing: computing the similarity between each feature point in a picture to be detected and each visual word in the vision dictionary; if the error between a feature point and its corresponding visual word is not greater than a set threshold, the feature point is considered a feature point belonging to the target object;
3. describing the preprocessed image with a multi-dimensional vector.
Further, step 1 also comprises a step of cluster analysis of the visual words in the vision dictionary.
Further, step 2 also comprises a step of removing noise points on the background with the random sample consensus (RANSAC) algorithm.
Further, in step 3, the elements of the multi-dimensional vector comprise descriptions of the visual word counts and of the spatial relations of the visual words. The word-count description refers to the number of times each visual word of the vision dictionary occurs; the spatial-relation description describes the position of each visual word by its distance and angle features relative to the geometric center of the object.
Further, the discriminant function in step (3) is obtained by off-line training.
Further, the feature point detection and description are accelerated by GPU image processing.
Further, the sensor is a sonar sensor.
Beneficial effects: the invention proposes a new general object recognition method to solve the autonomous navigation problem of a robot in an unknown environment. It not only solves the problem of general object recognition in changing environments without altering the true appearance of objects, but also reduces the influence of the background environment on the general object recognition process. For the problem that the computational load of image processing greatly affects the mobile robot's real-time performance, GPU acceleration is introduced to raise the system's processing speed. Finally, by imitating the human way of asking for directions, the hand-drawn map, the general object recognition method and GPU-accelerated processing are combined to carry out robot navigation. The proposed method is significant for intelligent robot navigation in complex and changeable real environments; its real-time performance and robustness to the environment can help household service robots enter human life smoothly.
Description of drawings
Fig. 1 is the flow chart of robot navigation;
Fig. 2 is a photo of the real laboratory environment;
Fig. 3 is the plan view of Fig. 2;
Fig. 4 is the hand-drawn map;
Fig. 5 is the flow chart of general object recognition;
Fig. 6 shows the feature points on a picture;
Fig. 7 (a1), (a2), (b1), (b2), (c1), (c2) show the effect of feature point extraction for different targets before (suffix 1) and after (suffix 2) processing;
Fig. 8 shows the spatial positions of the feature points;
Fig. 9 shows the information flow of the hardware devices;
Fig. 10 shows the different objects recognized in three experiments: the left images are objects from the first experiment, the middle images from the second, and the right images from the third;
Fig. 11 shows the hand-drawn path of experiment one and the paths walked by the robot in three runs;
Fig. 12 is the navigation route after changing the robot's course; marks 1, 2, 3, 4 and 5 in the figure are landmarks;
Fig. 13 shows the hand-drawn path of experiment two and the path walked by the robot;
Fig. 14 shows the hand-drawn path of experiment three and the paths walked by the robot in three runs;
Fig. 15 shows the hand-drawn path of experiment four and the paths walked by the robot in three runs.
Embodiment
The invention is further illustrated below with reference to the drawings and specific embodiments. It should be understood that these embodiments are only for explaining the invention and not for limiting its scope; after reading the invention, modifications of various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims of the application.
The object of the invention is to solve the autonomous navigation problem of a robot in an unknown indoor environment. Path planning is the basis and key link of robot navigation and can generally be divided into global path planning based on a map and local path planning based on the robot's on-board sensors. For off-line global planning under known-environment conditions there are already many research results; local path planning based on the robot's on-board sensors is the key technology for mobile robot navigation in unknown environments and a research hotspot in the field. The invention uses the robot's on-board vision sensor to recognize natural landmarks in the environment, and completes autonomous navigation in an unknown environment with the help of a hand-drawn map. The invention proposes a novel general object recognition method that adopts a statistical viewpoint: in the image description step, the local feature point information contained in a picture is abstracted into a multi-dimensional row vector (x_0, x_1, x_2, ..., x_{P-1}, y_0, y_1, y_2, ..., y_{Q-1}), where the first P dimensions are the statistics of the features appearing in the picture and the last Q dimensions are the statistics of the spatial relations between those features. SIFT (Scale-Invariant Feature Transform) detection is used for the feature vector description, making full use of the statistics of the object's local spatial relations to describe the spatial (relative distance and angle) relations of all feature points in a picture. To reduce the influence of the background environment on the general object recognition process, the feature points of each image are thresholded before the image is described; on this basis, random sample consensus (RANSAC) is adopted to further prune the background feature points. Considering the real-time requirements of SIFT feature extraction and description, GPU acceleration based on the CUDA (Compute Unified Device Architecture) platform is adopted. Then, on the basis of hand-drawn-map-assisted navigation, the method is successfully applied to indoor mobile robot navigation. The proposed method is significant for intelligent robot navigation in complex and changeable real environments; its real-time performance and robustness to the environment can help household service robots enter human life smoothly.
The complete navigation framework of the robot is shown in Fig. 1. First, the hand-drawn navigation map is drawn on the hand-drawn-map interactive interface, and the discriminant functions of general objects are trained off-line. During navigation the robot uses the obtained discriminant functions to recognize general objects in the real scene and, aided by sonar, completes self-localization through landmark recognition. Considering the real-time requirements of robot navigation, CUDA-based GPU acceleration is used in the image processing. Finally, experiments demonstrate the feasibility and robustness of the invention.
1. Characteristics and representation of the hand-drawn map
Compared with metric maps and topological maps, a hand-drawn map can be applied more flexibly to changeable indoor environments. A hand-drawn-map navigation scheme based on natural landmark recognition is designed. The map is drawn according to the following principles: according to the rough location of each reference target in the actual environment, its outline is sketched at the corresponding approximate position on the drawing panel and its semantic information is annotated; according to the rough location and direction of the robot in the actual map, the robot is drawn at the corresponding position in the hand-drawn map, the starting point of the path is determined, and then the path and the target point are drawn. The mapping between the hand-drawn map and the actual environment is "loose": since the exact dimensions of the environment cannot be known, the map has no accurate scale, so the manually drawn path cannot represent the exact route the robot will walk. On the other hand, the drawn path only gives the robot an impulse; the final purpose of navigation is to reach the target area, so the robot does not have to move exactly along the specified path.
The advantage of the hand-drawn map is that no very accurate environmental information needs to be passed to the robot. The map presents to the robot: the robot's starting point and direction; the start and end points of the path; the rough actual physical distance between the start and end points; and the rough locations of the key landmarks that may be encountered on the way from start to end.
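The information listed above maps naturally onto a small data structure. The sketch below is a minimal Python illustration under assumed field names (the patent does not specify an encoding); it stores the start pose, the path polyline, the rough start-to-goal distance and the landmark list, and derives an initial metres-per-pixel scale from the rough distance:

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Landmark:
    label: str            # semantic annotation, e.g. "door"
    px: float             # rough pixel position in the sketch
    py: float

@dataclass
class HandDrawnMap:
    robot_start: tuple    # (x_px, y_px, heading_rad), rough start pose
    path: list            # polyline of (x_px, y_px) from start to goal
    real_start_goal_m: float   # rough physical start-to-goal distance, metres
    landmarks: list = field(default_factory=list)

    def initial_scale(self):
        """Metres per sketch pixel, from the rough start-to-goal distance."""
        (x0, y0), (xk, yk) = self.path[0], self.path[-1]
        return self.real_start_goal_m / hypot(xk - x0, yk - y0)
```

A map with a 500-pixel path standing for roughly 10 m would yield an initial scale of 0.02 m per pixel.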
Fig. 2 is a photo of the real environment, Fig. 3 its plan view, and Fig. 4 the hand-drawn map. The mapping between the hand-drawn map and the actual environment is analyzed through these three figures. The actual environment M_real represented by Fig. 3 can be expressed as

M_real = {L(·), S(·), D(·), T(·), R(·)},

where L(·) denotes the natural landmarks chosen for navigation; S(·) denotes obstacles unsuitable as natural landmarks during navigation, such as a very long cupboard; D(·) denotes the dynamic objects in the environment while the robot advances; T(·) denotes the target or task operation area; and R(·) denotes the initial pose of the robot. The hand-drawn map M_sketch of Fig. 4 can be expressed as

M_sketch = {L'(·), P(S, D), R'(·)},

where L'(·) denotes the rough locations of the natural landmarks L(·) in the hand-drawn map, i.e. a mapping relation exists between L(·) and L'(·); P(S, D) comprises the starting point S, the end point D and the wiring diagram of the path — this drawn path is not the actual or true path the robot walks, but only guides the mobile robot to walk along the approximate trend of the path; and R'(·) denotes the initial rough pose of the robot. Following the extension trend of the drawn path, the original path is divided into several segments, each with a key pilot point: the robot is guided to the vicinity of one key pilot point and then runs towards the next. For ease of control, straight-line motion is adopted between key pilot points, which effectively avoids the cumulative error caused by frequent rotation of the robot. Under the condition of knowing the rough information of the environment, the robot localizes itself on the way by seeking the environmental landmarks.
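The key-pilot-point control just described — run a straight line towards the nearest unreached pilot point, then switch to the next — can be sketched as follows (a minimal illustration; the function name and reach tolerance are assumptions, not taken from the patent):

```python
from math import atan2, hypot

def next_heading(key_points, pos, reached_tol=0.2):
    """Return (index, heading) of the next key pilot point to drive at,
    skipping points already within reached_tol of the robot's position.
    Returns (None, None) when every pilot point has been reached."""
    for i, (px, py) in enumerate(key_points):
        if hypot(px - pos[0], py - pos[1]) > reached_tol:
            return i, atan2(py - pos[1], px - pos[0])
    return None, None
```

Between calls the robot simply holds the returned heading, which is the straight-line motion mode that avoids cumulative rotation error.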
2. Recognition of natural landmarks during navigation and derivation of the discriminant function
As shown in Fig. 1, an important step in the robot navigation process is recognizing the natural landmarks in the actual environment so as to complete self-localization. During actual navigation the pictures read in by the camera are recognized according to the general object discriminant functions trained off-line. The recognition process for general objects and the derivation of the discriminant function are elaborated below.
Generally, each object class in an image has its own peculiar appearance features. When judging an object, the human visual system relies largely on the appearance features the object presents and, combining prior knowledge, abstracts the image features of the object to a certain extent to form high-level semantic features of the object. Even when facing a previously unseen object, humans can analyze and judge it through their own prior knowledge. Following this process of human general object recognition, the following method for indoor general object recognition is designed.
Fig. 5 is the recognition framework for general objects; at the end, the discriminant function of each object class is obtained with a support vector machine (SVM).
2.1 Establishing the vision dictionary
The general object recognition proposed by the invention is based on statistics; the first step is to establish the visual words of each object class.
The SIFT detector proposed by Professor David Lowe performs well at feature detection; the algorithm is invariant to image rotation, scaling and translation, and the invention uses SIFT to detect the feature points of pictures.
Because the number of visual words generated directly from a large number of pictures is very large, and some visual words have very similar SIFT descriptors, the visual words in the vision dictionary are next clustered. K-means is a commonly used clustering method, and the invention uses K-means clustering for the cluster analysis of the picture features.
Through this step the visual word library of each object class is established; each visual word is a 128-dimensional feature description vector detected by the SIFT algorithm and clustered by K-means (a "word" for short).
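As a concrete illustration of the dictionary-building step, the sketch below clusters descriptor rows into k visual words with a plain NumPy k-means. In real use the rows would be 128-D SIFT descriptors (e.g. from OpenCV); the evenly spaced initialization is an assumption for the sketch, not the patent's choice:

```python
import numpy as np

def build_vision_dictionary(descriptors, k, iters=20):
    """Cluster local feature descriptors (one per row, e.g. 128-D SIFT
    vectors) into k 'visual words' with plain k-means."""
    # deterministic init: k rows evenly spaced through the descriptor set
    idx = np.linspace(0, len(descriptors) - 1, k).round().astype(int)
    words = descriptors[idx].astype(float).copy()
    for _ in range(iters):
        # assign each descriptor to its nearest word (Euclidean distance)
        d = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each word to the mean of its members
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                words[j] = members.mean(axis=0)
    return words
```

The returned `words` array is the vision dictionary for one object class.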
2.2 Image preprocessing
The purpose of this step is to reduce the negative influence of the background environment on subsequent image recognition.
With the vision dictionary of an object class in hand, before a picture to be detected is described, the similarity between each feature point in the picture and each word in the dictionary is computed; if a set threshold is satisfied, the feature point is considered a feature point belonging to the target object.
Suppose that after the similarity computation the number of feature points on the picture is reduced from X to T, as shown in Fig. 6: what is wanted is the object in the black box, but some noise points remain on the background. To reduce the negative effect of these noise points on the subsequent image description, the density of the feature point distribution is exploited and RANSAC (RANdom SAmple Consensus) is applied. For simplicity and generality, a circular area is used to cover the densely distributed region of feature points. Fig. 7 shows the effect before and after feature point extraction for different targets. As the experimental results show, most of the processed feature points concentrate on the object and describe it more faithfully, laying good groundwork for the subsequent image description.
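A minimal version of the two pruning stages — the word-similarity threshold followed by a circular cover of the dense region — might look like the sketch below. The median-centre/percentile-radius circle is a simple stand-in for the patent's RANSAC step, and the 80th-percentile radius is an assumption:

```python
import numpy as np

def filter_object_points(points, descriptors, words, word_tol):
    """Keep feature points whose descriptor is within word_tol of some
    visual word, then keep only the points inside a circle covering the
    densest part of what remains."""
    # stage 1: similarity threshold against the vision dictionary
    d = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
    pts = points[d.min(axis=1) <= word_tol]
    # stage 2: circular cover of the dense region (RANSAC stand-in)
    centre = np.median(pts, axis=0)                      # robust to outliers
    radii = np.linalg.norm(pts - centre, axis=1)
    return pts[radii <= np.percentile(radii, 80)]
```

Points with no nearby visual word are dropped first; the circle then trims stragglers far from the dense cluster.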
2.3 Description of the image
The purpose of this step is to design a representation that abstracts the picture.
Borrowing from bag-of-words (BoW) descriptions of picture features, the image is described with a multi-dimensional vector. The elements of the description vector fall into two classes: descriptions of the word counts and descriptions of the spatial relations of the words.
1) Description of the word counts: the number of times each visual word occurs. For example, if the word list of an experiment has P words, the visual word vector of that object class has P dimensions, (x_0, x_1, x_2, ..., x_{P-2}, x_{P-1}), and the value of each dimension is the number of times that word occurs.
2) Description of the spatial relations of the visual words: the position of each visual word can be described by two features, its distance and its angle relative to the geometric center of the object. Specifically:
Suppose that after the processing of Section 2.2 the new geometric center of the feature points is

(x̄, ȳ) = (1/m) (Σ_{i=1}^{m} x_i, Σ_{i=1}^{m} y_i),

where m is the number of feature points after processing; the geometric center is shown as the center of the circle in Fig. 8. The marks around the center are the feature points on the object; taking the regular pentagon in the upper right corner as an example, its distance to the geometric center is L and its angle is θ.
For distance: compute the Euclidean distance of each feature point to the geometric center (x̄, ȳ), giving (L_1, L_2, L_3, ..., L_{m-1}, L_m); take the median as the unit length L, and assign the other lengths to the four intervals 0 ~ 0.5L, 0.5L ~ L, L ~ 1.5L and 1.5L ~ MAX according to the ratio of each length to L.
For angle: choose a feature point arbitrarily and compute the angle of every other point relative to it about the central point. By a simple mathematical transformation, the angle corresponding to each point (θ_1, θ_2, θ_3, ..., θ_{m-1}, θ_m) is obtained; considering that the θ angles will not be very large, the intervals of θ are divided into the five intervals 0° ~ 30°, 30° ~ 60°, 60° ~ 90°, 90° ~ 120° and 120° ~ MAX.
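Putting the two parts of Section 2.3 together, the sketch below assembles the image vector: P word counts, then the four distance bins around the median unit length L, then the five angle bins with the edges given in the text. Taking the first feature point as the angle reference is an assumption for the sketch:

```python
import numpy as np

def spatial_descriptor(points, word_ids, n_words):
    """Build the (P + 4 + 5)-dimensional image description vector:
    P visual-word counts, 4 distance bins, 5 angle bins."""
    counts = np.bincount(word_ids, minlength=n_words)      # word counts
    centre = points.mean(axis=0)                           # geometric center
    d = np.linalg.norm(points - centre, axis=1)
    L = np.median(d)                                       # unit length
    dist_bins = np.histogram(d / L, bins=[0, 0.5, 1, 1.5, np.inf])[0]
    # angles of all points relative to the first point, about the centre
    ref = points[0] - centre
    ang = np.abs(np.arctan2(points[:, 1] - centre[1],
                            points[:, 0] - centre[0])
                 - np.arctan2(ref[1], ref[0]))
    ang = np.degrees(np.minimum(ang, 2 * np.pi - ang))
    ang_bins = np.histogram(ang, bins=[0, 30, 60, 90, 120, np.inf])[0]
    return np.concatenate([counts, dist_bins, ang_bins])
```

For a P-word dictionary the result is the (x_0, ..., x_{P-1}, y_0, ..., y_{Q-1}) row vector described earlier, with Q = 9 here.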
2.4 Deriving the discriminant function of an object class
During actual navigation the onboard camera continually reads in pictures of the actual environment, and the discriminant function obtained by off-line training judges each picture read in. This step introduces the derivation of the discriminant function.
Classifiers are the conventional means of pattern classification; support vector machine classifiers have strong generalization ability on small samples and have received much attention. A support vector machine realizes the separation of two classes of data by finding a hyperplane with the largest margin in feature space. Given a variable X and its class label Y = ±1, pattern learning and classification are carried out through the classification function.
The invention selects a linear support vector machine and carries out supervised training on the set of visual word histograms of the image library. During training, pictures containing the target object are input as positive pictures, with Y = 1; pictures not containing the target object are input as negative pictures, with Y = -1. The support vector machine function for discriminating the general object can thus be trained off-line. With the discriminant function, the robot can recognize natural landmarks while travelling and thereby complete self-localization.
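A toy stand-in for the off-line training step is sketched below: a linear classifier trained on the hinge loss with L2 regularization by plain gradient descent, rather than a production SVM solver such as LIBSVM. The hyperparameters are illustrative only:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimize lam/2*|w|^2 + mean(hinge loss) by gradient descent.
    X: rows are image description vectors; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                # inside the margin or misclassified
        gw = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / len(X)
        gb = -y[viol].sum() / len(X)
        w -= lr * gw
        b -= lr * gb
    return w, b

def discriminant(w, b, x):
    """Sign of the learned hyperplane: +1 for target object, -1 otherwise."""
    return 1 if x @ w + b >= 0 else -1
```

Positive rows would be histograms of pictures containing the target object (Y = 1) and negative rows pictures without it (Y = -1), as in the training scheme above.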
3. Coarse self-localization of the robot during navigation
The introduction of the hand-drawn map in Section 1 gave the information the map contains: the pixel locations of the targets, the initial pixel location of the robot, and the rough straight-line distance from start to end. Combining the hand-drawn map information with the robot's own sensor perception of the real world, coarse self-localization of the robot can be completed.
When the robot's navigation path is drawn, k+1 key points are extracted, dividing the hand-drawn path into k parts, each part being the straight segment from node n_{i-1} to node n_i. The robot's initial position is r_0 ≈ n_0. Factors such as floor slipperiness cause errors in the robot's own odometry, so the robot must update the scale m_{i+1} according to the distance actually walked along each segment n_{i-1}n_i.
The initial scale m_1 is given by formula (1), where d(n_0, n_k) is the actual distance between the first and last nodes and d'(n_0, n_k) is the pixel distance between the first and last nodes in the hand-drawn map. Suppose at some moment the robot is walking along segment n_{i-1}n_i and observes a natural landmark L_i near n_i. The rough metric distance between the robot and L_i at this moment is computed by formula (2), where r is the current location of the robot in the hand-drawn map; r_{i-1} is the position, obtained by self-localization, at which the robot reached node n_{i-1}; d' denotes relative pixel distance in the hand-drawn map; m_i is the map scale of this segment; t is a pixel distance threshold; d'(r, L_i) is the pixel distance in the hand-drawn map between the robot's current position and the natural landmark L_i; d'(r_{i-1}, n_i) is the pixel distance in the hand-drawn map between the robot's last self-localized position and node n_i; d(r_{i-1}, r_i) is the actual distance from the robot's last self-localized position r_{i-1} to its current self-localized position r_i; d'(r, n_{i-1}) is the pixel distance in the hand-drawn map between the robot's current position and the previous node n_{i-1}; and s(r, L_i) is the distance between the robot and the natural landmark L_i measured by the robot's own sonar sensor. After reaching the destination node n_i, the robot's actual position r_i after self-localization is updated by formula (3), where vector(r_{i-1} r_i) is the vector from the last self-localized position r_{i-1} to the current self-localized position r_i, vector(r_{i-1} L_i) is the vector from r_{i-1} to the natural landmark L_i, and vector(L_i r_i) is the vector from L_i to r_i. Finally, the scale of the map is updated to m_{i+1} by formula (4), where m_i is the previous scale, d'(r_{i-1}, n_i) is the pixel distance in the hand-drawn map between the last self-localized position r_{i-1} and node n_i, and d(r_{i-1}, r_i) is the actual distance between r_{i-1} and r_i.
m_1 = d(n_0, n_k) / d'(n_0, n_k)    (1)

d(r, L_i) = d'(r, L_i) · m_i,  if d'(r, n_{i-1}) / d'(r_{i-1}, n_i) < t;
d(r, L_i) = s(r, L_i),         otherwise    (2)

vector(r_{i-1} r_i) = vector(r_{i-1} L_i) + vector(L_i r_i)    (3)

m_{i+1} = [d'(r_{i-1}, n_i) / d(r_{i-1}, r_i)] · m_i,  if RC;
m_{i+1} = m_i,                                          otherwise    (4)

RC denotes the scale update condition, defined here as 0.33 < d'(r_{i-1}, n_i) / d(r_{i-1}, r_i) < 3.
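The scale bookkeeping of formulas (1), (2) and (4) can be transcribed directly, as in the sketch below. Since the original equations are only partially legible, the exact orientation of the ratios follows the reconstruction above and should be treated as an assumption; argument names are descriptive only:

```python
from math import hypot

def initial_scale(n0, nk, real_dist_m):
    """Formula (1): metres per sketch pixel from the first and last nodes."""
    return real_dist_m / hypot(nk[0] - n0[0], nk[1] - n0[1])

def landmark_distance(d_px_r_Li, m_i, d_px_r_nprev, d_px_rprev_ni, t, sonar_m):
    """Formula (2): early in a segment, use the map-scaled pixel distance
    to the landmark; later, fall back to the sonar reading s(r, L_i)."""
    if d_px_r_nprev / d_px_rprev_ni < t:
        return d_px_r_Li * m_i
    return sonar_m

def update_scale(m_i, d_px_rprev_ni, d_m_rprev_ri, lo=0.33, hi=3.0):
    """Formula (4) with update condition RC: rescale only while the
    pixel-to-metric ratio stays within [lo, hi]."""
    ratio = d_px_rprev_ni / d_m_rprev_ri
    return ratio * m_i if lo < ratio < hi else m_i
```

Formula (3) is just a vector sum, r_i = r_{i-1} + vector(r_{i-1} L_i) + vector(L_i r_i), applied componentwise at each node.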
4, GPU accelerogram picture is processed in the navigation procedure
Can use the SIFT algorithm in the general object identification implementation procedure that the present invention proposes and carry out critical point detection, thereby finish feature extraction.SIFT mainly finishes following two tasks: the detection of unique point and the unique point of describing to detect with the vector of one 128 dimension.
So-called unique point, exactly detected Local Extremum with directional information under the image in different scale space.Three features that unique point has: yardstick, size, direction.In the process of feature point detection and 128 dimension descriptors foundation, all can consume a lot of times to the gray scale conversion of image, the foundation of difference of Gaussian, the foundation of gaussian pyramid, the foundation of histogram of gradients, to operating in indoor robot real-time certain influence is arranged, therefore, utilize the GPU accelerated method that the SIFT calculating section at most of elapsed time in the whole algorithm is raised speed.
GPU stands for Graphics Processing Unit. The GPU is a concept relative to the CPU (central processing unit) and holds a very large advantage over the traditional CPU in image processing performance. Its powerful parallel processing capability and programmable pipeline make it possible to process non-graphics data with its stream processors; when handling single-instruction multiple-data (SIMD) streams whose computational load far exceeds the cost of data scheduling and transfer, the GPU greatly outperforms traditional CPU applications. The strength of GPU numerical computation lies in floating-point operations, which it executes quickly through concurrent execution. In 2007, NVIDIA released CUDA, its official development platform for GPU programming. On this platform, the present invention runs the following stages of the SIFT algorithm concurrently to reduce the running time: (1) grayscale conversion of the input color image, together with down-sampling and up-sampling of the input image; (2) construction of the Gaussian image pyramid; (3) feature point detection with sub-pixel and sub-scale localization; (4) computation of feature orientations and descriptors.
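Stage (1), grayscale conversion, is a typical example of the elementwise SIMD work that maps well onto the GPU: every output pixel is independent, so each CUDA thread can compute one pixel. A hypothetical NumPy sketch of the same per-pixel map (the BT.601 weights are a common convention, not stated in the patent):

```python
import numpy as np

def to_gray(rgb):
    # Elementwise luminance: on CUDA each thread would compute one
    # output pixel; here NumPy vectorises the identical per-pixel map
    # over an (H, W, 3) array.
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601
    return rgb @ weights
```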
After GPU acceleration, SIFT tests were run on images of different sizes containing different numbers of feature points and compared against conventional SIFT. The experimental environment is as follows:
Operating system: 32-bit Windows 7;
Memory: 2 GB;
CPU: Intel(R) Core(TM)2 Duo E7500 @ 2.93 GHz;
GPU: nVIDIA GeForce 310, 512 MB dedicated video memory, 766 MB shared system memory;
Compiler environment: Visual Studio 2010.
Table 1. Comparison of original SIFT and GPU-accelerated SIFT

Image size | Feature points | Original SIFT time | GPU SIFT time | Speedup
240×320    | 133            | 157                | 81            | 1.938
240×240    | 340            | 234                | 78            | 3.000
240×240    | 641            | 628                | 95            | 8.373
377×508    | 1171           | 687                | 160           | 4.294
480×640    | 2368           | 1264               | 244           | 5.180
1024×1024  | 589            | 1294               | 170           | 7.612
1000×1000  | 1376           | 1513               | 169           | 8.953
1200×1600  | 4052           | 6690               | 392           | 17.066
Table 1 shows the experimental results. The experiments show that the more feature points and the larger the picture, the more obvious the effect of GPU-accelerated computation. The images collected by the robot are 320 × 240 pixels with roughly 100-200 feature points, so the acceleration in the SIFT processing stage is clearly evident.
5. Experiments
5.1 Experimental conditions
The experiments used a Pioneer3-DX mobile robot produced by the US company ActivMedia Robotics. The required hardware comprises: a robot with a built-in PC, a PTZ camera, sonar sensors, an image capture card, a wireless network card, a wireless router, and a high-performance PC. Fig. 9 shows the hardware information flow of the system. The experimental environment was the mobile robot laboratory of the Institute of Automation, Southeast University, with physical dimensions of approximately 10 m × 8 m. The actual environment is shown in Fig. 2.
The client-side software design comprises the map drawing module, robot state display module, communication module, navigation algorithm design module, etc. It uses mixed C# and C++ programming under the Visual Studio 2008 development environment. The robot's environment map interface, used for drawing the user's map, setting parameters, and displaying the robot's running state, was developed in C# as a Windows Forms application; the other modules of the navigation system, such as communication, image detection and matching, and the navigation algorithm design, were developed in C++ as a Win32 Console application.
On the server side, the software design mainly comprises the communication module, sensor information acquisition module, low-level driver module, etc. Since ActivMedia Robotics provides ARIA, a set of API interface software for the sensors and motors on the Pioneer3-DX, the program code of each module was written on top of it.
To verify the robot visual navigation method based on general object recognition proposed by the present invention, verification was carried out on the Pioneer3-DX from the following four aspects.
5.2 Specific experiments:
Experiment one:
Experiment purpose:
The recognition method proposed by the present invention targets general objects, so its core is the recognition of different objects of the same class. Experiment one verifies the navigation performance when the objects in the actual environment are changed while the hand-drawn map is kept identical.
Experimentation:
This experiment was carried out three times; each run contains the same five key landmarks: chair, guitar, wastepaper basket, umbrella, and fan, but the concrete object representing each landmark differs between runs. Figure 10 shows the objects recognized in the three experiments: the picture on the left shows the objects of the first run, the middle picture the second run, and the right the third run. The order of the landmarks was also changed between runs. In the real experimental environment, the robot advances from the lower-left corner to the upper-right corner of the laboratory; Figure 11 shows the hand-drawn map and the paths walked by the robot in the three runs.
Experimental result and analysis:
The navigation paths show that, across three runs with different objects of the same classes and with the landmark order changed, the robot completes the task well under this method with the same hand-drawn map, advancing to the intended area with the help of the key landmarks.
Experiment two:
Experiment purpose:
To verify the stability of this method when the navigation path changes.
Experimentation:
In this experiment, the robot's route is changed: it starts from the upper-right corner of the laboratory and ends at the lower-left corner. Figure 12 shows the real environment after the change of course; Figure 13 shows the hand-drawn map and the robot's course for this experiment.
Experimental result and analysis:
After changing the navigation path so that the robot marches from the upper-right corner to the lower-left corner of the laboratory, the experimental results show that the robot can still complete the task smoothly under the transformed path.
Experiment three:
Experiment purpose:
Keeping the hand-drawn map unchanged, the objects in the actual environment are moved to test the robustness of the method to small changes in object angle and position.
Experimentation:
The first run is performed with the environment unchanged under the same hand-drawn map; in the second run all landmarks are translated to one side within a small range of 1 m; in the third run all landmarks are translated in the opposite direction. The robot then performs the navigation experiment in each case.
Experimental result and analysis:
Figure 14 shows the robot's paths in the three situations. The navigation path diagrams show that small changes in the positions of the target objects do not affect the robot's navigation performance, which illustrates the robustness of the proposed algorithm.
Experiment four:
Experiment purpose:
To test the robot's navigation performance when the number of landmarks in the actual environment is reduced.
Experimentation:
The first experiment uses 5 objects as landmarks, the second uses 4 objects, and the third uses 3 objects.
Experimental result and analysis:
The robot's navigation routes are shown in Figure 15. The navigation diagrams show that the navigation performance of the robot is essentially unaffected when the number of landmarks is reduced. However, in a larger environment with very few landmarks, since vision is limited by distance, the robot can only rely on its own odometer for navigation and localization, and the navigation effect may be affected to a certain extent.
On the basis of proposing a new general object recognition method, the present invention applies it to indoor robot navigation. The navigation experiments fully verify the validity and robustness of the method and provide a new way to accomplish robot navigation tasks.

Claims (9)

1. A robot visual navigation method based on general object recognition, comprising the steps of:
(1) drawing a hand-drawn map of the actual environment;
(2) an onboard camera reading in a picture of the actual environment;
(3) using a discriminant function to judge the picture read in and recognize natural landmarks;
(4) the robot completing self-localization through the positional relationship between the robot and the natural landmarks recognized and measured by the sensor.
2. The robot visual navigation method based on general object recognition according to claim 1, characterized in that in step (1), the information the hand-drawn map presents to the robot comprises: the starting point and direction of the robot; the starting point and end point of the path; the approximate actual physical distance between the starting point and the end point; and the approximate locations of the key landmarks the robot may encounter on the way from start to end. The robot is guided to the vicinity of one key landmark and then runs towards the next key landmark.
3. The robot visual navigation method based on general object recognition according to claim 1, characterized in that step (2) further comprises:
1. building the vision dictionary: performing feature point detection on pictures and generating a vision dictionary for each class of objects;
2. image preprocessing: computing the similarity between each feature point in the picture to be detected and each vision word in the vision dictionary; if the error between a feature point and the corresponding vision word is not greater than a set threshold, the feature point is considered one of the feature points constituting the target object;
3. describing the preprocessed image with a multi-dimensional vector.
4. The robot visual navigation method based on general object recognition according to claim 3, characterized in that step 1 further comprises the step of performing cluster analysis on the vision words in the vision dictionary.
5. The robot visual navigation method based on general object recognition according to claim 3, characterized in that step 2 further comprises the step of using the random sample consensus (RANSAC) algorithm to process noise points in the background.
6. The robot visual navigation method based on general object recognition according to claim 3, characterized in that in step 3, the elements of the multi-dimensional vector comprise a description of the quantity of each vision word and a description of the spatial relationships of the vision words, wherein the quantity description refers to the number of times each vision word in the vision dictionary occurs, and the spatial relationship description describes the position feature of each vision word by its distance feature and angle feature relative to the geometric center of the object.
7. The robot visual navigation method based on general object recognition according to claim 1, characterized in that the discriminant function in step (3) is obtained by off-line training.
8. The robot visual navigation method based on general object recognition according to claim 3, characterized in that the feature point detection and description employ GPU-accelerated image processing.
9. The robot visual navigation method based on general object recognition according to claim 1, characterized in that the sensor is a sonar sensor.
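Steps 2 and 3 of claim 3 — thresholded assignment of feature points to vision words, then a word-count vector — can be sketched as follows. All names and the Euclidean distance metric are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def assign_words(descriptors, dictionary, threshold):
    # Step 2 of claim 3: a feature point counts as part of the target
    # object only if its nearest vision word lies within `threshold`;
    # farther points are discarded as background or noise.
    kept = []
    for d in descriptors:
        dists = np.linalg.norm(dictionary - d, axis=1)
        w = int(np.argmin(dists))
        if dists[w] <= threshold:
            kept.append(w)
    return kept

def bow_vector(word_ids, n_words):
    # Step 3 of claim 3: the occurrence count of each vision word
    # (the distance/angle features of claim 6, taken relative to the
    # object's geometric center, would be appended to this vector).
    v = np.zeros(n_words)
    for w in word_ids:
        v[w] += 1
    return v
```

The resulting multi-dimensional vector is what the off-line-trained discriminant function of claim 7 would then judge.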
CN2012103211664A 2012-09-03 2012-09-03 Robot vision navigation method based on general object recognition Pending CN102853830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012103211664A CN102853830A (en) 2012-09-03 2012-09-03 Robot vision navigation method based on general object recognition


Publications (1)

Publication Number Publication Date
CN102853830A true CN102853830A (en) 2013-01-02

Family

ID=47400631



Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103196430A (en) * 2013-04-27 2013-07-10 清华大学 Mapping navigation method and system based on flight path and visual information of unmanned aerial vehicle
CN103353758A (en) * 2013-08-05 2013-10-16 青岛海通机器人系统有限公司 Indoor robot navigation device and navigation technology thereof
CN105910599A (en) * 2016-04-15 2016-08-31 深圳乐行天下科技有限公司 Robot device and method for locating target
CN107045355A (en) * 2015-12-10 2017-08-15 松下电器(美国)知识产权公司 Control method for movement, autonomous mobile robot
CN107167144A (en) * 2017-07-07 2017-09-15 武汉科技大学 A kind of mobile robot indoor environment recognition positioning method of view-based access control model
CN107358189A (en) * 2017-07-07 2017-11-17 北京大学深圳研究生院 It is a kind of based on more object detecting methods under the indoor environments of Objective extraction
CN107967473A (en) * 2016-10-20 2018-04-27 南京万云信息技术有限公司 Based on picture and text identification and semantic robot autonomous localization and navigation
CN110069058A (en) * 2018-01-24 2019-07-30 南京机器人研究院有限公司 Navigation control method in a kind of robot chamber
CN110262477A (en) * 2019-05-22 2019-09-20 汕头大学 The dominoes automatic putting trolley and method of app control
CN111158384A (en) * 2020-04-08 2020-05-15 炬星科技(深圳)有限公司 Robot mapping method, device and storage medium
CN111738528A (en) * 2020-07-20 2020-10-02 北京云迹科技有限公司 Robot scheduling method and first robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306145A (en) * 2011-07-27 2012-01-04 东南大学 Robot navigation method based on natural language processing
EP2428934A1 (en) * 2010-09-14 2012-03-14 Astrium SAS Method for estimating the movement of a carrier in relation to an environment and calculation device for a navigation system
KR20120056536A (en) * 2010-11-25 2012-06-04 연세대학교 산학협력단 Homing navigation method of mobile robot based on vision information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Xinde et al.: "A Robot Visual Navigation Method Based on the GOR+GPU Algorithm", Robot (《机器人》) *
Li Xinde et al.: "A Hand-Drawn Map Based Visual Navigation Method for Dynamic Environments", Robot (《机器人》) *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130102