CN102306145A - Robot navigation method based on natural language processing - Google Patents

Info

Publication number
CN102306145A
CN102306145A (application CN201110211946A)
Authority
CN
China
Prior art keywords
robot
navigation
dictionary
road sign
locality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110211946A
Other languages
Chinese (zh)
Inventor
李新德 (Li Xinde)
张秀龙 (Zhang Xiulong)
戴先中 (Dai Xianzhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201110211946A priority Critical patent/CN102306145A/en
Publication of CN102306145A publication Critical patent/CN102306145A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The invention relates to a robot navigation method based on natural language processing, belonging to the field of intelligent robot navigation. The method analyzes the sentence-unit structure of a natural-language path description, extracts landmark modules and direction-conversion modules, derives the spatial position relationships among the landmarks, establishes key guide points, and builds a navigation intention map; the robot then completes its navigation task by continually updating the scale between the navigation intention map and the actual environment. The method improves the robot's adaptability and intelligence in unfamiliar environments, as well as its navigation efficiency.

Description

Robot navigation method based on natural language processing
Technical field
The present invention relates to a robot navigation method based on natural language processing, and belongs to the field of intelligent robot navigation.
Background technology
When a person faces an unfamiliar environment, asking for directions is an efficient way to reach a destination. By understanding the direction-giver's language, the person forms a rough route and an image of each key point along it in the mind, and can reach the destination by relying on that image. By the same principle, if a mobile robot facing a completely unfamiliar indoor environment is told in natural language how to proceed, and then drives to the destination automatically under the guidance of its understanding of that language, the result is an efficient navigation method. Such a method involves reconstructing the path from its natural-language description (Natural Language Representation of Path, NLRP; see "Research on GIS path reconstruction based on restricted Chinese", Liu Yu, Gao Yong, et al., Journal of Remote Sensing, 2004, 8(4): 323-330) and building the navigation map on which the natural language relies.
Abroad, natural-language research on English has reached a certain level, and some scholars have studied robot navigation methods using English natural language. Yuan Wei et al., in "Where to Go: Interpreting Natural Directions Using Global Inference" (Yuan Wei, Emma Brunskill, et al., 2009 IEEE International Conference on Robotics and Automation, Kobe: Proceedings - IEEE International Conference on Robotics and Automation, 2009: 3761-3767), proposed a path-extraction algorithm based on natural language on top of a grid map; using an improved hidden Markov model algorithm, the algorithm obtained good path-extraction results. Thomas Kollar et al., in "Toward understanding natural language directions" (Kollar T, Tellex S, et al., Human-Robot Interaction (HRI), Boston: 2010 5th ACM/IEEE International Conference, 2010: 259-267), extended Yuan Wei's work with the understanding of verbs and prepositions, increasing the system's understanding of unrestricted natural language.
At home, few researchers have specifically analyzed NLRP for robot navigation, but in the field of geographic information systems some scholars have studied NLRP analysis methods for Chinese, which are of great reference value. Liu Yu et al. specialized in the verbs used in Chinese path descriptions and on that basis proposed an NLRP analysis method based on restricted Chinese. Zhang Xueying et al. ("A Chinese-oriented natural language path description method", Zhang Xueying, Lu Guonian, et al., Geo-Information Science, 2008, 10(6): 757-762) studied commonly used Chinese syntactic patterns and proposed corresponding algorithms.
As for the map relied on for navigation, Yuan Wei, Thomas Kollar, and their colleagues used grid maps. Such a map can assist the robot in understanding natural language, but obtaining it requires the robot to traverse the environment in advance, i.e., an accurate prior description of the environment must exist. Even other traditional maps are built on a thorough perception of the environment, so they handle unfamiliar, dynamic environments inefficiently.
Summary of the invention
The technical problem to be solved by this invention is to address the deficiencies of the above background art and provide a robot navigation method based on natural language processing.
To achieve the above purpose, the present invention adopts the following technical scheme:
A robot navigation method based on natural language processing comprises the following steps:
Step 1: parse the sentence-unit structure of the natural-language path description according to Chinese path-description rules and dictionaries, and obtain the valid path information after rejecting invalid path information;
Step 2: extract the forward landmark information, direction-conversion information, and backward landmark information from the valid path information;
Step 3: derive the spatial position relationships among the landmarks, specifically:
Step 3-1: extract the valid direction word from the direction-conversion information of each sentence unit;
Step 3-2: convert all valid direction words into absolute direction words;
Step 3-3: let each landmark store the position and distance of the next landmark relative to itself;
Step 4: map the relative positions of the landmarks onto a Cartesian coordinate system whose origin is the robot's current position, then map that coordinate system onto the navigation intention map through a coordinate transform;
Step 5: extract the key guide points of the natural-language path according to the representation of the landmarks in the navigation intention map and the direction-word model;
Step 6: determine the initial scale from the actual distance between the input start and end points and the pixel distance between them in the navigation intention map;
Step 7: calculate the distance between the current key guide point and the next one according to the initial scale, and determine the running mode between the two key guide points;
Step 8: the robot moves according to the running mode of step 7 and navigates by the prediction-estimation method; during navigation it matches the real-time image against the original image with the SURF algorithm to find a reference object, and adjusts the angle of its image-acquisition device according to the reference object;
Step 9: after the robot reaches the next key guide point, it localizes itself either by the pixel height of the landmark in the real-time image or by odometer information;
Step 10: update the position of the next key guide point and the map scale, and repeat steps 7 to 9 with the updated scale until the robot reaches the last key guide point.
In the above robot navigation method based on natural language processing, the dictionaries of step 1 comprise a landmark dictionary, a transient-verb dictionary, a direction-word dictionary, a preposition dictionary, a distance dictionary, and a position-directive-verb dictionary; direction words with equivalent meaning are marked with a synonym flag to obtain the direction-word model.
By adopting the above technical scheme, the present invention has the following beneficial effects:
1. The robot's adaptability to unfamiliar environments is enhanced: facing a completely unfamiliar environment, the robot can accomplish the corresponding navigation task from the operator's task description alone.
2. The robot's intelligence is increased: a navigation task can be accomplished through simple human-robot interaction alone, without the participation of technically skilled personnel; an ordinary person can do it.
3. Navigation efficiency is improved: compared with existing navigation modes, this one requires no prior measurement or scanning of the environment; the operator only needs to give rough direction and distance information estimated by eye.
Description of drawings
Fig. 1 is the flow chart for establishing the navigation intention map.
Fig. 2 is the schematic diagram of the reference-target prediction-estimation method.
Fig. 3 shows the landmark positions in the actual environment of experiment one.
Fig. 4 shows the relative landmark positions and the robot's actual running route in experiment one.
Fig. 5 shows the landmark positions in the actual environment of experiment two.
Fig. 6 shows the relative landmark positions and the robot's actual running route in experiment two.
Fig. 7 shows the landmark positions in the actual environment of experiment three.
Fig. 8 shows the relative landmark positions and the robot's actual running route in experiment three.
Fig. 9 shows the relative landmark positions and the robot's actual running route in experiment four.
Legend: A is the robot; B is box No. 1; C is the express delivery box; D is the Compaq computer; E is box No. 2.
Embodiment
The technical scheme of the invention is described in detail below with reference to the accompanying drawings:
A robot navigation method based on natural language processing divides the path-description natural language into several units according to landmark modules and direction-conversion modules. With the landmarks as the core of processing, the relative position of each landmark is determined through the direction-conversion modules. The method specifically comprises the following steps:
Step 1: parse the sentence-unit structure of the natural-language path description according to Chinese path-description rules and dictionaries, and obtain the valid path information after rejecting invalid path information.
The dictionaries comprise a landmark dictionary, a transient-verb dictionary, a direction-word dictionary, a preposition dictionary, a distance dictionary, and a position-directive-verb dictionary; direction words with equivalent meaning are marked with a synonym flag to obtain the direction-word model.
Step 2: extract the forward landmark module, direction-conversion module, and backward landmark module of each sentence unit.
Step 3: derive the spatial position relationships among the landmarks, specifically:
Step 3-1: extract the valid direction word from the direction-conversion module of each sentence unit;
Step 3-2: convert all valid direction words into absolute direction words;
Step 3-3: let each landmark store the position and distance of the next landmark relative to itself.
Step 4: map the relative positions of the landmarks onto a Cartesian coordinate system whose origin is the robot's current position, then map that coordinate system onto the navigation intention map through a coordinate transform.
Step 5: extract the key guide points of the natural-language path according to the representation of the landmarks in the navigation intention map and the direction-word model. The flow chart of steps 1 to 4 is shown in Fig. 1.
Step 6: calculate the initial scale between the hand-drawn map and the actual map, specifically:
Step 6-1: match the real-time image against the original image with the SURF algorithm to coarsely localize the robot;
Step 6-2: after a successful match, obtain the pixel height of each landmark in the real-time image, and from it the pixel distance between the robot and each landmark;
Step 6-3: determine the initial scale from the actual distance between the input start and end points and the pixel distance between them in the navigation intention map.
Step 7: calculate the distance between the current key guide point and the next one according to the initial scale, and determine the running mode between the two key guide points.
Step 8: the robot moves according to the running mode of step 7 and navigates by the prediction-estimation method; during navigation it matches the real-time image against the original image with the SURF algorithm to find a reference object.
Step 9: after the robot reaches the next key guide point, it localizes itself either by the pixel height of the landmark in the real-time image or by odometer information.
Step 10: update the position of the next key guide point and the map scale, and repeat steps 7 to 9 with the updated scale until the robot reaches the last key guide point.
The NLRP analysis aims at extracting the landmarks and their directions and distances, with the following definitions:
(1) NLRP is a mapping NLRP → P onto the path P, and NLRP = {L, D} is a 2-tuple, where L = {l_1, l_2, …, l_n} is the set of landmarks and D = {d_1, d_2, …, d_m} is the set of direction-conversion modules.
(2) L can be divided into entity landmarks EL = {el_1, el_2, …, el_x} and virtual landmarks VL = {vl_1, vl_2, …, vl_y}.
(3) D is divided into valid direction-conversion modules VD = {vd_1, vd_2, …, vd_z} and invalid direction-conversion modules ID = {id_1, id_2, …, id_i}, i.e., modules that carry no distance.
(4) Each vd_i corresponds to a path-segment unit of the path P: vd_i → P_i, with P = ∪ P_i.
Because it is difficult to understand the semantics of Chinese with complete accuracy, many scholars have turned to research on restricted Chinese. Based on the above analysis, we give an NLRP analysis model based on landmark extraction. The principle of the model is to cover the commonly used ways of expressing a path, while taking into account both the navigation method of the hand-drawn semantic map and the robot's object-recognition capability. The model is as follows:
<NLRP> ::= {<NLRP short sentence>}
<NLRP short sentence> ::= <(forward) landmark module> {<direction-conversion module>} <(backward) landmark module>
<landmark module> ::= <position-directive verb> <landmark>
<direction-conversion module> ::= <preposition> <direction word> <transient verb> <distance>
<direction word> ::= <relative direction> | <absolute direction>
<distance> ::= <numerical value> <length unit>
<transient verb> ::= 'go' | 'walk' | 'turn' | 'make a turn' | …
<position-directive verb> ::= 'to' | 'arrive at' | 'approach' | 'go to' | 'walk to' | …
The direction-conversion module given in the model above is the typical form; all its possible forms are given below.
The direction-conversion module is the chief component characterizing spatial position in NLRP. A typical statement such as "go ahead 10 meters and you arrive" contains all the elements mentioned in the model above. But actual statements are often more flexible, so the present invention classifies them specially; the possible patterns are:
(1) <direction-conversion module> ::= <preposition> <direction word> <transient verb> <distance>
(2) <direction-conversion module> ::= <direction word> <transient verb> <distance>, e.g. "turn left and go 10 meters"
(3) <direction-conversion module> ::= <direction word> <transient verb>, e.g. "left turn"
(4) <direction-conversion module> ::= <transient verb> <distance>, e.g. "walk 10 meters"
(5) <direction-conversion module> ::= <preposition> <direction word> <transient verb>, e.g. "turn to the left"
(6) <direction-conversion module> ::= <preposition> <direction word> <distance>, e.g. "10 meters forward"
The six forms above basically cover the cases found in daily communication. During analysis, any element sequence satisfying one of these forms is regarded as a direction-conversion module.
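As a rough illustration of how the six patterns could be matched once each word has been tagged by dictionary lookup, the sketch below checks a tagged token sequence against the pattern list (this code is not from the patent; the tag letters and function names are illustrative assumptions):

```python
# Illustrative sketch: match the six direction-converter patterns against
# part-of-speech-tagged tokens. Tags (assumed): P = preposition,
# F = direction word, V = transient verb, D = distance.

PATTERNS = ["PFVD", "FVD", "FV", "VD", "PFV", "PFD"]  # the six forms above

def match_converter(tags):
    """Return True if the tag sequence forms a direction-conversion module."""
    return "".join(tags) in PATTERNS

print(match_converter(["F", "V", "D"]))  # e.g. "left turn 10 m" -> True
print(match_converter(["V"]))            # a bare verb alone -> False
```

In a full parser the matcher would scan the tag string of a whole short sentence for any of these subsequences; the sketch only shows the membership test.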
The document "Research on spatial concept modeling based on natural language processing" (Li Hanjing, doctoral dissertation, Harbin: Harbin Institute of Technology, 2007) points out that the prepositions and direction words expressing direction in Chinese are fixed in number, and gives the corresponding vocabulary. For indoor environments, the present invention makes corresponding improvements: some unusable words, such as "inside" and "above", are deleted, while some uncovered words, such as "left front" and "along", are added.
Part-of-speech tagging and word matching are performed simultaneously in this system. The parameters recorded when a direction-conversion module is matched are D = {sIndex, isDis, unit, dis, kind}, where sIndex is the position of the module in the whole sentence, isDis indicates whether the module carries a distance, unit and dis are the unit and numerical value of the distance, and kind is the agreed flag corresponding to the direction word in the module.
In NLRP, for semantic coherence and fluency of narration, a position-directive verb often precedes a landmark; for example, in "go ahead 100 meters and you can arrive at Wangfujing", "arrive at" is a position-directive verb. In indoor environments and robot navigation such verbs have a particularity: because of the limits of present-stage object-recognition ability in vision-based navigation, by the time an object is seen by the robot, the robot is already very close to it, so "see" is also treated as a position-directive verb in this system. Directive verbs are the means of judging whether a matched word is a valid landmark; they remove interfering statements such as "the box is red".
The parameters recorded when a landmark is matched are:
L = {sIndex, dIndex, isVirtual, dis, nameR, position}
where sIndex and dIndex are the positions of the landmark in the sentence and in the landmark dictionary respectively, isVirtual indicates whether the landmark is a virtual landmark, and dis and nameR are 3 × 3 arrays giving respectively the distance and direction relations between this landmark and the next one.
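The two parameter records D and L could be rendered as plain data structures; the sketch below is a hypothetical rendering in which the field names follow the text but the types and defaults are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ConverterParams:
    """D = {sIndex, isDis, unit, dis, kind} of a matched direction-conversion module."""
    sIndex: int            # position of the module in the whole sentence
    isDis: bool            # whether the module carries a distance
    unit: str = "m"        # distance unit (the system normalizes to meters)
    dis: float = 0.0       # numerical distance value
    kind: int = 0          # agreed flag of the direction word

@dataclass
class LandmarkParams:
    """L = {sIndex, dIndex, isVirtual, dis, nameR, position} of a matched landmark."""
    sIndex: int            # position of the landmark in the sentence
    dIndex: int            # index in the landmark dictionary
    isVirtual: bool = False
    # 3 x 3 arrays of distance and direction relations to the next landmark
    dis: list = field(default_factory=lambda: [[0.0] * 3 for _ in range(3)])
    nameR: list = field(default_factory=lambda: [[""] * 3 for _ in range(3)])
    position: tuple = (0.0, 0.0)

d = ConverterParams(sIndex=2, isDis=True, dis=10.0)
print(d.unit, d.dis)  # m 10.0
```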
The number of valid forward and backward landmarks in an NLRP short sentence is one each; if there are several, only the one closest to the direction-conversion module is extracted when the short sentence is parsed.
An NLRP short sentence often contains more than one direction-conversion module. A sentence such as "turn left, then go ahead about 10 meters and you arrive" is easy to understand, but a sentence such as "turn left, walk to the right" does not conform to everyday usage and is therefore invalid. A reasonable direction-conversion module in a short sentence is either a valid one, or an invalid one followed by a valid one whose direction word must be the agreed word "front" or "back"; in the latter case, the contents of the two modules are merged into one valid direction-conversion module.
During short-sentence extraction, the backward landmark of one sentence defaults to the forward landmark of the next.
Because the objects and vocabulary involved in indoor NLRP are finite, part-of-speech tagging and matching can be carried out by dictionary matching. The present invention involves six dictionaries: a landmark dictionary, a transient-verb dictionary, a direction-word dictionary, a preposition dictionary, a distance dictionary, and a position-directive-verb dictionary.
Synonyms are common in Chinese. Because the present invention needs to resolve the directions of objects, direction words with equivalent meaning are marked with a synonym flag specifying their corresponding model, so that not every direction word has to be analyzed individually.
Direction is the most important guiding information in NLRP and the important guarantee of walking correctly. Direction judgments generally appear at path turns. The direction words expressing absolute direction are: east, west, south, north, southeast, northeast, southwest, northwest. Relative direction is centered on the observer's viewpoint, with the observer as the reference object; the direction words expressing relative direction are: front, back, left, right, left-front, left-back, right-front, right-back, etc. To make it easy to handle the position relations of the landmarks in a sentence, this system unifies all directions into absolute directions.
Because indoor landmarks are small objects relative to the scale of the whole indoor space, a landmark can be abstracted as a particle when position relations are handled; the influence of object size is therefore ignored when directions are derived. The derivation steps are as follows:
(1) Let the absolute direction the robot faces on reaching the backward landmark of the i-th short sentence be D_i; then D_i = f(D_{i-1}, RD_i), where RD_i is the valid direction word in the direction-conversion module of the i-th short sentence (see Fig. 1). A relative direction word rotates the previous absolute heading D_{i-1} accordingly, while if RD_i is itself an absolute direction word, then D_i = RD_i.
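Treating the relative direction words as quarter-turns of the previous absolute heading, step (1) might be sketched as follows; this is a minimal illustration assuming the conventional compass reading of the turn words, which may differ from the convention in the patent's own figure:

```python
# Compose the absolute heading D_i from D_{i-1} and the direction word RD_i.
ABSOLUTE = ["north", "east", "south", "west"]           # clockwise order
TURN = {"front": 0, "right": 1, "back": 2, "left": -1}  # quarter turns

def compose(prev_heading, rd):
    """D_i = RD_i for an absolute word, else rotate D_{i-1} by the turn."""
    if rd in ABSOLUTE:
        return rd
    return ABSOLUTE[(ABSOLUTE.index(prev_heading) + TURN[rd]) % 4]

print(compose("east", "left"))   # north, under this compass convention
print(compose("north", "west"))  # west: an absolute word passes through
```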
(2) Let the forward landmark of the i-th NLRP short sentence be l_i and the backward landmark be l_{i+1}, let the valid direction-conversion module be VD_i, and let dis(i) and unit(i) be the distance and unit in VD_i. This system converts all length units into meters. In the handler, models are built for all absolute direction words, e.g. D_i = "east", unit(i) = "meter", dis(i) = 10. dIndex(i) and dIndex(i+1) are the index numbers of l_i and l_{i+1} in the landmark dictionary.
Through the above calculation, the direction-distance relation of each landmark can be extracted from the NLRP, forming a linked list: each landmark is a node storing the direction and distance of the next landmark relative to itself, as in Fig. 4.
(3) The direction relations between the landmarks in the navigation intention map are relative position relations: as long as the relative positions among the landmarks are correct, their overall absolute orientation has no influence on the result. For ease of derivation, the initial absolute direction of the robot is therefore set to north here. With the robot's position as the origin, a two-dimensional Cartesian coordinate system, denoted {A}, is established with reference to Fig. 2, taking north as the positive y-axis and east as the positive x-axis. From the relative positions and distance relations of the landmarks obtained in (2), the coordinates of each object are easy to obtain.
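Under the frame {A} just defined (robot at the origin, north = +y, east = +x), accumulating the linked list of direction-distance pairs into landmark coordinates might look like the following sketch (illustrative code, not the patent's implementation):

```python
# Accumulate landmark coordinates in frame {A} from a chain of
# (absolute direction, distance in meters) segments.
HEADING = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def chain_to_coords(segments):
    """segments: list of (absolute_direction, distance_m).
    Returns the {A}-frame coordinates of the landmark ending each segment."""
    x = y = 0.0
    coords = []
    for direction, dist in segments:
        dx, dy = HEADING[direction]
        x, y = x + dx * dist, y + dy * dist
        coords.append((x, y))
    return coords

# "go north 3 m to a box, then east 5 m to the next landmark"
print(chain_to_coords([("north", 3), ("east", 5)]))  # [(0.0, 3.0), (5.0, 3.0)]
```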
The navigation intention map is established according to the following method. Denote the coordinate system of the navigation intention map by {B}; the origin of this coordinate system is the upper-left corner of the interface, its positive x-axis is the same as that of {A}, and its positive y-axis is opposite to that of {A}, so the points in {A} must be mapped into {B} at a reasonable scale.
The mapping method is as follows:
(1) Let the robot's coordinates in {B} be (u_0, v_0); this point is generally set at the center of the interface.
(2) The coordinates (x_i, y_i) of landmark l_i in {A} and its coordinates (u_i, v_i) in {B} are related by

u_i = u_0 + x_i · j,  v_i = v_0 − y_i · j    (1)

where j is a parameter that adjusts the pixel distances between the landmarks on the map interface. In the navigation intention map a certain distance is needed between landmarks to satisfy the requirement of key-point extraction. With (u_i, v_i) the pixel coordinates corresponding to l_i, define

u_max = max{u_1, u_2, u_3, …, u_n}    (2)
u_min = min{u_1, u_2, u_3, …, u_n}    (3)
v_max = max{v_1, v_2, v_3, …, v_n}    (4)
v_min = min{v_1, v_2, v_3, …, v_n}    (5)

where n is the number of landmarks. The initial value of j is 1, and j is increased gradually until one or more of the quantities in formulas (2) to (5) is about to exceed the interface boundary.
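A minimal sketch of Eq. (1) together with the gradual growth of j, assuming the boundary check of formulas (2) to (5) reduces to keeping every landmark pixel inside the interface (function and parameter names are illustrative):

```python
def map_to_interface(coords_A, u0, v0, width, height):
    """Map {A} coordinates to {B} pixels via Eq. (1), growing j from 1
    while every landmark still fits inside the width x height interface."""
    j = 1
    while True:
        trial = [(u0 + x * (j + 1), v0 - y * (j + 1)) for x, y in coords_A]
        if any(not (0 <= u < width and 0 <= v < height) for u, v in trial):
            break               # the next j would push a landmark off-screen
        j += 1
    return [(u0 + x * j, v0 - y * j) for x, y in coords_A], j

pts, j = map_to_interface([(1, -1)], u0=100, v0=100, width=200, height=200)
print(pts, j)  # [(199, 199)] 99
```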
It can be seen that the distance quantities in the NLRP are not used for navigation directly; they are used to establish the relative positions of the landmarks in the navigation intention map. During navigation, the robot refreshes the scale between the world coordinate system and the pixel coordinate system in real time; the distances in the NLRP are auxiliary quantities for visual localization while the robot runs, and localization and navigation still rely mainly on vision.
Based on the relative positions of the robot and the landmarks in the navigation intention map, the key guide points are extracted directly around the landmark points, as follows:
(1) Let the coordinates of l_i and l_{i+1} in {B} be (u_i, v_i) and (u_{i+1}, v_{i+1}), and let the key point corresponding to l_{i+1} be (ku_{i+1}, kv_{i+1}). A key guide point generally lies at a fixed pixel distance d from its landmark; d is generally set to 40 pixels.
(2) The cases are discussed separately:
a) if u_i = u_{i+1}, then ku_{i+1} = u_{i+1} and

kv_{i+1} = v_{i+1} + d if v_i > v_{i+1}, or v_{i+1} − d if v_i < v_{i+1}    (6)

b) if v_i = v_{i+1}, then kv_{i+1} = v_{i+1} and

ku_{i+1} = u_{i+1} + d if u_i > u_{i+1}, or u_{i+1} − d if u_i < u_{i+1}    (7)

c) if u_i ≠ u_{i+1} and v_i ≠ v_{i+1}, the key point lies on the line through l_i and l_{i+1} at distance d from l_{i+1}, on the side of l_i:

ku_{i+1} = u_{i+1} + d · (u_i − u_{i+1}) / ρ,  kv_{i+1} = v_{i+1} + d · (v_i − v_{i+1}) / ρ,
where ρ = √((u_i − u_{i+1})² + (v_i − v_{i+1})²)    (8)-(11)
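The branch structure of formulas (8) to (11) amounts to placing the key point at distance d from l_{i+1} along the direction toward l_i; a hypothetical sketch (the axis-aligned cases (6) and (7) fall out as special cases of the same expression):

```python
import math

def key_point(u_i, v_i, u_next, v_next, d=40):
    """Key guide point for l_{i+1}: at pixel distance d from
    (u_next, v_next), on the side facing (u_i, v_i)."""
    rho = math.hypot(u_i - u_next, v_i - v_next)
    if rho == 0:
        return u_next, v_next   # degenerate: coincident landmarks
    return (u_next + d * (u_i - u_next) / rho,
            v_next + d * (v_i - v_next) / rho)

print(key_point(0, 0, 100, 0))  # (60.0, 0.0): 40 px before the landmark
```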
At this point the extraction and representation of the robot's travel path are complete, and the robot's navigation intention map has been formed.
The schematic diagram of the reference-target prediction-estimation method is shown in Fig. 2. The two dark nodes represent the current key guide point N_this and the next key guide point N_next. Suppose the robot is already at N_this and faces the direction of the vector from N_this to N_next. The two grey nodes N_0.5 and N_0.75 represent the positions on that vector at distances 0.5·D_dist(N_this, N_next) and 0.75·D_dist(N_this, N_next) from N_this. Targets 1 to 4 are the targets in the environment around N_next within a certain camera field of view; d_1 to d_4 and α_1 to α_4 denote each target's distance from N_next (computable from the pixel distance and the map scale) and the angle between each target and the robot's running direction.
Analysis shows that whether a target can serve as a reference is related to two factors: the distance between the target and the key guide point, and the degree to which the target deviates from the robot's direction of motion. A distance that is too small or too large is constrained by the image-recognition capability, making the image unsuitable for recognition; too much deviation in direction also makes it inconvenient for the robot to control the camera to recognize the image. In view of this, two constraint functions f_1(d) and f_2(α) are proposed, expressing respectively the influence of a target's distance and of its direction deviation on its suitability as a reference, where

f_1(d) = 0 for d < 0.5D; 0.5d/D + 0.25 for 0.5D ≤ d ≤ 1.5D; −0.4d/D + 1.6 for 1.5D ≤ d ≤ 4D; 0 for 4D < d ≤ 6D

f_2(α) = −3α/(2π) + 1 for 0 < α ≤ 2π/3; 0 for 2π/3 < α ≤ π

For each target i, the comprehensive possibility degree F_i of serving as a reference target is calculated by formula (12):

F_i = f_1(d_i) · f_2(α_i)    (12)

Empirically, if every F_i falls below a threshold, it is considered that there is no reference target near N_next. Otherwise, the target i that maximizes F_i serves as the reference target; if several targets attain the maximum F_i, the one with the smallest α_i among them is selected as the reference.
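The selection rule built on the two piecewise constraint functions can be sketched directly (an illustrative implementation; the empirical rejection threshold is left at zero here since a specific value is not reproduced in this text):

```python
import math

def f1(d, D):
    """Distance constraint: piecewise function of d relative to D."""
    r = d / D
    if r < 0.5 or r > 4:        # too close or too far for recognition
        return 0.0
    if r <= 1.5:
        return 0.5 * r + 0.25
    return -0.4 * r + 1.6

def f2(alpha):
    """Direction-deviation constraint; zero beyond 2*pi/3."""
    if 0 < alpha <= 2 * math.pi / 3:
        return -3 * alpha / (2 * math.pi) + 1
    return 0.0

def best_reference(targets, D):
    """targets: list of (d_i, alpha_i). Returns the index of the reference
    target, or None when every F_i is zero; ties break on smallest alpha."""
    scores = [f1(d, D) * f2(a) for d, a in targets]
    if max(scores, default=0.0) <= 0.0:
        return None
    best = max(scores)
    tied = [i for i, s in enumerate(scores) if s == best]
    return min(tied, key=lambda i: targets[i][1])
```

Note that f_1 peaks at d = 1.5D with value 1 and is continuous at the breakpoints, which is one sanity check on the reconstructed piecewise form.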
In the hand-drawn map, the pixel position of each target is given, together with the straight-line distance from the start point to the end point; from the pixel distance between the start and end points, the initial scale between the hand-drawn map and the actual environment can be obtained. When the robot moves near a key guide point, it localizes itself on the basis of the surrounding target images, and from the change of the robot's position on the map before and after localization, the map scale can be updated.
If the robot's position on the map has changed after localization, the map scale can be updated through this change. Let the map scale before the update be R_oldruler, the key guide point at which this running segment began be L_1, its end point be L_2, and the robot's position at the end point on the map according to the image-based localization be L'_2. The updated scale R_newruler is calculated with the following functional relation:
R newruler = D dist ( L 1 , L 2 ) D dist ( L 1 , L 2 &prime; ) &CenterDot; R oldruler , RC R oldruler , other - - - ( 13 )
where D_{dist}(·) denotes the distance between two points, and RC denotes the scale update condition, which is set empirically as

0.33 < D_{dist}(L_1, L_2) / D_{dist}(L_1, L_2') < 3
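The initial-scale computation and formula (13) can be sketched as follows (an illustrative Python sketch; the function names, tuple representation of map points, and metres-per-pixel units are my own assumptions):

```python
import math

def dist(p, q):
    """Euclidean distance D_dist between two map points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def initial_scale(actual_start_end_m, pixel_start_end):
    """Initial scale of the hand-drawn map: metres of actual environment
    per map pixel, from the given start-to-end distances."""
    return actual_start_end_m / pixel_start_end

def update_scale(r_old, l1, l2, l2_located):
    """Formula (13): rescale by the ratio of the planned segment length
    D_dist(L1, L2) to the located one D_dist(L1, L2'), but only when the
    ratio update condition RC (0.33 < ratio < 3) holds."""
    ratio = dist(l1, l2) / dist(l1, l2_located)
    if 0.33 < ratio < 3:          # RC: reject implausible localizations
        return ratio * r_old
    return r_old                  # otherwise keep the old scale
```

The RC guard means a wildly inconsistent localization (off by more than a factor of three) leaves the scale untouched rather than corrupting it.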
The robot navigation task of the present invention is illustrated below with four experiments:
Experiment one: the nouns of locality between landmarks in the experimental environment all correspond to ideal situations; for example, "left" means exactly "directly to the left".
In Fig. 3, A is the robot, B is box 1, C is the express box, D is the Compaq computer, and E is box 2 (throughout the subsequent figures of the present invention, the letters refer to the same objects).
For the above scene, the robot is required to move from position A to E. A restricted NLRP description is given: "From the current position, go forward about 3 meters to reach the box; walk about 5 meters to the right to reach the express box; then turn right and walk about 4 meters to reach the Compaq computer; the Compaq is blue. Turn left again and go forward about 4 meters to reach the box." This restricted NLRP description also contains the distractor "the Compaq is blue".
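The structure of such a restricted description can be illustrated with a toy parser. This is a hedged sketch over an English rendering of the instructions: the actual method parses Chinese sentence units using the dictionaries of claim 2, and the regular expression, word lists, and example sentence below are my own simplification. Sentence units that match no path pattern, such as the distractor "the Compaq is blue", are simply discarded:

```python
import re

# Toy analogues of the noun-of-locality and landmark dictionaries (claim 2).
DIRECTIONS = {"forward": 0, "right front": 45, "right": 90, "left": -90}
LANDMARKS = {"box", "express box", "Compaq"}

PATTERN = re.compile(
    r"(?:go|walk)(?: forward| ahead| about)* (\d+) meters?"
    r"(?: to the (right front|right|left))?"
)

def parse(description):
    """Split the description into sentence units and keep only those that
    yield (direction_deg, distance_m, landmark-or-None) triples; anything
    else (e.g. 'the Compaq is blue') is rejected as invalid path info."""
    units = []
    for sent in re.split(r"[,;.]", description):
        m = PATTERN.search(sent)
        if not m:
            continue  # distractor or otherwise invalid path information
        distance = int(m.group(1))
        direction = DIRECTIONS[m.group(2) or "forward"]
        # Longest-match lookup so "express box" is not mistaken for "box";
        # a unit with no landmark name yields None (a "virtual" landmark).
        landmark = next((lm for lm in sorted(LANDMARKS, key=len, reverse=True)
                         if lm in sent), None)
        units.append((direction, distance, landmark))
    return units
```

For example, `parse("go forward about 3 meters to the box, the Compaq is blue, walk about 5 meters to the right to the express box")` keeps the two movement units and drops the distractor.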
The robot's actual running route is shown in Fig. 4. The pentagrams represent landmarks; their positions on the navigation mental map are obtained by NLRP parsing.
Experiment two: a restricted NLRP description is given for the environment shown in Fig. 5: "From the current position, go forward about 3 meters to reach the box; walk about 3 meters to the right to reach the express box; then walk about 3 meters to the right front to reach the Compaq computer." Here "walk about 3 meters to the right front" contains a word expressing a 45° direction.
The robot's actual running route is shown in Fig. 6.
Experimental analysis: the landmarks established through NLRP all determine their own positions in strict accordance with the literal meaning of the nouns of locality. In both of the experiments above, the robot reached the destination. These experiments show that if the landmarks are all located in the ideal positions corresponding to the word meanings of the nouns of locality, the robot can reach the target position. In addition, the distractor contained in the restricted NLRP description has no influence on navigation.
Experiment three: in an actual indoor environment, the landmarks are all obvious objects at key positions such as turns, and some of them are not in ideal positions. Is the navigation algorithm still effective?
As shown in Fig. 7, the position of the express box has changed relative to Fig. 3: it has deviated about 1 meter from its original position.
The same NLRP description as in Experiment one is given for this scene; the landmark analysis results and the robot's actual running route are shown in Fig. 8.
Experimental analysis: in the navigation method of the present invention, when looking for a landmark the robot can use the camera to scan left and right within a certain angle, so the robot is highly robust to the fuzziness of the real environment.
Experiment four: sometimes the NLRP does not mention a landmark at a key position. Is the navigation algorithm still effective?
For the scene of Fig. 3, the following NLRP is given: "From the current position, go forward about 3 meters to reach the box; walk about 5 meters to the right; turn right and walk about 4 meters to reach the Compaq computer; the Compaq is blue; go forward about 4 meters to the left until you see the box." Fig. 9 shows the result of the robot navigating with this NLRP.
Experimental analysis: in the NLRP given for this experiment, no landmark name is provided at the position of the 2nd landmark in Fig. 9, so a virtual landmark is extracted there. In the navigation method of the present invention, when the robot approaches the virtual landmark it does not scan in any specific direction; instead, it localizes by rotating in place and then directly carries out the next-stage operation according to the previous scale. The experiment shows that even when no landmark is mentioned at some key position of the NLRP, the robot can still reach the destination.
The above experiments show that, through restricted natural language understanding, the robot can complete navigation tasks well.

Claims (2)

1. A robot navigation method based on natural language processing, characterized by comprising the following steps:
Step 1: perform syntactic analysis on the sentence unit structures of the natural-language path expression according to Chinese path description rules and dictionaries, and obtain valid path information after rejecting invalid path information;
Step 2: extract the forward landmark information, the orientation transition information and the backward landmark information from the valid path information;
Step 3: derive the spatial position relationships between landmarks, which specifically comprises the steps:
Step 3-1: extract the valid noun of locality from the orientation transition information of each sentence unit structure;
Step 3-2: convert all valid nouns of locality into absolute orientation words;
Step 3-3: for each landmark, store the orientation and distance information of the following landmark relative to itself;
Step 4: map the relative positions of the landmarks onto a Cartesian coordinate system whose origin is the robot's current coordinate, and then map this coordinate system onto the navigation mental map through a coordinate transform;
Step 5: extract the key guide points of the natural-language path expression according to the representation of the landmarks in the navigation mental map and the noun of locality model;
Step 6: determine the initial scale according to the input actual distance between the start point and the end point and the corresponding pixel distance in the navigation mental map;
Step 7: calculate the distance between the current key guide point and the next key guide point according to the initial scale, and determine the running mode between the two key guide points;
Step 8: the robot moves according to the running mode of step 7 and navigates by dead-reckoning estimation; during navigation, the SURF algorithm is used to match the real-time image against the original image to find the reference object, and the angle of the robot's image acquisition device is adjusted according to the reference object;
Step 9: after the robot reaches the next key guide point, localize according to the pixel height of the landmark in the real-time image, or localize through odometer information;
Step 10: update the position of the next key guide point and the map scale, and repeat steps 7 to 9 with the updated scale until the robot reaches the last key guide point.
2. The robot navigation method based on natural language processing according to claim 1, characterized in that: the dictionaries of step 1 comprise a landmark dictionary, a transient verb dictionary, a noun of locality dictionary, a preposition dictionary, a distance dictionary and an object orientation verb dictionary, and synonymous nouns of locality are marked with the same flag bit to obtain the noun of locality model.
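Steps 3-2 and 4 of claim 1 — converting relative nouns of locality into absolute orientations and mapping the resulting (orientation, distance) pairs into a robot-centred Cartesian frame — can be sketched as follows. This is an illustrative Python sketch; the degree convention (0° = the robot's initial facing, clockwise positive) and the word list are my own assumptions, not the patent's:

```python
import math

# Relative noun of locality -> turn angle in degrees; "right front" is the 45-degree word.
TURN = {"forward": 0, "right front": 45, "right": 90, "back": 180, "left": -90}

def to_absolute(relative_words):
    """Step 3-2: accumulate relative direction words into absolute
    orientations (headings with respect to the starting direction)."""
    heading, absolute = 0, []
    for w in relative_words:
        heading = (heading + TURN[w]) % 360
        absolute.append(heading)
    return absolute

def to_cartesian(absolute_headings, distances):
    """Step 4: chain each landmark's (heading, distance) pair into a
    Cartesian frame whose origin is the robot's start position and whose
    +y axis is the robot's initial facing direction."""
    x = y = 0.0
    points = []
    for h, d in zip(absolute_headings, distances):
        rad = math.radians(h)
        x += d * math.sin(rad)   # heading 0 deg points along +y
        y += d * math.cos(rad)
        points.append((round(x, 3), round(y, 3)))
    return points
```

For instance, "forward, then right, then right front" accumulates to absolute headings 0°, 90°, 135°, and chaining "3 m forward, 5 m right" places the landmarks at (0, 3) and (5, 3) in the start frame.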
CN201110211946A 2011-07-27 2011-07-27 Robot navigation method based on natural language processing Pending CN102306145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110211946A CN102306145A (en) 2011-07-27 2011-07-27 Robot navigation method based on natural language processing


Publications (1)

Publication Number Publication Date
CN102306145A true CN102306145A (en) 2012-01-04

Family

ID=45380009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110211946A Pending CN102306145A (en) 2011-07-27 2011-07-27 Robot navigation method based on natural language processing

Country Status (1)

Country Link
CN (1) CN102306145A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1598487A (en) * 2004-07-23 2005-03-23 东北大学 Method for visual guiding by manual road sign
JP2006184976A (en) * 2004-12-24 2006-07-13 Toshiba Corp Mobile robot, its movement method, and movement program
US20080294338A1 (en) * 2005-12-09 2008-11-27 Nakju Doh Method of Mapping and Navigating Mobile Robot by Artificial Landmark and Local Coordinate
CN102087530A (en) * 2010-12-07 2011-06-08 东南大学 Vision navigation method of mobile robot based on hand-drawing map and path


Non-Patent Citations (6)

Title
THOMAS KOLLAR et al.: "Toward Understanding Natural Language Directions", Human-Robot Interaction (HRI), Boston: 2010 5th ACM/IEEE International Conference, 31 December 2010 (2010-12-31), pages 259-267 *
YUAN WEI et al.: "Where to Go: Interpreting Natural Directions Using Global Inference", 2009 IEEE International Conference on Robotics and Automation, Kobe International Conference Center, Kobe, Japan, 17 May 2009 (2009-05-17), pages 3761-3767 *
LIU Yu et al.: "Research on GIS Path Reconstruction Based on Restricted Chinese" (基于受限汉语的GIS路径重建研究), Journal of Remote Sensing (遥感学报), vol. 8, no. 4, 31 July 2004 (2004-07-31), pages 323-330 *
ZHANG Xueying et al.: "A Natural Language Path Description Method for Chinese" (面向汉语的自然语言路径描述方法), Geo-Information Science (地球信息科学), vol. 10, no. 6, 31 December 2008 (2008-12-31), pages 757-762 *
TIAN Yu et al.: "Natural Language Understanding System for a Lake-Cleaning Robot" (湖水清污机器人自然语言理解系统), Modern Computer (现代计算机), no. 265, 31 December 2007 (2007-12-31), pages 106-107 *
NIE Xianli et al.: "Task Programming of Mobile Robots Using Natural Language" (采用自然语言的移动机器人任务编程), Robot (机器人), vol. 25, no. 4, 31 July 2003 (2003-07-31) *

Cited By (18)

Publication number Priority date Publication date Assignee Title
CN102853830A (en) * 2012-09-03 2013-01-02 东南大学 Robot vision navigation method based on general object recognition
CN103514157A (en) * 2013-10-21 2014-01-15 东南大学 Path natural language processing method for indoor intelligent robot navigation
CN103514157B (en) * 2013-10-21 2016-01-27 东南大学 A kind of path natural language processing method of intelligent robot navigation in faced chamber
CN105043375A (en) * 2015-06-04 2015-11-11 上海斐讯数据通信技术有限公司 Navigation method, navigation system and corresponding mobile terminal
CN104898675A (en) * 2015-06-05 2015-09-09 东华大学 Robot intelligent navigation control method
CN108629443B (en) * 2017-10-12 2022-02-01 环达电脑(上海)有限公司 Path planning method and navigation system for converting path description into machine-readable format
CN108629443A (en) * 2017-10-12 2018-10-09 环达电脑(上海)有限公司 Path description is converted to the paths planning method and navigation system of machine readable format
CN108734262A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Smart machine control method, device, smart machine and medium
CN108734262B (en) * 2018-03-21 2020-12-08 北京猎户星空科技有限公司 Intelligent device control method and device, intelligent device and medium
CN112714896A (en) * 2018-09-27 2021-04-27 易享信息技术有限公司 Self-aware visual-text common ground navigation agent
CN112714896B (en) * 2018-09-27 2024-03-08 硕动力公司 Self-aware vision-text common ground navigation agent
CN112346449A (en) * 2019-08-08 2021-02-09 和硕联合科技股份有限公司 Semantic map orientation device and method and robot
CN110631578A (en) * 2019-09-29 2019-12-31 电子科技大学 Indoor pedestrian positioning and tracking method under map-free condition
US11341334B2 (en) 2020-01-28 2022-05-24 Here Global B.V. Method and apparatus for evaluating natural language input to identify actions and landmarks
CN113670310A (en) * 2021-07-27 2021-11-19 际络科技(上海)有限公司 Visual voice navigation method, device, equipment and storage medium
CN113670310B (en) * 2021-07-27 2024-05-31 际络科技(上海)有限公司 Visual voice navigation method, device, equipment and storage medium
CN117824663A (en) * 2024-03-05 2024-04-05 南京思伽智能科技有限公司 Robot navigation method based on hand-drawn scene graph understanding
CN117824663B (en) * 2024-03-05 2024-05-10 南京思伽智能科技有限公司 Robot navigation method based on hand-drawn scene graph understanding

Similar Documents

Publication Publication Date Title
CN102306145A (en) Robot navigation method based on natural language processing
Baek et al. Augmented reality system for facility management using image-based indoor localization
CN109470247B (en) Complex sea area navigation safety auxiliary information indicating system based on electronic chart
US20230039293A1 (en) Method of processing image, electronic device, and storage medium
Treuillet et al. Outdoor/indoor vision-based localization for blind pedestrian navigation assistance
CN110222137A (en) One kind is based on oblique photograph and augmented reality Intelligent campus system
CN102853830A (en) Robot vision navigation method based on general object recognition
CN110285818A (en) A kind of Relative Navigation of eye movement interaction augmented reality
CN103901895A (en) Target positioning method based on unscented FastSLAM algorithm and matching optimization and robot
Skubic et al. Using a hand-drawn sketch to control a team of robots
CN107144281A (en) Unmanned plane indoor locating system and localization method based on cooperative target and monocular vision
CN103499352A (en) Mobile GPS (global positioning system) real-scene navigation system based on street scene technology
Li et al. Intelligent mobile drone system based on real-time object detection
CN116518960B (en) Road network updating method, device, electronic equipment and storage medium
Katz et al. NAVIG: Navigation assisted by artificial vision and GNSS
CN113591518A (en) Image processing method, network training method and related equipment
Qian et al. Wearable-assisted localization and inspection guidance system using egocentric stereo cameras
Chidsin et al. AR-based navigation using RGB-D camera and hybrid map
CN112364137A (en) Knowledge graph construction method for space target situation
CN107463871A (en) A kind of point cloud matching method based on corner characteristics weighting
CN110843772B (en) Method, device, equipment and storage medium for judging relative direction of potential collision
CN112651991A (en) Visual positioning method, device and computer system
Zhao et al. A multi-sensor fusion system for improving indoor mobility of the visually impaired
CN105243665A (en) Robot biped positioning method and apparatus
CN105447875A (en) Automatic geometric correction method for electronic topographical map

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120104