CN113485382A - Mobile robot autonomous navigation method and system for man-machine natural interaction - Google Patents


Info

Publication number: CN113485382A
Application number: CN202110990651.XA
Authority: CN (China)
Language: Chinese (zh)
Other versions: CN113485382B (granted)
Inventors: 迟文政, 徐晴川, 洪阳, 叶荣广, 陈国栋, 孙立宁
Applicant and current assignee: Suzhou University
Prior art keywords: sequence, landmark, instruction, time sequence word, word
Legal status: Granted; Active

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 — with means for defining a desired trajectory
    • G05D1/0221 — involving a learning process
    • G05D1/0231 — using optical position detecting means
    • G05D1/0234 — using optical markers or beacons
    • G05D1/0236 — using optical markers or beacons in combination with a laser
    • G05D1/0238 — using obstacle or wall sensors
    • G05D1/024 — using obstacle or wall sensors in combination with a laser
    • G05D1/0276 — using signals provided by a source external to the vehicle

Abstract

The invention relates to a mobile robot autonomous navigation method and system for man-machine natural interaction, comprising the following steps: controlling the robot to walk in an unknown environment and building a map of that environment; obtaining an instruction landmark sequence and an instruction time sequence word sequence; pairing the landmarks in the instruction landmark sequence with the time sequence words in the instruction time sequence word sequence; calculating a weight for each time sequence word in the instruction time sequence word sequence and reordering the time sequence words by weight to obtain a new instruction time sequence word sequence; reordering the landmarks in the instruction landmark sequence against the new instruction time sequence word sequence, using the landmark/time-sequence-word pairing relationship, to obtain a navigation landmark sequence; and generating a robot navigation target point sequence from the navigation landmark sequence and navigating to the target points in order. The method accomplishes robot navigation tasks under natural language instructions without any corpus or annotated data set, which reduces cost and offers high flexibility.

Description

Mobile robot autonomous navigation method and system for man-machine natural interaction
Technical Field
The invention relates to the technical field of robot navigation, in particular to a mobile robot autonomous navigation method and system for man-machine natural interaction.
Background
With major breakthroughs in artificial intelligence technology and the continuous development of service robots in recent years, research on mobile service robots has gained unprecedented attention and progress, driving explosive growth of the whole service robot industry. Mobile service robots can now frequently be seen in hospitals, airports, and shopping malls in China. Unlike in the laboratory, in practical applications the service robot is usually instructed by an ordinary user rather than a robotics expert. Users are no longer satisfied with issuing commands through complicated buttons or controllers, and guiding service robots with natural language has become a major research trend. The biggest difficulty in making a robot understand natural language instructions is the rich temporal logic of natural language. At present, methods for guiding robot navigation through natural language mainly rely on end-to-end learning or linear temporal logic (LTL) expressions, but both have notable limitations.
The end-to-end approach requires that the input instruction contain the robot's specific actions, such as "turn left", "turn right", or "move forward". Such specific instructions are too complex and cumbersome for the user: when using a robot for a practical task, few users have the patience to constantly tell it to turn left, turn right, or move forward. These methods also require the user to give instructions in a fixed form, hindering natural, fluent language interaction with the robot.
Because linear temporal logic expressions carry rich low-level semantics, the corpus required to train a model that translates natural language instructions into LTL expressions is very large. This dependence on large amounts of annotated data makes the approach very costly.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defects of current natural-language-guided robot navigation technology in the prior art, namely its complicated, rigid instruction forms and high cost.
In order to solve this technical problem, the invention provides a mobile robot autonomous navigation method for human-computer natural interaction, which comprises the following steps:
s1, controlling the robot to move in the unknown environment, and establishing a map for the current unknown environment;
s2, acquiring a natural language instruction, extracting information of landmark points and time sequence words from the natural language instruction, and acquiring an instruction landmark sequence and an instruction time sequence word sequence;
s3, pairing the landmarks in the instruction landmark sequence and the time sequence words in the instruction time sequence word sequence to enable each landmark point to be paired with the time sequence words, and obtaining the pairing relation between the landmarks and the time sequence words;
s4, calculating the weight of each time sequence word in the instruction time sequence word sequence, and reordering the time sequence words according to the weight to obtain a new instruction time sequence word sequence;
s5, reordering the landmarks in the instruction landmark sequence based on the new instruction time sequence word sequence according to the pairing relationship between the landmarks and the time sequence words in S3, updating the instruction landmark sequence, and obtaining a navigation landmark sequence;
and S6, generating a robot navigation target point sequence according to the navigation landmark sequence, and sequentially navigating according to the robot navigation target point sequence.
Preferably, the S1 includes:
controlling the robot with a handle to walk in an unknown environment, and establishing a map of the current environment;
in the process of establishing the map, recording the robot's position and the landmark name at each landmark, and marking the coordinate positions of the landmarks on the map.
Preferably, in S1, the map of the current environment is created through a mapping function package and the laser sensor carried by the robot itself.
Preferably, in S2, extracting information of landmark points and time-series words from the natural language instruction to obtain an instruction landmark sequence and an instruction time-series word sequence, the method includes:
storing landmark names and corresponding landmark positions in an unknown environment in a landmark table, and enriching the landmark table by adding aliases to the landmark names;
storing the common time sequence words and the basic weight thereof in a time sequence word list;
performing word segmentation processing on the instruction through a Chinese word segmentation algorithm to obtain an instruction sequence;
matching the instruction sequence with landmarks in a landmark table, and extracting the landmarks in the instruction sequence to obtain an instruction landmark sequence;
and matching the instruction sequence with the time sequence words in the time sequence word list, and extracting the time sequence words in the instruction sequence to obtain the instruction time sequence word sequence.
Preferably, the S3 includes:
s31, judging whether the landmark in the command landmark sequence lacks the matched time sequence word, if not, executing the step S35; if yes, executing the next step;
s32, judging whether the time sequence words contain 'before' or 'after', if so, moving the instruction clauses containing 'before' or 'after' to the instruction tail; if not, executing the next step;
s33, judging whether the first landmark lacks a time sequence word matched with the first landmark, if so, adding the time sequence word 'first' at the head end of the time sequence word, and if not, executing the next step;
s34, judging whether any landmark lacks a time sequence word matched with it; if so, copying the time sequence word immediately before the missing position and inserting the copy at the missing position; if not, executing the next step; this step is repeated until no landmark lacks a matching time sequence word;
s35, pairing the landmarks in the instruction landmark sequence and the time sequence words in the instruction time sequence word sequence one by one in sequence.
Preferably, in S4, the calculating a weight of each time-series word in the sequence of instruction time-series words includes:
s41, judging whether the time sequence word is of the "pass" type; if not, executing the next step; if yes, the weight of the time sequence word is calculated as:

W_i = W_{i-1} - 0.01

where W_i is the weight of the i-th time sequence word and W_{i-1} is the weight of the preceding time sequence word;

s42, judging whether the time sequence word is of the "then" type; if not, executing the next step; if yes, the weight of the time sequence word is calculated as:

W_i = W_{i-1} + 0.02;

s43, for time sequence words that are neither of the "pass" type nor the "then" type, the weight is calculated as:

W_i = w_i + 0.1 * i,

where w_i is the base weight of the time sequence word and i is the position of the time sequence word in the instruction time sequence word sequence.
Preferably, in S6, the generating a robot navigation target point sequence according to the navigation landmark sequence specifically includes:
searching the landmarks in the navigation landmark sequence in the landmark table to obtain corresponding coordinate values, and taking the coordinate values as the coordinates of the navigation target point;
generating the navigation target point orientation as a quaternion:

θ = arctan2(y_2 - y_1, x_2 - x_1),

q_x = 0,
q_y = 0,
q_z = sin(θ/2),
q_w = cos(θ/2),

where (x_1, y_1) and (x_2, y_2) are the coordinates of the current target point and the next target point respectively, and q_x, q_y, q_z and q_w together form the quaternion representing the orientation of the target point.
Preferably, in S6, the sequentially navigating according to the sequence of robot navigation target points includes:
s61, setting a first target point of the navigation target point sequence as a current target point;
s62, using the A* algorithm as the global path planner to quickly plan a global path from the current position to the target point in the current environment, and using the dynamic window approach as the local path planner to guide the robot around local obstacles, so that the robot navigates from its current position to the current target point;
s63, judging whether the robot has reached the current target point; if so, executing the next step; otherwise, repeating steps S62-S63;
s64, judging whether the arrived target point is the last target point of the navigation target point sequence, if so, finishing the whole navigation process by the robot, and if not, executing the next step;
s65, setting the next target point of the navigation target point sequence as the current target point and returning to the step S62.
Preferably, in the process of executing step S6, if the robot receives a new command, the robot stops navigating, returns to step S2, and executes the new command; if the robot does not receive a new command, the navigation task ends.
The invention further discloses a mobile robot autonomous navigation system for human-computer natural interaction, characterized by comprising:
the mapping module is used for controlling the robot to move in an unknown environment and establishing a map for the current unknown environment;
the instruction landmark and time sequence word sequence acquisition module, which is used for acquiring a natural language instruction, extracting landmark point and time sequence word information from the natural language instruction, and obtaining an instruction landmark sequence and an instruction time sequence word sequence;
the pairing module is used for pairing the landmarks in the instruction landmark sequence with the time sequence words in the instruction time sequence word sequence so that each landmark point is paired with the time sequence words to obtain the pairing relation between the landmarks and the time sequence words;
the instruction time sequence word sequence updating module is used for calculating the weight of each time sequence word in the instruction time sequence word sequence and reordering the time sequence words according to the weight to obtain a new instruction time sequence word sequence;
the navigation landmark sequence acquisition module is used for reordering landmarks in the instruction landmark sequence based on the pairing relationship between the landmarks and the time sequence words by taking the new instruction time sequence word sequence as a reference, updating the instruction landmark sequence and acquiring a navigation landmark sequence;
and the navigation module generates a robot navigation target point sequence according to the navigation landmark sequence and sequentially navigates according to the robot navigation target point sequence.
Compared with the prior art, the technical scheme of the invention has the following advantages:
1. The invention extracts the landmarks and time sequence words in a natural language instruction, orders the landmark sequence by weighting the time sequence words, generates a navigation target point sequence, and completes target point navigation in that order, thereby accomplishing robot navigation tasks under natural language instructions.
2. The invention can process natural language instructions with complex temporal logic and no fixed format, and requires no corpus or annotated data set, which reduces cost and offers high flexibility.
Drawings
FIG. 1 is a flow chart of a navigation method of the present invention;
FIG. 2 is an environmental plan view of an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an exemplary sequence ordering of landmark sequences;
fig. 4 is a robot path diagram according to an embodiment of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
Referring to fig. 1-4, the invention discloses a mobile robot autonomous navigation method for human-computer natural interaction, comprising the following steps.
Step one, controlling the robot to walk in an unknown environment and establishing a map of the current unknown environment, which comprises:
controlling the robot with a handle to walk in the unknown environment, and establishing a map of the current environment through a mapping function package and the laser sensor carried by the robot;
in the process of establishing the map, recording the robot's position and the landmark name at each landmark, and marking the coordinate positions of the landmarks on the map.
And step two, acquiring a natural language instruction, extracting information of landmark points and time sequence words from the natural language instruction, and acquiring an instruction landmark sequence and an instruction time sequence word sequence.
The method comprises the following steps of extracting information of landmark points and time sequence words from natural language instructions to obtain instruction landmark sequences and instruction time sequence word sequences, wherein the method comprises the following steps:
storing landmark names and corresponding landmark positions in an unknown environment in a landmark table, and enriching the landmark table by adding aliases to the landmark names;
storing the common time sequence words and the basic weight thereof in a time sequence word list;
performing word segmentation processing on the instruction through a Chinese word segmentation algorithm to obtain an instruction sequence;
matching the instruction sequence with landmarks in a landmark table, and extracting the landmarks in the instruction sequence to obtain an instruction landmark sequence;
and matching the instruction sequence with the time sequence words in the time sequence word list, and extracting the time sequence words in the instruction sequence to obtain the instruction time sequence word sequence.
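The extraction step above can be sketched in a few lines. This is an illustrative sketch only: the landmark names, coordinates, and time sequence words below are hypothetical, and a real implementation would first run a Chinese word segmenter (such as jieba) to produce the token sequence.

```python
# Hypothetical landmark table and time sequence word list; in the patent the
# landmark table is built per environment, while the time word list is shared.
LANDMARKS = {"sofa": (1.0, 2.0), "table": (3.5, 0.5), "door": (5.0, 4.0)}
TIMING_WORDS = {"first": 1, "then": 0, "before": 2, "finally": 3}

def extract(tokens):
    """Match segmented tokens against both tables, preserving token order."""
    landmark_seq = [t for t in tokens if t in LANDMARKS]
    timing_seq = [t for t in tokens if t in TIMING_WORDS]
    return landmark_seq, timing_seq

print(extract(["first", "go", "to", "the", "sofa", "then", "the", "door"]))
# (['sofa', 'door'], ['first', 'then'])
```

The two returned sequences are the instruction landmark sequence and the instruction time sequence word sequence used by the later steps.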
Step three, pairing the landmarks in the instruction landmark sequence with the time sequence words in the instruction time sequence word sequence to ensure that each landmark point is paired with the time sequence words, and obtaining the pairing relation between the landmarks and the time sequence words, wherein the pairing relation comprises the following steps:
s31, judging whether the landmark in the command landmark sequence lacks the matched time sequence word, if not, executing the step S35; if yes, executing the next step;
s32, judging whether the time sequence words contain 'before' or 'after', if so, moving the instruction clauses containing 'before' or 'after' to the instruction tail; if not, executing the next step;
s33, judging whether the first landmark lacks a time sequence word matched with the first landmark, if so, adding the time sequence word 'first' at the head end of the time sequence word, and if not, executing the next step;
s34, judging whether any landmark lacks a time sequence word matched with it; if so, copying the time sequence word immediately before the missing position and inserting the copy at the missing position; if not, executing the next step; this step is repeated until no landmark lacks a matching time sequence word;
s35, pairing the landmarks in the instruction landmark sequence and the time sequence words in the instruction time sequence word sequence one by one in sequence.
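A minimal sketch of this pairing logic, under simplifying assumptions: the clause reordering of S32 is omitted, and any missing time sequence words are assumed to fall after the existing ones, so padding at the tail stands in for S33-S34.

```python
def complete_and_pair(landmarks, timing):
    # S33: if no time sequence word pairs with the first landmark, add "first".
    # S34: fill each remaining gap by copying the preceding time sequence word.
    # S35: pair landmarks and time sequence words one-to-one in order.
    timing = list(timing)
    if not timing:
        timing.append("first")
    while len(timing) < len(landmarks):
        timing.append(timing[-1])
    return list(zip(landmarks, timing))

print(complete_and_pair(["sofa", "door", "table"], ["first"]))
# [('sofa', 'first'), ('door', 'first'), ('table', 'first')]
```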
And step four, calculating the weight of each time sequence word in the instruction time sequence word sequence, and reordering the time sequence words according to the weight to obtain a new instruction time sequence word sequence.
Wherein, calculating the weight of each time sequence word in the instruction time sequence word sequence comprises:
s41, judging whether the time sequence word is of the "pass" type; if not, executing the next step; if yes, the weight of the time sequence word is calculated as:

W_i = W_{i-1} - 0.01

where W_i is the weight of the i-th time sequence word and W_{i-1} is the weight of the preceding time sequence word;

s42, judging whether the time sequence word is of the "then" type; if not, executing the next step; if yes, the weight of the time sequence word is calculated as:

W_i = W_{i-1} + 0.02;

s43, for time sequence words that are neither of the "pass" type nor the "then" type, the weight is calculated as:

W_i = w_i + 0.1 * i,

where w_i is the base weight of the time sequence word and i is the position of the time sequence word in the instruction time sequence word sequence.
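The weight rules of S41-S43 can be sketched as follows. The index convention (here 1-based) and the base-weight values are assumptions for illustration; the patent's Table 1 supplies the actual base weights, and the sketch assumes the sequence does not begin with a "pass" or "then" word.

```python
BASE_WEIGHT = {"first": 1, "before": 2, "finally": 3}  # illustrative values

def timing_weights(words):
    weights = []
    for i, w in enumerate(words, start=1):  # assumed 1-based position i
        if w == "pass":                      # S41: previous weight - 0.01
            weights.append(weights[-1] - 0.01)
        elif w == "then":                    # S42: previous weight + 0.02
            weights.append(weights[-1] + 0.02)
        else:                                # S43: base weight + 0.1 * i
            weights.append(BASE_WEIGHT[w] + 0.1 * i)
    return weights

print(timing_weights(["first", "pass", "then", "finally"]))
```

Because a "pass" word takes a weight just below its predecessor and a "then" word just above it, passed-through landmarks end up immediately before the preceding goal, and "then" landmarks immediately after it, once the sequence is sorted by ascending weight.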
Step five, reordering the landmarks in the instruction landmark sequence based on the new instruction time sequence word sequence according to the pairing relationship between the landmarks and the time sequence words in the step 3, updating the instruction landmark sequence, and obtaining a navigation landmark sequence;
and step six, generating a robot navigation target point sequence according to the navigation landmark sequence, and sequentially navigating according to the robot navigation target point sequence.
The method comprises the following steps of generating a robot navigation target point sequence according to a navigation landmark sequence, and specifically comprises the following steps:
searching the landmarks in the navigation landmark sequence in the landmark table to obtain corresponding coordinate values, and taking the coordinate values as the coordinates of the navigation target point;
generating the navigation target point orientation as a quaternion:

θ = arctan2(y_2 - y_1, x_2 - x_1),

q_x = 0,
q_y = 0,
q_z = sin(θ/2),
q_w = cos(θ/2),

where (x_1, y_1) and (x_2, y_2) are the coordinates of the current target point and the next target point respectively, and q_x, q_y, q_z and q_w together form the quaternion representing the orientation of the target point.
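For a planar robot the orientation has only a yaw component, so the target-point quaternion reduces to the standard yaw-to-quaternion conversion; the sketch below assumes exactly that (the function name and tuple layout are illustrative).

```python
import math

def target_orientation(current, nxt):
    # Yaw angle from the current target point toward the next one.
    theta = math.atan2(nxt[1] - current[1], nxt[0] - current[0])
    # Planar motion: roll = pitch = 0, so q_x = q_y = 0.
    return (0.0, 0.0, math.sin(theta / 2), math.cos(theta / 2))

print(target_orientation((0.0, 0.0), (1.0, 0.0)))  # facing +x: (0.0, 0.0, 0.0, 1.0)
```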
Wherein, navigate in proper order according to the robot navigation target point sequence, include:
s61, setting a first target point of the navigation target point sequence as a current target point;
s62, using the A* algorithm as the global path planner to quickly plan a global path from the current position to the target point in the current environment, and using the dynamic window approach as the local path planner to guide the robot around local obstacles, so that the robot navigates from its current position to the current target point;
s63, judging whether the robot has reached the current target point; if so, executing the next step; otherwise, repeating steps S62-S63;
s64, judging whether the arrived target point is the last target point of the navigation target point sequence, if so, finishing the whole navigation process by the robot, and if not, executing the next step;
s65, setting the next target point of the navigation target point sequence as the current target point and returning to the step S62.
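The loop S61-S65 is a plain iteration over the target point sequence. In the sketch below, navigate_to is a hypothetical stand-in for the planner stack (A* global planner plus dynamic-window local planner); the stub in the example simply reports each goal reached on the second attempt.

```python
def navigate_sequence(targets, navigate_to):
    for goal in targets:              # S61/S65: take target points in order
        while not navigate_to(goal):  # S62-S63: keep navigating until reached
            pass
    return True                       # S64: last target reached, task complete

attempts = {}
def fake_navigate(goal):
    # Stub planner: reports the goal reached on the second call.
    attempts[goal] = attempts.get(goal, 0) + 1
    return attempts[goal] >= 2

print(navigate_sequence(["sofa", "door"], fake_navigate))  # True
```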
In the process of executing the step S6, if the robot receives a new instruction, the robot stops navigating, returns to the step S2 and executes the new instruction; if the robot does not receive a new command, the navigation task ends.
The invention also discloses a mobile robot autonomous navigation system facing the human-computer natural interaction, which comprises a mapping module, an instruction landmark and time sequence word sequence acquisition module, a pairing module, an instruction time sequence word sequence updating module, a navigation landmark sequence acquisition module and a navigation module.
The mapping module is used for controlling the robot to move in the unknown environment and establishing a map for the current unknown environment. The instruction landmark and time sequence word sequence acquisition module is used for acquiring a natural language instruction, extracting information of landmark points and time sequence words from the natural language instruction and acquiring an instruction landmark sequence and an instruction time sequence word sequence. The matching module is used for matching the landmarks in the instruction landmark sequence with the time sequence words in the instruction time sequence word sequence so that each landmark point is matched with the time sequence words to obtain the matching relationship between the landmarks and the time sequence words. And the instruction time sequence word sequence updating module is used for calculating the weight of each time sequence word in the instruction time sequence word sequence and reordering the time sequence words according to the weight to obtain a new instruction time sequence word sequence. The navigation landmark sequence acquisition module is used for reordering the landmarks in the instruction landmark sequence based on the pairing relationship between the landmarks and the time sequence words by taking the new instruction time sequence word sequence as a reference, updating the instruction landmark sequence and acquiring the navigation landmark sequence. And the navigation module generates a robot navigation target point sequence according to the navigation landmark sequence and sequentially navigates according to the robot navigation target point sequence.
The technical solution of the present invention is further described below with reference to specific examples.
In order to achieve the object of the present invention, as shown in fig. 1, in one embodiment of the present invention, there is provided a mobile robot autonomous navigation method facing human-computer natural interaction, including the steps of:
and S1, initializing. Placing the robot at a location in an unknown indoor environment;
as shown in fig. 2, fig. 2 is a plan view of an indoor environment.
And S2, establishing a diagram. Controlling the robot to walk in an unknown environment by using a handle, and establishing a map for the current unknown environment by using a mapping function package and a laser sensor carried by the robot;
and S3, information extraction. Extracting two types of key information, namely landmark points and time sequence words from the natural language instruction;
and S4, pairing the landmark chronologies. Matching the landmarks extracted in the step S3 with the time sequence words one by one, and ensuring that each landmark point has the time sequence word matched with the landmark point;
and S5, calculating the weight of the time sequence word. Calculating a weight for each time sequence word in the sequence of instruction time sequence words;
and S6, ordering the command landmark sequence. And reordering the time sequence words in the instruction time sequence word sequence according to the weights of the time sequence words from small to large. Then, the landmarks in the instruction landmark sequence are reordered according to the pairing relationship between the landmarks and the time sequence words established in the step S4 by taking the new instruction time sequence word sequence as the standard, so that a correct landmark sequence is obtained and is called as a navigation landmark sequence;
as shown in fig. 3, fig. 3 illustrates the process from information extraction to instruction landmark sequence sorting, where the sequence outlined by the red dashed line is the correct landmark sequence, i.e. the navigation landmark sequence.
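Given the weighted pairs, the ordering of step S6 is an ordinary sort by ascending weight; the sketch below uses hypothetical pairs and weight values for illustration.

```python
def reorder_landmarks(pairs, weights):
    # Sort landmark/time-sequence-word pairs by ascending weight and
    # return the landmark names in the resulting order.
    order = sorted(range(len(pairs)), key=lambda i: weights[i])
    return [pairs[i][0] for i in order]

pairs = [("door", "finally"), ("sofa", "first"), ("table", "then")]
print(reorder_landmarks(pairs, [3.1, 1.1, 1.12]))  # ['sofa', 'table', 'door']
```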
And S7, generating a navigation target point sequence. Generating a robot navigation target point sequence from the navigation landmark sequence obtained in the step S6;
and S8, navigation. Navigating in sequence according to the navigation target point sequence generated in the step S7;
as shown in fig. 4, the red mark is a landmark point in the environment, and the blue solid line is the navigation path of the robot.
In the above technical solution, the step S2 of creating the map includes the following steps:
and S21, establishing a grid map. The robot is controlled to move in an unknown environment by using a handle, and a grid map is established for the current unknown environment through a map establishing function packet and a laser sensor carried by the robot;
And S22, recording landmark points. In the process of controlling the robot to build the map with the handle in step S21, when the robot reaches a landmark point such as a dining table or a sofa, record the robot's coordinate position at that moment and the corresponding landmark name;
and S23, marking landmark points. After the grid map of the entire environment is built in step S21, the coordinate positions recorded in step S22 are marked on the map to form prior information.
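The landmark-recording step (S22, S23) amounts to building a small name-to-coordinate table that is later marked on the grid map. A minimal sketch; the names and coordinates below are illustrative, not taken from the patent:

```python
# Hypothetical sketch of steps S22/S23: whenever the teleoperated robot
# reaches a landmark, record its current map coordinate under the landmark
# name. The resulting table is the prior information marked on the map.

landmark_table = {}

def record_landmark(name, x, y):
    """Store the robot's current (x, y) map coordinate under a landmark name."""
    landmark_table[name] = (x, y)

record_landmark("dining table", 1.2, 3.4)
record_landmark("sofa", 5.0, 0.8)

print(landmark_table["sofa"])  # (5.0, 0.8)
```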
In the above technical solution, the information extraction in step S3 includes the following steps:
and S31, constructing a landmark table. Storing the landmark names in the environment in a landmark table, wherein the table contains the coordinate positions of the landmark points recorded in the step S22 besides the landmark names;
and S32, enriching the landmark table. Since different users refer to the same landmark in different ways, aliases are added for each landmark name; for example, for the landmark "toilet", aliases such as "bathroom" and "washroom" are added, so that the landmark point can still be recognized when a user refers to it by an alias. The landmark table is constructed separately for each environment and cannot be shared between different environments.
S33, constructing a time sequence word list. Common time sequence words such as "before", "pass" and "then" are stored in a time sequence word list, which also contains the base weight of each time sequence word. The time sequence word list is shared across different environments; part of it is shown in Table 1.
TABLE 1

| Time sequence word | Base weight |
| ------------------ | ----------- |
| next, after        | 0           |
| first, first of all | 1          |
| before             | 2           |
| finally            | 3           |
| pass, then         | none        |
And S34, instruction word segmentation. Performing word segmentation processing on the instruction through a Chinese word segmentation algorithm to obtain an instruction sequence;
and S35, extracting the landmark. Matching the instruction sequence obtained in the step S34 with the landmarks in the landmark table, extracting the landmarks in the instruction sequence, and calling the landmarks as instruction landmarks;
and S36, extracting time sequence words. And matching the instruction sequence obtained in the step S34 with the time sequence words in the time sequence word list, extracting the time sequence words in the instruction sequence, and calling the time sequence words as the instruction time sequence words.
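Steps S34 to S36 reduce to segmenting the instruction and filtering the resulting tokens through the two tables. A minimal sketch that assumes the segmentation has already been performed (in practice a Chinese word segmenter such as jieba would produce the token list); all tokens and table entries are illustrative:

```python
# Sketch of steps S34-S36: extract landmarks and time sequence words from a
# segmented instruction by matching tokens against the landmark table and
# the time sequence word list. Tokens and table contents are illustrative.

landmark_table = {"dining table": (1.2, 3.4), "microwave oven": (4.0, 1.1), "wardrobe": (0.5, 6.2)}
time_word_base_weights = {"first": 1, "before": 2, "finally": 3, "pass": None, "then": None}

def extract(tokens):
    """Return the instruction landmark sequence and the instruction
    time sequence word sequence, in instruction order."""
    landmarks = [t for t in tokens if t in landmark_table]
    time_words = [t for t in tokens if t in time_word_base_weights]
    return landmarks, time_words

tokens = ["first", "go", "to", "dining table", ",", "then", "pass", "microwave oven"]
landmarks, time_words = extract(tokens)
print(landmarks)   # ['dining table', 'microwave oven']
print(time_words)  # ['first', 'then', 'pass']
```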
In the above technical solution, the pairing of landmarks with time sequence words in step S4 mainly includes the following steps:
S41, judging that some instruction landmarks lack a matching instruction time sequence word;
S42, judging that the instruction time sequence words contain "before", and moving the instruction clause containing "before" to the end of the instruction;
S43, judging that the first instruction landmark, "dining table", does not lack a matching instruction time sequence word;
S44, judging that the instruction landmarks "microwave oven" and "wardrobe" lack matching instruction time sequence words, and copying "before" and "first" into the positions where the time sequence words are missing;
and S45, pairing the landmarks and the time sequence words in the instruction landmark sequence and the instruction time sequence word sequence one by one in order.
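The gap-filling rule of steps S43 and S44 (a landmark with no time sequence word inherits the preceding one, and a missing leading word becomes "first") can be sketched as follows. The input representation, a list of (time sequence word or None, landmark) pairs, is an assumption of this sketch, and the clause reordering of step S42 is omitted:

```python
def pair_landmarks(parsed):
    """parsed: list of (time_word or None, landmark) in instruction order.
    Fill each missing time sequence word: a missing leading word becomes
    "first"; any other gap copies the preceding time sequence word."""
    pairs = []
    prev = "first"  # default for a missing leading word (step S33 of claim 5)
    for time_word, landmark in parsed:
        time_word = time_word if time_word is not None else prev
        pairs.append((time_word, landmark))
        prev = time_word
    return pairs

print(pair_landmarks([("first", "dining table"), (None, "microwave oven"), ("before", "wardrobe")]))
# [('first', 'dining table'), ('first', 'microwave oven'), ('before', 'wardrobe')]
```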
In the above technical solution, the step S5 of calculating the time series word weight mainly includes the following steps:
S51, judging whether the time sequence word is of the "pass" type; if not, executing the next step; if so, the weight of the time sequence word is calculated as:

W_i = w_{i-1} - 0.01

where W_i is the weight of the current time sequence word and w_{i-1} is the weight of the time sequence word preceding it.

S52, judging whether the time sequence word is of the "then" type; if not, executing the next step; if so, the weight of the time sequence word is calculated as:

W_i = w_{i-1} + 0.02

S53, for time sequence words of neither the "pass" type nor the "then" type, the weight is calculated as:

W_i = w_i + 0.1 * i

where w_i is the base weight of the time sequence word and i is its position (counted from 0) in the instruction time sequence word sequence.
In the above technical solution, the step S7 of generating the navigation target point sequence mainly includes the following steps:
and S71, generating coordinates of the navigation target point. Searching the landmarks in the navigation landmark sequence in the landmark table to obtain corresponding coordinate values, and taking the coordinate values as the coordinates of the navigation target point;
and S72, generating the orientation of the navigation target point. In addition to the coordinate position, a navigation target point must also specify the pose, i.e. the orientation, of the robot at the target point. In the navigation framework used here, the target point orientation is given as a quaternion, calculated as follows:
theta = atan2(y2 - y1, x2 - x1);

qx = 0;

qy = 0;

qz = sin(theta / 2);

qw = cos(theta / 2)

where (x1, y1) and (x2, y2) are the coordinates of the current target point and the next target point, respectively, and qx, qy, qz and qw together form the quaternion representing the orientation of the target point.
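The formulas above describe a pure yaw rotation that faces the robot from the current target point toward the next one. A minimal sketch of step S72 under that reading:

```python
import math

def target_orientation(x1, y1, x2, y2):
    """Quaternion (qx, qy, qz, qw) orienting the robot at the current target
    (x1, y1) toward the next target (x2, y2): a yaw-only rotation with
    theta = atan2(y2 - y1, x2 - x1), qz = sin(theta/2), qw = cos(theta/2)."""
    theta = math.atan2(y2 - y1, x2 - x1)
    return 0.0, 0.0, math.sin(theta / 2), math.cos(theta / 2)

qx, qy, qz, qw = target_orientation(0.0, 0.0, 1.0, 1.0)  # next point at 45 degrees
print(round(math.degrees(2 * math.atan2(qz, qw)), 1))  # 45.0
```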
In the above technical solution, the path forming process in step S8 mainly includes the following steps:
S81, setting the first target point of the navigation target point sequence as the current target point;
S82, using the A* algorithm as the global path planner to rapidly plan a global path from the current position to the target point in the current environment, while using the dynamic window approach (DWA) as the local path planner to guide the robot through local obstacle avoidance, so that the robot navigates from the current position to the current target point;
S83, judging whether the robot has reached the current target point during the execution of step S82; if so, executing the next step; if not, executing steps S82-S83 in a loop;
S84, judging whether the reached target point is the last target point of the navigation target point sequence; if so, the robot has finished the whole navigation process; if not, executing the next step;
and S85, setting the next target point of the navigation target point sequence as the current target point and returning to step S82.
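Steps S81 to S85 form a simple sequential visit of the target points. In the sketch below, `go_to` is a hypothetical stand-in for the blocking planner call (a move_base-style A* global planner plus DWA local planner), not a real API:

```python
def navigate_sequence(targets, go_to):
    """Sequential navigation loop (steps S81-S85): visit each target point
    in order. go_to(point) is assumed to plan the path, handle local
    obstacle avoidance, and block until the point is reached."""
    for point in targets:   # S81/S85: advance through the target sequence
        go_to(point)        # S82/S83: plan, avoid obstacles, arrive
    # S84: last target reached -> the whole navigation process is finished

visited = []
navigate_sequence([(1, 2), (3, 4)], visited.append)
print(visited)  # [(1, 2), (3, 4)]
```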
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A mobile robot autonomous navigation method facing human-computer natural interaction is characterized by comprising the following steps:
s1, controlling the robot to move in the unknown environment, and establishing a map for the current unknown environment;
s2, acquiring a natural language instruction, extracting information of landmark points and time sequence words from the natural language instruction, and acquiring an instruction landmark sequence and an instruction time sequence word sequence;
s3, pairing the landmarks in the instruction landmark sequence and the time sequence words in the instruction time sequence word sequence to enable each landmark point to be paired with the time sequence words, and obtaining the pairing relation between the landmarks and the time sequence words;
s4, calculating the weight of each time sequence word in the instruction time sequence word sequence, and reordering the time sequence words according to the weight to obtain a new instruction time sequence word sequence;
s5, reordering the landmarks in the instruction landmark sequence based on the new instruction time sequence word sequence according to the pairing relationship between the landmarks and the time sequence words in S3, updating the instruction landmark sequence, and obtaining a navigation landmark sequence;
and S6, generating a robot navigation target point sequence according to the navigation landmark sequence, and sequentially navigating according to the robot navigation target point sequence.
2. The human-computer natural interaction-oriented mobile robot autonomous navigation method according to claim 1, wherein the S1 includes:
controlling the robot to walk in an unknown environment by using a handle, and establishing a map for the current position environment;
in the process of establishing the map, the landmark positions and landmark names of the robots at all landmarks are recorded, and the coordinate positions of the landmarks are marked on the map.
3. The autonomous navigation method for the mobile robot facing the human-computer natural interaction as claimed in claim 1, wherein in S1, a map is established for the current location environment through a mapping function package and a laser sensor carried by the robot itself.
4. The autonomous navigation method for a mobile robot based on natural human-computer interaction of claim 1, wherein in S2, extracting information of landmark points and time-series words from natural language commands to obtain command landmark sequences and command time-series word sequences, comprises:
storing landmark names and corresponding landmark positions in an unknown environment in a landmark table, and enriching the landmark table by adding aliases to the landmark names;
storing the common time sequence words and the basic weight thereof in a time sequence word list;
performing word segmentation processing on the instruction through a Chinese word segmentation algorithm to obtain an instruction sequence;
matching the instruction sequence with landmarks in a landmark table, and extracting the landmarks in the instruction sequence to obtain an instruction landmark sequence;
and matching the instruction sequence with the time sequence words in the time sequence word list, and extracting the time sequence words in the instruction sequence to obtain the instruction time sequence word sequence.
5. The human-computer natural interaction-oriented mobile robot autonomous navigation method according to claim 1, wherein the S3 includes:
S31, judging whether a landmark in the instruction landmark sequence lacks a matching time sequence word; if not, executing step S35; if so, executing the next step;
S32, judging whether the time sequence words contain "before" or "after"; if so, moving the instruction clause containing "before" or "after" to the end of the instruction; if not, executing the next step;
S33, judging whether the first landmark lacks a matching time sequence word; if so, adding the time sequence word "first" at the head of the time sequence word sequence; if not, executing the next step;
S34, judging whether a landmark lacks a matching time sequence word; if so, copying the time sequence word before the missing position and adding it to the missing position; if not, executing the next step; repeating this step until the judgment result is no longer yes;
S35, pairing the landmarks in the instruction landmark sequence with the time sequence words in the instruction time sequence word sequence one by one in order.
6. The autonomous navigation method for a mobile robot based on natural human-computer interaction of claim 1, wherein in S4, calculating the weight of each time-series word in the sequence of instruction time-series words includes:
S41, judging whether the time sequence word is of the "pass" type; if not, executing the next step; if so, the weight of the time sequence word is calculated as:

W_i = w_{i-1} - 0.01

where W_i is the weight of the current time sequence word and w_{i-1} is the weight of the time sequence word preceding it;

S42, judging whether the time sequence word is of the "then" type; if not, executing the next step; if so, the weight of the time sequence word is calculated as:

W_i = w_{i-1} + 0.02;

S43, for time sequence words of neither the "pass" type nor the "then" type, the weight is calculated as:

W_i = w_i + 0.1 * i,

where w_i is the base weight of the time sequence word and i is the position of the time sequence word in the instruction time sequence word sequence.
7. The autonomous navigation method for a mobile robot facing natural human-computer interaction according to claim 1, wherein in S6, generating a robot navigation target point sequence according to a navigation landmark sequence specifically includes:
searching the landmarks in the navigation landmark sequence in the landmark table to obtain corresponding coordinate values, and taking the coordinate values as the coordinates of the navigation target point;
generating navigation target point orientation in a quaternion manner:
theta = atan2(y2 - y1, x2 - x1),

qx = 0,

qy = 0,

qz = sin(theta / 2),

qw = cos(theta / 2),

where (x1, y1) and (x2, y2) are the coordinates of the current target point and the next target point, respectively, and qx, qy, qz and qw together form the quaternion representing the orientation of the target point.
8. The autonomous navigation method for the mobile robot facing the natural human-computer interaction of claim 1, wherein in S6, navigating sequentially according to the sequence of the robot navigation target points comprises:
S61, setting the first target point of the navigation target point sequence as the current target point;
S62, using the A* algorithm as the global path planner to rapidly plan a global path from the current position to the target point in the current environment, and using the dynamic window approach as the local path planner to guide the robot through local obstacle avoidance, so that the robot navigates from the current position to the current target point;
S63, judging whether the robot has reached the current target point; if so, executing the next step; otherwise, executing steps S62-S63 in a loop;
S64, judging whether the reached target point is the last target point of the navigation target point sequence; if so, the robot has finished the whole navigation process; if not, executing the next step;
S65, setting the next target point of the navigation target point sequence as the current target point and returning to step S62.
9. The autonomous navigation method of a mobile robot for natural human-computer interaction according to claim 1, wherein in the process of executing step S6, if the robot receives a new command, the robot stops navigating, returns to step S2 and executes the new command; if the robot does not receive a new command, the navigation task ends.
10. A mobile robot autonomous navigation system oriented to human-computer natural interaction, comprising:
the mapping module is used for controlling the robot to move in an unknown environment and establishing a map for the current unknown environment;
the system comprises an instruction landmark and time sequence word sequence acquisition module, a time sequence word sequence acquisition module and a time sequence word sequence acquisition module, wherein the instruction landmark and time sequence word sequence acquisition module is used for acquiring a natural language instruction, extracting information of landmark points and time sequence words from the natural language instruction and acquiring an instruction landmark sequence and an instruction time sequence word sequence;
the pairing module is used for pairing the landmarks in the instruction landmark sequence with the time sequence words in the instruction time sequence word sequence so that each landmark point is paired with the time sequence words to obtain the pairing relation between the landmarks and the time sequence words;
the instruction time sequence word sequence updating module is used for calculating the weight of each time sequence word in the instruction time sequence word sequence and reordering the time sequence words according to the weight to obtain a new instruction time sequence word sequence;
the navigation landmark sequence acquisition module is used for reordering landmarks in the instruction landmark sequence based on the pairing relationship between the landmarks and the time sequence words by taking the new instruction time sequence word sequence as a reference, updating the instruction landmark sequence and acquiring a navigation landmark sequence;
and the navigation module generates a robot navigation target point sequence according to the navigation landmark sequence and sequentially navigates according to the robot navigation target point sequence.
CN202110990651.XA 2021-08-26 2021-08-26 Mobile robot autonomous navigation method and system for man-machine natural interaction Active CN113485382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110990651.XA CN113485382B (en) 2021-08-26 2021-08-26 Mobile robot autonomous navigation method and system for man-machine natural interaction

Publications (2)

Publication Number Publication Date
CN113485382A true CN113485382A (en) 2021-10-08
CN113485382B CN113485382B (en) 2022-07-12

Family

ID=77946324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110990651.XA Active CN113485382B (en) 2021-08-26 2021-08-26 Mobile robot autonomous navigation method and system for man-machine natural interaction

Country Status (1)

Country Link
CN (1) CN113485382B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107437419A (en) * 2016-05-27 2017-12-05 广州零号软件科技有限公司 A kind of method, instruction set and the system of the movement of Voice command service robot
CN106767876A (en) * 2017-02-02 2017-05-31 王恒升 A kind of semantic understanding model of robot navigation's natural language instruction
CN106970614A (en) * 2017-03-10 2017-07-21 江苏物联网研究发展中心 The construction method of improved trellis topology semantic environment map
CN108731663A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Correspondence method for building up, device, medium and electronic equipment
CN108592936A (en) * 2018-04-13 2018-09-28 北京海风智能科技有限责任公司 A kind of service robot and its interactive voice air navigation aid based on ROS
CN110285813A (en) * 2019-07-01 2019-09-27 东南大学 A kind of man-machine co-melting navigation device of indoor mobile robot and method
CN110825829A (en) * 2019-10-16 2020-02-21 华南理工大学 Method for realizing autonomous navigation of robot based on natural language and semantic map
CN110928302A (en) * 2019-11-29 2020-03-27 华中科技大学 Man-machine cooperative natural language space navigation method and system
KR20210087903A (en) * 2020-12-22 2021-07-13 바이두 유에스에이 엘엘씨 Natural language based indoor autonomous navigation
EP3879371A2 (en) * 2020-12-22 2021-09-15 Baidu USA LLC Natural language based indoor autonomous navigation
CN112857370A (en) * 2021-01-07 2021-05-28 北京大学 Robot map-free navigation method based on time sequence information modeling
CN112883737A (en) * 2021-03-03 2021-06-01 山东大学 Robot language instruction analysis method and system based on Chinese named entity recognition

Also Published As

Publication number Publication date
CN113485382B (en) 2022-07-12

Similar Documents

Publication Publication Date Title
Hatori et al. Interactively picking real-world objects with unconstrained spoken language instructions
Huang et al. Code3: A system for end-to-end programming of mobile manipulator robots for novices and experts
Bisk et al. Natural language communication with robots
Chen et al. Learning to interpret natural language navigation instructions from observations
Gray et al. Craftassist: A framework for dialogue-enabled interactive agents
CN100390794C (en) Method for organizing command set of telecommunciation apparatus by navigation tree mode
US20090244071A1 (en) Synthetic image automatic generation system and method thereof
JP2009015388A (en) Electronic calculator and control program
Talbot et al. Robot navigation in unseen spaces using an abstract map
Bowen et al. Asymptotically optimal motion planning for learned tasks using time-dependent cost maps
MacGlashan et al. Training an agent to ground commands with reward and punishment
Barrett et al. Driving under the influence (of language)
CN113485382B (en) Mobile robot autonomous navigation method and system for man-machine natural interaction
US20050062740A1 (en) User interface method and apparatus, and computer program
CN108154238A (en) Moving method, device, storage medium and the electronic equipment of machine learning flow
CN112509392B (en) Robot behavior teaching method based on meta-learning
Dai et al. Think, act, and ask: Open-world interactive personalized robot navigation
Roesler et al. Action learning and grounding in simulated human–robot interactions
US20220314432A1 (en) Information processing system, information processing method, and nonvolatile storage medium capable of being read by computer that stores information processing program
CN116205294A (en) Knowledge base self-updating method and device for robot social contact and robot
Meriçli et al. An interactive approach for situated task teaching through verbal instructions
CN114511653A (en) Progress tracking with automatic symbol detection
Jernite et al. Craftassist instruction parsing: Semantic parsing for a minecraft assistant
Daniele et al. Natural language generation in the context of providing indoor route instructions
MacMahon Following natural language route instructions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant