CN109144237B - Multi-channel man-machine interactive navigation method for robot - Google Patents
- Publication number
- CN109144237B (application CN201710678165.8A)
- Authority
- CN
- China
- Prior art keywords
- area
- map
- obstacle
- closed
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
Abstract
The invention discloses a multi-channel man-machine interactive navigation method for a robot, which mainly comprises the following implementation process: starting the navigation function, loading the navigation grid map, modifying the grid map in real time, and then synchronously updating the robot navigation map to complete interactive navigation. By means of multi-channel touch control, voice control and eye-movement control, the invention better simulates the analysis and generation process of human interaction behaviors in human-human interaction, thereby obtaining more natural, vivid, convenient and effective interaction output and greatly improving the application experience of the robot. Navigation is flexible, and the user can autonomously select the areas the robot may pass through.
Description
Technical Field
The invention relates to the field of intelligent robots, in particular to a multi-channel man-machine interactive navigation method for a robot.
Background
With the continuous development of computer technology and the continuous progress of artificial intelligence, robots are applied more and more widely. Current robots interact with users through key operation, remote-control operation, single-channel voice, and body movements. Although these enrich the forms of interaction to a certain extent, complex application scenarios place higher demands on the diversity and accuracy of interaction.
A robot interaction method and a robot system are disclosed in the invention patent 201610179223.8. The method comprises the following steps: acquiring multi-mode external input information, wherein the external input information comprises character information, image information, sound information, robot self-checking information and induction information; analyzing the external input information to acquire interactive input information, interactive object characteristic information and interactive environment characteristic information; analyzing the interactive object characteristic information and the interactive environment characteristic information to obtain a matched interactive scenario limit; performing semantic analysis on the interaction input information to acquire an interaction intention of an interaction object; and under the interactive scene limitation, carrying out multi-modal interactive behavior output according to the interactive intention. Compared with the prior art, the method and the system can better simulate the analysis and generation process of human interaction behaviors in the human-human interaction process, thereby obtaining more natural and vivid interaction output and greatly improving the application experience of the robot.
The invention patent (application No. 201610078417.9) discloses a robot system and an interaction method and system, comprising: an acquisition module configured to acquire multi-modal external input information, comprising a voice acquisition unit, a visual sensor and a touch sensor; an interaction processing module configured to make decisions on and analyze the multi-modal external input information, including analyzing the external input of each modality separately and outputting multi-modal interaction output result information by synthesizing the analysis results of the modalities; and an interaction output module configured to control the robot to make a matched multi-modal interaction response according to the multi-modal interaction output result information. Compared with the prior art, this robot system and method can acquire multi-modal interactive information and produce interactive output, so that multi-modal interaction between the user and the robot is realized, the application range of the robot is expanded, and the user experience of the robot is greatly improved.
The invention patent 201410026255.5 provides a human-computer facial expression interactive system based on biological signals, which includes: the robot head comprises a head-wearing myoelectricity acquisition instrument and a robot head with sensory and expression functions; the head of the robot has two neck motion degrees of freedom, two lip motion degrees of freedom and two eye motion degrees of freedom, so that the motion of facial organs such as eyes and lips can be realized, various expressions are formed, and various interaction channels with operators are provided. The method for interactively acquiring the facial expression of the person based on the combination of the method based on the biological signals and the digital image processing overcomes the limitation that people must face a camera, the using environment must be illuminated and the like in the traditional expression interactive system, and greatly improves the using flexibility. Experiments prove that the method has feasibility, can overcome the limitation of the use environment which cannot be overcome by the traditional expression interaction system, and has good use flexibility.
The above prior art has the following disadvantages: the robot navigation map is inflexible to construct, and the robot's deep human-computer interaction takes only a single form. When the operating environment changes, the map cannot be updated in time. If an update is required, the whole map must be rebuilt; if it is not updated, the efficiency of the robot's path planning and navigation based on the original map is greatly reduced. In addition, if a change in the environment cannot be sensed by the robot, such as a pit or an open fire, it poses a potential safety hazard to the robot. Nor can the user autonomously select the robot's passage areas: if the user does not want the robot to pass through an area, or even wants it to stay away from an area, the robot may still move into that area, because the path planning algorithm does not know the area is off-limits.
Disclosure of Invention
The invention aims to solve the technical problem of providing a multi-channel man-machine interactive navigation method for a robot aiming at the defects of the prior art.
In order to solve the above technical problems, the technical scheme adopted by the invention is as follows: a robot multi-channel man-machine interactive navigation method, which mainly comprises the following implementation process: starting the navigation function, loading the navigation grid map, modifying the grid map in real time, and then synchronously updating the robot navigation map to complete interactive navigation.
The interactive navigation method is touch interactive navigation, which comprises the steps of adding an obstacle, deleting an obstacle and moving an obstacle; wherein,
the step of adding obstacles comprises:
1) after the terminal displays the grid map, selecting a plurality of points in the grid map area to be connected into a closed area;
2) adding an obstacle in the closed area; if the closed area needs to be reselected, choosing to abandon the addition and returning to step 1);
3) determining a two-dimensional coordinate point set (X0, Y0) of each point of the closed area;
4) verifying whether the point set is closed; if not, returning a corresponding prompt and reselecting the points. If the area enclosed by the two-dimensional coordinate point set is closed, verifying whether the closed area is already an obstacle area: if the area is entirely within an obstacle area, returning to step 1) and giving a corresponding prompt; if part or all of the closed area is not in an obstacle area, the verification is successful;
5) after the verification is successful, map updating and synchronization are carried out;
6) setting attribute values of all grids in the closed region in the grid map as an obstacle region, and after updating is finished, synchronizing the modified map to a robot navigation program and a display terminal;
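The closure check in step 4) and the grid update in step 6) above can be sketched as follows. This is a minimal illustration; the helper names and the attribute values (0 for passable, 100 for obstacle) are assumptions, not taken from the patent.

```python
# Assumed grid attribute values: 0 = passable, 100 = obstacle.
OBSTACLE, PASSABLE = 100, 0

def is_closed(points):
    """Treat a point set as closed when it has at least 3 vertices;
    the polygon is implicitly closed from the last point back to the first."""
    return len(points) >= 3

def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting containment test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def add_obstacle(grid, polygon):
    """Set every cell whose centre lies inside the polygon to OBSTACLE.
    Returns the number of cells changed; 0 means the selection was invalid."""
    if not is_closed(polygon):
        return 0
    changed = 0
    for gy, row in enumerate(grid):
        for gx, val in enumerate(row):
            if val != OBSTACLE and point_in_polygon(gx + 0.5, gy + 0.5, polygon):
                row[gx] = OBSTACLE
                changed += 1
    return changed

# Mark a 4x4 square of cells as an obstacle in an 8x8 grid.
grid = [[PASSABLE] * 8 for _ in range(8)]
n = add_obstacle(grid, [(1, 1), (5, 1), (5, 5), (1, 5)])
```

A real implementation would also run the obstacle-area overlap check of step 4) before writing any cells; here only the closure test gates the update.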
the step of removing the obstacle includes:
1) after the terminal displays the grid map, selecting a plurality of points in the grid map area to be connected into a closed area;
2) deleting the obstacle in the closed area; if the closed area needs to be reselected, choosing to abandon the deletion and returning to step 1);
3) determining a two-dimensional set of coordinate points (X0, Y0) for each point of the enclosed area.
4) Verifying whether the point set is closed; if not, returning a corresponding prompt and reselecting the points. If the area enclosed by the two-dimensional coordinate point set is closed, verifying whether the selected area is a passable area: if the closed area is entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the closed area is not in the passable area, the verification is successful;
5) after the verification is successful, map updating and synchronization are carried out;
6) setting attribute values of all grids in the closed region in the grid map as passable regions, and after updating, synchronizing the modified map to a robot navigation program and a display terminal;
the step of moving the obstacle includes:
1) after the terminal displays the grid map, selecting a plurality of points in the grid map area to be connected into a closed area;
2) moving the obstacle in the closed area; if the closed area needs to be reselected, choosing to abandon the move and returning to step 1);
3) determining a two-dimensional set of coordinate points (X0, Y0) for each point of the enclosed area.
4) Verifying whether the point set is closed; if not, returning a corresponding prompt and reselecting the points. If the area enclosed by the two-dimensional coordinate point set is closed, verifying whether the closed area is a passable area: if the closed area is entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the closed area is not in the passable area, the verification is successful;
5) dragging the closed area to a target position after the verification is successful;
6) setting all grid attribute values in the originally selected area of the grid map as a passable area, writing the original area's attribute values into the closed area at the target position, and, after updating, synchronizing the modified map to the robot navigation program and the display terminal;
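The move-obstacle update in step 6) above, restore the original cells to passable and then stamp the obstacle attribute at the target position, might be sketched as follows. Cell lists instead of polygons, and the attribute values, are illustrative assumptions.

```python
# Assumed grid attribute values: 0 = passable, 100 = obstacle.
OBSTACLE, PASSABLE = 100, 0

def move_obstacle(grid, src_cells, offset):
    """Translate the obstacle occupying src_cells by offset (dx, dy).
    Returns the destination cell list. Cells moved off-grid are dropped."""
    dx, dy = offset
    h, w = len(grid), len(grid[0])
    for x, y in src_cells:              # restore the originally selected area
        grid[y][x] = PASSABLE
    dst = []
    for x, y in src_cells:              # stamp the obstacle at the target position
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h:
            grid[ny][nx] = OBSTACLE
            dst.append((nx, ny))
    return dst

# Move a two-cell obstacle from (1,1)-(2,1) by (+3, +2).
grid = [[PASSABLE] * 6 for _ in range(6)]
grid[1][1] = grid[1][2] = OBSTACLE
dst = move_obstacle(grid, [(1, 1), (2, 1)], (3, 2))
```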
the interactive navigation method is voice interactive navigation, and the voice interactive navigation comprises the steps of adding obstacles and deleting the obstacles; wherein the content of the first and second substances,
the step of adding obstacles comprises:
1) entering a voice interaction mode, and displaying a regional split map of the grid map by the induction control terminal;
2) judging the semantics, and selecting the obstacle area to be added in the split map displayed by the terminal;
3) determining a set of boundary grid point two-dimensional coordinate points (X0, Y0) of the increased obstacle area.
4) Verifying whether the added obstacle area is already an obstacle area: if the area is entirely within an obstacle area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in an obstacle area, the verification is successful;
5) and after the verification is successful, updating and synchronizing the map.
6) Setting attribute values of all grids in the increased obstacle area in the area splitting map as an obstacle area, and after updating is finished, synchronizing the modified map to a robot navigation program and a display terminal;
the step of removing the obstacle includes:
1) entering a voice interaction mode, and displaying a regional split map of the grid map by the induction control terminal;
2) judging the semantics, and selecting the obstacle area to be deleted in the split map displayed by the terminal;
3) determining a boundary grid point two-dimensional coordinate point set (X0, Y0) of the deleted obstacle region;
4) verifying whether the obstacle area to be deleted is a passable area: if it is entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in the passable area, the verification is successful;
5) and after the verification is successful, updating and synchronizing the map.
6) Setting attribute values of all grids in the deleted obstacle area in the area splitting map as passable areas, and after updating, synchronizing the modified map to a robot navigation program and a display terminal;
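Step 2) "judging the semantics" in the voice steps above could, under strong simplifying assumptions, reduce to matching a recognized utterance against command patterns. The phrase templates below are hypothetical; a real system would sit behind a speech-recognition front end.

```python
import re

# Hypothetical command templates for utterances such as
# "add obstacle area A" / "delete obstacle area B".
COMMAND = re.compile(r"(add|delete) obstacle area (\w+)", re.IGNORECASE)

def parse_voice_command(text):
    """Return (operation, region) or None when the utterance is not understood."""
    m = COMMAND.search(text)
    if not m:
        return None
    return m.group(1).lower(), m.group(2).upper()
```

The returned pair would then select the corresponding region of the split map and the add/delete operation described in steps 2)-6).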
the interactive navigation method is eye movement interactive navigation, and the eye movement interactive navigation comprises the steps of adding obstacles and deleting the obstacles; wherein the content of the first and second substances,
the step of adding obstacles comprises:
1) entering an eye movement interaction mode, and displaying an area split map of the grid map by the induction control terminal;
2) judging the eye movement, and selecting the obstacle area to be added in the split map displayed by the terminal;
3) determining a set of boundary grid point two-dimensional coordinate points (X0, Y0) of the increased obstacle area.
4) Verifying whether the added obstacle area is already an obstacle area: if the area is entirely within an obstacle area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in an obstacle area, the verification is successful;
5) and after the verification is successful, updating and synchronizing the map.
6) Setting attribute values of all grids in the increased obstacle area in the area splitting map as an obstacle area, and after updating is finished, synchronizing the modified map to a robot navigation program and a display terminal;
the step of removing the obstacle includes:
1) entering an eye movement interaction mode, and displaying an area split map of the grid map by the induction control terminal;
2) judging the eye movement, and selecting the obstacle area to be deleted in the split map displayed by the terminal;
3) determining a boundary grid point two-dimensional coordinate point set (X0, Y0) of the deleted obstacle region;
4) verifying whether the obstacle area to be deleted is a passable area: if it is entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in the passable area, the verification is successful;
5) and after the verification is successful, updating and synchronizing the map.
6) Setting attribute values of all grids in the deleted obstacle area in the area splitting map as passable areas, and after updating, synchronizing the modified map to a robot navigation program and a display terminal;
Compared with the prior art, the invention has the beneficial effects that: by means of multi-channel touch control, voice control and eye-movement control, the invention better simulates the analysis and generation process of human interaction behaviors in human-human interaction, thereby obtaining more natural, vivid, convenient and effective interaction output and greatly improving the application experience of the robot; the navigation is flexible, and the user can autonomously select the areas the robot may pass through.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a touch interactive navigation flow diagram in accordance with the present invention;
FIG. 3 is a flow chart of the voice interactive navigation of the present invention;
FIG. 4 is a flow chart of the eye movement interactive navigation of the present invention.
Detailed Description
Generally, a map on which a robot path planning depends needs to be constructed offline in advance, and the most common map format is a grid map. The grid map stores map information in two-dimensional meshes, each of which stores different attribute values according to a passable area or a barrier area. Once the grid map is loaded and run by the robot, the path planning algorithm selects a path in a feasible grid. Since the grid map is built in advance, it is often not modifiable during operation. The interactive navigation can realize real-time map modification through touch interaction, voice interaction and eye movement interaction, the modified map is immediately synchronized to the autonomous navigation program of the robot, and the subsequent autonomous navigation path planning is based on the newly modified map navigation. As shown in particular in figure 1.
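A compact sketch of the grid map described above: a 2D array of attribute values, plus a synchronization hook that pushes each modification to both the robot navigation program and the display terminal. The class, the listener mechanism, and the attribute values are illustrative assumptions.

```python
class GridMap:
    """Two-dimensional grid map; each cell stores an attribute value."""
    PASSABLE, OBSTACLE = 0, 100     # assumed attribute values

    def __init__(self, width, height):
        self.cells = [[self.PASSABLE] * width for _ in range(height)]
        self.listeners = []         # e.g. navigation program, display terminal

    def set_region(self, cells, value):
        """Modify a set of cells and synchronize the change to all listeners."""
        for x, y in cells:
            self.cells[y][x] = value
        for notify in self.listeners:
            notify(cells, value)

# A recording listener stands in for the robot navigation program.
m = GridMap(4, 4)
updates = []
m.listeners.append(lambda cells, value: updates.append((tuple(cells), value)))
m.set_region([(0, 0), (1, 0)], GridMap.OBSTACLE)
```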
The touch interactive navigation process of the present invention is shown in fig. 2.
(a) And (3) increasing obstacles:
Selecting an area to be modified on the terminal: after the terminal (PC or mobile phone) displays the map, a number of points are selected in the map area by mouse (PC) or touch (mobile phone) and connected into a closed area (which may be a square, a circle, or a multi-point polygon).
Selecting a modification operation: choose 'add obstacle'; if the area needs to be reselected, choose to give up.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of each point of the closed area. These are coordinates in the map as displayed by the terminal; since the terminal screen scales, moves and rotates the original grid map, coordinate conversion is needed to obtain the corresponding points in the grid map.
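The inverse transform implied here, undoing the translation, rotation and scaling the terminal applied for display, might look like this. The parameterization of the screen transform (uniform scale, then rotation, then translation) is an assumption.

```python
import math

def screen_to_map(sx, sy, scale, tx, ty, angle_rad):
    """Invert the display transform: screen = R(angle) * (scale * map) + (tx, ty)."""
    # undo the translation
    dx, dy = sx - tx, sy - ty
    # undo the rotation (rotate by -angle)
    c, s = math.cos(-angle_rad), math.sin(-angle_rad)
    rx, ry = c * dx - s * dy, s * dx + c * dy
    # undo the scaling
    return rx / scale, ry / scale

# A screen point with a 2x zoom and a (10, 10) pan, no rotation.
mx, my = screen_to_map(40.0, 20.0, scale=2.0, tx=10.0, ty=10.0, angle_rad=0.0)
```

Each vertex of the selected closed area would be passed through this conversion before the validity checks below.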
Verifying the validity of the area selection: first verify whether the point set is closed; if not, return a corresponding prompt and reselect the points. If the area is closed, also verify whether the selected area is already an obstacle area: if the area is entirely within an obstacle area, return to step 1 and give a corresponding prompt; if part or all of the selected area is not in an obstacle area, the verification is successful.
Confirming the addition: after the verification succeeds, a confirmation dialog box pops up; selecting 'confirm' proceeds to map updating and synchronization; selecting 'cancel' cancels the operation and returns to step 1.
updating and synchronizing the map: the terminal sets attribute values of all grids in the selected area in the grid map as the obstacle area. And after the updating is finished, synchronizing the modified map to the robot navigation program and the display terminal.
(b) And (3) removing the obstacles:
Selecting an area to be modified on the terminal: after the terminal (PC or mobile phone) displays the map, a number of points are selected in the map area by mouse (PC) or touch (mobile phone) and connected into a closed area (which may be a square, a circle, or a multi-point polygon).
Selecting a modification operation: choose 'delete obstacle'; if the area needs to be reselected, choose to give up.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of each point of the closed area. These are coordinates in the map as displayed by the terminal; since the terminal screen scales, moves and rotates the original grid map, coordinate conversion is needed to obtain the corresponding points in the grid map.
Verifying the validity of the area selection: first verify whether the point set is closed; if not, return a corresponding prompt and reselect the points. If the area is closed, also verify whether the selected area is a passable area: if it is entirely within the passable area, return to step 1 and give a corresponding prompt; if part or all of the selected area is not in the passable area, the verification is successful.
Confirming the deletion: after the verification succeeds, a confirmation dialog box pops up; selecting 'confirm' proceeds to map updating and synchronization; selecting 'cancel' cancels the operation and returns to step 1.
updating and synchronizing the map: the terminal sets the attribute values of all grids in the selected area in the grid map as the passable area. And after the updating is finished, synchronizing the modified map to the robot navigation program and the display terminal.
(c) Moving the obstacle:
Selecting an area to be modified on the terminal: after the terminal (PC or mobile phone) displays the map, a number of points are selected in the map area by mouse (PC) or touch (mobile phone) and connected into a closed area (which may be a square, a circle, or a multi-point polygon).
Selecting a modification operation: choose 'move obstacle'; if the area needs to be reselected, choose to give up.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of each point of the closed area. These are coordinates in the map as displayed by the terminal; since the terminal screen scales, moves and rotates the original grid map, coordinate conversion is needed to obtain the corresponding points in the grid map.
Verifying the validity of the region selection: firstly, verifying whether the point set is closed, if not, returning a corresponding prompt and reselecting the point; and if the area is closed, entering the next step.
Dragging to the target position: after the verification succeeds, the user drags the selected area to the target position.
Confirming the move: after dragging is finished, a dialog box pops up; if the user selects 'abandon move', all operations are cancelled and the flow returns to step 1; if 'reselect target location' is selected, the user may drag again; if 'confirm' is selected, the flow enters the next step.
updating and synchronizing the map: and the terminal sets all the grid attribute values in the original selected area in the grid map as the passable area, and replaces the attribute value of the area corresponding to the target position with the original selected area. And after the updating is finished, synchronizing the modified map to the robot navigation program and the display terminal.
The voice interactive navigation process of the present invention is shown in fig. 3. Voice interaction means that interactive operation is realized through real-time voice perception (microphone recording) combined with voice semantic parsing technology (in the prior art, voice content is mainly judged through voiceprint recognition). After the user wakes up the induction control terminal with a specific word or sentence, for example 'enter voice interaction mode', the robot replies 'voice interaction mode open' and enters the voice interactive navigation mode.
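The wake-up exchange described above might be sketched as a tiny state machine. The wake phrase and reply text follow the examples in the paragraph, while the class itself is a hypothetical stand-in for the induction control terminal.

```python
class VoiceTerminal:
    """Minimal wake-word state machine for the induction control terminal."""
    WAKE_PHRASE = "enter voice interaction mode"

    def __init__(self):
        self.mode = "idle"

    def hear(self, utterance):
        """Return the robot's spoken reply, or None if nothing is triggered."""
        if self.mode == "idle" and utterance.strip().lower() == self.WAKE_PHRASE:
            self.mode = "voice"
            return "voice interaction mode open"
        return None

t = VoiceTerminal()
reply = t.hear("Enter voice interaction mode")
```

Once in voice mode, subsequent utterances would be routed to the semantic parsing and area-selection steps below.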
(a) And (3) increasing obstacles:
Selecting an area to be modified on the terminal: after the terminal (PC or mobile phone) displays the regional split map of the grid map, it recognizes a voice command perceived in real time, such as 'add obstacle area A'; the part of the split map corresponding to area A is then displayed on the terminal in a distinguishing manner, such as a distinct color.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the added obstacle area (area A). These are coordinates in the map as displayed by the terminal; coordinate conversion of the regional split map yields the corresponding points in the original grid map.
Verifying the validity of the area selection: verify whether the selected added obstacle area (area A) is already an obstacle area; if the area is entirely within an obstacle area, return to step 1 and give a corresponding prompt (e.g. 'adding area A is invalid'). If part or all of the selected area is not in an obstacle area, the verification is successful and a corresponding prompt is given (e.g. 'starting to add obstacle area A').
Updating and synchronizing the map: the terminal sets the attribute values of all grids of the selected increased obstacle area (area A) in the grid map as the obstacle area. And after the updating is finished, synchronizing the modified map to the robot navigation program and the display terminal.
(b) And (3) removing the obstacles:
Selecting an area to be modified on the terminal: after the terminal (PC or mobile phone) displays the regional split map of the grid map, it recognizes a voice command perceived in real time, such as 'delete obstacle area A'; the part of the split map corresponding to area A is then displayed on the terminal in a distinguishing manner, such as a distinct color.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the deleted obstacle area (area A). These are coordinates in the map as displayed by the terminal; coordinate conversion of the regional split map yields the corresponding points in the original grid map.
Verifying the validity of the area selection: verify whether the selected deleted obstacle area (area A) is a passable area; if the area is entirely within the passable area, return to step 1 and give a corresponding prompt (e.g. 'deleting area A is invalid'). If part or all of the selected area is not in the passable area, the verification is successful and a corresponding prompt is given (e.g. 'starting to delete obstacle area A').
Updating and synchronizing the map: the terminal sets the attribute values of all grids in the selected deletion obstacle area (area A) in the grid map as passable areas. And after the updating is finished, synchronizing the modified map to the robot navigation program and the display terminal.
The eye-movement interactive navigation process of the invention is shown in figure 4. Eye-movement interaction means that interactive operation is realized through real-time perception of eye-movement information (recorded by an eye tracker) combined with eye-tracking technology (in the prior art, eye movement is mainly tracked and judged by measuring the position of the eyes' point of gaze or the movement of the eyeballs relative to the head). After the user's eyes enter the sensing position and wake up the induction control terminal, the robot replies 'eye-movement interaction mode open' and enters the eye-movement interactive navigation mode.
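The eye-movement command vocabulary used below (one blink selects a region, two quick blinks confirm, an eye movement in any direction denies) could be mapped to commands as in this hypothetical sketch; the event encoding is an assumption.

```python
def interpret_eye_events(events):
    """Map a list of eye events ('blink' or 'saccade') to a command.

    Vocabulary assumed from the steps below:
      saccade anywhere -> deny; two blinks -> confirm; one blink -> select.
    """
    blinks = sum(1 for e in events if e == "blink")
    if any(e == "saccade" for e in events):
        return "deny"
    if blinks >= 2:
        return "confirm"
    if blinks == 1:
        return "select"
    return None
```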
(a) And (3) increasing obstacles:
Selecting an area to be modified on the terminal: after the terminal (PC or mobile phone) displays the regional split map of the grid map, it recognizes eye-movement commands perceived in real time. First, the user switches between the add and delete interfaces with a saccade of the point of gaze; next, the area selection frame is moved by eye movement onto the obstacle area to be added (area A); the user then blinks once, and the part of the split map corresponding to area A is displayed on the terminal in a distinguishing manner, such as a distinct color.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the added obstacle area (area A). These are coordinates in the map as displayed by the terminal; coordinate conversion of the regional split map yields the corresponding points in the original grid map.
Verifying the validity of the region selection: verify whether the selected obstacle-addition area (area A) is already an obstacle area. If the area lies entirely within an obstacle area, return to step 1 and give a corresponding prompt (e.g. 'adding area A is invalid'). If part or all of the selected area is not within an obstacle area, the verification succeeds and a corresponding confirmation query is given ('Are you sure you want to add obstacle area A?').
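The validity rule above — a selection is rejected only when it lies entirely inside the region type it would not change — can be sketched as follows (the occupancy values and helper names are assumptions):

```python
FREE, OBSTACLE = 0, 100  # assumed attribute values for passable / obstacle cells

def add_is_valid(grid, cells):
    """Adding obstacles to area A is valid unless A is already entirely obstacle."""
    return any(grid[r][c] != OBSTACLE for r, c in cells)

def delete_is_valid(grid, cells):
    """Deleting obstacles from area A is valid unless A is already entirely passable."""
    return any(grid[r][c] != FREE for r, c in cells)

grid = [[OBSTACLE, OBSTACLE],
        [OBSTACLE, FREE]]
```

A selection spanning both cell types passes either check, which matches the "part or all" wording of the verification step.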
Confirmation of the addition: the terminal recognizes the eye-movement command sensed in real time. If the user's eye movement outputs a confirmation instruction (two quick blinks), the addition is confirmed and a corresponding prompt is given ('starting to add obstacle area A'); if the user outputs a denial instruction (eye movement in any direction), the addition is denied, the process returns to step 1, and a corresponding prompt is given ('addition failed').
Map updating and synchronization: the terminal sets the attribute values of all grid cells in the selected obstacle-addition area (area A) of the grid map to obstacle. After the update is finished, the modified map is synchronized to the robot navigation program and the display terminal.
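The eye commands used in this flow (one blink to select, two quick blinks to confirm, a gaze shift in any direction to deny) could be dispatched roughly as below; the event representation and the 0.5 s double-blink window are assumptions, not values from the patent:

```python
def interpret(events, double_blink_window=0.5):
    """Classify a short sequence of (kind, timestamp) eye events into a command.

    Any saccade denies; two blinks within the window confirm; one blink selects.
    """
    if any(kind == "saccade" for kind, _ in events):
        return "deny"
    blinks = sorted(t for kind, t in events if kind == "blink")
    if len(blinks) >= 2 and blinks[1] - blinks[0] <= double_blink_window:
        return "confirm"
    if len(blinks) == 1:
        return "select"
    return "none"
```

Checking the deny gesture first mirrors the flow above, where any directional eye movement cancels regardless of accompanying blinks.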
(b) Deleting an obstacle:
Selecting the area to be modified on the terminal: after the terminal (PC or mobile phone) displays the region-split map of the grid map, it recognizes eye-movement commands sensed in real time. First, the user switches between the add and delete interfaces with a saccade of the fixation point; then the region-selection box is adjusted by eye movement onto the area where the obstacle is to be deleted (area A). When the user blinks once, the region-split map of the grid map splits out the part covering area A, which the terminal begins to display distinctively, e.g. in a separate color.
Coordinate conversion: the terminal records the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the obstacle-deletion area (area A). These coordinates belong to the map displayed on the terminal; the corresponding coordinates in the original grid map are obtained by applying a coordinate conversion to the region-split map.
Verifying the validity of the region selection: verify whether the selected obstacle-deletion area (area A) is already a passable area. If the area lies entirely within the passable area, return to step 1 and give a corresponding prompt (e.g. 'deleting area A is invalid'). If part or all of the selected area is not within the passable area, the verification succeeds and a corresponding confirmation query is given ('Are you sure you want to delete obstacle area A?').
Confirmation of the deletion: the terminal recognizes the eye-movement command sensed in real time. If the user's eye movement outputs a confirmation instruction (two quick blinks), the deletion is confirmed and a corresponding prompt is given ('starting to delete obstacle area A'); if the user outputs a denial instruction (eye movement in any direction), the deletion is denied, the process returns to step 1, and a corresponding prompt is given ('deletion failed').
Map updating and synchronization: the terminal sets the attribute values of all grid cells in the selected obstacle-deletion area (area A) of the grid map to passable. After the update is finished, the modified map is synchronized to the robot navigation program and the display terminal.
Robots typically have a variety of additional functions, such as manipulators, searchlights, and entertainment interactions. Operation of these application functions can likewise be realized through one or a combination of touch interaction, voice interaction, eye-movement interaction, and the like. The robot's multi-channel human-machine interaction modes can be used individually or in combination as needed, greatly improving the robot's interaction capability.
Claims (3)
1. A robot multi-channel man-machine interactive navigation method, characterized in that the method mainly comprises the following implementation process: starting the navigation function, loading a grid map for navigation, modifying the grid map in real time, and then synchronously updating the robot's navigation map to complete interactive navigation;
the interactive navigation method is touch interactive navigation, and the touch interactive navigation comprises adding an obstacle, deleting an obstacle, and moving an obstacle; wherein
the step of adding an obstacle comprises:
1) after the terminal displays the grid map, selecting a plurality of points in the grid map area and connecting them into a closed area;
2) adding an obstacle in the closed area; if the closed area needs to be reselected, choosing to abandon the addition and returning to step 1);
3) determining the two-dimensional coordinate point set (X0, Y0) of each point of the closed area;
4) verifying whether the point set is closed; if not, returning a corresponding prompt and reselecting the points; if the area enclosed by the two-dimensional coordinate point set is closed, verifying whether the closed area is already an obstacle area; if the area lies entirely within an obstacle area, returning to step 1) and giving a corresponding prompt; if part or all of the closed area is not in an obstacle area, the verification succeeds;
5) after the verification succeeds, performing map updating and synchronization;
6) setting the attribute values of all grids in the closed area in the grid map as an obstacle area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal;
the step of deleting an obstacle comprises:
1) after the terminal displays the grid map, selecting a plurality of points in the grid map area and connecting them into a closed area;
2) deleting the obstacle in the closed area; if the closed area needs to be reselected, choosing to abandon the deletion and returning to step 1);
3) determining the two-dimensional coordinate point set (X0, Y0) of each point of the closed area;
4) verifying whether the point set is closed; if not, returning a corresponding prompt and reselecting the points; if the area enclosed by the two-dimensional coordinate point set is closed, verifying whether the selected area is a passable area; if the closed area lies entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the closed area is not in the passable area, the verification succeeds;
5) after the verification succeeds, performing map updating and synchronization;
6) setting the attribute values of all grids in the closed area in the grid map as a passable area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal;
the step of moving an obstacle comprises:
1) after the terminal displays the grid map, selecting a plurality of points in the grid map area and connecting them into a closed area;
2) moving the obstacle in the closed area; if the closed area needs to be reselected, choosing to abandon the move and returning to step 1);
3) determining the two-dimensional coordinate point set (X0, Y0) of each point of the closed area;
4) verifying whether the point set is closed; if not, returning a corresponding prompt and reselecting the points; if the area enclosed by the two-dimensional coordinate point set is closed, verifying whether the closed area is a passable area; if the closed area lies entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the closed area is not in the passable area, the verification succeeds;
5) after the verification succeeds, dragging the closed area to the target position;
6) setting the attribute values of all grids in the originally selected area of the grid map as a passable area, replacing the attribute values of the closed area at the target position with those of the originally selected area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal.
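The closure verification and the move-then-replace update in the claim steps above can be sketched as follows (the helper names and the FREE/OBSTACLE cell encoding are illustrative assumptions):

```python
FREE, OBSTACLE = 0, 100  # assumed grid attribute values

def is_closed(points):
    """A selected point set encloses an area only if it has at least three
    distinct vertices and the polyline returns to its starting point."""
    return len(points) >= 4 and points[0] == points[-1]

def move_region(grid, cells, dr, dc):
    """Mark the originally selected cells passable, then write the obstacle
    into the cells translated by (dr, dc) -- the drag target."""
    updated = [row[:] for row in grid]
    for r, c in cells:
        updated[r][c] = FREE
    for r, c in cells:
        updated[r + dr][c + dc] = OBSTACLE
    return updated
```

Clearing the source cells before writing the target cells lets a region be moved onto a position that overlaps its original footprint.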
2. The robot multi-channel man-machine interactive navigation method according to claim 1, wherein the interactive navigation method is voice interactive navigation, and the voice interactive navigation comprises adding an obstacle and deleting an obstacle; wherein
the step of adding an obstacle comprises:
1) entering the voice interaction mode, and the sensing control terminal displaying the region-split map of the grid map;
2) interpreting the semantics, and selecting the obstacle-addition area in the region-split map;
3) determining the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the obstacle-addition area;
4) verifying whether the obstacle-addition area is already an obstacle area; if the area lies entirely within an obstacle area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in an obstacle area, the verification succeeds;
5) after the verification succeeds, performing map updating and synchronization;
6) setting the attribute values of all grids in the obstacle-addition area in the region-split map as an obstacle area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal;
the step of deleting an obstacle comprises:
1) entering the voice interaction mode, and the sensing control terminal displaying the region-split map of the grid map;
2) interpreting the semantics, and selecting the obstacle-deletion area in the region-split map;
3) determining the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the obstacle-deletion area;
4) verifying whether the obstacle-deletion area is a passable area; if the area lies entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in the passable area, the verification succeeds;
5) after the verification succeeds, performing map updating and synchronization;
6) setting the attribute values of all grids in the obstacle-deletion area in the region-split map as a passable area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal.
3. The robot multi-channel man-machine interactive navigation method according to claim 1, wherein the interactive navigation method is eye-movement interactive navigation, and the eye-movement interactive navigation comprises adding an obstacle and deleting an obstacle; wherein
the step of adding an obstacle comprises:
1) entering the eye-movement interaction mode, and the sensing control terminal displaying the region-split map of the grid map;
2) recognizing the eye movement, and selecting the obstacle-addition area in the region-split map;
3) determining the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the obstacle-addition area;
4) verifying whether the obstacle-addition area is already an obstacle area; if the area lies entirely within an obstacle area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in an obstacle area, the verification succeeds;
5) after the verification succeeds, performing map updating and synchronization;
6) setting the attribute values of all grids in the obstacle-addition area in the region-split map as an obstacle area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal;
the step of deleting an obstacle comprises:
1) entering the eye-movement interaction mode, and the sensing control terminal displaying the region-split map of the grid map;
2) recognizing the eye movement, and selecting the obstacle-deletion area in the region-split map;
3) determining the two-dimensional coordinate point set (X0, Y0) of the boundary grid points of the obstacle-deletion area;
4) verifying whether the obstacle-deletion area is a passable area; if the area lies entirely within the passable area, returning to step 1) and giving a corresponding prompt; if part or all of the area is not in the passable area, the verification succeeds;
5) after the verification succeeds, performing map updating and synchronization;
6) setting the attribute values of all grids in the obstacle-deletion area in the region-split map as a passable area, and after the update, synchronizing the modified map to the robot navigation program and the display terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710678165.8A CN109144237B (en) | 2017-08-10 | 2017-08-10 | Multi-channel man-machine interactive navigation method for robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109144237A CN109144237A (en) | 2019-01-04 |
CN109144237B true CN109144237B (en) | 2021-03-16 |
Family
ID=64803255
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710678165.8A Active CN109144237B (en) | 2017-08-10 | 2017-08-10 | Multi-channel man-machine interactive navigation method for robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109144237B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109839829A (en) * | 2019-01-18 | 2019-06-04 | 弗徕威智能机器人科技(上海)有限公司 | A kind of scene and expression two-way synchronization method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294054A (en) * | 2012-02-24 | 2013-09-11 | 联想(北京)有限公司 | Robot navigation method and system |
CN103914068A (en) * | 2013-01-07 | 2014-07-09 | 中国人民解放军第二炮兵工程大学 | Service robot autonomous navigation method based on raster maps |
CN104750232A (en) * | 2013-12-28 | 2015-07-01 | 华为技术有限公司 | Eye tracking method and eye tracking device |
CN105652876A (en) * | 2016-03-29 | 2016-06-08 | 北京工业大学 | Mobile robot indoor route planning method based on array map |
CN105955273A (en) * | 2016-05-25 | 2016-09-21 | 速感科技(北京)有限公司 | Indoor robot navigation system and method |
CN106779857A (en) * | 2016-12-23 | 2017-05-31 | 湖南晖龙股份有限公司 | A kind of purchase method of remote control robot |
CN106949893A (en) * | 2017-03-24 | 2017-07-14 | 华中科技大学 | The Indoor Robot air navigation aid and system of a kind of three-dimensional avoidance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||