CN112882481A - Mobile multi-mode interactive navigation robot system based on SLAM - Google Patents
Mobile multi-mode interactive navigation robot system based on SLAM
- Publication number
- CN112882481A (application CN202110462802.4A)
- Authority
- CN
- China
- Prior art keywords
- robot
- slam
- information
- navigation
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Acoustics & Sound (AREA)
- Optics & Photonics (AREA)
- Electromagnetism (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention relates to a SLAM-based mobile multi-modal interactive navigation robot system, comprising: a movement control module, which comprises a lidar-based SLAM subsystem and is used to control the movement of the robot and acquire the current positioning information; a mobile navigation module, electrically connected with the movement control module, which receives the current positioning information and performs fixed-point cruise explanation by combining it with a preset path plan; an information display module, electrically connected with the movement control module, which displays preset information and forwards received motion control instructions to the movement control module; and a voice interaction module, which adopts a client/server (C/S) architecture, is in communication connection with the information display module and the mobile navigation module respectively, and adopts a deep-learning chat system to realize human-machine interaction, the human-machine interaction including the reception of motion control instructions. The invention reduces the cost of the robot.
Description
Technical Field
The invention relates to the technical field of navigation robots, in particular to a mobile multi-mode interactive navigation robot system based on SLAM.
Background
At present, leisure agriculture and rural tourism are developing vigorously in China, and the capacity for rural revitalization keeps strengthening. Field investigation in Changshun County, Guizhou Province showed that, in the course of poverty alleviation, the local scenic spots feature few tour guides, a wide area, and long routes, and the contradiction between growing visitor numbers and scarce guides hinders the development of the scenic areas. Although tour-guide robots have begun to replace human guides, the manually operated backends these robots rely on remain costly.
Disclosure of Invention
The invention aims to provide a mobile multi-mode interactive navigation robot system based on SLAM, which reduces the cost of the robot.
In order to achieve the purpose, the invention provides the following scheme:
a SLAM-based mobile multi-modal interactive navigation robot system, comprising:
the mobile control module comprises a SLAM based on a laser radar and is used for controlling the movement of the robot and acquiring current positioning information;
the mobile navigation module is electrically connected with the mobile control module and used for receiving the current positioning information and performing fixed-point cruise explanation by combining preset path planning according to the current positioning information;
the information display module is electrically connected with the mobile control module and used for displaying preset information and sending the received motion control instruction to the mobile control module;
and the voice interaction module adopts a C/S (client/server) framework, is respectively in communication connection with the information display module and the mobile navigation module, adopts a deep learning chat system and is used for realizing man-machine interaction, and the man-machine interaction comprises the step of receiving a motion control instruction.
Optionally, the movement control module comprises:
the mapping unit is used for receiving data of the laser radar in real time and constructing a two-dimensional map of the surrounding environment according to the data of the laser radar;
the positioning unit is used for determining the pose of the robot on the two-dimensional map according to the data of the laser radar and the mileage data of the robot;
and the navigation unit is used for carrying out global path planning according to the destination and the departure place of the robot, and carrying out local real-time planning according to the detected obstacle when the robot moves according to the global path planning.
Optionally, the local real-time planning is implemented by the dynamic window approach.
Optionally, the global path planning is implemented by Dijkstra's algorithm.
Optionally, the voice interaction module includes a front end and a server;
the front end is used for collecting sound signals, converting the sound signals into text information, sending the text information to the server and receiving return information of the server, wherein the return information is converted into human voice based on a voice synthesis SDK;
and the server is used for receiving the character information, performing intention identification and dialogue processing on the character information in a mode of cooperation of the retrieval class and the generation class, and sending return information.
Optionally, the front end employs an iFLYTEK annular six-microphone array.
Optionally, the server is a cloud server.
Optionally, the robot further comprises an ultrasonic sensor for detecting an obstacle signal around the robot and sending the obstacle signal to the movement control module.
Optionally, the map building unit builds a two-dimensional map of the surrounding environment according to the data of the laser radar by using a GMapping algorithm.
Optionally, the positioning unit determines the pose of the robot on the two-dimensional map from the lidar data and the robot odometry data using the adaptive Monte Carlo localization (AMCL) particle filter algorithm.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the mobile control module comprises a SLAM based on a laser radar, and is used for controlling the movement of the robot, realizing the autonomous navigation of the robot, and improving the navigation efficiency through the mobile navigation module, the voice interaction module adopts a C/S framework, is respectively in communication connection with the information display module and the mobile navigation module, adopts a deep learning chat system, is used for realizing human-computer interaction, and reduces the background cost of the navigation robot.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a mobile multi-modal interactive navigation robot system based on SLAM according to the present invention;
Fig. 2 is a schematic diagram of the two-dimensional map construction process of the present invention;
Fig. 3 is a schematic diagram of the convergence process of the particle swarm in the adaptive Monte Carlo localization (AMCL) algorithm of the present invention;
Fig. 4 is a schematic diagram of the working principle of the voice interaction module of the present invention;
Fig. 5 is a schematic view of the fixed-point explanation function page of the mobile navigation module of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a mobile multi-mode interactive navigation robot system based on SLAM, which reduces the cost of the robot.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic structural diagram of the SLAM-based mobile multi-modal interactive navigation robot system. As shown in Fig. 1, the system includes:
The movement control module 101, which comprises a lidar-based SLAM subsystem, controls the movement of the robot via the mobile base 106 while acquiring the current positioning information.
The lidar scans distance information about the environment at a fixed frequency, and the mapping, localization, and navigation nodes in the SLAM system all receive the lidar data.
The movement control module 101 includes:
and the map building unit is used for receiving the data of the laser radar in real time and building a two-dimensional map of the surrounding environment according to the data of the laser radar.
And the positioning unit is used for determining the pose of the robot on the two-dimensional map according to the data of the laser radar and the mileage data of the robot.
And the navigation unit is used for carrying out global path planning according to the destination and the departure place of the robot, and carrying out local real-time planning according to the detected obstacle when the robot moves according to the global path planning.
The local real-time planning is implemented with the dynamic window approach.
The global path planning is implemented with Dijkstra's algorithm.
The robot system further includes an ultrasonic sensor for detecting an obstacle signal around the robot and transmitting the obstacle signal to the movement control module 101.
The map building unit uses the GMapping algorithm to construct a two-dimensional map of the surrounding environment from the lidar data.
The positioning unit uses the adaptive Monte Carlo localization (AMCL) particle filter algorithm to determine the pose of the robot on the two-dimensional map from the lidar data and the robot's odometry data.
The specific working principle involved in the motion control module 101 is as follows:
the method comprises the following steps of firstly, establishing a map, wherein the main work flow of a map establishing algorithm adopted in a map establishing unit is that the input of laser radar data is continuously received in the moving process of a robot, so that a two-dimensional map with complete surrounding environment is established. The mapping algorithm is GMapping, and GMapping is based on an RBpf particle filter algorithm, namely, the positioning and mapping process is separated, and positioning and mapping are carried out firstly. A plurality of particles are preset in the image building process based on particle filtering, each particle carries a possible pose of a robot and a current map, the RBpf algorithm needs to calculate the probability of each particle, and the formula for calculating the probability is。
Is a joint probability distribution with the aim of being based on the observation data z from time 1 to time t1:tAnd motion control data u from time 1 to time t-11:t-1To simultaneously predict the trajectory x of the robot1:t(a series of particles from time 1 to time t) and a map m. The joint probability can be converted into a conditional probability, so the above equation is equivalent to。
In other words, for each particle the RBPF algorithm first estimates the robot's trajectory, and then estimates from that trajectory the probability that the map carried by the particle is correct. A common particle filter of this kind is the SIR (Sampling Importance Resampling) filter, which, applied to SLAM mapping, is divided into the following four steps:
1) particle initialization
The first step of the algorithm is the initialization of the particle swarm: a large number of particles are generated according to the prediction of the state transition function and assigned initial weights. The algorithm subsequently updates these weights, and the posterior probability is approximated by the particle weights.
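For illustration, a minimal initialization sketch in Python; the array layout, map bounds, and uniform prior are our own assumptions, not details from the patent:

```python
import numpy as np

def init_particles(n, x_range, y_range, seed=0):
    """Initialize the particle swarm: each particle carries a pose
    (x, y, theta) drawn over the map bounds, and all weights start
    equal, so the prior over poses is uniform."""
    rng = np.random.default_rng(seed)
    particles = np.empty((n, 3))
    particles[:, 0] = rng.uniform(x_range[0], x_range[1], n)   # x position
    particles[:, 1] = rng.uniform(y_range[0], y_range[1], n)   # y position
    particles[:, 2] = rng.uniform(-np.pi, np.pi, n)            # heading theta
    weights = np.full(n, 1.0 / n)                              # equal weights
    return particles, weights
```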
2) Correction calculation
The second step is the correction. As the robot travels, it collects the outputs of a series of sensors, i.e., the observations, and the weight of each particle is computed from these observations. Suppose there are n particles, and let w_t^{(i)} denote the weight of the i-th particle, representing the probability that the particle acquires during the observation correction. Because robot mapping is a Markov process, in which the current state depends only on the previous state, the algorithm computes the weight from the previous state through the update

w_t^{(i)} = η · p(z_t | x_{1:t}^{(i)}) · w_{t-1}^{(i)},

where η is a normalizing constant. After the correction calculation each particle obtains its weight: w_t^{(i)} is the weight of the i-th particle at time t, z_t is the observation data at time t, and x_{1:t}^{(i)} is the trajectory of the robot predicted by the i-th particle up to time t.
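A sketch of this correction step in Python; `scan_likelihood` stands in for the lidar observation model p(z_t | x_t^{(i)}, m) and, like the other names, is a hypothetical placeholder:

```python
def update_weights(particles, weights, scan, grid_map, scan_likelihood):
    """Correction step: scale each particle's weight by the likelihood of
    the current lidar scan given that particle's pose and map, i.e.
    w_t ∝ p(z_t | x_t, m) · w_{t-1}; normalizing plays the role of eta.
    `weights` is a NumPy array as produced by init_particles above."""
    for i, pose in enumerate(particles):
        weights[i] *= scan_likelihood(scan, pose, grid_map)
    weights /= weights.sum()      # normalization constant eta
    return weights
```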
3) Particle resampling
The third step is resampling: particles of low value are eliminated and particles of higher value are added. Because the robot's motion in the real environment is continuous while the initial particle states are random, the weights of particles whose state distribution is inconsistent with that motion gradually decrease. The algorithm eliminates low-weight particles in proportion to their weights and feeds the newly sampled particles into the state transition equation. The newly sampled particles are computed from the lidar data: after the robot moves, unmapped regions appear, particles must be concentrated there, and the distribution information of the particles to be sampled next (i.e., the positions where new particles are generated) is computed from the lidar data through the state transition equation.
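The patent does not name a particular resampling scheme; the low-variance (systematic) resampler below is one common choice, sketched under that assumption:

```python
import numpy as np

def systematic_resample(particles, weights, seed=0):
    """Low-variance (systematic) resampling: draw n ancestors with
    probability proportional to weight, so low-weight particles tend
    to be dropped and high-weight particles duplicated."""
    rng = np.random.default_rng(seed)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n    # n evenly spaced pointers
    cumulative = np.cumsum(weights)
    ancestors = np.searchsorted(cumulative, positions)
    return particles[ancestors].copy(), np.full(n, 1.0 / n)
```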
4) Map computation
Finally, the algorithm tallies the sampled trajectory of each particle together with the sensor observations, computes from them the maximum-probability map estimate, and merges the newly computed map into the existing map.
As the robot moves it explores the surrounding scene, iterating the four steps continuously. During map construction, whether the current map is complete can be judged according to actual needs; once the map is complete, the current map can be saved and the algorithm interrupted, and the saved map is the final mapping result. The mapping process is shown in Fig. 2.
Step two: localization, i.e., determining the robot's current position in space, which provides the basis for subsequent movement and path planning. This function outputs the robot's pose on the map built by the SLAM system, from the lidar data and the robot's odometry data. The adaptive Monte Carlo localization (AMCL) particle filter algorithm is adopted: given an already-built map, AMCL obtains the robot's pose by particle filtering, starting from the approximate position provided by the odometer. The flow of AMCL is similar to that of the mapping algorithm and is likewise based on the SIR filter: a set of random weighted particles is first initialized, and the particle swarm is then used to approximate the posterior probability density of any state. The whole flow is divided into the following five steps:
1) particle initialization
As in the mapping algorithm, the first step initializes the particle swarm. Each particle in the swarm carries a set of robot pose information, i.e., position and orientation. For a single particle, the weight is used by the algorithm to measure how well the particle matches the true robot pose. The initial states of the particles differ, but their weights are identical.
2) State prediction
The pose of every particle in the swarm is updated according to the robot's motion in the real scene: if the robot moves in the positive x direction, all particles move in the positive x direction. In this step only the poses are updated, not the weights.
3) Weight value updating
This step updates the weights of all particles in the swarm according to the sensor information. Building on the pose update of the previous step, particles that match the true robot pose more closely are given higher weights.
4) Particle resampling
Resampling here follows the same principle as in the mapping algorithm: the lowest-weight particles are discarded and the high-weight particles are resampled to refresh the swarm. Each round of resampling makes the swarm converge somewhat, and after convergence it reflects the true robot pose more faithfully.
5) Weighted average
In this step the poses carried by all particles in the swarm are averaged, weighted by their weights; the result is the algorithm's estimate, namely the robot's pose on the map in the real scene.
These steps are iterated continuously, and the localization is considered accurate enough once the particle swarm converges to within a given threshold. The convergence process of the particle swarm is shown in Fig. 3.
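A sketch of the weighted-average estimate and a simple convergence test in Python; the circular mean for the heading and the spread threshold are our assumptions:

```python
import numpy as np

def estimate_pose(particles, weights):
    """Weighted average of the swarm: x/y use a weighted mean, and the
    heading uses a circular (vector) mean to handle angle wrap-around."""
    x = np.average(particles[:, 0], weights=weights)
    y = np.average(particles[:, 1], weights=weights)
    theta = np.arctan2(np.average(np.sin(particles[:, 2]), weights=weights),
                       np.average(np.cos(particles[:, 2]), weights=weights))
    return np.array([x, y, theta])

def converged(particles, weights, xy_std_threshold=0.15):
    """Treat localization as accurate once the weighted positional spread
    of the swarm falls below a threshold (the value is an assumption)."""
    mean = estimate_pose(particles, weights)[:2]
    var = np.average(np.sum((particles[:, :2] - mean) ** 2, axis=1),
                     weights=weights)
    return np.sqrt(var) < xy_std_threshold
```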
Step three: navigation. The ultimate purpose of the SLAM system is to give the robot the ability to navigate autonomously. In most cases navigation runs on the map built by the mapping unit: the navigation algorithm first performs path planning, divided into global path planning and local real-time planning, and then tracks the planned path with a PID control algorithm, based on the plan and the positioning information.
Global path planning uses the classical Dijkstra optimal-path algorithm to compute the least-cost path for the robot from point A to point B. While moving, the robot inevitably encounters obstacles not marked on the map, so local real-time path planning is needed to avoid them flexibly.
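For illustration, a minimal Dijkstra sketch on a 4-connected occupancy grid; the grid encoding and unit step cost are assumptions, not details from the patent:

```python
import heapq

def dijkstra(grid, start, goal):
    """Classical Dijkstra on a 4-connected occupancy grid.
    grid[r][c] == 0 means free, 1 means obstacle; start/goal are (row, col).
    Returns the cell path from start to goal, or [] if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                                  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0                          # unit cost per move
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal != start and goal not in prev:
        return []                                     # goal unreachable
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```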
Local real-time planning is realized by the Dynamic Window Approach (DWA); the algorithm flow is as follows:
1) speed sampling
The algorithm first draws multiple samples from the robot's velocity space; the result is a series of directed velocities, each with a magnitude and a direction.
2) Trajectory simulation
For each sample, the corresponding trajectory of the robot is predicted after the robot travels for a period of time at that directed velocity.
3) Trajectory evaluation
Each trajectory is scored with an evaluation function, and the optimal trajectory is selected as the basis for driving the robot. The evaluation function G(v, w) is the normalized sum of a heading term heading(v, w) (the alignment between the trajectory end point and the goal azimuth from the current state), the shortest distance dist(v, w) between the trajectory and the obstacles, and the trajectory speed velocity(v, w):

G(v, w) = σ(α · heading(v, w) + β · dist(v, w) + γ · velocity(v, w)),

where σ, α, β and γ are preset parameters and (v, w) are the linear and angular velocity, respectively. (A code sketch of the full sampling, simulation, and scoring loop follows after step 4.)
4) Loop and repeat the above steps
Based on these steps, once a target point is given, the algorithm determines a global path from the robot's current pose and the target pose, and updates the local path in real time according to the obstacle information.
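A compact Python sketch of the sampling, trajectory simulation, and scoring steps above; the window bounds, sample counts, constant-velocity rollout, collision margin, and weight values are simplified assumptions, and the normalization σ is folded into the weights:

```python
import numpy as np

def dwa_select(pose, goal, obstacles, v_window, w_window,
               alpha=0.8, beta=0.1, gamma=0.1, dt=0.1, horizon=3.0):
    """Sample (v, w) pairs from the dynamic window, roll each pair out with
    a constant-velocity model, and score the end state with
    G(v, w) = alpha*heading + beta*dist + gamma*velocity."""
    best_score, best_cmd = -np.inf, (0.0, 0.0)
    for v in np.linspace(v_window[0], v_window[1], 10):
        for w in np.linspace(w_window[0], w_window[1], 10):
            x, y, th = pose
            for _ in range(int(horizon / dt)):         # trajectory simulation
                th += w * dt
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
            dist = min((np.hypot(ox - x, oy - y) for ox, oy in obstacles),
                       default=np.inf)
            if dist < 0.2:                             # would pass too close
                continue
            err = np.arctan2(goal[1] - y, goal[0] - x) - th
            err = (err + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
            heading = np.pi - abs(err)                 # larger = better aimed
            score = alpha * heading + beta * min(dist, 2.0) + gamma * v
            if score > best_score:
                best_score, best_cmd = score, (v, w)
    return best_cmd                                    # (linear, angular) velocity
```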
In addition to the lidar, the system uses five ultrasonic sensors arranged in a pentagon to detect close-range obstacles that are not marked on the map, improving the obstacle recognition capability.
And the mobile navigation module 102 is electrically connected with the mobile control module 101 and is used for receiving the current positioning information and performing fixed-point cruise explanation by combining preset path planning according to the current positioning information.
The mobile navigation module 102 navigates along the path and point positions preset in the system. Before moving, the voice interaction module 104 plays a 'moving prompt tone' and the screen switches to a dedicated page. During movement, motion control is handled by the movement control module 101. When a preset point is reached, movement pauses and the voice interaction module 104 plays the pre-stored audio for that point. After playback ends, the screen automatically switches to the next point's page, the 'moving prompt tone' is played again, and movement resumes; the tour finishes once the whole path has been traversed. The fixed-point explanation page of the mobile navigation module 102 is shown in Fig. 5.
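Summarized as a waypoint loop, the cruise flow can look like the sketch below; `Waypoint` and the three helper callbacks are hypothetical placeholders, not interfaces from the patent:

```python
from collections import namedtuple

# One preset explanation point: target pose, screen page, stored narration.
Waypoint = namedtuple("Waypoint", "pose page_id narration_id")

def run_cruise(waypoints, navigate_to, play_audio, show_page):
    """Fixed-point cruise: announce the move, switch the screen page, drive
    to the point, then pause there and play the pre-stored narration."""
    for point in waypoints:
        play_audio("moving_prompt_tone")   # 'moving prompt tone' before moving
        show_page(point.page_id)           # screen switches to this point's page
        navigate_to(point.pose)            # blocks until the point is reached
        play_audio(point.narration_id)     # pre-stored audio for this point
    # the tour ends after the last preset point has been visited
```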
And the information display module 103 is electrically connected with the mobile control module 101, and is configured to display preset information and send the received motion control instruction to the mobile control module 101.
The information display module 103 is developed on the Android Jetpack library. A single-page application is built with the Navigation component: multiple Fragments share a single Activity, common UI components are abstracted out, and code coupling across the project is reduced. According to the received navigation request, the corresponding action is triggered and the display switches to the corresponding Fragment. The pages comprise a first-level function selection page and second-level pages for specific functions such as the scenic spot overview, local specialties, and visitor photos. The information shown on the pages is preset in the system database and can be adjusted for different scenes and requirements. The database uses the Android Room framework as its persistent storage layer, i.e., the Room database 107, which eases database maintenance and version updates. Voice wake-up, speech synthesis, and speech recognition are implemented with the iFLYTEK SDK, and data is exchanged with the voice interaction module 104 over the HTTPS protocol, realizing the robot's human-machine interaction. The behavior of the robot is controlled through the upper-layer Android interface provided by ROS, offering functions such as 'calibrated positioning' and 'mobile navigation'. If no operation request is received for a long time (3 minutes), a built-in Android component interrupts page switching and enters a standby state, in which a preset video resource is shown on the screen; when a user operates the screen, the function selection page is awakened again.
The voice interaction module 104 adopts a client/server (C/S) architecture and communicates with both the information display module 103 and the mobile navigation module 102. It combines task-oriented retrieval dialogue with a deep-learning chat system to realize human-machine interaction, avoiding the high cost of the manually staffed backends widely used on the market. The human-machine interaction includes receiving motion control instructions.
The voice interaction module 104 includes a front end (voice front end acquisition module 1041) and a server (voice interaction server 1042).
The front end collects the sound signals, converts them into text, sends the text to the server, and receives the server's return information, which is converted into speech by a speech synthesis SDK.
The server receives the text, performs intent recognition and dialogue processing on it through the cooperation of a retrieval module and a generation module, and sends back the return information.
The server side is a cloud server.
The system further comprises an interactive display output 105 for presenting the information of the information display module 103 and the mobile navigation module 102 on a display screen.
As a specific embodiment, the front-end hardware adopts an iFLYTEK annular six-microphone array to collect sound signals, and the array's built-in front-end algorithms perform processing such as echo cancellation and noise suppression. Voice wake-up starts a conversation; speech is converted into text by the iFLYTEK speech recognition SDK; the server-side dialogue interface is called through a RESTful API; and the returned reply is converted into speech by the speech synthesis SDK and finally played through a power amplifier and loudspeaker. The working principle of the voice interaction module 104 is shown in Fig. 4.
The server side is developed on the lightweight Flask framework and deployed with Docker, exposing a RESTful API. It performs intent recognition and dialogue processing in a retrieval-plus-generation manner and returns the relevant information. Meanwhile, the annular six-microphone array's built-in front-end algorithm performs sound source localization, and the robot is controlled to turn toward the user during interaction; corresponding interactive prompts are played for the other functions the robot executes.
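A minimal Flask sketch of such a dialogue endpoint, retrieval first with a generative fallback; the route name, payload shape, and both handler stubs are assumptions for illustration:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def retrieve_answer(text):
    """Hypothetical retrieval module: look the query up in a task-oriented
    Q&A base and return an answer string, or None on a miss."""
    knowledge_base = {"opening hours": "The scenic area opens at 8:00."}
    return next((a for q, a in knowledge_base.items() if q in text), None)

def generate_answer(text):
    """Hypothetical generative module: fall back to the deep-learning chat
    model when retrieval misses (stubbed here)."""
    return "Sorry, could you rephrase that?"

@app.route("/dialogue", methods=["POST"])
def dialogue():
    text = request.get_json(force=True).get("text", "")
    reply = retrieve_answer(text) or generate_answer(text)  # retrieval first
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```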
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (10)
1. A SLAM-based mobile multi-modal interactive navigation robot system, comprising:
the mobile control module comprises a SLAM based on a laser radar and is used for controlling the movement of the robot and acquiring current positioning information;
the mobile navigation module is electrically connected with the mobile control module and used for receiving the current positioning information and performing fixed-point cruise explanation by combining preset path planning according to the current positioning information;
the information display module is electrically connected with the mobile control module and used for displaying preset information and sending the received motion control instruction to the mobile control module;
and the voice interaction module adopts a C/S (client/server) framework, is respectively in communication connection with the information display module and the mobile navigation module, adopts a deep learning chat system and is used for realizing man-machine interaction, and the man-machine interaction comprises the step of receiving a motion control instruction.
2. The SLAM-based mobile multi-modal interaction navigation robot system of claim 1, wherein the movement control module comprises:
the mapping unit is used for receiving data of the laser radar in real time and constructing a two-dimensional map of the surrounding environment according to the data of the laser radar;
the positioning unit is used for determining the pose of the robot on the two-dimensional map according to the data of the laser radar and the mileage data of the robot;
and the navigation unit is used for carrying out global path planning according to the destination and the departure place of the robot, and carrying out local real-time planning according to the detected obstacle when the robot moves according to the global path planning.
3. The SLAM-based mobile multi-modal interactive navigation robot system of claim 2, wherein the local real-time planning is achieved by the dynamic window approach.
4. The SLAM-based mobile multi-modal interactive navigation robot system of claim 2, wherein the global path planning is achieved by Dijkstra's algorithm.
5. The SLAM-based mobile multi-modal interaction navigation robot system of claim 1, wherein the voice interaction module comprises a front end and a server end;
the front end is used for collecting sound signals, converting the sound signals into text information, sending the text information to the server and receiving return information from the server, wherein the return information is converted into speech based on a speech synthesis SDK;
and the server is used for receiving the text information, performing intent recognition and dialogue processing on the text information through the cooperation of a retrieval module and a generation module, and sending return information.
6. The SLAM-based mobile multi-modal interaction navigation robot system of claim 5, wherein the front end employs an iFLYTEK annular six-microphone array.
7. The SLAM-based mobile multi-modal interaction navigation robot system of claim 5, wherein the server is a cloud server.
8. The SLAM-based mobile multi-modal interaction navigation robot system of claim 1, further comprising an ultrasonic sensor for detecting an obstacle signal around the robot and transmitting the obstacle signal to the movement control module.
9. The SLAM-based mobile multi-modal interaction navigation robot system of claim 2, wherein the mapping unit employs the GMapping algorithm to construct a two-dimensional map of the surrounding environment from lidar data.
10. The SLAM-based mobile multi-modal interaction navigation robot system of claim 2, wherein the positioning unit determines the pose of the robot on the two-dimensional map from lidar data and robot odometry data using the adaptive Monte Carlo localization (AMCL) particle filter algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110462802.4A CN112882481A (en) | 2021-04-28 | 2021-04-28 | Mobile multi-mode interactive navigation robot system based on SLAM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110462802.4A CN112882481A (en) | 2021-04-28 | 2021-04-28 | Mobile multi-mode interactive navigation robot system based on SLAM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112882481A true CN112882481A (en) | 2021-06-01 |
Family
ID=76040085
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110462802.4A Pending CN112882481A (en) | 2021-04-28 | 2021-04-28 | Mobile multi-mode interactive navigation robot system based on SLAM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112882481A (en) |
Patent Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101267441A (en) * | 2008-04-23 | 2008-09-17 | 北京航空航天大学 | A realization method and platform for C/S and B/S mixed architecture mode |
CN102480510A (en) * | 2010-11-30 | 2012-05-30 | 汉王科技股份有限公司 | Realization method of C/S and B/S mixed architecture and apparatus thereof |
CN103914068A (en) * | 2013-01-07 | 2014-07-09 | 中国人民解放军第二炮兵工程大学 | Service robot autonomous navigation method based on raster maps |
CN105278532A (en) * | 2015-11-04 | 2016-01-27 | 中国科学技术大学 | Personalized autonomous explanation method of guidance by robot tour guide |
CN106182027A (en) * | 2016-08-02 | 2016-12-07 | 西南科技大学 | A kind of open service robot system |
CN106842230A (en) * | 2017-01-13 | 2017-06-13 | 深圳前海勇艺达机器人有限公司 | Mobile Robotics Navigation method and system |
CN206892921U (en) * | 2017-02-27 | 2018-01-16 | 江苏慧明智能科技有限公司 | Electronic pet with family endowment function |
CN108510048A (en) * | 2017-02-27 | 2018-09-07 | 江苏慧明智能科技有限公司 | Electronic pet with family endowment function |
CN107065863A (en) * | 2017-03-13 | 2017-08-18 | 山东大学 | A kind of guide to visitors based on face recognition technology explains robot and method |
CN206541196U (en) * | 2017-03-13 | 2017-10-03 | 山东大学 | A kind of guide to visitors based on face recognition technology explains robot |
CN107168320A (en) * | 2017-06-05 | 2017-09-15 | 游尔(北京)机器人科技股份有限公司 | A kind of tourist guide service robot |
CN107167141A (en) * | 2017-06-15 | 2017-09-15 | 同济大学 | Robot autonomous navigation system based on double line laser radars |
CN107421544A (en) * | 2017-08-10 | 2017-12-01 | 上海大学 | A kind of modular hotel's handling robot system |
CN208629445U (en) * | 2017-10-13 | 2019-03-22 | 刘杜 | Autonomous introduction system platform robot |
CN108227706A (en) * | 2017-12-20 | 2018-06-29 | 北京理工华汇智能科技有限公司 | The method and device of dynamic disorder is hidden by robot |
CN108710647A (en) * | 2018-04-28 | 2018-10-26 | 苏宁易购集团股份有限公司 | A kind of data processing method and device for chat robots |
CN208497010U (en) * | 2018-06-05 | 2019-02-15 | 湖南荣乐科技有限公司 | Intelligent exhibition guiding machine device people |
CN108733059A (en) * | 2018-06-05 | 2018-11-02 | 湖南荣乐科技有限公司 | A kind of guide method and robot |
CN108687783A (en) * | 2018-08-02 | 2018-10-23 | 合肥市徽马信息科技有限公司 | One kind is led the way explanation guide to visitors robot of formula museum |
CN108748213A (en) * | 2018-08-02 | 2018-11-06 | 合肥市徽马信息科技有限公司 | A kind of guide to visitors robot |
CN109471440A (en) * | 2018-12-10 | 2019-03-15 | 北京猎户星空科技有限公司 | Robot control method, device, smart machine and storage medium |
CN111488254A (en) * | 2019-01-25 | 2020-08-04 | 顺丰科技有限公司 | Deployment and monitoring device and method of machine learning model |
CN110044359A (en) * | 2019-04-30 | 2019-07-23 | 厦门大学 | A kind of guide to visitors robot path planning method, device, robot and storage medium |
CN110136711A (en) * | 2019-04-30 | 2019-08-16 | 厦门大学 | A kind of voice interactive method of the guide to visitors robot based on cloud platform |
CN110135551A (en) * | 2019-05-15 | 2019-08-16 | 西南交通大学 | A kind of robot chat method of word-based vector sum Recognition with Recurrent Neural Network |
US20210041246A1 (en) * | 2019-08-08 | 2021-02-11 | Ani Dave Kukreja | Method and system for intelligent and adaptive indoor navigation for users with single or multiple disabilities |
CN110659468A (en) * | 2019-08-21 | 2020-01-07 | 江苏大学 | File encryption and decryption system based on C/S architecture and speaker identification technology |
CN110750097A (en) * | 2019-10-17 | 2020-02-04 | 上海飒智智能科技有限公司 | Indoor robot navigation system and map building, positioning and moving method |
CN110986977A (en) * | 2019-11-21 | 2020-04-10 | 新石器慧通(北京)科技有限公司 | Movable unmanned carrier for navigation, navigation method and unmanned vehicle |
CN111090285A (en) * | 2019-12-24 | 2020-05-01 | 山东华尚电气有限公司 | Navigation robot control system and navigation information management method |
CN211517481U (en) * | 2019-12-30 | 2020-09-18 | 深圳市汉伟智能技术有限公司 | Guide robot |
CN111259441A (en) * | 2020-01-14 | 2020-06-09 | Oppo广东移动通信有限公司 | Device control method, device, storage medium and electronic device |
CN111210821A (en) * | 2020-02-07 | 2020-05-29 | 普强时代(珠海横琴)信息技术有限公司 | Intelligent voice recognition system based on internet application |
CN111430044A (en) * | 2020-03-19 | 2020-07-17 | 郑州大学第一附属医院 | Natural language processing system and method of nursing robot |
CN111611269A (en) * | 2020-05-23 | 2020-09-01 | 上海自古红蓝人工智能科技有限公司 | Artificial intelligence emotion accompanying and attending system in conversation and chat mode |
CN112364148A (en) * | 2020-12-08 | 2021-02-12 | 吉林大学 | Deep learning method-based generative chat robot |
CN112527972A (en) * | 2020-12-25 | 2021-03-19 | 东云睿连(武汉)计算技术有限公司 | Intelligent customer service chat robot implementation method and system based on deep learning |
Non-Patent Citations (7)
Title |
---|
JIANTAOCD: "Best Practices for Android Jetpack Architecture Components" (in Chinese), 《HTTPS://WWW.JIANSHU.COM/P/4AD7AA0FC356》 *
YU Lei et al.: "Design of a navigation system based on a single-steering-wheel transport robot" (in Chinese), 《Electronic Measurement Technology》 *
ZHANG Yu et al.: "Local path planning for an outdoor cleaning robot based on an improved dynamic window approach" (in Chinese), 《Robot》 *
LI Tao et al.: "Design of an autonomous tour-guide interactive robot" (in Chinese), 《Journal of Gansu Sciences》 *
WENG Xing: "Global path planning algorithms and experimental research for a wheeled intelligent vehicle" (in Chinese), 《China Masters' Theses Full-text Database, Information Science and Technology》 *
ZHAN Yuxian et al.: "Design of a Raspberry Pi-based smart home robot system" (in Chinese), 《Computer Knowledge and Technology》 *
ZHAO Linshan et al.: "Design and implementation of a cloud-computing-based companion robot" (in Chinese), 《Robot Technique and Application》 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113370229A (en) * | 2021-06-08 | 2021-09-10 | 山东新一代信息产业技术研究院有限公司 | Exhibition hall intelligent explanation robot and implementation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210601 |