CN112882481A - Mobile multi-modal interactive navigation robot system based on SLAM
- Publication number: CN112882481A (application CN202110462802.4A)
- Authority: CN (China)
- Legal status: Pending
Classifications
- G: Physics; G05: Controlling, regulating; G05D: Systems for controlling or regulating non-electric variables
- G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0221: involving a learning process
- G05D1/0223: involving speed control of the vehicle
- G05D1/0238: using optical position detecting means with obstacle or wall sensors
- G05D1/024: using obstacle or wall sensors in combination with a laser
- G05D1/0255: using acoustic signals, e.g. ultrasonic signals
- G05D1/0257: using a radar
- G05D1/0276: using signals provided by a source external to the vehicle
Description
Technical Field
The present invention relates to the technical field of tour-guide robots, and in particular to a SLAM-based mobile multi-modal interactive navigation robot system.
Background Art
At present, leisure agriculture and rural tourism in China are booming, and their contribution to rural revitalization keeps growing. Field work in Changshun County, Guizhou Province, carried out during poverty-alleviation efforts, showed that the local scenic spots are characterized by "few tour guides, large areas and long routes", and that the mismatch between surging visitor numbers and the small number of guides hinders the development of these scenic spots. Although guide robots already exist as substitutes for human guides, the back-end services that current guide robots rely on are expensive.
Summary of the Invention
The object of the present invention is to provide a SLAM-based mobile multi-modal interactive navigation robot system that reduces the cost of the robot.
To achieve the above object, the present invention provides the following solution:
A SLAM-based mobile multi-modal interactive navigation robot system, comprising:
a movement control module, comprising lidar-based SLAM, configured to control the movement of the robot and obtain the current positioning information;
a mobile navigation module, electrically connected to the movement control module, configured to receive the current positioning information and perform fixed-point cruise narration according to the current positioning information combined with a preset path plan;
an information display module, electrically connected to the movement control module, configured to display preset information and forward received motion control instructions to the movement control module;
a voice interaction module, which adopts a client/server (C/S) architecture, is communicatively connected to the information display module and the mobile navigation module respectively, and uses a deep-learning chat system to realize human-computer interaction, the human-computer interaction including the reception of motion control instructions.
Optionally, the movement control module comprises:
a mapping unit, configured to receive lidar data in real time and build a two-dimensional map of the surrounding environment from it;
a positioning unit, configured to determine the pose of the robot on the two-dimensional map from the lidar data and the robot odometry data;
a navigation unit, configured to perform global path planning according to the destination and the starting point of the robot and, while the robot moves along the globally planned path, to perform local real-time planning around detected obstacles.
Optionally, the local real-time planning is implemented with the dynamic window approach.
Optionally, the global path planning is implemented with Dijkstra's algorithm.
Optionally, the voice interaction module comprises a front end and a server side;
the front end is configured to capture sound signals, convert them into text, send the text to the server side and receive the reply returned by the server side, the reply being converted into speech by a speech synthesis SDK;
the server side is configured to receive the text, perform intent recognition and dialogue processing on it through the cooperation of a retrieval-based and a generative system, and send back the reply.
Optionally, the front end uses an iFLYTEK circular six-microphone array.
Optionally, the server side is a cloud server.
Optionally, the system further comprises ultrasonic sensors, configured to detect obstacle signals around the robot and send them to the movement control module.
Optionally, the mapping unit uses the GMapping algorithm to build the two-dimensional map of the surrounding environment from the lidar data.
Optionally, the positioning unit uses the adaptive Monte Carlo localization (AMCL) particle filter algorithm to determine the pose of the robot on the two-dimensional map from the lidar data and the robot odometry data.
According to specific embodiments provided herein, the present invention discloses the following technical effects:
The movement control module of the present invention comprises lidar-based SLAM, which controls the movement of the robot and realizes autonomous navigation; the mobile navigation module improves tour efficiency; and the voice interaction module, which adopts a C/S architecture, is communicatively connected to the information display module and the mobile navigation module respectively and uses a deep-learning chat system to realize human-computer interaction, which reduces the back-end cost of the guide robot.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of the SLAM-based mobile multi-modal interactive navigation robot system of the present invention;
FIG. 2 is a schematic flow chart of the two-dimensional map building of the present invention;
FIG. 3 is a schematic diagram of the particle-swarm convergence process in the AMCL particle filter localization algorithm of the present invention;
FIG. 4 is a schematic diagram of the working principle of the voice interaction module of the present invention;
FIG. 5 is a schematic diagram of the fixed-point narration function page of the mobile navigation module of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The object of the present invention is to provide a SLAM-based mobile multi-modal interactive navigation robot system that reduces the cost of the robot.
To make the above object, features and advantages of the present invention easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
FIG. 1 is a schematic structural diagram of the SLAM-based mobile multi-modal interactive navigation robot system of the present invention. As shown in FIG. 1, the system comprises:
a movement control module 101, comprising lidar-based SLAM, configured to control the movement of the robot through the mobile base 106 and, at the same time, to obtain the current positioning information.
The lidar scans the distances to the environment at a fixed frequency; the mapping, positioning and navigation nodes in the SLAM system all receive the lidar data.
The movement control module 101 comprises:
a mapping unit, configured to receive lidar data in real time and build a two-dimensional map of the surrounding environment from it;
a positioning unit, configured to determine the pose of the robot on the two-dimensional map from the lidar data and the robot odometry data;
a navigation unit, configured to perform global path planning according to the destination and the starting point of the robot and, while the robot moves along the globally planned path, to perform local real-time planning around detected obstacles.
The local real-time planning is implemented with the dynamic window approach.
The global path planning is implemented with Dijkstra's algorithm.
The robot system further comprises ultrasonic sensors, configured to detect obstacle signals around the robot and send them to the movement control module 101.
The mapping unit uses the GMapping algorithm to build the two-dimensional map of the surrounding environment from the lidar data.
The positioning unit uses the AMCL particle filter localization algorithm to determine the pose of the robot on the two-dimensional map from the lidar data and the robot odometry data.
The working principles of the movement control module 101 are as follows:
(1) Mapping. The mapping algorithm continuously receives lidar data while the robot moves and uses it to build a complete two-dimensional map of the surrounding environment. The algorithm used is GMapping, which is based on the Rao-Blackwellized particle filter (RBpf): localization and mapping are separated, localization being performed first and mapping afterwards. The particle-filter-based mapping process maintains many particles, each carrying a possible robot trajectory and its own copy of the map. The RBpf algorithm computes a probability for each particle; the quantity being estimated is the joint posterior

p(x_{1:t}, m | z_{1:t}, u_{1:t-1}),

a joint probability distribution whose purpose is to predict simultaneously the robot trajectory x_{1:t} (the sequence of poses from time 1 to time t) and the map m, given the observations z_{1:t} from time 1 to time t and the motion control data u_{1:t-1} from time 1 to time t-1. The joint probability can be converted into conditional probabilities, so the above expression is equivalent to

p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) = p(m | x_{1:t}, z_{1:t}) * p(x_{1:t} | z_{1:t}, u_{1:t-1}).

According to the factor p(m | x_{1:t}, z_{1:t}), the map carried by each particle depends on the position and attitude of the robot; the RBpf algorithm can therefore first estimate the robot trajectory for each particle and then, from that trajectory, estimate the probability that the map carried by the particle is correct. A common particle filter of this kind is the SIR (Sampling Importance Resampling) filter, which in SLAM mapping consists of the following four steps:
1) Particle initialization
The first step initializes the particle swarm: according to the prediction of the state-transition function, a large number of the above particles are generated and given initial weights. The algorithm subsequently updates these weights and uses the weighted particles to approximate the posterior probability.
2) Correction
The second step is correction. While the robot travels it collects the outputs of a series of sensors, i.e. observations, and the particle weights are computed from these observations. Suppose there are n particles; the weight of the i-th particle is denoted w^i and represents the probability obtained by this particle in the observation-correction process. Because map building is a Markov process, the current state depends only on the previous state, so the algorithm evaluates weights based on the previous state. The weight-update (state-transition) equation is

w_t^i = eta * p(z_t | x_{1:t}^i) * w_{t-1}^i,

where eta is a normalization constant. After the correction step every particle has its weight; here w_t^i denotes the weight of the i-th particle at time t, z_t the observation at time t, and x_{1:t}^i the trajectory of the robot predicted by the i-th particle up to time t.
3) Particle resampling
The third step is resampling: particles of low value are discarded and particles of higher value are added. Because the robot's motion in the real environment is continuous while the initial particle states are random, the weights of particles whose states do not follow this continuous distribution gradually decrease. The algorithm discards these low-weight particles in proportion to their weights and feeds newly sampled particles into the state-transition equation. The new particles are computed from the lidar data: after the robot moves, previously unmapped areas appear, and the particles should concentrate there, so the state-transition equation together with the lidar data determines where the next batch of particles is generated.
4) Map computation
Finally the algorithm aggregates the trajectory sampled by each particle and the sensor observations, computes the maximum-probability map estimate from them, and merges the newly computed map into the existing map.
The robot explores the surrounding scene while moving and keeps iterating these four steps. During map building one can judge, according to actual needs, whether the currently built map is complete; once it is, the current map can be saved and the algorithm stopped, the saved map being the final mapping result. The map building flow is shown in FIG. 2.
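The four SIR steps described above can be sketched, for illustration only, as a single filter iteration. The function names, the toy motion and likelihood models and the particle representation below are assumptions for the sketch, not the actual GMapping implementation:

```python
import random

def sir_step(particles, weights, control, observation, motion_model, likelihood):
    """One iteration of Sampling-Importance-Resampling.

    particles   : list of states (e.g. robot poses)
    weights     : list of normalized importance weights
    control     : motion input u_t (e.g. odometry)
    observation : sensor input z_t (e.g. a lidar scan)
    motion_model(state, control) -> new state      (state-transition sampling)
    likelihood(observation, state) -> p(z_t | x_t) (correction)
    """
    # Prediction: propagate every particle through the state-transition model.
    particles = [motion_model(p, control) for p in particles]

    # Correction: reweight each particle by the observation likelihood,
    # then normalize (the constant eta in the weight-update equation).
    weights = [w * likelihood(observation, p) for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # Resampling: low-weight particles are discarded, high-weight ones duplicated.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

In the real algorithm the per-particle state also carries the particle's map, and the map estimate is computed from the surviving trajectories after each round.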
(2) Positioning. Positioning determines the robot's current position in space and provides the basis for subsequent movement and path planning. From the lidar data and the robot's odometry data, this function outputs the pose of the robot on the map already built by the SLAM system. The adaptive Monte Carlo localization (AMCL) particle filter algorithm is used: given a finished map and the approximate position provided by the odometry, AMCL obtains the robot pose by particle filtering. Its flow is similar to that of the mapping algorithm and is likewise based on the SIR filter: a set of weighted random particles is initialized first, and this particle swarm is then used to approximate the posterior probability density of an arbitrary state. The overall flow consists of the following five steps:
1) Particle initialization
As in the mapping algorithm, the first step initializes the particle swarm. Each particle carries a robot pose, i.e. a position and an orientation, and its weight measures how well it matches the true pose of the robot. The particles start in different states but with equal weights.
2) State prediction
The pose of every particle is updated according to the motion of the robot in the real scene: when the robot moves in the positive x direction, all particles tend to move in the positive x direction. This step updates only the poses, not the weights.
3) Weight update
The weights of all particles are updated from the sensor data: on top of the pose update of the previous step, particles that match the true robot pose more closely are given higher weights.
4) Particle resampling
This step is similar to resampling in the mapping algorithm: the particles with the lowest weights are discarded and new particles are resampled from the high-weight ones, updating the swarm. After one round of resampling the particle swarm converges to some extent and usually reflects the true robot pose better.
5) Weighted average
The poses carried by the particles are combined into a weighted average; the result is the algorithm's estimate of the robot pose, i.e. the pose of the robot on the map in the real scene.
These steps are iterated continuously; once the particle swarm has converged within a given threshold, the localization of the robot can be considered sufficiently accurate. The convergence process of the particle swarm is shown in FIG. 3.
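The weighted-average step can be sketched as follows. The (x, y, theta) pose representation is an assumption for the sketch; note that the heading must be averaged through its sine/cosine components because angles wrap around at +/- pi:

```python
import math

def estimate_pose(particles, weights):
    """Weighted average of particle poses (x, y, theta).

    x and y are averaged directly; theta is averaged via its sin/cos
    components so that headings near +pi and -pi do not cancel to zero.
    """
    total = sum(weights)
    x = sum(w * p[0] for p, w in zip(particles, weights)) / total
    y = sum(w * p[1] for p, w in zip(particles, weights)) / total
    s = sum(w * math.sin(p[2]) for p, w in zip(particles, weights))
    c = sum(w * math.cos(p[2]) for p, w in zip(particles, weights))
    theta = math.atan2(s, c)  # circular mean of the headings
    return x, y, theta
```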
(3) Navigation. The ultimate purpose of the SLAM system is to give the robot the ability to navigate autonomously. Navigation is in most cases based on the map produced by the mapping module. The navigation algorithm first performs path planning, divided into global path planning and local real-time planning, and then uses a PID control algorithm to track the planned path using the positioning information.
Global path planning computes, with the classical Dijkstra shortest-path algorithm, the least-cost path for the robot from point A to point B. While moving, the robot inevitably encounters obstacles not marked on the map; to avoid them flexibly, the path must also be planned locally in real time.
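For illustration, a minimal Dijkstra search on a 4-connected occupancy grid is sketched below. The grid representation and unit edge costs are simplifying assumptions; a real planner runs on the costmap produced by the mapping module:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Least-cost path on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns the list of (row, col) cells from start to goal, or None if
    the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # unit cost per move
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None
    # Walk the predecessor chain back from the goal.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```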
Local real-time planning is implemented with the Dynamic Window Approach (DWA). The algorithm flow is as follows:
1) Velocity sampling
The algorithm first draws several samples from the robot's velocity space; each sample is a directed velocity with a magnitude and a direction.
2) Trajectory simulation
For each sample, the trajectory the robot would follow when driving at the corresponding directed velocity for a period of time is predicted.
3) Trajectory evaluation
The predicted trajectories are scored with an evaluation function, and the best trajectory is selected as the basis for driving the robot. The evaluation function G(v, w) is the normalized sum of the azimuth term heading(v, w) of the trajectory end with respect to the current goal, the shortest distance dist(v, w) between the trajectory and the obstacles, and the magnitude velocity(v, w) of the trajectory velocity:

G(v, w) = sigma(alpha * heading(v, w) + beta * dist(v, w) + gamma * velocity(v, w)),

where sigma, alpha, beta and gamma are preset parameters and v and w are the linear and angular velocity, respectively.
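The scoring step can be sketched as below. The candidate representation, the sum-normalization of each term over the candidate set, and treating the smoothing function sigma as the identity are simplifying assumptions for the sketch:

```python
def evaluate_trajectories(candidates, alpha, beta, gamma):
    """Pick the (v, w) sample maximizing the normalized weighted sum
    G(v, w) = alpha*heading + beta*dist + gamma*velocity.

    candidates: list of dicts with keys 'v', 'w', 'heading', 'dist', 'velocity',
    where the last three are the raw term values for that sampled trajectory.
    """
    def normalized(key):
        # Normalize each term over the candidate set so the three terms
        # are comparable before weighting.
        total = sum(c[key] for c in candidates) or 1.0
        return {id(c): c[key] / total for c in candidates}

    h, d, vel = normalized('heading'), normalized('dist'), normalized('velocity')

    def score(c):
        return alpha * h[id(c)] + beta * d[id(c)] + gamma * vel[id(c)]

    best = max(candidates, key=score)
    return best['v'], best['w']
```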
4) Loop: the above steps are repeated.
Based on these steps, once a target point is given, the algorithm determines the global path from the robot's current pose and the target pose and updates the local path in real time according to the obstacle information.
To detect unmarked obstacles in the environment, the system uses, in addition to the lidar, five ultrasonic sensors arranged in a pentagon to detect close-range obstacles, improving its obstacle-detection capability.
The mobile navigation module 102, electrically connected to the movement control module 101, is configured to receive the current positioning information and perform fixed-point cruise narration according to that information combined with the preset path plan.
Using the paths and waypoints preset in the system, the mobile navigation module 102 conducts a mobile tour. Before moving, a "movement prompt tone" is played through the voice interaction module 104 and the screen switches to the corresponding page. While moving, movement is controlled by the movement control module 101. On arriving at a waypoint, the robot pauses and the voice interaction module 104 plays the pre-stored audio clip. After playback, the screen automatically switches to the page of the next location, the "movement prompt tone" is played again, and the robot starts moving. The tour ends when the whole path has been covered. The function page of the fixed-point narration of the mobile navigation module 102 is shown in FIG. 5.
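The fixed-point cruise loop described above can be sketched as follows. The callback interfaces (move_to, play_audio, show_page) and the waypoint tuple layout are placeholders for the real module interfaces, not part of the invention:

```python
def run_tour(waypoints, move_to, play_audio, show_page):
    """Fixed-point cruise narration loop.

    waypoints : list of (pose, audio_clip, page_id) tuples, in tour order
    move_to(pose)      -> blocks until the robot reaches the pose
    play_audio(clip)   -> blocks until playback finishes
    show_page(page_id) -> switches the on-screen page
    """
    for pose, audio_clip, page_id in waypoints:
        play_audio("move_prompt")  # "movement prompt tone" before setting off
        show_page(page_id)         # switch the screen to this stop's page
        move_to(pose)              # movement control module drives the robot
        play_audio(audio_clip)     # narrate the pre-stored clip at the stop
```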
The information display module 103, electrically connected to the movement control module 101, is configured to display preset information and forward received motion control instructions to the movement control module 101.
The information display module 103 is developed on the Android Jetpack libraries. A single-page application is built with the Navigation component: multiple Fragments share a single Activity, and common UI components are abstracted out to reduce code coupling between parts of the project. According to the received tour request, the corresponding action is triggered and the matching Fragment is shown. The pages comprise a first-level "function selection" page and second-level function pages such as "scenic area overview", "featured products" and "visitor photos". The information shown on the pages is preloaded in the system database and can be adjusted for different scenes and needs. The database uses the Android Room framework for persistent data storage, i.e. the Room database 107, which eases database maintenance and version updates. The iFLYTEK SDK provides voice wake-up, speech synthesis and speech recognition, and data is exchanged with the voice interaction module 104 over HTTPS to realize human-computer interaction. The upper-level Android interface provided by ROS controls the robot's behavior and provides functions such as "calibrate positioning" and "mobile tour". If no operation request is received for a long time (3 minutes), a built-in Android component raises an interrupt and switches the page into a "standby state", in which the screen shows preloaded video resources; a user operation wakes the "function selection" page again.
The voice interaction module 104 adopts a client/server (C/S) architecture and is communicatively connected to both the information display module 103 and the mobile tour module 102. It combines task-oriented retrieval dialogue with a deep-learning chat system to realize human-computer interaction, avoiding the high cost of the manually staffed back ends widely used on the market. The human-computer interaction includes receiving motion control commands.
The voice interaction module 104 comprises a front end (the voice front-end acquisition module 1041) and a server side (the voice interaction server 1042).
The front end collects sound signals, converts them into text, sends the text to the server side, and receives the server's returned information, which is converted into a human voice by the speech synthesis SDK.
The server side receives the text, performs intent recognition and dialogue processing on it through the cooperation of a retrieval model and a generative model, and sends back the response.
The server side is a cloud server.
The system also includes an interactive display output 105, which presents the information from the information display module 103 and the mobile tour module 102 on a display screen.
As a specific embodiment, the front-end hardware uses an iFLYTEK circular six-microphone array to collect sound signals, applying the array's built-in front-end algorithms for echo cancellation, noise suppression, and other processing. Voice wake-up technology starts a dialogue; the iFLYTEK speech recognition SDK converts speech into text; the server-side dialogue interface is called through a RESTful API; and the returned information is converted into a human voice by the speech synthesis SDK and finally played through a power amplifier and speaker. The working principle of the voice interaction module 104 is shown in Figure 4.
The server side is a lightweight Flask application deployed with Docker and exposing a RESTful API. Intent recognition and dialogue processing are performed through the cooperation of the retrieval and generative models, and the relevant information is returned. In addition, the circular six-microphone array's built-in front-end algorithms perform sound-source localization, allowing the robot to turn and face the user during interaction; the robot also plays the corresponding interactive prompts for its other functions.
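The retrieval-first, generation-fallback dialogue logic described above can be sketched in a few lines of Python. In the actual system this function would sit behind a Flask RESTful endpoint; the FAQ entries and the fallback text here are illustrative assumptions, and the generative model is stubbed out:

```python
# Retrieval layer: curated question -> answer pairs (illustrative contents).
FAQ = {
    "opening hours": "The scenic area is open from 8:00 to 18:00.",
    "ticket price": "Admission is 60 yuan for adults and free for children.",
}

def generative_reply(text):
    # Placeholder for the deep-learning chat model; a real deployment would
    # invoke the trained dialogue model here instead of a canned fallback.
    return "Sorry, could you rephrase that?"

def dialogue(text):
    """Intent handling: try the retrieval table first, fall back to generation."""
    key = text.strip().lower()
    return FAQ.get(key) or generative_reply(key)
```

A known query is answered from the retrieval table; anything outside it falls through to the generative fallback, mirroring the retrieval/generation cooperation the text describes.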
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for identical or similar parts the embodiments may be cross-referenced.
Specific examples have been used herein to explain the principles and implementations of the present invention; the above descriptions are intended only to help in understanding the method and its core ideas. Meanwhile, those of ordinary skill in the art may, following the ideas of the present invention, vary the specific implementation and scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110462802.4A CN112882481A (en) | 2021-04-28 | 2021-04-28 | Mobile multi-mode interactive navigation robot system based on SLAM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112882481A true CN112882481A (en) | 2021-06-01 |
Family
ID=76040085
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112882481A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113370229A (en) * | 2021-06-08 | 2021-09-10 | 山东新一代信息产业技术研究院有限公司 | Exhibition hall intelligent explanation robot and implementation method |
Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101267441A (en) * | 2008-04-23 | 2008-09-17 | 北京航空航天大学 | A C/S and B/S mixed architecture pattern realization method and platform |
CN102480510A (en) * | 2010-11-30 | 2012-05-30 | 汉王科技股份有限公司 | Method and device for realizing C/S and B/S mixed architecture |
CN103914068A (en) * | 2013-01-07 | 2014-07-09 | 中国人民解放军第二炮兵工程大学 | Service robot autonomous navigation method based on raster maps |
CN105278532A (en) * | 2015-11-04 | 2016-01-27 | 中国科学技术大学 | Personalized autonomous explanation method of guidance by robot tour guide |
CN106182027A (en) * | 2016-08-02 | 2016-12-07 | 西南科技大学 | A kind of open service robot system |
CN106842230A (en) * | 2017-01-13 | 2017-06-13 | 深圳前海勇艺达机器人有限公司 | Mobile Robotics Navigation method and system |
CN107065863A (en) * | 2017-03-13 | 2017-08-18 | 山东大学 | A kind of guide to visitors based on face recognition technology explains robot and method |
CN107167141A (en) * | 2017-06-15 | 2017-09-15 | 同济大学 | Robot autonomous navigation system based on double line laser radars |
CN107168320A (en) * | 2017-06-05 | 2017-09-15 | 游尔(北京)机器人科技股份有限公司 | A kind of tourist guide service robot |
CN206541196U (en) * | 2017-03-13 | 2017-10-03 | 山东大学 | A kind of guide to visitors based on face recognition technology explains robot |
CN107421544A (en) * | 2017-08-10 | 2017-12-01 | 上海大学 | A kind of modular hotel's handling robot system |
CN206892921U (en) * | 2017-02-27 | 2018-01-16 | 江苏慧明智能科技有限公司 | Electronic pet with family endowment function |
CN108227706A (en) * | 2017-12-20 | 2018-06-29 | 北京理工华汇智能科技有限公司 | The method and device of dynamic disorder is hidden by robot |
CN108510048A (en) * | 2017-02-27 | 2018-09-07 | 江苏慧明智能科技有限公司 | Electronic pet with family endowment function |
CN108687783A (en) * | 2018-08-02 | 2018-10-23 | 合肥市徽马信息科技有限公司 | One kind is led the way explanation guide to visitors robot of formula museum |
CN108710647A (en) * | 2018-04-28 | 2018-10-26 | 苏宁易购集团股份有限公司 | A kind of data processing method and device for chat robots |
CN108733059A (en) * | 2018-06-05 | 2018-11-02 | 湖南荣乐科技有限公司 | A kind of guide method and robot |
CN108748213A (en) * | 2018-08-02 | 2018-11-06 | 合肥市徽马信息科技有限公司 | A kind of guide to visitors robot |
CN208497010U (en) * | 2018-06-05 | 2019-02-15 | 湖南荣乐科技有限公司 | Intelligent exhibition guiding machine device people |
CN109471440A (en) * | 2018-12-10 | 2019-03-15 | 北京猎户星空科技有限公司 | Robot control method, device, smart machine and storage medium |
CN208629445U (en) * | 2017-10-13 | 2019-03-22 | 刘杜 | Autonomous introduction system platform robot |
CN110044359A (en) * | 2019-04-30 | 2019-07-23 | 厦门大学 | A kind of guide to visitors robot path planning method, device, robot and storage medium |
CN110136711A (en) * | 2019-04-30 | 2019-08-16 | 厦门大学 | A voice interaction method based on cloud platform for tour robot |
CN110135551A (en) * | 2019-05-15 | 2019-08-16 | 西南交通大学 | A chatting method for robots based on word vectors and recurrent neural networks |
CN110659468A (en) * | 2019-08-21 | 2020-01-07 | 江苏大学 | File encryption and decryption system based on C/S architecture and speaker identification technology |
CN110750097A (en) * | 2019-10-17 | 2020-02-04 | 上海飒智智能科技有限公司 | Indoor robot navigation system and map building, positioning and moving method |
CN110986977A (en) * | 2019-11-21 | 2020-04-10 | 新石器慧通(北京)科技有限公司 | Movable unmanned carrier for navigation, navigation method and unmanned vehicle |
CN111090285A (en) * | 2019-12-24 | 2020-05-01 | 山东华尚电气有限公司 | Navigation robot control system and navigation information management method |
CN111210821A (en) * | 2020-02-07 | 2020-05-29 | 普强时代(珠海横琴)信息技术有限公司 | Intelligent voice recognition system based on internet application |
CN111259441A (en) * | 2020-01-14 | 2020-06-09 | Oppo广东移动通信有限公司 | Device control method, device, storage medium and electronic device |
CN111430044A (en) * | 2020-03-19 | 2020-07-17 | 郑州大学第一附属医院 | A kind of natural language processing system and method of nursing robot |
CN111488254A (en) * | 2019-01-25 | 2020-08-04 | 顺丰科技有限公司 | Deployment and monitoring device and method of machine learning model |
CN111611269A (en) * | 2020-05-23 | 2020-09-01 | 上海自古红蓝人工智能科技有限公司 | Artificial intelligence emotion accompanying and attending system in conversation and chat mode |
CN211517481U (en) * | 2019-12-30 | 2020-09-18 | 深圳市汉伟智能技术有限公司 | Guide robot |
US20210041246A1 (en) * | 2019-08-08 | 2021-02-11 | Ani Dave Kukreja | Method and system for intelligent and adaptive indoor navigation for users with single or multiple disabilities |
CN112364148A (en) * | 2020-12-08 | 2021-02-12 | 吉林大学 | Deep learning method-based generative chat robot |
CN112527972A (en) * | 2020-12-25 | 2021-03-19 | 东云睿连(武汉)计算技术有限公司 | Intelligent customer service chat robot implementation method and system based on deep learning |
Non-Patent Citations (7)
Title |
---|
JIANTAOCD: "Android Jetpack Architecture Components Best Practices", 《HTTPS://WWW.JIANSHU.COM/P/4AD7AA0FC356》 *
YU Lei et al.: "Design of a navigation system for a single-steering-wheel transport robot", Electronic Measurement Technology *
ZHANG Yu et al.: "Local path planning for an outdoor cleaning robot based on an improved dynamic window approach", Robot *
LI Tao et al.: "Design of an autonomous tour-guide interactive robot", Journal of Gansu Sciences *
WENG Xing: "Global path planning algorithms and experimental research for a wheeled intelligent vehicle", China Master's Theses Full-text Database, Information Science and Technology *
ZHAN Yuxian et al.: "Design of a Raspberry Pi-based smart home robot system", Computer Knowledge and Technology *
ZHAO Linshan et al.: "Design and implementation of a cloud-computing-based companion robot", Robot Technique and Application *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11241789B2 (en) | Data processing method for care-giving robot and apparatus | |
Steckel et al. | BatSLAM: Simultaneous localization and mapping using biomimetic sonar | |
CN104106267B (en) | Signal enhancing beam forming in augmented reality environment | |
JP6330200B2 (en) | SOUND SOURCE POSITION ESTIMATION DEVICE, MOBILE BODY, AND MOBILE BODY CONTROL METHOD | |
CN108673501A (en) | A kind of the target follower method and device of robot | |
CN107174418A (en) | A kind of intelligent wheel chair and its control method | |
CN109557920A (en) | A kind of self-navigation Jian Tu robot and control method | |
CN114895563B (en) | Novel intelligent cooperation distribution robot system based on reinforcement learning | |
JP2012208782A (en) | Move prediction apparatus, robot control device, move prediction program and move prediction method | |
CN110844402B (en) | An intelligent summoning trash can system | |
WO2022134680A1 (en) | Method and device for robot positioning, storage medium, and electronic device | |
CN107390175A (en) | A kind of auditory localization guider with the artificial carrier of machine | |
CN112882481A (en) | Mobile multi-mode interactive navigation robot system based on SLAM | |
WO2022009602A1 (en) | Information processing device, information processing method, and program | |
CN108646759A (en) | Intelligent dismountable moving robot system based on stereoscopic vision and control method | |
CN110434859B (en) | An intelligent service robot system for commercial office environment and its operation method | |
CN115164931B (en) | System, method and equipment for assisting blind person in going out | |
O'Reilly et al. | A novel development of acoustic SLAM | |
Chen et al. | Research on BatSLAM Algorithm for UAV Based on Audio Perceptual Hash Closed-Loop Detection | |
CN211484452U (en) | Self-moving cleaning robot | |
CN111289947B (en) | Information processing method, device and equipment | |
CN221968077U (en) | A campus navigation robot based on intelligent voice | |
Ahmed et al. | Assistive system for navigating complex realistic simulated world using reinforcement learning | |
CN117532633A (en) | Language interactive robot capable of serving user | |
Kulikov et al. | Using Neural Networks to Navigate Robots Among Obstacles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210601 |