CN108320051B - Mobile robot dynamic collision avoidance planning method based on GRU network model - Google Patents

Mobile robot dynamic collision avoidance planning method based on GRU network model Download PDF

Info

Publication number
CN108320051B
Authority
CN
China
Prior art keywords
output
data
mobile robot
input
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810044018.XA
Other languages
Chinese (zh)
Other versions
CN108320051A (en
Inventor
王宏健
刘大伟
秦佳宇
王莹
周佳加
袁建亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201810044018.XA priority Critical patent/CN108320051B/en
Publication of CN108320051A publication Critical patent/CN108320051A/en
Application granted granted Critical
Publication of CN108320051B publication Critical patent/CN108320051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047: Optimisation of routes or paths, e.g. travelling salesman problem
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Neurology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Game Theory and Decision Science (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Feedback Control In General (AREA)

Abstract



The invention discloses a dynamic collision avoidance planning method for a mobile robot based on a GRU network model, belonging to the field of mobile robot navigation. The invention is a collision avoidance algorithm based on a deep learning network: the sensor data are first normalized and then input into the GRU network model; the input layer transmits the data to the hidden layer, the GRU module units of the hidden layer process the data, and the processed data are output to the output layer to obtain the direction θ and the speed v of the mobile robot in the global coordinate system at the next moment. Under this algorithm the robot can achieve a highly intelligent dynamic planning level using simple sensing equipment and, on the premise of ensuring safety, the reaction speed of the mobile robot is better than that of traditional collision avoidance algorithms.


Description

Mobile robot dynamic collision avoidance planning method based on GRU network model
Technical Field
The invention relates to the field of mobile robot navigation, in particular to a dynamic collision avoidance planning method for a mobile robot based on a GRU network model.
Background
When a mobile robot works in an unknown environment, dynamic collision avoidance planning of a local path is carried out by sensing the surrounding environment with a laser range finder; the planning must guarantee the safety of the path while also meeting real-time requirements. At present there are many publications on autonomous collision avoidance of mobile robots; most collision avoidance methods use traditional algorithms or swarm-intelligence methods, which require building an environment model and iterating the algorithm, prolonging the computation time and harming real-time performance. In today's world of highly developed science and technology, deep learning is gradually entering people's lives, and the rise of driverless cars is a concrete application of deep learning networks to collision avoidance and path planning. Compared with traditional algorithms, a deep learning algorithm does not need to construct a model; after a large amount of training, the strong nonlinear fitting capability of a deep learning network can express a wide variety of input-output relationships. A trained network needs no algorithmic iteration: the corresponding output is obtained by forward propagation alone, so the real-time performance is good.
For the dynamic collision avoidance problem of the mobile robot, the planning result is related not only to the environment information at the current moment but also to the environment information and planning results at previous moments; the input information is thus related to the time sequence, and a suitable network needs to be selected to process such time-sequence-related input data.
Aiming at the problem of robot collision avoidance in an unknown environment, the invention designs a collision avoidance algorithm based on a deep learning network, and under the action of the algorithm, the robot can have a high intelligent dynamic planning level by using simple sensing equipment, so that the reaction speed of the mobile robot is superior to that of the traditional collision avoidance algorithm on the premise of ensuring safety.
Disclosure of Invention
The invention aims, for the robot collision avoidance problem in an unknown environment, to design a collision avoidance algorithm based on a deep learning network; under the action of the algorithm the robot can achieve a highly intelligent dynamic planning level using simple sensing equipment, and the reaction speed of the mobile robot is superior to that of traditional collision avoidance algorithms on the premise of ensuring safety.
A dynamic collision avoidance planning method for a mobile robot based on a GRU network model is characterized by comprising an input layer, a hidden layer and an output layer; the input original data is changed into 61-dimensional data after normalization and combination processing, and the 61-dimensional data sequentially enters an input layer, a hidden layer and an output layer.
The input layer receives 61-dimensional input data, where the first 60 dimensions are distance information detected by the laser range finder and the last dimension is the angle of the line connecting the mobile robot and the target point in the global coordinate system. The data are stored as row vectors, and the input layer and the hidden layer are fully connected.
The hidden layer has two layers in total: hidden layer 1 is composed of 40 GRU module units, hidden layer 2 contains 30 neurons, and the hidden layers are fully connected.
The output layer comprises 2 neurons which respectively correspond to the direction theta and the speed v of the mobile robot in the global coordinate system at the next planned moment; the hidden layer data output then enters the output layer in a fully connected manner through the activation function.
A dynamic collision avoidance planning method for a mobile robot based on a GRU network model is characterized by comprising the following implementation methods:
the method comprises the following steps: step one: establishing a coordinate system model, wherein the origin of the coordinate system is selected at the center of the laser range finder, the X axis points straight ahead of the laser range finder and the Y axis points to its left; the horizontal opening angle of the laser range finder is 180 degrees, the maximum detection radius is set to 8 m, and there are 181 beams in total with a beam angle of 1 degree;
step two: collecting and preprocessing sample data; in a simulated training field the mobile robot performs dynamic collision avoidance and path planning under the control of a teacher system while sample data are collected, each group of sample data having 184 dimensions; the data are then normalized, dimensions 1 to 181 of the collected data are merged, and the 182nd dimension is kept unchanged, so that the processed input data have 61 dimensions;
step three: inputting the 61-dimensional data into an input layer;
step four: the GRU module units in the hidden layer receive the input data from the input layer in chronological order; first x_{t-9} produces an output h_{t-9} after forward propagation, then h_{t-9} and x_{t-8} together form the input of the next unit, and so on; the data are passed on in this way, and the output of the module at the last moment is the output of the unit structure; the GRU output value is calculated as:
z_t = σ(x_t W_z + h_{t-1} U_z)
r_t = σ(x_t W_r + h_{t-1} U_r)
h̃_t = tanh(x_t W + (r_t ⊙ h_{t-1}) U)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t
where z_t and r_t respectively represent the outputs of the update gate and the reset gate at time t; x_t and h_t are the input and output vectors at time t; W_z, W_r and W are the weight matrices between the module input and the update gate, the reset gate and the candidate state h̃_t, respectively; U_z, U_r and U are the weight matrices between the module output at time t-1 and the update gate, the reset gate and the candidate state h̃_t, respectively;
step five: transmitting the calculation result of the hidden layer to the output layer as the direction θ and the speed v of the mobile robot in the global coordinate system at the next moment;
step six: training the GRU network model, the weights of the GRU network model being initialized with a standard normal distribution;
step seven: setting the training counters k = 0 and i = 0;
step eight: randomly extracting the data of 10 consecutive moments [x_{t-9}, …, x_t] from the training set, inputting them into the network, and obtaining the corresponding output y_t through forward propagation;
step nine: computing the error between the network output y_t and the corresponding label l_t;
step ten: updating the weights with a gradient descent algorithm;
step eleven: if i ≥ 500, setting i = 0, randomly selecting 50 sequences from the test set, testing with the current network and computing the mean squared error; if the mean squared error on the test set has not decreased for 10 consecutive evaluations, stopping training;
step twelve: k = k + 1, i = i + 1;
step thirteen: if k is larger than the maximum number of training iterations, jumping to step fourteen, otherwise jumping to step eight;
step fourteen: training ends, and the trained GRU network model is obtained.
The invention has the beneficial effects that:
After the network training is finished, the mobile robot uses the trained GRU network model to perform dynamic collision avoidance planning in an unknown environment, and the result is compared with the ant colony algorithm. Both algorithms enable the mobile robot to avoid all obstacles with a smooth route, showing that the algorithm can guarantee the safety of the robot. However, the reaction time of the mobile robot is shorter under the GRU network model and the real-time performance is higher, which demonstrates the efficiency of the algorithm.
Drawings
FIG. 1 is a diagram of a coordinate system of a mobile robot according to the present invention;
FIG. 2 is a schematic diagram of the angle θ and the angle of the robot-target line in the present invention;
FIG. 3 is a diagram of a GRU-based dynamic path planning model according to the present invention;
FIG. 4 is a time-series expanded view of a single GRU modular unit of the present invention;
fig. 5 is a diagram of the effect of GRU network and ant colony algorithm planning in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Constructing a coordinate system model: as shown in FIG. 1, the global coordinate system adopts a north-east coordinate system with the lower left corner of the map as the origin, the due-east direction as the X axis and the due-north direction as the Y axis; the origin of the local coordinate system is selected at the center of the laser range finder, with the X axis pointing straight ahead of the laser range finder and the Y axis pointing to its left. The horizontal opening angle of the laser range finder is 180 degrees, the maximum detection radius is 8 m, and there are 181 beams in total with a beam angle of 1 degree.
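As a worked illustration of the local coordinate system above, the following sketch converts a single laser return to local X-Y coordinates. The convention that beam 0 points 90 degrees to the robot's right (so beam i lies at (i - 90) degrees from the X axis) is an assumption for illustration; the text only fixes the 180-degree opening, the 181 beams and the 1-degree beam angle.

```python
import math

def beam_to_local_xy(beam_index, distance):
    """Convert one laser return to the range finder's local frame.

    Assumed convention: beam 0 points 90 degrees to the right of the
    forward-pointing X axis, beam 180 points 90 degrees to the left,
    so beam i lies at (i - 90) degrees from the X axis.
    """
    angle = math.radians(beam_index - 90)  # 0 rad = straight ahead (X axis)
    x = distance * math.cos(angle)         # forward component
    y = distance * math.sin(angle)         # leftward component (Y points left)
    return x, y

# Example: the central beam (index 90) hits an obstacle 8 m straight ahead.
x, y = beam_to_local_xy(90, 8.0)
```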
Collecting and processing sample data: before training, sample data are first collected and processed. The mobile robot performs dynamic collision avoidance and path planning under the control of a teacher system in a simulated training field, and sample data are collected at the same time. Each group of sample data has 184 dimensions: the first 181 dimensions are obtained by the laser range finder collecting surrounding environment information (the sensor returns 181 points representing distances); the 182nd dimension is the angle of the line connecting the mobile robot and the target point in the global coordinate system, as shown in FIG. 2; the last two dimensions are the robot's heading angle θ and velocity v. The first 182 dimensions are the input values of the network and the last two dimensions are the training labels. To make the training result more accurate, the data must be processed before entering the input layer: first normalization is performed, then dimensions 1 to 181 are merged, and the 182nd dimension is kept unchanged; after processing, the input data have 61 dimensions.
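A minimal sketch of the preprocessing described above, under stated assumptions: the text fixes the 184-dimensional layout, the normalization step and the 181-to-60 merge, but not the merging rule or the normalization constant, so the group-of-3 minimum and the division by the 8 m maximum range are illustrative choices, and the function name `preprocess` is ours.

```python
import numpy as np

MAX_RANGE = 8.0  # maximum detection radius in metres (from the text)

def preprocess(sample):
    """Turn one 184-dim sample into (61-dim network input, 2-dim label).

    Layout of `sample` (from the description):
      [0:181]   181 laser distances,
      [181]     angle of the robot-target line in the global frame,
      [182:184] heading theta and speed v (the training labels).

    The text does not say how the 181 distances are merged into 60
    values; as an illustrative assumption we take the minimum over
    groups of 3 beams (the most conservative reading, with the last
    group absorbing the extra beam) and normalise by MAX_RANGE.
    """
    sample = np.asarray(sample, dtype=float)
    beams = sample[:181] / MAX_RANGE                      # normalise to [0, 1]
    groups = [beams[3 * i:3 * i + 3] for i in range(59)]  # 59 groups of 3
    groups.append(beams[177:181])                         # last group: 4 beams
    merged = np.array([g.min() for g in groups])          # 60 merged values
    net_input = np.append(merged, sample[181])            # + angle -> 61 dims
    label = sample[182:184]                               # theta, v
    return net_input, label

net_input, label = preprocess(list(range(184)))  # dummy sample for shape check
```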
Structural design of the GRU network model for dynamic planning of the mobile robot: the structure of the GRU network model designed by the invention is shown in FIG. 3. Following the forward propagation direction, the input raw data become 61-dimensional after normalization and merging, and enter the input layer, the hidden layer and the output layer in sequence.
An input layer: the input data of this layer are 61-dimensional, where the first 60 dimensions are distance information detected by the laser range finder and the last dimension is the angle of the line connecting the mobile robot and the target point in the global coordinate system. The data are stored as row vectors, and the input layer and the hidden layer are fully connected.
A hidden layer: the hidden layer has two layers in total; hidden layer 1 consists of 40 GRU module units, hidden layer 2 contains 30 neurons, and the hidden layers are fully connected.
An output layer: the number of neurons in the output layer is 2, and the neurons respectively correspond to the direction theta and the speed v of the mobile robot in the global coordinate system at the next planned moment. The hidden layer data output then enters the output layer in a fully connected manner through the activation function.
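The layer sizes above (61-dimensional input, 40 GRU units, 30 neurons, 2 outputs) can be sketched as follows for the part after the GRU layer. The text only says the hidden output passes "through the activation function" into the fully connected output layer, so the tanh activation and the linear output used here are assumptions, and the weight values are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes from the text: 40 GRU units -> 30 neurons -> 2 outputs.
H1, H2, OUT = 40, 30, 2
W2, b2 = rng.normal(0.0, 0.1, (H1, H2)), np.zeros(H2)
W3, b3 = rng.normal(0.0, 0.1, (H2, OUT)), np.zeros(OUT)

def head(h_gru):
    """Map the GRU layer's last output to (theta, v).

    tanh for hidden layer 2 and a linear output layer are assumed
    choices, not specified in the text.
    """
    a2 = np.tanh(h_gru @ W2 + b2)   # hidden layer 2: 30 neurons, fully connected
    return a2 @ W3 + b3             # output layer: direction theta, speed v

theta, v = head(rng.normal(size=H1))
```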
The most important structure in the above GRU network model is the GRU module unit. Each unit includes two gating units, an update gate and a reset gate; the inputs of both gates are the input x_t at the current moment and the output h_{t-1} at the previous moment, and the cooperation of the two gates allows long-term information to be retained. The output of the unit is calculated as follows:
z_t = σ(x_t W_z + h_{t-1} U_z)
r_t = σ(x_t W_r + h_{t-1} U_r)
h̃_t = tanh(x_t W + (r_t ⊙ h_{t-1}) U)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t
where z_t and r_t respectively represent the outputs of the update gate and the reset gate at time t; x_t and h_t are the input and output vectors at time t; W_z, W_r and W are the weight matrices between the module input and the update gate, the reset gate and the candidate state h̃_t, respectively; U_z, U_r and U are the weight matrices between the module output at time t-1 and the update gate, the reset gate and the candidate state h̃_t, respectively.
The GRU module units are expanded in time sequence as shown in FIG. 4. The GRU module units receive input data in chronological order: first x_{t-9} produces an output h_{t-9} after forward propagation, then h_{t-9} and x_{t-8} together form the input of the next unit, and so on; the data are passed on in this way, and the output of the module at the last moment is the output of the unit structure.
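The four equations and the ten-step unrolling can be sketched directly in NumPy. The dimensions (61-dimensional input, 40 GRU units) follow the text; the random weight values, the zero initial state and the function names are illustrative placeholders, not part of the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, HID = 61, 40  # input width and number of GRU units, per the text

# Random placeholder weights; row-vector convention (x @ W) as in the equations.
Wz, Wr, W = (rng.normal(0.0, 0.1, (IN_DIM, HID)) for _ in range(3))
Uz, Ur, U = (rng.normal(0.0, 0.1, (HID, HID)) for _ in range(3))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev):
    """One GRU module unit step, matching the four equations above."""
    z = sigmoid(x_t @ Wz + h_prev @ Uz)           # update gate z_t
    r = sigmoid(x_t @ Wr + h_prev @ Ur)           # reset gate r_t
    h_cand = np.tanh(x_t @ W + (r * h_prev) @ U)  # candidate state
    return (1.0 - z) * h_prev + z * h_cand        # new hidden state h_t

def gru_unroll(xs):
    """Feed x_{t-9}..x_t in order; the output at the last moment is returned."""
    h = np.zeros(HID)
    for x_t in xs:
        h = gru_step(x_t, h)
    return h

h_last = gru_unroll(rng.normal(size=(10, IN_DIM)))  # 10 consecutive moments
```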
GRU network model training process:
step one: initializing the weights of the GRU network model with a standard normal distribution;
step two: setting the training counters k = 0 and i = 0;
step three: randomly extracting the data of 10 consecutive moments [x_{t-9}, …, x_t] from the training set, inputting them into the network, and obtaining the corresponding output y_t through forward propagation;
step four: computing the error between the network output y_t and the corresponding label l_t;
step five: updating the weights with a gradient descent algorithm;
step six: if i ≥ 500, setting i = 0, randomly selecting 50 sequences from the test set, testing with the current network and computing the mean squared error; if the mean squared error on the test set has not decreased for 10 consecutive evaluations, stopping training;
step seven: k = k + 1, i = i + 1;
step eight: if k is larger than the maximum number of training iterations, jumping to step nine, otherwise jumping to step three;
step nine: training ends.
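The training schedule above (stochastic gradient-descent updates, a 50-sequence test every 500 iterations, stopping after 10 evaluations without test-MSE improvement) can be sketched as follows. The GRU network is replaced here by a deliberately tiny linear stand-in so the loop is runnable end to end; all class and function names are illustrative, not from the text.

```python
import numpy as np

class TinyModel:
    """Stand-in for the GRU network: a linear map from the sequence mean.

    Purely illustrative; the real model is the GRU network of FIG. 3.
    """
    def __init__(self, in_dim=61, out_dim=2):
        self.W = np.zeros((in_dim, out_dim))
    def forward(self, seq):                 # seq: (10, in_dim)
        return seq.mean(axis=0) @ self.W
    def update(self, seq, label, lr):       # one gradient-descent step (MSE)
        x = seq.mean(axis=0)
        err = self.forward(seq) - label
        self.W -= lr * np.outer(x, err)

def train(model, train_seqs, train_labels, test_seqs, test_labels,
          max_iters=5000, lr=0.05):
    """Steps one to nine above: SGD with a test check every 500 updates,
    stopping when the test MSE fails to decrease 10 consecutive times."""
    rng = np.random.default_rng(0)
    k = i = 0
    best_mse, stale = np.inf, 0
    while k <= max_iters:
        j = rng.integers(len(train_seqs))                 # step three
        model.update(train_seqs[j], train_labels[j], lr)  # steps four/five
        if i >= 500:                                      # step six
            i = 0
            idx = rng.choice(len(test_seqs), size=min(50, len(test_seqs)),
                             replace=False)
            mse = np.mean([(model.forward(test_seqs[t]) - test_labels[t]) ** 2
                           for t in idx])
            if mse < best_mse:
                best_mse, stale = mse, 0
            else:
                stale += 1
                if stale >= 10:                           # early stopping
                    break
        k += 1                                            # step seven
        i += 1
    return best_mse

# Synthetic demo: labels are exactly linear in the sequence mean.
rng = np.random.default_rng(1)
W_true = rng.normal(size=(61, 2))
seqs = rng.normal(size=(200, 10, 61))
labels = np.array([s.mean(axis=0) @ W_true for s in seqs])
final_mse = train(TinyModel(), seqs[:150], labels[:150],
                  seqs[150:], labels[150:])
```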
The invention has the following effects:
After the network training is finished, the mobile robot uses the trained GRU network model to perform dynamic collision avoidance planning in an unknown environment, and the result is compared with the ant colony algorithm; the result is shown in FIG. 5. As can be seen from the figure, both algorithms enable the mobile robot to avoid all obstacles with a smooth route, which indicates that the algorithm can ensure the safety of the robot. The average reaction times required by the robot for collision avoidance under the two algorithms are 236 ms and 391 ms respectively, so the reaction time of the mobile robot under the GRU network model is clearly shorter and the real-time performance is higher, which demonstrates the efficiency of the algorithm.

Claims (4)

1. A mobile robot dynamic collision avoidance planning method based on a GRU network model, characterized by comprising an input layer, a hidden layer and an output layer, wherein the input raw data become 61-dimensional after normalization and merging and enter the input layer, the hidden layer and the output layer in sequence; the method is implemented as follows:
Step 1: establish a coordinate system model, the origin of the coordinate system being selected at the center of the laser range finder, the X axis pointing straight ahead of the laser range finder and the Y axis pointing to its left; the horizontal opening angle of the laser range finder is 180°, the maximum detection radius is set to 8 m, and there are 181 beams in total with a beam angle of 1°;
Step 2: collect and preprocess sample data; in a simulated training field the mobile robot performs dynamic collision avoidance and path planning under the control of a teacher system while sample data are collected, each group of sample data having 184 dimensions; the data are then normalized, dimensions 1 to 181 of the collected data are merged, and the 182nd dimension is kept unchanged, so that the processed input data have 61 dimensions;
Step 3: input the 61-dimensional data into the input layer;
Step 4: the GRU module units in the hidden layer receive the input data from the input layer in chronological order; first x_{t-9} produces an output h_{t-9} after forward propagation, then h_{t-9} and x_{t-8} together form the input of the next unit, and so on; the data are passed on in this way, and the output of the module at the last moment is the output of the unit structure; the GRU output value is calculated as:
z_t = σ(x_t W_z + h_{t-1} U_z)
r_t = σ(x_t W_r + h_{t-1} U_r)
h̃_t = tanh(x_t W + (r_t ⊙ h_{t-1}) U)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t
where z_t and r_t respectively represent the outputs of the update gate and the reset gate at time t; x_t and h_t are the input and output vectors at time t; W_z, W_r and W are the weight matrices between the module input and the update gate, the reset gate and the candidate state h̃_t, respectively; U_z, U_r and U are the weight matrices between the module output at time t-1 and the update gate, the reset gate and the candidate state h̃_t, respectively;
Step 5: transmit the calculation result of the hidden layer to the output layer as the direction θ and the speed v of the mobile robot in the global coordinate system at the next moment;
Step 6: train the GRU network model, the weights of the GRU network model being initialized with a standard normal distribution;
Step 7: set the training counters k = 0 and i = 0;
Step 8: randomly extract the data of 10 consecutive moments [x_{t-9}, …, x_t] from the training set, input them into the network, and obtain the corresponding output y_t through forward propagation;
Step 9: compute the error between the network output y_t and the corresponding label l_t;
Step 10: update the weights with a gradient descent algorithm;
Step 11: if i ≥ 500, set i = 0, randomly select 50 sequences from the test set, test with the current network and compute the mean squared error; if the mean squared error on the test set has not decreased for 10 consecutive evaluations, stop training;
Step 12: k = k + 1, i = i + 1;
Step 13: if k is larger than the maximum number of training iterations, jump to Step 14, otherwise jump to Step 8;
Step 14: training ends, and the trained GRU network model is obtained.
2. The mobile robot dynamic collision avoidance planning method based on a GRU network model according to claim 1, characterized in that the input layer receives 61-dimensional input data, where the first 60 dimensions are distance information detected by the laser range finder and the last dimension is the angle of the line connecting the mobile robot and the target point in the global coordinate system; the data are stored as row vectors, and the input layer and the hidden layer are fully connected.
3. The mobile robot dynamic collision avoidance planning method based on a GRU network model according to claim 1, characterized in that the hidden layer has two layers in total: hidden layer 1 is composed of 40 GRU module units, hidden layer 2 contains 30 neurons, and the hidden layers are fully connected.
4. The mobile robot dynamic collision avoidance planning method based on a GRU network model according to claim 1, characterized in that the output layer contains 2 neurons corresponding respectively to the planned direction θ and speed v of the mobile robot in the global coordinate system at the next moment; after the hidden layer data are output, they pass through the activation function and enter the output layer in a fully connected manner.
CN201810044018.XA 2018-01-17 2018-01-17 Mobile robot dynamic collision avoidance planning method based on GRU network model Active CN108320051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810044018.XA CN108320051B (en) 2018-01-17 2018-01-17 Mobile robot dynamic collision avoidance planning method based on GRU network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810044018.XA CN108320051B (en) 2018-01-17 2018-01-17 Mobile robot dynamic collision avoidance planning method based on GRU network model

Publications (2)

Publication Number Publication Date
CN108320051A CN108320051A (en) 2018-07-24
CN108320051B true CN108320051B (en) 2021-11-23

Family

ID=62895127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810044018.XA Active CN108320051B (en) 2018-01-17 2018-01-17 Mobile robot dynamic collision avoidance planning method based on GRU network model

Country Status (1)

Country Link
CN (1) CN108320051B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027313A (en) * 2018-10-08 2020-04-17 中国科学院沈阳计算技术研究所有限公司 BiGRU judgment result tendency analysis method based on attention mechanism
CN109556609B (en) * 2018-11-15 2020-11-17 武汉南华工业设备工程股份有限公司 Artificial intelligence-based collision avoidance method and device
CN109739218A (en) * 2018-12-24 2019-05-10 江苏大学 A method for establishing a lane-changing model based on GRU network for imitating excellent drivers
CN109901589B (en) * 2019-03-29 2022-06-07 北京易达图灵科技有限公司 Mobile robot control method and device
CN117031365A (en) * 2023-08-02 2023-11-10 迈胜医疗设备有限公司 Magnetic field measuring device, control method thereof, electronic equipment and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875511A (en) * 2017-03-03 2017-06-20 深圳市唯特视科技有限公司 A kind of method for learning driving style based on own coding regularization network
CN106950969A (en) * 2017-04-28 2017-07-14 深圳市唯特视科技有限公司 It is a kind of based on the mobile robot continuous control method without map movement planner
CN107065881A (en) * 2017-05-17 2017-08-18 清华大学 A kind of robot global path planning method learnt based on deeply
CN107122736A (en) * 2017-04-26 2017-09-01 北京邮电大学 A kind of human body based on deep learning is towards Forecasting Methodology and device
CN107229967A (en) * 2016-08-22 2017-10-03 北京深鉴智能科技有限公司 A kind of hardware accelerator and method that rarefaction GRU neutral nets are realized based on FPGA
CN108334677A (en) * 2018-01-17 2018-07-27 哈尔滨工程大学 A kind of UUV Realtime collision free planing methods based on GRU networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN105045260A (en) * 2015-05-25 2015-11-11 湖南大学 Mobile robot path planning method in unknown dynamic environment
CN106774942A (en) * 2017-01-18 2017-05-31 华南理工大学 A kind of real-time 3D remote human-machines interactive system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229967A (en) * 2016-08-22 2017-10-03 北京深鉴智能科技有限公司 A kind of hardware accelerator and method that rarefaction GRU neutral nets are realized based on FPGA
CN106875511A (en) * 2017-03-03 2017-06-20 深圳市唯特视科技有限公司 A kind of method for learning driving style based on own coding regularization network
CN107122736A (en) * 2017-04-26 2017-09-01 北京邮电大学 A kind of human body based on deep learning is towards Forecasting Methodology and device
CN106950969A (en) * 2017-04-28 2017-07-14 深圳市唯特视科技有限公司 It is a kind of based on the mobile robot continuous control method without map movement planner
CN107065881A (en) * 2017-05-17 2017-08-18 清华大学 A kind of robot global path planning method learnt based on deeply
CN108334677A (en) * 2018-01-17 2018-07-27 哈尔滨工程大学 A kind of UUV Realtime collision free planing methods based on GRU networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Novel GRU-RNN Network Model for Dynamic …; Jianya Yuan et al.; IEEE Access; 2019-01-24; vol. 7; pp. 15140-15151 *

Also Published As

Publication number Publication date
CN108320051A (en) 2018-07-24

Similar Documents

Publication Publication Date Title
CN108320051B (en) Mobile robot dynamic collision avoidance planning method based on GRU network model
CN114384920B (en) Dynamic obstacle avoidance method based on real-time construction of local grid map
Cao et al. Target search control of AUV in underwater environment with deep reinforcement learning
CN108334677B (en) UUV real-time collision avoidance planning method based on GRU network
CN108319293B (en) UUV real-time collision avoidance planning method based on LSTM network
CN102915039B (en) A kind of multirobot joint objective method for searching of imitative animal spatial cognition
CN113848974B (en) Aircraft trajectory planning method and system based on deep reinforcement learning
CN112857370A (en) Robot map-free navigation method based on time sequence information modeling
CN114495036A (en) Vehicle track prediction method based on three-stage attention mechanism
CN111027627A (en) Vibration information terrain classification and identification method based on multilayer perceptron
CN110401978A (en) Indoor orientation method based on neural network and particle filter multi-source fusion
CN111445498A (en) Target tracking method adopting Bi-L STM neural network
KR20210054355A (en) Vision and language navigation system
CN118261051A (en) A method for constructing a pedestrian and vehicle trajectory prediction model at intersections based on heterogeneous graph networks
CN113609999A (en) Human body model building method based on gesture recognition
CN114153216B (en) Lunar surface path planning system and method based on deep reinforcement learning and block planning
CN108459614B (en) A real-time collision avoidance planning method for UUV based on CW-RNN network
Masmoudi et al. Autonomous car-following approach based on real-time video frames processing
Xu et al. Avoidance of manual labeling in robotic autonomous navigation through multi-sensory semi-supervised learning
Zhang et al. A convolutional neural network method for self-driving cars
Che Multi-sensor data fusion method based on ARIMA-LightGBM for AGV positioning
CN118484004A (en) A path planning method for highway emergency operation robot
CN118643858A (en) A hybrid digital-analog unmanned swarm brain-like collaborative navigation method
CN112560571A (en) Intelligent autonomous visual navigation method based on convolutional neural network
Bai et al. Research of environmental modeling method of coal mine rescue snake robot based on information fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant