CN113959446B - Autonomous logistics transportation navigation method for robot based on neural network

Info

Publication number
CN113959446B
CN113959446B
Authority
CN
China
Prior art keywords
neural network
robot
layer
data
information
Prior art date
Legal status
Active
Application number
CN202111222526.0A
Other languages
Chinese (zh)
Other versions
CN113959446A (en)
Inventor
陈逸阳
贺海东
程传鑫
Current Assignee
Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202111222526.0A
Publication of CN113959446A
Application granted
Publication of CN113959446B
Status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 — Route searching; Route guidance
    • G01C21/3407 — Route searching; Route guidance specially adapted for specific applications
    • G01C21/343 — Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 — Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 — Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06 — Systems determining position data of a target
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G06N3/084 — Backpropagation, e.g. using gradient descent
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a robot autonomous logistics transportation navigation method based on a neural network, belonging to the field of robot navigation control. Its main design points are as follows. A plurality of laser sensors, a radar positioning device, and a gyroscope are mounted on the robot; the laser sensors detect the distribution of obstacles ahead; the radar positioning device and the gyroscope determine the end-point direction and the running direction of the robot, respectively, and the difference between the two is taken as the relative advancing direction of the robot. A plurality of black lines and squares representing obstacles are randomly combined into a maze-like indoor scene that simulates a real warehouse environment. Compared with traditional navigation algorithms, this method does not need to build an environment model; the trained neural network, like a human brain, makes corresponding decisions according to perceived information and is capable of coping with emergencies.

Description

Autonomous logistics transportation navigation method for robot based on neural network
Technical Field
The application relates to the field of robot transportation navigation, in particular to a robot autonomous logistics transportation navigation method based on a neural network.
Background
Autonomous navigation of a robot in the logistics industry requires that the robot be able to independently reach an end point from a start point and not collide with any obstacle during movement.
Related studies are as follows:
CN113240331A discloses a whole-course intelligent logistics robot system comprising a robot automatic running system, a user pickup-and-delivery system, a server, and a logistics cabinet system. The robot automatic running system controls the robot to run while automatically avoiding obstacles and to transfer express items to a pickup position designated by the user pickup-and-delivery system; the server handles information interaction between the user pickup-and-delivery system and the robot automatic running system.
CN107436610A discloses a vehicle and robot carrying navigation method and system for an intelligent outdoor environment. Its key technical points are: step 1: the outdoor environment space is divided into building-to-building transportation areas according to the transportation tasks, each area being provided with a corresponding unmanned aerial vehicle; step 2: the carrying robot recognizes the outline of the unmanned aerial vehicle through Kinect; step 3: the unmanned aerial vehicle guides the carrying robot according to the transportation task, the carrying robot recognizes landmarks carried on the unmanned aerial vehicle in real time through its positioning sensor and follows the unmanned aerial vehicle until it reaches the end point of the transportation task; step 4: when an obstacle appears, the unmanned aerial vehicle and the carrying robot communicate, and the carrying robot recognizes the obstacle outline through Kinect, calculates the maximum obstruction angle, and adjusts the position of the unmanned aerial vehicle.
From the above prior art it can be seen that networked, autonomously navigating robots have great development potential: they could transform the traditional transportation industry and facilitate on-demand services and applications. Autonomous navigation capability is extremely important for a networked mobile robot and is the basis for executing various instructions, so it has attracted wide attention. Navigation strategies based on path planning can be divided into two main types according to how much environmental information is known. The first type is global navigation, also called classical navigation, in which the surrounding environment is fully known before path planning is carried out; common strategies include the cell decomposition method, the roadmap method, and the artificial potential field method, which usually avoid obstacle positions while selecting as short a path as possible. However, the accuracy of this type of navigation method depends on the accuracy of the environment model: if there is a large deviation between the model and the real scene, serious errors occur during navigation, and modeling the environment consumes substantial computing resources. In addition, real environments usually change dynamically, and a pre-planned path can hardly cope with emergencies arising in the environment.
In the face of increasingly complex navigation tasks, researchers have proposed local navigation algorithms that can better handle uncertainty in the environment. Common algorithms include genetic algorithms, fuzzy logic, the firefly algorithm, and particle swarm optimization. These algorithms can react in real time to dynamic changes in the environment, can search in unknown environments, and can also realize cooperative path planning for networked multi-robot systems. While local navigation algorithms are more intelligent, efficient, and easier to implement than classical methods, they may require a high computational load to plan the movement path of a networked robot, which conflicts with the performance of existing devices; for example, the microprocessors installed on networked robots may not have enough computing power to find the correct path on the fly, so these methods are not suitable for low-cost vehicles.
Disclosure of Invention
The purpose of the application is to solve the autonomous navigation problem of mobile robots in logistics transportation scenarios from the perspective of artificial intelligence, i.e., to provide a robot autonomous logistics transportation navigation method based on a neural network. Specifically, two kinds of information, the task scene and the corresponding human behavior, are collected through a self-built data acquisition platform; a suitable neural network is designed and its parameters are adjusted using the collected data so that human behavior can be replicated. The neural network model trained in this way can make correct navigation decisions according to the scene, with a relatively small real-time computational load.
The technical scheme of the application is as follows:
A robot autonomous logistics transportation navigation method based on a neural network comprises the following steps:
s1, building a data acquisition platform:
a plurality of laser sensors, a radar positioning device, and a gyroscope are mounted on the robot; the laser sensors are used for detecting the distribution of obstacles ahead; the radar positioning device and the gyroscope are respectively used for determining the end-point direction and the running direction of the robot, and the difference between the two is taken as the relative advancing direction of the robot;
a plurality of black lines and squares representing obstacles are randomly combined to form a maze-like indoor scene, simulating a real warehouse environment;
s2, collecting human data:
controlling the movement of the robot through a computer keyboard; meanwhile, recording the distance information from each laser beam, the angle information of the robot relative to the advancing direction, and the human behavior information entered through the keyboard;
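As an illustration of this data-collection step, the following is a minimal logging sketch. The robot interface (`read_lasers`, `read_relative_angle`, `step`) and the use of pygame for keyboard capture are illustrative assumptions, not part of the application.

```python
# A logging sketch for step S2 (assumption: the robot interface and the use of
# pygame for keyboard capture are illustrative, not part of the application).
import csv
import pygame

KEY_TO_ACTION = {pygame.K_LEFT: 0, pygame.K_UP: 1, pygame.K_RIGHT: 2}  # 0 left, 1 forward, 2 right

def collect_demonstrations(robot, out_path="demo_log.csv", n_lasers=17):
    """Record laser distances, relative advancing angle, and the human key command."""
    pygame.init()
    pygame.display.set_mode((200, 200))  # a window is needed to receive key events
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([f"d{h}" for h in range(1, n_lasers + 1)] + ["angle", "action"])
        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.KEYDOWN and event.key in KEY_TO_ACTION:
                    action = KEY_TO_ACTION[event.key]
                    distances = robot.read_lasers()       # hypothetical: n distance readings
                    angle = robot.read_relative_angle()   # hypothetical: end-point direction minus heading
                    writer.writerow(list(distances) + [angle, action])
                    robot.step(action)                    # apply the human command
    pygame.quit()
```

The state is logged before the command is applied, so each row pairs the perception the human saw with the action the human chose.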
s3, building a neural network
Building a proper neural network according to the input and output information dimension, wherein related parameters comprise the layer number of the neural network, a connection mode, an activation function, a loss function, a training data set capacity, a small batch data capacity, a learning rate and total training times;
s4, training a neural network
Training the neural network by taking the data collected in the step S2 as a training data set; the distance information of each beam of laser detection and the angle information of the robot relative to the advancing direction are used as input layers, and the output layers are human behavior information.
And S5, loading the trained neural network to the robot.
Further, in step S2, the distance information of each laser detection, the angle information of the robot relative to the advancing direction, and the human behavior information controlled by the keyboard are recorded, specifically as follows:

The number of laser sensors is n, and the distance information data matrix D recording each laser detection at times $t_0 \ldots t_m$ is:

$$D = \begin{bmatrix} d_{1,t_0} & d_{1,t_1} & \cdots & d_{1,t_m} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n,t_0} & d_{n,t_1} & \cdots & d_{n,t_m} \end{bmatrix}$$

where $d_{h,t_d}$ represents the distance measured by the h-th laser sensor at time $t_d$;

the matrix V of angle information of the robot relative to the advancing direction is:

$$V = \begin{bmatrix} V_{t_0} & V_{t_1} & \cdots & V_{t_m} \end{bmatrix}$$

where $V_{t_d}$ denotes the angle of the relative advancing direction of the robot at time $t_d$;

the matrix AT of human behavior information controlled by the keyboard is:

$$AT = \begin{bmatrix} at_{t_0} & at_{t_1} & \cdots & at_{t_m} \end{bmatrix}$$

where each $at_{t_d}$ can be a single numeric value or a vector, corresponding to the human behavior information at time $t_d$.

Further, the robot has p wheels, and the matrix of human behavior information is:

$$AT = \begin{bmatrix} at_{1,t_0} & at_{1,t_1} & \cdots & at_{1,t_m} \\ \vdots & \vdots & \ddots & \vdots \\ at_{p,t_0} & at_{p,t_1} & \cdots & at_{p,t_m} \end{bmatrix}$$

where $at_{f,t_d}$ represents the motion command state of the f-th wheel at time $t_d$;
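For concreteness, the following is a minimal NumPy sketch of how the D, V, and AT matrices could be assembled from a recorded demonstration log; the log format (a list of per-time-step tuples) and the function name `build_matrices` are illustrative assumptions.

```python
# A sketch of assembling the D, V, and AT matrices defined above from a log of
# per-time-step records (assumed format: (distances, angle, wheel_commands)).
import numpy as np

def build_matrices(log, n, p):
    """Return D (n x (m+1)), V (m+1,), AT (p x (m+1)) for time steps t0..tm."""
    m1 = len(log)                 # m + 1 recorded time steps
    D = np.zeros((n, m1))         # D[h, td] = d_{h,td}
    V = np.zeros(m1)              # V[td] = relative advancing angle at time td
    AT = np.zeros((p, m1))        # AT[f, td] = at_{f,td}
    for td, (distances, angle, wheels) in enumerate(log):
        D[:, td] = distances
        V[td] = angle
        AT[:, td] = wheels
    return D, V, AT
```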
further, in step S4, the neural network is trained:
D. v is used as input layer data, AT is used as output layer data, and the neural network is trained:
namely: d, d 1,td ......d h,td ......d n,td ,V td (n+1) dataAt as input layer data 1, td ...at f,td ...at p,td (p data) as output layer data.
Further, n=17, p=3; the neural network built in S3 has the following structure: a 7-layer neural network with node counts $r_1=18$, $r_2=64$, $r_3=128$, $r_4=64$, $r_5=32$, $r_6=8$, $r_7=3$ (corresponding to the 3 wheels), where the first layer is the input layer and the seventh layer is the output layer; the 5 hidden layers are fully connected with the RReLU() activation function added, and the output layer uses the Softmax() function.
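A minimal PyTorch sketch of this architecture is given below; the class name and the exact layer realization are illustrative assumptions, while the node counts (18-64-128-64-32-8-3), the RReLU() hidden activations, and the Softmax() output follow the structure described above.

```python
# A PyTorch sketch of the 7-layer network (node counts 18-64-128-64-32-8-3);
# the class name and layer realization are illustrative assumptions.
import torch
import torch.nn as nn

class NavigationNet(nn.Module):
    def __init__(self, n_inputs=18, n_outputs=3):
        super().__init__()
        sizes = [n_inputs, 64, 128, 64, 32, 8, n_outputs]
        layers = []
        for i in range(len(sizes) - 2):
            # 5 fully connected hidden layers, each followed by RReLU()
            layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.RReLU()]
        layers.append(nn.Linear(sizes[-2], sizes[-1]))  # output layer
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Returns logits: PyTorch's CrossEntropyLoss applies log-softmax itself,
        # so Softmax() is only applied explicitly at prediction time (below).
        return self.body(x)

    def predict(self, x):
        # Softmax() output: probabilities over the 3 actions
        return torch.softmax(self.forward(x), dim=-1)
```

Keeping Softmax() out of `forward` avoids applying it twice when training with a cross-entropy loss, while `predict` still realizes the Softmax() output layer described above.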
Further, training data set capacity $C=640$, mini-batch capacity $n_{batch}=10$, learning rate $lr=0.0008$, total training epochs $n_{epoch}=1000$.
Further, the number of laser sensors is n; the distances detected by the n laser beams and the relative advancing angle form an (n+1)-dimensional vector used as the input of the neural network. Human behaviors are represented by the numbers 0, 1, and 2 (0 for left turn, 1 for forward, 2 for right turn). The errors between the output values and the label values are computed in mini-batches, and the weights and neuron thresholds of the neural network are adjusted through error backpropagation until all data have been used. If the neural network cannot converge, return to step S3 and change the neural network parameters until it converges.
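The mini-batch backpropagation procedure just described could look as follows in PyTorch. The cross-entropy loss and the hyperparameters C=640, n_batch=10, lr=0.0008, n_epoch=1000 come from the text; the choice of the Adam optimizer is an assumption, since the application does not name one.

```python
# A training-loop sketch using the stated hyperparameters (C=640, n_batch=10,
# lr=0.0008, n_epoch=1000, cross-entropy loss). X is a C x 18 float tensor of
# inputs; y is a C-element long tensor of action labels (0, 1, or 2). The Adam
# optimizer is an assumption: the application does not name an optimizer.
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, X, y, n_batch=10, lr=0.0008, n_epoch=1000):
    loader = DataLoader(TensorDataset(X, y), batch_size=n_batch, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(n_epoch):
        for xb, yb in loader:              # extract n_batch samples at a time
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)  # error between output and label values
            loss.backward()                # error backpropagation
            optimizer.step()               # adjust weights and neuron thresholds
    return model
```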
Further, between step S4 and step S5, the method further comprises testing the trained neural network: data different from those used in step S4 serve as a test data set; after each round of training, the training data set and the test data set are input into the neural network, and the outputs are compared with the human behavior data to obtain the accuracy with which the neural network replicates human behavior. If the accuracy exceeds an ideal value, the trained neural network is saved; if the accuracy is low, return to step S3 to adjust the neural network parameters.
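A sketch of this replication-accuracy test follows; the helper name is an assumption, and the 0.85 threshold in the usage comment merely echoes the ~85% figure this application reports below ("ideal value" itself is left unspecified).

```python
# A sketch of the replication-accuracy test: compare the network's predicted
# action with the recorded human action on the training and test data sets.
import torch

@torch.no_grad()
def replication_accuracy(model, X, y):
    preds = model(X).argmax(dim=-1)            # most probable action per sample
    return (preds == y).float().mean().item()  # fraction matching human behavior

# Usage sketch (0.85 echoes the ~85% accuracy the application reports):
# if replication_accuracy(model, X_test, y_test) < 0.85:
#     ...  # return to step S3 and adjust the neural network parameters
```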
The beneficial effects of this application are as follows:
first, the present application addresses the navigation problem described above from an artificial intelligence perspective, utilizing collected historical information to train a neural network so that it can replicate human behavior to autonomously make the correct decision. The method can be applied to an intelligent warehouse, so that the internet mobile robot can transport goods to a designated position safely.
Second, in repeated tests, the robot autonomous logistics transportation navigation method based on a neural network provided by this application replicated human behavior with an accuracy as high as 85%. Since the same robot state may correspond to several executable human behaviors, the neural network output may deviate slightly, but this does not prevent the robot from successfully completing the navigation task in practice. The effective accuracy of the method should therefore be above 85%, which is higher than some other navigation methods in the same scene.
Third, outputting action information in real time according to scene information normally requires a large computational load, but this application separates training from execution, effectively reducing the requirement on computing capability. Information is first collected through human operation; each group of information is independent, with low requirements on real-time performance and action continuity. The neural network is then trained on a computer using the collected information, so the robot does not need a dedicated computing module; the trained neural network can be used once loaded onto the robot.
Drawings
The present application is described in further detail below in conjunction with the embodiments shown in the drawings, but this is not to be construed as limiting the present application in any way.
Fig. 1 is a design drawing of a mobile robot model of the present application.
Fig. 2 is a design drawing of an indoor navigation scene model of the present application.
Fig. 3 is a structural diagram of a neural network of the present application.
Fig. 4 is a schematic diagram of a robotic collision warning of the present application.
Fig. 5 is a schematic diagram of a robot arrival endpoint prompt of the present application.
Fig. 6 is a schematic diagram of a mobile robot movement path case of the present application.
Detailed Description
Embodiment I: a robot autonomous logistics transportation navigation method based on a neural network comprises the following steps:
The first step: build the simulation platform, which simulates a real scene and comprises a mobile robot and obstacles.
The mobile robot is equipped with laser sensors, a radar positioning device, and a gyroscope. As shown in Fig. 1, 17 rays represent the 17 laser beams, and the arrow in Fig. 1 points in the end-point direction.
For the obstacles, as shown in Fig. 2, a plurality of black lines and squares (other representations may also be used) representing obstacles are randomly combined to form a maze-like indoor scene, simulating the narrow, tortuous L-shaped passages that frequently occur in a warehouse environment.
The second step: collect human data. A human subject selects the movement of the robot in different environments by pressing the corresponding direction keys on a mechanical keyboard.
The movement data of the robot in the current round is saved at each time step, including the detection distances of the laser beams, the distance to the end point, the relative advancing angle of the robot, and the action taken by the robot in the current state (e.g., forward, in-place left turn, or in-place right turn).
The third step: build the neural network and select the neural network structure, as shown in Fig. 3 (neural network structure design belongs to the prior art and is not described further).
Fourth step: randomly extract C samples from the data collected in the second step as the training data set to train the neural network. The distances detected by the n lasers and the relative advancing angle form an (n+1)-dimensional vector used as the input of the neural network, and the human behavior serves as the label, represented by integers such as 0, 1, and 2. In each iteration, $n_{batch}$ samples are extracted, the errors between the output values and the label values are computed and represented by a loss function, and the weights and neuron thresholds of the neural network are adjusted through error backpropagation until all the data have been extracted. If the neural network cannot converge, return to the third step and modify the neural network parameters until it converges. A 7-layer neural network can be selected ($r_1=18$, $r_2=64$, $r_3=128$, $r_4=64$, $r_5=32$, $r_6=8$, $r_7=3$), with 5 fully connected hidden layers using the RReLU() activation function and an output layer using the Softmax() function. Specific parameter settings: cross-entropy loss function, training data set capacity $C=640$, mini-batch capacity $n_{batch}=10$, learning rate $lr=0.0008$, total training epochs $n_{epoch}=1000$. Experiments show that these parameter settings meet the requirements of the neural network training results.
Fifth step: after each round of training, the training data set and the test data set are each used as neural network input, and the outputs are compared with the human behavior data to obtain the accuracy with which the neural network replicates human behavior. If the accuracy exceeds an ideal value, the trained neural network is saved; if the accuracy is low, return to the third step to adjust the neural network parameters.
Sixth step: the simulation platform also supports navigation algorithm testing, with manual operation replaced by algorithm output (i.e., the trained neural network), and the platform clearly shows the running state of the robot. If a collision occurs during the movement of the robot, the interface displays a "Collision Warning" message, as shown in Fig. 4. When the robot successfully reaches the end point, the system prompts "Task Complete", and the distance to the end point is always displayed in the navigation interface, as shown in Fig. 5. The map interface of the platform can fully present the path the robot has traveled. Repeated tests of the trained neural network on the data acquisition platform show that the distance information decreases continuously and no warning information appears; after the task is completed, the robot's moving path is seen to be smooth and relatively short. Various moving paths of the robot are shown in Fig. 6.
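To illustrate this sixth step, the sketch below runs the trained network as the controller in place of manual operation; the robot interface (including `at_goal` and `collided`) is the same illustrative assumption used in the earlier data-collection sketch.

```python
# A sketch of the sixth step: the trained network replaces manual operation.
# The robot interface (read_lasers, read_relative_angle, step, at_goal,
# collided) is the same illustrative assumption as in the earlier sketches.
import torch

def run_autonomously(model, robot, max_steps=10_000):
    model.eval()
    for _ in range(max_steps):
        state = list(robot.read_lasers()) + [robot.read_relative_angle()]
        x = torch.tensor(state, dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            action = model(x).argmax(dim=-1).item()  # 0 left, 1 forward, 2 right
        robot.step(action)
        if robot.at_goal():       # platform would show "Task Complete"
            return True
        if robot.collided():      # platform would show "Collision Warning"
            return False
    return False
```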
The above examples are preferred embodiments of the present application and are provided merely for convenience of explanation, not as limitations. Any person of ordinary skill in the art may make local changes or modifications using the technical content disclosed above without departing from the technical features of the present application, and all such embodiments still fall within the protection scope of the technical features of the present application.

Claims (1)

1. A robot autonomous logistics transportation navigation method based on a neural network, characterized by comprising the following steps:
s1, building a data acquisition platform:
n laser sensors, a radar positioning device, and a gyroscope are mounted on the robot; the laser sensors are used for detecting the distribution of obstacles ahead; the radar positioning device and the gyroscope are used for determining the advancing direction of the robot;
constructing an indoor scene;
s2, collecting human data:
controlling the movement of the robot through a computer;
during this process, the distance information from each laser beam, the angle information of the robot relative to the advancing direction, and the human behavior information controlled by the keyboard are recorded simultaneously;
in step S2, the distance information of each laser detection, the angle information of the robot relative to the advancing direction, and the human behavior information controlled by the keyboard are recorded, specifically as follows:

the number of the laser sensors is n, and the distance information data matrix D recording each laser detection at times $t_0 \ldots t_m$ is:

$$D = \begin{bmatrix} d_{1,t_0} & d_{1,t_1} & \cdots & d_{1,t_m} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n,t_0} & d_{n,t_1} & \cdots & d_{n,t_m} \end{bmatrix}$$

wherein $d_{h,t_d}$ represents the distance measured by the h-th laser sensor at time $t_d$;

the matrix V of angle information of the robot relative to the advancing direction is:

$$V = \begin{bmatrix} V_{t_0} & V_{t_1} & \cdots & V_{t_m} \end{bmatrix}$$

wherein $V_{t_d}$ denotes the angle of the relative advancing direction of the robot at time $t_d$;

the matrix AT of human behavior information controlled by the keyboard is:

$$AT = \begin{bmatrix} at_{t_0} & at_{t_1} & \cdots & at_{t_m} \end{bmatrix}$$

wherein $at_{t_d}$ is a single numeric value or a vector, corresponding to the human behavior information at time $t_d$;

the robot has p wheels, and the matrix of human behavior information is:

$$AT = \begin{bmatrix} at_{1,t_0} & at_{1,t_1} & \cdots & at_{1,t_m} \\ \vdots & \vdots & \ddots & \vdots \\ at_{p,t_0} & at_{p,t_1} & \cdots & at_{p,t_m} \end{bmatrix}$$

wherein $at_{f,t_d}$ represents the motion command state of the f-th wheel at time $t_d$;
s3, building a neural network;
s4, training a neural network: training the neural network by taking the data collected in the step S2 as a training data set; the distance information of each beam of laser detection and the angle information of the robot relative to the advancing direction are used as input layers, and the output layers are human behavior information;
in step S4, the neural network is trained as follows:
D and V are used as input layer data, AT is used as output layer data, and the neural network is trained;
namely: $d_{1,t_d}, \ldots, d_{h,t_d}, \ldots, d_{n,t_d}, V_{t_d}$ serve as the input layer data, and $at_{1,t_d}, \ldots, at_{f,t_d}, \ldots, at_{p,t_d}$ serve as the output layer data;
n=17, p=3; the neural network built in S3 has the following structure: a 7-layer neural network with node counts $r_1=18$, $r_2=64$, $r_3=128$, $r_4=64$, $r_5=32$, $r_6=8$, $r_7=3$, where the first layer is the input layer and the seventh layer is the output layer; the 5 hidden layers are fully connected with the RReLU() activation function added, and the output layer uses the Softmax() function;
s5, loading the trained neural network onto a robot, wherein the robot carries out autonomous logistics transportation navigation based on the loaded neural network;
training data set capacity $C=640$, mini-batch capacity $n_{batch}=10$, learning rate $lr=0.0008$, total training epochs $n_{epoch}=1000$;
the number of the laser sensors is n, and the distances detected by the n laser beams and the relative advancing angle form an (n+1)-dimensional vector used as the input of the neural network;
between step S4 and step S5, the method further comprises testing the trained neural network: data different from those used in step S4 serve as a test data set; after each round of training, the training data set and the test data set are input into the neural network, and the outputs are compared with the human behavior data to obtain the accuracy with which the neural network replicates human behavior; if the accuracy exceeds an ideal value, the trained neural network is saved; if the accuracy is low, return to step S3 to adjust the neural network parameters.
CN202111222526.0A 2021-10-20 2021-10-20 Autonomous logistics transportation navigation method for robot based on neural network Active CN113959446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111222526.0A CN113959446B (en) 2021-10-20 2021-10-20 Autonomous logistics transportation navigation method for robot based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111222526.0A CN113959446B (en) 2021-10-20 2021-10-20 Autonomous logistics transportation navigation method for robot based on neural network

Publications (2)

Publication Number Publication Date
CN113959446A CN113959446A (en) 2022-01-21
CN113959446B (en) 2024-01-23

Family

ID=79464911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111222526.0A Active CN113959446B (en) 2021-10-20 2021-10-20 Autonomous logistics transportation navigation method for robot based on neural network

Country Status (1)

Country Link
CN (1) CN113959446B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10621448B2 (en) * 2017-08-02 2020-04-14 Wing Aviation Llc Systems and methods for determining path confidence for unmanned vehicles
US10739775B2 (en) * 2017-10-28 2020-08-11 Tusimple, Inc. System and method for real world autonomous vehicle trajectory simulation
US11740630B2 (en) * 2018-06-12 2023-08-29 Skydio, Inc. Fitness and sports applications for an autonomous unmanned aerial vehicle
US20210165413A1 (en) * 2018-07-26 2021-06-03 Postmates Inc. Safe traversable area estimation in unstructured free-space using deep convolutional neural network
US20210354729A1 (en) * 2020-05-18 2021-11-18 Nvidia Corporation Efficient safety aware path selection and planning for autonomous machine applications

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7211980B1 (en) * 2006-07-05 2007-05-01 Battelle Energy Alliance, Llc Robotic follow system and method
CN104777839A (en) * 2015-04-16 2015-07-15 北京工业大学 BP neural network and distance information-based robot autonomous obstacle avoiding method
WO2018117872A1 (en) * 2016-12-25 2018-06-28 Baomar Haitham The intelligent autopilot system
US10032111B1 (en) * 2017-02-16 2018-07-24 Rockwell Collins, Inc. Systems and methods for machine learning of pilot behavior
CN110632931A (en) * 2019-10-09 2019-12-31 哈尔滨工程大学 Mobile robot collision avoidance planning method based on deep reinforcement learning in dynamic environment
WO2021101561A1 (en) * 2019-11-22 2021-05-27 Siemens Aktiengesellschaft Sensor-based construction of complex scenes for autonomous machines
CN112857370A (en) * 2021-01-07 2021-05-28 北京大学 Robot map-free navigation method based on time sequence information modeling
CN112873211A (en) * 2021-02-24 2021-06-01 清华大学 Robot man-machine interaction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Shuang; Zhu Guodong. Robot teleoperation method based on operator performance. Robot, 2018, No. 04. *
Li Shoutao; Li Yuanchun. Mobile robot control algorithm based on hierarchical fuzzy behaviors in unknown environments. Journal of Jilin University (Engineering and Technology Edition), No. 04, full text. *
Hu Jingbo; Chen Dingfang; Wu Junfeng; Mei Jie; Li Bo. Research on autonomous obstacle avoidance of mobile robots based on an improved fuzzy algorithm. Automation & Instrumentation, 2018, No. 06. *

Also Published As

Publication number Publication date
CN113959446A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
CN111780777B (en) Unmanned vehicle route planning method based on improved A-star algorithm and deep reinforcement learning
Faust et al. Prm-rl: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning
CN110989576B (en) Target following and dynamic obstacle avoidance control method for differential slip steering vehicle
Zhang et al. 2D Lidar‐Based SLAM and Path Planning for Indoor Rescue Using Mobile Robots
Tai et al. Towards cognitive exploration through deep reinforcement learning for mobile robots
Ross et al. Learning monocular reactive uav control in cluttered natural environments
Grigorescu et al. Neurotrajectory: A neuroevolutionary approach to local state trajectory learning for autonomous vehicles
Botteghi et al. On reward shaping for mobile robot navigation: A reinforcement learning and SLAM based approach
CN113848974B (en) Aircraft trajectory planning method and system based on deep reinforcement learning
Chen et al. Robot navigation with map-based deep reinforcement learning
El Ferik et al. A Behavioral Adaptive Fuzzy controller of multi robots in a cluster space
Al Dabooni et al. Heuristic dynamic programming for mobile robot path planning based on Dyna approach
Guo et al. A fusion method of local path planning for mobile robots based on LSTM neural network and reinforcement learning
CN116679719A (en) Unmanned vehicle self-adaptive path planning method based on dynamic window method and near-end strategy
Sani et al. Pursuit-evasion game for nonholonomic mobile robots with obstacle avoidance using NMPC
Chen et al. Deep reinforcement learning of map-based obstacle avoidance for mobile robot navigation
CN113485323B (en) Flexible formation method for cascading multiple mobile robots
Rasib et al. Are Self‐Driving Vehicles Ready to Launch? An Insight into Steering Control in Autonomous Self‐Driving Vehicles
Liang et al. Multi-UAV autonomous collision avoidance based on PPO-GIC algorithm with CNN–LSTM fusion network
Wu et al. UAV Path Planning Based on Multicritic‐Delayed Deep Deterministic Policy Gradient
Sebastian et al. Neural network based heterogeneous sensor fusion for robot motion planning
Sun et al. Event-triggered reconfigurable reinforcement learning motion-planning approach for mobile robot in unknown dynamic environments
Nayak et al. A heuristic-guided dynamical multi-rover motion planning framework for planetary surface missions
Xu et al. Avoidance of manual labeling in robotic autonomous navigation through multi-sensory semi-supervised learning
Xu et al. Automated labeling for robotic autonomous navigation through multi-sensory semi-supervised learning on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant