CN112440281A - Robot trajectory planning method based on digital twins - Google Patents
Info
- Publication number
- CN112440281A (application CN202011278855.2A)
- Authority
- CN
- China
- Prior art keywords
- robot
- environment
- dimensional
- trajectory
- track
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1674—Programme controls characterised by safety, monitoring, diagnostic
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a robot trajectory planning method based on digital twins, which comprises an actual robot trajectory planning hardware system, a virtual robot trajectory planning digital model, and a core trajectory planning algorithm based on LSTM-RRT. The hardware system comprises the robot, a mechanical gripper, working-environment obstacles and the like, and executes trajectory planning in the physical environment; the digital model comprises the constructed digital robot model and a virtual environment model, and executes trajectory planning in the virtual environment. The method innovatively combines the rapidly-exploring random tree (RRT) algorithm with a long short-term memory (LSTM) neural network to quickly solve the robot grasping path, greatly improving solving efficiency without affecting the success rate, uses digital twin technology to monitor the robot state in real time, and improves the control precision of robot trajectory planning.
Description
Technical Field
The invention relates to the field of robot trajectory planning and the field of digital twins, in particular to a robot trajectory planning method based on digital twins.
Background
Digital twin technology: a digital twin is a set of virtual information constructs that fully describes a potential or actual physically manufactured product, from the microscopic atomic level to the macroscopic geometric level. Digital twin technology can make full use of data such as physical models, sensor updates and operation history, integrate multidisciplinary, multi-physics, multi-scale and multi-probability simulation processes, and complete the mapping of the physical world into the virtual world. At its best, any information that could be obtained from inspecting the physically manufactured product can be obtained from its digital twin.
Trajectory planning technology: in brief, robot trajectory planning generates a path between a specified start point and end point such that the robot, moving along the path, does not collide with surrounding obstacles. Widely used robot trajectory planning algorithms currently include the Rapidly-exploring Random Tree (RRT) algorithm, Dijkstra's algorithm, the A* algorithm, the Probabilistic Roadmap (PRM) algorithm, and others. Because most traditional algorithms solve by search, they must continuously explore the space and perform collision detection during solving, so solving is slow. Because traditional random-sampling algorithms do not further optimize the trajectory, the resulting trajectory quality is low. And because traditional algorithms do not use digital twin technology to simulate the robot in real time, no real-time feedback is available during grasping and the robot's motion cannot be precisely controlled.
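To make the search-based nature of RRT concrete, here is a minimal two-dimensional sketch. It is not the patent's implementation; the workspace bounds, step size, goal bias and obstacle layout are illustrative assumptions:

```python
import math
import random

def rrt(start, goal, is_free, bounds=(0.0, 10.0), step=0.5,
        max_iter=5000, goal_tol=0.6, seed=0):
    """Minimal 2D RRT: grow a tree from `start` by random sampling until a
    node lands within `goal_tol` of `goal`. `is_free(p)` reports whether
    point p lies in free space."""
    rng = random.Random(seed)
    lo, hi = bounds
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        # Sample a random point (biased toward the goal 10% of the time).
        q = goal if rng.random() < 0.1 else (rng.uniform(lo, hi), rng.uniform(lo, hi))
        # Nearest tree node to the sample.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), q)
        if d == 0.0:
            continue
        # Steer one fixed-length step from the nearest node toward the sample.
        new = (nx + step * (q[0] - nx) / d, ny + step * (q[1] - ny) / d)
        if not is_free(new):
            continue  # collision: reject and resample
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1    # walk parents back to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Free space: a 10 x 10 square minus one rectangular obstacle.
free = lambda p: not (3.0 < p[0] < 5.0 and p[1] < 7.0)
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

The repeated nearest-neighbor search and per-sample collision checks in this loop are exactly the cost the patent's LSTM-based predictor is meant to avoid at query time.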
Disclosure of Invention
To address the slow trajectory solving and low control precision of existing robot motion planning, a robot trajectory planning method based on digital twins is provided, which combines the traditional rapidly-exploring random tree algorithm with a long short-term memory neural network. By constructing a digital twin model of robot grasping based on the LSTM-RRT method, the invention effectively improves the solving speed of robot grasping and the control precision of the robot.
The purpose of the invention is realized by the following technical scheme:
a robot track planning method based on digital twins comprises the steps of constructing a virtual scene and a real scene of a robot and an environment with obstacles where the robot is located, transmitting virtual and real information of the virtual and real scenes through physical hardware, constructing a digital twins model, and planning a path of the robot through a neural network model running in an upper computer, and specifically comprises the following steps:
s1: constructing a training data set in a virtual environment;
respectively generating a two-dimensional environment vector and a three-dimensional environment vector from a two-dimensional environment image and a three-dimensional environment image of an environment model with obstacles in the virtual environment, generating a plurality of two-dimensional and three-dimensional obstacle-avoiding trajectories with the RRT algorithm for each two-dimensional and three-dimensional environment image, and then generating a training data set {I^(r), P^(rn)}, where I^(r) denotes an environment vector and P^(rn) denotes a trajectory;
s2: inputting the two-dimensional environment vector and the plurality of two-dimensional tracks, the three-dimensional environment vector and the plurality of three-dimensional tracks obtained in the step S1 into an LSTM network for training respectively; the specific training process is as follows:
s2.1: from said training data set { I(r),P(rn)Extracting the coordinate x of the kth point on one trackkAnd input into the LSTM at the k-th call;
s2.2: by xkTo calculate the predicted value y of the coordinates of the (k + 1) th point on the trajectoryk;
S2.3: calculating ykAnd a track P(rn)Accumulating the difference between the real values of the coordinates of the (k + 1) th point, namely, a callback error, and updating the parameters of the LSTM according to the callback error;
s2.4: using ykStructure xk+1Replacing the time sequence k with k +1, and returning to S2.2;
s2.5: traversing all tracks in the data set to generate an optimized LSTM network;
s3: inputting the environment with obstacles to be subjected to track planning and the initial point position and the final point position into the optimized LSTM network, and generating a collision-free track;
s4: and transmitting the collision-free track to act on the robot through virtual and real information, so that the robot runs according to the collision-free track.
Further, after a collision-free trajectory is obtained from S3, redundant points in the trajectory are removed according to the geometric features of the trajectory and smoothed using a B-spline curve.
Further, before constructing the training set, the convolutional auto-encoder is trained by using the training data, and then the trained convolutional auto-encoder is used to compress the environment vector to be input into the LSTM.
Further, in S2.1, the training data set {I^(r), P^(rn)} is first shuffled, split into small-batch data sets, and input into the LSTM network in batches.
Further, the robot running model in the virtual environment comprises a robot model in an equal proportion to that in the physical environment, an environment model in an equal proportion to that in the physical environment, and the same constraint conditions in the running process of the physical robot; the robot trajectory information includes the robot end positions at different times, which are coordinates in a cartesian coordinate system.
The invention has the following beneficial effects:
(1) the invention can quickly generate a trajectory between any start point and end point in the trained environment, thereby greatly improving the grasping efficiency of the robot;
(2) the running state of the robot is monitored in real time by using the constructed digital twin body, so that the robot can be controlled more accurately, and accidents can be effectively prevented.
Drawings
FIG. 1 is a diagram of a digital twin virtual model for robot trajectory planning;
fig. 2 is a diagram of a digital twin effect of robot trajectory planning.
FIG. 3 is a flow chart of a trajectory planning method of the present invention;
FIG. 4 is a two-dimensional environment map in training data;
FIG. 5 is a three-dimensional environment map in training data;
FIG. 6 is a trajectory diagram in a two-dimensional environment;
FIG. 7 is a trajectory diagram in a three-dimensional environment;
FIG. 8 is a diagram of a result of trajectory optimization in a two-dimensional environment;
FIG. 9 is a diagram of a result of trajectory optimization in a three-dimensional environment;
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The robot trajectory planning method based on the digital twin firstly needs to respectively construct the robot and the virtual and real environments where the robot is located. The actual robot trajectory planning hardware system comprises a physical robot, a mechanical claw and a working environment obstacle; the mechanical claw is a two-finger parallel self-adaptive clamping claw.
The data are sent to the constructed digital twin through a real-time data transmission channel; the constructed digital twin comprises the physical environment and the virtual environment of robot motion. The virtual environment is built by the upper computer, which exchanges virtual and real information with the physical hardware; the LSTM-RRT algorithm is deployed on the upper computer, and the trajectory generation method is used to plan the trajectory of the mechanical gripper in the virtual environment and to control the gripper's motion in the physical environment. The specific steps are as follows:
the robot and the computer are connected through a network cable, a node named 'ur 5e _ missing up' runs in a Robot Operating System (ROS), and a local area network static IP address set in the robot is input to realize virtual-to-real communication. A digital twin of the robot and its environment is constructed in a robot simulation environment (V-REP), as shown in FIG. 1. In a simulation environment, besides geometric modeling of the robot, physical properties such as mass, centroid position, moment of inertia and the like of each part of the robot need to be defined. And compiling an automation control script in the MATLAB, and controlling the virtual simulation robot to move by using a communication interface between the MATLAB and the V-REP program. The information such as the position, the speed and the like of each joint of the robot is read in real time in the V-REP, and the information is transmitted to the robot operating system by using an interface between the V-REP and the ROS, and the information is transmitted to the robot operating system by using Moveit! The module automatically interpolates the middle position, generates an instruction set required by the control robot, and sends the instruction set to the actual robot through a network cable, so that the robot entity completes corresponding actions. The robot character twin effect is shown in fig. 2.
The LSTM-RRT-based robot trajectory planning algorithm runs in the upper computer and comprises the rapidly-exploring random tree algorithm and the long short-term memory neural network: RRT generates obstacle-avoiding trajectories offline as training data, and the long short-term memory network learns from them to predict trajectories directly.
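The patent's pseudocode figure is not reproduced in the text; the structure of the combination can be sketched as follows. This is a skeleton under stated assumptions: `predict_next` stands in for the trained LSTM, and the stub predictor, step size and tolerance are illustrative:

```python
import math

def lstm_rrt_rollout(env_vec, start, goal, predict_next, max_steps=50, tol=0.5):
    """Roll out a trajectory with a trained next-point predictor.
    `predict_next(env_vec, current, goal)` plays the role of the LSTM: given
    the compressed environment vector, the current point and the goal, it
    returns the predicted next waypoint."""
    path, current = [start], start
    for _ in range(max_steps):
        current = predict_next(env_vec, current, goal)
        path.append(current)
        if math.dist(current, goal) < tol:
            return path          # reached the goal region
    return None                  # no trajectory within the step budget

# Stub predictor: step a fixed distance straight toward the goal.
def stub(env_vec, cur, goal, step=0.4):
    d = math.dist(cur, goal)
    t = min(1.0, step / d)
    return (cur[0] + t * (goal[0] - cur[0]), cur[1] + t * (goal[1] - cur[1]))

path = lstm_rrt_rollout(None, (0.0, 0.0), (3.0, 4.0), stub)
```

The contrast with RRT is that no search or collision checking happens at query time: obstacle avoidance is implicit in what the network learned from the RRT training trajectories.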
the robot trajectory planning method based on the digital twin specifically comprises the following steps (as shown in fig. 3):
s1: constructing a training data set in a virtual environment;
respectively generating a two-dimensional environment vector (shown in FIG. 4) and a three-dimensional environment vector (shown in FIG. 5) from the environment model with obstacles in the virtual environment of the constructed digital twin, generating a plurality of two-dimensional and three-dimensional obstacle-avoiding trajectories with the RRT algorithm for each two-dimensional and three-dimensional environment map, as shown in FIG. 6 and FIG. 7, and then generating the training data set {I^(r), P^(rn)}, where I^(r) denotes an environment vector and P^(rn) denotes a trajectory.
S2: training a convolutional auto-encoder;
In this embodiment, a convolutional auto-encoder is used to extract the input environment information: the constructed convolutional auto-encoder is trained with the vectors generated from the environment images, and the trained auto-encoder is then used to compress the vectors generated from the input environment images.
S3: inputting the data pair compressed by the S2 into an LSTM network for training; the specific training process is as follows:
S3.1: shuffling the data set {I^(r), P^(rn)} and dividing it into small batch processing sets;
S3.2: performing the following on the first processing set:
S3.2.1: from the training data set {I^(r), P^(rn)}, extracting the coordinate x_k of the k-th point on one trajectory and inputting it into the LSTM network at the k-th call;
S3.2.2: using x_k to calculate the predicted value y_k of the coordinate of the (k+1)-th point on the trajectory;
S3.2.3: calculating the difference between y_k and the true coordinate of the (k+1)-th point on trajectory P^(rn), accumulating it as the back-propagated error, and updating the parameters of the LSTM network according to this error;
S3.2.4: using y_k to construct x_{k+1}, replacing the time step k with k+1, and returning to S3.2.2;
S3.2.5: traversing all trajectories in the data set and all processing sets to generate the optimized LSTM network;
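The training recurrence above (feed the k-th point, predict the (k+1)-th, score against ground truth, update, continue) can be sketched with a deliberately simple stand-in model. Assumptions are labeled: the toy trajectories, the linear predictor in place of the LSTM, and the learning rate are all illustrative; for stability the sketch continues from the true next point (teacher forcing), whereas the patent's S3.2.4 feeds the prediction y_k back:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "trajectories": noisy straight lines of 50 two-dimensional points,
# standing in for the RRT-generated training set.
trajs = []
for _ in range(20):
    a, b = rng.uniform(0, 1, 2), rng.uniform(9, 10, 2)
    t = np.linspace(0, 1, 50)[:, None]
    trajs.append(a + t * (b - a) + rng.normal(0, 0.02, (50, 2)))

# Linear stand-in for the LSTM: predict x_{k+1} ~ x_k @ W + c.
W, c, lr = np.eye(2), np.zeros(2), 1e-4
losses = []
for epoch in range(30):
    total = 0.0
    for P in trajs:
        for k in range(len(P) - 1):
            x_k = P[k]                      # S3.2.1: feed the k-th point
            y_k = x_k @ W + c               # S3.2.2: predict the (k+1)-th point
            err = y_k - P[k + 1]            # S3.2.3: error vs. the true point
            total += float(err @ err)
            W -= lr * np.outer(x_k, err)    # gradient step on the squared error
            c -= lr * err
            # S3.2.4 (simplified): the next iteration uses the true P[k+1].
    losses.append(total)
```

The per-epoch `losses` should shrink as the model absorbs the average step structure of the trajectories; the real network replaces the linear map with the gated LSTM layers described below.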
s4: and inputting the environment with the obstacles to be subjected to the track planning and the positions of the starting point and the end point into the optimized LSTM network to generate a collision-free track. As shown in fig. 8 and 9.
S5: and further optimizing the track obtained in the step S4, removing redundant points in the track according to the geometrical characteristics of the track, and smoothing the redundant points by using a B-spline curve to obtain a final track.
S6: and transmitting the final track to act on the robot through virtual and real information, so that the robot runs according to the collision-free track.
The process of the invention is described below by means of specific examples.
The results obtained in the two-dimensional environment of this example are shown in fig. 8. As can be seen from fig. 8, the trajectory obtained by the RRT algorithm and the trained model is relatively tortuous and long in length. After the redundant points are removed, the track length is greatly reduced, but an acute angle still exists, so that the track is not smooth. And finally, after B-spline interpolation, the track becomes smooth.
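The two post-processing stages described above can be sketched as follows: a line-of-sight shortcut test prunes redundant vertices, and a B-spline (here via SciPy's `splprep`/`splev`) smooths the survivors. The collision checker, sample counts and the toy path are illustrative assumptions, not the patent's data:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def prune(path, is_free, samples=20):
    """Drop redundant vertices: from each kept vertex, jump to the farthest
    later vertex that can be reached by a collision-free straight segment."""
    path = [np.asarray(p, float) for p in path]
    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1:
            seg = [path[i] + t * (path[j] - path[i]) for t in np.linspace(0, 1, samples)]
            if all(is_free(p) for p in seg):
                break  # shortcut i -> j stays in free space
            j -= 1
        out.append(path[j])
        i = j
    return out

def smooth(path, n=100):
    """Fit an interpolating B-spline through the pruned vertices and resample."""
    pts = np.asarray(path, float)
    k = min(3, len(pts) - 1)          # spline degree must be < number of points
    tck, _ = splprep(pts.T, s=0, k=k)
    return np.asarray(splev(np.linspace(0, 1, n), tck)).T

free = lambda p: not (3.0 < p[0] < 5.0 and p[1] < 7.0)   # toy obstacle
raw = [(1, 1), (2, 2), (2.5, 4), (2.8, 7.5), (4, 8), (6, 8), (7, 8.5), (9, 9)]
short = prune(raw, free)
curve = smooth(short)
```

Pruning shortens the zig-zag RRT output, and the spline removes the remaining acute angles, matching the progression seen in fig. 8.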
The following describes the processing in a two-dimensional, three-dimensional environment.
The method comprises the following steps. Step one: construct the digital twin virtual environment, a three-dimensional environment map as shown in FIG. 5 or a two-dimensional environment map as shown in FIG. 4; 10 two-dimensional maps and 5 three-dimensional maps are generated. In the two-dimensional environment, 3000 trajectories as shown in FIG. 6 are generated for each map; in the three-dimensional environment, 12000 trajectories as shown in FIG. 7 are generated for each map. The three-dimensional environment maps generated in this embodiment are 50 × 50 × 50, and the two-dimensional environment maps are 100 × 100 pixels; each environment contains several obstacles, which may be cuboids, window shapes, cuboid combinations, and so on. These obstacles divide the space into free space and obstacle space; in MATLAB the environment is represented by a two-dimensional or three-dimensional array, with the free-space cells set to 1 and the obstacle cells set to 0.
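The array representation described above (free space = 1, obstacle = 0) is straightforward to reproduce; a sketch in NumPy in place of MATLAB, with illustrative obstacle placements:

```python
import numpy as np

# 100 x 100 two-dimensional environment, initially all free space (1).
env2d = np.ones((100, 100), dtype=np.uint8)
env2d[20:40, 30:70] = 0          # a rectangular obstacle
env2d[60:65, 0:50] = 0           # a wall-like obstacle

# 50 x 50 x 50 three-dimensional environment.
env3d = np.ones((50, 50, 50), dtype=np.uint8)
env3d[10:20, 10:20, 0:30] = 0    # a cuboid obstacle

def is_free(env, idx):
    """A cell is traversable iff it is marked 1."""
    return env[idx] == 1
```

A planner's collision check then reduces to an array lookup, which is what makes the batch generation of thousands of RRT training trajectories per map feasible.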
Step two: 12000 tracks are generated for each graph of the three-dimensional environment as shown in FIG. 5, and 3000 tracks are generated for the two-dimensional environment. The starting point and the end point of each track are different, each track comprises 50 vertexes at most, and when the number of the vertexes is less than 50, the track lengths are uniformly set to be 50 by adopting a zero filling method.
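The zero-filling described above, padding every trajectory to a uniform length of 50 vertices, can be sketched as follows (the helper name and 2-D example are illustrative):

```python
import numpy as np

MAX_LEN = 50

def pad_trajectory(points, dim=2):
    """Zero-pad a trajectory with fewer than MAX_LEN vertices so that every
    training sample has the same shape (MAX_LEN, dim)."""
    pts = np.asarray(points, dtype=float).reshape(-1, dim)
    if len(pts) > MAX_LEN:
        raise ValueError("trajectory longer than MAX_LEN")
    padded = np.zeros((MAX_LEN, dim))
    padded[:len(pts)] = pts
    return padded

traj = [(0, 0), (1, 2), (3, 3), (5, 4)]
out = pad_trajectory(traj)
```

Uniform shapes let the trajectories be stacked into the mini-batches that the LSTM training loop consumes.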
Step three: the three-dimensional convolutional autocoder was designed using the parameters shown in table 1, and the two-dimensional convolutional autocoder was designed using the parameters shown in table 2.
TABLE 1 three-dimensional convolution autocoder setup parameters
| Setting item | Parameter |
| --- | --- |
| Size of environment image I^(r) ∈ R^(A×B×C) | A = B = C = 50 |
| Number of convolution layers L_C | 3 |
| Number of pooling layers P_C | 2 |
| Number of filters per layer | 32, 8, 2 |
| Stride s of each layer | 1 |
| Filter size (V × Q × W) | V = Q = W = 3 |
| Activation function η(·) | Rectifier function |
| Error function | Mean squared error |
| Processing set (batch) size | 5 |
| Initial weights | Random numbers in [-0.1, 0.1] |
| Initial bias | None |
| Learning rate adjustment | Adam |
TABLE 2 two-dimensional convolution autoencoder setup parameters
| Setting item | Parameter |
| --- | --- |
| Size of environment image I^(r) ∈ R^(A×B) | A = B = 100 |
| Number of convolution layers L_C | 3 |
| Number of filters per layer | 12, 6, 3 |
| Stride s of each layer | 1, 3, 5 |
| Filter size (V × Q) | V = Q = 5 |
| Activation function η(·) | Rectifier function |
| Error function | Mean squared error |
| Processing set (batch) size | 10 |
| Initial weights | Random numbers in [-0.1, 0.1] |
| Initial bias | None |
| Learning rate adjustment | Adam |
After the convolutional auto-encoder, the three-dimensional environment information is compressed from a 50 × 50 × 50 three-dimensional array into a 1 × 250 vector, and the two-dimensional environment information is compressed from a 100 × 100 two-dimensional array into a 1 × 108 vector, which greatly reduces the data size of the environment information.
Step four: construct and train the LSTM network. The LSTM network constructed in this embodiment is divided into three parts: input layer, hidden layers and output layer. The input layer consists of two fully connected layers: one receives the environment information, with dimension 1 × 250 (three-dimensional) or 1 × 108 (two-dimensional); the other receives the node information, namely the current node coordinates and the target node coordinates, so its dimension is 1 × 6 for three-dimensional coordinates and 1 × 4 for two-dimensional coordinates. There are two hidden LSTM layers: the LSTM1 layer contains two LSTM units, one receiving the environment information and the other the trajectory information; the LSTM2 layer contains one LSTM unit, which receives the environment and path information output by the previous hidden layer and fuses the two to predict the next vertex. The output layer contains a single fully connected layer of dimension 1 × 3 or 1 × 2, whose output is the predicted next node. The mean squared error between the predicted vertex and the true value is taken as the loss, and the weights of each layer are adjusted backwards according to this loss. The constructed LSTM network is trained with the environment information extracted by the convolutional auto-encoder and the trajectory information generated in MATLAB.
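The gating mechanism inside the LSTM units mentioned above can be made concrete with a single-cell forward pass in NumPy. The weights here are random and the sizes illustrative; this shows the standard cell equations, not the patent's trained network:

```python
import numpy as np

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent weights,
    b: (4H,) bias; the four blocks are the input, forget, cell-candidate and
    output gates."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:H]))            # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))         # forget gate
    g = np.tanh(z[2*H:3*H])                 # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))          # output gate
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 8                                  # input and hidden sizes
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.normal(0, 1, (5, D)):           # run the cell over 5 inputs
    h, c = lstm_cell(x, h, c, W, U, b)
```

The cell state `c` is what lets the network carry earlier waypoints forward, which is why an LSTM rather than a plain feed-forward network is used to predict the next vertex of a trajectory.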
Step five: after the LSTM network is constructed and trained as above, the optimized network is called to generate new trajectories. The extracted environment information and any start and end points in the free space of the environment are input into the optimized LSTM network simultaneously; the model first predicts the next point from the input information, then recombines the predicted value with the end-point information and feeds them back into the network to predict another point, and saving this series of predicted points yields a predicted trajectory. In this way the model can quickly generate a trajectory from the input environment and node information; the result is shown in fig. 7.
And (3) transmitting the track obtained in the upper computer to the robot through virtual and real information, so that the robot runs according to the collision-free track.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.
Claims (5)
1. A robot track planning method based on digital twins is characterized by comprising the following steps of constructing a virtual scene and a real scene of an environment where a robot and a barrier are located, transmitting virtual and real information of the virtual scene and the real scene through physical hardware, constructing a digital twins model, and planning a path of the robot through a neural network model running in an upper computer, wherein the method specifically comprises the following steps:
s1: constructing a training data set in a virtual environment;
respectively generating a two-dimensional environment vector and a three-dimensional environment vector from a two-dimensional environment image and a three-dimensional environment image of an environment model with obstacles in the virtual environment, generating a plurality of two-dimensional and three-dimensional obstacle-avoiding trajectories with the RRT algorithm for each two-dimensional and three-dimensional environment image, and then generating a training data set {I^(r), P^(rn)}, where I^(r) denotes an environment vector and P^(rn) denotes a trajectory;
s2: inputting the two-dimensional environment vector and the plurality of two-dimensional tracks, the three-dimensional environment vector and the plurality of three-dimensional tracks obtained in the step S1 into an LSTM network for training respectively; the specific training process is as follows:
S2.1: from the training data set {I^(r), P^(rn)}, extracting the coordinate x_k of the k-th point on one trajectory and inputting it into the LSTM at the k-th call;
S2.2: using x_k to calculate the predicted value y_k of the coordinate of the (k+1)-th point on the trajectory;
S2.3: calculating the difference between y_k and the true coordinate of the (k+1)-th point on trajectory P^(rn), accumulating it as the back-propagated error, and updating the parameters of the LSTM according to this error;
S2.4: using y_k to construct x_{k+1}, replacing the time step k with k+1, and returning to S2.2;
s2.5: traversing all tracks in the data set to generate an optimized LSTM network;
s3: inputting the environment with obstacles to be subjected to track planning and the initial point position and the final point position into the optimized LSTM network, and generating a collision-free track;
s4: and transmitting the collision-free track to act on the robot through virtual and real information, so that the robot runs according to the collision-free track.
2. The digital twin-based robot trajectory planning method according to claim 1, wherein after a collision-free trajectory is obtained from S3, redundant points in the trajectory are removed according to the geometric features of the trajectory and smoothed using a B-spline curve.
3. The method of claim 1, wherein training the convolutional auto-encoder with training data before constructing the training set, and compressing the environment vector to be input into the LSTM using the trained convolutional auto-encoder.
4. The method as claimed in claim 1, wherein in S2.1, the training data set {I^(r), P^(rn)} is first shuffled, split into small-batch data sets, and input into the LSTM network in batches.
5. The digital twin robot trajectory planning method according to claim 1, wherein the robot operation model in the virtual environment includes a robot model in an equal proportion to that in the physical environment, an environment model in an equal proportion to that in the physical environment, and the same constraint conditions as those in the operation process of the physical robot; the robot trajectory information includes the robot end positions at different times, which are coordinates in a cartesian coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011278855.2A CN112440281A (en) | 2020-11-16 | 2020-11-16 | Robot trajectory planning method based on digital twins |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011278855.2A CN112440281A (en) | 2020-11-16 | 2020-11-16 | Robot trajectory planning method based on digital twins |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112440281A true CN112440281A (en) | 2021-03-05 |
Family
ID=74737131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011278855.2A Pending CN112440281A (en) | 2020-11-16 | 2020-11-16 | Robot trajectory planning method based on digital twins |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112440281A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113110459A (en) * | 2021-04-20 | 2021-07-13 | 上海交通大学 | Motion planning method for multi-legged robot |
CN113365370A (en) * | 2021-05-24 | 2021-09-07 | 内蒙古工业大学 | Intelligent mobile system based on LoRa technique |
CN113378475A (en) * | 2021-06-28 | 2021-09-10 | 清华大学深圳国际研究生院 | Vrep-based quadruped robot control method, system and device |
CN113687659A (en) * | 2021-10-26 | 2021-11-23 | 武汉鼎元同立科技有限公司 | Optimal trajectory generation method and system based on digital twinning |
CN113967910A (en) * | 2021-09-22 | 2022-01-25 | 香港理工大学深圳研究院 | Man-machine cooperative control method and system based on augmented reality and digital twins |
CN114019900A (en) * | 2021-10-27 | 2022-02-08 | 苏州大学 | Time-optimal robot curved surface machining process planning method |
CN114310870A (en) * | 2021-11-10 | 2022-04-12 | 达闼科技(北京)有限公司 | Intelligent agent control method and device, electronic equipment and storage medium |
CN114405019A (en) * | 2022-02-11 | 2022-04-29 | 上海罗博拓机器人科技有限公司 | Modular robot toy and teaching aid control system based on digital twins |
CN114523469A (en) * | 2021-12-31 | 2022-05-24 | 南京理工大学 | ROS-based manipulator motion planning and simulation system |
CN114789454A (en) * | 2022-06-24 | 2022-07-26 | 浙江大学 | Robot digital twin track completion method based on LSTM and inverse kinematics |
CN114896798A (en) * | 2022-05-20 | 2022-08-12 | 梅卡曼德(北京)机器人科技有限公司 | Collision detection method, control method, capture system and computer storage medium |
CN115070780A (en) * | 2022-08-24 | 2022-09-20 | 北自所(北京)科技发展股份有限公司 | Industrial robot grabbing method and device based on digital twinning and storage medium |
WO2023051289A1 (en) * | 2021-09-30 | 2023-04-06 | 达闼机器人股份有限公司 | Navigation method and apparatus for unmanned device, medium, and unmanned device |
CN116277001A (en) * | 2023-03-22 | 2023-06-23 | 宝钢工程技术集团有限公司 | Continuous casting robot management method and system based on digital twin |
CN117577284A (en) * | 2024-01-17 | 2024-02-20 | 湖南洁城环保科技有限公司 | Medical waste intelligent collection vehicle traceability management method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110106306A1 (en) * | 2009-11-02 | 2011-05-05 | Samsung Electronics Co., Ltd. | Path planning apparatus of robot and method and computer-readable medium thereof |
CN109822579A (en) * | 2019-04-10 | 2019-05-31 | 江苏艾萨克机器人股份有限公司 | Cooperation robot security's control method of view-based access control model |
CN109960880A (en) * | 2019-03-26 | 2019-07-02 | 上海交通大学 | A kind of industrial robot obstacle-avoiding route planning method based on machine learning |
CN110738739A (en) * | 2019-10-22 | 2020-01-31 | 同济大学 | Construction system of robot-assembly-oriented digital twin system |
CN110977966A (en) * | 2019-11-27 | 2020-04-10 | 华南理工大学 | Robot obstacle avoidance method based on virtual scene training |
CN111496781A (en) * | 2020-03-17 | 2020-08-07 | 浙江大学 | Mechanical arm modeling, controlling and monitoring integrated system driven by digital twin |
CN111702754A (en) * | 2020-05-14 | 2020-09-25 | 国网安徽省电力有限公司检修分公司 | Robot obstacle avoidance trajectory planning method based on simulation learning and robot |
Non-Patent Citations (2)
Title |
---|
Lin Qiang: "Machine Learning, Deep Learning and Reinforcement Learning", 31 May 2019 * |
Gao Zhiqiang et al.: "Deep Learning: From Introduction to Practice", 30 June 2018 * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113110459A (en) * | 2021-04-20 | 2021-07-13 | 上海交通大学 | Motion planning method for multi-legged robot |
CN113365370A (en) * | 2021-05-24 | 2021-09-07 | 内蒙古工业大学 | Intelligent mobile system based on LoRa technology |
CN113378475A (en) * | 2021-06-28 | 2021-09-10 | 清华大学深圳国际研究生院 | Vrep-based quadruped robot control method, system and device |
CN113967910A (en) * | 2021-09-22 | 2022-01-25 | 香港理工大学深圳研究院 | Man-machine cooperative control method and system based on augmented reality and digital twins |
WO2023051289A1 (en) * | 2021-09-30 | 2023-04-06 | 达闼机器人股份有限公司 | Navigation method and apparatus for unmanned device, medium, and unmanned device |
CN113687659B (en) * | 2021-10-26 | 2022-01-25 | 武汉鼎元同立科技有限公司 | Optimal trajectory generation method and system based on digital twinning |
CN113687659A (en) * | 2021-10-26 | 2021-11-23 | 武汉鼎元同立科技有限公司 | Optimal trajectory generation method and system based on digital twinning |
CN114019900A (en) * | 2021-10-27 | 2022-02-08 | 苏州大学 | Time-optimal robot curved surface machining process planning method |
CN114310870A (en) * | 2021-11-10 | 2022-04-12 | 达闼科技(北京)有限公司 | Intelligent agent control method and device, electronic equipment and storage medium |
CN114523469B (en) * | 2021-12-31 | 2024-04-23 | 南京理工大学 | ROS-based manipulator motion planning and simulation system |
CN114523469A (en) * | 2021-12-31 | 2022-05-24 | 南京理工大学 | ROS-based manipulator motion planning and simulation system |
CN114405019A (en) * | 2022-02-11 | 2022-04-29 | 上海罗博拓机器人科技有限公司 | Modular robot toy and teaching aid control system based on digital twins |
CN114896798A (en) * | 2022-05-20 | 2022-08-12 | 梅卡曼德(北京)机器人科技有限公司 | Collision detection method, control method, grasping system and computer storage medium |
CN114896798B (en) * | 2022-05-20 | 2024-05-24 | 梅卡曼德(北京)机器人科技有限公司 | Collision detection method, control method, grasping system, and computer storage medium |
CN114789454A (en) * | 2022-06-24 | 2022-07-26 | 浙江大学 | Robot digital twin track completion method based on LSTM and inverse kinematics |
CN114789454B (en) * | 2022-06-24 | 2022-09-06 | 浙江大学 | Robot digital twin track completion method based on LSTM and inverse kinematics |
CN115070780A (en) * | 2022-08-24 | 2022-09-20 | 北自所(北京)科技发展股份有限公司 | Industrial robot grabbing method and device based on digital twinning and storage medium |
CN115070780B (en) * | 2022-08-24 | 2022-11-18 | 北自所(北京)科技发展股份有限公司 | Industrial robot grabbing method and device based on digital twinning and storage medium |
CN116277001A (en) * | 2023-03-22 | 2023-06-23 | 宝钢工程技术集团有限公司 | Continuous casting robot management method and system based on digital twin |
CN117577284A (en) * | 2024-01-17 | 2024-02-20 | 湖南洁城环保科技有限公司 | Medical waste intelligent collection vehicle traceability management method and system |
CN117577284B (en) * | 2024-01-17 | 2024-04-26 | 湖南洁城环保科技有限公司 | Medical waste intelligent collection vehicle traceability management method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112440281A (en) | Robot trajectory planning method based on digital twins | |
CN109960880B (en) | Industrial robot obstacle avoidance path planning method based on machine learning | |
CN109213147A (en) | Robot obstacle avoidance trajectory planning method and system based on deep learning | |
CN108196453B (en) | Swarm-intelligence computation method for mechanical arm motion planning | |
CN109483573A (en) | Machine learning device, robot system and machine learning method | |
CN111645065A (en) | Mechanical arm motion planning method based on deep reinforcement learning | |
CN109343345A (en) | Mechanical arm polynomial interpolation trajectory planning method based on QPSO algorithm | |
CN111309035B (en) | Multi-robot cooperative movement and dynamic obstacle avoidance method, device, equipment and medium | |
CN109676610A (en) | Circuit breaker assembly robot and working trajectory optimization method thereof | |
CN110929422B (en) | Robot cluster simulation method and device | |
CN115091469B (en) | Deep reinforcement learning mechanical arm motion planning method based on maximum entropy framework | |
CN113232016A (en) | Mechanical arm path planning method integrating reinforcement learning and fuzzy obstacle avoidance | |
WO2024198747A1 (en) | Processing method and apparatus for motion capture data, and device and storage medium | |
CN113253744A (en) | Multi-robot collaborative trajectory planning method and device, electronic equipment and storage medium | |
Liu et al. | Sim-and-real reinforcement learning for manipulation: A consensus-based approach | |
CN113843802A (en) | Mechanical arm motion control method based on deep reinforcement learning TD3 algorithm | |
CN117798924A (en) | Control method of double mechanical arms | |
CN116690557A (en) | Method and device for controlling humanoid three-dimensional scanning motion based on point cloud | |
CN114536351B (en) | Redundant double-arm robot teaching method and device, electronic equipment and system | |
CN114964247B (en) | Crowd sensing navigation method and system based on higher-order graph convolution neural network | |
Zhang et al. | Sim2real learning of vision-based obstacle avoidance for robotic manipulators | |
CN115237140A (en) | 3D-TD3 robot path planning method based on 3D point cloud | |
CN114986501A (en) | Mechanical arm path planning method and system and mechanical arm | |
CN109857110A (en) | Motion planning method, device, equipment and computer readable storage medium | |
CN114820802A (en) | High-freedom-degree dexterous hand grabbing planning method and device and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2021-03-05 |